This book contains all JNode documentation.
JNode is a Java New Operating System Design Effort.
The goal is to create an easy to use and install Java operating system for personal use. Any Java application should run on it, fast and securely!
General user information.
Very early in Java's history, around JDK 1.0.2, Ewout Prangsma (the founder of JNode) dreamed of building a Java Virtual Machine in Java.
It should be not only a VM, but a complete runtime environment that does not need any other form of operating system. So it had to be a lightweight and, most importantly, flexible system.
Ewout made various attempts at achieving these goals, starting with JBS, the Java Bootable System. It became a somewhat functional system, but had far too much native code (C and assembler) in it. So he started working on a new JBS system, called JBS2, and finally JNode. It had a simple target: no C code and only a small amount of assembler code.
In May 2003, Ewout went public with JNode, and development proceeded ever faster from that point on.
Several versions have been released and there are now concrete plans for the first major version.
This page lists java applications that we use to test JNode.
Here are details about applications whose names start with A
website : http://ant.apache.org/
comments :
Here are details about applications whose names start with B
website : http://www.beanshell.org/
comments :
Here are details about applications whose names start with E
website : http://www.eclipse.org ???
comments :
Here are details about applications whose names start with H
website : http://hsqldb.org/
comments :
Here are details about applications whose names start with J
website : http://junit.org/
comments : only tested in console mode.
website : http://openjdk.java.net/
comments : the Sun Java compiler (javac). It works fine, but you can run into GC bugs when you compile your first program. The following "warm-up" sequence avoids this:
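A rough sketch of such a warm-up (the "javac" alias and the source file path here are assumptions for illustration, not the original sequence): force a collection, compile a trivial class, and collect again before compiling your real program.
JNode /> gc
JNode /> javac /jnode/tmp/Warmup.java
JNode /> gc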
website : http://www.mortbay.org/
comments : it works partially
website : http://www.jedit.org/
comments : using jedit.jar alone, I only see the splash screen. If I try the installer, it fails at 0% progress with an "IOException" dialog box but no stack trace.
website : http://jchatirc.sourceforge.net/
comments :
Here are details about applications whose names start with N
website : http://elonen.iki.fi/code/nanohttpd/
comments :
Here are details about applications whose names start with R
website : http://www.mozilla.org/rhino/
comments :
Here are details about applications whose names start with T
website : http://tomcat.apache.org/
comments :
To start using JNode you have two options:
Once JNode has booted, you will see a JNode /> command prompt. See the shell reference for available commands.
This is a quick guide to get started with JNode. It will help you download a JNode boot image and explain how to use it. It will also get you started with exploring JNode's capabilities and give you some tips on how to use the JNode user interfaces.
To start with, you need to download a JNode boot image. Go to this page and click on the link for the latest version. This will take you to a page with the downloadable files for the version. The page also has a link to a page listing the JNode hardware requirements.
At this point, you have two choices. You can create a bootable CD ROM and then run JNode on real hardware by booting your PC from the CD ROM drive. Alternatively, you can run JNode on a virtual PC using VMWare.
To run JNode on real hardware:
To run JNode from VMWare:
When you start up JNode, the first thing you will see after the BIOS messages is the Grub bootloader menu, allowing you to select between various JNode configurations. If you have 500MB or more of RAM (or 500MB assigned to the VM if you are using VMware), we recommend the "JNode (all plugins)" configuration. This allows you to run the GUI. Otherwise, we recommend the "JNode (default)" or "JNode (minimal shell)" configurations. (For more information on the available JNode configurations, ...).
Assuming that you choose one of the recommended configurations, JNode will go through the bootstrap sequence and start up a text console running a command shell, allowing you to issue commands. The initial command prompt will look like this:
JNode />
Try a couple of commands to get you started:
JNode /> dir
will list the JNode root directory,
JNode /> alias
will list the commands available to you, and
JNode /> help <command>
will show you a command's online help and usage information.
There are a few more useful things to see:
The JNode completion mechanism is more sophisticated than the analogs in typical Linux and Windows shells. In addition to performing command name and file name completion, it can do completion of command options and context sensitive argument completion. For example, if you want to set up your network card using the "dhcp" command, you don't need to go hunting for the name of the JNode network device. Instead, enter the following:
JNode /> dhcp eth<TAB>
The completer will show a list of known ethernet devices, allowing you to select the appropriate one. In this case, there is typically only one name, so it will be added to the command string.
For more information on using the shell, please refer to the JNode Shell page.
I bet you are bored with text consoles by now, and you are eager to see the JNode GUI. You can start it as follows:
JNode /> gc
JNode /> startawt
The GUI is intended to be intuitive, so give it a go. It currently includes a "Text Console" app for entering commands, and a couple of games. If you have problems with the GUI, ALT+F12 should kill the GUI and drop you back to the text console.
By the way, you can switch the font rendering method used by the GUI before you run "startawt", as follows:
JNode /> set jnode.font.renderer ttf|bdf
If you have questions or you just want to talk to us, please consider joining our IRC channel (#JNode.org@irc.oftc.net). We're all very friendly and try to help everyone *g*
If you find a bug, we would really appreciate you posting a bug report via our bug tracker. You can also enter support and feature requests there.
Feel free to continue trying out JNode. If you have the time and the skills, please consider joining us to make it better.
Two options are available here:
If you do not have a bootable network card, you can create a network boot disk instead. See the GRUB manual for details, or use ROM-o-matic or the GRUB network boot-disk creator.
To boot JNode from the network, you will need a bootable network card, a DHCP/BOOTP and TFTP server setup.
This guide shows you how to boot JNode from a USB memory stick.
You'll need a Windows machine to build on and a clean USB memory stick (it may be wiped!).
Step 1: Build a JNode iso image (e.g. build cd-x86-lite)
Step 2: Download XBoot.
Step 3: Run XBoot with Administrator rights
Step 4: Open file: select the ISO created in step 1. Choose "Add grub4dos using iso emulation".
Step 5: Click "Create USB"
XBoot will now install a bootloader (choose the default) and prepare the USB memory stick.
Then eject the memory stick and give it a try.
When it boots, you'll first have to choose JNode from the menu. Then the familiar JNode Grub boot menu appears.
This chapter explains how to use the Eclipse 3.2 IDE with JNode.
JNode contains several Eclipse projects within a single SVN module. To checkout and import these projects into Eclipse, execute the following steps:
The listed projects will appear when the root directory has been selected.
You can build JNode within Eclipse by using the build.xml Ant file found in the JNode-All project. However, due to the memory requirements of the build process, it is better to run the build from the command line using build.bat (on Windows) or build.sh (on Unix).
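For example, assuming the standard build script and the "cd-x86-lite" target used elsewhere in this documentation:
build.bat cd-x86-lite    (on Windows)
./build.sh cd-x86-lite   (on Unix)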
Running JNode in Bochs does not seem to work out of the box yet. It fails when setting CPU register CR4 into 4MB paging mode.
A compile-time setting that enables 4MB pages is known to solve this problem. To enable this setting, run configure with the --enable-4meg-pages argument or add #define BX_SUPPORT_4MEG_PAGES 1 to config.h.
If you have a CPU with hardware virtualization support, you can run JNode in KVM, which is much faster than vmware-player/server (at least for me). You need a CPU that either supports Intel's IVT (aka Vanderpool) or AMD's AMD-V (aka Pacifica).
With
egrep '^flags.*(vmx|svm)' /proc/cpuinfo
you can easily check whether your CPU supports VT. If the command produces output, your CPU is supported; otherwise it is not. If your CPU is supported, also check that VT is enabled in your system BIOS.
Load the kvm module matching your CPU, either "modprobe kvm_intel" or "modprobe kvm_amd", install the kvm user tools, and set up permissions so users may run kvm (have a look at a HOWTO for your distro for details: Ubuntu, Gentoo).
Once you have set up everything, you can start kvm from the command line (I think there are GUI frontends too, but I'm using the command line). You should be careful though: ACPI in JNode seems to kill kvm, so always disable ACPI. I also had to deactivate the kvm-irqchip as it trashed JNode. The command that works for me is:
kvm -m 768 -cdrom jnode-x86-lite.iso -no-acpi -no-kvm-irqchip -serial stdout -net none -std-vga
The "-serial" switch is optional but I need it for kdb (kernel debugger). If you want to use the Vesa mode of JNode you should also use "-std-vga", overwise you will not have a vesa mode. Set the memory whatever you like (768MB is my default).
I found only one way to run JNode with Parallels.
In Options->Emulation flags, there is a parameter called Acceleration level that takes three values:
- disabled : JNode will work, but very slowly
- normal : JNode won't boot (freeze at "Detected 1 processor")
- high : JNode won't boot (freeze at "Detected 1 processor")
You can now run JNode on VirtualBox too. ACPI is not working, but you'll get a prompt and can use JNode.
TODO: Test network, usb,...
This page describes how to run JNode in Virtual PC.
At present, JNode does not run in Virtual PC.
Basic Procedure
The JNode build process creates a VMWare compatible ".vmx" file that allows you to run JNode using recent releases of VMWare Player.
Assuming that you build JNode using the "cd-x86-lite" target, the build will create an ISO format CDROM image called "jnode-x86-lite.iso" in the "all/build/cdroms/" directory. In the same directory, the build also generates a file called "jnode-x86-lite.iso.vmx" for use with VMWare. To boot JNode from this ".iso" file, simply run the player as follows:
$ vmplayer all/build/cdroms/jnode-x86-lite.iso.vmx
Altering the VMWare virtual machine configuration
By default, the generated ".vmx" file configures a virtual machine with a virtual CDROM for the ISO, a bridged virtual ethernet and a virtual serial port that is connected to a "debugger.txt" file. If you want to configure the VMWare virtual machine differently, the simplest option is to buy VMWare Workstation and use its graphical interfaces to configure and run JNode. (Copy the generated ".vmx" file to a different location, and use that as the starting point.)
If you don't want to pay for VMWare Workstation, you can achieve the same effect by changing the ".vmx" file by hand. However, changes you make that way will be clobbered next time you run the "build" script. The solution is to do the following:
This procedure assumes some changes in a patch that is waiting to be committed.
This should create the "jnode-x86-lite.iso.vmx" file with the VMX settings from your file as overrides to the default settings.
Unfortunately, VMWare has not released any documentation for the myriad VMX settings. The best third-party documentation that I can find is the sanbarrow.com website. There are also various "builder" applications around, but they don't look all that good.
VMWare disks and Boot device order
If you add a VMWare virtual (or real) disk drive, the VMWare virtual machine's BIOS will try to boot from that drive. Unless you have set up the drive to be bootable, this won't work. The easy way to fix this is to change VMWare's BIOS settings to alter the boot device order.
By default the NVRAM settings are stored in the "JNode.nvram" file in "all/build/cdroms" directory, and will be clobbered when you run "build clean". If this is a problem, create a VMX override (see above) with a "nvram" entry that uses a different location for the file.
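For illustration, an override file might contain entries such as the following (the values and paths are examples only, not defaults):
memsize = "768"
nvram = "/home/user/jnode/JNode.nvram"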
To run JNode on a PC using the bootable CDROM, your PC must comply with the following specifications:
The first JNode related information you will see after booting from a JNode CDROM image is the GRUB bootloader page. The GNU GRUB bootloader is responsible for selecting a JNode configuration from a menu, loading the corresponding kernel image and parameters into RAM and causing the JNode kernel to start executing.
When GRUB starts, it displays the bootloader page and pauses for a few seconds. If you do nothing, GRUB will automatically load and start the default configuration. Pressing any key at this point will interrupt the automatic boot sequence and allow you to select a specific configuration. You can use the UP-ARROW and DOWN-ARROW to choose a JNode configuration, then hit ENTER to boot it up.
There are a number of JNode configurations in the menu:
It is currently not a good idea to boot JNode straight to the GUI. If you want to run the GUI, it is best to boot one of the non-GUI configurations, typically "JNode (all plugins)". Then, from the text console, run the following commands:
JNode /> gc
JNode /> startawt
Use this list to find out if JNode already supports your hardware.
If you find that your device is not on the list, or that the information provided here is incorrect, please submit your changes.
To be able to run JNode, your hardware should be at least equal to or better than the following:
In order to run JNode the following hardware is recommended:
This page contains the available documentation for most of the useful JNode commands. For commands not listed below, try running help <alias> at the JNode command prompt to get the command's syntax description and built-in help. Running alias will list all available command aliases.
If you come across a command that is not documented, please raise an issue. (Better still, if you have website content authoring access, please add a page yourself using one of the existing command pages as a template.)
acpi
Synopsis | ||
acpi | displays ACPI details | |
acpi | --dump | -d | lists all devices that can be discovered and controlled through ACPI |
acpi | --battery | -b | displays information about installed batteries |
Details | ||
The acpi command currently just displays some information that can be gleaned from the host's "Advanced Configuration & Power Interface" (ACPI). In the future we would like to interact with ACPI to do useful things. However, this appears to be a complex topic, rife with compatibility issues, so don't expect anything soon.
The ACPI specifications can be found on the net; also have a look at Wikipedia. |
||
Bugs | ||
This command does nothing useful at the moment; it is a work in progress. |
alias
Synopsis | ||
alias | prints all available aliases and corresponding classnames | |
alias | <alias> <classname> | creates an alias for a given classname |
alias | -r <alias> | removes an existing alias |
Details | ||
The alias command creates a binding between a name (the alias) and the fully qualified Java name of the class that implements the command. When an alias is created, no attempt is made to check that the supplied Java class name denotes a suitable Java class. If the alias name is already in use, alias will update the binding.
If the classname argument is actually an existing alias name, the alias command will create a new alias that is bound to the same Java classname as the existing alias. A command class (e.g. one whose name is given by an aliased classname) needs to implement an entry point method with one of the following signatures:
If a command class has both execute and main methods, most invokers will use the former in preference to the latter. Ideally, a command class should extend org.jnode.shell.AbstractCommand. |
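For example, the following creates a second name bound to the same class as an existing alias, and then removes it (the names are illustrative):
JNode /> alias ll ls
JNode /> alias -r ll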
arp
Synopsis | ||
arp | prints the ARP cache | |
arp | -d | clears the ARP cache |
Details | ||
ARP (the Address Resolution Protocol) is a low-level protocol for discovering the MAC address of a network interface on the local network. MAC addresses are the low-level network addresses used for routing IP (and other) network packets on a physical network.
When a host needs to communicate with an unknown local IP address, it broadcasts an ARP request on the local network, asking for the MAC address for the IP address. The node with the IP address broadcasts a response giving the MAC address for the network interface corresponding to the IP address. The ARP cache stores IP to MAC address mappings that have previously been discovered. This allows the network stack to send IP packets without repeatedly broadcasting for MAC addresses. The arp command allows you to examine the contents of the ARP cache, and if necessary clear it to get rid of stale entries. |
basename
Synopsis |
basename String [Suffix] |
Details |
Strip directory and suffix from filenames |
Compatibility |
JNode basename is posix compatible. |
Links |
beep
Synopsis | ||
beep | makes a beep noise | |
Details | ||
Useful for alerting the user, or annoying other people in the room. |
bindkeys
Synopsis | ||
bindkeys | print the current key bindings | |
bindkeys | --reset | reset the key bindings to the JNode defaults |
bindkeys | --add <action> (<vkSpec> | <character>) | add a key binding |
bindkeys | --remove <action> [<vkSpec> | <character>] | remove a key binding |
Details | ||
The bindkeys command prints or changes the JNode console's key bindings; i.e. the mapping from key events to input editing actions. The bindkeys form of the command prints the current bindings to standard output, and the bindkeys --reset form resets the bindings to the hard-wired JNode default settings.
The bindkeys --add ... form of the command adds a new binding. The <action> argument specifies an input editing action; e.g. 'KR_ENTER' causes the input editor to append a newline to the input buffer and 'commit' the input line for reading by the shell or application. The <vkSpec> or <character> argument specifies an input event that is mapped to the <action>. The recognized <action> values are listed in the output of the no-argument form of the bindkeys command. The <vkSpec> values are formed from the "VK_xxx" constants defined by the "java.awt.event.KeyEvent" class and "modifier" names; e.g. "Shift+VK_ENTER". The <character> values are either single ASCII printable characters or standard ASCII control character names; e.g. "NUL", "CR" and so on. The bindkeys --remove ... form of the command removes a single binding or (if you leave out the optional <vkSpec> or <character> argument) all bindings for the supplied <action>. |
||
Bugs | ||
Changing the key bindings in one JNode console affects all consoles.
The bindkeys command provides no online documentation for what the action codes mean / do. |
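For example, using the action and key names mentioned above (treat them as illustrative):
JNode /> bindkeys --add KR_ENTER Shift+VK_ENTER
JNode /> bindkeys --remove KR_ENTER Shift+VK_ENTER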
bootp
Synopsis | ||
bootp | <device> | configures a network interface using BOOTP |
Details | ||
The bootp command configures the network interface given by <device> using settings obtained using BOOTP. BOOTP is a network protocol that allows a host to obtain its IP address and netmask, and the IP of the local gateway from a service on the local network. |
bsh
Synopsis | ||
bsh | [ --interactive | -i ] [ --file | -f <file> ] [ --code | -c <code> ] | Run the BeanShell interpreter |
Details | ||
The bsh command runs the BeanShell interpreter. The options are as follows:
If no arguments are given, --interactive is assumed. |
bzip2
Synopsis |
bzip2 [Options] [File ...]
bunzip2 [Options] [File ...]
bzcat [File ...] |
Details |
The bzip2 program handles compression and decompression of files in the bzip2 format. |
Compatibility |
JNode bzip2 aims to be fully compatible with BZip2. |
Links |
cat
Synopsis | ||
cat | copies standard input to standard output | |
cat | <filename> ... | copies files to standard output |
cat | --urls | -u <url> ... | copies objects identified by URL to standard output |
Details | ||
The cat command copies data to standard output, depending on the command line arguments:
The name "cat" is borrowed from UNIX, and is short for "concatenate". |
||
Bugs | ||
There is no dog command. |
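A couple of illustrative invocations (the file names and URL are hypothetical):
JNode /> cat readme.txt notes.txt
JNode /> cat --urls http://example.com/readme.txt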
cd
Synopsis | ||
cd | [ <dirName> ] | change the current directory |
Details | ||
The cd command changes the "current" directory for the current isolate, and everything running within it. If a <dirName> argument is provided, that will be the new "current" directory. Otherwise, the current directory is set to the user's "home" directory as given by the "user.home" property.
JNode currently changes the "current" directory by setting the "user.dir" property in the system properties object. |
||
Bugs | ||
The global (to the isolate) nature of the "current" directory is a problem. For example, if you have two non-isolated consoles, changing the current directory in one will change the current directory for the other. |
class
Synopsis | ||
class | <className> | print details of a class |
Details | ||
The class command allows you to print some details of any class on the shell's current classpath. The <className> argument should be a fully qualified Java class name. Running class will cause the named class to be loaded if this hasn't already happened. |
classpath
Synopsis | ||
classpath | prints the current classpath | |
classpath | <url> | adds the supplied url to the end of the classpath |
classpath | --clear | clears the classpath |
classpath | --refresh | cause classes loaded from the classpath to be reloaded on next use |
Details | ||
The classpath command controls the path that the command shell uses to locate commands to be loaded. By default, the shell loads classes from the currently loaded plug-ins. If the shell's classpath is non-empty, the urls on the path are searched ahead of the plug-ins. Each shell instance has its own classpath.
If the <url> argument ends with a '/', it will be interpreted as a base directory that may contain classes and resources. Otherwise, the argument is interpreted as the path for a JAR file. While "file:" URLs are the norm, protocols like "ftp:" and "http:" should also work. |
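For example (the JAR location is hypothetical):
JNode /> classpath file:///jnode/tmp/myapp.jar
JNode /> classpath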
clear
Synopsis | ||
clear | clear the console screen | |
Details | ||
The clear command clears the screen for the current command shell's console. |
compile
Synopsis | ||
compile | [ --test ] [ --level <level> ] <className> | compile a class to native code |
Details | ||
The compile command uses the native code compiler to compile or recompile a class on the shell's class path. The <className> argument should be the fully qualified name for the class to be compiled.
The --level option allows you to select the optimization level. The --test option allows you to compile with the "test" compilers. This command is primarily used for native code compiler development. JNode will automatically run the native code compiler on any class that is about to be executed for the first time. |
console
Synopsis | ||
console | --list | -l | list all registered consoles |
console | --new | -n [--isolated | -i] | starts a new console running the CommandShell
console | --test | -t | starts a raw text console (no shell) |
Details | ||
The console command lists the current consoles, or creates a new one.
The first form of the console command lists all consoles registered with the console manager. The listing includes the console name and the "F<n>" code for selecting it. (Use ALT F<n> to switch consoles.) The second form of the console command starts and registers a new console running a new instance of CommandShell. If the --isolated option is used with --new, the new console's shell will run in a new Isolate. The last form of the console command starts a raw text console without a shell. This is just for testing purposes. |
cpuid
Synopsis | ||
cpuid | print the computer's CPU id and metrics | |
Details | ||
The cpuid command prints the computer's CPU id and metrics to standard output. |
date
Synopsis | ||
date | print the current date | |
Details | ||
The date command prints the current date and time to standard output. The date / time printed are relative to the machine's local time zone. | ||
Bugs | ||
A fixed format is used to output dates and times.
Printing date / time values as UTC is not supported. This command will not help your love life. |
del
Synopsis | ||
del | [ -r | --recursive ] <path> ... | delete files and directories |
Details | ||
The del command deletes the files and/or directories given by the <path> arguments.
Normally, the del command will only delete a directory if it is empty apart from the '.' and '..' entries. The -r option tells the del command to delete directories and their contents recursively. |
device
Synopsis | ||
device | shows all devices | |
device | <device> | shows a specific device |
device | ( start | stop | restart | remove ) <device> | perform an action on a device |
Details | ||
The device command shows information about JNode devices and performs generic management actions on them.
The first form of the device command lists all devices registered with the device manager, showing their device ids, driver class names and statuses. The second form of the device command takes a device id given as the <device> argument. It shows the above information for the corresponding device, and also lists all device APIs implemented by the device. Finally, if the device implements the "DeviceInfo" API, it is used to get further device-specific information. The last form of the device command performs actions on the device denoted by the device id given as the <device> argument. The actions are as follows:
|
||
Bugs | ||
This command does not allow you to perform device-specific actions. |
df
Synopsis | ||
df | [ <device> ] | display disk space usage info |
Details | ||
The df command prints disk space usage information for file systems. If a <device> argument is given, usage information is displayed for the file system on that device. Otherwise, information is displayed for all registered file systems. |
dhcp
Synopsis | ||
dhcp | <device> | configures a network interface using DHCP |
Details | ||
The dhcp command configures the network interface given by <device> using settings obtained using DHCP. DHCP is the most commonly used network configuration protocol. The protocol provides an IP address and netmask for the machine, and the IP addresses of the local gateway and the local DNS service.
DHCP allocates IP addresses dynamically. A DHCP server will often allocate the same IP address to a given machine, but this is not guaranteed. If you require a fixed IP address for your JNode machine, you should use bootp or ifconfig. (And, if you have a DHCP service on your network, you need to configure it not to reallocate your machine's statically assigned IP address.) |
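For example, using the ethernet device name shown elsewhere in this guide (yours may differ; use completion to find it):
JNode /> dhcp eth-pci(0,16,0)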
dir
Synopsis | ||
dir | [ <path> ] | list a file or directory |
Details | ||
The dir command lists the file or directory given by the <path> argument. If no argument is provided, the current directory is listed. |
dirname
Synopsis |
dirname String |
Details |
Strip non-directory suffix from file names |
Compatibility |
JNode dirname is posix compatible. |
Links |
disasm
Synopsis | ||
disasm | [ --test ] [ --level <level> ] <className> [ <methodName> ] | disassemble a class or method |
Details | ||
The disasm command disassembles a class or method for a class on the shell's class path. The <className> argument should be the fully qualified name for the class to be compiled. The <methodName> should be a method declared by the class. If the method is overloaded, all of the overloads will be disassembled.
The --level option allows you to select the optimization level. The --test option allows you to compile with the "test" compilers. This command is primarily used for native code compiler development. Note, contrary to its name and description above, the command doesn't actually disassemble the class method(s). Instead it runs the native compiler in a mode that outputs assembly language rather than machine code. |
echo
Synopsis | ||
echo | [ <text> ... ] | print the argument text |
Details | ||
The echo command prints the text arguments to standard output. A single space is output between the arguments, and text is completed with a newline. |
edit
Synopsis | ||
edit | <filename> | edit a file |
Details | ||
The edit command edits a text file given by the <filename> argument. This editor is based on the "charva" text forms system.
The edit command displays a screen with two parts. The top part is the menu section; press ENTER to display the file action menu. The bottom part is the text editing window. The TAB key selects menu entries, and also moves the cursor between the two screen parts. |
||
Bugs | ||
This command needs more comprehensive user documentation. |
eject
Synopsis | ||
eject | [ <device> ... ] | eject a removable medium |
Details | ||
The eject command ejects a removable medium (e.g. CD or floppy disk) from a device. |
env
Synopsis | ||
env | [ -e | --env ] | print the system properties or environment variables |
Details | ||
By default, the env command prints the system properties to standard output. The properties are printed one per line in ascending order based on the property names. Each line consists of a property name, the '=' character, and the property's value.
If the -e or --env option is given, the env command prints out the current shell environment variables. At the moment, this only works with the bjorne CommandInterpreter and the proclet CommandInvoker. |
exit
Synopsis | ||
exit | cause the current shell to exit | |
Details | ||
The exit command causes the current shell to exit. If the current shell is the JNode main console shell, JNode will shut down. | ||
Bugs | ||
This should be handled as a shell interpreter built-in, and it should only kill the shell if the user runs it directly from the shell's command prompt. | ||
gc
Synopsis | ||
gc | run the garbage collector | |
Details | ||
The gc command manually runs the garbage collector.
In theory, it should not be necessary to use this command. The garbage collector should run automatically at the most appropriate time. (A modern garbage collector will run most efficiently when it has lots of garbage to collect, and the JVM is in a good position to know when this is likely to be.) In practice, it is necessary to run this command:
|
grep
Synopsis |
grep [Options] Pattern [File ...] |
grep [Options] [ -e Pattern | -f File ...] [File ...] |
Details |
grep searches the input Files (or standard input if no files are given, or if - is given as a file name) for lines containing a match to the Pattern. By default grep prints the matching lines. |
Compatibility |
JNode grep implements most of the POSIX grep standard. JNode grep implements most of the GNU grep extensions. |
Links |
Bugs |
|
gzip
Synopsis |
gzip [Options] [-S suffix] [File ...]
gunzip [Options] [-S suffix] [File ...]
zcat [-f] [File ...] |
Details |
The gzip program handles the compression and decompression of files in the gzip format. |
Compatibility |
JNode gzip aims to be fully compatible with gnu zip. |
Links |
halt
Synopsis | ||
halt | shutdown and halt JNode | |
Details | ||
The halt command shuts down JNode services and devices, and puts the machine into a state in which it is safe to turn off the power. |
help
Synopsis | ||
help | [ <name> ] | print command help |
Details | ||
The help command prints help for the command corresponding to the <name> argument. This should be either an alias known to the current shell, or a fully qualified name of a Java command class. If the <name> argument is omitted, this command prints help information for itself.
Currently, help prints command usage information and descriptions that it derives from a command's old or new-style argument and syntax descriptors. This means that (unlike Unix "man" for example), the usage information will always be up-to-date. No help information is printed for Java applications which have no JNode syntax descriptors. |
hexdump
Synopsis | ||
hexdump | <path> | print a hex dump of a file |
hexdump | -u | --url <url> | print a hex dump of a URL |
hexdump | print a hex dump of standard input | |
Details | ||
The hexdump command prints a hexadecimal dump of a file, a URL or standard input. |
history
Synopsis | ||
history | print the history list | |
history | [-t | --test] <index> | <prefix> | find and execute a command from the history list |
Details | ||
The history command takes two forms. The first form (with no arguments) simply prints the current command history list. The list is formatted with one entry per line, with each line starting with the history index.
The second form of the history command finds and executes a command from the history list and executes it. If an <index> "i" is supplied, the "ith" entry is selected, with "0" meaning the oldest entry, "1" the second oldest and so on. If a <prefix> is supplied, the first command found that starts with the prefix is executed. The --test (or -t) flag tells the history command to print the selected command instead of executing it. |
||
Bugs | ||
The history command currently does not execute the selected command. This is maybe a good thing.
When the shell executes a command, the history list gets reordered in a rather non-intuitive way. |
ifconfig
Synopsis | ||
ifconfig | List IP address assignments for all network devices | |
ifconfig | <device> | List IP address assignments for one network device |
ifconfig | <device> <ipAddress> [ <subnetMask> ] | Assign an IP address to a network device |
Details | ||
The ifconfig command is used for assigning IP addresses to network devices, and printing network address bindings.
The first form prints the MAC address, assigned IP address(es) and MTU for all network devices. You should see the "loopback" device in addition to devices corresponding to each of your machine's ethernet cards. The second form prints the assigned IP address(es) for the given <device>. The final form assigns the supplied IP address and associated subnet mask to the given <device>. |
||
Bugs | ||
Only IPv4 addresses are currently supported.
When you attempt to bind an address, the output shows the address as "null", irrespective of the actual outcome. Run "ifconfig <device>" to check that it succeeded. |
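For example, to assign a static address (the device name and addresses are illustrative):
JNode /> ifconfig eth-pci(0,16,0) 192.168.1.50 255.255.255.0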
java
Synopsis | ||
java | <className> [ <arg> ... ] | run a Java class via its 'main' method |
Details | ||
The java command runs the supplied class by calling its 'public static void main(String[])' entry point. The <className> should be the fully qualified name of a Java class. The java command will look for the class to be run on the current shell's classpath. If that fails, it will look in the current directory. The <arg> list (if any) is passed to the 'main' method as a String array. |
kdb
Synopsis | ||
kdb | show the current kdb state | |
kdb | --on | turn on kdb |
kdb | --off | turn off kdb |
Details | ||
The kdb command allows you to control "kernel debugging" from the command line. At the moment, the kernel debugging functionality is limited to copying the output produced by the light-weight "org.jnode.vm.Unsafe.debug(...)" calls to the serial port. If you are running under VMWare, you can configure it to capture this in a file in the host OS.
The kdb command turns this on and off. Kernel debugging is normally off when JNode boots, but you can alter this with a bootstrap switch. |
leed & levi
Synopsis | ||
leed | <filename> | edit a file |
levi | <filename> | view a file |
Details | ||
The leed and levi command respectively edit and view the text file given by the <filename> argument. These commands open the editor in a new text console, and provide simple intuitive screen-based editing and viewing.
The leed command understands the following control functions:
The levi command understands the following control function:
|
||
Bugs | ||
These commands need more comprehensive user documentation. |
loadkeys
Synopsis | ||
loadkeys | print the current keyboard interpreter | |
loadkeys | <country> [ <language> [<variant> ] ] | change the keyboard interpreter |
Details | ||
The loadkeys command allows you to change the current keyboard interpreter. A JNode keyboard interpreter maps device specific codes coming from the physical keyboard into device independent keycodes. This mapping serves to insulate the JNode operating system and applications from the fact that keyboards designed for different countries have different keyboard layouts and produce different codes.
A JNode keyboard interpreter is identified by a triple consisting of a 2 character ISO country code, together with an optional 2 character ISO language code and an optional variant identifier. Examples of valid country codes include "US", "FR", "DE", and so on. Examples of language codes include "en", "fr", "de" and so on. (You can use JNode completion to get complete lists of the codes. Unfortunately, you cannot get the set of supported triples.) When you run "loadkeys <country> ...", the command will attempt to find a keyboard interpreter class that matches the supplied triple. These classes are in the "org.jnode.driver.input.i10n" package, and should be part of the plugin with the same identifier. If loadkeys cannot find an interpreter that matches your triple, try making it less specific; i.e. leave out the <language> and <variant> parts of the triple. Note: JNode's default keyboard layout is given by the "org/jnode/shell/driver/input/KeyboardLayout.properties" file. (The directory location in the JNode source code tree is "core/src/driver/org/jnode/driver/input/".) |
||
Bugs | ||
Loadkeys should allow you to find out what keyboard interpreters are available without looking at the JNode source tree or plugin JAR files.
Loadkeys should allow you to set the keyboard interpreter independently for each connected keyboard. Loadkeys should allow you to change key bindings at the finest granularity. For example, the user should be able to (say) remap the "Windows" key to "Z" to deal with a broken "Z" key. This would allow you to configure JNode to use a currently unsupported keyboard type. (It would also help those game freaks out there who have been pounding on the "fire" key too much.) |
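For example, to switch to a French keyboard interpreter (codes taken from the description above):
JNode /> loadkeys FR fr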
locale
Synopsis | ||
locale | print the current default Locale | |
locale | --list | -l | list all available Locales |
locale | <language> [ <country> [<variant> ] ] | change the default Locale |
Details | ||
The locale command allows you to print, or change JNode's default Locale, or list all available Locales. |
log4j
Synopsis | ||
log4j | --list | -l | list the current log4j Loggers |
log4j | <configFile> | reloads log4j configs from a file |
log4j | --url | -u <configURL> | reloads log4j configs from a URL |
log4j | --setLevel | -s <level> [ <logger> ] | changes logging levels |
Details | ||
The log4j command manages JNode's log4j logging system. It can list loggers and logging levels, reload the logging configuration and adjust logging levels.
The first form of the log4j command lists the currently defined Loggers and their explicit or effective logging levels. An effective level is typically inherited from the "root" logger, and is shown in parentheses. The second and third forms of the log4j command reload the log4j configuration from a file or URL. The final form of the log4j command allows you to manually change logging levels. You can use completion to see what the legal logging levels and the current logger names are. If no <logger> argument is given, the command will change the level for the root logger. |
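For example (the logger name is hypothetical):
JNode /> log4j --setLevel DEBUG org.jnode.shell
JNode /> log4j --list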
ls
Synopsis | ||
ls | [ <path> ... ] | list files and directories |
Details | ||
The ls command lists the files and/or directories given by the <path> arguments. If no arguments are provided, the current directory is listed. | ||
Bugs | ||
The current output format for 'ls' does not clearly distinguish between an argument that is a file and one that is a directory. A format that looks more like the output for UNIX 'ls' would be better. |
lsirq
Synopsis | ||
lsirq | print IRQ handler information | |
Details | ||
The lsirq command prints interrupt counts and device names for each IRQ. |
memory
Synopsis | ||
memory | show JNode memory usage | |
Details | ||
The memory command shows how much JNode memory is in use and how much is free. |
mkdir
Synopsis | ||
mkdir | <path> | create a new directory |
Details | ||
The mkdir command creates a new directory. All parent directories in the supplied path must already exist. |
mount
Synopsis | ||
mount | show all mounted file systems | |
mount | <device> <directory> <fsPath> | mount a file system |
Details | ||
The mount command manages mounted file systems. The first form of the command lists all mounted file systems, showing the mount points and the device identifiers.
The second form of the command mounts a file system. The file system on <device> is mounted as <directory>, with <fsPath> specifying the directory in the file system being mounted that will be used as the root of the file system. Note that the mount point given by <directory> must not exist before mount is run. (JNode mounts the file system as the mount point, not on top of it as UNIX and Linux do.) |
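For example (the device id and mount point are hypothetical):
JNode /> mount hdb0 /data /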
namespace
Synopsis | ||
namespace | Print the contents of the system namespace | |
Details | ||
The namespace command shows the contents of the system namespace. The output gives the class names of the various managers and services in the namespace. |
netstat
Synopsis | ||
netstat | Print network statistics | |
Details | ||
The netstat command prints address family and protocol statistics gathered by JNode's network protocol stacks. |
onheap
Synopsis | ||
onheap | [--minCount <count>] [--minTotalSize <size>] [--className <className>]* | Print per-class heap usage statistics
Details | ||
The onheap command scans the heap to gather statistics on heap usage. Then it outputs a per-class breakdown, showing the number of instances of each class and the total space used by those instances.
When you run the command with no options, the output report shows the heap usage for all classes. This is typically too large to be directly useful. If you are looking for statistics for specific classes, you can pipe the output to the grep command and select the classes of interest with a regex. If you are trying to find out what classes are using a lot of space, you can use the onheap command's options to limit the output as follows:
|
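For example, to focus on classes with a significant footprint, or on a particular package (the threshold and pattern are illustrative):
JNode /> onheap --minTotalSize 100000
JNode /> onheap | grep java.lang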
page
Synopsis | ||
page | [ <file> ] | page a file |
page | page standard input | |
Details | ||
The page command displays the supplied file a screen page at a time on a new virtual console. If no arguments are provided, standard input is paged.
The command uses keyboard input to control paging. For example, a space character advances one screen page and ENTER advances one line. Enter 'h' for a listing of the available pager commands and actions. |
||
Bugs | ||
The current implementation does not pre-read its input stream, so nothing will be displayed until the next screenful is available. Also, the entire contents of the file or input stream will be buffered in memory.
A number of useful features supported by typical 'more' and 'less' commands have not been implemented yet. |
ping
Synopsis | ||
ping | <host> | Ping a remote host |
Details | ||
The ping command sends ICMP PING messages to the remote host given by <host> and prints statistics on the replies received. Pinging is a commonly used technique for testing that a remote host is contactable. However, ping "failure" does not necessarily mean that the machine is uncontactable. Gateways and even hosts are often configured to silently block or ignore PING messages. | ||
Bugs | ||
The ping command uses hard-wired parameters for the PING packet's TTL, size, count, interval and timeout. These should be command line options. |
plugin
Synopsis | ||
plugin | List all plugins and their status | |
plugin | <plugin> | List a given plugin |
plugin | --load | -l <plugin> [ <version> ] | Load a plugin |
plugin | --unload | -u <plugin> | Unload a plugin |
plugin | --reload | -r <plugin> [ <version> ] | Reload a plugin |
plugin | --addLoader | -a <url> | Add a new plugin loader |
Details | ||
The plugin command lists and manages plugins and plugin loaders.
The no-argument form of the command lists all plugins known to the system, showing each one's status. The one-argument form lists a single plugin. The --load, --unload and --reload options tell the plugin command to load, unload or reload a specified plugin. The --load and --reload forms can also specify a version of the plugin to load or reload. The --addLoader option configures a new plugin loader that will load plugins from the location given by the <url>. |
propset
Synopsis | ||
propset | [ -s | --shell ] <name> [ <value> ] | Set or remove a property |
Details | ||
The propset command sets and removes properties in either the System property space or (if -s or --shell is used) Shell property space. If both <name> and <value> are supplied, the property <name> is set to the supplied <value>. If just <name> is given, the named property is removed.
The System property space consists of the properties returned by "System.getProperty()". This space is currently isolate-wide, but there are moves afoot to make it proclet specific. The Shell property space consists of properties stored by each Shell instance. This space is separate from a shell interpreter's variable space, and persists over changes in a Shell's interpreter. The 'set' command is an alias for 'propset', but if you are using the 'bjorne' interpreter the 'set' alias is obscured by the POSIX 'set' builtin command, which has incompatible semantics. Hence 'propset' is the recommended alias. |
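For example (the property name and value are illustrative):
JNode /> propset -s prompt.color blue
JNode /> propset -s prompt.color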
pwd
Synopsis | ||
pwd | print the pathname for current directory | |
Details | ||
The pwd command prints the pathname for the current directory; i.e. the value of the System "user.dir" property mapped to an absolute pathname. Note that the current directory is not guaranteed to exist, or to ever have existed. |
ramdisk
Synopsis | ||
ramdisk | -c | --create [ -s | --size <size> ] | |
Details | ||
The ramdisk command manages RAM disk devices. A RAM disk is a simulated disk device that uses RAM to store its state.
The --create form of the command creates a new RAM disk with a size in bytes given by the --size option. The default size is 16K bytes. Note that the RAM disk has a notional block size of 512 bytes, so the size should be a multiple of that. |
reboot
Synopsis | ||
reboot | shutdown and reboot JNode | |
Details | ||
The reboot command shuts down JNode services and devices, and then reboots the machine. |
remoteout
Synopsis | ||
remoteout | [--udp | -u] --host | -h <host> [--port | -p <port>] | Copy console output and logging to a remote receiver |
Details | ||
Running the remoteout command tells the shell to copy console output (both 'out' and 'err') and logger output to a remote TCP or UDP receiver. The options are as follows:
Before you run remoteout on JNode, you need to start a TCP or UDP receiver on the relevant remote host and port. The JNode codebase includes a simple receiver application implemented in Java. You can run it as follows:
java -cp $JNODE/core/build/classes org.jnode.debug.RemoteReceiver &
Running the RemoteReceiver application with the --help option will print out a "usage" message. Notes:
|
||
Bugs | ||
In addition to the inherent lossiness of UDP, the UDPOutputStream implementation can discard output arriving simultaneously from multiple threads.
Logger output redirection is disabled in TCP mode due to a bug that triggers kernel panics. There is currently no way to turn off console/logger copying once it has been started. Running remoteout and a receiver on the same JNode instance may cause JNode to lock up in a storm of console output. |
resolver
Synopsis | ||
resolver | List the DNS servers the resolver uses | |
resolver | --add | -a <ipAddr> | Add a DNS server to the resolver list |
resolver | --del | -d <ipAddr> | Remove a DNS server from the resolver list |
Details | ||
The resolver command manages the list of DNS servers that the Resolver uses to resolve names of remote computers and services.
The zero-argument form of resolver lists the IP addresses of the DNS servers in the order that they are used. The --add form adds a DNS server (identified by a numeric IP address) to the front of the resolver list. The --del form removes a DNS server from the resolver list. |
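For example (the server address is illustrative):
JNode /> resolver --add 192.168.1.1
JNode /> resolver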
route
Synopsis | ||
route | List the network routing tables | |
route | --add | -a <target> <device> [ <gateway> ] | Add a new route to the routing tables |
route | --del | -d <target> <device> [ <gateway> ] | Remove a route from the routing tables |
Details | ||
The routing table tells the JNode network stacks which network devices to use to send packets to remote machines. A routing table entry consists of the "target" address for a host or network, the device to use when sending to that address, and optionally the address of the local gateway to use.
The route command manages the routing table. The no-argument form of the command lists the current routing table. The --add and --del add and delete routes respectively. For more information on how to use route to configure JNode networking, refer to the FAQ. |
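For example, to route a local network via an ethernet device (the addresses and device name are illustrative):
JNode /> route --add 192.168.1.0 eth-pci(0,16,0)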
rpcinfo
Synopsis | ||
rpcinfo | <host> | Probe a remote host's ONC portmapper service |
Details | ||
The rpcinfo command sends a query to the ONC portmapper service running on the remote <host> and lists the results. |
run
Synopsis | ||
run | <file> | Run a command script |
Details | ||
The run command runs a command script. If the script starts with a line of the form
#!<interpreter> where <interpreter> is the name of a registered CommandInterpreter, the script will be run using the nominated interpreter. Otherwise, the script will be run using the shell's current interpreter. |
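For example, a script might begin like this (the interpreter name and commands are illustrative; "bjorne" is the interpreter mentioned elsewhere in this documentation):
#!bjorne
echo Starting up
dir /jnode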
startawt
Synopsis | ||
startawt | start the JNode Graphical User Interface | |
Details | ||
The startawt command starts the JNode GUI and launches the desktop class specified by the system property jnode.desktop. The default value is "org.jnode.desktop.classic.Desktop".
There is more information on the JNode GUI page, including information on how to exit the GUI. |
syntax
Synopsis | ||
syntax | lists all aliases that have a defined syntax | |
syntax | --load | -l | loads the syntax for an alias from a file |
syntax | --dump | -d | dumps the syntax for an alias to standard output
syntax | --dump-all | dumps all syntaxes to standard output
syntax | --remove | -r alias | remove the syntax for the alias |
Details | ||
The syntax command allows you to override the built-in syntaxes for commands that use the new command syntax mechanism. The command can "dump" a command's current syntax specification as XML, and "load" a new one from an XML file. It can also "remove" a syntax, provided that the syntax was defined or overridden in the command shell's syntax manager.
The built-in syntax for a command is typically specified in the plugin descriptor for a parent plugin of the command class. If there is no explicit syntax specification, a default one will be created on-the-fly from the command's registered arguments. Note: not all classes use the new syntax mechanism. Some JNode command classes use an older mechanism that is being phased out. Other command classes use the classic Java approach of decoding arguments passed via a "public static void main(String[])" entry point. |
||
Bugs | ||
The XML produced by "--dump" or "--dump-all" should be pretty-printed to make it more readable / editable. |
tar
Synopsis |
tar -Acdtrux [Options] [File ...] |
Details |
The tar program provides the ability to create tar archives, as well as various other kinds of manipulation. For example, you can use tar on previously created archives to extract files, to store additional files, or to update or list files which were already stored. |
Compatibility |
JNode tar aims to be fully compliant with gnu tar. |
Links |
tcpinout
Synopsis | ||
tcpinout | <host> <port> | Run tcpinout in client mode |
tcpinout | <local port> | Run tcpinout in server mode |
Details | ||
The tcpinout command is a test utility that sets up a TCP connection to a remote host and then connects the command's input and output streams to the socket. The command's standard input is read and sent to the remote machine, and simultaneously output from the remote machine is written to the command's standard output. This continues until the remote host closes the socket or a network error occurs.
In "client mode", the tcpinout command opens a connection to the supplied <host> and <port>. This assumes that there is a service on the remote host that is "listening" for connections on the port. In "server mode", the tcpinout command listens for an incoming TCP connection on the supplied <local port>. |
thread
Synopsis | ||
thread | [--groupDump | -g] | Display info for all extant Threads
thread | <threadName> | Display info for the named Thread |
Details | ||
The thread command can display information for a single Thread or all Threads that are still extant.
The first form of the command traverses the ThreadGroup hierarchy, displaying information for each Thread that it finds. The information displayed consists of the Thread's 'id', its 'name', its 'priority' and its 'state'. The latter tells you (for example) if the thread is running, waiting on a lock or exited. If the Thread has died with an uncaught exception, you will also see a stacktrace. If you set the --groupDump flag, the information is produced by calling the "GroupInfo.list()" debug method. The output contains more information but the format is ugly. The second form of the thread command outputs information for the thread given by the <threadName> argument. No ThreadGroup information is shown. |
||
Bugs | ||
The output does not show the relationship between ThreadGroups unless you use --groupDump.
The second form of the command should set a non-zero return code if it cannot find the requested thread. There should be a variant for selecting Threads by 'id'. |
time
Synopsis |
time Alias [Args] |
Details |
Executes the command given by Alias and outputs the total execution time of that command. |
touch
Synopsis | ||
touch | <filename> | create a file if it does not exist |
Details | ||
The touch command creates the named file if it does not already exist. If the <filename> is a pathname rather than a simple filename, the command will also create parent directories as required. |
unzip
Synopsis |
unzip [Options] Archive [File ...] [-x Pattern] [-d Directory] |
Details |
The unzip program handles the extraction and listing of archives based on the PKZIP format. |
Compatibility |
JNode unzip aims to be compatible with INFO-Zip. |
Links |
utest
Synopsis | ||
utest | <classname> | runs the JUnit tests in a class. |
Details | ||
The utest command loads the class given by <classname>, creates a JUnit TestSuite from it, and then runs the TestSuite using a text-mode TestRunner. The results are written to standard output. |
vminfo
Synopsis | ||
vminfo | [ --reset ] | show JNode VM information |
Details | ||
The vminfo command prints out some statistics and other information about the JNode VM. The --reset flag causes some VM counters to be zeroed after their values have been printed. |
wc
Synopsis |
wc [-cmlLw] [File ...] |
Details |
Prints newline, word and byte counts for each file. |
Compatibility |
JNode wc is posix compatible. |
Links |
zip
Synopsis |
zip [Options] [Archive] [File ...] [-xi Pattern] |
Details |
The zip program handles the creation and modification of zip archives based on the PKZIP format. |
Compatibility |
JNode zip aims to be compatible with INFO-Zip. |
Links |
Starting the JNode GUI
JNode supports a GUI which runs a graphical desktop and a limited number of applications. The normal way to launch the JNode GUI is to boot JNode normally, and then at the console command prompt run the following:
JNode /> gc
JNode /> startawt
The screen will go blank for some time (30 to 60 seconds is common), and then the JNode desktop will be displayed.
Using the JNode GUI
The JNode GUI enables the following special key bindings:
<ALT> + <CTRL> + <F5> | Refresh the GUI | |
<ALT> + <F11> | Leaves the GUI | |
<ALT> + <F12> | Quits the GUI | |
<ALT> + <CTRL> + <BackSpace> | Quits the GUI | (Don't use this if you are running under Linux/Unix: it will quit the Linux GUI.)
Trouble-shooting
If the GUI fails to come up after a reasonable length of time, try using <ALT> + <F12> or <ALT> + <CTRL> + <BackSpace> to return to the text console. When you get back to the console, look for any relevant messages on the current console and on the logger console (<ALT> + <F7>).
One possible cause of the GUI not launching is that JNode has run out of memory while compiling the GUI plugins to native code. If this appears to be the case and you are running a virtual PC (e.g. using VMware), try increasing the memory size of the virtual PC.
Another possible cause of problems may be that JNode doesn't have a working device driver for your PC's graphics card. If this is the case, you could try booting JNode in VESA mode. To do this, simply boot JNode selecting a "(VESA mode)" entry from the GRUB boot menu.
Introduction
The JNode command shell allows commands to be entered and run interactively from the JNode command prompt or run from command script files. Input entered at the command prompt (or read from a script file) is first split into command lines by a command interpreter; see below. Each command line is split into a command name (an alias in JNode parlance) and a sequence of arguments. Finally, each command alias is mapped to a class name, and run by a command invoker.
The available aliases can be listed by typing
JNode /> alias<ENTER>
and an alias's syntax and built-in help can be displayed by typing
JNode /> help alias<ENTER>
More extensive documentation for most commands can be found in the JNode Commands index.
Keyboard Bindings
The command shell (or more accurately, the keyboard interpreter) implements the following keyboard events:
<SHIFT>+<UP ARROW> | Scroll the console up a line |
<SHIFT>+<DOWN-ARROW> | Scroll the console down a line |
<SHIFT>+<PAGE-UP> | Scroll the console up a page |
<SHIFT>+<PAGE-DOWN> | Scroll the console down a page |
<ALT>+<F1> | Switch to the main command console |
<ALT>+<F2> | Switch to the second command console |
<ALT>+<F7> | Switch to the Log console (read only) |
<ESC> | Show command usage message(s) |
<TAB> | Command / input completion |
<UP-ARROW> | Go to previous history entry |
<DOWN-ARROW> | Go to next history entry |
<LEFT-ARROW> | Move cursor left |
<RIGHT-ARROW> | Move cursor right |
<BACKSPACE> | Delete character to left of cursor |
<DELETE> | Delete character to right of cursor |
<CTRL>+<C> | Interrupt command (currently disabled) |
<CTRL>+<D> | Soft EOF |
<CTRL>+<Z> | Continue the current command in the background |
<CTRL>+<L> | Clear the console and the input line buffer |
Note: you can change the key bindings using the bindkeys command.
Command Completion and Incremental Help
The JNode command shell has a sophisticated command completion mechanism that is tied into JNode's native command syntax mechanisms. Completion is performed by typing the <TAB> key.
If you enter a partial command name as follows:
JNode /> if
If you now enter <TAB> the shell will complete the command as follows:
JNode /> ifconfig
with a space after the "g" so that you can enter the first argument. If you enter <TAB>
again, JNode will list the possible completions for the first argument as follows:
eth-pci(0,16,0)
loopback
JNode /> ifconfig
This is telling you that the possible values for the first argument are "eth-pci(0,16,0)" and "loopback"; i.e. the names of all network devices that are currently available. If you now enter "l" followed by <TAB>, the shell will complete the first argument as follows:
JNode /> ifconfig loopback
and so on. Completion can be performed on aliases, option names and argument types such as file and directory paths and device and plugin names.
While completion can be used to jog your memory, it is often useful to be able to see the syntax description for the command you are entering. If you are in the middle of entering a command, entering <CTRL-?> will parse what you have typed in so far against the alias's syntax, and then print the syntax description for the alternative(s) that match what you have entered.
The JNode command shell uses a CommandInterpreter object to translate the characters typed at the command prompt into the names and arguments for commands to be executed. There are currently 3 interpreters available:
The JNode command shell currently consults the "jnode.interpreter" property to determine what interpreter to use. You can change the current interpreter using the "propset -s" command; e.g.
JNode /> propset -s jnode.interpreter bjorne
Note that this only affects the current console, and that the setting does not persist beyond the next JNode reboot.
Command Invokers
The JNode command shell uses a CommandInvoker object to execute commands extracted from the command line by the interpreter. This allows us to run commands in different ways. There are currently 4 command invokers available:
The JNode command shell currently consults the "jnode.invoker" property to determine what invoker to use. You can change the current invoker using the "propset -s" command; e.g.
JNode /> propset -s jnode.invoker isolate
Note that this only affects the current console, and that the setting does not persist beyond the next JNode reboot.
If you want to test some java application, but don't want to recompile JNode completely every time you change your application, you can use the classpath command.
Set up your network; if you don't know how, read the FAQ.
Now you have to set up a web server or TFTP server on your remote machine, where you place your .class or .jar files.
With the classpath command you can now add a remote path. E.g. "classpath add http://192.168.0.1/path/to/classes/". Using "classpath" without arguments shows you the list of added paths. To start your application simply type the class file's name.
For more info read the original forum topic from Ewout, read more about shell commands or have a look at the following example:
On your PC:
Install a Webserver (e.g. Apache) and start it up. Let's say it has 192.168.0.1 as its IP. Now create a HelloWorld.java, compile it and place the HelloWorld.class in a directory of your Webserver, for me that is "/var/www/localhost/htdocs/jnode/".
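For completeness, the HelloWorld.java used in this walkthrough can be as minimal as the following (the greeting text is arbitrary):

public class HelloWorld {
    // Entry point that JNode invokes when you type "HelloWorld" at the prompt.
    public static void main(String[] args) {
        System.out.println("Hello World from a remote classpath!");
    }
}

Compile it with "javac HelloWorld.java" and copy the resulting HelloWorld.class into the web server directory mentioned above.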
Inside JNode:
Type the following lines inside JNode. You just have to replace the IP addresses and network device with values matching your configuration.
ifconfig eth-pci(0,17,0) 192.168.0.6
route add 192.168.0.1 eth-pci(0,17,0)
classpath add http://192.168.0.1/jnode/
Now that the new classpath is added, you can run your HelloWorld app by simply typing
HelloWorld
Performance of an OS is critical. That's why many have suggested that an OS cannot be written in Java. JNode will not be the fastest OS around for quite some time, but it is and will be a proof that it can be done in Java.
Since release 0.1.6, the interpreter has been removed from JNode, so now all methods are compiled before being executed. Currently two new native code compilers are under development that will add various levels of optimizations to the current compiler. We expect these compilers to bring us much closer to the performance of Sun's J2SDK.
This page will keep track of performance of JNode, measured using various benchmarks, over time.
The performance tests are done on a Pentium4 2Ghz with 1GB of memory.
ArithOpt, org.jnode.test.ArithOpt | Lower numbers are better. | ||
Date | JNode Interpreted | JNode Compiled | Sun J2SDK |
12-jul-2003 | 1660ms | 108ms | 30ms |
19-jul-2003 | 1639ms | 105ms | 30ms |
17-dec-2003 | 771ms | 63ms | 30ms |
20-feb-2004 | n.a. | 59ms | 30ms |
03-sep-2004 | n.a. | 27ms* | 30ms |
28-jul-2005 | n.a. | 20ms* | 30ms** |
Sieve, org.jnode.test.Sieve | Higher numbers are better. | ||
Date | JNode Interpreted | JNode Compiled | Sun J2SDK |
12-jul-2003 | 53 | 455 | 5800 |
19-jul-2003 | 55 | 745 | 5800 |
17-dec-2003 | 158 | 1993 | 5800 |
20-feb-2004 | n.a. | 2002 | 5800 |
03-sep-2004 | n.a. | 4320* | 5800 |
28-jul-2005 | n.a. | 3660* | 4252** |
*) Using L1A compiler
**) Using J2SDK 1.5.0 (others 1.4.2)
JNode is now working on its second major release (0.3).
This second release will focus on stability, speed and memory usage. Furthermore, it will add a real installer, provide isolates and much more.
In the meantime, we continue to release intermediate releases reflecting the state of development. Feel free to download them and enjoy using them.
Look here for the plans for this upcoming release.
We need your help to make it possible, so join us and help us realize the future of Operating Systems.
Look at the contribute page if you want to help us.
Below you will find various reports, updated daily, about the current state of the project:
Changes from JNode 0.2.8 to current SVN trunk version
Features
========
progress with OpenJDK integration
class library updated to OpenJDK6 b13
JNode now builds with both Java 6 SE and OpenJDK6+IcedTea
javac source level and target level raised to 1.6
introduced mauve based regression testing
improved isolate support
added isolate invoker
added Russian keyboard support
improved NTFS support
added HFS+ formatter
progress with Bjorne shell
improved modal dialogs
console & shell improvements
a large number of bug fixes and improvements in the overall system aiming at better Java compatibility, stability and performance
real world applications starting to work: Jetty + recent Servlet/JSP examples, PHP with Jetty + Quercus, JEdit, Groovy
Contributors to this release
============================
Levente Sántha
Fabien Duminy
Peter Barth
Martin Husted Hartvig
Stephen Crawley
Fabien Lesire
Daniel Noll
Tim Sparg
Stephen Meslin-Weber
Sergey Mashkov
Ben Bucksch
Features
============================
Contributors to this release
============================
Special thanks to Jens Hatlak for integrating our patch to JIIC (version named a "JNode release")
Note to committers : This topic will serve to build the changelogs for the next release (and avoid searching at release time).
Feel free to add the new features and their author (the patch submitter or, by default, you)
Features
========
Integrated the OpenJDK implementations of Swing and AWT
Improved java.awt.Graphics and Graphics2D
Improved BDF font rendering
Added VESA based frame buffer support
Added a frame buffer based console with custom backgrounds
Implemented software cursor support
Added a JPEG decoder
Various ImageIO improvements
Added a Samba file system (rw) and support for smb:// and nfs:// URLs
Replaced argument syntax and completion framework for shell commands
Converted existing commands to the new syntax framework
Added a configure tool for the JNode build environment
Various bugfixes to networking, memory management, math support, FAT support, and the core VM.
Contributors to this release
============================
Levente Santha
Fabien Duminy
Peter Barth
Martin Husted Hartvig
Stephen Crawley
Fabien Lesire
Chris Boertien
Brett Lawrence
Daniel Noll
Jacob Kofod
Ian Darwin
Helmut Dersch
Stephen Meslin-Weber
Features
========
More progress with OpenJDK integration
Wildcards support in shell
NTFS improvements
NFS2 read write support
Command shell improvements
Improved support for pipes and command completion
Experimental Bjorne shell implementation
Added support for JDBC drivers
Fixed object serialization
Support for preferences API
Improved support for native methods
Code hotswapping support
Fixed DNS support
Included Jetty6, Servlet and JSP support
Read-only HFS+ file system
File System API refactoring & improvements
Experimental telnet server
Added CharvaCommander
Improved BDF font rendering
Contributors to this release
============================
Levente Santha
Martin Husted Hartvig
Fabien Duminy
Fabien Lesire
Stephen Crawley
Daniel Noll
Andrei Dore
Ian Darwin
Peter Barth
Robert Murphey
Michael Klaus
Tanmoy Deb
GriffenJBS (jstephen)
Features
========
OpenJDK integration, roughly 80% completed
Included standard javac and javap from OpenJDK
Targeting Java 6 compatibility
Build process migrated to Java 6
netcat command
Improved Image I/O support
Improved build process (parallel build using fork)
Included BeanShell and Rhino (JavaScript) as scripting languages
(encouraging results with Jython, Kawa (Scheme), JRuby 1.0 and Scala)
Improved Eclipse support
Nanosecond accurate timer
Started JNode installer (grub support)
Improvements in text consoles
Experimental via-rhine NIC driver
PXE booting support for via-rhine
ANT is getting usable
Improved support for mauve based tests
A mechanism for supporting the native keyword for arbitrary applications
Experimental support for isolates (static data isolation, access to fs/net/gui from isolates)
Various gc and memory management related improvements
Improvements to jfat and ext2 filesystems
Promising experiments with JPC running under JNode and running FreeDOS on the JPC/JNode stack
Support for transparency in the GUI
Many improvements to command execution and the input/output streams of commands
Introduced 'proclets' - small programs running in the same isolate with their own in/out/err streams
Proper command line editing and input line history for third party command line based programs (like bsh, rhino)
Contributors
============
Andrei Dore
Daniel Noll
Fabien Lesire
Fabien Duminy
Giuseppe Vitillaro
Levente Sántha
Michael Klaus
Martin Husted Hartvig
Peter Barth
Stephen Crawley
Tanmoy Deb
Changes from JNode 0.2.3 to JNode 0.2.4
Changes from JNode 0.2.2 to JNode 0.2.3
Changes from JNode 0.2.1 to JNode 0.2.2
Changes from JNode 0.2.0 to JNode 0.2.1
Changes from JNode 0.1.10 to JNode 0.2.0
You'll find the changelogs for old releases below.
Changes from JNode 0.1.9 to JNode 0.1.10
Changes from JNode 0.1.8 to JNode 0.1.9
Changes from JNode 0.1.7 to JNode 0.1.8
Changes from JNode 0.1.6 to JNode 0.1.7
Changes from JNode 0.1.5 to JNode 0.1.6
This page gives an overview of the support for J2SDK 5.0 features.
It reflects the status of the SVN trunk.
Feature | Status | Can be used |
---|---|---|
Generics | Supported | |
Generics in collection framework | Supported | |
Enhanced for loop | Supported | |
Autoboxing/unboxing | Supported | |
Typesafe enums | Supported | |
Varargs | Supported | |
Static import | Supported | |
Metadata (annotations) | Supported | |
Covariant return types | Supported |
Look at GitHub wiki
This part contains all technical documentation about JNode. This part is intended for JNode developers.
This chapter is a small introduction to the technical documentation of JNode.
It covers the basic parts of JNode and refers to their specific documentation.
JNode is a Virtual Machine and an Operating System in a single package. This implies that the technical documentation covers both the Virtual Machine side and the Operating System side.
Besides these two, there is one aspect to JNode that is shared by the Virtual Machine and the Operating System. This aspect is the PluginManager. Since every module in JNode is a Plugin, the PluginManager is a central component responsible for plugin lifecycle support, plugin permissions and plugin loading, unloading & reloading.
The picture below gives an overview of JNode and its various components.
It also states which parts of JNode are written in Java (green) and which parts are written in native assembler (red).
As can be seen in the picture above, many parts of the class library are implemented using services provided by the JNode Operating System. These services include filesystems, networking, gui and many more.
For developing JNode you first need to get the sources. There are basically three ways to get them, each with different advantages and disadvantages:
Have a look at the subpages for a more detailed description of the commands.
This page is deprecated since we have moved to GitHub
This is a short overview of SVN and how to use it. First of all, there are three ways to access SVN: svn, svn+ssh and via https. SourceForge uses WebDAV, which means you can also browse the repository online with your favorite browser; just click on this link.
Subversion uses three toplevel directories named trunk, branches and tags. Trunk can be compared to CVS HEAD; branches and tags are self-explanatory.
To checkout the source simply type:
svn co https://jnode.svn.sourceforge.net/svnroot/jnode/trunk/ jnode
which creates a new directory called jnode (mind the space between "trunk/" and "jnode"!). Everything in the repository under /jnode/trunk will be copied to jnode/ on your local computer.
svn up, svn add and svn commit work as expected.
New in Subversion are copy, move and delete. If you copy or move a file, its history is copied or moved with it; if you delete a directory, it will no longer show up in a fresh checkout.
If you want to make a branch from a version currently in trunk you can simply copy the content from trunk/ to branches/, e.g. by:
svn copy https://jnode.svn.sourceforge.net/svnroot/jnode/trunk/ https://jnode.svn.sourceforge.net/svnroot/jnode/branches/my-big-change-branch/
I think that's the most important for the moment, for more information have a look at the SVN Handbook located here.
By the way, to use SVN within Eclipse you have to install Subclipse, located here.
The URLs for the official git repository at GitHub are listed below:
Site: https://github.com/jnode/jnode
Https: https://github.com/jnode/jnode.git
SSH: git@github.com:jnode/jnode.git
For those who know what they are doing already and simply want push access, refer to the page on setting up push access. For those that are unfamiliar with git, there are a few git pages below that explain some of the common tasks of setting up and using git. This of course is not meant to be a replacement for the git manual.
Git Manual: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
Git Crash Course for SVN users: http://git.or.cz/course/svn.html
In order to gain push access to the repository you will have to create a username on the hosting site and upload a public ssh key. Have the key ready, as you will be asked for it when you sign up. If you have a key already you can register your username here and give your username, email and public key. There's no password involved with the account; the only password is the one you put on your ssh key when you create it, if you choose to do so. It's not as if there is sensitive material involved, so don't feel compelled to use a password.
Generating an ssh key requires ssh to be installed. Most Linux distributions will have this already. Simply type in:
ssh-keygen
and your key will be generated. Your public key will be in ~/.ssh/ in a file with a .pub suffix, likely id_rsa.pub or id_dsa.pub. Open the file in a text editor (turn off line wrapping if it's enabled), and copy/paste the key into your browser. It's important that the key not be broken over multiple lines.
Once your account has been created, send an email to the address found by the Owner tag on the jnode repo website, which is here. Once you are added you will need to configure your git configuration to use the new push url with your username.
When you originally cloned the repository, the configuration set up a remote named origin that referenced the public repo using the anonymous pull URL. We'll now change that using git-config.
git config remote.origin.url git@github.com:[user]/jnode.git
Of course replacing [user] with your username, which is case sensitive.
Now you should be set up. Please see the page on push rules and etiquette before continuing.
The first thing you want to do, obviously, is install git if it's not already installed. Once git is installed we need to clone the public repository to your local system. At the time of this writing this requires about a 130MB download.
First, position your current directory in the location where you want your working directory to be created. Don't create the working directory as git will refuse to init inside an existing directory. For this example we will clone to a jnode directory in ~/git/.
cd ~/git
git clone git@github.com:jnode/jnode.git jnode
Once this has finished you will have a freshly created working directory in ~/git/jnode, and the git repository itself will be located in ~/git/jnode/.git. For more info see Git Manual Chapter 1: Repositories and Branches.
This process has also set up what git refers to as a remote. The default remote after cloning is labeled origin and it refers to the public repository. In order to keep your repository up to date with origin, you will have to fetch its changes. See Updating with git-fetch for more info.
When fetch pulls in new objects, you may want to update any branches you have locally that are tracking branches on origin. This will almost always be true of the master branch, as it is highly recommended that you keep your master branch 'clean' and in sync with origin/master. It's not necessary, but it may make life easier until you understand git more fully. To update your master branch to that of origin/master simply
git rebase origin master
Then if you wish to rebase your local topic branches you can
git rebase master [branch]
The reason we're using git rebase instead of git merge is that we do not generally want merge commits to be created. This is partly to do with the svn repo that commits will eventually be pulled into. svn does not handle git merges properly, as a git merge commit has multiple parent commits, and svn has no concept of parents. Where git employs a tree structure for its commits, svn is more like a linked list, and is therefore strictly linear. This is why it's also important to fetch and rebase often, as it will make the transition of moving branches over to the svn repo much easier.
To learn more about branches refer to the git manual. It is highly recommended that new users of git read through chapters 1-4, as this explains a lot of how git operates, and you will likely want to keep it bookmarked for quick reference until you get a handle on things.
For those users that find the git command line a bit much, there is also `git gui`, which is a very nice tool. It allows you to do a lot of the tasks you would do on the command line via a GUI. There is also an Eclipse plugin under development called egit, part of the jgit project, which implements git in pure Java.
Once you have push access to the public git repo, there are a few simple rules I'd like everyone to observe.
1) Do not push to origin/master
This branch is to be kept in sync with the svn repo. This branch is updated hourly. When it is updated, any changes made to it will be lost anyway, as the update script is set up in overwrite mode. Even so, if someone fetches changes from origin/master before the update script has had a chance to bring it back in sync, those people will have an out-of-sync master, which is a pain for them. To be on the safe side, when pulling from origin master, it doesn't hurt to do a quick 'git log origin/master' before fetching to see if the commit messages have a git-svn-id: in the message. This is embedded by git for commits from svn. If the top commits do not have this tag, then someone has pushed into origin/master.
2) Do not push into branches unless you know what's going on with them.
If you have a branch created on your local repo and you would like to have your changes pulled upstream then push your branch to the repo and ask for it to be pulled. You can push your branch to the public repo by
git push origin [branch]
so long as a branch by that name does not already exist. Once the changes have been pulled the branch will be removed from the public repo. That is unless the branch is part of some further development. You will still have your branch on your local repo to keep around or delete at your leisure.
3) Sign-off on your work.
Although git preserves the author on its commits, svn overwrites this information when it is committed. Also, it is your way of saying that this code is yours, or that you have been given permission to submit it to the project under the license of the project. Commits will not be pulled upstream without a sign-off. The easiest way to set this up is to configure git with two variables.
git config user.name [name]
git config user.email [email]
Then when you go to make your commit, add an -s flag to git commit and it will automatically append a Signed-off-by: line to the commit message. This is not currently being enforced project wide, although it should be. Also if someone sends you a patch, you can add a Created-by: tag for that person, along with your own sign-off tag.
JNode has a number of configuration options that can be adjusted prior to performing a build. This section describes those options, the process of configuring those options, and the tools that support the process.
JNode is currently configured by copying the "jnode.properties.dist" file to "jnode.properties" and editing this and other configuration files using a text editor.
In the future, we will be moving to a new command-line tool that interactively captures configuration settings, and creates or updates the various configuration files.
The Configure tool is a Java application that is designed to be run in the build environment to capture and record JNode's build-time configuration settings. The first generation of this tool is a simple command-line application that asks the user questions according to an XML "script" file and captures and checks the responses, and records them in property files and other kinds of file.
The Configuration tool supports the following features:
The configuration tool is launched using the "configure.sh" script:
$ ./configure.sh
When run with no command arguments as above, the script launches the tool using the configuration script at "all/conf-source/script.xml". The full command-line syntax is as follows:
./configure.sh
./configure.sh --help
./configure.sh [--verbose] [--debug] <script-file>
The command creates and/or updates various configuration files, depending on what the script says. Before a file is updated, a backup copy is created by renaming the existing file with a ".bak" suffix.
The Configure tool uses a "script" to tell it what configuration options to capture, how to capture them and where to put them. Here is a simple example illustrating the basic structure of a script file:
<configureScript>
    <type name="integer.type" pattern="[0-9]+"/>
    <type name="yesno.type">
        <alt value="yes"/>
        <alt value="no"/>
    </type>
    <propFile name="test.properties">
        <property name="prop1" type="integer.type" description="Enter an integer" default="0"/>
        <property name="prop2" type="yesno.type" description="Do you want to?" default="no"/>
    </propFile>
    <screen title="Testing set 1">
        <item property="prop1"/>
        <item property="prop2"/>
    </screen>
</configureScript>
The main elements of a script are "types", "property sets" and "screens". Let's describe these in that order.
A "type" element introduces a property type which defines a set of allowed values for properties specified later in the script file. A property type's value set can be defined using a regular expression (pattern) or by listing the value set. For more details refer to the "Specifying property types" page.
A "propFile" element introduces a property set consisting of the properties to be written to a given property file. Each property in the property set is specified in terms of a property name and a previously defined type, together with a (one line) description and an optional default value. For more details refer to the "Specifying property files" page.
A "screen" element defines the dialog sequence that is used to request configuration properties from the user. The screen consists of a list of properties, together with (multi-line) explanations to be displayed to the user. For more details refer to the "Specifying property screens" page.
Finally, the "Advanced features" page describes the control properties and the import mechanism.
Configuration property types define sets of allowable values that can be used in values defined elsewhere in a script file. A property type can be defined either using a regular expression or by listing the set of allowable values. For example:
<type name="integer.type" pattern="[0-9]+"/> <type name="yesno.type"> <alt value="yes"/> <alt value="no"/> </type>
The first "type" element defines a type whose values are unsigned integer literals. The second one defines a type that can take the value "yes" or "no".
In both cases, the value sets are modeled in terms of the "token" character sequences that are entered by the user and the "value" character sequences that are written to the property files. For property types specified using regular expressions, the "token" and "value" sequences are the same, with one exception. The exception is that a sequence of zero characters is not a valid input token. So if the "pattern" could match an empty token, you must define an "emptyToken" that the user will use to enter this value. For example, the following defines a variant of the previous "integer.type" in which the token "none" is used to specify that the corresponding property should have an empty value:
<type name="optinteger.type" pattern="[0-9]*" emptyToken="none"/>
For property types specified by listing the values, you can make the tokens and values different for any pair. For example:
<type name="yesno.type"> <alt token="oui" value="yes"/> <alt token="non" value="no"/> </type>
Type values and tokens can contain just about any printable character (modulo the issue of zero length tokens). Type names however are restricted to ASCII letters, digits, '.', '-' and '_'.
A "propFile" element in a script file specifies details about a file to which configuration properties will be written. In the simplest case, a "propFile" element specifies a file name and a set of properties to be written. For example:
<propFile fileName="jnode.properties">
    <property name="jnode.vm.size" type="integer.type" description="Enter VM size in Mbytes" default="512"/>
    <property name="jnode.vdisk.enabled" type="yesno.type" description="Configure a virtual disk" default="no"/>
</propFile>
This specifies a classic Java properties file called "jnode.properties" which will contain two properties. The "jnode.vm.size" property will have a value that matches the type named "integer.type", with a default value of "512". The "jnode.vdisk.enabled" will have a value that matches the "yesno.type", defaulting to "no".
The Configure tool will act as follows for the example above.
Attributes of a "property" element
Each "property" element can have the following attributes:
Attributes of a "propFile" element
The Configure tool will read and write properties in different ways depending on the "propFile" element's attributes:
Alternative file formats
As described above, the Configure tool supports five different file types (more if you use plugin classes). These are as follows:
The file types "xml", "java" and "text" require the use of a template file, and do not permit properties to be loaded.
Template file expansion
If Configure uses a java.util.Properties.saveXXX method to write properties, you do not have a great deal of control over how the file is generated. For example, you cannot include comments for each property, and you cannot control the order of the properties.
The alternative is to create a template of a file that you want the Configure tool to add properties to. Here is a simple example:
# This file contains some interesting properties
# The following property is interesting
interesting=@interesting@
# The following property is not at all interesting
boring=@boring@
If the file above is specified as the "templateFile" for a property set that includes the "interesting" and "boring" properties, the Configure tool will output the property set by expanding the template to replace "@interesting@" and "@boring@" with the corresponding property values.
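Conceptually, this expansion is just a substitution of @name@ markers with property values. The sketch below illustrates the idea only; it is not the actual Configure implementation, the class and method names are invented, and it ignores the optional /modifiers form described next:

import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative template expansion: replace each @name@ with the property's value.
class TemplateExpansionSketch {
    private static final Pattern MARKER = Pattern.compile("@([A-Za-z0-9._-]+)@");

    static String expand(String template, Map<String, String> properties) {
        Matcher m = MARKER.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = properties.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}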
The general syntax for @...@ sequences is:
at_sequence ::= '@' name [ '/' modifiers ] '@'
name        ::= ...   # any valid property name
modifiers   ::= ...   # one or more modifier chars
The template expansion process replaces @...@ sequences as follows:
The template expansion is aware of the type of the file being expanded, and performs file-type specific escaping of properties before writing them to the output stream:
The "dialog" between the Configure tool and the user is organized into sequences of questions called screens. Each screen is a described by a "screen" element in the configuration script. Here is a typical example:
<screen title="Main JNode Build Settings"> <item property="jnode.virt.platform"> The JNode build can generate config files for use with various virtualization products. </item> <item property="expert.mode"> Some JNode build settings should only be used by experts. </item> </screen>
When the Configure tool processes a screen, it first outputs the screen's "title" and then iterates over the "item" elements in the screen. For each item, the tool outputs the multi-line content of the item, followed by a prompt formed from the designated property's description, type and default value. The user can enter a value, or just hit ENTER to accept the default. If the value entered by the user is acceptable, the Configure tool moves to the next item in the screen. If not, the prompt is repeated.
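The behaviour for a single item amounts to a prompt-validate-retry loop. The following sketch shows the general shape only; it is not the real Configure code and the validity check is a placeholder:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.function.Predicate;

// Illustrative prompt loop: repeat until an acceptable value (or the default) is given.
class ItemPromptSketch {
    static String capture(String description, String defaultValue,
                          Predicate<String> isValid) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        while (true) {
            System.out.print(description + " [" + defaultValue + "]: ");
            String answer = in.readLine();
            if (answer == null || answer.isEmpty()) {
                return defaultValue;      // plain ENTER accepts the default
            }
            if (isValid.test(answer)) {
                return answer;            // acceptable value; move on to the next item
            }
            System.out.println("Invalid value, please try again.");  // repeat the prompt
        }
    }
}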
Conditional Screens
The screen mechanism allows you to structure the property capture dialog(s) independently of the property files. But the real power of this mechanism is that screens can be made conditional on properties captured by other screens. For example:
<screen title="Virtualization Platform Settings" guardProp="jnode.virt.platform" valueIsNot="none"> <item property="jnode.vm.size"> You can specify the memory size for the virtual PC. We recommended a memory size of least 512 Mbytes. </item> <item property="jnode.virtual.disk"> Select a disk image to be mounted as a virtual hard drive. </item> </screen>
This screen is controlled by the state of a guard property; viz the "guardProp" attribute. In this case, the "valueIsNot" attribute says that property needs to be set to some value other than "none" for the screen to be acted on. (There is also a "valueIs" attribute with an analogous meaning.)
The Configuration tool uses an algorithm equivalent to the following one to decide which screen to process next:
The "changed" attribute
The "item" element of a screen can take an attribute called "changed". If present, this contains a message that will be displayed after a property is captured if the new value is different from the previous (or default) value. For example, it can be used to remind the user to do a full rebuild when critical parameters are changed.
The primary JNode build configuration file is the "jnode.properties" file in the project root directory.
Other important configuration files are the plugin lists. These specify the list of plugins that make up the JNode boot image and the lists that are available for demand loading in the various Grub boot configurations.
The build process of JNode consists of the following steps.
Boot image building
When JNode boots, the Grub bootloader is used to load a Multiboot-compliant kernel image and boot that image. It is the task of the BootImageBuilder to generate that kernel image.
The BootImageBuilder first loads the java classes that are required to start JNode into their internal Class structures. These classes are resolved and the most important classes are compiled into native code.
The object-tree that results from this loading & compilation process is then written to an image in exactly the same layout as objects in memory. This means that the necessary heap headers, object headers and instance variables are all written in the correct sequence and byte-ordering.
The memory image of all of these objects is linked with the bootstrapper code containing the microkernel. Together they form a kernel image loaded & booted by Grub.
Boot disk building
To run JNode in a test environment or to create a bootable CD-ROM, a bootable disk image is needed. It is the task of the BootDiskBuilder to create such an image.
The bootable disk image is a 16 MB disk image containing a bootsector, a partition table and a single partition. This single partition contains a FAT16 filesystem with the kernel image and the Grub stage2 and configuration files.
This chapter details the environment needed to setup a JNode development environment.
Sub-Projects
JNode has been divided into several sub-projects in order to keep it "accessible". These sub-projects are:
JNode-All | The root project where everything comes together |
JNode-Core | The core java classes, the Virtual Machine, the OS kernel and the Driver framework |
JNode-FS | The Filesystems and the various block device drivers |
JNode-GUI | The AWT implementation and the various video & input device drivers |
JNode-Net | The Network implementation and the various network device drivers |
JNode-Shell | The Command line shell and several system commands |
Each sub-project has the same directory structure:
<subprj>/build | All build results |
<subprj>/descriptors | All plugin descriptors |
<subprj>/lib | All sub-project specific libraries |
<subprj>/src | All sources |
<subprj>/.classpath | The eclipse classpath file |
<subprj>/.project | The eclipse project file |
<subprj>/build.xml | The Ant buildfile |
Eclipse
JNode is usually developed in Eclipse. (It can be done without)
The various sub-projects must be imported into Eclipse. Since they reference each other, it is advisable to import them in the following order:
For a more details please have a look at this Howto.
IntelliJ IDEA
JetBrains Inc has donated an Open Source License for IntelliJ IDEA to the dedicated developers working on JNode.
Developers can get a license by contacting Martin.
Setup of the sub-projects is done using the modules feature, as with Eclipse.
One should increase the max memory used in the bin/idea.exe.vmoptions or bin/idea.sh.vmoptions file, edit the -Xmx line to about 350mb. IntelliJ can be downloaded at http://www.jetbrains.com/idea/download/ Use at least version 5.1.1. Note that this version can import Eclipse projects.
Requirements for building under Windows
Now you can start a Windows command prompt, change directory to the JNode root, and build JNode as explained in the next section.
Requirements for building under Linux
Building
Running "build.sh" or "build.bat" with no arguments to list the available build targets. Then choose the target that best matches your target environment / platform.
Alternatively, from within Eclipse, execute the "all" target of all/build.xml. Building in Eclipse is not advised for Eclipse version 2.x because of the amount of memory the build process takes. From Eclipse 3.x make sure to use Ant in an external process.
A JNode build will typically generate the following files:
all/build/jnodedisk.pln | A disk image for use in VMWare 3.0 |
all/build/x86/netboot/jnodesys.gz | A bootable kernel image for use in Grub. |
all/build/x86/netboot/full.jgz | An initjar for use in Grub. |
Some builds also generate an ISO image which you can burn to disk, and then use to boot into JNode from a CD / DVD drive.
This chapter explains how to use IntelliJ IDEA 4.5.4 with JNode. JetBrains Inc has donated an Open Source License to the dedicated developers working on JNode. The license can be obtained by contacting Martin.
New developers not yet on the JNode project can get a free 30-day trial license from JetBrains Inc.
Starting
JNode contains several modules within a single CVS module. To checkout and import these modules in IntelliJ, execute the following steps:
Dedicated developers should use a CVS root like ":ssh:developername@cvs.sourceforge.net:/cvsroot/jnode"
Others should use anonymous CVS access with the CVS root ":pserver:anonymous@cvs.sourceforge.net:/cvsroot/jnode"
The rest has been setup in the project and you should now be able to start.
Building
You can build JNode within IntelliJ by using the build.xml Ant file. On the right side of IntelliJ you will find an "Ant Build" tab where the Ant file is found. Run the "help" target to get help on the build system.
Due to the memory requirements of the build process, it may be better to run the build from the command line using build.bat (on Windows) or build.sh (on Unix).
(These instructions were contributed by "jarrah".)
I've successfully built jnode on MacOS X from the trunk and the 2.6 sources. Here's what I needed to do:
You should end up with an ISO image called jnode-x86.iso in all/build/cdroms.
Cheers,
Greg
Using OSX and PPC for JNode development and testing
What we want is:
1. CVS tool
2. IDE for development
3. A way to build JNode
4. A way to boot JNode for testing
First of all we need to install the XCode tools from Apple. Usually they are shipped with your OSX; look in /Applications/Installers/. If they are not there, you can download them from Apple's site.
1. CVS tool
Well, cvs is already part of the OSX installation. There are some GUI tools to make the use of cvs easier. SmartCVS is a good one, which you can also use on your Windows or Linux computer.
2. IDE
Eclipse.
3. How to build JNode with a ppc machine (not FOR, WITH ppc)
Luckily for us, the JNode build process is based on Apache Ant, which, being a Java tool, runs everywhere. The only problem is the native assembly parts of JNode. For them the JNode build process uses nasm and yasm.
So the only thing we need to do is build them for PPC and use them. They will still produce x86 binaries, as they are written to do.
First of all we have to get the nasm and yasm sources. The first one is on
http://nasm.sourceforge.net
and the other is on
http://www.tortall.net/projects/yasm/
After that we unzip them and start the compile.
NASM
Open a terminal window and go inside the directory with the nasm sources
Run ./configure to create the Makefile for nasm
If everything is OK you are now ready to compile nasm. Just run "make nasm". There may be a problem if you try to compile all the nasm tools by running "make" (I had one), but you don't need them; nasm is enough.
Now copy nasm in your path. /usr/bin is a good place.
YASM
The same as for nasm open a terminal window and go to the directory with yasm sources.
Run "./configure"
Run "make"
Now you can either copy yasm to /usr/bin or run "make install", which will install the yasm tools under /usr/local/bin.
That's all for nasm and yasm. You are ready to build JNode. You may have problems using the build.sh script, but you can always run the build command manually: "java -Xmx512M -Xms128M -jar core/lib/ant-launcher.jar -lib core/lib/ -lib /usr/lib/java/lib -f all/build.xml cd-x86"
4. Booting JNode
Well there is only one way to do that. Emulation.
There is VirtualPC for OSX, which is pretty good and fast. To use it, just create a new virtual PC and start it. When the virtual PC is started, right-click on the CD-ROM icon at the bottom of the window (I know there is no right click on Macs; I assume you know to press ctrl+click). Now tell VirtualPC to use the JNode ISO image as its CD-ROM drive and boot from it. There you are!
I think there is also qemu for PPC. I have never used it, so I don't know how you can configure it.
This chapter explains the structure of the JNode source tree and the JNode package structure.
The JNode sources are divided into the following groups:
Every group is a directory below the root of the JNode CVS archive. Every group contains one or more standard directories.
This page lists some tips on how to write good JNode code.
Please add other tips as required.
A user-level command should use the streams provided by the Command API, e.g. by calling the 'getInput()' method from within the command's 'execute' method. Device drivers, services and so on that do not have access to these streams should use log4j logging.
Services, etc. should make appropriate log4j calls, passing the offending Throwable as an argument (a minimal example follows below).
If the APIs don't do what you want, raise an issue. Bear in mind that some requests may be deemed "too hard", or too application specific.
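As a minimal illustration of the logging tip above (the service class is invented; the log4j calls themselves are standard log4j 1.x API):

import org.apache.log4j.Logger;

// Example of service-side logging: report the failure together with the Throwable.
class ExampleService {
    private static final Logger log = Logger.getLogger(ExampleService.class);

    void doWork() {
        try {
            riskyOperation();
        } catch (Exception ex) {
            // Passing the Throwable as the second argument logs the stacktrace too.
            log.error("riskyOperation failed", ex);
        }
    }

    private void riskyOperation() throws Exception {
        // placeholder for the real work
    }
}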
All code that is developed as part of the JNode project must conform to the style set out in Sun's "Java Style Guidelines" (JSG) with variations and exceptions listed below. Javadocs are also important, so please try to make an effort to make them accurate and comprehensive.
Note that we use CheckStyle 4.4 as our arbiter for Java style correctness. Run "./build.sh checkstyle" to check your code style before checking it in or submitting it as a patch. UPDATE: And also, please run "./build.sh javadoc" to make sure that you haven't introduced any new javadoc warnings.
try {
    if (condition) {
        something();
    } else {
        somethingElse();
    }
} catch (Exception ex) {
    return 42;
}
Note that the else is on the same line as the preceding }, as is the catch.
public void loopy() {
    int i;
    LOOP: for (i = 100; i < 1000000; i++) {
        if (isPrime(i) && isPrime(i + 3) && isPrime(i + 5)) {
            break LOOP;
        }
    }
}
try {
    myStream.close();
} catch (IOException ex) {
    // we can safely ignore this
}
//*******************************************************
//
// Start of private methods
//
//*******************************************************
/**
 * {@inheritDoc}
 */
public void myMethod(String param1, int param2) {
}
The java classes of JNode are organized using the following package structure.
Note that not all packages are listed, but only the most important. For a full list, refer to the javadoc documentation.
All packages start with org.jnode.
There are some packages that do not comply to the rule that all packages start with org.jnode. These are:
All java source files must contain the standard JNode header found in <jnode>/all/template/header.txt.
Do not add extra information to the header, since this header is updated automatically, at which time any extra pieces of information are lost.
Add any extra information about the class to the class's javadoc comment. If you make a significant contribution to a class, feel free to add yourself as an @author. However, adding a personal copyright notice is "bad form", and unnecessary from the standpoint of copyright law. (If you are not comfortable with this, please don't contribute code to the project.)
All Java source files and other text-based files in the JNode project must be US-ASCII encoded. This means that extended characters in Java code must be encoded in the '\uxxxx' form. Lines should end with an ASCII linefeed (LF) character, not CR LF or LF CR, and hard tab (HT) characters should not be used.
If there is a pressing need to break these rules in some configuration or regression data file, we can make an exception. However, it is advisable to highlight the use of "nasty" characters (e.g. as comments in the file) so that someone doesn't accidentally "fix" them.
In JNode, all code, services and resources are packaged into plugins.
Each plugin has a descriptor that defines the packages it contains, the plugins it depends on, and any extensions. The plugin-descriptors are held in the descriptors/ directory of each subproject. During the build, once the subprojects have been compiled, the plugins are assembled based on the descriptors that are found.
Plugins are collectively packaged into an initjar. This jar file is passed on the command line to grub when booting JNode and defines what is available to JNode during boot (drivers and such), as well after boot (commands/applications).
A JNode plugin is defined by an xml file, its descriptor, contained in the descriptors/ directory of the subproject it belongs to. Filesystem plugins are in fs/descriptors, shell plugins in shell/descriptors and so on.
The root node of a plugin descriptor is <plugin>, which takes a few required attributes that give the id, name, version and license.
id : The plugin id. This is the name that other plugins will use for dependencies, and that the plugin-list will use to include the plugin in an initjar.
name : A short descriptive name of what the plugin is.
version : The version of the plugin. For non-JNode plugins, this should be the version of the software being included. For JNode plugins, use @VERSION@.
license-name : The name of the license covering the code in the plugin. JNode uses lgpl.
provider-name : The name of the project that provided the code, JNode.org for JNode plugins.
class (optional) : If the plugin requires special handling when loading/unloading, it can define a class here that extends org.jnode.plugin.Plugin, overriding the start() and stop() methods (see the sketch below).
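If the optional class attribute is used, the referenced class extends org.jnode.plugin.Plugin. The following is only a sketch: the lifecycle hooks are shown with the start()/stop() names used in the description above, and the constructor is an assumption, so check org.jnode.plugin.Plugin in the source tree for the exact names and signatures.

package org.jnode.shell.sample;   // hypothetical package

import org.jnode.plugin.Plugin;
import org.jnode.plugin.PluginDescriptor;

// Sketch of a plugin lifecycle class; verify the hook names and constructor against
// the actual org.jnode.plugin.Plugin base class before relying on this.
public class SamplePlugin extends Plugin {

    public SamplePlugin(PluginDescriptor descriptor) {   // assumed constructor signature
        super(descriptor);
    }

    // Called when the plugin is started.
    protected void start() {
        // acquire resources, register services, ...
    }

    // Called when the plugin is stopped/unloaded.
    protected void stop() {
        // release resources
    }
}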
Under the <plugin> node are definitions for different parts of the plugin. Here you define what the plugin includes, what it depends on, and any extensions.
The <runtime> node defines what a plugin is to include in its jar-file.
<runtime>
    <library name="foo.jar">
        <export name="foo.*"/>
    </library>
</runtime>
This will export the classes that match foo.* in foo.jar to a jar file. This is how you would include classes from a non-jnode library into a plugin for use in jnode. To have a plugin include jnode-specific classes, the library name is of the form "jnode-
To declare dependencies for a plugin, a list of <import> nodes under a <requires> node is required.
<requires>
    <import plugin="org.jnode.shell"/>
</requires>
This will add a dependency on the org.jnode.shell plugin for this plugin. The dependency does two things. First, when a plugin is included in a plugin-list, its dependencies must also be included, or the initjar builder will fail.
Each plugin has its own classloader. If commands or applications defined in a plugin are run, instead of using a classpath to find classes and jars, the plugin uses the dependencies to search for the proper classes. Every plugin class loader has access to the system plugins, its own plugin, and any plugins listed as dependencies. This means that no plugin needs to require a system plugin.
The last part of a plugin is the extensions. These are not specific to plugins, but rather to the different parts of JNode that use the plugin. An extension is defined as:
<extension point="some.extension.point">
The content of an extension is defined by its point. Below is a brief list of extension points and where to find documentation on them.
Shell Extensions
point="org.jnode.shell.aliases" : Used to define aliases for the alias manager in the shell.
point="org.jnode.shell.syntaxes" : Used to define a syntax for command line arguments to an alias.
Core Extensions
point="org.jnode.security.permissions" : Used to define what permissions the plugin is granted.
A plugin list is used to build an initjar and includes all the plugin jars that are defined in its list. The default plugin lists are in all/conf and these lists are read, and their initjars built by default. To change this behavior there are two options in jnode.properties that can be added to tell the build system where to look for custom plugin-lists, and also to turn off building the default plugins.
jnode.properties
custom.plugin-list.dir | The directory can be any directory. ${root.dir} can be used to prefix the path with the directory of your jnode build. |
no.default.initjars = 1 | Set to 1 to disable building the default initjars. |
A plugin list has a very simple definition. The root node is <plugin-list> that takes a single name attribute that will be the name of the actual initjar. The list of plugins are defined by adding <plugin id="some.plugin"> entries. If a plugin is included that has dependencies, and those plugins are not in the list, the initjar builder will fail.
You can add entries into the initjar manifest file by adding a <manifest> node with a list of <attribute> nodes. Attributes have two arguments, key and value. At a minimum you will want the following manifest entries :
<manifest>
    <attribute key="Main-Class" value="org.jnode.shell.CommandShell"/>
    <attribute key="Main-Class-Arg" value="boot"/>
</manifest>
This tells jnode, when it finishes initializing, and loads the initjar, that it should run CommandShell.main() with a single argument "boot", so that it knows that this shell is the root shell.
There are many reasons to create your own initjar plugin-list. The most basic reason would be to reduce the overhead of building jnode. By turning off building the default initjars, and defining your own plugin-list for a custom initjar, you can reduce the rebuild time of jnode when making simple changes. It can also allow you to create new plugins and define them in a plugin-list without disturbing the default initjar plugin-lists.
For a basic starting point, the shell-plugin-list.xml creates an initjar that has the minimal plugins for loading jnode and starting a CommandShell. From there you can add plugins that you want, to add various features.
This page will describe how to add a java program to JNode as plugin, so that it can be called via its alias.
First of all you need to set up Eclipse (or your favorite IDE) as described in the readme, so that JNode builds without errors and you can use it (e.g. use JNode in VMWare).
There are different ways of extending JNode with a plugin.
A plugin can contain a class that extends Plugin and (or) normal java programs.
Every plugin is described by a descriptor.
For our example we will develop a plugin that contains a normal java program.
We need a name for our plugin: we will use sample, which is also the package name of our plugin.
It belongs to one of the JNode subprojects; in our case we will use the folder name sample in the shell subproject.
Every java file for our plugin has to be in this directory (or its subfolders):
\shell\src\shell\org\jnode\shell\sample
(for me it is d:\jnode\shell\src\shell\org\jnode\shell\sample)
Now we will write a small HelloWorld.java which will be one of our plugin programs.
Here is the source of the file HelloWorld.java :
package org.jnode.shell.sample;
public class HelloWorld{
public static void main(String[] args){
System.out.println("HelloWorld - trickkiste@gmx.de");
}
}
That's OK, but it will not be built until we create a descriptor and add our plugin to the JNode full-plugin-list.xml.
The plugin descriptor is org.jnode.shell.sample.xml, stored in the descriptors folder of the shell subproject, and looks like this:
<?xml version="1.0" encoding="UTF-8"? >
<!DOCTYPE plugin SYSTEM "jnode.dtd">
<plugin id="org.jnode.shell.sample"
name="Sample Plugin"
version="0.2"
license-name="lgpl"
provider-name="Trickkiste">
<requires>
<import plugin="org.jnode.shell"/>
</requires>
<runtime>
<library name="jnode-shell.jar">
<export name="org.jnode.shell.sample.*"/>
</library>
</runtime>
<extension point="org.jnode.shell.aliases">
<alias name="HelloWorld" class="org.jnode.shell.sample.HelloWorld"/>
</extension>
</plugin>
Now we need to add our plugin to the JNode full-plugin-list.xml. This file is located in jnode\all\conf; your entry should look like this:
[...]
<plugin id="org.jnode.util"/>
<plugin id="org.jnode.vm"/>
<plugin id="org.jnode.vm.core"/>
<plugin id="org.jnode.shell.sample"/>
</plugin-list>
That's it; you can now build JNode and test your HelloWorld plugin by typing HelloWorld.
We can now add "normal" programs to JNode via its plugin structure.
In JNode's command line interface, the Argument types are the Command programmer's main tool for interacting with the user. The Argument provides support to the syntax mechanism to accept parameters, or reject malformed parameters and issue a useful error message. The Argument also supplies completion support, allowing a command to provide specific completions on specific domains.
At the moment, Arguments are mostly grouped into the shell project under the org.jnode.shell.syntax package. For the time being they will remain here. There is an effort being made to 'untangle' the syntax/argument APIs so this designation is subject to change in the future.
New arguments that are created should be placed into the cli project under the org.jnode.command.argument package if their only use is by the commands under the cli project.
Every command that accepts an option will require Arguments to capture the options and their associated values. The syntax parser makes use of an argument 'label' to map a syntax node to a specific argument. The parser then asks the Argument to 'accept' the given value. The argument may reject the token if it does not satisfy its requirements, and provide a suitable error message as to why. If it accepts the token, then it will be captured by the argument for later use by its command.
Arguments also provide the ability to 'complete' a partial token. In some situations completions are not possible or do not make sense, but in many situations completions can be very helpful and save on typing, reduce errors, and even provide a little help if there are a lot of options. The more characters there are in the token, the narrower the list of completions becomes. If the argument supplies only a single completion, this completion will be filled in for the user. This is a very powerful capability that can be used to great effect!
Before writing a command, it is important to consult the various specifications that many commands may have. Once you have an idea of the arguments you will need for the command, and you have a syntax put together, you can begin by adding your arguments to the command.
Along with the label that was discussed earlier, commands also take a set of flags. A set of flags is supplied by the Argument class, but individual Argument types may also supply their own specific flags. At the end of this document will be a list of known flags and their purpose, but for now we will discuss the common Argument flags.
Most arguments have overloaded constructors that allow you to not set any flags. If no such constructor exists, then feel free to create one! Alternatively, it is safe to provide '0' (zero) for the flags parameter to mean no flags.
Once you have created the arguments that your command will need, you need to 'register' the arguments. This needs to be done in the Command constructor. Each argument needs to be passed to the registerArguments(Argument...) method. Once this is done, your arguments are ready to be populated by the syntax parser.
(Note: Arguments that have been registered, but do not have a matching syntax node for their label, will not cause an error at runtime. But they do make trouble for the 'help' command. For this reason it is recommended not to register arguments that have not yet been mapped in the syntax.)
When your command enters at the execute() method, the arguments will be populated with any values that were captured from the command line. For the most part, you will only need to be concerned with three methods supplied by Argument.
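To make this concrete, here is a minimal sketch of a new-style command, loosely modelled on the CatCommand walk-through later in this documentation. The class, alias and label names are invented, and the getPrintWriter() and isSet() accessors are assumptions; registerArguments(...), getValues() and the no-argument execute() entry point are taken from the examples elsewhere in these pages.

package org.jnode.shell.sample;

import java.io.File;
import java.io.PrintWriter;

import org.jnode.shell.AbstractCommand;
import org.jnode.shell.syntax.Argument;
import org.jnode.shell.syntax.FileArgument;
import org.jnode.shell.syntax.FlagArgument;

public class ListFilesCommand extends AbstractCommand {
    // The labels ("file", "verbose") must match the argLabel attributes
    // used in the command's syntax descriptor.
    private final FileArgument ARG_FILE =
        new FileArgument("file", Argument.OPTIONAL | Argument.MULTIPLE,
            "the files to be listed");
    private final FlagArgument FLAG_VERBOSE =
        new FlagArgument("verbose", Argument.OPTIONAL, "if set, print extra detail");

    public ListFilesCommand() {
        super("print the names (and optionally sizes) of the given files");
        // Registration must happen in the constructor so that the syntax
        // parser and the help command can find the arguments.
        registerArguments(ARG_FILE, FLAG_VERBOSE);
    }

    public void execute() {
        PrintWriter out = getOutput().getPrintWriter();  // assumed accessor
        boolean verbose = FLAG_VERBOSE.isSet();          // assumed accessor
        for (File file : ARG_FILE.getValues()) {
            out.println(verbose ? file + " " + file.length() : file.toString());
        }
        out.flush();
    }
}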
That's about it for arguments. Simple, huh? Arguments are designed to allow for rapid development of commands, and as such provide a nice simple interface for using arguments 'out of the box', so to speak. But the real power of arguments is their ability to be extended and manipulated in many ways so as to provide a more feature-filled command line interface.
Here is a list of the more common argument types, along with a short description of their purpose, features and usage.
The syntax of a command is the definition of options, symbols and arguments that are accepted by commands. Each command defines its own syntax, allowing customization of flags and parameters, as well as defining order. The syntax is constructed using several different mechanisms, which when combined, allow for a great deal of control in restricting what is acceptable in the command line for a given command.
When you define a new command, you must define a syntax bundle within a syntax extension point. When the plugin is loaded, the syntax bundle is parsed from the descriptor and loaded into the syntax manager. When the bundle is needed, when completing or when preparing for execution, the bundle is retrieved. Because a syntax bundle is immutable, it can be cached completely, and used concurrently.
Also, the help system uses the syntax to create usage statements and to map short & long flags to the description from an argument.
See this document page for a concise description of the various syntax elements.
When setting out to define the syntax for a command, it is helpful to layout the synopsis and options that the command will need. The synopsis of a command can be used to define separate modes of operation. The syntax block itself is an implied <alternatives>, which means if parsing one fails, the next will be tried. To give an example of how breaking down a command into multiple synopsis can be helpful, we'll setup the syntax for a hypothetical 'config' command that allows listing, setting and clearing of some system configurations.
First, our synopsis...
config
    Lists all known configuration options and their values
config -l
And our syntax...
<syntax alias="config">
    <empty />
    <option argLabel="list" shortName="l">
    <sequence>
        <option argLabel="set" shortName="s">
        <argument argLabel="value">
    </sequence>
    <option argLabel="clear" shortName="c">
</syntax>

To be continued...
The cli project contains a few utility classes to make implementation of common features across multiple commands easier. Because it is recommended that these classes be used when possible, they are quite well documented, and provide fairly specific information on their behavior, and how to use them. A brief outline will be provided here, along with links to the actual javadoc page.
ADW is _the_ tool for doing recursive directory searches. It provides a Visitor pattern interface, with a set of specific callbacks for the implementor to use. It has many options for controlling what it returns, and with the right configuration, can be made to do very specific searching.
Control
The walker is mainly controlled by FileFilter instances. Multiple filters can be supplied, providing an implied '&&' between each filter. If any of the filters reject the file, then the extending class will not be asked to handle the file. This can be used to create very precise searches by combining multiple boolean filters with specific filter types.
The walker also provides the ability to filter files and directories based on a depth. When the minimum depth is set, files and directories below a given level will not be handled. The directories that are passed to walk() are considered to be at level 0. Therefore setting a min-depth of 0 will not pass those directories to the callbacks. When the maximum depth is set, directories that are at the maximum depth level will not be recursed into. They will however still be passed to the callbacks, pending acceptance by the filter set. Therefore setting a value of 0 to the max level may return the initial directories supplied to walk(), but it will not recurse into them.
Note: Boolean filters are not yet implemented, but they are on the short list.
Extending the walker
Although you can extend the walker in a class of its own, the recommended design pattern is to implement the walker as a non-static inner class, or an anonymous inner class. This design gives the implemented callbacks of the walker access to the internal structure of the command it's used in. When the walker runs it will pass accepted files and directories to the appropriate callback methods. The walker also has callbacks for specific events, including the beginning and end of a walk, as well as when a SecurityException is encountered when attempting to access a file or directory.
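Here is a sketch of that pattern. The walker's class and method names (AbstractDirectoryWalker, handleFile, handleDir, addFilter, setMaxDepth, walk) and the getPrintWriter() accessor are assumptions made for illustration; check the cli project's javadoc for the real signatures.

import java.io.File;
import java.io.FileFilter;
import java.io.PrintWriter;

import org.jnode.shell.AbstractCommand;

public class FindLikeCommand extends AbstractCommand {

    public void execute() throws Exception {
        final PrintWriter out = getOutput().getPrintWriter();  // assumed accessor

        // Anonymous inner class: the callbacks can use 'out' and any of the
        // command's registered arguments directly.
        AbstractDirectoryWalker walker = new AbstractDirectoryWalker() {
            protected void handleFile(File file) {   // assumed callback name
                out.println(file);
            }
            protected void handleDir(File dir) {     // assumed callback name
                out.println(dir + File.separator);
            }
        };

        walker.addFilter(new FileFilter() {          // filters are ANDed together
            public boolean accept(File f) {
                return f.canRead();
            }
        });
        walker.setMaxDepth(5);                       // do not recurse below level 5
        walker.walk(new File("."));                  // the directory given here is level 0
    }
}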
Debugging code running on the JNode platform is no easy task. The platform currently has none of the nice debugging support that you normally find on a mature Java platform; no breakpoints, no looking at object fields, stack frames, etc.
Instead, we typically have to resort to sending messages to the system Logger, adding trace-print statements and calling 'dumpStack' and 'printStackTrace'. Here are some other pointers:
There is also a simple debugger that can be used in textmode to display threads and their stacktraces. Press Alt-SysRq to enter the debugger and another Alt-SysRq to exit the debugger. Inside the debugger, press 'h' for usage information.
Note: the Alt-SysRq debugger isn't working at the moment: see this issue.
A very simple kernel debugger has been added to the JNode nano-kernel. This debugger is able to send all data outputted to the console (using Unsafe.debug) to another computer via a null-modem cable connected to COM1.
From the other computer you can give simple commands to the debugger, such as dump the processor thread queues and print the current thread.
The kernel debugger can be enabled by adding " kdb" to the grub kernel command line, or by activating it in JNode using a newly added command: "kdb".
Ewout
The remoteout command allows you to send a copy of console output and logger output to a remote TCP or UDP receiver. This allows you to capture console output for bug reports, and in the cases where JNode is crashing.
Before you run the command, you need to set up a receiver application on the remote host to accept and record the output. More details (including a brief note on the JNode RemoteReceiver application) may be found in the remoteout command page. Please read the Bugs section as well!
This part contains the technical documentation of the JNode Operating System.
During the boot process of JNode, the kernel image is loaded by Grub and booted. After the bootstrapper code, we're running plain java code. The first code executed is in org.jnode.boot.Main#vmMain(), which initializes the JVM and starts the plugin system.
The basic device driver design involves 3 components:
There is a DeviceManager where all devices are registered. It delegates to DeviceToDriverMapper instances to find a suitable driver for a given device. Instances of this mapper interface use e.g. the PCI id of a device (in case of PCIDevice) to find a suitable driver. This is configurable via a configuration file.
For a device to operate there are the following resources available:
The filesystem support in JNode is split up into a generic part and a filesystem specific part. The role of the generic part is:
The role of the filesystem specific part is:
We should be more specific about what a filesystem is. JNode makes a distinction between a FileSystemType and a FileSystem. A FileSystemType has a name, can detect filesystems of its own type on a device and can create FileSystem instances for a specific device (usually a disk). A FileSystem implements storing/retrieving files and directories.
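As a rough sketch of that division of labour (the interface and method names below are invented, not the real org.jnode.fs API):

import java.io.IOException;

// Invented names that mirror the roles described above.
interface FileSystemTypeSketch {
    /** The type's name, e.g. "FAT16", "EXT2", "ISO9660". */
    String getName();

    /** Detect whether a device (usually a disk or partition) holds a filesystem of this type. */
    boolean supports(byte[] firstSector);

    /** Create a FileSystem instance bound to the given device. */
    FileSystemSketch create(Object device, boolean readOnly) throws IOException;
}

interface FileSystemSketch {
    /** Root directory entry: the starting point for storing/retrieving files and directories. */
    Object getRootEntry() throws IOException;

    void close() throws IOException;
}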
To access files in JNode, use the regular classes in the java.io package. They are connected to the JNode filesystem implementation. A direct connection to the filesystem implementation is not allowed.
This chapter details the FrameBuffer device design and the interfaces involved in the design.
All framebuffer devices must implement this API.
TODO write me.
TODO write me.
TODO write me.
TODO write me.
This chapter details the design of network devices and describe the interfaces involved.
Every network device must implement this API.
The API contains methods to get the hardware address of the device, send data through the device and get/set protocol address information.
When a network device receives data, it must deliver that data to the NetworkLayerManager. The AbstractNetworkDriver class (which is usually the base class for all network drivers) contains a helper method (onReceive) for this purpose.
This chapter will detail the interfaces involved in the network protocol layer.
This interface must be implemented by all network protocol handlers.
This interface must be implemented by OSI transport layer protocols.
This interface must be implemented by OSI link layer protocols.
To register a network layer, the network layer class must be specified in an extension of the "org.jnode.net.networkLayers" extension point.
This is usually done in the descriptor of the plugin that holds the network layer.
This chapter contains the specific technical operating system details about the various architectures that JNode is operating on.
The X86 architecture targets the Intel IA32 architecture implemented by the Intel Pentium (and up) processors and the AMD Athlon/Duron (etc.) processors.
This architecture uses a physical memory layout as given in the picture below.
In the classical Java world, a command line application is launched by calling the "main" entry point method on a nominated class, passing the user's command arguments as an array of Strings. The command is responsible for working out which arguments represent options, which represent parameters and so on. While there are (non-Sun) libraries to help with this task (like the Java version of GNU getOpts), they are rather primitive.
In JNode, we take a more sophisticated approach to the issue of command arguments. A native JNode command specifies its formal arguments and command line syntax. The task of matching actual command line arguments is performed by JNode library classes. This approach offers a number of advantages over the classical Java approach:
In addition, this approach allows us to do some things at the Shell level that are difficult with (for example) UNIX style shells.
As the above suggests, there are two versions of JNode command syntax and associated mechanisms; i.e. parsing, completion, help and so on. In the first version (the "old" mechanisms) the application class declares a static Argument object for each formal parameter, and creates a static "Help.Info" data structure containing Syntax objects that reference the Arguments. The command line parser and completer traverse the data structures, binding values to the Arguments.
The problems with the "old" mechanisms include:
The second version (the "new" mechanisms) are a ground-up redesign and reimplementation:
(This example is based on material provided by gchii)
The cat command is a JNode file system command for the concatenation of files.
The alternative command line syntaxes for the command are as follows:
cat
cat -u | -urls <url> ...
cat <file> ...
The simplest use of cat is to copy a file to standard output, displaying the contents of the file; for example:
cat d.txt
The following example displays a.txt, followed by b.txt and then c.txt.
cat a.txt b.txt c.txt
The following example concatenates a.txt, b.txt and c.txt, writing the resulting file to d.txt.
cat a.txt b.txt c.txt > d.txt
In fact, the > output redirection in the example above is performed by the command shell and interpreter, and the "> d.txt" arguments are removed before the command arguments are processed. As far as the command class is concerned, this is equivalent to the previous example.
Finally, the following example displays the raw HTML for the JNode home page:
cat --urls http://www.jnode.org/
Syntax specification
The syntax for the cat command is defined in fs/descriptors/org.jnode.fs.command.xml.
The relevant section of the document is as follows:
39 <extension point="org.jnode.shell.syntaxes">
40   <syntax alias="cat">
41     <empty description="copy standard input to standard output"/>
42     <sequence description="fetch and concatenate urls to standard output">
43       <option argLabel="urls" shortName="u" longName="urls"/>
44       <repeat minCount="1">
45         <argument argLabel="url"/>
46       </repeat>
47     </sequence>
48     <repeat minCount="1" description="concatenate files to standard output">
49       <argument argLabel="file"/>
50     </repeat>
51   </syntax>
Line 39: "org.jnode.shell.syntaxes" is an extension point for command syntax.
Line 40: The syntax entity represents the entire syntax for a command. The alias attribute is required and associates a syntax with a command.
Line 41: When parsing a command line, the empty tag does not consume arguments. This is a description of the cat command.
Line 42: A sequence tag represents a group of options and arguments, and others.
Line 43: An option tag is a command line option, such as -u and --urls. Since -u and --urls are actually one and the same option, the argLabel attribute identifies the option internally.
Line 44: An option might be used more than once on a command line. When minCount is one or more, an option is required.
Line 45: An argument tag consumes one command line argument.
Line 48: When minCount is 1, an option is required.
Line 49: An argument tag consumes one command line argument.
The cat command is implemented in CatCommand.java. The salient parts of the command's implementation are as follows.
54     private final FileArgument ARG_FILE =
55         new FileArgument("file", Argument.OPTIONAL | Argument.MULTIPLE,
56             "the files to be concatenated");
This declares a formal argument to capture JNode file/directory pathnames from the command line; see the specification of the org.jnode.shell.syntax.FileArgument. The "Argument.OPTIONAL | Argument.MULTIPLE" parameter gives the argument flags. Argument.OPTIONAL means that this argument may be optional in the syntax. The Argument.MULTIPLE means that the argument may be repeated in the syntax. Finally, the "file" label matches the "file" attribute in the XML above at line 49.
58     private final URLArgument ARG_URL =
59         new URLArgument("url", Argument.OPTIONAL | Argument.MULTIPLE,
60             "the urls to be concatenated");
This declares a formal argument to capture URLs from the command line. This matches the "url" attribute in the XML above at line 45.
62     private final FlagArgument FLAG_URLS =
63         new FlagArgument("urls", Argument.OPTIONAL, "If set, arguments will be urls");
This declares a formal flag that matches the "urls" attribute in the XML above at line 43.
67     public CatCommand() {
68         super("Concatenate the contents of files, urls or standard input to standard output");
69         registerArguments(ARG_FILE, ARG_URL, FLAG_URLS);
70     }
The constructor for the CatCommand registers the three formal arguments, ARG_FILE, ARG_URL and FLAG_URLS. The registerArguments() method is implemented in AbstractCommand.java. It simply adds the formal arguments to the command's ArgumentBundle, making them available to the syntax mechanism.
79     public void execute() throws IOException {
80         this.err = getError().getPrintWriter();
81         OutputStream out = getOutput().getOutputStream();
82         File[] files = ARG_FILE.getValues();
83         URL[] urls = ARG_URL.getValues();
84
85         boolean ok = true;
86         if (urls != null && urls.length > 0) {
87             for (URL url : urls) {
...
107         } else if (files != null && files.length > 0) {
108             for (File file : files) {
...
127         } else {
128             process(getInput().getInputStream(), out);
129         }
130         out.flush();
131         if (!ok) {
132             exit(1);
133         }
134     }
The "execute" method is called after the syntax processing has occurred, and after the command argument values have been converted to the relevant Java types and bound to the formals. As the code above shows, the method uses a method on the formal argument to retrieve the actual values. Other methods implemented by AbstractCommand allow the "execute" to access the command's standard input, output and error streams as Stream objects or Reader/Writer objects, and to set the command's return code.
Note: ideally the syntax of the JNode cat command should include this alternative:
cat ( ( -u | -urls <url> ) | <file> ) ...
or even this:
cat ( <url> | <file> ) ...
allowing <file> and <url> arguments to be interspersed. The problem with the first alternative syntax above is that the Argument objects do not allow the syntax to capture the complete order of the interspersed <file> and <url> arguments. In order to support this, we would need to replace ARG_FILE and ARG_URL with a suitably defined ARG_FILE_OR_URL. The problem with the second alternative syntax above is some legal <url> values are also legal <file> values, and the syntax does not allow the user to control the disambiguation.
For more information, see also org.jnode.fs.command.xml - http://jnode.svn.sourceforge.net/viewvc/jnode/trunk/fs/descriptors/org.j... .
CatCommand.java - http://jnode.svn.sourceforge.net/viewvc/jnode/trunk/fs/src/fs/org/jnode/...
Here are some ideas for work to be done in this area:
This page is an overview of the JNode APIs that are involved in the new syntax mechanisms. For more nitty-gritty details, please refer to the relevant javadocs.
Note:
Java package structure
The following classes mostly reside in the "org.jnode.shell.syntax" package. The exceptions are "Command" and "AbstractCommand" which live in "org.jnode.shell". (Similarly named classes in the "org.jnode.shell.help" and "org.jnode.shell.help.args" packages are part of the old-style syntax support.)
Command
The JNode command shell (or more accurately, the command invokers) understand two entry points for launching classes as "commands". The first entry point is the "public static void main(String[])" entry point used by classic Java command line applications. When a command class has (just) a "main" method, the shell will launch it by calling the method, passing the command arguments. What happens next is up to the command class:
The preferred entry point for a JNode command class is the "Command.execute(CommandLine, InputStream, PrintStream, PrintStream)" method. On the face of it, this entry point offers a number of advantages over the "main" entry point:
Unless you are using the "default" command invoker, a command class with an "execute" entry point will be invoked via that entry point, even if it also has a "main" entry point. What happens next is up to the command class:
AbstractCommand
The AbstractCommand class is a base class for JNode-aware command classes. For command classes that do their own argument processing, or that use the old-style syntax mechanisms, use of this class is optional. For commands that want to use the new-style syntax mechanisms, the command class must be a direct or indirect subclass of AbstractCommand.
The AbstractCommand class provides helper methods useful to all command classes.
The "getCommandLine" method returns a CommandLine instance that holds the command's command name and unparsed arguments.
But more importantly, the AbstractCommand class provides infrastructure that is key to the new-style syntax mechanism. Specifically, the AbstractCommand maintains an ArgumentBundle for each command instance. The ArgumentBundle is created when either of the following happens:
If it was created, the ArgumentBundle is populated with argument values before the "execute" method is called. The existence of an ArgumentBundle determines whether the shell uses old-style or new-style syntax, for command execution and completion. (Don't try to mix the two mechanisms: it is liable to lead to inconsistent command behavior.)
Finally, the AbstractCommand class provides an "execute(String[])" method. This is intended to provide a bridge between the "main" and "execute" entry points for situations where a JNode-aware command class has to be executed via the former entry point. The "main" method should be implemented as follows:
public static void main(String[] args) throws Exception {
    new XxxClass().execute(args);
}
CommandIO and its implementation classes
The CommandIO interfaces and its implementation classes allow commands to obtain "standard io" streams without knowing whether the underlying data streams are byte or character oriented. This API also manages the creation of 'print' wrappers.
Argument and sub-classes
The Argument classes play a central role in the new syntax mechanism. As we have seen above, a command class creates Argument instances to act as value holders for its formal arguments, and adds them to its ArgumentBundle. When the argument parser is invoked, it traverses the command syntax and binds values to the Arguments in the bundle. When the command's "execute" entry point is called, it can access the values bound to the Arguments.
The most important methods in the Argument API are as follows:
The constructors for the descendent classes of Argument provide the following common parameters:
The descendent classes of Argument correspond to different kinds of argument. For example:
There are two abstract sub-classes of Argument:
Please refer to the javadoc for an up-to-date list of the Argument classes.
Syntax and sub-classes
As we have seen above, Argument instances are used to specify the command class's argument requirements. These Arguments correspond to nodes in one or more syntaxes for the command. These syntaxes are represented in memory by the Syntax classes.
A typical command class does not see Syntax objects. They are typically created by loading XML (as specified here), and are used by various components of the shell. As such, the APIs need not concern the application developer.
ArgumentBundle
This class is largely internal, and a JNode application programmer doesn't need to access it directly. Its purpose is to act as the container for the new-style Argument instances that belong to a command class instance.
MuSyntax and sub-classes
The MuSyntax class and its subclasses represent the BNF-like syntax graphs that the command argument parser actually operate on. These graphs are created by the "prepare" method of new-style Syntax objects, in two stages. The first stage is to build a tree of MuSyntax objects, using symbolic references to represent cycles. The second stage is to traverse the tree, replacing the symbolic references with their referents.
There are currently 6 kinds of MuSyntax node:
MuParser
The MuParser class does the real work of command line parsing. The "parse" method takes input parameters that provide a MuSyntax graph, a TokenSource and some control parameters.
The parser maintains three stacks:
In normal parsing mode, the "parse" method matches tokens until either the parse is complete, or an error occurs. The parse is complete if the parser reaches the end of the token stream and discovers that the syntax stack is also empty. The "parse" method then returns, leaving the Arguments bound to the relevant source tokens. The error case occurs when a MuSyntax does not match the current token, or the parser reaches the end of the TokenSource when there are still unmatched MuSyntaxes on the syntax stack. In this case, the parser backtracks to the last "choicepoint" and then resumes parsing with the next alternative. If no choicepoints are left, the parse fails.
In completion mode, the "parse" method behaves differently when it encounters the end of the TokenSource. The first thing it does is to attempt to capture a completion; e.g. by calling the current Argument's "complete(...)" method. Then it starts backtracking to find more completions. As a result, a completion parse may do a lot more work than a normal parse.
The astute reader may be wondering what happens if the "MuParser.parse" method is applied to a pathological MuSyntax; e.g. one which loops for ever, or that requires exponential backtracking. The answer is that the "parse" method has a "stepLimit" parameter that places an upper limit on the number of main loop iterations that the parser will perform. This indirectly addresses the issue of space usage as well, though we could probably improve on this. (In theory, we could analyse the MuSyntax for common pathologies, but this would degrade parser performance for non-pathological MuSyntaxes. Besides, we are not (currently) allowing applications to supply MuSyntax graphs directly, so all we really need to do is ensure that the Syntax classes generate well-behaved MuSyntax graphs.)
As the parent page describes, the command syntax "picture" has two distinct parts. A command class registers Argument objects with the infrastructure to specify its formal command parameters. The concrete syntax for the command line is represented in memory by Syntax objects.
This page documents the syntactic constructs provided by the Syntax objects, and the XML syntax that provides the normal way of specifying a syntax.
You will notice that there can be a number of ways to build a command syntax from the constructs provided. This redundancy is intentional.
The Syntax base class
The Syntax class is the abstract base class for all classes that represent high-level syntactic constructs in the "new" syntax mechanisms. A Syntax object has two (optional) attributes that are relevant to the process of specifying syntax:
These attributes are represented in an XML syntax element using optional XML attributes named "label" and "description" respectively.
ArgumentSyntax
An ArgumentSyntax captures one value for an Argument with a given argument label. Specifically, an ArgumentSyntax instance will cause the parser to consume one token, and to attempt to bind it to the Argument with the specified argument label in the current ArgumentBundle.
Note that many Arguments are very non-selective in the tokens that they will match. For example, while an IntegerArgument will accept "123" as valid, so will FileArgument and many other Argument classes. It is therefore important to take into account the parser's handling of ambiguity when designing command syntaxes; see below.
Here are some ArgumentSyntax instances, as specified in XML:
<argument argLabel="foo">
<argument label="foo" description="this controls the command's fooing" argLabel="foo">
EmptySyntax
An EmptySyntax matches absolutely nothing. It is typically used when a command requires no arguments.
<empty description="dir with no arguments lists the current directory">
OptionSyntax
An OptionSyntax also captures a value for an Argument, but it requires the value token to be preceded by a token that gives an option "name". The OptionSyntax class supports both short option names (e.g. "-f filename") and long option names (e.g. "--file filename"), depending on the constructor parameters.
<option argLabel="filename" shortName="f">
<option argLabel="filename" longName="file">
<option argLabel="filename" shortName="f" longName="file">
If the Argument denoted by the "argLabel" is a FlagArgument, the OptionSyntax matches just an option name (short or long depending on the attributes).
SymbolSyntax
A SymbolSyntax matches a single token from the command line without capturing any Argument value.
<symbol symbol="subcommand1">
VerbSyntax
A VerbSyntax matches a single token from the command line, setting an associated Argument's value to "true".
<verb symbol="subcommand1" argLabel="someArg">
SequenceSyntax
A SequenceSyntax matches a list of child Syntaxes in the order specified.
<sequence description="the input and output files">
    <argument argLabel="input"/>
    <argument argLabel="output"/>
</sequence>
AlternativesSyntax
An AlternativesSyntax matches one of a list of alternative child Syntaxes. The child syntaxes are tried one at a time in the order specified until one is found that matches the tokens.
<alternatives description="specify an input or output file">
    <option shortName="i" argLabel="input"/>
    <option shortName="o" argLabel="output"/>
</alternatives>
RepeatSyntax
A RepeatSyntax matches a single child Syntax repeated a number of times. By default, any number of matches (including zero) will satisfy a RepeatSyntax. The number of required and allowed repetitions can be constrained using the "minCount" and "maxCount" attributes. The default behavior is to match lazily; i.e. to match as few instances of the child syntax as is possible. Setting the attribute eager="true" causes the repeat to match as many child instances as possible, within the constraints of the "minCount" and "maxCount" attributes.
<repeat description="zero or more files">
    <argument argLabel="file"/>
</repeat>
<repeat minCount="1" description="one or more files">
    <argument argLabel="file"/>
</repeat>
<repeat maxCount="5" eager="true" description="as many files as possible, up to 5">
    <argument argLabel="file"/>
</repeat>
OptionalSyntax
An OptionalSyntax optionally matches a sequence of child Syntaxes; i.e. it matches nothing or the sequence. The default behavior is to match lazily; i.e. to try the "nothing" case first. Setting the attribute eager="true" causes the "nothing" case to be tried second.
<optional description="nothing, or an input file and an output file">
    <argument argLabel="input"/>
    <argument argLabel="output"/>
</optional>
<optional eager="true" description="an input file and an output file, or nothing">
    <argument argLabel="input"/>
    <argument argLabel="output"/>
</optional>
PowerSetSyntax
A PowerSetSyntax takes a list of child Syntaxes and matches any number of each of them in any order or any interleaving. The default behavior is to match lazily; i.e. to match as few instances of the child syntax as is possible. Setting the attribute eager="true" causes the powerset to match as many child instances as possible.
<powerSet description="any number of inputs and outputs">
    <option argLabel="input" shortName="i"/>
    <option argLabel="output" shortName="o"/>
</powerSet>
OptionSetSyntax
An OptionSetSyntax is like a PowerSetSyntax with the restriction that the child syntaxes must all be OptionSyntax instances. But what makes OptionSetSyntax different is that it allows options for FlagArguments to be combined in the classic Unix idiom; i.e. "-a -b" can be written as "-ab".
<optionSet description="flags and value options">
    <option argLabel="flagOne" shortName="1"/>
    <option argLabel="flagTwo" shortName="2"/>
    <option argLabel="argThree" shortName="3"/>
</optionSet>
Assuming that the "flagOne" and "flagTwo" correspond to FlagArguments, and "argThree" corresponds to (say) a FileArgument, the above syntax will match any of the following: "-1 -2 -3 three", "-12 -3 three", "-1 -3 three -1", "-3 three" or even an empty argument list.
The <syntax ... > element
The outermost element of an XML Syntax specification is the <syntax> element. This element has a mandatory "alias" attribute which associates the syntax with an alias that is in force for the shell. The actual syntax is given by the <syntax> element's zero or more child elements. These must be XML elements representing Syntax sub-class instances, as described above. Conceptually, each of the child elements represents an alternative syntax for the command denoted by the alias.
Here are some examples of complete syntaxes:
<syntax alias="cpuid">
    <empty description="output the computer's id">
</syntax>
<syntax alias="dir">
    <empty description="list the current directory"/>
    <argument argLabel="directory" description="list the given directory"/>
</syntax>
Ambiguous Syntax specifications
If you have implemented a language grammar using a parser generator (like Yacc, Bison, AntLR and so on), you will recall how the parser generator could be very picky about your input grammar. For example, these tools will often complain about "shift-reduce" or "reduce-reduce" conflicts. This is a parser generator's way of saying that the grammar appears (to it) to be ambiguous.
The new-style command syntax parser takes a different approach. Basically, it does not care if a command syntax supports multiple interpretations of a command line. Instead, it uses a simple runtime strategy to resolve ambiguity: the first complete parse "wins".
Since the syntax mechanisms don't detect ambiguity, it is up to the syntax designer to be aware of the issue, and take it into account when designing the syntax. Here is an example:
<alternatives>
    <argument argLabel="number">
    <argument argLabel="file">
</alternatives>
Assuming that "number" refers to an IntegerArgument, and "file" refers to a FileArgument, the syntax above is actually ambiguous. For example, a parser could in theory bind "123" to the IntegerArgument or the FileArgument. In practice, the new-style command argument parser will pick the first alternative that gives a complete parse, and bind "123" to the IntegerArgument. If you (the syntax designer) don't want this (e.g. because you want the command to work for all legal filenames), you will need to use OptionSyntax or TokenSyntax or something else to allow the end user to force a particular interpretation.
SyntaxSpecLoader and friends
More about the Syntax base class.
If you are planning on defining new sub-classes of Syntax, the two key behavioral methods that must be implemented are as follows:
Note: this page describes the old syntax mechanism which is currently being phased out. Please refer to the parent menu for the pages on the new syntax mechanism.
The JNode Command Line Completion is one of the central aspects of the shell. JNode makes use of a sophisticated object model to declare command line arguments. This also provides for a standard way to extract a help-document that can be viewed by the user in different ways. Additionally, the very same object model can be used to access the arguments in a convenient manner, instead of doing the 133735th command line parsing implementation in computer history.
The following terms play an important role in this architecture:
The command used in this document is a ZIP-like command. I will call it sip. It provides for a variety of different parameter types and syntaxes.
The sip command, in this example, will have the following syntaxes:
sip -c [-password <password>] [-r] <sipfile> [<file> ...]
sip -x [-password <password>] <sipfile> [<file> ...]
Named Parameters:
Arguments:
Let's set some preconditions, which will be of importance in the following chapters.
Therefore, the first lines of our Command class look like this:
package org.jnode.tools.command;
import org.jnode.tools.sip.*;
import org.jnode.shell.Command;
import org.jnode.shell.CommandLine;
import org.jnode.shell.help.*;
public class SipCommand implements Command{
After importing the necessary packages, let's dive into the declaration of the Arguments. This is almost necessarily the first step when you want to reuse arguments. Good practice is to always follow this pattern, so you don't have to completely rework the declaration later. In short, we will work through the above definition from the bottom up.
You will note that all Arguments, Parameters and Syntaxes will be declared as static. This is needed because of the inner workings of the Command Line Completion, which has to have access to a static HELP_INFO field providing all necessary information.
static StringArgument ARG_PASSWORD = new StringArgument("password", "the password for the sipfile");
static FileArgument ARG_SIPFILE = new FileArgument("sipfile", "the sipfile to perform the operation on");
static FileArgument ARG_FILE = new FileArgument("file", "if given, only includes the files in this list", Argument.MULTI);
Now we can declare the Parameters, beginning with the ones taking no Arguments.
Note: all Parameters are optional by default!
// Those two are mandatory, as we will define the two distinct syntaxes given above
static Parameter PARAM_COMPRESS = new Parameter(
"c", "compress directory contents to a sipfile", Parameter.MANDATORY);
static Parameter PARAM_EXTRACT = new Parameter(
"x", "extract a sipfile", Parameter.MANDATORY);
static Parameter PARAM_RECURSE = new Parameter(
"r", "recurse through subdirectories");
static Parameter PARAM_PASSWORD = new Parameter(
"password", "use a password to en-/decrypt the file", ARG_PASSWORD);
// here come our two anonymous Parameters used to pass the files
static Parameter PARAM_SIPFILE = new Parameter(
ARG_SIPFILE, Parameter.MANDATORY);
static Parameter PARAM_FILE = new Parameter(
ARG_FILE);
Wait!
There is something special about the second Syntax, the extract one. The command line completion for this one will fail, as it will try to suggest files that are in the current directory, not in the sipfile we want to extract from. We will need a special type of Argument to provide a convenient completion, along with an extra Parameter which uses it.
Whenever you add some new functionality to JNode, please consider implementing some test code to exercise it.
Your options include:
We have a long term goal to be able to run all tests automatically on the new test server. New tests should be written with this in mind.
Overview
This page gives some guidelines for specifying "black-box tests" to be run using the TestHarness class; see "Running black-box tests".
A typical black-box test runs a JNode command or script with specified inputs, and tests that its outputs match the outputs set down by the test specification. Example test specifications may be found in the "Shell" and "CLI" projects in the respective "src/test" trees; look for files named "*-tests.xml".
Syntax for test specifications
Let's start with a simple example. This test runs the "ExprCommand" command class with the arguments "1 + 1", and checks that it writes "2" to standard output and sets the return code to "0".
<testSpec title="expr 1 + 1"
          command="org.jnode.shell.command.posix.ExprCommand"
          runMode="AS_ALIAS" rc="0">
    <arg>1</arg>
    <arg>+</arg>
    <arg>1</arg>
    <output>2 </output>
</testSpec>
Notes:
An "testSpec" element and its nested elements specifies a single test. The elements and attributes are as follows:
Syntax for "file" elements
A "file" element specifies an input or output file for a test. The attributes and content
are as follows:
Script expansion
Before a script is executed, it is written to a temporary directory. Any @TEMP_DIR@ sequence in the script will be replaced with the name of the directory where input files are created and where output files are expected to appear.
Syntax for test sets
While the test harness can handle XML files containing a single <testSpec> element, it is more convenient to assemble multiple tests into a test set. Here is a simple example:
<testSet title="expr tests">
    <include setName="../somewhere/more-tests.xml"/>
    <testSpec title="expr 1 + 1" ...>
        ...
    </testSpec>
    <testSpec title="expr 2 * 2" ...>
        ...
    </testSpec>
</testSet>
The "include" element declares that the tests in another test set should be run as part of this one. If the "setName" is relative, it will be resolved relative to this testSet's parent directory. The "testSpec" elements specify tests that are part of this test set.
Plugin handling
As a general rule, JNode command classes and aliases are defined in plugins. When the test harness is run, it needs to know which plugins need to be loaded, or the equivalent if we are running on the development platform. This is done using "plugin" elements; for example:
...
<plugin id="org.jnode.shell.bjorne"
        class="org.jnode.test.shell.bjorne.BjornePseudoPlugin"/>
<plugin id="org.jnode.shell.command.posix"/>
...
These elements may be child elements of both "testSpec" and "testSet" elements. A given plugin may be specified in more than one place, though if a plugin is specified differently in different places, the results are undefined. If a "plugin" element is in a "testSpec", the plugin will be loaded before the test is run. If a "plugin" element is in a "testSet", the plugin will be loaded before any test in the set, as well as any tests that are "included".
The "plugin" element has the following attributes:
When the test harness is run on JNode, a "plugin" element causes the relevant Plugin to be loaded via the JNode plugin manager, using the supplied plugin id and the supplied (or default) version string.
When the test harness is run outside of JNode, the Emu is used to provide a minimal set of services. Currently, this does not include a plugin manager, so JNode plugins cannot be loaded in the normal way. Instead, a "plugin" element triggers the following:
This part contains the technical documentation of the JNode virtual machine.
Arrays are allocated just like normal java objects. The number of elements of the array is stored as the first (int) instance variable of the object. The actual data is located just after this length field.
Bytecodes that work on arrays do index checking. E.g. on the X86 this is implemented using the bound instruction.
Each class is represented by an internal structure of class information, method information and field information. All this information is stored in normal java objects.
Every object is located somewhere in the heap. It consists of an object header and space for instance variables.
At the start of each method invocation a frame for that method is created on the stack. This frame contains references to the calling frame and contains a magic number that is used to differentiate between compiled code invocations and interpreted invocations.
When an exception is thrown, the exception table of the current method is inspected. When an exception handler is found, the calculation stack is cleaned and code execution continues at the handler address.
When no suitable exception handler is found in the current method, the stackframe of the current method is destroyed and the process continues at the calling method.
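The dispatch loop can be pictured roughly as follows; the types below are invented for the sketch, since the real code works directly on raw stack frames rather than Java objects.

// Invented types: the real VM walks raw frames, not Java objects.
final class DispatchSketch {
    static final class Entry { int from, to, handlerPc; Class<?> type; }
    static final class Frame { Frame caller; int pc; Entry[] exceptionTable; }

    /** Walk frames from the current one outward, looking for a matching handler. */
    static Entry findHandler(Frame frame, Throwable ex) {
        for (; frame != null; frame = frame.caller) {        // unwind one frame per iteration
            for (Entry e : frame.exceptionTable) {
                if (frame.pc >= e.from && frame.pc < e.to && e.type.isInstance(ex)) {
                    // The frame's calculation stack is cleaned here and execution
                    // continues at e.handlerPc in this frame.
                    return e;
                }
            }
        }
        return null;   // no handler found anywhere on the stack
    }
}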
The stacktrace of an exception is created from the frames of each method invocation. A class called VmStackFrame has exactly the same layout as a frame on the stack and is used to enumerate all method invocations.
JNode uses a simple mark & sweep collector. You can read about the differences, the terms used and some general implementation details at Wikipedia. In these terms, JNode uses a non-moving, stop-the-world, conservative garbage collector.
About the JNode memory manager you should know the following: There is org.jnode.vm.memmgr.def.VmBootHeap. This class manages all objects that got allocated during the bootimage creation. VmDefaultHeap contains objects allocated during runtime. Each object on the heap has a header that contains some extra information about the object. There's information about the object's type, a reference to a monitor (if present) and the object's color (see Wikipedia). JNode objects can have one of 4 different colors plus one extra finalization bit. The values are defined in org.jnode.vm.classmgr.ObjectFlags.
At the beginning of a gc cycle all objects are either WHITE (i.e. not visited/newly allocated) or YELLOW (this object is awaiting finalization).
The main entry point for the gc is org.jnode.vm.memmgr.def.GCManager#gc(), which triggers the gc run. As you can see, one of the first things in gc() is a call to "helper.stopThreadsAtSafePoint();" which stops all threads except the garbage collector. The collection then is divided into 3 phases: markHeap, sweep and cleanup. The two optional verify calls at the beginning and end are used for debugging, to check that the heap is consistent.
The mark phase now has to mark all reachable objects. For that, JNode uses org.jnode.vm.memmgr.def.GCStack (the so-called mark stack), performing a breadth-first search (BFS) over the reference graph. At the beginning all roots get marked, where roots are all references in static variables or any reference on any thread's stack. Using the visitor pattern, the method org.jnode.vm.memmgr.def.GCMarkVisitor#visit gets called for each object. If the object is BLACK (the object was visited before and all its children got visited. Mind: this does not mean that the children's children got visited!) we simply return and continue with the next reference in the 'root set'. If the object is GREY (the object got visited before, but not all children) or in the 'root set', the object gets pushed on the mark stack and mark() gets called.
Let's make another step down and examine the mark() method. It first pops an object off the mark stack and tries to get the object type. For all children (either references in object arrays or fields of objects) processChild gets called and each WHITE (not visited yet) object gets modified to be GREY. After that, the object gets pushed on the mark stack. It is important to understand at this point that the mark stack might overflow! If that happens, the mark stack simply discards the object to push and remembers the overflow. Back at the mark method we know one thing for sure: all children of the current object are marked GREY (or even BLACK from a previous mark()) and this is even true if the mark stack had an overflow. After examining the object's Monitor and TIB it can be turned BLACK.
Back at GCManager#markHeap() we're either finished with marking the object or the mark stack had an overflow. In the case it had an overflow we have to repeat the mark phase. Since many objects are now already BLACK it is less likely that the stack will overflow again, but there's one important point to consider: all roots got marked BLACK, but as said above not all children's children need to be BLACK and might be GREY or even WHITE. That's why we have to walk all heaps too in the second iteration.
At the end of the mark phase all objects are either BLACK (reachable) or WHITE (not reachable) so the WHITE ones can be removed.
The sweep phase again walks the heap (this time without the 'root set', as it does not contain garbage by definition) and again visits each object via org.jnode.vm.memmgr.def.GCSweepVisitor#visit. As WHITE objects are not reachable anymore, it first tests if the object got previously finalized. If it was, it will be freed; if not and the object has a finalizer, it will be marked YELLOW (awaiting finalization); otherwise it will be freed too. If the object is neither WHITE nor YELLOW it will be marked WHITE for the next gc cycle.
The cleanup phase at the end sets all objects in the bootHeap to WHITE (as they will not be swept above) as they might be BLACK and afterwards calls defragment() for every heap.
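The color transitions described above can be summarised in the following sketch. The types and method names are invented for illustration; the real logic lives in GCManager, GCStack, GCMarkVisitor and GCSweepVisitor.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Invented types: a compact picture of the mark and sweep phases.
final class GcSketch {
    enum Color { WHITE, GREY, BLACK, YELLOW }

    static final class Obj {
        Color color = Color.WHITE;
        boolean hasFinalizer;
        List<Obj> refs = new ArrayList<>();   // references held in fields or array slots
    }

    /** Mark phase: everything reachable from the roots ends up BLACK. */
    static void mark(Iterable<Obj> rootSet) {
        Deque<Obj> markStack = new ArrayDeque<>();   // the real GCStack may overflow
        for (Obj root : rootSet) {
            if (root.color != Color.BLACK) {
                markStack.push(root);
            }
        }
        while (!markStack.isEmpty()) {
            Obj obj = markStack.pop();
            for (Obj child : obj.refs) {
                if (child.color == Color.WHITE) {
                    child.color = Color.GREY;        // visited, children not yet processed
                    markStack.push(child);
                }
            }
            obj.color = Color.BLACK;                 // all children are now at least GREY
        }
    }

    /** Sweep phase: WHITE objects are garbage, the rest are reset for the next cycle. */
    static void sweep(Iterable<Obj> heap) {
        for (Obj obj : heap) {
            if (obj.color == Color.WHITE) {
                if (obj.hasFinalizer) {
                    obj.color = Color.YELLOW;        // awaiting finalization
                } else {
                    free(obj);                       // unreachable and finalized (or no finalizer)
                }
            } else if (obj.color != Color.YELLOW) {
                obj.color = Color.WHITE;             // BLACK -> WHITE for the next gc cycle
            }
        }
    }

    private static void free(Obj obj) { /* return the space to the heap's free list */ }
}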
Some other thoughts regarding the JNode implementation include:
It should be also noted that JNode does not know about the stack's details. I.e. if the mark phase visits all objects of a Thread's stack it never knows for a value if it is a reference or a simple int,float,.. value. This is why the JNode garbage collector can be called conservative. Every value on the stack might be a reference pointing to a valid object. So even if it is a float on the stack, as we don't know for sure we have to visit the object and run a mark() cycle. This means on the one hand that we might mark memory as reachable that in reality is garbage on the other hand it means that we might point to YELLOW objects from the stack. As YELLOW objects are awaiting finalization (and except the case the finalizer will reactivate the object) they are garbage and so they can not be in the 'root set' (except the case where we have a random value on the stack that we incorrectly consider to be a reference). This is also the reason for the current "workaround" in GCMarkVisitor#visit() where YELLOW objects in the 'root set' trigger error messages instead of killing JNode.
There is some preliminary code for WriteBarrier support in JNode. This is a start towards making the gc concurrent. If the WriteBarrier is enabled at build time, the JNode JIT will include some special code in the compiled native code. For each bytecode that sets a reference in any field or local, the write barrier gets called and the object gets marked GREY. So the gc will know that the heap changed during the mark phase. It is very tricky to do all that with proper synchronization and the current code still has bugs, which is the reason why it's not activated yet.
This chapter covers the Java security implemented in JNode. This involves the security manager, access controller and privileged actions.
It does not involve user management.
The Java security in JNode is an implementation of the standard Java security API. This means that permissions are checked against an AccessControlContext which contains ProtectionDomains. See the Security Architecture for more information.
In JNode the security manager is always on. This ensures that permissions are always checked.
The security manager (or better the AccessController) executes the security policy implemented by JNodePolicy. This policy is an implementation of the standard java.security.Policy class.
This policy contains some static permissions (mainly for access to certain system properties) and handles dynamic (plugin) permissions.
The dynamic permissions are plugin based. Every plugin may request certain permissions. The Policy implementation decides if these permissions are granted to the plugin.
To request permissions for a plugin, add an extension to the plugin descriptor connected to the "org.jnode.security.permission" extension point.
This extension has the following structure:
class | The full classname of the permission. e.g. "java.util.PropertyPermission" |
name | The name of the permission. This attribute is permission class dependent. e.g. "os.name" |
actions | The actions of the permission. This attribute is permission class dependent. e.g. "read" |
Multiple permissions can be added to a single extension.
If you need specific permissions, make sure to run that code in a PrivilegedAction. Besides your own actions, the following standard PrivilegedActions are available:
gnu.java.security.actions.GetPropertyAction | Wraps System.getProperty |
gnu.java.security.actions.GetIntegerAction | Wraps Integer.getInteger |
gnu.java.security.actions.GetBooleanAction | Wraps Boolean.getBoolean |
gnu.java.security.actions.GetPolicyAction | Wraps Policy.getPolicy |
gnu.java.security.actions.InvokeAction | Wraps Method.invoke |
Multithreading in JNode involves the scheduling of multiple java.lang.Thread instances between 1 or more physical processors. (In reality, multiprocessor support is not yet stable). The current implementation uses the yieldpoint scheduling model as described below.
Yieldpoint scheduling
Yieldpoint scheduling means that every thread checks at certain points (called "yieldpoints") in the native code to see if it should let other threads run. The native code compiler adds yieldpoints into the native code stream at the beginning and end of a method, at backward jumps, and at method invocations. The yieldpoint code checks whether the "yield" flag has been set for the current thread, and if it has, it issues a yield (software) interrupt. The kernel takes over and schedules a new thread.
The "yield" flag can be set by a timer interrupt, or by the (kernel) software itself, e.g. to perform an explicit yield or in case of locking synchronization methods.
The scheduler invoked by the (native code) kernel is implemented in the VmProcessor class. This class (one instance for every processor) contains a list of threads ready to run, a list of sleeping threads and a current thread. On a reschedule, the current thread is appended to the end of the ready to run thread-list. Then the sleep list is inspected first for threads that should wake-up. These threads are added to the ready to run thread-list. After that the first thread in the ready to run thread-list is removed and used as current thread. The reschedule method returns and the (native code) kernel does the actual thread switching.
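A rough sketch of that reschedule step follows; the names are invented, and the real code lives in VmProcessor and runs in kernel context with a priority-sorted ready list.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Invented names; a simplified picture of the reschedule described above.
final class SchedulerSketch {
    static final class T { long wakeupTime; int priority; }

    private final Deque<T> ready = new ArrayDeque<>();    // kept sorted on priority in JNode
    private final Deque<T> sleeping = new ArrayDeque<>();
    private T current;

    T reschedule(long now) {
        if (current != null) {
            ready.addLast(current);                       // requeue the thread that was running
        }
        for (Iterator<T> it = sleeping.iterator(); it.hasNext();) {
            T t = it.next();
            if (t.wakeupTime <= now) {                    // sleeping thread is due to wake up
                it.remove();
                ready.addLast(t);
            }
        }
        current = ready.removeFirst();                    // head of the list runs next; a real VM
        return current;                                   // always has an idle thread available
    }
}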
The scheduler itself runs in the context of the kernel and should not be interrupted. A special flag is set to prevent yieldpoints in the scheduler methods themselves from triggering reentrant yieldpoint interrupts. The flag is only cleared when the reschedule is complete.
Why use yieldpoint scheduling?
JNode uses yield point scheduling to simplify the implementation of the garbage collector and to reduce the space needed to hold GC descriptors.
When the JNode garbage collector runs, it needs to find all "live" object references so that it can work out which objects are not garbage. A bit later, it needs to update any references for objects that have been moved in memory. Most object references live either in other objects in the heap, or in local variables and parameters held on one of the thread stacks. However, when a thread is interrupted, the contents of the hardware registers are saved in a "register save" area, and this may include object references.
The garbage collector is able to find these references because the native compiler creates descriptors giving the offsets of references. For each class, there is a descriptor giving the offsets of its reference attributes and statics in their respective frames. For each method or constructor, another descriptor gives the corresponding stack frame layout. But we still have to deal with the saved registers.
If we allowed a JNode thread to be interrupted at any point, the native compiler would need to create descriptors for all possible saved register sets. In theory, we might need a different descriptor corresponding to every bytecode. By using yield points, we can guarantee that "yields" only occur at fixed places, thereby reducing the number of descriptors that need to be kept.
However, the obvious downside of yieldpoints is the performance penalty of repeatedly testing the "yield" flag, especially when executing a tight loop.
Thread priorities
Threads can have different priorities, ranging from Thread.MIN_PRIORITY to Thread.MAX_PRIORITY. In JNode these priorities are implemented via the ready-to-run thread list. This list is (almost) always sorted on priority, which means that the threads with the highest priority come first.
There is one exception to this rule, which is the case of busy-waiting in the synchronization system. Claiming access to a monitor (internally) involves a busy-waiting loop with an explicit yield. This yield ignores the thread priority to avoid starvation of lower-priority threads, which would otherwise lead to an endless waiting time for the high-priority thread.
Classes involved
The following classes are involved in the scheduling system. All of these classes are in the org.jnode.vm package.
All methods are compiled before being executed. At first, the method is "compiled" to a stub that calls the most basic compiler and then invokes the compiled code.
Better compilers are invoked when the VM detects that a method is invoked often. These compilers perform more optimizations.
JNode now has two different native code compilers for the Intel X86 platform and one stub compiler.
STUB is a stub compiler that generates, for each method, a stub that invokes the L1 compiler for that method and then invokes the generated code itself. This compiler ensures that methods are compiled before being executed, but avoids compilation time when a method is not invoked at all.
L1A is a basic compiler that translates java bytecode directly to decent X86 instructions. This compiler uses register allocation and a virtual stack to eliminate many of the stack operations. The focus of this compiler is on fast compilation and reasonably fast generated code.
L2 is an optimizing compiler that focuses on generating very fast code, not on compilation speed. This compiler is currently under construction.
All X86 compilers can be found below the org.jnode.vm.x86.compiler package.
Optimizing compilers use an intermediate representation instead of java bytecodes. The intermediate representation (IR) is an abstract representation of machine operations which are eventually mapped to machine instructions for a particular processor. Many optimizations can be performed without concern for machine details, so the IR is a good start. Additional machine dependent optimizations can be performed at a later stage. In general, the most important optimizations are machine independent, whereas machine dependent optimizations will typically yield lesser gains in performance.
The IR is typically represented as a set of multiple-operand operations, usually called triples or quads in the literature. The L2 compiler defines an abstract class org.jnode.vm.compiler.ir.quad.Quad to describe an abstract operation. Many concrete implementations are defined, such as BinaryQuad, which represents binary operations, such as a = x + y. Note that the left hand side (lhs) of the operation is also part of the quad.
The set of Quads representing the bytecodes of a given method is prepared by org.jnode.vm.compiler.ir.IRGenerator.
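For illustration only, a binary quad for a statement like a = x + y can be pictured roughly as in the sketch below; the real org.jnode.vm.compiler.ir.quad classes are richer, and the field names here are assumptions.

class QuadSketch {
    // One IR operation of the form: lhs := op1 <operator> op2
    static class BinaryQuad {
        final String lhs, op1, op2;
        final char operator;
        BinaryQuad(String lhs, String op1, char operator, String op2) {
            this.lhs = lhs;
            this.op1 = op1;
            this.operator = operator;
            this.op2 = op2;
        }
        @Override
        public String toString() {
            return lhs + " := " + op1 + " " + operator + " " + op2;
        }
    }

    public static void main(String[] args) {
        // a = x + y becomes a single quad; the left hand side is part of it.
        System.out.println(new BinaryQuad("a", "x", '+', "y"));
    }
}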
The L2 compiler operates in four phases:
1. Generate intermediate representation (IR)
2. Perform second pass optimizations (pass2)
3. Register allocation
4. Generate native code
The first phase parses bytecodes and generates a set of Quads. This phase also performs simple optimizations, such as copy propagation and constant folding.
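A minimal sketch of the constant folding idea mentioned above (not the actual IRGenerator code): when both operands of a quad are known constants, the operation is evaluated at compile time and the quad is replaced by a constant assignment.

class ConstantFoldingSketch {
    // Folds "lhs := c1 + c2" into "lhs := <c1+c2>" when both operands are constants.
    // Only '+' is handled in this sketch; other cases are left unchanged.
    static String fold(String lhs, Integer op1, char operator, Integer op2) {
        if (op1 != null && op2 != null && operator == '+') {
            return lhs + " := " + (op1 + op2);                  // folded at compile time
        }
        return lhs + " := " + op1 + " " + operator + " " + op2; // left as a normal quad
    }

    public static void main(String[] args) {
        System.out.println(fold("a", 2, '+', 3)); // prints: a := 5
    }
}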
Pass2 simplifies operands and tries to eliminate dead code.
Register allocation is an attempt to assign live variable ranges to available machine registers. As register access is significantly faster than memory access, register allocation is an important optimization technique. In general, it is not always possible to assign all live variable ranges to machine registers. Variables that cannot be allocated to registers are said to be 'spilled' and must reside in memory.
Code is generated by iterating over the set of IR quads and producing machine instructions.
All 'new' statements used to allocate objects are forwarded to a HeapManager. This class allocates and initializes the object. Objects are allocated from one of several heaps; each heap contains objects of various sizes. Allocation is currently as simple as finding the next free space that is large enough to fit all instance variables of the new object and claiming it.
An object is blanked on allocation, so all instance variables are initialized to their default (null) values. Finally the object header is initialized, and the object is returned.
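A simplified, hedged sketch of this first-fit style of allocation follows; it is not the real HeapManager, and the free-list layout and names are assumptions.

import java.util.ArrayList;
import java.util.List;

class FirstFitHeapSketch {
    // A toy heap: a byte array plus a free list of [offset, size] holes.
    final byte[] heap = new byte[1 << 16];
    final List<int[]> freeList = new ArrayList<>();

    FirstFitHeapSketch() {
        freeList.add(new int[] { 0, heap.length }); // initially one big hole
    }

    // Finds the first hole large enough, claims it, and returns the offset of the
    // new object. The heap array is already zeroed, so instance fields start out
    // with their default (null/0) values, as described above.
    int allocate(int size) {
        for (int[] hole : freeList) {
            if (hole[1] >= size) {
                int offset = hole[0];
                hole[0] += size;   // shrink the hole
                hole[1] -= size;
                // ... a real allocator would now initialize the object header ...
                return offset;
            }
        }
        throw new OutOfMemoryError("no free space large enough");
    }
}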
To directly manipulate memory at a given address, a class called Unsafe is used. This class contains native methods to get/set the various java types.
Synchronization involves the implementation of synchronized methods and blocks and the wait, notify, notifyAll method of java.lang.Object.
Both items are implemented using the classes Monitor and MonitorManager.
Lightweight locks
JNode implements a lightweight locking mechanism for synchronized methods and blocks. For this purpose a lockword is added to the header of each object. Depending on the state of the object on which a thread wants to synchronize, a different route is taken.
This is in principle how the various states are handled.
All manipulation of the lockword is performed using atomic instructions prefixed with multiprocessor LOCK flags.
When the lockcount part of the lockword is full, an inflated lock is also installed.
Once an object has an inflated lock installed, this inflated lock will always be used.
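A minimal sketch of how such a lockword might be claimed with an atomic compare-and-set is shown below, using AtomicLong as a stand-in for the word in the object header; the layout and names are assumptions, not JNode's actual lockword format.

import java.util.concurrent.atomic.AtomicLong;

class LockwordSketch {
    // Stand-in for the lockword in the object header: 0 means unlocked,
    // otherwise the owning thread id is stored in the word.
    final AtomicLong lockword = new AtomicLong(0);

    boolean tryLock(long threadId) {
        // Fast path: atomically install our thread id if the word is still 0.
        if (lockword.compareAndSet(0, threadId)) {
            return true;
        }
        // Slow path: the word is already claimed; a real implementation would bump
        // a lock count for recursive entry or fall back to an inflated lock.
        return false;
    }

    void unlock(long threadId) {
        lockword.compareAndSet(threadId, 0);
    }
}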
Wait, notify
Wait and notify(All) require that the current thread owns the monitor of the object on which wait/notify is invoked. The wait/notify implementation will install an inflated lock on the object if the object does not already have an inflated lock installed.
The following reports are generated nightly reflecting the state of SVN trunk.
Plugins:
Javadocs:
Nightly build:
This part is intended for JNode testers.
Here you can find information related to the filesystem tests.
Running the tests outside of JNode
With Ant, run the target "tests" in the file /JNode-All/build.xml.
The results are sent to the standard output, and unexpected exceptions are reported there as for any other Java application.
To debug the functionality whose tests are failing, you can use Log4j, which is configured through the file /JNode-FS/src/test/org/jnode/test/log4jForTests.properties.
By default, traces are sent to localhost on port 4445. I recommend using Lumbermill as a server to receive the log4j messages.
Running the tests in JNode
Type AllFSTest in the JNode shell. The results and unexpected exceptions are sent to the console. Log4j is configured automatically by JNode itself, and can be configured manually with the shell command log4j.
In the following tests we assume that the IP address of JNode is 192.168.44.3.
The JNode-shell project includes a test harness for running "black-box" tests on JNode commands and scripts. The harness is designed to allow tests to be run both in a development sandbox and on a running JNode system.
The test methodology is very straight-forward. Each testcase consists of a JNode alias, Java classname or inline script, together with an optional set of arguments and an optional inline input file. The command (alias or class) or script is run with the prescribed arguments and input, and the resulting output stream, error stream and return code are compared with expected results in the testcase. If there are any discrepancies between the expected and actual results, the testcase is counted as "failed". If there are any uncaught exceptions, this counts as a test "error".
The testcases are specified in XML files. Examples may be found in the JNode-Shell project in the "src/test" tree; e.g. "src/test/org/jnode/test/shell/bjorne/bjorne-shell-tests.xml". Each file typically specifies a number of distinct testcases, as outlined above. The "all-tests.xml" file (i.e. "shell/src/test/org/jnode/test/shell/all-tests.xml") should include all tests in the shell tree.
Running tests from Eclipse
The following steps can be used to run a set of tests from Eclipse.
You should now see a Console view displaying the output from running the tests. The last line should give a count of tests run, failures and errors.
Running tests from the Linux shell
Running the tests from the Linux shell is simply a matter of doing what the Eclipse launcher is doing. Put the relevant "JNode-*/classes" directories on the classpath, then run:
java org.jnode.test.shell.harness.TestHarness <xxx-tests.xml>
Running tests from within JNode.
In order to run the tests on the JNode platform, you must first boot JNode with all plugins and tests loaded. Then, run the following command at the JNode command prompt:
org.jnode.test.shell.harness.TestHarness -r /org/jnode/test/shell/all-tests.xml
Notes:
TestHarness command syntax.
The TestHarness command has the following syntax:
command [ <opt> ...] <spec-file> ...
where <opt> is one of:
--verbose | -v # output more information about tests run
--debug | -d # enable extra debugging
--stopOnError | -E # stop test harness on the first 'error'
--stopOnFailure | -F # stop test harness on the first 'failure'
--sandbox | -s <dir-name> # specifies the dev't sandbox root directory
--resource | -r # looks for <spec-file> as a resource on the classpath
The first two options enable more output. In particular, "-d" causes all output captured by the harness to be echoed to the console as it is captured.
The "-s" option can be used when the running the test harness outside of JNode when the root of the development sandbox is not "..".
The "-r" option tells the harness to try to find the test specification files on TestHarness's classpath rather in the file system. This allows the test harness to be run on JNode without the hassle of copying test and test suite specification files into the JNode filesystem.
Note that the TestHarness command class does not implement JNode's "Command" API, so command completion is not available when running on JNode. This is a deliberate design decision, not an oversight.
Reading and writing test specifications.
If a test fails, you will probably need to read and understand the test's specification as a first step in diagnosing the problem. For a description of specification file syntax and what they mean, please refer to the Black-box command tests with TestHarness page.
To run the mauve tests, proceed as follows:
- boot JNode and choose the option "all plugins + tests" (it should be the last choice in the menu); that will allow you to use the mauve plugin.
- when boot is finished, type the command cd /jnode/tmp to go to a writable directory
- type mauve-filter and answer the questions; that will create a file named "tests" in the current directory
- type testCommand and the tests will run
If you find bugs this way, don't forget to report or fix them. Depending on the case, a patch to fix the bug may be needed on the Classpath/OpenJDK side and/or on the JNode side.
We have created a straight-forward way to run various tests from the Linux commandline in the development sandbox. The procedure is as follows:
$ ./test.sh all
For brief help on the script's arguments, just run "./build.sh" with no arguments.
The "./build.sh" script is just a wrapper script for using Ant to run tests that are
defined as targets in the "<project>/build-tests.xml" files. You can add new
tests to be run by adding targets, or (if required) cloning an existing file into
a project that doesn't have one.
TODO
This guide is intended for developers porting JNode to different platforms.
Porting JNode to another platform involves the following components:
The nano-kernel is the piece of code that is executed when JNode boots. It is responsible for setting the CPU into the correct mode and initializing the physical memory management structures (page tables, segments, etc.).
The nano-kernel is also responsible for catching and dispatching hardware interrupts.
The Unsafe class contains some native methods that must be implemented according to the given architecture.
The JNode system requires some specific classes for each architecture. These classes describe the architecture (such as the size of an object reference) and implement architecture-specific items such as thread states and processor information.
The essential classes needed for every architecture are:
At least one native code compiler must be written for an architecture. This compiler generates native code for java methods.
It is possible to implement multiple native code compilers for a specific architecture. Every compiler has a specific optimization level.
Part of the native code compiler infrastructure is usually an assembler. This assembler is a java class that writes/encodes native code instructions. The assembler is used by the native code compilers and usually by the build process.
A final but important part of a port to a specific architecture is the build process. The standard JNode build files will compile all java code, prepare plugins and initial jar files, but will not build any architecture dependent structures.
The build process contains an architecture specific ant-file which will call a task used to create the JNode kernel. This task is derived from AbstractBootImageBuilder.
This part contains all release plans, the projects organization and the development process in general.
Development in the JNode project is done in development teams.
These teams have been introduced to:
Overall coordination of the JNode project remains in the hands of the project's founder: Ewout Prangsma.
This team will develop the virtual machine itself and the basic structure of the JNode kernel.
Team members
Topics:
The plugin framework, The device framework, The VM, classloaders, native code compilers, memory management, The build & boot process, PCI drivers.
All development issues are discussed in our forum.
Targets for the near future:
Targets for the longer term:
This team will develop the filesystem layer of JNode.
Team leader: Guillaume BINET.
Topics:
The filesystem framework, The various filesystems, The integration with java.io, java.nio, Block device drivers (IDE, Floppy, SCSI, Ramdisk)
All development issues are discussed in our forum.
This team will develop the graphics layer of JNode.
Team leader: Valentin Chira.
Topics:
The AWT implementation, The window manager, The input handlers (keyboard/mouse) & drivers, Video drivers
All development issues are discussed in our forum.
This team will develop the network layer of JNode.
Team leader: Martin Husted Hartvig.
Topics:
The networking framework, The various network protocols, The integration with java.net, Network drivers
Team members:
Lesire Fabien
Mark Hale
Pavlos Georgiadis
Christopher Cole
Eduardo Millan
All development issues are discussed in our forum.
This team will develop the command line shell and the basic commands.
Team leader: Bengt Baverman.
Topics:
The command line shell, including the help system, The basic commands, Help the development of other parts of JNode to support the Shell
All development issues are discussed in our forum.
We always welcome new dedicated developers.
The following content is outdated since we have moved to GitHub.
If you want to join the development team, contact one of the developers who is working on an issue you want to contribute to, or contact the project admin for more information.
You will be asked to submit your first patches via email, before you'll be granted access to the SVN repository.
This document lays out the feature set for the next major release of JNode, designated release 0.2.
This plan is intended to guide the development towards our first major release. It is not a fixed plan that cannot be deviated from. Suggestions & remarks are always welcome and will be considered.
Release target
We want this release to be the first usable version of JNode where we can run real world Java programs on.
This means that we need a working filesystem, a stable virtual machine, a class library mostly compatible with JDK 1.1, a working TCP/IP implementation and a way to install it on a PC. It is not expected to have a fully working GUI yet.
Additional features
To achieve the target outlined above, each team will have to add/complement a number of features. These features are listed below. The percentages specify finished work, so 100% means completed.
JNode-Core
JNode-FS
JNode-GUI
JNode-Net
JNode-Shell
Release milestones
Right now no date is set for this release. There will be intermediate releases reflecting the state of development on the 0.1.x series until the target is reached.
Looking towards the future; 0.3
The next major release after 0.2 should bring a graphical user interface. We should really consider using J2SDK 1.5 features like generic types, and we should add numerous drivers for CD-ROMs, USB and video cards.
This document lays out the feature set for the next major release of JNode, designated release 0.3.
This plan is intended to guide the development towards our second major release. It is not a fixed plan (as we have seen with the 0.2 release). Suggestions & remarks are always welcome and will be considered.
This release will improve the stability of the JNode operating system and enhance the usability.
A major goal of this release is to reduce the memory footprint required by JNode. The VM will be enhanced to support this, and all parts of JNode will have to be more concerned about their memory usage.
JNode will become localizable and translations for some locales will be added.
Every new part of JNode will have to be localizable according to a set of rules that will be determined.
The one and only language for the source code of JNode is and will remain to be English.
The remainder of this page will describe the targets and enhancements of the various subprojects of JNode. The names between brackets in the enhancements sections are the names of the lead developer for that enhancement.
The enhancements are given a priority:
The virtual machine will become more stable, reduce memory usage and will add support for Isolates (JSR 121). Furthermore it will enhance the J2SDK compatibility level.
The operating system will add support for power management and make enhancements for that in the driver framework.
An installer will be developed that is used to install JNode onto a PC system. This installer will put the essential structures/files on the harddisk of the PC.
A persistent storage mechanism for plugin preferences will be added.
Enhancements:
The network layer will be enhanced to fully support wireless networks. Furthermore, the existing TCP/IP stack will be improved in terms of reliability, safety and speed.
Enhancements:
The filesystem layer will become more stable and will be refactored to make use of the NIO classes.
Support will be added for a virtual filesystem that allows links between filesystems.
A new "system" filesystem will be added that gives access to a distributed filesystem that contains the JNode system information. This system information is about plugins, kernels & preferences.
Enhancements:
The existing GUI will be improved in terms of stability, Java2D support and speed.
The video driver interface may be adjusted to make better use of hardware acceleration.
A user friendly desktop environment will be developed or integrated.
Enhancements:
The shell will be extended with a graphical console, in order to display non-ASCII characters.
Enhancements:
We want to make the life of the JNode developer much easier. This means adding good documentation and also providing ways to develop JNode in JNode.
Enhancements:
This document states some of the TODOs with regard to future releases of JNode. There is no particular date by which the targets should be finished, but it should give you some hints about what you could look at:
In order to involve students in JNode, I will present some projects here.
Git repository
The gitorious project is located here : http://gitorious.org/jnode.
It consists of:
Contact
If you are interested, you can contact me (Fabien DUMINY) :
Remarks
Here is a list of classical projects.
Legend :
(A) : project assigned
Level :
(*) : easy
(**) : average
(***) : difficult
(****) : very hard
: unknown
Here is a list of generic projects
Legend :
(A) : project assigned
Level :
(*) : easy
(**) : average
(***) : difficult
(****) : very hard
: unknown
Here is a list of experimental projects
Legend :
(A) : project assigned
Level :
(*) : easy
(**) : average
(***) : difficult
(****) : very hard
: unknown
JNode is discussed in several forums. These forums are now preferred over the Sourceforge mailing lists.
You can also use the #JNode.org at irc.oftc.net IRC channel.
Follow us on GitHub to track all code changes.
Use #JNode to talk about JNode on Twitter.
For all other questions, suggestions and remarks, please contact the project admins: Ewout Prangsma (aka epr), Levente Santha (aka lsantha).
This document describes what is needed to make and publish a new release of JNode.
Preparation
$ cp all/build/descriptors/jnode-configure.jar \
    builder/lib/jnode-configure-dist.jar
$ svn commit builder/lib/jnode-configure-dist.jar
Uploading
Website adjustments
SVN actions
This page gives working definitions for common terms that we use in the JNode documentation. While we will try to be consistent with terminology used in other places, we reserve the right to be inconsistent.
Please feel free to add extra terms or offer better definitions as comments.
In this part of the documentation, new feature, design & architecture proposals are worked out.
Each proposal should contain at least the following topics:
Anyone is invited to comment on the proposals.
A while ago I had a suggestion for garbage collection that relied on the MMUs of modern processors. EPR didn't like this, wanting a simpler/more generic solution. So this is a second attempt.
Deep breath, here goes....
Introduction
There are two goals of an OS Java GC:
I hope people agree with me so far. So what is the state of play with current algorithms? They mostly seem to be based on the Generational GC principle. That is, only GC the memory allocated since the last GC, unless the system is out of memory, in which case collect the generation before that, and so on.
The problem with this approach is that it delays the inevitable. Eventually a full system GC is required, which halts the system for a large amount of time. Also, a simple write barrier is required on pointer writes, so that the roots into the latest generation can be detected.
Having said this, generational GC is a good system that works in most environments; it's efficient, and modern VMs use it. In my opinion generational GC would work very well with an OS VM if the pause time for the occasional full-memory GC could be mitigated. This is what my proposal concerns.
Overview
So let's interleave a slow-running full system GC with normal program operation. The problem is that the directed connectivity graph of the program is constantly changing. Requiring too much co-operation between the mutator (the running threads) and the GC slows down the mutator. You might end up with a fine-grained GC algorithm with no pause times, but the whole system would run slowly.
I see the answer as a compromise. Break the memory into chunks. The GC can halt the entire system while it collects a chunk. The bigger the chunk, the bigger the pause time, but the more efficient the overall system. The advantage of this approach is that the integration of the mutator with the GC is very small; in fact no larger than would be required with a traditional generational GC algorithm.
Trapping intra-block pointer writes
Elaborating on the chunk idea, what is required is that we trap all pointer references between chunks. By doing this we have a set of all possible roots into a chunk. For efficiency's sake, let's assume all chunks are the same size, which is a power of 2. There are no gaps between the chunks, and chunk n starts at byte n * chunksize. Location 0 in memory is a bad page, used to trap null pointer exceptions. What we're essentially talking about is memory blocks.
It's possible to trap intra-block pointers with three instructions on every pointer write. A smart compiler can cut down the number of times even this check is done, by using the observation that pointer copying local to an object can never trigger the case. This assumes that objects never cross block boundaries. There are exceptions to this, for instance large objects bigger than the block size, but these can be handled separately.
The code to trap intra-block pointer writes looks like the following:
xor sourcePointer, destPointer
and result, blockMask ; blockMask = ~(blockSize - 1), masking off the within-block bits
jnz codeToHandleIntraBlockPointers
As people can see, including the jump it's only three instructions on x86 (I think!).
This only has to be triggered when a member pointer of an object is set, not for intermediate local variables.
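In Java-flavoured pseudo-code the same check looks roughly like the sketch below; the addresses, the block size and the recordIntraBlockPointer helper are assumptions for illustration.

class WriteBarrierSketch {
    static final long BLOCK_SIZE = 1 << 16;            // assumed block size, a power of two
    static final long BLOCK_MASK = ~(BLOCK_SIZE - 1);  // keeps only the block-number bits

    // Conceptually called on every pointer store of the form source.field = dest.
    static void pointerWriteBarrier(long sourceAddress, long destAddress) {
        if (((sourceAddress ^ destAddress) & BLOCK_MASK) != 0) {
            // The two addresses lie in different blocks: remember this write.
            recordIntraBlockPointer(sourceAddress, destAddress);
        }
    }

    // Stub: a real implementation would append the source address to the
    // destination block's list, as described in the next section.
    static void recordIntraBlockPointer(long sourceAddress, long destAddress) {
    }
}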
Storing intra-block pointer writes
Pointers that are detected as being intra-block need to be stored for later analysis. What this document proposes is to have a system-wide array of pointers with as many elements as there are blocks. The size of this array would be determined by the following equation:
size of array = amount of system memory / size of block
Each element in the array corresponds to a block in memory. Each array element contains a list of pointers pointing to elements held in the corresponding block.
The address of the source pointer is added to the linked list pointed to by the array element that corresponds to the block containing the destination pointer. The effect of this is that each block now has a set of addresses of pointers pointing into it. Of course there can be duplicates, but the critical thing is that this list is time ordered. That's very important.
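A hedged sketch of that bookkeeping structure is shown below; the block size, the address arithmetic and all names are assumptions.

import java.util.ArrayList;
import java.util.List;

class RememberedListsSketch {
    static final int LOG2_BLOCK_SIZE = 16;   // 64 KB blocks (assumption)
    static final int BLOCK_COUNT = 1024;     // amount of system memory / size of block

    // One time-ordered list of source pointer addresses per block.
    final List<List<Long>> pointersIntoBlock = new ArrayList<>();

    RememberedListsSketch() {
        for (int i = 0; i < BLOCK_COUNT; i++) {
            pointersIntoBlock.add(new ArrayList<>());
        }
    }

    // Record that the pointer stored at sourceAddress now points at destAddress.
    void record(long sourceAddress, long destAddress) {
        int destBlock = (int) ((destAddress >>> LOG2_BLOCK_SIZE) % BLOCK_COUNT);
        pointersIntoBlock.get(destBlock).add(sourceAddress);
    }
}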
Now the elements in these lists do not get modified by the mutator after they are created. This means that a thread running in parallel can scan these lists and do the following:
This will trim the list down to the bare root set of a given block. The time taken to do the above is proportional to the size of the list, which is directly proportional to the number of intra-block pointer writes. Essentially what we're doing is delaying the processing of the lists so we can process a longer list in one go. This increases the chance of duplicates in the list and therefore can make the process a lot more efficient. We can also run the list scanning algorithms on different threads (and therefore processors) and possibly schedule more of them at system idle time.
But how are duplicates and obsolete references in the list detected?
Firstly, duplicates. On modern architectures objects are generally aligned on a machine word, which means, for instance, that on a 32-bit architecture (4-byte words) the bottom 2 bits of all object pointers will be zero. These bits could be used as an information store: when scanning the list the pointers can be marked, and if a pointer is already marked it is not added to the new list that is being built.
Secondly, what about obsolete references? This is simple: if the pointer points to some other block now, it's obsolete.
So the algorithm so far is this. Run a thread in parallel with the mutators to cut down the list of pointers into a certain block. This incoming pointer list for the block will keep growing, so we can say that as soon as a certain percentage of the list is processed, all the mutators are halted. The next step is to process the remaining part of the list for the block we are about to garbage collect. That list should now contain a valid set of roots for the block. We can garbage collect the block and move all the reachable objects to the new heap that is growing. Fix up the roots to point to the new object locations, and the old block is ready to be recycled.
The more observant readers will have asked about intra-block cycles. The technique I use is to have two categories of reached objects:
The idea of strongly reached objects is that there was a provable link to the object sometime during the last cycle. Weakly reached objects could either be junk or referenced. Strongly reached objects are copied out to a new heap. Weakly reached objects can either be left in place or copied to a weakly reached heap. When there are no references from the strongly reached heap to anywhere else, we know we can stop garbage collecting.
... more to come ...
This is where the graphics framework requirements & specification will go.
Current project members:
My current thinking on the graphics framework is that the graphics driver is passed the basic hardware resources it needs; it cannot access any other hardware resources.
The driver needs to provide a list of supported video modes, resolutions and refresh rates. The driver might also provide information about the attached video display device, e.g. flat panel, resolution, make, model, refresh rate, etc.
There needs to be a standard way to query the driver about the supported display modes. Either we have the display driver implementing an interface or have an abstract method on a base class.
The DisplayMode interface might have the following methods:
The above interface begs the question whether there should be two sub-interfaces, TextDisplayMode and GraphicsDisplayMode. Should the graphics driver export the two lists separately? Most graphics cards enter a text display mode by default I think. Some export their own special text display modes.
Comments from Valentin:
My opinion is that we should pack the information that the driver can send back into an external class called GraphicDriverInfo, and that we should have two other interfaces called TextDisplayMode and GraphicsDisplayMode that extend the DisplayMode interface. So the list of classes/interfaces (and their methods) should look like this:
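As a rough illustration only, such a structure might look like the sketch below; all method names are guesses, not part of the proposal.

class GraphicsFrameworkSketch {
    interface DisplayMode {
        int getWidth();        // guessed accessor names
        int getHeight();
        int getRefreshRate();
    }

    interface TextDisplayMode extends DisplayMode {
        int getColumns();
        int getRows();
    }

    interface GraphicsDisplayMode extends DisplayMode {
        int getBitsPerPixel();
    }

    // Information the driver sends back about itself and its supported modes.
    interface GraphicDriverInfo {
        java.util.List<TextDisplayMode> getTextDisplayModes();
        java.util.List<GraphicsDisplayMode> getGraphicsDisplayModes();
    }
}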
Everyone's input on this is welcome.
Goal
Isolates are intended to separate java programs from each other, in order to protect java programs from (intentional) errors in other programs.
Proposal (in concept)
I propose to make an Isolates implementation that is somewhere between full isolation and full sharing. With the goal in mind, programs should be protected from each other, which means that they should not interfere with each other in terms of resources, which are typically:
This means that everything that can be used to interfere with other programs on these items must somehow be protected. Everything else must be as shared as possible, in order to minimize resource usage.
This can be achieved by isolating certain resources, and making several services aware of the fact that there are isolates. E.g. a network stack should make sure that network bandwidth is shared nicely across all users, in order to prevent a single program from eating up all network bandwidth.
What is isolated vs. shared
Isolated:
Shared:
Explanation:
Static variables in JNode are implemented in statics tables. The index (of a static variable) in this table must be constant across all isolates, otherwise the compiled code needs to retrieve the index on every static variable get/set, which is very expensive. Having said this, this does imply that the statics table will become pretty large. This can be solved by implementing a cleanup strategy that frees indexes when they are no longer used.
For the rest, the separation is made in terms of publicly accessible versus internal. E.g. a java program can synchronize on a Class instance, but since these are isolated, this will not block all other programs.
Isolated java heaps will have significant implications on the code of the shared services, so this will be something to do in step 2.
Isolating memory allocation will cause some problems that need to be dealt with.
Problems
When an object is allocated in one isolate, and this isolate is closed, the object is also removed. But what if this object is registered somewhere in another isolate?
Answer: A problem. Objects may not cross isolate boundaries.
The index.xml file will contain at least one entry for each installed module and all created clusters. This should look pretty much like a database (maybe we could use a small xml database engine here). The index file is loaded at first use of the installer and should be cached in memory. The index.xml structure should be something like this:
The process of installing a new application should be no more than downloading the jnlp file, copying the files described there and then modifying the index.xml file to add the newly installed application. Here we could have problems if another application with the same name is installed. In this case I think we should just ask the user for an alternative name. Maybe in the index.xml we could store not just one name but two: originalname="app1" and name="editor". In this way one could install applications with the same jnlp name. A trickier problem is version handling.
The most important operation that the installer must support is updating the packages which are already installed. This operation must take care of all dependent packages and should start by downloading the new jnlp file into a temp folder. Let's assume the following scenario:
The user runs in console the following command:
"/> installer –u system"
What the installer should do first is find what is associated with name system. Lets say that system is a cluster of modules than the installer should update all modules from “system†cluster. This means finding all the modules that belong to this cluster or sub-clusters read their updateURL entry and download the new JNLP files and execute them. In the process of installation of a new version the installer must check if the external resources of the new version have the version described in the JNLP file and if they don’t than this resources must be updated as well. The update of resources must be done only if no other module uses the current version of the resource. In our example lets say that module service-manager has a new version which needs ant version 1.6 but locally we have ant 1.5 installed than ant must be updated as well if no other modules use ant 1.5. If ant 1.5 is still used than the new version of ant must be installed separately and the old version kept.
Addendums
markhale: Details on use of JNLP files.
Three types of modules have been identified: applications/applets, libraries and system level plugins (including drivers). The purpose of this note is to detail how each type of module is described by a JNLP file.
To allow for finer-grained control of security, introduce a jnode:permission child element of the jnlp security element.
jnode:permission will have the following required attributes: class, name, actions.
It is time for us to start thinking about how JNode should run from a normal harddisk based system, and how it should be installed and maintained.
An essential part of this is how and where system files should be stored, and what system files are. This page answers the question of what system files are and proposes a tree structure in which system files are stored.
What are system files
In JNode there are a few different types of system files. These are:
Tree structure
In JNode system files should be placed in a container in a structure that is "well known".
A container can be a filesystem, but can also be some kind of database. See the next paragraph for a discussion on this.
The proposed tree is as follows:
/jnode - The root of the container
/jnode/system - Contains kernel and the initial jars
/jnode/plugins - Contains the plugins
/jnode/boot - Contains the bootloader files
/jnode/config - Contains the configuration data
System file container
Traditionally the system files are stored in a directory on a "root" filesystem. This filesystem is identified by a parameter, or some flag in a partition table.
This method is easy, because all normal filesystem tools can be used on them, but it makes it harder to protect these files against viruses, ignorant users, etc. Also, this method limits the system files to being harddisk based. (E.g. look at the trouble Linux had to go through to support an NFS root filesystem.)
For these reasons, I propose a more generic method: an abstract container interface for system file access. This interface can have implementations ranging from a single file based container to a network loader. The essential part is that the actual implementation is hidden from the part of JNode that uses it (either to load files, or to install them).
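A hedged sketch of what such an abstract container interface might look like is given below; the interface name and methods are assumptions, not an agreed design.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hides whether the system files live in a filesystem directory, a single
// container file, or are fetched over the network.
interface SystemFileContainer {
    InputStream openForRead(String path) throws IOException;    // e.g. a hypothetical "/jnode/plugins/xyz.jar"
    OutputStream openForWrite(String path) throws IOException;  // used by the installer
    boolean exists(String path) throws IOException;
}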
This is all for now, please comment on this proposal.
Ewout
This is a branch for the proposals of the networking subsystem
Networking framework
The goals of this proposal:
- Flexibility, more capabilities
- Simplicity, better Object-Oriented design
The following are the 3 basic types of entities within this framework:
- Network devices
- Network filters
- Applications.
The network devices are the device drivers of the network interfaces, or virtual network drivers (like the local loopback). We assume that such a device driver takes a packet stored in a memory buffer and writes it on the media (wire, air or anything else). Also, it receives a packet from the media and writes it to a memory buffer. Device drivers that do some of the work of the networking system, such as checksum calculation or crypto work, may also be treated as Network filters (they will be discussed later).
At the opposite end are the applications. This is any software that creates packets and injects them, or receives a packet from the networking subsystem. For example: a ping command, DNS server software, and also the java SocketImpl system.
Both network device drivers and applications are the endpoints of this model. Through them, packets come into and go out of the networking subsystem of the OS. Between them we have the Filters.
The filters take packets from any component and forward them to other components. A subcategory of them are the protocol stacks. The filters are the components that are used to transform the packets or do other things with them. For example, a TCP/IP stack is a filter that parses the incoming byte arrays (RawPacket from the device driver) into TCP/IP packets, or encapsulates data into TCP/IP packets to be sent later over another protocol. There is more that the TCP/IP stack will do internally, but that is a matter of the TCP/IP stack implementation and not of this framework proposal.
These filters have one or more PacketStreams. Most components may have two packet streams. Any packet stream implements two interfaces, PacketListener and PacketFeeder. Any PacketListener may be registered to a feeder to receive packets from it. This way we can have chains of packet streams, some of which may split. Also, a filter may be just a PacketFeeder or PacketListener, for example a component that captures packets directly to the filesystem, a counter for some specific packets, a traffic generator, etc. (but these may not be treated as endpoints).
For performance reasons we can use a "listening under criteria" system: only when these criteria are matched will the feeder send the packet to the listener. We can have an interface ListenerCriteria with a simple, listener-specific method that reads some bytes or anything else from the incoming packet and returns true or false. This method will be called by the packet feeder before it sends the packet to the listener. For example, an IPListenerCriteria will check the ethertype bytes if the packet feeder is the EthernetLayer. Or another ListenerCriteria implementer may check the class of a packet to see if it is an instance of ICMPPacket. Listening under criteria will be a way to filter packets that are passed from stream to stream.
The PacketFeeders will have a registry, where they store the listeners currently registered to them. For this registry I suggest using a List of bindings, where every binding of this list has as key a ListenerCriteria instance and as value a list of the PacketListeners that are listening under these criteria.
For performance reasons, a packet feeder may have two or more different registries of listeners, for example one for high priority listeners and one for the others, or one registry for protocol-to-protocol submission and one for the others (if they exist).
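A hedged sketch of the interfaces described above is given below; the method names are assumptions, not an agreed API.

class NetworkFrameworkSketch {
    // Placeholder for the packet representation discussed later in this proposal.
    interface Packet {
    }

    // Decides whether a listener wants to see a given packet at all.
    interface ListenerCriteria {
        boolean matches(Packet packet);
    }

    interface PacketListener {
        void packetReceived(Packet packet);
    }

    interface PacketFeeder {
        // Register a listener that only receives packets matching the criteria.
        void addListener(PacketListener listener, ListenerCriteria criteria);
        void removeListener(PacketListener listener);
    }
}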
To avoid deadlocks between components and performance degradation, when a feeder passes a packet to a listener, the listener may use an incoming queue and have its own thread to handle the packet, except if the packet handling it would do is very quick.
Another issue is how all these relations between the components will be managed. A command or a set of commands would be useful; this is mainly a job for Ifconfig.
The result of all this will be a web of networking components, where every component can communicate with every other. Think of the possibilities; I have found many already.
This is an abstract of what I am thinking. The details are better discussed here.
Pavlos Georgiadis
Packet representation
The goal of this proposal is mainly to speed up the packet handling and to provide the developers a simpler and more Object Oriented packet representation. It aims to remove the current SocketBuffer and replace it with Packets.
Currently the packets are represented with the SocketBuffer class. The SocketBuffer is something like a dynamic buffer (to be more accurate, it is more like a list of dynamic buffers). When a program wants to send a network packet, it creates the protocol headers, and then the headers are inserted in this buffer list with the packet payload (if there is a payload). Finally, the NIC driver copies all the bytes of the packet into another fixed array to be used by the driver.
When we send a packet we move all the packet data two times (from the headers to the SocketBuffer and from it to the final fixed array). When we receive a packet, the SocketBuffer acts like a fixed array, which provides us with some useful methods to read data from it.
What I suggest is to represent the packets as…Packets. All the protocol packets are the same thing. They have a header and a payload (some of them have a tail too). Every packet is able to know its size and how to handle its data (including the payload).
So let’s say we have the interface Packet.
What we need from the Packet implementers:
- Set and Get methods for the data of the packets (class specific, and won't be in the Packet interface)
- A way to store the data in an object and to have a method that will return the entire packet as a byte[] (when we send the packet).
- Methods that will parse a byte array to create the Packet object (when we receive a packet).
Any packet is represented by a class that implements the Packet interface, for example IPv4Packet, ICMPPacket, TCPPacket, etc. Every component of the networking layer can access the data of such a packet with the appropriate set and get methods (a program that uses a packet knows its specific accessors).
To accomplish the second and the third we need the following methods for the interface Packet:
public void getData(byte[] b);
public void getData(byte[] b, int offset);
public int getSize();
The getData(byte[] b) method writes the packet into the given byte array. If the length of this array is not enough we can throw an exception. The getData(byte[] b, int offset) method writes a packet to the array b, starting from the offset position. The getSize method will return the length of this packet, including the length of its payload.
These methods will be called mainly from the network drivers. This is the point where the packet is converted from object to a memory buffer and vice versa.
When the Ethernet protocol sends a packet to the driver, the driver will call getSize() on the Ethernet packet to determine how big the array that will store the entire packet must be. The Ethernet packet's getSize() method will return, for example, 14 + payload.getSize(). Remember that the payload is also a packet, let's say an IP packet that may return, for example, 20 + payload.getSize(). Once the driver has determined the length of the entire packet, it will create the memory buffer b and call getData(byte[] b) on the Ethernet packet, which will write the first 14 bytes and internally call payload.getData(b, 14). This way we move the data only once, from the objects to the final byte array.
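A hedged sketch of the recursion described above follows; the header handling is simplified and this is not a complete Ethernet implementation.

class PacketSketch {
    interface Packet {
        void getData(byte[] b);
        void getData(byte[] b, int offset);
        int getSize();
    }

    static class EthernetPacket implements Packet {
        final byte[] header = new byte[14];  // destination MAC, source MAC, ethertype
        final Packet payload;                // e.g. an IPv4Packet

        EthernetPacket(Packet payload) {
            this.payload = payload;
        }

        public int getSize() {
            return 14 + payload.getSize();   // our header plus whatever the payload needs
        }

        public void getData(byte[] b) {
            getData(b, 0);
        }

        public void getData(byte[] b, int offset) {
            System.arraycopy(header, 0, b, offset, 14);  // write our own header
            payload.getData(b, offset + 14);             // let the payload write itself
        }
    }

    // The driver side: size the buffer once, then let the packet chain fill it in one pass.
    static byte[] toWire(Packet p) {
        byte[] buffer = new byte[p.getSize()];
        p.getData(buffer);
        return buffer;
    }
}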
A common class that implements the Packet interface is RawPacket, which is a packet that maps a byte array, or a portion of such an array, as a packet.
The Raw packet will be mainly used:
- To store the data payload of a packet (for example the data payload of a TCP packet)
- To map a received packet before it is parsed by the networking components.
A practical example for the second:
When a packet is received from a NIC, the driver will create a RawPacket with the array that stores the received frame. Later this RawPacket will be sent, for example, to the Ethernet protocol, which will parse the first 14 bytes into its attributes and create another (or modify the same) RawPacket that maps the same byte[] from position 15 to the end. This RawPacket will be the payload of this Ethernet packet, which will later be sent to the IPv4 stack, for example, and so on.
Pavlos Georgiadis
GNU classpath is currently used. There are however some minor differences.
It is intended that Classpath will be used out of the box at some point in the future. In the meantime, classpath is part of the JNode source repository and is synced on a somewhat regular basis with the latest version from classpath.org.
Edited by Fabien D :
Since it has been open sourced, we are moving to OpenJDK (instead of using GNU Classpath).
At the time I am writing this article, we have miscellaneous sources from GNU Classpath, OpenJDK and IcedTea (the parts that are not free in the JDK will be replaced by free parts from the GNU world).
The network can be configured with dynamic IP address or with a fixed IP address.
Note : To find the name of your network card, just type "ifconfig" and you will get a list of available devices.
Configuring the "loopback" interface
You need to configure the loopback interface for the DNS setup performed by DHCP to work.
Configuring JNode with a dynamic address
Configuring JNode with a fixed IP address
To read more about the commands and their options see the user docs.
The answer is on this page.
To run JNode on an X86 PC, you must have at least the following hardware.
Pentium processor
256Mb RAM
To make it a bit more interesting, the following (or better) hardware is recommended.
Pentium 3 processor
512Mb RAM
32-bit graphics card
People often want to know if their favorite Java-based application or library runs on JNode.
The short answer is usually: "We don't know; why don't you give it a try?".
The long answer is that it depends on the nature of the application. Here are some guidelines:
those programs can be coded in Java and ported to JNode.
The page contains references to papers and presentations written about JNode.
If your paper/presentation is not yet listed here, contact the admin.
Papers:
Presentations:
Grub boot loader
Ext2
BeFS
HFS/HFS+/HFSX
General Graphics standards
Graphics Card Specifications
The page contains references to research done with and around JNode.
If your research topic is not yet listed here, contact the admin.