Questions about JNode

Hello,

I am researching the JNode operating system for a college course I am currently taking.

After finding much of the information I need on the JNode website, I still have a few questions I have not been able to find answers to. I would greatly appreciate any input you could give me.

The questions are as follows:

  1. What type of algorithm is used in the scheduler? (This would be either First Come, First Served; Shortest Job First; or Round-Robin, depending on how processes are scheduled. If it is something else, please explain.)
  2. Is the scheduler pre-emptive or non-pre-emptive?
  3. What kind of techniques are used for process synchronization? (Such as semaphores or lock registers)
  4. Is there support for shared memory and message passing?
  5. How is memory management handled? (Contiguous Memory Allocation (Best Fit, Worst Fit, or First Fit), or Paging)
  6. How are deadlocks handled?

Thank you.

Thank you

Thank you to all who have taken the time to answer my questions. Your answers will greatly help me, and hopefully other readers as well.

I just have one more question after reading the responses. For the round-robin scheduler, is there a maximum time that a process or thread can execute at a time, i.e. a time quantum? So, for example, if the yield flag is never set to pre-empt a process, will that process be allowed to execute in its entirety, or is there a time at which it must give up the CPU? If there is such a time, what is it and how was it determined to be best?

The yield flag is set once

The yield flag is set once every 8 milliseconds. The actual thread switch occurs after that, when the currently executing thread reaches its next yieldpoint. So we can estimate the minimum value of the time quantum to be around 8 ms, and the maximum value to be 8 ms plus the longest period of time needed to reach the next yieldpoint.
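
To put the estimate in concrete terms, here is a tiny sketch (not JNode code; the 2 ms worst-case gap between yieldpoints is a made-up number, purely to show how the bound is formed):

    // Hedged sketch: estimating the effective time quantum described above.
    public class QuantumEstimate {
        public static void main(String[] args) {
            long flagPeriodMs = 8;          // the yield flag is raised every 8 ms
            long worstYieldpointGapMs = 2;  // hypothetical: longest stretch of code between two yieldpoints

            long minQuantumMs = flagPeriodMs;                        // switch happens right at the next flag
            long maxQuantumMs = flagPeriodMs + worstYieldpointGapMs; // flag raised, then wait for the next yieldpoint

            System.out.println("quantum is between " + minQuantumMs + " and " + maxQuantumMs + " ms");
        }
    }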

Remarks

2. The scheduler is pre-emptive, and it is implemented using techniques reminiscent of cooperative scheduling. The JNode bytecode-to-native compiler automatically inserts so-called yieldpoints into the generated machine code, based on an algorithm. This means that the program does not need to be written in a cooperative style using Thread.yield() calls. On the other hand, a thread will be interrupted at a yieldpoint only when an additional global flag is set accordingly. This flag is checked at each yieldpoint: if it is set, the execution of the thread is suspended temporarily; otherwise the thread continues to execute as if there were no yieldpoint. This ensures that a thread is interrupted only at well-defined points in the code. The locations of these points are under the control of JNode, by means of the bytecode-to-native compiler, and are only loosely related to the implementation details of the Java program being executed. (See the sketch after these remarks.)

6. JNode tries to detect deadlocks, but no attempt is made yet to handle the situation gracefully when a deadlock is detected.
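
As a rough illustration of remark 2, here is what the idea looks like in plain Java (this is not JNode's actual generated machine code; the yieldFlag field and yieldFlagSet() helper are hypothetical stand-ins for JNode's global flag):

    /** Conceptual sketch only: all names here are hypothetical, not JNode APIs. */
    public class YieldpointSketch {
        /** What the programmer writes: an ordinary loop, no cooperative yields. */
        static long sumAsWritten(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += i;
            }
            return sum;
        }

        /** What the generated code conceptually does: the same loop with a
         *  compiler-inserted check at the loop back-edge. */
        static long sumAsCompiled(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += i;
                if (yieldFlagSet()) {   // check the global flag at the yieldpoint
                    Thread.yield();     // give up the CPU only at this well-defined point
                }
            }
            return sum;
        }

        // Hypothetical stand-in for JNode's global "please reschedule" flag.
        static volatile boolean yieldFlag = false;
        static boolean yieldFlagSet() { return yieldFlag; }

        public static void main(String[] args) {
            System.out.println(sumAsCompiled(10)); // prints 45
        }
    }

The point is that sumAsWritten() and sumAsCompiled() compute the same thing; the extra check is invisible to the programmer and only matters when the scheduler has raised the flag.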

Answers from the author, Ewout Prangsma

I forwarded the questions to the original author of the related components and here are the answers:

1) Round robin
2) Scheduler is pre-emptive.
3) Synchronization is done by an implementation of the synchronized feature in Java. It is implemented via lightweight locks (using a few bits per object) supported by CPU instructions. Once a lightweight lock overflows, a heavyweight lock is created and entered. (See the locking sketch after this list.)
4) No shared memory, just one flat address space, with the Java memory manager on top of that.
5) The memory manager actually takes a two-step approach. Java heaps are created on larger blocks of (flat) memory, and Java objects are created in a heap. As far as I remember it is first fit. Large objects (e.g. large arrays) are handled separately. (See the allocation sketch after this list.)
6) Good question, probably not.
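
Regarding answer 3, here is a conceptual sketch of how a lightweight ("thin") lock can keep the owner and a small recursion count in a few bits of a per-object lock word and be taken with a single compare-and-set. This is not JNode source; the bit layout, the ThinLockSketch class and its tryLock() method are hypothetical, and unlocking, contention handling and the real heavyweight monitor are deliberately left out:

    import java.util.concurrent.atomic.AtomicInteger;

    public class ThinLockSketch {
        // Hypothetical layout: high bits = owner thread id (assumed nonzero), low 3 bits = recursion count.
        private static final int COUNT_BITS = 3;
        private static final int COUNT_MASK = (1 << COUNT_BITS) - 1;

        private final AtomicInteger lockWord = new AtomicInteger(0); // 0 = unlocked

        /** Fast path: one CAS when the object is unlocked, a counter bump when re-entered. */
        boolean tryLock(int threadId) {
            int locked = (threadId << COUNT_BITS) | 1;       // owner plus a count of 1
            if (lockWord.compareAndSet(0, locked)) {
                return true;                                  // acquired the thin lock
            }
            int word = lockWord.get();
            if ((word >>> COUNT_BITS) == threadId) {          // already owned by this thread
                if ((word & COUNT_MASK) == COUNT_MASK) {
                    // Recursion counter would overflow: this is where a real VM
                    // "inflates", i.e. creates a heavyweight monitor and enters it.
                    return false;
                }
                lockWord.set(word + 1);                       // bump the recursion count
                return true;
            }
            // Owned by another thread: a real VM would also inflate or block here.
            return false;
        }

        public static void main(String[] args) {
            ThinLockSketch lock = new ThinLockSketch();
            System.out.println(lock.tryLock(42)); // true: fast-path CAS
            System.out.println(lock.tryLock(42)); // true: recursive acquire, count = 2
        }
    }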
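
And for answer 5, a minimal simulation of first-fit placement over a free list (again not JNode's allocator; the classes and the block layout are made up to show the idea):

    import java.util.ArrayList;
    import java.util.List;

    public class FirstFitSketch {
        static class FreeBlock {
            int start, size;
            FreeBlock(int start, int size) { this.start = start; this.size = size; }
        }

        private final List<FreeBlock> freeList = new ArrayList<>();

        FirstFitSketch(int heapSize) {
            freeList.add(new FreeBlock(0, heapSize)); // one big free block initially
        }

        /** Returns the start address of the allocation, or -1 if nothing fits. */
        int allocate(int size) {
            for (FreeBlock block : freeList) {
                if (block.size >= size) {   // the first block that is large enough wins
                    int address = block.start;
                    block.start += size;    // shrink the free block from the front
                    block.size -= size;
                    return address;
                }
            }
            return -1;                      // nothing fits (a real heap would grow or collect garbage)
        }

        public static void main(String[] args) {
            FirstFitSketch heap = new FirstFitSketch(1024);
            System.out.println(heap.allocate(100)); // 0
            System.out.println(heap.allocate(200)); // 100
        }
    }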

Some answers

Hi slimbody,

I'll try and answer your questions.
1. There is a not-so-old thread about scheduling and some new ideas. Perhaps it helps a bit. AFAIK the current scheduler is simply round-robin.
2. We're non-pre-emptive. We're cooperative: the compiler inserts yieldpoints that give up the CPU.
3. Where appropriate we use synchronized/Monitors. For low-level stuff and for the Monitor implementation itself we use memory spinlocks (see the spinlock sketch after this list).
4. We have one single flat memory space. No virtual memory, so everything is shared, and I'd call a simple method invocation "message passing".
5. A very simplistic memory manager, kind of first fit.
6. I'm not quite sure about that, but I saw code a while back that suggests that some deadlocks might get recognized. When they are, you get a stack trace and JNode dies. :-)
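
As a sketch of the "memory spinlocks" mentioned in point 3 (JNode's own spinlocks live below the Java level and look different; this Java version only shows the test-and-set idea):

    import java.util.concurrent.atomic.AtomicBoolean;

    public class SpinLockSketch {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        void lock() {
            // Busy-wait until we flip the flag from false to true.
            while (!locked.compareAndSet(false, true)) {
                Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
            }
        }

        void unlock() {
            locked.set(false);
        }

        public static void main(String[] args) throws InterruptedException {
            SpinLockSketch lock = new SpinLockSketch();
            int[] counter = {0};
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    lock.lock();
                    counter[0]++;       // critical section protected by the spinlock
                    lock.unlock();
                }
            };
            Thread a = new Thread(task), b = new Thread(task);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(counter[0]); // 200000
        }
    }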

Hope that helps.