JNode and realtime: Javolution library?

I found a pure Java library (with String*, an XML parser, collections like the JDK's...) for realtime and embedded systems: Javolution.
It's under the BSD license, and I don't know if that is compatible with the LGPL.

The idea:
avoid loading the GC as much as possible by not repeatedly allocating, freeing, and reallocating objects of the same class in the critical ("realtime") parts. The GC thread then has less work to do, which leaves more CPU time for the critical parts.

It encourages reusing objects of the same class through the Factory and Pool design patterns; a reusable class has to implement the "Reusable" interface.
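To picture the Factory/Pool idea, here is a minimal sketch in plain Java. This is not Javolution's actual API; the Reusable interface, the Vector3 class, and the ObjectPool helper below are all made up for illustration:

    // Minimal sketch of the Factory/Pool idea (illustrative only, not
    // Javolution's real API; Reusable, Vector3 and ObjectPool are made up).
    import java.util.ArrayDeque;
    import java.util.function.Supplier;

    interface Reusable {
        void reset(); // put the object back into a clean, just-constructed state
    }

    final class Vector3 implements Reusable {
        double x, y, z;

        @Override
        public void reset() {
            x = y = z = 0.0;
        }
    }

    final class ObjectPool<T extends Reusable> {
        private final ArrayDeque<T> free = new ArrayDeque<>();
        private final Supplier<T> factory;

        ObjectPool(Supplier<T> factory) {
            this.factory = factory;
        }

        /** Returns a recycled instance, or creates a new one if the pool is empty. */
        T acquire() {
            T obj = free.poll();
            return (obj != null) ? obj : factory.get();
        }

        /** Resets the instance and keeps it for reuse instead of letting the GC collect it. */
        void release(T obj) {
            obj.reset();
            free.push(obj);
        }
    }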

All of this is done transparently (in pure Java) as far as the JVM is concerned: the GC keeps running and doing its work, and the non-critical (non-realtime) parts don't have to care about this library at all.

I think that, even if we don't want to or can't use this library, we can take advantage of its ideas to improve JNode's performance in critical parts.

What do you think ?

Performance Comparison

They already provide a benchmark for their API. I would be interested to see a comparison of the same task implemented with Javolution vs. the traditional Java APIs.

I'm somewhat sceptical about the performance gain, because pooling and preparing objects for reuse also consume CPU cycles. I have also read two presentations from JavaOne 2003 (links below) that suggest using object pooling only for objects that take a long time to create or initialize, and whose initialization is not specific to every single object, i.e. to optimize object initialization rather than GC. It also seems that modern generational GCs have no problem with huge numbers of short-lived objects.

As I understand it, the specific thing about realtime applications is not that they are fast, but that their performance is predictable. This means realtime applications have more of a problem with the way GC is performed than with possibly poor GC performance. For example, a GC algorithm that runs every 5 minutes and causes a perceptible drop in application performance is suboptimal, whereas one that runs all the time and incrementally frees only small amounts of memory without impacting application performance is what you want. The latter algorithm may even have worse throughput than the first, but it is predictable.

Links to JavaOne 2003 presentations:
http://servlet.java.sun.com/javaone/resources/content/sf2003/conf/sessio...
http://servlet.java.sun.com/javaone/resources/content/sf2003/conf/sessio...

Sebastian

Pooling has some value

> a comparison of the same task implemented with Javolution vs. the
> traditional Java APIs.

In general, you gain between 200% and 300%. Creating an object is still a very expensive operation, at least on Sun's JRE 1.5, so by reusing objects you really speed things up. OTOH, object initialization is plain Java code, always good to optimize and/or skip.

> I'm somewhat sceptical about the performance gain because the
> pooling and preparing objects for reuse will also consume cpu
> cycles.

No, because you don't prepare objects: you ask for one, and if none is available, one is created as usual.
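To make that concrete, here is a hypothetical critical loop building on the ObjectPool sketch from earlier in the thread (again, made-up names, not Javolution's API). Nothing is pre-allocated: the first acquire() creates an object as usual, and from then on the same instance is recycled, so the steady state produces no garbage.

    // Hypothetical critical loop using the ObjectPool sketch from above.
    // acquire() only creates a Vector3 when the pool is empty; afterwards
    // the same instance is recycled on every iteration.
    public class CriticalLoop {
        public static void main(String[] args) {
            ObjectPool<Vector3> pool = new ObjectPool<>(Vector3::new);

            for (int frame = 0; frame < 1_000_000; frame++) {
                Vector3 v = pool.acquire(); // reused after the first iteration
                v.x = frame;                // ... the critical ("realtime") work ...
                pool.release(v);            // hand it back; no garbage produced
            }
        }
    }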

> the specific thing about realtime applications is not that they
> are fast, but that their performance is predictable.

You're right. It adds some complexity to the code, but a real-time kernel is valuable. Since it would make the code a bit longer, it may be postponed.

I don't know how well this applies to JNode. Maybe the GC is fast enough that there is no need for such a library. But on classic JVMs, the performance gain is real.

For Fabien: the BSD license is compatible with the LGPL.