The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as a PulseAudio sound driver, the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.
This release updates our OpenJDK 6 support in the 1.13.x series with the October 2016 security fixes from OpenJDK 6 b41.
This is the final security update to IcedTea 1.x. Users should upgrade to IcedTea 2.x for OpenJDK 7 or 3.x for OpenJDK 8. See the earlier post on this for further details.
If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome. There will be a final 1.14.0 release at some point and this can include fixes for any issues with this release. However, it will not include any further security backports.
Full details of the release can be found below.
The tarballs can be downloaded from:
We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.
The tarballs are accompanied by digital signatures available at:
GnuPG >= 2.1 is required to handle this key.
The checksums can be downloaded from:
The following people helped with these releases:
We would also like to thank the bug reporters and testers!
To get started:
$ tar xzf icedtea6-1.13.13.tar.gz
$ tar x -I xz -f icedtea6-1.13.13.tar.xz
$ mkdir icedtea-build
$ cd icedtea-build
Full build requirements and instructions are available in the INSTALL file.
Following the proposed end-of-life of OpenJDK 6 upstream, we will be concluding support for the IcedTea 1.x series with the following upcoming releases:
We have no plans to produce backports of the security updates for January 2017 and beyond, either upstream in OpenJDK 6 or in IcedTea 1.x.
Others are, of course, welcome to take over maintainership of IcedTea 1.x if they wish to do so.
Many thanks for your support over the years.
A year or so ago I was asked to debug a crash in the Firefox devtools. Crashes are easy! I fired up gdb and reproduced the crash… which turned out to be in some code JITted by SpiderMonkey. I was immediately lost; even a simple bt did not work. Someone more familiar with the JIT — hi Shu — had to dig out the answer :-(.
I did take the opportunity to get some information from him about how he found the result, though. He pointed me to the code responsible for laying out JIT stack frames. It turned out that gdb could not unwind through JIT frames, but it could be done by hand — so I resolved then to eventually fix this.
I knew from my gdb hacking that gdb has a JIT unwinding API. Actually — and isn’t this the way most programs end up working? — it has two.
The first JIT API requires some extra work on the part of the JIT: it constructs an object file, typically ELF and DWARF, in memory, then calls a hook. GDB sets a breakpoint on this hook and, when hit, it reads the data from the inferior. This lets the JIT provide basically any kind of information — but it’s pretty heavy.
So, I focused my attention on the second API. In this mode, the JIT author would provide a shared library that used some callbacks to inform gdb of the details of what was going on. The set of callbacks was much more limited, but could at least describe how to unwind the registers. So, I figured that this is what I would do.
But… I didn’t really want to write this in C. That would be a real pain! C is fiddly and hard to deal with, and it would mean constant rebuilding of the shared library while debugging, and SpiderMonkey already had a reasonable number of gdb-python scripts — surely this could be done in Python.
So I took the quixotic approach, namely writing a shared library that used the second gdb JIT API but only to expose this API to Python.
Of course, this turned out to be Rube Goldbergian. Various parts of the gdb Python API could not be called from the JIT shared library, because those bits depended on other state in gdb, which wasn’t set properly when the JIT library was being called. So, I had gdb calling into my shared library, which called my Python code, which then invoked a new gdb command (written in Python and supplied by my package) — that existed solely for the purpose of setting this internal state properly — and that in turn invoked the code I wanted to run, say to fetch memory or a register or something.
Well, that took a while. But it sort of worked! And maybe I could just keep it in github and not put it in Mozilla Central and avoid learning about the Firefox build system and copying in some gdb header file and license review and whatnot.
So I started writing the actual Python code… OMG. And see below since you will totally want to know about this. But meanwhile…
… while I was hacking away on this crazy idea, someone implemented the much more sane idea of just exposing gdb’s unwinder API to gdb’s Python layer.
Hmm… why didn’t I do that? Well, I left gdb under a bit of a cloud, and didn’t really want to be that involved at the time. Plus, you know, gdb is a high quality project; which means that if you write a giant patch to expose the unwinding API, you have to be prepared for 17 rounds of patch review (this really happened once), plus writing documentation and tests. Sometimes it’s just easier to channel one’s inner Rube.
The integrated Python API was a great development. Now I could delete my shared library and my insane trampoline hacks, and focus on my insane unwinding code.
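As a rough sketch of what that Python unwinder API looks like (not SpiderMonkey's actual code): the frame-recognition predicate and the saved-register recovery below are placeholders, and the import guard exists only so the skeleton can be read and exercised outside gdb.

```python
# Minimal skeleton of a gdb Python unwinder. The JIT-specific parts
# (is_jit_pc and the saved-register computation) are placeholders,
# not SpiderMonkey's real logic.
try:
    from gdb.unwinder import Unwinder
except ImportError:  # not running inside gdb; stub for illustration
    class Unwinder(object):
        def __init__(self, name):
            self.name = name
            self.enabled = True

class FrameId(object):
    """gdb identifies a frame by a (sp, pc) pair."""
    def __init__(self, sp, pc):
        self.sp = sp
        self.pc = pc

class JitUnwinder(Unwinder):
    def __init__(self):
        super(JitUnwinder, self).__init__("jit-sketch")

    def is_jit_pc(self, pc):
        # Placeholder: decide whether pc lies in JIT-emitted code.
        return False

    def __call__(self, pending_frame):
        pc = int(pending_frame.read_register("pc"))
        if not self.is_jit_pc(pc):
            return None  # decline; let gdb's other unwinders try
        sp = int(pending_frame.read_register("sp"))
        info = pending_frame.create_unwind_info(FrameId(sp, pc))
        # A real unwinder recovers the caller's registers from the
        # JIT frame layout and records them here, e.g.:
        #   info.add_saved_register("pc", caller_pc)
        return info
```

Inside gdb one would then register it with `gdb.unwinder.register_unwinder(None, JitUnwinder(), replace=True)`.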
A lot of this work was straightforward, in the sense that the general outline was clear and just the details remained. The details amount to things like understanding the SpiderMonkey frame descriptor (which partly describes the previous frame and partly the new frame; there’s one comment explaining this that somehow eluded me for quite a while); duplicating the SpiderMonkey JIT unwinding code in Python; and of course carefully reading the SpiderMonkey code that JITs the “entry frame” code to understand how registers are spilled.
Naturally, while doing this it turned out that I was maybe the first person to use these gdb APIs in anger. I found some gdb crashes, oops! The docs would have been impenetrable, except I already knew the underlying C APIs on which they were based… whew! The Python API was unexpectedly picky in other areas, too.
But then there was also some funny business, one part in gdb, and one part in SpiderMonkey.
GDB is probably more complicated than you realize. In this case, the complexity is that, in gdb, each stack frame can have its own architecture. This seemingly weird functionality is actually used; I think it was invented for the SPU, but some other chips have multiple modes as well. What this means is that the question “what architecture is this program?” is not well-defined, and anyway gdb’s Python layer doesn’t provide a way to find whatever approximation would make sense in your specific case. However, when writing the SpiderMonkey unwinder, the question actually is well-defined, and we’d like to know the answer in order to choose which unwinder to use.
For this problem I settled on the probably terrible idea of checking whether a given register is available. That is, if you see $rip, you can guess it’s x86-64.
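A hedged sketch of that trick: in gdb, evaluating a convenience expression like $rip for a register that does not exist yields a void value rather than an error, so probing a few well-known names works. The name-to-architecture mapping below is illustrative only, and the decision logic is split out so it can run outside gdb.

```python
def arch_from_registers(available):
    """Guess an architecture from the set of register names that exist."""
    if "rip" in available:
        return "x86-64"
    if "eip" in available:
        return "i386"
    return "unknown"

def probe_registers(names=("rip", "eip")):
    """Inside gdb: a nonexistent register evaluates to a void value."""
    import gdb  # only importable when running under gdb
    found = set()
    for name in names:
        value = gdb.parse_and_eval("$" + name)
        if value.type.code != gdb.TYPE_CODE_VOID:
            found.add(name)
    return found
```

Usage under gdb would be `arch_from_registers(probe_registers())`.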
The other problem here is that gdb thinks that, since you wrote an unwinder, it should get the first stab at unwinding. That’s very polite! But for SpiderMonkey, deciding “hey, is this PC in some code the JIT emitted?” is actually a real pain, or at least outside the random bits of it I learned in order to make all this work.
Aha! I know, there’s probably a Python API to say “is this address associated with some shared library?” I remembered reading and/or reviewing a patch… but no, gdb.solib_name is close but doesn’t do the right thing for addresses in the main executable. WAT.
I tried several tricks without success, and in the end I went with parsing /proc/maps to get the mappings, in order to decide whether a given frame should be handled by this unwinder or by gdb. Horrible. And it fails with remote debugging.
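That fallback can be sketched like this. It is a toy version, assuming the standard /proc/&lt;pid&gt;/maps line format from proc(5); how the real unwinder consults it is omitted.

```python
def parse_exec_ranges(maps_text):
    """Return (start, end) pairs for executable mappings in maps_text,
    which holds the contents of /proc/<pid>/maps."""
    ranges = []
    for line in maps_text.splitlines():
        fields = line.split()
        if len(fields) < 2 or "x" not in fields[1]:
            continue  # keep only mappings with the execute bit set
        start, end = (int(part, 16) for part in fields[0].split("-"))
        ranges.append((start, end))
    return ranges

def pc_in_exec_mapping(pc, ranges):
    """True if pc falls inside any executable mapping."""
    return any(start <= pc < end for start, end in ranges)
```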
Luckily, nobody does remote debugging.
Oh, wait, people do remote debugging at Mozilla all the time. They don’t call it “remote debugging” though — they call it “using RR“, which, while it runs locally, appears to be remote to gdb; and, importantly, during replay mode it fakes the PID, and does other deep magic, though not deep enough to extend to making a fake map file that could be read via gdb’s remote get command.
By the way, you should be using RR. It’s the best advance in debugging since, well, gdb. It’s a process record-and-replay program, but unlike gdb’s built-in reverse debugging, it handles threads properly and has decent performance.
Oh well. It just won’t work remotely. Or at least not until fellow Mozillian (this always seems like it should be “Mozillan” to me, but it’s not, there really is that extra “i”) and all-star Nicolas Pierron wrote some additional Python to read some SpiderMonkey tables to make the decision in a more principled way. Now it will all work!
Though looking now I wonder if I dreamed this, because the code isn’t checked in. I know he had a patch but my memory is a bit fuzzy — maybe in the end it didn’t work, because RR didn’t implement the qGetTLSAddr packet, which gdb uses to read thread-local storage. Did I mention the thread-locals?
So, way back at the beginning, during my initial foray into this code, I found that a crucial bit of information — the appropriately-named TlsPerThreadData — was stashed away in a thread-local variable. Information stored here is needed by the unwinder in order to unwind from a C++ frame into a JIT frame.
Only, Firefox didn’t use “real” thread-local variables, the things that so many glibc and gcc hackers put so much effort into micro-optimizing. No, it just used a template class that wrapped pthread_setspecific and friends in a relatively ergonomic way.
Naturally, for an unwinder this is a disaster. Why? Unwinding is basically the dissection of the stack; but in order to compute the value of one of these thread-local-storage objects, the unwinder would have to make some function calls in the inferior (in fact this prevents it from working on OSX). But these would affect the stack, and also potentially let other inferior code (in other threads — remember, gdb is complicated and you can exert various unusual kinds of control like this) run as well.
So I neglected to mention the very first step: changing Firefox to use __thread. (Ok, I didn’t really neglect to mention it; I was just being lazy, and anyway it’s a shaggy dog story.)
RR did not implement qGetTLSAddr, which we needed, because lots of people at Mozilla use RR. So I set out to implement that. This meant a foray into the dangerous world of libthread_db.
For reasons I do not know, and suspect that I do not want to know, glibc has historically followed many Solaris conventions. One such Solaris innovation was libthread_db — a library that debuggers use to find certain information from libc, information like the address of a thread-local variable.
On the surface this seems like a great idea: don’t bake the implementation details of the C library into the debugger. Instead, let the debugger use a debugging library that comes with the C library. And, if you designed it that way, it would be a good idea.
libthread_db was not designed that way. Oh no.
libthread_db has a callback interface. The calling program — gdb or rr — must provide some functions that libthread_db can call, to do some simple things like “read some memory”; or some very complicated things like “find the address of a symbol given its name”. Normal C programmers might implement these callbacks using a structure containing function pointers. But not libthread_db! Instead it uses fixed symbol names that must be provided by the calling application. Not all of these are required for it to work (you get to figure out which, yay!), but some definitely are. And, you have to use the libthread_db that matches the libc of the inferior that you’re debugging (or link against it, but that’s also obviously bad).
Wait, you say. Doesn’t that mess up cross-debugging? Why yes! Yes it does! Which is why qGetTLSAddr has to be in the gdb remote serial protocol to start with.
Hey, maybe the Linux vendors should fix this. They are — see Gary Benson’s Infinity project — but unfortunately that’s still in development and I wanted RR to work sooner.
Ok, so whew. I wrote qGetTLSAddr support for RR. This was a small patch in the end, but an unusual pain in an already painful series. Hopefully this won’t spill out into other programs.
Hahaha, you are so funny. Of course it spills out: remember how you have to define a bunch of functions with specific names in your program in order to use libthread_db? Well, how do you know you got the types correct?
Yeah, you include <proc_service.h> (a name deliberately chosen to confuse, I suppose; why not, it doesn’t bear any obvious relationship to the library). Only, that was never installed by glibc. Instead, gdb just copied it into the source tree.
So naturally I went and fixed this in glibc. And, even more naturally, this broke the gdb build, which was autoconf’d to check for a file that never existed in the past. LOL.
At this point I figured it was only a matter of time until I had to patch the kernel. Thankfully this hasn’t been necessary yet.
In gdb the actual unwinding and the display of frames are separate concerns.
And let me digress here to say that gdb’s unwinder design is excellent. I believe it was redone by Andrew Cagney (this was well before my active time in gdb, so apologies if you’re reading this and you did it and I’ve misattributed it). Like much of gdb, many of the details are bizarre and take one back to the byte-counting days of 1987; but the high level design is very solid and has endured with, I think, just one significant change (to support inline functions) in the intervening 15 or so years. I’ve long thought that this is a remarkable accomplishment in the programming world.
So, yes. It’s not enough to just unwind. Simply having an unwinder yields backtraces with lines like:
#5 0xfeefee ???
Better than nothing! But not yet great.
The second part of the SpiderMonkey unwinder is, therefore, a gdb “frame filter”. This is an object that takes raw frames and decorates them with information like a function name, or a file name, or arguments.
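The decoration step itself is simple in spirit. Here is a toy illustration of the idea, using made-up field names rather than gdb's actual frame-filter and FrameDecorator API:

```python
def decorate_frame(raw):
    """Build a display string from raw JIT-frame facts.

    `raw` is a hypothetical dict of facts recovered from the frame:
    the callee name and a mapping of argument names to values.
    """
    name = raw.get("callee") or "???"
    args = ", ".join("%s=%s" % item for item in sorted(raw.get("args", {}).items()))
    return "<<JitFrame %s>> (%s)" % (name, args)
```

A real frame filter wraps each frame in a decorator object whose `function()`, `filename()`, and `frame_args()` methods gdb calls when printing the backtrace; the string-building above just shows what the decoration adds.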
Work to add this information is ongoing — I landed one patch just yesterday, and another one, to add more information about interpreted frames, is still in the works. And there are two more bugs filed… maybe this project, like this blog post, will never conclude. It will just scroll endlessly.
But now, with all the code in place, bt can show something like:
#6 0x00007ffff7ff20f3 in <<JitFrame_BaselineJS "f1">> (this=JSVAL_VOID, arg1=$jsval(4700))
This is the call f1(4700).
Of course we still couldn’t enable this unwinder by default. You have to enable it by hand.
And by the way, in the first release of gdb’s Python unwinder feature, enabling or disabling an unwinder didn’t flush the frame cache, so it wouldn’t actually take effect until some invisible-to-the-user state change took place. I fixed this bug, but here Pedro Alves also taught me the secret gdb command flushregs, which in fact just flushes the frame cache. (I’m going to go out on a limb and guess that this command predates the already ancient maint prefix command, hence its weird name.)
Anyway, you have to enable it by hand because the unwinder itself doesn’t work properly if the outermost frame is in JIT code. The JIT, in the interest of performance, doesn’t maintain a frame pointer. This means that in the outermost frame, there’s no reliable way to find the object that describes this frame and links to the previous frame.
Now, normally in this case gdb would either resort to debug info (not available here), or in extremis its encyclopedic suite of prologue analyzers (yes, gdb can analyze common function prologues for all architectures developed in the last 25 years to figure out stuff) — but naturally JIT compilers go their own way here as well.
Humans, like Shu back at the start of this story, can do this by dumping parts of the stack and guessing which bytes represent the frame header.
But, I’ve been reluctant and a bit afraid to hack a heuristic into the unwinder.
To sum up — in case you missed it — this means that all the code written during this entire saga would still not have helped with my original bug.
This is a very important machine that really deserves to get built. Anyone who cares about Free Software should consider funding this project at some level, and spreading the word to their friends. If this project succeeds, it will bootstrap a market for new, owner-controlled performant desktop machines. If it fails, no such computers will exist. The project page and updates explain the current (rather depressing) state of general purpose computing better than I could, so take a look.
Valgrind will also have a developer room at Fosdem on Saturday 4 February 2017 in Brussels, Belgium. Please join us, regardless of whether you are a Valgrind core hacker, Valgrind tool hacker, Valgrind user, Valgrind packager or hacker on a project that integrates, extends or complements Valgrind.
Please see the Call for Participation for more information on how to propose a talk or discussion topic.
I originally posted this on G+ but I thought maybe I should expand it a little and archive it here.
The patch to delete gcj went in recently.
When I was put on the gcj project at Cygnus, I remember thinking that Java was just a fad and that this was just a temporary thing for me. I wasn’t that interested in it. Then I ended up working on it for 10 years.
In some ways it was the high point of my career.
Socially it was fantastic, especially once we merged with the Classpath community — I’ve always considered Mark Wielaard’s leadership in that community as the thing that made it so great. I worked with and met many great people while working on gcj and Classpath, but I especially wanted to mention Andrew Haley, who is the single best debugger I’ve ever met, and who stayed in the Java world, now working on OpenJDK.
We also did some cool technical things in gcj. The binary compatibility ABI was great, and the split verifier was very fun to come up with. Per Bothner’s early vision for gcj drove us for quite a while, long after he left Cygnus and stopped working on it.
On the downside, gcj was never quite up to spec with Java. I’ve met Java developers even as recently as last year who harbor a grudge against gcj.
I don’t apologize for that, though. We were trying something difficult: to make a free Java with a relatively small team.
When OpenJDK came out, the Sun folks at FOSDEM were very nice to say that gcj had influenced the opening of the JDK. Now, I never truly believed this — I’m doubtful that Sun ever felt any heat from our ragtag operation — but it was very gracious of them to say so.
Since the gcj days I’ve been searching for basically the same combination that kept me hacking on gcj all those years: cool technology, great social environment, and a worthwhile mission.
This turned out to be harder than I expected. I’m still searching. I never thought it was possible to go back, though, and with this deletion, this is clearer than ever.
There’s a joy in deleting code (though in this case I didn’t get to do the deletion… grrr); but mainly this weekend I’m feeling sad about the final close of this chapter of my life.
As an addendum to the parting message on coin-dev: one of the most popular candidate Coin features not included in JDK 7 was some sort of literal syntax for collections — sets, lists, and/or maps.
Fortunately, this functionality is approximated in JDK 9 via Stuart Marks' JEP 269: Convenience Factory Methods for Collections, which added factory methods named "of" that return immutable collections to the List, Set, and Map interfaces. I've enjoyed using this feature in my JDK 9 programming and suggest you give it a try too.
With that, I declare Project Coin well and fully minted! Happy coding.
PS: I was heartened to see that the improvements to numeric literals included by Project Coin in JDK 7 seem to have partially inspired the feature set of an upcoming version of another widely-used language.
About a month ago, Fedora 24 was released. This is an important milestone for Christine and myself, because it includes Shenandoah in its Java VM by default. We consider this our first official release!
If you want to try it out, it’s very simple: pass -XX:+UseShenandoahGC on the command line, and your favorite application runs with Shenandoah!
Please report back any issues that you find to the shenandoah-dev mailing list.
RoboVM's main focus is to compile Java to native code for deployment on mobile devices such as iOS and Android. RoboVM uses a Java-to-Objective-C bridge built using LLVM. The good news is that the same process works for converting Java applications to native applications on GNU/Linux systems as well!
Mario Zechner, the author of libgdx, posted this nice picture from inside DDD/GDB of his first HelloWorld compiled to native x86 code running on a GNU/Linux machine.
JogAmp is the home of high performance Java™ libraries for 3D Graphics, Multimedia and Processing.
JOGL, JOCL and JOAL provide cross platform Java™ language bindings to the OpenGL®, OpenCL™, OpenAL and OpenMAX APIs.
Running on Android, Linux, Windows, OSX, and Solaris across devices using Java.
Security fixes are marked in red on the above bug tracking page.
JogAmp sends out thanks to the FuzzMyApp security researchers for the healthy communication that triggered the security review work.
If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place inside the JogAmp forum & mailing-list and the #jogamp IRC channel on irc.freenode.net.
Meet us @
If you’ve been following Infinity and would like to, you know, download some code and try it out… well, now you can!
Project Jigsaw is an enormous effort, encompassing six JEPs implemented by dozens of engineers over many years. So far we’ve defined a modular structure for the JDK (JEP 200), reorganized the source code according to that structure (JEP 201), and restructured the JDK and JRE run-time images to support modules (JEP 220). The last major component, the module system itself (JSR 376 and JEP 261), was integrated into JDK 9 earlier this week and is now available for testing in early-access build 111.
Breaking changes

Like the previous major change, the introduction of modular run-time images, the introduction of the module system might impact you even if you don’t make direct use of it. That’s because the module system is now fully operative at both compile time and run time, at least for the modules comprising the JDK itself. Most of the JDK’s internal APIs are, as a consequence, fully encapsulated and hence, by default, inaccessible to code outside of the JDK.
An existing application that uses only standard Java SE APIs and runs on JDK 8 should just work, as they say, on JDK 9. If, however, your application uses a JDK-internal API, or uses a library or framework that does so, then it’s likely to fail. In many cases you can work around this via the -XaddExports option of the javac and java commands. If, e.g., your application uses the internal sun.security.x509.X500Name class then you can enable access to it via -XaddExports:java.base/sun.security.x509=ALL-UNNAMED. This causes all members of the sun.security.x509 package in the java.base module to be exported to the special unnamed module in which classes from the class path are defined.
A few broadly-used internal APIs that cannot reasonably be implemented outside of the JDK, such as sun.misc.Unsafe, are still accessible for now. As outlined in JEP 260, however, these will be removed in a future release after suitable standard replacements are available.
The encapsulation of JDK-internal APIs is the change you’re most likely to notice when running an existing application. Other relevant but, for the most part, less-noticeable changes are described in the risks-and-assumptions section of JEP 261.
If you have trouble running an existing application on JDK 9 build 111 or later, and you think that’s due to the introduction of the module system but not caused by one of the changes listed in JEPs 260 or 261, then please let us know on the jigsaw-dev mailing list (you’ll need to subscribe first, if you haven’t already), or else submit a bug report via bugs.java.com.
New features

If you’d like to start learning about the module system itself, the video of my Devoxx BE 2015 keynote gives a high-level overview and The State of the Module System summarizes the design of the module system proposed for JSR 376. Further details are available in the six Jigsaw JEPs, listed on the main project page, and in videos of other sessions given at JavaOne 2015 and Devoxx BE 2015.
The module-system design will continue to evolve in the JSR for a while yet, based on feedback and experience. The implementation will evolve in parallel in the Project Jigsaw “Jake” forest, and we’ll continue to publish bleeding-edge early-access builds based on that code, separately from the more-stable JDK 9 builds.
I finished getting Excorporate and all its dependencies into GNU ELPA. Excorporate lets Emacs retrieve calendar items from an Exchange server.
I had to rewrite the default UI to use Org Mode, because Calfw isn’t entirely copyright-assigned to the FSF yet. The Calfw UI is still there for reference, but as a text file so that GNU ELPA’s build and publishing steps ignore it. Both UI handlers use the same updated APIs from the main
I made sure Excorporate and all its dependencies use only features available since GNU Emacs 24.1. This is pretty good coverage; Emacs 24.1 introduced the packaging system, so if an Emacs version supports packages, it supports Excorporate.
Other than DNS lookups, Excorporate is completely asynchronous, so it won’t block the Emacs main loop. And it is pure Emacs Lisp so it runs on any operating system that Emacs does.
In addition to Org Mode support, release 0.7.0 collects all the suggestions users have made on this blog and adds Exchange 2007 support.
M-x package-install RET excorporate
To get the source code:
git clone git://git.savannah.gnu.org/emacs/elpa.git
To report bugs:
I wanted to attend FOSDEM two weeks ago, but couldn’t because I was sick, in bed with fever. I was supposed to give a presentation about Shenandoah. Unfortunately, my backup Andrew Dinn also became sick that weekend, so that presentation simply didn’t happen. I want to summarize some interesting news that I wanted to show there, about Shenandoah’s performance.
When I talked about Shenandoah at FOSDEM 2015, I didn’t really announce any performance numbers, because we would have been embarrassed by them. We spent the better part of last year optimizing the heck out of it, especially the barriers in the C2 compiler, and here we are, with some great results.
Ok. This doesn’t really exist. The last SPECjvm release was SPECjvm2008. Unfortunately, SPEC doesn’t seem to care about SPECjvm anymore, which means the last Java version that runs it without any modifications is Java 7. We made some small fixes that allow it to run with Java 9 too. This invalidates compliance of the results, but they are still tremendously useful for comparison. So here it comes:
This was run on a 32 core box with 160GB of RAM, giving the JVM 140GB of heap. Exact same JVM and settings with G1 and Shenandoah. No special tuning parameters.
In terms of numbers, we get:
Throughput: Shenandoah 374 ops/m vs. G1 393 ops/m (95%; min 80%, max 140%)
Pauses: Shenandoah avg 41 ms, max 202 ms; G1 avg 240 ms, max 1126 ms
This means that the throughput of Java apps running with Shenandoah is on average 95% that of G1; depending on the actual application, it will range from around 80% to around 140%. However, pause times on such large heaps are significantly better with Shenandoah!
SPECjbb2015 measures throughput of a simulated shop system under response-time constraints, or service-level agreements (SLAs). It measures ‘max-jOPS’, which is the maximum throughput of the system without an SLA, and ‘critical-jOPS’, which is the throughput of the system under a restrictive SLA. Here are the numbers, G1 vs. Shenandoah, same machine and JVM settings as above:
Other exciting news is that Shenandoah is now stable enough that we want to encourage everybody who’s interested to try it out. The nice folks at Adopt-OpenJDK have set up a nightly build from where you can grab binaries (Shenandoah JDK8 or Shenandoah JDK9). Enjoy! (And please report back if you encounter any problems!)
This month I've released OrsonPDF version 1.7, a compact and fast API for creating PDF content in Java through the standard Graphics2D API. This release features:
With the new GPLv3 license option, I've now also made the OrsonPDF repo at GitHub public, which will make it easier for other developers to work directly with the source code. You can also use GitHub to report any bugs or other issues.
The original version of this blog entry is published at http://www.object-refinery.com/blog/blog-20151008.html.
The first release candidate is finally available. It can be downloaded here or from NuGet.
What's New (relative to IKVM.NET 8.0):
Changes since previous development snapshot:
Binaries available here: ikvmbin-8.1.5717.0.zip
There is a nice profile of me in this bimonthly issue of Java Magazine, and I am very flattered by this, so let me share it right away with you.
There is one question I was expecting, though, that didn’t come: “When did you start working on Java?”.
So, in order to give some more context, let me play with it and answer my own question here (and without space limits!). I think this is important, because it is about how I started to contribute to OpenJDK, it shows that you can do the same… if you are patient.
JM: When did you start working on Java?
Torre: I started to work in Java around its 1.3 release, and I have used it ever since. I started working on Java itself quite a bit later, though, probably around the Java 1.5/1.6 era. I was working to create an MSN messenger clone in Java on my Linux box, since all my friends were using it (MSN I mean, not Linux, unfortunately), including the dreaded emoticons, and no Linux client supported those at the time.
I had all the protocol stuff working, I could handshake and share messages (although I still had to figure out the emoticons part!), but I had a terrible problem. I needed to save user credentials. Well, Java has a fantastic Preferences API, easy enough, right? Except that what I was using wasn’t the proprietary JDK, it was the Free Software version of it: GNU Classpath.
Classpath at the time didn’t have Preferences support, so I was stuck. I think somebody was writing a filesystem-based preferences store, or perhaps it was in Classpath but not in GCJ, which is what everybody was using as a VM with the Classpath library. Anyway, when I started to look at the problem, I realised it would have been nicer to offer a GConf-based Preferences store and integrate the whole thing into the Gnome desktop (at the time, Gnome was a great desktop, nothing like today’s awfulness).
I was hooked. In fact, I never even finished my MSN messenger! After GConf, all sorts of stuff came in: the Decimal Formatter, the GStreamer sound backend, various fixes here and there. This is when I learned a lot about how Swing works internally, by following the work of Sven de Marothy, Roman Kennke and David Gilbert.
When Sun was about to release OpenJDK, I was in that very first group and witnessed the whole thing, including a lot of the behind-the-scenes work in the creation of this extremely important code contribution. The OpenJDK license is “GPL + Classpath exception” for a reason. I remember all the heroes that made Java Free Software.
I guess I was lucky, and the timing was perfect.
However, right at the beginning, contributing actual code to OpenJDK wasn't nearly as easy as it was in Classpath. There was (and is!) a lot of process, and anything but the most trivial changes took a lot of time.
But I persisted, and eventually Roman and I were the first external contributors to have code land in the JDK. Roman was, I believe, the first independent developer to have commit rights (I think the people who are still in my team at Red Hat today, and later SAP as well, already had some changes in, but at the time we two were the only completely external contributors).
It wasn't easy: I had to challenge myself and push a lot, and not give up. I had to challenge Sun, and challenge Oracle even more when it took the lead. But I did it. This is what I mean when I say that everybody can do it: you can develop the skills, and then you need to build the trust and not let it go. I'm not sure which part is harder, but if you persist it eventually comes. And then, all of a sudden, billions of people are using your code and you are a Java Champion.
So this is how it started.
Final 8.1 development snapshot. Release candidate 0 will be next (after .NET 4.6 RTM).
Binaries available here: ikvmbin-8.1.5666.zip
Just a couple of days ago I found out that some of my favourite musicians had decided to join together to release an album, with preorders available on a crowdfunding website, Music Raiser.
The name of the band is "O.R.k." and the founders are none other than Lef, Colin Edwin, Pat Mastelotto and Carmelo Pipitone.
You have probably heard their names. If not: Colin Edwin is the bassist from Porcupine Tree, while Carmelo Pipitone is the gifted guitarist from Marta Sui Tubi, an extremely original Italian band; they have probably done the most interesting things in Italian music in the last 15 years or so. Lef, aka Lorenzo Esposito Fornasari, has done so many things that it is quite hard to pick just one, but in the metal community he is probably best known for Obake. Finally, Pat Mastelotto is the drummer of King Crimson, and this alone made me jump out of my seat!
One of the pre-order bonuses was the chance to participate in a remix contest, and although I only got the stems late yesterday morning, I could not resist at least giving it a try. It is a great honour for me that they have put my attempt on their YouTube channel:
It's a weird feeling editing this music; after all, who am I to cut and remix and change the drum part (King Crimson, please forgive me!)? How did I ever dare to touch the guitars and voice, or rearrange the bass!?
But it was indeed a really fun experience, and I hope to be able to do this again in the future.
And who knows, maybe they will even like how I messed up their art and decide to put me on their album! Nevertheless, it has already been a great honour for me to see this material in semi-raw form (and a very interesting form it is!), so that has already been my first prize.
I'm now looking forward to listening to the rest of the album!
I'm happy to announce that JFreeSVG version 3.0 has been uploaded to SourceForge. JFreeSVG is a fast and lightweight API for creating SVG content in Java. This release features:
To ensure that JFreeSVG provides a fully functional Graphics2D implementation, I tested it using the SwingSet3 demo, with modifications to redirect the screen output directly to JFreeSVG to produce SVG output. I've always liked the way that Swing uses the Java2D API to cleanly separate its rendering from any direct knowledge of the actual output target. Here is an example:
[Embedded SVG demo output; browsers without SVG support show the fallback text "SVG not supported in your browser!"]
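The separation described above can be sketched in plain Java2D: the paint routine below targets only the Graphics2D abstraction, so the same code renders to an in-memory bitmap here, and would render to an SVG document if handed a JFreeSVG graphics context instead. The scene and the pixel check are illustrative only.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class RenderTargetDemo {

    // Paint code written purely against Graphics2D: it has no knowledge
    // of whether the target is the screen, a bitmap, or an SVG document.
    static void paintScene(Graphics2D g2) {
        g2.setColor(Color.WHITE);
        g2.fillRect(0, 0, 100, 100);
        g2.setColor(Color.RED);
        g2.fillOval(10, 10, 80, 80);
    }

    public static void main(String[] args) {
        // Here the target is an in-memory bitmap; with JFreeSVG the same
        // paintScene call would receive the SVG-producing Graphics2D.
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        paintScene(g2);
        g2.dispose();
        // The centre pixel falls inside the red oval.
        System.out.println(img.getRGB(50, 50) == Color.RED.getRGB());
    }
}
```

This is exactly why redirecting SwingSet3's output works: Swing never asks what kind of Graphics2D it is painting to.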
This turned out to be an effective test, because it uncovered a bug in one of the drawImage() methods that had remained undetected in all previous JFreeSVG releases.
One last thing... the JFreeSVG repo on GitHub is now public, which will make it easier for other developers to tweak the code for experimentation or bug fixes (if you spot a bug, though, please report it to me).
If you'd like to give feedback on this post, please comment via the JFreeSVG forum.
I recently had occasion to scan some papers using a sheet-fed Ricoh printer/scanner/fax/copier. It seems to think that about 6 MB is the largest email attachment it can send, so it splits the PDFs into multiple base64-encoded attachments. If you find yourself in a similar situation:
cat part1 part2 > all.base64
cat all.base64 | base64 -d > myscan.pdf
As I wrote previously, Project Jigsaw is coming into JDK 9 in several large steps. JEP 200 defines the modular structure of the JDK, JEP 201 reorganizes the JDK source code into modular form, and JEP 220 restructures the JDK and JRE run-time images to support modules. The actual module system will be defined in JSR 376, which is just getting under way, and implemented by a corresponding JEP, yet to be submitted.
We implemented the source-code reorganization (JEP 201) last August. This step, by design, had no impact on developers or end users.
Most of the changes for modular run-time images (JEP 220) were integrated late last week and are now available in JDK 9 early-access build 41. This step, in contrast to the source-code reorganization, will have significant impact on developers and end users. All of the details are in the JEP, but here are the highlights:
JRE and JDK images now have identical structures. Previously a JDK image embedded the JRE in a jre subdirectory; now a JDK image is simply a run-time image that happens to contain the full set of development tools and other items historically found in the JDK.
User-editable configuration files previously located in the lib directory are now in the new conf directory. The files that remain in the lib directory are private implementation details of the run-time system, and should never be opened or modified.
The endorsed-standards override mechanism has been removed. Applications that rely upon this mechanism, either by setting the system property java.endorsed.dirs or by placing jar files into the lib/endorsed directory of a JRE, will not work. We expect to provide similar functionality later in JDK 9 in the form of upgradeable modules.
The extension mechanism has been removed. Applications that rely upon this mechanism, either by setting the system property java.ext.dirs or by placing jar files into the lib/ext directory of a JRE, will not work. In most cases, jar files that were previously installed as extensions can simply be placed at the front of the class path.
The internal files rt.jar, tools.jar, and dt.jar have been removed. The content of these files is now stored in a more efficient format in implementation-private files in the lib directory. Class and resource files previously in tools.jar and dt.jar are now always visible via the bootstrap or application class loaders in a JDK image.
A new, built-in NIO file-system provider can be used to access the class and resource files stored in a run-time image. Tools that previously read rt.jar and other internal jar files directly should be updated to use this file system.
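As a sketch of how a tool can use that file-system provider, the run-time image is reachable through the jrt: URI scheme. The snippet below uses the API as it ships in the final JDK 9; the exact details available in early-access build 41 may have differed:

```java
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;

public class JrtPeek {
    public static void main(String[] args) throws Exception {
        // The jrt:/ NIO file system exposes the class and resource files
        // stored in the modular run-time image, replacing direct reads
        // of rt.jar and friends.
        FileSystem jrt = FileSystems.getFileSystem(URI.create("jrt:/"));
        Path object = jrt.getPath("/modules/java.base/java/lang/Object.class");
        System.out.println(Files.exists(object));
    }
}
```

A tool that previously opened rt.jar with a zip/jar reader can walk these paths with the ordinary java.nio.file API instead.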
We’re aware that these changes will break some applications, in particular IDEs and other development tools which rely upon the internal structure of the JDK. We think that the improvements to performance, security, and maintainability enabled by these changes are, however, more than worth it. We’ve already reached out to the maintainers of the major IDEs to make sure that they know about these changes, and we’re ready to assist as necessary.
If you have trouble running an existing application on JDK 9 build 41 or later and you think it’s due to this restructuring, yet not caused by one of the changes listed above or in JEP 220, then please let us know on the jigsaw-dev mailing list (you’ll need to subscribe first, if you haven’t already), or else submit a bug report via bugs.java.com. Thanks!