Planet Classpath

We are pleased to announce the release of IcedTea 3.6.0!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 8 support with the October 2017 security fixes from OpenJDK 8 u151.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 3.6.0 (2017-10-31)

  • Security fixes
  • New features
    • PR3469: Alternative path to tzdb.dat
    • PR3483: Separate addition of nss.cfg and tz.properties into separate targets
    • PR3484: Move SystemTap support to its own target
    • PR3485: Support additional targets for the bootstrap build
  • Import of OpenJDK 8 u151 build 12
    • S8029659: Keytool, print key algorithm of certificate or key entry
    • S8057810: New defaults for DSA keys in jarsigner and keytool
    • S8075484, PR3473, RH1490713: SocketInputStream.socketRead0 can hang even with soTimeout set
    • S8077670: sun/security/krb5/auto/MaxRetries.java may fail with BindException
    • S8087144: sun/security/krb5/auto/MaxRetries.java fails with Retry count is -1 less
    • S8153146: sun/security/krb5/auto/MaxRetries.java failed with timeout
    • S8157561: Ship the unlimited policy files in JDK Updates
    • S8158517: Minor optimizations to ISO10126PADDING
    • S8171319: keytool should print out warnings when reading or generating cert/cert req using weak algorithms
    • S8177569: keytool should not warn if signature algorithm used in cacerts is weak
    • S8177837: need to upgrade install tools
    • S8178714: PKIX validator nameConstraints check failing after change 8175940
    • S8179423: 2 security tests started failing for JDK 1.6.0 u161 b05
    • S8179564: Missing @bug for tests added with JDK-8165367
    • S8181048: Refactor existing providers to refer to the same constants for default values for key length
    • S8182879: Add warnings to keytool when using JKS and JCEKS
    • S8184937: LCMS error 13: Couldn’t link the profiles
    • S8185039: Incorrect GPL header causes RE script to miss swap to commercial header for licensee source bundle
    • S8185040: Incorrect GPL header causes RE script to miss swap to commercial header for licensee source bundle
    • S8185778: 8u151 L10n resource file update
    • S8185845: Add SecurityTools.java test library
    • S8186503: sun/security/tools/jarsigner/DefaultSigalg.java failed after backport to JDK 6/7/8
    • S8186533: 8u151 L10n resource file update md20
    • S8186674: Remove JDK-8174109 from CPU Aug 21 week builds
  • Backports
    • S8035496, PR3487: G1 ARM: missing remset entry noticed by VerifyAfterGC for vm/gc/concurrent/lp50yp10rp70mr30st0
    • S8146086, PR3439, RH1478402: Publishing two webservices on same port fails with “java.net.BindException: Address already in use”
    • S8184673, PR3475, RH1487266: Fix compatibility issue in AlgorithmChecker for 3rd party JCE providers
    • S8185164, PR3438: GetOwnedMonitorInfo() returns incorrect owned monitor
    • S8187822, PR3478, RH1494230: C2 conditional move optimization might create broken graph
  • Bug fixes
  • PPC port
  • AArch64 port
    • S8161190, PR3488: AArch64: Fix overflow in immediate cmp instruction
    • S8187224, PR3488: aarch64: some inconsistency between aarch64_ad.m4 and aarch64.ad
  • SystemTap
    • PR3467, RH1492139: Hotspot object_alloc tapset uses HeapWordSize incorrectly
  • Shenandoah
    • Add missing UseShenandoahGC checks to C2
    • [backport] Add JVMTI notifications to Shenandoah GC pauses.
    • [backport] After Evac verification should run consistently
    • [backport] All definitions should start with Shenandoah*
    • [backport] Allocation latency tracing
    • [backport] Allow allocations in pinned regions
    • [backport] Assorted monitoring support fixes
    • [backport] Avoid Full STW GC on System.gc() + related fixes
    • [backport] BrooksPointer tracing overwhelms -Xlog:gc=trace
    • [backport] Cannot do more than 1000 Full GCs
    • [backport] Cap heap size for TestRegionSizeArgs test
    • [backport] Cleanup “dirty” mentions
    • [backport] Cleanup unused methods and statements + Trivial cleanup: removed unused field, etc.
    • [backport] Common pause marker to capture everything before/after pause
    • [backport] Consistent print_on and tty handling
    • [backport] “continuous” heuristics
    • [backport] Disable biased locking by default
    • [backport] Fix build error: avoid loops with empty bodies
    • [backport] Fix build error: switches over enums should take all enums
    • [backport] Fix build error: verifier liveness should not be implicitly casted to size_t
    • [backport] Fixed assertion failures when printing heap region to trace output
    • [backport] Fixed C calling convention of shenandoah_wb() on Windows
    • [backport] LotsOfCycles test always degrades to Full GC
    • [backport] Made ShenandoahPrinter debug only
    • [backport] Make sure different Verifier levels work
    • [backport] Make sure we have at least one memory pool per memory manager (JMX) + JMX double-counts heap used size
    • [backport] Mark heuristics diagnostic/experimental
    • [backport] Move Verifier “start” message under (gc,start)
    • [backport] On-demand commit as heap resizing strategy
    • [backport] Periodic GC
    • [backport] PhiNode::has_only_data_users() needs to apply to shenandoah barrier only
    • [backport] Pinning humongous regions should be allowed
    • [backport] Reclaimed humongous regions should count towards immediate garbage
    • [backport] Refactor region flags into finite state machine
    • [backport] Refactor ShConcThread dispatch
    • [backport] Refactor ShenandoahFreeSet + Fast-forward over humongous regions to keep “current” non-humongous
    • [backport] Refactor ShenandoahHeapLock
    • [backport] Refactor ShenandoahHeapRegionSet
    • [backport] Region (byte|word) shifts as the replacement for divisions
    • [backport] Rehash -XX:-UseTLAB in tests + Rehash allocation tests
    • [backport] Rename inline guards
    • [backport] Selectable humongous threshold + Humongous top() should be correct for iteration
    • [backport] Shortcut concurrent cycle when enough immediate garbage is reclaimed
    • [backport] Templatize and improve inlining of arraycopy and clone barriers.
    • [backport] TestRegionSampling test
    • [backport] TestSmallHeap test for Shenandoah
    • [backport] Uncommit heap regions after given delay
    • [backport] Underflow in adaptive free_threshold calculation
    • [backport] Unlock more GC-specific tests for Shenandoah
    • [backport] Update counters on slow-path more rarely
    • [backport] Verifier should avoid pushing on stack when walking objects past TAMS
    • [backport] Verifier should walk cset and humongous regions
    • [backport] Verify humongous regions liveness
    • [backport] Verify liveness data
    • Correct way to fix Windows call convention issue
    • Fix build error in release config.
    • Fixed Fixed message logging
    • Handle Java heap initialization and expansion failures
    • Make sure -verbose:gc, PrintGC, PrintGCDetails work consistently
    • Missing barriers on constant oops + acmp rework + cas fix + write barrier on constant oop fix
    • Missing UseShenandoahGC check in LibraryCallKit::inline_multiplyToLen()
    • Missing UseShenandoahGC check to C2
    • OOME in SurrogateLockerThread deadlocks the GC cycle
    • Properly unlock ShenandoahVerify
    • Remove unused memory_for, fixing the build
    • Remove useless code following acmp rework
    • Revert accidental G1 closure rename
    • Test bug: test library and flags in TestHeapAlloc
    • UnlockDiagnosticVMOptions flag is needed for ShenandoahVerify
    • Write barrier pin and expand cleanup

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to handle this key.

SHA256 checksums:

  • 74a43c4e027c72bb1c324f8f73af21565404326c9998f534f234ec2a36ca1cdb icedtea-3.6.0.tar.gz
  • 6050c8e69974a33641b764afdbd91f07725336abd20e7f260e2a0dbf562f8b32 icedtea-3.6.0.tar.gz.sig
  • d0a0a9ce58b3ed29f2deecef8b78f28a79315f4a6330ee833410f79cbf48417e icedtea-3.6.0.tar.xz
  • 2de4119e3e59cf7acbb1f9c93a5760397c846116067d3c45504bd3ff6297f9a8 icedtea-3.6.0.tar.xz.sig

The checksums can be downloaded from:

A 3.6.0 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-3.6.0.tar.gz

or:

$ tar x -I xz -f icedtea-3.6.0.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-3.6.0/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

I attended the 5th Chrome Dev Summit this week. The talks were all recorded and are available via the schedule (the keynote and leadership panel on day 1 are perhaps of broadest interest and highest bang-for-buck viewing value). It was a high-quality, well-produced event with an intimate feel – I was very surprised when Robert Nyman told me it was over 700 people! I appreciated the good vegetarian food options, and I was very impressed by the much-better-than-typical-tech-conference gender representation and code of conduct visibility.

It doesn’t always look this way from the outside, but the various browser engine teams are more often than not working toward the same goals and in constant contact. For those who don’t know that, it was nice to see the shoutouts for other browsers and the use of Firefox’s new logo!

The focus of the event IMO was, as expected, the mobile Web. While the audience was Web developers, it was interesting to see what the Chrome team is focusing on. Some of the efforts felt like Firefox OS 4 years ago but I guess FxOS was just ahead of its time 😉

From my perspective, Firefox is in pretty good shape for supporting the things Chrome was promoting (Service Workers, Custom Elements and Shadow DOM, wasm, performance-related tooling, etc.). There are of course areas we can improve: further work on Fennec support for add-to-homescreen, devtools, and seeing a few things through to release (e.g. JS modules, Custom Elements, and Shadow DOM – work is underway and we’re hoping to ship soon!). Oh, and one notable exception to being aligned on things is the Network Information API, which the Mozilla community isn’t super fond of.

Other highlights for me personally included the Chrome User Experience Report (“a public dataset of key user experience metrics for popular origins on the web, as experienced by Chrome users under real-world conditions”) and the discussion about improving the developer experience for Web Workers.

It was great putting faces to names and enjoying sunny San Francisco (no, seriously, it was super sunny and hot). Thanks for the great show, Google!

I did an interview with the Software Freedom Conservancy to discuss why I try to contribute to the Conservancy whenever I can: because I believe many more free software communities deserve to have a home for their project at the Conservancy.

Please support the Software Freedom Conservancy by donating so they will be able to provide a home to many more communities. A donation of 10 US dollars a month will make you an official sponsor. Or donate directly to one of their many member projects.

Software Freedom Conservancy Member Projects

It’s been a long road, but at last the puzzle is complete: Today we delivered Project Jigsaw for general use, as part of JDK 9.

Jigsaw enhances Java to support programming in the large by adding a module system to the Java SE Platform and to its reference implementation, the JDK. You can now leverage the key advantages of that system, namely strong encapsulation and reliable configuration, to climb out of JAR hell and better structure your code for reusability and long-term evolution.
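
To make those two benefits concrete, here is a minimal sketch of a module declaration; the module and package names are hypothetical, purely for illustration. A module-info.java file at the root of a module declares up front which modules it requires (reliable configuration) and which packages it exports, while everything else remains hidden from other modules (strong encapsulation):

// module-info.java -- a hypothetical application module
module com.example.app {
    // Reliable configuration: dependencies are declared explicitly
    // and checked at compile time and at launch time.
    requires java.sql;

    // Strong encapsulation: only this package is readable by other
    // modules; com.example.app.internal and friends stay hidden.
    exports com.example.app.api;
}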

Jigsaw also applies the module system to the Platform itself, and to the massive, monolithic JDK, to improve security, integrity, performance, and scalability. The last of these goals was originally intended to reduce download times and scale Java SE down to small devices, but it is today just as relevant to dense deployments in the cloud. The Java SE 9 API is divided into twenty-six standard modules; JDK 9 contains dozens more for the usual development and serviceability tools, service providers, and JDK-specific APIs. As a result you can now deliver a Java application together with a slimmed-down Java run-time system that contains just the modules that your application requires.

We made all these changes with a keen eye — as always — toward compatibility. The Java SE Platform and the JDK are now modular, but that doesn’t mean that you must convert your own code into modules in order to run on JDK 9 or a slimmed-down version of it. Existing class-path applications that use only standard Java SE 8 APIs will, for the most part, work without change.

Existing libraries and frameworks that depend upon internal implementation details of the JDK may require change, and they may cause warnings to be issued at run time until their maintainers fix them. Some popular libraries, frameworks, and tools — including Maven, Gradle, and Ant — were in this category but have already been fixed, so be sure to upgrade to the latest versions.

Looking ahead

It’s been a long road to deliver Jigsaw, and I expect it will be a long road to its wide adoption — and that’s perfectly fine. Many developers will make use of the newly-modular nature of the platform long before they use the module system in their own code, and it will be easier to use the module system for new code rather than existing code.

Modularizing an existing software system can, in fact, be difficult. Sometimes it won’t be worth the effort. Jigsaw does, however, ease that effort by supporting both top-down and bottom-up migration to modules. You can thus begin to modularize your own applications long before their dependencies are modularized by their maintainers. If you maintain a library or framework then we encourage you to publish a modularized version of it as soon as possible, though not until all of its dependencies have been modularized.
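
As a sketch of what top-down migration looks like in practice (the module names below are hypothetical): a plain, unmodularized JAR placed on the module path is treated as an automatic module whose name is derived from its file name, so your own application module can already require it:

// module-info.java for an application migrating top-down
module com.example.migrated.app {
    requires java.xml;

    // A plain commons-logging JAR on the module path is treated as
    // the automatic module "commons.logging", even though the
    // library itself does not yet contain a module-info.
    requires commons.logging;
}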

Modularizing the Java SE Platform and the JDK was extremely difficult, but I’m confident it will prove to have been worth the effort: It lays a strong foundation for the future of Java. The modular nature of the platform makes it possible to remove obsolete modules and to deliver new yet non-final APIs in incubator modules for early testing. The improved integrity of the platform, enabled by the strong encapsulation of internal APIs, makes it easier to move Java forward faster by ensuring that libraries, frameworks, and applications do not depend upon rapidly-changing internal implementation details.

Learning more

There are by now plenty of ways to learn about Jigsaw, from those of us who created it as well as those who helped out along the way.

If your time is limited, consider one or more of the following:

  • The State of the Module System is a concise, informal written overview of the module system. (It’s slightly out of date; I’ll update it soon.)

  • Make Way for Modules!, my keynote presentation at Devoxx Belgium 2015, packs a lot of high-level information into thirty minutes. I followed that up a year later with a quick live demo of Jigsaw’s key features.

  • Alex Buckley’s Modular Development with JDK 9, from Devoxx US 2017, covers the essentials in more depth, in just under an hour.

If you really want to dive in:

Comments, questions, and suggestions are welcome on the jigsaw-dev mailing list. (If you haven’t already subscribed to that list then please do so first, otherwise your message will be discarded as spam.)

Thanks!

Project Jigsaw was an extended, exhilarating, and sometimes exhausting nine-year effort. I was incredibly fortunate to work with an amazing core team from pretty much the very beginning: Alan Bateman, Alex Buckley, Mandy Chung, Jonathan Gibbons, and Karen Kinnear. To all of you: My deepest thanks.

Key contributions later on came from Sundar Athijegannathan, Chris Hegarty, Lois Foltan, Magnus Ihse Bursie, Erik Joelsson, Jim Laskey, Jan Lahoda, Claes Redestad, Paul Sandoz, and Harold Seigel.

Jigsaw benefited immensely from critical comments and suggestions from many others including Jayaprakash Artanareeswaran, Paul Bakker, Martin Buchholz, Stephen Colebourne, Andrew Dinn, Christoph Engelbert, Rémi Forax, Brian Fox, Trisha Gee, Brian Goetz, Mike Hearn, Stephan Herrmann, Juergen Hoeller, Peter Levart, Sander Mak, Gunnar Morling, Simon Nash, Nicolai Parlog, Michael Rasmussen, John Rose, Uwe Schindler, Robert Scholte, Bill Shannon, Aleksey Shipilëv, Jochen Theodorou, Weijun Wang, Tom Watson, and Rafael Winterhalter.

To everyone who contributed, in ways large and small: Thank you!

Thanks to Alan Bateman and Alex Buckley for comments on drafts of this entry.

The Talos II is now available for pre-order. It is the more affordable, more power-efficient successor to the Talos I machine I wrote about in a previous post.

This is a very affordable machine for how powerful it will be, and there are minimal mainboard + CPU + RAM bundles (e.g., this one), around which one can build a workstation with more readily-available parts. I’ve placed an order for one of the bundles, and will buy the chassis and GPU separately (mainly to avoid high cross-border shipping fees for the full workstation).

The Talos II is an important machine for Free Software, and will likely be RYF-certified by the FSF. Pre-orders end September 15th!

For over twenty years the Java SE Platform and the JDK have evolved in large, irregular, and somewhat unpredictable steps. Each feature release has been driven by one or a few significant features, and so the schedule of each release has been adjusted as needed — sometimes more than once! — in order to accommodate the development of those features.

This approach made it possible to deliver big new features at a high level of quality, after thorough review and testing by early adopters. The downside, however, was that smaller API, language, and JVM features could only be delivered when the big features were ready.

This was an acceptable tradeoff in the decades before and after the turn of the century, when Java competed with just a few platforms which evolved at a similar stately pace. Nowadays, however, Java competes with many platforms which evolve at a more rapid pace.

For Java to remain competitive it must not just continue to move forward — it must move forward faster.

Back on the train

Five years ago I mused in this space on the tension between developers, who prefer rapid innovation, and enterprises, which prefer stability, and the fact that everyone prefers regular and predictable releases.

To address these differing desires I suggested, back then, that we switch from the historical feature-driven release model to a time-driven “train” model, with a feature release every two years. In this type of model the development process is a continuous pipeline of innovation that’s only loosely coupled to the actual release process, which itself has a constant cadence. Any particular feature, large or small, is merged only when it’s nearly finished. If a feature misses the current train then that’s unfortunate but it’s not the end of the world, since the next train will already be waiting and will also leave on schedule.

The two-year train model was appealing in theory, but proved unworkable in practice. We took an additional eight months for Java 8 in order to address critical security issues and finish Project Lambda, which was preferable to delaying Lambda by two years. We initially planned Java 9 as a two-and-a-half year release in order to include Project Jigsaw, which was preferable to delaying Jigsaw by an additional eighteen months, yet in the end we wound up taking an additional year and so Java 9 will ship this month, three and a half years after Java 8.

A two-year release cadence is, in retrospect, simply too slow. To achieve a constant cadence we must ship feature releases at a more rapid rate. Deferring a feature from one release to the next should be a tactical decision with minor inconveniences rather than a strategic decision with major consequences.

So, let’s ship a feature release every six months.

That’s fast enough to minimize the pain of waiting for the next train, yet slow enough that we can still deliver each release at a high level of quality.

Proposal

Taking inspiration from the release models used by other platforms and by various operating-system distributions, I propose that after Java 9 we adopt a strict, time-based model with a new feature release every six months, update releases every quarter, and a long-term support release every three years.

  • Feature releases can contain any type of feature, including not just new and improved APIs but also language and JVM features. New features will be merged only when they’re nearly finished, so that the release currently in development is feature-complete at all times. Feature releases will ship in March and September of each year, starting in March of 2018.

  • Update releases will be strictly limited to fixes of security issues, regressions, and bugs in newer features. Each feature release will receive two updates before the next feature release. Update releases will ship quarterly in January, April, July, and October, as they do today.

  • Every three years, starting in September of 2018, the feature release will be a long-term support release. Updates for these releases will be available for at least three years and quite possibly longer, depending upon your vendor.

In this model the overall rate of change should be about the same as it is today; what’s different is that there will be many more opportunities to deliver innovation. The six-month feature releases will be smaller than the multi-year feature releases of the past, and therefore easier to adopt. Six-month feature releases will also reduce the pressure to backport new features to older releases, since the next feature release will never be more than six months away.

Developers who prefer rapid innovation, so that they can leverage new features in production as soon as possible, can use the most recent feature release or an update release thereof and move on to the next one when it ships. They can deliver an application in a Docker image, or other type of container package, along with the exact Java release on which the application will run. Since the application and the Java release can always be tested together, in a modern continuous-integration and continuous-deployment pipeline, it will be straightforward to move from one Java release to the next.

Enterprises that prefer stability, so that they can run multiple large applications on a single shared Java release, can instead use the current long-term support release. They can plan ahead to migrate from one long-term support release to the next, like clockwork, every three years.

To make it clear that these are time-based releases, and to make it easy to figure out the release date of any particular release, the version strings of feature releases will be of the form $YEAR.$MONTH. Thus next year’s March release will be 18.3, and the September long-term support release will be 18.9.

Implications

This proposal will, if adopted, require major changes in how contributors in the OpenJDK Community produce the JDK itself; I’ve posted some initial thoughts as to how we might proceed there. It will be made easier if we can reduce the overhead of the Java Community Process, which governs the evolution of the Java SE Platform; my colleagues Brian Goetz and Georges Saab have already raised this topic with the JCP Executive Committee.

This proposal will, ultimately, affect every developer, user, and enterprise that relies upon Java. It will, if successful, help Java remain competitive — while maintaining its core values of compatibility, reliability, and thoughtful evolution — for many years to come.

Comments and questions are welcome, either on the OpenJDK general discussion list (please subscribe to that list in order to post to it) or on Twitter, with the hashtag #javatrain.

An update on my notes to compile NetBSD kernels and userland.


Build / update the tools:

-U : for unprivileged building
-u : to update
-m : to specify the target architecture

./build.sh -U -u tools

To cross-compile, this would be enough:

./build.sh -U -m i386 -u tools

However, since I want to build on the same computer and the build script would get confused, we add -T /usr/tools-${HOST_ARCH}-${TARGET_ARCH} and also separate the object directory with -O:

./build.sh -U -m i386 -u -O /usr/obj-amd64-i386 -T /usr/tools-amd64-i386 tools


Then we build the kernel:

./build.sh -U kernel=CONFNAME

or for cross compilation:
./build.sh -U -O /usr/obj-amd64-i386 -T /usr/tools-amd64-i386 -m i386 -u kernel=GENERIC

The modules:
./build.sh -U -u modules installmodules=/

Now to build userland, including X11. I did not attempt to cross-build userland yet.

./build.sh -U -x -u distribution

./build.sh -U -x -u distribution install=/

LaternaMagica is the image viewing and exporting tool of the GNUstep Application Project. Quickly create a list of images and export it, converting the file type and resizing the images to a uniform size: handy for preparing your next slide show, a folder of thumbnails, or a folder to give to your local printer to print out pictures.

The new 0.5 version comes with a lot of fixes and improvements.
  • Improved drag&drop support
  • Improved recursion of folders, including drag&drop of folders
  • Rotation has keyboard shortcuts
  • Image rotation is remembered for export
  • Code cleanup, optimization and minor bug fixes 
  • Exporting (including Scaling and Rotation) has many fixes:
    • Better support of Alpha images
    • Resolution and Size are better interpreted and preserved. 
    • Better scaling algorithm imported from PRICE



We are pleased to announce the release of IcedTea 2.6.11!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 7 support in the 2.6.x series with the July 2017 security fixes from OpenJDK 7 u151.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 2.6.11 (2017-08-08)

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to handle this key.

SHA256 checksums:

  • 5dfbe0f40d8b6004d49add4ec398d1c91d4c02b11716297055e5d73919fb85be icedtea-2.6.11.tar.gz
  • f100c3bfffa5ea0b9a2184346856a1d3db7f8d2a45c74523ad928dcf179ad0e3 icedtea-2.6.11.tar.gz.sig
  • 20063c314535e4ed4b8099e497b880e4f346c85e7315a2573d0f398b973777c5 icedtea-2.6.11.tar.xz
  • 43bf76c60d219ef76b0e03484ee92d0d7657dafae51f21ed088ee5bb5ee654ca icedtea-2.6.11.tar.xz.sig

The checksums can be downloaded from:

A 2.6.11 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-2.6.11.tar.gz

or:

$ tar x -I xz -f icedtea-2.6.11.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-2.6.11/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

Advogato has been archived.

When I started working on Free Software, Advogato was the “social network” where people would keep their diaries (I don’t believe we called them blogs yet). I still remember how proud I was when people certified me as Apprentice.

A lot of people on Planet Classpath still have their diaries imported from Advogato. robilad, audriusa, saugart, rmathew, Anthony, kuzman, jvic, jserv, aph, twisti, Ringding please let me know if you found a new home for your diary.

After almost fifteen years I have decided to quit working on IKVM.NET. The decision has been a long time coming. Those of you that saw yesterday’s Twitter spat, please don’t assume that was the cause. Rather, it shared an underlying cause. I’ve slowly been losing faith in .NET. Looking back, I guess this process started with the release of .NET 3.5. On the Java side things don’t look much better. The Java 9 module system reminds me too much of the generics erasure debacle.

I hope someone will fork IKVM.NET and continue working on it. Although, I’d appreciate it if they’d pick another name. I’ve gotten so much criticism for the name over the years that I’d like to hang on to it 😊

I’d like to thank the following people for helping me make this journey or making the journey so much fun: Brian Goetz, Chris Brumme, Chris Laffra, Dawid Weiss, Erik Meijer, Jb Evain, John Rose, Mads Torgersen, Mark Reinhold, Volker Berlin, Wayne Kovsky, The GNU Classpath Community, The Mono Community.

And I want to especially thank my friend Miguel de Icaza for his guidance, support, inspiration and tireless efforts to promote IKVM.

Thank you all and goodbye.

[NOTE: This article talks about commercial products and contains links to them. I do not receive any money if you buy those tools, nor do I work for or am affiliated with any of those companies. The opinions expressed here are mine and the review is subjective.]

This is my attempt at a review of Spitfire Audio BT Phobos. Before diving into the review, and since I know I will be particularly critical of some aspects, I think it’s fair to assess the plugin right away: BT Phobos is an awesome tool, make no mistake.

BT Phobos is a “polyconvolution” synthesiser. It is, in fact, the first “standalone” plugin produced by Spitfire Audio, which is one of the companies I respect the most when it comes to music production and sample based instruments.

The term polyconvolution is used by the Spitfire Audio team to indicate the simultaneous use of three convolvers for four primary audio paths: you can send any amount of the output of each of the four primary sources (numbered 1 to 4) to each of the three convolution engines (named W, X and Y).

Source material controls

There is a lot of flexibility in the mixing capabilities: there are, of course, separate dry/wet signal knobs that send a specific portion of the unprocessed source material to the “amplifier” module and control how much of the signal goes to the convolution circuits, and finally how much of each of the convolution engines applies to each source sound.

This last bit is achieved by means of an interesting nabla-shaped X/Y pad: by positioning the icon that represents a source module closer to a corner, it’s possible to activate just the convolution engine that corner represents; for example, top left is the W engine, top right the X and bottom the Y. Manually moving the icon gradually introduces contributions from the other engines, and double-clicking on the icon makes all convolvers contribute equally to the wet sound by positioning it at the center of the nabla.

The convolution mixer

Finally, each convolver has a control that allows you to change the output level of the convolution engine before it reaches its envelope shaper. Spitfire Audio has released a very interesting flow diagram that shows the signal path in detail, linked below for reference.

BT Phobos signal path

In addition to the controls just described, the main GUI has basic controls to tweak the source material with an ADSR envelope, directly accessible below each of the main sound sources as well as the convolution modules; more advanced settings can be accessed by clicking on the number or letter that identifies the module.

The advanced controls interface

An example of such controls is the Hold parameter, which lets the user adjust how long the sound is held at full level before entering the Decay phase of its envelope. Another useful tool is the set of sampling and IR offset controls, which allow tweaking parameters like the starting point of the material, its quantisation, and its Speed (the playback speed of the samples, a function of the host tempo); there is also a control to influence the general pitch of the sound; finally, a simple but effective section is dedicated to filtering – although a proper EQ is missing – as well as panning and level adjustments.

All those parameters are particularly important settings when using loops, but they also contribute to shaping the sound of the pitched material, and they can be randomised for interesting effects and artefacts generated from the entropy (you can randomise just the material selection rather than all the parameters).

Modulation is also present, of course, with LFOs of various kinds that can be used to modulate basically everything. You can access them either by clicking on the mappings toggle below the ADSR envelope of each section, or by using the advanced settings pages.

The amount of tweaking that can be applied to the material in both the source and the convolution engines is probably the most important aspect of BT Phobos, since it gives an excellent amount of freedom to create new sounds from what’s available, which is already a massive amount of content, and allows building wildly different patches with a bit of work. It’s definitely not straightforward, though, and it takes time to understand the combined effect each setting has on the whole.

Since the material is polyphonic, the Impulse Responses for the convolution are created on the fly; in fact, one interesting characteristic of BT Phobos is that there is no difference between material for the convolution engines and material for the source modules: both draw from the same pool of sounds.

BT Phobos beautiful GUI

There is a difference in the type of material, though: loop-based samples are, well, looped (and tempo-synced), and their pitch does not change based on the key that triggers them (although you can still affect the general pitch of the sound with the advanced controls), while “tonal” material is pitched and follows the MIDI notes.

One note about the LFOs: the mappings are “per module”. In other words, it is possible to modulate almost every parameter inside a single module, be it one of the four input sources or one of the three convolution engines, but there seems to be no way to define a global mapping of some kind. For example, I found a very nice patch from Mr. Christian Henson (who incidentally made, at least in my opinion, the best and most balanced overall presets), and I noticed I could make it even more interesting by using the modulation wheel. I wanted to modulate the CC1 message with an LFO (ideally it would be even better to have access to a custom envelope, but BT Phobos doesn’t have any for modulation use), but I could not find a way to do that other than using Logic’s own MIDI FX. I understand that MIDI signals are generated outside the scope of the plugin, but it would be fantastic to have the option of tweaking and modulating everything from within the synth itself.

All the sources and convolvers can be assigned to separate parts of the keyboard by tweaking the mapper at the bottom of the GUI. It is not possible to map a sound to start from an offset on the keyboard – for example, to play C1 on the keyboard but trigger C2, or any other note – but you can of course change the global pitch, which has effectively the same result, and as said before it can also be modulated with an LFO or via DAW automation for more interesting effects.

Keyboard mapping tool

Indeed, the flexibility of the tool and the number of options at your disposal for tweaking the sounds are very impressive. Most patches are very nice and ready to be used as they are, and blend nicely with lots of disparate styles. Some patches are very specific, though, and pose a challenge to use. Generally, I would consider these starting points for exploration rather than “final”.

When reading about BT Phobos in the weeks before its release, many people asked whether you could add your own sounds to it or not. It’s not possible, unfortunately.

At first, I thought that wasn’t a limitation or a deal breaker. I still think it’s not a deal breaker, but I do see the value BT Phobos adds even just as a standalone synth, as opposed to recreating the same kind of signal path manually with external tools to give your own content the “Phobos treatment”. That is entirely possible, of course, for example just with Alchemy and Space Designer (which are both included in MainStage, so you can get them for a staggering 30 euros if you are a Mac user, even if you don’t use Logic Pro X!), but we would be trading away the immediacy that BT Phobos delivers.

That, maybe, is my main criticism of this synth, and I hope Spitfire Audio turns BT Phobos into a fully fledged tool for sound design over time, maybe enabling access to spectral shaping in some form or another, so we can literally paint over (or paint away!) portions of the sound, which is something you can do with iZotope Iris or Alchemy and is a very powerful way to shape a sound and do sound design in general.

Another thing that is missing is a sound effects module, although I don’t know how important that is, given that there are thousands of outstanding plugins that do all sorts of effects, from delay to chorus, etc. In fact, many patches benefit from added reverb (I use Eventide Blackhole and found that it works extremely well with BT Phobos, since it’s also prominently used for weird sound effects). But it may be interesting to play by putting some effects (including a more proper EQ section) in various places in the signal path, although it’s all too easy to generate total chaos from such experimentation, so it’s possible that Spitfire Audio simply decided to leave this option for another time and instead focus on a better overall experience.

And there’s no arpeggiator! Really!

The number of polyphonic voices can be altered. Spitfire Audio states that the synth tweaks the number of voices at startup to match the characteristics of your computer, but I can’t confirm that, since every change I make seems to stick, even if I occasionally hear some pops and crackles at higher settings. Nevertheless, the CPU usage is pretty decent unless you go absolutely crazy with the polyphony count. I also noted that the number affects the clarity of the sound. This is understandable, since a higher count means more notes can be generated at the same time, which means more things are competing for the same spectrum, and things can become very confusing very quickly. On the other hand, a lower polyphony count has a bad impact on how the notes are generated: I sometimes feel that things just stop generating sound, which is counter-intuitive and very disturbing, especially since it’s very easy to reach a high polyphony count with all those sources and convolvers.

Also worth noting is that, by nature, some patches have very wild differences in their envelopes and level settings, which means it’s all too easy to move from a quiet to a very loud patch just by clicking “next” (which is possible in Logic, at least, with the next/prev patch buttons on top of the plugin main frame). The synth does not stop the sound, nor does it make any attempt to fade from one sound to the next; instead, the convolutions simply keep working on the next sample in the queue with the new settings! I still have to decide if this is cool or not, perhaps it’s not intentional, but I can see how this could be used to automate patch changes in some clever way during playback! And indeed, I was able to create a couple of interesting side effects just by changing between patches at the right time.

More on the sounds: the amount of content is really staggering, and simply cycling through the patches does not do justice to this synth, at all!

What BT Phobos wants is a user who spends time tweaking the patches and playing with the source material to get the most out of it; however, it’s easy to see how limiting this may feel at the same time, particularly with the more esoteric and atonal sounds, and there’s certainly a limit on how good a wooden stick convolved with a thin aluminium can may sound, so indeed some patches do feel repetitive at times, as does the source material. There are quite a few very similar drum loops, for example, or variously pitched “wind blowing into a pipe” kinds of things.

This is a problem common to other synths based on the idea of tweaking sounds from the environment, though. For example, I have the amazing Geosonics from Soniccouture, an almost unusable library that, once tweaked, is capable of amazing awesomeness. Clearly, the authors of both synths – but this is especially valid for BT Phobos, I think – are looking for an audience that is capable of listening through the detuned and dissonant sound waves and shaping a new form of music.

This is probably the reason why so many of the pre-assembled patches throw the user full speed into total sound design territory; however, and this is another important point of criticism, this is sound design that has already been done for you… A lot of the BT patches, in particular, are clearly BT patches; using them as they are means you are simply redoing something that has already been done before, and, even with a very experimental feeling still strongly present, it’s not totally unheard-of or new.

For example, I also happen to have BreakTweaker and Stutter Edit (tools that also originally come from BT), and I could not resist the temptation to play something that resembles BT’s work on “This Binary Universe” or “_” (fantastic albums)! While this seems exciting – BT in a box! And you can also see the democratising aspect of BT Phobos: I can do that in half an hour instead of six months of manual CSound programming! – it’s an unfortunate and artificial limitation on a tool that is otherwise a very powerful enabler, capable of bringing complex sound design one step closer to the general public. Having the ability to process your own sounds would mitigate this aspect, I think.

I do see how this is useful for a composer in need of a quick solution for an approaching deadline, even with the most experimental tones, though: those patches can resolve a deadlock or take you out of an impasse in a second.

The potential for BT Phobos to become a must-have tool for sound design is all there, especially if Spitfire Audio keeps adding content, perhaps more varied (and, even better, allows loading your own content). The ability to shape the existing sounds already makes it very usable. I don’t think it’s a general tool at this stage, though, and it definitely should not be the first synth or sound-shaping processor in your arsenal, especially if you are starting out now.

But it’s not just a one-trick pony either; it offers quite a lot of possibilities, and the more you work with it, the more addictive it becomes. I can see Spitfire Audio soon offering this synth within a collection comprising some of their more experimental stuff like LCO and Enigma, which would be very nice indeed.

It’s unfortunate that Spitfire Audio does not offer an evaluation period: contrary to most of their offerings, BT Phobos needs time to be fully grasped and is anything but immediate (well, unless you are happy with the default patches or you really just need to “get out of trouble” quickly, but be careful with that, because the tax is on originality). It can, and does, evolve, as its convolutions do, over time, and it can absolutely deliver total awesomeness if used correctly.

Most patches are also usable out of the box, and especially by adding some reverb or doing some post-processing with other tools, it’s possible to squeeze even more life out of them.

Overall, I do recommend BT Phobos; it is a wonderful, very addictive synthesiser.


I don’t usually post photos of my family, especially my kids, but this is a very special occasion that needs celebration.

On the 29th, at 7:31 in the morning (and what a long night!), my second child, Luca, was born in Hamburg. I guess this makes him an official “hamburger” now 🤣

Luca was named after his uncle, one of the most eclectic and interesting people I have ever met, and it was a great honour for us.

I don’t have many words, really: being a father is amazing, and I’m very proud of, and very much in love with, my kids. Very, very in love.

Welcome Luca, son of Hamburg, and citizen of the World!

Luca and Fiorenza 🙂

P.S. I just realised that it’s been well over a year since I posted anything; I will try to change that, as I already have a few things that will probably be very interesting to share!


Quantum Curling

Last week we had a work week at Mozilla’s Toronto office for a bunch of different projects including Quantum DOM, Quantum Flow (performance), etc. It was great to have people from a variety of teams participate in discussions and solidify (and change!) plans for upcoming Firefox releases. There were lots of sessions going on in parallel and I wasn’t able to attend them all but some of the results were written up by the inimitable Ehsan in his fourth Quantum Flow newsletter.

Near the end of the week, Ehsan gave an impromptu walkthrough of the Gecko profiler. I’m planning to take some of the tips he gave and that were discussed and fold them into the documentation for the profiler. If you’re interested in helping, please let me know!

The photo above is of us going curling at the High Park Curling Club. It was a lot of fun and I was happy that only one other person had ever curled before so it was a unique experience for almost everyone!

As previously reported, the JSR 269 annotation processing APIs in the javax.lang.model and javax.annotation.processing packages are undergoing maintenance review as part of Java SE 9.

All the planned changes to the JSR 269 API are in JDK 9 build 164, downloadable as an early access binary. Of note, new in build 164 is the annotation type javax.annotation.processing.Generated, meant to be a drop-in replacement for javax.annotation.Generated, since the latter is not in a convenient module.
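
If you want a quick way to exercise the new annotation type, here is a minimal processor sketch; the class and package names are mine and purely illustrative, not code from the JDK. It generates a single source file marked with the new annotation:

import java.io.IOException;
import java.io.Writer;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Illustrative processor: emits one class carrying the new
// javax.annotation.processing.Generated annotation.
@SupportedAnnotationTypes("*")
@SupportedSourceVersion(SourceVersion.RELEASE_9)
public class GeneratedDemoProcessor extends AbstractProcessor {
    private boolean generated;

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        if (generated || roundEnv.processingOver()) {
            return false; // generate only once, and not in the final round
        }
        generated = true;
        try (Writer w = processingEnv.getFiler()
                .createSourceFile("demo.Hello").openWriter()) {
            w.write("package demo;\n"
                  + "@javax.annotation.processing.Generated(\"GeneratedDemoProcessor\")\n"
                  + "public class Hello {}\n");
        } catch (IOException e) {
            processingEnv.getMessager()
                         .printMessage(Diagnostic.Kind.ERROR, e.toString());
        }
        return false; // do not claim any annotations
    }
}

Compiling any source with javac -processor GeneratedDemoProcessor on build 164 or later should produce demo/Hello.java; on JDK 8 the same generated code would need javax.annotation.Generated instead.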

Please try out your existing annotation processors -- compiling them, running them, etc. -- on JDK 9 and report your experiences, good or bad, to compiler-dev@openjdk.java.net.

As has been done previously during Java SE 7 and Java SE 8, the JSR 269 annotation processing API is undergoing a maintenance review (MR) as part of Java SE 9.

Most of the API changes are in support of adding modules to the platform, both as a language structure in javax.lang.model.* as well as another interaction point in javax.annotation.processing in the Filer and elsewhere. A small API change was also done to better support repeating annotations. A more detailed summary of the API changes is included in the MR material.
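
To make the module-related additions concrete, here is a small sketch; the helper class is hypothetical, but the javax.lang.model APIs it calls are part of the revised JSR 269 API. A processor can now ask which module declares a given element:

import javax.annotation.processing.ProcessingEnvironment;
import javax.lang.model.element.Element;
import javax.lang.model.element.ModuleElement;
import javax.lang.model.util.Elements;
import javax.tools.Diagnostic;

// Illustrative helper: report which module declares an element,
// using the module mirror support added to javax.lang.model.
class ModuleReporter {
    static void reportModuleOf(ProcessingEnvironment env, Element e) {
        Elements elements = env.getElementUtils();
        // getModuleOf may return null when no module is available,
        // e.g. when running against a pre-module environment.
        ModuleElement module = elements.getModuleOf(e);
        String name = (module == null || module.isUnnamed())
                ? "<unnamed module>"
                : module.getQualifiedName().toString();
        env.getMessager().printMessage(Diagnostic.Kind.NOTE,
                e.getSimpleName() + " is declared in module " + name);
    }
}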

The API changes are intended to be largely compatible with the sources of existing processors, their binary linkage, as well as their runtime behavior. However, it would be helpful to verify that your existing processors work as expected when run under JDK 9. JDK 9 early access binaries are available for download. Please report experiences running processors under JDK 9 as comments here or to me as email. Feedback on the API changes can be sent to compiler-dev@openjdk.java.net.

Note: this article is also available in German.

What is Conversations?

Conversations is an app for Android smartphones for sending each other messages, pictures, etc., much like WhatsApp. However, there are a number of important differences to WhatsApp:

  • Conversations does not use your phone number for identification, and doesn’t read your address book to find contacts. It uses an ID that looks much like an email address (the so-called Jabber-ID), and you can find contacts by exchanging Jabber-IDs with people, just like you do with email addresses, phone numbers, etc.
  • Conversations uses an open protocol called XMPP, that is used by many other programs on a wide range of systems, for example on desktop PCs.
  • Conversations is Open Source, i.e. everybody can inspect the source code, check it for security issues, see what the program actually does, or even modify and distribute it.
  • XMPP builds on a decentralized infrastructure. This means that not one company is in control of it, but instead there are many providers, or you can even run your own server if you want.
  • Conversations does not collect and sell any information from you or your contacts.

There are more differences, but I don’t want to go into detail here; others have already done it, and better (in German).

Install Conversations

From Google Play

Conversations is easily installed from Google Play. However, it currently costs 2.39€. I’d recommend everybody who can to buy it; doing so supports the development of this really good app.

Alternative: From F-Droid

For all those who cannot or don’t want to spend the money, there is another way to get it for free: it is available in F-Droid, an alternative app store that only distributes Open Source software. In order to use it, you first need to install F-Droid. Then you can start F-Droid, search for Conversations and install it.

Set-up Jabber account

The next step is to set up a Jabber account. You need two things: an ID, and a provider. The first part, the ID, you can choose freely, e.g. a fantasy name or something like firstname.surname, but this is really up to you. In order to find a provider, I recommend this list: https://gultsch.de/compliance_ranked.html. The providers at the top of the list have the best support for the XMPP features that are relevant for smartphone users. I’d recommend trashserver.net because it supports in-band registration (directly from Conversations) and is very well maintained. If you want to further support the developer of Conversations, I’d recommend an account on conversations.im; this currently costs 8€/year. I think it is worth it, but you have the choice.

If you choose, for example, the ID ‘joe.example’ on the provider ‘provider.org’, then your Jabber-ID is joe.example@provider.org. Once you’ve decided on a Jabber-ID, you can easily register an account by starting Conversations, entering the Jabber-ID in the set-up screen, checking the box ‘register new account on server’, entering your preferred password twice and confirming it.

Adding contacts

Adding contacts is different than in WhatsApp: you have to manually add contacts to your roster. Tap on the ‘+’ symbol next to the little person icon, enter your contact’s Jabber-ID and confirm it. Now you’re ready to start chatting. Have fun!


This article is also available in English.

What is Conversations?

Conversations is an app for Android smartphones for sending each other messages, pictures, etc., much like WhatsApp. There are, however, a few important differences to WhatsApp:

  • Conversations does not use your phone number for identification, and does not use your address book to find contacts. Your ID looks like an email address (your so-called Jabber-ID), and you find contacts by exchanging your Jabber-ID with acquaintances, just as with email addresses or phone numbers.
  • Conversations uses an open protocol called XMPP, which can be used by many other programs on many different systems, e.g. also on desktop PCs.
  • Conversations is Open Source, i.e. everybody can inspect the source code and, for example, check it for security issues, verify what the program actually does, modify it, etc.
  • XMPP builds on a decentralized infrastructure, which means that no single company controls everything; instead there are many different providers, and you can even run your own server if you want.
  • Conversations does not collect or sell any information about you or your contacts.

There are a few other differences, but I don’t want to go into detail here; others have already done that much better.

Installing Conversations

From Google Play

Conversations is easily installed from Google Play. It currently costs 2.39€ there. I would like to encourage everybody who can to buy the app; you thereby support the development of this really good app.

Alternative: From F-Droid

Everybody who cannot use Google Play, or who for whatever reason cannot or does not want to pay 2.39€ for it, should consider installing it via F-Droid. F-Droid is an alternative app store that provides exclusively Open Source software. To use it, you first have to install F-Droid. Then you can search for ‘Conversations’ in the F-Droid app and install it from there. This is free of charge and legal.

Setting up a Jabber account

Next you have to set up a Jabber account. For this you need two things: an ID, and a provider. The first part, the ID, you can choose yourself, e.g. a fantasy name or something like firstname.lastname, but this is really up to you. To find a provider, I recommend this list: https://gultsch.de/compliance_ranked.html. The providers at the very top support most of the features that are important for smartphone users. I can recommend trashserver.net, since this server supports registration directly from the Conversations app and is also very well maintained. If you would like to additionally support the developer of Conversations, an account on conversations.im is recommended; this currently costs 8 euros per year. I think it is worth it, but everybody has to decide that for themselves.

If, for example, you pick the ID ‘max.mustermann’ at the provider ‘anbieter.de’, then your Jabber-ID is max.mustermann@anbieter.de. Once you have decided on an ID and a provider, you can easily set up an account by starting Conversations, entering the desired Jabber-ID in the set-up screen, checking the box ‘Create new account on server’, entering your preferred password twice and then tapping ‘Next’.

Adding contacts

Unlike in WhatsApp, in Conversations you have to add your contacts yourself. Simply tap on the ‘+’ symbol next to the little person icon, enter the contact’s Jabber-ID, done. And then you can start chatting! Have fun!


After turning off comments on this blog a few years ago, the time has now come to remove all the posts containing links. The reason is again pretty much the same as it was when I decided to turn off the comments - I still live in Hamburg, Germany.

So, I've chosen to simply remove all the posts containing links. Unfortunately, those were pretty much all of them. I only left my old post up explaining why this blog allows no comments, now updated to remove all links, of course.

Over the past years, writing new blog posts here has become increasingly rare for me. Most of my 'social media activity' has long moved over to Twitter.

Unfortunately, I mostly use Twitter as a social bookmarking tool, saving and sharing links to things that I find interesting.

As a consequence, I've signed up for a service that automatically deletes my tweets after a short period of time. I'd link to it, but ...

A year or so ago I was asked to debug a crash in the Firefox devtools.  Crashes are easy!  I fired up gdb and reproduced the crash… which turned out to be in some code JITted by SpiderMonkey.  I was immediately lost; even a simple bt did not work.  Someone more familiar with the JIT — hi Shu — had to dig out the answer :-(.

I did take the opportunity to get some information from him about how he found the result, though.  He pointed me to the code responsible for laying out JIT stack frames.  It turned out that gdb could not unwind through JIT frames, but it could be done by hand — so I resolved then to eventually fix this.

Phase One

I knew from my gdb hacking that gdb has a JIT unwinding API.  Actually — and isn’t this the way most programs end up working? — it has two.

The first JIT API requires some extra work on the part of the JIT: it constructs an object file, typically ELF and DWARF, in memory, then calls a hook.  GDB sets a breakpoint on this hook and, when hit, it reads the data from the inferior.  This lets the JIT provide basically any kind of information — but it’s pretty heavy.

So, I focused my attention on the second API.  In this mode, the JIT author would provide a shared library that used some callbacks to inform gdb of the details of what was going on.  The set of callbacks was much more limited, but could at least describe how to unwind the registers.  So, I figured that this is what I would do.

But… I didn’t really want to write this in C.  That would be a real pain!  C is fiddly and hard to deal with, and it would mean constant rebuilding of the shared library while debugging, and SpiderMonkey already had a reasonable number of gdb-python scripts — surely this could be done in Python.

So I took the quixotic approach, namely writing a shared library that used the second gdb JIT API but only to expose this API to Python.

Of course, this turned out to be Rube Goldbergian.  Various parts of the gdb Python API could not be called from the JIT shared library, because those bits depended on other state in gdb, which wasn’t set properly when the JIT library was being called.  So, I had gdb calling into my shared library, which called my Python code, which then invoked a new gdb command (written in Python and supplied by my package) — that existed solely for the purpose of setting this internal state properly — and that in turn invoked the code I wanted to run, say to fetch memory or a register or something.

Computer Science!

Well, that took a while.  But it sort of worked!  And maybe I could just keep it in github and not put it in Mozilla Central and avoid learning about the Firefox build system and copying in some gdb header file and license review and whatnot.

So I started writing the actual Python code… OMG.  And see below since you will totally want to know about this.  But meanwhile…

… while I was hacking away on this crazy idea, someone implemented the much more sane idea of just exposing gdb’s unwinder API to gdb’s Python layer.

Hmm… why didn’t I do that?  Well, I left gdb under a bit of a cloud, and didn’t really want to be that involved at the time.  Plus, you know, gdb is a high quality project; which means that if you write a giant patch to expose the unwinding API, you have to be prepared for 17 rounds of patch review (this really happened once), plus writing documentation and tests.  Sometimes it’s just easier to channel one’s inner Rube.

Phase Two

The integrated Python API was a great development.  Now I could delete my shared library and my insane trampoline hacks, and focus on my insane unwinding code.
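
For anyone who hasn't seen it, the shape of gdb's Python unwinder API looks roughly like the sketch below. To be clear, this is not the real SpiderMonkey unwinder; the JIT address range and the frame layout here are invented placeholders.

import gdb
from gdb.unwinder import Unwinder, register_unwinder

JIT_START, JIT_END = 0x10000000, 0x20000000  # hypothetical JIT code range

class FrameId(object):
    # gdb only requires that a frame-id object expose 'sp' and 'pc'.
    def __init__(self, sp, pc):
        self.sp = sp
        self.pc = pc

class JitSketchUnwinder(Unwinder):
    def __init__(self):
        super(JitSketchUnwinder, self).__init__("jit-sketch")

    def __call__(self, pending_frame):
        pc = pending_frame.read_register("pc")
        if not (JIT_START <= int(pc) < JIT_END):
            return None  # not our frame; let gdb's own unwinders have it
        sp = pending_frame.read_register("sp")
        unwind_info = pending_frame.create_unwind_info(FrameId(sp, pc))
        # Pretend the caller's pc sits right at the top of the frame;
        # decoding the real JIT frame layout is the hard part.
        ptr_ptr = gdb.lookup_type("void").pointer().pointer()
        unwind_info.add_saved_register("pc", sp.cast(ptr_ptr).dereference())
        return unwind_info

register_unwinder(None, JitSketchUnwinder())  # None means "register globally"

Returning None declines the frame, which matters below: gdb offers each frame to registered unwinders before trying its own.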

A lot of this work was straightforward, in the sense that the general outline was clear and just the details remained.  The details amount to things like understanding the SpiderMonkey frame descriptor (which partly describes the previous frame and partly the new frame; there’s one comment explaining this that somehow eluded me for quite a while); duplicating the SpiderMonkey JIT unwinding code in Python; and of course carefully reading the SpiderMonkey code that JITs the “entry frame” code to understand how registers are spilled.

Naturally, while doing this it turned out that I was maybe the first person to use these gdb APIs in anger.  I found some gdb crashes, oops!  The docs would have been impenetrable, except I already knew the underlying C APIs on which they were based… whew!  The Python API was unexpectedly picky in other areas, too.

But then there was also some funny business, one part in gdb, and one part in SpiderMonkey.

GDB is probably more complicated than you realize.  In this case, the complexity is that, in gdb, each stack frame can have its own architecture.  This seemingly weird functionality is actually used; I think it was invented for the SPU, but some other chips have multiple modes as well.  But what this means is that the question “what architecture is this program?” is not well-defined, and anyway gdb’s Python layer doesn’t provide you a way to find whatever approximation it is that would make sense in your specific case.  However, when writing the SpiderMonkey unwinder, it kind of actually is well-defined and we’d like to know the answer to know which unwinder to choose.

For this problem I settled on the probably terrible idea of checking whether a given register is available.  That is, if you see “$rip”, you can guess it's x86-64.
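
In code the guess amounts to a simple register probe, along these lines:

import gdb

def looks_like_x86_64(pending_frame):
    # Probe for an x86-64-specific register name; any failure is taken
    # to mean "some other architecture".
    try:
        pending_frame.read_register("rip")
        return True
    except (gdb.error, ValueError):
        return False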

The other problem here is that gdb thinks that, since you wrote an unwinder, it should get the first stab at unwinding.  That’s very polite!  But for SpiderMonkey, deciding “hey, is this PC in some code the JIT emitted?” is actually a real pain, or at least outside the random bits of it I learned in order to make all this work.

Aha!  I know, there’s probably a Python API to say “is this address associated with some shared library?”  I remembered reading and/or reviewing a patch… but no, gdb.solib_name is close but doesn’t do the right thing for addresses in the main executable.  WAT.

I tried several tricks without success, and in the end I went with parsing /proc/maps to get the mappings to decide whether a given frame should be handled by this unwinder or by gdb.  Horrible.  And fails with remote debugging.
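
The fallback looks roughly like this sketch; reading the local /proc is exactly what breaks remote debugging:

import gdb

def executable_mappings():
    # Collect the file-backed executable mappings of the inferior.  A pc
    # inside one of these ranges belongs to gdb's own unwinders;
    # anything else is assumed to be JIT code.
    pid = gdb.selected_inferior().pid
    ranges = []
    with open("/proc/%d/maps" % pid) as maps:
        for line in maps:
            fields = line.split()
            if len(fields) >= 6 and "x" in fields[1]:
                start, end = (int(a, 16) for a in fields[0].split("-"))
                ranges.append((start, end, fields[5]))
    return ranges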

Luckily, nobody does remote debugging.

Remote Debugging

Oh, wait, people do remote debugging at Mozilla all the time.  They don't call it “remote debugging” though — they call it “using RR”, which, while it runs locally, appears to be remote to gdb; and, importantly, during replay mode fakes the PID, and does other deep magic, though not deep enough to extend to making a fake map file that could be read via gdb's remote get command.

By the way, you should be using RR.  It’s the best advance in debugging since, well, gdb.  It’s a process record-and-replay program, but unlike gdb’s built-in reverse debugging, it handles threads properly and has decent performance.

Oh Well

Oh well.  It just won’t work remotely.  Or at least not until fellow Mozillian (this always seems like it should be “Mozillan” to me, but it’s not, there really is that extra “i”) and all-star Nicolas Pierron wrote some additional Python to read some SpiderMonkey tables to make the decision in a more principled way.  Now it will all work!

Though looking now I wonder if I dreamed this, because the code isn’t checked in.  I know he had a patch but my memory is a bit fuzzy — maybe in the end it didn’t work, because RR didn’t implement the qGetTLSAddr packet, which gdb uses to read thread-local storage.  Did I mention the thread-locals?

The Real Start of the Story

So, way back at the beginning, during my initial foray into this code, I found that a crucial bit of information — the appropriately-named TlsPerThreadData — was stashed away in a thread-local variable.  Information stored here is needed by the unwinder in order to unwind from a C++ frame into a JIT frame.

Only, Firefox didn’t use “real” thread-local variables, the things that so many glibc and gcc hackers put so much effort into micro-optimizing.  No, it just used a template class that wrapped pthread_setspecific and friends in a relatively ergonomic way.

Naturally, for an unwinder this is a disaster.  Why?  Unwinding is basically the dissection of the stack; but in order to compute the value of one of these thread-local-storage objects, the unwinder would have to make some function calls in the inferior (in fact this prevents it from working on OSX).  But these would affect the stack, and also potentially let other inferior code (in other threads — remember, gdb is complicated and you can exert various unusual kinds of control like this) run as well.

So I neglected to mention the very first step: changing Firefox to use __thread.  (Ok, I didn’t really neglect to mention it, I was just being lazy and anyway it’s a shaggy dog story.)

Do Not Use libthread_db

RR did not implement qGetTLSAddr, which we needed, because  lots of people at Mozilla use RR.  So I set out to implement that.  This meant a foray into the dangerous world of libthread_db.

For reasons I do not know, and suspect that I do not want to know, glibc has historically followed many Solaris conventions.  One such Solaris innovation was libthread_db — a library that debuggers use to find certain information from libc, information like the address of a thread-local variable.

On the surface this seems like a great idea: don’t bake the implementation details of the C library into the debugger.  Instead, let the debugger use a debugging library that comes with the C library.  And, if you designed it that way, it would be a good idea.

Sadly, though, libthread_db was not designed that way.  Oh no.

For example, libthread_db has a callback interface.  The calling program — gdb or rr — must provide some functions that libthread_db can call, to do some simple things like “read some memory”; or some very complicated things like “find the address of a symbol given its name”.  Normal C programmers might implement these callbacks using a structure containing function pointers.  But not libthread_db!  Instead it uses fixed symbol names that must be provided by the calling application.  Not all of these are required for it to work (you get to figure out which, yay!), but some definitely are.  And, you have to dlopen a libthread_db that matches the libc of the inferior that you’re debugging (or link against it, but that’s also obviously bad).

Wait, you say.  Doesn’t that mess up cross-debugging?  Why yes!  Yes it does!  Which is why qGetTLSAddr has to be in the gdb remote serial protocol to start with.

Hey, maybe the Linux vendors should fix this.  They are — see Gary Benson’s Infinity project — but unfortunately that’s still in development and I wanted RR to work sooner.

Ok, so whew.  I wrote qGetTLSAddr support for RR.  This was a small patch in the end, but an unusual pain in an already painful series.  Hopefully this won’t spill out into other programs.

glibc

Hahaha, you are so funny.  Of course it spills out: remember how you have to define a bunch of functions with specific names in your program in order to use libthread_db?  Well, how do you know you got the types correct?

Yeah, you include <proc_service.h> (a name deliberately chosen to confuse, I suppose, why not, it doesn’t bear any obvious relationship to the library).  Only, that was never installed by glibc.  Instead, gdb just copied it into the source tree.

So naturally I went and fixed this in glibc.  And, even more naturally, this broke the gdb build, which was autoconf’d to check for a file that never existed in the past.  LOL.

Thank You Cthulhu

At this point I figured it was only a matter of time until I had to patch the kernel.  Thankfully this hasn’t been necessary yet.

It Says What

In gdb the actual unwinding and the display of frames are separate concerns.

And let me digress here to say that gdb’s unwinder design is excellent.  I believe it was redone by Andrew Cagney (this was well before my active time in gdb, so apologies if you’re reading this and you did it and I’ve misattributed it).  Like much of gdb, many of the details are bizarre and take one back to the byte-counting days of 1987; but the high level design is very solid and has endured with, I think, just one significant change (to support inline functions) in the intervening 15 or so years.  I’ve long thought that this is a remarkable accomplishment in the programming world.

So, yes.  It’s not enough to just unwind.  Simply having an unwinder yields backtraces with lines like:

#5 0xfeefee ???

Better than nothing!  But not yet great.

The second part of the SpiderMonkey unwinder is, therefore, a gdb “frame filter”.  This is an object that takes raw frames and decorates them with information like a function name, or a file name, or arguments.
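
The skeleton of a frame filter is short. Another hedged sketch, not the real thing; jit_frame_name is a made-up stand-in for the SpiderMonkey-specific lookup:

import gdb
from gdb.FrameDecorator import FrameDecorator

def jit_frame_name(frame):
    # Placeholder: the real code maps frame.pc() to a JS function name.
    return None

class JitFrameDecorator(FrameDecorator):
    # Supply a function name for JIT frames; defer to the default
    # behaviour for everything else.
    def function(self):
        name = jit_frame_name(self.inferior_frame())
        return name if name is not None else FrameDecorator.function(self)

class JitFrameFilter(object):
    def __init__(self):
        self.name = "jit-sketch"
        self.priority = 100
        self.enabled = True
        gdb.frame_filters[self.name] = self  # register globally

    def filter(self, frame_iter):
        return (JitFrameDecorator(frame) for frame in frame_iter)

JitFrameFilter()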

Work to add this information is ongoing — I landed one patch just yesterday, and another one, to add more information about interpreted frames, is still in the works.  And there are two more bugs filed… maybe this project, like this blog post, will never conclude.  It will just scroll endlessly.

But now, with all the code in place, bt can show something like:

#6 0x00007ffff7ff20f3 in <<JitFrame_BaselineJS "f1">> (this=JSVAL_VOID, arg1=$jsval(4700))

This is the call f1(4700).

Let’s Just Have One More

Of course we still couldn’t enable this unwinder by default.  You have to enable it by hand.

And by the way, in the first release of gdb’s Python unwinder feature, enabling or disabling an unwinder didn’t flush the frame cache, so it wouldn’t actually take effect until some invisible-to-the-user state change took place.  I fixed this bug, but here Pedro Alves also taught me the secret gdb command flushregs, which in fact just flushes the frame cache. (I’m going to go out on a limb and guess that this command predates the already ancient maint prefix command, hence its weird name.)

Anyway, you have to enable it by hand because the unwinder itself doesn’t work properly if the outermost frame is in JIT code.  The JIT, in the interest of performance, doesn’t maintain a frame pointer.  This means that in the outermost frame, there’s no reliable way to find the object that describes this frame and links to the previous frame.

Now, normally in this case gdb would either resort to debug info (not available here), or in extremis its encyclopedic suite of prologue analyzers (yes, gdb can analyze common function prologues for all architectures developed in the last 25 years to figure out stuff) — but naturally JIT compilers go their own way here as well.

Humans, like Shu back at the start of this story, can do this by dumping parts of the stack and guessing which bytes represent the frame header.

But, I’ve been reluctant and a bit afraid to hack a heuristic into the unwinder.

To sum up — in case you missed it — this means that all the code written during this entire saga would still not have helped with my original bug.

The End

This is a very important machine that really deserves to get built. Anyone who cares about Free Software should consider funding this project at some level, and spreading the word to their friends. If this project succeeds, it will bootstrap a market for new, owner-controlled performant desktop machines. If it fails, no such computers will exist. The project page and updates explain the current (rather depressing) state of general purpose computing better than I could, so take a look.

I originally posted this on G+ but I thought maybe I should expand it a little and archive it here.

The patch to delete gcj went in recently.

When I was put on the gcj project at Cygnus, I remember thinking that Java was just a fad and that this was just a temporary thing for me. I wasn’t that interested in it. Then I ended up working on it for 10 years.

In some ways it was the high point of my career.

Socially it was fantastic, especially once we merged with the Classpath community — I’ve always considered Mark Wielaard’s leadership in that community as the thing that made it so great.  I worked with and met many great people while working on gcj and Classpath, but I especially wanted to mention Andrew Haley, who is the single best debugger I’ve ever met, and who stayed in the Java world, now working on OpenJDK.

We also did some cool technical things in gcj. The binary compatibility ABI was great, and the split verifier was very fun to come up with.  Per Bothner’s early vision for gcj drove us for quite a while, long after he left Cygnus and stopped working on it.

On the downside, gcj was never quite up to spec with Java. I’ve met Java developers even as recently as last year who harbor a grudge against gcj.

I don’t apologize for that, though. We were trying something difficult: to make a free Java with a relatively small team.

When OpenJDK came out, the Sun folks at FOSDEM were very nice to say that gcj had influenced the opening of the JDK. Now, I never truly believed this — I’m doubtful that Sun ever felt any heat from our ragtag operation — but it was very gracious of them to say so.

Since the gcj days I’ve been searching for basically the same combination that kept me hacking on gcj all those years: cool technology, great social environment, and a worthwhile mission.

This turned out to be harder than I expected. I’m still searching. I never thought it was possible to go back, though, and with this deletion, this is clearer than ever.

There’s a joy in deleting code (though in this case I didn’t get to do the deletion… grrr); but mainly this weekend I’m feeling sad about the final close of this chapter of my life.

RoboVM 0.0.1 got released this week by Trillian AB.

RoboVM's main focus is to compile Java to native for deployment on mobile devices such as iOS and Android. RoboVM uses a Java to Objective-C bridge built using LLVM. The good news is that the same process works for converting Java applications to native applications on GNU/Linux systems as well!

Mario Zechner, the author of libgdx, posted this nice picture from inside DDD/GDB of his first HelloWorld compiled to native x86 code running on a GNU/Linux machine.
GNU/Linux machine code generated by RoboVM seen from inside DDD/GDB

http://www.robovm.org/

JogAmp is the home of high performance Java™ libraries for 3D Graphics, Multimedia and Processing.
JOGL, JOCL and JOAL provide cross platform Java™ language bindings to the OpenGL®, OpenCL™, OpenAL and OpenMAX APIs.
Running on Android, Linux, Windows, OSX, and Solaris across devices using Java.

Release announcement for JogAmp 2.0.2-rc12

"You're encouraged to stop using the now-ancient 2.0-rc11!"

This 2.0.2-rc12 release includes the largest security review in the 10-year history of JOGL.

  • Security Fixes

    • Dynamic Linker Usage / Impl.
    • ProcAddressTable field visibility
    • Perform SecurityManager checks where required
    • Validation of property access
    • JAR Manifest tags:
      • Codebase
      • Permissions
      • Sealed
    • Use latest Java7 toolchain
      • Generating Java 1.6 bytecode
      • HTML API doc

https://jogamp.org/wiki/index.php/SW_Tracking_Report_Objectives_for_the_release_2.0.2_of_JOGL
Security fixes are marked in red on the above bug tracking page.
JogAmp sends out thanks to the FuzzMyApp security researchers for the healthy communication that triggered the security review work.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place inside the JogAmp forum & mailing-list and the #jogamp IRC channel on irc.freenode.net.


Meet us @

JogAmp @ SIGGRAPH 2013

If you’ve been following Infinity and would like to, you know, download some code and try it out… well, now you can!


The first release candidate is finally available. It can be downloaded here or from NuGet.

What's New (relative to IKVM.NET 8.0):

  • Integrated OpenJDK 8u45.
  • Many fixes to late binding support.
  • Added ikvmc support for deterministic output files.
  • Various sun.misc.Unsafe improvements.
  • Many minor bug fixes and performance tweaks.

Changes since previous development snapshot:

  • Assemblies are strong named.
  • Fix for bug #303. ikvmc internal compiler error when trying to get interfaces from type from missing assembly reference.
  • Implemented NIO atomic file move on Windows.

Binaries available here: ikvmbin-8.1.5717.0.zip

Sources: ikvmsrc-8.1.5717.0.zip, openjdk-8u45-b14-stripped.zip

Thanks to everybody who commented on the JamVM 2.0.0 release, and apologies it's taken so long to approve them - I was expecting to get an email when I had an unmoderated comment but I haven't received any.

To answer the query regarding Nashorn.  Yes, JamVM 2.0.0 can run Nashorn.  It was one of the things I tested the JSR 292 implementation against.  However, I can't say I ran any particularly large scripts with it (it's not something I have a lot of experience with).  I'd be pleased to hear any experiences (good or bad) you have.

So now 2.0.0 is out of the way I hope to do much more frequent releases.  I've just started to look at OpenJDK 9.  I was slightly dismayed to discover it wouldn't even start up (java -version), but it turned out to be not a lot of work to fix (2 evenings).  Next is the jtreg tests...

I'm pleased to announce a new release of JamVM.  JamVM 2.0.0 is the first release of JamVM with support for OpenJDK (in addition to GNU Classpath). Although IcedTea already includes JamVM with OpenJDK support, this has been based on periodic snapshots of the development tree.

JamVM 2.0.0 supports OpenJDK 6, 7 and 8 (the latest). With OpenJDK 7 and 8 this includes full support for JSR 292 (invokedynamic). JamVM 2.0.0 with OpenJDK 8 also includes full support for Lambda expressions (JSR 335), type annotations (JSR 308) and method parameter reflection.

In addition to OpenJDK support, JamVM 2.0.0 also includes many bug-fixes, performance improvements and improved compatibility (from running the OpenJDK jtreg tests).

The full release notes can be found here (changes are categorised into those affecting OpenJDK, GNU Classpath and both), and the release package can be downloaded from the file area.

It was pretty easy to write a new graphical backend for Jainja using the Pepper Plugin API. So it's possible to write Java graphical user interfaces on NaCl now!

A demo is available on the Chrome Web Store.