Planet Classpath

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 7 support in the 2.6.x series with the July 2017 security fixes from OpenJDK 7 u151.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 2.6.11 (2017-08-08)

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to be able to handle this key.

SHA256 checksums:

  • 5dfbe0f40d8b6004d49add4ec398d1c91d4c02b11716297055e5d73919fb85be icedtea-2.6.11.tar.gz
  • f100c3bfffa5ea0b9a2184346856a1d3db7f8d2a45c74523ad928dcf179ad0e3 icedtea-2.6.11.tar.gz.sig
  • 20063c314535e4ed4b8099e497b880e4f346c85e7315a2573d0f398b973777c5 icedtea-2.6.11.tar.xz
  • 43bf76c60d219ef76b0e03484ee92d0d7657dafae51f21ed088ee5bb5ee654ca icedtea-2.6.11.tar.xz.sig
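
On the command line, sha256sum and gpg --verify are the usual way to check these. For the curious, the digest check is also easy to script; here is a minimal Python sketch (assuming icedtea-2.6.11.tar.gz sits in the current directory, with the expected digest taken from the list above):

import hashlib

EXPECTED = "5dfbe0f40d8b6004d49add4ec398d1c91d4c02b11716297055e5d73919fb85be"

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in chunks so a large tarball need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("icedtea-2.6.11.tar.gz") == EXPECTED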

The checksums can be downloaded from:

A 2.6.11 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-2.6.11.tar.gz

or:

$ tar x -I xz -f icedtea-2.6.11.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-2.6.11/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

An update to my notes on compiling NetBSD kernels and userland.


Build / update the tools:

-U : for unprivileged building
-u : to update
-m : to specify architecture

./build.sh -U -u tools

To cross-compile, this is enough:
./build.sh -U -m i386 -u tools
However, since I do want to build for more than one architecture on the same computer, and the build script would otherwise get confused, we add -T tools-${HOST_ARCH}-${TARGET_ARCH}:

./build.sh -U -m i386 -u -T tools-amd64-i386 tools


Then we build the kernel:

./build.sh -U kernel=CONFNAME

or for cross compilation:
./build.sh -U -T tools-amd64-i386 -m i386 -u kernel=GENERIC

The modules:
./build.sh -U -u modules installmodules=/

Now we build userland, including X11 (the -x flag). I have not attempted to cross-build userland yet.

./build.sh -U -x -u distribution

./build.sh -U -x -u distribution install=/

We are pleased to announce the release of IcedTea 3.5.1!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 8 support with the additional fix provided in OpenJDK 8 u144. It also brings in the latest Shenandoah updates.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 3.5.1 (2017-07-27)

  • Import of OpenJDK 8 u144 build 01
    • S8184993: Jar file verification failing with SecurityException: digest missing xxx
  • Shenandoah
    • Amend “ArrayCopy verification code fix” with 8u-specific node hierarchy test
    • Amend “Refactor asm acmp” with a few missing changes
    • [backport] aarch64 store check fix
    • [backport] Account “shared” out-of-LAB allocations separately
    • [backport] Adaptive should not be scared of user-requested System.gc()
    • [backport] Added assertion for page alignment of heap’s base address
    • [backport] Add “verify jcstress” acceptance test
    • [backport] “Allocation failure” cause should not be overwritten
    • [backport] ArrayCopy verification code fix
    • [backport] Assorted cleanups
    • [backport] “Before Full GC” verification is too strong for OOME-during-evac
    • [backport] C1 stores constants without read barriers
    • [backport] Cleanup AArch64 code
    • [backport] Cleanup class unloading and string intern code
    • [backport] Cleanup duplicated Shenandoah task queue declarations
    • [backport] Cleanups
    • [backport] Cleanup ShenandoahBarrierSet::write_barrier
    • [backport] Cleanup ShenandoahHeap::do_evacuation
    • [backport] Clean up unused fields and methods
    • [backport] Cleanup: update-refs check in_collection_set twice
    • [backport] Code cache roots styles
    • [backport] Concurrent code cache evacuation + bugfixes
    • [backport] Concurrent preclean + Fix weakref precleaning
    • [backport] Correct prefetch offset for marked object iteration
    • [backport] Deferred region cleanup.
    • [backport] Dense ShenandoahHeapRegion printout
    • [backport] Detailed ParallelCleanupTask statistics + Split out Full GC stats for parallel cleaning
    • [backport] Disable aggressive+verification test configs (jtreg eats up last configuration)
    • [backport] Do not abandon RP discovery on conc GC cancel, do that only before Full GC
    • [backport] Eliminating _num_regions variable in ShenandoahHeap
    • [backport] Ensure collection set and cset map are consistent
    • [backport] Fallback to shared allocation if GCLAB is not available
    • [backport] Fast synchronizer root scanning
    • [backport] “F: Code Cache Roots” is missing from gc+stats
    • [backport] Fix DerivedPointerTable handling when scanning roots twice in init-evac phase
    • [backport] Fixed a few of early returns that calling register_gc_end()
    • [backport] Fix live data accounting for humongous region
    • [backport] Fix memory Phis with only data uses
    • [backport] Fix recycled regions zapping
    • [backport] Fix up pointer volatility
    • [backport] Generic verification should not trust bitmaps
    • [backport] Heap/matrix verification for all reachable objects
    • [backport] Heap memory usage counting not longer needs to be atomic
    • [backport] Heap region recycling should call explicit clear() and request zapping
    • [backport] Heap region verification
    • [backport] Implementation of interpreter matrix barrier on aarch64
    • [backport] Implement early update references phase.
    • [backport] implicit null checks broken on aarch64
    • [backport] Increase timeout for EvilSyncBug test
    • [backport] Lazy parallel code cache iterator
    • [backport] Make statistics gathering span more operations
    • [backport] Make sure atomic operations are done on “volatile” fields
    • [backport] Make sure new_active_workers is used
    • [backport] Make {T,GC}LAB statistics unconditional
    • [backport] Mark-compact and heuristics should consistently process refs and unload classes
    • [backport] minor fix to optimization of java mirror comparison
    • [backport] more barrier on constant oop fixes + couple small unrelated fixes
    • [backport] More collection set and matrix cleanup
    • [backport] Nit: mark-compact phase 3 (Adjust Pointers) should announce itself as “Phase 3”
    • [backport] Optimize heap region size checks
    • [backport] Optimize heap verification
    • [backport] Out-of-TLAB evacuation should overwrite stale copies
    • [backport] Parallel code cache scanning
    • [backport] Parallel verification
    • [backport] Print correct message about gross times in stats
    • [backport] Print heap changes in phases that actually change heap occupancy
    • [backport] Print more detailed final UR stats
    • [backport] Print more details for weak ref and class unloading stats
    • [backport] Properly react on -ClassUnloading
    • [backport] Purge ealier version of redefined classes during class unloading
    • [backport] Purge ratio, global, connections heuristics.
    • [backport] Purge shenandoahHumongous.hpp
    • [backport] Purge ShenandoahVerify(Reads|Writes)ToFromSpace.
    • [backport] Reduce region retirement during tlab allocation
    • [backport] Refactor asm acmp (x86, aarch64, renames)
    • [backport] Refactor BrooksPointer asserts
    • [backport] Refactor heap verification
    • [backport] Reference processing deadlocks with -ParallelRefProcEnabled
    • [backport] Reference processors might use non-forwarded alive checks
    • [backport] Region sampling may not be enabled because last timetick is uninitialized
    • [backport] Rehash ShenandoahHeap section in hs_err
    • [backport] Reinstate “Purge” block in final-mark stats
    • [backport] Relax assert to not fire at safepoint
    • [backport] Remove heap printing routines from ShenandoahHeap
    • [backport] Remove obsolete compile_resolve_oop_runtime() methods
    • [backport] Rename final mark operations
    • [backport] Rename ShenandoahBarriersForConst
    • [backport] Replace ShHeapRegionSet::get with get_fast
    • [backport] Report correct total garbage data. Print out garbage and cset data with -Xlog:gc+ergo
    • [backport] Report oops and fwdptrs verification failures fully
    • [backport] Result of write barrier on constant not used
    • [backport] Separate Full GC root operations in GC stats
    • [backport] ShenandoahCollectionSet refactor
    • [backport] ShenandoahGCSession used wrong timer for full GC
    • [backport] ShenandoahHeap::evacuate_object() with boolean result flag.
    • [backport] Shenandoah options should be uintx
    • [backport] shenandoah_wb should fallback to slow path with -UseTLAB + Fix aarch64 compilation error due to shenandoah_wb change
    • [backport] ShenandoahWriteBarrierNode::memory_dominates_all_paths() assert failure when compiling methods using unsafe
    • [backport] Shortcut reference processing when no work is available
    • [backport] Simplify parallel synchronizer roots iterator
    • [backport] Skip RESOLVE when references update is not needed
    • [backport] Stats should attribute “Resize TLABs” properly, and mention “Pause” for init/final mark
    • [backport] Stats should not record past-shutdown events
    • [backport] “String/Symbol/CodeCache” -> “Str/Sym, Code Cache”
    • [backport] Tests should use all heuristics and pass heap verification + Disable aggressive+verification test configs
    • [backport] Total pauses should include final-mark pauses
    • [backport] Trim down native GC footprint
    • [backport] Update region sampling to include TLAB/GCLAB allocation data
    • [backport] Update roots should always handle derived pointers
    • [backport] Update ShenandoahHeapSampling to avoid double counting.
    • [backport] Update statistics to capture thread data accurately
    • [backport] Use CollectedHeap::base() instead of ShenandoahHeap::first_region_bottom()
    • [backport] Use lock version heap region memory allocator
    • [backport] Use scoped object for gc session/phases recording
    • [backport] Variable steps in adaptive heuristics
    • [backport] Verification error log is truncated
    • [backport] Verification levels
    • [backport] Verification should assert complete bitmaps in most phases + Disable complete bitmap verification in init mark
    • [backport] Verifier performance improvements: scan objects once, avoid double oop checks
    • [backport] Verifier should not assert cset in forwarded test block
    • [backport] Verifier should print extended info on referenced location
    • [backport] Verifier should use non-optimized root scans
    • [backport] Verify marked objects
    • [backport] Verify TAMS and object sizes
    • [backport] write barrier can get stuck below predicates resulting in unschedulable graph
    • S8140584: nmethod::oops_do_marking_epilogue always runs verification code
    • S8180175, S8180599: Cherry-pick/synchronize
    • Cleanup: Removed redundant ClassLoaderData::clear_claimed_marks() calls
    • Cleanup shared code.
    • Fixed memory leak in region garbage cache
    • Fix return type of ShenandoahHeapRegion::region_size_words_jint()
    • Improved comment about AArch64bit addressing in assembler.
    • Leak mutex in ShenandoahTaskTerminator
    • Make sure C2 arguments are not used when C2 is disabled.
    • Refactor parallel ClassLoaderData iterator
    • Revert G1 changes and bring shared BitMap
    • Add missing cmpoops() declaration to AArch64 macro assembler. Back out matrix related code from AArch64 interpreter.
    • Fix build without precompiled headers.
    • Fixed build issues on Windows

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to be able to handle this key.

SHA256 checksums:

  • b229f2aa5d743ff850fa695e61f65139bb6eca1a9d10af5306ad3766fcea2eb2 icedtea-3.5.1.tar.gz
  • 801497164168171b7aedae37aabde7821e0df0cfe76736054a2a91f96ae3d0b0 icedtea-3.5.1.tar.gz.sig
  • 8eaa6ac93d4a1989460109246f78427acc5493f847c7b2fc80d3a5d918d811c9 icedtea-3.5.1.tar.xz
  • 9ac863f00398ac51bf62aa4a1e22889baf5a088256755f3dde849849a2bc518f icedtea-3.5.1.tar.xz.sig

The checksums can be downloaded from:

A 3.5.1 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-3.5.1.tar.gz

or:

$ tar x -I xz -f icedtea-3.5.1.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-3.5.1/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

Advogato has been archived.

When I started working on Free Software, Advogato was the “social network” where people would keep their diaries (I don’t believe we called them blogs yet). I still remember how proud I was when people certified me as Apprentice.

A lot of people on Planet Classpath still have their diaries imported from Advogato. robilad, audriusa, saugart, rmathew, Anthony, kuzman, jvic, jserv, aph, twisti, Ringding: please let me know if you have found a new home for your diary.

Hi Fedora Packagers,

rpmbuild in rawhide contains a number of debuginfo improvements that will hopefully make various hacks in spec files redundant.

If you have your own way of handling debuginfo packages, call find-debuginfo.sh directly, need hacks to work around debugedit limitations, or split your debuginfo package by hand, then please try out rpmbuild in rawhide and read below for some macros you can set to tweak debuginfo package generation.

If you still need hacks in your spec file because setting macros isn’t enough to get the debuginfo packages you want then please let us know. Also please let us know about packages that need to set debuginfo rpm macros to non-default values because they would crash and burn with the default settings (best to file a bug against rpmbuild).

The improvements have been mainly driven by the following two change proposals for f27 (some inspired by what other distros do):

https://fedoraproject.org/wiki/Changes/ParallelInstallableDebuginfo
https://fedoraproject.org/wiki/Changes/SubpackageAndSourceDebuginfo

The first is completely done and has been enabled by default for some months now in rawhide. The second introduces two new macros to enable separate debugsource and sub-debuginfo packages, but has not been enabled by default yet. If people like the change and no bugs are found (and fesco and releng agree) we can enable them for the f27 mass rebuild.

If your package already splits debuginfo packages in a (common) source package and/or sub-debuginfo packages, please try out the new macros introduced by the second change. You can enable the standard splitting by adding the following to your spec file:

%global _debugsource_packages 1
%global _debuginfo_subpackages 1

Besides the above two changes, debuginfo packages can now be built (and are by default in rawhide) by running debuginfo extraction in parallel. This should speed up builds with lots of binaries/libraries. If you invoke find-debuginfo.sh by hand, you will most likely want to add %{?_smp_mflags} as an argument to get the parallel processing speedup.

If your package invokes find-debuginfo.sh by hand, please also take a look at all the new options that have been added. Note that almost all options can now be changed by setting (or undefining) rpm macros. Using the rpm macros is preferred over invoking find-debuginfo.sh directly, since it means you automatically get any defaults and improvements that might require new find-debuginfo.sh arguments.

Here is an overview of the various debuginfo rpm macros that you can define or undefine in your spec file with the latest rpmbuild:

#
# Should an ELF file processed by find-debuginfo.sh having no build ID
# terminate a build?  This is left undefined to disable it and defined to
# enable.
#
%_missing_build_ids_terminate_build    1

#
# Include minimal debug information in build binaries.
# Requires _enable_debug_packages.
#
%_include_minidebuginfo        1

#
# Include a .gdb_index section in the .debug files.
# Requires _enable_debug_packages and gdb-add-index installed.
#
%_include_gdb_index    1

#
# Defines how and if build_id links are generated for ELF files.
# The following settings are supported:
#
# - none
#   No build_id links are generated.
#
# - alldebug
#   build_id links are generated only when the __debug_package global is
#   defined. This will generate build_id links in the -debuginfo package
#   for both the main file as /usr/lib/debug/.build-id/xx/yyy and for
#   the .debug file as /usr/lib/debug/.build-id/xx/yyy.debug.
#   This is the old style build_id links as generated by the original
#   find-debuginfo.sh script.
#
# - separate
#   build_id links are generated for all binary packages. If this is a
#   main package (the __debug_package global isn't set) then the
#   build_id link is generated as /usr/lib/.build-id/xx/yyy. If this is
#   a -debuginfo package (the __debug_package global is set) then the
#   build_id link is generated as /usr/lib/debug/.build-id/xx/yyy.
#
# - compat
#   Same as for "separate" but if the __debug_package global is set then
#   the -debuginfo package will have a compatibility link for the main
#   ELF /usr/lib/debug/.build-id/xx/yyy -> /usr/lib/.build-id/xx/yyy
%_build_id_links compat

# Whether build-ids should be made unique between package version/releases
# when generating debuginfo packages. If set to 1 this will pass
# --build-id-seed "%{VERSION}-%{RELEASE}" to find-debuginfo.sh which will
# pass it onto debugedit --build-id-seed to be used to prime the build-id
# note hash.
%_unique_build_ids      1

# Do not recompute build-ids but keep whatever is in the ELF file already.
# Cannot be used together with _unique_build_ids (which forces recomputation).
# Defaults to undefined (unset).
#%_no_recompute_build_ids 1

# Whether .debug files should be made unique between package version,
# release and architecture. If set to 1 this will pass
# --unique-debug-suffix "-%{VERSION}-%{RELEASE}.%{_arch}" to find-debuginfo.sh
# to create debuginfo files which end in -<ver>-<rel>.<arch>.debug
# Requires _unique_build_ids.
%_unique_debug_names    1

# Whether the /usr/debug/src/<package> directories should be unique between
# package version, release and architecture. If set to 1 this will pass
# --unique-debug-src-base "%{name}-%{VERSION}-%{RELEASE}.%{_arch}" to
# find-debuginfo.sh to name the directory under /usr/debug/src as
# <name>-<ver>-<rel>.<arch>.
%_unique_debug_srcs     1

# Whether rpm should put debug source files into its own subpackage
#%_debugsource_packages 1

# Whether rpm should create extra debuginfo packages for each subpackage
#%_debuginfo_subpackages 1

# Number of debugging information entries (DIEs) above which
# dwz will stop considering a file for multifile optimizations
# and enter a low memory mode, in which it will optimize
# in about half the memory needed otherwise.
%_dwz_low_mem_die_limit          10000000
# Number of DIEs above which dwz will stop processing
# a file altogether.
%_dwz_max_die_limit              50000000

%_find_debuginfo_dwz_opts --run-dwz\\\
--dwz-low-mem-die-limit %{_dwz_low_mem_die_limit}\\\
--dwz-max-die-limit %{_dwz_max_die_limit}

If there are settings missing that would be useful, bugs with the default settings, or defaults that should be changed, please do file a bug report.

Dear Members of the JCP Executive Committee:

I am the Specification Lead for JSR 376, the Java Platform Module System.

The goal of this JSR is to design a module system that is approachable by all developers for use in their own code yet scalable to the modularization of the Java SE Platform itself, as stated in the JSR submission which the EC approved in December 2014.

The present draft Specification achieves that goal, as further expressed in a set of requirements agreed to by the JSR 376 Expert Group in April 2015. The Specification reflects input not only from the EG but, also, from an active community of developers that includes the maintainers of some of the most widely-used open-source Java libraries, frameworks, and tools. We are still working on a few minor open issues, but I am confident that we can resolve them in short order. Just yesterday I posted a revised proposal for the automatic-modules issue raised by some of the core Maven developers, and they have responded positively.

The Public Review Ballot for this JSR will close in a few days. Red Hat Middleware has indicated, as you know, that they will not support this JSR. That is disappointing, but not surprising.

Red Hat Middleware initially agreed to the goals and requirements of the JSR, but then worked consistently to undermine them. They attempted to turn this JSR into something other than it was intended to be. Rather than design one module system that is both approachable and scalable they instead wanted to design a “meta” module system via which multiple different module systems could interoperate on an intimate basis. I can only assume that they pursued this alternate goal in order to preserve and protect their home-grown, non-standard module system, which is little used outside of the JBoss/Wildfly ecosystem.

Designing a “meta” module system would be an interesting project, but it would be even larger in scope and much more difficult than this JSR. By focusing on an audience of module-system experts it would likely result in a design that is far from approachable by all developers. That is why I repeatedly pointed out to Red Hat Middleware that many of the features they advocated were out of scope, but they chose not to accept those decisions.

IBM has declared publicly that they will cast an explicit vote against this JSR. That is disappointing also—and surprising.

IBM has said very little during the course of this JSR. After they announced that they would vote against it they later sent a list of specific issues to the EG, but only in response to a request from another EG member. None of those issues is new, many of them were discussed long ago, and IBM was silent during most of the discussions.

IBM’s recent position appears rooted in a vague desire for “closer consensus” amongst EG members. I would prefer more consensus too, but that is not possible given Red Hat Middleware’s position. I can only conclude that IBM has decided that their interests are best served by delaying this JSR and, also, JSR 379 (Java SE 9)—which is regrettable.

Is the present Specification for this JSR perfect? No, of course not. It does, however, reflect years of development, testing, and refinement with active feedback from many developers.

Could we make the Specification better if we spent more time on it? Yes, of course we could. What we have now does not solve every practical modularity-related problem that developers face, but it meets the agreed goals and requirements and is a solid foundation for future work. It is time to ship what we have, see what we learn, and iteratively improve. Let not the perfect be the enemy of the good.

Should we further delay this JSR—possibly for years—in order to gain “closer consensus” by pursuing a different goal that will likely result in a design so bloated and complex that no working developer would ever use it? I do not see how that could possibly be in the best interest of the Java community.

As you consider how to cast your vote I urge you to judge the Specification on its merits and, also, to take into account the nature of the precedent that your vote will set.

A vote against this JSR due to lack of consensus in the EG is a vote against the Java Community Process itself. The Process does not mandate consensus, and for good reason. It intentionally gives Specification Leads broad decision powers, precisely to prevent EG members from obstructing progress in order to defend their own narrow interests. If you take away that authority then you doom future JSRs to the consensus of self-serving “experts.”

Many failed technologies have been designed in exactly that way.

That is not the future that I want for Java.

Respectfully yours,
Mark Reinhold

After (too) many years, finally a new release of Graphos.

This release has two new important features: cusps and images.

Splines now support cusps, that is, asymmetrical left and right tangents at a control point.

There is a new image item object. It allows you to paste a (preferably small) image and move it around as if it were a box, resizing it at will.
This is quite useful for manually overlaying lines and tracing images.
Since the image is encoded directly in the file and not saved separately, it is not very efficient, and using large images is not advisable. In the future a new bundle file format needs to be implemented.



The screenshot shows an example of tracing the GNUstep logo imported as a bitmap, using the cusp point in the upper right.

Many bug fixes and improvements in these past years, some major:
  •  Text improvements (editor display, reading/saving, Mac support)
  •  Circles/Ovals save/read fix
  •  Properties inspector fixes
  •  Portability fixes
To support cusps, the file format changed again; reading old formats is still supported.

After almost fifteen years I have decided to quit working on IKVM.NET. The decision has been a long time coming. Those of you that saw yesterday’s Twitter spat, please don’t assume that was the cause. It rather shared an underlying cause. I’ve slowly been losing faith in .NET. Looking back, I guess this process started with the release of .NET 3.5. On the Java side things don’t look much better. The Java 9 module system reminds me too much of the generics erasure debacle.

I hope someone will fork IKVM.NET and continue working on it. Although, I’d appreciate it if they’d pick another name. I’ve gotten so much criticism for the name over the years, that I’d like to hang on to it 😊

I’d like to thank the following people for helping me make this journey or making the journey so much fun: Brian Goetz, Chris Brumme, Chris Laffra, Dawid Weiss, Erik Meijer, Jb Evain, John Rose, Mads Torgersen, Mark Reinhold, Volker Berlin, Wayne Kovsky, The GNU Classpath Community, The Mono Community.

And I want to especially thank my friend Miguel de Icaza for his guidance, support, inspiration and tireless efforts to promote IKVM.

Thank you all and goodbye.

[NOTE: This article talks about commercial products and contains links to them. I do not receive any money if you buy those tools, nor do I work for or am affiliated with any of those companies. The opinions expressed here are mine and the review is subjective.]

This is my attempt at a review of Spitfire Audio BT Phobos. Before diving into the review, and since I know I will be particularly critical of some aspects, I think it’s fair to assess the plugin right away: BT Phobos is an awesome tool, make no mistake.

BT Phobos is a “polyconvolution” synthesiser. It is, in fact, the first “standalone” plugin produced by Spitfire Audio, one of the companies I respect the most when it comes to music production and sample-based instruments.

The term polyconvolution is used by the Spitfire Audio team to indicate the simultaneous use of three convolvers for four primary audio paths: you can send any amount of the output of each of the four primary sources (numbered 1 to 4) to each of the three convolution engines (named W, X and Y).

[Screenshot: Source material controls]

There is a lot of flexibility in the mixing capabilities; there are, of course, separate dry/wet signal knobs that send a specific portion of the unprocessed source material to the “amplifier” module, controls for how much of the signal goes to the convolution circuits, and finally controls for how much of each convolution engine applies to each source sound.

This last bit is achieved by means of an interesting nabla-shaped X/Y pad: by positioning the icon that represents the source module closer to a corner, it’s possible to activate just the convolution engine that corner represents; for example, top left is the W engine, top right the X and bottom the Y. Manually moving the icon gradually introduces contributions from the other engines, and double-clicking on the icon makes all convolvers contribute equally to the wet sound, by positioning it at the center of the nabla.

[Screenshot: The convolution mixer]

Finally, each convolver has a control that changes the output level of the convolution engine before it reaches its envelope shaper. Spitfire Audio has released a very interesting flow diagram that shows the signal path in detail, which is linked below for reference.

BT Phobos signal path

In addition to the controls just described, the main GUI has basic controls to tweak the source material with an ADSR envelope, directly accessible below each of the main sound sources as well as the convolution modules; more advanced settings are available by clicking on the number or letter that identifies the module.

[Screenshot: The advanced controls interface]

An example of such controls is the Hold parameter, which lets the user adjust how long the sound is held at full level before entering the Decay phase of its envelope. Other useful tools are the sampling and IR offset controls, which allow you to tweak parameters like the starting point of the material, its quantisation, and its Speed (the playback speed of the samples, as a function of the host tempo); there is also a control to influence the general pitch of the sound; finally, a simple but effective section is dedicated to filtering – although a proper EQ is missing – as well as panning and level adjustments.

All those parameters are particularly important when using loops, but they also contribute to shaping the sound of the pitched material, and they can be randomised for interesting effects and artefacts generated from the entropy (you can also randomise just the material selection rather than all the parameters).

Modulation is also present, of course, with LFOs of various kinds that can be used to modulate basically everything. You can access them either by clicking on the mappings toggle below the ADSR envelope of each section, or by using the advanced settings pages.

The amount of tweaking that can be done to the material in both the source and the convolution engines is probably the most important aspect of BT Phobos: it gives an excellent amount of freedom to create new sounds from what’s available, which is already a massive amount of content, and allows you to build wildly different patches with a bit of work. It’s definitely not straightforward, though, and it takes time to understand the combined effect that each setting has on the whole.

Since the material is polyphonic, the Impulse Responses for the convolution are created on the fly; in fact, one interesting characteristic of BT Phobos is that there is no difference between material for the convolution engines and material for the source modules, as both draw from the same pool of sounds.

[Screenshot: BT Phobos’s beautiful GUI]

There is a difference in the types of material, though: loop-based samples are, well, looped (and tempo-synced), and their pitch does not change with the key that triggers them (although you can still affect the general pitch of the sound with the advanced controls), while “tonal” material is pitched and follows the MIDI notes.

One note about the LFOs: the mappings are “per module”. In other words, it is possible to modulate almost every parameter inside a single module, be it one of the four input sources or one of the three convolution engines, but there seems to be no way to define a global mapping of some kind. For example, I found a very nice patch from Mr. Christian Henson (who incidentally made, at least in my opinion, the best and most balanced overall presets), and I noticed I could make it even more interesting by using the modulation wheel. I wanted to modulate the CC1 message with an LFO (in fact, ideally it would be even better to have access to a custom envelope, but BT Phobos doesn’t have any for modulation use), but I could not find a way to do that other than using Logic’s own MIDI FX. I understand that MIDI signals are generated outside the scope of the plugin, but it would be fantastic to have the option of tweaking and modulating everything from within the synth itself.

All the sources and convolvers can be assigned to separate parts of the keyboard by tweaking the mapper at the bottom of the GUI. It is not possible to map a sound to start from an offset in the keyboard controls – for example, to play C1 on the keyboard but trigger C2, or any other note – but of course you can change the global pitch, which effectively achieves the same result; as said before, it can also be modulated with an LFO or via DAW automation, for more interesting effects.

[Screenshot: Keyboard mapping tool]

Indeed, the flexibility of the tool, and the number of options at your disposal for tweaking the sounds, are very impressive. Most patches are very nice and ready to be used as they are, and blend nicely with lots of disparate styles. Some patches are very specific, though, and pose a challenge to use. Generally, I would consider these starting points for exploration rather than “final”.

When reading about BT Phobos in the weeks before its release, many people asked whether you could add your own sounds to it or not. It’s not possible, unfortunately.

At first, I thought that wasn’t a limitation or a deal breaker. I still think it’s not a deal breaker, but I do see the added value BT Phobos would have beyond being a standalone synth if it let you give your own content the “Phobos treatment”, as opposed to recreating the same kind of signal path manually with external tools. That is entirely possible, of course, for example with Alchemy and Space Designer (which are both included in MainStage, so you can get them for a staggering 30 euros if you are a Mac user, even if you don’t use Logic Pro X!), but we would be trading away the immediacy that BT Phobos delivers.

That, maybe, is my main criticism of this synth, and I hope Spitfire Audio turns BT Phobos into a fully fledged tool for sound design over time, maybe enabling access to spectral shaping in some form or another, so we can literally paint over (or paint away!) portions of the sound. That is something you can do with iZotope Iris or Alchemy, and it is a very powerful way to shape a sound and do sound design in general.

Another thing that is missing is a sound effect module, although I don’t know how important that is, given that there are thousands of outstanding plugins that do all sorts of effects, from delay to chorus etc. In fact, many patches benefit from added reverb (I use Eventide Blackhole and found it works extremely well with BT Phobos, since it’s also prominently used for weird sound effects). But it could be interesting to play by putting some effects (including a more proper EQ section) in various places in the signal path, although it’s all too easy to generate total chaos from such experimentation, so it’s possible that Spitfire Audio simply chose to leave this option for another time and instead focus on a better overall experience.

And there’s no arpeggiator! Really!

The number of polyphonic voices can be altered. Spitfire Audio states that the synth tweaks the number of voices at startup to match the characteristics of your computer, but I can’t confirm that, since every change I make seems to persist, even if I occasionally hear some pops and cracks at higher settings. Nevertheless, the CPU usage is pretty decent unless you go absolutely crazy with the polyphony count. I also noted that the number affects the clarity of the sound. This is understandable, since a higher count means more notes can be generated at the same time, which means more things are competing for the same spectrum, and things can become very confusing very quickly. On the other hand, a lower polyphony count has a bad impact on how the notes are generated. I sometimes feel that things just stop generating sound, which is counter-intuitive and very disturbing, especially since it’s very easy to reach a high polyphony count with all those sources and convolvers.

Also worth noting is that, by nature, some patches have very wild differences in their envelopes and level settings, which means it’s all too easy to move from a quiet to a very loud patch just by clicking “next” (which is possible in Logic, at least, with the next/prev patch buttons on top of the plugin main frame). The synth does not stop the sound, nor does it make any attempt to fade from one sound to the next; instead, the convolutions simply keep working on the next sample in the queue with the new settings! I still have to decide if this is cool or not, and perhaps it’s not intentional, but I can see how this could be used to automate patch changes in some clever way during playback! And indeed, I was able to create a couple of interesting side effects just by changing between patches at the right time.

More on the sounds. The amount of content is really staggering, and simply cycling through the patches does not do justice to this synth, at all!

What BT Phobos wants is a user who spends time tweaking the patches and playing with the source material to get the most out of it. However, it’s easy to see how limiting this may feel at the same time, particularly with the more esoteric and atonal sounds, and there’s certainly a limit on how good a wooden stick convolved with a thin aluminium can may sound, so indeed some patches do feel repetitive at times, as does the source material. There are quite a few very similar drum loops, for example, or variously pitched “wind blowing into a pipe” kinds of things.

This is a problem common to other synths based on the idea of tweaking sounds from the environment, though. For example, I have the amazing Geosonics from Soniccouture, an almost unusable library that, once tweaked, is capable of amazing awesomeness. Clearly, the authors of both synths – but this is especially true for BT Phobos, I think – are looking at an audience that is capable of listening through the detuned and dissonant sound waves and shaping a new form of music.

This is probably the reason why so many of the pre-assembled patches dive the user full speed into total sound design territory; however, and this is another important point of criticism, this is sound design that has already been done for you. A lot of the BT patches, in particular, are clearly BT patches; using them as they are means you are simply redoing something that has already been done before, and, even with a very experimental feeling still strongly present, it’s not totally unheard-of or new.

For example, I also happen to have BreakTweaker and Stutter Edit (tools that also originally come from BT), and I could not resist the temptation to play something that resembles BT’s work on “This Binary Universe” or “_” (fantastic albums)! While this seems exciting (BT in a box! And you can also see the democratising aspect of BT Phobos: I can do that in half an hour instead of six months of manual CSound programming!), it’s an unfortunate and artificial limitation on a tool that is otherwise a very powerful enabler, capable of bringing complex sound design one step closer to the general public. Having the ability to process your own sounds would mitigate this aspect, I think.

I do see how this is useful for a composer in need of a quick solution for an approaching deadline, even with the most experimental tones, though: those patches can resolve a deadlock or take you out of an impasse in a second.

The potential for BT Phobos to become a must-have tool for sound design is all there, especially if Spitfire Audio keeps adding content, perhaps more varied (and, even better, allows loading your own content). The ability to shape the existing sounds already makes it very usable. I don’t think it’s a general tool at this stage, though, and it definitely should not be the first synth or sound-shaping processor in your arsenal, especially if you are starting out now.

But it’s not just a one-trick pony either: it offers quite a lot of possibilities, and the more you work with it, the more addictive it becomes. I can see Spitfire Audio soon offering this synth within a collection comprising some of their more experimental stuff like LCO and Enigma, which would be very nice indeed.

It’s unfortunate that Spitfire Audio does not offer an evaluation period: contrary to most of their offerings, BT Phobos needs time to be fully grasped and is anything but immediate (well, unless you are happy with the default patches or you really just need to “get out of trouble” quickly, but be careful with that, because the tax is on originality). It can, and does, evolve over time, as its convolutions do, and it can absolutely deliver total awesomeness if used correctly.

Most patches are also usable out of the box, and especially by adding some reverb or doing some post processing with other tools, it’s possible to squeeze even more life out of them.

Overall, I do recommend BT Phobos: it is a wonderful, very addictive synthesiser.


I don’t usually post photos of my family, especially my kids, but this is a very special occasion, that needs celebration.

On the 29th, at 7:31 in the morning (and what a long night!), my second child, Luca, was born in Hamburg. I guess this makes him an official “hamburger” now 🤣

Luca was named after his uncle, one of the most eclectic and interesting people I have ever met, and it was a great honour for us.

I don’t have many words, really: being a father is amazing, and I’m very proud of, and very much in love with, my kids. Very, very in love.

Welcome Luca, son of Hamburg, and citizen of the World!

[Photo: Luca and Fiorenza 🙂]

P.S. I just realised that it’s been well over a year since I last posted anything. I will try to change that; I already have a few things to share that will probably be very interesting!


Quantum Curling

Last week we had a work week at Mozilla’s Toronto office for a bunch of different projects including Quantum DOM, Quantum Flow (performance), etc. It was great to have people from a variety of teams participate in discussions and solidify (and change!) plans for upcoming Firefox releases. There were lots of sessions going on in parallel and I wasn’t able to attend them all but some of the results were written up by the inimitable Ehsan in his fourth Quantum Flow newsletter.

Near the end of the week, Ehsan gave an impromptu walkthrough of the Gecko profiler. I’m planning to take some of the tips he gave, and that were discussed, and add them to the documentation for the profiler. If you’re interested in helping, please let me know!

The photo above is of us going curling at the High Park Curling Club. It was a lot of fun and I was happy that only one other person had ever curled before so it was a unique experience for almost everyone!

As previously reported, the JSR 269 annotation processing APIs in the javax.lang.model and javax.annotation.processing packages are undergoing maintenance review as part of Java SE 9.

All the planned changes to the JSR 269 API are in JDK 9 build 164, downloadable as an early access binary. Of note new in build 164 is the annotation type javax.annotation.processing.Generated, meant to be a drop-in replacement for javax.annotation.Generated since the latter is not in a convenient module.

Please try out your existing annotation processors -- compiling them, running them, etc. -- on JDK 9 and report your experiences, good or bad, to compiler-dev@openjdk.java.net.

As has been done previously during Java SE 7 and Java SE 8, the JSR 269 annotation processing API is undergoing a maintenance review (MR) as part of Java SE 9.

Most of the API changes are in support of adding modules to the platform, both as a language structure in javax.lang.model.* as well as another interaction point in javax.annotation.processing in the Filer and elsewhere. A small API change was also done to better support repeating annotations. A more detailed summary of the API changes is included in the MR material.

The API changes are intended to be largely compatible with the sources of existing processors, their binary linkage, as well as their runtime behavior. However, it would be helpful to verify that your existing processors work as expected when run under JDK 9. JDK 9 early access binaries are available for download. Please report experiences running processors under JDK 9 as comments here or to me as email. Feedback on the API changes can be sent to compiler-dev@openjdk.java.net.

Note: this article is also available in German.

What is Conversations?

Conversations is an app for Android Smartphones for sending each other messages, pictures, etc, much like WhatsApp. However, there are a number of important differences to WhatsApp:

  • Conversations does not use your phone number for identification, and doesn’t read your address book to find contacts. It uses an ID that looks much like an email address (the so-called Jabber-ID), and you can find contacts by exchanging Jabber-IDs with people, just like you do with email addresses, phone numbers, etc.
  • Conversations uses an open protocol called XMPP, that is used by many other programs on a wide range of systems, for example on desktop PCs.
  • Conversations is Open Source, i.e. everybody can inspect the source code, check it for security issues, see what the program actually does, or even modify and distribute it.
  • XMPP builds on a decentralized infrastructure. This means that not one company is in control of it, but instead there are many providers, or you can even run your own server if you want.
  • Conversations does not collect and sell any information from you or your contacts.

There are more differences, but I don’t want to go into detail here; others have already done it, and better (in German).

Install Conversations

From Google Play

Conversations is easily installed from Google Play. However, it currently costs 2,39€. I’d recommend that everybody who can buy it do so; it supports the development of this really good app.

Alternative: From F-Droid

For all those who cannot or don’t want to spend the money, there is another way to get it for free: it is available in F-Droid, an alternative app store that only distributes Open Source software. To use it, you first need to install F-Droid. Then you can start F-Droid, search for Conversations, and install it.

Set-up Jabber account

The next step is to set up a Jabber account. You need two things: an ID and a provider. The first part, the ID, you can choose freely, e.g. a fantasy name or something like firstname.surname; this is really up to you. In order to find a provider, I recommend this list: https://gultsch.de/compliance_ranked.html. The providers at the top of the list have the best support for the XMPP features that are relevant for smartphone users. I’d recommend trashserver.net because it supports in-band registration (directly from Conversations) and is very well maintained. If you want to further support the developer of Conversations, I’d recommend an account on conversations.im; this currently costs 8€/year. I think it is worth it, but the choice is yours.

If you choose, for example, the ID ‘joe.example’ on the provider ‘provider.org’, then your Jabber-ID is joe.example@provider.org. Once you have decided on a Jabber-ID, you can easily register an account by starting Conversations, entering the Jabber-ID in the set-up screen, checking the box ‘register new account on server’, entering your preferred password twice and confirming it.

Adding contacts

Adding contacts works differently than in WhatsApp: you have to add contacts to your roster manually. Tap on the ‘+’ symbol next to the little people icon, enter your contact’s Jabber-ID and confirm it. Now you’re ready to start chatting. Have fun!




After turning off comments on this blog a few years ago, the time has now come to remove all the posts containing links. The reason is again pretty much the same as it was when I decided to turn off the comments - I still live in Hamburg, Germany.

So, I’ve chosen to simply remove all the posts containing links. Unfortunately, that was pretty much all of them. I only left up my old post explaining why this blog allows no comments, now updated to remove all links, of course.

Over the past years, writing new blog posts here has become increasingly rare for me. Most of my 'social media activity' has long moved over to Twitter.

Unfortunately, I mostly use Twitter as a social bookmarking tool, saving and sharing links to things that I find interesting.

As a consequence, I've signed up for a service that automatically deletes my tweets after a short period of time. I'd link to it, but ...

A year or so ago I was asked to debug a crash in the Firefox devtools.  Crashes are easy!  I fired up gdb and reproduced the crash… which turned out to be in some code JITted by SpiderMonkey.  I was immediately lost; even a simple bt did not work.  Someone more familiar with the JIT — hi Shu — had to dig out the answer :-(.

I did take the opportunity to get some information from him about how he found the result, though.  He pointed me to the code responsible for laying out JIT stack frames.  It turned out that gdb could not unwind through JIT frames, but it could be done by hand — so I resolved then to eventually fix this.

Phase One

I knew from my gdb hacking that gdb has a JIT unwinding API.  Actually — and isn’t this the way most programs end up working? — it has two.

The first JIT API requires some extra work on the part of the JIT: it constructs an object file, typically ELF and DWARF, in memory, then calls a hook.  GDB sets a breakpoint on this hook and, when hit, it reads the data from the inferior.  This lets the JIT provide basically any kind of information — but it’s pretty heavy.

So, I focused my attention on the second API.  In this mode, the JIT author would provide a shared library that used some callbacks to inform gdb of the details of what was going on.  The set of callbacks was much more limited, but could at least describe how to unwind the registers.  So, I figured that this is what I would do.

But… I didn’t really want to write this in C.  That would be a real pain!  C is fiddly and hard to deal with, and it would mean constant rebuilding of the shared library while debugging, and SpiderMonkey already had a reasonable number of gdb-python scripts — surely this could be done in Python.

So I took the quixotic approach, namely writing a shared library that used the second gdb JIT API but only to expose this API to Python.

Of course, this turned out to be Rube Goldbergian.  Various parts of the gdb Python API could not be called from the JIT shared library, because those bits depended on other state in gdb, which wasn’t set properly when the JIT library was being called.  So, I had gdb calling into my shared library, which called my Python code, which then invoked a new gdb command (written in Python and supplied by my package) — that existed solely for the purpose of setting this internal state properly — and that in turn invoked the code I wanted to run, say to fetch memory or a register or something.
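
(For the morbidly curious, the trampoline idea looks roughly like this. This is a hypothetical sketch, not the actual SpiderMonkey code; the command name and helper are invented for illustration.)

import gdb

class _Trampoline(gdb.Command):
    # A throwaway gdb command whose only job is to run a stashed
    # callback from inside gdb's command loop, where the Python API
    # sees properly initialized internal state.
    def __init__(self):
        super(_Trampoline, self).__init__("jit-trampoline", gdb.COMMAND_NONE)
        self._callback = None

    def invoke(self, arg, from_tty):
        self._callback()

_TRAMP = _Trampoline()

def call_in_gdb_context(callback):
    # Called (indirectly) from the JIT shared library: bounce through
    # the command so the real work runs with gdb's state set up.
    _TRAMP._callback = callback
    gdb.execute("jit-trampoline")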

Computer Science!

Well, that took a while.  But it sort of worked!  And maybe I could just keep it on GitHub and not put it in Mozilla Central and avoid learning about the Firefox build system and copying in some gdb header file and license review and whatnot.

So I started writing the actual Python code… OMG.  And see below since you will totally want to know about this.  But meanwhile…

… while I was hacking away on this crazy idea, someone implemented the much more sane idea of just exposing gdb’s unwinder API to gdb’s Python layer.

Hmm… why didn’t I do that?  Well, I left gdb under a bit of a cloud, and didn’t really want to be that involved at the time.  Plus, you know, gdb is a high quality project; which means that if you write a giant patch to expose the unwinding API, you have to be prepared for 17 rounds of patch review (this really happened once), plus writing documentation and tests.  Sometimes it’s just easier to channel one’s inner Rube.

Phase Two

The integrated Python API was a great development.  Now I could delete my shared library and my insane trampoline hacks, and focus on my insane unwinding code.
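For flavor, here’s roughly the shape of a Python unwinder (a minimal sketch, not the real SpiderMonkey code; in_jit_code is a hypothetical stand-in for the frame-recognition logic):

import gdb
from gdb.unwinder import Unwinder

class FrameId(object):
    # gdb identifies a frame by a stack pointer / program counter pair.
    def __init__(self, sp, pc):
        self.sp = sp
        self.pc = pc

class JitUnwinder(Unwinder):
    def __init__(self):
        super(JitUnwinder, self).__init__("jit-unwinder-sketch")

    def __call__(self, pending_frame):
        pc = pending_frame.read_register("pc")
        sp = pending_frame.read_register("sp")
        if not in_jit_code(pc):  # hypothetical predicate: is pc in JIT code?
            return None          # decline; gdb's own unwinders take over
        unwind_info = pending_frame.create_unwind_info(FrameId(sp, pc))
        # Tell gdb where the caller's registers were saved, e.g.:
        #   unwind_info.add_saved_register("pc", saved_pc)
        return unwind_info

gdb.unwinder.register_unwinder(None, JitUnwinder(), replace=True)

Returning None when the frame isn’t yours is the crucial part: it keeps the unwinder from hijacking ordinary frames.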

A lot of this work was straightforward, in the sense that the general outline was clear and just the details remained.  The details amount to things like understanding the SpiderMonkey frame descriptor (which partly describes the previous frame and partly the new frame; there’s one comment explaining this that somehow eluded me for quite a while); duplicating the SpiderMonkey JIT unwinding code in Python; and of course carefully reading the SpiderMonkey code that JITs the “entry frame” code to understand how registers are spilled.

Naturally, while doing this it turned out that I was maybe the first person to use these gdb APIs in anger.  I found some gdb crashes, oops!  The docs would have been impenetrable, except I already knew the underlying C APIs on which they were based… whew!  The Python API was unexpectedly picky in other areas, too.

But then there was also some funny business, one part in gdb, and one part in SpiderMonkey.

GDB is probably more complicated than you realize.  In this case, the complexity is that, in gdb, each stack frame can have its own architecture.  This seemingly weird functionality is actually used; I think it was invented for the SPU, but some other chips have multiple modes as well.  What this means, though, is that the question “what architecture is this program?” is not well-defined, and anyway gdb’s Python layer doesn’t provide a way to find whatever approximation would make sense in your specific case.  However, for the SpiderMonkey unwinder the question actually is well-defined, and we’d like the answer in order to choose which unwinder to use.

For this problem I settled on the probably terrible idea of checking whether a given register is available.  That is, if you see “$rip”, you can guess it’s x86-64.
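In Python that check is only a few lines (a sketch; the exact exception gdb raises for an unknown register has varied, hence the broad catch):

def guess_arch(pending_frame):
    # Probe for an architecture-specific register name.
    try:
        pending_frame.read_register("rip")
        return "x86-64"
    except Exception:
        return None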

The other problem here is that gdb thinks that, since you wrote an unwinder, it should get the first stab at unwinding.  That’s very polite!  But for SpiderMonkey, deciding “hey, is this PC in some code the JIT emitted?” is actually a real pain, or at least outside the random bits of it I learned in order to make all this work.

Aha!  I know, there’s probably a Python API to say “is this address associated with some shared library?”  I remembered reading and/or reviewing a patch… but no, gdb.solib_name is close but doesn’t do the right thing for addresses in the main executable.  WAT.

I tried several tricks without success, and in the end I went with parsing /proc/maps to get the mappings to decide whether a given frame should be handled by this unwinder or by gdb.  Horrible.  And fails with remote debugging.
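The parsing itself is simple enough (a sketch, assuming a local inferior whose pid comes from gdb.selected_inferior().pid):

def executable_mappings(pid):
    # Return (start, end, path) for each executable mapping of the process.
    result = []
    with open("/proc/%d/maps" % pid) as f:
        for line in f:
            fields = line.split()
            # fields[0] is "start-end", fields[1] is the permission string.
            if "x" not in fields[1]:
                continue
            start, end = [int(x, 16) for x in fields[0].split("-")]
            result.append((start, end, fields[5] if len(fields) > 5 else ""))
    return result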

Luckily, nobody does remote debugging.

Remote Debugging

Oh, wait, people do remote debugging at Mozilla all the time.  They don’t call it “remote debugging”, though — they call it “using RR”, which, while it runs locally, appears remote to gdb; and, importantly, during replay mode it fakes the PID, and does other deep magic, though not deep enough to extend to making a fake map file that could be read via gdb’s remote get command.

By the way, you should be using RR.  It’s the best advance in debugging since, well, gdb.  It’s a process record-and-replay program, but unlike gdb’s built-in reverse debugging, it handles threads properly and has decent performance.

Oh Well

Oh well.  It just won’t work remotely.  Or at least not until fellow Mozillian (this always seems like it should be “Mozillan” to me, but it’s not, there really is that extra “i”) and all-star Nicolas Pierron wrote some additional Python to read some SpiderMonkey tables to make the decision in a more principled way.  Now it will all work!

Though looking now I wonder if I dreamed this, because the code isn’t checked in.  I know he had a patch but my memory is a bit fuzzy — maybe in the end it didn’t work, because RR didn’t implement the qGetTLSAddr packet, which gdb uses to read thread-local storage.  Did I mention the thread-locals?

The Real Start of the Story

So, way back at the beginning, during my initial foray into this code, I found that a crucial bit of information — the appropriately-named TlsPerThreadData — was stashed away in a thread-local variable.  Information stored here is needed by the unwinder in order to unwind from a C++ frame into a JIT frame.

Only, Firefox didn’t use “real” thread-local variables, the things that so many glibc and gcc hackers put so much effort into micro-optimizing.  No, it just used a template class that wrapped pthread_setspecific and friends in a relatively ergonomic way.

Naturally, for an unwinder this is a disaster.  Why?  Unwinding is basically the dissection of the stack; but in order to compute the value of one of these thread-local-storage objects, the unwinder would have to make some function calls in the inferior (in fact this prevents it from working on OSX).  But these would affect the stack, and also potentially let other inferior code (in other threads — remember, gdb is complicated and you can exert various unusual kinds of control like this) run as well.

So I neglected to mention the very first step: changing Firefox to use __thread.  (Ok, I didn’t really neglect to mention it, I was just being lazy and anyway it’s a shaggy dog story.)

Do Not Use libthread_db

Lots of people at Mozilla use RR, and RR did not implement qGetTLSAddr, which we needed.  So I set out to implement that.  This meant a foray into the dangerous world of libthread_db.

For reasons I do not know, and suspect that I do not want to know, glibc has historically followed many Solaris conventions.  One such Solaris innovation was libthread_db — a library that debuggers use to extract certain information from libc, information like the address of a thread-local variable.

On the surface this seems like a great idea: don’t bake the implementation details of the C library into the debugger.  Instead, let the debugger use a debugging library that comes with the C library.  And, if you designed it that way, it would be a good idea.

Sadly, though, libthread_db was not designed that way.  Oh no.

For example, libthread_db has a callback interface.  The calling program — gdb or rr — must provide some functions that libthread_db can call, to do some simple things like “read some memory”; or some very complicated things like “find the address of a symbol given its name”.  Normal C programmers might implement these callbacks using a structure containing function pointers.  But not libthread_db!  Instead it uses fixed symbol names that must be provided by the calling application.  Not all of these are required for it to work (you get to figure out which, yay!), but some definitely are.  And, you have to dlopen a libthread_db that matches the libc of the inferior that you’re debugging (or link against it, but that’s also obviously bad).

Wait, you say.  Doesn’t that mess up cross-debugging?  Why yes!  Yes it does!  Which is why qGetTLSAddr has to be in the gdb remote serial protocol to start with.

Hey, maybe the Linux vendors should fix this.  They are — see Gary Benson’s Infinity project — but unfortunately that’s still in development and I wanted RR to work sooner.

Ok, so whew.  I wrote qGetTLSAddr support for RR.  This was a small patch in the end, but an unusual pain in an already painful series.  Hopefully this won’t spill out into other programs.

glibc

Hahaha, you are so funny.  Of course it spills out: remember how you have to define a bunch of functions with specific names in your program in order to use libthread_db?  Well, how do you know you got the types correct?

Yeah, you include <proc_service.h> (a name deliberately chosen to confuse, I suppose, why not, it doesn’t bear any obvious relationship to the library).  Only, that was never installed by glibc.  Instead, gdb just copied it into the source tree.

So naturally I went and fixed this in glibc.  And, even more naturally, this broke the gdb build, which was autoconf’d to check for a file that never existed in the past.  LOL.

Thank You Cthulhu

At this point I figured it was only a matter of time until I had to patch the kernel.  Thankfully this hasn’t been necessary yet.

It Says What

In gdb the actual unwinding and the display of frames are separate concerns.

And let me digress here to say that gdb’s unwinder design is excellent.  I believe it was redone by Andrew Cagney (this was well before my active time in gdb, so apologies if you’re reading this and you did it and I’ve misattributed it).  Like much of gdb, many of the details are bizarre and take one back to the byte-counting days of 1987; but the high level design is very solid and has endured with, I think, just one significant change (to support inline functions) in the intervening 15 or so years.  I’ve long thought that this is a remarkable accomplishment in the programming world.

So, yes.  It’s not enough to just unwind.  Simply having an unwinder yields backtraces with lines like:

#5 0xfeefee ???

Better than nothing!  But not yet great.

The second part of the SpiderMonkey unwinder is, therefore, a gdb “frame filter”.  This is an object that takes raw frames and decorates them with information like a function name, or a file name, or arguments.
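The API is pleasantly small: a frame filter is just an object with a name, a priority, and a filter method that wraps frames in decorators (a minimal sketch, filling in only the function name; the real code also supplies file names and arguments):

import gdb
from gdb.FrameDecorator import FrameDecorator

class JitFrameDecorator(FrameDecorator):
    def function(self):
        # The real code recovers the name from the JIT's own data.
        return "<<JitFrame>>"

class JitFrameFilter(object):
    def __init__(self):
        self.name = "jit-filter-sketch"
        self.priority = 100
        self.enabled = True
        gdb.frame_filters[self.name] = self

    def filter(self, frame_iter):
        # A real filter would decorate only the JIT frames.
        return (JitFrameDecorator(frame) for frame in frame_iter)

JitFrameFilter()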

Work to add this information is ongoing — I landed one patch just yesterday, and another one, to add more information about interpreted frames, is still in the works.  And there are two more bugs filed… maybe this project, like this blog post, will never conclude.  It will just scroll endlessly.

But now, with all the code in place, bt can show something like:

#6 0x00007ffff7ff20f3 in <<JitFrame_BaselineJS "f1">> (this=JSVAL_VOID, arg1=$jsval(4700))

This is the call f1(4700).

Let’s Just Have One More

Of course we still couldn’t enable this unwinder by default.  You have to enable it by hand.

And by the way, in the first release of gdb’s Python unwinder feature, enabling or disabling an unwinder didn’t flush the frame cache, so it wouldn’t actually take effect until some invisible-to-the-user state change took place.  I fixed this bug, but here Pedro Alves also taught me the secret gdb command flushregs, which in fact just flushes the frame cache. (I’m going to go out on a limb and guess that this command predates the already ancient maint prefix command, hence its weird name.)
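For what it’s worth, newer gdbs expose the same flush to Python; a one-liner, assuming a gdb recent enough to have it:

import gdb

# Discard gdb's cached frames so a newly enabled or disabled unwinder
# takes effect immediately (what flushregs does at the CLI).
gdb.invalidate_cached_frames()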

Anyway, you have to enable it by hand because the unwinder itself doesn’t work properly if the outermost frame is in JIT code.  The JIT, in the interest of performance, doesn’t maintain a frame pointer.  This means that in the outermost frame, there’s no reliable way to find the object that describes this frame and links to the previous frame.

Now, normally in this case gdb would either resort to debug info (not available here), or in extremis its encyclopedic suite of prologue analyzers (yes, gdb can analyze common function prologues for all architectures developed in the last 25 years to figure out stuff) — but naturally JIT compilers go their own way here as well.

Humans, like Shu back at the start of this story, can do this by dumping parts of the stack and guessing which bytes represent the frame header.

But, I’ve been reluctant and a bit afraid to hack a heuristic into the unwinder.

To sum up — in case you missed it — this means that all the code written during this entire saga would still not have helped with my original bug.

The End

This is a very important machine that really deserves to get built. Anyone who cares about Free Software should consider funding this project at some level, and spreading the word to their friends. If this project succeeds, it will bootstrap a market for new, owner-controlled performant desktop machines. If it fails, no such computers will exist. The project page and updates explain the current (rather depressing) state of general purpose computing better than I could, so take a look.

I originally posted this on G+ but I thought maybe I should expand it a little and archive it here.

The patch to delete gcj went in recently.

When I was put on the gcj project at Cygnus, I remember thinking that Java was just a fad and that this was just a temporary thing for me. I wasn’t that interested in it. Then I ended up working on it for 10 years.

In some ways it was the high point of my career.

Socially it was fantastic, especially once we merged with the Classpath community — I’ve always considered Mark Wielaard’s leadership in that community as the thing that made it so great.  I worked with and met many great people while working on gcj and Classpath, but I especially wanted to mention Andrew Haley, who is the single best debugger I’ve ever met, and who stayed in the Java world, now working on OpenJDK.

We also did some cool technical things in gcj. The binary compatibility ABI was great, and the split verifier was very fun to come up with.  Per Bothner’s early vision for gcj drove us for quite a while, long after he left Cygnus and stopped working on it.

On the downside, gcj was never quite up to spec with Java. I’ve met Java developers even as recently as last year who harbor a grudge against gcj.

I don’t apologize for that, though. We were trying something difficult: to make a free Java with a relatively small team.

When OpenJDK came out, the Sun folks at FOSDEM were very nice to say that gcj had influenced the opening of the JDK. Now, I never truly believed this — I’m doubtful that Sun ever felt any heat from our ragtag operation — but it was very gracious of them to say so.

Since the gcj days I’ve been searching for basically the same combination that kept me hacking on gcj all those years: cool technology, great social environment, and a worthwhile mission.

This turned out to be harder than I expected. I’m still searching. I never thought it was possible to go back, though, and with this deletion, this is clearer than ever.

There’s a joy in deleting code (though in this case I didn’t get to do the deletion… grrr); but mainly this weekend I’m feeling sad about the final close of this chapter of my life.

RoboVM 0.0.1 got released this week by Trillian AB.

RoboVM's main focus is compiling Java to native code for deployment on mobile platforms such as iOS and Android. RoboVM uses a Java to Objective-C bridge built using LLVM. The good news is that the same process works for converting Java applications to native applications on GNU/Linux systems as well!

Mario Zechner, the author of libgdx, posted this nice picture from inside DDD/GDB of his first HelloWorld compiled to native x86 code running on a GNU/Linux machine.
GNU/Linux machine code generated by RoboVM seen from inside DDD/GDB

http://www.robovm.org/

JogAmp is the home of high performance Java™ libraries for 3D Graphics, Multimedia and Processing.
JOGL, JOCL and JOAL provide cross platform Java™ language bindings to the OpenGL®, OpenCL™, OpenAL and OpenMAX APIs.
Running on Android, Linux, Windows, OSX, and Solaris across devices using Java.

Release announcement for JogAmp 2.0.2-rc12

"You're encouraged to stop using the now-ancient 2.0-rc11!"

This 2.0.2-rc12 release includes the largest security review in the 10-year history of JOGL.

  • Security Fixes

    • Dynamic Linker Usage / Impl.
    • ProcAddressTable field visibility
    • Perform SecurityManager checks where required
    • Validation of property access
    • JAR Manifest tags:
      • Codebase
      • Permissions
      • Sealed
    • Use latest Java7 toolchain
      • Generating Java 1.6 bytecode
      • HTML API doc

https://jogamp.org/wiki/index.php/SW_Tracking_Report_Objectives_for_the_release_2.0.2_of_JOGL
Security fixes are marked in red on the above bug tracking page.
JogAmp sends out thanks to the FuzzMyApp security researchers for the healthy communication that triggered the security review work.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place inside the JogAmp forum & mailing-list and the #jogamp IRC channel on irc.freenode.net.


Meet us @

JogAmp @ SIGGRAPH 2013

If you’ve been following Infinity and would like to, you know, download some code and try it out… well, now you can!


Project Jigsaw is an enormous effort, encompassing six JEPs implemented by dozens of engineers over many years. So far we’ve defined a modular structure for the JDK (JEP 200), reorganized the source code according to that structure (JEP 201), and restructured the JDK and JRE run-time images to support modules (JEP 220). The last major component, the module system itself (JSR 376 and JEP 261), was integrated into JDK 9 earlier this week and is now available for testing in early-access build 111.

Breaking changes

Like the previous major change, the introduction of modular run-time images, the introduction of the module system might impact you even if you don’t make direct use of it. That’s because the module system is now fully operative at both compile time and run time, at least for the modules comprising the JDK itself. Most of the JDK’s internal APIs are, as a consequence, fully encapsulated and hence, by default, inaccessible to code outside of the JDK.

An existing application that uses only standard Java SE APIs and runs on JDK 8 should just work, as they say, on JDK 9. If, however, your application uses a JDK-internal API, or uses a library or framework that does so, then it’s likely to fail. In many cases you can work around this via the -XaddExports option of the javac and java commands. If, e.g., your application uses the internal sun.security.x509.X500Name class then you can enable access to it via the option

-XaddExports:java.base/sun.security.x509=ALL-UNNAMED

This causes all members of the sun.security.x509 package in the java.base module to be exported to the special unnamed module in which classes from the class path are defined.

A few broadly-used internal APIs that cannot reasonably be implemented outside of the JDK, such as sun.misc.Unsafe, are still accessible for now. As outlined in JEP 260, however, these will be removed in a future release after suitable standard replacements are available.

The encapsulation of JDK-internal APIs is the change you’re most likely to notice when running an existing application. Other relevant but, for the most part, less-noticeable changes are described in the risks-and-assumptions section of JEP 261.

If you have trouble running an existing application on JDK 9 build 111 or later, and you think that’s due to the introduction of the module system but not caused by one of the changes listed in JEPs 260 or 261, then please let us know on the jigsaw-dev mailing list (you’ll need to subscribe first, if you haven’t already), or else submit a bug report via bugs.java.com.

New features

If you’d like to start learning about the module system itself, the video of my Devoxx BE 2015 keynote gives a high-level overview and The State of the Module System summarizes the design of the module system proposed for JSR 376. Further details are available in the six Jigsaw JEPs, listed on the main project page, and in videos of other sessions given at JavaOne 2015 and Devoxx BE 2015.

The module-system design will continue to evolve in the JSR for a while yet, based on feedback and experience. The implementation will evolve in parallel in the Project Jigsaw “Jake” forest, and we’ll continue to publish bleeding-edge early-access builds based on that code, separately from the more-stable JDK 9 builds.

I finished getting Excorporate and all its dependencies into GNU ELPA. Excorporate lets Emacs retrieve calendar items from an Exchange server.

Excorporate in GNU ELPA

I had to rewrite the default UI to use Org Mode, because Calfw isn’t entirely copyright-assigned to the FSF yet. The Calfw UI is still there for reference, but as a text file so that GNU ELPA’s build and publishing steps ignore it. Both UI handlers use the same updated APIs from the main excorporate.el library.

Excorporate Org handler

I made sure Excorporate and all its dependencies use only features available since GNU Emacs 24.1. This is pretty good coverage; Emacs 24.1 introduced the packaging system, so if an Emacs version supports packages, it supports Excorporate.

Other than DNS lookups, Excorporate is completely asynchronous, so it won’t block the Emacs main loop. And it is pure Emacs Lisp so it runs on any operating system that Emacs does.

In addition to Org Mode support, release 0.7.0 collects all the suggestions users have made on this blog and adds Exchange 2007 support.

To install: M-x package-install RET excorporate

To get the source code:

git clone git://git.savannah.gnu.org/emacs/elpa.git

To report bugs: M-x report-emacs-bug


EclipseCon NA 2016

It was a great pleasure to have a chance to serve on this year’s EclipseCon Program Committee. As Java SE 8 adoption took place at “record-setting pace” during the past year, I was glad to see the EclipseCon team set their sights ahead, towards JDK 9, with its own track at EclipseCon. If you’d just like to take a peek at the changes being considered, developed and integrated into the JDK 9 Project, you can check out its web site in the OpenJDK community, and try out the Early Access builds.

If you’d like to hear what JDK 9 means for Eclipse, though, then you should come to EclipseCon in March and hear about it first hand from Jay Arthanareeswaran from IBM and Manoj Palat, who will talk about Java 9 support in Eclipse. In their session, they will look at what kind of support JDT provides for developers who would like to use JDK 9 in their projects, discussing planned Eclipse features as well as what modules could mean to different projects and how to best leverage the upcoming module system.

Within the OpenJDK community, Project Jigsaw is where development of the reference implementation of JSR 376 – Java Platform Module System takes place, along with the modularization of the JDK itself and the development of a new run-time image format. That’s a lot of new stuff to digest – fortunately, we’ll have Thomas Schindl at EclipseCon to give us a personal view and overview of what he calls "most likely the biggest change in Java’s history" in the “You, Me and Jigsaw” session.

At this point, you may be wondering if JDK 9 is all about modules. Modularity plays a huge role, but there is a lot more to it – more than 70 JDK Enhancement Proposals have been targeted for the JDK 9 release so far. To walk us through some of Java 9’s other puzzle pieces, we’ll have Erik Costlow from Oracle.

Finally, closing this track on Thursday, Erik will discuss “Preparing your code for JDK 9”. There are some steps you can take already to make your code ready to benefit from the new features planned for JDK 9, such as analyzing your project’s library dependencies for unintentional reliance on JDK-internal APIs.

I hope that you will enjoy this EclipseCon track, and that you will be inspired to start experimenting with JDK 9 and Eclipse.

This month I've released Orson PDF version 1.7, a compact and fast API for creating PDF content in Java through the standard Graphics2D API. This release features:

  • support for transparent images;
  • an implementation of the create() method to better support use against existing Java2D code;
  • addition of the GNU General Public License version 3 as the default license (a commercial license remains available for those that prefer it);
  • various bug fixes.

While Orson PDF has been created to provide PDF export for any Java2D-based code, my own use for it is within JFreeChart and Orson Charts. To provide an example, here is a chart exported with Orson PDF being viewed within Acrobat Reader (chartpdf.png).

With the new GPLv3 license option, I've now also made the OrsonPDF repo at GitHub public, which will make it easier for other developers to work directly with the source code. You can also use GitHub to report any bugs or other issues.

The original version of this blog entry is published at http://www.object-refinery.com/blog/blog-20151008.html.

The first release candidate is finally available. It can be downloaded here or from NuGet.

What's New (relative to IKVM.NET 8.0):

  • Integrated OpenJDK 8u45.
  • Many fixes to late binding support.
  • Added ikvmc support for deterministic output files.
  • Various sun.misc.Unsafe improvements.
  • Many minor bug fixes and performance tweaks.

Changes since previous development snapshot:

  • Assemblies are strong named.
  • Fix for bug #303: ikvmc internal compiler error when trying to get interfaces from a type from a missing assembly reference.
  • Implemented NIO atomic file move on Windows.

Binaries available here: ikvmbin-8.1.5717.0.zip

Sources: ikvmsrc-8.1.5717.0.zip, openjdk-8u45-b14-stripped.zip

I'm happy to announce that JFreeSVG version 3.0 has been uploaded to SourceForge. JFreeSVG is a fast and lightweight API for creating SVG content in Java. This release features:

  • new handling for BasicStroke cap, join and miterlimit;
  • a new ZIP option when writing SVG to files;
  • a demo for exporting Swing UIs to SVG;
  • removal of the CanvasGraphics2D implementation (to focus on SVG only);
  • a fix for handling of PathIterator.SEG_CLOSE;
  • a fix for y-coordinate bug in drawImage();
  • a workaround for ClassCastException when exporting Swing UIs on MacOSX with Nimbus L&F.

To ensure that JFreeSVG provides a fully functional Graphics2D implementation, I tested it using the Swingset3 demo with modifications to redirect the screen output directly to JFreeSVG to produce SVG output. I've always liked the way that Swing uses the Java2D API to cleanly separate its rendering from having any direct knowledge of the actual output target. Here is an example:

[inline SVG example: a SwingSet3 UI exported to SVG by JFreeSVG]

This turned out to be an effective test, because it uncovered a bug in one of the drawImage() methods that has remained undetected in all previous JFreeSVG releases.

One last thing...the JFreeSVG repo at GitHub is now public, which will make it easier for other developers to tweak the code for experimentation or bug fixes (if you spot a bug though, please report it to me).

If you'd like to give feedback on this post, please comment via the JFreeSVG forum.

I recently had occasion to scan some papers using a sheet-fed Ricoh printer/scanner/fax/copier. It seems to think that about 6 MB is as big an email attachment as it can send, so it splits up the PDFs into base64-encoded attachments. If you find yourself in a similar situation:

  • save the raw base64 text (if you’re using GMail, “show original” is your friend) and trim the extraneous text.
  • concatenate the multiple pieces together: cat part1 part2 > all.base64.
  • decode the whole thing: cat all.base64 | base64 -d > myscan.pdf.
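If you'd rather skip the shell juggling, here's a Python equivalent (a sketch, assuming the saved pieces are plain base64 text files; the file names are made up):

import base64

parts = ["part1", "part2"]
# Concatenate the base64 text, then decode it in one go.  b64decode
# ignores the embedded newlines by default.
text = "".join(open(p).read() for p in parts)
with open("myscan.pdf", "wb") as out:
    out.write(base64.b64decode(text))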