Planet Classpath

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 8 support in the 3.0.x series with the April 2016 security fixes from OpenJDK 8 u91.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 3.0.1 (2016-04-23)

  • Security fixes
  • Import of OpenJDK 8 u91 build 14
    • S8002116: This JdbReadTwiceTest.sh gets an exit 1
    • S8007890: [TESTBUG] JcmdWithNMTDisabled.java fails when invoked with NMT explicitly turned on
    • S8036132: Tab characters in test/com/sun/jdi files
    • S8038963: com/sun/jdi tests fail because cygwin’s ps sometimes misses processes
    • S8044419: TEST_BUG: com/sun/jdi/JdbReadTwiceTest.sh fails when run under root
    • S8059661: Test SoftReference and OOM behavior
    • S8067422: Lambda method names are unnecessarily unstable
    • S8073735: [TEST_BUG] compiler/loopopts/CountedLoopProblem.java got OOME
    • S8074146: [TEST_BUG] jdb has succeded to read an unreadable file
    • S8130212: Thread::current() might access freed memory on Solaris
    • S8132890: Text Overlapping on Dot Matrix Printers
    • S8134297: NPE in GSSNameElement nameType check
    • S8134650: Xsl transformation gives different results in 8u66
    • S8134828: Scrollbar thumb disappears with Nimbus L&F
    • S8138589: Correct limits on unlimited cryptography
    • S8138811: Construction of static protection domains
    • S8140268: Generate link to specification license for JavaDoc API documentation
    • S8141229: [Parfait] Null pointer dereference in cmsstrcasecmp of cmserr.c
    • S8143002: [Parfait] JNI exception pending in fontpath.c:1300
    • S8143959: Certificates requiring blacklisting
    • S8146477: [TEST_BUG] ClientJSSEServerJSSE.java failing again
    • S8146518: Zero interpreter broken with better byte behaviour
    • S8146967: [TEST_BUG] javax/security/auth/SubjectDomainCombiner/Optimize.java should use 4-args ProtectionDomain constructor
    • S8147567: InterpreterRuntime::post_field_access not updated for boolean in JDK-8132051
    • S8148446: (tz) Support tzdata2016a
    • S8148475: Missing SA Bytecode updates.
    • S8148487: PPC64: Better byte behavior
    • S8148522: Backout JDK-8138811 from 2016 Apr CPU repo
    • S8149170: Better byte behavior for native arguments
    • S8149367: PolicyQualifierInfo/index_Ctor JCk test fails with IOE: Invalid encoding for PolicyQualifierInfo
    • S8150012: Better byte behavior for reflection
    • S8150790: 8u75 L10n resource file translation update
  • Backports
    • S8148752, PR2943: Compiled StringBuilder code throws StringIndexOutOfBoundsException
    • S8154210: Zero: Better byte behaviour
    • S8154413: AArch64: Better byte behaviour
  • Bug fixes
    • PR2933: Support ccache 3.2 and later
    • PR2934, G579676: SunEC provider throwing KeyException with current NSS

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to be able to handle this key.
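
To verify a tarball against its signature, import the key and run gpg; for example, for the 3.0.1 gzip tarball:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 5132579DD1540ED23E04C5A0CFDA0F9B35964222
$ gpg --verify icedtea-3.0.1.tar.gz.sig icedtea-3.0.1.tar.gz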

SHA256 checksums:

  • 8babade1717fff48bcc4e1e2f3159c2c7d97cfb44ef10124bbab3f7dc34a0582 icedtea-3.0.1.tar.gz
  • 8a5e702a114117ed301a632b1a41651d0577c9c59cfae4d10ff41f6a52185fc7 icedtea-3.0.1.tar.gz.sig
  • 346ce30de1de6c493729b79b246f250438fc5b8df7eae47229a97f9000a73af2 icedtea-3.0.1.tar.xz
  • b440f83a05788157b752cc3b1a239261bcbb52bf82211c93173e93cb4f3fa760 icedtea-3.0.1.tar.xz.sig
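
To check a download, feed the corresponding line (hash, two spaces, filename) to sha256sum, e.g.:

$ echo "346ce30de1de6c493729b79b246f250438fc5b8df7eae47229a97f9000a73af2  icedtea-3.0.1.tar.xz" | sha256sum -c -
icedtea-3.0.1.tar.xz: OK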

The checksums can be downloaded from:

A 3.0.1 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-3.0.1.tar.gz

or:

$ tar x -I xz -f icedtea-3.0.1.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-3.0.1/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 7 support in the 2.6.x series with the April 2016 security fixes from OpenJDK 7 u101.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 2.6.6 (2016-04-21)

  • Security fixes
  • Import of OpenJDK 7 u101 build 0
    • S4858370: JDWP: Memory Leak: GlobalRefs never deleted when processing invokeMethod command
    • S7127906: (launcher) convert the launcher regression tests to java
    • S8002116: This JdbReadTwiceTest.sh gets an exit 1
    • S8004007: test/sun/tools/jinfo/Basic.sh fails on when runSA is set to true
    • S8007890: [TESTBUG] JcmdWithNMTDisabled.java fails when invoked with NMT explicitly turned on
    • S8027705: com/sun/jdi/JdbMethodExitTest.sh fails when a background thread is generating events.
    • S8028537: PPC64: Updated the JDK regression tests to run on AIX
    • S8036132: Tab characters in test/com/sun/jdi files
    • S8038963: com/sun/jdi tests fail because cygwin’s ps sometimes misses processes
    • S8044419: TEST_BUG: com/sun/jdi/JdbReadTwiceTest.sh fails when run under root
    • S8059661: Test SoftReference and OOM behavior
    • S8072753: Nondeterministic wrong answer on arithmetic
    • S8073735: [TEST_BUG] compiler/loopopts/CountedLoopProblem.java got OOME
    • S8074146: [TEST_BUG] jdb has succeded to read an unreadable file
    • S8134297: NPE in GSSNameElement nameType check
    • S8134650: Xsl transformation gives different results in 8u66
    • S8141229: [Parfait] Null pointer dereference in cmsstrcasecmp of cmserr.c
    • S8143002: [Parfait] JNI exception pending in fontpath.c:1300
    • S8146477: [TEST_BUG] ClientJSSEServerJSSE.java failing again
    • S8146967: [TEST_BUG] javax/security/auth/SubjectDomainCombiner/Optimize.java should use 4-args ProtectionDomain constructor
    • S8147567: InterpreterRuntime::post_field_access not updated for boolean in JDK-8132051
    • S8148446: (tz) Support tzdata2016a
    • S8148475: Missing SA Bytecode updates.
    • S8149170: Better byte behavior for native arguments
    • S8149367: PolicyQualifierInfo/index_Ctor JCk test fails with IOE: Invalid encoding for PolicyQualifierInfo
    • S8150012: Better byte behavior for reflection
    • S8150790: 8u75 L10n resource file translation update
    • S8153673: [BACKOUT] JDWP: Memory Leak: GlobalRefs never deleted when processing invokeMethod command
    • S8154210: Zero: Better byte behaviour
  • Bug fixes
    • PR2889: OpenJDK should check for system cacerts database (e.g. /etc/pki/java/cacerts)
    • PR2929: configure: error: “A JDK home directory could not be found.”
    • PR2935: Check that freetype defines FT_CONFIG_OPTION_INFINALITY_PATCHSET if enabling infinality
    • PR2938: Fix build of 8148487 backport
    • PR2939: Remove rogue ReleaseStringUTFChars line remaining from merge of 7u101 b00
  • PPC & AIX port
  • AArch64 port
    • S8154413: AArch64: Better byte behaviour
    • PR2914: byte_map_base is not page aligned on OpenJDK 7
  • JamVM
    • PR2665: icedtea/jamvm 2.6 fails as a build VM for icedtea

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to be able to handle this key.

SHA256 checksums:

  • d6d92e9b20e321d51b2f428868b6de3d3ebc2b4eedde19e5cf2e2452da6d0fde icedtea-2.6.6.tar.gz
  • 765e3dfbaa5eef6fccd9cc53c153681ad2c70384b31fe3691e44709dbeeae3d2 icedtea-2.6.6.tar.gz.sig
  • 79949744436158d9ded3a758c22da7629f843ea3913afdffc65ea0f1a26d544a icedtea-2.6.6.tar.xz
  • a8049026f7b7f8503ce7ff25c28b822e97cce5c495fdaa0c9b734315d99596bd icedtea-2.6.6.tar.xz.sig

The checksums can be downloaded from:

A 2.6.6 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-2.6.6.tar.gz

or:

$ tar x -I xz -f icedtea-2.6.6.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-2.6.6/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

These past few weeks I’ve been working on an Infinity client library. This is what GDB will use to execute notes it finds. It’s early days, but it executed its first note this morning, so I thought I’d put something together so people can see what I’m doing. Here’s how to try it out:

  1. Install elfutils libelf development stuff if you don’t have it already; the tlsdump example program needs it:
    sudo yum install elfutils-libelf-devel  # Fedora, RHEL, etc...
    sudo apt-get install libelf-dev         # Debian, Ubuntu, etc...
  2. Download and build the Infinity client library and example program:
    git clone -b libi8x-0.0.1 https://github.com/gbenson/libi8x.git libi8x-0.0.1
    cd libi8x-0.0.1
    ./autogen.sh
    ./configure --enable-logging --enable-debug
    make
  3. Check the tlsdump example program built:
    bash$ ls -l examples/tlsdump
    -rwxr-xr-x. 1 gary gary 5540 Apr 20 12:52 examples/tlsdump

    Yeah, there it is! (if it’s not there go back to step 0)

  4. Build a program with notes to run the example program against:
    gcc -o tests/ifact tests/ifact.S tests/main.c
  5. Run the program you just built:
    bash$ tests/ifact &
    [2] 8301
    Hello world I'm 8301
  6. Run the libi8x tlsdump example program with the test program’s PID as its argument:
    $ examples/tlsdump 8301
    0! = 1
    1! = 1
    2! = 2
    3! = 6
    4! = 24
    5! = 120
    6! = 720
    7! = 5040
    8! = 40320
    9! = 362880
    10! = 3628800
    11! = 39916800
    12! = 479001600

What just happened? The executable tests/ifact you built contains a single Infinity note, test::factorial(i)i, the source for which is in tests/ifact.i8. The tlsdump example located the ifact executable, loaded test::factorial(i)i from it, and ran it a few times, printing the result:

  /* Look up a reference to the test::factorial(i)i function.  */
  err = i8x_ctx_get_funcref (ctx, "test", "factorial", "i", "i", &fr);
  if (err != I8X_OK)
    error_i8x (ctx, err);

  /* Create an execution context to run the bytecode in.  */
  err = i8x_xctx_new (ctx, 512, &xctx);
  if (err != I8X_OK)
    error_i8x (ctx, err);

  /* Call the note once per input and print each result.  */
  for (int i = 0; i < 13; i++)
    {
      union i8x_value args[1], rets[1];

      args[0].i = i;
      err = i8x_xctx_call (xctx, fr, NULL, args, rets);
      if (err != I8X_OK)
        error_i8x (ctx, err);

      printf ("%d! = %d\n", i, rets[0].i);
    }

To see some debug output try this:

I8X_LOG=debug examples/tlsdump PID

Also try I8X_DEBUG=true in addition to I8X_LOG=debug to trace the bytecode as it executes.

Behold ProjectCenter running on Windows with the debugger module open and GDB running in it.

GNUstep's ProjectCenter debugger module - something which was initiated by Greg and has always been quite experimental and unfinished - was based on running GDB via a virtual terminal using openpty(). Sadly, openpty() is not very portable.
I restructured the debugger module to separate the view handling the visualization from a delegate which handles the actual execution of the debugger, sending commands and receiving output.
Instead of using a terminal I implemented a stdin/stdout mechanism.
While some interactive editing properties get lost when using GDB this way (e.g. the ability to answer y/n questions), it is the right way to drive a machine interface. For example, a stack trace doesn't get paged but is printed out fully. Different data sources now get nicely colored too!

And last but not least, it runs on Windows.

Project Jigsaw is an enormous effort, encompassing six JEPs implemented by dozens of engineers over many years. So far we’ve defined a modular structure for the JDK (JEP 200), reorganized the source code according to that structure (JEP 201), and restructured the JDK and JRE run-time images to support modules (JEP 220). The last major component, the module system itself (JSR 376 and JEP 261), was integrated into JDK 9 earlier this week and is now available for testing in early-access build 111.

Breaking changes

Like the previous major change, the introduction of modular run-time images, the introduction of the module system might impact you even if you don’t make direct use of it. That’s because the module system is now fully operative at both compile time and run time, at least for the modules comprising the JDK itself. Most of the JDK’s internal APIs are, as a consequence, fully encapsulated and hence, by default, inaccessible to code outside of the JDK.

An existing application that uses only standard Java SE APIs and runs on JDK 8 should just work, as they say, on JDK 9. If, however, your application uses a JDK-internal API, or uses a library or framework that does so, then it’s likely to fail. In many cases you can work around this via the -XaddExports option of the javac and java commands. If, e.g., your application uses the internal sun.security.x509.X500Name class then you can enable access to it via the option

-XaddExports:java.base/sun.security.x509=ALL-UNNAMED

This causes all members of the sun.security.x509 package in the java.base module to be exported to the special unnamed module in which classes from the class path are defined.
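
For example, for a hypothetical application app.jar whose main class com.example.Main uses that internal class, the full command line would look like:

$ java -XaddExports:java.base/sun.security.x509=ALL-UNNAMED -cp app.jar com.example.Main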

A few broadly-used internal APIs that cannot reasonably be implemented outside of the JDK, such as sun.misc.Unsafe, are still accessible for now. As outlined in JEP 260, however, these will be removed in a future release after suitable standard replacements are available.

The encapsulation of JDK-internal APIs is the change you’re most likely to notice when running an existing application. Other relevant but, for the most part, less-noticeable changes are described in the risks-and-assumptions section of JEP 261.

If you have trouble running an existing application on JDK 9 build 111 or later, and you think that’s due to the introduction of the module system but not caused by one of the changes listed in JEPs 260 or 261, then please let us know on the jigsaw-dev mailing list (you’ll need to subscribe first, if you haven’t already), or else submit a bug report via bugs.java.com.

New features

If you’d like to start learning about the module system itself, the video of my Devoxx BE 2015 keynote gives a high-level overview and The State of the Module System summarizes the design of the module system proposed for JSR 376. Further details are available in the six Jigsaw JEPs, listed on the main project page, and in videos of other sessions given at JavaOne 2015 and Devoxx BE 2015.

The module-system design will continue to evolve in the JSR for a while yet, based on feedback and experience. The implementation will evolve in parallel in the Project Jigsaw “Jake” forest, and we’ll continue to publish bleeding-edge early-access builds based on that code, separately from the more-stable JDK 9 builds.

Like every new GCC release, GCC6 introduces a lot of new useful warnings. My favorite is still -Wmisleading-indentation. But there are many more that have found various bugs. Not all of them are enabled by default, but it makes sense to enable as many as possible when writing new code.
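
For example, -Wmisleading-indentation (enabled by -Wall in GCC6) catches sketches like this, where the indentation wrongly suggests both calls are guarded:

extern void handle_flag (void);
extern void cleanup (void);

void check (int flag)
{
  if (flag)
    handle_flag ();
    cleanup ();  /* not guarded by the 'if', despite the indentation */
}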

Duplicate logic

In GCC6 -Wlogical-op (must be enabled explicitly) now also warns when the operands of a logical operator are the same. For example the following typo is detected:

points.c: In function 'distance':
points.c:10:19: warning: logical 'or' of equal expressions [-Wlogical-op]
   if (point.x < 0 || point.x < 0)
                   ^~

Similar logic for detecting duplicate conditions in an if-else-if chain has been added with -Wduplicated-cond. It must be enabled explicitly, which I would highly recommend, because it found some real bugs like:

elflint.c: In function 'compare_hash_gnu_hash':
elflint.c:2483:34: error: duplicated 'if' condition [-Werror=duplicated-cond]
  else if (hash_shdr->sh_entsize == sizeof (Elf64_Word))
           ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~

elflint.c:2448:29: note: previously used here
  if (hash_shdr->sh_entsize == sizeof (Elf32_Word))
      ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~

GCC is correct: a Word in both an Elf32 and an Elf64 file is 4 bytes. We meant to check for sizeof (Elf64_Xword), which is 8 bytes.

And with -Wtautological-compare (enabled by -Wall) GCC6 will also detect comparisons of variables against themselves, which will always be true or false, like in the case where we made a typo:

result.c: In function 'check_fast':
result.c:14:14: warning: self-comparison always evaluates to false [-Wtautological-compare]
  while (res > res)
             ^

Finally -Wswitch-bool (enabled by default) has been improved to only warn about switch statements on a boolean type if any of the case statements is outside the range of the boolean type.
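
For instance, a sketch like the following still warns, because 2 is outside the range of a boolean:

#include <stdbool.h>

int describe (bool flag)
{
  switch (flag)
    {
    case false:
      return 0;
    case 2:   /* can never match a bool, so GCC6 warns */
      return 1;
    }
  return -1;
}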

Bit shifting

GCC5 already had warnings for -Wshift-count-negative and -Wshift-count-overflow. Both are enabled by default.

value.c: In function 'calculate':
value.c:7:9: warning: left shift count is negative [-Wshift-count-negative]
  b = a << -3;
        ^~
value.c:8:9: warning: right shift count >= width of type [-Wshift-count-overflow]
  b = a >> 63;
        ^~

GCC6 adds -Wshift-negative-value (enabled by -Wextra) which warns about left shifting a negative value. Such shifts are undefined because they depend on the representation of negative values.

value.c:9:10: warning: left shift of negative value [-Wshift-negative-value]
  b = -1 << 5;
         ^~

Also added in GCC6 is -Wshift-overflow (enabled by default) to detect left shift overflow.

value.c:10:11: warning: result of '10 << 30' requires 35 bits to represent, but 'int' only has 32 bits [-Wshift-overflow=]
 b = 0xa << (14 + 16);
         ^~

You can increase the warnings given with -Wshift-overflow=2 (not enabled by default), which makes GCC also warn if the compiler can detect that you are shifting a signed value in a way that would change the sign bit.

value.c:11:10: warning: result of '1 << 31' requires 33 bits to represent, but 'int' only has 32 bits [-Wshift-overflow=]
  b |= 1 << 31;
         ^~				

NULL

The new -Wnull-dereference (must be enabled explicitly) warns when GCC detects you (might) dereference a null pointer that will cause erroneous or undefined behavior (higher optimization levels might catch more cases).

dereference.c: In function 'test2':
dereference.c:30:21: error: null pointer dereference [-Werror=null-dereference]
  if (s == NULL && s->bar > 2)
                   ~^~~~~

-Wnonnull (enabled by -Wall) already warned for passing a null pointer for an argument marked with the nonnull attribute. In GCC6 it has been extended to also warn for comparing an argument marked with nonnull against NULL inside a function.

nonnull.c: In function 'foo':
nonnull.c:8:7: error: nonnull argument 'bar' compared to NULL [-Werror=nonnull]
  if (!bar)
      ^

C++

-Wterminate (enabled by default) warns when a throw will immediately result in a call to terminate, such as in a noexcept function. In particular it will warn when something is thrown from a C++11 destructor, since destructors default to noexcept in C++11, unlike in C++98 (GCC6 defaults to -std=gnu++14).

collect.cxx: In destructor 'area_container::~area_container()':
collect.cxx:23:50: warning: throw will always call terminate() [-Wterminate]
    throw sanity_error ("disposed while negative");
                                                 ^
collect.cxx:23:50: note: in C++11 destructors default to noexcept

To help with some ODR issues, GCC6 has -Wlto-type-mismatch and -Wsubobject-linkage.

C++ allows “placement new” of objects at a specified memory location. You are responsible for making sure the memory location provided is of the correct size. This might result in A New Class of Buffer Overflow Attacks. When GCC6 detects the provided buffer is too small it will issue a warning with -Wplacement-new (enabled by default).

placement.C: In function 'S* f(S*)':
placement.C:9:27: warning: placement new constructing an object of type 'S' and size '16' in a region of type 'char [8]' and size '8' [-Wplacement-new=]
     S *t = new (buf) S (*s);
                           ^

And if you actually want less C++ then GCC6 will give you -Wtemplates, -Wmultiple-inheritance, -Wvirtual-inheritance and -Wnamespaces to help enforce coding styles that don’t like those C++ features.

Unused side effects

In GCC6 -Woverride-init-side-effects (enabled by default) is its own warning for when you use Designated Initializers multiple times with side effects. If the same field, or array element, is initialized multiple times, it has the value from the last initialization. But if any such overridden initialization has side effects, it is unspecified whether the side effect happens or not. So you’ll get a warning for such overrides:

side.c: In function 'foo':
side.c:18:68: warning: initialized field with side-effects overwritten [-Woverride-init-side-effects]
struct Secrets s = { .alpha = count++, .beta = count++, .alpha = count++ };
                                                                 ^~~~~
side.c:18:68: note: (near initialization for 's.alpha')

Before GCC6, -Wunused-variable (enabled by -Wall) didn’t warn for unused static const variables in C. This was because some old code had constructs like: static const char rcs_id[] = "$Id:...";. But this old special use case is not very common anymore, and not warning about such unused variables was hiding real bugs. So GCC6 introduces -Wunused-const-variable (enabled by -Wunused-variable for C, but not for C++). There is still some debate on how to fine-tune this warning, so please comment if you find some time to experiment with it before GCC6 is officially released.

Framed

Calling __builtin_frame_address or __builtin_return_address with a level other than zero (the current function) is almost always a mistake (it cannot be guaranteed to return a valid value and might even crash the program). So GCC6 now has -Wframe-address (enabled by -Wall) to warn about any such usage.
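
A minimal sketch of the kind of code it flags:

void *find_caller (void)
{
  /* A non-zero level asks for an ancestor frame's return address;
     GCC6's -Wframe-address warns because the result might be
     invalid and the call might even crash the program.  */
  return __builtin_return_address (1);
}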

I wanted to attend FOSDEM two weeks ago, but couldn’t because I was sick, in bed with fever. I was supposed to give a presentation about Shenandoah. Unfortunately, my backup Andrew Dinn also became sick that weekend, so that presentation simply didn’t happen. I want to summarize some interesting news that I wanted to show there, about Shenandoah’s performance.

When I talked about Shenandoah at FOSDEM 2015, I didn’t really announce any performance numbers, because we would have been embarrassed by them :-) We spent the better part of last year optimizing the heck out of it, especially the barriers in the C2 compiler, and here we are, with some great results.

SPECjvm2015

Ok. This doesn’t really exist. The last SPECjvm release was SPECjvm2008. Unfortunately, SPEC doesn’t seem to care about SPECjvm anymore, which means the last Java version that runs it without any modifications is Java 7. We made some small fixes that allow it to run with Java 9 too. This invalidates compliance of the results, but they are still tremendously useful for comparison. So here it comes:

[chart: SPECjvm results, Shenandoah vs. G1]

This was run on a 32 core box with 160GB of RAM, giving the JVM 140GB of heap. Exact same JVM and settings with G1 and Shenandoah. No special tuning parameters.

In terms of numbers, we get:

Throughput: Shenandoah: 374 ops/m vs. G1: 393 ops/m (95%, min 80%, max 140%)
Pauses:     Shenandoah: avg 41 ms, max 202 ms; G1: avg 240 ms, max 1126 ms

This means that throughput of Java apps running with Shenandoah is on average 95% of G1's; depending on the actual application, it ranges from around 80% to around 140%. However, pause times on such large heaps are significantly better with Shenandoah!

SPECjbb2015

SPECjbb2015 measures throughput of a simulated shop system under response time constraints, or service level agreements (SLAs). It measures ‘max-jops’, which is the maximum throughput of the system without an SLA, and ‘critical-jops’, which is the throughput of the system under a restrictive SLA. Here are the numbers, G1 vs. Shenandoah, same machine and JVM settings as above:

[chart: SPECjbb2015 results, Shenandoah vs. G1]

This basically confirms the results that we got from SPECjvm2008: general throughput in the area of 95% of G1's, but much better pause times, reflected in a much higher critical-jops score.

Other exciting news is that Shenandoah is now stable enough that we want to encourage everybody who’s interested to try it out. The nice folks at Adopt-OpenJDK have set up a nightly build from where you can grab binaries (Shenandoah JDK8 or Shenandoah JDK9). Enjoy! (And please report back if you encounter any problems!)


A package is more than a binary – make it observable

Introduction

I gave a presentation at Fosdem 2016 in the distributions developer room. This article is an extended version of the slide presenter notes. You can get the original from the talk page (press ‘h’ for help and ‘p’ to get the presenter view for the slides).

If any of this sounds interesting and you would like to get help implementing some of this for your distribution please contact me. I work upstream on valgrind and elfutils, which take advantage of having symbols and debuginfo available for programs. And elfutils is used by systemtap, systemd and perf to use some of that information (if available). I am also happy to work on gdb, gcc, rpm, binutils, etc. if that would help make some of this possible or more usable. I work for Red Hat which might explain some of my bias towards how Fedora handles some of this. Please do point out when I am making assumptions that are just plain wrong for other distributions. Ideally all this will be usable cross-distros and when running programs in VMs or containers based on different distro images, you’ll simply reach in and trace, profile and debug anything running seamlessly.

Goal

The main goal is to seamlessly go from a binary back to the original source. In this case limited to “native” ELF code (stuff you build with GCC or any other compiler that produces native code and sufficiently good debuginfo). Once we get this right for “native code” we can look at how to setup similar conventions for other language execution environments.

Whether you are tracing, profiling or debugging a locally running program, get a core file or want to interpret trace or profile data captured on some system, we want to make sure as many symbols, debuginfo and sources are available and easily accessible. Get them wherever they are. Or have a standard way to get them.

I know how most of this stuff works in Fedora, and how to improve some things for that distribution. But I would need help with other distributions and sanity checking that these ideas make sense in other contexts.

Why?

If you are running Free Software then you should be able to get back at the source code of your binaries. Code is never perfect and real issues always happen in production. Every user really is (and should be allowed to be) a “debugger”, observing (tracing, profiling) their system as it is running. Actual running (optimized) code in a specific setup really is different from development code: you will observe different behavior in an actual deployed binary compared to how it behaved on the packager's or developer's setup.

And this isn’t just for the user on the machine getting useful backtraces. The user might just capture a trace or profile on their machine. Or you might get a core file that needs analysis “off-line”. Then having everything ready beforehand makes recreating a “debug environment” that matches the “production environment” precisely so much easier.

Meta observation

We do want users to trace, profile and debug processes running on their systems so they know precisely what is going on with their machine. But we also care about security so all code should run with the minimal privileges possible. Different users shouldn’t be able to trace each other processes, services should run in separate security context, processes handling sensitive data should make sure to pin security sensitive memory to prevent dumping such data to disk and processes that aren’t supposed to use introspection syscalls should be sandboxed. That is all good stuff. It makes sure users/services can synchronize, signal, debug, trace and profile their own processes, but not more than that.

There are however some kernel tweaks that don’t obey process separation and don’t respect different security scopes, like setting the selinux/yama ptrace_deny/scope. Enabling those will break stuff and will cause use of more privileged code than necessary. These “deny ptrace” features aren’t just about blocking the ptrace system call. They don’t just block “debuggers”. They block all inter-process synchronization, signaling, tracing and profiling by normal (unprivileged) users. Both settings were tried in Fedora and were disabled by default in the end, because with them users can no longer observe their own processes, so they have to raise their privileges to root. It also means a privileged monitoring process cannot just drop privileges to trace or profile lesser privileged code. So you’ll have to debug, profile and trace as root! It can also be seen as a form of security theater, since a compromised process that is running in the same user/security context might not be able to easily observe another process directly, but it can still get at the same inputs, read and manipulate the other process's files and settings, install preload code to disable any restrictions, etc. That makes observing other processes much more cumbersome, but not impossible.

So please don’t use these system breaking tweaks on normal setups where users and administrators should be able to monitor their own processes. We need real solutions that don’t require running everything as root and that respects normal user privileges and security contexts.

build-ids

A build-id is a globally unique identifier for an executable ELF image. Luckily everybody gets this right now (all support is upstream and enabled by default in the GNU toolchain). A build-id is an (allocated) ELF note put into the binary by the linker. It is (normally) the SHA1 hash over all code sections in the ELF image. The build-id can be found in each executable, shared library, kernel, module, etc. It is loaded into memory and automatically dumped into core files.

When you know the build-ids and the addresses where the ELF images are/were loaded then you have enough information to match any address to original source.
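
You can see the build-id of any ELF file with readelf -n; the output below is abridged, and the offsets and hash are illustrative:

$ readelf -n /usr/bin/ls
Displaying notes found at file offset 0x00000274 with length 0x00000024:
  Owner                 Data size    Description
  GNU                   0x00000014   NT_GNU_BUILD_ID (unique build ID bitstring)
    Build ID: 0123456789abcdef0123456789abcdef01234567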

If your build is reproducible then the build-id will also be exactly the same. The build-id identifies the executable code. So stripping symbols or adding debuginfo doesn’t change it. And in theory with reproducible builds you could “just” rebuild your binaries with all debug options turned on (GCC guarantees that producing debug output will not change the executable code generated) and not strip out any symbols. But that is not really practical and a bit cumbersome (you will need to also keep around the exact build environment for any binary on the system).

Because they are so useful and so essential it really makes sense to make it an error when no build-id is found in an executable or shared library, not just warn about it when creating a package.

backtraces/unwind tables

Backtraces are the backbone of everything (tracing, profiling, debugging). They provide the necessary context for any observation. If you have any observability this should be it. To make it possible to get accurate and precise backtraces in any context always use gcc -fasynchronous-unwind-tables everywhere. It is already the default on the most widely used architectures, but you should enable it on all architectures you support. Fedora already does this (either through making it the default in gcc or by passing it explicitly in the build flags used by rpmbuild).

This will get you unwind tables which are put into .eh_frame sections, which are always kept with the binary and loaded in memory, and so can be accessed easily and fast. Frame pointers only get you so far. It is always the tricky code, signal handlers, fork/exec/clone, unexpected termination in the prologue/epilogue, that manipulates the frame pointer. And it is often these tricky situations where you want accurate backtraces the most, and where you get bad results when relying only on frame pointers. Maintaining frame pointers bloats code and reduces optimization opportunities. GCC is really good at automatically generating unwind tables for any higher level language. And glibc now has CFI for all/most hand written assembler.

The only exception might be the kernel (mainly because Linux kernel modules are ET_REL files, where loadable .eh_frame sections are somewhat problematic). But even for the kernel please do generate accurate unwind tables and then put those in the .debug_frame section (which can then be stripped out and put into a separate debug file later). You can do this with the .cfi_sections assembler directive.

function symbols

When you do get backtraces for observations it would be really nice to immediately be able to match any addresses to the function names from the original source code. But normally only the .dynsym symbols are available (these are only those symbols that are necessary for dynamic linking your application and shared libraries). The full .symtab is normally stripped away since it is strictly only necessary for the linker combining object files.

Because .dynsym provides too few symbols and .symtab provides too many symbols, Fedora introduced the mini-symtab (sometimes called mini-debuginfo). This is a special (non-loaded) .gnu_debugdata section that contains a xz-compressed ELF image. This ELF image contains minimal .symtab + .strtab sections for just the function symbols of the original .symtab section.

gdb and elfutils support using/reading .gnu_debugdata upstream. But it is generated only by some obscure function inside the rpm find-debuginfo.sh script. This really should be its own reusable script/program.
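
For reference, here is roughly the shape of that logic (a heavily simplified sketch; the real script filters more sections and handles many corner cases; prog is a placeholder):

$ nm prog --format=posix --defined-only | awk '{ if ($2 == "T" || $2 == "t") print $1 }' | sort > funcsyms
$ nm -D prog --format=posix --defined-only | awk '{ print $1 }' | sort > dynsyms
$ comm -13 dynsyms funcsyms > keepsyms    # function symbols not already in .dynsym
$ objcopy -S --keep-symbols=keepsyms prog mini
$ xz mini
$ objcopy --add-section .gnu_debugdata=mini.xz prog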

An alternative might be to just not strip the full .symtab out together with the full debuginfo and maybe use the new ELF compressed section support. (valgrind needs the full .symtab in some cases – although only really for ld.so, and valgrind doesn’t support compressed sections at the moment).

Together with accurate unwind tables having the function symbols available (and not stripped away or put into a separate debug file that might not be immediately accessible) provides the minimal requirements for easy, simple and useful tracing and profiling.

Full debuginfo

Other debug information can be stored separately from the main executable, but we still need to generate it. Some recommendations:

  • Always use -g (-gdwarf-4 is the default in recent GCC)
  • Do NOT disable -fvar-tracking-assignments
  • gdb-add-index (.gdb_index)
  • Maybe use -g3 (adds macro definitions)

This is a lot of info and sadly somewhat entangled. But please always generate it and then strip it into a separate .debug file.
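
The classic split looks roughly like this (prog is a placeholder); the resulting prog.debug is what would then be installed under /usr/lib/debug:

$ objcopy --only-keep-debug prog prog.debug
$ objcopy --strip-debug prog
$ objcopy --add-gnu-debuglink=prog.debug prog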

This will give you inlines (program structure, which code ended up where), arguments to functions and local variables, plus which value they have at which point in the program, the types and structures used by the program, and the matching of addresses to source lines.

.gdb_index provides debuggers a quick way to navigate some of these structures, so they don’t need to scan it all at startup, even if you only want to use a small portion. -g3 used to be very expensive, but recent GCC versions generate much denser data. Nobody really uses macro definitions much though, since nobody generates them… so chicken, egg. Both indexing and dense macros are proposed as DWARFv5 extensions.

-fvar-tracking-assignments is the default with gcc now; not using it really produces very poor results. Some projects disable it because they are afraid that generating extra debuginfo will somehow impact the generated code. If it ever does, that is a bug in GCC. If you do want to double check, then you can enable GCC -fcompare-debug or define the environment variable GCC_COMPARE_DEBUG to explicitly make GCC check this invariant isn’t violated.
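
For example, either of these makes GCC compile everything twice, with and without debug output, and fail if the generated code differs (foo.c is a placeholder):

$ gcc -O2 -g -fcompare-debug -c foo.c
$ GCC_COMPARE_DEBUG=1 gcc -O2 -g -c foo.c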

Compressing

Full debuginfo is big! So yes, compression is something to think about. But ELF section compression is the wrong level. It isn’t supported by many programs (valgrind for example doesn’t support it). There are two variants (if you use any, please avoid .zdebug, which is now a deprecated GNU extension). It prevents simply mmapping the data and using an index to only read/use what you need. It causes very slow startup.

You should however use DWZ, the DWARF optimization and duplicate removal tool. Given all debuginfo in a package this tool will make sure duplicate DWARF information is stored in a common place, reducing the size of the individual debug files.

You could use both DWZ and ELF section compression together if you really want to get the most compression. But I would recommend using DWZ only and then compress the whole file(s) for storage (like in a package), but install them uncompressed for direct usage.
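
Typical usage runs dwz over all the debug files of a package at once, with -m naming the shared multifile (file names hypothetical):

$ dwz -m foo-1.0.multi.debug libfoo.so.debug foo.debug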

Sources

The DWARF debuginfo references sources, and you really want to have them easily available. So package the (generated) sources (as they were compiled) and put them somewhere under /usr/src/debug/[package-version]/.

There is however one gotcha. DWARF references the sources where they were built. So unless you put and build the sources precisely where you want to install them, you will have to adjust the paths. This can be done in two ways:

  • rpm debugedit
  • gcc -fdebug-prefix-map=old=new

debugedit is both more and less flexible. It is more flexible because it provides you the actual source file list used in the DWARF describing the program. It is less flexible because it isn’t a full DWARF rewriter: it adjusts the locations/directories only as long as they are smaller… So set up a big enough build root path name. It is probably time to rewrite debugedit to support proper DWARF rewriting and make it an independent tool that can easily be reused, not just by rpm.
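
The gcc option instead rewrites the recorded paths at compile time, for example (package name hypothetical):

$ gcc -g -fdebug-prefix-map=$PWD=/usr/src/debug/frob-1.0 -c frob.c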

Separating and “linking”

There are two ways to “link” your binaries to their debug files:

  • .gnu_debuglink section in main file with name (and CRC) of .debug file
  • /usr/lib/debug/.build-id/XX/XXXXXXXXXXXXXXXXXXXXXX.debug

The .gnu_debuglink name has to be searched for under well-known paths (/usr/lib/debug + original location and/or subdirs). This makes it fragile, but more tools support it, and it is the fallback used when there is no build-id. But it might be time to deprecate/remove it, because the names inherently conflict between package versions.

Fedora supports both, with the build-id .debug file being a link to the debuglink file. Fedora also throws in an extra link to the main exe under .build-id. But that link lives in the debuginfo package, so it could mismatch if the main package and debug package versions don’t match up. It is not recommended to mimic this setup.

Preventing conflict

This is work in progress in Fedora:

  • Want to install both 64-bit and 32-bit debug packages.
  • Have an older/newer version of a debuginfo package installed (e.g. when inspecting a core file).

By making debuginfo packages parallel installable across arches and versions, you should be able to easily trace, profile and debug 32-bit and 64-bit programs at the same time, inspect a core file generated against slightly different versions of the executable and libraries than those installed on the developer machine, and install all debug files matching the executables running in a container for deep inspection.

To get there:

  • Hash in full name-version-arch of package into build-id.
  • Get rid of .gnu_debuglink files.
  • No more build-id main file backlinks.
  • Put sources under full name-version-arch subdir

This is where I still have more questions than answers. build-ids can conflict for minor version updates (the files should be completely identical though). Should we hash-in the full package name to make them unique again or accept that multiple packages can provide the same ELF images/build-ids? Dropping .gnu_debuglink (or changing install/renamed paths) will need program updates. Should the build-id-main-file backlinks be moved into the main package?

Should we really package debug files?

We might also want to explore alternatives to parallel installable debuginfo packages. Would it make sense to completely do away with debuginfo packages by:

  • Making /usr/lib/debug and /usr/src/debug “magic” fuse file systems?
  • Populate through something like darkserver
  • Have a cross-distro federated registry of build-ids?

Something like the above is being experimented with in the Clear Linux Project.


EclipseCon NA 2016

It was a great pleasure to have a chance to serve on this year’s EclipseCon Program Committee. As Java SE 8 adoption took place at “record-setting pace” during the past year, I was glad to see the EclipseCon team set their sights ahead, towards JDK 9, with its own track at EclipseCon. If you’d just like to take a peek at the changes being considered, developed and integrated into the JDK 9 Project, you can check out its web site in the OpenJDK community, and try out the Early Access builds.

If you’d like to hear what JDK 9 means for Eclipse, though, then you should come to EclipseCon in March and hear about it first hand from Jay Arthanareeswaran from IBM and Manoj Palat, who will talk about Java 9 support in Eclipse. In their session, they will look at what kind of support JDT provides for developers who would like to use JDK 9 in their projects, discussing planned Eclipse features as well as what modules could mean to different projects and how to best leverage the upcoming module system.

Within the OpenJDK community, Project Jigsaw is where development of the reference implementation of JSR 376 – Java Platform Module System takes place, along with the modularization of the JDK itself and the development of a new run-time image format. That’s a lot of new stuff to digest – fortunately, we’ll have Thomas Schindl at EclipseCon to give us a personal view and overview of what he calls "most likely the biggest change in Java’s history" in the “You, Me and Jigsaw” session.

At this point, you may be wondering if JDK 9 is all about modules. Modularity plays a huge role, but there is a lot more to it – more than 70 JDK Enhancement Proposals have been targeted for the JDK 9 release so far. To walk us through some of Java 9’s other puzzle pieces, we’ll have Erik Costlow from Oracle.

Finally, closing this track on Thursday, Erik will discuss “Preparing your code for JDK 9”. There are some steps you can take already to make your code ready to benefit from the new features planned for JDK 9, such as analyzing your project’s library dependencies for unintentional reliance on JDK-internal APIs.

I hope that you will enjoy this EclipseCon track, and that you will be inspired to start experimenting with JDK 9 and Eclipse.

I’m winding down for a month away from Infinity. The current status is that the language and note format changes for 0.0.2 are all done. You can get them with:

git clone https://github.com/gbenson/i8c.git

There’s also the beginnings of an Emacs major mode for i8 in there too. My glibc tree now has notes for td_ta_thr_iter as well as td_ta_map_lwp2thr. That’s two of the three hard ones done. Get them with:

git clone https://github.com/gbenson/glibc.git -b infinity2

FWIW td_thr_get_info is just legwork and td_thr_tls_get_addr is just a wrapper for td_thr_tlsbase; td_thr_tlsbase is the other hard note.

All notes have testcases with 100% bytecode coverage. I may add a flag for I8X to make not having 100% coverage a failure, and make glibc use it so nobody can commit notes with untested code.

The total note size so far is 720 bytes so I may still manage to get all five libpthread notes implemented in less than 1k:

Displaying notes found at file offset 0x00018f54 with length 0x000002d0:
  Owner                 Data size	Description
  GNU                  0x00000063	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::__lookup_th_unique(i)ip
  GNU                  0x00000088	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::map_lwp2thr(i)ip
  GNU                  0x000000cd	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::__iterate_thread_list(Fi(po)oipii)ii
  GNU                  0x000000d2	NT_GNU_INFINITY (inspection function)
    Signature: libpthread::thr_iter(Fi(po)oiipi)i

At JavaOne, I'll be at the:

OpenJDK Adoption Group BOF [BOF3377]
Monday, Oct 26, 8:00 p.m. | Hilton—Continental Ballroom 4

See you there!

Today I finally moved off my lovely Ubuntu 10.04 LTS, which I was so proud of in 2012. Sad, really. That there are no longer any updates for it is only half of the problem: many projects I am interested in started to fail during compilation because the toolchain is getting old. I was not updating because I did not like Unity; say whatever you want about it. I could learn it, of course, I have learned lots of way more sophisticated things (what about building GNU Classpath?). But I do not like the attitude, and do not like decisions being made the way they are. Design must follow the user demands, not the reverse. I gave a try to Gnome 3 as well with Fedora 22, played with it for some time, but was not very impressed either and decided to drop it after discovering that while it is possible to install the Gnome 2-like Mate desktop, it is very difficult to actually switch into it.

My next distribution will be Ubuntu Mate. Value your freedom, otherwise you will lose it!

Unexpectedly, there were significant problems during the update. While in 2012 Linux installed with no problems for me, now both Ubuntu Mate and Fedora 22 booted in UEFI mode, raising lots of esoteric complaints. It took me half a day to find where to switch these UEFI beauties off in my wondercard, and another half to discover that the wondercard does not remember this setting, spontaneously reverting to "UEFI on". I cannot switch to UEFI: I have two other Linux distros on the same machine and do not want to lose them. But now it seems done.

This month I've released Orson PDF version 1.7, a compact and fast API for creating PDF content in Java through the standard Graphics2D API. This release features:

  • support for transparent images;
  • an implementation of the create() method to better support use against existing Java2D code;
  • addition of the GNU General Public License version 3 as the default license (a commercial license remains available for those that prefer it);
  • various bug fixes.

While Orson PDF has been created to provide PDF export for any Java2D-based code, my own use for it is within JFreeChart and Orson Charts. To provide an example, here is a chart that was exported with Orson PDF, viewed within Acrobat Reader: chartpdf.png

With the new GPLv3 license option, I've now also made the OrsonPDF repo at GitHub public, which will make it easier for other developers to work directly with the source code. You can also use GitHub to report any bugs or other issues.

The original version of this blog entry is published at http://www.object-refinery.com/blog/blog-20151008.html.

It’s been a long while since I posted anything. In the meantime we’ve made lots and lots of progress with Shenandoah. The most important news of the week is that Shenandoah has now been accepted as an official OpenJDK project. We’ve got a website, a mailing list, Mercurial repositories, and a wiki page. The code hasn’t been moved yet; I am in the process of doing it. It will first land in our JDK9 forest (as we’re doing our main development there) and will be backported to JDK8 in a while.

Just to give you a summary of what has happened in the last 1.5 years since I last posted (bad me): we’ve implemented all that we wanted (runtime, interpreter, C1 and C2 barriers, weak-reference support, JNI critical regions support, System.gc() support, and lots of other smallish things), it’s fairly stable (still expect bugs here and there, but it should run quite a lot of your code), performance looks good (on average ~90% of what G1 does, with some benchmarks as bad as ~70% relative to G1 and some beating it at ~150%), and pause times on largish heaps are significantly better than with G1 (but still not quite where we want them to be).

If you’re interested, subscribe to the new list, watch out for the code to land (or grab it from IcedTea in the meantime), and give it a try! Any feedback is welcome, as always! :-)


DataBasin's Select-Identify, an invaluable tool for many working with salesforce.com, showed erratic behaviour that was extremely hard to reproduce, sometimes even by re-running the same query on the same data set: the operation would just stop without any error in the console log, trapped exception or anything else.

After extensive debugging I found the problem in the queryMore method of the API implementation in DataBasinKit: if queryMore had to return just one record, it would malfunction.
Technically this happened because the size reported by Salesforce.com in the queryMore response is not the number of objects returned by that queryMore call, but the size of the original query.

The problem thus affects anything using queryMore: if you did a select with a download batch size of 500, you would get a problem with 501, 1001, 1501 records and so on, while 500 or 502 would work just fine. Combine this with the fact that the query size of Select-Identify is dynamic, and you get an idea of how difficult it was to reproduce.

It is now fixed, and the upcoming 0.9 version will include the fix. All currently released DataBasin versions are affected by this bug.

The first release candidate is finally available. It can be downloaded here or from NuGet.

What's New (relative to IKVM.NET 8.0):

  • Integrated OpenJDK 8u45.
  • Many fixes to late binding support.
  • Added ikvmc support for deterministic output files.
  • Various sun.misc.Unsafe improvements.
  • Many minor bug fixes and performance tweaks.

Changes since previous development snapshot:

  • Assemblies are strong named.
  • Fix for bug #303. ikvmc internal compiler error when trying to get interfaces from type from missing assembly reference.
  • Implemented NIO atomic file move on Windows.

Binaries available here: ikvmbin-8.1.5717.0.zip

Sources: ikvmsrc-8.1.5717.0.zip, openjdk-8u45-b14-stripped.zip

There is a nice profile of me in the current (bimonthly) issue of Java Magazine, and I am very flattered by it, so let me share it right away with you.

There is one question I was expecting though but didn’t come: “When did you start working on Java?”.

So, in order to give some more context, let me play with it and answer my own question here (and without space limits!). I think this is important, because it is about how I started to contribute to OpenJDK, it shows that you can do the same… if you are patient.

JM: When did you start working on Java?

Torre: I started to work in Java around its 1.3 release, and I have used it ever since. I started working on Java quite a bit later though, probably around the Java 1.5/1.6 era. I was working to create an MSN messenger clone in Java on my Linux box, since all my friends were using it (MSN I mean, not Linux unfortunately), including the dreaded emoticons, and no Linux client supported those at the time.

I had all the protocol stuff working, I could handshake and share messages (although I still had to figure out the emoticons part!), but I had a terrible problem. I needed to save user credentials. Well, Java has a fantastic Preferences API, easy enough, right? Except that what I was using wasn’t the proprietary JDK, it was the Free Software version of it: GNU Classpath.

Classpath at the time didn’t have Preferences support, so I was stuck. I think somebody was writing a filesystem-based preferences store, or perhaps it was in Classpath but not in GCJ, which is what everybody was using as a VM with the Classpath library. Anyway, when I started to look at the problem, I realised it would have been nicer to offer a GConf-based Preferences store and integrate the whole thing into the Gnome desktop (at the time, Gnome was a great desktop, nothing like today’s awfulness).

I was hooked. In fact, I never even finished my MSN messenger! After GConf, all sorts of stuff came in: the Decimal Formatter, the GStreamer sound backend, various fixes here and there, and this is when I learned a lot about how Swing works internally, by following the work of Sven de Marothy, Roman Kennke and David Gilbert.

When Sun was about to release OpenJDK, I was in that very first group and witnessed the whole thing, a lot of the behind-the-scenes of the creation of this extremely important code contribution. The OpenJDK license is “GPL + Classpath exception” for a reason. I remember all the heroes that made Java Free Software.

I guess I was lucky, and the timing was perfect.

However, right at the beginning, contributing actual code to OpenJDK wasn’t at all as easy as in Classpath. There was (is!) a lot of process, and things took a lot of time for anything but the most trivial changes, etc…

But eventually I insisted, and Roman and I were the first external people to have code land in the JDK. Roman was, I believe, the first independent person to have commit rights (I think that the people who are still in my team at Red Hat today, and also SAP, had some changes in already, but at the time we two were the only completely external contributors).

It wasn’t easy: we had to challenge ourselves and push a lot, and not give up. I had to challenge Sun, and even more challenge Oracle when it took the lead. But I did it. This is what I mean when I say that everybody can do it: you can develop the skills, and then you need to build the trust and not let it go. I’m not sure which part is more complex, but if you persist it eventually comes. And then all of a sudden billions of people will use your code and you are a Java Champion.

So this is how it started.


Processing 3 is running for the first time on a Raspberry Pi using Eric Anholt's Mesa3D VC4 driver!

Video of the Processing 3 RGB cube demo running on the Raspberry Pi using Eric Anholt's Mesa3D VC4 OpenGL 2 driver:
http://labb.zafena.se/jogamp/vc4/video20150710_113912325.mp4

Thanks to the free software Mesa3D vc4 driver, the Raspberry Pi suddenly turned from a mobile OpenGL ES 2 system into a "desktop" OpenGL 2 system.

Processing 3 is using JogAmp JOGL to tap into OpenGL hardware acceleration on the armv6 Raspberry Pi 1 and armv7 RaspberryPi 2 systems.

Hold on, what is going on here? How can I set up the free software vc4 driver on my own Raspberry Pi system?

This is a collaboration with Eric Anholt, anholt, and Gottfried Haider, gohai, to get Processing 3 running on the Raspberry Pi.

Eric Anholt has worked for about a year to implement a full OpenGL 2 Mesa3D driver for use on the Raspberry Pi, using the VideoCore 4 (VC4) GPU.
http://anholt.livejournal.com/

Getting Eric Anholt's Mesa3D VC4 driver running on a Raspberry Pi is easily done thanks to the work by gohai.
gohai started out roughly following Eric's notes here: http://dri.freedesktop.org/wiki/VC4/
He then put together a buildbot in Python to produce system images for use on the Pi or Pi 2.

Kernel, Mesa and XServer packages and their dependencies are built from git. The system image is produced using gohai's buildbot:
https://github.com/gohai/vc4-buildbot

gohai publishes daily builds using his bot at:
http://sukzessiv.net/~gohai/vc4-buildbot/build/

Myself, I have contributed fixes for corner cases in JogAmp JOGL OpenGL initialization to get it all running.

What was the problem with using the proprietary OpenGL ES VC4 driver on the Raspberry Pi system?

The OpenGL ES standard does not cover how the native window is initialized.
When you initialize OpenGL ES you must pass a platform-specific EGLNativeWindowType, EGLNativePixmapType and EGLNativeDisplayType, depending on the OS you use.

If you read the Khronos header eglplatform.h you will notice that the EGLNativeWindowType is different on each platform:

  • Windows: typedef HWND EGLNativeWindowType;
  • Mac: typedef void *EGLNativeWindowType;
  • Android: typedef struct ANativeWindow* EGLNativeWindowType;
  • Unix X11: typedef Window EGLNativeWindowType;

Creating an on-screen EGL rendering surface requires you to use the eglCreateWindowSurface function, which takes an EGLNativeWindowType parameter. On the Raspberry Pi, however, this is implemented as an EGL_DISPMANX_WINDOW_T struct, which is defined in eglplatform.h as:

typedef struct {
    DISPMANX_ELEMENT_HANDLE_T element;
    int width;   /* This is necessary because dispmanx elements are not queriable. */
    int height;
} EGL_DISPMANX_WINDOW_T;

As you can see, the Raspberry Pi with the proprietary binary drivers uses a Broadcom-specific native window type that is incompatible with X11.
This is the reason why we can't use the Processing code as-is and pass a Java AWT Unix X11 Window to initialize OpenGL ES: EGL will return an error saying that you have passed an incompatible structure to eglCreateWindowSurface.

When using Eric Anholt's Mesa3D VC4 driver, EGL expects a Unix X11 Window for its EGLNativeWindowType, and this is why Processing works out of the box when Eric's Mesa3D VC4 driver is in use. Eric's VC4 driver also implements OpenGL 2, which can be initialized using GLX. GLX allows you to run Processing with OpenGL acceleration across remote X11 network connections!
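
To illustrate, here is a minimal JOGL sketch (my assumptions: JOGL 2.3's com.jogamp package layout and the NEWT windowing toolkit; this is not taken from the Processing sources) that requests a desktop OpenGL 2 profile, which now succeeds on the Pi under the Mesa3D VC4 driver:

    import com.jogamp.opengl.GLCapabilities;
    import com.jogamp.opengl.GLProfile;
    import com.jogamp.newt.opengl.GLWindow;

    public class GL2OnVC4 {
        public static void main(String[] args) {
            // With the proprietary driver only an ES 2 profile was available;
            // with the Mesa3D VC4 driver a desktop GL2 profile can be requested.
            GLProfile profile = GLProfile.get(GLProfile.GL2);
            GLCapabilities caps = new GLCapabilities(profile);
            // NEWT creates a native X11 window, which Mesa's EGL/GLX accepts.
            GLWindow window = GLWindow.create(caps);
            window.setSize(640, 480);
            window.setTitle("OpenGL 2 on VC4");
            window.setVisible(true);
        }
    }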

Cheers
Xerxes Rånby

Final 8.1 development snapshot. Release candidate 0 will be next (after .NET 4.6 RTM).

Changes:

  • Updated HOWTO reference to OpenJDK 8u45.
  • Extract Windows version from kernel32.dll to avoid version lie. Idea stolen from OpenJDK.
  • Moved unused field removal optimization to a later stage in the compilation.
  • Made the field removal optimization check stricter, so that only final fields without annotations are removed.
  • Added support for automatically passing in fields to "native" methods.
  • Various minor clean ups.
  • Added FieldWrapper.IsSerialVersionUID property to properly (and consistently) detect serialVersionUID fields.
  • Improved side effect free static initializer detection.
  • Improved -removeassertions ikvmc optimization to remove more code (esp. allow otherwise empty static initializers to be optimized away).

Binaries available here: ikvmbin-8.1.5666.zip

Just a couple of days ago I found out that some of my favourite musicians decided to join together to release an album, and allowed fans to preorder it on a crowdfunding website, Music Raiser.

The name of the band is “O.R.k.” and the founders are none other than Lef, Colin Edwin, Pat Mastelotto and Carmelo Pipitone.

You have probably heard their names; if not: Colin Edwin is the bassist from Porcupine Tree, while Carmelo Pipitone is the gifted guitarist from Marta Sui Tubi, an extremely original Italian band that has probably done the most interesting things in Italian music in the last 15 years or so. Lef, aka Lorenzo Esposito Fornasari, has done so many things that it is quite hard to pick just one, but in the metal community he is probably best known for Obake. Finally, Pat Mastelotto is the drummer of King Crimson, and this alone made me jump out of my seat!

One of the preorder bonuses was the chance to participate in a Remix Contest, and although I only got the stems yesterday in the late morning, I could not resist at least giving it a try, and it’s a great honour for me that they have put my attempt on their YouTube channel:

It’s a weird feeling editing this music: after all, who am I to cut and remix and change the drum part (King Crimson, please forgive me!)? How do I ever dare to touch the guitars and voice, or rearrange the bass!? :)

But it was indeed a really fun experience, and I hope to be able to do this again in the future.

And who knows, maybe they will even like how I messed up their art and decide to put me on their album! Nevertheless, it has already been a great honour for me to see this material in semi-raw form (and very interesting material it is!), so this has already been my first prize.

I’m now looking forward to listening to the rest of the album!


I'm happy to announce that JFreeSVG version 3.0 has been uploaded to SourceForge. JFreeSVG is a fast and lightweight API for creating SVG content in Java. This release features:

  • new handling for BasicStroke cap, join and miterlimit;
  • a new ZIP option when writing SVG to files;
  • a demo for exporting Swing UIs to SVG;
  • removal of the CanvasGraphics2D implementation (to focus on SVG only);
  • a fix for handling of PathIterator.SEG_CLOSE;
  • a fix for y-coordinate bug in drawImage();
  • a workaround for ClassCastException when exporting Swing UIs on MacOSX with Nimbus L&F.

To ensure that JFreeSVG provides a fully functional Graphics2D implementation, I tested it using the SwingSet3 demo with modifications to redirect the screen output directly to JFreeSVG to produce SVG output. I've always liked the way that Swing uses the Java2D API to cleanly separate its rendering from having any direct knowledge of the actual output target. Here is an example:

[Embedded SVG: a SwingSet3 UI exported by JFreeSVG]

This turned out to be an effective test, because it uncovered a bug in one of the drawImage() methods that has remained undetected in all previous JFreeSVG releases.
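
The redirection itself is only a few lines. Here is a minimal sketch (assuming JFreeSVG 3.0's SVGGraphics2D and SVGUtils classes; the button and output file are hypothetical stand-ins for whatever component you want to export):

    import java.io.File;
    import javax.swing.JButton;
    import org.jfree.graphics2d.svg.SVGGraphics2D;
    import org.jfree.graphics2d.svg.SVGUtils;

    public class SwingToSVG {
        public static void main(String[] args) throws Exception {
            JButton button = new JButton("Hello, SVG");
            button.setSize(button.getPreferredSize());
            // SVGGraphics2D is a Graphics2D implementation that records
            // drawing operations as SVG elements instead of painting pixels.
            SVGGraphics2D g2 = new SVGGraphics2D(200, 50);
            button.paint(g2);
            SVGUtils.writeToSVG(new File("button.svg"), g2.getSVGElement());
        }
    }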

One last thing...the JFreeSVG repo at GitHub is now public, which will make it easier for other developers to tweak the code for experimentation or bug fixes (if you spot a bug though, please report it to me).

If you'd like to give feedback on this post, please comment via the JFreeSVG forum.

In firefox development, it’s normal to do most development tasks via the mach command. Build? Use mach. Update UUIDs? Use mach. Run tests? Use mach. Debug tests? Yes, mach mochitest --debugger gdb.

Now, normally I run gdb inside emacs, of course. But this is hard to do when I’m also using mach to set up the environment and invoke gdb.

This is really an Emacs bug. GUD, the Emacs interface to all kinds of debuggers, is written as its own mode, but there’s no really great reason for this. It would be way cooler to have an adaptive shell mode, where running the debugger in the shell would magically change the shell-ish buffer into a gud-ish buffer. And somebody — probably you! — should work on this.

But anyway this is hard and I am lazy. Well, sort of lazy and when I’m not lazy, also unfocused, since I came up with three other approaches to the basic problem. Trying stuff out and all. And these are even the principled ways, not crazy stuff like screenify.

Oh right, the basic problem.  The basic problem with running gdb from mach is that then you’re just stuck in the terminal. And unless you dig the TUI, which I don’t, terminal gdb is not that great to use.

One of the ideas, in fact the one this post is about, since this post isn’t about the one that I couldn’t get to work, or the one that is also pretty cool but that I’m not ready to talk about, was: hey, can’t I just attach gdb to the test firefox? Well, no, of course not, the test program runs too fast (sometimes) and racing to attach is no fun. What would be great is to be able to pre-attach — tell gdb to attach to the next instance of a given program.

This requires kernel support. Once upon a time there were some gdb and kernel patches (search for “global breakpoints”) to do this, but they were never merged. Though hmm! I can do some fun kernel stuff with SystemTap…

Specifically what I did was write a small SystemTap script to look for a specific exec, then deliver a SIGSTOP to the process. Then the script prints the PID of the process. On the gdb side, there’s a new command written in Python that invokes the SystemTap script, reads the PID, and invokes attach. It’s a bit hacky and a bit weird to use (the SIGSTOP appears in gdb to have been delivered multiple times or something like that). But it works!

It would be better to have this functionality directly in the kernel. Somebody — probably you! — should write this. But meanwhile my hack is available, along with a few other gdb scripts, in my gdb helpers github repository.

Over the last twelve months or so, one of my projects has been fixing and reviewing fixes of javac lint warnings in the JDK 9 code base (varargs, fallthrough, serial, finally, overrides, deprecation, raw and unchecked) and once a warning category is cleared, making new instances of that category a fatal build error. Ultimately, all the warnings in the jdk repository were resolved and -Xlint:all -Werror is now used in the build.

Being involved in fixing several thousand warnings, I'd like to share some tips for developers who want to undertake an analogous task of cleaning up the technical debt of javac lint warnings in their own code base. First, I recommend tackling the warnings in a way that aligns well with the build system of the project, with a consideration of getting some code protected by the compiler from some warning categories as soon as possible. While the build of the JDK has been re-engineered over the course of the warnings cleanup, to a first approximation the build has been organized around Hg repositories. (At present, in JDK 9 the build is actually arranged around modules. A few years ago, the build was organized around Java packages rather than repositories.) A warnings cleanup isn't really done until introducing new instances of the warning cause a build failure; new warnings are too easy to ignore otherwise. Therefore, for JDK 9, the effort was organized around clearing the whole jdk repository of a warning category and then enabling that warning category in the build as opposed to, say, completely clearing a particular package of all warnings and then moving to the next package.

There are two basic approaches to resolving a warning: suppressing it using the @SuppressWarnings mechanism or actually fixing the code triggering the warning. The first approach is certainly more expedient. While it doesn't directly improve the code base, it can offer an indirect benefit of creating a situation where new warnings can be kept out of the code base by allowing a warning to be turned on in the build sooner. The different warning categories span a range of severity levels and while some warnings are fairly innocuous, others are suspicious enough that I'd recommend always fixing them if a fix is feasible. When resolving warnings in the JDK, generally the non-deprecation warnings categories were fixed while the deprecation warnings were suppressed with a follow-up bug filed. The non-deprecation warnings mostly require Java language expertise to resolve and little area expertise; deprecation warnings are the reverse, often quite deep area expertise is needed to develop and evaluate a true fix.

Tips on addressing specific categories of lint warnings:

[cast]: Warn about use of unnecessary casts.
Since these warnings are generated entirely from the contents of method bodies, there is no impact on potential callers of the code. Also, the casts analyzed as redundant by javac are easy and safe to remove; fixing cast warnings is essentially a zero-risk change.
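
For example (a hypothetical fragment; getNames() stands in for any method returning List<String>):

    List<String> names = getNames();
    // [cast] warning: names.get(0) is already statically typed as String.
    String first = (String) names.get(0);
    // Fixed: the redundant cast is simply dropped.
    String second = names.get(0);
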
[fallthrough]: Warn about falling through from one case of a switch statement to the next.
When such a falling through is not intentional, it can be a very serious bug. All fallthrough switch cases should be examined for correctness. An idiomatic and intentional fallthrough should have two parts: first, the cases in question should be documented in comments explaining that the fallthrough is expected and second, an @SuppressWarnings({"fallthrough"}) annotation should be added to the method containing the switch statement.
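
A hypothetical fragment showing both parts (emitHeader and emitBody are made-up helpers):

    @SuppressWarnings({"fallthrough"})
    private void emit(int level) {
        switch (level) {
        case 2:
            emitHeader();
            // Intentional fallthrough: level 2 output includes the body too.
        case 1:
            emitBody();
            break;
        default:
            break;
        }
    }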

See also the discussion of switch statements in Java Puzzlers, Puzzle 23: No Pain, No Gain.

[static]: Warn about accessing a static member using an instance.
This is an unnecessary and misleading coding idiom that should be unconditionally removed. The fix is to simply refer to the static member using the name of the type rather than an instance of the type.
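
For example (a fragment; task is a hypothetical Runnable, and the InterruptedException that sleep declares is elided):

    Thread t = new Thread(task);
    t.start();
    // Misleading: reads like an instance method call, but sleep is static
    // and always affects the current thread, never t.
    t.sleep(1000);
    // Clear: refer to the static member via the type.
    Thread.sleep(1000);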

This coding anti-pattern is discussed in Java Puzzlers, Puzzle 48: All I Get Is Static.

[dep-ann]: Warn about items marked as deprecated in JavaDoc but not using the @Deprecated annotation
Since Java SE 5.0, the way to mark an element as deprecated is to modify it with a @Deprecated annotation. While a @deprecated javadoc tag should be used to describe all @Deprecated elements, the javadoc tag is informative only and does not mean the element is treated as deprecated by the compiler.

An element should have an @deprecated javadoc tag in its javadoc if and only if the element is @Deprecated.

Therefore, the fix should be to either remove the @deprecated javadoc tag if the element should not be deprecated or add the @Deprecated annotation if it should be deprecated.
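
In other words, the annotation and the javadoc tag should always appear together; a hypothetical example:

    /**
     * Computes the value the old way.
     *
     * @deprecated Use {@link #newCompute()} instead.
     */
    @Deprecated
    public int oldCompute() {
        return newCompute();
    }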

[serial]: Warn about Serializable classes that do not provide a serialVersionUID.
Serialization is a subtle and complex protocol whose compatibility impact on evolving a type should not be underestimated. To check for compatibility between the reader of serial stream data and the writer of the data, besides matching the names of the reader and writer, identification codes of the reader and the writer are also compared and the serial operation fails if the codes don't match. When present, a serialVersionUID field of a class stores the identification code, called a Stream Unique Identifier (SUID) in serialization parlance. When a serialVersionUID field is not present, a particular hashing algorithm is used to compute the SUID instead. The hash algorithm is perturbed by many innocuous changes to a class and can therefore improperly indicate a serial incompatibility when no such incompatibility really exists. To avoid this hazard, a serialVersionUID field should be present on all Serializable classes following the usual cross-version serialization contracts, including Serializable abstract superclasses.

If a Serializable class without a serialVersionUID has already been shipped in a release, running the serialver tool on the type in the shipped release will return the serialVersionUID declaration needed to maintain serial compatibility.
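
The resulting fix is a single field declaration in the Serializable class (the value shown here is a made-up example of serialver output):

    // Matches the SUID of the class as shipped, as reported by serialver.
    private static final long serialVersionUID = -6034044314589513430L;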

For further discussion, see Effective Java, 2nd Edition, Item 74: Implement Serializable judiciously.

[overrides]: Warn about issues regarding method overrides.
As explained in Effective Java, 2nd Edition, Item 9: Always Override hashCode when you override equals, for objects to behave properly when used in collections, they must have correct equals and hashCode implementations. The invariant checked by javac is more nuanced than the one discussed in Effective Java; javac checks that if a class overrides equals, hashCode has been overridden somewhere in the superclass chain of the class. It is common for a set of related classes to be able to share a hashCode implementation, say a function of a private field in the root superclass in a set of related types. However, each class will still need to have its own equals method for the usual instanceof check on the argument to equals.
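
As a minimal, hypothetical example of a class satisfying the invariant javac checks:

    public final class Point {
        private final int x;
        private final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        @Override
        public boolean equals(Object obj) {
            if (!(obj instanceof Point))
                return false;
            Point other = (Point) obj;
            return x == other.x && y == other.y;
        }

        @Override
        public int hashCode() {
            return 31 * x + y;
        }
    }
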
[deprecation]: Warn about use of deprecated items.
Well-documented @Deprecated elements suggest a non-deprecated replacement. When using a replacement is not feasible, or no such replacement exists, @SuppressWarnings("deprecation") can be used to acknowledge the situation and remove the warning. A small language change made in JDK 9 makes suppressing deprecation warnings tractable.
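
When suppression is the right choice, keep its scope as narrow as possible; a hypothetical example (legacyApi is a made-up deprecated method):

    @SuppressWarnings("deprecation") // No supported replacement for legacyApi yet.
    private void callLegacy() {
        legacyApi();
    }
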
[rawtypes]: Warn about use of raw types.
[unchecked]: Warn about unchecked operations.
Both rawtypes and unchecked warnings are linked to the same underlying cause: incomplete generification of APIs and their implementations. Generics shipped in 2004 as part of Java SE 5.0; Java code written and used today should be generics-aware! Being generics-aware has two parts: using generics properly in the signature / declaration of a method, constructor, or class, and using generics properly in method and constructor bodies. Many uses of generics are straightforward; if you have a list that only contains strings, it should probably be declared as a List<String>. However, some uses of generics can be subtle and are out of scope for this blog entry. Fortunately, extensive guides are available with detailed advice. IDEs also provide refactorings for generics; check their documentation for details.
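
The simplest cases look like this (a fragment):

    // Raw type: triggers a rawtypes warning on the declaration
    // and an unchecked warning on the call to add.
    List words = new ArrayList();
    words.add("hello");
    // Generics-aware: no warnings, and the compiler checks element types.
    List<String> fixed = new ArrayList<>();
    fixed.add("hello");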

I hope these tips help you make your own Java project warnings-free.

I recently had occasion to scan some papers using a sheet-fed Ricoh printer/scanner/fax/copier. It seems to think that about 6 MB is as big an email attachment as it can send, so it splits up the PDFs into base64-encoded attachments. If you find yourself in a similar situation:

  • save the raw base64 text (if you’re using GMail, “show original” is your friend) and trim the extraneous text.
  • concatenate the multiple pieces together: cat part1 part2 > all.base64.
  • decode the whole thing: cat all.base64 | base64 -d > myscan.pdf.

As part of milling Project Coin in JDK 9, the try-with-resources statement has been improved. If you already have a resource as a final or effectively final variable, you can use that variable in the try-with-resources statement without declaring a new variable in the try-with-resources statement.

For example, given resource declarations like

        // A final resource
        final Resource resource1 = new Resource("resource1");
        // An effectively final resource
        Resource resource2 = new Resource("resource2");

the old way to write the code to manage these resources would be something like:

        // Original try-with-resources statement from JDK 7 or 8
        try (Resource r1 = resource1;
             Resource r2 = resource2) {
            // Use of resource1 and resource2 through r1 and r2.
        }

while the new way can be just

        // New and improved try-with-resources statement in JDK 9
        try (resource1;
             resource2) {
            // Use of resource1 and resource2.
        }

An initial pass has been made over the java.base module in JDK 9 to update the JDK libraries to use this new language feature.

You can try out these changes in your own code using a JDK 9 snapshot build. Enjoy!

As I wrote previously, Project Jigsaw is coming into JDK 9 in several large steps. JEP 200 defines the modular structure of the JDK, JEP 201 reorganizes the JDK source code into modular form, and JEP 220 restructures the JDK and JRE run-time images to support modules. The actual module system will be defined in JSR 376, which is just getting under way, and implemented by a corresponding JEP, yet to be submitted.

We implemented the source-code reorganization (JEP 201) last August. This step, by design, had no impact on developers or end users.

Most of the changes for modular run-time images (JEP 220) were integrated late last week and are now available in JDK 9 early-access build 41. This step, in contrast to the source-code reorganization, will have significant impact on developers and end users. All of the details are in the JEP, but here are the highlights:

  • JRE and JDK images now have identical structures. Previously a JDK image embedded the JRE in a jre subdirectory; now a JDK image is simply a run-time image that happens to contain the full set of development tools and other items historically found in the JDK.

  • User-editable configuration files previously located in the lib directory are now in the new conf directory. The files that remain in the lib directory are private implementation details of the run-time system, and should never be opened or modified.

  • The endorsed-standards override mechanism has been removed. Applications that rely upon this mechanism, either by setting the system property java.endorsed.dirs or by placing jar files into the lib/endorsed directory of a JRE, will not work. We expect to provide similar functionality later in JDK 9 in the form of upgradeable modules.

  • The extension mechanism has been removed. Applications that rely upon this mechanism, either by setting the system property java.ext.dirs or by placing jar files into the lib/ext directory of a JRE, will not work. In most cases, jar files that were previously installed as extensions can simply be placed at the front of the class path.

  • The internal files rt.jar, tools.jar, and dt.jar have been removed. The content of these files is now stored in a more efficient format in implementation-private files in the lib directory. Class and resource files previously in tools.jar and dt.jar are now always visible via the bootstrap or application class loaders in a JDK image.

  • A new, built-in NIO file-system provider can be used to access the class and resource files stored in a run-time image. Tools that previously read rt.jar and other internal jar files directly should be updated to use this file system.
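
A sketch of reading one class file through that provider (my assumptions: the jrt: URI scheme and the /modules path layout described in JEP 220):

    import java.net.URI;
    import java.nio.file.*;

    public class JrtExample {
        public static void main(String[] args) throws Exception {
            // The built-in provider is available inside a JDK 9 run-time image.
            FileSystem jrt = FileSystems.getFileSystem(URI.create("jrt:/"));
            Path object = jrt.getPath("/modules/java.base/java/lang/Object.class");
            System.out.println(Files.size(object) + " bytes");
        }
    }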

We’re aware that these changes will break some applications, in particular IDEs and other development tools which rely upon the internal structure of the JDK. We think that the improvements to performance, security, and maintainability enabled by these changes are, however, more than worth it. We’ve already reached out to the maintainers of the major IDEs to make sure that they know about these changes, and we’re ready to assist as necessary.

If you have trouble running an existing application on JDK 9 build 41 or later and you think it’s due to this restructuring, yet not caused by one of the changes listed above or in JEP 220, then please let us know on the jigsaw-dev mailing list (you’ll need to subscribe first, if you haven’t already), or else submit a bug report via bugs.java.com. Thanks!

For almost a year now I’ve had a Yoga 2 Pro. Despite some people thinking the name is silly and me only using the screen rotation once (on a plane), it’s a nice machine and Fedora 20 running GNOME is great on it. The only non-stock thing to do[1] is, in Firefox, open about:config and set layout.css.devPixelsPerPx to 2.

When I bought it, I cheaped out and got the 256 GB disk. I shouldn’t have, so I recently bought a 1 TB mSATA disk to replace it. I also bought an mSATA -> USB3 enclosure to use for dd-ing everything over to the new disk.

For reasons I can’t recall I have my encrypted /home *not* on a logical volume, so growing it into the free space on the new disk basically just involved booting from a live USB stick, unlocking the LUKS volume, using gdisk to delete the existing partition and create a new, larger one starting at the same offset, and then running e2fsck and resize2fs. If you’re going to do this yourself, you should of course back up your data first.

Physically changing the disk involved removing the 11 T5 bolts on the bottom and the pesky Phillips 00 bolt holding the SSD in place.

[1] Well, depending upon how old your kernel is, you may also need to rmmod/blacklist ideapad_laptop.

I started hacking on firefox recently. And, of course, I’ve configured emacs a bit to make hacking on it more pleasant.

The first thing I did was create a .dir-locals.el file with some customizations. Most of the tree has local variable settings in the source files — but some are missing and it is useful to set some globally. (Whether they are universally correct is another matter…)

Also, I like to use bug-reference-url-mode. What this does is automatically highlight references to bugs in the source code. That is, if you see “bug #1050501”, it will be buttonized and you can click (or C-RET) and open the bug in the browser. (The default regexp doesn’t capture quite enough references so my settings hack this too; but I filed an Emacs bug for it.)

I put my .dir-locals.el just above my git checkout, so I don’t end up deleting it by mistake. It should probably just go directly in-tree, but I haven’t tried to do that yet. Here’s that code:

(
 ;; Generic settings.
 (nil .
      ;; See C-h f bug-reference-prog-mode, e.g, for using this.
      ((bug-reference-url-format . "https://bugzilla.mozilla.org/show_bug.cgi?id=%s")
       (bug-reference-bug-regexp . "\\([Bb]ug ?#?\\|[Pp]atch ?#\\|RFE ?#\\|PR [a-z-+]+/\\)\\([0-9]+\\(?:#[0-9]+\\)?\\)")))

 ;; The built-in javascript mode.
 (js-mode .
     ((indent-tabs-mode . nil)
      (js-indent-level . 2)))

 (c++-mode .
	   ((indent-tabs-mode . nil)
	    (c-basic-offset . 2)))

 (idl-mode .
	   ((indent-tabs-mode . nil)
	    (c-basic-offset . 2)))

)

In programming modes I enable bug-reference-prog-mode, which enables highlighting only in comments and strings. This could easily be done from prog-mode-hook, but I made my choice of minor modes depend on the major mode via find-file-hook.

I’ve also found that it is nice to enable this minor mode in diff-mode and log-view-mode. This way you get bug references in diffs and when viewing git logs. The code ends up like:

(defun tromey-maybe-enable-bug-url-mode ()
  (and (boundp 'bug-reference-url-format)
       (stringp bug-reference-url-format)
       (if (or (derived-mode-p 'prog-mode)
	       (eq major-mode 'tcl-mode)	;emacs 23 bug
	       (eq major-mode 'makefile-mode)) ;emacs 23 bug
	   (bug-reference-prog-mode t)
	 (bug-reference-mode t))))

(add-hook 'find-file-hook #'tromey-maybe-enable-bug-url-mode)
(add-hook 'log-view-mode-hook #'tromey-maybe-enable-bug-url-mode)
(add-hook 'diff-mode-hook #'tromey-maybe-enable-bug-url-mode)

Thanks to everybody who commented on the JamVM 2.0.0 release, and apologies it's taken so long to approve them - I was expecting to get an email when I had an unmoderated comment but I haven't received any.

To answer the query regarding Nashorn.  Yes, JamVM 2.0.0 can run Nashorn.  It was one of the things I tested the JSR 292 implementation against.  However, I can't say I ran any particularly large scripts with it (it's not something I have a lot of experience with).  I'd be pleased to hear any experiences (good or bad) you have.

So now 2.0.0 is out of the way I hope to do much more frequent releases.  I've just started to look at OpenJDK 9.  I was slightly dismayed to discover it wouldn't even start up (java -version), but it turned out to be not a lot of work to fix (2 evenings).  Next is the jtreg tests...

I'm pleased to announce a new release of JamVM.  JamVM 2.0.0 is the first release of JamVM with support for OpenJDK (in addition to GNU Classpath). Although IcedTea already includes JamVM with OpenJDK support, this has been based on periodic snapshots of the development tree.

JamVM 2.0.0 supports OpenJDK 6, 7 and 8 (the latest). With OpenJDK 7 and 8 this includes full support for JSR 292 (invokedynamic). JamVM 2.0.0 with OpenJDK 8 also includes full support for Lambda expressions (JSR 335), type annotations (JSR 308) and method parameter reflection.

In addition to OpenJDK support, JamVM 2.0.0 also includes many bug-fixes, performance improvements and improved compatibility (from running the OpenJDK jtreg tests).

The full release notes can be found here (changes are categorised into those affecting OpenJDK, GNU Classpath and both), and the release package can be downloaded from the file area.