Planet Classpath

I finished building my Talos II system, and I decided to post my thoughts on it here.

This is an excellent machine, the best workstation I’ve ever used. It’s very fast — it compiles Emacs and the kernel in 2 minutes, and just spins up the fans a little bit while doing so (under normal operation it’s quiet). And it’s in a completely different league in terms of openness than any other computer I’ve owned (and any other high performance computer on the market today).

The system came with a nice user’s manual, and full schematics! I’ve already referred to the schematics to set up the pinmux for a rarely-used serial port so that I could use a TTL serial cable I had lying around (I submitted the device tree patch to Raptor).

It’s really two computers in one: the “baseboard management controller” (BMC), a low power ARMv6, with a full distro on it, and the main POWER9 CPUs. The BMC boots up as soon as you plug in the power supply, before you even press the power button. (It would be nice if there were an ncurses top-like viewer for fan speeds and temperatures that I could leave running on the BMC serial console, but I haven’t found such a thing yet.)

It has serial ports everywhere! One right into the main CPU, and two into the BMC. This is great for low level development, e.g., breaking into bootcode at an early stage.

I left the machine running overnight after first booting it. My neighbourhood had a power glitch, and in the morning I discovered the main CPU was off, and power cycling it via the BMC wasn’t working. Before unplugging and plugging back in, I asked on #talos-workstation, and it turned out I had hit a bug in the first-release firmware. With special commands I was able to power cycle the main CPU just via BMC software (no unplug required). Wanting to know the details, I asked if there was a data sheet for the chip I was interacting with. The amazing thing, from an openness perspective, is that one of the Raptor engineers instead referred me directly to the Verilog source code of the FPGA handling the power sequencing. No searching for datasheets, no black box testing, just straight source code (which is recompilable using an open source FPGA toolchain, BTW).

It’s so refreshing to not have to do reverse engineering and speculation on opaque things when something fails. On this machine, everything is there, you just look up the source code.

I’ve always disliked the inflexibility and opacity of BIOS/EFI in the x86 world. IBM has done an amazing job here with OpenPOWER. All the early boot code is open, with no locked-down management engines or anything like that. And they’ve adopted Petitboot as the bootloader. It runs on a Linux kernel, so I was able to bootstrap via debootstrap over NFS, building everything within the bootloader environment. Running a compiler in a boot environment is surreal. Even with free options on x86 like libreboot or coreboot, and GRUB, the boot environment is extremely limited. With Petitboot I at times wondered if I even needed to boot into a “desktop” kernel (at least for serial-only activities).

Now I’m setting up my development environment and I’m learning about the PPC64 ELFv2 ABI, with a view toward figuring out how to bootstrap SBCL. I feel lucky that I got a POWER9 machine early while there are still some rough edges to figure out.

We are pleased to announce the release of IcedTea 3.9.0!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 8 support with the July 2018 security fixes from OpenJDK 8 u181.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 3.9.0 (2018-09-27)

  • Security fixes
  • New features
    • PR3623: Allow Shenandoah to be used on all architectures
    • PR3624: Sync desktop files with Fedora/RHEL versions again
    • PR3628: Install symlinks to tapsets in SystemTap directory
  • Import of OpenJDK 8 u172 build 11
    • S8031304: Add dcmd to print all loaded dynamic libraries.
    • S8044107: Add Diagnostic Command to list all ClassLoaders
    • S8055755: Information about loaded dynamic libraries is wrong on MacOSX
    • S8059036: Implement Diagnostic Commands for heap and finalizerinfo
    • S8130400: Test java/awt/image/DrawImage/IncorrectClipXorModeSurface2Surface.java fails with ClassCastException
    • S8136356: Add time zone mappings on Windows
    • S8139673: NMT stack traces in output should show mtcomponent
    • S8147542: ClassCastException when repainting after display resolution change
    • S8154017: Shutdown hooks are racing against shutdown sequence, if System.exit()-calling thread is interrupted
    • S8165466: DecimalFormat percentage format can contain unexpected %
    • S8166772: Touch keyboard is not shown for text components on a screen touch
    • S8169424: src/share/sample/scripting/scriptpad/src/scripts/memory.sh missing #!
    • S8170358: [REDO] 8k class metaspace chunks misallocated from 4k chunk Freelist
    • S8170395: Metaspace initialization queries the wrong chunk freelist
    • S8176072: READING attributes are not available on TSF
    • S8177721: Improve diagnostics in sun.management.Agent#startAgent()
    • S8177758: Regression in java.awt.FileDialog
    • S8183504: 8u131 Win 10, issue with wrong position of Sogou IME popup
    • S8184991: NMT detail diff should take memory type into account
    • S8187331: VirtualSpaceList tracks free space on wrong node
    • S8187629: NMT: Memory miscounting in compiler (C2)
    • S8187658: Bigger buffer for GetAdaptersAddresses
    • S8187685: NMT: Tracking compiler memory usage of thread’s resource area
    • S8187803: JDK part of JavaFX-Swing dialogs appearing behind main stage
    • S8187985: Broken certificate number in debug output
    • S8188855: Fix VS10 build after “8187658: Bigger buffer for GetAdaptersAddresses”
    • S8189599: InitialBootClassLoaderMetaspaceSize and CompressedClassSpaceSize should be checked consistent from MaxMetaspaceSize
    • S8189646: sun/security/ssl/SSLSocketImpl/SSLSocketCloseHang.java failed with “java.net.SocketTimeoutException: Read timed out”
    • S8190442: Backout changes for JDK-8087291 from 8u-dev as it didn’t use main CR id
    • S8190690: Impact on krb5 test cases in the 8u-CPU nightly
    • S8191969: javac produces incorrect RuntimeInvisibleTypeAnnotations length attribute
    • S8192987: keytool should remember real storetype if it is not provided
    • S8193156: Need to backout fixes for JDK-8058547, JDK-8055753, JDK-8085903
    • S8193807: Avoid UnsatisfiedLinkError on AIX by providing empty basic implementations of getSystemCpuLoad and getProcessCpuLoad
  • Import of OpenJDK 8 u181 build 13
    • S8038636: speculative traps break when classes are redefined
    • S8051972: sun/security/pkcs11/ec/ReadCertificates.java fails intermittently
    • S8055008: Clean up code that saves the previous versions of redefined classes
    • S8057570: RedefineClasses() tests fail assert(((Metadata*)obj)->is_valid()) failed: obj is valid
    • S8074373: NMT is not enabled if NMT option is specified after class path specifiers
    • S8076117: EndEntityChecker should not process custom extensions after PKIX validation
    • S8156137: SIGSEGV in ReceiverTypeData::clean_weak_klass_links
    • S8157898: SupportedDSAParamGen.java failed with timeout
    • S8169201: Montgomery multiply intrinsic should use correct name
    • S8170035: When determining the ciphersuite lists, there is no debug output for disabled suites.
    • S8176183: sun/security/mscapi/SignedObjectChain.java fails on Windows
    • S8187045: [linux] Not all libraries in the VM are linked with -z,noexecstack
    • S8187635: On Windows Swing changes keyboard layout on a window activation
    • S8188223: IfNode::range_check_trap_proj() should handler dying subgraph with single if proj
    • S8196224: Even better Internet address support
    • S8196491: Newlines in JAXB string values of SOAP-requests are escaped to “&#xa;”
    • S8196854: TestFlushableGZIPOutputStream failing with IndexOutOfBoundsException
    • S8197943: Unable to use JDWP API in JDK 8 to debug JDK 9 VM
    • S8198605: Touch keyboard is shown for a non-focusable text component
    • S8198606: Touch keyboard does not hide, when a text component looses focus
    • S8198794: Hotspot crash on Cassandra 3.11.1 startup with libnuma 2.0.3
    • S8199406: Performance drop with Java JDK 1.8.0_162-b32
    • S8199748: Touch keyboard is not shown, if text component gets focus from other text component
    • S8200359: (tz) Upgrade time-zone data to tzdata2018d
    • S8201433: Fix potential crash in BufImg_SetupICM
    • S8202585: JDK 8u181 l10n resource file update
    • S8202996: Remove debug print statements from RMI fix
    • S8203233: (tz) Upgrade time-zone data to tzdata2018e
    • S8203368: ObjectInputStream filterCheck method throws NullPointerException
    • S8204874: Update THIRDPARTYREADME file
    • S8205491: adjust reflective access checks
  • Backports
  • Bug fixes
    • PR3597: Potential bogus -Wformat-overflow warning with -Wformat enabled
    • PR3600: jni_util.c does not import header file which declares getLastErrorString
    • PR3601: Fix additional -Wreturn-type issues introduced by 8061651
    • PR3630: Use ${datadir} when specifying default tz.properties location
    • PR3632: IcedTea installing symlinks to SystemTap directory rather than individual tapsets
  • AArch64 port
    • S8207345, PR3626: Trampoline generation code reads from uninitialized memory
  • Shenandoah
    • PR3619: Shenandoah broken on s390
    • PR3620: Shenandoah broken on ppc64
    • Allocation failure injection machinery
    • [backport] AArch64 shenandoah_store_check should read evacuation_in_progress as byte
    • [backport] Account trashed regions from coalesced CM-with-UR
    • [backport] Adaptive collection set selection in adaptive policy
    • [backport] Adaptive heuristics accounts trashed cset twice
    • [backport] Adapt upstream object pinning API
    • [backport] Add comments in shenandoah_store_check on direct heap field use
    • [backport] Added diagnostic flag ShenandoahOOMDuringEvacALot
    • [backport] Added missing header file for non-PCH build
    • [backport] Add missing barrier in C1 NIOCheckIndex intrinsic
    • [backport] Add new pinned/cset region state for evac-failure-path
    • [backport] Add ShenandoahRootProcessor API to report threads while scanning roots
    • [backport] Add test to verify Shenandoah is not enabled by default, and enabled with the flag
    • [backport] Add -XX:+ShenandoahVerify to more interesting tests
    • [backport] AESCrypt.implEncryptBlock/AESCrypt.implDecryptBlock intrinsics assume non null inputs
    • [backport] Allow use of fp spills around write barrier
    • [backport] Arraycopy fixes (tests and infrastructure)
    • [backport] Assert Shenandoah-specific safepoints instead of generic ones
    • [backport] Asynchronous region recycling
    • [backport] Avoid notifying about zero waste
    • [backport] barrier moved due to null checks needs to always fix memory edges
    • [backport] Basic support for x86_32: build and run in STW configuration
    • [backport] Bitmap based ShHeapRegionSet
    • [backport] Break heuristics out from ShCollectorPolicy into their own source files
    • [backport] C2 should use heapword-sized object math
    • [backport] Check BS type in immByteMapBase predicate
    • [backport] Cleanup allocation tracking in heuristics
    • [backport] Cleanup and refactor Full GC code
    • [backport] Cleanup and strengthen BrooksPointer verification
    • [backport] Clean up dead code
    • [backport] Cleanup: removed unused code
    • [backport] Cleanup reset_{next|complete}_mark_bitmap
    • [backport] Cleanup SHH::should_start_normal_gc
    • [backport] “Compact” heuristics for dense footprint scenarios
    • [backport] Compact heuristics should not shortcut on immediate garbage, but aggressively compact
    • [backport] Conditionalize PerfDataMemorySize on enabled heap sampling
    • [backport] Consistent liveness for humongous regions
    • [backport] Control loop should wait before starting another GC cycle
    • [backport] Critical native tests should only be ran on x86_64 platforms
    • [backport] Degenerated GC
    • [backport] Degenerated GC: rename enum, report degen reasons in stats
    • [backport] Demote ShenandoahAllocImplicitLive to diagnostic
    • [backport] Demote warning message about OOM-during-evac to informational
    • [backport] Denser ShHeapRegion status line
    • [backport] Disable verification from non-Shenandoah VMOps.
    • [backport] Disallow pinned_cset region moves and allocations during Full GC
    • [backport] Disambiguate “upgrade to Full GC” GCause
    • [backport] Do not add non-allocatable regions to the freeset
    • [backport] Don’t treat allocation regions implicitely live during some GCs
    • [backport] Double check for UseShenandoahGC in WB expand
    • [backport] Drop distinction between immediate garbage and free in heuristics
    • [backport] Dynamic worker refactoring
    • [backport] Eagerly drop CSet state from regions during Full GC
    • [backport] Eliminate write-barrier assembly stub (part 1)
    • [backport] Enable biased locking for Shenandoah by default
    • [backport] Ensure tasks use correct number of workers
    • [backport] Excessive assert in ShHeap::mark_next
    • [backport] Excessive asserts in marked_object_iterate
    • [backport] FinalEvac pause to turn off evacuation
    • [backport] Fix || and && chaining warnings in memnode.cpp
    • [backport] Fix broken asserts in ShenandoahSharedEnumFlag
    • [backport] Fixed code roots scanning that might be bypassed during degenerated cycle
    • [backport] Fixed compilation error of libTestHeapDump.c on Windows with VS2010
    • [backport] Fixed missing ResourceMark in ShenandoahAsserts::print_obj
    • [backport] Fixed pinned region handling in mark-compact
    • [backport] Fix (external) heap iteration + TestHeapDump should unlock aggressive heuristics
    • [backport] fix for alias analysis with ShenandoahBarriersForConst
    • [backport] Fix/improve CLD processing
    • [backport] Fixing Windows and ARM32 build
    • [backport] Fix Mac OS build warnings
    • [backport] Fix Minimal VM build
    • [backport] Fix ShFreeSet boundary case
    • [backport] fix TCK crash with shenandoah
    • [backport] Forcefully update counters when GC cycle is running
    • [backport] FreeSet and HeapRegion should have the reference to ShenandoahHeap
    • [backport] FreeSet refactor: bitmaps, cursors, biasing
    • [backport] FreeSet should accept responsibility over trashed regions
    • [backport] FreeSet should report its internal state before/after GC cycle
    • [backport] Full GC should compact humongous regions
    • [backport] Full GC should not trash empty regions
    • [backport] GC state testers (infra)
    • [backport] Generic verification is possible only at Shenandoah safepoints
    • [backport] Get easy on template instantiations in ShConcMark
    • [backport] Heap region sampling should publish region states
    • [backport] Humongous regions should support explicit pinning
    • [backport] Immediate garbage ratio should not go over 100%
    • [backport] Implement flag to generate write-barriers without membars
    • [backport] Implement protocol for safe OOM during evacuation handling + Use jint in oom-evac-handler to match older JDKs Atomic support better + Missing OOMScope in ShenandoahFixRootsTask
    • [backport] Improve assertion/verification messages a bit
    • [backport] Improve/more detailed timing stats for root queue work
    • [backport] Incorrect constant folding with final field and -ShenandoahOptimizeFinals
    • [backport] Increase test timeouts
    • [backport] Introduce assert_in_correct_region to verify object is in correct region
    • [backport] Isolate shenandoahVerifier from stray headers
    • [backport] keep read barriers for final instance/stable field accesses
    • [backport] Keep track of per-cycle mutator/collector allocs. Fix mutator/collector alloc region overlap in traversal.
    • [backport] Little cleanup
    • [backport] Log message on ref processing, class unload, update refs for mark events
    • [backport] LotsOfCycles test timeouts
    • [backport] Make concurrent precleaning log message optional again
    • [backport] Make control loop more responsive under allocation pressure
    • [backport] Make degenerated update-refs use region-set cursor to hand over work
    • [backport] Make heap counters update completely asynchronous
    • [backport] Make major GC phases exclusive from each other
    • [backport] Make sure selective barriers enabling/disabling works
    • [backport] Make sure -XX:+ShenandoahVerify comes first in the tests
    • [backport] Mark bitmap slices commit/uncommit + Aggregated bitmap slicing
    • [backport] Match barrier fastpath checks better
    • [backport] Minor cleanups
    • [backport] Minor cleanup, uses latest Atomic API
    • [backport] Move barriers into typeArrayOop.hpp direct memory accessors
    • [backport] Move ShHeap::used increment out of locked allocation path
    • [backport] No need for fence in control loop: flags are now ShSharedVariables
    • [backport] Only report GC pause time to GC MXBean + Re-fix memory managers and memory pools usage and pause reporting
    • [backport] Optimize fwdptr region handling in ShenandoahVerifyOopClosure::verify_oop
    • [backport] Optimize oop/fwdptr/hr_index verification a bit
    • [backport] overflow integer during size calculation
    • [backport] Pacer should account allocation waste and unsuccessful pacing in the budget
    • [backport] Pacer should poll FreeSet to figure out actually available space
    • [backport] Passive should opt-in the barriers, not opt-out
    • [backport] Pauses that do not affect heap occupancy should not report heap
    • [backport] Print message when heuristics changes the setting ergonomically
    • [backport] Protect C2 matchers with UseShenandoahGC
    • [backport] Provide non-taxable allocation slack at the beginning of the cycle
    • [backport] Record cycle start/end to avoid continuous periodic GC
    • [backport] Record Shenandoah events in hs_err events section
    • [backport] Refactor allocation failure and explicit GC handling
    • [backport] Refactor allocation metadata handling
    • [backport] Refactor FreeSet rebuilding into the single source
    • [backport] Refactoring GC phase and heap allocation tracking out of policy
    • [backport] Refactor uncommit handling: react on explicit GCs, feature kill flag, etc
    • [backport] Refactor worker timings into ShenandoahPhaseTimings
    • [backport] ReferenceProcessor is_alive setup is racy
    • [backport] Region sampling should lock while gathering region data
    • [backport] Rehash VMOperations and cycle driver mechanics for consistency
    • [backport] Relax assert in SBS::is_safe()
    • [backport] Remove BS:is_safe in favor of logged BS::verify_safe_oop
    • [backport] Remove CSetThreshold handling from heuristics
    • [backport] Remove FreeSet::add_region, inline into FreeSet::rebuild
    • [backport] Remove obsolete check in FreeSet::allocate
    • [backport] Remove ShenandoahGCWorkerPerJavaThread flag
    • [backport] Remove ShenandoahMarkCompactBarrierSet
    • [backport] Rename and cleanup _regions and _free_set uses
    • [backport] Rename dynamic heuristics to static
    • [backport] Rename *_oop_static/oop_ref to *_forwarded
    • [backport] Rename ShenandoahConcurrentThread to ShenandoahControlThread
    • [backport] Report all GC status flags in hs_err
    • [backport] Report fwdptr size in JNI GetObjectSize
    • [backport] Report how much we have failed to allocate during Allocation Failure
    • [backport] Report illegal transitions verbosely, and remove some no-op transitions
    • [backport] Rewire control loop to avoid double cleanup work
    • [backport] Rework shared bool/enum flags with proper types and synchronization
    • [backport] Rewrite and fix ShenandoahHeap::marked_object_iterate
    • [backport] Rich assertion failure logging
    • [backport] Roots verification should take the special roots first
    • [backport] RP closures should accept NULL referents
    • [backport] Set ShenandoahMinFreeThreshold default to 10%
    • [backport] Setup process references and class unloading once before the cycle
    • [backport] ShConcurrentThread races with set_gc_state_bit
    • [backport] Shenandoah critical native support
    • [backport] Shenandoah region/set iterators should not allow copying
    • [backport] Shenandoah SA implementation
    • [backport] Shenandoah/SPARC barrier stubs
    • [backport] ShenandoahVerifyOptoBarriers should not fail with disabled barriers
    • [backport] ShenandoahWriteBarrierNode::find_bottom_mem() fix
    • [backport] ShenandoahWriteBarrierRB flag to conditionally disable RB on WB fastpath
    • [backport] Shenandoah/Zero barrier stubs
    • [backport] SieveObjects test is too hostile to verification
    • [backport] Single GCTimer shared by all operations
    • [backport] Single thread-local GC state flag for all barriers
    • [backport] Some smallish ShHeapRegionSet changes
    • [backport] Speed up asserts and verification, improve fastdebug builds performance
    • [backport] Split live data management for allocations and GCs
    • [backport] Static heuristics should be really static and report decisions
    • [backport] Static heuristics should use non-zero allocation threshold
    • [backport] Store checks should run most of the time
    • [backport] Tax-and-Spend allocation pacing
    • [backport] Testbug: VerifyJCStressTest leaks memory
    • [backport] TestSelectiveBarrierFlags should accept multi-element flag selections
    • [backport] TestSelectiveBarrierFlags times out due to too aggressive compilation mode
    • [backport] Trim/expand test heap sizes to fit small heaps
    • [backport] Trim the TLAB sizes to avoid wasteful retirement under TLAB races
    • [backport] Use leftmost region in GC allocations
    • [backport] Use os::naked_short_sleep instead of naked Thread events for sleeping
    • [backport] Use/sort (cached) RegionData not ShenandoahHeapRegionSet (infrastructure)
    • [backport] UX: Cleanup (adaptive) CSet selection message
    • [backport] UX: Pacer reports incorrect free size
    • [backport] UX: Shorter gc+ergo messages from CSet selection
    • [backport] Verifier crashes when reporting multiple forwardings
    • [backport] Verifier should check klass pointers before attempting to reach for object size
    • [backport] Verifier should print verification label at liveness verification
    • [backport] Verify fwdptr accesses during Full GC moves
    • [backport] Verify regions status
    • [backport] When Shenandoah WB is moved out of loop, connect it to correct loop memory Phi (back out and revisit previous fix)
    • [backport] Wipe out ShenandoahStoreCheck implementation
    • [backport] Workaround C1 ConstantOopWriteValue bug
    • Bitmap size might not be page aligned when large page is used
    • Changed claim count to jint
    • Cherry-pick JDK-8173013: JVMTI tagged object access needs G1 pre-barrier
    • Defer cleaning of system dictionary and friends to parallel cleaning phase
    • Do not put down update-refs-in-progress flag concurrently
    • Fix AArch64 build failure: misplaced #endif
    • Fixed Shenandoah 8u build
    • Fixed Windows build
    • Fix non-PCH build
    • Fix non-PCH x86_32 build
    • Fix up SPARC and Zero headers for proper locations
    • missing barriers in String intrinsics with -ShenandoahOptimizeInstanceFinals -ShenandoahOptimizeStableFinals
    • Missing event log for canceled GC
    • StringInternCleanup times out
    • VerifyJCStressTest should test all heuristics
    • Workaround VM crash with JNI Weak Refs handling

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to be able to handle this key.

SHA256 checksums:

  • 84a63bc59f4e101ce8fa183060a59c7e8cbe270945310e90c92b8609a9b8bc88 icedtea-3.9.0.tar.gz
  • 7ee0a348f4e32436b3cdc915f2a405ab8a6bfca0619d9acefb2920c14208d39e icedtea-3.9.0.tar.gz.sig
  • 45577f65e61509fcfa1dfce06ff9c33ef5cfea0e308dc1f63e120975ce7bdc3c icedtea-3.9.0.tar.xz
  • 82cb48e36437d0df16fe5071c3d479672d2b360a18afe73559c63d6fb604caf2 icedtea-3.9.0.tar.xz.sig

The checksums can be downloaded from:

A 3.9.0 ebuild for Gentoo is available.

The following people helped with these releases:

  • Andrew Hughes (all bug fixes and backports, release management)

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-3.9.0.tar.gz

or:

$ tar x -I xz -f icedtea-3.9.0.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-3.9.0/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

I'm proud to announce the first release of StepSync, a file sync tool for GNUstep and MacOS (even for venerable PowerPC).

StepSync synchronizes folders, optionally recursing into sub-folders. It thus supports several styles of backup: pure insertion, updates, and full synchronization, which also imports changes from the target back to the source.

After months of development and testing, I consider it stable enough; I have tested it with thousands of files and folders.

You can find it at the GNUstep Application Project. I already have plans for new features!

One of my hobbies in GDB is cleaning things up. A lot of this is modernizing and C++-ifying the code, but I’ve also enabled a number of warnings and other forms of code checking in the last year or two. I thought it might be interesting to look at the impact, on GDB, of these things.

So, I went through my old warning and sanitizer patch series (some of which are still in progress) to see how many bugs were caught.

This list is sorted by least effective first, with caveats.

-fsanitize=undefined; Score: 0 or 10

You can use -fsanitize=undefined when compiling to have GCC detect undefined behavior in your code.  This series hasn’t landed yet (it is pending some documentation updates).

We have a caveat already!  It’s not completely fair to put UBSan at the top of the list — the point of this is that it detects situations where the compiler might do something bad.  As far as I know, none of the undefined behavior that was fixed in this series caused any visible problem (so from this point of view the score is zero); however, who knows what future compilers might do (and from this point of view it found 10 bugs).  So maybe UBSan should be last on the list.

Most of the bugs found were due to integer overflow, for example decoding ULEB128 in a signed type.  There were also a couple cases of passing NULL to memcpy with a length of 0, which is undefined but should probably just be changed in the standard.
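
To make that concrete, here is a minimal sketch (an invented function, not GDB’s actual DWARF reader) of the ULEB128 pattern: decoding into a signed accumulator means that, once the shift reaches the sign bit, the left shift overflows a signed type, which is exactly what -fsanitize=undefined reports at runtime.

#include <cstdint>

std::int64_t
decode_leb128_badly (const std::uint8_t *buf, const std::uint8_t *end)
{
  std::int64_t result = 0;   // Bug: signed accumulator.
  int shift = 0;
  while (buf < end)
    {
      std::uint8_t byte = *buf++;
      // Undefined behavior once 'shift' reaches 63 and the payload
      // bit is set; accumulating in std::uint64_t and converting at
      // the end avoids it.
      result |= (std::int64_t) (byte & 0x7f) << shift;
      shift += 7;
      if ((byte & 0x80) == 0)
        break;
    }
  return result;
}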

-Wsuggest-override; Score: 0

This warning will fire if you have a method that could have been marked override, but was not.  This did not catch any gdb bugs.  It does still have value, like everything on this list, because it may prevent a future bug.
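
For illustration, a tiny made-up example of what it flags:

struct base
{
  virtual void update () {}
  virtual ~base () = default;
};

struct derived : public base
{
  // -Wsuggest-override: overrides base::update but isn't marked
  // 'override'; writing 'void update () override {}' silences it and
  // protects against signature drift in the base class.
  void update () {}
};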

-Wduplicated-cond; Score: 1

This warning detects duplicated conditions in an if-else chain.  Normally, I suppose, these would arise from typos or copy/paste in similar conditions.  The one bug this caught in GDB was of that form — two identical conditions in an instruction decoder.

GCC has a related -Wduplicated-branches warning, which warns when the arms of an if have identical code; but it turns out that there are some macro expansions in one of GDB’s supporting libraries where this triggers, but where the code is in fact ok.
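
A contrived example of the kind of copy/paste error -Wduplicated-cond catches:

int
decode (unsigned int op)
{
  if ((op & 0xf0) == 0x10)
    return 1;
  // -Wduplicated-cond: same condition as above, so this branch is
  // dead; presumably the constant was meant to be 0x20.
  else if ((op & 0xf0) == 0x10)
    return 2;
  return 0;
}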

-Wunused-variable; Score: 2

When I added this warning to the build, I thought the impact would be removing some dead code, and perhaps a bit of fiddling with #ifs.  However, it caught a couple of real bugs: cases where a variable was unused, but should have been used.
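
The shape of such a bug, in a made-up example (lookup is just a stand-in):

int lookup (int key);

int
process (int key)
{
  int adjusted = key + 1;   // -Wunused-variable fires here...
  return lookup (key);      // ...because this should use 'adjusted'.
}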

-D_GLIBCXX_DEBUG; Score: 2

libstdc++ has a debug mode that enables extra checking in various parts of the C++ library.  For example, enabling this will check the irreflexivity rule for operator<.  While the patch to enable this still hasn’t gone in — I think, actually, it is still pending some failure investigation on some builds — enabling the flag locally has caught a couple of bugs.  The fixes for these went in.
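
Here is the sort of thing it catches, in miniature: a comparator written with <= is not a strict weak ordering (it is not irreflexive), and with -D_GLIBCXX_DEBUG std::sort can abort with a clear message rather than silently misbehave.

#include <algorithm>
#include <vector>

int
main ()
{
  std::vector<int> v = {3, 1, 2, 1};
  // Bug: should be '<'; 'a <= b' makes cmp (x, x) true, violating
  // irreflexivity, which the debug mode checks at runtime.
  std::sort (v.begin (), v.end (),
             [] (int a, int b) { return a <= b; });
  return 0;
}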

-Wimplicit-fallthrough; Score: 3

C made a bad choice in allowing switch cases to fall through by default.  This warning rectifies this old error by requiring you to explicitly mark fall-through cases.

Apparently I tried this twice; the first time didn’t detect any bugs, but the second time — and I don’t recall what, if anything, changed — this warning found three bugs: a missing break in the process recording code, and two in MI.
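
A minimal made-up illustration of both the bug and the explicit marking:

int
classify (int kind)
{
  int score = 0;
  switch (kind)
    {
    case 0:
      score = 1;
      // -Wimplicit-fallthrough: the missing 'break' here is the bug.
    case 1:
      score += 2;
      break;
    case 2:
      score = 10;
      /* FALLTHROUGH */   // intentional; or [[fallthrough]]; in C++17
    default:
      score += 1;
    }
  return score;
}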

-Wshadow=local; Score: 3

Shadowing is when a variable in some inner scope has the same name as a variable in an outer scope.  Often this is harmless, but sometimes it is confusing, and sometimes actively bad.

For a long time, enabling a warning in this area was controversial in GDB, because GCC didn’t offer enough control over exactly when to warn, the canonical example being that GCC would warn about a local variable named “index”, which shadowed a deprecated C library function.

However, now GCC can warn about shadowing within a single function; so I wrote a series (still not checked in) to add -Wshadow=local.

This found three bugs.  One of the bugs was found by happenstance: it was in the vicinity of an otherwise innocuous shadowing problem.  The other two bugs were cases where the shadowing variable caused incorrect behavior, and removing the inner declaration was enough to fix the problem.
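
A made-up example of the harmful case:

int run_step (int step);

int
run_all (int nsteps)
{
  int status = 0;
  for (int i = 0; i < nsteps; ++i)
    {
      // -Wshadow=local: shadows the outer 'status', so the function
      // always returns 0; deleting 'int' here is the whole fix.
      int status = run_step (i);
      if (status != 0)
        break;
    }
  return status;
}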

-fsanitize=address; Score: 6

The address sanitizer checks various typical memory-related errors: buffer overflows, use-after-free, and the like.  This series has not yet landed (I haven’t even written the final fix yet), but meanwhile it has found 6 bugs in GDB.
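
The classic example of what it catches (compile with -fsanitize=address and run):

#include <cstring>

int
main ()
{
  char *buf = new char[8];
  // Writes 9 bytes (8 characters plus the NUL terminator): ASan
  // reports a heap-buffer-overflow with stack traces for both the
  // bad write and the allocation.
  std::strcpy (buf, "12345678");
  delete[] buf;
  return 0;
}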

Conclusion

I’m generally a fan of turning on warnings, provided that they rarely have false positives.

There’s been a one-time cost for most warnings — a lot of grunge work to fix up all the obvious spots.  Once that is done, though, the cost seems small: GDB enables warnings by default when built from git (not when built from a release), and most regular developers use GCC, so build failures are caught quickly.

The main surprise for me is how few bugs were caught.  I suppose this is partly because the analysis done for new warnings is pretty shallow.  In cases like the address sanitizer, more bugs were found; but at the same time there have already been passes done over GDB using Valgrind and memcheck, so perhaps the number of such bugs was already on the low side.

Graphos 0.7 was released a couple of days ago!

What's new for GNUstep's vector editor?
  • improved Bezier path editor (add/remove points)
  • Knife (Bezier path splitting) tool fixed and re-enabled (broken since the original GDraw import!)
  • important crash fixes (Undo/Redo related)
  • interface improvements for better usability with tablet/pen digitizers
Graphos continues to work on GNUstep for Linux/BSD as well as natively on MacOS.

Graphos running on MacOS:

In this post I want to introduce a (not so very) new GC mode that we call “Traversal GC”. It all started over a year ago when I implemented the ‘partial’ mode. The major feature of the partial mode is that it can concurrently collect a part of the heap, without the need to traverse all live objects, hence the name partial GC. I will go into the details of how that works in a later post. First I would like to explain another foundation of Shenandoah’s partial GC: the single-traversal GC, or Traversal GC for short.

Let me first show some pictures that explain how Shenandoah works (and, in fact, more or less how other collectors work):

Shenandoah usually runs in one of two modes, switched dynamically based on automatic ergonomic decisions. First, the 3-phase mode:

The phases in a cycle are:

  1. Concurrent mark: traverse all live objects, starting from the roots, and mark each visited object as live.
  2. Concurrent evacuation: based on the liveness information from concurrent marking, select a number of regions and compact all live objects in those regions into fresh regions.
  3. Concurrent update-refs: scan all live objects and update their references to point to the new copies of the compacted objects.

Each concurrent phase is book-ended by a stop-the-world phase to safely scan GC roots (thread stacks) and do some housekeeping. That makes 4 (very short, but still measurable) pauses during which no Java thread can make progress.

When GC pressure is high, and GC cycles run close to back-to-back, Shenandoah switches to 2-phase operation. The idea is to skip the concurrent update-refs phase and instead piggy-back it on the subsequent concurrent marking cycle:

In other words, we now have:

  1. Concurrent mark: traverse all live objects, starting from the roots, and mark each visited object as live. At the same time, when encountering a reference to from-space, update it to point to the to-space copy.
  2. Concurrent evacuation: based on the liveness information from concurrent marking, select a number of regions and compact all live objects in those regions into fresh regions.

As before, we still have pauses before and after each phase, now totalling 3 stop-the-world pauses per cycle.

Can we do better? Turns out that we can:

Now we only have one concurrent phase during which we:

  1. Visit each live object, starting from roots
  2. When encountering an object that is in the collection-set, evacuate it to a fresh compaction region
  3. Update all references to point to the new copies.

The single concurrent phase is book-ended by 2 very short stop-the-world phases.
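
In illustrative pseudocode (C++-ish, with invented names; the real HotSpot code needs barriers and atomics for the races discussed below), processing one object during the traversal looks roughly like this:

#include <vector>

struct Object
{
  bool marked = false;
  std::vector<Object **> refs;   // slots referring to other objects
};

bool in_collection_set (Object *o);
Object *evacuate (Object *o);          // copy into a fresh region
void push_to_work_queue (Object *o);

void
traverse_one (Object *obj)
{
  for (Object **slot : obj->refs)
    {
      Object *target = *slot;
      if (target == nullptr)
        continue;
      if (in_collection_set (target))
        target = evacuate (target);    // evacuate on first contact
      *slot = target;                  // update the reference right away
      if (!target->marked)
        {
          target->marked = true;       // mark, and keep traversing
          push_to_work_queue (target);
        }
    }
}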

This probably sounds easy and obvious, but the devil lies, as usual, in some interesting details:

  • How to select the collection-set? We have no liveness information when we start traversing.
  • How to ensure consistency:
    • Traversal consistency: how to deal with changing graph shape during traversal
    • Data consistency: how to ensure writes go to the correct copy of objects, how to ensure reads don’t read stale data, etc
    • Update consistency: how to avoid races between updating of references and ordinary field updates

I will go into those details in a later post.

If you’re interested in trying out the traversal mode, it’s all already in Shenandoah (jdk10, jdk11 and dev branches) and stable enough to use. Simply pass -XX:ShenandoahGCHeuristics=traversal in addition to the usual -XX:+UseShenandoahGC on the command line. More information about how to get and run Shenandoah GC can be found in our Shenandoah wiki.

We are pleased to announce the release of IcedTea 3.8.0!

The IcedTea project provides a harness to build the source code from OpenJDK using Free Software build tools, along with additional features such as the ability to build against system libraries and support for alternative virtual machines and architectures beyond those supported by OpenJDK.

This release updates our OpenJDK 8 support with the April 2018 security fixes from OpenJDK 8 u171.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place on the distro-pkg-dev OpenJDK mailing list and patches are always welcome.

Full details of the release can be found below.

What’s New?

New in release 3.8.0 (2018-05-29)

  • Security fixes
  • New features
    • PR3493: Run AES test to test intrinsics
  • Import of OpenJDK 8 u162 build 12
    • S4354680: Runtime.runFinalization() silently clears interrupted flag in the calling thread
    • S6618335: ThreadReference.stop(null) throws NPE instead of InvalidTypeException
    • S6651256: jstack: DeleteGlobalRef method call doesn’t lead to descreasing of global refs count shown by jstack
    • S6656031: SA: jmap -permstat number of classes is off by 1
    • S6977426: sun/tools tests can intermittently fail to find app’s Java pid
    • S6988950: JDWP exit error JVMTI_ERROR_WRONG_PHASE(112)
    • S7124271: [macosx] RealSync test failure
    • S7162125: [macosx] A font has different behaviour for ligatures depending on its creation mod
    • S8023667: SA: ExceptionBlob and other C2 classes not available in client VM
    • S8031661: java/net/Authenticator/B4769350.java failed intermittently
    • S8046778: Better error messages when starting JMX agent via attach or jcmd
    • S8066185: VM crashed with SIGSEGV VirtualMemoryTracker::add_reserved_region
    • S8072428: Enable UseLoopCounter ergonomically if on-stack-replacement is enabled
    • S8073670: TypeF::eq and TypeD::eq do not handle NaNs correctly
    • S8074812: More specific error message when the .java_pid well-known file is not secure
    • S8078269: JTabbedPane UI Property TabbedPane.tabAreaBackground no longer works
    • S8080504: [macosx] SunToolkit.realSync() may hang
    • S8087291: InitialBootClassLoaderMetaspaceSize and CompressedClassSpaceSize should be checked consistent from MaxMetaspaceSize
    • S8132374: AIX: fix value of os.version property
    • S8134103: JVMTI_ERROR_WRONG_PHASE(112): on checking for an interface
    • S8139218: Dialog that opens and closes quickly changes focus in original focusowner
    • S8147002: [macosx] Arabic character cannot be rendered on MacOS X
    • S8148786: xml.tranform fails on x86-64
    • S8155197: Focus transition issue
    • S8157896: TestDSAGenParameterSpec.java test fails with timeout
    • S8158633: BASE64 encoded cert not correctly parsed with UTF-16
    • S8159432: [PIT][macosx] StackOverflow in closed/java/awt/Dialog/DialogDeadlock/DialogDeadlockTest
    • S8162530: src/jdk.management/share/native/libmanagement_ext/GcInfoBuilder.c doesn’t handle JNI exceptions properly
    • S8164954: split_if creates empty phi and region nodes
    • S8166742: SIGFPE in C2 Loop IV elimination
    • S8169961: Memory leak after debugging session
    • S8172751: OSR compilation at unreachable bci causes C1 crash
    • S8175340: Possible invalid memory accesses due to ciMethodData::bci_to_data() returning NULL
    • S8177026: jvm.dll file version not updated since 8u72
    • S8177414: Missing key events on Mac Os
    • S8177958: Possible uninitialized char* in vm_version_solaris_sparc.cpp
    • S8178047: Aliasing problem with raw memory accesses
    • S8179086: java.time.temporal.ValueRange has poor hashCode()
    • S8180370: Characters are skipped on input of Korean text on OS X
    • S8180855: Null pointer dereference in OopMapSet::all_do of oopMap.cpp:394
    • S8181659: Create an alternative fix for JDK-8167102, whose fix was backed out
    • S8181786: Extra runLater causes impossible states to be possible using javafx.embed.singleThread=true
    • S8182402: Tooltip for Desktop button is in English when non-English locale is set
    • S8182996: Incorrect mapping Long type to JavaScript equivalent
    • S8184009: Missing null pointer check in InterpreterRuntime::update_mdp_for_ret()
    • S8184271: Time related C1 intrinsics produce inconsistent results when floating around
    • S8184328: JDK 8u131 socketRead0 hang at SSL read
    • S8184893: jdk8u152 b06 : issues with nashorn when running kraken benchmarks
    • S8185346: Relax RMI Registry Serial Filter to allow arrays of any type
    • S8187023: Cannot read pkcs11 config file in UTF-16 environment
    • S8189918: Remove Trailing whitespace from file while syncing 8u into 8u162-b03
    • S8190280: [macos] Font2DTest demo started failing for Arabic range from JDK 8 u162 b01 on Mac
    • S8190542: 8u162 L10n resource file update
    • S8192794: 8u162 L10n resource file update md20
  • Import of OpenJDK 8 u171 build 11
    • S8054213: Class name repeated in output of Type.toString()
    • S8068778: [TESTBUG] CompressedClassSpaceSizeInJmapHeap.java fails if SA not available
    • S8150530: Improve javax.crypto.BadPaddingException messages
    • S8153955: increase java.util.logging.FileHandler MAX_LOCKS limit
    • S8169080: Improve documentation examples for crypto applications
    • S8175075: Add 3DES to the default disabled algorithm security property
    • S8179665: [Windows] java.awt.IllegalComponentStateException: component must be showing on the screen to determine its location
    • S8186032: Disable XML Signatures signed with EC keys less than 224 bits
    • S8186441: Change of behavior in the getMessage () method of the SOAPMessageContextImpl class
    • S8187496: Possible memory leak in java.apple.security.KeychainStore.addItemToKeychain
    • S8189851: [TESTBUG] runtime/RedefineTests/RedefineInterfaceCall.java fails
    • S8191358: Restore TSA certificate expiration check
    • S8191909: Nightly failures in nashorn suite
    • S8192789: Avoid using AtomicReference in sun.security.provider.PolicyFile
    • S8194259: keytool error: java.io.IOException: Invalid secret key format
    • S8196952: Bad primeCertainty value setting in DSAParameterGenerator
    • S8197030: Perf regression on all platforms with 8u171-b03 – early lambda use
    • S8198494: 8u171 and 8u172 – Build failure on non-SE Linux Platforms
    • S8198662: Incompatible internal API change in JDK8u161: signature of method exportObject()
    • S8198963: Fix new rmi property name
    • S8199001: [TESTBUG] RMIConnectionFilterTest.java test fails in compilation
    • S8199141: Windows: new warning messaging for JRE installer UI in non-MOS cases
    • S8200314: JDK 8u171 l10n resource file update – msg drop 40
  • Backports
  • Bug fixes
    • S8199936, PR3533: HotSpot generates code with unaligned stack, crashes on SSE operations
    • S8199936, PR3591: Fix for bug 3533 doesn’t add -mstackrealign to JDK code
    • PR3539, RH1548475: Pass EXTRA_LDFLAGS to HotSpot build
    • PR3549: Desktop file doesn’t reference versioned icon
    • PR3550: Additional category used in jconsole.desktop.in is incorrect
    • PR3559: Use ldrexd for atomic reads on ARMv7.
    • PR3575, RH1567204: System cacerts database handling should not affect jssecacerts
    • PR3592: Skip AES test on AArch64 due to VM crash
    • PR3593: s390 needs to use ‘%z’ format specifier for size_t arguments as size_t != int
    • PR3594: Patch for bug 3593 breaks Shenandoah build
    • PR3597: Potential bogus -Wformat-overflow warning with -Wformat enabled
  • Shenandoah
    • PR3573: Fix TCK crash with Shenandoah
    • Remove oop cast in oopMap.cpp again, as oopDesc::operator== has additional checking in Shenandoah.
    • Fix new code for Shenandoah after the 8u171 merge
    • Revert accidental OpSpinWait matching
    • UseBiasedLocking should be disabled only for Shenandoah
  • AArch32 port
    • PR3548: Add missing return values for AArch32 port

The tarballs can be downloaded from:

We provide both gzip and xz tarballs, so that those who are able to make use of the smaller tarball produced by xz may do so.

The tarballs are accompanied by digital signatures available at:

These are produced using my public key. See details below.

  • PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
  • Fingerprint = 5132 579D D154 0ED2 3E04 C5A0 CFDA 0F9B 3596 4222

GnuPG >= 2.1 is required to be able to handle this key.

SHA256 checksums:

  • ef1a9110294d0a905833f1db30da0c8a88bd2bde8d92ddb711d72ec763cd25b0 icedtea-3.8.0.tar.gz
  • 5ed72a7475d91e6ef863449f39c12f810d1352d815b4dd4d9a0b8b04d8604949 icedtea-3.8.0.tar.gz.sig
  • ff9d3737ca5cc8712bad31c565c50939d8b062234d3d49c5efa083bbaa24c3e6 icedtea-3.8.0.tar.xz
  • cb93df3c4b632d75b0b7c4e5280b868f109a0aef26f59f0455d5e6a1992b344c icedtea-3.8.0.tar.xz.sig

The checksums can be downloaded from:

A 3.8.0 ebuild for Gentoo is available.

The following people helped with these releases:

We would also like to thank the bug reporters and testers!

To get started:

$ tar xzf icedtea-3.8.0.tar.gz

or:

$ tar x -I xz -f icedtea-3.8.0.tar.xz

then:

$ mkdir icedtea-build
$ cd icedtea-build
$ ../icedtea-3.8.0/configure
$ make

Full build requirements and instructions are available in the INSTALL file.

Happy hacking!

     ____               
    /    \              
   |-. .-.|             
   (_@)(_@)             
   .---_  \             
  /..   \_/             
  |__.-^ /              
      }  |              
     |   [              
     [  ]               
    ]   |               
    |   [               
    [  ]                
   /   |        __      
  \|   |/     _/ /_     
 \ |   |//___/__/__/_   
\\  \ /  //    -____/_  
//   "   \\      \___.- 
 //     \\  __.----._/_ 
/ //|||\\ .-         __>
[        /         __.- 
[        [           }  
\        \          /   
 "-._____ \.____.--"    
    |  | |  |           
    |  | |  |           
    |  | |  |           
    |  | |  |           
    {  } {  }           
    |  | |  |           
    |  | |  |           
    |  | |  |           
    /  { |  |           
 .-"   / [   -._        
/___/ /   \ \___"-.     
    -"     "-           

strace patch.

I’ve been working a bit more on my Emacs JIT, in particular on improving function calling.  This has been a fun project so I thought I’d talk about it a bit.

Background

Under the hood, the Emacs Lisp implementation has a few different ways to call functions.  Calls to or from Lisp are dispatched depending on what is being called:

  • For an interpreted function, the arguments are bound and then the interpreter is called;
  • For a byte-compiled function using dynamic binding, the arguments are bound and then the bytecode interpreter is called;
  • For a byte-compiled function using lexical binding, an array of arguments is passed to the bytecode interpreter;
  • For a function implemented in C (called a “subr” internally), up to 8 arguments are supported directly — as in, C functions of the form f(arg,arg,...); for more than that, an array of arguments is passed and the function itself must decide which slot means what.  That is, there are exactly 10 forms of subr (actually there are 11 but the one missing from this description is used for special forms, which we don’t need to think about here).

Oh, let’s just show the definition so you can read for yourself:

union {
  Lisp_Object (*a0) (void);
  Lisp_Object (*a1) (Lisp_Object);
  Lisp_Object (*a2) (Lisp_Object, Lisp_Object);
  Lisp_Object (*a3) (Lisp_Object, Lisp_Object, Lisp_Object);
  Lisp_Object (*a4) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
  Lisp_Object (*a5) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
  Lisp_Object (*a6) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
  Lisp_Object (*a7) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
  Lisp_Object (*a8) (Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object, Lisp_Object);
  Lisp_Object (*aUNEVALLED) (Lisp_Object args);
  Lisp_Object (*aMANY) (ptrdiff_t, Lisp_Object *);
} function;

Initial Approach

Initially the JIT worked like a lexically-bound bytecode function: an array of arguments was passed to the JIT-compiled function.  The JIT compiler emitted a bunch of code to decode the arguments.

For Lisp functions taking a fixed number of arguments, this wasn’t too bad — just moving values from fixed slots in the array to fixed values in the IR.

Handling optional arguments was a bit uglier, involving a series of checks and branches, so that un-bound arguments could correctly be set to nil.  These were done something like:

if nargs < 1 goto nope1
arg0 = array[0]
if nargs < 2 goto nope2
arg1 = array[1]
goto first_bytecode
nope1: arg0 = nil
nope2: arg1 = nil
first_bytecode: ...

&rest arguments were even a bit worse, requiring a call to create a list.  (This, I think, can’t be avoided without a much smarter compiler, one that would notice when reifying the list could be avoided.)

Note that calling also has to use the fully generic approach: we make a temporary array of arguments, then call a C function (Ffuncall) that does the dispatching to the callee.  This is also a source of inefficiency.
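
In a C-style sketch (the stand-in typedef and call_f are invented for illustration; Ffuncall is the real dispatcher in the Emacs core), an indirect call from JIT-compiled code looks roughly like:

typedef long Lisp_Object;   // stand-in for the real tagged type

// Real Emacs entry point (the real signature takes ptrdiff_t nargs).
Lisp_Object Ffuncall (long nargs, Lisp_Object *args);

Lisp_Object
call_f (Lisp_Object f, Lisp_Object a, Lisp_Object b)
{
  Lisp_Object args[3] = { f, a, b };   // temporary argument array
  return Ffuncall (3, args);           // generic dispatch to the callee
}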

Today

Recently, I changed the JIT from this approach to use the equivalent of the subr calling convention.  Now, any function with 8 or fewer (non-&rest) arguments is simply an ordinary function of N arguments, and we let the already-existing C code deal with optional arguments.

Although this often makes the generated assembly simpler, it won’t actually perform any better — the same work is still being done, just somewhere else.  However, this approach does use a bit less memory (most JIT-compiled functions are shorter); and it opens the door to an even bigger improvement.

The Future

What I’m implementing now is an approach to removing most of the overhead from JIT-compiled function calls.

Now, ideally what I’d like is to have every call site work “like C”: move the arguments to exactly where the callee expects them to be, and then call.  However, while looking at this I found some problems that make it tricky:

  • We still need to be able to call Lisp functions from C, so we’re limited to, at best, subr-style calling conventions;
  • While &rest arguments are straightforward (in our simple compiler, somebody just has to make the list); &optional arguments don’t have a good C-like analog.  The callee could push extra arguments, but…
  • In Lisp, a function can be redefined at any time, and it is fine to change the function’s signature.

Consider this example:

(defun callee (x &optional y) (list x y))
(defun caller () (callee 23))
(defun callee (x) (list x))

Now, if we compiled caller with a direct call, it would turn out like (callee 23 nil).  But then, after the redefinition, we’d have to recompile caller.  Note this can go the other way as well — we could redefine callee to have more optional arguments, or even more fixed arguments (meaning that the call should now throw an exception).

Recompiling isn’t such a big deal, right?  The compiler is set up very naively: it just compiles every function that is invoked, and in this mode “recompilation” is equivalent to “just abandon the compiled code”.

Except… what do you do if caller is being run when callee is redefined?  Whoops!

Actually, of course, this is a known issue in JIT compilation, and one possible solution is “on-stack replacement” (“OSR”) — recompiling a function while it is running.

This to me seemed like a lot of bookkeeping, though: keeping a list of which functions to compile when some function was redefined, and figuring out a decent way to implement OSR.

The Plan

Instead I came up with a simpler approach, involving — you guessed it — indirection.

On the callee side, I am going to keep the subr calling convention that is in place today.  This isn’t ideal in all cases, but it is reasonable for a lot of code.  Instead, all the changes will take place at spots where the JIT emits a call.

I am planning to have three kinds of function calls in the JIT:

  1. Indirect.  If we see some code where we can’t determine the callee, we’ll emit a call via Ffuncall like we do today.
  2. Fully direct.  There are some functions that are implemented in C, and that I think are unreasonable to redefine.  For these, we’ll just call the C function directly.  Another fully-direct case is where the code dispatches to a byte-code vector coming from the function’s constant pool — here, there’s no possibility to redefine the function, so we can simply always call the JIT-compiled form.
  3. Semi-direct.  This will be the convention used when JIT-compiled code calls via a symbol.

The core idea of a semi-direct call is to have multiple possible implementations of a function:

  • One “true” implementation.  If the function has 8 or fewer arguments (of any kind), it will simply have that many arguments.  The JIT will simply pretend that an optional argument is fixed.  If it has more than 8 arguments, following the subr convention it will just accept an array of arguments.
  • If the function has optional or rest arguments, there will be trampoline implementations with fewer arguments, that simply supply the required number of additional arguments and then call the true implementation.
  • Remember how there are exactly 10 relevant kinds of subr?  Any forms not covered by the above can simply throw an exception.

A vector of function pointers will be attached to each symbol, and so the JIT-compiled code can simply load the function pointer from the appropriate slot (a single load — the nice thing about a JIT is we can simply hard-code the correct address).

Then, when a function is redefined, we simply define any of the trampolines that are required as well.  We won’t even need to define all of them — only the ones that some actually-existing call site has needed.
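
To sketch the trampoline idea in C-style code (invented names; this is not the actual Emacs code): suppose callee has one required and one optional argument. Its true implementation takes two arguments, and the symbol’s slot for one-argument call sites points at a tiny trampoline that fills in the default.

typedef long Lisp_Object;           // stand-in for the real tagged type
static const Lisp_Object Qnil = 0;  // stand-in for nil

// "True" implementation of (defun callee (x &optional y) ...).
Lisp_Object callee_true (Lisp_Object x, Lisp_Object y);

// Trampoline stored in the arity-1 slot: supply the missing optional
// argument and forward to the true implementation.
Lisp_Object
callee_arity1 (Lisp_Object x)
{
  return callee_true (x, Qnil);
}

After the redefinition to (defun callee (x) ...), the arity-2 slot would instead be pointed at a stub that signals a wrong-number-of-arguments error.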

Of course, no project like this is complete without a rathole, which is why instead of doing this I’m actually working on writing a compiler pre-pass so that the compiler itself can have the appropriate information about the callee at the point of call.  This sub-project turns out to feel a lot like writing a Java bytecode verifier…

Further Future

Currently the JIT is only used for lexically-bound bytecode functions.  That’s a reasonable restriction, I think — so one thing we should do is make sure that more of the Emacs core is using lexical binding.  Currently, only about 1/3 of the Lisp files in Emacs enable this feature; but many more easily could.

Once my current project is done, the JIT will have a decent calling convention by default.  Since we’ll have information about callees at points of call, I think it will be a good time to look into inlining.  This will require tackling recompilation (and perhaps OSR) and having some sort of tiered optimization approach.  There is still a lot for me to learn here — when does it make sense to inline?  And what metrics should I use to decide when some code is hot enough to optimize?  So, good times ahead even once the current project is done; and BTW if you have a reading list for any of this I would love to hear about it.

Once this is done, well, I have more ideas for even deeper JIT improvements.  Those will have to wait for another post.

At Fosdem we had a talk on dtrace for linux in the Debugging Tools devroom.

Not explicitly mentioned in that talk, but certainly the most exciting thing, is that Oracle is doing a proper linux kernel port:

 commit e1744f50ee9bc1978d41db7cc93bcf30687853e6
 Author: Tomas Jedlicka <tomas.jedlicka@oracle.com>
 Date: Tue Aug 1 09:15:44 2017 -0400

 dtrace: Integrate DTrace Modules into kernel proper

 This changeset integrates DTrace module sources into the main kernel
 source tree under the GPLv2 license. Sources have been moved to
 appropriate locations in the kernel tree.

That is right, dtrace dropped the CDDL and switched to the GPL!

The user space code dtrace-utils and libdtrace-ctf (a combination of GPLv2 and UPL) can be found on the DTrace Project Source Control page. The NEWS file mentions the license switch (and that it is built upon elfutils, which I personally was pleased to find out).

The kernel sources (GPLv2+ for the core kernel and UPL for the uapi) are slightly harder to find because they are inside the uek kernel source tree, but following the above commit you can easily get at the whole linux kernel dtrace directory.

Update: There is now a dtrace-linux-kernel.git repository with all the dtrace commits rebased on top of recent upstream linux kernels.

The UPL is the Universal Permissive License, which according to the FSF is a lax, non-copyleft license that is compatible with the GNU GPL.

Thank you Oracle for making everyone’s life easier by waving your magic relicensing wand!

Now there is lots of hard work to do to actually properly integrate this. And I am sure there are a lot of technical hurdles when trying to get this upstreamed into the mainline kernel. But that is just hard work, which we can now start collaborating on in earnest.

Like systemtap and the Dynamic Probes (dprobes) before it, dtrace is a whole-system observability tool combining tracing, profiling and probing/debugging techniques, something the upstream linux kernel hackers don’t always appreciate when it is presented as one large system. They prefer having separate small tweaks for tracing, profiling and probing which are mostly independent of each other. It took years for the various hooks, kprobes, uprobes, markers, etc. from systemtap (and other systems) to get upstream. But these days they are. And there is now even a byte code interpreter (eBPF) in the mainline kernel, as originally envisioned by dprobes, which systemtap can now target through stapbpf. So with all those techniques now available in the linux kernel it will be exciting to see if dtrace for linux can unite them all.

Maintenance of an aging Bugzilla instance is a burden, and since CACAO development has mostly migrated to Bitbucket, the bug tracker will be maintained there as well.

The new location for tracking bugs is Bitbucket issues.

All the historical content from Bugzilla is statically archived at this site on the Bugzilla page.

I always used Eclipse extensively, although I moved away from it when it started having all sorts of rendering issues on RHEL, mostly when SWT moved to GTK3 underneath. Most of those problems are slowly being fixed, and the IDE is again very usable under RHEL. I promised Lars last year that I would start using Eclipse again once those problems were addressed, and here I am, keeping the promise!

One thing I never totally enjoyed in Eclipse, though, was the debugger. Of all the IDEs I have tried, I find the NetBeans debugger the absolute best, followed by IntelliJ IDEA. I don’t think the Eclipse debugger is bad in itself, but it doesn’t do the things I expect by default, and I often need to do quite a bit of tweaking to get it right. I admit I’m not exactly the average developer, though, so it’s possible that what NetBeans offers simply matches more closely what I find most useful. For instance, the detailed view for variables, and the fact that you can quickly attach to the native portion of a process (and this works with the JDK as well as with any hybrid application). Eclipse can do that too, but it requires some fiddling (most of the manual process I described in this post).

Now, there are things I love about Eclipse too. For example, while it is true that it doesn’t show the most detailed view of variables, you can quickly execute arbitrary code right during a debugging session, simply by writing it. NetBeans has similar functionality (I’m not sure about IDEA), but it’s hidden under tons of menus that you need to click through and configure, and for very complex stuff the best option is extending the debugger itself, which is not trivial.

Recently, I’ve been debugging a weird issue in Fedora, and I found myself needing to scan a very large Map of fonts. The default formatter made things quite hard to understand, while all I wanted was to quickly see which key in the map was closest to the one I had as input.

You can see what I mean in this very trivial example.

Here my map is a simple immutable one with just three values, but already you can see that the commas used to separate the values in the variable view make things quite complicated. Just imagine if this map had a few thousand values; sorting through them would have been quite an experience!

        Map<String, String> map =
                Map.of("Test1", "1,2,2",
                       "Test2", "2,3,1",
                       "Test3", "3,2,1");

Produces:

{Test3=3,2,1, Test1=1,2,2, Test2=2,3,1}

Not quite what I want!

The IDE gives us a very powerful tool, though. Instead of simply changing the formatter, we can execute code to analyse the data! There is a hidden gem called the “Display” view. You find it under “Window > Show View”; in the Debug perspective it should be readily visible, otherwise you can select “Other” and bring up all the views.

This neat view is simply a secondary editor that is active only during a debugging session, and allows you to instrument the code on the fly. What this means is that you can type the following in the Display view and scan all the values of the Map in question:

for (String key : map.keySet()) {
    System.err.println("* " + key + " -> " + map.get(key));
}

Then simply press “Execute selected text” and the code will run. The subtle difference between “Execute” and “Display Result” (the icon next to Execute) is that with the latter the result of the snippet, i.e. the value produced by the code itself, is printed in the Display view; in our case the snippet executes without producing a result, which Eclipse apparently treats as “false”.
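For instance, selecting an expression that actually produces a value and pressing “Display Result” prints that value right in the view; with the map above, something like:

    map.get("Test2")

should display the string 2,3,1 inline in the Display view.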

Either way, both options have side effects on the running process: if you modify the map (well, that one is immutable, but you get the point), both of them will modify it, no matter where the result gets printed. The Display view doesn’t seem to like lambdas or default methods, so you need to keep things a bit old-fashioned, but that’s hardly a problem.

Overall I find this feature extremely useful, and worth the hassle of dealing with the default configuration of the Eclipse debugger. Just be careful, though: side effects in debugging are hidden at every step; sometimes even just a toString() called by the debugger can change your program’s state in unexpected ways!


I attended the 5th Chrome Dev Summit this week. The talks were all recorded and are available via the schedule (the keynote and leadership panel on day 1 are perhaps of broadest interest and highest bang-for-buck viewing value). It was a high quality, well-produced event with an intimate feel – I was very surprised when Robert Nyman told me it was over 700 people! I appreciated the good vegetarian food options, and I was very impressed by the much-better-than-typical-for-tech-conferences gender representation and code of conduct visibility.

It doesn’t always look this way from the outside, but the various browser engine teams are more often than not working toward the same goals and in constant contact. For those who don’t know that, it was nice to see the shoutouts for other browsers and the use of Firefox’s new logo!

The focus of the event IMO was, as expected, the mobile Web. While the audience was Web developers, it was interesting to see what the Chrome team is focusing on. Some of the efforts felt like Firefox OS 4 years ago but I guess FxOS was just ahead of its time 😉

From my perspective, Firefox is in pretty good shape for supporting the things Chrome was promoting (Service Workers, Custom Elements and Shadow DOM, wasm, performance-related tooling, etc.). There are of course areas we can improve: further work on Fennec support for add-to-homescreen, devtools, and seeing a few things through to release (e.g. JS modules, Custom Elements, and Shadow DOM – work is underway and we’re hoping for soon!). Oh, and one notable exception to being aligned on things is the Network Information API, which the Mozilla community isn’t super fond of.

Other highlights for me personally included the Chrome User Experience Report (“a public dataset of key user experience metrics for popular origins on the web, as experienced by Chrome users under real-world conditions”) and the discussion about improving the developer experience for Web Workers.

It was great putting faces to names and enjoying sunny San Francisco (no, seriously, it was super sunny and hot). Thanks for the great show, Google!

It’s been a long road, but at last the puzzle is complete: Today we delivered Project Jigsaw for general use, as part of JDK 9.

Jigsaw enhances Java to support programming in the large by adding a module system to the Java SE Platform and to its reference implementation, the JDK. You can now leverage the key advantages of that system, namely strong encapsulation and reliable configuration, to climb out of JAR hell and better structure your code for reusability and long-term evolution.

Jigsaw also applies the module system to the Platform itself, and to the massive, monolithic JDK, to improve security, integrity, performance, and scalability. The last of these goals was originally intended to reduce download times and scale Java SE down to small devices, but it is today just as relevant to dense deployments in the cloud. The Java SE 9 API is divided into twenty-six standard modules; JDK 9 contains dozens more for the usual development and serviceability tools, service providers, and JDK-specific APIs. As a result you can now deliver a Java application together with a slimmed-down Java run-time system that contains just the modules that your application requires.
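As a small illustration of what this looks like in practice (the module and package names below are hypothetical, not from the JDK), a module declares its dependences and its exported packages in a module-info.java:

    // module-info.java -- a hypothetical application module
    module com.example.greeter {
        requires java.logging;            // depend only on the platform modules you use
        exports com.example.greeter.api;  // everything else stays strongly encapsulated
    }

A slimmed-down run-time image containing just the required modules can then be assembled with jlink, along the lines of jlink --module-path $JAVA_HOME/jmods:mods --add-modules com.example.greeter --output greeter-runtime.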

We made all these changes with a keen eye — as always — toward compatibility. The Java SE Platform and the JDK are now modular, but that doesn’t mean that you must convert your own code into modules in order to run on JDK 9 or a slimmed-down version of it. Existing class-path applications that use only standard Java SE 8 APIs will, for the most part, work without change.

Existing libraries and frameworks that depend upon internal implementation details of the JDK may require change, and they may cause warnings to be issued at run time until their maintainers fix them. Some popular libraries, frameworks, and tools — including Maven, Gradle, and Ant — were in this category but have already been fixed, so be sure to upgrade to the latest versions.

Looking ahead

It’s been a long road to deliver Jigsaw, and I expect it will be a long road to its wide adoption — and that’s perfectly fine. Many developers will make use of the newly-modular nature of the platform long before they use the module system in their own code, and it will be easier to use the module system for new code rather than existing code.

Modularizing an existing software system can, in fact, be difficult. Sometimes it won’t be worth the effort. Jigsaw does, however, ease that effort by supporting both top-down and bottom-up migration to modules. You can thus begin to modularize your own applications long before their dependencies are modularized by their maintainers. If you maintain a library or framework then we encourage you to publish a modularized version of it as soon as possible, though not until all of its dependencies have been modularized.
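The top-down direction can be sketched concretely (the module and JAR names here are hypothetical): if your application depends on a plain my-legacy-lib.jar that hasn’t been modularized yet, you can still place that JAR on the module path, where it becomes an automatic module whose name is derived from its file name, and require it directly:

    // module-info.java for your own, already-modularized application
    module com.example.app {
        requires my.legacy.lib;  // automatic module derived from my-legacy-lib.jar
    }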

Modularizing the Java SE Platform and the JDK was extremely difficult, but I’m confident it will prove to have been worth the effort: It lays a strong foundation for the future of Java. The modular nature of the platform makes it possible to remove obsolete modules and to deliver new yet non-final APIs in incubator modules for early testing. The improved integrity of the platform, enabled by the strong encapsulation of internal APIs, makes it easier to move Java forward faster by ensuring that libraries, frameworks, and applications do not depend upon rapidly-changing internal implementation details.

Learning more

There are by now plenty of ways to learn about Jigsaw, from those of us who created it as well as those who helped out along the way.

If your time is limited, consider one or more of the following:

  • The State of the Module System is a concise, informal written overview of the module system. (It’s slightly out of date; I’ll update it soon.)

  • Make Way for Modules!, my keynote presentation at Devoxx Belgium 2015, packs a lot of high-level information into thirty minutes. I followed that up a year later with a quick live demo of Jigsaw’s key features.

  • Alex Buckley’s Modular Development with JDK 9, from Devoxx US 2017, covers the essentials in more depth, in just under an hour.

If you really want to dive in:

Comments, questions, and suggestions are welcome on the jigsaw-dev mailing list. (If you haven’t already subscribed to that list then please do so first, otherwise your message will be discarded as spam.)

Thanks!

Project Jigsaw was an extended, exhilarating, and sometimes exhausting nine-year effort. I was incredibly fortunate to work with an amazing core team from pretty much the very beginning: Alan Bateman, Alex Buckley, Mandy Chung, Jonathan Gibbons, and Karen Kinnear. To all of you: My deepest thanks.

Key contributions later on came from Sundar Athijegannathan, Chris Hegarty, Lois Foltan, Magnus Ihse Bursie, Erik Joelsson, Jim Laskey, Jan Lahoda, Claes Redestad, Paul Sandoz, and Harold Seigel.

Jigsaw benefited immensely from critical comments and suggestions from many others including Jayaprakash Artanareeswaran, Paul Bakker, Martin Buchholz, Stephen Colebourne, Andrew Dinn, Christoph Engelbert, Rémi Forax, Brian Fox, Trisha Gee, Brian Goetz, Mike Hearn, Stephan Herrmann, Juergen Hoeller, Peter Levart, Sander Mak, Gunnar Morling, Simon Nash, Nicolai Parlog, Michael Rasmussen, John Rose, Uwe Schindler, Robert Scholte, Bill Shannon, Aleksey Shipilëv, Jochen Theodorou, Weijun Wang, Tom Watson, and Rafael Winterhalter.

To everyone who contributed, in ways large and small: Thank you!

Thanks to Alan Bateman and Alex Buckley for comments on drafts of this entry.

The Talos II is now available for pre-order. It is the more affordable, more power-efficient successor to the Talos I machine I wrote about in a previous post.

This is a very affordable machine for how powerful it will be, and there are minimal mainboard + CPU + RAM bundles (e.g., this one), around which one can build a workstation with more readily-available parts. I’ve placed an order for one of the bundles, and will buy the chassis and GPU separately (mainly to avoid high cross-border shipping fees for the full workstation).

The Talos II is an important machine for Free Software, and will likely be RYF-certified by the FSF. Pre-orders end September 15th!

For over twenty years the Java SE Platform and the JDK have evolved in large, irregular, and somewhat unpredictable steps. Each feature release has been driven by one or a few significant features, and so the schedule of each release has been adjusted as needed — sometimes more than once! — in order to accommodate the development of those features.

This approach made it possible to deliver big new features at a high level of quality, after thorough review and testing by early adopters. The downside, however, was that smaller API, language, and JVM features could only be delivered when the big features were ready.

This was an acceptable tradeoff in the decades before and after the turn of the century, when Java competed with just a few platforms which evolved at a similar stately pace. Nowadays, however, Java competes with many platforms which evolve at a more rapid pace.

For Java to remain competitive it must not just continue to move forward — it must move forward faster.

Back on the train

Five years ago I mused in this space on the tension between developers, who prefer rapid innovation, and enterprises, which prefer stability, and the fact that everyone prefers regular and predictable releases.

To address these differing desires I suggested, back then, that we switch from the historical feature-driven release model to a time-driven “train” model, with a feature release every two years. In this type of model the development process is a continuous pipeline of innovation that’s only loosely coupled to the actual release process, which itself has a constant cadence. Any particular feature, large or small, is merged only when it’s nearly finished. If a feature misses the current train then that’s unfortunate but it’s not the end of the world, since the next train will already be waiting and will also leave on schedule.

The two-year train model was appealing in theory, but proved unworkable in practice. We took an additional eight months for Java 8 in order to address critical security issues and finish Project Lambda, which was preferable to delaying Lambda by two years. We initially planned Java 9 as a two-and-a-half year release in order to include Project Jigsaw, which was preferable to delaying Jigsaw by an additional eighteen months, yet in the end we wound up taking an additional year and so Java 9 will ship this month, three and a half years after Java 8.

A two-year release cadence is, in retrospect, simply too slow. To achieve a constant cadence we must ship feature releases at a more rapid rate. Deferring a feature from one release to the next should be a tactical decision with minor inconveniences rather than a strategic decision with major consequences.

So, let’s ship a feature release every six months.

That’s fast enough to minimize the pain of waiting for the next train, yet slow enough that we can still deliver each release at a high level of quality.

Proposal

Taking inspiration from the release models used by other platforms and by various operating-system distributions, I propose that after Java 9 we adopt a strict, time-based model with a new feature release every six months, update releases every quarter, and a long-term support release every three years.

  • Feature releases can contain any type of feature, including not just new and improved APIs but also language and JVM features. New features will be merged only when they’re nearly finished, so that the release currently in development is feature-complete at all times. Feature releases will ship in March and September of each year, starting in March of 2018.

  • Update releases will be strictly limited to fixes of security issues, regressions, and bugs in newer features. Each feature release will receive two updates before the next feature release. Update releases will ship quarterly in January, April, July, and October, as they do today.

  • Every three years, starting in September of 2018, the feature release will be a long-term support release. Updates for these releases will be available for at least three years and quite possibly longer, depending upon your vendor.

In this model the overall rate of change should be about the same as it is today; what’s different is that there will be many more opportunities to deliver innovation. The six-month feature releases will be smaller than the multi-year feature releases of the past, and therefore easier to adopt. Six-month feature releases will also reduce the pressure to backport new features to older releases, since the next feature release will never be more than six months away.

Developers who prefer rapid innovation, so that they can leverage new features in production as soon as possible, can use the most recent feature release or an update release thereof and move on to the next one when it ships. They can deliver an application in a Docker image, or other type of container package, along with the exact Java release on which the application will run. Since the application and the Java release can always be tested together, in a modern continuous-integration and continuous-deployment pipeline, it will be straightforward to move from one Java release to the next.

Enterprises that prefer stability, so that they can run multiple large applications on a single shared Java release, can instead use the current long-term support release. They can plan ahead to migrate from one long-term support release to the next, like clockwork, every three years.

To make it clear that these are time-based releases, and to make it easy to figure out the release date of any particular release, the version strings of feature releases will be of the form $YEAR.$MONTH. Thus next year’s March release will be 18.3, and the September long-term support release will be 18.9.

Implications

This proposal will, if adopted, require major changes in how contributors in the OpenJDK Community produce the JDK itself; I’ve posted some initial thoughts as to how we might proceed there. It will be made easier if we can reduce the overhead of the Java Community Process, which governs the evolution of the Java SE Platform; my colleagues Brian Goetz and Georges Saab have already raised this topic with the JCP Executive Committee.

This proposal will, ultimately, affect every developer, user, and enterprise that relies upon Java. It will, if successful, help Java remain competitive — while maintaining its core values of compatibility, reliability, and thoughtful evolution — for many years to come.

Comments and questions are welcome, either on the OpenJDK general discussion list (please subscribe to that list in order to post to it) or on Twitter, with the hashtag #javatrain.

After almost fifteen years I have decided to quit working on IKVM.NET. The decision has been a long time coming. Those of you who saw yesterday’s Twitter spat, please don’t assume that was the cause; rather, it shared an underlying cause with this decision. I’ve slowly been losing faith in .NET. Looking back, I guess this process started with the release of .NET 3.5. On the Java side things don’t look much better. The Java 9 module system reminds me too much of the generics erasure debacle.

I hope someone will fork IKVM.NET and continue working on it. Although, I’d appreciate it if they’d pick another name. I’ve gotten so much criticism for the name over the years, that I’d like to hang on to it 😊

I’d like to thank the following people for helping me make this journey or making the journey so much fun: Brian Goetz, Chris Brumme, Chris Laffra, Dawid Weiss, Erik Meijer, Jb Evain, John Rose, Mads Torgersen, Mark Reinhold, Volker Berlin, Wayne Kovsky, The GNU Classpath Community, The Mono Community.

And I want to especially thank my friend Miguel de Icaza for his guidance, support, inspiration and tireless efforts to promote IKVM.

Thank you all and goodbye.

[NOTE: This article talks about commercial products and contains links to them. I do not receive any money if you buy those tools, nor do I work for or am affiliated with any of those companies. The opinions expressed here are mine and the review is subjective.]

This is my attempt at a review of Spitfire Audio BT Phobos. Before diving into the review, and since I know I will be particularly critical of some aspects, I think it’s fair to assess the plugin right away: BT Phobos is an awesome tool, make no mistake.

BT Phobos is a “polyconvolution” synthesiser. It is, in fact, the first “standalone” plugin produced by Spitfire Audio, which is one of the companies I respect the most when it comes to music production and sample based instruments.

The term polyconvolution is used by the Spitfire Audio team to indicate the simultaneous use of three convolvers for four primary audio paths: you can send any amount of the output of each of the four primary sources (numbered 1 to 4) to each of the three convolution engines (named W, X and Y).

[Screenshot: Source material controls]

There is a lot of flexibility in the mixing capabilities. There are, of course, separate dry/wet signal knobs that send a specific portion of the unprocessed source material to the “amplifier” module, control how much of the signal goes to the convolution circuits, and finally control how much of each of the convolution engines applies to each source sound.

This last bit is achieved by means of an interesting nabla-shaped X/Y pad: by positioning the icon that represents the source module closer to a corner, it’s possible to activate just the convolution engine that corresponds to that corner; for example, top left is the W engine, top right the X and bottom the Y. Manually moving the icon gradually introduces contributions from the other engines, and double clicking on the icon makes all convolvers contribute equally to the wet sound, by positioning it at the center of the nabla.

[Screenshot: The convolution mixer]

Finally, each convolver has a control that allows you to change the output level of the convolution engine before it reaches its envelope shaper. Spitfire Audio has released a very interesting flow diagram that shows the signal path in detail, which is linked below for reference.

BT Phobos signal path

In addition to the controls just described, the main GUI has basic controls to tweak the source material with an ADSR envelope, directly accessible below each of the main sound sources as well as the convolution modules; it’s also possible to access more advanced settings by clicking on the number or the letter that identifies the module name.

[Screenshot: The advanced controls interface]

An example of such controls is the Hold parameter, which lets the user adjust the time the sound is held at full level before entering the Decay phase of its envelope. Another useful tool is the set of sampling and IR offset controls, which allow you to tweak parameters like the starting point of the material, its quantisation and its Speed (the playback speed for the samples, a function of the host tempo). There is also a control to influence the general pitch of the sound. Finally, a simple but effective section is dedicated to filtering – although a proper EQ is missing – as well as panning and level adjustments.

All those parameters are particularly important when using loops, but they also contribute to shaping the sound of the pitched material, and they can be randomised for interesting effects and artefacts generated from the entropy (you can randomise just the material selection, as opposed to all the parameters).

Modulation is also present, of course, with LFOs of various kinds that can be used to modulate basically everything. You can access them either by clicking on the mappings toggle below the ADSR envelope of each section, or by using the advanced settings pages.

The amount of tweaking that can be done to the material in both the source and the convolution engines is probably the most important aspect of BT Phobos, since it gives an excellent amount of freedom to create new sounds from what’s available (already a massive amount of content) and allows you to build wildly different patches with a bit of work. It’s definitely not straightforward, though, and it takes time to understand the combined effect that each setting has on the whole.

Since the material is polyphonic, the Impulse Responses for the convolution are created on the fly; in fact, one interesting characteristic of BT Phobos is that there is no difference between material for the convolution engines and material for the source modules: both draw from the same pool of sounds.

[Screenshot: BT Phobos’ beautiful GUI]

There is a difference in the types of material, though: loop-based samples are, well, looped (and tempo-synced), and their pitch does not change based on the key that triggers them (although you can still affect the general pitch of the sound with the advanced controls), while “tonal” material is pitched and follows the MIDI notes.

One note about the LFOs: the mappings are “per module”. In other words, it is possible to modulate almost every parameter inside a single module, be it one of the four input sources or one of the three convolution engines, but there seems to be no way to define a global mapping of some kind. For example, I found a very nice patch from Mr. Christian Henson (who incidentally made, at least in my opinion, the best and most balanced overall presets), and I noticed I could make it even more interesting by using the modulation wheel. I wanted to modulate the CC1 message with an LFO (ideally it would be even better to have access to a custom envelope, but BT Phobos doesn’t have any for modulation use), but I could not find a way to do that other than using Logic’s own MIDI FX. I understand that MIDI signals are generated outside the scope of the plugin, but it would be fantastic to have the option of tweaking and modulating everything from within the synth itself.

All the sources and convolvers can be assigned to separate parts of the keyboard by tweaking the mapper at the bottom of the GUI. It is not possible to map a sound to start from an offset in the keyboard controls – for example to play C1 on the keyboard but trigger C2, or any other note – but of course you can change the global pitch, which has effectively the same result, and as said before this can also be modulated with an LFO or via DAW automation, for more interesting effects.

[Screenshot: Keyboard mapping tool]

Indeed, the flexibility of the tool and the number of options at your disposal for tweaking the sounds are very impressive. Most patches are very nice and ready to be used as they are, and blend nicely with lots of disparate styles. Some patches are very specific, though, and pose a challenge to use. Generally, I would consider these as starting points for exploration, rather than “final”.

When reading about BT Phobos in the weeks before its release, many people asked whether you could add your own sounds to it or not. It’s not possible, unfortunately.

At first, I thought that wasn’t a limitation or a deal breaker. I still think it’s not a deal breaker, but now I see the added value that BT Phobos has even just as a standalone synth. Recreating the same kind of signal path manually with external tools, to give your own content the “Phobos treatment”, is entirely possible of course, for example just with Alchemy and Space Designer (which are both included in MainStage, so you can get them for a staggering 30 euros if you are a Mac user, even if you don’t use Logic Pro X!), but we would be trading away the immediacy that BT Phobos delivers.

That, maybe, is my main criticism of this synth, and I hope Spitfire Audio turns BT Phobos into a fully fledged tool for sound design over time, maybe enabling access to spectral shaping in some form or another, so we can literally paint over (or paint away!) portions of the sound. That is something you can do with iZotope Iris or Alchemy, and it is a very powerful way to shape a sound and do sound design in general.

Another thing that is missing is a sound effect module, although I don’t know how important that is, given that there are thousands of outstanding plugins that do all sorts of effects, from delay to chorus etc. In fact, many patches benefit from added reverb (I use Eventide Blackhole and found that it works extremely well with BT Phobos, since it’s also prominently used for weird sound effects). But it might be interesting to play by putting some effects (including a more proper EQ section) in various places in the signal path, although it’s all too easy to generate total chaos from such experimentation, so it’s possible that Spitfire Audio simply decided to leave this option for another time and instead focus on a better overall experience.

And there’s no arpeggiator! Really!

The number of polyphonic voices can be altered. Spitfire Audio states that the synth tweaks the number of voices at startup to match the characteristics of your computer, but I can’t confirm that, since every change I make seems to persist, even if I occasionally hear some pops and cracks at higher settings. Nevertheless, the CPU usage is pretty decent unless you go absolutely crazy with the polyphony count. I also noted that the number affects the clarity of the sound. This is understandable, since a higher count means more notes can be generated at the same time, which means more things are competing for the same spectrum, and things can become very confusing very quickly. On the other hand, a lower polyphony count has a bad impact on how the notes are generated: I sometimes feel that things just stop generating sound, which is counter-intuitive and very disturbing, especially since it’s very easy to reach a high polyphony count with all those sources and convolvers.

Also worth noting is that, by nature, some patches have very wild differences in their envelopes and level settings, which means it’s all too easy to move from a quiet to a very loud patch just by clicking “next” (which is possible in Logic, at least, with the next/prev patch buttons on top of the plugin main frame). The synth does not stop the sound, nor does it make any attempt to fade from one sound to the next; instead, the convolutions simply keep working on the next sample in the queue with the new settings! I still have to decide if this is cool or not, and perhaps it’s not intentional, but I can see how it could be used to automate patch changes in some clever way during playback. And indeed, I was able to create a couple of interesting side effects just by changing between patches at the right time.

More on the sounds. The amount of content is really staggering, and simply cycling through the patches does not do justice to this synth, at all!

What BT Phobos wants is a user who spends time tweaking the patches and playing with the source material to get the most out of it. However, it’s easy to see how limiting this may feel at the same time, particularly with the more esoteric and atonal sounds; there’s certainly a limit to how good a wood stick convolved with a thin aluminium can may sound, so indeed some patches do feel repetitive at times, as does the source material. There are quite a few very similar drum loops, for example, or variously pitched “wind blowing into a pipe” kinds of sounds.

This is a problem common to other synths based on the idea of tweaking sounds from the environment, though. For example, I have the amazing Geosonics from Soniccouture, which is an almost unusable library that, once tweaked, is capable of amazing awesomeness. Clearly, the authors of both synths – but this is especially valid for BT Phobos I think – are looking at an audience that is capable of listening through the detuned and dissonant sound waves and shape a new form of music.

This is probably the reason why so many of the pre-assembled patches dive the user full speed into total sound design territory; however, and this is another important point of criticism, this is sound design that has already been done for you. A lot of the BT patches, in particular, are clearly BT patches: using them as they are means you are simply redoing something that has already been done before, and, even with a very experimental feeling still strongly present, it’s not totally unheard-of or new.

For example, I also happen to have BreakTweaker and Stutter Edit (tools that also originally come from BT), and I could not resist the temptation to play something that resembles BT’s work on “This Binary Universe” or “_” (fantastic albums)! While this seems exciting – BT in a box! And you can also see the democratising aspect of BT Phobos: I can do that in half an hour instead of six months of manual CSound programming! – it’s an unfortunate and artificial limitation on a tool that is otherwise a very powerful enabler, capable of bringing complex sound design one step closer to the general public. Having the ability to process your own sounds would mitigate this aspect, I think.

I do see how this is useful for a composer in need of a quick solution for an approaching deadline, though, even with the most experimental tones: those patches can resolve a deadlock or take you out of an impasse in a second.

The potential for BT Phobos to become a must-have tool for sound design is all there, especially if Spitfire Audio keeps adding content, perhaps more varied (and, even better, lets you load your own content). The ability to shape the existing sounds already makes it very usable. I don’t think it’s a general tool at this stage, though, and it definitely should not be the first synth or sound shaping processor in your arsenal, especially if you are starting out now.

But it’s not just a one trick pony either: it offers you quite a lot of possibilities, and the more you work with it, the more addictive it becomes. I can see Spitfire Audio soon offering this synth within a collection comprising some of their more experimental stuff like LCO and Enigma, which would be very nice indeed.

It’s unfortunate that Spitfire Audio does not offer an evaluation period: contrary to most of their offerings, BT Phobos needs time to be fully grasped and is anything but immediate (well, unless you are happy with the default patches or you really just need to “get out of trouble” quickly, but be careful with that, because the tax is on originality). It can, and does, evolve over time, as its convolutions do, and it can absolutely deliver total awesomeness if used correctly.

Most patches are also usable out of the box, and especially by adding some reverb or doing some post processing with other tools, it’s possible to squeeze even more life out of them.

Overall, I do recommend BT Phobos: it is a wonderful, very addictive synthesiser.


Quantum Curling

Last week we had a work week at Mozilla’s Toronto office for a bunch of different projects including Quantum DOM, Quantum Flow (performance), etc. It was great to have people from a variety of teams participate in discussions and solidify (and change!) plans for upcoming Firefox releases. There were lots of sessions going on in parallel and I wasn’t able to attend them all but some of the results were written up by the inimitable Ehsan in his fourth Quantum Flow newsletter.

Near the end of the week, Ehsan gave an impromptu walkthrough of the Gecko profiler. I’m planning to take some of the tips he gave and that were discussed and put them into the documentation for the profiler. If you’re interested in helping, please let me know!

The photo above is of us going curling at the High Park Curling Club. It was a lot of fun and I was happy that only one other person had ever curled before so it was a unique experience for almost everyone!

As previously reported, the JSR 269 annotation processing APIs in the javax.lang.model and javax.annotation.processing packages are undergoing maintenance review as part of Java SE 9.

All the planned changes to the JSR 269 API are in JDK 9 build 164, downloadable as an early access binary. Of note, new in build 164 is the annotation type javax.annotation.processing.Generated, meant to be a drop-in replacement for javax.annotation.Generated since the latter is not in a convenient module.
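As a sketch of what processor output can look like with the new annotation (the processor name and date below are invented for illustration):

    import javax.annotation.processing.Generated;

    @Generated(value = "com.example.HelloProcessor",
               date = "2017-05-10T08:00:00Z")
    public class HelloImpl {
        // ... generated members would go here ...
    }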

Please try out your existing annotation processors -- compiling them, running them, etc. -- on JDK 9 and report your experiences, good or bad, to compiler-dev@openjdk.java.net.

As has been done previously during Java SE 7 and Java SE 8, the JSR 269 annotation processing API is undergoing a maintenance review (MR) as part of Java SE 9.

Most of the API changes are in support of adding modules to the platform, both as a language structure in javax.lang.model.* as well as another interaction point in javax.annotation.processing in the Filer and elsewhere. A small API change was also done to better support repeating annotations. A more detailed summary of the API changes is included in the MR material.

The API changes are intended to be largely compatible with the sources of existing processors, their binary linkage, as well as their runtime behavior. However, it would be helpful to verify that your existing processors work as expected when run under JDK 9. JDK 9 early access binaries are available for download. Please report experiences running processors under JDK 9 as comments here or to me as email. Feedback on the API changes can be sent to compiler-dev@openjdk.java.net.
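For anyone who wants a quick smoke test, a minimal processor skeleton along these lines (the class name is mine, not from the MR material) should compile and run unchanged on JDK 9, where module declarations can additionally show up among the root elements:

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    @SupportedAnnotationTypes("*")
    public class ListRootsProcessor extends AbstractProcessor {
        @Override
        public SourceVersion getSupportedSourceVersion() {
            // Claim the latest version so javac doesn't warn on newer JDKs.
            return SourceVersion.latestSupported();
        }

        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            // Print each root element seen in this round; on JDK 9 this can
            // include a module declaration when module-info.java is compiled.
            for (Element e : roundEnv.getRootElements()) {
                processingEnv.getMessager().printMessage(
                    Diagnostic.Kind.NOTE, e.getKind() + ": " + e);
            }
            return false; // don't claim any annotations
        }
    }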

Note: this article is also available in German.

What is Conversations?

Conversations is an app for Android Smartphones for sending each other messages, pictures, etc, much like WhatsApp. However, there are a number of important differences to WhatsApp:

  • Conversations does not use your phone number for identification, and doesn’t read your address book to find contacts. It uses an ID that looks much like an email address (the so-called Jabber-ID), and you can find contacts by exchanging Jabber-IDs with people, just like you do with email addresses, phone numbers, etc.
  • Conversations uses an open protocol called XMPP, that is used by many other programs on a wide range of systems, for example on desktop PCs.
  • Conversations is Open Source, i.e. everybody can inspect the source code, check it for security issues, see what the program actually does, or even modify and distribute it.
  • XMPP builds on a decentralized infrastructure. This means that not one company is in control of it, but instead there are many providers, or you can even run your own server if you want.
  • Conversations does not collect and sell any information from you or your contacts.

There are more differences, but I don’t want to go into detail here, others have already done it, and better (German).

Install Conversations

From Google Play

Conversations is easily installed from Google Play. However, it currently costs 2,39€. I’d recommend everybody who can to buy it; it supports the development of this really good app.

Alternative: From F-Droid

For all those who cannot or don’t want to spend the money, there is another way to get it for free: it is available in F-Droid, an alternative app store that only distributes Open Source software. To go this route, you first need to install F-Droid. Then you can start F-Droid, search for Conversations and install it.

Set-up Jabber account

The next step is to set up a Jabber account. You need two things: an ID and a provider. The first part, the ID, you can choose freely, e.g. a fantasy name or something like firstname.surname; this is really up to you. To find a provider, I recommend this list: https://gultsch.de/compliance_ranked.html. The providers at the top of the list have the best support for the XMPP features that are relevant for smartphone users. I’d recommend trashserver.net because it supports in-band registration (directly from Conversations) and is very well maintained. If you want to further support the developer of Conversations, I’d recommend an account on conversations.im; this currently costs 8€/year. I think it is worth it, but you have the choice.

If you choose, for example, the ID ‘joe.example’ on the provider ‘provider.org’, then your Jabber-ID is joe.example@provider.org. Once you’ve decided on a Jabber-ID, you can easily register an account by starting Conversations, entering the Jabber-ID in the set-up screen, checking the box ‘register new account on server’, entering your preferred password twice and confirming.

Adding contacts

Adding contacts is different than WhatsApp. You have to manually add contacts to your roster. Tap on the ‘+’ symbol next to the little people icon, enter your contact’s Jabber-ID and confirm it. Now you’re ready to start chatting. Have fun!

After turning off comments on this blog a few years ago, the time has now come to remove all the posts containing links. The reason is again pretty much the same as it was when I decided to turn off the comments - I still live in Hamburg, Germany.

So, I've chosen to simply remove all the posts containing links. Unfortunately, those were pretty much all of them. I only left my old post up explaining why this blog allows no comments, now updated to remove all links, of course.

Over the past years, writing new blog posts here has become increasingly rare for me. Most of my 'social media activity' has long moved over to Twitter.

Unfortunately, I mostly use Twitter as a social bookmarking tool, saving and sharing links to things that I find interesting.

As a consequence, I've signed up for a service that automatically deletes my tweets after a short period of time. I'd link to it, but ...

RoboVM 0.0.1 got released this week by Trillian AB.

RoboVM's main focus is compiling Java to native code for deployment on mobile devices such as iOS and Android. RoboVM uses a Java to Objective-C bridge built using LLVM. The good news is that the same process works for converting Java applications to native applications on GNU/Linux systems as well!

Mario Zechner, the author of libgdx, posted this nice picture from inside DDD/GDB of his first HelloWorld compiled to native x86 code running on a GNU/Linux machine.

[Screenshot: GNU/Linux machine code generated by RoboVM seen from inside DDD/GDB]

http://www.robovm.org/

JogAmp is the home of high performance Java™ libraries for 3D Graphics, Multimedia and Processing.
JOGL, JOCL and JOAL provide cross platform Java™ language bindings to the OpenGL®, OpenCL™, OpenAL and OpenMAX APIs.
Running on Android, Linux, Windows, OS X, and Solaris across devices using Java.

Release announcement for JogAmp 2.0.2-rc12

"You're encouraged to stop using the now-ancient 2.0-rc11!"

This 2.0.2-rc12 release includes the largest security review in the 10-year history of JOGL.

  • Security Fixes

    • Dynamic Linker Usage / Impl.
    • ProcAddressTable field visibility
    • Perform SecurityManager checks where required
    • Validation of property access
    • JAR Manifest tags:
      • Codebase
      • Permissions
      • Sealed
    • Use latest Java7 toolchain
      • Generating Java 1.6 bytecode
      • HTML API doc

https://jogamp.org/wiki/index.php/SW_Tracking_Report_Objectives_for_the_release_2.0.2_of_JOGL
Security fixes are marked in red on the above bug tracking page.
JogAmp sends out thanks to the FuzzMyApp security researchers for the healthy communication that triggered the security review work.

If you find an issue with the release, please report it to our bug database under the appropriate component. Development discussion takes place inside the JogAmp forum & mailing-list and the #jogamp IRC channel on irc.freenode.net.


Meet us @

JogAmp @ SIGGRAPH 2013

If you’ve been following Infinity and would like to, you know, download some code and try it out… well, now you can!


The first release candidate is finally available. It can be downloaded here or from NuGet.

What's New (relative to IKVM.NET 8.0):

  • Integrated OpenJDK 8u45.
  • Many fixes to late binding support.
  • Added ikvmc support for deterministic output files.
  • Various sun.misc.Unsafe improvements.
  • Many minor bug fixes and performance tweaks.

Changes since previous development snapshot:

  • Assemblies are strong named.
  • Fix for bug #303: ikvmc internal compiler error when trying to get interfaces from a type from a missing assembly reference.
  • Implemented NIO atomic file move on Windows.

Binaries available here: ikvmbin-8.1.5717.0.zip

Sources: ikvmsrc-8.1.5717.0.zip, openjdk-8u45-b14-stripped.zip

Thanks to everybody who commented on the JamVM 2.0.0 release, and apologies it's taken so long to approve them - I was expecting to get an email when I had an unmoderated comment but I haven't received any.

To answer the query regarding Nashorn: yes, JamVM 2.0.0 can run Nashorn. It was one of the things I tested the JSR 292 implementation against. However, I can't say I ran any particularly large scripts with it (it's not something I have a lot of experience with). I'd be pleased to hear about any experiences (good or bad) you have.

So now 2.0.0 is out of the way I hope to do much more frequent releases.  I've just started to look at OpenJDK 9.  I was slightly dismayed to discover it wouldn't even start up (java -version), but it turned out to be not a lot of work to fix (2 evenings).  Next is the jtreg tests...

I'm pleased to announce a new release of JamVM.  JamVM 2.0.0 is the first release of JamVM with support for OpenJDK (in addition to GNU Classpath). Although IcedTea already includes JamVM with OpenJDK support, this has been based on periodic snapshots of the development tree.

JamVM 2.0.0 supports OpenJDK 6, 7 and 8 (the latest). With OpenJDK 7 and 8 this includes full support for JSR 292 (invokedynamic). JamVM 2.0.0 with OpenJDK 8 also includes full support for Lambda expressions (JSR 335), type annotations (JSR 308) and method parameter reflection.

In addition to OpenJDK support, JamVM 2.0.0 also includes many bug-fixes, performance improvements and improved compatibility (from running the OpenJDK jtreg tests).

The full release notes can be found here (changes are categorised into those affecting OpenJDK, GNU Classpath and both), and the release package can be downloaded from the file area.