Archive for July, 2013

Why is Traditional Java I/O Uninterruptable?

Posted Jul 22 2013 in Java with 0 Comments

One of the best things about writing code in the Java ecosystem is that so much of the underlying platform is open source. This makes it easy to get good answers to questions about how the platform actually works. To illustrate, I’ll walk through the JVM code to show why traditional Java I/O isn’t interruptable, and therefore why a thread blocked in an I/O call doesn’t respond to Thread.interrupt().

The core issue with interrupting a Java thread performing I/O is that the underlying system call is uninterruptable. Let’s see why that is, for a FileInputStream. Continue Reading…
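As a quick, runnable way to see the behavior for yourself, here is a small demo of mine, not taken from the post’s JVM walk-through. It assumes you run it from a terminal and type nothing, so that System.in is backed by a FileInputStream on the standard input descriptor; the class name UninterruptibleReadDemo is just a placeholder.

import java.io.IOException;

public class UninterruptibleReadDemo {
    public static void main(String[] args) throws Exception {
        Thread reader = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Blocking in System.in.read()...");
                    int b = System.in.read();   // blocks in the underlying read
                    System.out.println("read() returned: " + b);
                } catch (IOException e) {
                    System.out.println("read() threw: " + e);
                }
            }
        });
        reader.setDaemon(true);   // let the JVM exit even though the read never returns
        reader.start();

        Thread.sleep(1000);
        reader.interrupt();       // sets the interrupt flag, but...
        Thread.sleep(1000);

        // ...the reader never unblocks: the flag is set, yet read() keeps waiting.
        System.out.println("Interrupted flag: " + reader.isInterrupted()
                + ", still alive: " + reader.isAlive());
    }
}

The interrupt flag gets set, but nothing ever makes the blocked read return; that is exactly the behavior the JVM walk-through explains.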

Custom EHCache Sizing Engines

Posted Jul 10 2013 in Java with 0 Comments

One of the ways that EHCache can be configured is to put a bound on the maximum amount of memory a cache can consume. To do this, EHCache needs to be able to compute the size of the cache, which implies that it can compute the size of object graphs contained within the cache. The usual way it does this is to traverse the object graph of each key/value pair in the cache, estimate the size of each component object within the graph, and then add them all up to get a total for the key/value pair.  By doing this for each cached key/value pair, EHCache can estimate the overall memory footprint of a cache and enforce a memory bound by evicting objects, if necessary. Continue Reading…
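To make the traverse-and-sum idea concrete, here is a rough, self-contained sketch of a reflective object-graph estimator. It is only an illustration of the approach, not EHCache’s actual sizing engine: the class name GraphSizeEstimator is hypothetical, and the per-object header and reference sizes are assumed constants.

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.IdentityHashMap;
import java.util.Map;

// Conceptual sketch only: walk the object graph reachable from a key/value pair,
// assign a crude per-object estimate, and total it. Arrays and other special
// cases are omitted to keep the sketch short.
public class GraphSizeEstimator {

    private static final long OBJECT_HEADER = 16;  // assumed per-object header size
    private static final long REFERENCE_SIZE = 8;  // assumed reference size

    public long sizeOfEntry(Object key, Object value) {
        Map<Object, Boolean> visited = new IdentityHashMap<Object, Boolean>();
        return estimate(key, visited) + estimate(value, visited);
    }

    private long estimate(Object obj, Map<Object, Boolean> visited) {
        if (obj == null || visited.put(obj, Boolean.TRUE) != null) {
            return 0;  // null, or already counted elsewhere in the graph
        }
        long size = OBJECT_HEADER;
        for (Class<?> c = obj.getClass(); c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (Modifier.isStatic(f.getModifiers())) {
                    continue;  // static fields are not part of the instance
                }
                if (f.getType().isPrimitive()) {
                    size += primitiveSize(f.getType());
                } else {
                    size += REFERENCE_SIZE;
                    f.setAccessible(true);
                    try {
                        size += estimate(f.get(obj), visited);  // recurse into the graph
                    } catch (IllegalAccessException ignored) {
                        // best effort: skip fields we cannot read
                    }
                }
            }
        }
        return size;
    }

    private long primitiveSize(Class<?> type) {
        if (type == long.class || type == double.class) return 8;
        if (type == int.class || type == float.class) return 4;
        if (type == short.class || type == char.class) return 2;
        return 1;  // boolean, byte
    }
}

A real sizing engine also has to handle arrays, shared subgraphs, and JVM object alignment, which is exactly where a custom engine becomes interesting.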

SQLDeveloper Canned Queries and Templates

Recently, I’ve been making an honest effort to use SQL Developer as my Oracle IDE. I really have. And, for the most part, I’ve been somewhat successful. But as a lifelong (since 2001) TOAD user, there are a few things I can’t live without. This article is about one of them: canned queries. And, as a bonus, the solution I’m going to outline provides something TOAD never did well: canned templates with variables.

Continue Reading…

Thread Local State and its Interaction with Thread Pools

Posted Jul 2 2013 in Java with 0 Comments

I recently blogged about solving problems with thread local storage. In that instance, thread local storage was a way to add a form of ‘after the fact’ synchronization to an interface that was initially designed to be called from a single thread. Thread local storage is useful for this purpose because it allows each thread to easily isolate the data it manipulates from other threads. Unfortunately, this strength of thread local storage is also its weakness. In modern multi-threaded designs, threading issues are often abstracted behind thread pools and work queues. With these abstractions in place, the threads themselves become an implementation detail, and thread local variables are too low level to support some useful scenarios.

One way to address this issue is to use some of the techniques from an earlier post on dynamic extent. The gist of the idea is to provide Runnables with a way of reconstructing relevant thread local state at the time they are invoked on a pool thread. This maps well to the idea of ‘dynamic extent’, as presented in the earlier post: ‘establishing the precondition’ is initializing the thread locals for the run, and ‘establishing the postcondition’ is restoring their original values. Here’s how it might look in code: Continue Reading…
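Since the full listing sits behind the link, here is a minimal sketch of the wrapper described above. The ThreadLocal named REQUEST_CONTEXT and the class name ContextAwareRunnable are hypothetical stand-ins for whatever thread local state a task actually depends on.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Capture the submitting thread's thread local state, install it on the pool
// thread before running the delegate, and restore the pool thread's prior
// value afterward.
public class ContextAwareRunnable implements Runnable {

    // Hypothetical thread local that tasks depend on.
    public static final ThreadLocal<String> REQUEST_CONTEXT = new ThreadLocal<String>();

    private final Runnable delegate;
    private final String capturedContext;

    public ContextAwareRunnable(Runnable delegate) {
        this.delegate = delegate;
        this.capturedContext = REQUEST_CONTEXT.get();   // captured on the submitting thread
    }

    public void run() {
        String previous = REQUEST_CONTEXT.get();
        REQUEST_CONTEXT.set(capturedContext);           // establish the precondition
        try {
            delegate.run();
        } finally {
            REQUEST_CONTEXT.set(previous);              // establish the postcondition
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        REQUEST_CONTEXT.set("request-42");
        pool.submit(new ContextAwareRunnable(new Runnable() {
            public void run() {
                System.out.println("Context on pool thread: " + REQUEST_CONTEXT.get());
            }
        }));
        pool.shutdown();
    }
}

Submitting tasks through a wrapper like this keeps the thread pool itself an implementation detail while still giving each task the thread local state it expects.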