The JVM Fanboy Blog 

WildFly & Nashorn part 1: Hello world!

by vincent

Posted on Thursday Sep 15, 2016 at 12:46AM in Nashorn

Red Hat recently released version 10.1 of their WildFly application server.

WildFly is Red Hat's open-source Java Enterprise Edition application server, which implements the Java EE 7 Full and Web Profile standards. This is a big difference from the popular Apache Tomcat server, which only implements the Servlet Container part of the Java EE specs (note, though, that the TomEE project turns Tomcat into a full Java EE application server, and that there's also a distribution of WildFly available that only implements the Servlet Container standard!)

As far as I can tell, version 10 of WildFly is not yet Java EE certified by Oracle; the last certified version seems to be WildFly 8. Still, it's definitely one of the more interesting application servers on the market.

Introducing Undertow and Undertow.js

Undertow is the name of the HTTP server that powers WildFly. It is a separate project written in Java, and Java applications can embed it themselves. Noteworthy is that Undertow is implemented using NIO, Java's high-performance non-blocking I/O API (introduced back in Java 1.4 and extended with NIO.2 in Java 7).

On top of Undertow runs Undertow.js, also a separate project, which we will concentrate on in this article. Undertow.js makes many WildFly capabilities available to server-side JavaScript scripts that run on Oracle Nashorn.

Since WildFly is implemented using Undertow and comes with Undertow.js, getting server-side Nashorn JavaScript scripts to work is a relatively easy process. Undertow.js offers convenient wrappers around Java objects, so sharing and using DataSource objects to query databases, retrieving and using Java objects, and so on couldn't be much easier. Also, since WildFly, in its developer-friendly standalone mode, automatically reloads modified Nashorn scripts, changes to scripts are active immediately.
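
To give a taste of those wrappers, here is a hypothetical sketch of an endpoint with an injected datasource, modeled on the patterns in the Undertow.js docs. The "jdbc:myDS" injection name, the "products" table and the query method are my own inventions; check the Undertow.js documentation for the exact injection syntax:

```javascript
// Hypothetical Undertow.js endpoint sketch; "jdbc:myDS" and the "products"
// table are invented for illustration. The handler body is a plain function
// so its logic can be followed (and tested) in isolation.
function productsHandler(db, $exchange) {
    // 'db' would be the Undertow.js wrapper around a JDBC DataSource;
    // its query() is assumed to return result rows as JavaScript objects.
    return JSON.stringify(db.query("select * from products"));
}

// Register the endpoint only when actually running under Undertow.js:
if (typeof $undertow !== "undefined") {
    $undertow.onGet("/products",
        {headers: {"content-type": "application/json"}},
        ["jdbc:myDS", productsHandler]);
}
```

Because the handler is an ordinary function, the endpoint logic stays easy to reason about independently of the server runtime.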

WildFly 10.1 comes with Undertow.js version 1.0.2.Final, released in February 2016, which is the version I will concentrate on. Note that there's a newer beta version on GitHub, though.

Let's do the Hello World example

I plan to do more parts in this series, in which we will examine more exciting Undertow.js features and I will write my own tutorial code. But for now, let's get started by creating the Hello World example that I "borrowed" from the Undertow.js docs.

Download and configure WildFly 10.1

I'll assume you will run WildFly for development purposes only, on the same machine as your Java development environment. As WildFly is complex software, I'll not describe configuring it with security in mind. Do not run WildFly on a public server without studying the Administrator Guide.

While WildFly has more extensive run modes, we will concentrate on the easy-to-use and developer-friendly standalone mode. In this mode we have just one instance of WildFly, which runs both the administration module and the applications.

  • From the official website download page, download the latest Java EE7 Full & Web Distribution.
  • Unzip the ZIP or TGZ file to a convenient location
  • Navigate to the "bin" subdirectory.
    On Windows machines, run "standalone.bat". On Linux, run "./standalone.sh".
    Windows screenshot of WildFly that is starting
  • If everything goes as planned, you'll see messages printed to the console with the URLs of the HTTP management interface and Administration Console.

    On my machine, the Administration Console URL is http://127.0.0.1:9990 (the default).

    You should also see a message that WildFly has started.

  • Before you can access the Administration Console, you will have to create a user that can access it.
    • In a new terminal window (or Command Prompt on Windows), from the same "bin" subdirectory, run the add-user.bat (Windows) or ./add-user.sh (Linux) script.
    • When prompted, type "a" (Management User) and press enter
    • Enter the username you wish to use as a login and press enter
    • Enter a password and press enter; try to follow the recommendations. If you don't, you will be reminded and asked if you are really sure you want to continue (don't blame them when your password is hacked!)
    • Re-enter password and press enter
    • Leave the groups list empty, just press enter
    • If you made no mistakes, answer "yes" and press enter when asked if the entered info is correct
    • When asked if the user is going to be used to connect to another process, enter "no" and press enter
    • Then press any key to continue. A user that can log in to the Administration Console has now been created.
  • Now you should be able to log in to the Administration Console
    • In your favorite browser, enter the URL that is shown in the terminal window (the one that runs WildFly). This is probably "http://127.0.0.1:9990"
    • When prompted enter the username and password you chose in the previous step.
    • The Management page of the Administration Console should now appear.

Develop the server-side JavaScript code

  • I'll be using Oracle NetBeans to create a new Java Web > Web Application project called "undertowjs_test".

  • We won't be deploying to WildFly using NetBeans IDE (the reason will be explained later), so I just accepted my default Tomcat server.
  • On the last screen, do not choose any framework and finish the wizard.
  • In your project's "Web Pages" folder, add a new JavaScript file "main.js"
  • Enter the following code:

            $undertow
                .onGet("/hello",
                    {headers: {"content-type": "text/plain"}},
                    [function ($exchange) {
                        return "Hello World";
                    }]);

  • Some notes:

    • It creates the "/hello" endpoint and responds with a hardcoded plain-text response when an HTTP GET request is received.
    • The "$undertow" object instance is created and made globally available automatically by undertow.js
    • The "$exchange" variable is not used in this example, but it can be used to communicate with the server (to do things like redirect, set HTTP response codes, retrieve cookies, read query parameters, etc.). In the next part of this series I will describe it more thoroughly. For now, take a look at the documentation if you want more information.
    • Like normal Java EE applications, endpoints are created relative to the application's context path. So in this case the endpoint will actually be "/undertowjs_test/hello"
  • Since WildFly can also serve JavaScript files to the browser, it cannot know which JavaScript files are meant for the server-side Nashorn interpreter and which will be served to the front-end. Therefore, you'll need to create a text file with a fixed filename that tells Undertow.js which JavaScript files it should run server-side on server startup. To do this, in the Web Pages' WEB-INF directory, create a new empty file called "undertow-scripts.conf"

  • In "undertow-scripts.conf", enter the following single line and save the file:

    main.js

    • Note: since we stored main.js in the Web Pages directory, you do not need to add any path to the filename. Also note that scripts mentioned in this file are considered server-only JavaScript files; download requests for these files will be refused.
  • Now let's build the WAR file

Deploy the WAR file to WildFly

  • Return to the browser, and in the WildFly Management start screen, click on the "Deployments" tab at the top of the screen. Then click the "Add" button.

  • In the dialog that appears, select "Upload new deployment" and click Next

  • A new dialog appears to select the WAR file. Click on "Choose File" and navigate to your NetBeans IDE "undertowjs_test" project folder. In this directory, a "dist" subdirectory should exist where the "undertowjs_test.war" WAR file is stored. Select that file and click Next.

  • On the final screen, check the settings and click Finish. I'd recommend keeping the "Runtime name" the same as "Name".

  • If everything went well, a popup should briefly appear telling you the deployment was successful.
  • WildFly's HTTP interface runs on port 8080 by default. Now you should be able to visit the endpoint created by your Nashorn script. Visit "http://localhost:8080/undertowjs_test/hello"
  • Hopefully you'll get the "Hello World" message now

  • If you get a "Not Found" message instead, double-check that you stored "undertow-scripts.conf" in the WEB-INF directory and that the file contains one line with "main.js". I've noticed WildFly will not produce errors when there are errors in the undertow-scripts.conf file. If you find a mistake, correct it and rebuild the project; then, on the WildFly Deployments tab, click the "undertowjs_test" project, click the arrow next to the name, choose "Replace", and try the URL again in the browser.

Override the server-side script directory

An interesting feature of WildFly, which I believe is only available when the server is running in standalone mode, is that you can override the path where Undertow looks for the server-side script files.

This is very handy when developing the scripts: since you can point WildFly to your project's workspace directory, you can check changes immediately after saving the file in NetBeans IDE (or any other editor). On each request, Undertow.js checks whether the file has changed; if so, the JavaScript file is automatically re-compiled by Nashorn and re-run immediately.

  • Create a new Empty File in the Web Pages' WEB-INF directory and call it "undertow-external-mounts.conf"
  • Enter the full path to the "web" subdirectory of your NetBeans project's source directory (the directory containing the "main.js" file) and save the file.

  • Re-build the WAR file and replace the deployment, see above for instructions on how to replace the previous deployment.
  • Now in NetBeans IDE, change the code a little bit and save the file. For instance, I've changed the message to a random quote from one of my all-time favorite B-movies:

            $undertow
                .onGet("/hello",
                    {headers: {"content-type": "text/plain"}},
                    [function ($exchange) {
                        return "I am Torgo. I take care of the place while the Master is away.";
                    }]);

  • When refreshing the page in the browser, you should immediately see this change, without doing any re-deployment.

For the next part I will try to come up with more interesting examples that really demonstrate the power of Undertow.js. I hope you agree that WildFly is a very interesting server and that its Undertow.js component deserves much more attention.

Host Maven repositories on a Raspberry Pi

by vincent

Posted on Sunday May 08, 2016 at 11:42PM in General

For a particular project I'm doing, it would be handy to re-use a JAR file in multiple future projects. Although there are far simpler solutions to this problem, when I read about repository managers in the official Maven documentation, I gave it some thought and decided to try running one on one of my Raspberry Pi mini-computers that was waiting for a problem to solve :-)

When hosting a Maven repository on the Raspberry Pi, besides hosting private repositories that can be accessed by other computers on my network, it can act as a proxy for Maven Central as well. This means that when a computer on my network needs a dependency from Maven Central, it will get a cached copy from the Raspberry Pi instead. If the dependency is not already in its cache, the Raspberry Pi will first download it from Maven Central (if it is configured to allow this).
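
That proxying behaviour can be sketched like this (illustrative JavaScript only, not Archiva's code; makeCachingProxy and fetchFromCentral are made-up names):

```javascript
// Sketch of a caching proxy: serve artifacts from the local cache, and only
// go out to Maven Central on a cache miss.
function makeCachingProxy(fetchFromCentral) {
    var cache = {};
    return function (artifact) {
        if (!(artifact in cache)) {
            // Cache miss: download from Maven Central (if allowed) and keep it
            cache[artifact] = fetchFromCentral(artifact);
        }
        return cache[artifact];  // cache hit: no internet traffic needed
    };
}
```

Every client on the network then benefits from the first download of any given artifact.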

Apache Archiva

Looking at the list of available Maven repository managers, Apache Archiva caught my eye, and I decided to give it a try.

Like many other Apache Software Foundation projects that I follow, Archiva is a project with light traffic and not a lot of recent releases. This is not necessarily a bad thing, though; hopefully it is a sign that the current version is generally considered stable.

What I saw when looking for more information impressed me: the web user interface looks really user-friendly and nice.

Browsing repository in Archiva

Running Archiva in Standalone mode or Servlet container?

Archiva can run in two modes:

  • Stand-alone (a script started from the console launches an embedded web server)
  • Run inside a Servlet container (Apache Tomcat) or dedicated Java EE application server (Glassfish, JBoss/Wildfly, Oracle WebLogic...)

Let's take a look at both options

Standalone mode on the Raspberry pi

At first I tried running Archiva in stand-alone mode, assuming this would be ideal for the Raspberry pi.

I had many issues along the way, which all had to do with Archiva's dependency on an outdated version of Java Service Wrapper by Tanuki Software. As I understand it, Service Wrapper is a component that lets Java applications run as Linux daemons and Windows services. Due to licensing issues, the Archiva team cannot upgrade this component to a more up-to-date version.

Blogger Ti-80 made a blog post - almost exactly three years ago - about running Archiva on the Raspberry Pi, and it seems all problems can be solved by manually building the project and replacing binaries.

I decided not to follow this route and to run it in a servlet container instead. I reasoned I'll probably want to run additional servlets on this Raspberry Pi in the future anyway, and Adam Bien convinced me that Tomcat's memory consumption is not as excessive as some people seem to think.

Running inside Servlet Container

Here are the steps I took to get this up and running. I did not follow the documentation exactly, as I have some other conventions. Feel free to disagree with me and use their instructions! Note that I use a Raspberry Pi 2 (4 cores and 1 GB of internal memory); I have not tried this on other models.

The official installation guide is here.

  • Log in with SSH to your Raspberry Pi. I used the built-in pi user (you did change its default password, didn't you?! :) )
  • Create a dedicated Linux user for running Tomcat:

    sudo -s
    adduser tomcat

    Follow the prompts, then change active user to "tomcat"

    su tomcat

  • Let's create directories and download Tomcat and Archiva

    cd ~
    mkdir downloads
    cd downloads

    Visit the Tomcat homepage and look for the download link of the .tar.gz release of the currently stable version. At the time of this writing this was 8.0.33.

    wget URL-TO-TOMCAT.tar.gz
    (replace "URL-TO-TOMCAT.tar.gz" with the download URL)
    tar xvfz ./apache-tomcat-X.Y.Z.tar.gz
    (replace "apache-tomcat-X.Y.Z.tar.gz" with downloaded file)
    mv ./apache-tomcat-X.Y.Z ~/tomcat
    (move the created directory, not the downloaded tar.gz file!, to the home directory)

  • Visit the Archiva homepage and look for the download link of the WAR release of the currently stable version. At the time of this writing this was 2.2.0

    wget URL-TO-ARCHIVA.war (replace "URL-TO-ARCHIVA.war" with the download URL)

  • I create a dedicated directory for Archiva and do not place it in the Tomcat home directory (which the Installation Guide suggests): I don't want to pollute the Tomcat home directory with Archiva-related files. This is a debatable decision, as some dedicated Tomcat configuration files will be required to run Archiva anyway.

    cd ~
    mkdir -p webapps/archiva
    cd webapps/archiva
    mkdir conf
    mkdir db
    mkdir logs
    mkdir war

    cd war
    cp ~/downloads/*archiva* ./

  • Let's create the Tomcat context configuration XML file.

    cd ~/tomcat/conf
    mkdir -p Catalina/localhost
    cd Catalina/localhost
    nano archiva.xml
    (Install nano if it could not be found: apt-get install nano)

  • Enter following code in Nano
    (Substitute the docBase path with the full path to your downloaded war file)

    <?xml version="1.0" encoding="UTF-8"?>
    <Context path="/archiva"
             docBase="/home/tomcat/webapps/archiva/war/apache-archiva-2.2.0.war">
        <Resource name="jdbc/users" auth="Container" type="javax.sql.DataSource"
                  driverClassName="org.apache.derby.jdbc.EmbeddedDriver" username="sa" password=""
                  url="jdbc:derby:/home/tomcat/webapps/archiva/db/users;create=true" />
        <Resource name="mail/Session" auth="Container" type="javax.mail.Session"
                  mail.smtp.host="localhost" />
    </Context>

    Press CTRL+X , then Y to exit.

  • A bit annoyingly, you'll need to install some dependencies in the "lib" directory of Tomcat

    You could download the Archiva .zip release and extract them from it, but you can also download the JAR files individually, which is what we'll do here. I've chosen the exact versions used by the current Archiva stable version to prevent conflicts. We may regret this later, when installing applications that require newer versions... :( .

    Check the installation guide to see if the version numbers mentioned here still match with the latest version!

    cd ~/tomcat/lib

    Visit the download page for activation-1.1 and find the "Download JAR" button. Copy the link URL.

    wget LINK-TO-ACTIVATION-1.1.JAR (replace with the full link to activation-1.1.jar)

    Visit the download page for mail-1.4 and find the "Download JAR" button. Copy the link URL.

    wget LINK-TO-MAIL-1-4.JAR (replace with the full link to mail-1.4.jar)

    Visit the download page for Apache Derby and find the "Download JAR" button. Copy the link URL (the manual states that newer versions of Derby than mentioned in the documentation should work fine).

    wget LINK-TO-DERBY.JAR (replace with full link to Derby jar file)

  • Finally, create a start script that boots Tomcat and additionally sets some required environment variables. The script name is up to you; I'll use "start-tomcat.sh" here.

    cd ~
    nano start-tomcat.sh

    Add the following content:

    export CATALINA_OPTS="-Dappserver.home=/home/tomcat/webapps/archiva -Dappserver.base=/home/tomcat/webapps/archiva"
    ~/tomcat/bin/startup.sh

    Press CTRL+X , then Y to exit.

    chmod +x ./start-tomcat.sh

  • Run the script you just created.
  • On the computer you used to log in to the Raspberry Pi, start your browser (so do not run the browser on your Raspberry Pi itself) and go to http://IP_ADDRESS_RASPBERRY_PI:8080/archiva
    (replace IP_ADDRESS_RASPBERRY_PI with the correct IP address or host name)

    If everything goes well, after a few seconds you should see a welcome screen, with a button at the top right side to create an admin user. Note that booting can take some time.

    If there are problems, you'll get a plain 404 screen. In that case you'll have to look at Tomcat's log files and try to determine what is wrong; usually something is wrong with one of the paths or dependencies:
    nano /home/tomcat/tomcat/logs/localhost.YYYY-MM-DD.log (replace YYYY-MM-DD with the current date)

  • Create the admin user and follow the prompts.
  • I may do a future blog post about configuring Archiva; I feel the default settings are good enough to get started.

Configure Maven clients to use Archiva

To get started, let's configure your desktop machine's Maven client to retrieve Maven Central dependencies exclusively from your Raspberry Pi from now on. I advise against configuring the Maven clients on your laptops this way, unless your Raspberry Pi is accessible via the internet or you have a VPN or something similar; otherwise you won't be able to fetch dependencies when your Raspberry Pi is not on your current network.

Create or edit your Maven settings file in your user directory with your favorite editor.

On modern Windows machines, this file should be located at:
c:\users\XXX\.m2\settings.xml (replace XXX with your username)

On Linux machines, this file is located at:
/home/XXX/.m2/settings.xml

Make sure the settings.xml file contains at least something like the following (if you have other entries as well, like proxies, etc., make sure to retain them):

<?xml version="1.0" encoding="UTF-8"?>
<!-- User-specific configuration for maven. Includes things that should not 
     be distributed with the pom.xml file, such as developer identity, along with 
     local settings, like proxy information. The default location for the
     settings file is ~/.m2/settings.xml -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <mirrors>
    <mirror>
      <id>archiva</id>
      <mirrorOf>central</mirrorOf>
      <url>http://RASPBERRY_IP_ADDRESS:8080/archiva/repository/internal/</url>
    </mirror>
  </mirrors>
</settings>

Of course, replace RASPBERRY_IP_ADDRESS with your Raspberry pi's IP address.

The <mirrorOf>central</mirrorOf> entry tells Maven to only use your Raspberry pi mirror for Maven Central dependencies. Refer to the Maven documentation for more mirror configuration options. Also make sure to read the corresponding Archiva chapter on this subject.

Now when you build a project, dependencies from Maven Central will be downloaded from your Raspberry Pi, which will automatically download a dependency first if it is not already in its cache.

Some final thoughts

The (micro-)SD card of your Raspberry Pi can get corrupted when files change very often. So if you add new dependencies or change versions very frequently, it's probably better to attach an external hard drive to your Raspberry Pi and make sure the Maven repositories are stored on that drive.

If after testing you don't want to use Archiva anymore, simply remove the <mirror>....</mirror> entries from your client's settings.xml and you should be fine.

To shut down Tomcat on your Raspberry Pi, you can use the standard ~/tomcat/bin/shutdown.sh script, but remember that to start the server you should use the script in your home directory; otherwise Archiva won't start because it cannot find its home directory.

On my Raspberry pi, less than 250 MB of memory was used to run Linux, Tomcat and Archiva, so I have plenty of room to run other servlets on my Raspberry pi in the future.

Use modern front-end tools in Maven

by vincent

Posted on Saturday Apr 16, 2016 at 05:41PM in Build Tools

After years of owning a static - well, very static, neglected would be a better word - personal WordPress-powered website, I decided to start creating a new custom web application that will demonstrate my various web-dev capabilities and will also serve as my personal website. I can't say I will miss PHP for even a second ;-)

The site will use various back-end and front-end technologies interchangeably; for example, parts of the back-end are written in Java 8, Groovy and - of course! - Nashorn. I would probably not do this on a typical small production application, but for a demo site I can justify this choice. Thanks to NetBeans IDE's excellent Maven support, I had no issues at all referencing Groovy classes from Java code and vice versa.

The site is under construction and not on-line yet at the time of this writing.

Front-end tools

As I chose Facebook React as one of my front-end toolkits (only some interactive pages of my site will use it; at this time I won't use it globally for the whole site), with JSX to define views, I had to use modern front-end build tools to build the front-end.

Various tools exist to automate build tasks, such as calling Babel to convert JavaScript with JSX to plain JavaScript files. I also chose to do dependency management with external tools. My choices have been:

  • Node.js and NPM (for installing and running dependencies needed by the build tools)
  • Bower (front-end dependency management)
  • Babel (to convert JSX to plain JS code)
  • Gulp.js (task-runner to automate the building and packaging)

Integrating front-end building tools with Maven

For this project I chose Apache Maven for building the back-end of the application. I know a lot of people don't really like Maven, but since I could follow the Maven conventions quite well, I actually had a very pleasant time creating and using the project's pom.xml file.

At this time the back-end is tightly coupled to the front-end: I wrote my HTML templates in Thymeleaf where applicable, but the corresponding JavaScript and CSS files are stored in a separate "static" directory. This is not ideal, but not really a big deal in my application. Also, I use Nashorn to compile some JavaScript files of the front-end code, so the back-end code needs access to the front-end code in this application anyway.

Note to self: Once the site is up and running, I want to experiment with ditching the Thymeleaf templates completely and let Nashorn create the whole HTML dynamically using the static JS files (as mentioned above, I already let Nashorn generate HTML for AJAX content on the back-end using the same code as the front-end uses, so I already did some initial work on this).
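
As a concrete (hypothetical) example of the kind of shared rendering code mentioned above: a render function written in plain JavaScript, with no DOM or Java dependencies, can be evaluated both by Nashorn on the server and by the browser on the client (the function name and data shape below are made up):

```javascript
// Hypothetical shared rendering function (names and data shape invented):
// pure JavaScript, so the same file can be loaded into Nashorn on the
// back-end and into the browser on the front-end.
function renderQuoteList(quotes) {
    return "<ul>" + quotes.map(function (q) {
        return "<li>" + q.text + "</li>";
    }).join("") + "</ul>";
}
```

Keeping such functions free of environment-specific APIs is what makes the "same code on both ends" approach work.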

As it is right now, it made sense to make building the front-end part of building the whole project.

Some older discussions on StackOverflow suggested this had to be done manually by calling scripts via Maven. Some developers use Ant tasks for this (which I planned to do as well; I think Ant tasks are really suited for this kind of work, and they can easily be called by Maven as part of the build process).

Introducing the "frontend-maven-plugin" Maven plugin

However, after some more googling, I came across the frontend-maven-plugin by Eirik Sletteberg. This plugin has a lot of tricks up its sleeve. Features include:

  • Installing Node.js and NPM locally for the built project only.

    This installation won't interfere with a Node.js/NPM that is already installed globally on the system, and is only used to install the dependencies and execute the configured tools. This should work on modern Windows, OS X and most popular Linux systems.

  • Execute "npm install" to install the dependencies required for building the front-end
  • Execute "bower install" to install the dependencies required by the front-end (Bootstrap, or in my case Zurb Foundation 6, FontAwesome, etc.)
  • Execute "gulp build" to run the various build tasks (as described above, in my case this will call Babel to convert JSX to JS). Note that the plugin supports Grunt as well.
  • Execute front-end unit-tests using Karma (I have not tried this feature yet, but I intend to try Karma soon)
  • Execute WebPack module bundler (I have not yet tried this)

Configuring frontend-maven-plugin in the pom.xml file

Adding those tasks to pom.xml was easy, the examples on the website were simple and easy to follow.

I made up my own directory convention: I chose to create a new "frontend" directory in my project's "src" directory to store all front-end-related files. That directory contains the package.json (for NPM), gulpfile.js and bower.json files. I let Gulp create a "dist" directory here that contains the built files.

The directory structure looks like this:

NetBeans screenshot of my project's directory structure. All front-end files are stored in the 'frontend' directory of the standard 'src' directory

Once built, I let Maven copy the full content of the "dist" directory to the project's resources/static directory as part of the project's build process, using the standard maven-resources-plugin.

The "node_modules", "bower_components" and "dist" directories (created by the different tools) are also stored in the frontend directory. I chose to also store the Node.js and NPM installation here, in a directory that I called "node_maven". I made sure all those directories are ignored by my version control system; that's why they are colored gray in the screenshot above.

Here are the relevant entries in my pom.xml file:

                <plugin>
                    <groupId>com.github.eirslett</groupId>
                    <artifactId>frontend-maven-plugin</artifactId>
                    <version>1.0</version> <!-- Use the latest released version -->
                    <configuration><workingDirectory>src/frontend</workingDirectory></configuration>
                    <executions>
                        <execution><id>install node and npm</id><goals><goal>install-node-and-npm</goal></goals></execution> <!-- also set a nodeVersion -->
                        <execution><id>npm install</id><goals><goal>npm</goal></goals></execution>
                        <execution><id>bower install</id><goals><goal>bower</goal></goals></execution>
                        <execution><id>gulp build</id><goals><goal>gulp</goal></goals>
                            <configuration><arguments>build</arguments></configuration></execution>
                    </executions>
                </plugin>
                <!-- plus a maven-resources-plugin execution with id "copy gulp dist files to static" -->


I also use the maven-clean-plugin to clean the generated directories.

Final thoughts

There are risks in using a plugin like this. For example, when Node.js or NPM change their download URLs, the plugin will have to be updated. Also, it's hard to guess when new tools will arrive that become more popular than Gulp, Bower, etc. Luckily the plugin is open-source, so anybody can change the code and hopefully adapt it to new situations.

Another, probably more serious, concern is that using tools like this on every build will slow down the build noticeably. From what I understand, the plugin has built-in support for incremental builds when using Eclipse, but I'm not sure how to do this with NetBeans IDE at this time.

Finally, in this age of micro-services it makes a lot of sense to completely separate front-end and back-end projects from each other.

Running Nashorn scripts in Ant build scripts: The basics

by vincent

Posted on Tuesday Dec 01, 2015 at 09:01AM in Build Tools

When I start a Java project that I think can conform to the conventions of Apache Maven, I use Maven to create the build scripts. Nowadays this is the case for the majority of my web projects.

Occasionally though, there are cases where I want full control over every step and/or want to do a lot of exotic steps. In those (rare) cases, I still manually create Apache Ant XML build scripts. Ant has so many built-in types of tasks, most of which are easy to use. Unlike many other developers I know, I quite like Ant, especially when also using Apache Ivy for dependency management.

Checking out Gradle, by Gradle, Inc., the new popular choice for building JVM projects, is another high entry on my ever-growing to-do list. Gotta love a tool that has a cursing elephant as a mascot to illustrate the usual build frustrations ;-) . I will definitely check Gradle out soon.

As probably well known by now, I am a huge Oracle Nashorn fan. When I started working on my latest project, I figured it could be handy to run Nashorn scripts inside Ant scripts and found multiple solutions that I'd like to discuss. For now let's start with the most simple method.

Script Task

Nashorn is fully compatible with JSR-223 (the "Scripting for the Java Platform" standard), which Ant supports. Several enhancements were made to this task in Ant 1.7, so I assume you use a somewhat recent Ant version.

To simply run Nashorn scripts as part of an Ant target, you can use the <script> task. You can embed the script directly in the build.xml file (yuck!) or point it to an external file containing the script.

A simple build.xml example that embeds the script in the XML file (something I'd never do in production!):

<?xml version="1.0" encoding="utf-8"?>
<project name="VincentsProject" basedir="." default="main">

	<property name="message" value="Hello!"/>
	<target name="main">
		<script language="nashorn"> <![CDATA[
			project.log("Logged from script", project.MSG_INFO);
			print(message);
			print(project.getProperty("message"));
			print(VincentsProject.getProperty("message"));
		]]> </script>
	</target>
</project>

You can run the script by saving the above text in a "build.xml" file, switching to a terminal window (Command Prompt on Windows) and running "ant". The output should be something like:

Console output example

Consult Ant's Script task documentation, and note a few things here:

  • It would have been much, much better to store the script in an external file in a subdirectory and add an attribute like src="./build_scripts/script1.js" to your <script> element.

  • Ant makes all defined properties available to the external script. That's why the
    print(message), print(project.getProperty("message")) and print(VincentsProject.getProperty("message"))
    lines all work. You can disable this behavior by adding the setbeans="false" attribute to the <script> element; in that case only the "project" and "self" variables would have been passed to the script, and "VincentsProject" and "message" would not have been available.

  • You could use the language="javascript" attribute in the <script> element, but this would run Rhino on Java versions 1.6 and 1.7. It is very likely your Nashorn JavaScript script will not be compatible with Rhino, so I'd recommend using the language="nashorn" attribute, unless you are sure your script is compatible with both, or your project requires Java 1.8 or later anyway.

  • Don't worry about the "manager" attribute; its default value "auto" should be fine. Or set it to "javax" if you're a purist. From what I understand, Apache BSF was an older scripting standard that predates and inspired the JSR-223 (javax.script) specification that Nashorn implements.

  • When your Nashorn script uses external Java libraries, you can add a "<classpath>" element to the <script> element, like: <classpath><fileset dir="lib" includes="*.jar" /></classpath>

  • You can also add a classpath by defining a "<path>" element in your build file under "<project>" element; and refer to this inside your script element by using the "classref" attribute. See Ant documentation for more help. Use this construction if you need the same classpath specification for multiple tasks.

  • One thing that does not seem possible is passing parameters to the script engine factory. As far as I know, it's therefore not possible to use Nashorn's shell-scripting capabilities when using this Script task.
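Putting some of the points above together, a minimal build file could look like the sketch below. The file names, the property value and the lib directory are just assumptions for illustration:

```xml
<project name="VincentsProject" default="run" basedir=".">
    <property name="message" value="Hello from Ant"/>

    <target name="run">
        <!-- language="nashorn" forces the Nashorn engine on Java 8+.
             Adding setbeans="false" here would hide the project
             properties from the script. -->
        <script language="nashorn" src="./build_scripts/script1.js">
            <!-- External JARs the script needs, as discussed above -->
            <classpath>
                <fileset dir="lib" includes="*.jar"/>
            </classpath>
        </script>
    </target>
</project>
```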

Note that you can do some funky stuff: you can implement a full Ant task that takes file sets as a parameter. I'll rewrite one of the examples from the Ant documentation in Nashorn and post it here.

In a future post about this subject, I will cover the Scriptdef task, which adds some interesting features.

Why the JVM is my favorite platform

by vincent

Posted on Wednesday Nov 18, 2015 at 12:27AM in General

There's currently a lot being written about the somewhat uncertain future of Java. Actually, there's so much being written about this (be sure to click on that last link), that I don't have much to add to this discussion.

Still, I'd like to describe here why the Java platform is my favorite development platform and I will keep on using it on my next projects for the foreseeable future.

Building everything in only one language is a logical, but not necessarily the best choice.

I have changed jobs a few times over the last decade and each one required me to learn a new language.

Having worked with both static and dynamic languages (learning a pure functional language is high on my wish-list) for several years, on different platforms (Windows and Linux mostly), I see advantages and disadvantages for each language and platform choice.

More and more companies permit their development teams to use the tools that the team feels are right for the job. Well-known examples of companies that let their teams use different languages for different projects are Netflix, LinkedIn, Spotify, Last.FM, etc. I welcome this change and hope more companies will follow eventually.

In my country, The Netherlands, I don't really see this shift happening at the moment, but my guess is it will be the future of development.

JVM: mix and match languages, libraries and even operating systems

The JVM makes it easier (and even fun) to use different languages/operating systems, and still be compatible with many libraries and other development tools.

To me, this is one of the highlights of the platform. Embedding Python scripts in pure Java projects thanks to Jython, using pure Java libraries from JavaScript with Nashorn, or writing compact unit-test code in Groovy for my Java code is a lot of fun and, in my experience, boosts productivity considerably.

Personally, I develop many of my projects on a Windows machine and deploy the very same JAR file, containing all the dependencies I have tested on my machine, to a Linux server. Here, too, Java developers have more freedom than users of most other languages.

Another advantage of having multiple languages is this: I'm sure many new Java 8 features were inspired by the dynamic JVM Groovy language. Having more languages from different vendors/teams on one platform is a very positive thing; it keeps everyone awake.

Modern feature-set

Other highlights:

  • Java is focused on performance. Being able to run threads on different CPU cores is very important to me. We have multi-core CPUs for a very good reason.
  • Thanks to Java 8's Streams API, it's never been easier to make use of multiple cores with minimal code changes. It must be mentioned that Java 7 added great multi-threading features (such as the Fork/Join framework) as well.
  • JVM developers have so much choice: application servers, databases (even natively written in Java), enterprise application frameworks, mature web frameworks, ORM, etc.
  • Very good development tools. Again a lot of choice, like NetBeans, IntelliJ, Eclipse, JDeveloper IDEs.
  • Memory consumption and boot time can still be a problem, but Java 9 is expected to improve this as part of the Jigsaw modularity project.
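As a small illustration of the Streams point above: turning a sequential computation into a parallel one is often a one-word change. A minimal sketch (class and method names are my own):

```java
import java.util.stream.LongStream;

public class ParallelSum {
    static long sum(long n) {
        // Adding .parallel() (or using parallelStream() on a collection)
        // is often the only change needed to spread the work over all
        // available CPU cores; remove it for single-threaded execution.
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(sum(1_000_000)); // 500000500000
    }
}
```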

Do something back...

I think there's something that we Java developers can do to help keep the Java community healthy. If your company is making money and runs on many different open-source projects, consider giving back to at least one of those projects (create patches to fix bugs, improve documentation...). At the very least, blog actively about your usage of those projects.

If your company can afford it, instead of automatically using the open-source GlassFish enterprise application server, consider reviewing the commercial Oracle WebLogic server. Oracle is a commercial company and probably wouldn't mind making money on its Java products.

It's probably wishful thinking on my part, I know it's much easier said than done.

Do not forget that the JVM is also not the right tool for everything

While my blog is called JVM Fanboy, I have a lot of interest in other programming languages as well. As a developer you should never put all your eggs in one basket.

Node.js, for example, is simply incredible. I also work with Python daily and enjoy that a lot as well. Those languages get more than enough press time, which is why I decided to blog exclusively about JVM-related stuff. But that does not mean those other popular languages don't deserve their popularity.

Use QUnit to unit-test stand-alone Nashorn scripts

by vincent

Posted on Sunday Nov 01, 2015 at 05:09PM in Nashorn

I've been writing a lot of standalone Oracle Nashorn scripts lately. I'm just so productive when writing server-side JavaScript code, especially with the versatility of the Java run-time class library at my fingertips as well!

As I said in my previous post, IDE and tools integration currently leaves something to be desired for developers creating standalone Nashorn scripts. I had a small Twitter conversation that seemed to confirm this. NetBeans IDE can run Nashorn scripts when not using projects, but there is no dedicated project type for standalone Nashorn projects yet.

As things stand, I still write my Nashorn scripts in a text editor and do without the Java tools that I usually would have (auto-complete, integrated debugger, unit-test integration, running the project inside the IDE...). I've since learned that other people write Java code to start and debug stand-alone Nashorn scripts (NetBeans IDE has an awesome feature where it can seamlessly debug JavaScript code invoked from Java code) and write Java wrappers around their JS unit-tests. I would do this too on bigger projects, but as my current Nashorn projects are relatively small, I chose not to follow that route just yet.

Using QUnit with Nashorn

I am a supporter of TDD (test-driven development), so I was looking for an easy-to-use JavaScript-based unit testing library that is compatible with Nashorn.

After some Googling, I found some blog posts about Nashorn and unit-testing. Most of them covered what I just mentioned: wrapping Nashorn scripts in Java code. I found one interesting StackOverflow post where Christian Michon asked exactly what I was looking for. In that post, Sirko suggested using QUnit (by the jQuery Foundation team), since it can use callbacks and can therefore work in any JavaScript environment.

I could not get the posted code to work with the latest release versions of Nashorn and QUnit. After some modifications I got it working and decided to post it here, perhaps it is useful for someone else.

I will evaluate other JavaScript unit-testing frameworks later (Jasmine in particular caught my eye), but for now QUnit seems to work fine with my test scripts.

QUnit callbacks

QUnit has an API where callbacks can be registered to report the results of single tests and/or complete test suites. Using these callbacks, it's very easy to let QUnit use Nashorn's print() function to print to the console. For my projects that is currently sufficient, as I do not use CI (Continuous Integration) servers for my Nashorn projects at this time and manually run the tests from the command-line (I'd not recommend this work-flow for bigger projects, of course).


Here's how I got QUnit working with Nashorn:

I could only get it to work by using the JVM-NPM project, which implements the well-known require() function from Node.js on JVM JavaScript engines (Nashorn, Rhino and the sadly seemingly inactive open-source DynJS effort).

  • In your project directory, create a "tests" directory and a tests/include directory.
  • Download the latest version of QUnit from the QUnit website (qunit-1.20.0.js at the time of this writing). Only the .js file is needed; the .css file will not be used. Place the JavaScript file in the tests/include/ directory.
  • From a command-line window, navigate to a temporary downloads directory and run "git clone" with the jvm-npm repository URL (this requires Git, but I'm betting pretty much all developers have it installed on their systems these days).
  • Copy the jvm-npm/src/main/javascript/jvm-npm.js file to your tests/include directory
  • Create a Nashorn JavaScript library that loads and sets up QUnit to work with Nashorn. The code below is largely based on the mentioned StackOverflow post:
load("include/jvm-npm.js");

var QUnitModule = require("include/qunit-1.20.0.js");

QUnit = QUnitModule.QUnit;

QUnit.log(function(details) {
    if (!details.result) {
        var message = "\tFAIL " + details.source;
        message += " actual: " + details.actual + " <> expected: " + details.expected;
        print(message);
    }
});

QUnit.done(function(details) {
    print("Tests completed in " + details.runtime + "ms");
    print(details.passed + " passed, " + details.failed + " failed, " + details.total + " total");
});

Store this file in your tests/include directory and call it "qunit-nashorn.js". It's a bit hackish, but it loads QUnit, sets up the callbacks and makes QUnit available as a global object.

Now creating test scripts is the relatively easy part. Here's some dummy code that you could place in the tests directory and call "dummytest.js":


load("include/qunit-nashorn.js");

var testMethod = function(b) {
    return b;
};

QUnit.test("testOK1", function(a) {
    var res = testMethod(true);
    a.equal(res, true);
});

QUnit.test("testOK2", function(a) {
    var res = testMethod(false);
    a.equal(res, false);
});

QUnit.test("testFails1", function(a) {
    var res = testMethod(false);
    a.equal(res, true);
});

QUnit.test("testFails2", function(a) {
    var res = testMethod(true);
    a.equal(res, false);
});

Running "jjs dummytest.js" from your tests directory should result in output like:

Nashorn console output example

Oracle Nashorn + Oracle NoSQL: First steps

by vincent

Posted on Tuesday Oct 20, 2015 at 11:12PM in NoSQL Database

Learning to work with Oracle NoSQL was high on my to-do list. I always figured Oracle Nashorn (the new JavaScript engine that runs on the JVM and shipped for the first time with Java 8) is an ideal language for learning and trying out JVM-based APIs, mainly because of the interactivity of the JavaScript language. So I thought this would be a good opportunity to test the Oracle NoSQL API with Nashorn.

I was absolutely not disappointed. Nashorn is, in my opinion, an excellent tool for learning new APIs.

About Oracle Nashorn

First of all, I am a huge Nashorn fan. Although not as fast as Node.js (which is powered by Google's mighty V8 engine), bringing the flexibility of a modern JavaScript implementation to the JVM world was a very good idea. I love that it is part of Java 8 SE, so every Java developer that uses Oracle-based JREs or JDKs has full access to it.

The Nashorn team added extensions to the JavaScript language so that JavaScript code can make full use of existing JVM classes. Also, Nashorn is compatible with the existing JSR-223 "Scripting for the Java Platform" standard, which means the Nashorn engine can be embedded into any JVM program. Using this technique, you can embed JavaScript code in your Java 8 (and up) projects that communicates with your Java objects while Nashorn executes the JavaScript code dynamically. Mind-blowing, IMO... :-)
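A minimal sketch of that JSR-223 embedding is shown below. The class name and the "who" binding are my own; note that Nashorn ships with JDK 8 through 14, so on newer JDKs the engine lookup returns null:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornHello {
    public static void main(String[] args) throws ScriptException {
        // Look up the Nashorn engine via the standard JSR-223 API
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        if (engine == null) {
            System.out.println("Nashorn engine not available");
            return;
        }
        // Java objects put into the engine's scope become JS globals
        engine.put("who", "JVM");
        System.out.println(engine.eval("'Hello, ' + who + '!'"));
    }
}
```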

Read the full article for much more background information and sample code

Read More

Apache Derby part 3: Stored Procedures

by vincent

Posted on Tuesday Aug 04, 2015 at 12:54AM in Relational Database

In the previous part, we created a small database to store movie titles and the media type. In this part we will create and load a Stored Procedure into that database to make it really easy to add movies using the "ij" command-line tool with one simple statement.

First a warning: most dedicated DBAs will probably hate you when you ask them to enable Java stored procedure support in their database. I don't think it's really considered common practice, but many enterprise-grade database systems offer stored procedures written in Java. I can think of many reasons not to do this, but there are probably cases where it can be a good solution. Since Derby is a database management system implemented fully in Java anyway, it makes a bit more sense here. There's a nice article on the official Oracle Nashorn team blog about this subject.

We will create a simple stored procedure that can be used to easily add a movie using a single CALL AddMovie('Movie Title', 'Media Name') statement. It will simply look up the media record by name, then create the specified movie record, saving you from looking up the media id manually or using a subquery. I would never recommend using stored procedures like this in an application (for performance reasons: applications should look up the id of the media once and cache it), but if I planned to add many records manually using SQL and a command-line tool like "ij", I'd seriously consider adding a utility stored procedure like this for convenience. That is, if the table contained more fields :)
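The full article has the complete code; the general shape of registering a Java stored procedure in Derby looks like this (the class and method names here are placeholders of my own):

```sql
-- A public static Java method on the classpath...
--   public static void addMovie(String movieTitle, String mediaName) throws SQLException
-- ...is registered as a Derby stored procedure with DDL like:
CREATE PROCEDURE AddMovie(IN movieTitle VARCHAR(255), IN mediaName VARCHAR(255))
LANGUAGE JAVA
PARAMETER STYLE JAVA
MODIFIES SQL DATA
EXTERNAL NAME 'com.example.procs.MovieProcs.addMovie';

-- After which it can be called from ij:
-- CALL AddMovie('Sharknado 2', 'Bluray');
```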

Read the long full article for all sample code

Read More

Apache Derby part 2: Getting started

by vincent

Posted on Tuesday Jul 21, 2015 at 12:07AM in Relational Database

In the first part of this series, I've introduced the Apache Derby database management system. Let's now download Derby and get it up and running by creating a small database.

I am a dedicated NetBeans IDE user (in fact, I am a registered NetBeans project contributor, thanks to my involvement with the NBPython project). I'll be using NetBeans throughout the rest of the series, but in this post I'll only be using the interactive command-line tool "ij".

Note that I assume the reader has basic Java and SQL skills; this is not a Java/SQL tutorial.

Downloading Derby

Go to and choose "Download". At the time of this writing (August 26, 2014) is the latest Official Released Release. Click on the version, then download the bin distrubution. In my case, the filename was

Setting up Derby

Unzip the ZIP file into a directory of your choice. If you are using Windows, preferably do not put it in the Program Files directory, unless you know how to do this with administrator rights. In the examples below, I chose to unzip the ZIP file into my c:\Java directory (I am writing this on a Windows machine).

Open a Command Prompt window on Windows (a terminal on Linux) and change the current directory to the unzipped directory. Derby's manual states that setting the DERBY_HOME environment variable is required. This is also a good moment to check whether Java is installed correctly on your machine.

On my Windows machine:

cd \Java\db-derby-
set DERBY_HOME=\Java\db-derby-
java -version

If the last command produced a Java version number of 1.7 or higher, you're good to go. Otherwise you'll have to install or re-install Java on your machine.

On Linux, you'll do something like this (assuming you have a "java" directory in your home directory and you unzipped the ZIP file there; substitute your username for USER):

cd ~/java/db-derby-
export DERBY_HOME=/home/USER/java/db-derby-
java -version

If you plan to use the Derby command-line tools extensively, you'll want to add the bin directory to your path and ensure the DERBY_HOME environment variable is set on each boot. That is out of scope for this tutorial, IMO.

Now create a directory to store the databases. I'd recommend not storing them inside the db-derby- directory itself, to make upgrading your version of Derby easier.

I have created a "derby_databases" directory in my java directory.

Windows (Linux users will know how to do this, I assume) :)
mkdir \java\derby_databases

Running ij and creating our first database in embedded mode

If we set up everything correctly, we should now be able to start "ij", the interactive Derby CLI utility.

Windows: (Linux users, start "./ij")

cd \java\db-derby-\bin
ij

A simple "ij version 10.11" screen should appear with ij> prompt. Type "help;" here - do not forget the semicolumn ; - and press enter

ij version 10.11
ij> help;
 Supported commands include:

  PROTOCOL 'JDBC protocol' [ AS ident ];
                               -- sets a default or named protocol
  DRIVER 'class for driver';   -- loads the named class
  CONNECT 'url for database' [ PROTOCOL namedProtocol ] [ AS connectionName ];
                               -- connects to database URL
                               -- and may assign identifier
  SET CONNECTION connectionName; -- switches to the specified connection
  SHOW CONNECTIONS;            -- lists all connections
  AUTOCOMMIT [ ON | OFF ];     -- sets autocommit mode for the connection
  DISCONNECT [ CURRENT | connectionName | ALL ];
                               -- drop current, named, or all connections;
                               -- the default is CURRENT

  SHOW SCHEMAS;                -- lists all schemas in the current database
  SHOW { TABLES | VIEWS | PROCEDURES | FUNCTIONS | SYNONYMS } [ IN schema ];
                               -- lists tables, views, procedures, functions or synonyms
  SHOW INDEXES { IN schema | FROM table };
                               -- lists indexes in a schema, or for a table
  SHOW ROLES;                  -- lists all defined roles in the database, sorted
  SHOW ENABLED_ROLES;          -- lists the enabled roles for the current
                               -- connection (to see current role use
                               -- VALUES CURRENT_ROLE), sorted
  SHOW SETTABLE_ROLES;         -- lists the roles which can be set for the
                               -- current connection, sorted
  DESCRIBE name;               -- lists columns in the named table
--- cut for brevity ----

Now let's create our first database. Substitute c:\java\derby_databases\ with the directory you created to store your databases.

ij> CONNECT 'jdbc:derby:c:\java\derby_databases\derby_test;create=true';

If you somehow lost your connection while doing the exercises below and you want to connect to your previously created database, simply remove the create=true part from the connection string above to re-open the existing database.


Unlike some other embeddable databases, such as SQLite, Derby does not store a database in one file. Rather, it creates a subdirectory with the database name and stores all related files in that directory. It is very important that the files in that directory are NEVER edited; with a single edit you can completely corrupt your database.

Let's find out which schemas are created by default and which tables are added to the default "APP" schema.

ij> show schemas;
TABLE_SCHEM
APP
NULLID
SQLJ
SYS
SYSCAT
SYSCS_DIAG
SYSCS_UTIL
SYSFUN
SYSIBM
SYSPROC
SYSSTAT

11 rows selected

ij> show tables in APP;
TABLE_SCHEM         |TABLE_NAME                    |REMARKS

0 rows selected

The schemas starting with SYS* are for internal use. The "APP" schema is the one we will be using for our first table. As you can see, Derby did not create any table in this schema. Let's create an extremely simplistic database to store your DVD/Bluray collection.

ij> set schema "APP";
0 rows inserted/updated/deleted

0 rows inserted/updated/deleted

ij> CREATE TABLE movie (
name VARCHAR(255) NOT NULL,  
media INT, 
0 rows inserted/updated/deleted

TABLE_SCHEM         |TABLE_NAME                    |REMARKS
APP                 |MEDIA                         |
APP                 |MOVIE                         |
2 rows selected

Note that although we used lowercase names, all names have been converted to UPPERCASE. This happened because we did not put the names between quotes. The SQL standard dictates that unquoted names are always converted to uppercase, but many modern databases relax this rule a bit (or can be configured that way). In Derby this is, as far as I am aware, not configurable. If you do not want table, column, schema, constraint etc. names to be stored in uppercase, you will always have to put each and every name between double quotes ("media" or "Media"). Beware that unquoted names will then no longer work, so Derby is case-sensitive only when quotes are used; in all other cases it converts the name to uppercase.
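A quick illustration of these quoting rules, using a hypothetical "Person" table purely for demonstration:

```sql
CREATE TABLE Person (id INT);     -- stored as PERSON
CREATE TABLE "Movie2" (id INT);   -- stored as Movie2, case preserved

SELECT * FROM person;             -- works: person is uppercased to PERSON
SELECT * FROM "Movie2";           -- works: quoted name matches exactly
SELECT * FROM Movie2;             -- fails: table MOVIE2 does not exist
```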

Finally let's add some records to our database.

ij> INSERT INTO media (name) VALUES ('DVD');
1 row inserted/updated/deleted
ij> INSERT INTO media (name) VALUES ('Bluray');
1 row inserted/updated/deleted
ij> SELECT * FROM media;
ID         |NAME
1          |DVD
2          |Bluray

2 rows selected

ij> INSERT INTO movie (name, media) VALUES ('Sharknado', 2);
1 row inserted/updated/deleted
ij> INSERT INTO movie (name, media) VALUES ('Kung Fury', 1);
1 row inserted/updated/deleted
ij> INSERT INTO movie (name, media) VALUES ('Mega Shark vs Giant Octopus', 1);
1 row inserted/updated/deleted
ij> SELECT * FROM movie;
ID         |NAME                          |MEDIA
1          |Sharknado                     |2
2          |Kung Fury                     |1
3          |Mega Shark vs Giant Octopus   |1

3 rows selected

ij> SELECT movie.id, movie.name, media.name AS "media" FROM movie JOIN media ON movie.media = media.id ORDER BY movie.name;
ID         |NAME                          |media
2          |Kung Fury                     |DVD
3          |Mega Shark vs Giant Octopus   |DVD
1          |Sharknado                     |Bluray

3 rows selected


Note that in the last query's output, the third column header is displayed in lowercase, because the alias name "media" was put between double quotes.

To be continued

Now that we have Derby up and running, let's continue with the fun stuff: configuring NetBeans and creating our first stored procedure in Java.

Apache Derby part 1: Introduction

by vincent

Posted on Tuesday Jul 07, 2015 at 12:08AM in Relational Database

Apache Derby part 1: Introduction

This will be a series of blog posts about Apache Derby, an open-source "traditional" relational database management system, fully implemented in Java.

I've been a happy Derby user for a few years now (in fact, this blog is powered by a Derby database) and I'd like to share my enthusiasm. As I don't see Derby mentioned a lot in recent blogs, articles etc., I thought maybe this article will be of some help for people who are looking for more information. I've used Derby successfully in several smaller-scale web applications.

This first part of the series is a generic introduction; the following parts will be much more technical and actually contain code :)


The early years

The history of Derby is widely documented.

In 1997 a start-up called Cloudscape Inc released a commercial database management system called Cloudscape (initially it was released with the generic "JDBMS" name). Two years later Informix Software acquired Cloudscape. In 2001 IBM acquired their database assets and renamed it to IBM Cloudscape.

IBM donated the code to the Apache Software Foundation in 2004 and one year later it became an open-source sub-project of Apache's DB top-level project. IBM continued to sell IBM Cloudscape by taking a snapshot of the then-current Apache Derby version, adding installers, offering support contracts and adding proprietary components that provided ODBC and Microsoft MDAC compatibility, among other things. IBM retired the IBM Cloudscape product in 2007. The additional proprietary components are no longer available.

Sun/Oracle JavaDB

Sun joined the project when the code was donated to the Apache Software Foundation. They took a snapshot of the then-current Apache Derby version, added it to the Java Developers Kit (JDK) and called it the "JavaDB" component. They did not modify one single line of code, the project name is still Apache Derby, but Sun/Oracle call it JavaDB in their documentation. So every user of a recent JDK has access to a full copy of Apache Derby. JavaDB was Derby 10.6 for Sun's JDK 6, 10.8 for Oracle's JDK 7 and 10.11 for JDK 8.

While JavaDB is very convenient for trying out Apache Derby, I'd normally recommend using the latest available Derby version for production use (the latest release at the time of this writing was published on the 27th of August 2014).

Why choose Apache Derby?

Apache Derby was not created to replace generic, full enterprise-ready DBMSes like Oracle Database, IBM DB2, PostgreSQL, etc. Derby was, in my opinion, also not envisioned with "big data" applications in mind. One of the reasons is that Derby can store its database on one logical disk only (unless RAID is used...). Also, LOB fields are 'limited' to 2GB each.

An old 2005 presentation sheet from the IBM website about IBM Cloudscape (that was still on-line at the time of this writing) suggests that Derby should be used when the total stored data is expected to be 50GB or less. Note that this is not a documented maximum, there does not seem to be a hard-coded maximum database size (apart from the mentioned 2 GB per LOB field). I should note many of the shortcomings mentioned in the presentation sheet have been fixed by the Derby developers in the last 10 years.

If your applications can live without storing terabytes worth of data and serving hundreds of users concurrently, here are some of the highlights that I am aware of:

  • Derby can run in two modes: Client/Server and Embedded mode, more on this below.

  • The engine makes full use of the JVM's powerful multi-threading features. I'd call the performance very good (but remember the name of this blog; I'm probably a bit biased on JVM performance). I'd like to create some proper benchmarks, but I'll have to research how to do this carefully.

  • Stored Procedures can be written in any JVM language that can compile classes that implement Interfaces. Those classes are then loaded by Derby and can be triggered using SQL or JDBC code.

  • You can create Derby-style table functions by creating custom classes that implement the standard JDBC "ResultSet" interface. Those classes are loaded into Derby and are available to SQL queries like normal, but read-only, Derby database tables. This is a great feature for importing foreign data from external text/XML files, other databases (perhaps even NoSQL(!) databases), etc. The user can use the full SQL syntax to filter the data dynamically returned by the object. This is very powerful stuff. "Yes SQL", please! :)

  • A more optimized version of table functions is also available; the class must additionally implement one of Derby's own interfaces. Such classes will be much quicker when, for example, a WHERE clause is used, but give up some flexibility, as the columns and their data types must be declared beforehand.

  • Custom data types are supported, but classes must implement the standard Serializable, or, preferably, the Externalizable interface, so this will in many cases not work on your classes without some additional work.

  • In Embedded mode, Derby can load JAR files stored in the database as if they were on the user's ClassPath.

  • Temporary in-memory only databases can easily be created. Those databases are removed from memory when disconnecting or shutting down the server. Those are supported both in Embedded and in Server/Client mode.

  • The newest version contains (currently experimental) built-in support for the Apache Lucene high-performance search indexer project.

  • SQL support is getting better and better with each release. New is the MERGE statement, which INSERTs, UPDATEs and DELETEs records with one single statement.

  • Derby comes with built-in support for the Java Management Extensions (JMX) , to remotely monitor the Derby processes in real-time.

  • For testing and development purposes only, Derby comes with a simple Servlet web-application that can be used to perform some common administration tasks.

Embedded / Client/Server Modes

Embedded Mode 

Derby comes with a JDBC driver that contains the complete database engine embedded in it. The JAR file containing this driver is only about 2.5 MB (yes, still a lot bigger than the public-domain SQLite written in C, but still very acceptable). In Embedded Mode, applications store their databases on the local file-system, or in temporary in-memory only databases. Databases are available to a single JVM instance only, but can still accept multiple connections and fully support multi-threading (but, as mentioned, they are not accessible from other JVM instances).

As mentioned in the previous section, in this mode Derby can load JAR files stored in the database as if they were on the user's classpath. This, like stored procedures written in Java, requires some extremely careful planning on the developer's side, but could probably be useful in some specific situations. Also, the driver has some additional features to load read-only databases directly from local JAR files that are available on the user's classpath.

Client/Server Mode

In Client/Server Mode, a Derby Network Server runs on a server and is available via a standard TCP/IP port. A specific Derby Network JDBC Driver is then used to connect clients to the databases. In this mode, databases are available to any user that can connect to them, from any JVM instance.
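The difference between the modes shows up mainly in the JDBC connection URL. The paths and database names below are my own examples (1527 is the Network Server's default port):

```
ij> CONNECT 'jdbc:derby:/home/USER/java/derby_databases/derby_test';
-- embedded: database on the local file-system, this JVM only

ij> CONNECT 'jdbc:derby:memory:scratchdb;create=true';
-- embedded: temporary in-memory only database

ij> CONNECT 'jdbc:derby://localhost:1527/derby_test';
-- client/server: via the Derby Network Server
```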

Several user authentication options are available, including storing names/passwords in a simple plain-text Java properties file, using an LDAP server, or storing usernames and passwords directly in the Derby database. User authorization can be controlled per database by standard SQL GRANT/REVOKE commands, or users can be grouped into either READ/WRITE or READ-ONLY groups. The authorization features could be improved; from what I understand, any user with server access can shut down the Network Server with a single command... :(

A standard Java Security Manager can be used to restrict the rights of the database server.

Derby in use

  • Derby only supports the JDBC standard to let clients connect to its databases. Many non-JVM programming languages, like Node.js and Python, have libraries that provide basic JDBC support.

  • As mentioned, Derby's SQL support is strong. It is quite strict, though. If you, like me, are familiar with default PostgreSQL setups, it will take some time to get used to putting every schema, table and column name in your SQL queries between double quotes, because otherwise they are automatically stored in UPPERCASE format. Once you use quoted names, working with unquoted names no longer works ("SELECT * FROM person" does not work when the table name is "Person"). This restriction is part of the official SQL standard, but most DBMSes are more forgiving or can be configured to be completely case-insensitive. As far as I know this is not possible with Derby. I believe, however, that being forced to write strict SQL queries that adhere fully to the official SQL standard is not necessarily a bad thing.

  • I believe most major ORM libraries offer Apache Derby support. Oracle's JavaDB component probably helps here. For example, the popular Hibernate ORM comes with a Derby dialect.

  • Derby comes with the "ij" command-line tool that can be used to create databases, write queries, etc. Since this is a default Java console application, I really miss some basic features like cursor keys support. I believe this can be added by adding some external JAR files to the classpath, but could not get this to work yet unfortunately. This makes using this application a frustrating experience to me every time I boot it up.

  • Luckily, Derby is supported by a lot of Java developer and JDBC SQL tools. For example, I am very happy with NetBeans IDE's database features on the Services tab. I find it a joy to add records and tables to, and execute queries on, my Derby databases this way. I'm sure the other famous IDEs offer similar features as well. I had reasonable success with the SQuirreL SQL Client too.

  • The quality of the standard documentation is generally quite good, especially for an open source project. I presume this is because of the commercial roots of Derby. The organization of the information sometimes leaves something to be desired, as content is spread between different PDF/HTML files and sometimes content you'd expect to be in one manual is actually described in one of the others.

To be continued...

In the next installment of this series, I plan to write about setting up and configuring Derby.


by vincent

Posted on Sunday Jul 05, 2015 at 03:28PM in General


Welcome to my blog dedicated to anything that is - somehow :) - related to the Java Virtual Machine.

Java and the JVM have come a long way, a lot of the well-known prejudices are simply not true anymore, or are actively being taken care of. Don't believe me? Ask Twitter.

I love that these days the JVM is a host for many other programming languages and applications can make use of classes without caring which JVM programming language was used to compile that class file. Also more and more dynamic languages are appearing that do not require compiling by the user, but still can fully utilize Java classes. Often, those languages can easily be embedded in Java projects as well.

I follow development of and news reports about Java, the JVM, alternative languages, developer tools, libraries, frameworks quite a bit, so I think I'll have quite a lot to blog about.