The JVM Fanboy Blog 

Entries tagged [maven]

Host Maven repositories on a Raspberry Pi

by vincent

Posted on Sunday May 08, 2016 at 11:42PM in General

For a particular project I'm doing, it would be handy to re-use a JAR file in multiple future projects. Although there are far simpler solutions to this problem, when I read about repository managers in the official Maven documentation, I gave it some thought and decided to try to run one on one of my Raspberry pi mini-computers that was waiting for a problem to solve :-)

When hosting a Maven repository on the Raspberry pi, besides hosting private repositories that can be accessed by other computers in my network, it can act as a proxy for Maven Central as well. This means that when a computer in my network needs to download a dependency from Maven Central, it will get a cached copy from the Raspberry pi instead. If the dependency is not already in its cache, the Raspberry pi will first download the dependency from Maven Central (if it is configured to allow this).

Apache Archiva

Looking at the list of available Maven Repository Managers, Apache Archiva caught my eye, so I decided to give it a try.

Like many other Apache Software Foundation projects that I follow, Archiva is a project with light traffic and few recent releases. This is not necessarily a bad thing though; hopefully it is a sign that the current version is generally considered stable.

What I saw when looking up more information impressed me. The web user interface looks really user-friendly and nice.

Browsing repository in Archiva

Running Archiva in Standalone mode or Servlet container?

Archiva can run in two modes:

  • Stand-alone (a script can be started on the console, that will start an embedded web-server)
  • Run inside a Servlet container (Apache Tomcat) or dedicated Java EE application server (Glassfish, JBoss/Wildfly, Oracle WebLogic...)

Let's take a look at both options

Standalone mode on the Raspberry pi

At first I tried running Archiva in stand-alone mode, assuming this would be ideal for the Raspberry pi.

I had many issues along the way, all of which had to do with Archiva's dependency on an outdated version of the Java Service Wrapper by Tanuki Software. As I understand it, Service Wrapper is a component that lets Java applications run as Linux daemons and Windows services. Due to licensing issues, the Archiva team cannot upgrade this component to a more up-to-date version.

Blogger Ti-80 - almost exactly three years ago - wrote a blog post about running Archiva on the Raspberry pi, and it seems all problems can be solved by manually building the project and replacing binaries.

I decided not to follow this route and to run it in a servlet container instead. I reasoned that I'll probably want to run additional servlets on this Raspberry pi in the future anyway. And Adam Bien convinced me that Tomcat's memory consumption is not as excessive as some people seem to think.

Running inside Servlet Container

Here are the steps that I took to get this up and running. I did not follow the documentation exactly, as I have some different conventions. Feel free to disagree with me and use their instructions! Note that I use a Raspberry pi 2 (4 cores and 1 GB of internal memory); I have not tried this on other models.

The official installation guide is here.

  • Log in with SSH to your Raspberry pi. I used the built-in pi user (you did change its default password, didn't you?! :) )
  • Create a dedicated Linux user for running Tomcat:

    sudo -s
    adduser tomcat

    Follow the prompts, then change active user to "tomcat"

    su tomcat

  • Let's create directories and download Tomcat and Archiva

    cd ~
    mkdir downloads
    cd downloads

    Visit the Tomcat homepage and look for the download link of the .tar.gz release of the currently stable version. At the time of this writing this was 8.0.33

    wget URL-TO-TOMCAT.tar.gz
    (replace "URL-TO-TOMCAT.tar.gz" with the download URL)
    tar xvfz ./apache-tomcat-X.Y.Z.tar.gz
    (replace "apache-tomcat-X.Y.Z.tar.gz" with downloaded file)
    mv ./apache-tomcat-X.Y.Z ~/tomcat
    (move the created directory, not the downloaded tar.gz file!, to the home directory)
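As a concrete example, the Apache archive lays out Tomcat releases in a predictable way, so you can build the download URL from the version number. A sketch; the directory layout below is an assumption based on the archive's current conventions, so verify it against the download page:

```shell
# Build the Tomcat download URL from the version number.
# The layout (tomcat-8/v8.0.33/bin/...) is an assumption based on the
# Apache archive conventions; verify against the Tomcat download page.
TOMCAT_VERSION=8.0.33
TOMCAT_URL="https://archive.apache.org/dist/tomcat/tomcat-8/v${TOMCAT_VERSION}/bin/apache-tomcat-${TOMCAT_VERSION}.tar.gz"
echo "$TOMCAT_URL"
```

You can then pass `$TOMCAT_URL` straight to wget instead of copying the link by hand.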

  • Visit the Archiva homepage and look for the download link of the WAR release of the currently stable version. At the time of this writing this was 2.2.0

    wget URL-TO-ARCHIVA.war (replace "URL-TO-ARCHIVA.war" with the download URL)

  • I create a dedicated directory for Archiva and do not place it in the Tomcat home directory (which the Installation Guide suggests). I don't want to pollute the Tomcat home directory with Archiva-related files. This is a debatable decision, as some dedicated Tomcat configuration files will be required to run Archiva anyway.

    cd ~
    mkdir -p webapps/archiva
    cd webapps/archiva
    mkdir conf
    mkdir db
    mkdir logs
    mkdir war

    cd war
    cp ~/downloads/*archiva* ./

  • Let's create the Tomcat context configuration XML file.

    cd ~/tomcat/conf
    mkdir -p Catalina/localhost
    cd Catalina/localhost
    nano archiva.xml
    (Install nano if it could not be found: apt-get install nano)

  • Enter following code in Nano
    (Substitute the docBase path with the full path to your downloaded war file)

    <?xml version="1.0" encoding="UTF-8"?>
    <Context path="/archiva"
             docBase="/home/tomcat/webapps/archiva/war/ARCHIVA-WAR-FILE.war">
      <Resource name="jdbc/users" auth="Container" type="javax.sql.DataSource"
                username="sa" password=""
                driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
                url="jdbc:derby:/home/tomcat/webapps/archiva/db/users;create=true" />
      <Resource name="mail/Session" auth="Container"
                type="javax.mail.Session"
                mail.smtp.host="localhost" />
    </Context>
    (replace "ARCHIVA-WAR-FILE.war" with the name of the war file you downloaded)

    Press CTRL+X , then Y to exit.

  • A bit annoyingly, you'll need to install some dependencies in the "lib" directory of Tomcat

    You could download the Archiva standalone .zip release and copy the files from there, but you can also download them individually, which is what we'll do here. I've chosen the exact versions used by the current Archiva stable version to prevent conflicts. We could regret this later, when installing applications that require newer versions... :( .

    Check the installation guide to see if the version numbers mentioned here still match with the latest version!

    cd ~/tomcat/lib

    Visit the download page for activation-1.1.jar and find the "Download JAR" button. Copy the link URL.

    wget LINK-TO-ACTIVATION-1-1.JAR (replace with full link to activation-1-1.jar)

    Visit the download page for mail-1.4.jar and find the "Download JAR" button. Copy the link URL.

    wget LINK-TO-MAIL-1-4.JAR (replace with full link to mail-1.4.jar)

    Visit the download page for the Derby jar file and find the "Download JAR" button. Copy the link URL (the manual states that newer versions of Derby than mentioned in the documentation should work fine).

    wget LINK-TO-DERBY.JAR (replace with full link to Derby jar file)
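Since all three libraries are also published to Maven Central, whose directory layout is predictable (groupId with slashes, then artifactId, version, and artifactId-version.jar), you can alternatively build the download URLs yourself. A sketch; the Derby version shown is only an example, so use the one the Archiva installation guide mentions:

```shell
# Build a Maven Central download URL from Maven coordinates:
# https://repo1.maven.org/maven2/<groupId with slashes>/<artifactId>/<version>/<artifactId>-<version>.jar
central_url() {
    echo "https://repo1.maven.org/maven2/$1/$2/$3/$2-$3.jar"
}

central_url javax/activation activation 1.1
central_url javax/mail mail 1.4
central_url org/apache/derby derby 10.12.1.1   # example version only
```

Each printed URL can be handed directly to wget in ~/tomcat/lib.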

  • Finally, create a start script that boots Tomcat and sets some required environment variables. The file name is up to you; I'll call it "start-tomcat.sh" here.

    cd ~
    nano start-tomcat.sh

    Add the following content:

    #!/bin/sh
    export CATALINA_OPTS="-Dappserver.home=/home/tomcat/webapps/archiva -Dappserver.base=/home/tomcat/webapps/archiva"
    /home/tomcat/tomcat/bin/startup.sh

    Press CTRL+X , then Y to exit.

    chmod +x ./start-tomcat.sh

  • Run the script you just created.
  • On the computer you used to log in to the Raspberry pi, start your browser (so do not run the browser on your Raspberry pi itself) and go to:

    http://IP_ADDRESS_RASPBERRY_PI:8080/archiva
    (replace IP_ADDRESS_RASPBERRY_PI with the correct IP address or host name, and adjust the port if you changed Tomcat's default)

    If everything goes well, after a few seconds you should see a welcome screen, with a button at the top right side to create an admin user. Note that booting can take some time.

    If there are problems, you'll get a simple 404 page. In that case you'll have to look at the log files of Tomcat and try to determine what is wrong; usually something is wrong with one of the paths or dependencies:
    nano /home/tomcat/tomcat/logs/localhost.YYYY-MM-DD.log (replace with your current date)

  • Create the admin user and follow the prompts.
  • I should do a future blog post about configuring Archiva, I feel the default settings are good enough to get started.

Configure Maven clients to use Archiva

To get started, let's configure the Maven clients on your desktop machines to retrieve Maven Central dependencies exclusively from your Raspberry pi. I advise against configuring the Maven clients on your laptops this way, unless your Raspberry pi is accessible via the internet, or you have a VPN or something similar. Otherwise you won't be able to download dependencies when your Raspberry pi is not on your current network.

Create or edit the Maven settings file in your user directory with your favorite editor.

On modern Windows machines, this file is located at:
c:\users\XXX\.m2\settings.xml (replace XXX with your username)

On Linux machines, this file is located at:
~/.m2/settings.xml
Make sure the settings.xml file contains at least something like the following (if you have other entries as well, like proxies, etc., make sure to retain them):

<?xml version="1.0" encoding="UTF-8"?>
<!--
    User-specific configuration for maven. Includes things that should not
    be distributed with the pom.xml file, such as developer identity, along with
    local settings, like proxy information. The default location for the
    settings file is ~/.m2/settings.xml
-->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <mirrors>
        <mirror>
            <id>archiva</id>
            <name>Archiva on the Raspberry pi</name>
            <url>http://RASPBERRY_IP_ADDRESS:8080/archiva/repository/internal/</url>
            <mirrorOf>central</mirrorOf>
        </mirror>
    </mirrors>
</settings>

Of course, replace RASPBERRY_IP_ADDRESS with your Raspberry pi's IP address.

The <mirrorOf>central</mirrorOf> entry tells Maven to only use your Raspberry pi mirror for Maven Central dependencies. Refer to the Maven documentation for more mirror configuration options. Also make sure to read the corresponding Archiva chapter on this subject.
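For example, the mirrorOf field also accepts wildcard patterns. A sketch, assuming Archiva's default "internal" proxy repository:

```xml
<mirror>
    <id>archiva</id>
    <!-- "external:*" matches every repository except those on localhost or
         behind file:// URLs, so the pi would proxy all external downloads -->
    <mirrorOf>external:*</mirrorOf>
    <url>http://RASPBERRY_IP_ADDRESS:8080/archiva/repository/internal/</url>
</mirror>
```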

Now when you build a project, Maven Central dependencies will be downloaded from your Raspberry pi, which will automatically fetch a dependency from Maven Central if it is not already in its cache.

Some final thoughts

The (Micro-)SD card in your Raspberry pi can get corrupted when files change very often. So if you very frequently add new dependencies or change versions, it's probably better to attach an external hard drive to your Raspberry pi and make sure the Maven repositories are stored on that drive.

If after testing you don't want to use Archiva anymore, simply remove the <mirror>....</mirror> entries from your client's settings.xml and you should be fine.

To shut down Tomcat on your Raspberry, you can use the standard ~/tomcat/bin/shutdown.sh script, but to start the server, remember to use the script in your home directory; otherwise Archiva won't start because it cannot find its home directory.

On my Raspberry pi, less than 250 MB of memory was used to run Linux, Tomcat and Archiva, so I have plenty of room to run other servlets on my Raspberry pi in the future.
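To check memory usage on your own pi, the standard Linux tools are enough (numbers will of course differ per model and configuration):

```shell
# Overall memory usage in human-readable units
free -h
# MemAvailable is the kernel's estimate of memory available to new
# applications without swapping
grep MemAvailable /proc/meminfo
```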

Use modern front-end tools in Maven

by vincent

Posted on Saturday Apr 16, 2016 at 05:41PM in Build Tools

After years of owning a static - well, very static, neglected would be a better word - personal WordPress-powered website, I decided to start creating a new custom web application that will demonstrate my various web-dev capabilities and will also serve as my personal website. I can't say I will miss PHP for even a second ;-)

The site will use various back-end and front-end technologies interchangeably, for example parts of the back-end are written in Java 8, Groovy and - of course! - Nashorn. I normally would probably not do this on typical small production applications, but for a demo site I can justify this choice. Thanks to NetBeans IDE's excellent Maven support, I had no issues at all referring Groovy classes in Java code and vice-versa.

The site is under construction and not on-line yet at the time of this writing.

Front-end tools

As I chose Facebook React as one of my front-end toolkits (only some interactive pages of my site will use it; at this time I won't use it globally for the whole site), with JSX to define views, I had to use modern front-end build tools for building the front-end.

Various tools exist to automate build tasks, such as calling Babel to convert JavaScript with JSX into plain JavaScript files. I also chose to do dependency management with external tools. My choices have been to use:

  • Node.js and NPM.js (for installing and running dependencies needed by the build tools)
  • Bower (front-end dependency management)
  • Babel (to convert JSX to plain JS code)
  • Gulp.js (task-runner to automate the building and packaging)

Integrating front-end building tools with Maven

For this project I chose Apache Maven for building the back-end of the application. I know a lot of people don't really like Maven, but since I could follow the Maven conventions quite well, I actually had a very pleasant time creating and using the project's pom.xml file.

At this time the back-end is tightly coupled to the front-end: I wrote my HTML templates in Thymeleaf where applicable, but the corresponding JavaScript and CSS files are stored in a separate "static" directory. This is not ideal, but not really a big deal in my application. Also, I use Nashorn to compile some JavaScript files of the front-end code, so the back-end code needs access to the front-end code in this application anyway.

Note to self: Once the site is up and running, I want to experiment with ditching the Thymeleaf templates completely and let Nashorn create the whole HTML dynamically using the static JS files (as mentioned above, I already let Nashorn generate HTML for AJAX content on the back-end using the same code as the front-end uses, so I already did some initial work on this).

As it is right now, it made sense to make building of the front-end part of building the whole project.

Some older discussions on StackOverflow suggested this had to be done manually by calling scripts via Maven. Some developers use Ant tasks for this (which I planned to do as well; I think Ant tasks are really suited for this kind of work, and they can easily be called by Maven as part of the build process).

Introducing the "frontend-maven-plugin" Maven plugin

However, after some more googling, I came across the frontend-maven-plugin by Eirik Sletteberg. This plugin has a lot of tricks up its sleeve. Features include:

  • Installing Node.js and NPM locally for the built project only.

    This installation won't interfere with any Node.js/NPM installation that is already present globally on the system, and it is only used to install the dependencies and execute the configured tools. This should work on modern Windows, OS X and most popular Linux systems.

  • Execute "npm install" to install the dependencies required for building the front-end
  • Execute "bower install" to install the dependencies required by the front-end (Bootstrap, or in my case Foundation Zurb 6, FontAwesome, etc.)
  • Execute "gulp build" to run the various build tasks (as described above, in my case this will call Babel to convert JSX to JS). Note that the plugin supports Grunt as well.
  • Execute front-end unit-tests using Karma (I have not tried this feature yet, but I intend to try Karma soon)
  • Execute WebPack module bundler (I have not yet tried this)

Configuring frontend-maven-plugin in the pom.xml file

Adding those tasks to pom.xml was easy; the examples on the plugin's website were simple and easy to follow.

I made up my own directory convention. I chose to create a new "frontend" directory in my project's "src" directory to store all front-end related files. That directory contains the package.json (for NPM), gulpfile.js and bower.json files. I let Gulp create a "dist" directory here that contains the built files.

The directory structure looks like this:

NetBeans screenshot of my project's directory structure. All front-end files are stored in the 'frontend' directory of the standard 'src' directory

Once built, I let Maven copy the full content of the "dist" directory to the project's resources/static directory as part of the project's build process, using the standard maven-resources-plugin.

The "node_modules", "bower_components" and "dist" directories (created by the different tools) are also stored in the frontend directory. I chose to also store the Node.js and NPM installation here, in a directory that I called "node_maven". I made sure all those directories are ignored by my version control system; that's why they are colored gray in the screenshot above.

Here are the relevant entries in my pom.xml file:

                <!-- Reconstructed sketch: only the execution ids survived from
                     the original snippet, so versions and paths below are
                     examples; adjust them to your project. -->
                <plugin>
                    <groupId>com.github.eirslett</groupId>
                    <artifactId>frontend-maven-plugin</artifactId>
                    <!-- Use the latest released version: -->
                    <version>X.Y.Z</version>
                    <configuration>
                        <workingDirectory>src/frontend</workingDirectory>
                    </configuration>
                    <executions>
                        <execution>
                            <id>install node and npm</id>
                            <goals><goal>install-node-and-npm</goal></goals>
                            <configuration>
                                <!-- example versions; pick current ones -->
                                <nodeVersion>v4.4.3</nodeVersion>
                                <npmVersion>2.15.1</npmVersion>
                            </configuration>
                        </execution>
                        <execution>
                            <id>npm install</id>
                            <goals><goal>npm</goal></goals>
                        </execution>
                        <execution>
                            <id>bower install</id>
                            <goals><goal>bower</goal></goals>
                        </execution>
                        <execution>
                            <id>gulp build</id>
                            <goals><goal>gulp</goal></goals>
                            <configuration>
                                <arguments>build</arguments>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
                <plugin>
                    <artifactId>maven-resources-plugin</artifactId>
                    <executions>
                        <execution>
                            <id>copy gulp dist files to static</id>
                            <phase>process-resources</phase>
                            <goals><goal>copy-resources</goal></goals>
                            <configuration>
                                <outputDirectory>${basedir}/src/main/resources/static</outputDirectory>
                                <resources>
                                    <resource>
                                        <directory>${basedir}/src/frontend/dist</directory>
                                    </resource>
                                </resources>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>


The maven-clean-plugin can additionally be configured in the same pom.xml to clean the generated directories.

Final thoughts

There are risks when using a plugin like this. For example, when Node.js or NPM change their download URLs, the plugin will have to be updated. Also, it's hard to guess when new tools will arrive that become more popular than Gulp, Bower, etc. Luckily the plugin is open-source, so anybody can change the code and hopefully adapt it to new situations.

Another, probably more serious, concern is that using tools like this for every build will slow down the build noticeably. From what I understand, the plugin has built-in support for incremental builds when using Eclipse, but I'm not sure how to do this when using NetBeans IDE at this time.

Finally, in this age of micro-services it makes a lot of sense to completely separate front-end and back-end projects from each other.