Trying Out Java 9’s Modules With Gradle

I’ve touched Java 9, compiled with it, run with it, but haven’t yet done anything unique to it. I decided to see what I could do with the module system, and thought I’d see if I could migrate one of my existing projects over. I selected fluentsee because it was small, functional, and had a small number of outside dependencies.

What Resulted

I did get it to “work” with modules in play.

What Worked

  • I managed to build it as a module with Gradle
  • The module system did clarify the dependencies
  • It caught some Java reflection in play and restricted it
  • I could launch it with a simple module oriented command line

What Not So Much

  • The Gradle configuration needed to deal with JDK 9 was verbose and cumbersome
  • I could not achieve the equivalent of a “fat jar” I’d had before
  • I could not get jlink to create a “deployable” application

What I Did

To get things working I used:

  • JDK 9.0.4
  • Gradle 4.4.1

And then, in my build.gradle, I had to pretty much retool every Java-related task. Here’s the bulk of what that involved:

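What follows is a condensed sketch rather than the verbatim build file: the module name comes from the run command below, the task bodies are representative of the module-path juggling Gradle 4.4.1 required, and the real thing is on the feature/jdk9 branch.

ext.moduleName = 'com.github.nwillc.fluentsee'

compileJava {
    doFirst {
        // Compile against the module path rather than the classpath
        options.compilerArgs = ['--module-path', classpath.asPath]
        classpath = files()
    }
}

// A module aware jar of the main classes and resources
task mJar(type: Jar) {
    baseName = project.name
    from sourceSets.main.output
}

// Stage the module jar and its dependencies into build/mods
task module(type: Copy, dependsOn: mJar) {
    into "${buildDir}/mods"
    from mJar.archivePath
    from configurations.runtimeClasspath
}
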
So in addition to reworking compileJava, compileTestJava, and test, I needed to:

  • Make sure the resources got copied
  • Create a module aware jar
  • Create a task to pull everything together into a mod staging directory

With that all done I was able to:

./gradlew clean mJar module
java -p build/mods -m com.github.nwillc.fluentsee -h

To build, stage, and run the project. To check out the code, look at the feature/jdk9 branch of fluentsee.


Docker Container Jenkins Slaves in AWS

There are some good articles out there about using docker containers as Jenkins slaves. There are many good reasons to do this. My use case was that we had some special snowflake test setups that didn’t play well together but didn’t each require a dedicated machine, so creating docker images for them, and sharing a machine, made more sense.

Mostly It’s Straightforward

The posts out there cover the topic well.  Basically there’s a Jenkins Plugin needed, the Docker image setup, and then some configuration of the Docker engine on the host machine.

But, There are Always Challenges

I hit three bumps:

  1. Setting up your Docker engine to accept remote requests. This is mentioned in most of the articles, and usually covered to one degree or another. The snag is that pretty much every OS’s installation of Docker is a little different. So while I knew I had to add the “-H tcp://0.0.0.0:2375” argument to my dockerd, finding out how on our RedHat install took a bit of doing (there’s a sketch of what that ends up looking like after this list).
  2. Dealing with AWS’ security groups. If you’re talking to port 2375, obviously that port needs to be open. Duh. But that only got me so far: containers fired up, but Jenkins’ builds hung. What wasn’t immediately apparent was that the ssh communication to the slaves wasn’t going to happen on the traditional port 22. Yes, inside the container it would listen on port 22, but that would be mapped externally to a port in a range of numbers. So my AWS security group needed to have that range open for inbound connections too. Using docker inspect on the resulting containers let me see what they were exposing 22 as (the snippet after this list shows the quick way to check). I’m not sure I got “the range”, but I got “a range” that’s worked so far.
  3. The image I built had a banner and messages displayed at login. That apparently confused Jenkins. Once I had it so no messages were displayed when I ssh’d in, that issue was resolved.
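
For reference, the plumbing behind the first two bumps, on a systemd-managed RedHat host, looks roughly like the sketch below. The override file path, the unix socket flag, the container name, and the mapped port shown are illustrative, not lifted from that setup.

# /etc/systemd/system/docker.service.d/override.conf (created via "systemctl edit docker")
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

$ sudo systemctl daemon-reload && sudo systemctl restart docker

# What is the container's port 22 mapped to on the host? (example output)
$ docker port jenkins-slave-1 22
0.0.0.0:32768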

Working

With those issues addressed, I’ve now got the special snowflake setups as docker images, and Jenkins spins those up and tears them down as needed. I’m not entirely happy with the “port range” business, and may revisit it, but for now, like most things Jenkins, it’s working even if there’s a bit of a code smell.

Java Double Brace Initialization

I’m a fan of the JMockit framework, which, if you’ve used it, means you’ll surely be familiar with its “double brace” syntax. For example:


new Expectations() {{
// ...
}}

While I’ve seen and used the double braces in JMockit, and done a bit of a double take, I’d never gotten around to figuring out what was actually going on there.

What are Those Double Braces?

Today, out of the blue, I came across a post about them, and it too just breezed by the “what” and went straight to a tip employing them.

Okay, I wasn’t going to ignore this again. So what is that double brace syntax?

It Isn’t a Thing

There is no “double brace syntax”. It’s actually a combination of two things, one fairly common, one more esoteric. A good discussion can be found here.

The concise answer is, it’s an anonymous class declaration combined with an instance initializer block. A what? An initializer block is an alternative to a constructor, and you’ve likely seen them associated only with static initialization. Who knew there are instance initializer blocks too?

So, the double brace is a way to create an anonymous class and provide a constructor in one go. If you read the first article I referenced, they use it for adding content to collections at creation time.
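
A tiny, purely illustrative example of that collection trick (the class name here is just for the example):

import java.util.HashMap;
import java.util.Map;

public class DoubleBrace {
    public static void main(String[] args) {
        // The outer braces declare an anonymous subclass of HashMap; the inner braces
        // are an instance initializer block that runs when that subclass is instantiated.
        Map<String, String> capitals = new HashMap<String, String>() {{
            put("France", "Paris");
            put("Japan", "Tokyo");
        }};
        System.out.println(capitals);
    }
}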

Live and learn… if you bother to.

GraphQL Java 6.0

I’ve been a fan of GraphQL ever since I first tried it. I push back against RESTful APIs to anyone that will listen (or not). I’ve written a few posts about it (GraphQL: Java Server, JavaScript Client; GraphQL: Java Schema Annotations; GraphQL: A Java Server in Ratpack). What I haven’t done is stay current. I got hooked on graphql-java at version 3.X and decided the annotations were the best way to go; sadly the annotations’ development stalled, which made upgrades tricky, and so I didn’t. But it was a constant nagging itch to upgrade, and finally I did.

This post will discuss a Ratpack server using GraphQL-java 6.0. I should note that as I did this work, the annotations finally released an upgrade. Doh.

GraphQL Java 6.0

I committed to upgrade. The annotations had not kept up so this meant a bit of a rewrite.  Normally I’m pretty suspicious of gratuitous annotation use.  They often mask too much of what’s really going on, and they tend to spray the information about one concern throughout your code, making it hard to locate coherent answers on a topic.  That was exactly the case here.  Leaving the annotations behind meant:

  • I had to figure out what previous magic I now had to handle on my own.
  • I had to determine just how deeply into my code they’d rooted themselves.

I tried to approach it intelligently, but in the end I went with brute force: I removed the dependency and kept trying to build the project, ripping out references until, while no longer working, the project built and the annotations were gone. Then I set about fixing what I’d broken.

What Was Missing?

Basically without the annotations there were two things I needed to repair:

  • Defining the query schema
  • Wiring the queries up to the data sources

Defining Your Schema

GraphQL-java 6.0 supports a domain specific language for defining your schema, known as IDL. It’s a major win. First, it gets your schema, which is by definition a single concept, into one place and makes it coherent. Second, they didn’t go off and write “Yet Another DSL” but instead supported one that, while not part of the official spec, is part of the reference implementation and has traction in the community. Nice.

Wiring Up Your Data Sources

The best practice for this now is using the “DataFetcher” interface. The name is a bit misleading, since these aren’t just for your queries (i.e. fetching data) but also for your mutations (modifying data). The name is weak, but the interface and its use are a breeze.

To the Code

I did all this work on my snippets server kata project, so for a richer example go there, but for the sake of clarity here we’ll look at the more concise Ratpack GraphQL Server example.

The Dispatcher

This hardly changed at all. It’s still as simple as grabbing a JSON map, pulling out the query and variables, and executing them:
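
A sketch of that handler, assuming Ratpack’s Jackson parsing, a chain being built elsewhere, and a graphQL instance constructed from the schema as shown further down; the payload keys are the standard GraphQL HTTP ones:

chain.post("graphql", ctx ->
        ctx.parse(Jackson.fromJson(Map.class)).then(payload -> {
            // Pull the query and variables out of the JSON map...
            ExecutionInput input = ExecutionInput.newExecutionInput()
                    .query((String) payload.get("query"))
                    .variables((Map<String, Object>) payload.getOrDefault("variables", Collections.emptyMap()))
                    .build();
            // ...execute, and render the spec-format result back as JSON
            ctx.render(Jackson.json(graphQL.execute(input).toSpecification()));
        }));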

Pretty straightforward.

Defining the Schema: IDL

So in this trivial example all I have are Company entities, defined with this bean:
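
Something along these lines; the real bean may carry more fields, but name is the key the queries below use:

public class Company {
    private String name;

    public Company() {
    }

    public Company(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}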

And all I wanted to support was: get one, get all, save one, delete one. So I needed to define my Company type, two queries, and two mutations. Defining this in IDL was easy:
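
An illustrative cut of that IDL; the field and operation names here are mine, chosen to line up with the wiring sketched further down:

type Company {
    name: String!
}

type Query {
    company(name: String!): Company
    companies: [Company]
}

type Mutation {
    save(name: String!): Company
    delete(name: String!): Boolean
}

schema {
    query: Query
    mutation: Mutation
}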

Loading The Schema

I just tucked my schema definition into my resources folder and then:
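
Roughly like so, assuming the IDL was saved as schema.graphqls on the classpath; SchemaParser and TypeDefinitionRegistry are the graphql-java classes that do the parsing:

// Read the IDL off the classpath and parse it into a TypeDefinitionRegistry
String idl = new BufferedReader(new InputStreamReader(
        getClass().getResourceAsStream("/schema.graphqls"), StandardCharsets.UTF_8))
        .lines().collect(Collectors.joining("\n"));
TypeDefinitionRegistry typeRegistry = new SchemaParser().parse(idl);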

Wiring The Data to The Schema

In GraphQL-java, the way I chose to do this is with DataFetcher implementations. So, for example, to find a company by name from a map, it would look about like:
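
A sketch; the backing map and the argument name are my stand-ins for whatever the real store looks like:

// An in-memory store standing in for a real data source
private final Map<String, Company> companies = new ConcurrentHashMap<>();

// DataFetcher has a single method, so a lambda will do; the "name" argument
// is the one declared on the company query in the IDL
private final DataFetcher<Company> companyFetcher =
        environment -> companies.get(environment.<String>getArgument("name"));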

So that’s the way to “fetch” the data, but how do you connect this to your schema? You define a “RuntimeWiring”:
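
Again a sketch; the type and field names match the IDL above, and saveFetcher and deleteFetcher would be built the same way as companyFetcher:

RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
        .type("Query", builder -> builder
                .dataFetcher("company", companyFetcher)
                .dataFetcher("companies", environment -> companies.values()))
        .type("Mutation", builder -> builder
                .dataFetcher("save", saveFetcher)
                .dataFetcher("delete", deleteFetcher))
        .build();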

And then you associate that wiring with the schema you loaded:
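
With the pieces above that amounts to something like:

// Combine the parsed IDL and the runtime wiring into an executable schema,
// then hand that to the GraphQL instance the dispatcher uses
GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(typeRegistry, wiring);
GraphQL graphQL = GraphQL.newGraphQL(schema).build();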

And Then…

Well that’s it.  You’ve:

  • Created a GraphQL dispatcher
  • Defined your entities
  • Defined your GraphQL schema
  • Created queries
  • Instantiated the schema, wired in the queries

Done.  Take a look at my GraphQL server in Ratpack for the complete working code.


Travis-CI to Docker Hub

More and more of my work involves docker images. I consume them and produce them. My standard open source project CI/CD tool chain is Java, Gradle, GitHub, Travis-CI, CodeCov and Bintray. End to end free and functional.

Recently I moved my snippets server app into a docker image.  This added Docker Hub to my stack, and happily it was an easy addition because of Gradle and Travis-CI.

Setting up the Build

A quick search and review turned up the gradle-docker-plugin.  With this plugin and access to a docker engine you can create, run and push docker images easily. The docs for the plugin will walk you through how to add it to your build.gradle. Also note, to use the types in the tasks below, you’ll need proper import statements. My build.gradle is pretty clean, but I’ll walk through some details below.

Creating the Dockerfile

The plugin is pretty flexible, so the following notes are not the answer but my answer.  Rather than create a fixed Dockerfile, I create mine on the fly from gradle:

task createDockerfile(type: Dockerfile) {
    def labels = [maintainer: 'nwillc@gmail.com']

    destFile = project.file('build/docker/Dockerfile')
    from 'anapsix/alpine-java:8_server-jre_unlimited'
    label labels
    exposePort 4567
    runCommand 'mkdir -p /opt/service/db'
    volume '/opt/service/db'
    copyFile "libs/$project.name-$project.version-standalone.jar", '/opt/service'
    workingDir '/opt/service'
    defaultCommand '--port', '4567', '--store', 'H2'
    entryPoint 'java', '-Djava.awt.headless=true', '-jar', "$project.name-$project.version-standalone.jar"
}

There’s a fair bit going on there so let’s walk through it.  First off, I’m creating the Dockerfile down in my build directory.  Then I’m using the plugin to do standard Dockerfile operations like setting the base from image, creating folders, copying in artifacts, and setting up the command and entry point.  The plugin sticks pretty close to the dockerfile DSL so you should be able to pick it up easily.  It’s worth noting that because this is in gradle, I can use the groovy variables to denote things like the artifact name etc.

Creating the Image

With the task to create the Dockerfile done, building an image is trivial.

task buildImage(type: DockerBuildImage, dependsOn: [assemble, createDockerfile]) {
    inputDir = project.file("build")
    dockerFile = createDockerfile.destFile
    tag = "nwillc/$project.name:$project.version"
}

So here I just indicate where I’ll root the build (the build folder), grab the previously created Dockerfile, and tag the image. Running this task will create your artifact, create your Dockerfile, and build the image.

Pushing the Image

I push my images into docker hub’s free public area. So, all I need to add to my build is info about my credentials and a push task.

docker {
    registryCredentials {
        url = 'https://index.docker.io/v1/'
        username = 'nwillc'
        password = System.getenv('DOCKER_PASSWD')
        email = 'nwillc@gmail.com'
    }
}
task pushImage(type: DockerPushImage, dependsOn: buildImage) {
    imageName buildImage.tag
}

Note I grab the password from an environment variable. That keeps it out of my github repo and you can set these in a secure manner in Travis-CI.

Running the Build and Doing the Deploy

With your build.gradle ready to go, and your DOCKER_PASSWD set you can now locally do a ./gradlew pushImage and it should all work, ending up with the image in docker hub.

But now let’s get our CI/CD working. Travis-CI supports everything you need. Set the DOCKER_PASSWD in your Travis-CI account’s profile, and then add the relevant bits to your .travis.yml; here are the key elements:

sudo: required
services: docker
after_success:
  - docker login -u nwillc -p ${DOCKER_PASSWD}
  - ./gradlew pushImage

You’ll need sudo, you have to indicate you’re using the docker service, you’ll need to log in to docker hub, and finally push the image after a successful build.

Done

With your build.gradle and .travis.yml enhanced, it’s done. Every push to github builds and tests, and if everything is happy your docker hub image is updated.

Home DevOps: Ansible for the Win!

I’ve a Raspberry Pi that I use for various things.  I’m a big fan of these little boxes, but they can be temperamental.  You end up fiddling around to get things installed and sometimes even a simple package update will leave the box a dead parrot.  A couple weeks back, I was just running a regular update and my Pi died a horrible death.  The upside with a Pi is you just re-image the disk and you’re back in business. The disks are small, the process simple.  However, if you’ve customized things all that’s gone.

I decided to rebuild my Pi, after re-imaging it, using Ansible. Ansible is straightforward and easy to start up with. I’ve used it on and off over time and am proficient with it. In under an hour I had rebuilt my Pi, with all my customizations, from an Ansible playbook. It didn’t take much more effort than doing it by hand really, but I did feel like maybe I’d gone a bit far using Ansible. Until the next morning, that is. I’d forgotten a few security measures, my Pi is accessible from the internet, and in less than half a day someone or some bot had gotten in and taken over. Sigh. Now the whole Ansible decision seemed far wiser. I enhanced my playbook with the security changes, re-imaged, and reran it, and the Pi was back and better in under twenty minutes.

Since that practical example, I’ve done everything on my Pi via ansible and had no regrets.


Fluentsee: Fluentd Log Parser

I wrote previously about using fluentd to collect logs as a quick solution until the “real” solution happened.  Well, like many “temporary” solutions, it settled in and took root. I was happy with it, but got progressively more bored of coming up with elaborate command pipelines to parse the logs.

Fluentsee

So in the best DevOps tradition, rather than solve the initial strategic problem, I came up with another layer of paint to slap on as a tactical fix, and fluentsee was born. Fluentsee is written in Java, and lets you filter the logs and print them out in different formats:

$ java -jar fluentsee-1.0.jar --help
Option (* = required)          Description
---------------------          -----------
--help                         Get command line help.
* --log <String: filename>     Log file to use.
--match <String: field=regex>   Define a match for filtering output. May pass in
                                 multiple matches.
--tail                         Tail the log.
--verbose                      Print verbose format entries.

So, for example, to see all the log entries from the nginx container with a POST, you would:

$ java -jar fluentsee-1.0.jar --log /fluentd/data.log \
--match 'json.container_name=.*nginx.*' --match 'json.log=.*POST.*'

The matching uses Java regexes. The parsing isn’t wildly efficient but generally keeps up.

Grab it on Github

There’s a functional version now on github, and you can expect enhancements, as I continue to ignore the original problem and focus on the tactical patch.