Docker Container Jenkins Slaves in AWS

There are some good articles out there about using Docker containers as Jenkins slaves, and there are many good reasons to do it. My use case was this: we had some special snowflake test setups that didn’t play well together, but didn’t each require a whole dedicated machine, so creating Docker images for them, and sharing a machine, made more sense.

Mostly It’s Straightforward

The posts out there cover the topic well.  Basically there’s a Jenkins plugin to install, a Docker image to set up, and then some configuration of the Docker engine on the host machine.

But, There Are Always Challenges

I hit three bumps:

  1. Setting up your Docker engine to accept remote requests. This is mentioned in most of the articles, and usually covered to one degree or another. The snag is, pretty much every OS’s installation of Docker is a little different. So while I knew I had to add the “-H tcp://” argument to my dockerd, finding out how on our RedHat install took a bit of doing.
  2. Dealing with AWS’ security groups.  If you’re talking to port 2375, well obviously that port needs to be open. Duh. But that only got me so far: containers fired up, but Jenkins’ builds hung.  What wasn’t immediately apparent was that the SSH communication to the slaves wasn’t going to happen on the traditional port 22. Yes, inside the container sshd would listen on port 22, but that would be mapped externally to a port in a range of numbers.  So my AWS security group needed to have that range open for inbound connections too.  Using docker inspect on the resulting containers allowed me to see what they were exposing port 22 as. I’m not sure I got “the range”, but I got “a range” that’s worked so far.
  3. The image I built displayed a banner and messages upon login. That apparently confused Jenkins. Once I had it so no messages were displayed when I ssh’d in, that resolved the issue.
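For reference, the host-side pieces of the first two bumps look roughly like this. This is an illustrative sketch, not our exact setup: the systemd drop-in path is just the usual one on RedHat-family installs, and the port and container name are placeholders.

```shell
# Sketch: make dockerd listen on TCP in addition to the local socket.
# On systemd-based installs this is typically done with a drop-in override.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker

# Sketch: find the host port a slave container's port 22 was mapped to --
# the kind of port the security group's inbound range has to cover.
docker inspect \
  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
  my-slave-container
```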


With those three issues addressed, I’ve now got the special snowflake setups as Docker images, and Jenkins spins those up and tears them down as needed.  I’m not entirely happy with the “port range” business, and may revisit it, but for now, like most things Jenkins, it’s working even if there’s a bit of a code smell.


The Testing Stack, From My Java Bias

I’ve been at this 30 years, and at Java since its introduction, so some folks will feel my opinions are a bit old school. I don’t think they are; I think they’re just a bit more thorough than some folks have the patience for.  Recently I butted heads with an “Agile Coach” on this, and they certainly felt I wasn’t with the program. But I’ve been a practitioner of agile methods since before they were extreme, and again, I don’t think the issue is that I’m too old school, it’s just that I believe the best agile methods still benefit from some traditional before and after.

My View of The Testing Stack

People get obsessed with the titles of these, but I’m not interested in that debate; I’m just enumerating the phases in approximate chronological order:

  • Unit: TDD these please!
  • Scenario: Test as many scenarios as you can get your hands on. Unit testing should catch all the code paths, but a tool like Cucumber makes enumerating all the possible data sets in scenarios much easier.
  • End-to-end/Integration:  Put the parts together in use case sequences, using as many real bits as possible.
  • Performance/Stress/Load:  Does your correct code do it fast enough and resiliently enough? These can appear earlier, but they need to happen here too.
  • QA: Yup … I still feel there’s value in a separate QA phase. This is what got the agile coach red in the face… more to follow on this.
  • Monitoring/In Situ: Keep an eye on stuff in the wild. Keep testing them and monitoring them.

So this is a lot of testing, and plenty of folks will argue that if you get the earlier steps right, the later ones are redundant.  I don’t agree obviously. I see a value to each distinct step.

  • Unit: Errors cost more the further along you find them, so the earlier one tests, the better. Get as good coverage at the unit level as you can tolerate.
  • Scenario: Using unit tests to exhaustively cover every possible scenario can be … exhausting, but it’s worth sweeping through mass combinations and permutations if you can.
  • E2E/Integration: Now you know the pieces work in isolation; you need to see that they work together. This will shake out problems with complex interactions and ensure that contracts were adhered to over time.
  • Performance: If it’s too slow or too fragile to use, it’s useless.
  • QA: Here is where a lot of folks say I’m being old school.  If you’ve gotten through the prior steps, isn’t this redundant?  No.  No matter how good your working relationship and communications with your product team and customers are, your development team always comes at their job with a technical bias. At least I hope your developers are technical enough to have a technical bias. At a bare minimum, having a separate pass of testing that is more closely aligned with the business perspective makes sure that you avoid issues with the “any key” and the like.  But this pass can also help you hone and improve product goal communications, and act as a feedback loop to improve the prior testing steps.  In a perfect world it would become redundant.
  • In Situ: Keep your eyes on the product in the wild and test it there too. You can reuse prior tools and scenarios here, but testing chaotic scenarios is invaluable. This is about watching the canary in the mine, and seeing if you’ve even been worried about the right canary.

From a cost perspective you always want to front-load your testing as much as possible, but if your goal is quality, the old adage “measure twice, cut once” should be amended to “measure as often as you can practically tolerate, cut once”.  Needless to say, automation is the key to it all: tool everything, and make the testing happen all over the place without human intervention.

A Docker Container as a Jenkins CI Slave

Recently I’ve been working to migrate responsibility for a set of Jenkins slaves from our local office to an off-shore team. We already have some of our continuous integration infrastructure on Amazon AWS, so we thought we’d move these boxes there too. But the AWS setup required too much DevOps work and networking infrastructure to do it in the required timeframe. We were determined not to just physically relocate the rat’s nest of special snowflake machines to the new site, though, so we looked at other solutions.

Enter Docker

We took an inventory of our CI machines and boiled them down to a coherent list of needs, and then worked to create a Dockerfile for a container that could replace any of the individual machines. We cleaned the jobs up and unified as much as we realistically could. In a week or so we had a base image as a starting point that handled a representative set of the jobs.


There is a lot of good docker info out there. You can work with it on Linux, OSX or Windows. Setting it up is trivial. We did most of our initial proof of concept clean room to really understand things, but if you go to GitHub and search for “docker Jenkins slave” you’ll find a bunch of good reference projects to learn from, or base your Dockerfile on.
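To give a flavor of the end result, here’s a minimal, hypothetical sketch of the kind of SSH-based slave Dockerfile those reference projects tend to use. The base image, packages, and credentials are illustrative, not our actual file.

```shell
# Write out a minimal, hypothetical SSH-based Jenkins slave Dockerfile.
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04

# A Jenkins SSH slave needs a JDK (to run the agent) and an SSH daemon.
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk openssh-server && \
    mkdir /var/run/sshd

# The account Jenkins will log in as (use keys, not passwords, in real life).
RUN useradd -m -s /bin/bash jenkins && \
    echo 'jenkins:jenkins' | chpasswd

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF

# Then build it with: docker build -t jenkins-slave .
```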

The Pros

  • The process forced us to review years of organic growth and clean a lot of cruft.
  • The resultant Dockerfile is coherent and easy to maintain.
  • The Dockerfile allows devs to address changing demands rapidly and communicate them lucidly to people managing production infrastructure.
  • Docker provides us a great mechanism for setting up and tearing down pristine test environments.
  • Relocating and scaling capacity becomes trivial.
  • Easy to learn and low impact to run. Devs could do all the work independently on their laptops and the dev cycle is rapid.

The Cons

  • Containers are not full machines. Accessing some resources is a bit more involved. Things like USB ports and involved networking changes have Docker-specific challenges. It was nothing we couldn’t work through, but not as simple as before.
  • Jenkins integration also requires a bit more work than a traditional slave. Again, very doable, but a little more complex.

Some Tips

  • Work from a Dockerfile not an image. Keep the Dockerfile and its dependencies in revision control. This gives you great transparency into the resultant image and allows branch and merge development by multiple folks.
  • Consider the use of inheritance. Docker has good support for this, and we didn’t think twice about hanging our container off of the Ubuntu one. But do you want specialized subclasses for all your testing types? Many examples out there do this, but if you add this branching factor in your testing it will likely cascade all the way to your production deployments. We strove for a uniform container for all tests, so that we could go with more uniform environments all the way down the line.
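For the record, the specialized-subclass pattern we decided against looks like this (image and package names are hypothetical): each test type gets its own small Dockerfile inheriting from a shared base.

```shell
# Write out a hypothetical child Dockerfile inheriting from a shared base image.
cat > Dockerfile.browser-tests <<'EOF'
FROM mycompany/jenkins-slave-base:latest

# Only the extra tooling this one test type needs goes here.
RUN apt-get update && apt-get install -y firefox xvfb
EOF
```

Every such child image is another variant to track through your environments, which is exactly the branching factor we chose to avoid.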


I’ve set up CI environments a number of times, and they usually suffer either from entropy as product requirements evolve, or become speed bumps as controls are put in place to combat the entropy. Docker is a great middle road. The container can rapidly evolve as your testing needs do, but if you work from a Dockerfile you can easily manage change and sanely flow it into production.

AssertJ, Stands On Its Own

Smell The Clutter

Java, with a healthy dose of testing code, has been my mainstay off and on for many years now.  JUnit and assertions have been a big part of my goto tools for most of that.  JUnit’s assertions felt underpowered from the start so, like most folks, I jumped on frameworks like hamcrest and fest. In some code that’s old, or has had a lot of hands in it, the result is a bit of a confused mess of all the possible frameworks.  Messy tests are as bad a code smell as any. Hamcrest hasn’t seen changes in ages and it appears fest is now abandonware, so I decided it was time to deal with the smell.

Fest had been my preferred assertion framework, and so AssertJ, the fest descendant with a still-active community, was an obvious choice.  Some initial test usages proved it to be a cleaner fest. And so I began refactoring tests to use only AssertJ, removing JUnit’s, fest’s and hamcrest’s assertions.

Using AssertJ

AssertJ has a nice clean approach to assertions. It uses a factory method, Assertions.assertThat(), to create a type-specific assertion. The type-specific assertions offer fluent interfaces that are largely polymorphic.

Simple Example

import static org.assertj.core.api.Assertions.*;

public void shouldProvideAnExample() {
    String actual = "This is a test";
    String[] actualArray = new String[]
               {"This", "is", "a", "test"};

    assertThat(actual).contains("is");
    assertThat(actualArray).contains("is");
}
In the example above, assertThat() creates the appropriate type of assertion (String, Array), and contains() on the string looks within the string, while on the array it searches the array’s elements.

Fluent API

One of AssertJ’s features I like is its fluent API. In particular it’s cleaner than hamcrest’s approach, IMHO. Consider, for example, determining if a string is between two values. There is an isBetween(), but even barring that, the fluent approach looks like:

    assertThat(name).isGreaterThan("A").isLessThan("M");

Whereas in hamcrest this would read (if there were greaterThan and lessThan):

    assertThat(name, allOf(greaterThan("A"), lessThan("M")));

Clearly the two approaches are equally expressive, but I find the fluent syntax easier to scan.

Custom Conditions

import org.assertj.core.api.Condition;
import static org.assertj.core.api.Assertions.*;

public void shouldBeEvenlyDivisibleBySix() {
    Condition<Integer> evenDivBySix = new Condition<Integer>("evenly divisible by six") {
        public boolean matches(Integer value) {
            return (value % 6) == 0;
        }
    };

    assertThat(36).is(evenDivBySix);
}
In this example a Condition is created that determines whether an integer is evenly divisible by six, and it is then used in assertions on integers.

Custom Assertions

Sometimes you have your own types, with corresponding common assertions you’d like to apply to them. Perhaps you want to know if a Student instance isInMiddleSchool()?
I won’t walk through all the interface implementations here, but the process is quite straightforward.

  1. Subclass AbstractAssert for Student and implement an isInMiddleSchool() method.
  2. Subclass Assertions, adding a new assertThat() factory method for the StudentAssert you created.
  3. Use your new factory and assertions as you always would (i.e. assertThat(student).isInMiddleSchool()).

That simple.
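Sketched out, those three steps look roughly like this (the Student type, and the grade range that counts as “middle school”, are hypothetical, just for illustration):

```java
import org.assertj.core.api.AbstractAssert;
import org.assertj.core.api.Assertions;

// Hypothetical domain type.
class Student {
    private final int grade;
    Student(int grade) { this.grade = grade; }
    int getGrade() { return grade; }
}

// Step 1: subclass AbstractAssert for Student and implement isInMiddleSchool().
class StudentAssert extends AbstractAssert<StudentAssert, Student> {
    StudentAssert(Student actual) {
        super(actual, StudentAssert.class);
    }

    public StudentAssert isInMiddleSchool() {
        isNotNull();
        if (actual.getGrade() < 6 || actual.getGrade() > 8) {
            failWithMessage("Expected student in middle school but grade was <%s>",
                            actual.getGrade());
        }
        return this;
    }
}

// Step 2: subclass Assertions, adding an assertThat() factory for StudentAssert.
class MyAssertions extends Assertions {
    public static StudentAssert assertThat(Student actual) {
        return new StudentAssert(actual);
    }
}

// Step 3: MyAssertions.assertThat(new Student(7)).isInMiddleSchool();
```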

The Refactoring

The transition from fest was basically changing some imports and a few small tweaks (see AssertJ’s Migrating from Fest guide).  JUnit’s assertions were largely an import change and a simple regex or two. Hamcrest was a bit uglier… and more satisfying.

What Didn’t Work Out?

Everything worked out except a few glitches around Hamcrest.

  • Hamcrest has native bean support via hasProperty that AssertJ doesn’t offer. I coded around those using AssertJ’s Conditions.  I probably could have found another Bean/POJO accessor class, but I was trying to reduce frameworks.
  • At least one mocking framework depends on Hamcrest for some features.  The dependence was minimal and didn’t introduce much clutter into the tests.
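To sketch the bean-property workaround (the Person type and its name property are hypothetical): where hamcrest would have matched on a bean property, an AssertJ Condition checks it directly.

```java
import org.assertj.core.api.Condition;
import static org.assertj.core.api.Assertions.assertThat;

// Hypothetical bean.
class Person {
    private final String name;
    Person(String name) { this.name = name; }
    public String getName() { return name; }
}

class PersonConditions {
    // Stands in for hamcrest's hasProperty("name", is(expected)).
    static Condition<Person> name(final String expected) {
        return new Condition<Person>("name " + expected) {
            public boolean matches(Person p) {
                return expected.equals(p.getName());
            }
        };
    }
}

// Usage: assertThat(somePerson).has(PersonConditions.name("Alice"));
```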
