Recently I’ve been working to migrate responsibility for a set of Jenkins slaves from our local office to an off-shore team. We already have some of our continuous integration infrastructure on Amazon AWS, so we thought we’d move these boxes there too, but the AWS setup required too much DevOps work and networking infrastructure to finish in the required timeframe. We were determined not to simply relocate the rat’s nest of special snowflake machines to the new site, though, so we looked at other solutions.
We took an inventory of our CI machines, boiled them down to a coherent list of needs, and then worked to create a single Dockerfile whose container could replace any of the individual machines. We cleaned up the jobs and unified as much as we realistically could. Within a week or so we had a base image as a starting point that handled a representative set of the jobs.
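A consolidated Dockerfile along these lines might start like the sketch below. This is illustrative only: the package list, the `jenkins` user, and the Ubuntu base are assumptions, not our actual configuration.

```dockerfile
# Hypothetical starting point for a consolidated CI slave image.
FROM ubuntu:14.04

# Toolchain the unified jobs needed (yours will differ).
RUN apt-get update && apt-get install -y \
    openjdk-7-jdk \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Dedicated, unprivileged user for the Jenkins slave process.
RUN useradd -m -s /bin/bash jenkins
USER jenkins
WORKDIR /home/jenkins
```

Every machine-specific quirk you fold into a file like this becomes visible, reviewable, and reproducible instead of living on one box.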
There is a lot of good Docker information out there. You can work with it on Linux, OS X, or Windows, and setting it up is trivial. We did most of our initial proof of concept clean-room to really understand things, but if you search GitHub for “docker Jenkins slave” you’ll find a bunch of good reference projects to learn from, or to base your Dockerfile on.
- The process forced us to review years of organic growth and clean a lot of cruft.
- The resultant Dockerfile is coherent and easy to maintain.
- The Dockerfile allows devs to address changing demands rapidly and communicate them lucidly to people managing production infrastructure.
- Docker provides us a great mechanism for setting up and tearing down pristine test environments.
- Relocating and scaling capacity becomes trivial.
- Easy to learn and low impact to run: devs could do all the work independently on their laptops, and the dev cycle is rapid.
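The pristine-environment point above is mostly one flag. A sketch, assuming an image tagged `ci-slave` and a hypothetical `run-tests.sh` job script:

```shell
# Run a throwaway container for one test session; --rm discards all
# filesystem changes on exit, so every run starts from the same
# pristine image.
docker run --rm -it ci-slave /bin/bash

# Or run a single job non-interactively:
docker run --rm ci-slave ./run-tests.sh
```

Tearing down is automatic; there is no stateful machine to clean between runs.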
- Containers are not full machines, so accessing some resources is a bit more involved. Things like USB ports and involved networking changes have Docker-specific challenges. It was nothing we couldn’t work through, but not as simple as before.
- Jenkins integration also requires a bit more work than a traditional slave. Again, very doable, but a little more complex.
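One common integration pattern (an assumption here, not necessarily our exact setup) is to run sshd inside the container and attach it to Jenkins as an ordinary SSH slave:

```shell
# Assumes the ci-slave image also installs and starts sshd.
# Expose the container's SSH port on the host.
docker run -d --name jenkins-slave-1 -p 2222:22 ci-slave

# Then add a Jenkins node pointed at the Docker host on port 2222,
# authenticating as the container's jenkins user.
```

Plugins exist that manage slave containers for you, but the manual SSH route is a simple way to understand what the extra work looks like.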
- Work from a Dockerfile, not an image. Keep the Dockerfile and its dependencies in revision control. This gives you great transparency into the resultant image and allows branch-and-merge development by multiple folks.
- Consider the use of inheritance carefully. Docker has good support for it, and we didn’t think twice about hanging our container off the Ubuntu one. But do you want specialized subclasses for all your testing types? Many examples out there do this, but if you add that branching factor to your testing it will likely cascade all the way to your production deployments. We strove for a uniform container for all tests, so that we could use more uniform environments all the way down the line.
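The Dockerfile-in-revision-control workflow can be as plain as the following (the repository URL is hypothetical):

```shell
# The Dockerfile lives in git alongside any scripts it COPYs in;
# the image is always rebuilt from source, never hand-modified.
git clone git@example.com:ci/docker-slave.git
cd docker-slave

# Tag the image with the commit it was built from, so any running
# container can be traced back to an exact revision.
docker build -t ci-slave:$(git rev-parse --short HEAD) .
```

Tagging with the commit hash makes “which image is production running?” a one-line answer.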
I’ve set up CI environments a number of times, and they usually either suffer from entropy as product requirements evolve or become speed bumps as controls are put in place to combat the entropy. Docker is a great middle road. The container can rapidly evolve as your testing needs do, but if you work from a Dockerfile you can easily manage change and sanely flow it into production.