First Project With Ratpack

One of my trade skills is server side Java, which mostly means writing services in Java. Recently those have been servlet based microservices. I’ve used the Spark Framework a lot, but as clean as that framework is, there’s no denying that servlets are the man behind the curtain, and you can’t avoid paying him attention any time you do anything of substance. Servlets work well enough, but they’re showing their age, so I always keep an eye open for other lightweight service architectures.

Ratpack

When I saw Ratpack I decided to give it a go. It bills itself as “Simple, lean & powerful HTTP apps”.  It’s built on a newer, carefully curated selection of technologies: Java 8, Netty, Guice, asynchronicity, event driven design … it looked promising.

Giving it a Try

I took my standard approach to evaluating anything new, and migrated one of my kata projects over to it.  The obvious choice was my snippets service.  I created a feature branch, and damned if just a few hours later I didn’t have a version of the service that I felt was cleaner and faster, and the branch was merged to master.
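
To give a sense of just how lean, the whole embedded-server bootstrap collapses to something like this (a minimal sketch, not my actual snippets code):

    import ratpack.server.RatpackServer;

    public class Hello {
        public static void main(String[] args) throws Exception {
            // The entire HTTP app: start a server, register one GET handler.
            RatpackServer.start(server -> server
                .handlers(chain -> chain
                    .get(ctx -> ctx.render("Hello from Ratpack!"))));
        }
    }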

Likes

What I liked about Ratpack:

  • It appeared to live up to its credo: simple, lean and powerful.
  • It seemed to produce a quick app.
  • It’s clearly not a servlet facade. The APIs are largely consistent, and you don’t keep hitting the “and here’s where it becomes servlets” boundary over and over.
  • It’s documented, with both an online manual and Javadocs.

The Imperfections

Here are the rough edges from my perspective:

  • The documentation. Yup, it’s both a like and a dislike. The manual is useful, but it leans towards “here’s a cool way to do this cool thing.” When I wanted to do the pedestrian “serve static content from inside a fat jar”, I had to hit Google and hunt around various boards (what I eventually pieced together is sketched after this list).
  • The Gradle syntactic sugar that magically pulls in dependencies. I wish they would just list the needed dependencies and leave it to you to include them. I really don’t want magic in my Gradle build, where some dependencies are implied but most must be listed. I prefer less magic and a little more work in exchange for consistency.
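
On that static content point, what I eventually pieced together looked roughly like the following. This is a hedged sketch from memory; the "public" directory name is my own, but the key is Ratpack’s BaseDir.find(), which locates a marker file on the classpath so the base dir can live inside the fat jar.

    import ratpack.server.BaseDir;
    import ratpack.server.RatpackServer;

    public class StaticContent {
        public static void main(String[] args) throws Exception {
            RatpackServer.start(server -> server
                // BaseDir.find() searches the classpath for a ".ratpack"
                // marker file, so this works from inside a fat jar.
                .serverConfig(config -> config.baseDir(BaseDir.find()))
                .handlers(chain -> chain
                    // Serve everything under <baseDir>/public as static files.
                    .files(files -> files.dir("public"))));
        }
    }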

To Keep in Mind

I was migrating Java 8 code which had lambdas sprinkled throughout.  In at least one place things started happening out of order. I had to keep in mind that Ratpack leans towards asynchronous/reactive paradigms, and that some of my lambdas were now off in other threads running in parallel; I had to make sure they had completed before using their results.
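
Here’s a minimal sketch of the trap, assuming Ratpack’s Blocking/Promise API (slowLookup() is a made-up stand-in for whatever blocking call your handler makes):

    import ratpack.exec.Blocking;
    import ratpack.handling.Context;

    public class AsyncGotcha {
        public static void handle(Context ctx) {
            Blocking.get(AsyncGotcha::slowLookup) // runs on a blocking thread
                    .then(ctx::render);           // runs later, once the value exists
            // Code here executes immediately, quite possibly before .then()
            // fires, so never consume the result outside the callback.
        }

        private static String slowLookup() throws Exception {
            Thread.sleep(100); // stand-in for a real blocking call
            return "snippet";
        }
    }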

The Bits

If you’re curious to see what the Ratpack based version of my snippets server looks like, it’s on my GitHub.


Jenkinsfile: Infrastructure as Code (Chaos?)

Anything I do more than a couple of times, and look likely to do again, I script to automate … I guess I’m an advocate of PaC (Process as Code). I use whatever DSL or tool best fits the situation, and so I’m often learning new ones.  Jenkins has long been my go-to for CI/CD; applied right, it can automate and order a lot of the work of building and deploying.  But Jenkins, which started as Hudson, had a very UI-biased architecture, and the evolution from then to now seems to have been largely organic.  I’ve depended on Jenkins, and been happy with the results of my work with it, but often the solutions felt a bit like a Rube Goldberg machine, cobbling together a series of partially working bits to get the job done.

Enter Pipeline/Jenkinsfile

Then along came the Pipeline plugin, which allowed for scripting things with the Jenkinsfile DSL.  I dove right in. Pipelines allowed me to stop configuring complex jobs in the Jenkins UI, and move all of that out into Jenkinsfile scripts managed in my SCM.  Awesome! Or mostly awesome.  Immediately I started hitting issues with the Jenkinsfile documentation and the pipeline plugins.  The DSL spec seemed to be a moving target, and the documentation for some of the pipeline plugins, like the AWS S3 upload/download, was sparse to nonexistent.   So it was two steps forward and one back.  You could move all your configuration and process description out into code, and the code could reside in your SCM, but the DSL was an inconsistent, poorly documented patch job.

Enter Blue Ocean

Then the Blue Ocean UI revamp of Jenkins came out, and it’s all about pipelines and Jenkinsfiles.  There was documentation. There was a plan. They seemed to be wrangling the Jenkinsfile DSL ecosystem into sanity!

Maybe … Maybe Not

I don’t know if it’s Blue Ocean’s fault or not, but suddenly there were Declarative and Scripted (aka Advanced) variants of the pipeline DSL. They share common roots, but they are not the same, and while the scripted variant is richer, it’s not a full-on superset. Apparently I’d been working in the land of scripted, and Blue Ocean was all documented as declarative. It took me a good four hours to figure this out and understand why my scripts were exploding all over the place. Eventually I found the easter egg documentation hidden away behind the unexplained “advanced” links. Then I spent about an hour figuring out the tricks I needed to get the new plugins working with my old scripted skills, and how to hand roll the features that I lost by not using declarative.
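
For anyone who hasn’t hit this split yet, here’s the same trivial build expressed in both dialects (a minimal sketch; the Gradle step is just a placeholder):

    // Declarative: the rigid, block-structured dialect Blue Ocean documents.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './gradlew build'
                }
            }
        }
    }

    // Scripted (aka Advanced): plain Groovy, richer but with fewer guard
    // rails. (An alternative Jenkinsfile, not a continuation of the above.)
    node {
        stage('Build') {
            sh './gradlew build'
        }
    }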

So yeah… Blue Ocean is absolutely two, maybe three, steps forward, but as with everything Jenkins, there was that one step backward too. It’s clearly an improvement, but it leaves you feeling like you’re basing your progress on a crufty hack.


The Testing Stack, From My Java Bias

I’ve been at this 30 years, and at Java since its introduction, so some folks will feel my opinions are a bit old school. I don’t think they are; I think they’re just a bit more thorough than some folks have the patience for.  Recently I butted heads with an “Agile Coach” on this, and they certainly felt I wasn’t with the program. But I’ve been a practitioner of agile methods since before they were extreme, and again, I don’t think the issue is that I’m too old school; it’s just that I believe the best agile methods still benefit from some traditional before and after.

My View of The Testing Stack

People get obsessed with the titles of these, but I’m not interested in that debate; I’m just enumerating the phases in approximate chronological order:

  • Unit: TDD these please!
  • Scenario: Test as many scenarios as you can get your hands on. Unit testing should catch all the code paths, but a tool like Cucumber makes enumerating all the possible data sets in scenarios much easier.
  • End-to-end/Integration:  Put the parts together in use case sequences, using as many real bits as possible.
  • Performance/Stress/Load:  Does your correct code run fast enough and resiliently enough? These tests can appear earlier, but they need to happen here too.
  • QA: Yup … I still feel there’s value in a separate QA phase; this is what got the agile coach red in the face… more on this below.
  • Monitoring/In Situ: Keep an eye on things in the wild. Keep testing and monitoring them.

So this is a lot of testing, and plenty of folks will argue that if you get the earlier steps right, the later ones are redundant.  Obviously I don’t agree. I see value in each distinct step.

  • Unit: Errors cost more the further along you find them, so the earlier one tests the better. Get as much coverage at the unit level as you can tolerate.
  • Scenario: Using unit tests to exhaustively cover every possible scenario can be … exhausting, but it’s worth sweeping through mass combinations and permutations if you can.
  • E2E/Integration: Now that you know the pieces work in isolation, you need to see that they work together. This will shake out problems with complex interactions and ensure that contracts were adhered to over time.
  • Performance: If it’s too slow or too fragile to use, it’s useless.
  • QA: Here is where a lot of folks say I’m being old school.  If you’ve gotten through the prior steps, isn’t this redundant?  No.  No matter how good your working relationship and communications with your product team and customers are, your development team always comes at its job with a technical bias. At least I hope your developers are technical enough to have a technical bias. At a bare minimum, having a separate pass of testing that is more closely aligned with the business perspective makes sure that you avoid issues with the “any key” and the like.  Beyond that, this pass can help you hone and improve product goal communications, and act as a feedback loop to improve the prior testing steps.  In a perfect world it would become redundant.
  • In Situ: Keep your eyes on the product in the wild, and test it there too. You can reuse prior tools and scenarios here, but testing chaotic, real-world scenarios is invaluable. This is about watching the canary in the mine, and seeing whether you’ve even been worried about the right canary.

From a cost perspective you always want to front load your testing as much as possible, but if your goal is quality, the old adage “measure twice, cut once” should be amended to “measure as often as you can practically tolerate, cut once”.  Needless to say, automation is the key: tool everything, and make the testing happen all over the place without human intervention.

Rant: Docker, NAT, VPN, EC2 … Fail!

My goal for the day was to get Consul infrastructure set up at work.  I’d done proof of concept (PoC) work on all of the various bits, and it was just a matter of putting the parts together:

  • Everything running from Docker images
  • The Consul servers on EC2 instances (in a VPC)
  • The Consul agents on a mix of client machines, some Linux, some OSX

I was working from home, and that seemed like it would help, but I didn’t realize that it would actually make the task basically impossible.

What Got Me

Consul is all about networking; that’s its game.  So as I tried to glue all the PoC bits together, here’s what got me:

  • Getting the EC2 security groups fixed up for Consul’s many ports – annoying but doable
  • Dealing with the fact that the networking in Docker for OSX isn’t quite right. It lacks some of the bridging features, and things like --net=host “work” but behave non-intuitively
  • Working from home, I was on a NAT’d machine connected through a VPN, so my address wasn’t always my address and some traffic wouldn’t traffic.
  • The universe hates me.  Ok, that’s hyperbolic whining, but by the day’s end I was sure it was so.

Basically, combining all those gotchas meant:

  • Every example for Consul in Docker was from Linux, and there was a 50/50 chance it would fail mysteriously on OSX (see the sketch after this list).
  • The errors I hit were often a simple lack of connectivity … and so I was left trying to diagnose silence … not a lot to go on there. Bueller? Bueller? Bueller?
  • There was so much “useful” information out there that I just kept trying… I mean, if I just tried one more suggestion, that would get it right?
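
To make the Docker for OSX point concrete, the Linux examples all boiled down to something like this (the bind address is made up). On Docker for OSX, “host” means the hidden Linux VM that runs the containers, not the Mac itself, so the same command appears to work while Consul’s ports never reach the Mac’s network:

    # Canonical Linux-style invocation: join the host's network namespace so
    # Consul's gossip and RPC ports are directly reachable.
    docker run --net=host consul agent -server -bind=10.0.0.5 -bootstrap-expect=1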

Fail

Tomorrow I’ll be back on site, and that will eliminate the NATing and the VPN.  Perhaps with those two complexities removed I’ll make progress.   I could always run Consul natively rather than in Docker… but I really don’t want to admit defeat.

Java Application Reloader

I have some microservices that I’ve been moving off commercial PaaS platforms to a Raspberry Pi on my home network. It’s been fun and worked out well, but I did lose one thing in the move: support for continuous delivery.  Previously Travis-CI was able to deploy the services to the PaaS, but with the new setup I had to figure out a new solution.

Looking at the service, and seeing as it’s a single jar file, I felt I ought to be able to come up with a simple solution.  I knew I could get the jar files onto the Pi; all I needed was some way to cleanly reload the service from the new jar.

A bit of searching turned up this older article on how to Programmatically Restart a Java Application.  It was close to what I wanted, if I could just enhance it to behave like the update feature you often see in apps, where it doesn’t just restart, but also updates the version.

Reloader is Born

Starting from the article mentioned, I developed reloader.  Reloader will restart a Java application, but additionally it can:

  • Act as a signal handler, so that you can kick it off by sending a signal to the application, optionally creating a pid file.
  • Find the newest version of the jar containing the application, and switch to that, allowing for pseudo in place upgrades.

In addition to adding features, I brushed six years of dust off the code, and deployed the package to jcenter for easier public use.

Using Reloader

Using reloader couldn’t be much simpler. It does pretty much everything on its own.  All you need to do is call one method explicitly:

    Reloader.restartApplication();

Or set it up as a signal handler:

    Reloader.onSignal("USR2");

That’s all it takes.
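
Wired into a service’s entry point, the whole integration is a sketch like this (startService() is a hypothetical stand-in for whatever boots your app):

    public class SnippetsService {
        public static void main(String[] args) throws Exception {
            // Install the handler first: from now on, `kill -USR2 <pid>`
            // finds the newest jar and relaunches the application from it.
            Reloader.onSignal("USR2");
            startService(); // hypothetical: your app's actual bootstrap
        }

        private static void startService() {
            // ... start your service here ...
        }
    }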

My Use Case

To achieve continuous delivery from Travis-CI, I added two very restricted and hardened breaches in my Raspberry Pi’s security:

  1. A jar can be pushed to the server remotely
  2. The service can receive a USR2 signal remotely

I then set up Travis-CI to perform those two actions after any successful build. With reloader added to the service, those two actions were all that was needed.
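
The post-build step boils down to roughly two commands (the hostname, paths, and pid file location here are all made up for illustration):

    # 1. Push the freshly built jar to the Pi.
    scp build/libs/snippets.jar pi@pi.example.com:/opt/snippets/

    # 2. Poke the running service; reloader then swaps in the newest jar.
    ssh pi@pi.example.com 'kill -USR2 $(cat /opt/snippets/snippets.pid)'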

In Conclusion

By its nature reloader is a bit rough around the edges, and while it has some test coverage, I’ve really only burned it in for my use case. That said, if you want to give it a try, or just look at its bits and pieces, head over to GitHub and go wild.

From OpenShift to a Raspberry Pi

For some time I’ve parked a personal snippets service I wrote on OpenShift’s free tier. It seemed like the free ride wasn’t going to last, so it was time to move on.  I use my snippets on a regular basis, but I didn’t really want to pay for a 24×7 internet presence… and I had a Raspberry Pi lying about, so I thought, why the heck not?

Could it Work?

My first step was to see if this was even doable…

  • Could a Pi host my service?  Java 8, enough memory, enough storage, enough CPU. Pass √
  • Was my home network up to it?  I’ve a decent connection, and I verified my ISP does allow inbound connections. My router can port map. I don’t have a registered domain name or a fixed IP but I knew how to deal with that.  Pass √
  • Was it a sane choice? The answer to that has never stopped me before so … Pass √

Bringing the Pi Out of Retirement

I had a Pi that I hadn’t used in a year or so just gathering dust.  All I had was the Pi and its power supply though.  No SD card. No compatible keyboard.  A quick trip to a chain drug store up the road got me a 16GB SD card for $7 on sale.  Google yielded a slew of tutorials when asked “set up Raspberry Pi without keyboard”.   So…

  1. Format the SD
  2. Load it with Raspbian
  3. Pop it in the Pi
  4. Cable Pi to network
  5. Power Pi up
  6. SSH in…. denied ?!?  FAIL X

The tutorials were mostly out of date with regard to a security update in Raspbian: it no longer has the ssh daemon running by default. The solution is to put the SD card into a computer and create an empty file named ssh on the FAT32 /boot partition. Return the SD card to the Pi, reboot, and I could ssh in. Pass √
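
On a Mac, with the SD card mounted, the fix is literally one command (the mount point may differ on your machine):

    # The boot partition shows up as /Volumes/boot on OSX; sshd is re-enabled
    # on the next boot if a file named "ssh" simply exists there.
    touch /Volumes/boot/ssh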

Before I went further, I loaded my service onto the Pi, ran it, and sure enough it ran like a charm.  Pass √

Opening a Port and Dynamic DNS

Now to make the Pi accessible to the outside world.  I set up my router to map the selected port, targeted my network’s external IP address with that port, and the request went right through to the Pi; the server worked! Pass √

Since the external IP address can change, and using raw IP addresses is clunky, I set about setting up a Dynamic DNS service. I chose a free one: ChangeIP.com. These services require some client software running on your Pi to keep your possibly changing IP address in sync with the name you chose.  So back on the Pi I set up ddclient, which worked fine with very little effort. I did an nslookup and got the right IP address.  Pass √

A note here on a partial fail: apparently my job’s HTTP proxy rejects dynamic DNS names out of simplistic paranoia, so from work I have to use the IP address, which, ironically, I can find with an nslookup that the network does allow (?!).

The Final Test

Okay: I took my phone, turned its wifi off, and targeted the appropriate URL.  I used the phone without wifi to ensure I was really coming in from the outside world – no cheating.   And… denied ?!?  FAIL X

Okay, so I said my ISP allowed inbound connections, right?  Well, a bit of further investigation taught me that they allow them, but not by default: again, a security enhancement.  It turns out you have to go to their website and turn them on… and the feature was broken.  I got on chat with a support person, and after following their required script, where I re-checked for them all the things I had already done, they made the change for me. And…  Pass !!!

I could get in from my phone and it all worked wonderfully.  Easy peasy :-)

Is it Free When it Costs so Much Time…

Clearly not.  So why do I bother with the free/cheap stuff? Ok, I’ll admit that, even though I know it’s not free if it costs my time, I can’t help liking cheap/free.  In particular, I know the free tiers of most services rarely stay around, at least not as free. But moving from one to the next forces you to learn new technologies, and teaches you how to write good, loosely coupled software. Also, the quality and longevity of a free service can be a good indicator of the general value of a product. If it’s unstable and evaporates in a few months, well, it’s not something I’m going to suggest at work or to others.

The Rime of the Ancient Mariner in Greece

I had a somewhat eclectic education, and as a grade school child I was tasked with memorizing The Rime of the Ancient Mariner.  So that you aren’t forced to wrap your head around this epic poem too, and for the sake of my tale, I’ll summarize. The poem tells of an old sailor who is impelled to continuously accost passers-by and tell them his tale: how his misdeeds doomed a ship and crew to destruction, and how from that he learned to act with compassion. Why do I mention the poem?  Because I’ve just lived it, or sort of. Your call.

I’m in the process of ending a six year relationship with someone. I’m not seeking judgement here; I accept my share of failures in the relationship, perhaps even the larger share. But for the sake of this tale, it’s worth noting that they had chosen to end things and left.  So when they ended up sick in a foreign land, Greece, and turned to me, some very sane people asked me: well, what’s that to you?  The poem didn’t come immediately to mind, but when I decided to help the person get back from Greece, I did find myself compelled over and over to explain the situation, and the image of that ancient mariner telling his crazy tale to one person after the next seemed on point.

This wasn’t going to be a simple case of sending a bit of cash, you see. They’d ended up in the hospital twice, and were diagnosed, treated, and recuperating, but were simply in no shape to travel on their own.  Every option I could find to get others to help, the US Embassy, insurance, private medical evacuation, all seemed to end in their being stuck alone in Greece for an indeterminate length of time, and at huge cost… so realistically someone just had to go there and bring them home.  And that someone really was, for a number of reasons, me. So I did.  I flew to Greece, organized the return trip, and flew them back.  A three day world tour: NYC, Amsterdam, Athens, Rome, NYC.  And it was touch and go at moments. Getting to the airport, through transfers, customs, and back home, with someone so exhausted that five minutes on their feet left them dizzy and shaking like a leaf, and with a stomach so weak that any peculiar smell had them gagging and even vomiting, was tough. But we got back home.

So, as I said, at first I recalled the poem because, working through the logistics of missed work, travel plans, finances, etc., I was accosting one person after the next, having to explain why I was doing this seemingly crazy thing.  But now, with it done, I think maybe, too, the poem came to mind because of its conclusion. I’m not sure I even consciously understood it as a child, but the message, to live compassionately, is pretty clear:

He prayeth well, who loveth well
Both man and bird and beast.

He prayeth best, who loveth best
All things both great and small;
For the dear God who loveth us,
He made and loveth all.

While I didn’t magically repair the relationship, or bring about any great change to my life, the fact that I acted compassionately and resolved at least this one crisis in a good way… well, that part feels deeply right.