JUnit5, Gradle, No Jacoco?!

[Updated 6/4/17: Revisited this; it still appears to be broken.]

One of the features I’d proposed for JUnit, default method tests (based on my contract tests package), made it into JUnit5, so it’s been on my queue to try it out. When I settled in to try it, I took my standard approach to learning a new tool: I grabbed one of my existing projects and set about applying JUnit5 to it.

I looked over the JUnit5 docs and some examples, and it all looked pretty good. But when I started to use it in my project I caught a whiff of a bad smell pretty quickly. The project I’d chosen was built with Gradle, which is by far my build tool of choice at this point. JUnit4 is well integrated into Gradle: you don’t need to do much more than add a single dependency to use it. JUnit5 had a new Gradle plugin, which required a bit more hand-holding, and there were several additional dependencies, some required and some optional.
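For reference, the 2017-era JUnit5 Gradle wiring looked roughly like this sketch; the versions and the vintage line are illustrative of that milestone period, not copied from my build:

```groovy
// Sketch of the extra hand-holding JUnit5 required at the time.
buildscript {
    repositories { mavenCentral() }
    dependencies {
        // The new JUnit5 Gradle plugin
        classpath 'org.junit.platform:junit-platform-gradle-plugin:1.0.0-M4'
    }
}
apply plugin: 'org.junit.platform.gradle.plugin'

dependencies {
    // Required: the new API plus its engine
    testCompile 'org.junit.jupiter:junit-jupiter-api:5.0.0-M4'
    testRuntime 'org.junit.jupiter:junit-jupiter-engine:5.0.0-M4'
    // Optional: run existing JUnit4 tests on the new platform
    testRuntime 'org.junit.vintage:junit-vintage-engine:4.12.0-M4'
}
```

Contrast that with JUnit4, where a single testCompile 'junit:junit:4.12' line was enough.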

Migrating my tests was fairly painless. Some of the annotation name changes seemed a tad gratuitous, but no biggie. Configuring the new plugin to actually work with the tests was certainly not as easy as it was for JUnit4, but it got done in minutes, not hours.

And then the plugin’s smell turned into a reek when I pushed the code and it fell over in continuous integration.  The JUnit part was fine, all my tests passed without problem, but my code coverage report wouldn’t run. Not at all.  No Jacoco output.

I am a TDD practitioner, so mostly my tests come first. That said, I’m also a code coverage fan. I like to know how well I’m doing with the tests I write, and to have a nag when I cheat and just crank out boilerplate code untested. I depend on Jacoco for that. Sure enough, a quick search proved my fears true: the JUnit5 Gradle plugin just does not work with Jacoco. Gradle being what it is, i.e. very open to hacking, I found several “fixes”, but they didn’t work for me.

So the JUnit5 work is now off on a code branch, and I’m back to JUnit4.  I’ll keep an eye on things and when it’s sorted out I’ll revisit it.


An Illuminating Example of Inheritance vs. Composition in Object-Functional Programming

The Scenario

I’m filtering out a small number of objects from a collection in Java 8. Nothing special there: I streamed the collection and used a Predicate. Done. But then the collection moved out into a cloud database, and I found myself pulling the entire collection over the wire to filter out a small set of objects. While the code still worked perfectly, performance really suffered. Additionally, the collections were going to continue to live in both places, so I needed to find a way to handle both well.

Taking Stock

The cloud database in question was Orchestrate.io, which offers server-side filtering based on Lucene queries. So obviously that’s what I wanted to use to reduce the traffic over the wire. I considered moving away from my Predicates and introducing a query abstraction that could be converted either to a Predicate or to a Lucene query, depending on where the collection lived. But that felt like over-designing the solution: both Predicates and Lucene queries are basically built up of comparisons and boolean operators, so there ought to be a way to convert one model directly to the other, and I already had the Predicates in place.

The Goal

My Predicates tested field/value pairs in beans, and could be built up through the negate, and, and or methods. For example:

Predicate<Bean> predicate = new BeanPredicate("lastname", "Doe")
       .and(new BeanPredicate("firstname", "John")
           .or(new BeanPredicate("firstname", "Jane")));

From that I wanted to derive a Lucene query:

lastname:"Doe" AND ( firstname:"John" OR firstname:"Jane" )

I decided that I could override the toString method to make the Lucene representation available there.

Iteration 1: Subclassing

Java predicates use lambdas to implement the negate, and, and or operator tests, but I needed the toString method to change for all those operators as well. So I went with subclasses for the various operators. My code looked something like this example of the and operation:

class BeanPredicate implements Predicate<Bean> {
  private final String label;
  private final String value;

  ...
  public BeanPredicate and(BeanPredicate other) {
    return new BeanPredicate(label, value) {
      @Override
      public boolean test(Bean bean) {
        return super.test(bean) && other.test(bean);
      }
      @Override
      public String toString() {
        return super.toString() + " AND " + other.toString();
      }
    };
  }
}

This passed all tests but damn was it verbose and ugly.

Iteration 2: Composition With Functions

My next refactoring changed test and toString over to Functions, and had the different operations compose the proper implementations:

class BeanPredicate implements Predicate<Bean> {
  private final String label;
  private final String value;
  private final Function<Bean, Boolean> test;
  private final Function<BeanPredicate, String> toString;
  ...

  public BeanPredicate and(BeanPredicate other) {
    return new BeanPredicate(label, value,
        bean -> test.apply(bean) && other.test(bean),
        bp -> toString.apply(this) + " AND " + other.toString());
  }
}

This may not appear glaringly different but it has a number of advantages. The operation implementations are cleaner, easier to read, and do not involve an anonymous class. When you consider that the bulk of the code is in the operations, being cleaner there pays off.
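To see the whole composed design run end to end, here is a minimal, self-contained sketch. The constructors, the map-backed Bean, and the or formatting are my own fill-ins for the elided parts, not the original implementation:

```java
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

// Minimal stand-in for the bean type: field name -> value.
class Bean {
    final Map<String, String> fields;
    Bean(Map<String, String> fields) { this.fields = fields; }
}

class BeanPredicate implements Predicate<Bean> {
    private final Function<Bean, Boolean> test;
    private final Function<BeanPredicate, String> toString;

    // Leaf predicate: a field/value equality test.
    BeanPredicate(String label, String value) {
        this.test = bean -> value.equals(bean.fields.get(label));
        this.toString = bp -> label + ":\"" + value + "\"";
    }

    // Composite predicate: both behaviors supplied as functions.
    private BeanPredicate(Function<Bean, Boolean> test,
                          Function<BeanPredicate, String> toString) {
        this.test = test;
        this.toString = toString;
    }

    @Override
    public boolean test(Bean bean) { return test.apply(bean); }

    @Override
    public String toString() { return toString.apply(this); }

    public BeanPredicate and(BeanPredicate other) {
        return new BeanPredicate(
            bean -> this.test(bean) && other.test(bean),
            bp -> this + " AND " + other);
    }

    public BeanPredicate or(BeanPredicate other) {
        return new BeanPredicate(
            bean -> this.test(bean) || other.test(bean),
            bp -> "( " + this + " OR " + other + " )");
    }
}

class Demo {
    public static void main(String[] args) {
        BeanPredicate p = new BeanPredicate("lastname", "Doe")
            .and(new BeanPredicate("firstname", "John")
                .or(new BeanPredicate("firstname", "Jane")));

        // Prints: lastname:"Doe" AND ( firstname:"John" OR firstname:"Jane" )
        System.out.println(p);
        System.out.println(p.test(new Bean(
            Map.of("lastname", "Doe", "firstname", "Jane")))); // true
    }
}
```

The leaf constructor and the operator methods each just hand a pair of functions to the private constructor, which is the whole trick: no anonymous subclass needed.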

Conclusion

With an object-functional language, the general wisdom of preferring composition over subclassing becomes even more true, and it can be applied even more cleanly and powerfully. In particular, one common drawback of composition, polluting the class interface with indirect references through the composite types, is completely avoided when you’re simply assigning functions. You can take a look at the final implementation here.

GraphQL: Java Schema Annotations

[Related Update]

In my prior post on GraphQL server in Java I noted the complexity of defining the schema and committed to looking at the tools offered to ease the process. Here we are. I’ve migrated the schema definition over to using an annotation package, and I’m generally pleased with the results: it really did simplify the schema definition.

Class Definitions

Defining the classes was trivial. Simply adding a few annotations to the entity getter methods:
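As a sketch of what an annotated entity looks like: the GraphQLField annotation plays the role of the one in the graphql-java-annotations package, but is stubbed locally here, and the Category fields are illustrative, so the example stands alone:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Local stand-in for the annotation package's @GraphQLField.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface GraphQLField {}

class Category {
    private final String key;
    private final String name;

    Category(String key, String name) {
        this.key = key;
        this.name = name;
    }

    // Each annotated getter becomes a field of the generated GraphQL type.
    @GraphQLField
    public String getKey() { return key; }

    @GraphQLField
    public String getName() { return name; }
}
```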

Once those are in place, anywhere you reference that type in your queries or mutations will “do the right thing.”

Queries/Mutations

This was a bit less polished, but worked well enough too. The package offered a couple of approaches, and in the end I implemented the following pattern:
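Roughly this shape, as a self-contained sketch. getSource() is the real DataFetchingEnvironment method described below, but the interface is reduced to a local stand-in, and the DAO and method signatures are my illustrations, not the original code:

```java
import java.util.Map;

// Minimal local stand-in for graphql-java's DataFetchingEnvironment;
// getSource() is the method this pattern hinges on.
interface DataFetchingEnvironment {
    <T> T getSource();
}

// Illustrative data-access object, handed to the dispatcher as the context.
class CategoryDao {
    private final Map<String, String> namesByKey;
    CategoryDao(Map<String, String> namesByKey) { this.namesByKey = namesByKey; }
    String findName(String key) { return namesByKey.get(key); }
}

class Query {
    // Static fetcher methods: the annotation tool scans the class, so there is
    // no instance to inject a DAO into; it is recovered from the source object.
    public static String categoryName(DataFetchingEnvironment env, String key) {
        CategoryDao dao = env.getSource();
        return dao.findName(key);
    }
}
```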

There are a few things going on here to note. First, the annotation tool scans the .class definition, so either you use static methods, or, if you use instance methods, it’s going to call the no-argument constructor to create an instance. Since data fetchers need data, I was a little confused about what to do with static methods or a no-argument instance – how do you get your data access objects in there? Obviously you could make your DAO objects into singletons and get at them that way, but that seemed ugly. What I found, by digging into the code, was that the context object your GraphQL dispatcher passes in is accessible via the DataFetchingEnvironment.getSource() method. So I went with static methods that access the DAO via getSource().

Using the Annotated Classes

Once you’ve annotated your entities and created an annotated query and mutation class, here is how to create your schema:
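Something along these lines; this is a hedged sketch assuming the annotation package’s GraphQLAnnotations.object entry point and graphql-java’s schema builder, not a drop-in copy of my code:

```java
// Sketch: building the schema from the annotated classes.
GraphQLSchema schema = GraphQLSchema.newSchema()
    .query(GraphQLAnnotations.object(Query.class))
    .mutation(GraphQLAnnotations.object(Mutation.class))
    .build();
```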

Conclusions

I felt the annotations definitely made the code simpler and cleaner.  About my only complaint was that the documentation and examples were so terse that I ended up tracing through the code to figure out how some of it came together.

GraphQL: Java Server, JavaScript Client

[Read the Update here!]

Github is a service I respect and depend on so its adoption of GraphQL for an update to their API pushed me to look into GraphQL.

The Approach

People suggest that a good place to start with GraphQL is to migrate an existing RESTful service, so I decided to try it out on one of my existing Java RESTful services. That service has a Java backend with a simple jQuery-based JavaScript UI. So I started looking for a Java-based GraphQL server and a simple JavaScript GraphQL client.

  • For the server side I settled on graphql-java.
  • For the client side I reviewed several offerings but they were all rather complex for my needs, so with about 40 lines of JavaScript I cooked up my own client.

I also decided to follow the generally accepted wisdom that you only tackle the data management parts of your API with GraphQL,  and leave things like authentication as they were.

The Server

The graphql-java package came with some good examples that let me start trying it out right away. But the service I was migrating was based on tools that didn’t exactly line up with their examples, so it wasn’t plug and play. My service was written with Spark: “A micro framework for creating web applications in Java 8 with minimal effort.” As it turned out, connecting graphql-java to Spark was straightforward. Associate something like the following with the graphql HTTP POST path:
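A hedged sketch of that dispatcher; it assumes Spark’s lambda routes, a Gson instance for JSON, and the graphql-java execute/ExecutionResult API of the time, with error handling elided:

```java
// Sketch: dispatching GraphQL over Spark's POST route.
post("/graphql", (request, response) -> {
    Map<String, Object> payload = gson.fromJson(request.body(), Map.class);
    String query = (String) payload.get("query");

    ExecutionResult result = GraphQL.newGraphQL(schema).build().execute(query);

    response.type("application/json");
    return gson.toJson(Collections.singletonMap("data", result.getData()));
});
```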

And your service can now dispatch GraphQL requests in JSON of the form:
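That is, the standard GraphQL-over-HTTP JSON envelope; the query itself here is illustrative:

```json
{
  "query": "{ categories { key name } }",
  "variables": null
}
```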

With that dispatcher in place you can start the process of working on your GraphQL schema implementation, where the magic takes place.

The GraphQL Schema

The schema, at least in graphql-java, is where a lot of the work starts and ends. In a RESTful service, you tend to map a single HTTP request path and operation to a specific Java method. With graphql-java the schema is a holistic description of types, requests, and how to fulfill them. It almost has the feel of a Guice module, or a Spring XML configuration.  There are tools designed to simplify schema creation, but since I had set out to learn GraphQL I decided to feel the pain and create the schema by hand.  Every type exposed by the API had to be described. Every request had to be described. The associations between the requests and the code fulfilling them had to be described.  Again the graphql-java examples and tests proved a good resource but nothing there was a drop in solution.

Describing an Entity

The service I ported was a snippets app. One of its domain classes is a way to categorize a snippet:

To add this class to the GraphQL schema you have to fully describe it:
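By hand, that description looks roughly like this sketch of graphql-java’s builder API; the Category field names are my assumptions about the class, not taken from the original:

```java
// Sketch: fully describing the Category type by hand.
GraphQLObjectType categoryType = GraphQLObjectType.newObject()
    .name("Category")
    .description("A way to categorize a snippet")
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("key")
        .type(Scalars.GraphQLString)
        .build())
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("name")
        .type(Scalars.GraphQLString)
        .build())
    .build();
```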

Queries and Mutations

Once you’ve described the classes, you need to describe how you’ll retrieve instances. Here’s an example of retrieving by key, and retrieving them all:
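In the same builder style, a sketch; categoryType is the type described above, and dao stands in for whatever data access the fetchers actually used:

```java
// Sketch: a query type with by-key and retrieve-all fields.
GraphQLObjectType queryType = GraphQLObjectType.newObject()
    .name("Query")
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("category")
        .type(categoryType)
        .argument(GraphQLArgument.newArgument()
            .name("key")
            .type(Scalars.GraphQLString)
            .build())
        .dataFetcher(env -> dao.find(env.getArgument("key")))
        .build())
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("categories")
        .type(new GraphQLList(categoryType))
        .dataFetcher(env -> dao.findAll())
        .build())
    .build();
```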

To create, update, or delete objects you’ll need to define the “mutations”. Here’s my category create:
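A sketch of the create mutation in the same style; again, dao and the argument names are illustrative:

```java
// Sketch: a mutation type with a category-create field.
GraphQLObjectType mutationType = GraphQLObjectType.newObject()
    .name("Mutation")
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("createCategory")
        .type(categoryType)
        .argument(GraphQLArgument.newArgument()
            .name("name")
            .type(Scalars.GraphQLString)
            .build())
        .dataFetcher(env -> dao.create(env.getArgument("name")))
        .build())
    .build();
```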

From these samples you can tell that the schema is, as the name implies, a detailed description of the classes and the operations on them. The format is verbose, but I found I pretty quickly picked up the syntax and semantics, and developing the complete schema wasn’t too bad a chore. Also, don’t forget there are tools claiming to ease the process that might be useful.

The Client

I developed the server side to completion, using various tests to drive that. Once I had a server that supported my old RESTful operations via GraphQL, I approached the client side. My client was nothing more than jQuery performing HTTP gets/posts/deletes, and it wasn’t particularly tidy. I looked into existing JavaScript clients, but they seemed to all be npm based or integrated into a framework. So I just wrote the following:

With that my JavaScript could do things like:

My Conclusions from the Experience

So with a couple days of learning and coding I got my snippets service moved from RESTful to GraphQL API.  What are my initial observations?

  • The work is in the schema.  I’ll be looking at the tools to help there in my next refactoring.
  • Both the server and the client benefitted from GraphQL’s more holistic approach. Where GraphQL had a single schema, dispatcher, and client pattern, RESTful had used GET/POST/DELETE methods, each somewhat distinct.
  • The performance suffered a bit. I’ve done a lot of RESTful services, and this was my first foray into GraphQL, so I’m not surprised that it wasn’t quite as quick end to end. I’m suspicious of some of the magic (likely reflection based) that graphql-java uses to execute the schema queries. That said, I’m betting I can improve the performance by doing some things better than my first attempt.

But overall I liked the GraphQL experience and will probably advocate it over RESTful going forward.

As always the complete code for my work is in github.