Easier builders in Java

Anyone who has used the builder pattern to build simple Pojo-style Java classes probably knows that writing these builder classes quickly becomes unpleasant and decidedly not fun. You soon realize that your builders mimic the structure of your Pojo’s setters, and you end up almost duplicating half of the Pojo’s code for the sake of the pattern.

Following a recent post by Eric Mignot and some earlier reflections of my own on streamlining the writing of these builders, I have come up with a solution that will, I hope, greatly simplify trivial cases (that is, building simple Pojos) and, as the tool evolves, eventually cover slightly more complex cases.

So, let me introduce the Fluent Interface Proxy Builder. The tool only requires the developer to write the builder interface, not the implementation. The implementation is provided by a dynamic proxy that intercepts method calls on your interface and sets the corresponding properties on your Pojo.

Quick example. Suppose you have a simple Java Pojo:

public class Person {
    private String name;
    private int age;
    private Person partner;
    private List<Person> friends;

    public void setName(String name) {
        this.name = name;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public void setPartner(Person partner) {
        this.partner = partner;
    }

    public void setFriends(List<Person> friends) {
        this.friends = friends;
    }

    // getters omitted for brevity
}

To get a builder for this bean, write a builder interface following a few naming conventions:

public interface PersonBuilder extends Builder<Person> {
    PersonBuilder withName(String name);
    PersonBuilder withAge(int age);
    PersonBuilder withPartner(PersonBuilder partner);
    PersonBuilder havingFriends(PersonBuilder... friends);
    Person build();
}

Note: The super interface “Builder” used here is provided by the framework and declares a single “T build()” method; I repeated the “build” method in the example above for the sake of clarity. You may also use your own super interface if the one provided by the framework proves to be a problem.

To use your builder, first create an instance:

PersonBuilder builder = ReflectionBuilder
                           .implementationFor(PersonBuilder.class)
                           .create();

Then you may use this dynamic builder through your interface. The fluent example below assumes a static factory method aPerson() that simply wraps the creation call above:

Person person = aPerson()
                .withName("John Doe")
                .withAge(44)
                .withPartner( aPerson().withName("Diane Doe") )
                .havingFriends(
                    aPerson().withName("Smitty Smith"),
                    aPerson().withName("Joe Anderson"))
                .build();
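
The project page has the real details, but the core mechanism can be sketched roughly as follows. This is a simplified assumption of how such a proxy might work, handling only plain properties (the real tool also handles nested builders and varargs like havingFriends above); the demo Person bean stands in for the Pojo shown earlier:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;

public class BuilderProxySketch {

    // A tiny demo bean, standing in for the Person Pojo from the article.
    public static class Person {
        private String name;
        private int age;
        public void setName(String name) { this.name = name; }
        public void setAge(int age) { this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    public interface PersonBuilder {
        PersonBuilder withName(String name);
        PersonBuilder withAge(int age);
        Person build();
    }

    @SuppressWarnings("unchecked")
    public static <B> B proxyFor(Class<B> builderInterface, Class<?> beanClass) throws Exception {
        Object bean = beanClass.getDeclaredConstructor().newInstance();
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("build")) {
                return bean;                         // hand back the populated bean
            }
            // Naming convention: "withName" -> "setName", "havingFriends" -> "setFriends"
            String property = method.getName().replaceFirst("^(with|having)", "");
            Method setter = Arrays.stream(beanClass.getMethods())
                    .filter(m -> m.getName().equals("set" + property))
                    .findFirst()
                    .orElseThrow(() -> new NoSuchMethodException("set" + property));
            setter.invoke(bean, args[0]);
            return proxy;                            // return the proxy itself to keep the chain fluent
        };
        return (B) Proxy.newProxyInstance(
                builderInterface.getClassLoader(),
                new Class<?>[] { builderInterface },
                handler);
    }

    public static void main(String[] args) throws Exception {
        PersonBuilder builder = proxyFor(PersonBuilder.class, Person.class);
        Person person = builder.withName("John Doe").withAge(44).build();
        System.out.println(person.getName() + ", " + person.getAge()); // John Doe, 44
    }
}
```

Every call that is not build() is translated into a setter invocation on the hidden bean instance, which is why writing the interface alone is enough.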

Have a look at the Github project page for all the details and instructions on how to use it in your own project. You may use this freely under the terms of the MIT license.

Get it here!


It is also worth mentioning other alternatives that exist and deserve consideration:

The slight annoyance I see with the code-generating approaches is that, since the code is generated, regenerating will overwrite any naming customizations made after the initial generation. It also makes the builders harder to maintain over time, as the objects being built evolve. From my point of view, adding a method to an interface is quicker and more natural than regenerating the builders (and possibly overwriting custom names).


Delivering software more efficiently

Organizations today are always looking for ways to improve how they build software. To stay competitive in fast-paced markets, they have to optimize the delivery pipeline to bring features from ideation to market more rapidly. Many rightfully seek solutions by adopting agile or lean practices. To be fully effective, these methodologies also need to be supported by rigorous engineering practices such as those brought forward by Extreme Programming. Executed correctly, these are all very good ways of optimizing how you and your team build software.

Delivery = PRODUCTION

Unfortunately, building the software itself is just one part of the big picture. Your shiny new software is not worth anything until it’s out in the wild. To get the most out of any software project investment, organizations need to make sure their software is in users’ hands as soon, and as frequently, as possible, with minimal overhead. This is what I mean here by “more efficient delivery”.

Delivering good software is demanding. Delivering good software fast is quite a challenge. What I present below are techniques and practices that, when adopted, will have a direct impact on the time required for a feature (or your software altogether!) to go from idea to your users. These practices will be especially helpful to agile and lean teams, who strive to build software in small, “potentially shippable” increments. Used correctly, alongside recognized engineering practices, they can help transform potentially shippable into definitely shippable.

Always be ready for deployment

One of the first mind shifts a team must accomplish is making sure its code is always ready for deployment. This requires rigorous unit testing, slightly different software design paradigms, and working differently with source control.

Unit tests

Make sure your automated test coverage is top notch. Don’t necessarily aim for a 100% figure, but be confident that what is covered is covered intelligently and correctly. Unit tests have become almost ubiquitous, yet it’s unfortunate to see people still writing software without a good, pertinent test suite. Without a confidently complete test suite, your deployments may become much more embarrassing (and be accompanied by much more praying, voodoo incantation and cute small-animal sacrifice).

Feature toggles instead of feature branches

Design new, in-development features so that they can be toggled off instead of isolating them in separate source control branches. This allows the new code to be continuously integrated instead of falling farther and farther behind the main line, which results in long, painful merges. The practice also facilitates heavy, merciless refactoring, which feature branching often discourages.
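
To make this concrete, here is a minimal toggle sketch. The flag name (feature.new-checkout) and the two code paths are purely illustrative; in practice the flag would come from a configuration file or the environment:

```java
import java.util.Properties;

// Minimal feature-toggle sketch: unfinished features ship "dark" behind a
// flag flipped in configuration, so their code integrates continuously on
// the main line instead of living in a long-running branch.
public class FeatureToggleSketch {

    private final Properties config;

    public FeatureToggleSketch(Properties config) {
        this.config = config;
    }

    public boolean isOn(String feature) {
        return Boolean.parseBoolean(config.getProperty("feature." + feature, "false"));
    }

    public String checkoutPage() {
        // The new code path is merged but dormant until the toggle is flipped.
        return isOn("new-checkout") ? "new checkout" : "legacy checkout";
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("feature.new-checkout", "true");
        System.out.println(new FeatureToggleSketch(config).checkoutPage()); // prints "new checkout"
    }
}
```

Flipping the flag back instantly reverts to the old behavior, which also makes toggles a cheap safety net when a release goes wrong.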

Staged commits

Stage code commits in a special branch where tests are systematically (and automatically!) run to proof each commit, which is then (also automatically) promoted to trunk/master/main if all tests pass. This way your main line stays as stable as possible. Modern VCSs, like Git and Mercurial, make this kind of setup much easier.

Minimize the feedback loop

Problems found early cost less to fix. For that reason, one must strive to make sure potential problems are identified as early as possible. Make your unit tests run automatically upon each source control commit. Automate regular runs of your functional and performance test suites. When tests fail, make sure the team is clearly (and, again, automatically) notified so that it can switch its attention to fixing the error: they are not ready for deployment!

Automate everything

Deployment to any environment should happen at the push of a single button. Period. Script everything. Allow nothing to be executed by a human: where there are humans, there are errors. By having everything automated, you not only minimize the possibility of errors during deployment, you also make deployments quicker.

Use a tool to manage your database migrations; almost all modern development platforms offer one. Research the right one for your needs. Migrations can be generated by the tool and integrated into your deployment scripts so that they are applied automatically to the target environment. Also plan for the worst: your tool should support rollback (reverse migration) scripts as well.
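
The essential behavior such a tool provides can be sketched in a few lines. This is a toy model, not any real tool’s API: versioned forward and rollback steps, applied in order and tracked so that each migration runs exactly once per environment (real tools record the applied version in the database itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Toy sketch of what a database migration tool does: versioned up/down steps,
// applied in order, with the current version tracked so reruns are no-ops.
public class MigrationSketch {

    public interface Migration {
        void up(List<String> schema);    // forward change
        void down(List<String> schema);  // reverse change, for rollbacks
    }

    private final TreeMap<Integer, Migration> migrations = new TreeMap<>();
    private int currentVersion = 0;      // real tools persist this in the database

    public void register(int version, Migration migration) {
        migrations.put(version, migration);
    }

    public void migrate(List<String> schema) {
        // Apply every pending migration, lowest version first.
        for (var entry : migrations.tailMap(currentVersion, false).entrySet()) {
            entry.getValue().up(schema);
            currentVersion = entry.getKey();
        }
    }

    public void rollback(List<String> schema) {
        // Undo the most recently applied migration.
        Migration last = migrations.get(currentVersion);
        if (last != null) {
            last.down(schema);
            Integer previous = migrations.lowerKey(currentVersion);
            currentVersion = previous == null ? 0 : previous;
        }
    }

    public static void main(String[] args) {
        List<String> schema = new ArrayList<>();
        MigrationSketch tool = new MigrationSketch();
        tool.register(1, new Migration() {
            public void up(List<String> s) { s.add("person"); }
            public void down(List<String> s) { s.remove("person"); }
        });
        tool.register(2, new Migration() {
            public void up(List<String> s) { s.add("person.age"); }
            public void down(List<String> s) { s.remove("person.age"); }
        });
        tool.migrate(schema);            // applies both migrations in order
        tool.rollback(schema);           // backs out version 2 only
        System.out.println(schema);      // prints "[person]"
    }
}
```

Because both directions are scripted, a failed deployment can back the schema out automatically instead of requiring hand-written emergency SQL.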

To fully automate deployments, infrastructure configuration also needs to be taken care of. Use a configuration management tool for this (such as Puppet, Chef or CFEngine). With such a tool, your servers can be provisioned and maintained automatically through configuration “recipes”. Since these recipes are stored as text, they can be versioned, become an integral part of your code base, and evolve alongside your software.

Use a deployment pipeline

Stage your builds through at least a test environment where you can proof deployments. When a deployment succeeds, it can be promoted to the next step in the deployment pipeline. Make unit and functional test phases integral parts of the pipeline so that the whole pipeline stops when tests fail.

A deployment pipeline

Make sure your application is packaged only once for a given version and that this same package is deployed unchanged across the different environments. Store these packages in a central repository from which the deployment scripts can pull them at deployment time. This requires a clear separation between environment-specific configuration and code; use your configuration management system to handle the environment-specific parts.
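
One common way to achieve that separation is to resolve environment specifics at startup rather than baking them into the package. A minimal sketch (the variable name APP_DATABASE_URL and the fallback value are assumptions for illustration):

```java
import java.util.Map;

// Sketch: the same artifact runs in every environment because values such as
// the database URL are read from the environment at startup, not packaged in.
public class EnvironmentConfigSketch {

    private final Map<String, String> env;

    public EnvironmentConfigSketch(Map<String, String> env) {
        this.env = env;
    }

    public String databaseUrl() {
        // A configuration management tool would set this variable per host.
        return env.getOrDefault("APP_DATABASE_URL", "jdbc:h2:mem:dev");
    }

    public static void main(String[] args) {
        System.out.println(new EnvironmentConfigSketch(System.getenv()).databaseUrl());
    }
}
```

Injecting the environment map also keeps the lookup trivially testable without touching real environment variables.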

Monitor

When deployments become less of a pain and more of a non-event, you will quickly start thinking about deploying your code more often. Automated “health check” and smoke test suites will rapidly become mandatory to make sure everything happened as planned in each environment. If you use a deployment pipeline, run these tests after each deployment to a given environment and do not let the pipeline continue if one of them fails.
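
A basic post-deployment health check can be as simple as hitting an HTTP endpoint and refusing to continue unless it answers 200 OK. The sketch below assumes a /health endpoint (an illustrative name, not prescribed here) and uses a throwaway local server to stand in for the freshly deployed application:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a smoke test run after each deployment: the pipeline proceeds
// only if the application's health endpoint answers 200 OK.
public class HealthCheckSketch {

    public static boolean healthy(String url) throws IOException, InterruptedException {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the freshly deployed application.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) { out.write(body); }
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/health";
        System.out.println(healthy(url) ? "deploy can proceed" : "stop the pipeline");
        server.stop(0);
    }
}
```

In a real pipeline the check would run against the target environment right after deployment, with a non-200 answer failing the build.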

Form “delivery” teams

Reaching such a high level of build and deployment automation requires extremely close collaboration between infrastructure and development teams. Make infrastructure part of the development team instead of handing off obscure requirements to them late in the project. If possible (and this is highly desirable!), dedicate an infrastructure team member to your project. Not only will they have insight into both your software and the infrastructure constraints, they will also be able to work with the rest of the organization to help remove impediments to improving the delivery pipeline.

Believe!

Although these practices require a substantial amount of effort and collaboration, the benefits teams reap from adopting them quickly far outweigh the costs. Moreover, every single tip mentioned above can be implemented using solid, readily accessible open-source tools on most prevalent platforms.

Some also require an organizational mindset shift that transcends the delivery team’s boundaries. Corporate security policies, limited or restricted access, lack of trust between teams, teams jealously guarding their resources, and communication barriers are all possible hurdles to improving your team’s delivery efficiency. Address them one at a time, and keep believing!

It’s never really done

Do not necessarily try to get the whole thing in place at first. List the improvement items your team needs to address, prioritize by value, and go step by step. This is a never-ending process: there is always something to improve in your delivery pipeline. Regularly reflect on what more can be done to make your deliveries easier and more frequent.

Start this process as early as possible in your project so that you get the most out of the additional value these practices provide. Starting early has the nice side effect of making teams think about automation every time they make a decision about their general software architecture: How will this impact our delivery pipeline? Can we automate this and that? If not, what would allow us to?

And hey, why are you still reading this? Go Deliver Something!