My team at Healthlink have just finished a proof of concept around Continuous Delivery. Here’s the writeup, sanitised for the internet.
We’re a Health IT company based out of New Zealand, and we run a set of products that provide messaging integrations between healthcare organisations across New Zealand and Australia. Our applications are a mixture of desktop and web-service-based systems, and we’re currently undergoing a massive change process: from Waterfall to Agile, and from a bespoke services company to a truly product-focused company.
Our time to market is poor; we want to shorten our release cycle and provide an awesome service to our customers (above and beyond the services that basically enabled us to become a monopoly in New Zealand).
To enable rapid iteration, we needed to shorten our feedback loops, and the best way of doing that is by implementing continuous delivery.
The Principles of Continuous Delivery
There was quite a bit of work to do to enable our solution: continuous delivery has a bunch of prerequisites, and we wanted to abide by a set of core principles that would guide the technology and process decisions we made.
The principles that guided us are:
Processes MUST be repeatable and reliable.
- The process by which we deploy our software has been automated, but full automation of a release can often make the business uncomfortable. To help the business get used to this, we’ve introduced some decision points in our release flow that require manual intervention.
EVERYTHING has to be automated.
- The manual decision points we’ve introduced don’t violate this principle: no step is actually performed manually, we’re just giving the business the opportunity to control the process in more discrete steps. All of our build, test and deployment steps are automated, and that alone has reduced our time to deploy to any arbitrary environment to about a minute, with full unit, integration and system tests conducted every time we build and deploy.
You must be able to guarantee quality.
- The bedrock of any automated process is quality. We guarantee the quality of our code by monitoring our code metrics in Sonarqube: things like test coverage, technical debt and best-practice conformance. Sonarqube breaks builds that introduce issues or lack test coverage on new code. Further, by tracking unit test coverage we also ensure that those tests are actually run and pass; if you don’t use them, there’s little point spending time and money creating and maintaining them.
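For reference, the per-project analysis configuration this relies on is small. A minimal sonar-project.properties might look like the following (the project key and paths are hypothetical; the quality gate itself is configured in the Sonarqube server, not in this file):

```
# sonar-project.properties (hypothetical project) - standard Sonarqube analysis keys
sonar.projectKey=healthlink:messaging-service
sonar.projectName=Messaging Service
sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.sourceEncoding=UTF-8
```

The build server runs the analysis against this file on every build, and the server-side quality gate decides whether the build passes.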
If something is difficult or painful, it must be done more often. You can’t build muscle without exercise.
- In forcing ourselves to do painful things often, we worked out the issues in those processes and were able to mitigate or resolve them entirely, leading to a much more robust process.
Everything must be expressed as code and versioned.
- Obviously our application code is all safely in git, versioned and lovely, but it’s important to version and code our build steps and infrastructure as well. We achieved this using Puppet, with Hieradata in a git repository managed with Puppet’s Code Manager, and Docker lets us store our image creation steps in a Dockerfile, which lives side by side with the application code itself.
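As a sketch of what this looks like in practice, a Hieradata file for an environment is just versioned YAML that Puppet picks up via parameter lookup. The class and parameter names below are hypothetical, but the shape is representative:

```yaml
# hieradata/dev.yaml (hypothetical) - environment data versioned in git,
# consumed by Puppet roles/profiles via automatic parameter lookup
profile::app::image_tag: '1.4.2'
profile::app::replicas: 2
profile::app::db_host: 'dev-db.internal.example'
```

Changing an environment means a commit and a review, exactly like changing application code.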
Things are only considered DONE when they are released to Production.
- Setting up triggers such that successful deploys off a dev branch trigger builds and deploys on our master branch enabled us to smoothly flow code through our environments in an automated way. Bamboo has great deployment functionality that we’ve made good use of.
Measure quality throughout the process.
- Tying all of our builds back to Sonarqube enabled us to keep track of quality. We also track metrics around the amount of time it takes to complete the process, ratios of successful builds and deploys to failures, and a bunch of other stuff.
The Continuous Deployment process needs to be continuously improved.
- We’re still working on the process. It’s important to not forget that there is always another issue we could be improving.
The Tools we chose
- Docker enables us to have superfast builds by building our applications in layers. Because those layers are built separately, rebuilding the application only rebuilds the application layer, rather than all of our dependencies and other bits and bobs as well.
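As a minimal sketch of the layering idea (the registry and image names here are hypothetical, not our real ones): a base image holds the slow-moving runtime and dependencies, and the application image adds only the freshly built artifact on top, so a routine build touches just the final layers.

```dockerfile
# Base image (hypothetical name) is rebuilt rarely; it carries the
# runtime and shared dependencies.
FROM registry.ci.example/base/java-runtime:8

# Application layers, rebuilt on every commit. Only these change between
# builds, so Docker's layer cache keeps the routine build fast.
COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

The base image lives in the CI registry and only gets rebuilt when dependencies actually change.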
- We use Rancher for container orchestration and management; it keeps a high-level picture of what our applications look like somewhere easy to reach, and gives us load balancing, health checks and the rest of the bread and butter of container orchestration. Further, it has a great repository of applications: load balancers, CTF stacks, container discovery tools, even a containerised version of the famous ELK stack. You can also create your own catalog, so once we’ve got all of our tools and services into containers, we can spin up production-ready software in no time flat, with all of the quality guarantees built into our build and deployment pipelines. This is the part of our solution that I’m most excited about, and I’d recommend Rancher to anyone interested in container orchestration and management.
- We use Sonatype Nexus 3 for a bunch of stuff. We run two instances of Nexus: one for integration-ready blobs and one for deployment-ready blobs. That means all of our base Docker images, sub-images, and layers we don’t want to send to prod live in our CI Nexus, and the only images in our CD instance are images that are ready for deployment at a moment’s notice. We also use Nexus to mirror Maven Central and the npm registry, reducing our build times by removing as much network variance as possible. Finally, we keep build dependencies in our CI Nexus: blobs like Servicemix archives, SoapUI archives and others.
- We use Bamboo to wire up our builds, tests and deploys. One thing that irks me about Bamboo is its lack of ability to verify a deployment and conduct system and smoke tests. That said, it has an awesome build system and a strong deploy system that lets us tie any image running in production all the way back through the pipeline to the commit that triggered the build. That gives us far more visibility into what code is where than our old process did.
- Git and Bitbucket are the solid but silent performers. We use a feature-branching workflow with pull requests: the only change allowed to master is a pull request from dev, and code only gets into dev via pull requests. Each pull request must have a minimum of two reviewers, no outstanding issues or tasks, and a passing Bamboo build.
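The branch flow above can be sketched locally with plain git. This is a throwaway demo repo; the merges below just stand in for approved pull requests, since the review gates themselves are enforced by Bitbucket, not by git:

```shell
#!/bin/sh
set -e
# Throwaway local repo standing in for the Bitbucket-hosted one.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version
git commit -q --allow-empty -m "initial"
git branch dev

# Feature work happens on a branch cut from dev.
git checkout -q -b feature/faster-builds dev
echo "change" > change.txt
git add change.txt
git commit -q -m "feature: faster builds"

# An approved pull request merges the feature into dev...
git checkout -q dev
git merge -q --no-ff -m "PR: feature/faster-builds -> dev" feature/faster-builds

# ...and a later pull request promotes dev into the main branch.
git checkout -q "$main"
git merge -q --no-ff -m "PR: dev -> $main" dev
```

Because every merge is a non-fast-forward pull request, the history records exactly which reviewed change-set landed in each branch, which is what lets the deployment tooling trace an image back to a commit.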
- Sonarqube enables us to track issues with our code, and break builds if quality gates aren’t met. It’s also a great tool to communicate the health of our code base to the rest of the business.
Here’s the system we built, end to end. It’s a bit busy, but it’s quite compact, and looking at it I can’t see anything I would take away to make it simpler, which I think is a good thing. It’s as simple as it can be while still being powerful enough to handle any deployment requirement we might have.
What value has the business derived from this project?
Applications that use our CD Pipeline gain the following benefits:
- The application can be built and deployed to any arbitrary environment in less than two minutes in most cases, and we’ve yet to see a build and deployment take longer than four.
- Batch sizes are smaller, so deploys are safer
- Deploys can be rolled back in a single click, in the unlikely event of a failure
The business benefits derived from this then are:
- Vastly cheaper deployments: we’re saving about $800 on every deployment, on every project that uses this flow.
- Our time to production, where customers use our software, is under half an hour. The ability to get feedback and iterate quickly gives us an unfair advantage over the competition. I like unfair advantages.
- Software quality is way up, which means fewer issues in production, less time spent supporting that software, and less time spent maintaining it.
I feel like that’s some pretty compelling stuff.