20 July 2015

Continuous deployment of our software

Just developing software is not enough. You need to get it into production, and you want to be sure that you deploy quality software, not broken software. In this article, part of our microservices journey, I'll describe how we have set up our deployment pipeline so that developers can do their own deployments while complying with our standards.

Language agnostic

The deployment pipeline as we have created it is language agnostic, which means that the process to assure quality and deploy software is the same for a PHP application as for a .NET application. The tools used for quality assurance and packaging might differ, but the process is the same. This keeps the pipeline simple and reduces the cognitive load on developers: they only have to understand one process for all microservices, even though those might be developed in different languages.

Designing the deployment process

When we started designing our deployment process, we were looking for a solution that could support us in deploying high quality software. At that point we were already using GitHub and its pull request system for version control. What we wanted was a system that could verify the quality of a pull request before merging it into the main repository.

If a pull request is validated and has passed all checks, it should be allowed to be merged into the main repository. All steps after the merge should be executed automatically and eventually result in a deployment to production. The diagram below shows a bird's-eye overview.

[Diagram: bird's-eye overview of the deployment pipeline]

Automating the process

Each build step consists of a set of actions which we define in the project itself. These actions are defined in Ant build scripts and can be anything from creating a folder to calling a command line tool which does the actual inspection. These build scripts are not triggered manually: we use a build server to guide the whole build process. A build server is able to react to triggers and start the appropriate action.
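As a sketch, such a build script could look like the following; the target names, directories and tool paths are illustrative, not taken from one of our actual projects:

    <project name="example-service" default="ci">
        <!-- Illustrative: create a folder for build output -->
        <target name="prepare">
            <mkdir dir="build/logs"/>
        </target>

        <!-- Illustrative: call a command line tool, in this case PHPUnit -->
        <target name="phpunit" depends="prepare">
            <exec executable="vendor/bin/phpunit" failonerror="true">
                <arg value="--log-junit"/>
                <arg value="build/logs/junit.xml"/>
            </exec>
        </target>

        <!-- The build server triggers this target during continuous integration -->
        <target name="ci" depends="phpunit"/>
    </project>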

For our build process we’ve decided to use TeamCity as build server. There are many (open-source) alternatives available, but TeamCity fitted our needs perfectly.

Continuous integration

This step can be triggered for several reasons: the creation or update of a pull request by a developer, or the merge of a pull request into the main repository. Either way, the purpose of the step is the same: validating whether the quality of the entire repository matches our standards.

When executing this step, the build server will run a variety of quality tests, which can be split into two groups: automated tests and static code analyzers.

The automated tests group consists of unit tests and functional tests; think of tools such as NUnit, PHPUnit or Behat. These tests validate that the tested functionality still matches the expectations laid down in the tests. They are mainly used for regression testing, which boils down to the question "does my change do what I expect, and does it not break any other functionality?"
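For illustration, a regression test in PHPUnit could look like the sketch below; the Cart and Product classes are hypothetical and only serve to show the idea:

    <?php
    // Hypothetical regression test: Cart and Product are illustrative
    // classes, not taken from our actual code base.
    class CartTest extends PHPUnit_Framework_TestCase
    {
        public function testAddingAProductIncreasesTheItemCount()
        {
            $cart = new Cart();
            $cart->add(new Product('coffee-machine', 4999));

            // If a change elsewhere breaks the cart, this expectation fails the build
            $this->assertSame(1, $cart->itemCount());
        }
    }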

Static code analyzers are tools that generate code metrics and check for coding standard violations. In addition, for scripting languages, linting tools like PHPLint will check whether the code can be executed at all. These kinds of tools do not target functionality, but focus on the quality of the code itself: is your indentation consistent, do you consistently use the same naming conventions, and do you have a lot of duplicate code in your repository?
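In an Ant-based setup, such checks could be wired up as extra build targets. As a sketch, here is a syntax check using PHP's built-in linter and a coding standard check using PHP_CodeSniffer; the tool choice and coding standard are illustrative:

    <!-- Illustrative: fail the build when a file cannot be parsed at all -->
    <target name="lint">
        <apply executable="php" failonerror="true">
            <arg value="-l"/>
            <fileset dir="src" includes="**/*.php"/>
        </apply>
    </target>

    <!-- Illustrative: check the code against a coding standard -->
    <target name="phpcs">
        <exec executable="vendor/bin/phpcs" failonerror="true">
            <arg value="--standard=PSR2"/>
            <arg value="src"/>
        </exec>
    </target>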

Packaging

If the continuous integration step succeeds, we move on to the next step: packaging. The purpose of this step is simply to create a single deliverable which can install itself onto servers.

Within Coolblue we distinguish two types of packages. For Linux systems we use RPM packages and the RedHat package manager. For Windows systems, applications are packaged into NuGet packages and deployed via Octopus Deploy.
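As a rough sketch, an RPM spec file for a service could look like the snippet below; the package name, version and install path are illustrative, and a real spec file would contain more sections:

    # Hypothetical spec file; name, version and paths are illustrative
    Name:           example-service
    Version:        1.0.0
    Release:        1%{?dist}
    Summary:        Example microservice
    License:        Proprietary

    %description
    Example microservice, packaged by the build server.

    %install
    mkdir -p %{buildroot}/opt/example-service
    cp -r %{_builddir}/* %{buildroot}/opt/example-service

    %files
    /opt/example-service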

We have deliberately chosen to create packages that are close to the operating system, which makes life a bit easier. For example, by choosing the RedHat package manager we chose a well-known process, known to developers and system administrators alike, since they all have more than basic knowledge of the yum command. It is also a known process for the distribution of packages: as a RedHat user you know how repositories work and how they combine with yum.
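Installing or updating a service from our internal repository then looks like any other package operation (the package name is illustrative):

    sudo yum install example-service
    sudo yum update example-service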

There are a lot of useful tools in the open source community, like Capistrano, that could help with the distribution of artifacts to a production environment. All of those tools have their own pros and cons, but for us it felt like they would add a lot of unnecessary complexity to our pipeline and process.

Packaging and delivering .NET services is something different. There is no problem packaging .NET applications into a Windows-native package format, but hooking into a generic package manager on Windows is not possible. So we tackled this problem a bit differently.

.NET applications are packaged into the NuGet package format; NuGet is an open source package manager for Windows environments. The built packages are then distributed in the later steps. The generation and deployment of NuGet packages is worth an article of its own, which we will publish in the future.
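To give an idea of the format, a minimal NuGet manifest (.nuspec) could look like this; the id, version and file paths are illustrative:

    <?xml version="1.0"?>
    <package>
      <!-- Hypothetical manifest; id, version and paths are illustrative -->
      <metadata>
        <id>Example.Service</id>
        <version>1.0.0</version>
        <authors>Coolblue</authors>
        <description>Example microservice, deployed via Octopus Deploy.</description>
      </metadata>
      <files>
        <file src="bin\Release\**" target="content" />
      </files>
    </package>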

Continuous delivery

We have created a package containing our inspected code repository; now it is time to actually get it onto a server so it can be used. This third step of the build pipeline publishes to development and acceptance servers.

Before I explain what exactly we do in this step, it is wise to get a clear picture of how continuous delivery differs from continuous deployment. The difference between the two is small, but significant.

  • Continuous delivery covers all automated deployments of packages to any environment except production.
  • Continuous deployment describes the automated deployment of a package to production.

So during this step our goal is to get the generated package onto different servers, so the deliverable can be checked and is available to other team members. The deliveries done in this step might differ per project. For example, some projects might have an acceptance environment we need to publish to, while others might not.

Continuous deployment

With the continuous deployment step we reach the end of our deployment pipeline. In this step we actually deploy the application onto a production server. For Linux environments that means adding the package to our internal repository server; for Windows environments the package is pushed to and installed on the production nodes.

For us this step currently isn't triggered automatically: a developer needs to manually trigger this build step to get their changes onto the production servers. This isn't something we want, but it is necessary due to the lack of automated post-deployment tests.

In post-deployment tests we want to check whether the deployment was successful, something we currently do manually: for example, checking whether the homepage returns an HTTP status code of 200, whether we are still able to add a product to the shopping cart, and whether our checkout process is still available. These are the most valuable tests we have after a deploy, since they are critical to our business.
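A first automated version of the homepage check could be as simple as the sketch below (the URL is illustrative); the build server would run it right after a deployment and treat a non-zero exit code as a failed deploy:

    <?php
    // Hypothetical post-deployment smoke test: does the homepage return HTTP 200?
    $ch = curl_init('https://www.example.com/');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true);   // a HEAD request is enough for a status check
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status !== 200) {
        fwrite(STDERR, "Homepage check failed: HTTP $status\n");
        exit(1); // non-zero exit tells the build server the deployment is broken
    }
    echo "Homepage check passed\n";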

Once we have automated these post-deployment tests and are able to do an instant revert of a deployment, we will automate the trigger and have true continuous deployment.

This article describes our continuous deployment process at a high level. It is a process that works fine for now, but we are still optimizing it. Objectives on our radar include automated post-deployment tests and canary deployments.
