Written on 28 May 2015

Starting our microservices journey

In the first blog post describing our microservices journey, Victor went over our reasons for moving to a microservices architecture. In this article I’ll describe how we started development on our first microservice and the upfront technology decisions we made.

What to build first

The first thing we had to do was decide what to build as our first microservice. We went looking for one that could be used read-only, that consumers could adopt without overhauling production software, and that was isolated from other processes.

We ended up building a catalog service as our first microservice. The catalog service provides its consumers with information about our catalog and the most essential details about the items in it.

By starting with the catalog service, the team could focus on building the microservice without any time pressure: its initial functionality was created to replace existing functionality that was already working fine.

Because we chose such an isolated piece of functionality, we were able to introduce the new catalog service into production step by step. Instead of replacing the search functionality of the webshops in a big-bang fashion, we used A/B split testing to measure our changes and gradually increase the load on the microservice.

Choosing a datastore

The search engine that was in production when we started this project used Solr. Thanks to Lucene it performed very well as a search engine, but from an engineering perspective it lacked some functionality. It fell short when you wanted to run it in a clustered environment, configuring it was hard and not user friendly, and, last but not least, development of Solr seemed to have ground to a halt.

Elasticsearch entered the scene as a competitor to Solr and brought interesting features. Still built on Lucene, which we were happy with, it was designed with clustering in mind and provided it out of the box. Managing Elasticsearch was easy, since there are REST APIs for configuration and, as a fallback, YAML configuration files.

We decided to use Elasticsearch, since it gives us the strengths and capabilities of Lucene with the added joy of easy configuration, clustering and a lively community driving the project.

Which programming language to use

The team responsible for developing this first microservice consisted of a group of web developers. So when looking for a programming language for the microservice, we searched for one close to their hearts and expertise. At that time a typical web developer at Coolblue knew at least PHP and JavaScript.

What we noticed while researching various languages is that almost all actions performed by the catalog service boil down to the following pattern:

  • Execute an HTTP call to fetch some JSON
  • Transform JSON to a desired output
  • Respond with the transformed JSON

These actions can easily be done in a parallel, asynchronous manner and mainly consist of transforming JSON from the source into a desired output. The programming language used for the catalog service should be strong at exactly these kinds of actions.

Another thing to note is that some functionality built on the catalog service will result in a high volume of concurrent requests. For example, the type-ahead functionality triggers several requests to the catalog service per user interaction.
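To make that multiplication concrete: a naive type-ahead fires one request per keystroke, so a single typed query fans out into many calls. A small sketch (the `/suggest` endpoint is a hypothetical name):

```javascript
// Illustrates why type-ahead multiplies request volume: a naive client
// fires one request per keystroke. The /suggest endpoint is hypothetical.
function typeaheadRequests(query) {
  const urls = [];
  for (let i = 1; i <= query.length; i++) {
    urls.push('/suggest?q=' + encodeURIComponent(query.slice(0, i)));
  }
  return urls;
}
```

A ten-character query already means ten requests per user, so the service behind it has to handle concurrency cheaply.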

To us, PHP and .NET at that time weren’t a good fit for building the catalog service based on the requirements we had set. We eventually decided to use Node.js, which is better suited to the workload described earlier: its non-blocking I/O model and event-driven nature help us develop a high-performance microservice.

The leap to start programming in Node.js is relatively small, since it is basically JavaScript, a language familiar to the developers at Coolblue at that time. While Node.js introduces some new concepts, it is relatively easy for a developer to pick up.

Microservice A <> Microservice B

The beauty of microservices and the isolation they provide is that you can choose the best tool for each particular microservice. Not all microservices at Coolblue will be developed using Node.js and Elasticsearch; all kinds of combinations might arise, and this is what makes the microservices architecture so flexible.

Even if Node.js or Elasticsearch turns out to be a bad choice for the catalog service, it is relatively easy to swap it for magic ‘X’ or component ‘Z’. By focusing on creating a solid API, the components driving that API don’t matter that much: each should do what you ask of it, and when it falls short you simply replace it.

Challenge ahead

With these fundamental decisions in place, we faced a pretty big challenge. Not only did we have to build the first microservice within Coolblue, but we also had to gain knowledge of a new programming language and a new datastore. As if that weren’t enough, everything needed to be deployed onto a whole new environment, one we wanted managed by a configuration management system.

In our upcoming articles we will elaborate on all these challenges, describing our continuous deployment, API decisions and much more. Sign up for the newsletter to stay up to date.
