Is there a Docker in the house?

Tl;dr: You can create artifacts that work as expected and cast them in iron. Shit still happens, just less shit.

I am not a nuts-and-bolts sysadmin type, nor am I particularly beholden to any one technology. In fact, I’m a pragmatist. As I said in a previous post, I have been a Java type for a while now. The reason for this was simple: at my previous employer, we outsourced everything, so we needed to specify a platform our framework-agreement partner could provide. As it happened, that partner is a Java shop and they provide top-notch support for this kind of thing (actually, it turns out they provide good support for much more than this, but it was a good choice at the time). (Note to self: we should have a long chat about development/test/production environments some time.)

Anyway, living in the closeted, comfy world of Java EE, you rarely have to consider much more than creating tested, functioning builds for your platform. Moving to Oslo Public Library, I was forced to confront a reality I haven’t enjoyed for several years: providing services locally. In this respect, I don’t really have any strong opinions beyond “whatever we do should be portable” because we don’t know where our production systems will be hosted.

As Oslo Public Library wants 100% test coverage, we aim to deploy only tested artifacts; providing test coverage for a complete software package like the Koha integrated library system is basically impossible, as we’re not just talking about unit tests for the software components, but also about the configured system as installed.

The route we’ve chosen is to use Docker to encapsulate the packages we’re using and to test the resulting artifacts; in this way, we can ensure that every artifact is tested and can be demonstrated to function as expected before it is deployed in production.
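
To make that concrete, here is a rough sketch of the build-and-test step; the image name, port and test script are placeholders rather than our actual setup:

    # Build a candidate image, run it, and only promote it if the tests pass
    docker build -t example/app:candidate .
    docker run -d --name app-under-test -p 8080:8080 example/app:candidate
    if ./run-integration-tests.sh http://localhost:8080; then
        docker tag example/app:candidate example/app:tested
    fi
    docker rm -f app-under-test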

Our continuous deployment setup builds and tests every commit we make to our SCM system; the Docker images are built under the same regime and pushed to the Docker repository, from which they are pulled at build time by the projects that depend on them.
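
On the command line, the flow looks roughly like this (the registry address and image name are placeholders):

    # CI: build and push an image tagged with the commit it was built from
    TAG=$(git rev-parse --short HEAD)
    docker build -t registry.example.org/koha:$TAG .
    docker push registry.example.org/koha:$TAG

    # A project that depends on the image pins that tag and pulls it at build time
    docker pull registry.example.org/koha:$TAG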

An example here is that we use Koha as a relatively minor component (to do the “library crap”, as certain Swedish engineers put it) in the LS.ext system; our Koha-Salt-Docker project produces independently built and tested Docker images that represent different stages in our configuration of Koha, and these are in turn used in LS.ext. Each Docker build represents an incremental improvement of our configuration (including upgrades of Koha). In this way, we can roll backwards and forwards to aid debugging and development.
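
In practice, this can be as simple as the downstream Dockerfile pinning a tested tag; the repository name and tag scheme below are illustrative, not our actual ones:

    # Downstream Dockerfile (sketch): pin one tested stage of the Koha image
    FROM example/koha:build-87
    # Layer local configuration on top of the tested base
    COPY local-config/ /etc/koha/
    # Rolling backwards or forwards is a one-line change to the FROM tag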

Using Docker means we can run different processes on different base OSes. This is great in situations where you’re using software that is developed against particular OS releases and versions. In this way, we can use 32-bit Debian Wheezy to run one process and 64-bit Ubuntu Precise Pangolin to run another.
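
The mechanism is nothing more exotic than giving each process its own Dockerfile with its own base image; the stock tags are shown here (a 32-bit build would use a suitable i386 base image) and the package names are placeholders:

    # process-a/Dockerfile: built on Debian Wheezy
    FROM debian:wheezy
    RUN apt-get update && apt-get install -y some-wheezy-era-package

    # process-b/Dockerfile: built on Ubuntu Precise
    FROM ubuntu:precise
    RUN apt-get update && apt-get install -y some-precise-era-package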

It’s worth mentioning that, while Docker is typically used to encapsulate a single process (providing a clean platform that is very easy to debug), this isn’t always practical. Some software packages are so entwined with their database that they’re difficult to separate (looking at you, Koha). It’s entirely possible to run multiple processes in a Docker image; to do this, we’ve used Salt as an orchestration tool, though I’m sure there are other ways of achieving the same thing (Phusion baseimage, for example).
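
For illustration, the multi-process pattern can look something like the sketch below using supervisord; this is a generic example rather than our Salt-based setup, and the package list is only meant to suggest a Koha-style Apache-plus-MySQL stack:

    # Dockerfile sketch: several services in one image, kept alive by supervisord
    FROM ubuntu:precise
    RUN apt-get update && apt-get install -y supervisor mysql-server apache2
    COPY supervisord.conf /etc/supervisor/conf.d/services.conf
    # Run supervisord in the foreground so the container doesn't exit
    CMD ["/usr/bin/supervisord", "-n"]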

Docker is a newish technology and it has a few issues, not least of which is logging; we’ve largely solved this by gathering the log output and analysing it in a dockerised ELK stack.
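
One way to wire this up (a sketch only; the image names are placeholders) is to expose the application’s log directory as a volume and let a Logstash container pick it up:

    # The application container exposes its log directory as a volume
    docker run -d --name koha -v /var/log/koha example/koha:tested

    # A Logstash container mounts those logs and ships them into the ELK stack;
    # example/logstash stands in for whatever Logstash image and config you use
    docker run -d --name logstash --volumes-from koha example/logstash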

There is also a bit of mythology, such as the “correct” way to build an image; there’s a mistaken belief that a given Dockerfile will build the same artifact every time. This is patently untrue: every time you build an image, you pull fresh dependencies, and even if you specify particular versions, the packages you use will pull in the latest compatible versions of their own dependencies, so your build will never be entirely reproducible from the Dockerfile alone. It is also largely irrelevant, because what you actually use in testing and production is the image, and the image is entirely stable.
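
A small illustration of both halves of that point; the package name, version and tags are placeholders:

    # Pinning a version in the Dockerfile does not pin its transitive dependencies:
    RUN apt-get update && apt-get install -y some-package=1.2.3-1
    # Whatever some-package depends on resolves to the current compatible version,
    # so two builds from the same Dockerfile need not be byte-identical.

    # What moves between test and production is the built image, and that is stable:
    docker push example/app:build-87
    docker pull example/app:build-87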

A final caveat: deployment is an area with a lot of movement at the moment, and there are several tools we’ve looked at, including Salt and Ansible, but I put my faith in dedicated Docker tools like fig and in the platform-oriented tools, like Kubernetes, that will likely be what most of us end up using in the long run.
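
As a flavour of the fig approach, here is a minimal fig.yml sketch; the service names, images and ports are illustrative:

    # fig.yml (sketch)
    koha:
      image: example/koha:build-87
      ports:
        - "8080:8080"
      links:
        - db
    db:
      image: mysql:5.5
      environment:
        MYSQL_ROOT_PASSWORD: secret

Bringing the whole stack up locally is then a matter of running fig up -d.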

In sum, Docker provides a less painful approach to creating deployable software artifacts; while there are a few issues that will get ironed out over time, this way of working is here to stay (even if the final technology isn’t Docker).
