Building and Development

Early this summer, a few things coincided that sent me and some of my coworkers on an epic quest to reorganize how we build things.

The first thing was simple: GitHub, the company that hosted our code, our issue trackers, and our documentation, and which was the central part of our workflow, decided to radically change their pricing strategy.

GitHub went from charging a reasonable amount per month for our hosting to something that felt painful.

The second event was slightly harder: a migration, fuelled by a business need, from a "one VM per service" model to containerised deployment.

This combination of changes caused us to migrate our code hosting over to GitLab. The migration was brief, and basically involved turning our repositories read-only and then migrating them. A smooth experience, where the most work was updating all the old URLs in our web browser history to point at the new domain.

However, the migration also made us leave the excellent HuBoard in favour of a new, self-hosted Kanban view of GitLab issues, which a few weeks later got replaced by the built-in GitLab Issue Board.

The other change was more intrusive and massive. Containerisation called for difficult changes in how we manage our VMs and services (Puppet + Ansible, if you're curious), and how we build & deploy software.

In this post, I'll go over how we build & develop our firmware images (the firmware image is the OS part of our software), and how automation ties in here.

Previously, we used a few small ARM machines with external hard drives attached, and built our firmware on top of this. This didn't quite scale, wasn't easily automated, and wasn't very good from a reliability or performance standpoint. But it worked.

With the new changes, we have moved the build environment inside a Docker container, which gives each build a clean slate. This "building" container is one we create from scratch to contain the build tooling we need.

In the "building" container, we runs the various build tooling in order to assemble the raw disk image.

But, how do you make sure the build system is up to date? For that, we need an automated way of building the Docker container that houses the build system.

And then we need a way to build this container, inside a container.

And then there's turtles all the way down.

So, you bootstrap a container builder from a clean slate, using debootstrap to build a Debian stage1 (bootstrap) container. In the stage1 container, you then re-build this container, and name it stage2.
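
A minimal sketch of that bootstrap step (the Debian suite and image names are assumptions for illustration, and debootstrap needs to run as root):

    # Build a minimal Debian root filesystem from a clean slate.
    debootstrap jessie ./rootfs http://deb.debian.org/debian

    # Import the filesystem tree as a Docker image: the stage1 container.
    tar -C ./rootfs -c . | docker import - builder:stage1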

Stage2 containers are then able to reproduce themselves, and you hook them into the GitLab Continuous Integration system so they will always remain up to date (a sketch of that CI hook follows the list below). The stage2 container is then used to build two kinds of containers:

  1. More stage2 containers (for updates, etc.)
  2. Firmware-building containers (stage3)
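
Hooking the rebuild into GitLab CI can be as small as a job definition along these lines (job name, image tag, and script are assumptions for illustration):

    # .gitlab-ci.yml (sketch): rebuild the stage2 builder on every push,
    # so the build environment never silently drifts out of date.
    rebuild-stage2:
      image: builder:stage2       # the current stage2 rebuilds its successor
      script:
        - ./build-builder.sh      # produces and tags a fresh builder:stage2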

Then we can spawn stage3 containers to build the actual firmware. This also means that container-building containers need to talk to the host Docker instance, in order to build containers. This Docker daemon will then spawn new containers (for each build step).
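
In practice, "talking to the host Docker instance" usually means bind-mounting the daemon's socket into the container; a minimal sketch (image and script names are again hypothetical):

    # Give a container-building container access to the host's Docker daemon
    # by bind-mounting the daemon's Unix socket into it.
    docker run --rm \
        --volume /var/run/docker.sock:/var/run/docker.sock \
        builder:stage2 \
        ./build-stage3.sh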

And if anyone says getting a reproducible build environment running with Docker is easy, they are lying. But it is doable.

The net result of all this intricate work is a firmware build system that's reproducible and scalable, where an automated pipeline, hooked into our continuous integration system, generates firmware images in a reliable way.

Firmware builds are now easier: less manual, more automated, and more reliable.

By: D.S. Ljungmark
