Continuous Delivery is what follows Continuous Integration. In many cases it's a natural step towards "faster delivery", taken when an organisation perceives itself as lagging behind in delivering features and releases.
Continuous Delivery puts into practice the idea that once integration and testing are done, the code is deployed automatically, usually to a subset of the infrastructure.
From a developer standpoint, Continuous Delivery adds new pressures. It's no longer possible to postpone testing, unit tests and monitoring to the late part of the development cycle, because once you've committed your code, it's going to go live somewhere. Organisationally, this means you have to commit early on to doing the right thing.
Do the right thing
These are the best practices that we've always talked about, and sometimes strive to do, including:
- Static Analysis
- Unit tests
- Integration tests
- Code reviews
These go from being best effort to a lifeline. There is no longer a grace period between "I'll get to it tomorrow" and "shit's broken, yo!", because the automation takes over.
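As a sketch, the gate these practices form might look like the following. The stage names mirror the list above; the check functions and host names are illustrative stand-ins, not a real CI system:

```python
# A delivery gate: every stage must pass before anything deploys,
# and a passing build deploys only to a canary subset of hosts.
from typing import Callable, List, Tuple

def run_pipeline(
    stages: List[Tuple[str, Callable[[], bool]]],
    deploy: Callable[[list], None],
    canary_hosts: list,
) -> bool:
    """Run each gate in order; deploy to the canary subset only if all pass."""
    for name, check in stages:
        if not check():
            print(f"gate failed: {name}; nothing deploys")
            return False
    deploy(canary_hosts)
    return True

# Illustrative stand-ins for the real tools.
stages = [
    ("static-analysis", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: True),
    ("code-review", lambda: True),
]

deployed: list = []
ok = run_pipeline(stages, deployed.extend, ["canary-1", "canary-2"])
```

The point of the sketch is the shape, not the tools: a single failed gate means nothing ships, and a green build still only reaches a subset of production first.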
For an organisation that wasn't previously doing the right thing, Continuous Delivery will likely mean a steep, unforgiving learning curve.
After a few years I've come to see Continuous Delivery all over the place, both in large organisations at the scale of Facebook and in small companies. At Facebook's scale, it's done because it's the only way that scales; otherwise the tempo of development simply causes too many collisions. At small places, it's the only way to get things done when you move between projects and keep things running.
I think it's partially a revolt. A revolt against pushing features before their time, and against development methods that keep teams under constant pressure with no end in sight. It's a push-back against "test it later" and "we promised the feature by this release, the tests can wait until after".
By migrating towards Continuous Delivery, developers and operations make sure, by holding production hostage, that they get enough time to do the right thing first.
Operations teams are in on it, because the time spent adding monitoring and getting metrics from things that break buys them resilience and overview. The architectural changes that follow, from microservices to stable APIs and load-balanced services, all come into play to make sure designs are reliable.
At Modio we do Continuous Delivery on both the server side and the client side. A portion of devices in the field run our master branch, and when they stop working, we notice through monitoring. At regular intervals we stabilise the master branch, summarise a change log and call it a release.
No code ever goes into a release that hasn't spent some time in production to prove that it works. The largest challenge we face when things break during development is adding monitoring that catches the breakage before we fix the problem, so that the same monitoring can later show it's fixed.
Just as every fix should come with a unit test to prove that it's fixed, every change that broke the deployment needs new monitoring to show that it's no longer broken.
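A minimal sketch of that "monitor first, fix second" rule: the check must go red on the broken state before the fix lands, so the very same check proves the fix afterwards. The heartbeat check and its threshold here are hypothetical, not Modio's actual stack:

```python
# Alert condition written before the fix: a device is unhealthy if it
# has been silent for longer than max_silence seconds. The check is
# added while things are still broken, so it starts out red.

def heartbeat_ok(last_seen: float, now: float, max_silence: float = 300.0) -> bool:
    """Return True if the device has reported recently enough."""
    return (now - last_seen) <= max_silence

# Before the fix: the device stopped reporting, the check goes red.
assert heartbeat_ok(last_seen=0.0, now=900.0) is False
# After the fix: reports resume, the same check goes green.
assert heartbeat_ok(last_seen=880.0, now=900.0) is True
```

The design point is that the green state is meaningful only because the check was observed red first; a check that has never fired proves nothing.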
By: D.S. Ljungmark