
the adaptive CD pipeline

Blog post by vanbe05, Jul 4, 2016

Implementing Continuous Delivery is a transformation project, no doubt about it. Once you get the team over the initial “I don’t want any change” hurdle, the journey picks up momentum and you score your first successes. You create that “MVP” for your pipeline, knowing that you have taken a few shortcuts left and right. But getting it to work end to end and showing those first results is more important than making it 100% complete. Right?


But then you need to complete the CD pipeline journey. And everyone has an opinion on what should be included in it. The security team wants you to include automated penetration tests, vulnerability tests, … The legal team wants code scans to prevent unregistered open source components from being used … The QA team wants to run a full set of positive and negative tests … And before you know it, your CD pipeline becomes a rigid, complex and resource-consuming beast. You start to doubt whether this CD pipeline is bringing you that “lean & mean” way of fast-tracking a release through the various stages.


A single pipeline that covers everything simply does not work. It kills agility, it is doomed to fail, and it is so expensive you simply cannot afford it.


Fast forward to the solution. There is no such thing as a single pipeline that fits all releases. A CD pipeline that forces every release through every stage, whether relevant or not, will not work. What you need instead is a continuous delivery platform that adapts the steps and stages in the pipeline to the specifics of the release being pushed through it.


I refer to this as the adaptive CD pipeline. It dynamically builds up the steps and phases that are required, based upon the characteristics of the release, and it continuously adapts the pipeline execution while the release is “in flight”. After all, why should your CD platform not be intelligent enough to include additional steps based upon the results of a previous step? Or upon “lessons learned” from previous releases? Why should it always follow the same path from “source to go-live”?
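To make that concrete, here is a minimal sketch in Python of what “the pipeline is computed, not scripted” could look like. All stage names, release characteristics and rules below are hypothetical illustrations, not taken from any particular CD product:

```python
from dataclasses import dataclass


@dataclass
class Release:
    """Characteristics of a release that drive pipeline assembly (illustrative names)."""
    touches_auth_code: bool = False
    adds_open_source: bool = False
    schema_changes: bool = False


# Each rule pairs a predicate on the release with the stages it requires.
RULES = [
    (lambda r: True,                ["build", "unit-tests", "deploy-test"]),
    (lambda r: r.touches_auth_code, ["vulnerability-scan", "penetration-tests"]),
    (lambda r: r.adds_open_source,  ["license-scan"]),
    (lambda r: r.schema_changes,    ["migration-rehearsal"]),
    (lambda r: True,                ["smoke-tests", "deploy-prod"]),
]


def build_pipeline(release):
    """Assemble only the stages this particular release needs."""
    stages = []
    for predicate, required in RULES:
        if predicate(release):
            stages.extend(s for s in required if s not in stages)
    return stages


def execute(stage):
    # Stub runner; a real platform would delegate to the CI/CD tooling.
    print("running", stage)
    return "ok"


def run(release):
    pipeline = build_pipeline(release)
    while pipeline:
        stage = pipeline.pop(0)
        result = execute(stage)
        # Adapt in flight: a worrying result injects extra stages
        # before the release is allowed to move on toward production.
        if stage == "smoke-tests" and result == "flaky":
            pipeline.insert(0, "full-regression-tests")


run(Release(touches_auth_code=True, adds_open_source=True))
```

The rules decide which stages exist for this particular release, and a stage result can still change the remainder of the route while the release is moving.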


I often use the analogy of route planning. In the old days, you used a map to figure out how to drive from point A to B. Then came the GPS, and that “thing” calculated the route for you, even offering options like the shortest or fastest route. Then came the connected device that used near-real-time traffic data and offered re-routing when you ended up in a traffic jam. Nowadays, you have “pattern based routing algorithms” that suggest a route to keep you out of a traffic jam before it has even formed: the system knows that on a Tuesday morning at 10:00 there is a jam on that specific route, so it suggests one that does not have that problem.


Keep the end goal in mind when you start your Continuous Delivery journey. Make sure your CD platform is not just a rigid workflow of a few hardcoded, scripted pipelines, but is enabled for rule- and business-logic-driven automated configuration of the “optimized” pipeline. And implement your MVP pipeline in a way that lets it evolve into something bigger without redesigning everything.
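Building on the hypothetical sketch above, the gap between a hardcoded MVP and an evolvable platform can be as small as where the stage list lives. If even the MVP expresses its single path as rule data, scaling up later means appending rules, not redesigning the engine:

```python
# MVP: a single unconditional rule -- every release takes the same path.
RULES = [
    (lambda r: True, ["build", "unit-tests", "deploy-test", "deploy-prod"]),
]

# Later: grow the pipeline by appending rules; the engine that
# evaluates them (build_pipeline/run above) does not change at all.
RULES += [
    (lambda r: r.touches_auth_code, ["penetration-tests"]),
    (lambda r: r.adds_open_source,  ["license-scan"]),
]
```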
