
Benny Van de Sompele's Blog

July 4, 2016

Can we please stop using a conveyor belt as the visualization of continuous delivery? Over and over again, when referring to continuous delivery, people state that you need to automate a "conveyor belt" type of process, much like in a manufacturing plant or car assembly line.

 

Today, a continuous delivery pipeline must be highly adaptive, dynamically composed and made for change.

All things that a conveyor belt is not.

 

But is that the case? Is automating a continuous delivery process really like automating a conveyor belt? I believe it is not, and let me give you a few arguments:

1. at a conveyor belt, every single person can hit the stop button and the entire chain stalls.

2. a conveyor belt is made to work at one specific speed only.

3. all parts of a conveyor belt must work at a predefined aligned speed.

4. there is only one route possible from start to end.

5. a conveyor belt and flexibility don't go together.

 

And while there might be examples where a continuous delivery pipeline can indeed be considered a static, hardcoded set of steps and operations, most will agree that that is not an ideal state. In today's (and tomorrow's) world, continuous delivery operations must be highly adaptive to the type and nature of the change and the application architecture (microservices, cloud native architecture). They must support multiple speeds of operation, whereby some domains or even just some changes follow a different path between development and going live than others. The continuous delivery operations must be configured dynamically and "on the fly", in line with the framework and compliance guidelines that accompany the changes being brought live.

 

In all fairness, back in 2010 we also used the conveyor belt as a symbol when we started to work in DevOps and Continuous Delivery. But as you go through more projects and gain more experience, insights change and concepts mature. I feel this is an example of the maturity increase that continuous delivery concepts went through, from a static, conveyor-belt-like approach to a dynamic, highly adaptive method of working. And you should evaluate from time to time whether your continuous delivery processes and platforms are built for the new adaptive way of working, rather than stuck in the first generation of a "conveyor belt" style of working.

Given the hype around containerization, I am often asked how I recommend organizations embrace the Docker container concept.

 

a pragmatic adoption journey: from "tooling" to "infrastructure" to "components"

 

My recommendation is to definitely implement a container strategy, but do it in a pragmatic, gradual approach so you can learn, benefit and optimize in a controlled way.

 

  • first, containerize the Continuous Delivery tools. Provide Docker images running Jenkins for Continuous Integration "out of the box". Or allow your test teams to spin up many parallel instances of containers running Selenium to speed up test script execution (see the sketch after this list). This already brings benefits such as standardization and optimization to your processes, while you learn about the challenges and issues that come with containerization, and without any impact on the current release pipeline processes.
  • second, containerize the application infrastructure. Providing standardized images which already contain running technology components (such as Apache Tomcat, GlassFish, JBoss, ...) to your development and test teams "as needed" and "when needed" brings great benefits in resource utilization and standardization. Utilizing containerization at the infrastructure level still requires deploying release artifacts into a running container environment in the same way as you currently do in fixed or virtualized environments. But this approach does not "hide" your release artifacts inside containers, and keeps your artifact tracking and tracing visible. Moreover, many organizations simply do not yet promote containers into production and therefore work in a "horizontal" hybrid environment where the lower-level dev and test environments are (partially) containerized, while the higher-level QA and production environments are still "traditional". In such cases, it is much better to have a single, reusable, agnostic release deployment automation in place that works in a containerized application infrastructure as well as in physical/virtual "server" environments.
  • third, containerize the application components. Provided that your application architecture is enabled for containerization (for instance a cloud native architecture using a microservices concept), containerizing the various application components/services from the start and "promoting" the container from stage to stage along the pipeline (instead of deploying the release artifacts over and over again) makes sense. But most likely, not all your application components are good candidates to be run inside containers, so in any case you will need a combination of both containerized and traditional components along the SDLC (referred to as a "hybrid approach").
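
To make the first step concrete, here is a minimal sketch of the "tooling" level: spinning up a handful of parallel Selenium containers for test execution using the Docker SDK for Python. It assumes the `docker` package and a local Docker daemon are available; the image name, container names and port mapping are illustrative only, not a prescription for your setup.

```python
# Minimal sketch: spin up N parallel Selenium containers for test execution and
# tear them down afterwards. Assumes the "docker" SDK for Python and a local
# Docker daemon; image name, container names and ports are illustrative only.
import docker

def start_selenium_nodes(count=4, image="selenium/standalone-chrome"):
    client = docker.from_env()
    nodes = []
    for i in range(count):
        node = client.containers.run(
            image,
            detach=True,
            name=f"selenium-node-{i}",      # hypothetical naming scheme
            ports={"4444/tcp": None},       # publish on a random free host port
            shm_size="2g",                  # browsers need extra shared memory
        )
        nodes.append(node)
    return nodes

def stop_selenium_nodes(nodes):
    for node in nodes:
        node.stop()
        node.remove()

if __name__ == "__main__":
    nodes = start_selenium_nodes(count=4)
    try:
        for node in nodes:
            node.reload()                   # refresh attrs to see the mapped port
            port = node.attrs["NetworkSettings"]["Ports"]["4444/tcp"][0]["HostPort"]
            print(f"{node.name}: Selenium endpoint at http://localhost:{port}/wd/hub")
        # ... point the test runner at these endpoints and run scripts in parallel ...
    finally:
        stop_selenium_nodes(nodes)
```

The same pattern applies to running Jenkins or any other CD tool in a container: the release pipeline itself is untouched, only the tooling moves into containers.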

 

When preparing your organization for a container strategy, assess the capabilities of your continuous delivery platform AND the readiness of your application architecture. Your continuous delivery platform and processes must be made ready to work with containers in a "hybrid" way - where some environments are containerized and some are not, and where some application components can be provisioned in container format and others not.

 

 

Start the containerization journey, but do it in a gradual, pragmatic way from the "tooling" level to the "infrastructure" level to the "component" level. It will allow your organization to adapt the processes, tooling and people for this disruptive adoption journey. And depending on your application architecture, going "all the way" with containers might not be the best option after all.


the adaptive CD pipeline

Posted by vanbe05, Jul 4, 2016

Implementing Continuous Delivery is a transformation project, no doubt about it. Once you get the team over the initial “I don’t want any change” hurdle, the journey picks up momentum and you book the first successes. You create that “MVP” for your pipeline, knowing that you take some shortcuts left and right. But getting it to work end to end and showing those first results is more important than making it 100% complete. Right?

 

But then you need to complete the CD pipeline journey. And everyone has an opinion on what should be included in the CD pipeline. The security team wants you to include automated penetration tests, vulnerability tests, … The legal team wants to include code scans to prevent unregistered open source components from being used … The QA team wants to run a full set of positive and negative tests … And before you know it, your CD pipeline becomes a rigid, complex and resource-consuming beast. You start to doubt whether this CD pipeline is bringing you that “lean & mean” way of fast-tracking a release through the various stages.

 

A single pipeline that covers everything simply does not work. It slows down agility, is made to fail and is so expensive you simply cannot afford it.

 

Fast forward to the solution. There is no such thing as a single pipeline that fits all releases. A CD pipeline that forces every release to go through every stage, whether relevant or not, will not work. What needs to be implemented is a continuous delivery platform which adapts the steps and stages in the pipeline to the specifics of the release that is being pushed through it.

 

I refer to it as the adaptive CD pipeline. It dynamically builds up the steps and phases that are required based upon the characteristics of the release. And it continuously adapts the pipeline execution while the release is “in flight” in the pipeline. After all, why should your CD platform not be intelligent enough to include additional steps based upon the results from a previous step? Or based upon “lessons learned” from previous releases? Why should it always follow the same path from “source to go-live”?
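
As a thought experiment, here is a minimal sketch of that idea in Python: the stages are composed per release from simple rules that inspect the release's characteristics, rather than being hardcoded as one fixed flow. The attribute names and rules are hypothetical, purely to illustrate the principle; a real CD platform would evaluate richer metadata and policies.

```python
# Minimal sketch of an "adaptive" pipeline: the stages are composed per release
# from rules that inspect the release's characteristics, instead of one fixed,
# hardcoded flow. All attribute names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Release:
    touches_security_sensitive_code: bool = False
    uses_open_source: bool = False
    failed_quality_gate_last_time: bool = False
    target: str = "production"            # e.g. "test", "qa", "production"

def compose_pipeline(release: Release) -> list:
    stages = ["build", "unit-tests", "package"]
    if release.uses_open_source:
        stages.append("license-scan")                  # legal's requirement, only when relevant
    if release.touches_security_sensitive_code:
        stages += ["vulnerability-scan", "penetration-test"]
    if release.failed_quality_gate_last_time:
        stages.append("extended-regression")           # "lessons learned" from a previous run
    if release.target == "production":
        stages.append("approval-gate")
    stages += ["deploy", "smoke-test"]
    return stages

# Example: a small change heading to QA skips the heavy security stages it does not need.
print(compose_pipeline(Release(uses_open_source=True, target="qa")))
# -> ['build', 'unit-tests', 'package', 'license-scan', 'deploy', 'smoke-test']
```

The point is not the code itself but its shape: the pipeline definition becomes a function of the release, so a small UI fix and a security-sensitive change no longer travel the exact same route.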

 

I often use the analogy of route planning. In the old days, you used a map to figure out how to drive from point A to B. Then came the GPS, and that “thing” calculated the route for you. It even gave you options like the shortest or fastest route. Then came the connected device that used near-real-time traffic data and offered you re-routing when you ended up in traffic jams. Nowadays, you have “pattern-based routing algorithms” that suggest a route to prevent you from ending up in a traffic jam, even before the traffic jam has started. Because they know that on a Tuesday morning at 10:00 there is a traffic jam on that specific route, they will suggest a route that does not have that problem.

 

Keep the end goal in mind when you start your Continuous Delivery journey. Make sure your CD platform is not just a rigid workflow of a few hardcoded, scripted pipelines, but is enabled for rule- and business-logic-driven automated configuration of the “optimized” pipeline. And implement your MVP pipeline in a way that it can evolve into something bigger without the need to redesign everything.

With continuous delivery gaining a lot of momentum in the market, there is also a growing level of confusion about what it means to be “enabled for continuous delivery”. What is “good”, or at least “good enough”?

 


 

 

In most articles and presentations, you hear over and over again the same handful of examples, such as Facebook (dozens of deployments per hour in production), Netflix (self-healing within seconds) or Spotify. But those are the 0.001% of companies, completely different from most existing organizations, which have a legacy of monolithic applications, have to deal with complex architectural dependencies, and run on a variety of proprietary and open technology stacks. Pretending that each of these organizations can be enabled for continuous delivery in the same way as the 0.001% of digital-native companies is simply setting the wrong expectations.

 

When assessing where an organization is on the journey to reach the level of “enabled for continuous delivery”, I use the following 5 dimensions to calculate a score: can “any release” be deployed to “any environment” at “any moment”, fully “zero touch”, by “anyone”? If so, the assessment score is high, and kudos to the team for reaching that point.

  1. The “any release” dimension validates that proper release management and versioning/revisioning are in place, broken builds are removed, a central repository is in place, and dependency management is under control.
  2. The “any environment” dimension requires that the organization has clear release pipeline management and processes in place, that validation happens, and that automation ensures no mandatory phases can be skipped for the nature of the release.
  3. The “any moment” dimension indicates whether a release deployment needs to be planned well ahead of time because of the need for critical resources or manual activities, or whether it can be decided “ad hoc” and executed immediately (once all approval gates, if any, have passed successfully).
  4. The “zero touch” dimension relates to the ability to prepare, execute, validate and, if needed, roll back without any manual interaction.
  5. “Anyone” is used in the context of segregation of roles and duties along the software life cycle: can a non-technical release manager push the button, or do technical DevOps engineers need to log in to various systems and click the buttons for a release to be promoted and deployed along the life cycle?
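
A back-of-the-envelope way to turn those five dimensions into a number could look like the sketch below. The yes/no checks and the equal weighting are my own illustrative assumptions, not a formal maturity model.

```python
# Minimal sketch of turning the five dimensions into a score. The yes/no checks
# and equal weighting are illustrative assumptions, not a formal maturity model.
DIMENSIONS = ["any release", "any environment", "any moment", "zero touch", "anyone"]

def cd_readiness_score(answers):
    """Return the fraction (0.0 - 1.0) of dimensions the organization satisfies."""
    met = sum(1 for dim in DIMENSIONS if answers.get(dim, False))
    return met / len(DIMENSIONS)

# Example assessment: solid release and environment discipline, fully automated
# deployments, but still tied to planned windows and hands-on technical staff.
answers = {
    "any release": True,
    "any environment": True,
    "any moment": False,
    "zero touch": True,
    "anyone": False,
}
print(f"CD readiness: {cd_readiness_score(answers):.0%}")   # -> CD readiness: 60%
```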

 

Once an organization is enabled for continuous delivery, you can start realizing the benefits that continuous delivery brings: increase the frequency of releases from once every quarter to once every month, every week, or a few times per hour. Start to refactor those monolithic applications into more modern composable applications or even microservices-based architectures. And use the collected release pipeline data to visualize evolutions and improvements, setting reachable and realistic targets for all stakeholders based upon the current state of people, processes and technologies in use at your organization.