
a pragmatic adoption journey for "containers" in a continuous delivery pipeline

Blog post created by vanbe05 on Jul 4, 2016

Given the hype around containerization, I am often asked how I recommend organizations should embrace the Docker container concept.

 

a pragmatic adoption journey: from "tooling" to "infrastructure" to "components"

 

My recommendation is definitely to implement a container strategy, but to do so in a pragmatic, gradual way that lets you learn, benefit and optimize in a controlled manner.

 

  • first, containerize the Continuous Delivery tools. Provide Docker images running Jenkins for Continuous Integration "out of the box", or allow your test teams to spin up many parallel container instances running Selenium to speed up test script execution. This already brings benefits such as standardization and optimization to your processes, lets you learn about the challenges and issues that come with containerization, and has no impact on the current release pipeline processes (see the first sketch after this list).
  • second, containerize the application infrastructure. Providing standardized images that already contain running technology components (such as Apache Tomcat, GlassFish, JBoss, ...) to your development and test teams "as needed" and "when needed" brings great benefits in resource utilization and standardization. Utilizing containerization at the infrastructure level still requires you to deploy release artifacts into a running container environment in the same way as you currently do in fixed or virtualized environments. But this approach does not "hide" your release artifacts inside containers, and keeps your artifact tracking and tracing visible. Moreover, many organizations simply do not promote containers into production yet and therefore work in a "horizontal" hybrid environment where the lower-level dev and test environments are (partially) containerized, while the higher-level QA and production environments are still "traditional". In such a case, it is much better to have a single reusable, agnostic release deployment automation in place that works both in a containerized application infrastructure and in physical/virtual "server" environments (see the second sketch after this list).
  • third, containerize the application components. Provided that your application architecture is enabled for containerization (for instance a cloud-native architecture using a microservices concept), it makes sense to containerize the various application components/services from the start and "promote" the container from stage to stage along the pipeline, instead of deploying the release artifacts over and over again (see the third sketch after this list). Most likely, however, not all your application components are good candidates to run inside containers, so you will in any case need a combination of containerized and traditional components along the SDLC (referred to as the "hybrid approach").
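To make the first step concrete, here is a minimal sketch using the Docker SDK for Python (the "docker" package); the image names, ports and number of Selenium nodes are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch: provision CD tooling as containers (Jenkins + parallel Selenium nodes).
# Assumes the Docker SDK for Python ("pip install docker") and a running Docker daemon.
import docker

client = docker.from_env()

# Jenkins "out of the box": the official image, with its home directory on a named
# volume so job configuration survives container restarts.
jenkins = client.containers.run(
    "jenkins/jenkins:lts",
    detach=True,
    name="cd-jenkins",
    ports={"8080/tcp": 8080},
    volumes={"jenkins_home": {"bind": "/var/jenkins_home", "mode": "rw"}},
)

# Many parallel Selenium instances to speed up test script execution.
selenium_nodes = [
    client.containers.run(
        "selenium/standalone-chrome",
        detach=True,
        name=f"selenium-{i}",
        ports={"4444/tcp": 4444 + i},  # each node exposed on its own host port
    )
    for i in range(4)
]

print("Jenkins container:", jenkins.short_id)
print("Selenium containers:", [c.short_id for c in selenium_nodes])
```

The point is that the release pipeline itself is untouched: only the tools it relies on now come from standardized images.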
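Second sketch - containerized application infrastructure. Again a hedged sketch with the Docker SDK for Python; the Tomcat image, paths and artifact name are assumptions. The release artifact stays outside the image, so the existing deployment step and artifact tracking remain unchanged.

```python
# Sketch: standardized infrastructure container (Tomcat), with the release artifact
# deployed into it the same way as into a physical/virtual server environment.
import shutil
import docker

client = docker.from_env()

# Illustrative host directory that the existing deployment automation already targets.
DEPLOY_DIR = "/opt/releases/dev-env-1/webapps"

# Standardized Tomcat image handed to a dev/test team "as needed"; its webapps
# directory is bind-mounted from the host, so artifacts are not hidden in the image.
tomcat = client.containers.run(
    "tomcat:9",
    detach=True,
    name="dev-env-1-tomcat",
    ports={"8080/tcp": 8081},
    volumes={DEPLOY_DIR: {"bind": "/usr/local/tomcat/webapps", "mode": "rw"}},
)

# The release deployment step stays artifact-centric and container-agnostic:
# it simply places the versioned WAR where Tomcat (containerized or not) expects it.
shutil.copy("builds/myapp-1.4.2.war", DEPLOY_DIR)
```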
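Third sketch - containerized application components. Here the component image is built once and that same image is "promoted" from stage to stage by re-tagging and pushing it; the registry, service name and tagging scheme are made up for illustration.

```python
# Sketch: build a component image once, then promote the identical image along the
# pipeline (dev -> test -> qa) by re-tagging, instead of redeploying artifacts.
import docker

client = docker.from_env()

REGISTRY = "registry.example.com/myteam"   # illustrative registry
SERVICE = "order-service"                  # illustrative microservice
VERSION = "1.4.2"

# Build the component image from its Dockerfile exactly once.
image, _build_logs = client.images.build(
    path=f"services/{SERVICE}",
    tag=f"{REGISTRY}/{SERVICE}:{VERSION}",
)

def promote(stage: str) -> None:
    """Tag the already-built image for a pipeline stage and push it to the registry."""
    image.tag(f"{REGISTRY}/{SERVICE}", tag=f"{VERSION}-{stage}")
    client.images.push(f"{REGISTRY}/{SERVICE}", tag=f"{VERSION}-{stage}")

for stage in ("dev", "test", "qa"):
    promote(stage)
```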

 

When preparing your organization for a container strategy, assess the capabilities of your continuous delivery platform AND the readiness of your application architecture. Your continuous delivery platform and processes must be made ready to work with containers in a "hybrid" way - where some environments are containerized and some are not, and where some application components can be provisioned in container format and some others cannot. The sketch below illustrates what such environment-agnostic deployment logic could look like.
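This is a small, hypothetical Python sketch: one deploy entry point that reads per-environment configuration and either runs the promoted container image or ships the release artifact to a traditional server. Environment names, hosts and paths are all assumptions for the example.

```python
# Sketch: a single reusable deployment step that works for both containerized and
# traditional environments. All names, hosts and paths are illustrative.
import subprocess
import docker

ENVIRONMENTS = {
    "dev":  {"containerized": True,  "image": "registry.example.com/myteam/myapp"},
    "test": {"containerized": True,  "image": "registry.example.com/myteam/myapp"},
    "qa":   {"containerized": False, "host": "qa-app01.example.com"},
    "prod": {"containerized": False, "host": "prod-app01.example.com"},
}

def deploy(env_name: str, version: str, artifact: str) -> None:
    env = ENVIRONMENTS[env_name]
    if env["containerized"]:
        # Lower-level environments: run the promoted container image.
        client = docker.from_env()
        client.containers.run(
            f"{env['image']}:{version}",
            detach=True,
            name=f"myapp-{env_name}",
        )
    else:
        # Higher-level environments: deploy the release artifact the traditional way.
        subprocess.run(
            ["scp", artifact, f"{env['host']}:/opt/myapp/releases/"],
            check=True,
        )

deploy("dev", "1.4.2", "builds/myapp-1.4.2.war")
deploy("qa", "1.4.2", "builds/myapp-1.4.2.war")
```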

 

 

Start the containerization journey, but do it in a gradual, pragmatic way from the "tooling" level to the "infrastructure" level to the "component" level. This allows your organization to adapt its processes, tooling and people to this disruptive adoption journey. And depending on your application architecture, going "all the way" with containers might not be the best option after all.
