
The Need for Release Profiling

Posted by vanbe05 on Jul 14, 2016

It is great to see more and more teams establishing multiple release pipeline paths from development to live production. In one of our recent projects, the customer implemented separate paths for major releases, minor releases and emergency fixes. That is great: it keeps the CD pipeline lean and mean and trims unnecessary, non-value-adding steps from the pipeline, which ultimately yields a pipeline optimized for cost, resources and throughput.

 

Release profiling is the science of calculating the risk level associated with a release based on different data points.

But what if you could have more than just three routes to go live? What if you did not depend on the release manager to arbitrarily decide whether a change is major or minor? In an ideal world, why couldn't you use data to mathematically decide which steps are relevant and value-adding in your continuous delivery pipeline, ensuring the release will not break anything and delivers the expected results?
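As a purely hypothetical sketch (the route names and thresholds here are invented, not taken from any real platform), that data-driven decision could be as simple as a function mapping a computed risk score to a route:

```python
# Hypothetical sketch: choose a pipeline route from a computed risk score
# instead of a release manager's judgment. Names and thresholds are invented.

def select_route(risk_score: float) -> str:
    """Map a 0.0-1.0 release risk score to a pipeline route."""
    if risk_score < 0.2:
        return "lean"       # few gates, fastest path to production
    if risk_score < 0.6:
        return "standard"   # the usual verification steps
    return "extensive"      # full regression, security and performance gates
```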

 

In order to calculate a release profile level or score, input from multiple domains must be used, such as the following (a minimal scoring sketch follows the list):

  • Source code analysis tools such as SonarQube, SourceMeter or Semmle, to name a few. SCA tools can analyze the impact and the differences in the source code between the current build and the last version, which is a valuable insight and data point for automatic release profile calculation.
  • Data input for release profiling starts even before coding begins. With solutions like Agile Requirements Designer, there is calculated insight into the level of impact associated with every requirement included in the scope of a release. Adding or removing requirements from the scope of a release triggers a (re)calculation of the release profile.
  • Vulnerability and security scanning tools such as Veracode and Black Duck, to name just two, can provide data input to further dynamically adjust the release profile level.
  • And why not simple data analytics? Tap into the value of historical release performance, where you can see that code delivered by outsourced party X, by team Y or around technology Z has a high probability of causing issues and problems. With the right data collection and analytics in place, through technologies like Apache Kafka and Splunk, you can identify such trends and use them as data inputs to set the release profile.
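A minimal sketch of how these inputs could be combined, assuming each source has already been normalized to a value between 0.0 (low risk) and 1.0 (high risk); the input names, weights and example values are all invented for illustration:

```python
# Hypothetical sketch: combine normalized risk inputs from the sources above
# into one release profile score. Names, weights and inputs are invented.

WEIGHTS = {
    "code_change_impact": 0.3,   # e.g. diff analysis from an SCA tool
    "requirement_impact": 0.25,  # e.g. scope impact from requirements tooling
    "vulnerability_risk": 0.25,  # e.g. findings from security scanners
    "historical_failure": 0.2,   # e.g. past failure rate for this team/tech
}

def release_profile(data_points: dict[str, float]) -> float:
    """Weighted average of normalized risk inputs; returns 0.0-1.0."""
    return sum(WEIGHTS[k] * data_points.get(k, 0.0) for k in WEIGHTS)

score = release_profile({
    "code_change_impact": 0.7,
    "requirement_impact": 0.4,
    "vulnerability_risk": 0.1,
    "historical_failure": 0.5,
})
print(f"release profile score: {score:.2f}")  # about 0.44 for these inputs
```

A weighted average is only one possible model; in practice the weights themselves could be tuned against the historical release data mentioned in the last bullet.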

I see release profiling as yet another step toward flexible and dynamic release pipelines. It is certainly not yet common practice, but what matters most is that the continuous delivery platform you put in place is not a hardcoded-workflow type of solution. It must allow dynamic adaptation to the specific needs of the release, or even of the release iteration/build (see another article on this topic). Establishing automated release profiling from multiple data sources is a key enabler of highly adaptive and optimized continuous delivery pipelines, resulting in lower cost and higher throughput.
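To make the idea of a non-hardcoded workflow concrete, here is a hypothetical sketch in which the set of pipeline steps is assembled per release from the profile score; all step names and thresholds are invented examples, not features of any specific platform:

```python
# Hypothetical sketch: assemble pipeline steps per release instead of running
# a fixed workflow. Step names and thresholds are invented for illustration.

def build_pipeline(risk_score: float) -> list[str]:
    steps = ["build", "unit-tests", "deploy-to-staging"]
    if risk_score >= 0.3:
        steps.append("regression-tests")                # broader coverage for riskier changes
    if risk_score >= 0.5:
        steps += ["security-scan", "performance-tests"]  # heaviest gates only when warranted
    steps.append("deploy-to-production")
    return steps

print(build_pipeline(0.44))  # the profile score computed above drives the step set
```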
