Deliver Faster by Going Smaller

How fast you can deliver software has earned bragging rights in the DevOps world for some time now. In fact, at the birth of DevOps in 2009, John Allspaw and Paul Hammond presented at the Velocity Conference that they deployed 10 times a day at Flickr. In that era, weekly deploys (sometimes called releases) were considered fast; 10 deploys a day was ridiculously fast and flew in the face of traditional wisdom. Since then, we have seen a dramatic rise in companies talking about their delivery cadence and how many deploys they do in a day. What's even more interesting is that the IT industry is starting to see improved cadence as an actual competitive advantage rather than just bragging rights. In this article, we will explore why your delivery cadence matters, along with a few practices and tools you can use as you work to improve it.

First, a little background…

We previously released a model of DevOps and the transformation that happens in these four key areas:

  • Treating our systems and infrastructure as code, from how we version it to how we build it.
  • Changing our engineering culture to orient around the total delivery and usage experience.
  • Creating feedback loops in the runtime environment that inform all parts of the engineering team.
  • Favoring a faster delivery cadence and a reduction in change volume.

Let's explore what this means when we add security to the mix. We will be using the word rugged as a way to introduce security into DevOps. This is not a way to create a new type of DevOps, but to describe the functional ways we change our approach as we grow. It's still DevOps; we are just applying a rugged filter to help call out specific new practices.

Why is Delivery Cadence so Important?

In the '90s and throughout most of the 2000s, most of IT followed a waterfall model for delivering software. This means that software spends the majority of its time in architecture and design, and only toward the end of development does it actually come together and function. The window for design and development could easily be six months or longer, with the last month or so being the integration phase, where everything gets connected and run together to be tested as a final unit.

When you have a lot of changes to deliver, you need a bigger delivery window and something really big to put them in.

The theory is that if we are really good at gathering the requirements and specifying all the development tasks upfront, then all the changes will come together easily at the end as the final result. Effectively, we batch all the changes together into a release in the latter stages of the software engineering effort. Since delivering this batch of changes is quite an undertaking, you can't do it very often. This causes releases to happen once a quarter, or every six months, in many organizations. And since releases are so infrequent, it also encourages stuffing as many changes as you can fit into each one.

We believed this process:

  • Resulted in less rework
  • Increased the stability of our systems and software
  • Was more secure
…and then we found out we were wrong. Way wrong.

In retrospect, it's easy to look back and wonder why we clung to this archaic process and philosophy.

In the Modern Era of DevOps

Source: 2016 State of DevOps Report by Puppet and DORA.

Looking at the 2016 State of DevOps report, it turns out that high performing organizations actually experience:

  • Faster recovery times
  • Less rework
  • Faster cycles from Concept to Cash

Additionally, the report finds that security gets better with faster delivery, not worse.

We found that high performers were spending 50 percent less time remediating security issues than low-performing organizations. In other words, because they were building security into their daily work, as opposed to retrofitting security at the end, they spent significantly less time addressing security issues.
— 2016 State of DevOps Report

Three Common Practices with Implications for Security

There is an excellent book titled Continuous Delivery by Jez Humble and David Farley that is the comprehensive and definitive work on the subject. You should definitely check it out. The book defines many practices; I want to call out three that specifically improve security.

1. Smaller Changes are Easier to Rationalize

Security problems are by their very nature bugs in software. Bugs, whether in logic or in language-specific implementation details, are generally the result of a mistake. Mistakes happen, and that is unavoidable. One benefit of a higher deploy frequency is that a smaller set of changes goes out each time, which makes each deploy easier to reason about. This simplicity and readability of changes helps with both auditing and security testing.

One common security side effect is that different portions of your code can have different security controls applied. Does your change touch the authentication section of the codebase? If yes, then additional testing can happen, or alerts can trigger for humans to approve. In the quarterly release methodology discussed earlier, one of the big problems is that the "change" is actually a batch of hundreds or thousands of changes, so when you ask whether the change affects the authentication section, the answer will always be: yes!
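As a minimal sketch of this idea (the path names and the gate itself are illustrative assumptions, not from any particular tool), a pipeline step could inspect the list of changed files and require extra review only when a sensitive area is touched:

```shell
#!/bin/sh
# Hypothetical deploy gate: the sensitive path prefixes are illustrative.
SENSITIVE_PATHS="app/auth/ app/session/"

# $1: newline-separated list of changed files,
# e.g. the output of `git diff --name-only main...HEAD`
requires_security_review() {
  for path in $SENSITIVE_PATHS; do
    if printf '%s\n' "$1" | grep -q "^$path"; then
      return 0  # a sensitive area was touched
    fi
  done
  return 1
}

# With small, frequent deploys most change sets never trip this gate;
# a quarterly batch of thousands of changes would trip it every time.
if requires_security_review "app/auth/login.rb"; then
  echo "security review required"
fi
```

Note the payoff: the gate is only useful because each deploy's change set is small enough for the answer to sometimes be "no".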

2. All the Testing is Automated

Continuous Delivery pipelines hinge on automated testing. Each commit, no matter how small, goes through the same testing before being released to a pre-production environment and ultimately to production. This is a good thing, and security can take advantage of this playing field by adding static and dynamic security tooling to the pipeline. Adding security tools like Brakeman or Veracode alongside more dev-focused tools like FindBugs and linters is one way teams are handling this.
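One hedged sketch of such a pipeline stage follows. The `run_checks` helper is an assumption for illustration, and the commands passed to it are stand-ins; in a real pipeline you would substitute your actual scanner invocations (Brakeman, a linter, a dynamic scanner, and so on):

```shell
#!/bin/sh
# Run every check even if an earlier one fails, so a single pipeline
# stage reports all security and quality findings, then fail the stage
# if any check failed.
run_checks() {
  failed=0
  for cmd in "$@"; do
    if sh -c "$cmd"; then
      echo "PASS: $cmd"
    else
      echo "FAIL: $cmd"
      failed=1
    fi
  done
  return $failed
}

# Stand-in commands; replace with your real tools.
run_checks "true" "true" && echo "stage passed"
```

Running all checks before failing (rather than stopping at the first failure) gives developers the complete list of findings from one commit, which keeps the feedback loop tight.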

3. Assurance and Confidence in Changes

One of the core tenets of Continuous Delivery is that artifacts are built only once and (as much as possible) are immutable. Continuous Delivery pipelines track an artifact from the build, to a repository, through testing, to deployment in production. This increases confidence and ensures there is an audit chain for changes. This matters to security because manual changes or hotfixes injected into an environment or its software commonly open up holes later on.
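A minimal sketch of that audit chain, assuming a checksum-based record (the file names and directory layout here are illustrative, not from any specific CD tool):

```shell
#!/bin/sh
# Build once, record the artifact's digest, and verify the digest before
# deploying, so the bits promoted to production are exactly the bits
# that were tested. Any out-of-band edit to the artifact breaks the check.
verify_artifact() {
  # $1: directory holding the artifact and its recorded digest
  (cd "$1" && sha256sum -c app.tar.sha256 >/dev/null 2>&1)
}

build_dir=$(mktemp -d)
printf 'built once' > "$build_dir/app.tar"                # the build step
(cd "$build_dir" && sha256sum app.tar > app.tar.sha256)   # record digest

if verify_artifact "$build_dir"; then
  echo "artifact verified; safe to promote"
fi
```

The same recorded digest can be checked at every promotion step (repository, staging, production), which is what turns "built once" into an auditable chain rather than a convention.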

Summary

We have seen that the modern approach to DevOps and culture brings new challenges and new opportunities. As we overlay a rugged approach, there are several ways security can change the culture and add value.



At Signal Sciences we are building the industry’s first Next Generation Web Application Firewall (NGWAF). Our NGWAF was built in response to our own frustrations of trying to use legacy WAFs while enabling business initiatives like DevOps, cloud adoption and continuous delivery. The Signal Sciences NGWAF works seamlessly across cloud, physical, and containerized infrastructure, providing security without breaking production traffic.