If you’re like me and have been around the block in tech more than once, you’ve seen three-letter acronyms come and go. Sometimes the technology they refer to is a flash in the pan; other times it hangs around for a bit before being subsumed into the platforms we build upon.
And so it is with value stream management (VSM), which has grown in popularity in DevOps circles. The first question I tend to ask about this is, why? Is this some new innovation that needs a name, or has someone spotted a weakness in existing tools and methods?
In a nutshell, VSM refers to the need (or the ability) to have visibility into how software is being built. As units of function pass along the pipeline, from concept to deployment, managers benefit from understanding how that is happening: how fast work is moving, where the bottlenecks are, what value is being delivered, and so on.
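To make that a little more concrete, here is a minimal sketch, in Python, of the kind of flow metric a VSM tool is meant to surface. The stage names and timestamps are hypothetical; a real implementation would pull this data from issue trackers and CI/CD systems rather than a hard-coded list.

```python
from datetime import datetime
from statistics import mean

# Hypothetical stage-entry dates for a handful of work items, keyed by stage name.
# In practice a VSM tool aggregates these from issue trackers, repos, and CI/CD systems.
work_items = [
    {"idea": "2024-03-01", "dev": "2024-03-04", "review": "2024-03-10", "deploy": "2024-03-11"},
    {"idea": "2024-03-02", "dev": "2024-03-03", "review": "2024-03-12", "deploy": "2024-03-14"},
    {"idea": "2024-03-05", "dev": "2024-03-06", "review": "2024-03-15", "deploy": "2024-03-16"},
]

STAGES = ["idea", "dev", "review", "deploy"]

def days_between(start: str, end: str) -> int:
    """Elapsed days between two ISO dates."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Average time spent in each stage (entering this stage until entering the next one).
stage_durations = {
    stage: mean(days_between(item[stage], item[nxt]) for item in work_items)
    for stage, nxt in zip(STAGES, STAGES[1:])
}

# End-to-end lead time, concept to deployment, plus the slowest stage.
lead_time = mean(days_between(item[STAGES[0]], item[STAGES[-1]]) for item in work_items)
bottleneck = max(stage_durations, key=stage_durations.get)

print(f"Average lead time: {lead_time:.1f} days")
for stage, days in stage_durations.items():
    print(f"  {stage:>8}: {days:.1f} days in stage")
print(f"Bottleneck stage: {bottleneck}")
```

Even a toy calculation like this makes the point: the value isn’t in the arithmetic, it’s in having the end-to-end data in one place so someone can act on it.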
The question of whether we need VSM is particularly pertinent in the field of software development, not least because people have been building applications for an awfully long time. You’d think we’d know how to do that, and how to manage the process by now.
So, has the DevOps world had an epiphany, suddenly discovering the secret to life, the universe, and how to develop software? Not quite. VSM (which also has a heritage) exists as a response to a current need, so let’s take a look at the causes.
First, let’s face it: Software development has been running itself into the sand for decades. As systems became larger and more complex, linear processes couldn’t keep up or, more accurately, increasingly slowed things down. There may be some waterfall advocates still out there, but all too often, the process itself was the bottleneck, hindering innovation.
Back in the nineties, pockets of people looked into different ways of doing things. Some went for lean manufacturing approaches and Japanese efficiency techniques. Others focused on outcomes, with use-case-driven design and eXtreme programming, both of which were about just getting stuff done. Still, when I was training people in agile development methodologies such as DSDM, such approaches were very much the exception.
And then a new reality appeared, driven by the Web, open source, RESTful APIs, and more, in which kids were getting stuff done and leapfrogging older, crustier approaches. Sites and apps needed to be developed fast; they needed to be put together and put out there quickly. People started to say: Look, can we just get that website by next week?
The need for speed was very much driven by fear, and we’re still seeing this today as organizations are (rightly, yet hyperbolically) being told how they need to transform or risk going out of business. But as software development accelerated, it hit new challenges and bottlenecks—not the least of which was the need to control change (one of the founding principles of DevOps, in 2007).
Fast forward to today, and there’s a whole new set of challenges. The fact is that any approach, if applied universally, will eventually show weaknesses. In this case, “just” developing things fast comes at the expense of other aspects, such as developing them well (cf. shift-left quality and security) or delivering things that make a positive difference.
The latter is where VSM kicks in. It basically serves to fill a gap: If you’re not thinking about whether you’re doing the right stuff, in the right way, then it’s probably time to start. We are now in an age where agile practices, which used to be the exception, have become the norm. But agile itself is not sufficient: “managed agile” is what’s needed.
Which brings us to another challenge. The world has moved from scenarios where everyone was building stuff in the same (waterfall) way to development processes that flex according to what people want to do. This is great when you’re just getting going and want to focus on building stuff, but not so good when you want to, say, switch teams and crack on without having to relearn how everything works.
Frankly, development processes have become fragmented, inefficient, cumbersome, and costly. Which is not good—teams don’t want to be spending their time managing processes and tools, when they could be building cool new applications. And this is where VSM comes in.
The term value stream comes from manufacturing. The easiest way to think about it is as a stream of activities, each building value on top of the last, culminating in whatever you’re trying to deliver. So, first, start treating the development pipeline as a value stream; make it efficient and effective, then look to standardize value streams across the organization.
Someone recently asked me: Isn’t VSM just applying business process modeling and management techniques to software? And I answered: yes, that’s exactly what it is: applying business process modeling and management to software development. This goes back to Hammer and Champy’s good old definition of a business process: a sequence of activities that deliver value to a customer, and that’s what software delivery should be.
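To illustrate (with hypothetical stage names, not a prescribed model), a value stream really can be written down as exactly that: an ordered sequence of activities, each declaring the value it adds for the customer. The sketch below shows one way such a definition might look, something teams across an organization could share rather than each inventing their own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    """One activity in a value stream, in the Hammer and Champy spirit:
    each stage should add value a customer would eventually recognize."""
    name: str
    value_added: str
    owner: str  # the role accountable for the stage, not an individual

# A hypothetical, organization-wide value stream definition that teams could
# adopt as-is instead of each reinventing their own pipeline structure.
standard_delivery_stream = [
    Stage("idea",    "a validated problem worth solving",         "product"),
    Stage("build",   "a working, reviewed increment of software", "engineering"),
    Stage("verify",  "evidence of quality and security",          "engineering"),
    Stage("release", "the feature available to customers",        "operations"),
    Stage("measure", "feedback on whether value was delivered",   "product"),
]

if __name__ == "__main__":
    for stage in standard_delivery_stream:
        print(f"{stage.name:>8} -> {stage.value_added} (owned by {stage.owner})")
```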
Value stream management exists because it has to right now, even though it is still absent from many places trying to implement DevOps practices. So it’s really the reinsertion of tried-and-true management, governance, and visibility principles into fast-moving, dynamic, and agile environments.
Will VSM last? That’s another good question. I’m hearing some organizations find VSM to be yet another overhead (clue: they’re probably doing it wrong). I’m also of a mind that if we, as a collective of development and operations advocates, could agree that we don’t need every individual project to reinvent best practice, we could probably standardize our pipelines more, allowing more time to get on with, yes, the cool stuff.
I don’t want to see a return to onerous methodologies such as waterfall. I do want to see innovators innovate, developers develop, and operators operate, all with minimal stress. I’m watching with interest the DevOps Institute’s move toward assessing capabilities, I’m enjoying seeing the adoption of product-based approaches in software development, and I’m talking to multiple vendors about how pipelines as code might coalesce into a Terraform-like open standard.
All of these threads are feeding a more coherent future approach. There’s a catalyst for all of this, namely microservices, which call for simplicity in the face of the complexity they create. See, for example, a recent conversation with DeployHub’s Tracy Ragan about the need for application configuration management.
I realize I haven’t answered the question yet. I believe VSM will prevail, but as a feature of more comprehensive end-to-end tooling and platforms, rather than as an additional layer. Managed value streams are a good thing, but you shouldn’t need a separate tool for that, isolated from the rest of your toolchain.
So, ultimately, VSM is not a massive epiphany. It’s simply a symptom of where we are, as we look to deliver software-based innovation at scale. The journey is far from over, but it’s reaching a place of convergence based on microservices (which, somewhat ironically, go back to modular design principles from 1974). Best practice is emerging, and will deliver the standards and platforms we need as we move into the future.