How to work backwards to tie technology value to outcomes
In Maine, a state in the far northeastern United States, people giving directions are often heard to say:
“You can’t get there from here”
If you spend time in Maine, getting from point A to point B can be daunting. GPS has helped in today’s world. If you have ever traveled through remote regions of the world, however, you know that the roads are not always as straightforward as the app makes them appear.
The same can be said for software development. What looked like the obviously right path at first ends up heading the wrong way. Most of us push on, trapped by our own delusion of being right. If you are fortunate, though, you pull away from the vortex of the sunk cost fallacy and make the painful decision to backtrack to the beginning.
In my active programming days, I often found myself backtracking. Whether it was a mistake in the data model, using the wrong library, or writing 200 lines of code when 20 would suffice, I would go down the rabbit hole only to realize many hours later that I had made a dumb choice.
The frustrating part of backtracking is recovering from mistakes. If you take the advice of the fine people of Maine, however, instead of focusing on the “here”, it is better to think about the “there”. In other words, you start from the end and work your way back to the beginning.
“Begin with the End in Mind”
-Stephen R. Covey
Starting from the end is not so uncommon. This principle is the second habit of Stephen Covey’s The 7 Habits of Highly Effective People. One of Amazon’s fundamental approaches to serving customer needs and developing product is to “Work Backwards”. Even in my math classes, the idea of working backwards was a common approach to solving problems.
This idea can also be useful for mapping business value of technology initiatives. One of the most common questions that comes up with any significant technology spend is determining return on investment (“ROI”). Having been on the technology vendor side for many years, I can attest that pinning down ROI is rarely a straightforward process.
The most common approach to creating an ROI justification or business case is to “back into the number”. This is when the decision has already been made based on the technical criteria (or simply technology preference), and then numbers and metrics are cobbled together to create the appearance of financial justification.
While serviceable for securing budget, it does little to provide a clear long-term assessment of impact. This lack of rigorous metrics has two implications. First, it puts the initiative at risk during periods of cost-cutting; spending that has no clear justification has a higher likelihood of getting the axe. Second, there is no strong justification for growing adoption, which curtails the full use and value of the technology.
Decisions on technology spend are never cut and dried. There are competing business interests, corporate politics, and the annual budget wrangling. Especially when those competing interests are internal customers demanding funding for their own programs, deciding what to prioritize, and by what criteria, can be a miserable, career-limiting experience.
The connection from technology value to business value is often murky. Like the roads in Maine, it is not some straight line from the start to the end. What is missing are the transitions along the way that connect what the technology offers to impact on the business. In other words, we need a bridge that connects the two.
The mistake made in connecting technology and business value is starting with the technology and leaping to a higher-level business key performance indicator (“KPI”). Instead, much like a math problem, it is better to start with the end and work backwards to how the technology can specifically impact the business objective.
The framework I have used to think through building this bridge is what I call the Three Levels of Technology Metrics, described below:
Level 1 (“L1”) — Metrics directly related to the usage or adoption of technology, such as number of active users on a DevOps tool or amount of content entered
Level 2 (“L2”) — Metrics derived as a direct implication of the use of the technology, such as cycle time from story point to deploy or MTTR rates over time
Level 3 (“L3”) — Metrics tying technology to business goals, such as time to market for new feature or customer satisfaction for new feature or product
To understand how to use this framework, let’s walk through an example of how to identify the L1, L2 & L3 metrics for deploying a developer knowledge tool. Deriving business value from such tools can be difficult, especially when trying to measure nebulous areas like developer productivity. To get to the higher business objective, let’s identify some metrics:
L1 metrics would be the number of searches on the platform, minutes spent in the tool per user, and number of likes per solution.
L2 metrics would cover the reduction in Jira tickets, code commit rates for newly onboarded developers, and higher rates of code sharing.
L3 metrics would include lower defect rates, customer ratings on a feature or app, and increased product availability.
If we take our approach of working backwards to this example, we might start with lower defect rates, connect that to a reduction in tickets, and then tie that to engagement time in the tool. By chaining these metrics together, we create a logical and consistent story.
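To make the chaining concrete, here is a minimal sketch of the backwards chain for the developer knowledge tool example. All metric names and numbers are hypothetical, chosen only to illustrate how an L3 outcome links down through an L2 effect to L1 usage:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    level: int       # 1 = usage, 2 = derived effect, 3 = business outcome
    name: str
    baseline: float  # value measured before the tool was deployed
    current: float   # value measured after deployment

    def pct_change(self) -> float:
        """Percent change from baseline to current."""
        return (self.current - self.baseline) / self.baseline * 100

# Working backwards: start with the L3 outcome, then the L2 effect
# that plausibly drives it, then the L1 usage that drives the L2 effect.
chain = [
    Metric(3, "defect rate per release", baseline=40, current=30),
    Metric(2, "support tickets per sprint", baseline=120, current=90),
    Metric(1, "searches per developer per week", baseline=5, current=14),
]

for m in chain:
    print(f"L{m.level} {m.name}: {m.baseline} -> {m.current} "
          f"({m.pct_change():+.0f}%)")
```

The point of the structure is the ordering: each level's change should have a plausible causal story pointing at the level above it, which is what makes the narrative logical and consistent.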
You might protest that none of these things are strictly correlated, and many factors outside the three levels can influence these metrics. However, as with any experiment, it is important to establish baselines and control groups when implementing the framework so that you can show correlative effects and impacts.
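One simple way to apply the baseline-and-control-group idea is a difference-in-differences comparison: measure a metric before and after rollout for both a control group (no tool) and a treatment group (tool), and subtract the control group's change from the treatment group's. This is a sketch with hypothetical ticket counts, not a rigorous study design:

```python
from statistics import mean

# Weekly support-ticket counts per team (hypothetical data).
control_before = [30, 28, 33, 31]   # teams without the tool, before rollout
control_after  = [29, 30, 31, 30]   # same teams, after rollout period
treated_before = [32, 29, 31, 30]   # teams given the tool, before rollout
treated_after  = [24, 22, 25, 23]   # same teams, after rollout

def avg_change(before, after):
    """Average change in the metric from the before to the after period."""
    return mean(after) - mean(before)

# Difference-in-differences: the treatment group's change, net of whatever
# shifted for everyone (seasonality, hiring, process changes, etc.).
effect = (avg_change(treated_before, treated_after)
          - avg_change(control_before, control_after))
print(f"Estimated effect: {effect:+.1f} tickets per week")
```

The control group absorbs the organization-wide factors you cannot hold constant, which is what lets you claim a correlative effect for the tool rather than for the quarter.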
Another thing you may notice is that, depending on the technology or initiative, an L2 metric could be an L3 metric. This is especially true when business priorities change or when the impact made by the technology is more obvious. Note also that some L3 metrics are highly related, or could even fold into other metrics.
This leads to the question: why define only three levels? Because a bridge is required to connect technology usage or engagement with business value, but adding more levels makes that connection more tenuous. The result is a more convoluted story, which dilutes the impact you want to convey about why a particular technology investment is valuable.
How are you measuring the business value of technology initiatives today? What tools or frameworks have you found to be useful?
Episode #10 — Building an operations sensibility in developers with Christine Yen of Honeycomb.io
Podcast episode with Christine Yen of Honeycomb.io
Welcome to season two of the Heretechs podcast! Co-hosts Justin Arbuckle & Mark Birch welcome Christine Yen, CEO and Co-founder of Honeycomb.io, to discuss building an operations sensibility in developers, the importance of observability from a developer perspective, and the best superhero characters.
Clicking the link takes you to our Anchor page, where you can find links to all your favorite podcast players. Don’t forget to leave a rating / comment!
If you want to be a guest on the podcast, have unique thoughts on the state of IT and Engineering, and have a favorite superhero, we would love to have you on the show!
Check out past Heretechs podcast episodes on Apple Podcasts, Google Podcasts, or wherever you listen to your favorite podcasts. Please like and subscribe 😁
We help IT leaders in enterprises solve the cultural challenges involved in digital transformation and move towards a community-based culture that delivers innovation and customer value faster. Learn more about our work here.