Measuring success of agile teams

In recent years a huge share of organisations have adopted agile methodologies within their technology teams (and, to a lesser extent, throughout the wider organisation). For most companies the main driver was increased efficiency and agility: an agile framework helps an organisation maintain a focus on rapid delivery of business value, through continuous feedback and the planning exercises built around it. Along the way, organisations also adopted a common way to describe requirements and features: user stories.

User Stories

One of the best-known concepts in agile development is the user story. I trust that 95% of readers know what a user story is, but if not, Wikipedia defines a user story as: an informal, natural-language description of one or more features of a software system. User stories are often written from the perspective of an end user or user of a system.

Usually user stories are written by agile product owners, after discussions with stakeholders or customers, to describe a requirement. They are designed to make sure the stakeholder and the development team have a common understanding of what needs to be delivered. User stories usually take the following form:

As a (*role*), I can (*feature*) so that (*reason*)

As such it provides a way for all readers to understand the value of the feature it describes.

Hypothesis driven development

All of the above seems fine, so why change it? Although user stories provide a great way to describe features, they lack one important aspect: measuring success. A feature can of course be delivered successfully to customers, but that does not guarantee it actually increases the value of your product to your customers.

Measure value

In order to know whether a feature that has been developed delivers value (and as such can be regarded as a success), the first step is to formulate a hypothesis associated with the feature. For every feature you build, you already have some value in mind; otherwise it's not worth developing. What is rarely done, however, is identifying the metric the feature actually impacts. In our case, when we decide to change the way our users can browse our content, we do this because we assume it will affect the engagement of our users. We would therefore identify a metric we can measure, and assume a certain change to that metric.
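To make this concrete, a feature hypothesis can be captured as a small structured record: the feature, the metric it should impact, and the change we expect. This is just a minimal sketch; the `Hypothesis` class and the example values are hypothetical, not something from the post itself.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """A feature hypothesis: what we build, and what we expect it to change."""
    feature: str            # the change we ship
    metric: str             # the metric we expect the feature to impact
    expected_change: float  # relative change we expect, e.g. 0.05 for +5%


# Hypothetical example for the content-browsing change described above
browse_redesign = Hypothesis(
    feature="new content browsing UI",
    metric="avg page views per session",
    expected_change=0.05,  # we assume a 5% lift in engagement
)
```

Writing the hypothesis down like this forces the team to name a measurable metric and an expected effect size before the feature ships, rather than after.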

Choosing the right metric

As we want to be able to declare success in order to prove a hypothesis, it is important to choose the right metric. The most logical approach is to make sure the value of the feature is expressed by the metric. E.g., for the case above, we would most likely identify metrics like average page views per session and/or average number of sessions per unique user. When we launch the feature and the metrics show the improvement we expected, it's a success, right? Well… not always. During the broadcast of a football match, a viewer might see a dribble and conclude that it was the single deciding factor for the goal that followed. But what about the left back's run in behind the defence? What about the striker occupying two opponents with his own movement? Likewise, what if the marketing team just launched a campaign, or the 8am news just aired a story about your product? For a metric to be valuable, it is important that it shows a clear cause and effect.

Validate assumptions

When the metric is clear and the expected amount of change for that metric has been identified, it is time to validate the actual assumption. As said above, it is unfortunately not always easy to demonstrate that the delivered feature caused the success. There are multiple ways to solve this, but the most trusted one is to split test your feature: you let part of your users consume the "new" product and another part the "old", existing product. The effect of the feature can then be safely measured and the effect demonstrated. Based on this you can declare success (or failure ;))! But that is a conversation for another post of ours; stay tuned!
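The assignment step of a split test can be sketched in a few lines. The snippet below is a minimal illustration, assuming users are identified by some stable id; the function names and the 50/50 split are my own choices, not part of the original post. Hashing the user id (instead of assigning randomly on each request) keeps a user in the same variant across sessions.

```python
import hashlib


def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into the 'new' or 'old' experience.

    The same user_id always maps to the same variant, so a user does not
    flip between experiences from one session to the next.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # pseudo-uniform in [0, 1)
    return "new" if bucket < split else "old"


def lift(new_values: list[float], old_values: list[float]) -> float:
    """Relative change of the metric in the 'new' group vs the 'old' group."""
    new_avg = sum(new_values) / len(new_values)
    old_avg = sum(old_values) / len(old_values)
    return (new_avg - old_avg) / old_avg
```

In a real experiment you would also run a statistical significance test on the two groups before declaring success; comparing averages alone, as `lift` does here, only tells you the direction and size of the observed difference.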
