#13 Experimentation - It's NOT optional anymore!
Running experiments - continuous and connected - is extremely important to ensure continued business success. It becomes even more important in challenging times like these, when relying only on data from the past is not going to be sufficient for any business.
Continuous - Results from past experiments must feed into the pipeline of newer experiments: based on the questions your past experiment answered, newer questions must be asked and inferences built on top of them.
Connected - Apply results from small experiments to your entire customer base: learn from a few, but make decisions about all.
Read more about this here
Back to a framework for running experiments: I love this slightly modified, structured framework from Lean Startup for defining your experiments, and I will walk you through how to use it.
We believe that [Assumption A] and [Assumption B]… and [Assumption Z] are true
Therefore we believe [this Capability]
Will result in [this outcome]
We will have the confidence to proceed when [we see a measurable signal]
Let me walk you through an example of a successfully designed experiment we recently ran, putting it in the Lean Startup recommended framework:
We believe that a certain group of users loves our new product more than anyone else.
Therefore we believe this new product will result in high conversion rates within this specific user group
We will have the confidence to proceed when we see at least a 50% increase in conversion rates
Now here are the key steps in designing an experiment based on the above framework:
Define the problem and state our hypothesis on how we are going to solve it
Set the right KPI and define what success looks like
Define the experiment statement: approach and resulting actions
Analyze experiment results and improve/take decisions
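Before any build work starts, the framework above can even be written down as a structured record so every field is explicit. This is a minimal illustrative sketch; the class and field names are my own assumptions, not part of Lean Startup or any library:

```python
# Sketch: the experiment template as a structured record.
# Class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Experiment:
    assumptions: list        # "We believe that [A]... are true"
    capability: str          # "Therefore we believe [this Capability]"
    expected_outcome: str    # "Will result in [this outcome]"
    success_signal: str      # "We will proceed when [measurable signal]"

exp = Experiment(
    assumptions=["A specific user group loves the new product more than anyone else"],
    capability="The new product, tuned further for this group",
    expected_outcome="Higher conversion rates within this user group",
    success_signal="At least a 50% increase in conversion rate",
)
```

Writing it down like this forces you to fill every blank before the experiment runs, which also helps with the stakeholder-alignment problem discussed later.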
And here’s how we did this:
1. Problem Statement
For a new product we had launched, some early signals and data indicated that a certain user group was lapping up the product with much more ease (literally snatching it away from us) and experiencing fantastic delight.
So we decided to dig deeper here. My hypothesis was: we will see higher conversion rates if we tweak the product even more to delight this specific user group. I am a big believer in going after a small market that LOVES you rather than catering to a large market that merely LIKES you!
2. Set the right KPI and define what success looks like
The single metric here was undoubtedly the conversion rate, and we set the success threshold at 50% higher than usual.
3. Experiment Approach & Actions
Build a deeper understanding of the user persona - We spoke to 50+ users in our target persona, and a few in peripheral and non-target personas as well. We deeply understood their motivations, blockers, and capabilities, saw how this product was a great fit, and learned how specifically we could improve it further for them.
Split the users into 2 groups - "Target Persona" and "Non-Target Persona" - We had a 30-70 split between the Target and Non-Target personas.
High-impact changes to the product for the Target Persona - We prioritized HIGH-impact changes to the product for the target persona.
High-impact changes to the sales process for the Target Persona - We tweaked the sales process to make it value-based selling.
The experiment - We ran the experiment for 1 week, working closely with the 30% of users in the Target Persona.
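Our 30-70 split fell out naturally from the persona definition, but if your experiment needs a random assignment instead, a common technique (a sketch of my own, not what we used here) is deterministic hash bucketing, so each user always lands in the same group across sessions:

```python
# Sketch: deterministic hash bucketing for a 30/70 split test.
# The salt string is an arbitrary assumption; any stable label works.
import hashlib

def assign_group(user_id: str, salt: str = "exp-v1") -> str:
    """Hash the user id so the assignment is stable and reproducible."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # uniform bucket in 0..99
    return "treatment" if bucket < 30 else "control"

groups = [assign_group(f"user-{i}") for i in range(10_000)]
treatment_share = groups.count("treatment") / len(groups)  # roughly 0.30
```

Changing the salt re-randomizes everyone, which is useful when you rerun an experiment and want fresh groups.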
4. Analyze experiment results and improve/take decisions
We saw a 60% increase in the conversion rate; our experiment surpassed the threshold we set! We now need to run this a few more times to establish consistency of outcome. There isn't a doubt that we must go after this hypothesis aggressively, but newer areas now open up for experimentation: is there a specific international geo we should target with this persona? What is the right price point? How do we scale this? It comes back to my earlier point on running continuous and connected experiments.
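One way to build confidence that a lift like this is real rather than noise is a two-proportion z-test on the raw conversion counts. The counts below are made-up illustrative numbers (our actual sample sizes are not in this post); the test itself is standard:

```python
# Sketch: two-proportion z-test on conversion counts.
# The counts are hypothetical examples, not our actual data.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for p_a vs p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # from normal CDF
    return z, p_value

# Hypothetical: control converts 100/1000 (10%), target persona 160/1000 (16%),
# i.e. a 60% relative lift.
z, p = two_proportion_z(160, 1000, 100, 1000)
lift = (160 / 1000) / (100 / 1000) - 1
```

A z above 1.96 (p below 0.05) suggests the lift would be unlikely under pure chance, which complements rerunning the experiment for consistency.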
Sometimes I also see that after an experiment is run, key stakeholders and other team members argue over the results, because each one wants to stick to their own confirmation bias.
To ensure that your team won’t argue with your experimental results, take the time to define and get alignment around the following elements:
The Assumption: Be explicit about the assumption you are testing. Be specific.
Experiment Design: Describe the experiment stimulus and/or the data you plan to collect.
Participants: Define who is participating in the experiment. Be specific. All customers? Specific types of customers? And be sure to include how many.
Key Metrics and Thresholds: Be explicit about how you will evaluate the assumption. Define which metric(s) you will use and any relevant thresholds. For example, “increase engagement” or "increase in conversion" is not specific enough. How do you measure engagement? “Increase clicks on newsfeed stories by 10%” is more specific and sets a clear threshold.
Have a clear rationale for why your experiment design/data collected will impact your metric.
Decide upfront how you will act on the data you collect. Before you run your experiment, define what you will do if your assumption is supported, if it’s refuted, or in the case of a split test, if the results are flat.
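Deciding upfront can even mean writing the decision rule down as code before the experiment runs, so nobody renegotiates it afterwards. This is a sketch under my own assumptions; the thresholds and action strings are illustrative:

```python
# Sketch: a pre-registered decision rule for a split test on conversion lift.
# Threshold values and action labels are illustrative assumptions.
def decide(observed_lift: float, threshold: float = 0.50, flat_band: float = 0.05) -> str:
    """Map an observed relative lift to the action agreed before the test."""
    if observed_lift >= threshold:
        return "supported: double down on the target persona"
    if abs(observed_lift) <= flat_band:
        return "flat: rerun with a larger sample or a sharper stimulus"
    return "refuted: revisit the assumption before iterating"

decision = decide(0.60)   # e.g. the 60% lift from the example above
```

Because the rule is fixed in advance, the result maps to exactly one action, leaving no room for confirmation bias.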
We have more experiments lined up; I shall let you know how they go!
Some ideas are shared from here too