When asking questions is not enough –
How to use experimentation to validate new features (and entire products)

Getting ‘hard’ evidence that customers are into a particular feature (or your whole idea!) is not easy – which is why some forms of user testing fall short. There is a time and a place for understanding how users ‘feel’ about a particular feature (“do they like it? would they use it?”), but the tough reality is that what people say and what people do can be very different things.

When there is a lot riding on a product launch (maybe a whole department!), you want as much solid data as possible to feel confident that you and your team are on to a good thing. So how do you scratch beneath the surface and get the certainty you need that customers will actually use this product?

Enter the experimentation process

 

1. Identify your riskiest assumption

In a recent project building a digital insurance experience for a client, one of the primary assumptions was that people want a more simplified insurance policy. To test this, we had to narrow down what ‘simplification’ really means, and how much of it is the right amount. Often, what people say they want and what they really want are not the same thing.

If you truly want to test a new feature or product, you must first challenge the pre-existing beliefs you and your team hold about how your users will interact with your product, and what they value most. A good way to start is to bring your team together and list ALL of your assumptions – even the ones that feel obvious early on. What have you been assuming is true? What biases have led to your beliefs? Are there any conflicting assumptions within your team?

Once you have narrowed down your assumptions, rank them from most risky to least risky. A good visualisation here is a Jenga tower. The blocks are your assumptions, and the tower is the product itself. Which assumptions – if incorrect – could cause the whole ‘tower’ (i.e., the product) to fall over? These belong at the bottom – they are the foundations holding everything up, the riskiest blocks – and you will want to test them first!

 

2. Create an experiment and decide the success metrics

Now that you have a ranked list of assumptions, you can begin to turn them into hypotheses. This is much easier than it sounds, and it is an incredibly valuable way to get the whole team aligned on what you are trying to learn. It also allows you to systematically get answers to your biggest questions.

To do so, you’ll need to create a hypothesis that is discrete (the assumption is either true or not – there is no in-between), falsifiable (it can be proven false) and testable (we have measurable metrics to determine whether it is true). Use the following statements as a guide:

  • Discrete = “we believe that…”
  • Falsifiable = “to test this we will…”
  • Testable = “we are right if…”

In the example from earlier, when the client was trying to simplify an insurance offer, the experiment plan looked something like this:

  • We believe that… if customers have the option to ‘turn off’ parts of their policy in the app, they will.
  • To test this we will… create a ‘toggle’ function that will allow users to add or delete coverage features.
  • We are right if… at least 5 of the 8 customers who participate activate the toggle function.
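
If it helps to keep these plans consistent across experiments, you can capture the three statements in a small structure. Here is a minimal sketch in Python – the class and field names are our own invention, not part of any particular tool:

    from dataclasses import dataclass

    @dataclass
    class ExperimentPlan:
        """One hypothesis, phrased so it is discrete, falsifiable and testable."""
        we_believe_that: str        # discrete: the assumption, either true or false
        to_test_this_we_will: str   # falsifiable: the intervention we'll build
        we_are_right_if: str        # testable: the measurable success bar

    # Hypothetical example, worded after the insurance experiment above
    insurance_toggle = ExperimentPlan(
        we_believe_that="customers who can 'turn off' parts of their policy will do so",
        to_test_this_we_will="build a toggle that adds or deletes coverage features",
        we_are_right_if="at least 5 of 8 participants activate the toggle",
    )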

We love Alberto Savoia’s Maths of Product Success here. Check out his YouTube series – it’s well worth your time.

 

3. Run the experiment and analyse your results

The kind of experiment you are running will often determine how many people need to participate before you can trust the results. A bigger testing pool will give you more accurate results – but if you’ll only ever roll out new features to a select client base, you’ll want your test participants to meet certain criteria. Work out how many participants you need by factoring in the size of your total target population, how many of the people you invite will actually take part, and the margin of error you are willing to accept. If you need a hand working out your sample size, this handy guide can help.
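
If you would rather sanity-check the arithmetic yourself, here is a minimal sketch using Cochran’s sample-size formula with a finite-population correction – a standard approach, though the population, margin and response-rate figures below are purely illustrative:

    import math

    def sample_size(population: int, margin_of_error: float = 0.05,
                    confidence_z: float = 1.96, p: float = 0.5) -> int:
        """Cochran's formula with a finite-population correction.

        population      -- total size of the target audience (N)
        margin_of_error -- acceptable error, e.g. 0.05 for +/-5%
        confidence_z    -- z-score for the confidence level (1.96 ~ 95%)
        p               -- expected proportion; 0.5 is the most conservative
        """
        n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
        n = n0 / (1 + (n0 - 1) / population)
        return math.ceil(n)

    def invites_needed(required_sample: int, response_rate: float) -> int:
        """Gross up the sample for the share of invitees who actually take part."""
        return math.ceil(required_sample / response_rate)

    # Illustrative numbers: a client base of 2,000 customers, a +/-10% margin
    # of error at 95% confidence, and roughly 1 in 5 invitees participating.
    needed = sample_size(population=2000, margin_of_error=0.10)
    print(needed)                        # 92 participants
    print(invites_needed(needed, 0.20))  # 460 invitations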

Once your experiment is complete, you’ll want to analyse your results in a few different ways. Begin by reviewing the results of each test on its own. What is each participant telling you, and do you need to go deeper? In our insurance example, the results were pretty clear – 7 of the 8 participants turned off features they didn’t like, comfortably clearing the 5/8 success bar we had set. But what did this mean for those features? Were they simply not valued or desired by the company’s users? Was it still worth rolling them out and giving customers the option to use them or not? Or was there something else the company could offer instead? These questions are all about next steps. Which leads us to…

 

4. Decide on your next move

Once you have your validation, you can move on. Sometimes this means testing new assumptions that arose as a result of the experiment. Other times you’ll need to go back to your list of riskiest assumptions and run the same process again. It can take time, but once you have answers to all of your assumptions, you’ll be in a much better position to roll out a product with features that work.

If you’re feeling ready to set up your own experiments, you can DOWNLOAD our Testing Template.

  1. Click here to download the board to your own Mural space
  2. If you do not have a Mural account, you can create a FREE one here

And, if you want a hand implementing it, simply get in touch and let’s chat.
