There are quite a few things to bear in mind when you’re designing an experiment, both digitally and in real life.
However, the key to making sure your experiment is working as hard as it should be is to only test one thing at a time. Otherwise there’s no way of isolating the change factor that had an impact on your results.
This might seem obvious, but it’s trickier than you think. Let me demonstrate, using the medium of beer.
In this experiment, the question I am trying to answer is: which beer is the tastiest?
At my disposal, I have a bottle of non-alcoholic Pale Ale, a bottle of non-alcoholic IPA, and a can of alcoholic IPA.
If I just gave all the beers to every participant and asked them which one they prefer, I would never know whether their choice was based on: 1/ the alcohol content, 2/ the type of beer, 3/ the fact that they come in a bottle or a can, or 4/ the brand.
The same can be said for any experiment we run here at Reason. When we were testing multiple variations of a landing page for a recent proposition, we changed only the tagline in our first experiment. The following week, we picked the tagline that performed best and changed only the imagery, and we kept iterating that way until we reached a winning combination. That helped us make sure our results were representative, and reflected only the one change we were measuring that week.
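As a rough sketch of that one-variable-at-a-time process (in Python, with entirely made-up variant names and clickthrough rates — we can’t share the real numbers), each round fixes the previous winner and varies a single new element:

```python
# Illustrative sketch: iterate one variable at a time.
# Round 1: vary only the tagline (hypothetical clickthrough rates).
taglines = {"tagline_a": 0.012, "tagline_b": 0.008}
best_tagline = max(taglines, key=taglines.get)

# Round 2: lock in the winning tagline, vary only the imagery.
imagery = {"photo": 0.016, "illustration": 0.011}
best_imagery = max(imagery, key=imagery.get)

print(f"Winning combination: {best_tagline} + {best_imagery}")
```

Because only one element changes per round, any movement in the metric can be attributed to that element alone.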
We can’t show you the proposition we’re working on just yet, so here’s an artistic rendition of what the experiments could have looked like if we were testing them on car insurance with over-simplified messaging. If you do want to know more about our work, head over here to see some of our case studies.
Here are some other considerations to bear in mind when designing your own experiment:
Make sure you know exactly who your target audience is. Whether that means segmenting customers by demographic, attitudinal or behavioural traits, make sure you have representation from each target segment in your sample group. It’s quite difficult to decide on a statistically significant sample size, especially on service design briefs where what you’re after is qualitative insight rather than quantitative. So focus on targeting the right audience to get as meaningful a result as you possibly can.
When you’re designing an experiment, decide as early as possible on what the key success metrics are, and set yourself a benchmark. For example, when we were designing a recent insurance proposition for a client, we agreed fairly early on that the success of our experiments would be defined by the number of clickthroughs from social campaigns and the conversion rate on site. We benchmarked these at 1% and 15% respectively. We reviewed our experiments every week, and killed off any iteration that didn’t hit our benchmarks.
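The weekly review described above is easy to make mechanical. Here’s a minimal sketch in Python, using the 1% clickthrough and 15% conversion benchmarks from our example — the variant names and traffic numbers are invented for illustration:

```python
# Hypothetical weekly benchmark review for experiment variants.
CLICKTHROUGH_BENCHMARK = 0.01   # 1% clickthrough from social campaigns
CONVERSION_BENCHMARK = 0.15     # 15% conversion on site

# Illustrative data for two variants (not real results).
variants = [
    {"name": "variant_a", "clicks": 120, "impressions": 10_000,
     "conversions": 20, "visits": 110},
    {"name": "variant_b", "clicks": 80, "impressions": 10_000,
     "conversions": 10, "visits": 75},
]

def review(variant):
    """Compute the metrics and decide whether the variant survives the week."""
    ctr = variant["clicks"] / variant["impressions"]
    conversion = variant["conversions"] / variant["visits"]
    keep = ctr >= CLICKTHROUGH_BENCHMARK and conversion >= CONVERSION_BENCHMARK
    return {"name": variant["name"], "ctr": ctr,
            "conversion": conversion, "keep": keep}

for result in map(review, variants):
    status = "keep" if result["keep"] else "kill"
    print(f'{result["name"]}: CTR {result["ctr"]:.1%}, '
          f'conversion {result["conversion"]:.1%} -> {status}')
```

Agreeing the benchmark up front is what makes the kill decision uncontroversial later: the variant either cleared the bar or it didn’t.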
To make sure that your results are meaningful, make sure you have a control group of customers. That group has to reflect the make-up of the participants taking part in the actual test. If you can’t recruit enough participants for both the control and test groups, then ask the test group to complete a diary study before you kick-start the experiment. And if you’re running an open digital experiment, make sure you keep track of your benchmark metrics so you can compare the impact of the new variables you’ve introduced to your experiment.
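The comparison itself can be as simple as measuring the lift of the test group over the control group. A minimal sketch, with invented participant counts:

```python
# Hypothetical control-vs-test comparison on a single metric (conversion).
def conversion_rate(conversions, participants):
    return conversions / participants

control_rate = conversion_rate(18, 120)   # illustrative control group numbers
test_rate = conversion_rate(33, 125)      # illustrative test group numbers

# Relative lift of the test group over the control baseline.
lift = (test_rate - control_rate) / control_rate
print(f"control {control_rate:.1%}, test {test_rate:.1%}, lift {lift:+.0%}")
```

With small, qualitative-leaning sample sizes like these, treat the lift as directional evidence rather than a statistically significant result.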