Helen Ross, 03 May 2017

You don’t need to be Einstein to experiment

“With how many things are we upon the brink of becoming acquainted, if cowardice or carelessness did not restrain our inquiries.” – Mary Shelley, Frankenstein


When people talk about experiments, I tend to think of a vaguely ominous abandoned house with a mad scientist laughing about how “It’s genius, GENIUS I say!”…or the Powerpuff Girls. Sadly, the kind of experiments I’m likely to do have no “Chemical X”, but they do still give me some pretty awesome results.


Let me take a step back though. What do I mean when I say experiments?


Maybe this?


[Image: experiment.jpeg]


Nope.


There’s no chemistry lab and no volcanoes. In this instance, an experiment is (in essence) simply a way of working out which option is best when I have more than one to choose from.


For example, I’m currently running an experiment on the location of a call to action on our site. There’s some conventional wisdom out there suggesting it must be in a particular spot to get a decent conversion rate. But that doesn’t account for the way different customer bases behave. So what do I do? Rather than just take what I’ve read as gospel, I let the call to action run in the recommended location for a few months. And now I’ve moved it. I’ll let this run for the next few months (to build up a decent-sized data set) and then see which of the two spots gives a better conversion rate.
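
Once both placements have had their run, one simple way to judge whether the gap in conversion rate is real or just noise is a two-proportion z-test. Here’s a minimal sketch in plain Python; the visitor and conversion numbers are made up purely for illustration, and you’d plug in whatever your own analytics reports.

```python
# A quick two-proportion z-test: given visitor and conversion counts for
# each call-to-action placement, how likely is it that the observed
# difference in conversion rate is just noise? Standard library only.
from math import sqrt, erfc

def compare_rates(conv_a, visits_a, conv_b, visits_b):
    rate_a = conv_a / visits_a
    rate_b = conv_b / visits_b
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_a - rate_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value, normal approximation
    return rate_a, rate_b, p_value

# Hypothetical numbers, purely for illustration.
rate_a, rate_b, p = compare_rates(conv_a=120, visits_a=4000,   # recommended spot
                                  conv_b=155, visits_b=4100)   # new spot
print(f"Recommended spot: {rate_a:.2%}  New spot: {rate_b:.2%}  p-value: {p:.3f}")
```

A small p-value (say, under 0.05) suggests the difference is unlikely to be chance alone; a large one means keep collecting data before declaring a winner.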


Now it may not be as exciting as fighting crime, but I still think it’s pretty rad. It may turn out that the research was right for our industry and the recommended location is better. But it may not. Instead of just hoping that it’s right, I’ll know for sure, based on relevant data.


Obviously, you may not have the time or resources to run experiments on every feature of your website. That’s ok. The aim of today’s blog is simply to help dispel the myth that experiments have to be these big productions that only people with dedicated resources can run.


Don’t get me wrong, there’s some amazing stuff you can do with A/B testing, user behaviour monitoring, and a whole swathe of tools designed to wring every drop of potential from every decision, but I say start small. Hell, start big if you really want to, but START.


Not sure which of the two banners your graphic designer created will do better? Don’t hold a round table to guess which one your customers will like. Try both for a week or two each, and measure the results with some basic Google Analytics click-through stats. Let THEM tell you.


Or go one better and get specific. Use layers to target a particular subset of customers with your messages and see which one performs better. See what impact it has on them specifically, rather than shouting into the void and just using analysis to refine your message. The more targeted you get, the more specific your results will be, and that can only be a good thing.


As long as you have a premise you want to test and some basic methods of measuring success/failure, I think you’ve got all you need to be on your way.


So... what are you waiting for?