June 5, 2020

7 tips for starting Conversion Rate Optimisation (CRO)

Conversion Rate Optimisation (CRO) sounds complicated, but getting started with it on your website doesn't have to be, and the value is immense. Essentially, CRO is the process of making it easier for a customer to take a desired action on your site. This can be a big action like buying a product, or a small action like clicking through to the next page in the funnel.

For the purpose of clarity, in this article we are referring to conducting on-site tests like A/B or multivariate tests. This certainly isn't the only element of CRO, but it is the most impactful for those starting out.

There are a heap of benefits to implementing a CRO strategy, or even just dipping a toe in and starting with some tests. To help you do that, we have put together a few tips to get an understanding of how to start optimising your site.

Focus on customer pain points first

While there is a (strong) temptation to start testing things that you, your colleagues or, more commonly, your managers want to change on the site, the reality is that they are not the customers and therefore don't represent their needs accurately.

That is not to say ignore internal opinions or ideas - quite the opposite, in fact. Encourage ideas on how to improve the website, but narrow the focus so each idea addresses a specific customer pain point.

You will make the most gains when optimising your website by identifying the biggest customer pain points and developing and testing ideas on how to solve them. So how do we identify these pain points?

Do your research

This is by far the most important part of any CRO work. We can't guess what the customer pain points might be; we need to use a combination of quantitative (larger volume, zoomed-out insights) and qualitative (small sample, but very specific) data sources to develop insights into how customers behave, what they want to achieve and where they are struggling.

From a quantitative perspective, the main tool we use is website analytics, most commonly Google Analytics. If correct tracking is set up (often a big if), you can see the actions most commonly taken by users, how they navigate through the site and whether they converted or abandoned. You can also segment behaviour by dimensions like device category - is mobile performing far worse than desktop?

This information is great for telling you the what, but to understand the why, we need to go deeper. This is where qualitative data like customer surveys, polls and other more detailed information lets you build a clearer picture of the issues a customer might have. Tools like Hotjar, with its heatmaps, session recordings, form field analysis and polls, cross between qualitative and quantitative insights and can join the two together for a fuller picture.
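To make the quantitative side concrete, here is a minimal Python sketch of the kind of device-category comparison described above. It assumes a hypothetical session-level CSV export; the file name and column names are placeholders, not a specific analytics API:

```python
import pandas as pd

# Hypothetical export from your analytics tool: one row per session,
# with a device_category column and a converted flag (0/1).
sessions = pd.read_csv("sessions_export.csv")

# Conversion rate and session volume per device category.
by_device = sessions.groupby("device_category")["converted"].agg(
    sessions="count", conversions="sum", conversion_rate="mean"
)

print(by_device.sort_values("conversion_rate"))
# A much lower mobile rate than desktop would flag a device-specific pain point.
```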

Have a clear objective

So you have done some research and identified a number of pain points that are preventing the customer from converting - what's next? First of all, you need to sharpen the focus of what you are trying to achieve.

Your main conversion might be a sale or a lead, but is that the most appropriate goal for the pain point you are trying to resolve? It obviously needs to be the ultimate goal, but often you will need to break it down into micro goals that represent a user's progress along the on-site funnel. For an ecommerce site, that might be a click from the home page to a category or product page. For lead generation, it might be watching a video or conducting a site search that we know makes users more likely to convert.

If you are using a micro goal as the main objective for your test, make sure the main conversion is tracked as the secondary objective. While in theory an improvement in the micro goal will have a positive impact on the final conversion rate, there can sometimes be unintended negative effects.
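As a sketch of how a micro goal and the main conversion can be read side by side, here is a minimal Python example; the visitor, click and sale counts are made up:

```python
# Hypothetical per-variant results: micro goal (CTA clicks) as the
# primary objective, final conversion (sales) tracked as secondary.
results = {
    "control": {"visitors": 5000, "cta_clicks": 600, "sales": 110},
    "variant": {"visitors": 5000, "cta_clicks": 720, "sales": 102},
}

for name, r in results.items():
    micro_rate = r["cta_clicks"] / r["visitors"]   # primary: micro goal
    macro_rate = r["sales"] / r["visitors"]        # secondary: final conversion
    print(f"{name}: CTA click rate {micro_rate:.1%}, sales rate {macro_rate:.1%}")

# Here the variant lifts the micro goal but the sales rate dips -
# exactly the unintended negative effect the secondary objective catches.
```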

Develop a hypothesis

When testing, we try to be as scientific and consistent as possible. Keeping with that theme, it is best practice to write a hypothesis for what you think (hope) will happen in the test.

So if our research discovered a low click-through rate on mobile from the main call to action on the home page, our hypothesis might look like this:

“Moving the button further up the home page on mobile will increase its visibility and, in turn, increase clicks on the call-to-action button”

So you will notice a few things here:

  • This is a statement, not a question - we are looking to either prove or disprove the statement with our test
  • It explains the action taken (moving the button up) and the hoped-for impact (increased visibility)
  • It refers to the goal specifically (increase clicks on the call-to-action button)

Writing a hypothesis in this manner makes it easy to track the work you are doing, the intended effect and the goal.

Validate your test

Before running any test, we need to make sure that we will be able to get a result and that the result can be implemented. There are two parts to that:

Will we get a result?

Depending on what you are testing, the volume of traffic and the number of variations, the time taken to get a result can vary wildly. Often we will find big customer pain points, but the traffic is so low that a test might take 12 months to produce a result, which is useless.

We use the VWO A/B Test Duration Calculator to estimate the length of a test. Simply plug in:

  • The current conversion rate
  • Estimated improvement (often an educated guess based on the significance of the change and how much of a pain point it addresses)
  • Number of variations - the more variations you test, the slower it will go
  • Daily visitors who will see the test - look for unique visits on the test page, but make sure to take device splits into account if it is a device-specific test
  • Percentage of users in the test - if it's a high-risk test, you will likely include only a small percentage of traffic

From this you will get an estimated test duration. The shorter the better; we generally try to avoid running tests that will take longer than 4 weeks to complete. If the estimate is much longer than that, reconsider the impact the change might have on users and, if it is still deemed important, just update the site without a test.
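If you want a rough sanity check without the calculator, the standard two-proportion sample-size approximation gives a similar estimate. The sketch below assumes 95% confidence and 80% power - common defaults for such calculators, not VWO's exact formula:

```python
import math

def estimate_test_duration(baseline_rate, relative_lift, variations,
                           daily_visitors, traffic_share=1.0):
    """Rough days-to-result estimate for an A/B/n test.

    Standard two-proportion sample-size approximation at
    95% confidence (z = 1.96) and 80% power (z = 0.84).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    effect = abs(p2 - p1)
    pooled = (p1 + p2) / 2

    # Visitors needed per variation to detect the effect.
    n_per_variation = (1.96 + 0.84) ** 2 * 2 * pooled * (1 - pooled) / effect ** 2

    # Total sample across all variations (control included), divided by
    # the daily traffic actually entering the test.
    total_needed = n_per_variation * variations
    return math.ceil(total_needed / (daily_visitors * traffic_share))

# Example: 3% baseline conversion rate, hoping for a 15% relative lift,
# control + 1 variant, 1,000 daily visitors, 100% of traffic in the test.
print(estimate_test_duration(0.03, 0.15, 2, 1000))  # ~49 days - well over 4 weeks
```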

Can we implement it?

The second part is: how difficult is it to change the thing we are testing? If you are testing text on a website, or modifying the layout in simple ways, you are generally fine. If, however, the hypothesis is that "free shipping will increase sales", then that is a far bigger business conversation, one that impacts profit margins, order processing, logistics and more. So consider the complete impact of implementing the change and decide whether the return is worth it.

Be wary of small sample sizes

Once the test is launched, there will often be significant-looking results early on, and it is tempting to declare a winner before the projected end date has been reached. While there is certainly an opportunity cost in leaving tests running too long, the bigger risk lies in hastily making decisions based on small sample sizes. The smaller the sample, the more extreme the early results can be, which can lead to false positives.

Be patient and wait for statistical significance. Virtually all the platforms you will use for running tests (Google Optimize, Optimizely, VWO) have an in-built statistical significance calculator and will declare a winner at 90-95% certainty. If you have validated the test, there is no need to jump the gun - take your time and trust the process.
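If you want to sanity-check a platform's readout, the usual approach is a two-proportion z-test. Here is a minimal sketch using statsmodels, with made-up conversion counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitors for control vs variant.
conversions = [150, 180]
visitors = [5000, 5000]

# Two-sided two-proportion z-test.
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

# At 95% certainty we only call a winner when p < 0.05; with a small
# sample, an early "winner" will often fail this check.
if p_value < 0.05:
    print("Statistically significant at 95% - safe to call a result.")
else:
    print("Not significant yet - keep the test running.")
```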

Learn from wins and losses

Obviously, in a perfect world we want every test to be a winner and have a huge impact on the conversion rate, but the reality is far from it. Even the biggest companies with the most mature testing strategies and infrastructure will often only see 30-40% of tests succeed and get implemented.

These companies don't limit the scope of testing to improving metrics - the reason they test is to learn. A failed or disproven hypothesis is just as valuable in learning what does not work as a successful one is in learning what does. Knowing what doesn't work means you won't waste time repeating the same mistakes going forward, because they have been tested with real customers on the site.

The challenge is that an ad-hoc approach to learning is hard - it's fine when a test is successful and you implement it, but if it fails, is there a process to record the results, communicate with relevant stakeholders and build on that learning?

Needless to say, implementing and executing a conversion rate optimisation strategy for your website goes far beyond these 7 top-level tips, but hopefully using this as a framework will help you start to picture what the process might look like for your business.