
How to do a Split Test with Sortable

You’ve probably heard the phrase “split testing” being thrown around if you own a website or are involved in ecommerce. If you’re unfamiliar, split testing is a fairly simple concept: you compare two versions of a website or webpage to determine which one performs better.

In a split test, website traffic is distributed between two different versions of a webpage — the original (A) and a variation (B). The versions can differ in design, content structure, or individual page elements. By split testing, you can observe how each traffic group responds to the version it’s exposed to, showing you which version offers the better conversion rate and the greater opportunity for growth.

The term “split testing” is often used interchangeably with “A/B testing,” but there is a difference between the two:

  • A/B testing refers to the two webpage or website variations (A and B) that are competing against each other.
  • Split testing refers to the way traffic is split evenly between the existing variations.

As previously mentioned, a split test can compare a change to a single website element (such as a different image, header, or call to action) or two completely different styles of design and web content. Visitors are split into groups without their knowledge: half see the original version while the other half see the new one. Keeping users unaware that a test is running is what keeps the results unbiased.

What are the Benefits?

Split testing ensures that decisions are made based on data — no guesswork is involved. The benefits of split testing include:

  1. Improve content engagement – Conducting a split test helps you evaluate every aspect of the content you wish to create or already have on your site. As you devise variations for your split tests, you also build a list of potential improvements, and the act of running the tests makes your final version better for your customers.
  2. Reduce bounce rates – If you notice website users just “bounce” from your site, split testing is a great way to find out how to optimize your website. Whether it’s tweaking small elements (an image, a header, etc.) or the overall layout and design, split testing can help you find a winning combination of elements that keeps visitors on your site long enough to get value from your content.
  3. Increase conversion rates – Split testing is one of the easiest, most effective ways of creating content that converts more visitors into buyers and retains existing users. When you take the time to carefully craft two versions of your web page, it’s relatively easy to see what works and what doesn’t. Testing two versions does take a little longer but — when done properly — will help you convert better.
  4. Higher values – If you offer a product or service, split testing is a great way to convert more site visitors into buyers. Run tests continuously to refine your web page and increase conversions, which is especially helpful for higher-value products or services.
  5. Reduce risks – Making major revisions to your site can result in considerable costs or strategy changes. Split testing lets you examine user behavior on your site and test the waters before committing to major decisions, helping you avoid unnecessary risk. This is especially true if you plan a significant change like a new Ad Ops partner who plays a large role in your website monetization.

It’s important to ensure that there’s a level playing field when conducting a split test, especially between two Ad Ops providers. Some Ad Ops providers have been known to take control of site traffic to skew the data in their favour. They use a similar tactic when it comes to page load speed tools.

Conducting a Split Test with Sortable

If you’re curious about how Sortable stacks up against your existing ad partner, you can ask us to conduct a split test. We also do this for existing Sortable publishers who may have been offered a promotion by another Ad Ops partner. We’re happy to work with you to show how our ad stack performs against others.

We conduct split tests using a dice roll: a piece of code that randomly assigns each user session a cookie to split traffic between two monetization solutions. For example, if you want to compare Sortable’s solution to your existing Prebid solution, we would split the user sessions between Sortable and Prebid and measure the performance of each.
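To make the mechanics concrete, here is a minimal sketch of what a session-level dice roll could look like in TypeScript. Everything here — the cookie name, the helper names, the 50/50 split, and the placeholder loaders — is an illustrative assumption, not Sortable’s actual integration code:

```ts
// Minimal sketch of a session-level dice roll (illustrative only).
// Assumes a browser environment where document.cookie is available.

const SPLIT_COOKIE = "ad_split_group"; // hypothetical cookie name
const SORTABLE_SHARE = 0.5;            // 50/50 split for this example

type SplitGroup = "sortable" | "prebid";

function getCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function assignSplitGroup(): SplitGroup {
  // Reuse an existing assignment so the whole session stays in one group.
  const existing = getCookie(SPLIT_COOKIE);
  if (existing === "sortable" || existing === "prebid") return existing;

  // The "dice roll": a single random draw decides the group.
  const group: SplitGroup =
    Math.random() < SORTABLE_SHARE ? "sortable" : "prebid";

  // Expire after 24 hours so traffic redistributes when the split changes.
  document.cookie = `${SPLIT_COOKIE}=${group}; max-age=${60 * 60 * 24}; path=/`;
  return group;
}

// Load whichever ad stack the roll selected (placeholders, not real tags).
if (assignSplitGroup() === "sortable") {
  // loadSortableStack();
} else {
  // loadPrebidStack();
}
```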

Sortable can conduct dice rolls at the page level or the session level. Page-level dice rolls are useful if you have a high page-view-to-session ratio, while session-level dice rolls are typically preferred for most integrations. Sortable sends you code instructions to split the traffic by session or by page, and the percentage of the traffic split is controlled by you. We provide fair, vendor-agnostic code that you place on-page and can adjust whenever you want, without Sortable’s intervention.
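Building on the sketch above, the difference between the two levels could look something like this: a page-level roll simply re-rolls on every page view, while a session-level roll reuses the cookie-backed assignment. The `MODE` and `SORTABLE_SHARE` knobs are hypothetical stand-ins for the publisher-controlled settings described above:

```ts
type SplitGroup = "sortable" | "prebid";
declare function assignSplitGroup(): SplitGroup; // from the previous sketch

// Hypothetical knobs a publisher could adjust without Sortable's intervention.
const SORTABLE_SHARE = 0.3;                 // e.g. start at a 30% split
const MODE: "page" | "session" = "session"; // choose the dice-roll level

function diceRoll(): SplitGroup {
  if (MODE === "page") {
    // Page-level: re-roll on every page view; no cookie is consulted,
    // so a single session can see both ad stacks across different pages.
    return Math.random() < SORTABLE_SHARE ? "sortable" : "prebid";
  }
  // Session-level: reuse the cookie-backed assignment so the whole
  // session stays with one stack.
  return assignSplitGroup();
}
```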

Regardless of whether it’s a page-level or session-level dice roll, it’s very important that the cookie expires every 24 hours so that, as the traffic split changes (scaling up or down), sessions redistribute to the expected percentages. For example, if the cookie instead expired every six (6) months and we scaled from 30% to 50%, all the cookies set at 30% would be retained and only new traffic would receive the 50% split. The total traffic split would then land below 50% because the cookies set during the original 70/30 split had not been updated.
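To see why a long-lived cookie dilutes the split, here is the arithmetic behind that example with some assumed numbers (the 40% new-session fraction is purely illustrative):

```ts
// Illustrative numbers only: why long-lived cookies dilute a scaled-up split.
const oldShare = 0.30;           // split in effect when the stale cookies were set
const newShare = 0.50;           // split after scaling up
const newSessionFraction = 0.40; // assumed fraction of traffic with no prior cookie

// Returning sessions keep their old assignment; only new sessions re-roll.
const blended = newSessionFraction * newShare + (1 - newSessionFraction) * oldShare;
console.log(blended); // 0.38 — well below the intended 50% until the old cookies expire
```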

Sortable monitors the split test and, after a couple of days, reviews the data with you to help you compare our performance with the solution you’re testing us against. If you’re interested in split testing with Sortable, please contact us at team@sortable.com.