Back in January, I posted the findings from an experiment we ran to measure what happens when you increase the number of header bidders, and how that impacts CPM. We found that there is a strong correlation between increasing bidders and increased average CPM performance; we saw a 58% increase in CPMs when running 6 bidders versus none.
I received a number of comments on this post, and the topic of latency came up frequently. Publishers have a vested interest in finding the right balance between revenue and site performance, as latency impacts user experience, Google organic ranking, ad viewability and preferred timing for high-paying direct campaigns. So we looked at the metrics from the original experiment and pulled some new data to see whether header bidders impact latency, and by how much.
Recap: Why Use Header Bidders?
Publishers have many options for filling impressions, but they have historically relied on waterfalls to ensure that their inventory gets filled. This puts the publisher in the difficult position of having to prioritize demand partners, and leaves buyers frustrated because, unless they're in the first position, they get the leftovers after the top partners cherry-pick the best impressions. Partners want the benefits of First Look, and publishers want an efficient way to find the highest bidder for their inventory right off the bat. Ignoring latency, header bidding seems like the perfect solution for both publishers and buyers.
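The difference between the two setups can be sketched in a few lines of Python. The partner names, bids, and floors below are made up for illustration; the point is the selection logic, not the numbers:

```python
# Hypothetical demand partners, listed in waterfall priority order.
PARTNERS = ["partner_a", "partner_b", "partner_c"]

def waterfall_fill(bids, floors):
    """Call partners in priority order; the first whose bid clears its
    negotiated floor wins, even if a later partner would have paid more."""
    for partner in PARTNERS:
        if bids[partner] >= floors[partner]:
            return partner, bids[partner]
    return None, 0.0

def header_bidding_fill(bids):
    """Ask every partner simultaneously and take the single highest bid."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

bids = {"partner_a": 1.10, "partner_b": 2.40, "partner_c": 1.75}
floors = {"partner_a": 1.00, "partner_b": 2.00, "partner_c": 1.50}

# The waterfall fills with partner_a at $1.10 because it's first in line,
# while the header auction finds partner_b's $2.40 bid.
print(waterfall_fill(bids, floors))
print(header_bidding_fill(bids))
```

This is why buyers outside the first waterfall position feel they get leftovers: in the sketch, partner_b never even sees the impression.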
The Potential Downside of Header Bidding
Nothing in life (or ad ops) is free, and while our experiment showed a strong positive correlation between the number of header bidders and CPM, this comes at a cost. Running an in-browser auction with multiple bidders adds latency to a site and impacts other important KPIs. The real question is whether that latency is significant enough to hurt things like UX and Google rankings. It's important for publishers to find the right balance between site performance and monetization.
Let's look at some of the data we've collected on latency while running header bidders.
Some caveats regarding our latency data
- This data measures header bidder performance on a single site. Results will vary widely by site in terms of partner performance.
- Our data was collected during specific time periods, across all users, devices, and geographies, but we've seen that partners' performance changes both over time and across these other dimensions.
- We work with a number of header bidder partners (12 integrated with more on the way), and we are constantly running tests to measure performance of individual partners and combinations of partners. In many instances, the correct set of partners varies, sometimes wildly.
- We run our own non-trivial tech stack that does a lot of learning, adjusting, and optimizing, and all of the results below include this framework. Relative results are valid, but your experience may well differ from ours, as we actively work to mitigate the negative consequences of header bidding.
How Adding Header Bidding Affects Key Metrics
Let's first look at the latency data for our experiment with 6 header bidders. This graph shows the latency associated with adding bidders. We observed a 29% increase in average load time, which is a significant increase, and may scare publishers away from header bidding.
We continue to run header bidders on our site because of the obvious benefits and our ability to mitigate the negative consequences, even with as many integrations as we have.
As mentioned in the caveats, our stack continues to learn and optimize. Below, you can see the data on load time as we move from 4 to 9 bidders, and also the associated CPMs.
We are constantly changing the combination of bidders to test how individual partners perform, but adding bidders still consistently increases CPMs (a 47% increase from 4 bidders to 9). You'll notice that the page load time only increases by 13%. For many publishers, this increased latency is an acceptable tradeoff for a significant increase in CPMs.
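To make that tradeoff concrete, here's the arithmetic on hypothetical baselines. Only the 47% and 13% deltas come from our data; the dollar and millisecond starting points below are invented for illustration:

```python
# Hypothetical baseline figures at 4 bidders; only the percentage
# deltas (47% CPM lift, 13% slower load) come from our data.
base_cpm = 2.00        # dollars per 1,000 impressions at 4 bidders
base_load_ms = 2000    # page load time in ms at 4 bidders

cpm_9 = base_cpm * 1.47        # 47% CPM lift moving from 4 to 9 bidders
load_9 = base_load_ms * 1.13   # 13% slower load over the same range

print(f"CPM:  ${base_cpm:.2f} -> ${cpm_9:.2f}")
print(f"Load: {base_load_ms}ms -> {load_9:.0f}ms")
```

On these assumed baselines, the publisher trades roughly a quarter-second of load time for nearly a dollar of incremental CPM; whether that's acceptable depends entirely on the site.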
A further caveat is that we don't always run all 9 bidders, which helps explain why we continue to drive CPM increases without drastically increasing latency.
Next, let's look at how adding bidders impacts viewability. Caveat: we use a more stringent definition of viewability than the IAB's - we count an impression as viewable only if 80% of its pixels are in the viewport when the page loads.
First, we'll look at how adding bidders impacted viewability during our first header bidder test. Using 0 bidders as a baseline, the graph below shows the percentage change in viewability as bidders are added. There are some fluctuations as we add more, and at 6 bidders, viewability decreases by 6.86% - not a massive penalty. Again, caveats are necessary: we don't use fixed timeouts, which helps minimize header bidding's negative impact on latency and, in this case, viewability.
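For the curious, one generic way to avoid a fixed timeout is to derive each partner's cutoff from its own recent response times. This sketch is purely illustrative of the idea - the percentile, floor, and cap are invented, and it is not our production logic:

```python
def adaptive_timeout(recent_latencies_ms, percentile=0.95,
                     floor_ms=150, cap_ms=800):
    """Pick a per-partner timeout near the given percentile of that
    partner's recent response times, clamped to a sane range, instead
    of applying one fixed timeout to every partner."""
    ordered = sorted(recent_latencies_ms)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return max(floor_ms, min(cap_ms, ordered[idx]))

# A partner whose recent responses cluster in the low hundreds of ms
# gets a timeout sized to its own behaviour rather than a global value.
samples = [120, 140, 180, 200, 220, 260, 300, 340, 420, 520]
print(adaptive_timeout(samples))  # 520
```

The effect is that consistently fast partners aren't padded with unnecessary wait time, while a slow partner's bad days don't drag out the whole auction.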
When we look at more recent viewability data plotted against CPMs, we see that while CPMs consistently increase as we add bidders, the impact on viewability is far less than in the initial test. We have been running at least four bidders on our site and, using that as the baseline, we see that adding bidders decreases viewability by between 0.56% and 2.38%.
As our machine learns more about the performance of individual bidders and different combinations, the CPMs steadily increase while the variance in viewability is much lower compared to the first experiment.
Interestingly enough, despite some increases in load time and decreases in viewability, we saw almost no impact on bounce rates as we added bidders. In the original test, we observed a 14% bounce rate at 0 bidders and 15% for 1-6 bidders. Even with new data, we can still confidently say that adding bidders has almost no impact on bounce rate.
This is good news for publishers who are concerned that increasing bidders is slowing down their site to the point of driving users away. A relatively consistent bounce rate indicates that the latency increase isn't significantly impacting the user experience, and thus shouldn't impact the SERPs, which bias away from unhelpful (high-bounce) sites.
What this means for publishers
Header bidding creates lift, but not without also creating latency. Every publisher needs to determine if the tradeoff is worthwhile for their business model. Also, not all sites are created equal - if a publisher is already experiencing latency issues due to other factors, adding header bidders may not bring enough lift to offset the increased load times and decreased viewability.
As publishers who own a data-driven ad engine, we're in a unique position to experiment on our own sites to collect data, and we share our learnings with other publishers. Running multiple bidders on our sites works for us, but your mileage may vary.
Finally, if you have a very high sell-through rate, you probably shouldn't be running header bidders (or if you do, you should run them in a way that does not impact the load time of your direct campaigns). This would be non-standard, but I'd be happy to share with anyone interested in doing this, as we're currently working on some tests and would welcome the discussion.
At Sortable, we take a very data-focused approach to ad operations. We have 12 engineers who spend all day focused on revenue optimization and a team of ad ops specialists who work with them. We started off as publishers, and we still are. With 20 of our own websites and apps, we now exclusively manage over 2 billion impressions per month. We think big data, machine learning and automation are the best way to drive programmatic improvements, and our goal is to leverage our learnings and provide the best possible outcomes. If you don't spend your days dreaming of multi-variate testing, machine learning algorithms, mediating between 20-30 demand partners, managing creative quality across networks and other such fussy work, we'd love to talk about how we can make your life easier.