The 3 Reasons Why Marketers Are Hesitant About Incrementality

A few weeks back, we had a great conversation about measurement with Andrew Covato, an ad and martech expert formerly of eBay, Google, Meta, and Netflix. He now runs his own measurement consultancy, Growth by Science, advising companies of all shapes and sizes, from start-ups to Fortune 500 enterprises.

We discussed how marketers can navigate some of the most common obstacles they may face in adopting an incremental approach to measurement. Here’s what we covered.  

Q: What does it mean to assess marketing with “incrementality”? 

Andrew: Let’s imagine a business wants to grow its sales. Incrementality refers to sales attained over and above some baseline level. In other words, these extra sales are “incremental” to the sales volume the business expects to receive anyway. If a business can prove that the incremental sales are due to some activity, like marketing, then it can calculate an ROI on that activity quite easily.
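As a back-of-the-envelope illustration of that arithmetic (all numbers invented for the example):

```python
# Toy incrementality calculation with hypothetical figures.
baseline_sales = 100_000   # sales the business expects with no campaign
observed_sales = 125_000   # sales observed while the campaign ran
ad_spend = 10_000

# Sales over and above the baseline are the "incremental" sales.
incremental_sales = observed_sales - baseline_sales   # 25,000

# ROI on the activity: incremental return net of cost, relative to cost.
roi = (incremental_sales - ad_spend) / ad_spend       # 1.5, i.e. 150%
```

The hard part, of course, is establishing that baseline credibly, which is what the rest of this conversation is about.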

However, specific techniques are required to understand the causal link between a business activity (like marketing) and an outcome metric (like sales). As we’ll discuss later, traditional ways of measuring digital media do not address incrementality, and incrementality can be the “red pill” that allows marketers to see the true value of their ads!


Q: Tell us a little more about these “traditional ways of measuring digital media”–how has measurement changed since the early days of online ads?

Andrew: Early on, advertisers needed some kind of rubric to assess the efficacy of their ads.  They were trying to answer the question: Are ads driving people to visit our websites and buy things?  

At this point, the majority of “old school” digital advertising consisted of search ads and maybe some banners rendered on a desktop browser.  This wasn’t a complex digital ecosystem of multiple devices, platforms and social networks.  

It would logically follow that if you interacted with an ad and then performed some action, the ad could reasonably be credited with causing the action.

I call this “post-exposure” measurement, which includes familiar practices like last-click, 7-day-click + 1-day-view (aka “7&1”) and multi-touch attribution (MTA).
 
As time went on, the ecosystem became more complex. People were exposed to more and more ads across the web, mobile devices became commonplace, and ad delivery and optimization grew far more sophisticated.

Unfortunately, the “post-exposure” attribution paradigm could not keep up, and the causal link between interacting with an ad and subsequently converting no longer held.


Q: So why isn’t everyone using incrementality-based measurement as a default?

Andrew: The purest way to directly measure the incremental impact of ads is with a designed experiment: the same test-vs-control logic used in clinical trials.

However, there are some legitimate (and not legitimate) challenges to deploying ground-truth experiments.  Let’s break this down into a few categories:  

It is a challenge to shoehorn an incremental approach into existing legacy systems 

At this point in time, there is nothing natively incremental about any platform. Basically all ad tech is built around post-exposure attribution, and incrementality has been treated as an afterthought.

That means many marketers have to retrofit existing infrastructure to make it work.

So imagine you’re a growth marketing organization and have built your entire structure, like bonuses, budgets, all of that, around last-click performance.

To wake up one morning and decide to up-end it all is not a trivial thing.  

Changing your measuring methodology is the equivalent of changing your accounting methodology for finance. It can have very broad implications on profitability, on valuation, on lots of different things.   

But ultimately, if you know that your accounting method is wrong, you do have to bite the bullet and change it.   

The challenge with measurement methodology is that there’s no regulatory body or driving force to mandate this, not that there should be! But as long as things appear to be in equilibrium and look good enough, people tend not to want to rock the boat.

So if you’ve got a legacy growth program working on last-click and all of a sudden you switch everything to incrementality, all your trends are going to have a discrete breaking point. Understandably, it’s really hard for some marketers to wrap their heads around deliberately breaking continuity with the trends they’ve been watching for years, but it’s the only way to move forward.

It will require a mindset change from deterministic to stochastic measurement. 

Incrementality and statistical methodologies have an inherent fuzziness and an inherent range associated with them; they aren’t point values. So it can be tricky to juxtapose that with a budget, right? How can you tell your CFO that your ROI was between 25% and 100%?

Marketers have to understand that the point of incrementality measurement is not necessarily to give you a point value, but rather an estimate that guides your investment strategy and channel selection so you can extract an optimal ROI.

There are inherent challenges associated with running a holdout.

Aside from the logistical challenges of setting up and managing tests (which can be mitigated with the right internal tooling), there are also issues of getting enough signal, and of the opportunity cost of withholding ads. Also, the market is super dynamic…so unless you’re constantly monitoring (e.g., always-on holdout), or testing regularly, results can be out-of-date.  
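To make the “getting enough signal” point (and the inherent range from the previous section) concrete, here is a minimal sketch, with entirely hypothetical numbers, of how a holdout’s lift and its uncertainty might be computed:

```python
import math

# Hypothetical holdout test: control users are withheld from ads, test users see them.
test_users, test_conversions = 100_000, 2_300
ctrl_users, ctrl_conversions = 100_000, 2_000

p_test = test_conversions / test_users   # 2.30% conversion rate with ads
p_ctrl = ctrl_conversions / ctrl_users   # 2.00% baseline conversion rate

# Absolute lift in conversion rate attributable to ads.
lift = p_test - p_ctrl

# 95% confidence interval on the difference of proportions (normal approximation).
se = math.sqrt(p_test * (1 - p_test) / test_users
               + p_ctrl * (1 - p_ctrl) / ctrl_users)
ci = (lift - 1.96 * se, lift + 1.96 * se)

# Incremental conversions implied for the treated population (~300 here).
incremental_conversions = lift * test_users
```

If the interval straddles zero, the test is underpowered: you need more users, a longer flight, or a bigger holdout, which is exactly the opportunity-cost trade-off described above.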


Q: Well…it sounds like you’ve made a great case for and against running designed experiments to measure incrementality!  What are the first few steps you suggest for advertisers?  

Andrew: It’s not easy to navigate! Designed experiments are integral to a proper ads measurement program. However, to get the best of all worlds (flexibility, understandability, more granular results) you need triangulation. This has become a hot buzzword in measurement, but let me explain how I think triangulation should work.

1. Start with designed experiments at the top level. This maximizes the chance of detecting a signal if the marketing program is contributing, or it will highlight that the program is not performing at all and should be razed. If possible, see whether incrementality can also be ascertained at the channel level for some channels.

2. Next, deploy a calibrated, agile econometric model like a marketing/media mix model (MMM). There are plenty of open-source options and fantastic tools out there in this vein. The MMM should use a Bayesian methodology that allows the incrementality results to inform the regression (note: MMM without calibration is not causal, but that’s a topic for another day).

Modern MMMs can be refreshed weekly or even more often, unlike the massive models of years past. This allows for more frequent budget redistribution and a more granular understanding of impact.
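To give a feel for what “incrementality results inform the regression” can mean, here is a toy conjugate normal-normal update with invented numbers, treating a lift test as the prior on one channel’s coefficient. A real MMM would do this inside a full Bayesian regression; this is only the one-parameter intuition:

```python
# Hypothetical iROAS estimates for a single channel.
prior_mean, prior_sd = 1.8, 0.3   # from a geo holdout experiment (the calibration)
mmm_mean, mmm_sd = 2.6, 0.6       # from an uncalibrated MMM fit

# Precision-weighted combination (conjugate normal-normal update).
prior_prec = 1 / prior_sd**2
mmm_prec = 1 / mmm_sd**2

post_prec = prior_prec + mmm_prec
post_mean = (prior_mean * prior_prec + mmm_mean * mmm_prec) / post_prec
post_sd = post_prec ** -0.5
# The posterior (~1.96) sits between the two estimates but is pulled
# toward the experiment, which carries more precision.
```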

3. Finally, use some post-exposure attribution data to apportion relative credit between sub-campaigns within a channel. While the absolute value of post-exposure data is almost always misleading, the relative credit can sometimes be useful if pegged to an incrementality-based approach at the top line.
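A minimal sketch of that last step, with hypothetical numbers: keep last-click’s relative split between sub-campaigns, but scale it so the total matches the experiment-measured incremental volume rather than last-click’s inflated absolute counts.

```python
# Experiment-measured incremental conversions for the whole channel (assumed).
incremental_conversions = 900

# Last-click credit per sub-campaign: useful relatively, misleading absolutely.
last_click = {"brand_search": 600, "retargeting": 900, "prospecting": 500}

total_last_click = sum(last_click.values())  # 2,000, far above the causal total

# Peg the relative shares to the incrementality top line.
calibrated = {
    camp: incremental_conversions * credit / total_last_click
    for camp, credit in last_click.items()
}
# calibrated preserves the split but sums to the causal total of 900
```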

What’s Next?  

  • Interested in Crealytics’ incrementality measurement platform? Learn more here or reach out via our contact form.