Human dynamics of marketing measurement
Addressing the paradox of large marketing programs with limited experimentation
Marketing is expensive, and its returns are highly variable and often unprofitable. Yet many papers cite the financial cost of marketing experimentation as motivation for non-experimental methods, even papers that themselves used experimental methods.
Researchers at Facebook describe a “commonly held view that such experimentation is expensive and often unnecessary relative to alternative methods” [1]. Google adds that “barriers to adoption of randomized experiments include technical hurdles in implementation, lost opportunity costs from having a control group, costs of having a test group, and weak advertising effects which may require very large sample sizes” [2]. Uber opines that “Pausing the advertising strategy in the entire country is rarely feasible in ongoing advertising spend planning” [3].
I don’t share these strong reservations about marketing experiments. Perhaps those writers have a bit more (over)confidence in their ability to model with observational data. There are several reasons why experimentation appeals to me more.
These experiments typically involve a reduction in spending, so there is a clear mechanism for cost savings.
Many marketing programs are severely overgrown, due to measurement challenges and optimism. Marginal spending may be unprofitable, and even the entire program may be collectively unprofitable. Cutting spending could be a financial improvement. The opportunity cost might be an opportunity gain. In such cases, large sample sizes mean large savings.
Even when marketing truly is profitable, experiments can often help tune the program size and substantially improve profitability. The result could even be expansion of the program.
More generally, there is just so much to learn!
Please tell me if I’m wrong here. I’d like to hear some examples of companies that had large marketing programs, tested them with randomized controlled trials (RCTs), regretted their tests, and felt that (ex ante) they made poor decisions by having such experiments.
Let’s look at some of the strategic and organizational barriers to rigorous experimentation.
Incentives matter
I’m going to lead with the aspect that sounds like a principal-agent problem, before following with some reasons that showcase the nuanced concerns of everyone involved.
At anywhere larger than the smallest of companies, your external advertising platforms will be more than happy to talk through how their (respective) ad marketplace works. Those companies have massive organizations of people ready to talk to their customers. They can offer guidance and opinions. But note that the job of anyone you talk to is to sell you ads. Directly or indirectly, that is what all those roles come down to. They want you to advertise more broadly and to bid higher. That doesn’t mean they will lie to you, but the information that they will be most proactive in sharing, and the information given to them in the first place by their colleagues, has a certain slant to it.1 They would prefer that your advertising works well for you, but they are entirely fine if it works a little less well and you spend a little more. The platform as a whole will be limited in such respects by needing to not lose your business, but sometimes the structure of bonuses for individuals means they lack even that limited constraint.
You might advertise on a channel through a third-party company or with consultants, especially for channels that are more fragmented or less programmatic. Traditional TV might be an example of such channels. Third-party companies can specialize and know their market in excellent detail. These companies might be able to guide you to better advertising choices among the options they provide. But again, they want you to keep spending. Again they need to be useful enough that you don’t leave them for another service, but otherwise they profit more if you spend more.
Even within your own company there might be an incentive alignment problem. If you are involved in marketing, you are incentivized to find ways to make marketing work for the company. That’s good. But what you can measure might only be the appearance of successful marketing, based on the metrics that are calculable and available. Internal marketing teams are incentivized to use metrics that show high attributed profitability and user growth. Estimating incrementality more accurately, if that means finding that incrementality is lower than previously assumed, might mean spending should be cut and that both user growth and marketing profitability are lower than earlier measurements suggested. That is not a particularly exciting project for people on the team, especially compared to creative ideas for expanding marketing programs. This group still wants what’s best for the company, more so than third parties or advertising platforms, but it might naturally become relatively open-minded about ideas that expand its purview while holding ideas likely to do the opposite to a more rigorous standard.
Everyone can have good intentions at heart, but it can be tough to notice caveats when everyone involved has similar incentives to be optimistic about spending. Beliefs have a way of following our own best interest.2
Many advertising programs grow to be large, such that companies could be regretful after the fact, perhaps even regretting their decision-making given the limited information they had ex ante.
Changing conditions
Goals are something I have hinted at but thus far avoided in detail. Should a marketing program maximize profits? Or should it aim for maximum user growth given some opportunity cost relative to profit maximizing? Given user retention, over what time horizon should it optimize? With what time discounting?
It’s like asking what a company’s goals should be, and the answer might be multifaceted. It might even vary over time as competitive conditions change, the company has positive or negative balance sheet shocks, or, say, interest rates change.
Multifaceted or dynamically changing objectives can cause wild gyrations in marketing spend. On the positive side, that creates useful variation to help understand the impact of marketing. But that comes with the caveat that those gyrations might be correlated with changes in underlying demand, rendering the data less useful. Furthermore, substantial changes in marketing programs can render past randomized experiments obsolete. Combine internal dynamics with external market dynamics, and even with experimentation we might always be one step behind when it comes to knowing what we need to know.
Moving slowly and quickly
Many people in marketing want to understand their incrementality or (causal) lift, whether it is good news or bad (for them). A major cost can be the organizational overhead, and lower execution speed, of running randomized experiments.
Those experiments could require implementation from ad platforms, needing careful coordination and validation. While some platforms will support user-randomized experiments, others offer no such functionality or have such challenging attribution that they cannot. You’re not going to know which people drove past your billboards or watched your cable TV ad. On some of those platforms, changing marketing parameters could need a lot of lead time. Spend commitments might be locked-in well in advance. For those channels you might need geographic randomization, with tactically chosen regions that require careful design before the experiment and careful analysis after. That takes effort and time.
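As a sketch of what that design effort can look like, here is a minimal matched-pair geo assignment in Python: rank regions by historical volume, pair adjacent regions, and randomize treatment within each pair. All region names and volumes are made up for illustration; a real design would also weigh spillover between regions, seasonality, and pre-period trends.

```python
import random

# Hypothetical historical sales volume by region (illustrative numbers only).
regions = {
    "metro_a": 1200, "metro_b": 1150,
    "metro_c": 640,  "metro_d": 610,
    "metro_e": 300,  "metro_f": 290,
}

rng = random.Random(42)  # fixed seed so the assignment is reproducible

# Sort regions by volume and take adjacent regions as matched pairs,
# so each pair compares like with like.
ranked = sorted(regions, key=regions.get, reverse=True)
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

# Within each pair, randomly pick one region for the spend change.
assignment = {}
for pair in pairs:
    treated = rng.choice(pair)
    for region in pair:
        assignment[region] = "treatment" if region == treated else "control"

print(assignment)
```

The pairing is what buys precision here: outcomes are compared within pairs of similar regions, rather than across the whole noisy pool.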
Running experiments can also crowd out your capacity to run other experiments. You might have to prioritize them carefully.
Many channels and advertising campaigns have low volume, so it may take a long time to accumulate sufficient evidence for or against any hypothesis. It might even be infeasible on any reasonable timeline.
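To make “a long time” concrete, a rough power calculation shows how sample size blows up for small effects. The numbers below (a 1% baseline conversion rate, a 10% relative lift, roughly 5,000 exposed users per day) are hypothetical, and the formula is the standard normal approximation for a two-proportion test, not anything specific to the papers cited above.

```python
import math

def required_sample_per_arm(base_rate, rel_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate n per arm for a two-sided two-proportion z-test
    (z_alpha: two-sided 5% significance, z_beta: 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative: 1% baseline conversion, 10% relative lift.
n = required_sample_per_arm(0.01, 0.10)

# Assumed channel volume: ~5,000 eligible users per day, split across two arms.
days_needed = math.ceil(2 * n / 5_000)

print(f"{n:,} users per arm, roughly {days_needed} days of traffic")
```

On these assumptions the test needs on the order of 160,000 users per arm, which is a couple of months of traffic for a channel this size. Halve the lift and the required sample roughly quadruples, which is why small channels may never reach a verdict on a reasonable timeline.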
All of the above are ways in which measurement slows down our pace. This part of marketing feels slow, which stands in stark contrast to how quickly marketing has to move to adapt to changes in user demand, the competitive landscape for advertising, functionality changes at existing channels, and the addition of new channels.
Ultimately our marketing teams have to make decisions on limited information, artfully deciding when we need to gather more evidence, when we can move ahead at full speed given our best interpretation of existing evidence, and when to do anything in between. Sometimes the risk of waiting outweighs the risk of being wrong.
Bringing it all together
Collectively, this and my prior two posts convey the following narrative:
It is typically very important to measure the causal impact of advertising
This causal impact is hard to accurately measure without randomized experiments
Even with randomized experiments, we will still have big challenges in marketing measurement, due to incrementality being a function of campaign size, a limited data range, and multiple channels with overlapping effects
Beyond technical challenges, we also have strategic, operational, or organizational reasons that result in less experimentation than one might expect
I’ve been lucky to spend a brief time working in and studying this area, and I’ve increasingly come to appreciate its complexity. I hope to come back to this topic here soon enough.
[1] Gordon, B.R., Zettelmeyer, F., Bhargava, N., & Chapsky, D. (2018). A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook. Managerial Marketing eJournal.
[2] Chan, D., & Perry, M. (2017). Challenges and Opportunities in Media Mix Modeling.
[3] Barajas, J., Zidar, T., & Bay, M. (2020). Advertising Incrementality Measurement using Controlled Geo-Experiments: The Universal App Campaign Case Study.
[4] Chan, D., Yuan, Y., Koehler, J., & Kumar, D. (2011). Incremental Clicks. Journal of Advertising Research, 51, 643 - 647.
Google once released a study showing 89% incrementality of their advertising [4]. That was over 10 years ago, and I can’t find a similar study from them since. I don’t think they’re going to get a better number than that.
Which is a bigger topic than marketing, but I digress.