My apologies to the Freakonomics Radio podcast for borrowing their headline. But it summarizes perfectly a question on the minds of businesspeople for more than a century.
The question is especially relevant for the digital advertising business, which promised to be more measurable, effective, and efficient than traditional print and broadcast media.
As a publisher, I found it hard to quantify for clients just how effective advertising was in our newspaper and on our website. So I did what all publishers do: I sidestepped the question and talked about our audience instead. I told them how our readers mirrored their target customers. I showed them charts of our readers’ job titles, company size, personal income, education, decision-making power, and so forth.
The benefit of advertising with us was building their brand, I would tell them. Our reputation for trustworthiness would rub off on their brand if they advertised. And besides, I would say, “Your competitors are all advertising with us. You need to be part of the conversation.”
Digital advertising promised to change all that vague talk. Now advertisers could use technology to track how many people clicked on their ads and bought their products. The technology was useful for measuring the effectiveness of ads seeking direct response (“buy today and save 30%”).
However, it wasn’t so good at tracking the impact of long-term branding messages (“your safety is our priority”). The truth was, and still is, that causation is very hard to demonstrate. Recently, however, Stephen J. Dubner of Freakonomics applied his usual skepticism to the question. He sought out experts who used creative methods to try to measure advertising’s effectiveness.
What follows includes a summary of the Freakonomics episode “Does Advertising Actually Work? (Part 2: Digital).”
As a negotiating tactic with the online search engine Bing, eBay turned off all keyword advertising; in effect, it stopped paying Bing to display ads when users searched for the keyword “eBay.” What happened? eBay used this “natural experiment” to analyze the effectiveness of its online marketing. Despite running no keyword ads for its brand, eBay saw no decline in clicks or purchases.
Steve Tadelis, the Berkeley professor who led the research for eBay, then decided to take the experiment further. eBay withdrew ads for non-branded keywords such as “guitar” or “boots” or “picture frame.” Tadelis told Freakonomics, “This was an extremely blunt experiment where we’re saying, ‘What would happen if we didn’t advertise at all?’ And to our surprise, the impact on average was pretty much zero.”
As a result, eBay’s CEO cut the company’s paid-search marketing budget by $100 million a year. Surprisingly, the rest of the industry did not rush to replicate eBay’s no-ads experiment.
Freakonomics co-author Steve Levitt offered this explanation for the industry’s indifference to the data. “If you think about it, no chief marketing officer is ever going to say, ‘Hey, I don’t know, maybe ads don’t work. Let’s just not do them and see what happens.’ So, don’t get me wrong. I’m not implying that advertising doesn’t work. I’m implying that we don’t have a very good idea about how well it works.”
The podcast also included an extensive interview with Tim Hwang, Google’s former global head of public policy for artificial intelligence and machine learning. Hwang questioned the effectiveness of digital ads. He cited research showing that 60% of internet ads are never even seen, because they load below the visible portion of the screen or are otherwise hidden, yet advertisers are billed for them. In addition, he said, click-through rates are well below 1%, and many users block ads altogether.
Hwang is the author of Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet. Digital advertising is grossly overvalued, he believes. Its effectiveness is hard to measure, and the platforms that do measure it, mainly Google and Facebook, keep their ad-auction systems opaque, run by proprietary algorithms.
These algorithms don’t always serve advertisers well, Hwang pointed out. Some major brands have been surprised to see their messages displayed on websites with racist, sexist, or other objectionable content. In response, many have boycotted Google and Facebook. The boycotts are meant to press the platforms to improve their protection of advertisers — brand safety, in other words.
Beyond all that, data shows that a significant share of digital advertising traffic is fraudulent, with clicks and views generated by bots, not humans.
The point of all this research is not to say that advertising doesn’t work. For example, my own loyalty to Apple products, such as the Mac PowerBook I am writing on, along with my iPhone and iPad, has a lot to do with their advertising. I pay a premium for these products because I have been persuaded that they are better. My own purchase history offers more data points.
Decades ago I bought a Volkswagen Rabbit when they were an expensive novelty, largely on the basis of VW’s years of clever ads about their superior German engineering. And my later purchases of Japanese cars owed a great deal to their ads about ergonomics, engineering, and reliability. Also, I was disappointed with the quality of American cars.
So advertising does work. However, the improved software and hardware available to researchers have stripped away much of the smoke and mirrors that publishers like me once used to avoid answering the question.
The new research provides better tools for doing a cost-benefit analysis of advertising expenditures. It also puts more pressure on the biggest purveyors of advertising, Google and Facebook, to be more transparent about their algorithms. Because if those algorithms are as powerful as the platforms claim, the implications extend beyond business to society, democracy, and politics. More on that in future posts.
More From The Fix: How to make native advertising work for you