There was a Freakonomics podcast recently about advertising (online and traditional).
No one can actually prove it has any ROI at all. No one is willing to run the experiments necessary. In the few cases of natural experiments, where ads got turned off for some people by accident, there was no change in buying behavior.
> No one is willing to run the experiments necessary
The people who would have the power to run this experiment have entire careers that depend on things staying as-is. Running the experiment carries a significant risk of exposing that the advertising operations they're responsible for provide much less ROI than they pretend.
The unwillingness of anyone to run such an experiment is already an answer. Why wouldn't someone jump at an opportunity to prove the thing/service they provide actually works, unless they were unsure about it themselves?
A small tech team investigated fraud on our platform and developed a system that was pretty robust at detecting it and potentially shutting it down. But literally nobody was interested - even the people advertising don't want to know.
The people spending money are typically networks, media buyers, ad agencies, etc, far removed from the actual brand.
There are so many parties who want a slice of the brand's cash that they are all long past caring about whether the ad is being viewed by a human or not.
I worked in adtech. I built a simple ML system that detected signals of fraud on our network. When I reported my findings execs told me the group that was facilitating the fraud was our only profitable division. Soon after, our company went bankrupt. My final paycheck was months late.
IIRC our system wasn't even fancy ML - just finding known user IDs with identical timestamps over many simultaneous clicks / impressions on really diverse publishers.
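The core of a check like that fits in a few lines. A minimal sketch, with hypothetical field names, assuming the click log is a list of (user_id, timestamp, publisher) tuples:

```python
from collections import defaultdict

def flag_simultaneous_clickers(clicks, min_publishers=4):
    """Flag user IDs seen clicking on several distinct publishers at the
    exact same timestamp - no human clicks four unrelated sites in the
    same instant, so this is a crude but effective bot signal."""
    publishers_at_instant = defaultdict(set)  # (user_id, ts) -> publishers
    for user_id, ts, publisher in clicks:
        publishers_at_instant[(user_id, ts)].add(publisher)
    return {uid for (uid, _), pubs in publishers_at_instant.items()
            if len(pubs) >= min_publishers}
```

Real systems would bucket timestamps and weight by publisher diversity, but even this exact-match version catches the lazy bots described above.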
I work in the field. Incrementality testing is a big part of marketing measurement at any reputable agency. Any claims to the contrary are FUD.
That said, companies like P&G, Airbnb, and Uber, which are oft-cited as examples of digital not being worth it, fail to understand their own brand recognition and organic power, built through prior marketing efforts, as key to their current standing.
Sure, TODAY, it doesn’t have the impact they’d like it to have but the investments PRIOR were key to ensuring their success.
I worked at the next desk over from people running a small ad agency. Because we shared an office (and I found them an intern), I got a very good look at how they worked. What I saw can be summarized as: people with zero clue about or interest in statistics writing "reports", with "graphs" they didn't understand beyond "pointing up = good", proving positive ROI to customers - who also had zero clue about or interest in understanding the numbers in the report, and not enough visibility into the whole funnel to independently check attribution. Both the agency and the clients were engaged in a shared and completely unjustified fiction of positive advertising return - and as long as both sides were happy, the money kept flowing.
I've long suspected that a lot of online advertising looks like that. Every now and then, I see evidence in favor. Like that good ol' Optimizely debacle, where it turned out Optimizely was structurally optimized to help people run invalid A/B tests that erred on the side of concluding the interventions were working[0]. And sure, big brands with superstar ad teams probably do this right. But I think there's enough slack in most businesses that advertising spend can get quite far detached from actual ROI without anyone noticing (and with plenty of people happily riding the gravy train).
Have you ever engaged with the large advertising holding companies? They have teams of HIGHLY accomplished data scientists, boutique vendors for specific niches, etc. Even if you don't pay for it, there will be some people skilled at statistics looking at reports. If you pay for it, there will be data scientists doing analysis.
The large ad firms deliver a very good product but it usually isn't cheap.
I haven't, and I don't doubt that's true (at least in many cases; I've learned not to underestimate the chances of big companies ending up running scams either).
I've personally dealt mostly with the other end - the individuals and small agencies - and what I saw revealed a total lack of the competence necessary for the reports to correspond to reality. Perhaps it's understandable - after all, people with the required knowledge likely end up working for big advertisers, or in entirely different fields. But small business owners don't pick the big advertising companies either.
It would be fascinating, if true, if the big "brand name" companies - whose entire business is based on charging more for the same commodity product as a generic store brand, using advertising to justify the higher price in the customer's mind - weren't themselves buying ads from people doing the same thing to them...
The question isn't are the scientists good, the question is if the vendor is honest.
> Incrementality testing is a big part of marketing measurement at any reputable agency
In a huge number of cases these tests are run by people who don't have the statistical background to properly run and understand them, and in the remaining cases there is almost never any follow-through to ensure that the results of the test have persisted after the experiment.
People will say "oh, this test shows a 10% improvement, and this one does too! and this one! and this one!" But then you should see a nearly 50% improvement just from ad spend, and you almost never do. Nobody ever checks that all the numbers add up; they just want the numbers someone reported to feel sound enough to hold up to scrutiny - but scrutiny is never applied.
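The arithmetic that never gets checked is simple: independent lifts compound, so four campaigns each genuinely delivering 10% should show up as roughly 46% overall, not 10%. A quick sanity check with made-up numbers:

```python
def implied_total_lift(claimed_lifts):
    """Compound a list of individually-claimed lifts into the overall
    improvement they jointly imply, assuming they're independent."""
    total = 1.0
    for lift in claimed_lifts:
        total *= 1 + lift
    return total - 1

# Four tests each claiming "+10%" jointly imply about 46%. If overall
# sales aren't up anywhere near that, some claimed lifts aren't real.
print(f"{implied_total_lift([0.10] * 4):.1%}")  # prints 46.4%
```

If the compounded claim wildly exceeds the observed top-line movement, at least some of the individual test results are attribution noise.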
It seems like it would be important to measure this standing and turn down spend once it is reached. I can see how there's a lack of incentive to help large advertising spenders understand this.
Brand is a lot more than marketing spend, and not all brands are equal. Google and Facebook have companies that depend primarily on performance marketing spend over a barrel.
Uber was just incredibly incompetent to not audit their ad spend at all.
I worked at an ad company. It was an absolutely standard metric to eg geo-fence ads out of a state or two for 3 months to demonstrate the impact of ads. This isn't easily externally visible, but tests like this are standard practice.
Particularly in the app install space, which is sketchy as hell once you stop buying from the top handful of vendors, buyers should be auditing by a couple million in annual spend. To get to $150m without looking hard at big chunks of their spend is just plain arrogance and/or incompetence.
I partly work in this space, and I can confirm that geo-fencing over a time period is probably the most transparent and convincing (to the client) way to do this. Essentially you mark out local regions (as small as zip codes or as large as states, or something in between), and ensure that your advertisements only show within certain "fences". Then you compare old and new numbers within those fences.
The technical aspect aside, there are a lot of "soft" factors that help: regular communication of easily consumable numbers/graphs/metrics to the client, calling out inconsistencies, etc. In other words, this is far from a fire-and-forget exercise; this can be an intense monitoring and client-engagement exercise.
The regular client engagement sometimes helps you pinpoint cause and effect in the observations too. For example, company X might decide to advertise less of product Y in a certain state Z because of new laws there. But the team you're interacting with might not be aware of this change, or might not be cognizant of its potential impact on the ad campaign. Regular dialogue helps here: you might observe a change in sales trends and bring it up in a meeting, and the client team might then be able to rationalize the change. This is healthy for both parties.
> Why wouldn't someone jump at an opportunity to prove the thing/service they provide actually works, unless they were unsure about it themselves?
Because everyone is already acting like they know it works, so the only way that experiment can change things is in a way that's bad for the person in question. In that situation, they should (from a local, selfish perspective) be resisting even if they're awfully sure it does work (and perhaps even if they're right!).
Given that, I don't think the behavior has already given us an answer.
Facebook's ad research team have run a lot of experiments.
The goal was to demonstrate the impact of internet advertising on in-store sales. They did the first studies around 2008-10, and have continued running these studies ever since.
They even built a tool so that advertisers can run these studies, and get a sense for the incremental impact of their ad.
Google have a similar (less full-featured) system.
Really, it's the rest of the ecosystem that has much of the fraud, and I think that a lot of people in the industry are aware of this.
> The goal was to demonstrate the impact of internet advertising on in-store sales.
So the goal was to advertise advertising. A more scientific goal would be to see whether there was an increase in in-store sales from internet advertising. Instead, they were looking to design experiments that would show a positive effect, with the goal of giving advertisers a dashboard so they could run those experiments themselves.
To let those advertisers run the experiments themselves, and to relate the campaign to in-store sales, would require Facebook and Google to share a ton of internet-user tracking data.
> To let those advertisers run the experiments themselves, and to relate the campaign to in-store sales, would require Facebook and Google to share a ton of internet-user tracking data
I think you are misinterpreting what "do the experiments themselves" implies.
Certainly there is a JOIN with Google's or FB's data, but it can still happen entirely in a hosted sandbox with no data sharing necessary.
We've run Google's Local Campaigns, which are supposed to drive brick-and-mortar store visits, which they measure using GPS and "other signals". I checked with one of our most remote and least frequented stores to see whether they could see any of the 100 daily visits that Google claimed to have generated, and they couldn't see a thing (average daily visits were around 80 before, during, and after the campaign).
Heh. Just today, FB served me an ad claiming to be Macy's and talking about their closeout specials, accompanied by dozens of mourning comments. It was all phony.
Not really. Scientists, for example, also have their whole careers based on the truth of some theories that they use. However, they're willing to put those theories to the test in different ways. The reason they do so is that they have a high degree of confidence that the theories are true. This cannot be said of people doing advertising.
The replication crisis in the sciences suggests that self-interest is pretty widespread.
And the social stigma in science around trying to take down, discredit, or disprove your colleagues'/superiors'/competitors' theories is pretty substantial: IME there has historically almost been a taboo against attacking or speaking negatively about publications and your own science's and faculty's practices.
The saying about science advancing one funeral at a time doesn't exist because scientists are all such great skeptics and falsifiers, and current scientific practice is heavily biased towards positive findings and rife with publication biases.
Indeed, there's actually a LOT of common ground with advertising self-interest, since a lot of publication in science is effectively just advertising your brand...
I’m not exactly sure what you’re trying to say. Scientists’ whole careers are based on running hypotheses to prove or disprove their theories. That IS their career.
And disproving a theory with lots of research backing it would be great. Imagine if someone found a huge hole in general relativity. There would be a boom in the grant writing industry.
This is a dangerous misconception to be spreading. It is not at all related to your other claims, namely:
> Sampling can skew results. Scientists are people too. We all have flaws.
These are of course correct statements. But they do not influence truth. They may well influence people's understanding of what the truth is, but that's different.
If you really believe truth is relative, walk to the grocery store tomorrow without wearing clothes and convince yourself that it will be fine. Or run a lighter under your hand - maybe it won't burn today? Of course you wouldn't do those things, because at the end of the day we rely on truth for everything we do. We just don't always know what it is. Relativism is not the same as "it's hard to figure out because it's so complex".
Are you implying that the probability of someone going to a store naked and getting out of it without harm is zero? There's plenty of photographic and video evidence to the contrary. Probably not the best example.
Same for a lighter, there are many factors involved, and playing with lighters that way was a party trick where I grew up.
We use truth as a concept every day as an approximation, but the universe is not bound to follow. Very unlikely things happen all the time.
Truth is an unhelpful concept anyway (in science at least). What’s more useful is the likelihood to get a given result from a given experiment and the ability to make predictions.
And yes, when we put things that way it brings a lot of possible issues with sampling, processing, and measuring (and who’s doing the measuring). Some of these things would be harder to control for in a study of the effectiveness of advertising.
> The people that would have the power to run this experiment have their entire careers depending on things staying as-is.
Any large company could invest in some experimentation, whether their marketing directors want it or not. It makes sense that at least a few do but just don't publish the results.
It’s like saying that no one who works on reliability of the systems is willing to run experiment to throw 100% errors for a month, because running experiment like that may show that reliability of the systems doesn’t matter.
And then using data from one system that went down without consequence as proof that system reliability doesn't matter at all, and that it's a huge scam by engineers.
> The unwillingness of anyone to run such an experiment is already an answer. Why wouldn't someone jump at an opportunity to prove the thing/service they provide actually works, unless they were unsure about it themselves?
Or they have run the experiment and the results haven't lined up with their own personal biases, so they were discarded.
> Why wouldn't someone jump at an opportunity to prove the thing/service they provide actually works, unless they were unsure about it themselves?
Because advertising in some form certainly works. If you can determine that approach "A" that everybody is doing is actually a waste of money but approach "B" is effective, then you can develop services around approach "B" and market them based on these findings.
Nobody is arguing that advertising doesn't work at all, but my argument (which I think a lot of other people agree with) is that the effectiveness of online advertising (and maybe advertising in general) is overblown.
> It also happens to be the sole form of revenue all of the largest tech firms enjoy
I don't think this is true unless you qualify it a bit. Apple, Amazon, Microsoft, don't make most of their money on ads. Even Google has other revenue streams.
It's exactly my point. Outside of Apple, all the biggest tech firms are 100% reliant on a form of income that is, at best, partially fraudulent. More realistically, almost entirely so.
I don't understand this line of reasoning. P&G, Unilever, Coca-Cola, etc. have never, not once in history, had a gung-ho C-level exec who said "Screw it, I'm going to find out if our advertising works"? And then either found it works and kept spending, or found it doesn't and saved literally billions of dollars?
There is so much money at stake that could be either saved or generated that it's simply not possible no one has looked at it. I used to help Pepsi/Frito-Lay set up tracking to tie advertising on YouTube to in-store sales. They spent millions of dollars to measure their ads; Google had a clean-room data center specifically for Pepsi/Frito-Lay. The idea that no one actually checked whether this system works is simply not possible.
The trick isn't figuring out whether advertising in general influences people's behavior. The trick is in figuring out if any particular advertising campaign generated more profit than it cost to run.
Also, there's one alternative that's often forgotten in these discussions: Perhaps game theory is at play. It may be, for example, that, across entire industries, advertising costs more money than it's worth. But that everyone has to do it anyway, because anyone who chooses not to will start losing ground to everyone else. IOW, just like in the standard prisoner's dilemma, choosing to act is less about increasing your potential gains than it is about limiting your potential losses.
There is an interesting long-running natural experiment in the pharmaceutical industry that suggests, albeit inconclusively, that this is the case.
Distilling it down into "any particular advertising campaign" is pretty myopic in this world. It's about strategy as a whole over decades. Not every ad campaign is meant to drive immediate sales or signups (direct response). Branding and awareness campaigns can take years to run, and these are ALL diligently measured at every stage. Losses do occur due to negligence, malice, and poor execution all the time, but the brands should take some blame as well, as they often feel compelled to spend. They have huge budgets that they NEED to spend regardless of how campaigns perform - sometimes due to accounting shenanigans, sometimes because they don't know any better, etc.
No matter what time scale you pick, there's still a point where ad spending costs more than you benefit. As time passes, the boost from any campaign decays. For any particular person you only need to show them so many ads to maintain awareness.
> these are ALL diligently measured at every stage
Mmm. Sometimes. Sure, you can obsessively measure audience and sales numbers, but if you don't have any controls then it becomes nearly impossible to figure out how much of that came from ads and how much came from everything else in the entire world.
It is true that the profit/loss point exists, but it may be impossible to measure. What is very clear is that 75+ years of Coca-Cola marketing strategy have produced a situation where any human on earth immediately associates "Coca Cola" with a soft drink.
If Coca Cola cut advertising 90% this afternoon, they likely wouldn't see the consequences until a new generation or two grew up.
I suspect that a similar dynamic can be in effect with many industry events. Everyone would collectively perhaps be better off with pulling out of or at least scaling back on big industry shows. But that doesn't mean it makes sense for just you to pull out. (And, certainly, your events team probably isn't going to push for scaling back.)
There's also a huge mutual back-scratching thing going on. I remember in a former life we wanted to pass on a big software vendor's user group show because, while we sort of needed their software for some important customers, we got very little traffic at this expensive event. Their CEO called our CEO and basically said to him, "Be a pity if something happened to our partnership."
> IOW, just like in the standard prisoner's dilemma, choosing to act is less about increasing your potential gains than it is about limiting your potential losses.
You seem to be assuming that the potential gains are limited to current players of the game in this comparison.
It's possible that Coke, Pepsi etc. are all losing more money on ads than they're gaining in market share against their current competitors and in increased sales. But that doesn't begin to address the question of whether advertising is a net loss.
For one, the advertising also serves as a collective moat against new competitors, which could enter the market and eat up both Coke and Pepsi's market share. Maybe their biggest gain from it is that a bankrupt soda company from the 80s didn't replace them, and that a similar company today wouldn't have room to enter the market.
It also doesn't account for more complex effects like a net increase in sales for all companies in that market. E.g. maybe Rolex, TAG Heuer etc. all benefit in the long term from expensive watches being seen as fashion accessories.
If Rolex stopped their advertising spend one month and saw little immediate change, then TAG Heuer etc. might follow suit. But both might only experience the real loss years later as "luxury watches = must buy" faded from the psyche of their current and potential customers.
Fortune 500 is a cesspool of insane inefficiency balanced by equally ~~insane rent seeking~~ insanely secure revenue. The sooner everyone understands this, the better.
The answer is not complicated, and the logic on this board is strained, I think, by people who don't work in marketing.
1) Advertising works.
2) Coke, a sugary drink, is largely kept alive by smart marketing, which mostly comes down to advertising in the end.
3) Much of advertising is quite wasteful, because it is a fuzzy instrument - but on the whole, good marketing works.
4) There is a lot of self-awareness in the industry on this, and no doubt, a lot of borderline fraudulent actions by participants willingly spending money they know doesn't work.
5) Ad networks will happily sell you ads that don't work, and look the other way when there are shenanigans.
The problem here in the thread is that people are having a hard time grasping how all of these things can be true at the same time, but it's not that hard really.
That some ad spending turned out to be 'not useful' is 'not news' when everyone knows that easily 50% of ad spending is probably wasted - we just don't know which 50%.
>2) Coke, a sugary drink, is largely kept alive by smart marketing, that involves mostly advertising in the end.
Not sure I agree. I haven't seen any Coca-Cola ads for years - no online digital ads, and I rarely consume any traditional media. But I will still buy Coca-Cola every once in a while.
They also have decent positional shelf space or visual merchandising in all of the convenience stores - something I learned that no one on HN actually knows anything about, during the App Store discussions.
Edit: All of my friends can taste the difference between Coke and Pepsi. It is just different. So I was surprised to read comments and YouTube videos about "blind tasting" the two. No, I didn't drink Coke because of ads; I drink it because it tastes better. (And some people prefer Pepsi.)
Your personal situation doesn't represent the market. Moreover, you spent 20 years watching Coke ads while growing up and drinking Coke; it's impossible to tell how that affected you.
You saw 1000 'product placements' for Coke in films and other things without being aware of it. Taste is associated with familiarity and comfort.
> Taste is associated with familiarity and comfort.
I respectfully disagree; I used to work in the F&B industry. Product marketing and placement has absolutely no effect on whether one thinks something tastes good or bad. It does, however, remind people the next time they walk into a store that they should buy Coke, which does affect sales numbers. But not perceived taste.
My 7-year-old nephew hasn't watched a single Coca-Cola ad in his lifetime, but he prefers Coke over Pepsi. While my friend's daughter (and her mum) prefer Pepsi.
"And product marketing and placement has absolutely no affect on how one think something taste is good or bad"
Yes it does.
"My 7 years old nephew hasn't watched a single Coca Cola coke in his life time"
1) It's ridiculous to say a 7-year-old 'has never seen an ad for Coke'.
How would you possibly know that? Unless they are living as the Amish, that kid has seen ads.
2) That someone will prefer one drink over another is not relevant to the argument.
Nobody is making the claim that 'ads make you do stuff'. No Coke ad is going to 'force Pepsi drinkers to like Coke'.
Ads are influential. They root the product in feelings and emotions; those have impact, which is why advertising exists.
3) Your nephew was probably drinking Coke from some age, maybe as a treat. It was Coke and not some Italian cola because Coke has such a powerful market position.
In Italy, they drink Brio and it's quite good; you can't even get it here. If you could, people would drink it.
Coke spends billions on ads because they work not because they don't.
The sad thing is it's not inefficiency relative to comparable things. But anyone who has worked at these places or sold B2B to them just knows on an intuitive level that they are garbage, and you'd need a heavy sedative to believe there are no alternatives.
> not a predictive theory nor a powerful explanatory theory
It's not. It's about letting go of some efficient market ideal and then finding new ideas.
We can look at Fortune 500 case-by-case to learn new things
> Coca Cola
Sugar drug cartel. Despicable business with very stable revenue despite being a net drag on society. (At least "regular" drugs have a lot more upside!)
> Proctor and Gamble
Just as restaurants are reaching down market, and the inefficiency of everyone cooking and cleaning is starting to have market implications, we should see their reign finally dwindle. Wash-and-Fold should follow laundromats. The specialization means that stupid differentiation between products for uninformative consumers (c.f. https://en.wikipedia.org/wiki/Monopolistic_competition) should go away and restaurants and laundromats optimize.
Personal soaps and cosmetics (of course many soaps are cosmetics) however will stay as cultural reasons ensure people will continue to clean themselves and not contract that out for the foreseeable future.
Thanks for elaborating. I don't think I disagree with many or most of the points you've made.
It might be interesting to define a systematic and testable definition of organizational efficiency relative to an ideal market. Just like combustion engines can be compared to their theoretical maximum efficiency, it would be interesting to compare a particular organization against its maximum potential.
Here is another angle on the topic of relative efficiency. Consider a particular organization. Coca Cola will do as an example. Given its environment (financial incentives, regulations, cultural norms, etc), are we surprised by its behavior? This raises the question of how much an organization's particulars (e.g. leadership, history) affect its efficiency.
I suppose I take a systems view of it. Many deplorable businesses survive in niches. But can you "blame" them? It is a matter of perspective: organizations, like life, adapt to their environment and modify it to their liking. Such organizations can stretch the environment to or past its breaking point for a relatively long period of time before having to deal with the downstream consequences.
> the inefficiency of everyone cooking and cleaning is starting to have market implications
Elaborate on your reasoning here? While I can maybe imagine a more centralized food preparation system being more efficient, the currently growing market solutions (DoorDash, etc.) are certainly not, and I can't think of a system in which transporting clothing somewhere for someone else to clean would be more efficient than an in-home washing machine. The wasted energy alone would be colossal.
Coca Cola sells water with flavouring and quite astonishing amounts of sugar. Apple sells nice devices made by what might as well be slave labour.
Both are propped up - actually maintained - by huge and vastly expensive brand management strategies.
The criticisms of individual campaigns here are missing the point. It's not about micro ad spend, but the perception of value and manipulation of behaviour created by the combined effect of multiple PR and advertising efforts - which include traditional print and TV/radio ad campaigns, guided advertorials disguised as news in the MSM, interviews with prominent figures, political campaigns of more or less obvious relevance to the core business activity, state, national, and international political lobbying, direct political campaign donations, advertorials masquerading as "freelance" journalism and blogging, managed astroturfing on social media, shareholder relationship management, product placements in movies and music promo videos, articles about commercial visual design elements in trade journals. And on and on.
That perception of value is - unsurprisingly - extremely valuable. And it's very much a US way of doing business. Instead of producing products that are inherently superior, produce something that is functional but glossily packaged, brand it as a premium lifestyle commodity, and charge exorbitant prices for it.
The prices are traditionally far out of proportion to the product's actual utility. In fact, real utility may well be negative - see diabetes and any number of other health issues, depression associated with social media use, debt-driven spending on lifestyle products, etc.
So the rent seeking comes from a kind of cultural squatting. There is value in dominating discourse in all of these different ways, because discourse and narrative define markets and ultimately control behaviour. And while this is happening other kinds of discourse - which may well have more real utility - are diminished at best and crowded out at worst.
So in this case it doesn't matter if Uber "wasted" their money. Uber have their own branding thing going, and explicit ad spend is a tiny part of that.
And even if all online ad spending ended tomorrow, a small number of corporations would have a difficult time for a while, but the marketing industry as a whole would inevitably interpret the change as minor damage and route around it.
Because the ideology of capitalism is "coordination failure, whatever", it sure would be nice to find a way for one of those companies to freeload off the culture-shaping the others do, and bring the whole enterprise crashing down!
This analysis excludes glossy packaging and status signalling from the inherent superiority, when (in terms of overall market value) they're part of the superiority.
And in Coca-Cola's case (among others) the advertising is much of what creates the value. When you know it's a Coke, it tastes better. I've had a lifetime of absorbing ads that make me think of good things whenever I know I'm drinking a Coke.
> Both are propped up - actually maintained - by huge and vastly expensive brand management strategies.
In the case of Coca-Cola, I agree - in the case of Apple, not at all. "Keyboardgate" aside, their products are vastly superior to the competition in build quality and lifetime. A typical Windows laptop is unusable after two or three years (almost all are made from plastic, which breaks or starts looking extremely ugly sooner rather than later) and has next to no resale value, while in contrast even old cheesegrater Mac Pros fetch many hundreds of dollars today. iPhones are supplied with firmware updates far longer than most Android phones outside the flagship Pixel line. OS X, while it has gotten locked down a fair bit over the last few years, is still way better than Windows as it doesn't do tracking bullshit and start-menu ads, and decently better than Linux because, no matter which device, stuff usually Just Works (tm) without having to fiddle for hours to get something as basic as a Bluetooth headset working.
Apple built themselves a loyal following with pure quality superiority over the competition, unlike HP or Dell who are only surviving because of enterprises who follow the "no one got fired for buying IBM" line.
And that doesn't even touch that it's Apple who's at the forefront of innovation. iPods, iPhones, iPads, Apple Watch, AirPods, now the M1 CPU - these entire device classes were created by Apple. When was the last time you heard about something truly innovative from the Windows/MS/Google side of IT? Only thing I can remember is the Google Glass.
What Apple is doing is not a "brand management strategy" per se - it is delivering actual value and innovation instead of rent-seeking. And for what it's worth Tesla are doing the same thing.
>A typical Windows laptop is unusable after two, three years
This shows that you are just spreading FUD. A Windows machine is no slower after 10 years than it is the day you bought it. Sure you might have added so much software-crap that the software runs slow (which is easily fixed) but the hardware is exactly as fast after 10 years as Apple hardware is. Do you think Apple builds their hardware out of magic and unicorns?
> A typical Windows laptop is unusable after two, three years (almost all are made from plastic which breaks or looks extremely ugly rather sooner than later), and has next to no resale value while in contrast even old cheesegrater Mac Pro's fetch many hundreds of dollars today.
By "typical Windows laptop" do you mean a cheap model where the MacBook equivalent is buying nothing at all? Of course it's going to be lacking in that case.
Well-made Windows laptops exist, and hold plenty of value. But you have to pay for that, no matter what brand you choose.
> tracking bullshit
Like checking signing certificates by pinging servers almost every time you run a program?
Apple isn't repairable. Apple devices have high prices simply because of the brand ecosystem. Old Thinkpads use commodity parts - but they command good prices because they are repairable.
Marketing the new thing is what Apple does. It is good at it. That is not the entire spectrum of customer value.
Thinkpads are pretty much the worst form factor for a laptop that doesn't transition into "toy" space.
Source: forced to use one daily. If I could get a Mac laptop running Linux but with the input smarts of the Apple trackpad and keyboard, I would in a heartbeat.
That shows that 1) people are different or 2) you are a fan of Apple hardware. It doesn't show that Thinkpads are bad. IMO they are light-years ahead of anything Apple sells.
My trackpad is automatically muted/unresponsive while I'm typing. I suggest you figure out how to get palm rejection to activate/work on your setup, and to look if a different Thinkpad series might better fit your body (ergonomics wise).
Effectively every restaurant in the US has to pay either PepsiCo or Coca Cola. Similarly, if you want to buy a non-alcoholic beverage at the grocery store it's mostly down to those two (with Dr. Pepper having a much smaller but still significant stake). Any competitor that emerges just gets bought up by one of the three.
Apple is half of the phone duopoly. Either you pay them or you pay Google if you want a phone and want to buy software for it.
I'm not sure this is quite right. I would have thought that most restaurants very much WANT to sell sodas, as the margins are enormous--much, much higher than the margins on cooked food.
This page [0], for instance, says the cost of goods sold for the soda itself is a penny an ounce for the syrup and CO2. Iced tea is apparently the margin champion, as the same page indicates it can cost as little as a penny per glass.
Restaurants only pay because consumers demand it and would not visit the restaurant otherwise. That’s not rent seeking. That’s market forces. And what they pay correlates with how much their customers demand coke/Pepsi.
The rent seeking is buying up competitors until it's a duopoly and then protecting that through shenanigans like exclusive agreements. It has nothing to do with consumer demand.
Buying up competitors does not immediately create a monopoly or duopoly, especially in an industry where barriers to entry are low.
Exclusive or special agreements to serve only one kind of soda (or no Pepsi where there is Coke) are anti-competitive arrangements, not rent seeking.
The exclusivity contracts are what some consider to be rent seeking. I don't know if this is the correct term, but I do believe they are anti-capitalist and shouldn't be allowed in a capitalist economy. If freedom of choice is so important, then large businesses shouldn't be able to use their ability to corner the market to force you into exclusivity agreements.
Yes. But in fairness I retracted "rent seeking", as a combination of monopolization, bona fide rents, addiction, and just sheer damn inertia contributes to the malaise.
Calling it all "rent seeking" is not a hill I want to die on.
Part of the claim is that the people who are checking are ad execs who, if PepsiCo stopped buying ads, would shortly be out of a job (or have their budget and influence slashed). A counter to this might be that different advertising channels are likely not identically effective, and a TV ad exec has a big incentive to poke holes in non-TV ads.
See the Pepsi Refresh Project of 2010, where Pepsi redirected a sizable amount of ad budget (including Super Bowl ads) to community projects. They abandoned this strategy after a while because they lost market share.
This is the biggest large scale test of advertising I'm aware of. But it probably doesn't apply to all products.
They only lost market share because everybody else was still using advertising. Therefore, this is not a test in favor of advertising (only in the prisoner's dilemma sense).
If advertising is an arms race with more total cost than total benefit, that would be an argument that advertising works. Because anyone, at any given time, will benefit from running advertising.
It also means advertising is a net loss to society, but that is separate from the question 'does advertising work'
How do you look into it, though? I don't know, so I'm asking - but my immediate thought is that you treat it like science. You isolate an environment, advertise, and see if it has an effect. But the implication there is that if it doesn't have an effect, money is on the table.
If this is even remotely close to reality then it makes sense to me. Companies are more concerned with constant growth than strict efficiency, imo. They throw as much money around as possible, and every cent lost or left on the table is panic inducing.
I also imagine different types or products and/or markets behave quite differently. Eg a new product might very well benefit from advertising - since no one can buy your product or visit your store if they don't know it exists.
For brands that large the majority of their spend isn't campaigns specific to new products, but overall brand management. The experiments would need to run for years or decades and if the prevailing belief that these are Red Queen's Races is correct, the cost if they're wrong could be the whole company.
My company provided sales data from stores to a specific team on google that compares it to users who’d been advertised to. This team didn’t want my company to influence results, and didn’t want YouTube to influence results.
More complicated since there are multiple device graphs/etc in play, but effectively tens/hundreds of millions of dollars at play and no one trusted anyone. The largest thing at risk was YouTube’s ad revenue.
> No one can actually prove it has any ROI at all. No one is willing to run the experiments necessary.
This is entirely false. I work in adtech, and almost all companies run experiments in order to optimize their ad spend (everything from Ad A vs Ad B, to Ad vs No Ad, to Channel A vs Channel B, and more).
This isn't to say that advertising always produces ROI. Quite often, experiments will be run that will show a certain strategy isn't performing well, and the company will adjust accordingly. It's incredibly naïve to think that companies are flushing half a trillion dollars a year down the toilet on advertising without any attempt to validate their investments.
> without any attempt to validate their investments.
I've also worked in adtech, for quite a long time on all sides, and my experience is that far more companies do research to justify their ad spend, not validate it.
I have managed plenty of A/B tests in my day, each one claiming to show some improvement, 10%, 15% etc (some, as you said, showing no improvement). Even though these tests are often run "correctly", essentially nobody goes back and asks "wait, we had a 10% increase multiple times in the past year, but is our <metric> really showing the cumulative improvement we expected?"
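One way to run that sanity check is to compound the claimed per-test lifts and compare the result against the actual year-over-year metric. A minimal sketch (all numbers here are hypothetical illustrations, not real campaign data):

```python
# Sanity check: do the individual A/B "wins" claimed over a year
# actually compound into the observed year-over-year metric growth?
# All figures below are made up for illustration.

claimed_lifts = [0.10, 0.15, 0.08]  # per-test relative lifts reported

expected_growth = 1.0
for lift in claimed_lifts:
    expected_growth *= 1.0 + lift    # compound the claimed wins

metric_start, metric_end = 1_000_000, 1_150_000  # e.g. monthly conversions
actual_growth = metric_end / metric_start

print(f"expected cumulative growth: {expected_growth:.2f}x")
print(f"actual observed growth:     {actual_growth:.2f}x")
# If expected is far above actual, the individual test results likely
# didn't generalize (seasonality, interaction effects, or p-hacked wins).
```

In this toy example the claimed lifts compound to roughly 1.37x while the metric only grew 1.15x, which is exactly the gap nobody goes back to ask about.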
The greatest trick in the industry is that because VC are pouring money into everything, everywhere, every metric appears to grow. At every startup, everywhere, pre-pandemic, numbers were going up because people were pouring money into the system.
I've worked at companies who I know for a fact their adtech product cannot and does not work, yet their business continues to explode because in recent months nearly all ad spend has been on digital ads.
I've talked to companies whose entire function is bidding optimization who literally do not understand how to optimize bidding given the information you have.
Absolutely people are running "experiments" but the function of the "experiments" is to justify that the ad team is worth having, and then that the VP of marketing is doing their job and then that the CEO has hired some super smart people, and then that the VCs might have really found a unicorn. Everyone sees what they want and no one really wants to ask the question "wait, does this really work? are these tests really able to capture the complexity of the environment?" And if you are one of those ornery people that insists on probing into the details and seeing if any of this is working, you will eventually get fired.
Ad tech is largely a scam, but a huge number of rich and smart people benefit from the illusion that it is not, so we continue to see experiments showing that everything works as expected.
I agree that there are a lot of (smaller) players in the ad tech space who are scamming companies out of money. And especially if you've been in this industry for a while, you'll know this practice was much more prevalent a decade ago.
I don't think your view is representative of how most F500 companies, or many of the new DTC brands, invest on major ad tech platforms though (FB/G/Snap/etc.). Experienced marketers try to measure metrics as close to core business KPIs as possible, and comparing ads based off of simple A/B tests is increasingly becoming a technique of the past. Measuring the incrementality of ad campaigns through long-term holdout groups gives companies a much more accurate read of their investment, while bringing higher statistical rigor as well. So rather than looking for cumulative increases on <metric> after advertising for a year, you could instead point to group A (who saw no ads) and see that group B (who saw ads) drove a 1.5x higher <metric>.
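A minimal sketch of that holdout comparison, with made-up numbers (group sizes, conversion counts) chosen only to illustrate the arithmetic:

```python
# Incrementality measurement with a long-term holdout group.
# Group A (holdout) sees no ads; group B sees ads.

holdout = {"users": 100_000, "conversions": 2_000}   # group A: no ads
exposed = {"users": 900_000, "conversions": 27_000}  # group B: ads

rate_a = holdout["conversions"] / holdout["users"]   # baseline conversion rate
rate_b = exposed["conversions"] / exposed["users"]   # exposed conversion rate

lift = rate_b / rate_a                               # relative lift (1.5x here)
incremental = (rate_b - rate_a) * exposed["users"]   # conversions attributable to ads

print(f"lift: {lift:.2f}x, incremental conversions: {incremental:.0f}")
```

The point of the holdout design is that `incremental` counts only conversions above the organic baseline, rather than crediting ads with conversions that would have happened anyway.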
I'm not sure how you can say advertising is a scam when there are clear examples of popular brands that most likely wouldn't exist today without digital ads? Take Allbirds as an example. There are dozens of consumer shoe brands that are already in physical stores and have higher brand recognition; how can you argue that advertising didn't help them cut through the noise and grow their bottom line?
> I work in adtech, and almost all companies run experiments in order to optimize their ad spend ... It's incredibly naïve to think that companies are flushing half a trillion dollars a year down the toilet on advertising without any attempt to validate their investments.
We have a very large and conspicuous example here that suggests that Uber hasn't been doing this.
Running experiments like you suggest is difficult, and doing them right requires fairly specialized knowledge. I'd doubt more than a small percentage of companies have the expertise to effectively run any kind of advertising experiment that returns useful data. Most businesses lack that knowledge and rely on metrics provided by the people selling the advertising - who are likely to provide self-serving numbers.
While I'm sure some of the bigger F500 companies do a good job validating their ad spend, most companies don't even know how. Certainly the majority of small businesses have no idea how effective their advertising spend is aside from very crude word-of-mouth feedback. I suspect this creeps way up into the Fortune 500 as well.
> We have a very large and conspicuous example here that suggests that Uber hasn't been doing this.
I'm not sure this article is the smoking gun that the author makes it out to be. By 2017 Uber had reached saturation in what could reasonably be considered their addressable market. From my perspective, turning off app install campaigns and not seeing any dip in acquisition validates this position.
Meanwhile, their other major app, UberEats, is a business that exists in a highly competitive market, and they continue to invest in app marketing in order to grow that division. If the takeaway from this article was that Uber discovered app marketing was useless, I doubt you'd see them make that same mistake again.
> While I'm sure some of the bigger F500 companies do a good job validating their ad spend, most companies don't even know how.
If anything, I think bigger companies are more prone to overspending/spending inefficiently. While they do experiment to try and optimize strategy, there is some business inertia that gives them the flexibility to move a little slower, since one poor marketing decision will not sink the business overnight. Conversely, small businesses don't have the resources to invest as much in marketing. This leads to poorly run ad campaigns, but rather than continuing to invest in a bad strategy, they typically just kill the effort altogether.
Most small businesses don't need complex experiments to understand how marketing affects their bottom line - poor ROI is a lot more apparent when you're low on funds. That's also why it's in an ad tech platform's best interest to make efficient advertising as accessible as possible.
That's great, thanks for your experience, but you can't comment on the actual argument provided?
Isn't it possible that these experiments you talk about are fatally flawed? I have serious doubts that a company can run and do well designed statistical experiments when academic experts are plagued by p-hacking and other foot guns.
IMO large social media platforms are actually much more capable of running these experiments than academics. (Disclaimer: I work on ads at one of these companies).
Platforms can accurately determine who engaged with an ad (basic logging on their sites), they have infrastructure to create statistically balanced ad experiments, and can also accurately determine whether a conversion happened (either through a conversion pixel or through data brokers).
Running these tests on behalf of advertising clients, or for internal research is fairly standard. If we couldn't prove statistically that our ads were working, I would have left a long time ago.
I think the difference from academic studies that fail to replicate is less about capability and more about the fact that experimentation in adtech is fundamentally straightforward and occurs under far more idealized conditions - those you mentioned, plus massive sample sizes and easy iteration - than most academic research.
Of course, this just reinforces your point that experimentation in adtech is largely not subject to the same issues that have fueled the replication crisis in academia.
That’s kinda a weird take when there’s every incentive to be the person/manager/director who saves tens to hundreds of millions of dollars in ad spend.
Sure marketing people want to protect their jobs but the odds of “advertising is entirely worthless” being this closely guarded secret kept by millions of people in the industry or being a collective delusion is pretty darn low. The reward for defectors is just too high.
I'm not sure what argument you're referring to? I quoted two claims from the parent comment and refuted those based off of industry experience.
> I have serious doubts that a company can run and do well designed statistical experiments when academic experts are plagued by p-hacking and other foot guns.
By "company" are you referring to individual advertisers or digital advertising platforms? Why do you doubt that either is capable of running well-designed experiments? It's in both parties' best interest to spend ad dollars as efficiently as possible.
While it's certainly possible that every one of the 100 million+ ad experiments ran over the past decade have been fatally flawed, it's highly unlikely. I guess the alternative scenario is that digital advertising is one giant hoax being kept secret by the millions of employees who work in the industry, but given how easily tech news leaks, this also seems extremely improbable.
Another option is that the experiment design was wrong for every single one of those experiments. Sounds unlikely right? Running the same broken experiment any number of times won't change that it's broken. So if these "100 million+" ad experiments are more or less copies of each other they could all be wrong. How likely is that? I don't know, but I do know that at a big Silicon Valley tech company that you've definitely heard of I was involved in an exercise where we were told to create an experiment. The type A take charge person in our group did what they do, which in this case was talking over and shutting down the Los Alamos trained particle physicist who was trying to explain to her why her "experiment" could never show anything.
>No one can actually prove it has any ROI at all. No one is willing to run the experiments necessary. In the few cases of natural experiments, where ads got turned off for some people by accident, there was no change in buying behavior.
This may be true but it's a separate issue than the fraudulent clicks.
Brand advertisement (the kind you are talking about, as opposed to closed-loop direct advertising) is an investment in a brand that doesn't get paid off in a day, or a week, or a month, or even a year. It adds fractions of a penny onto a customer's value, every day from today into eternity. It's an investment in your mind, but the long time horizon makes it, as you point out, nigh impossible to measure ROI.
However, this is one of those cases where despite being immeasurable, it still works.
Imagine doing a study on low-fat or low-carb diets and trying to measure health outcomes like lifespan, heart disease, or cancer after just one month. You can't do it. The best you can do is measure markers of these outcomes, like insulin resistance, blood triglycerides, etc. Brand advertising is similar: you measure markers of long-term purchase intent. It's not perfect.
And just like actually doing a long-term study of diet for example is riddled with confounding variables and very hard (nigh impossible) to do well, so it is with advertising.
Though not an intentional test, we found out what happens when a movie gets a wide release but does not advertise. In 2008, the movie 'Delgo' was released on over 2000 screens with nearly no advertising. Because the production company could not find a distributor, they spent their ad money on renting the screens for a week, with the hope that some people would randomly see it and word of mouth would spread, leading to the theaters wanting to keep it for additional weeks. It became the lowest earning wide release movie up to that point in time. Each screening averaged two people per screening. More people saw Conan O'Brien making fun of the movie in his monologue than actually saw it in the theater.
The reviews for the movie were poor but it had lots of household names as its voice talent, including Anne Bancroft in her final film. Good or bad, advertising likely would have led to more people seeing it than two per screening. Of course there's no way to know for sure how many more would have been enticed by the ads, but we do know that going with zero advertising resulted in a huge disaster.
> It became the lowest earning wide release movie up to that point in time. Each screening averaged two people per screening.
On the other hand it got displaced by Oogieloves which had $40 million in marketing costs. Critics found it mostly bad and the only award it won was for films produced in Brazil. Maybe there was a reason no one wanted to spend ad money on that movie?
These are not fair comparisons. Eminem and the Avengers have already established identities. Music cannot even compare to movies due to the sheer quantity of modern musicians. There are plenty of fantastic (IMO) bands out there that release music without advertising and they get nowhere.
As for movies, Avengers got to where they are today through lots and lots of advertising.
I work in digital marketing and this episode made me want to tear my ears off.
First of all, they didn't differentiate between display and PPC advertising. With PPC you only pay if the user actually engaged with the ad. Nearly all of the anecdotes they used were about online display advertising - a known crock.
And we absolutely run experiments all the time! In fact, we tie our ad spend directly to conversions. If anything, the market is too efficient - it's really hard to get more than what you are paying for.
Absolutely. You can give a campaign its own website or URL if you want to test something like the effect of a specific billboard or radio ad.
Or you can do the equivalent of an excess mortality study, but for client acquisition or product sales. You take something like the month of January and say "this is what we expect for web traffic given our standard traffic for that month", extrapolated with your average web activity growth. And so long as you have "clean" months to compare it to (with no ads running), you can get a pretty decent back-of-the-envelope estimate of what your ad campaign did.
Like, there's so many different ways you can cut the apple, it's patently ridiculous they made those claims.
To give them a bit of credit, there's something to be said for there being a lot of bad ad spending out there. But like the stock market, there's a tacit understanding that it's more or less efficient.
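The excess-traffic extrapolation described above is simple enough to sketch. All figures here are hypothetical, just to show the shape of the calculation:

```python
# Back-of-the-envelope "excess traffic" estimate, in the spirit of an
# excess-mortality study. All numbers are made up for illustration.

clean_months = [100_000, 104_000, 108_000]  # traffic in months with no ads
monthly_growth = 1.04                        # typical organic growth rate

# Expected traffic for the campaign month, extrapolated from the last
# clean month at the organic growth rate:
expected = clean_months[-1] * monthly_growth

observed = 125_000                           # traffic during the campaign
excess = observed - expected                 # rough credit to the campaign

print(f"expected {expected:.0f}, observed {observed}, excess ~{excess:.0f}")
```

The quality of the estimate obviously hinges on how "clean" the baseline months really are and on seasonality, which is why it's only a back-of-the-envelope figure and not a proper experiment.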
I'm struggling to follow the actual narrative presented in the chopped-up Twitter style. It sounds like there was a third-party ad network involved?
One thing we learned a long time ago is never to trust "engagement" data from platforms (be it Google, Facebook, LinkedIn, whatever). There are enough random user behaviors that you generate a ton of noise. You need to build larger, more robust attribution models that tie ad spending directly to revenue sources - new leads, new accounts, purchases, etc. Patting yourself on the back for how many shares or clicks you get is not great.
As a non-expert it's hard for me to say too, but perhaps you will benefit from hearing directly from the source - the performance marketing guy at Uber who found all this out: https://www.marketingtodaypodcast.com/194-historic-ad-fraud-... Honestly I am not sure if he did a good thing by asking all these questions, or was partially to blame for gross incompetence managing a 150mm ad budget, or both. (The chronology of when exactly he took over matters; it seems he was only in the job about a year.)
These experiments are run all the time even before the internet. I remember reading about how for broad brand advertising they used to segment by city. Half the cities got an ad for Coca Cola and half didn't. Then you compare sales of Coca Cola. No one will publicly publish numbers because it's a mix of sensitive sales data and competitive advantage (ie: otherwise your competitors don't have to burn money running their own tests). Also, smaller shops turn their online advertising campaigns on and off all the time to test impact.
> No one can actually prove it has any ROI at all.
That's a philosophical question of whether you consider statistics to be "proof".
> No one is willing to run the experiments necessary.
You're nuts if you think this is true. I assure you that companies in traditional industries (i.e., without venture capital) can and do run these experiments.
The Uber story is about venture capital and its anti-market incentives, not about the ad industry.
Read the podcast transcript. An economist proposed turning off print ads in one market to find out if they mattered, and the head of marketing said he’d rather not know.
"Chief marketing officer of Unilever" isn't the guy running experiments, he's the guy that hires ad agencies and auditors that in turn have teams of data scientists running various experiments for Unilever.
I guess you could propose a conspiracy theory that all these people are conspiring to put together fake reports and data sets, but that sounds extremely unrealistic - ad agencies are essentially money-counting businesses like banks, and they take data security very seriously. (Because usually they'd be audited at every step.)
In short, smart people have already thought about these issues, and there isn't any inherent pro-advertising bias in industry, rather the opposite.
Only because the head of marketing realized it would expose his past misjudgments - not because he didn't realize it would actually be a good thing for the company. You'll see this type of behavior in every department - people trying to cover up their mistakes.
Wait, so their marketers are unwilling to do anything that would reveal ad ineffectiveness, and you consider that an argument in favor of the claim that ads are effective (the topic at this point in the thread)? Like, you take that as a signal that marketers would converge on effective ads?
My point is that we have to differentiate between bad employees and the overall effectiveness of ads.
The latter is all about incrementality. When you adjust for it, you get the true effectiveness of ads. In some instances, the incrementality will be significant, in others it won't. It looks like that in Uber's case, the incrementality was low. Btw, if you're not familiar with this term, please look it up - there's a ton of literature on it: https://www.adroll.com/blog/marketing-analytics/beginners-gu...
Which brings me to the point about bad employees - marketers who are not adjusting their performance measurement based on incrementality should be fired (and it certainly looks like this was not being done at Uber at that time).
I interviewed at Uber for a marketing position back when they were small, and decided not to take the job (huge mistake). At least back then, the people I met were super smart (a lot of them were ex Facebook, again back when those teams were super good). I never met Kevin Frisch and don't know if anything he said was taken out of context, but the first impression is not good. Also it's a bit strange for a well-respected Uber guy to go to Intuit - it's not a common career path.
> No one is willing to run the experiments necessary.
Not without reason. Even without the conflict of interest that Nextgrid points out in a sibling post, there's still a significant financial barrier to attempting to measure this stuff. According to a former professor of mine who spent a large chunk of his career studying this stuff, the size of study you need to conduct in order to get any kind of statistical power at all on an ROI study is just absurd. See, for example, the treatment starting on page 15 of: http://www.davidreiley.com/papers/OnlineAdsOfflineSales.pdf
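To see why the required scale gets absurd, here is a rough sample-size sketch using the standard two-proportion normal approximation (my own illustrative numbers, not figures from the paper):

```python
# Rough sample-size estimate for an ad ROI experiment: detecting a
# small relative lift in a rare conversion rate.
# Standard two-proportion normal approximation; numbers are illustrative.
import math

def required_n(p1, p2):
    """Per-group sample size for a two-sided z-test,
    alpha=0.05 and power=0.8 (z-values 1.96 and 0.84)."""
    z_a, z_b = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Base conversion rate of 0.5%, hoping to detect a 5% relative lift:
n = required_n(0.005, 0.00525)
print(f"~{n:,} users per group")  # over a million users per group
```

And that is for a clean randomized split; with contaminated control groups or longer-horizon brand effects, the numbers only get worse.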
> No one can actually prove it has any ROI at all. No one is willing to run the experiments necessary.
Depends enormously what sort of business you're in. I used to work for a company where all of our sales came from ads, 100%. It was trivially true that if we stopped advertising we would have no sales. We were also committed to running experiments: we knew how well all of our many advertising channels performed, and we ran A/B tests for every change.
This is nonsense. I had a startup completely powered by Google Ads. Ads brought in nearly 100% of the traffic. When my billing information with Google got mixed up and the ads stopped, the traffic went to 0 immediately.
This is incorrect. Companies do the experiments. They just don’t publish the results. Why would they? The idea that Amazon doesn’t know the ROI on their ad/marketing spend is laughable.
Because 50% of advertising spend is wasted, but it's hard to figure out which 50%.
If you're doing an $XY,000 ad campaign, it's a waste of your time and money to figure out which 50% it is.
If you're doing a $XY,000,000 ad campaign, you employ people whose job it is to figure out which 50% it is. The thing is, you usually don't go ahead and pay them to blog about it.
I'm only about halfway through the first of the podcast episodes you linked (thanks!), but just thinking about this logically for a moment:
I would not be shocked to learn that advertising for a specific product or event is not particularly effective. However, I'm inclined to believe it has a huge effect on overall brand recognition.
Let's say you go to Amazon to buy a roll of toilet paper. How do you choose from the literally hundreds of options? You could spend a day of your life reading reviews, and trying to parse which ones are fake. Or you could buy the toilet paper from Scott because you recognize the brand.
As I see it, buying brand advertising is a lot like buying an expensive suit. It's not that the suit makes you more productive, but it is a sign of professionalism, and—frankly—of wealth. If a brand is advertising everywhere, you know they aren't a fly-by-night company, and their products likely meet some standards of quality.
There's also the bit about luxury (mostly) car commercials. Half their purpose is to remind you that you made a good decision buying your <brand> car, and maybe your next car should be the same brand.
I'm a fan of the podcast but one argument they cited seemed to have a pretty glaring error - they looked at the case where eBay was comparing incremental gain on search ads over no ads. It's methodologically hairy because eBay is a very major player with significant brand recognition.
"When Tadelis was working for eBay, the company was in the practice of buying brand-keyword ads. Which meant that if you did an online search for “eBay,” the top result — before all the organic-search results — was a paid ad for eBay."
This doesn't show that advertising doesn't work per se, it shows that eBay didn't hire a competent ad buyer. Whether or not you can prove the efficacy of advertising as a whole, this is not a valid approach.
This was a superficial analysis at best. Adtech produces petabytes of data every day proving that advertising works. There's a reason why two of the most valuable internet companies sell ads.
The problem is knowing exactly which formats and campaigns are working down to the dollar, but part of that is just the reality of fuzzy attribution and it's only getting harder as privacy regulations get stronger. However you can definitely tell the difference when turning everything off, and if you can't then you were advertising to the wrong people in the first place.
Uber's mistake isn't that advertising didn't work, but rather that they didn't vet their vendors or even bother doing any checking and optimization of their own.
The implication here is that money spent on ads is largely wasted.
Companies like Uber and Ebay have turned off all their adspend and saw little to no change in their acquisition metrics. You can argue that they were just doing it wrong. But the point is that, if even they are doing it wrong - and getting nothing in return for the millions they're spending on ads - then it's very likely most others are in the same situation.
You are right that this doesn't mean _all_ advertising is useless; there are absolutely profitable use cases. But the larger point still stands: most money being spent on advertising right now is likely not returning anything.
We now have some strong precedents being set. I believe this will cause more major companies to run the ultimate experiment: turn off all ads and see what happens. It's too early to tell, but it's not impossible we'll see adspend drop significantly across the industry once everyone finds out they're just burning money.
Why the ad spend isn't working is a critical detail. It's definitely not most of the money, but yes, there's a lot of waste, because it's a 12-figure industry with lots of politics, perverse incentives, thousands of vendors, and a complex global supply chain.
Fraud is a special case because it's criminal activity and has nothing to do with advertising. It happens in every industry but it's especially easy with online technology spanning multiple countries and data that can be easily faked. Uber was exceedingly oblivious here but I wouldn't extrapolate advertising efficacy from these examples.
This is patently false. I have friends who have worked years developing solutions for doing ad effectiveness comparisons. They are A/B tests in most cases but some methodologies require a lot of sophistication because of problems tracking conversions.
Traditional ad platforms like TV and newspapers might not have done this, but online ones surely do. In fact, that is one point they consider an advantage: you can measure effectiveness, unlike on the traditional platforms.
It's unclear how this experiment would be done. In the case of brand advertising, it's likely that brand awareness would decay over some period of time and in turn purchase behavior would change.
It's not currently possible to run an A/B experiment with a hold out group of potential customers across all channels, let alone for any longer duration experiment. So how can we separate cause and effect? (although pay per conversion channels do get gamed left and right)
Come up with some new product that requires some personal data for usage (e.g. age, gender/sex, address). Start to advertise it in just one country to one demographic, and look at how many out-of-target orders you get.
Maybe it's even enough if you simply just sell it via mail order, you can then look at the addresses.
There's probably a natural information spread in any market (word of mouth, trade magazines), and there's probably a physical dispersion of the target group too (people move; visitors and tourists saw the ad or product and order it at home), but it should still be valuable to see how much effect just one campaign has.
Maybe one of the best products for this would be a car. They are pretty standard - really not much difference between them - they come in all price ranges, and new models come out regularly. Advertise one in a few major US cities but not in others.
For most vendors, it's not possible to accurately target age/sex/address online. They will certainly sell you the option, but they won't tell you how accurate the data is - or they will provide high accuracy with extremely low coverage.
To see this in action, spend a day watching Pitbull videos on YouTube and see how many Spanish-language ads you get.
Traditional brand advertising testing (TV, newspaper, etc.) would be geography segmented as I understand it. So half the cities got the campaign and half didn't. You can mimic that with IP based geo-location although you'd get more leakage than pre-internet.
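A geo-segmented split like this is easy to sketch. Below is a minimal, hypothetical assignment scheme (city names and the salt are made up for illustration) that deterministically buckets cities into treatment and control via hashing, so the split is stable across reruns and auditable:

```python
import hashlib

def geo_bucket(city: str, salt: str = "campaign-2019") -> str:
    """Deterministically assign a city to the treatment (ads on)
    or control (ads off) arm of a geo-split experiment."""
    digest = hashlib.sha256(f"{salt}:{city}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# Hypothetical market list; in practice you'd stratify by market size first.
cities = ["Austin", "Boston", "Chicago", "Denver", "Seattle", "Miami"]
assignment = {c: geo_bucket(c) for c in cities}
```

Because the bucket is a pure function of city and salt, the same city always lands in the same arm, and changing the salt reshuffles the split for the next campaign.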
Counter-example: I know of at least one consumer goods company that has studied the long-term effects of certain kinds of sponsorship deals on consumer behavior. The study had tracked people for at least 10 years at the time I learned of it. Of course, this company would never publish the results, because they provide a competitive advantage in structuring and bidding on sponsorship deals.
This is completely false. Large advertising platforms have many A/B tests that show significant differences in consumer behavior between groups that receive different ad treatments.
Ads might be less efficient than some believe, but it's super easy to see that they "work", and advertising platforms do it constantly.
"No one is willing to run the experiments necessary."
Tesla is one natural experiment in not spending money on mass-media advertising, compared to traditional car companies that spend HUGE amounts of money on it.
"Hyundai spent $4,006 per Genesis vehicle sold in 2018. Ford’s Lincoln brand came in second with $2,106 per vehicle sold. After Jaguar and Alfa Romeo, GM’s Cadillac brand came in fifth with $1,242 spent per vehicle sold. Tesla was the lowest at just $3 spent per vehicle sold."
Tesla is in an enviable position of selling most of their cars before they're produced. In that position, you don't really need to advertise much. They also get a lot of PR to keep up brand awareness.
If Hyundai couldn't keep Genesis vehicles on dealer lots, they'd advertise them less too. Having a dealer network means having dealers who want manufacturer support in advertising, and manufacturers advertise to keep those dealers happy. Even if the new cars sell themselves, dealers need to get people in the door to sell used cars.
Tesla's marketing spend is whatever it costs them in legal fees and fines to keep the mouthy celebrity CEO - and their high-profile campaigns in 2018 seemed pretty effective.
Beware extremes like "no one can actually prove". One difference between internet ads and their more ethereal TV and radio predecessors is that ad views, clicks, and purchases can be tracked. There are also techniques that can make TV, radio, print, and even digital-to-physical-world ad performance more visible: coupons, response codes, and campaign-specific phone numbers and URLs. That Uber was buying hundreds of millions in ads and could not attribute performance (sales, for example) speaks more to poorly designed campaigns and potentially very bad actors in the supply chain.
Ads are somewhat like propaganda. There's no proving whether it works in the short term, but nobody can deny that persistent propaganda has a long-term effect on the whole population. Think about a kid who listens to some propaganda for years while growing up. If you show an ad for a new Apple product, a small part of it is to inform consumers about the product; the larger part is to reinforce the Apple brand, which has been happening over many years. You can't say "well, it's not quite working, so let's stop doing ads."
Maybe not for companies of uber’s size/current reach, but small businesses definitely do benefit from ads. They see an immediate uptick in sales when they start advertising on various platforms.
Great subject matter, but boy does reading the transcription remind me why I hate podcasts. Stop infantilizing the audience with these coo-coo-ing sound bites and get to the damn point already!
Lewis and Rao published a meta-analysis of 25 large-scale controlled advertising experiments [1]. Here's the abstract:
Twenty-five large field experiments with major U.S. retailers and brokerages, most reaching millions of customers and collectively representing $2.8 million in digital advertising expenditure, reveal that measuring the returns to advertising is difficult. The median confidence interval on return on investment is over 100 percentage points wide. Detailed sales data show that relative to the per capita cost of the advertising, individual-level sales are very volatile; a coefficient of variation of 10 is common. Hence, informative advertising experiments can easily require more than 10 million person-weeks, making experiments costly and potentially infeasible for many firms. Despite these unfavorable economics, randomized control trials represent progress by injecting new, unbiased information into the market. The inference challenges revealed in the field experiments also show that selection bias, due to the targeted nature of advertising, is a crippling concern for widely employed observational methods.
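The abstract's "more than 10 million person-weeks" claim follows from a standard two-sample power calculation. A back-of-envelope sketch (my own illustration, not the paper's exact method; the 2% lift figure is an assumed example) using the paper's coefficient of variation of 10:

```python
from math import ceil
from statistics import NormalDist

def required_n_per_arm(cv: float, relative_lift: float,
                       alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm to detect a relative lift in mean sales,
    given the coefficient of variation (sigma/mu) of individual sales."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = z.inv_cdf(power)           # desired statistical power
    return ceil(2 * (z_a + z_b) ** 2 * (cv / relative_lift) ** 2)

# With CV = 10 and a hypothetical 2% lift to detect,
# each arm needs millions of person-weeks of observation.
n = required_n_per_arm(cv=10, relative_lift=0.02)
```

Because the required sample grows with the square of cv / relative_lift, sales volatility that is 10x the mean makes even modest lifts brutally expensive to measure, which is exactly the paper's point.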
This story was on the Freakonomics podcast too. It definitely seems like a bubble, but there's no external pressure to make the bubble pop. With the mortgage crisis, eventually, people can't make the mortgage payments and the bubble bursts. But with people happily spending money on ads that don't work, there's no external pressure to stop. Will this bubble ever burst?
Gabriel Leydon of Machine Zone spoke to the general topic at Code/Media back in 2016. He discussed how they'd gone through the trouble of building internal expertise and tools for optimizing their ad spend to better ensure the specific outcomes they required - in a way that would inevitably produce more sophisticated ad buyers and put a nail in the coffin of traditional media advertising.
This made the rounds in adtech in 2016 and it's really nothing special. His entire speech comes down to a single quote: "media will be quantified".
Sure, everyone wants that. Precise attribution has been the holy grail for a long time and the struggles are far larger than just a few technology products. The new battleground is first-party data and clean rooms vs privacy regulations. And that's after dealing with all the politics and perverse incentives that happen in such a massive industry.
Alternatively, they dumped hundreds of millions of dollars into ads based on a wildly unrealistic notion of a customer lifetime (much like the rest of Silicon Valley).
I didn't say they achieved their goal. Though I do think it was/is an admirable one. It would be nice if marketing budgets weren't a near limitless accountability-free money pit.
It depends on what kind of advertising you are talking about. Direct response advertisers measure ads to the cent. They know exactly what the ROI is and where each customer is coming from.
At least in e-commerce it’s very much possible. The company that I work for does such experiments regularly (there’s a dedicated Data Science team for measurement) and I’ve personally been involved in lift studies for Google Ads. They work, you just have to be careful with the ‘how much’ combined with ‘for what’.
Happy to chat with anyone who is interested in the topic (pfalke at pfalke dot com).
Haven’t had a chance to listen to the podcast, apologies if that made me miss the point of the parent post!
Ecommerce sites have very fine-grained measurement of their advertising spend and know exactly the ROI (which is why they focus so much on retargeting).
They mention the common retort to this which is very bizarre to me: "If online ads don't work, then huge companies wouldn't spend billions of dollars on them. Therefore they must work. You academics are just missing something."
OK, but how do those companies know it works? They don't have any real data showing that it does, just the fear that if they stop they'll lose a lot of business.
They do have data. It's pretty easy to see sales without ads vs sales with ads and then do further testing to narrow down results from there. This has been done for over a century since the first billboards were put up.
The amount of data generated by adtech today is staggering. The problem isn't data or advertising, it's the wrong people running the wrong campaigns for the wrong reasons.
I suspect that if our browser isn't blocking ads, then our brain is. It's complete conjecture, but I assume ad blockers eliminate this cognitive load, which explains part of why they became successful before they were necessary for security/malvertising.
It's an arms race. In theory, advertising should give you an edge over non-advertising competitors, but if everyone is doing it, demand remains unchanged and everyone is wasting money.
IDK. It's a risk worth chewing on. E.g., if Google and Facebook tanked, bringing down the stock indexes and all the institutional investors and triggering a sell-off... what would the knock-on effects be? I don't know (no clue). The 2008-09 crisis largely happened when the CDS stuff popped. Is there a similar house of cards on top of FAANG stocks or the digital ad market?
The ad buyers aren't smart enough to measure the actual effectiveness of their ads, and the ad sellers are not incentivized to teach them how to do it. This can go on for an arbitrarily long time.
For example we can probably agree that for completely new companies spending on ads makes sense. Or giving out free samples, etc.
Similarly for big companies doing media campaigns to keep the new ones at bay makes some sense.
Even if word of mouth is a thing, even if there are organic searches, and even if it seems like a race to the bottom if everyone just tries to outspend each other.
It'd be great to run experiments on how to sensibly prevent or regulate this ad arms race. But first we need better data privacy laws.
This is really, really not true. Advertising lift has been well studied, especially in metastudies spanning hundreds of digital campaigns across Facebook and Google. These are independent academic metastudies, with hundreds of millions of impression data samples.
Positive lift in the range of 0-20% is very common, and many statistical aspects of causal inference on ad impacts are well understood.
Negative lift and flat campaigns are real phenomena too, and it does deserve more widespread publicity that negative lift happens in an appreciable number of campaigns, but that doesn’t take away from the overwhelming evidence that digital advertising works and that the mechanics of positive lift are well studied.
Here are two of the foundational papers in this area:
Particularly Figure 1 (page 26) in the second link. That figure alone utterly refutes any nonsense claim that digital advertising doesn’t have provably positive ROI.
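As a minimal sketch of how lift estimates like these are computed (my own illustration, not the papers' methodology): relative lift with a delta-method confidence interval from a simple holdout test, where a control group is withheld from the campaign.

```python
from math import sqrt
from statistics import NormalDist

def lift_with_ci(conv_t: int, n_t: int, conv_c: int, n_c: int,
                 alpha: float = 0.05):
    """Relative lift of treatment over control conversion rates,
    with a normal-approximation (delta-method) confidence interval."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = p_t / p_c - 1
    # Delta-method standard error of the ratio p_t / p_c
    se = (p_t / p_c) * sqrt((1 - p_t) / conv_t + (1 - p_c) / conv_c)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return lift, (lift - z * se, lift + z * se)

# Hypothetical numbers: 1.1% vs 1.0% conversion over 100k users each
lift, (lo, hi) = lift_with_ci(1100, 100_000, 1000, 100_000)
```

Even with 100k users per arm, the interval around a 10% lift is wide - which is consistent with the Lewis and Rao point above about how noisy these measurements are.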
I did, and I think there's a lot of p-hacking going on. Leading with TV advertising is disingenuous at best.
The eBay example conflates user acquisition (new customers) with purchases (new or old customers). It is a common tactic to buy advertisements defensively - for example, if you're a product manager and have determined that part of your user base is more transactional than frequent.
Another pet peeve I have is how they conflate direct response advertising and marketing.
You are conflating the value of the marginal ad with the value of any ads at all, and also exaggerating what those articles say.
One of them said tracking cookies only boosted conversion 4%, and another said P&G did better with some traditional media advertising than some digital advertising.
No one can actually prove it has any ROI at all. No one is willing to run the experiments necessary. In the few cases of natural experiments, where ads got turned off for some people by accident, there was no change in buying behavior.
https://freakonomics.com/podcast/advertising-part-1/
https://freakonomics.com/podcast/advertising-part-2/