Mark Zuckerberg grilled on usage goals and underage users at California trial (wsj.com)
191 points by 1vuio0pswjnm7 15 hours ago | 104 comments




The whole article reads like a puff piece for Zuckerberg/Meta.

They had him on the stand and these were the most interesting questions and answers? I feel like the WSJ is trying to convince me Facebook is a good company trying its best and Zuckerberg is a reasonable, empathetic person.


That's what Meta paid for.

That’s exactly the lens they were hoping for

I guess so. I expected a little more nuance to hide it better, but it was just blatant. Like any child could figure it out.

Plenty of adults don't catch it either. You don't need to be blatant. Dress it up in neutral business language, keep the arguments one step removed from the conclusion, and anchor it in assumptions people already hold about markets and American institutions. Then it's nearly impossible to push back on without sounding like you're attacking the premises.

The journalistic version of the “I’m kidnapped” hand signal.

That’s what the WSJ is there for

You were expecting it to be fair and balanced? What speaks volumes about Murdoch is that the WSJ will criticize Trump in ways that are heresy on Fox News.

The WSJ is always pro-corporate.

> The plaintiff is a 20-year-old California woman identified as K.G.M. because she was a minor at the time of her alleged personal injury.

I didn't realize this was literally a single person claiming they were personally injured by literally every major social media company. How does that even work? What laws are purported to have been broken here? I wholeheartedly support some sort of regulatory framework around social media, but this specific case seems like a cash grab. It was already successful too, since Snap and TikTok have settled.


From a Rolling Stone article:

"K.G.M.’s lawsuit was selected as a so-called bellwether case and is proceeding first among more than a thousand personal-injury complaints under a coordinated, court-managed process meant to eliminate the risk of inconsistent rulings at subsequent trials."


"How does that even work?"

There is a master complaint, and each plaintiff files a short-form complaint.

Because the injuries will vary from plaintiff to plaintiff, a class action will not suffice. This is why each plaintiff must file individually.

To learn more: https://www.jpml.uscourts.gov/articles

Here is the master complaint for the personal injury plaintiffs

https://dn710108.ca.archive.org/0/items/gov.uscourts.cand.40...

Here is the short-form complaint for personal injury (for individuals)

https://dn710108.ca.archive.org/0/items/gov.uscourts.cand.40...

Here is the master complaint for the local government/school district plaintiffs

https://dn710108.ca.archive.org/0/items/gov.uscourts.cand.40...

Here is the short-form complaint for public nuisance (for local governments, school districts)

https://dn710108.ca.archive.org/0/items/gov.uscourts.cand.40...

Hypothetical for discussion:

A corporation's lobbyists, or some other circumstances, prevent the establishment of any meaningful regulatory framework that would effectively produce a desired change in the corporation's behaviour.

However, the threat of thousands of "cash grabs" through private litigation causes the desired change in the corporation's behaviour even in the absence of a regulatory framework.

What are the pros and cons?

For example, one could argue that the "cash grabs" pose a greater problem than the corporate behaviour that would occur in their absence, or vice versa.


My sibling comments are better informed; I just wanted to say that settlements don’t get paid unless there’s a risk the plaintiff could win.

Not true at all. In fact, settlements mostly happen because it would cost significantly more for a company to go through discovery and argue its case in court, regardless of the eventual result. And court systems strongly encourage settlements to save their own time. There's an entire industry of patent trolls and sleazy personal injury lawyers in business because of this.

The American jury system is always a wildcard.

A judge can be predicted; it's all about facts and evidence. Twelve randos means rolling the dice.


Not sure about that; don't defendants sometimes settle because they don't want the publicity of a trial, or don't want their dirty laundry aired in discovery?

Not always. Sometimes it’s as simple as: settling is cheaper than proving you’re “clear” at trial.

No one wants to go through discovery if they don't have to. These companies are flush with cash and can pay to make that problem go away.

I was sued. I was 19 years old, working as a painter for a dishonest contractor that paid crap wages. I nosed out of a parking lot after work one day to see around a line of cars turning in, and a big sedan ploughed into my little econobox. Several years later, as the statute of limitations was about to run out, the driver of the sedan sued me. My insurance company's first move, before doing any discovery, was to offer her $50k. She said no, so discovery began. It turned out she'd been mis-prescribed an anti-psychotic, which created the symptoms she was suing me for having caused. The case was thrown out. The insurance company's legal bills ended up being much less than $50k, but the way it worked was they took a guess at the break-even point and offered a bit less than that.

That's not to say this is how it works when Meta is on trial. I just thought it was useful perspective on the nature of settlements.


In legal terms they often call this a "nuisance fee", although it's normally much smaller when the defendant thinks there is a 100% chance they will win but just wants to avoid all the costs.

She alleges that social media applications deliberately got her addicted, knowing that might lead to the depression and suicidality she experienced.

She's not wrong. The discovery process has shown that such decisions were made by Meta and Zuck himself, knowingly, in the face of research that opposed their goals.

“The better that Meta does, the more money I will be able to invest in science research”

That’s an impressive amount of arrogance.


There's an incredible cultural contempt for social media; everyone recognizes the harms, but we collectively spend more and more time on social media apps.

What do you mean?


Ask yourself the same question but replace "social media" with "tobacco"

That seems like a bizarre comparison. Is TikTok high in nicotine?

Have you ever tried quitting smoking?

Easy. I've done it five times in the last three years alone.

I don't think this is what quitting means, or was that part of the joke?

That was the joke: it's not easy to quit smoking.

It means it's addictive

When I have true contempt for something, I find it quite easy to quit.

There are things I am likely addicted to that I don’t like. I wish I didn’t do them and could stop, but I don’t have contempt for them. I have contempt for social media and even tell my own mother I won’t join when she tells me it would make her so happy if I was on Facebook.


I have observed people who objectively were destroying their lives and yet they themselves were happily in denial.

The clichéd and sadly true "I'm in control, I can stop, everything is fine".

Humans are strange.


Alternatively, it may mean that people are largely hypocritical, and evaluate themselves and other people by different standards.

1. Because people like it. 2. “Social media” is not the right term to describe those apps anymore. There’s nothing social about them - just an algorithm feeding you stuff. True social media aren’t that different from forums - places where you can interact with other people (in either a healthy or an unhealthy way).

Alternative source:

https://www.msn.com/en-us/money/companies/mark-zuckerberg-gr...

Text-only, no JavaScript, HTTPS optional:

https://assets.msn.com/content/view/v2/Detail/en-in/AA1WBSLI...

Simple HTML:

      {
       # article id and an assets.msn.com IP; both may change over time
       x=AA1WBSLI
       ipv4=23.11.201.94
       echo "<meta charset=utf-8>"
       # send a minimal HTTP/1.0 request, keep only the <p>...</p> paragraphs,
       # and strip stray backslashes (octal 134)
       (printf 'GET /content/view/v2/Detail/en-in/%s/ HTTP/1.0\r\n' "$x"
        printf 'Host: assets.msn.com\r\n\r\n') \
        |nc -vvn "$ipv4" 80 |grep -o "<p>.*</p>" |tr -d '\134'
      } > 1.htm
      firefox ./1.htm
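
A rough curl equivalent of the above, for anyone without nc (a sketch only: it assumes curl is installed and that the same endpoint answers over HTTPS when addressed by hostname, which I have not verified against MSN's current setup):

      # hypothetical curl version of the nc pipeline above; assumes
      # assets.msn.com serves this path over HTTPS by hostname
      { echo '<meta charset=utf-8>'
        curl -s 'https://assets.msn.com/content/view/v2/Detail/en-in/AA1WBSLI/' \
          | grep -o '<p>.*</p>' | tr -d '\134'; } > 1.htm
      firefox ./1.htm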

Reminder that Meta funds the Digital Childhood Alliance[1], an "anti big tech" PAC consisting of 50 conservative groups, including Moms for Liberty, Focus on the Family, and Morality in Media, that pushes age verification and the end of anonymity online[2].

The goal is to use the current moral panic to usher in identity verification systems that collect biometrics just to see or share user-generated content online, which is very convenient for companies like Meta and Anthropic[3] who need mountains of biometric and identifying data to train their systems and monetize users.

The other goal is to force all user-generated content on the internet to go through Meta/Anthropic/OpenAI/etc.'s AI-powered moderation systems. That means the companies will get to train on the totality of all user-generated content on the internet in perpetuity and get paid to do so.

[1] https://www.insurancejournal.com/news/national/2025/07/25/83...

[2] https://www.msn.com/en-us/news/technology/opinion-the-child-...

[3] https://x.com/burgessev/status/2021921843192754583


Really, what's the point of asking a CEO anything? Same for a politician. Just get their emails and quit with the circuses.

> In sworn testimony, Zuckerberg said Meta’s growth targets reflect an aim to give users something useful, not addict them, and that the company doesn’t seek to attract children as users.

That’s perjury.

I suppose getting more ad revenue is useful to someone, but not the user.

Of course, some of us warned that project management by A/B testing would lead to amoral if not outright immoral outcomes, but wtf do we know about human nature? Turns out putting a badly made android in charge of a large chunk of culture leads to the near collapse of civilization, which I don’t think any of us would have predicted.


I and others (but not as many as I would have thought) recognized that the switch to the algorithmic feed in 2006 was a fundamental shift in what social media was. But back then I predicted it would destroy Facebook, which was so wrong; really it ended up (partly) destroying Western civilization.

I think people are good at sensing that things are changing, but not how it’d play out. It’s very easy to see it in hindsight and even recognize it’s bad; I don’t think anyone saw how bad it would get. I just hope we don’t lose the ideals of free speech and the early promise of the internet in regulating platforms.


Wall Street has been rewarding morally detached leadership for decades using the language of rationality, math, and science. Ask them what their source of morality is, and their textbook answer is that it's mathematically inefficient.

Capitalism's existence is actively turning the screws on humanity. The screws of Meta are a lot more refined than the ones used by the slave-trade monopoly of the Dutch West India Company, but the screws persist.

But "capitalism" doesn't actually exist as such -- it's just a concept that represents patterns of human behavior that stem from human beings' pre-existing motivations inclinations.

Treating descriptive models as the causal factors behind the things they're describing is a reification fallacy.


Which part is perjury? Can you prove that Mark Zuckerberg doesn’t think his apps deliver something useful to the users? As far as the attracting kids part, well, that’s the entire premise of the trial, no?

> which I don’t think any of us would have predicted.

Skynet from Terminator probably would have been referenced by almost everyone, though, as an analogy?


> Turns out putting a badly made android in charge of a large chunk of culture leads to the near collapse of civilization, which I don’t think any of us would have predicted.

I can't tell if this is supposed to be commentary on Zuckerberg or capitalism/free-market-based economies itself.


If this is a real litigation process, I wonder what conditions Meta will need to accept for the plaintiffs to let it go.

I have been snickering at the term "grilled" for years now. All of the aggressive bullshit language being used to retell these accounts is nonsense: NOTHING HAPPENED. Nobody is held accountable; they just got nagged at in front of the class for a bit.

If you asked me, "Hey do you want to make billions of dollars breaking the law, but you might have to sit in front of some cameras every few years and answer fake questions in front of people with dementia?", then I could understand someone thinking that's easy money.


Yeah, but those Reels are getting Thousands of Likes and Shares!

That pays the creator probably thousands of pennies!

"The memes will continue"

This was the funniest / most evil testimony I’ve seen, in any case, in a while.

Couldn’t find it in a quick skim of this article, but he testified they don’t care about increasing user engagement (an absolute lie; increasing use is goal #1 and there’s always a lead OKR tied to it), and they kept pulling up emails about it, up to and including 2024.


Basically, "I'll lie right to your face for my entertainment because this is a circus and we're all clowns. Now make with the tech tax cuts."

Mark Parilla haha


Fascinating how differently Zuckerberg's testimony is portrayed in the WSJ vs. by Rolling Stone.

@dang: at least link the RS story instead of the paywall, please.

Oh wow they’re really holding him to account by asking some interesting questions then letting him get back to it.

/s


Agreed - such useless pageantry. At least with meat, 'grilling' changes it.

[flagged]


Good thing there are entire fields of medical experts working to understand the exact mechanisms and harms, and we're not leaving it up to you.

Not to mention how often we keep catching these companies with explicit policies to make people never want to leave the app.


> Good thing there are entire fields of medical experts working to understand the exact mechanisms and harms, and we're not leaving it up to you.

No, that doesn't work. Harm is a normative concept, not an empirical one, so there's no role for "expertise" to play in defining it. Medical experts can describe mechanisms of causality, and their associated effects, but deciding whether those effects constitute harm is something that actually is up to each individual to decide, since it is an inherently subjective evaluation.

> Not to mention how often we keep catching these companies with explicit policies to make people never want to leave the app.

Yes, and attesting one thing while doing another is certainly something they can be held accountable for -- perhaps even legally, in some cases. But this attempt at treating social media as equivalent to physically addictive chemicals is pure equivocation, and making claims like this actually undercuts the credibility of otherwise valid critiques of social media.

At the end of the day, this is a cultural issue, not a medical one, and needs to be solved via cultural norms, not via political intervention based on contrived pretenses.


Just to make sure I wasn't misunderstanding you, I double checked the meaning of "normative." This is the first result from google:

"establishing, relating to, or deriving from a standard or norm, especially of behavior."

And other sources have something similar. I'm interpreting your comment as saying "(psychological) harm is subjective, and because it can not be measured empirically, it's not possible to have expertise on this topic."

Fortunately, there are real world consequences that can be measured. If I take an action that makes many people say "ow!" and we acknowledge that expression as an indicator of pain, even though I can't measure the exact level of pain each person is experiencing, I can measure how many people are saying "ow!" I can measure the relationship between the intensity of my action, and the number of people that respond negatively. There's plenty of room for empiricism here, and a whole field of mathematics (statistics) that supports handling "normative" experiences. They even have a distribution for it!

The foundation of law is not scientific exactness or scientific empiricism. It is the mechanism by which a state establishes norms. A law against murder does not stop murder, but it does tell you that society does not appreciate it.


> The foundation of law is not scientific exactness or scientific empiricism. It is the mechanism by which a state establishes norms.

Exactly. So it sounds like you're agreeing with me that qualification of a particular effect as "harm" is not a matter of "medical expertise", but is rather a question of subjective norms that is in fact on the opposite side of the is-ought gap from the side at which expertise is applicable.

> A law against murder does not stop murder, but it does tell you that society does not appreciate it.

Well, not exactly. This presumes that "society" in the abstract (a) actually has a general consensus on the question, and that (b) the rules imposed by the legal system reflect that broad consensus, rather than reflecting the values or intentions of the people administering the legal system, without necessarily aligning with those of the general public.

There are a lot of questions that do have broad consensus across society, but also a lot of subjective questions that different people answer very differently. And I think that the level of consensus that actually exists in terms of considering things causing physical injury or pain as "harm" is far, far greater than the level of consensus on treating anything that causes emotional stress as "harm".

I don't think that the "negative response" criteria that you're articulating is sufficient to reveal an underlying normative consensus: I would not presume that most people would equate harm with any kind of negative reaction. For example, I would personally not consider something harmful merely on account of being annoying, insulting, or even morally questionable (though there's often overlap in the last case).


They are saying that a judgement of what qualifies as harm is something like a judgement of what is good, or what is right or wrong. That’s not the same thing as evaluating whether something causes pain. You can measure whether something caused pain, sure. (Well, the sort of limitations you mentioned in measuring pain exist, but as you said, they are not a major issue.)

“Harm” isn’t the same thing as “pain”.

I would say that when I bite my finger to make a point, I experience pain, but this doesn’t cause me any suffering nor any harm. If something broke my arm, I claim that this is harm to me. While this (“if my arm were broken, that would be harm to me”) might seem like an obvious statement, and I do claim that it is a fact, not just an opinion, I think I agree that it is a normative claim. It is a claim about what counts as good or bad for me.

I don’t think normative claims (such as “It is immoral to murder someone.”) are empirical claims? (Though I do claim that they at least often have truth values.)


I'd go beyond that and even say that one might consider something harmful, but be willing to endure a certain level of harm in pursuit of something of higher value.

For example, I once asked a smoker why she smoked, and the response was "because I love it" -- when I asked if the enjoyment was worth the health risks, she said "yes; I never planned to live forever". She was making a conscious decision to seek short-term pleasure at the cost of potential longer-term damage to her health. At that point, there wasn't really anything left to debate.


I didn’t mean to imply that the harmful effects of something can’t be worth it for the beneficial effects of that thing. Yeah, if someone is trapped, doing something that frees them and also breaks their arm, may well be an appropriate action for them to take.

According to Wikipedia

> Addiction is ... a persistent and intense urge to use a drug or engage in a behavior that produces an immediate psychological reward, despite substantial harm and other negative consequences

Immediate psychological reward = dopamine hits from likes and shares

Harm and other negative consequences = anxiety, depression, low self-esteem, FOMO, less connection with friends and family, etc...

Food is not as easy to make addictive because the psychological reward diminishes as you get full. The exception to this is people with an eating disorder, who use eating as a way to cope with or avoid difficult feelings.


High-sugar food is addictive, as you don't feel full fast enough when consuming empty calories.

And yet somewhere around that 6th donut it will hit and you will stop.

These companies all hired psychologists to help design systems that maximize dopamine release and introduce loops that drive compulsive behavior.

Besides, they aren’t making great products and haven’t for some time. Is anyone happy with Facebook as a product? Does anyone who used Instagram before it became a shittier TikTok / the ultimate ad medium think it’s a better product today?


>These companies all hired psychologists to help design systems that maximize dopamine release and introduce loops that drive compulsive behavior.

This seems like the important bit: these systems weren't designed just for enjoyment. They hired experts in habit formation.

I talked to a friend recently about this, and she described it as feeling hollow. When she stayed up all night playing a game she really liked, she enjoyed herself and might have had regrets about giving up some sleep, but didn't necessarily regret the time spent. She found it nourishing in some way, similar to feeling compelled to keep reading a great book, or even eating an extra bit of a particularly great dessert.

But at the same time, she would describe staying up until 3-4am regularly scrolling TikTok and would just feel awful the next day. She didn't want to be up doing it, it wasn't actually really fun or enjoyable, but she just...did it anyway.

I'll also note that there are games that are designed for maximum addictiveness that probably also leave you feeling "hollow" in the way that TikTok does, too, so this isn't necessarily to say that games are universally different. But it's clear that there's a psychological mechanism that some companies use in their design that is intended to hijack, rather than just provide "fun" or entertainment.

I don't know what we do about that, or how/if it should be regulated in some way, but it's pretty clear that there is a real difference.


There are people with unhealthy relationships with both food and video games, and I'm comfortable saying they suffer from addiction.

So then do you punish the chefs for making their food too appealing?

If the monopolist chef is deliberately adding addictive ingredients that cause health problems, I think, yes, they're the ones to punish or address the problem with.

Facebook does not have a monopoly on social media. (He says, writing on a competing social media site.)

> addictive ingredients that cause health problems

Like sugar? Are we going to make candy illegal now? Through the court system, retroactively, with no legislative mandate?


The law takes intent into consideration; candy makers are not intending to make someone addicted to their product. This lawsuit is showing that the intent behind certain user-experience features was to addict users, not just to make it a sweet and nice place to be.

We may require high-sugar food to be labeled like cigarettes, cap the maximum portion size available (the largest drink can be 500ml), put more tax on it, advertise against it, ban it in schools, and ban advertisements in children's programs/movies.


Well, think of it this way. You could make a meal out of healthy, fresh, whole foods cooked expertly. Or you could give someone a bag of Doritos. Nobody on "My 600lb Life" got there because they were eating great food. They were eating a lot of bad food that doesn't fire satiety signals in their head.

Addictive and Good are not exactly the same thing -- something can be objectively good and not addictive, and vice versa.


This feels like a false equivalence and a slippery slope fallacy.

Clearly things like cigarettes and hard drugs are bad and need very heavy regulation, if not an outright ban. There are lots of gray areas, for sure, but that doesn't mean we shouldn't take things on a case-by-case basis and impose reasonable restrictions on things that produce measurable harm.

Whether or not social media does produce that measurable harm is not my area of expertise, but that doesn't mean we can't study it and figure it out.


Oddly, the countries that don’t do this have far better outcomes.

Imagine being allowed to have a beer outside, or after 2 am, oh the humanity. Surely such a society would devolve immediately into chaos.

What if the government wasn’t meant to be a strange parent that lets you kill your kids but feels having a beer outside is too much freedom? It might just lead to being the happiest country on earth.


The other person said smoking and hard drugs, and you said a beer outside after 2am. Those aren't the same thing!

> Oddly the countries that don’t do this have far better outcomes

Go on


For example, smoking tobacco in Japan… wait a minute


> Imagine being allowed to have a beer outside, or after 2 am, oh the humanity.

Where do you live that this is not possible?

(I know you’re speaking loosely, i.e. you mean “where I live, bars have to stop serving alcohol at 2 AM”, but it’s so loose that there’s no argument made here. I figured I’d touch on another aspect leading to that; other replies cover the rest. E.g., the 2 AM law isn’t about you, it’s about neighborhoods with bars.)


Illinois sells liquor in grocery stores, but not after 2am. Or maybe it was a local ordinance. The town next to me was 1am; after that you couldn’t buy liquor at the 24-hour grocer. So it's not just bars.

It’s illegal to drink in public in Washington state [1]. I believe this is the case in most places in the United States. Las Vegas is a notable exception.

[1]: https://app.leg.wa.gov/RCW/default.aspx?cite=66.44.100


Can't tell if you're being earnest or pedantic (if earnest, I grew up in a poorer neighborhood than HN so maybe I'm just more familiar with the solution. The Wire has a scene that'll explain it better than I: https://www.youtube.com/watch?v=GV9MamysCfQ)

> This feels like a false equivalence and a slippery slope fallacy.

The slippery slope fallacy is purely a logical fallacy, meaning that it's fallacious to argue that any movement in one direction logically entails further movements in the same direction. Arguing that a slippery slope empirically exists -- i.e. that observable forces in the world are affecting things such that movement in one direction does manifestly make further movement in that direction more likely -- is absolutely not an instance of the slippery slope fallacy.

A concrete instance of the metaphor itself makes this clear: if you grease up an inclined plane, then an object dropped at the top of it will slide to the bottom. Similarly, if you put in place legal precedents and establish the enforcement apparatus for a novel state intervention, then you are making further interventions in that direction more likely. This is especially true in a political climate where factional interests that are actually pushing for more extreme forms of intervention manifestly are operating. Political slippery slopes are a very observable phenomenon, and it is not a fallacy to point them out.

> Whether or not social media does produce that measurable harm is not my area of expertise, but that doesn't mean we can't study it and figure it out.

It's true that the fact that it isn't your area of expertise doesn't mean we can't study it and figure it out.

Rather, the thing that does mean we can't study it and figure it out is that what constitutes "harm" is a normative question, not an empirical one, and the extent to which there is widespread consensus on that question is bounded -- the more distant we get from evaluating physical, quantifiable impacts, and the more we progress into the intangible and subjective, the less agreement there is.

And where there is agreement in modern American society, it tends in the opposite direction of what you're implying here: apart from very narrow categories, most people would not consider mere exposure to information or non-physical social interactions to be things that can inflict harm, at least not to a level sufficient to justify preemptive intervention.


Okay, it's not a slippery slope, but it's something similar (that's why I said "feels like"). He's trying to establish a continuum of things that have a variety of addictive properties in an attempt to discredit the whole idea of addiction ("Don't try to make your video game fun, or some people may become addicted").

> apart from very narrow categories, most people would not consider mere exposure to information or non-physical social interactions to be things that can inflict harm

That's an extremely disingenuous interpretation of social media. Huge straw man. We're talking about infinite-scrolling, A/B-tested apps that are engineered to keep eyeballs on the screen as the first and foremost priority, for the primary benefit of the company, not the user.


As far as I can tell, even in the US, the most litigious nation in the world, you can't SUCCESSFULLY sue e.g. a cigarette maker or an alcohol maker for making you addicted.

(I emphasize successfully because of course you can sue anyone for anything. The question is which lawsuits are winnable, based on empirical data about which lawsuits were won.)

If you could, that would be the end of those businesses. The addiction is beyond dispute, and if every alcoholic could win a lawsuit against a winemaker, there would be no winemakers left.

In that context it seems patently absurd that you could sue Facebook for making you addicted.

It would be absurd to create a law that makes it possible without first making such laws for alcohol and cigarettes.

It's also patently absurd that we (where "we" here means leftist politicians) are allowing open drug dealing in populated areas of San Francisco, and yet this is what we discuss today, not politicians' systemic failure to fix easily fixable problems that we already have laws against.


Those companies are required to publicise the addictive nature of their products, and required to advertise services to aid those addicted.

Facebook consistently argues they are not a source of harm, and do none of that.

If the consumer isn't proactively being informed, then no, litigation isn't patently absurd.

"Informed consent" is what you're missing, here.


Since we're being condescending here: what you're missing is the absence of laws making a given activity illegal.

As far as I know there's no law that you could use to claim that Facebook did something illegal based on some notion of making addictive products.

Just like there are no laws you could use to claim a winemaker did something illegal based on some notion of making addictive products.

And I think it would be absurd to make what Facebook does illegal before we make what winemaker does illegal.

And we tried with winemakers. Educate yourself on the dark times of Prohibition. (You opened the condescension doors.)

> Those companies are required to publicise the addictive nature of their products, and required to advertise services to aid those addicted

I've never seen such advertising so I suspect you pulled that factoid out of your ass. Easy for you to correct me: laws have numbers, cite one.

If there is such a law for, say, alcohol, I wouldn't be opposed to such requirements for Facebook.

I mean, it obviously would end up as an ineffective annoyance that doesn't deter or fix anything, like cookie banners, but have at it.

So yeah, it's still patently absurd to sue Facebook claiming addiction.


> I've never seen such advertising so I suspect you pulled that factoid out of your ass. Easy for you to correct me: laws have numbers, cite one.

More than one, but how about we have the FDA do the informing here, as I've apparently pissed you off:

https://www.fda.gov/tobacco-products/labeling-and-warning-st...


> Okay, it's not a slippery slope, but it's something similar (that's why I said "feels like"). He's trying to establish a continuum of things that have a variety of addictive properties in an attempt to discredit the whole idea of addiction ("Don't try to make your video game fun, or some people may become addicted").

But he actually is correct. Using the same term to describe the effects of ingesting biologically active chemicals and the effects of emotionally engaging activity -- which in this case mostly consists of exposure to information -- absolutely is disingenuous equivocation. People in this very thread are comparing Instagram with ingestion of alcohol or tobacco products, and that absolutely is a prevarication.

It's not unreasonable to observe the course of these debates, and suspect that the people invoking the language of addiction are doing so as a pretext for treating what is actually a cultural issue instead as a medical one, so as to falsely appeal to empirical certainty to answer questions that actually demand normative debate.


Food? Some products sold as food are most certainly addictive.

Video games? As just one example, Candy Crush is a vacuous waste of anyone's time and money, with plenty of tales of addiction.

Books? People used to think novels were addictive and bad news: https://archive.is/WDDCH


Diluted only if one doesn’t know the definition of addiction

But the intent is to make as much money as possible with zero care for the users' well-being.

I worked at Tinder, for example, and you would think that company, in an ethical world, would be thinking about how to make dating better, how to get people more matches while spending less time on the app. Nope, we literally had projects called "Whale", and the focus was selling absolutely useless and even harmful features that generated money.


I am addicted to Hacker News. Who can I sue?

Indeed. As a wise man once said:

"Who is to say what's right these days, what with all our modern ideas and products?"


So I think two things:

1. It's ok to want certain outcomes as a society. Like, maybe this is a little conservative or whatever, but we can't just stand by and be like: well, everyone's dumb, no one's having sex, people are dying, healthcare costs are spiking, there goes our economy. Like, I wish we would legalize smoking again, but I understand why we don't.

2. I think one could make an argument that over-optimization is immoral. This Paula Deen video really made me sort of understand the excess that leads to the obesity epidemic. She takes what used to be a dessert, wraps it in like three other desserts, fries it, and then that's now one dessert with twice the calories:

https://www.youtube.com/watch?v=HYbpWcw6MfA

But like, companies are trying to architect food to fit more fat and sugar in. Instagram doesn't go to people and ask them what they want; they study behavioral psychology to get people to use their products more. At some point, letting giant multinational corporations do whatever they want to hack people's brains is the kind of nihilism and absence of free choice that you're trying to avoid.

Monopolies are bad. Over-optimization is bad. It should be ok for us as a culture to reject micro-transactions. It's ok for us to have a shared morality, even if that means Epic Games makes a little less money on Fortnite.

I think one measure should be: how much do people wish they did a thing less?

https://fortune.com/well/article/nearly-half-of-gen-zers-wis...

I used to watch like 6 hours of TV a day. Loved every minute of it. Same thing with video games. Same thing with my favorite restaurant. I don't feel the same way about smoking, or the M&Ms I buy in the checkout aisle of the grocery store.


I can't speak for others' definition of addiction, but Facebook has been pretty bad about artificially inflating users' activity: outright fake notifications, even spamming people's 2FA phone numbers.



