Modern platforms know more about their users than most doctors, priests, or therapists. Through relentless behavioral surveillance, they collect real-time information about users’ moods, vulnerabilities, preferences, financial stress, and even mental health crises. This data is not inert or passive. It is used to drive engagement by pushing users toward content that exploits or heightens their current state.
If the user is a minor, a person in distress, or someone financially or emotionally unstable, the risk of harm is not abstract. It is foreseeable. When a platform knowingly recommends payday loan ads to someone drowning in debt, promotes eating disorder content to a teenager, or pushes a dangerous viral “challenge” to a 10-year-old child, it becomes an actor, not a conduit. It enters the “range of apprehension,” to borrow from Judge Cardozo’s reasoning in Palsgraf v. Long Island Railroad (one of my favorite law school cases). In tort law, foreseeability or knowledge creates duty. And here, the knowledge is detailed, intimate, and monetized. In fact, it is so detailed that we had to coin a new name for it: surveillance capitalism.
From “Rethinking Platform Liability in the Age of Algorithmic Harm” at Music Technology Policy.


Well, it’s one thing to sell ads based on interest in, say, recording gear. It’s another thing entirely to detect despair in someone’s online behavior and feed them dopamine-drip content that keeps them spiraling, just because it boosts engagement metrics. That’s not an unintended externality; it strikes me as quite purposeful and engineered.
But it gets worse with children. They’re still developing impulse control, identity, and self-worth. I put myself in the shoes of a 10-to-17-year-old today, given all my self-misgivings at that age … I was a mess … Thank goodness I came up in the 70s. I feel really bad for these kiddos. Five years ago, “The Social Dilemma” revealed that some of the original architects of social media platforms don’t allow their own kiddos to even have accounts. That seems prescient, or at least like a significant data point that goes under-reported, forgotten, or buried, depending on where you set your cynicism control.
If a corporation profits from behavioral data, then human vulnerability becomes a commodity to be bought and traded. In that paradigm, sadness, insecurity, loneliness, obsession, and confusion aren’t problems to solve; they’re assets to be harvested, exploited, and of course … OF COURSE … monetized. No one opted in with informed consent, either. The canard that “it’s in the terms of service” doesn’t really hunt when the terms are encoded in convoluted, verbose legalese thousands of words long, such that no one really knows what they’re agreeing to. That’s not what I would consider “informed”; it’s just legal theater. And anyway, no one really reads that shit at all … that’s why “TL;DR” is a thing.
Platforms aren’t dumb pipes … even if they claim to be. They curate, recommend, and target with incredible repeatability. That moves them from “neutral intermediary” to active participant. And in most areas of law, especially torts, as the article points out, if you take an action that foreseeably causes harm, and especially if you profit from it, you’re liable. I mean … I guess … until you’re not. Seems like there is “the law” and then there is “the law that gets enforced.”
But … you can’t drive a truck into a crowd and say, “The algorithm made me do it.” Why should platforms get a free pass when they feed a teenager with body dysmorphia a steady stream of “thinspiration” videos until they end up in the ER?
You know … once upon a time (1996), this whole commercial interwebs thing was just getting off the ground and Section 230 came into being, but I fully contend that it has outlived its usefulness. Think back to the 90s: BBSes, web forums, and USENET groups. NOW … think about recommendation engines running on psychographic data, optimized by AI, and monetized through engineered behavioral addiction. It makes 1996 seem quaint by comparison.
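Just to make that contrast concrete, here’s a minimal sketch in Python. The “despair_score” psychographic signal and the per-post “distress_lift” are names I made up purely for illustration; this is nobody’s actual production code. The point it shows: an engagement-optimized ranker, unlike a 1996-style chronological feed, will surface exploitative content for a distressed user by default, because nothing in its objective tells it not to.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    timestamp: float        # seconds since epoch
    base_engagement: float  # predicted engagement for an average user
    distress_lift: float    # hypothetical: extra engagement this post gets
                            # from users inferred to be in a distressed state

# 1996-style feed: newest first, identical for every user.
def chronological_feed(posts):
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-optimized feed: rank by predicted engagement for THIS user.
# 'despair_score' (0.0 to 1.0) stands in for a psychographic signal
# inferred from behavior. The objective only maximizes engagement;
# there is no term that penalizes exploiting a vulnerable state.
def engagement_feed(posts, despair_score):
    def predicted_engagement(p):
        return p.base_engagement + despair_score * p.distress_lift
    return sorted(posts, key=predicted_engagement, reverse=True)

if __name__ == "__main__":
    posts = [
        Post("friend-vacation-photos", timestamp=1100.0,
             base_engagement=0.30, distress_lift=0.00),
        Post("recording-gear-review", timestamp=1000.0,
             base_engagement=0.35, distress_lift=0.00),
        Post("thinspiration-compilation", timestamp=900.0,
             base_engagement=0.25, distress_lift=0.60),
    ]
    # Same posts, two very different feeds:
    print([p.id for p in chronological_feed(posts)])
    # -> ['friend-vacation-photos', 'recording-gear-review', 'thinspiration-compilation']
    print([p.id for p in engagement_feed(posts, despair_score=0.9)])
    # -> ['thinspiration-compilation', 'recording-gear-review', 'friend-vacation-photos']
```

In a real system the equivalent of that despair multiplier isn’t hand-coded; it’s learned, because distress genuinely does predict engagement. Which brings me to the pushers.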
This will completely date me, but back in the day “pushers” used to give away the first hit or two of junk in anticipation of getting a “junkie” who would become a lifelong customer. Well, really … what’s the difference? I will put forth my own hypothesis: there is no meaningful difference, except that the pusher on the street corner didn’t have a data center full of PhDs and psychometric models fine-tuned to one’s insecurities.
One of the biggest cons of the whole thing is the creeping normalization … I mean, it’s just what “being online” is … innit?