They Knew. They Amplified. Now They’re Liable.
I remember back in the fall of 2006, when Facebook launched publicly to anyone aged 13+ with an email address, I called a staff meeting for my small marketing team at Universal, most of whom are still my favorite people. After letting them rant and rave about issues at work, I impressed upon them that they should all open Facebook accounts because I believed it was going to be an important tool, or channel, for our jobs. They did. I did.
Fast forward twenty years through a myriad of bougie posts, vacation humblebrags, food pics, dick pics, teen suicides, doomscrolls, Reels, vertical videos, influencers, Instagram, TikTokers, YouTube Shorts, body shamers, cyberbullying, live-streamed school shootings, manifestos, recruitment, indoctrination, and a country divided.
Was it worth it? An unregulated bomb in our hands, with Big Tech hiding behind a flimsy law to promote hate and sensationalism.
For nearly two decades, social media giants have operated like digital gods—untouchable, unaccountable, and unbelievably profitable. They built empires on engagement, monetized attention like oil, and hid behind a legal shield that was never designed for what they’ve become.
That shield? Section 230.
Let’s talk about it—because it’s finally starting to crack.
Section 230: The Original Sin
Back in 1996, Congress passed Section 230 of the Communications Decency Act. The idea was simple: platforms aren’t liable for what users post.
Makes sense—for message boards.
Not so much for trillion-dollar AI-driven behavioral manipulation machines.
Section 230 essentially said:
“You’re not the publisher. You’re just the platform.”
And for years, companies like Meta, Twitter, TikTok, and Snap Inc. repeated that line like a legal rosary.
But here’s the problem:
They stopped being platforms a long time ago.
California Fires the First Shot
In California, courts have begun to draw a line in the sand: If your algorithm is actively recommending harmful content—especially to minors—you’re no longer neutral.
Cases against Meta and others argue that:
The algorithm amplifies harm
The interface is addictive by design
The target is children
And suddenly, this isn’t about “user-generated content” anymore.
It’s about product design.
Courts are increasingly saying:
“You didn’t just host it. You pushed it.”
That’s a big deal.
Because once you move from hosting to recommending, Section 230 starts to wobble.
The result: A California jury awarded a 20-year-old woman $6 million after she alleged that Meta, YouTube, and TikTok harmed her mental health.
New Mexico: When It Gets Real
Now to New Mexico.
The New Mexico Attorney General’s Office filed lawsuits alleging that platforms knowingly exposed minors to:
sexual exploitation content
predatory behavior
algorithmically surfaced harm
Let that sink in.
This isn’t theoretical anymore. This is: intent, awareness, and repeated exposure.
And the argument is brutally simple:
If your system is designed to optimize engagement—and harm drives engagement—then harm isn’t a bug. It’s a feature.
The result: The New Mexico social media lawsuit ended in a $375 million verdict against Meta.
The Algorithm Is the Smoking Gun
For years, Big Tech hid behind this idea:
“We don’t create content. Users do.”
But now we know better.
The real product isn’t the content. It’s the algorithmic curation layer.
That’s where the decisions happen:
What gets seen
What gets boosted
What gets buried
What gets pushed to a 13-year-old at 2am
And increasingly, courts are treating that layer as editorial behavior—not passive hosting.
Add to that:
infinite scroll
dopamine-driven UI loops
and now, AI-powered feeds and wearable interfaces (hello, smart glasses)
…and you’ve got something far more intentional than a bulletin board.
You’ve got a behavior-shaping machine.
Will They Change? Or Just Lawyer Up?
Let’s be honest.
These companies don’t pivot on ethics. They pivot on liability and revenue risk.
So, what happens next?
Yes, they will appeal. Aggressively. Expect this to climb toward higher courts, potentially even the Supreme Court of the United States.
Yes, they will tweak optics. More “safety features,” more parental controls, more PR campaigns about “digital wellbeing.”
No, they won’t fundamentally change—unless forced.
Because the business model is the problem.
Engagement = profit
Outrage/addiction = engagement
Do the math.
“What About Parents?”
Of course, responsibility isn’t binary.
Parents should monitor
Kids face peer pressure
Society feeds the validation loop
All true.
But let’s not kid ourselves.
No parent is reverse-engineering a billion-dollar recommendation engine at the dinner table.
No 14-year-old is outmaneuvering AI optimized by thousands of engineers and decades of behavioral data.
This is asymmetrical warfare.
Blaming parents alone is like blaming drivers for a car designed to crash.
The Uncomfortable Truth
We’re not dealing with neutral platforms. We’re dealing with systems designed to maximize psychological capture.
And when those systems:
disproportionately harm youth
knowingly surface damaging content
and optimize for the very behaviors we claim to protect against
…it stops being negligence.
It starts looking like intent.
The Downfall
This is the moment.
Not the end of social media—but the end of its legal free pass.
The question isn’t whether these companies will fight back. They will.
The question is:
Will the courts finally recognize what these platforms have become?
Not platforms. Not publishers.
Architects of behavior.
And once that clicks—legally and culturally—the game changes.
Over the past year, I closed my Twitter account, my Facebook account and recently TikTok. I have never been happier. Many of my original staff are still on Facebook. I still miss them, but not the algo of hate.
About The Author
Curt Doty is a former studio executive and award-winning creative director with deep leadership experience across the entertainment and branding industries. Ten years in television. Ten years in movies.
As the founder of CurtDoty.co, a creative consultancy, Curt has led integrated marketing, multi-channel storytelling, branding, identity, and user experience initiatives for a diverse roster of clients.
Over the past 15 years, Curt has leaned into innovation, leading R&D projects at Apple, Toshiba, and Microsoft, and pioneering interactive content.
Today, Curt’s work also explores the intersection of AI and entertainment. A sought-after fractional leader (CCO, CMO), speaker, and AI educator, he focuses on demystifying AI for creatives and executives alike.
Curt recently launched the CLOWD AI Film Festival. Check it out here and be part of this growing community.
Curt is a sought-after public speaker, having been featured at Mobile Growth Association, Mobile World Congress, App Growth Summit, Promax, CES, CTIA, NAB, NATPE, MMA Global, New Mexico Angels, PRSA, EntrepeneursRx, Digital Hollywood, SHRM, Streaming Media NYC, and Davos Worldwide. Download his speaker press kit here.
Through public speaking, keynotes, and podcasts, Curt continues his role as a visionary voice in the future of creativity. He is now a board member of The Human AI Innovation Commons (Encoding Equity Into AI-Generated Prosperity), a framework for ensuring that the innovations arising from human–AI collaborations benefit humanity broadly, not just corporate shareholders.

