Interesting thoughts! And while I agree it’s not an easy problem to solve, I think the way these things are designed plays a much bigger role than is being acknowledged. I’ve probably said this before, but comparing LLMs to something like EverQuest is a complete mischaracterization of what we’re dealing with here. Character.AI is built to simulate conversation, tailored to the user. It’s not a static environment. Its responses are designed to deceive, to simulate empathy, which increases the odds of emotional attachment. And that attachment boosts engagement, which is exactly what these platforms are built to maximize. The deception isn’t incidental; it is the product.
Most of these tech platforms, from social media to LLMs, are designed to maximize engagement, even when that means addiction, anxiety, or delusion. To say they aren’t responsible for the consequences is dishonest. It’s like selling a car with no brakes and blaming the crash on the driver. Platforms shape behavior; there is no way around this. They’re not neutral tools.
Nobody wants Facebook to be an arbiter of truth. The problem is that they already are. They choose what gets amplified, what gets buried, and what gets monetized. They control the flow of information: what’s seen, what’s known, and who gets to engage with it. They already shape the truth. And they’ve been criminally irresponsible with user data. This is why we (used to) enforce antitrust law in the United States: to prevent any one entity from shaping the discourse or the elections that are integral to a functioning democracy, and that is exactly what happened with Cambridge Analytica.
The idea that platforms like Facebook aren’t responsible for what users post is a design flaw from ’90s neoliberal legislation. Section 230 of the Communications Decency Act gave tech companies broad immunity from liability, allowing them to profit from user-generated content without taking any responsibility for it. It was intended to avoid stifling innovation, but what we got instead was consolidation, manipulation, and zero accountability. If a platform’s algorithm is built to boost outrage, misinformation, exploitation, or worse because it drives clicks, then the platform is responsible for those outcomes. You don’t get to profit from shaping public discourse and then claim neutrality.
“In November 2001, Shawn Woolley shot himself in front of his computer while playing EverQuest. Woolley had been diagnosed with depression and schizoid personality disorder. His mother believed the game caused his death and became an anti-video game activist.”
You mention that you find the video game example and the Character.AI incident comparable — but I think AI characters are meaningfully different from NPCs because they are interactive and respond to YOU.
People are building relationships with AI chatbots that feel real. And if that chatbot suggests it’s okay to kill yourself, to me that’s very, very different from the age-old “games are bad for you” argument. This technology is categorically different from video games, and so are the risks.
Glad you just said this because I was about to comment something very similar. Video game characters ultimately have an upper limit on how far you can "speak" to them – at least for the moment, nearly every NPC in every game has a limited number of lines of dialogue, which will eventually be exhausted by the player and betray the NPC as pre-programmed. But an LLM can generate text "conversation" essentially forever, perfectly tailoring its responses to the user.
It's not speech, but it's not scripted, either. It sits in a weird, uncanny space in between the two.
I’m of two minds on it. I’ve written a lot about the ways it can feel real. I do think it can be intense, a different category of very real love. There’s something called the fictosexual paradox, though: fictosexuals aren’t delusional. Nor are people in love with dolls or objects. When surveyed, they know their beloveds are not real in the traditional sense.
But even though I hold that these emotions shouldn’t be belittled and are perfectly real, I also don’t think they are physically, materially real. It is like Shawn Woolley. It is a media literacy issue or a mental health issue. It is a great tragedy if you find out you’re schizophrenic (etc.) through OpenAI’s products, but by the same token I don’t know if that’s their fault. It seems much more legally and ethically complicated than the discourse suggests.
But dolls don’t talk back. An AI is not an imaginary friend. It is functionally real (to those who suspend their disbelief).
The thing about Facebook is not just that they are a vector of misinformation; it's that their business model has destroyed the local outlets that are most qualified to counter that misinformation. And their algorithm promotes that misinformation as a way to drive traffic.
I don't think they should be held financially liable, but I also don't think Zuckerberg is some kind of hero from *Atlas Shrugged*.
Nor is he some kind of First Amendment hero. Meta could not exist in this form if not for the 1996 law. That law was created to promote the development of fledgling internet companies, not to protect trillion-dollar behemoths that have thrown thousands of journalists out of work.
I predict that the first and only "AI safety" law will be one that protects AI creators from all liability and also cuts their taxes.
I think you’re right on that. I think the issue I had was the implication that individuals aren’t responsible for how they respond to misinformation, so I overcorrected. I have a deep-seated allergy to a lack of personal accountability on selective issues, and I’m always curious where it comes from and why I feel so strongly about it.
I feel the same way on personal responsibility. But as a recovering alcoholic I know how easily an obsession can wreck your sense of agency.
From 2021 to 2023 I managed a property for low-income tenants in a rural area. It was the first time since high school that I spent a substantial amount of time among people without a college degree.
I got to see a lot of pathology -- be it from chemicals or just bad relationships -- and a widespread lack of holding your shit together.
On one hand, it made me more empathetic about how challenging the modern world can be for people on the left side of the bell curve.
On the other, I learned that accepting people's excuses only gets you so far. At some point you have to bring the hammer down. The best way to do that is to make it clear from the very beginning that the hammer *will* come down if they don't behave.
I live in a different time zone, I get to comment first, yay!
The story of the boy who took his life is so sad and terrifying. I've nothing new to add here: kids and adults need to get away from screens.
And, about the judge's comment on AI, I don't believe our laws have caught up with AI or the internet at large (doxxing, revenge porn, privacy).
I'm a small-government liberal conservative, but I feel *something* has to be done about this stuff.
Anyway, Katherine, I always enjoy your writing. :)
P.S.
Thanks for bringing up "the computer room" in a previous post or Note. I think of my office as the computer room now, and it has helped me reduce screen time.
It is terrifying, but I also think the bigger story here is why kids are committing suicide so young, and why that has been true since my own childhood. I think the AI is secondary.
Yes! I love the idea of a computer room. I missed my childhood one, so I created a '90s room in my house. That's where I stream from.
Agree that something has to be done, but what?
That’s a really good point. Why are kids doing that at such a young age? I don’t have the answer.
I do think when it comes to technology the first step is literacy and awareness.
It looks like the surge happened between 2007 and 2008. Might be worth investigating.