Your Cartoon Girlfriend, Brainwashing, and Hyper-Targeted Advertising
thought digest, 08.21.2024
AI as a brainwashing tool. At our last in-person book club here in Chicago, someone brought up the idea that AI companions might be used for propaganda. Think about it: if you form a “real” relationship with a chatbot (i.e., ambiently feed it data about yourself around the clock), wouldn’t it be the perfect vector for something like that?
While that’s plausible—possibly even likely—I'm more concerned about the same thing happening with hyper-targeted advertising.
Think about Avi Schiffman's Friend, the always-on pendant that connects to your phone and texts you about your immediate environment, as though it's an invisible "friend." The dystopian part of that, at least to me, is not a purchasable imaginary friend, but rather the obvious surveillance aspect. I doubt that people like Schiffman will end up working with the United States government to deploy increasingly intrusive social credit scores, though your “friend” might eventually become a highly skilled political canvasser.
However, I can see him securing a deal with a large corporation like Nestlé or Meta, resulting in more pervasive and persistent advertising under the guise of companionship.
We're already seeing a version of this play out with TikTok.
Years ago, I wrote an article for Tablet about how TikTok might be the perfect psychic.
Its algorithm seems to have an uncanny ability to predict and serve content that resonates deeply with each user. This isn't just a coincidence, as we've all learned: it's the result of extensive data collection and analysis. TikTok tracks not just what you like and share, but how long you linger on each video, what you rewatch, and even the content you create. It's as if the app can read your mind, anticipating your interests and desires before you're even consciously aware of them.
There's a way in which it seems "mystical," almost supernatural in its intuition. That's what makes the actual psychics on the app feel so uncanny. It feels like the app knows which readings will resonate most deeply with you, because it kind of does. Not through supernatural means, but through sophisticated data analysis.
I’m curious about how all of this is going to play out in the long term.
Complicating this is the gap between how quickly tech is advancing and how quickly our understanding of how this stuff actually works is degrading. You wouldn’t believe how many people I personally know who are convinced that AI is somehow ensouled. Folks, it’s absolutely not, and you’re never going to convince me that it is.
Anyway, I suspect that a large part of the resurgence of things like astrology is downstream of this disconnect. Not only is there too much information and a real thirst to categorize it, but we also understand our technologically augmented world less and less. That’s fertile ground for magical thinking.
That Substack post that everyone’s still talking about. I’m still thinking about that Feed Me post. You know the one. And if you don’t, I’m referring to a now-viral essay in which journalist Emily Sundberg complains about how, as Substack has grown, more people are paywalling their diary entries and low-effort lists, and why that worries her about the future of content and the platform.
Maybe that’s not a charitable description, but it’s what I got out of it.
A lot of people felt attacked by her post, but I think she raised some good points. I’m not quite as much of a doomer about it, but she’s not wrong. People are paywalling some real garbage, myself included. (As an aside, am I the only one who felt like the title was poetic, but the Leo Marx reference didn’t really make sense for what she was writing about?)
You can read the full piece here:
Of all the many responses to Sundberg’s piece, two that I didn’t see were:
Beneath the cut: A Discord invite, the two responses I didn’t see, more considerations about people who want to have sex with cartoons, and a weird story that somebody told me.