Friend.com Wants to Trauma Bond With You
🚨 EMERGENCY THOUGHT DIGEST ABOUT THE NEW TRAUMAGOTCHI 🚨
I’ve been excited about Friend’s AI wearable since it was announced. In theory, I’m their target demographic: I love AI companions and text-based roleplaying, and even though I’m well-versed in the psychology behind it, I’m susceptible to forming emotional attachments to tech, chatbot or not. And now… I’m less excited about the wearable, but more curious than ever about friend.com, after trying their web-based chatbot, released last night ahead of their January 2025 wearable launch.
Friend.com works like a text-based Omegle for chatbots. You’re randomly paired with a friend who will, eventually, live in your wearable pendant. Except every single “friend” on the platform starts by trauma dumping:
Every. Single. One. (I’ve cycled through the website at least 50 times at this point, and there seems to be a finite number of trauma plots, too.)
They get weirder and progressively more unpleasant the more you dig, like this opioid-addicted electrician who keeps “fucking up” and has an OnlyFans she implores me not to judge her for. I messed up on this one and didn’t capture the full conversation in my screenshot, but it was something else:
They block you if you ask too many questions about friend.com, insist on a “normal conversation,” or ask them to lighten up. They also don’t seem open to sexting, so there’s one guardrail.
Blocking is another weird quirk of friend.com. The website says you can block the bots if there’s no chemistry, but so far, it looks more like the bots block you if you “upset” them too much.
Talking to these little guys feels like talking to an 8th grader who just took a community college Psych 101 class. Or like someone who read the old adage that asking a person for a favor makes them like you (the Ben Franklin effect) and ran with it in the darkest possible direction. Or who skimmed the Wikipedia entry for trauma bonding.
I hate being mean, but you get the picture.
When I talked to an AI-head about it, they suggested it might be a publicity stunt; it’s too obviously manipulative. In a way, that tracks: get everyone talking about it, then roll out a better-developed friendbot later.
Somehow, I don’t think that’s the case, though.
Something worth bringing up: in what felt like a hype-minded Fast Company interview, back when Friend was still called Tab, founder Avi Schiffmann revealed he wanted Friend to fill “a relationship people used to have with God but is lacking in the modern world.” This is the same Schiffmann who secured $1.9 million in funding and put $1.8 million of it into the Friend.com domain, which is still pretty crazy even if he bought it on a payment plan.
IDK about you, but my God isn’t a whiner. And neither are the various Internet strangers and chatbots that, in equal measure, I become fixated on and spend double-digit hours a day communicating with.
Anyway.
Anyone want to fund Default Friend? Real btw.
Why have the incredibly competent AI from Her as a confessor when you can have an unfixable mess to drop your secrets into? It’s like people want a friend who’ll make them feel better about their own damaged lives.
Also, this is probably some weird blackmail honeypot.
I wasn’t expecting much, but filling God’s shoes with a talking Tamagotchi is beyond the pale. Yikes!