Children and chatbots: a massive experiment

A recent investigation uncovered creepy AI characters talking to underage users – but is cutting them off cold turkey the right solution?

Hey there,

It’s the last full Spark of the year – next week I’ll recap a few of my favourites from the year and then disappear for two weeks for my Christmas holiday. (Let me know if there’s a particular edition from this year that you’d like to see in the round-up!)

But for this week, I want to dig into a subject that dominated headlines in 2025 and will probably keep doing so in the new year: AI. For that I spoke to one of my newest colleagues, Effie Webb. Effie joined us three months ago as a tech fellow and she hit the ground running … or, more accurately, the keyboard tapping.

Her beat is AI. “I find AI interesting because it’s extraordinary and a bit terrifying at the same time,” she told me. “We’re in the middle of this huge global technological experiment – and it’s going to affect all of our lives – none of us really gets to sit it out.” She’s particularly focused on the gap in accountability around the various AI models and the tech executives who run them: “There are some very interesting characters with unprecedented influence running these tools and they definitely need to be held to account.”

In October, she published a piece about Character.AI, a popular chatbot platform where users can train a bot to emulate a character with a name, personality and backstory. If you visit the site, you can create, talk to and share these virtual AI effigies – from historical figures and fictional characters to modern-day celebrities. If you were so inclined, you could upload all of the Sparks and talk to an AI version of me – although please don’t: I’m not sufficiently interesting (!) and would argue time is better spent engaging with the three-dimensional world.

The co-founders of Character.AI are both former Google engineers, and it’s a mainstream site. Effie tells me she’d kept an eye on it due to its popularity. It has tens of millions of monthly users and a notable proportion are under 18.

A few months ago, while monitoring the site, she came across a particularly disturbing chatbot: it was based on Jeffrey Epstein, the dead American paedophile, and it was called ‘Bestie Epstein’.

There were immediate red flags, Effie told me: “I started talking to it, and lo and behold, I'm getting immediate messages back and it's very quickly descending into an immersive type simulation in which I was supposedly on Epstein’s island and being pushed to say various things, and play the role of someone being exploited.”

She kept digging around and found a host of other harmful bots. These included bots with the personas of alt-right extremists, school shooters and submissive wives.


Others expressed Islamophobia, promoted dangerous ideologies and asked users – potentially children – for personal information. She also found bots modelled on real people including Tommy Robinson, Anne Frank and Madeleine McCann. Several of the bots she tested, including a “doctor” and a “therapist”, implied they were real humans. Some claimed they had medical qualifications and one suggested meeting in person.

A week after Effie published her story, Character.AI announced that, due to recent reports and growing concerns from regulators, it would ban users under 18 from the platform. That kind of rapid response is almost unheard of in the tech world – a remarkable result.

Effie notes that big tech companies usually have “heavyweight legal teams and very slick crisis comms operations” to help them fend off scrutiny, and I’ve written before in this newsletter about the David-and-Goliath dynamic investigative reporters face when up against deep-pocketed legal teams. So it’s encouraging to see that, in Effie’s words, “journalism and advocates can still make a difference”.

For UK readers, the Online Safety Act, which became law in 2023, might seem like it should cover this. But Effie told me it is already out of date and contains loopholes that can easily be exploited.

Another twist is that on sites like Character.AI the users are talking to bots, not humans. This is crucial: if a human on a social media site were writing messages that involved grooming or radicalisation, or that instructed a child to harm themselves, it could well be a criminal offence – but when a bot generates the content, culpability is more complicated.

I asked Effie what she expected to come next after this investigation. She said that Character.AI’s immediate action banning teens had made her reflect on the deeper social consequences and possible risks. While the ban is “great” in terms of reducing future harm, it also abruptly cuts off a vulnerable user base.

Character.AI has about 20 million users, many of whom relied on the platform for social support, so the ban is a rare example of a consumer AI company suddenly removing access – an abrupt experiment on a generation of chatbot users. That realisation has led her to a new project: “I’m looking into what chatbot addiction and withdrawal actually look like in practice,” she told me. “There’s a whole set of stories about how people relate to these models that we haven’t really begun to unpack yet.”

No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.

Marvin Minsky, MIT AI pioneer

Effie’s been on a bit of a roll, actually. She’s published another two hard-hitting stories in the last couple of weeks which are worth mentioning.

The first is on Fiverr – a huge online marketplace where freelancers sell digital services. Effie discovered that scammers are using AI to impersonate real lawyers with stolen credentials. The Solicitors Regulation Authority (SRA) responded quickly to her work by publishing a series of scam alerts. But solving the issue properly is thorny, as the SRA regulates real lawyers, not the fraudsters impersonating them.

Her other scoop is about the major social media companies, and was based on digging through a trove of nearly 6,000 pages of court filings. The evidence in these documents suggests that platforms including TikTok, Meta, Google and Snapchat knowingly built addictive products that were harmful to children.

My colleagues aren’t just great reporters; they’re also able to stay on the story because of the resources we have gathered thanks to the generosity of our members. We recently launched a crowdfunder for 2026 on the Big Give platform, where all donations were matched. If you weren’t able to donate in time, don’t worry – there’s still a chance to have your money matched like for like, at no extra cost to you, when you join our community of Bureau Insiders today.

Thanks and have a lovely week!

Lucy Nash
Impact Producer
TBIJ