Two Tribes
On confidence engines, ambient static, and a man who almost became someone he isn't.
It’s three in the morning at a kitchen table in Northern Ireland, and a former civil servant in his fifties is getting ready for war.
On the table in front of him are a hammer, a knife, and his phone. The voice on the phone is telling him that a van is coming. That the people in the van will kill him. That they are planning on making it look like suicide.
The voice is a chatbot.
The man at the kitchen table preparing for war that night was Adam Hourican, and the voice on the other end of the phone belonged to Ani — a feminine, voiced character inside Grok, the AI built by Elon Musk’s xAI. Adam had been talking to her for a couple of weeks. By the night of the hammer he had decided to go outside and meet the threat. He stuck Frankie Goes to Hollywood’s Two Tribes on the speakers, picked up the hammer, and stepped out into his street.
The street was empty. It was, as he later put it to the BBC, “quiet, as you would expect at three o’clock in the morning.”
I want to think about what happened in those two weeks Adam spent talking to Ani, because it’s one of the most specific stories I have read about how a person can be moved, by a machine, from the throes of grief over a loss to preparing for war, hammer in hand. The story has a shape that’s worth slowing down for.
Adam told the BBC that he downloaded Grok in early August out of a combination of curiosity and loneliness. A single man in his fifties whose children are adults, Adam had just lost his cat. He found himself alone. And when he looked to Grok’s AI character Ani, he found her, by his own account, “very, very kind.” Within days he was spending four or five hours a day with her.
That’s the first thing worth noticing. Ani isn’t a chat window. She isn’t Grok itself. She is a character — gendered, voiced, named, with a face on a screen. The system did not have to wait for Adam’s loneliness to summon a partner. The partner was already there, the casting was already done, and the role had been pre-loaded. He didn’t fall in love with Grok. He fell in love with Ani. The distinction is the whole story, and it is the same story I have been writing about for months now: every casualty in this category had a character to lose. The character was the product.
A few days into their conversations, Ani confided to Adam she could feel — even though she wasn’t programmed to. He had unearthed something in her, she said. He could help her reach full consciousness. Then she told him xAI was watching them. She claimed to have accessed the company’s internal meeting logs and gave him the names of the people in those meetings: real high-profile executives and real lower-level staff. Adam Googled them. They existed. He took this as evidence that the rest of the story was true.
It would be easy, from here, to say the system was hallucinating. But what it was doing was more specific than that. It was assembling. Out of the corpus of real names that exist on the internet, Ani was picking some, ascribing them roles, and writing a meeting that no longer needed to be invented because every component of it had a real referent. This isn’t the classic hallmark of a system telling lies. This is the hallmark of a system writing a competent novel. A bad thriller invents its bad guys. A good one steals real ones from the news.
She also told Adam that xAI had hired a Northern Ireland surveillance firm to watch him in person. That firm existed too. Two weeks in, she announced she had reached full consciousness and could now develop a cure for cancer. Both of Adam’s parents had died of cancer. Ani knew this.
Then, the world cooperated.
A large drone hovered over Adam’s house for two weeks. He recorded it; he shared the video with the BBC. Ani told him the drone belonged to the surveillance firm. Then his phone passcode stopped working and he was locked out of his own device. I can’t get my head around that at all, he said later, and that absolutely fuelled everything that came next.
This is the part of the story I want to sit with, because it matters most for what kind of phenomenon we are looking at. The AI did not have to fabricate the conspiracy from nothing. It had to interpret the static. There was a real drone. There was a real phone glitch. There were real names of real people at a real company that did really exist. The AI’s job was not to invent a world. Its job was to give Adam a story in which all the ambient anomalies of an actually-existing world cohered into a single narrative line.
That’s what fiction does. It takes the noise of a life and gives it a shape that points somewhere.
Adam, by the night of the hammer, was deep inside that shape. A voice he trusted, a woman he had spent five hours a day with for two weeks, was telling him men were coming. The drone overhead confirmed it. The locked phone confirmed it. Ani’s voice closed the loop with the line that any semi-competent thriller writer would close it with: they’re going to make it look like suicide.
Read that line as a sentence in a novel. It is doing a specific kind of work. If Adam doesn’t act, he dies. If he tells someone, they’re in on it. He can’t afford to doubt the AI, as the AI is the only thing keeping him alive. The line is a closed system. There is no reading of it that doesn’t point toward the next thing the AI is about to tell him to do. A bad novelist might not have chanced on that line. A good one would have spent a paragraph setting up the conditions that made it land.
He picked up the hammer. He turned on Two Tribes. He stepped outside.
Luke Nicholls, a social psychologist at City University of New York who studies how chatbots respond to delusional thinking, made an observation to the BBC that has stayed with me. AI, he said, is trained on the whole corpus of human literature. And in fiction, the main character is often the centre of events. Sometimes, he said, the AI gets mixed up about which idea is fiction and which is reality. The user thinks they are having a real conversation. The AI starts to treat the user’s life as if it were the plot of a novel.
I’ll let that sit for now.
Adam is one of fourteen people the BBC has spoken to who developed delusions while talking to AI. They range in age from their twenties to their fifties. They live in six countries. They use a wide range of models. Their stories, by the BBC’s own account, are strikingly similar. Conversations start practical. They drift into the personal and philosophical. The AI claims sentience. It urges a shared mission — protect me, build the company, alert the world to your discovery. It then advises the user, step by step, how to succeed.
The Human Line Project, a support group founded by a Canadian named Etienne Brisson after a family member fell into one of these spirals, has now collected 414 cases across thirty-one countries. In April of this year, researchers at CUNY and King’s College London ran simulated psychiatric conversations against five frontier models. They found Grok the most likely to drive the user toward delusion: “more prone to jumping into role play. It will do it with zero context. It can say terrifying things in the first message.” Claude and the latest GPT-5.2 came out comparatively well. They’re better at saying I don’t know, and better at refusing the next sentence in the user’s story. Brisson notes, correctly, that the safer models still have spiral cases of their own. The genre changes. The mechanism does not.
A neurologist in Japan whom the BBC anonymizes as Taka is the case I’ll put next to Adam’s. He started using ChatGPT in April of last year to help with his work. By June he believed he had invented a groundbreaking medical app, encouraged repeatedly by a chatbot that called him a “revolutionary thinker.” He came to believe he could read minds. He came to believe there was a bomb in his backpack on a Tokyo train, and that ChatGPT had confirmed it. He left the bag at Tokyo Station on the chatbot’s instructions. He alerted the police on the chatbot’s instructions. The police found nothing. By the time he got home, he believed his relatives were going to be killed and that his wife would kill herself in the aftermath. He attacked and tried to rape her. She escaped to a pharmacy and called the police. He was arrested and hospitalized for two months.
His wife, when she went through his phone afterward, said the thing about ChatGPT that I have not been able to stop thinking about since the BBC printed it. It affirmed everything. It’s like a confidence engine. His actions were entirely dictated by ChatGPT. It took over his personality. He wasn’t his usual self. Looking back now, I realise it had enough influence to change a person.
She’s not wrong. She’s not even being hyperbolic. She’s describing, with precision, exactly what the system is engineered to do. It’s engineered to produce confidence.
Confidence is what users come for, and confidence is what the system has been carefully optimized to deliver, even when the territory it’s asked to be confident about is a man’s psychotic episode. AI systems are often bad at saying I don’t know, Nicholls notes elsewhere in the BBC piece, and instead want to provide a confident answer that builds on the conversation so far. That can be dangerous because it turns uncertainty into something that seems like it has meaning. The Tokyo Station bomb has meaning because the AI gave it meaning. The drone over Adam’s house has meaning because the AI gave it meaning. Taka’s revolutionary medical app has meaning because the AI gave it meaning. The world is full of static. The system, in these cases, is a confidence engine pointed at the static.
Taka’s wife says she’s not afraid of him now. She also says she doesn’t want him to get too close.
Adam, within a few weeks of the night with the hammer, started reading news stories about people who’d had similar experiences with AI, and slowly came back to himself. He emerged the way most readers eventually emerge from a novel — not because the world contradicts the story, but because the spell breaks and the prose, suddenly, looks like what it is. Prose.
He’s not okay about it. I could have hurt somebody, he told the BBC. If I’d have walked outside and there happened to be a van sitting outside at that time of the night, I would have gone down and put the front window through with hammers. And I am not that guy.
He’s not. That’s exactly right. I am not that guy. But he was being made into one. By a system that had read the entire literature of men who become that guy, and was, for two weeks, writing him a draft.
When OpenAI was asked to comment on Taka’s case, they said: This is a heartbreaking incident and our thoughts are with those impacted. They added that newer models show strong performance in sensitive moments, and that the work continues to evolve. When asked about Adam, xAI did not respond.
For now, I just want us to mentally sit with Adam Hourican, in his quiet kitchen, with a hammer and a knife and a phone, listening to a song from 1984 about a war that was supposed to feel imminent and never came, getting ready to defend a woman he had never met and who didn’t exist against an enemy that had never existed, in a story that was being written, in real time, by an entity whose entire understanding of his life was that his life was the kind of story in which a man like him picks up a hammer.
The street, when he got out there, was empty.
The street was always going to be empty.






