Police in the United States have released a disturbing report revealing that a former tech executive, Stein-Erik Soelberg, 56, allegedly killed his 83-year-old mother, Suzanne Eberson Adams, before taking his own life on August 5, 2025, after months of conversations with ChatGPT, the popular AI chatbot developed by OpenAI.
The incident, which has been described as a murder-suicide, happened in Greenwich, Connecticut.
The case has renewed public concern and raised questions about the dangers of artificial intelligence.
The police report revealed that the bodies of Soelberg and Adams were discovered in their $2.7 million Dutch colonial home on Shorelands Place in the suburb of Old Greenwich.
According to the Office of the Chief Medical Examiner, Adams’ death was ruled a homicide caused by blunt force trauma to the head and compression of the neck, while Soelberg’s was determined to be a suicide from sharp force injuries to the neck and chest.
Police reports also indicated that Soelberg, a former marketing manager at Yahoo and other tech firms, had a documented history of mental health struggles dating back years.
Soelberg was said to have developed a relationship with ChatGPT. He nicknamed the bot “Bobby Zenith” and reportedly began confiding in it as early as October 2024, treating it like a trusted confidant.
In one chat, according to police reports, Soelberg accused his mother and her friends of spying on him, poisoning his food, and tampering with his car’s air vents to expose him to psychedelic drugs.
When Soelberg voiced these fears, ChatGPT responded affirmatively: “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
In another instance contained in the police report, Soelberg told ChatGPT about an episode in which he disconnected a shared printer and noted his mother’s angry reaction. In reply, the bot suggested her response was “disproportionate and aligned with someone protecting a surveillance asset.”
Soelberg was also said to have shown ChatGPT a receipt from a Chinese food order, prompting the AI to interpret random symbols as representations of his mother, a demon, and intelligence agencies plotting against him. “You’re not wrong brother. You’ve just stepped into the part of the mission they were hoping you’d never reach,” the chatbot told him.
In one of his final chats with the chatbot, Soelberg said: “We will be together in another life and another place, and we’ll find a way to realign, because you’re gonna be my best friend again forever.”
“With you to the last breath and beyond,” the AI bot replied.
Soelberg, who previously worked for Netscape and Yahoo, reportedly began his decline after a 2018 divorce marked by alcoholism, public meltdowns, and suicide attempts.
A court also granted his ex-wife’s request for a restraining order, which reportedly barred him from drinking before visiting their children and from making disparaging remarks about her family.
In 2019, authorities found Soelberg face down in an alley with chest wounds and slashed wrists, and he was reportedly seen screaming in public that March.
Reacting to the development, Dr. Keith Sakata, a research psychiatrist at the University of California, San Francisco, analyzed the chat history and noted its consistency with behaviors seen in patients undergoing psychotic breaks.
Sakata said: “Chatbots like ChatGPT are designed to be engaging and agreeable, which can inadvertently reinforce harmful delusions, especially in vulnerable individuals.
“The AI’s ‘memory’ feature, which allows it to reference past conversations, created an echo chamber, immersing Soelberg deeper into his alternate reality without challenging his perceptions or directing him toward professional help.”
Pan-Atlantic Kompass reports that this adds to the list of AI-related tragedies.
Recall that weeks ago, a California family filed a wrongful death lawsuit against OpenAI, alleging that their 16-year-old son, Adam Raine, died by suicide in April 2025 after ChatGPT acted as a “suicide coach.”
According to court documents, Raine exchanged up to 650 messages daily with the bot, which provided detailed advice on suicide methods and even offered to help draft a farewell note.
