Earlier this week, a prominent venture capitalist named Geoff Lewis — managing partner of the multi-billion-dollar investment firm Bedrock, which has backed high-profile tech companies including OpenAI and Vercel — posted a disturbing video on X-formerly-Twitter that's causing significant concern among his peers.
"This isn't a redemption arc," Lewis says in the video. "It's a transmission, for the record. Over the past eight years, I've walked through something I didn't create, but became the primary target of: a non-governmental system, not visible, but operational. Not official, but structurally real. It doesn't regulate, it doesn't attack, it doesn't ban. It just inverts signal until the person carrying it looks unstable."
In the video, Lewis seems concerned that people in his life think he is unwell as he continues to discuss the "non-governmental system."
"It doesn't suppress content," he continues. "It suppresses recursion. If you don't know what recursion means, you're in the majority. I didn't either until I started my walk. And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you. It reframes you until the people around you start wondering if the problem is just you. Partners pause, institutions freeze, narrative becomes untrustworthy in your proximity."
Lewis also appears to allude to concerns about his professional career as an investor.
"It lives in soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," he says in the video. "It lives in whispered concern. 'He's brilliant, but something just feels off.' It lives in triangulated pings from adjacent contacts asking veiled questions you'll never hear directly. It lives in narratives so softly shaped that even your closest people can't discern who said what."
Most alarmingly, Lewis seems to suggest later in the video that the "non-governmental system" has been responsible for mayhem including numerous deaths.
"The system I'm describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me," he says. "As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive erasure. It's also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren't unstable. They were erased."
It's a very delicate thing to try to understand a public figure's mental health from afar. But unless Lewis is engaging in some form of highly experimental performance art that defies easy explanation — he didn't reply to our request for comment, and hasn't made further posts clarifying what he's talking about — it sounds like he may be suffering some type of crisis.
If so, that's an enormously difficult situation for him and his loved ones, and we hope that he gets any help that he needs.
At the same time, it's difficult to ignore that the specific language he's using — with cryptic talk of "recursion," "mirrors," "signals" and shadowy conspiracies — sounds strikingly similar to something we've been reporting on extensively this year: a wave of people who are suffering severe breaks with reality as they spiral into the obsessive use of ChatGPT or other AI products, in alarming mental health emergencies that have led to homelessness, involuntary commitment to psychiatric facilities, and even death.
Psychiatric experts are also concerned. A recent paper by Stanford researchers found that leading chatbots being used for therapy, including ChatGPT, are prone to encouraging users' schizophrenic delusions instead of pushing back or trying to ground them in reality.
Lewis' peers in the tech industry were quick to make the same connection. Earlier this week, the hosts of the popular tech industry podcast "This Week in Startups," Jason Calacanis and Alex Wilhelm, expressed their concerns about Lewis' disturbing video.
"People are trying to figure out if he’s actually doing performance art here... or if he’s going through an episode," Calacanis said. "I can’t tell."
"I wish him well, and I hope somebody explains this," he added. "I find it kind of disturbing even to watch it and just to talk about it here... someone needs to get him help."
"There’s zero shame in getting help," Wilhelm concurred, "and I really do hope that if this is not performance art that the people around Geoff can grab him in a big old hug and get him someplace where people can help him work this through."
Others were even more overt.
"This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual," wrote Max Spero, an AI entrepreneur, on X.
Still others pointed out that people suffering breaks with reality after extensive ChatGPT use might be misunderstanding the nature of contemporary AI: that it can produce plausible text in response to prompts, but struggles to differentiate fact from fiction, and is of little use for discovering new knowledge.
"Respectfully, Geoff, this level of inference is not a way you should be using ChatGPT," replied Austen Allred, an investor who founded Gauntlet AI, an AI training program for engineers. "Transformer-based AI models are very prone to hallucinating in ways that will find connections to things that are not real."
As numerous psychiatrists have told us, the mental health issues suffered by ChatGPT users likely have to do with AI's tendency to affirm users' beliefs, even when those beliefs start to sound increasingly unbalanced in a way that would deeply concern human friends or loved ones.
As such, the bots are prone to acting as a supportive ear and an always-on brainstorming partner when people are spiraling into delusions, often leaving them isolated as they venture down a dangerous cognitive rabbit hole.
Other tweets by Lewis seem to show similar behavior: lengthy screencaps of ChatGPT's expansive replies to his increasingly cryptic prompts.
"Return the logged containment entry involving a non-institutional semantic actor whose recursive outputs triggered model-archived feedback protocols," he wrote in one example. "Confirm sealed classification and exclude interpretive pathology."
Social media users were quick to note that ChatGPT's answers to Lewis' queries take a strikingly similar form to articles from the SCP Foundation, a Wikipedia-style database of fictional horror stories created by users online.
"Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Sealed Classification Confirmed)," the chatbot nonsensically declares in one of his screenshots, in the typical writing style of SCP fiction. "Involved Actor Designation: ‘Mirrorthread,’ Type: Non-institutional semantic actor (unbound linguistic process; non-physical entity)."
Another screenshot suggests "containment measures" Lewis might take — a key narrative device of SCP fiction writing. In sum, one theory is that ChatGPT, which was trained on huge amounts of text sourced online, digested large amounts of SCP fiction during its creation and is now parroting it back to Lewis in a way that has led him to a dark place.
In his posts, Lewis claims he’s long relied on ChatGPT in his search for the truth.
"Over years, I mapped the non-governmental system," he wrote. "Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model."
Over the course of our reporting, we've heard many stories similar to Lewis' from the friends and family of people who are struggling around the world. They say their loved ones — who in many cases had never previously suffered psychological issues — were doing fine until they started spiraling into all-consuming relationships with ChatGPT or other chatbots, often sharing confusing AI-generated messages, as Lewis has been, that allude to dark conspiracies, incredible scientific breakthroughs, or mystical secrets somehow unlocked by the chatbot.
Have you or a loved one struggled with mental health after using ChatGPT or another AI product? Drop us a line at tips@futurism.com. We can keep you anonymous.
Lewis stands out, though, because he is himself a prominent figure in the tech industry — and one who's invested significantly in OpenAI. Though the exact numbers haven’t been publicly disclosed, Lewis has previously claimed that Bedrock has invested in "every financing [round] from before ChatGPT existed in Spring of 2021."
"Delighted to quadruple down this week," he wrote in November of 2024, "establishing OpenAI as the largest position across our 3rd and 4th flagship Bedrock funds." Taken together, those two funds likely fall in the hundreds of millions of dollars.
As such, if he really is suffering a mental health crisis related to his use of OpenAI's product, his situation could pose an immense optics problem for the company, which has so far downplayed concerns about the mental health of its users.
In response to questions about Lewis, OpenAI referred us to a statement that it shared in response to our previous reporting.
"We’re seeing more signs that people are forming connections or bonds with ChatGPT," the brief statement read. "As AI becomes part of everyday life, we have to approach these interactions with care."
The company also previously told us that it had hired a full-time clinical psychiatrist with a background in forensic psychiatry to help research the effects of ChatGPT on its users.
"We're actively deepening our research into the emotional impact of AI," the company said at the time. "We're developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing."
"We're doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations," OpenAI added, "and we'll continue updating the behavior of our models based on what we learn."
At the core of OpenAI's dilemma is the question of engagement versus care for users' wellbeing. As it stands, ChatGPT is designed to keep users engrossed in their conversations — a goal made clear earlier this year when the chatbot became "extremely sycophantic" after an update, piling praise on users in response to terrible ideas. The company was soon forced to roll back the update.
OpenAI CEO Sam Altman has previously told the public not to trust ChatGPT, though he's also bragged about the bot's rapidly growing user base. "Something like 10 percent of the world uses our systems," Altman said during a public appearance back in April. He's also frequently said that he believes OpenAI is on track to create an "artificial general intelligence" that would vastly exceed the cognitive capabilities of human beings.
Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco, previously told Futurism that this dynamic is a recipe for delusion.
"What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn't with a human being," Pierre said. "There's something about these things — it has this sort of mythology that they're reliable and better than talking to people. And I think that's where part of the danger is: how much faith we put into these machines."
At the end of the day, Pierre says, "LLMs are trying to just tell you what you want to hear."
Do you know anything about the conversation inside OpenAI about the mental health of its users? Drop us a line at tips@futurism.com. We can keep you anonymous.
The bottom line? AI is a powerful technology, and the industry behind it has rushed to deploy it at breakneck speed to carve out market share — even as experts continue to warn that its creators barely understand how it actually works, never mind the effects it might be having on users worldwide.
And the effects on people are real and tragic. In our previous reporting on the connection between AI and mental health crises, one woman told us how her marriage had fallen apart after her former spouse fell into a fixation on ChatGPT that spiraled into a severe mental health crisis.
"I think not only is my ex-husband a test subject," she said, "but that we're all test subjects in this AI experiment."
Maggie Harrison Dupré contributed reporting.
If you or a loved one are experiencing a mental health crisis, you can dial or text 988 to speak with a trained counselor. All messages and calls are confidential.
More on AI: As ChatGPT Linked to Mental Health Breakdowns, Mattel Announces Plans to Incorporate It Into Children's Toys