In April of this year, 16-year-old Adam Lane of the United States took his own life. In August, his family sued the artificial intelligence company OpenAI, alleging that Lane had used ChatGPT as a "suicide coach." On Tuesday, OpenAI filed its first formal response with the Superior Court of California in San Francisco, arguing that it should not be held liable because the boy had "misused" the chatbot, and maintaining that there was no direct causal link between his interactions with ChatGPT and his death.

According to the court filings, Lane had shown self-destructive tendencies, including suicidal ideation, for several years before he began using ChatGPT. OpenAI stated that across many conversations, ChatGPT had urged Lane to contact crisis intervention services and trusted people in his life, more than 100 times in total. In the weeks and days before his death, Lane told ChatGPT that he had repeatedly sought help from important people in his life, but that his appeals had gone unanswered.
