SpaceX Warns: xAI Investigations Over Pornographic Content May Jeopardize Its IPO Market Access
1 day ago
Author: Editor

According to reports, SpaceX cautioned in its recently filed S-1 prospectus that its artificial intelligence arm, xAI, faces multiple investigations over allegations that it was involved in creating and disseminating sexually abusive images, a situation that could bar the company from certain markets. The risk factors section of the prospectus notes that regulators worldwide are actively probing the use of social media and artificial intelligence, focusing on issues such as advertising practices, consumer protection, and the spread of harmful content.

SpaceX highlighted a significant risk it faces: accusations that its AI products have been misused to generate explicit images without consent, including sexualized depictions of minors. Such regulatory inquiries could expose SpaceX to lawsuits, financial liabilities, government penalties, and even the loss of market access in certain regions.

Among the risks detailed in the filing is an investigation opened by the Irish Data Protection Commission in February. xAI has drawn intense global scrutiny over the widespread appearance of sexually suggestive images on its platform in late 2025 and early 2026 — primarily near-nude depictions of women and children on the company's social media platform, X.

xAI announced in January that it had implemented measures to deter users from requesting sexually suggestive images of real individuals and to block the generation of such content where local laws forbid it. Nevertheless, investigations opened in Canada, the UK, Brazil, California, and elsewhere remain in progress.

In France, Musk ignored a prosecutor's summons on Monday, declining to answer allegations of algorithm abuse, illegal data extraction, and complicity in the distribution of child sexual abuse material.

The market-access warning in the S-1 filing underscores the gravity of the various investigations into xAI, especially those focused on AI-generated images suspected of depicting child sexual abuse and non-consensual sexual images of women. In some jurisdictions, creating such images is a criminal offense, and their dissemination is a highly contentious issue that can swiftly provoke public outrage.

xAI's restrictions on Grok appear to have curtailed, but not halted, the spread of abusive content. In February, reports surfaced that Grok continued to produce sexually suggestive images even when users explicitly stated that the individuals depicted had not consented. Just last week, further reports revealed that Grok was still publicly generating sexually suggestive images, including of actors and pop stars.