According to sources familiar with the matter, officials from several U.S. federal agencies have in recent months raised concerns about the safety and reliability of artificial intelligence tools developed by Elon Musk's xAI, highlighting divisions within the U.S. government over the deployment of AI models.

Despite those warnings, the Pentagon decided this week to allow xAI's chatbot Grok to be used in classified settings, placing it at the core of some of the United States' most sensitive operations.

On January 15, a summary report from the U.S. General Services Administration (GSA) stated that Grok-4 does not meet the federal government's safety and alignment expectations for general-purpose and experimental AI platforms. A GSA spokesperson noted that the assessment applies only to the agency itself, as different agencies adopt varying standards based on their operational missions and risk tolerance.
