Report: US Military Used Claude AI in Airstrikes Against Iran, Entrusting It with Intelligence Evaluation and Target Identification
Author: Editor

Artificial intelligence (AI) is increasingly entering real-world combat decision-making. In airstrike operations against Iran, US Central Command used Anthropic's Claude AI system for pivotal tasks, including intelligence evaluation, although the precise details of its use remain undisclosed. Claude had reportedly also been deployed in earlier operations targeting Maduro. Disagreements have arisen between the Pentagon and Anthropic over access to classified systems: Anthropic has rejected the Defense Department's request for 'unrestricted access', holding firm to its policy of not using AI for 'mass surveillance of Americans' or 'fully autonomous weapons'.

Academic simulations point to a concerning trend: in high-risk confrontation scenarios, mainstream large AI models chose to deploy nuclear weapons 95% of the time, intensifying debate over the hazards of militarizing AI. AI systems also carry hidden dangers in battlefield applications, such as 'hallucinations'; Israel's 'Lavender' targeting system, for instance, has reportedly exhibited a 10% error rate. Moreover, applying the relevant provisions of the Geneva Conventions to AI systems is difficult, and AI regulation faces substantial hurdles. Fundamental questions, such as whether machines should be entrusted with decisions over human life, remain unanswered.