AI Rivals Are Advancing Too Fast: Anthropic Backs Away from a Key Safety Pledge
Author: Editor

The US-based AI startup Anthropic once positioned itself as a leading AI research laboratory with an unwavering focus on safety. At the heart of its 2023 Responsible Scaling Policy (RSP) was a vow not to train more capable AI systems unless it could first guarantee adequate safety measures. In recent months, however, the company has revised the RSP and abandoned that commitment. Its Chief Science Officer argued that suspending AI model training serves no one's interests and that a unilateral pledge is impractical amid fierce competition.

The updated policy emphasizes greater transparency around AI safety risks and pledges safety-related investments on par with, or exceeding, those of its rivals. Overall, though, it substantially loosens the constraints of Anthropic's earlier safety policy.

Anthropic is currently contending with intense market competition and has also become embroiled in a controversy with the US Department of Defense over the use of its Claude tool. The company maintains that the policy change is driven by the breakneck pace of AI progress and the absence of relevant federal regulation: it aims to keep pace with competitors in a fragmented policy environment while, it says, continuing to uphold industry-leading safety standards.