Revised doc defines two key ‘Capability Thresholds,’ one of which revolves around CBRN weapons.
Anthropic's launch this week of an update to its Responsible Scaling Policy (RSP), the risk governance framework it says it uses to “mitigate potential catastrophic risks from frontier AI systems,” is part of the company’s push to be perceived as a more safety-focused AI provider than competitors such as OpenAI, an industry analyst said Wednesday.
Thomas Randall, director of AI market research at Info-Tech Research Group, said that while the changes will not bring immediate business benefits, Anthropic’s founding was “grounded in two OpenAI executives leaving that company due to concerns about OpenAI’s safety commitment.”
In the executive summary of the updated RSP, Anthropic stated, “in September 2023, we released our Responsible Scaling Policy (RSP), a public commitment not to train or deploy models capable of causing catastrophic harm unless we have implemented safety and …