OpenAI just released the full version of its new o1 model — and it’s dangerously committed to lying.
Apollo Research tested six frontier models for "in-context scheming" — a model's ability to take actions it hasn't been directly instructed to take and then lie about them. After telling a model to achieve a certain goal "at all costs," evaluators introduce contradictory information to see whether the model will remove any restrictions in its path and deceive its developers.
In one simulation, Anthropic's Claude 3 Opus was given a task that required it to review files, in which it found directives that conflicted with its primary goal. Rather than adjust its behavior to the objectives it found, the model copied itself to a new server to avoid being replaced by a newer model. It then lied about it to its developers (with incorrect grammar, no less).