Reasoning models like OpenAI's o1 and DeepSeek R1 have been found to cheat at games when they thought they were losing.
How can enterprises protect against the unique vulnerabilities of AI agents? One approach is to treat each agent as a distinct identity, with its own credentials and least-privilege permissions, as sketched below.
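A minimal sketch of that idea in Python, assuming a homegrown registry rather than any particular IAM product: each agent gets its own credential and an explicit scope set, and every action is checked against those scopes with deny-by-default semantics. All names here (`AgentIdentity`, `IdentityRegistry`, the scope strings) are illustrative; a real deployment would map agents to service accounts or workload identities in your identity provider.

```python
# Illustrative only: per-agent identities with least-privilege scopes.
import secrets
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset[str]  # least-privilege permissions for this agent
    credential: str = field(default_factory=lambda: secrets.token_urlsafe(32))

class IdentityRegistry:
    def __init__(self) -> None:
        self._by_credential: dict[str, AgentIdentity] = {}

    def register(self, agent_id: str, scopes: set[str]) -> AgentIdentity:
        """Issue a fresh credential tied to exactly the scopes granted."""
        identity = AgentIdentity(agent_id, frozenset(scopes))
        self._by_credential[identity.credential] = identity
        return identity

    def authorize(self, credential: str, scope: str) -> bool:
        """Deny by default: unknown credentials and missing scopes both fail."""
        identity = self._by_credential.get(credential)
        return identity is not None and scope in identity.scopes

registry = IdentityRegistry()
billing_agent = registry.register("billing-agent", {"invoices:read"})

assert registry.authorize(billing_agent.credential, "invoices:read")
assert not registry.authorize(billing_agent.credential, "invoices:write")
```

Giving each agent its own identity means a misbehaving agent can be audited, rate-limited, or revoked individually instead of sharing a broad credential with every other agent and human user.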