Saturday, April 25, 2026

OpenAI offers $25,000 to anyone who can jailbreak its latest model GPT-5.5

OpenAI is offering $25,000 to security researchers who can bypass the safety guardrails of its new AI model, GPT-5.5, through a "bio bug bounty" programme. This initiative invites vetted experts to find universal "jailbreak" prompts, marking a significant step in external adversarial testing for AI safety.

from Tech-Economic Times https://ift.tt/x9lrJYj


US Justice Department intervenes in xAI challenge to Colorado tech law

In its intervention, the Justice Department said the law violates the Fourteenth Amendment's equal protection guarantee by requiring ...