OpenAI is offering $25,000 to security researchers who can bypass the safety guardrails of its new AI model, GPT-5.5, through a "bio bug bounty" programme. This initiative invites vetted experts to find universal "jailbreak" prompts, marking a significant step in external adversarial testing for AI safety.
from Tech-Economic Times https://ift.tt/x9lrJYj