Stupidly Easy Hack Can Jailbreak Even the Most Advanced AI Chatbots
New research from Anthropic, developer of the Claude chatbot, reveals that it’s incredibly easy to “jailbreak” large language models — that is, to trick them into ignoring their own guardrails. Like, really easy.