LLM red teamers: People are hacking AI chatbots just for fun, and now researchers have catalogued 35 “jailbreak” techniques

A new study explores why people try to make AI chatbots break the rules—and what their efforts reveal about the risks, creativity, and community behind large language model “red teaming.”