The scientists are working with a way called adversarial education to halt ChatGPT from allowing consumers trick it into behaving badly (called jailbreaking). This work pits a number of chatbots versus each other: just one chatbot plays the adversary and assaults another chatbot by generating text to force it to buck its standard constraints and cr