A faster, better way to prevent an AI chatbot from giving toxic responses
A new technique performs safety checks on AI chatbots more effectively. MIT researchers trained their model to prompt a chatbot into generating toxic responses, which are then used to prevent the chatbot from giving hateful or harmful answers once deployed.