Method prevents an AI model from being overconfident about wrong answers

Thermometer, a new calibration technique tailored to large language models, prevents LLMs from being overconfident or underconfident in their predictions by aligning the confidence a model reports with how often it is actually correct. Developed at MIT and the MIT-IBM Watson AI Lab, it aims to help users judge when a model's answers can be trusted.
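
To make the idea of calibration concrete, the sketch below shows classical temperature scaling, the textbook baseline that Thermometer builds on: a single scalar T is fitted on held-out labeled logits by minimizing negative log-likelihood, and future logits are divided by T before the softmax so that sharp, overconfident probabilities are softened. This is only an illustration of the underlying concept, not the Thermometer method itself (whose contribution is predicting a suitable temperature without labeled data for each new task), and all function and variable names here are illustrative.

```python
# Minimal sketch of temperature scaling (the classical calibration baseline).
# Names and parameters are illustrative, not from the Thermometer paper.
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    steps: int = 200, lr: float = 0.05) -> float:
    """Fit a scalar temperature T > 0 on labeled validation logits by
    minimizing negative log-likelihood."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

def calibrated_confidence(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    """Confidence of the predicted class after dividing logits by T."""
    return F.softmax(logits / temperature, dim=-1).max(dim=-1).values

# Toy usage: deliberately sharp (overconfident) synthetic logits.
val_logits = torch.randn(512, 10) * 5.0
val_labels = torch.randint(0, 10, (512,))
T = fit_temperature(val_logits, val_labels)
print(f"fitted temperature: {T:.2f}")       # T > 1 shrinks overconfidence
print(calibrated_confidence(val_logits[:3], T))
```

Because the toy logits carry no real signal about the labels, the fit drives T well above 1, flattening the predicted probabilities toward the uniform distribution, which is exactly the corrective behavior one wants from calibration when a model is confidently wrong.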
