Researchers just tested ChatGPT on the same test questions as aspiring doctors – and found the AI was 'comfortably within the passing range'

Researchers from the University of Cambridge recently tested ChatGPT, a large language model created by OpenAI, on the same test questions aspiring doctors have to answer in order to become certified. The results were impressive, with ChatGPT obtaining an accuracy of 87% and outperforming other AI models tested on the same dataset.

The team of researchers developed a machine learning system based on ChatGPT for answering fact-based medical questions. They trained it on a medical knowledge base composed of 10 million articles, patient reports, and clinical notes. Once trained, they tested it on a dataset drawn from the medical exams used to certify doctors in the UK, consisting of 30 multiple-choice questions.

ChatGPT's performance was far superior to that of existing AI models trained on the same dataset: it achieved 87% accuracy, while the best previous model reached 83%. In addition, ChatGPT answered questions accurately even when they were phrased differently from the original wording.
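The accuracy figure above is simply the share of multiple-choice questions answered correctly. A minimal sketch of that scoring loop, where `ask_model` is a hypothetical stand-in for a call to the language model (not part of the study's published code), might look like:

```python
def grade_exam(questions, answer_key, ask_model):
    """Return the fraction of questions the model answers correctly.

    `ask_model` is assumed to take a question string and return a
    single answer letter such as "A", "B", "C", or "D".
    """
    correct = 0
    for question, expected in zip(questions, answer_key):
        predicted = ask_model(question)
        if predicted.strip().upper() == expected.upper():
            correct += 1
    return correct / len(questions)


# Toy usage with a stubbed "model" that always answers "A":
questions = ["Q1", "Q2", "Q3"]
answer_key = ["A", "B", "A"]
accuracy = grade_exam(questions, answer_key, lambda q: "A")
print(accuracy)  # 2 of 3 correct
```

In the study's setting, the same loop over 30 exam questions would yield the reported accuracy percentage.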

The impressive performance of ChatGPT is due in large part to its scale. It was trained on hundreds of billions of words, giving it access to a vast amount of information and allowing it to better understand the context of questions.

The results of this study show that ChatGPT can be a useful tool for medical applications. It could be used in the future to help diagnose illnesses by providing doctors with accurate answers to their questions. Additionally, it could be used in medical teaching, helping students learn about pathology and other topics more quickly.
