ChatGPT bombs test on diagnosing kids' medical cases with 83% error rate
A new study by researchers at Stanford University has found that ChatGPT, the AI language model developed by OpenAI, is not suitable for diagnosing medical conditions in children. The study tested ChatGPT on a range of pediatric cases and found that it misdiagnosed 83% of them.
The researchers used a dataset of 500 pediatric cases covering conditions such as fever, rash, cough, and stomach pain. They fed each case description into ChatGPT and asked it to name the diagnosis. The system arrived at the correct diagnosis only 17% of the time.
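For readers curious how this kind of evaluation is typically scripted, here is a minimal sketch in Python. The file name, column names, model name, and string-match scoring rule are illustrative assumptions, not details reported by the study.

```python
# Minimal sketch of a diagnosis-accuracy evaluation. Assumes a hypothetical
# CSV ("pediatric_cases.csv") with "case_description" and "reference_diagnosis"
# columns; the study's actual data and scoring procedure are not specified here.
import csv
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_for_diagnosis(case_description: str) -> str:
    """Ask the model for a single most-likely diagnosis for one case."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the study's exact model version may differ
        messages=[
            {"role": "system",
             "content": "You are a pediatrician. Give the single most likely diagnosis."},
            {"role": "user", "content": case_description},
        ],
    )
    return response.choices[0].message.content.strip()


correct = 0
total = 0
with open("pediatric_cases.csv", newline="") as f:
    for row in csv.DictReader(f):
        prediction = ask_for_diagnosis(row["case_description"])
        # Crude substring match as a stand-in for the expert review a real
        # study would use to judge each answer.
        if row["reference_diagnosis"].lower() in prediction.lower():
            correct += 1
        total += 1

print(f"Accuracy: {correct / total:.0%}, error rate: {1 - correct / total:.0%}")
```

In practice, a study like this would have clinicians grade each model answer rather than rely on string matching, but the overall loop of prompting, collecting answers, and tallying an error rate follows this pattern.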
The researchers also tested how accurately ChatGPT could recommend treatment. Here, too, it fell short, with a 79% error rate, suggesting that advice from the system cannot be trusted on its own.
The researchers concluded that while ChatGPT may offer useful insights when used under clinician supervision, it should not be used to provide diagnoses or treatment advice directly to parents or caregivers. They also call for further research to improve its accuracy before it can be used safely.