AT2k Design BBS Message Area
From: VRSS
To: All
Subject: ChatGPT-4 Beat Doctors at Diagnosing Illness, Study Finds
Date/Time: November 18, 2024 3:20 AM

Feed: Slashdot
Feed Link: https://slashdot.org/
---

Title: ChatGPT-4 Beat Doctors at Diagnosing Illness, Study Finds

Link: https://science.slashdot.org/story/24/11/18/0...

Dr. Adam Rodman, a Boston-based internal medicine expert, helped design a
study testing 50 licensed physicians to see whether ChatGPT improved their
diagnoses, reports the New York Times. The results? "Doctors who were given
ChatGPT-4 along with conventional resources did only slightly better than
doctors who did not have access to the bot. And, to the researchers'
surprise, ChatGPT alone outperformed the doctors." [ChatGPT-4] scored an
average of 90 percent when diagnosing a medical condition from a case report
and explaining its reasoning. Doctors randomly assigned to use the chatbot
got an average score of 76 percent. Those randomly assigned not to use it had
an average score of 74 percent. The study showed more than just the chatbot's
superior performance. It unveiled doctors' sometimes unwavering belief in a
diagnosis they made, even when a chatbot suggested a potentially better one.
And the study illustrated that while doctors are being exposed to the tools
of artificial intelligence for their work, few know how to exploit the
abilities of chatbots. As a result, they failed to take advantage of A.I.
systems' ability to solve complex diagnostic problems and offer explanations
for their diagnoses. A.I. systems should be "doctor extenders," Dr. Rodman
said, offering valuable second opinions on diagnoses. "The results were
similar across subgroups of different training levels and experience with the
chatbot," the study concludes. "These results suggest that access alone to
LLMs will not improve overall physician diagnostic reasoning in practice.
"These findings are particularly relevant now that many health systems offer
Health Insurance Portability and Accountability Act-compliant chatbots that
physicians can use in clinical settings, often with no to minimal training on
how to use these tools."

Read more of this story at Slashdot.

---
VRSS v2.1.180528
