Monday, September 30, 2024 - 9:01 am

Scientists warn about dangerous AI-generated medical advice

The world’s health systems are facing growing demand and a worsening burden of disease. Although using artificial intelligence (AI) to relieve the load on these systems seemed attractive, practice has shown that such technologies can do more harm than good, Futurism notes.

Recent advances in AI, including OpenAI’s GPT-4, are generating both excitement about their capabilities and criticism of their shortcomings. Despite significant investment and effort, AI models still struggle with basic tasks like counting letters in words. A famous example is GPT-4’s inability to correctly count the number of “r”s in the word “strawberry.” Although such errors seem minor in everyday use, they can have serious consequences in critical areas such as healthcare.
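For contrast, the same task is deterministic and trivial for conventional software. A minimal, purely illustrative Python sketch (not from the article):

```python
# Counting letters, the task GPT-4 famously got wrong, takes one line
# of ordinary code and always returns the same answer.
word = "strawberry"
count = word.count("r")
print(f'The word "{word}" contains {count} occurrences of "r".')  # -> 3
```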

The problem is already visible in practice. In healthcare settings, the MyChart platform used for communication between doctors and patients includes an AI feature that automatically drafts responses. Doctors send hundreds of thousands of messages to patients every day, and around 15,000 of them use AI to generate replies automatically. The worry is that errors made by the AI could affect patient safety. An example from Dr. Vinay Reddy of UNC Health illustrates the problem: the system incorrectly assured a patient that she had received a hepatitis B vaccine, even though her vaccination record was not in the system.

Critics also point out that AI systems do not always indicate that a response was generated by an algorithm, which can undermine patient confidence. Bioethicist Athmeya Jayaram notes that people may feel deceived if they learn that messages they believed were personal responses from a doctor were in fact generated by artificial intelligence. The problem is compounded by the lack of federal regulations requiring AI-generated recommendations to be clearly labeled.

The dangers of AI errors are not just a theoretical threat. A study conducted in July and published in the journal JAMIA found seven instances of AI “hallucinations” among 116 AI-generated MyChart messages, an error rate of roughly 6 percent. Although that number may seem small, even a single error in medical correspondence can have serious consequences. Other research has shown that GPT-4 often makes errors in medical reports, underscoring the risks of using AI in healthcare.

The increasing use of AI in medical practice raises important ethical and regulatory questions. On the one hand, AI can reduce the administrative burden on doctors, but on the other, it creates new risks, especially if proper precautions are not taken. Without transparency in the use of AI, patients may not know exactly who is providing their medical advice, jeopardizing trust between doctor and patient.


Staven Smith