AI chatbots don’t meet standards for medical care, new study finds

There is little doubt that artificial intelligence (AI) will be a game-changer for industries across the spectrum as we head further into the 21st century. But in health care, a field dependent on social interaction, some are wary of the ethical risks.

A new study led by researchers at the UC Berkeley School of Public Health and UCSF casts doubt on the use of AI chatbots, programs that simulate human-like conversation through audio or text, in health care assistance. The researchers found that the ethical risks of AI chatbots outweigh their convenience benefits because their use fails to meet recognized standards for patient respect, empathetic care, fairness, and justice. Today, for example, AI chatbots can deliver mental health treatments such as cognitive behavioral therapy or help patients with social difficulties practice their social skills.

Study co-author Jodi Halpern, a leading scholar on empathy and technology and a professor of Bioethics and Medical Humanities, said that "not enough thought is given to exactly where and why we can draw the line at AI versus human care." Her research found that the "use of chatbots in mental healthcare can place [a] dangerous onus on users to self-advocate about the need for in-person therapy."

Still, Halpern and her co-author, Julie Brown, a postdoctoral scholar in Bioethics and Innovative Technologies at UCSF, noted that there may be a role for AI in health care practices going forward. AI chatbots, they found, could still be useful for people who are "already looped into an agreeable human-led mental healthcare system."

But in the end, there’s no replacement for human interaction in the increasingly mental health-conscious world of health care.

"While human caregivers are imperfect, the role of clinical empathy and the social space of in-person clinics can be more important than factors such as convenience, which will not translate to quality care," Halpern said.