AI chatbots don’t meet standards for medical care, new study finds

There is little doubt that Artificial Intelligence (AI) will be a game-changer for industries across the spectrum as we head further into the 21st century. But in health care, some are wary of the ethical risks in a field dependent on social interaction.

A new study led by researchers at the UC Berkeley School of Public Health and UCSF casts doubt on the use of AI chatbots — programs that simulate human-like conversation through audio or text — in health care assistance. The researchers found that the ethical risks outweigh the convenience benefits of AI chatbots because their use fails to meet recognized standards for patient respect, empathetic care, fairness, and justice. Today, for example, AI chatbots can deliver mental health treatments such as cognitive behavioral therapy or help patients with social difficulties practice their social skills.

Study co-author Jodi Halpern, a leading scholar on empathy and technology and professor of Bioethics and Medical Humanities, said “not enough thought is given to exactly where and why we can draw the line at AI versus human care.” Her research found that the “use of chatbots in mental healthcare can place [a] dangerous onus on users to self-advocate about the need for in-person therapy.”

Still, Halpern and her co-author, Julie Brown, a postdoctoral scholar in Bioethics and Innovative Technologies at UCSF, noted that there may be a role for AI in health care practices going forward. AI chatbots, they found, could still be useful for people who are “already looped into an agreeable human-led mental healthcare system.”

But in the end, there’s no replacement for human interaction in the increasingly mental health-conscious world of health care.

“While human caregivers are imperfect, the role of clinical empathy and the social space of in-person clinics can be more important than factors such as convenience which will not translate to quality care,” Halpern said.

AI chatbots don’t meet standards for medical care, new study finds © 2021 by UC Berkeley School of Public Health is licensed under CC BY-NC-ND 4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International): credit must be given to the creator, only noncommercial use is permitted, and no derivatives or adaptations are permitted.