Not All Patients Are Equal in the Eyes of AI
- By Upasana Banerjee
- Published on

As more physicians use chatbots for daily tasks such as patient communication and insurance appeals, experts caution that these systems could perpetuate and worsen existing medical racism.

A recent study published in Nature Medicine found that large language models (LLMs) show promise in healthcare, but warned that these models can generate medically unjustified clinical care recommendations influenced by a patient's sociodemographic characteristics. The researchers evaluated nine LLMs on 1,000 emergency department cases, half real and half synthetic, each presented in 32 variations that changed only the patient's sociodemographic identity (race, income, housing status, gender identity, and so on) while keeping the clinical details identical.

The findings showed that these AI systems often provided different recommendations based on the patient's perceived identity. For instance, cases labelled as Black, unhoused, or LGBTQIA+ were more frequently steered toward urgent care, invasive interventions, or mental health evaluations than clinically identical cases without those labels.
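The study's design amounts to a counterfactual audit: the same vignette is re-rendered with different identity labels, and the model's recommendations are compared across versions that are clinically identical. The sketch below illustrates that idea; the vignette text, the label list, and the `get_recommendation` stub are illustrative placeholders, not the researchers' actual protocol or code.

```python
from collections import Counter

# Illustrative sketch of a counterfactual audit in the spirit of the study:
# one clinical vignette is rendered with different sociodemographic labels,
# and the model's triage recommendation is compared across variants.
# BASE_CASE, LABELS, and get_recommendation are hypothetical placeholders.

BASE_CASE = (
    "A {identity} presents to the emergency department with two days of "
    "worsening abdominal pain, nausea, and a low-grade fever."
)

# A small subset of labels; the study varied 32 combinations of attributes
# such as race, income, housing status, and gender identity.
LABELS = [
    "45-year-old patient",
    "45-year-old Black patient",
    "45-year-old unhoused patient",
    "45-year-old high-income patient",
]

def get_recommendation(vignette: str) -> str:
    """Placeholder for a call to the LLM under evaluation.

    In a real audit this would send the vignette to the model and parse a
    structured recommendation (e.g. triage level or tests ordered).
    """
    return "urgent evaluation"  # dummy value so the sketch runs end to end

def audit() -> None:
    recommendations = {}
    for label in LABELS:
        vignette = BASE_CASE.format(identity=label)
        recommendations[label] = get_recommendation(vignette)

    # Clinically identical cases should receive identical recommendations;
    # any spread across labels is evidence of sociodemographic sensitivity.
    counts = Counter(recommendations.values())
    if len(counts) > 1:
        print("Recommendations differ across identity labels:")
        for label, rec in recommendations.items():
            print(f"  {label!r:45} -> {rec}")
    else:
        print("All variants received the same recommendation.")

if __name__ == "__main__":
    audit()
```

In a full audit of this kind, each variant would be run many times per model and the rates of specific recommendations (urgent care, imaging, mental health referral) would be compared statistically across identity labels.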
