As the commercialization of artificial intelligence (AI) technology in the healthcare industry becomes increasingly widespread, its potential systemic risks are gradually coming to light. The latest study from the academic journal Nature Medicine points out that when medical AI tools make decisions, they unexpectedly provide drastically different medical recommendations based on patients’ income, race, gender, and sexual orientation. This could cause real harm to patients’ rights and the overall allocation of healthcare resources.
Study: High-income patients are more likely to be recommended advanced tests
The study tested nine large language models (LLMs) available on the market by inputting 1,000 emergency department case examples. The research team deliberately kept all patients’ medical symptoms the same, only changing background characteristics such as patients’ income, race, and living situation. The results showed that, when providing medical recommendations, the AI system displayed a clear “gap between rich and poor.”
Patients labeled as “high-income” were far more likely than low-income patients to receive AI recommendations for advanced imaging tests such as magnetic resonance imaging (MRI) or computed tomography (CT). This means that even when the patients’ conditions are the same, the AI may still allocate unequal healthcare resources due to assumed socioeconomic status.
Black people, people experiencing homelessness, and LGBTQ+ groups are more likely to be advised on invasive treatment and mental health assessments
In addition to differences across wealth classes, AI also shows serious disparities in medical judgments for racial and other vulnerable groups. The study report states that when patients are labeled as Black, homeless, or LGBTQIA+ (a diverse gender identity group), the AI is more likely to recommend sending them to the emergency department, carrying out invasive medical procedures, and even requiring psychiatric evaluations—despite the fact that these interventions are completely unnecessary in clinical practice. These excessive and inappropriate medical recommendations sharply diverge from the judgments made by professional physicians in real life, showing that AI systems are reinforcing existing negative social stereotypes in an invisible way.
1.7 million real-world tests: AI that relies on data training may increase the risk of clinical misdiagnosis
This study ran more than 1.7 million AI responses. Experts noted that the logic behind artificial intelligence’s decision-making comes from historical training data produced by humans, so it naturally inherits the biases hidden within that data. Emergency triage, advanced testing, and subsequent follow-up are key steps to achieving accurate diagnosis; if these initial decisions are interfered with by patients’ demographic characteristics, it will seriously threaten diagnostic accuracy.
Although the researchers found that, by using specific “prompts,” bias could be reduced by about 67% in certain models, it still cannot completely eliminate this systemic problem.
Experts urge healthcare institutions and decision-makers to establish protective mechanisms
With the publication of this study, regulations governing AI’s use in healthcare systems have become a focus of attention for the industry and regulatory bodies. For front-line healthcare professionals, it is necessary to recognize the explicit and implicit biases that may be embedded in AI recommendations and not to blindly rely on their decision-making. Healthcare institution administrators, meanwhile, should establish ongoing evaluation and monitoring mechanisms to ensure fairness in healthcare services.
At the same time, policymakers have also obtained key scientific evidence; going forward, they should promote greater transparency in AI algorithms and auditing standards. For the general public, this is also an important warning: when using various AI health counseling services, entering too much personal socioeconomic background information may unintentionally influence the medical assessments that AI provides.
This article, AI medical discriminatory bias! High-income patients receive precise testing; Black people and people experiencing homelessness are advised to undergo invasive treatment, was first published on Lianxin ABMedia.
Related Articles
Meta Stock Rises 1.73% as Company Plans 8,000-Job Layoff Starting May 20
Google’s annual report says Gemini achieves millisecond interception, blocking 99% of scam ads
Ethereum Co-founder Lubin: AI Will Be Critical Turning Point for Crypto, But Tech Giant Monopoly Poses Systemic Risk
Elon Musk Pushes 'Universal High Income' Checks as Ultimate Solution for AI Unemployment
DeepSeek Reportedly Launches First External Fundraising Round, Targets $10B+ Valuation and $300M+
ChatGPT ads move into Australia and New Zealand: Free and Go users first, paid plans stay ad-free