Medical Racism in the Age of Artificial Intelligence

Written by Elizabeth Gilbert

The public has become increasingly aware that large corporations use algorithms and artificial intelligence (AI) to boost profits. People may be less familiar, however, with the fact that medicine has used algorithms since the 1970s to help providers make rapid, precise treatment decisions, from automated intake processes in primary care to the scoring systems used to evaluate newborns' health.

As algorithms and AI play an expanding role in medicine, it is essential to recognize their fatal flaw: medical algorithms often rely on biased rules and homogeneous data sets that do not reflect the larger patient population. This can exacerbate medical racism. A disturbing example: even though Black Americans are four times more likely than White Americans to have kidney failure, an algorithm used to determine the transplant list ranks Black patients lower than White patients with otherwise identical factors.
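The kidney example can be made concrete. One widely reported instance is the 2009 CKD-EPI equation for estimated glomerular filtration rate (eGFR), which multiplied a Black patient's estimated kidney function by a fixed race coefficient of 1.159, so identical lab results looked healthier on paper. The sketch below is illustrative only, using the published 2009 coefficients; under kidney allocation policy, patients generally begin accruing transplant waiting time at an eGFR of 20 or below.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI equation.

    Note the final race coefficient: the same serum creatinine yields a
    higher (healthier-looking) estimate for a Black patient.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient at issue
    return egfr

# Same labs, same age and sex: serum creatinine 3.5 mg/dL, age 50, male.
without_race = egfr_ckd_epi_2009(3.5, 50, female=False, black=False)
with_race = egfr_ckd_epi_2009(3.5, 50, female=False, black=True)

# Without the coefficient, the estimate falls below the eGFR-20 threshold
# for accruing transplant waiting time; with it applied, it does not.
print(round(without_race, 1), round(with_race, 1))
```

For this hypothetical patient, the coefficient alone determines whether the estimate sits above or below the listing threshold, which is how otherwise identical patients end up ranked differently.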

A study published in 2019 revealed significant racial bias in an algorithm that helps hospital networks decide which patients may require additional care. The algorithm used health costs to predict and rank which patients would benefit most from extra care that could help them stay on their medications and avoid returning to the hospital. The study found, however, that using health costs as a proxy for health needs is biased: because Black patients face disproportionate levels of poverty, they spend less on health care than White patients do. As a result, the algorithm falsely concluded that Black patients were healthier than equally sick White patients.
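The proxy problem the study identified can be sketched in a few lines. The data below are invented for illustration: two patients with the same number of chronic conditions, one of whom spends less on care because of barriers to access. A ranking built on past spending places the lower-spending patient behind an equally sick one, while ranking on a direct measure of need does not.

```python
# Hypothetical data for illustration: equal underlying health need,
# unequal spending due to unequal access to care.
patients = [
    {"id": "A", "chronic_conditions": 4, "annual_cost_usd": 12_000},
    {"id": "B", "chronic_conditions": 4, "annual_cost_usd": 7_000},
]

def rank_by_cost(pts):
    """Rank patients for extra-care outreach by past spending --
    the biased proxy the 2019 study identified."""
    return sorted(pts, key=lambda p: p["annual_cost_usd"], reverse=True)

def rank_by_need(pts):
    """Rank by a direct measure of health need instead."""
    return sorted(pts, key=lambda p: p["chronic_conditions"], reverse=True)

# The cost-based ranking deprioritizes patient B despite identical need.
print([p["id"] for p in rank_by_cost(patients)])
```

The fix the researchers suggested was of the second kind: predict health, not spending, so that lower cost driven by lower access no longer reads as lower need.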

Despite these important concerns, the use of certain algorithms in clinical settings is severely under-regulated in the United States. The American Civil Liberties Union (ACLU) explains that while the Food and Drug Administration (FDA) regulates some of the medical devices and tools that aid physicians in treatment and diagnosis, including AI, other algorithmic decision-making tools are not required to be regulated, such as tools that predict risk of mortality, likelihood of readmission, and in-home care needs.

In a recent white paper, the ACLU proposed the following policy recommendations to address these issues:

  • Public reporting of demographic information should be required.
  • The FDA should require an impact assessment of any differences in device performance by racial or ethnic subgroup as part of the clearance or approval process.
  • Device labels should reflect the results of this impact assessment.
  • The Federal Trade Commission (FTC) should collaborate with the Department of Health and Human Services (HHS) and other federal bodies to establish best practices that device manufacturers not under FDA regulation should follow to lessen the risk of racial or ethnic bias in their tools.

Although the increased use of algorithms and AI in healthcare has the potential to improve the quality of care patients receive, it is essential that steps are taken to ensure that these tools are implemented accurately and fairly. There must be an awareness of the potential for racial bias so that it can be monitored and addressed.

What Can You Do if You or Your Loved One Experience a Medical Malpractice Injury?

If you or a loved one experience a medical malpractice injury, you should reach out to an attorney right away. Contact the experienced attorneys at Berkowitz and Hanna, LLC if you have any questions about your legal rights regarding this concern. To schedule a free, no-obligation consultation, call 203-324-7909 or contact us online today.