In recent years, the potential applications of artificial intelligence (AI) have been widely explored across a number of industries. Indeed, leaders in fields such as law, finance, and entertainment have been navigating how to use this tool ethically and safely. Perhaps more than in any other industry, the ethical and safe implementation of AI is paramount in health care.
AI was first introduced in the 1950s, but it was met with skepticism from the scientific community until the 1970s, when prototypes like the Causal Associational Network (CASNET) were developed. CASNET applied disease data to an individual patient and advised the physician on how to help that patient manage the disease. Today, AI is used in health care in a variety of ways, from aiding in diagnosis and drug development to handling administrative tasks like transcribing medical documents and coordinating billing. The push to further incorporate AI into the healthcare field promises certain benefits, such as improving health outcomes by enabling providers to offer higher-quality, more empathetic care at a lower cost and helping patients make more informed healthcare decisions.
Nevertheless, an unchecked embrace of artificial intelligence also carries significant risks. For instance, when healthcare algorithms rely on data that is biased against underrepresented communities, the result can be an exacerbation of existing racial and socioeconomic disparities in health care. This was made clear when a 2019 study revealed racial bias in a clinical algorithm hospitals relied upon to determine which patients needed care. Because the algorithm relied upon healthcare spending to make that determination, a Black patient had to be categorized as much sicker than a White patient to be recommended the same care. Relying on spending data fails to account for the fact that Black patients have historically had less money to spend on health care than White patients.
Additional risks include the potential for incorrect medication dosages or poor treatment when clinicians, who have not necessarily received technical training in AI, fail to identify underlying AI malfunctions. For example, a clinician may fail to recognize when a phenomenon known as “overfitting” has occurred: the algorithm fits its training data so closely that the model cannot make accurate predictions or draw sound conclusions from any new data.
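To illustrate what overfitting looks like in practice, the short Python sketch below is a hypothetical example, not drawn from any clinical system: it fits two models to a small set of made-up data points, a simple one that captures the underlying trend and an overly flexible one that chases the random noise. The overly flexible model scores almost perfectly on the data it was trained on but tends to do worse on new data, which is the hallmark of overfitting.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: a simple linear trend plus random noise.
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.size)

# New, unseen data drawn from the same underlying trend.
x_test = np.linspace(0.02, 0.98, 50)
y_test = 2 * x_test + rng.normal(scale=0.1, size=x_test.size)

def mean_squared_error(y_true, y_pred):
    # Average squared gap between actual and predicted values.
    return float(np.mean((y_true - y_pred) ** 2))

for degree in (1, 9):
    # Fit a polynomial of the given degree to the training data.
    coefficients = np.polyfit(x_train, y_train, degree)
    train_error = mean_squared_error(y_train, np.polyval(coefficients, x_train))
    test_error = mean_squared_error(y_test, np.polyval(coefficients, x_test))
    print(f"degree {degree}: training error {train_error:.4f}, error on new data {test_error:.4f}")

# The degree-9 model hugs the noisy training points (very low training error)
# but typically performs worse on the new data: the behavior described as overfitting.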
To prevent the risks associated with the use of AI in health care while seeking its benefits, human oversight is unequivocally essential at every stage and level: from the companies developing the technology, to agencies such as HHS and the FDA that may regulate it, to the providers and healthcare systems using it.
Unfortunately, medical errors are an all-too-common problem in health care and have been reported as the third leading cause of death in the U.S. Some of these errors are caused by medical malpractice. Because the use of artificial intelligence can itself result in medical errors, it is important to examine the implications of AI for medical malpractice.
The increased use of AI in healthcare decision-making has reignited the debate over liability in medical malpractice actions. A recent exploration of this issue in Politico explains that, in the dispute over who is responsible when something goes wrong, health technology companies and some hospitals have argued that providers are responsible because they make the final call on care and are ultimately accountable for their decisions. One lawmaker has suggested a safe harbor for providers and the AI products they use if they join a surveillance program that tracks patient outcomes.
The problem with such an approach is that patients would ultimately pay the price. The importance of ensuring that someone is held accountable when AI plays a role in medical malpractice cannot be overstated. As Sean Domnick, president of the American Association for Justice, told Politico, “When we give people absolution from responsibility, it means everyday people bear the brunt of it.”
As AI continues to be further integrated into healthcare delivery, the three branches of government will be shaping in real time the experience patients have when receiving health care and when seeking justice if medical malpractice occurs. For this reason, the counsel and guidance of an experienced attorney can be essential when a patient suffers a medical malpractice injury.
If you believe that you or a loved one may have experienced a medical malpractice injury, you should reach out to an attorney right away. Contact the experienced attorneys at Berkowitz and Hanna, LLC if you have any questions about your legal rights regarding this concern. To schedule a free, no-obligation consultation, call 203-324-7909 or contact us online today.