As AI advances across many sectors, its impact is increasingly felt in the healthcare industry.
Experts weighed in Tuesday on the potential and perils of AI in healthcare, particularly the challenge of maintaining health equity as the technology advances, during “The Coming Revolution: Equity in the Age of AI,” a webinar moderated by USC Annenberg Center for Health Journalism Director Michelle Levander.
The event focused on how AI bias in healthcare settings often stems from incomplete or demographically skewed data, which can leave some patient communities underrepresented or missing from the data entirely.
Although some panelists said AI algorithms can create and deepen biases, others said the technology could address disparities in healthcare and provide more equitable treatment and access.
One example of a clinical algorithm using race to calculate health outcomes is the estimated glomerular filtration rate (eGFR) calculator, which provides the best overall index of kidney function, according to Tina Hernandez-Boussard, a professor of medicine at Stanford University.
Hernandez-Boussard said research has shown that the calculator increases the eGFR value for Black patients because “it was assumed inaccurately that Black patients have higher muscle mass and therefore higher creatinine levels.” She said this can lead to delays in diagnosis and treatment and affect Black patients’ access to kidney transplants.
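For readers who want to see that adjustment in concrete terms, the sketch below follows the structure of the 2009 CKD-EPI creatinine equation, which multiplied the estimate by a fixed coefficient when a patient was identified as Black; a 2021 revision removed that term. It is an illustrative approximation of the published formula, not a clinical tool.

    def egfr_ckd_epi_2009(serum_creatinine_mg_dl, age_years, is_female, is_black):
        # Published 2009 CKD-EPI coefficients; shown for illustration only.
        kappa = 0.7 if is_female else 0.9
        alpha = -0.329 if is_female else -0.411
        egfr = (141
                * min(serum_creatinine_mg_dl / kappa, 1.0) ** alpha
                * max(serum_creatinine_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age_years
                * (1.018 if is_female else 1.0))
        # The race adjustment Hernandez-Boussard described: the same lab value
        # yields a higher estimate for Black patients, which can make kidney
        # function look healthier than it is and delay diagnosis or referral.
        if is_black:
            egfr *= 1.159
        return egfr  # mL/min/1.73 m^2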
Katie Palmer, a health tech correspondent at STAT News, said that biases in healthcare AI are often due to a lack of robust data acquisition and collection, which leaves some patient communities poorly represented.
“The AI bias problem has been described to me as taking the problems with the more traditional clinical decision support tools, like eGFR, and putting those problems on steroids,” Palmer said.
She said that letting AI algorithms choose which variables drive predicted health outcomes can lead to reliance on factors, such as race or its proxies, that don’t directly relate to a patient’s health.
“In these deep learning models, where you’re letting the computer determine which variables are going to drive that predictive outcome, you’re not selecting the variables yourself that you think are going to be most meaningful,” Palmer said. “That means that it will end up relying on some things that may not be meaningful. It may over-rely on those proxies of race, in inappropriate ways, that don’t impact the patient’s health.”
Bino Varghese, an associate professor of radiology at USC Keck School of Medicine, said that in his experience, AI integration in radiology has improved early and accurate detection of disease and led to shorter wait times and faster treatment responses.
He emphasized the importance of building rigorous and robust data sources to reduce the likelihood of bias in healthcare outcomes.
“It is only as good as the data that you put in, so if you don’t have good quality data and if you don’t have a lot of data, then your model becomes very biased because that’s not translated to everybody,” Varghese said.
Joseph Betancourt, president of the Commonwealth Fund, a non-profit organization dedicated to promoting equitable healthcare systems, said AI and digital tools could reduce disparities and improve culturally competent care.
Betancourt said technology can address disparities in the healthcare system by removing three main barriers that keep communities of color from seeking mental health treatment.
“One was stigma. The second was the social drivers – not being able to get somewhere to be seen,” Betancourt said. “And the third is perhaps because of trust – wanting to see somebody who looks like you, shares your lived experience or speaks your language.”
But even as AI advances make a direct impact on the healthcare system, grant funding for science and medicine has dropped this year. The average payment for competitive National Institutes of Health grants has fallen by 41% during the Trump administration, according to The New York Times.
The significant decrease in funding for medical and scientific research could mean fewer opportunities to explore ethical ways of incorporating AI into healthcare.
“There has to be an optimal path that everybody should follow where research is also part of the grand scheme of things,” Varghese said. “By not innovating and by not making things better, we’re just settling for a poor lifetime value.”
Amber Angell, a USC assistant professor of occupational science and therapy, applies machine learning models to electronic health record data sets to examine the prevalence of certain co-occurring conditions in autistic populations.
She has received two grants, one from the National Institute of Mental Health and one from the National Institute of Child Health and Human Development. Angell said that the decrease in grant funding raises concerns for public health and science-related research.
“It’s a huge concern if the funding is decreasing because there’s all kinds of really important health and public health problems that these studies are funding,” Angell said. “But I know a lot of investigators are looking to foundations, are looking to private funding or looking for other ways to get their work funded.”
Levander said that AI applications can support healthcare fields, but they can also be met with resistance and challenges.
“If done right, AI can help address inequities that have persisted for generations when it comes to health care delivery in this country. But the current administration has demonstrated little appetite for regulating this fast-moving industry,” Levander said in a statement to Annenberg Media. “For now, it largely will be left up to many different players to decide whether it is in their corporate interest to address these issues of equitable care.”
