At ASTRO, discussing the risks of using AI with patient data

by Lisa Chamoff, Contributing Reporter | September 18, 2019
Artificial intelligence (AI) holds plenty of promise in medicine, but there are dangers in relying on algorithms to make predictions about patient outcomes.

During a keynote address at the ASTRO annual meeting in Chicago, David Magnus, the Thomas A. Raffin Professor of Medicine and Biomedical Ethics and professor of pediatrics and medicine at Stanford University, spoke about the ethics of using deep learning to make sense of patient data and the difficulty of addressing privacy concerns.

Magnus described how missing data from underrepresented populations, including people of color, along with biases built into existing data, can distort what an algorithm learns. He referred to a decade-old study of how developmental delays figure into pediatric transplant decisions, which found huge variation in practice across institutions.

“The reason why this matters is because if you learn from data from an institution where those kids are not listed for transplant, the machine learning will learn that it’s fatal to be developmentally delayed ... and reinforce the existing biases in current clinical practice,” Magnus said.
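To make that mechanism concrete, the following is a minimal sketch in Python, using synthetic data and hypothetical feature names rather than anything from the study Magnus cited. A classifier trained on one institution's listing decisions simply encodes that institution's practice:

```python
# A sketch of the failure mode Magnus describes, using synthetic data and
# hypothetical feature names (nothing here comes from the study he cited).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features per child: a developmental-delay flag and disease severity.
developmental_delay = rng.integers(0, 2, n)
disease_severity = rng.normal(0.0, 1.0, n)
X = np.column_stack([developmental_delay, disease_severity])

# At this hypothetical training institution, developmentally delayed
# children are never listed for transplant, regardless of medical need.
listed = ((developmental_delay == 0) & (disease_severity > 0)).astype(int)

model = LogisticRegression().fit(X, listed)

# The coefficient on developmental_delay comes out strongly negative:
# the model has absorbed the institution's practice as if it were
# clinical fact, and will reproduce that bias wherever it is deployed.
print(dict(zip(["developmental_delay", "disease_severity"], model.coef_[0])))
```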

Turning to the difficulty of navigating patient privacy, Magnus said that sharing de-identified data with third parties without consent remains controversial, citing a recent lawsuit against the University of Chicago Medical Center for allegedly sharing patient records with tech giant Google without removing provider notes.

“Consent is generally required for information in research repositories, but it’s generally not required for secondary uses in de-identified clinical data, and patients have a right to access their own information,” Magnus said. “That’s our current regulatory structure. It’s not adequate. And there have been a lot of calls recently for rethinking how we think about data and our obligations and potential uses.”

Even if all 18 HIPAA identifiers are stripped, Magnus said, that does not necessarily mean the data is truly de-identified. He urged more discussion on how to ethically share data, particularly with private companies.
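The following minimal sketch illustrates the point with a tiny synthetic table and hypothetical field names: even after the explicit identifiers are gone, combinations of ordinary fields can still isolate individual patients.

```python
# A sketch with a tiny synthetic table and hypothetical field names:
# explicit identifiers are gone, yet rare combinations of ordinary
# fields (quasi-identifiers) can still single out individual patients.
from collections import Counter

records = [
    {"zip3": "606", "birth_year": 1954, "sex": "F", "dx": "lung cancer"},
    {"zip3": "606", "birth_year": 1954, "sex": "F", "dx": "breast cancer"},
    {"zip3": "606", "birth_year": 1987, "sex": "M", "dx": "lymphoma"},
    {"zip3": "601", "birth_year": 1962, "sex": "M", "dx": "prostate cancer"},
]

quasi = [(r["zip3"], r["birth_year"], r["sex"]) for r in records]
counts = Counter(quasi)

# A record whose quasi-identifier combination appears only once can
# potentially be re-identified by linking it to an outside dataset,
# such as a voter roll, that carries the same fields plus a name.
unique = [r for r, q in zip(records, quasi) if counts[q] == 1]
print(f"{len(unique)} of {len(records)} records are unique on (zip3, birth_year, sex)")
```

Re-identification research has repeatedly shown that combinations as ordinary as ZIP code, birth date, and sex are enough to single out a large share of the population.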

“We need to think about a new model of data stewardship, recognizing the duties to protect patients and the data entrusted to their providers,” Magnus said.

During a moderated discussion after the keynote, Suchi Saria, a professor of machine learning and healthcare at Johns Hopkins University, said it was important to design algorithms that can correct for biases in the data, a task she acknowledged is challenging.

“More widespread education of how one can be more holistic in designing these systems is definitely something ... that is going to be key,” Saria said.
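One widely discussed technique of the kind Saria alludes to, sketched below with hypothetical data under the assumption of a single binary sensitive attribute, is to reweigh training examples so that the sensitive attribute and the label are statistically independent before the model is fit:

```python
# A sketch of reweighing, assuming one binary sensitive attribute; the
# data and variable names are hypothetical, not drawn from Saria's work.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)       # sensitive attribute
severity = rng.normal(0.0, 1.0, n)  # legitimate clinical signal

# Biased historical labels: group 1 is under-treated at every severity.
label = ((severity - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0).astype(int)

# Reweighing (in the style of Kamiran and Calders): weight each record by
# expected P(group) * P(label) over observed P(group, label), so that in
# the weighted data the sensitive attribute and label are independent.
w = np.ones(n)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        w[mask] = ((group == g).mean() * (label == y).mean()) / mask.mean()

X = np.column_stack([group, severity])
naive = LogisticRegression().fit(X, label)
reweighed = LogisticRegression().fit(X, label, sample_weight=w)

# The coefficient on the sensitive attribute shrinks toward zero once
# the historical imbalance is corrected for.
print("naive group coef:    ", naive.coef_[0][0])
print("reweighed group coef:", reweighed.coef_[0][0])
```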
