Hey healthcare innovators — lately I’ve had a spike of interest from companies intrigued by the potential of artificial intelligence (AI) in healthcare diagnostics. More companies and providers are using AI to analyze medical images like X-rays, CT scans, and MRIs – and giving doctors a superpower to spot diseases earlier and more accurately. But as we build these incredible tools, we can’t forget that behind every image is a person, a life story, and a set of rights we must protect.
Consider the story of Henrietta Lacks, a Black woman whose cancer cells were taken without her knowledge or consent in 1951. Her cells, known as HeLa cells, have been instrumental in countless medical breakthroughs, yet neither Henrietta nor her family was informed about or compensated for their contribution. Her case remains a stark reminder of the ethical responsibility we bear to safeguard patient autonomy and dignity even as we pursue new treatments, and it became a milestone in establishing informed consent, transparency, and equity as key standards for healthcare innovation.
Today, technologists prioritize the human experience by applying principles of bioethics outlined in the Belmont Report and the Declaration of Helsinki, as well as aligning with anti-discrimination regulations from the U.S. Department of Health and Human Services (HHS). Sounds great in theory — but how do we apply bioethical principles to everyday practice in imaging diagnostics? With the following four key steps, we can ensure our AI advancements in healthcare imaging not only help diagnose diseases better, but also uphold bioethics standards and serve the greater good.
Step 1. During data collection and management, protect patient privacy and ensure representative datasets.
Why it matters:
In 2017, the UK Information Commissioner's Office found that DeepMind, a Google-owned AI company, had accessed 1.6 million patient records from the Royal Free London NHS Foundation Trust without adequate consent while developing Streams, a kidney-injury monitoring app. The incident sparked a public outcry and raised concerns about the misuse of sensitive health data for commercial gain. By actively pursuing informed consent, transparent data practices, and proper de-identification, we can avoid repeating such missteps and ensure that patient data is used ethically and responsibly, solely for the purpose of improving healthcare outcomes.
Practical approach:
Apply informed consent and data governance standards. Implement robust data governance frameworks that prioritize patient autonomy, meeting HHS requirements. Obtain explicit, informed consent from patients before using their data for AI model training. This includes detailed explanations of how their data will be used, potential risks, and their right to opt out at any time.
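As a concrete starting point, here is a minimal sketch of a consent gate in Python. The ConsentRecord fields and scope strings are illustrative assumptions, not an HHS-defined schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical consent record; fields are illustrative, not a formal schema."""
    patient_id: str            # de-identified study ID, never a medical record number
    consented_at: datetime
    scope: set[str]            # e.g., {"ai_model_training", "quality_improvement"}
    opted_out: bool = False    # patients can revoke consent at any time

def eligible_for_training(record: ConsentRecord) -> bool:
    """Include a patient's data only with explicit, unrevoked consent for AI training."""
    return "ai_model_training" in record.scope and not record.opted_out

# Filter a cohort down to records patients have affirmatively consented to share.
cohort = [
    ConsentRecord("study-0001", datetime(2024, 3, 1, tzinfo=timezone.utc), {"ai_model_training"}),
    ConsentRecord("study-0002", datetime(2024, 3, 2, tzinfo=timezone.utc), {"quality_improvement"}),
]
training_cohort = [r for r in cohort if eligible_for_training(r)]
print([r.patient_id for r in training_cohort])  # ['study-0001']
```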
Use advanced techniques for de-identification and anonymization. Employ differential privacy and federated learning to minimize the risk of re-identification. Ensure that all patient data used for training is thoroughly de-identified, removing all direct and indirect identifiers that could compromise patient privacy.
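For imaging specifically, de-identification starts at the DICOM header. Here is a minimal sketch using the pydicom library; the tag list is illustrative and falls far short of the full DICOM PS3.15 de-identification profile a production pipeline should follow:

```python
import pydicom  # pip install pydicom

# Identifiers to blank; an illustrative subset, NOT the complete DICOM
# PS3.15 de-identification profile a real pipeline must implement.
DIRECT_IDENTIFIERS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
]

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for keyword in DIRECT_IDENTIFIERS:
        if keyword in ds:
            ds.data_element(keyword).value = ""  # blank rather than delete, keeping the file valid
    ds.remove_private_tags()  # vendor-specific tags often leak indirect identifiers
    ds.save_as(out_path)

deidentify("scan_raw.dcm", "scan_deid.dcm")  # hypothetical file names
```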
Record data provenance for transparency. Maintain detailed records of data sources, collection methods, and preprocessing steps. Ensure transparency in data-sharing agreements and clearly communicate how the data will be used to build trust with patients and the wider healthcare community.
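One lightweight way to do this is a structured provenance entry per dataset snapshot, stored alongside the data. This sketch assumes a hypothetical record layout; the fields and agreement number are illustrative, not a formal standard:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Illustrative provenance entry; fields are assumptions, not a formal standard."""
    dataset_name: str
    source: str                    # contributing site or registry
    collection_method: str
    preprocessing_steps: list[str]
    content_sha256: str            # fingerprint of the exact data snapshot used for training
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(path: str) -> str:
    """Hash the dataset archive so the provenance record pins an exact snapshot."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

record = ProvenanceRecord(
    dataset_name="chest-xray-train-v3",
    source="Site A teaching hospital (hypothetical data-sharing agreement DSA-042)",
    collection_method="routine clinical imaging, consented for research use",
    preprocessing_steps=["de-identified per PS3.15 profile", "resampled to 512x512"],
    content_sha256=fingerprint("chest_xray_train_v3.tar"),  # hypothetical archive
)
print(json.dumps(asdict(record), indent=2))
```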
Step 2. During algorithm development and validation, account for data bias: accuracy depends on it.
Why it matters:
The case of Optum, a division of UnitedHealth Group, highlights the need for rigorous bias assessment in healthcare AI. In 2019, researchers reported in Science that its algorithm, designed to identify patients needing extra care, was racially biased: it systematically underestimated the health needs of Black patients, leading to disparities in care recommendations. The bias stemmed from the algorithm's reliance on historical healthcare spending data, which was skewed by systemic inequities. By actively identifying and mitigating biases, we can prevent AI from perpetuating or even amplifying existing healthcare disparities.
Practical approach:
Apply complementary bias mitigation strategies. Implement rigorous bias mitigation techniques throughout the development process to prevent and mitigate bias introduction. This includes scrutinizing training data for any inherent biases (e.g., underrepresentation of certain racial groups), using fairness constraints during model training, and regularly auditing the model's performance across different demographic groups. Consider using techniques like adversarial training and data augmentation to improve the model's robustness to bias.
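Auditing across demographic groups can start with simply stratifying standard metrics. A minimal sketch using scikit-learn, with synthetic labels purely for illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def audit_by_group(y_true: np.ndarray, y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Report sensitivity and specificity per demographic group.

    Large gaps between groups are a signal to revisit the training data
    and fairness constraints before deployment.
    """
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
        report[g] = {
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "n": int(mask.sum()),
        }
    return report

# Synthetic data for illustration; real audits use held-out clinical data.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.choice(["group_a", "group_b"], 1000)
for group, metrics in audit_by_group(y_true, y_pred, groups).items():
    print(group, metrics)
```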
Implement rigorous validation protocols. Develop a comprehensive validation framework that includes testing the model on diverse datasets, evaluating its performance against expert radiologists, and assessing its robustness to variations in image quality and acquisition protocols.
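One slice of that framework is quantifying agreement between the model and expert readers. A quick sketch using Cohen's kappa from scikit-learn; the binary reads below are made up for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative reads on ten studies: 1 = finding present, 0 = absent.
radiologist_reads = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
model_reads       = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

# Kappa corrects for chance agreement, unlike raw percent agreement.
kappa = cohen_kappa_score(radiologist_reads, model_reads)
print(f"Model-radiologist agreement (Cohen's kappa): {kappa:.2f}")
```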
Use explainable AI (XAI). Incorporate explainability into the AI model's architecture. This means using techniques like SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-Agnostic Explanations) to provide clear, human-understandable explanations for the model's predictions.
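Here is a minimal SHAP sketch. It uses a scikit-learn classifier on a stand-in tabular dataset for brevity; a deep imaging model would typically be explained with shap.GradientExplainer or shap.DeepExplainer on pixel inputs instead:

```python
import shap  # pip install shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in clinical-style dataset; real imaging pipelines explain pixel inputs.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model.predict, X.iloc[:100])  # background sample
shap_values = explainer(X.iloc[:5])                      # cases to explain

# Waterfall plot: which features pushed this case's prediction up or down.
shap.plots.waterfall(shap_values[0])
```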
Step 3. In diagnostic interpretation, use AI to enhance, not replace, human expertise.
Why it matters:
The case of IBM's Watson for Oncology highlights the importance of human expertise in interpreting AI-generated findings. While initially touted as a revolutionary tool for cancer care, a 2018 report by STAT News revealed that Watson often provided unsafe and inaccurate treatment recommendations. This was due in part to the system being trained on a small number of synthetic cases rather than a broad base of real patient data, and its recommendations did not always align with established clinical guidelines. The lesson: AI output needs a qualified clinician's scrutiny before it touches patient care.
Practical approach:
Apply human-in-the-loop design. Design the AI as a decision support tool, not a replacement for human judgment. The AI should provide clinicians with additional information and insights, but the final diagnostic decision should always rest with a qualified healthcare professional.
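A sketch of what that routing logic might look like, with illustrative thresholds (in practice they are set with clinical stakeholders and validated on held-out data):

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # illustrative; tune with clinical stakeholders

@dataclass
class AISuggestion:
    case_id: str
    finding: str
    confidence: float

def triage(suggestion: AISuggestion) -> str:
    """Route every case to a clinician; the AI only prioritizes the queue.

    The final diagnostic decision always rests with the radiologist;
    positive or low-confidence findings are flagged for expedited review.
    """
    if suggestion.finding != "no finding" or suggestion.confidence < REVIEW_THRESHOLD:
        return "expedited radiologist review"
    return "routine radiologist review"  # never an automatic sign-off

print(triage(AISuggestion("case-17", "suspected nodule", 0.82)))  # expedited
print(triage(AISuggestion("case-18", "no finding", 0.97)))        # routine
```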
Build the user interface for transparency. Develop a user interface that clearly displays the AI's findings, including the specific image features that contributed to its analysis. This transparency empowers clinicians to critically evaluate the AI's output and incorporate it into their decision-making process.
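One way to keep the interface honest is to have the model emit a structured, auditable payload for every suggestion, which the UI then renders. The fields below are assumptions for illustration:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Finding:
    """What the UI surfaces for each AI suggestion; fields are illustrative."""
    label: str
    confidence: float
    contributing_region: dict  # image region the model weighted most heavily
    model_version: str         # so clinicians know exactly which model produced the output

finding = Finding(
    label="suspected pulmonary nodule",
    confidence=0.87,
    contributing_region={"x": 212, "y": 148, "width": 40, "height": 36},
    model_version="cxr-model-2.3.1",
)
print(json.dumps(asdict(finding), indent=2))
```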
Create channels for continuous feedback and improvement. Establish a feedback loop where clinicians can provide feedback on the AI's performance, allowing for continuous improvement of the model's accuracy and clinical utility.
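The loop can start as simply as logging the clinician's read next to the AI's. A minimal sketch assuming a flat CSV log (a production system would use an audited database):

```python
import csv
from datetime import datetime, timezone

FEEDBACK_LOG = "clinician_feedback.csv"  # illustrative store

def log_feedback(case_id: str, ai_finding: str, clinician_finding: str, comment: str = "") -> None:
    """Append the clinician's assessment next to the AI's for later error analysis."""
    with open(FEEDBACK_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            case_id, ai_finding, clinician_finding, comment,
        ])

log_feedback("case-17", "suspected nodule", "benign calcification", "AI over-called; low contrast")
```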
Step 4. Ongoing monitoring and improvement are essential to maintain safety and effectiveness.
Why it matters:
The case of Google Health's AI model for diabetic retinopathy screening demonstrates the importance of ongoing monitoring and improvement. Initially, the model showed high accuracy in detecting the disease from retinal images. However, when deployed in real-world clinical settings in Thailand, the model's performance dropped significantly due to differences in image quality, camera types, and patient demographics. A model that excels in validation can still falter at the point of care, so deployment is where monitoring begins, not where development ends.
Practical approach:
Conduct regular audits and ongoing monitoring. Establish a robust monitoring and auditing system to track the AI's performance in real-world clinical settings. This includes regularly reviewing the model's accuracy, identifying potential biases, and detecting any degradation in performance over time.
Set performance metrics and reporting mechanisms. Track and report key performance metrics, such as sensitivity, specificity, and positive predictive value, to ensure the AI maintains a high level of diagnostic accuracy.
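Tying the last two ideas together, here is a minimal monitoring sketch: compute the key metrics over a window of recent cases and alert when any slips below the validated baseline. The baseline values, tolerance, and synthetic data are all illustrative:

```python
import numpy as np

# Baselines from pre-deployment validation; thresholds are illustrative.
BASELINE = {"sensitivity": 0.92, "specificity": 0.95, "ppv": 0.88}
TOLERANCE = 0.05  # alert if a metric drops more than 5 points below baseline

def window_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Sensitivity, specificity, and PPV over one monitoring window."""
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
    }

def degraded(metrics: dict) -> list[str]:
    """Return the metrics that have slipped below tolerance and need investigation."""
    return [m for m, v in metrics.items() if v < BASELINE[m] - TOLERANCE]

# Synthetic monitoring window; real audits use confirmed diagnoses as ground truth.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_pred = np.where(rng.random(500) < 0.85, y_true, 1 - y_true)  # ~85% agreement
metrics = window_metrics(y_true, y_pred)
print(metrics, "degraded:", degraded(metrics))
```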
Build in adaptive learning and algorithm retraining. Implement mechanisms for the AI to continuously learn from new data and feedback. This allows the model to adapt to evolving medical knowledge and maintain its effectiveness in the face of new disease patterns or imaging techniques.
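Retraining should be gated rather than automatic. A tiny sketch of one possible gate, with assumed thresholds: retrain only when degradation is detected and enough newly labeled cases exist, and always re-run the bias audit before redeployment:

```python
MIN_NEW_LABELED_CASES = 2_000  # assumed threshold

def should_retrain(degraded_metrics: list[str], new_labeled_cases: int) -> bool:
    """Gate retraining on both observed degradation and sufficient new ground truth."""
    return bool(degraded_metrics) and new_labeled_cases >= MIN_NEW_LABELED_CASES

if should_retrain(["sensitivity"], new_labeled_cases=3_412):
    print("Schedule retraining, then re-run the bias audit before redeployment.")
```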
Honoring Henrietta Lacks' legacy through ethical AI
By prioritizing bioethics in AI development for healthcare, we honor Henrietta Lacks and others whose contributions to medicine deserve respect. Every data point represents a person, a life, and a trust we must uphold. This isn't just about compliance; it's about building a trustworthy future where AI empowers clinicians, improves outcomes, and ensures equitable treatment for all.