A team led by Duke-NUS Medical School conducted a comprehensive scoping review of 467 clinical AI fairness studies. They catalogued the medical fields covered, the bias-relevant attributes examined, and the fairness metrics applied; the resulting map exposed narrow focus areas and methodological gaps, and the authors offered actionable strategies to advance equitable AI integration across healthcare contexts.
Key points
- Reviewed 467 clinical AI fairness studies, mapping applications across 28 medical fields and seven data types.
- Identified that group fairness metrics (e.g., equalized odds) dominate over individual and distribution fairness approaches.
- Found limited clinician-in-the-loop involvement and proposed integration strategies to bridge technical solutions with clinical contexts.
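To make the dominant metric family concrete, the sketch below shows how equalized odds, the group fairness metric the review highlights, is typically evaluated: a classifier satisfies it when true-positive and false-positive rates match across protected groups. This is an illustrative helper (the function name `equalized_odds_gap` and the toy data are assumptions, not from the review), using only NumPy.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Return the largest between-group gaps in TPR and FPR.

    Equalized odds asks that true-positive rate (TPR) and
    false-positive rate (FPR) be equal across groups; gaps of
    zero mean the criterion is satisfied exactly.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        mask = group == g
        # TPR: fraction of this group's positives predicted positive.
        tpr = np.mean(y_pred[mask & (y_true == 1)])
        # FPR: fraction of this group's negatives predicted positive.
        fpr = np.mean(y_pred[mask & (y_true == 0)])
        rates[g] = (tpr, fpr)
    tprs, fprs = zip(*rates.values())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy example: group "B" is classified perfectly, group "A" is not,
# so both rate gaps are nonzero -- an equalized-odds violation.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group = ["A"] * 4 + ["B"] * 4
tpr_gap, fpr_gap = equalized_odds_gap(y_true, y_pred, group)
```

Individual fairness, by contrast, would compare predictions for similar patients rather than aggregate rates per group, which is one reason the review treats the two families as distinct approaches.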
Why it matters: Addressing identified fairness gaps is crucial to ensure equitable AI-driven diagnoses and treatment decisions across all patient populations.