Researchers at Duke-NUS Medical School introduce FairFML, a model-agnostic framework that integrates a fairness penalty into federated learning (FedAvg/Per-FedAvg) to mitigate gender disparities in out-of-hospital cardiac arrest (OHCA) prediction, achieving up to 90% fairness gains with minimal AUC loss.

Key points

  • FairFML integrates a convex λ-weighted fairness loss into FedAvg and Per-FedAvg to reduce gender bias by up to 90% in federated cardiac arrest models.
  • Validation on 7,425 OHCA episodes partitioned across 4–6 heterogeneous sites shows FairFML maintains predictive AUC within 0.02 of centralized models.
  • The model-agnostic framework supports models ranging from logistic regression to deep learning, offering scalable bias mitigation without sharing raw patient data.
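
The λ-weighted fairness penalty and FedAvg aggregation described above can be sketched as follows. This is a minimal illustration, not FairFML's exact formulation: the penalty here is an assumed squared gap in mean predicted risk between gender groups, and all function names, hyperparameters, and the simulated data are illustrative.

```python
# Sketch: lambda-weighted fairness-penalized local training plus a FedAvg
# server step. Penalty form and all names are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, g, lam=1.0, lr=0.1, steps=50):
    """One client's local training: logistic loss + lam * fairness penalty."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Gradient of the average logistic loss.
        grad = X.T @ (p - y) / len(y)
        # Assumed penalty: squared gap between groups' mean predicted risk.
        gap = p[g == 1].mean() - p[g == 0].mean()
        dp = p * (1 - p)  # derivative of sigmoid w.r.t. its input
        dgap = (X[g == 1] * dp[g == 1, None]).mean(0) - \
               (X[g == 0] * dp[g == 0, None]).mean(0)
        grad += lam * 2 * gap * dgap  # chain rule on lam * gap**2
        w = w - lr * grad
    return w

def fedavg(client_weights, sizes):
    """Server step: size-weighted average of client models (FedAvg)."""
    return np.average(np.stack(client_weights), axis=0,
                      weights=np.asarray(sizes, dtype=float))

# Simulate 4 heterogeneous sites with a binary gender attribute g.
rng = np.random.default_rng(0)
clients = []
for _ in range(4):
    X = rng.normal(size=(200, 3))
    g = (rng.random(200) < 0.5).astype(int)
    y = (sigmoid(X @ np.array([1.0, -1.0, 0.5]) + 0.8 * g) > 0.5).astype(float)
    clients.append((X, y, g))

w = np.zeros(3)
for _ in range(10):  # communication rounds
    local_ws = [local_update(w.copy(), X, y, g) for X, y, g in clients]
    w = fedavg(local_ws, [len(c[1]) for c in clients])
```

Raising `lam` pushes the group-wise mean risks together at some cost in fit, mirroring the fairness–accuracy trade-off the paper reports; a convex penalty like this keeps the local objective tractable for gradient methods.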

Why it matters: Embedding fairness constraints into federated learning enables equitable AI-driven healthcare delivery across institutions without sacrificing performance.

Q&A

  • What is federated learning?
  • How does FairFML improve fairness?
  • What fairness metrics are used?
  • Why is convexity important?
  • What trade-offs does FairFML introduce?


FairFML: fair federated machine learning with a case study on reducing gender disparities in cardiac arrest outcome prediction