Researchers led by Gachon University propose an explainable federated learning (XFL) framework that combines on-vehicle training and secure global aggregation with XAI techniques, optimizing autonomous electric vehicle (AEV) energy management and traffic density prediction while preserving data privacy in smart urban environments.
Key points
- Hierarchical federated learning architecture integrates on-vehicle multilayer perceptron (MLP) models with secure cloud aggregation to optimize AEV energy consumption and traffic density predictions; a minimal aggregation sketch follows this list.
- SHAP and LIME explainability modules identify critical factors such as traffic density, speed, and time of day, enhancing the transparency of model-driven energy control decisions; a SHAP attribution sketch also appears below.
- Global MLP model reaches R² of 94.73% for energy consumption and 99.83% for traffic density on a 1.2 million–record AEV telemetry dataset.
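The paper's exact training loop and aggregation protocol are not reproduced here; as a rough illustration of the federated pattern described in the first key point, the NumPy sketch below trains a one-hidden-layer MLP on each simulated vehicle and averages the weights server-side in FedAvg style. The feature count, layer sizes, learning rate, and synthetic data are placeholder assumptions, and the plain weighted average stands in for whatever secure aggregation scheme the framework actually uses.

```python
import numpy as np

# Hypothetical feature set for illustration: e.g. traffic density, speed, time of day.
N_FEATURES = 4
HIDDEN = 16

def init_mlp(rng):
    """One-hidden-layer MLP weights (a stand-in for the on-vehicle model)."""
    return {
        "W1": rng.normal(0.0, 0.1, (N_FEATURES, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0.0, 0.1, (HIDDEN, 1)),
        "b2": np.zeros(1),
    }

def local_update(params, X, y, lr=1e-2, epochs=5):
    """On-vehicle gradient descent; only the updated weights leave the vehicle."""
    p = {k: v.copy() for k, v in params.items()}
    for _ in range(epochs):
        h = np.maximum(X @ p["W1"] + p["b1"], 0.0)   # ReLU hidden layer
        pred = h @ p["W2"] + p["b2"]                 # e.g. energy-consumption estimate
        err = pred - y                               # gradient of 0.5 * MSE
        dW2 = h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ p["W2"].T) * (h > 0)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        p["W1"] -= lr * dW1; p["b1"] -= lr * db1
        p["W2"] -= lr * dW2; p["b2"] -= lr * db2
    return p

def fed_avg(client_params, client_sizes):
    """Cloud-side aggregation: weight each vehicle's update by its sample count."""
    total = sum(client_sizes)
    return {
        k: sum((n / total) * p[k] for p, n in zip(client_params, client_sizes))
        for k in client_params[0]
    }

# Simulate a few vehicles holding private local telemetry.
rng = np.random.default_rng(0)
global_params = init_mlp(rng)
for _round in range(3):                              # communication rounds
    updates, sizes = [], []
    for _ in range(5):                               # five simulated vehicles
        X = rng.random((200, N_FEATURES))
        y = X @ rng.random((N_FEATURES, 1))          # synthetic local targets
        updates.append(local_update(global_params, X, y))
        sizes.append(len(X))
    global_params = fed_avg(updates, sizes)          # plain average as a stand-in for secure aggregation
```

Only model weights, never raw telemetry, cross the vehicle-to-cloud boundary in this pattern, which is the privacy property the key point highlights.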
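To illustrate the SHAP side of the explainability module (the article names both SHAP and LIME; only SHAP is sketched here), the snippet below runs model-agnostic KernelSHAP over a proxy MLP trained on synthetic data and ranks features by mean absolute attribution. The feature names, model size, and sample counts are assumptions for demonstration, not the paper's configuration.

```python
import numpy as np
import shap                                   # pip install shap
from sklearn.neural_network import MLPRegressor

# Placeholder feature names mirroring the factors the article highlights.
feature_names = ["traffic_density", "speed", "time_of_day", "battery_level"]

# Synthetic stand-in for AEV telemetry (the real dataset is not reproduced here).
rng = np.random.default_rng(42)
X = rng.random((500, len(feature_names)))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 500)

# Small MLP as a proxy for the (local or global) energy-consumption model.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# KernelSHAP: model-agnostic attribution against a small background sample.
background = X[:50]
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:20], nsamples=200)

# Rank features by mean absolute SHAP value (a global importance estimate).
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

This kind of per-feature attribution is what lets an operator see, for example, that traffic density rather than time of day is driving a given energy-control decision.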
Why it matters: By uniting federated learning with explainable AI, this approach delivers scalable, real-time energy optimization and transparency, advancing sustainable smart mobility beyond traditional centralized models.
Q&A
- What is federated learning?
- How does explainable AI improve model trust?
- Why choose MLP for federated energy modeling?