August 19 in Longevity and AI

Gathered globally: 4, selected: 4.

The News Aggregator is an artificial intelligence system that gathers and filters global news on longevity and artificial intelligence, delivering tailored multilingual content at varying levels of sophistication to help users follow what's happening in the world of longevity and AI.


A team from the University of Lübeck develops a LightGBM-based model that uses five non-invasive sensor streams—skin temperature, body temperature, blood volume pulse, electrodermal activity, and heart rate—to forecast interstitial glucose fluctuations. The team applies ensemble feature selection (BoRFE) and leave-one-participant-out cross-validation, achieving an RMSE of roughly 18.5 mg/dL and demonstrating the feasibility of real-life monitoring.

Key points

  • LightGBM model with BoRFE selection predicts interstitial glucose with RMSE ~18.5 mg/dL and MAPE ~15.6%.
  • Five non-invasive sensor modalities (STEMP, BVP, EDA, HR, BTEMP) capture physiological correlates of glucose excursions.
  • Leave-one-participant-out cross-validation across 32 healthy volunteers during MMT and OGTT validates real-time prediction accuracy.

Why it matters: This approach enables comfortable, real-time blood sugar tracking without invasive devices, potentially transforming diabetes monitoring and preventive health management.
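The leave-one-participant-out protocol is the methodological heart of the study: each volunteer's data is held out entirely, so the model is always scored on a person it has never seen. A minimal sketch of that loop, using invented glucose readings and a naive per-fold mean predictor standing in for the LightGBM model:

```python
import math
from collections import defaultdict

# Synthetic (participant_id, glucose mg/dL) readings -- invented values,
# standing in for the study's sensor-derived feature rows.
readings = [
    ("p01", 92.0), ("p01", 110.0), ("p02", 98.0), ("p02", 121.0),
    ("p03", 105.0), ("p03", 88.0), ("p04", 130.0), ("p04", 101.0),
]

by_participant = defaultdict(list)
for pid, y in readings:
    by_participant[pid].append(y)

def lopo_rmse(groups):
    """Leave-one-participant-out CV: hold out each participant entirely,
    fit on the rest (here: a trivial mean predictor), score on the holdout."""
    errors = []
    for held_out in groups:
        train = [y for pid, ys in groups.items() if pid != held_out for y in ys]
        prediction = sum(train) / len(train)  # stand-in for model.predict()
        errors.extend((y - prediction) ** 2 for y in groups[held_out])
    return math.sqrt(sum(errors) / len(errors))

print(f"LOPO RMSE: {lopo_rmse(by_participant):.1f} mg/dL")
```

Grouping the folds by participant rather than by row is what prevents the optimistic bias of training and testing on the same person's physiology.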

Q&A

  • What is interstitial glucose?
  • How do wearables estimate glucose without blood samples?
  • What is BoRFE feature selection?
  • Why use LightGBM over other models?
  • What applications could this enable?
Digital biomarkers for interstitial glucose prediction in healthy individuals using wearables and machine learning

A collaboration between Cornell’s Precision Nutrition Center and UC San Diego harnesses machine learning to enhance maternal and child nutrition. By integrating anthropometry, biochemical markers, microbiome data, and digital tools, AI-driven models personalize dietary interventions to boost growth and health in low-resource contexts.

Key points

  • AI models at Cornell process multimodal data—anthropometry, biomarkers, microbiome—to optimize nutrition.
  • Transformer-based ‘TPN 2.0’ tool refines neonatal parenteral nutrition formulas, improving safety and reducing costs.
  • Microbiota-directed complementary foods restore growth in malnourished children by targeting gut bacterial profiles.

Why it matters: Implementing AI-driven precision nutrition can transform maternal and child health programs by enabling targeted, data-driven dietary interventions that outperform one-size-fits-all approaches.
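The integration step the first key point describes—combining anthropometry, biomarker, and microbiome records into one feature vector per child—can be sketched in miniature. All field names and values below are invented for illustration; this is not the Cornell pipeline itself:

```python
# Toy multimodal merge: join anthropometry, biomarker, and microbiome records
# by child ID into a single flat feature vector per child.
anthropometry = {"c01": {"weight_kg": 7.9, "height_cm": 68.0}}
biomarkers    = {"c01": {"hemoglobin_g_dl": 10.2}}
microbiome    = {"c01": {"bifidobacterium_rel_abund": 0.31}}

def merge_features(child_id, *tables):
    """Concatenate each modality's fields into one flat feature dict --
    the form a downstream model (e.g. gradient boosting) would consume."""
    features = {}
    for table in tables:
        features.update(table.get(child_id, {}))
    return features

row = merge_features("c01", anthropometry, biomarkers, microbiome)
print(sorted(row))
```

In practice each modality arrives at a different cadence and with missing entries, so the real integration work lies in alignment and imputation rather than in the merge itself.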

Q&A

  • What is precision nutrition?
  • How does AI enhance nutritional assessment?
  • What are microbiota-directed complementary foods (MDCF)?
  • What is a digital twin in nutrition research?
Advances in artificial intelligence and precision nutrition approaches to improve maternal and child health in low resource settings

A team at Yale School of Medicine conducted a group-based simulation trial comparing a standard AI risk dashboard for upper gastrointestinal bleeding with an enhanced version including GutGPT, an LLM-powered conversational interface. While GutGPT significantly improved Effort Expectancy scores, indicating better perceived usability, it did not produce a statistically significant change in Behavioral Intention to adopt the system. The study highlights the importance of integration, trust, and workflow fit beyond ease of use in clinical AI adoption.

Key points

  • Integration of GutGPT—a three-tier LLM architecture (parser, model, guideline retriever)—with an ML-based UGIB risk dashboard.
  • Randomized simulation trial with 106 trainees compared GutGPT+dashboard versus dashboard alone; primary outcome: Behavioral Intention; secondary: Effort Expectancy and decision accuracy.
  • GutGPT improved perceived usability (Effort Expectancy Δ=0.6; 95% CI [0.3,1.0]) but showed no significant effect on adoption intent (BI p=0.657).

Why it matters: Demonstrates that improved AI interface usability alone won’t drive clinical adoption, underscoring the need for trust and workflow integration.
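The Effort Expectancy result is a between-arm mean difference with a confidence interval. A toy version of that comparison, using invented Likert-style scores (not the trial's data) and a normal-approximation CI as a simplification of whatever analysis the authors actually ran:

```python
import math
from statistics import mean, stdev

# Synthetic Effort Expectancy scores (1-5 scale) for the two arms --
# invented values; the trial's real data are not reproduced here.
gutgpt_arm    = [4.5, 4.0, 4.2, 4.8, 3.9, 4.6, 4.1, 4.4]
dashboard_arm = [3.8, 3.5, 4.0, 3.6, 3.9, 3.4, 3.7, 3.3]

def diff_with_ci(a, b, z=1.96):
    """Mean difference a - b with a normal-approximation 95% CI
    (Welch-style standard error)."""
    delta = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return delta, (delta - z * se, delta + z * se)

delta, (lo, hi) = diff_with_ci(gutgpt_arm, dashboard_arm)
print(f"Δ = {delta:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A CI that excludes zero (as in the paper's Δ=0.6, CI [0.3, 1.0]) indicates a statistically significant usability gain, which is exactly what failed to carry over to the Behavioral Intention outcome.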

Q&A

  • What is Effort Expectancy?
  • How does GutGPT classify clinician queries?
  • Why didn’t increased usability translate into higher adoption?
  • What is the UTAUT framework?
Usability and adoption in a randomized trial of GutGPT, a GenAI tool for gastrointestinal bleeding

Researchers at IBM and Google develop a hybrid Quantum AI framework that leverages parameterized quantum circuits and quantum feature maps. They apply superposition and entanglement to accelerate linear algebra routines and classification algorithms, aiming to enhance performance in optimization, drug discovery pipelines, and large-scale data analysis.

Key points

  • IBM and Google teams deploy hybrid quantum-classical circuits using qubit superposition and entanglement to accelerate linear algebra tasks.
  • The Harrow-Hassidim-Lloyd algorithm demonstrates exponential speedup in solving linear systems for machine learning applications.
  • Variational Quantum Circuits enable QCNN and QSVM models, enhancing classification and feature extraction on high-dimensional datasets.

Why it matters: Quantum AI unlocks accelerated solutions for complex machine learning and optimization tasks, with potential to transform data-intensive research and industry applications.
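A variational quantum circuit in miniature: a single-qubit RY(θ) rotation puts |0⟩ into superposition, and a classical outer loop tunes θ to minimize the measured ⟨Z⟩ expectation—the core pattern behind VQC-based models such as QCNN and QSVM. This is a toy closed-form simulation in plain Python, not real hardware or any of the teams' frameworks:

```python
import math

def ry_state(theta):
    """Amplitudes of RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return (math.cos(theta / 2), math.sin(theta / 2))

def expect_z(state):
    """Expectation of Pauli-Z: |amp0|^2 - |amp1|^2 (equals cos(theta))."""
    a0, a1 = state
    return a0 ** 2 - a1 ** 2

# At theta = pi/2 the qubit is in an equal superposition: <Z> is ~0.
# "Variational" loop: scan a parameter grid for the theta minimizing <Z>.
thetas = [i * math.pi / 180 for i in range(0, 361)]
best = min(thetas, key=lambda t: expect_z(ry_state(t)))
print(f"theta* = {best:.3f} rad, <Z> = {expect_z(ry_state(best)):.3f}")
```

Real VQCs replace the grid scan with gradient-based optimization over many entangled qubits, but the structure—parameterized gates inside, classical optimizer outside—is the same.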

Q&A

  • What is quantum superposition?
  • How do variational quantum circuits work?
  • What is the Harrow-Hassidim-Lloyd (HHL) algorithm?
  • What limits current quantum hardware?
The Role of Quantum Computing in Artificial Intelligence