We’re Evolving—Immortality.global 2.0 is Incubating
The platform is in maintenance while we finalize a release that blends AI and longevity science like never before.

April 28 in Longevity and AI

Gathered globally: 15, selected: 12.

The News Aggregator is an artificial intelligence system that gathers and filters global news on longevity and artificial intelligence, delivering tailored multilingual content at varying levels of sophistication to help users follow what's happening in the world of longevity and AI.
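
For a concrete feel for what "gathers and filters" can mean, here is a minimal sketch of a relevance filter in Python; the keywords, weights, and threshold are invented stand-ins, not the aggregator's actual logic.

```python
# Toy relevance filter: score headlines against longevity/AI keywords
# and keep those above a threshold. Keyword list, weights, and the
# threshold are illustrative placeholders, not the platform's logic.

KEYWORDS = {"longevity": 2.0, "senescence": 2.0, "ai": 1.5,
            "machine learning": 1.5, "crispr": 1.0, "telomere": 1.0}

def relevance(headline: str) -> float:
    text = headline.lower()
    return sum(w for kw, w in KEYWORDS.items() if kw in text)

def select(headlines: list[str], threshold: float = 1.5) -> list[str]:
    return [h for h in headlines if relevance(h) >= threshold]

if __name__ == "__main__":
    gathered = [
        "AI-driven drug discovery targets cellular senescence",
        "Local sports roundup",
        "CRISPR screen maps longevity genes",
    ]
    print(select(gathered))  # keeps the two on-topic items
```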


Researchers at biotech companies like UNITY Biotechnology and Altos Labs employ AI-driven drug discovery, senolytic compounds, and CRISPR-based gene editing to address telomere attrition, cellular senescence, and genetic aging pathways. This integrated approach seeks to develop personalized longevity treatments that extend healthspan and mitigate age-related diseases.

Key points

  • Telomere-targeting strategies aim to activate telomerase to replenish chromosomal end caps and prolong cellular division capacity.
  • Senolytic compounds selectively induce apoptosis in senescent “zombie” cells, reducing systemic inflammation and tissue dysfunction in preclinical models.
  • CRISPR-Cas9 gene editing modifies aging-related loci to investigate gene functions in cellular senescence and DNA repair pathways.
  • AI-driven drug discovery platforms analyze large genomic and pharmacological datasets to identify novel compounds targeting aging mechanisms.
  • Integration of personalized omics profiles guides tailored interventions, optimizing therapeutic efficacy and minimizing adverse effects.

Why it matters: This synthesis of AI, gene editing, and senescence-targeting therapeutics marks a paradigm shift in longevity science by concurrently addressing multiple aging hallmarks. By combining data-driven drug design with precise molecular interventions, these strategies hold promise for safer, more effective healthspan extension compared to single-target approaches.
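
To make the AI-driven screening step tangible, here is a minimal sketch of compound ranking with a random forest; the fingerprints, labels, and assay are synthetic illustrations, not any company's pipeline.

```python
# Sketch of compound ranking for a senolytic screen: a random forest
# learns from hypothetical binary molecular fingerprints labeled
# active/inactive, then scores unseen candidates. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 64))   # 200 known compounds, 64-bit fingerprints
y_train = rng.integers(0, 2, size=200)         # 1 = active in a senescence assay
candidates = rng.integers(0, 2, size=(5, 64))  # 5 untested compounds

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

scores = model.predict_proba(candidates)[:, 1]  # predicted probability of activity
for i in np.argsort(scores)[::-1]:
    print(f"candidate {i}: predicted activity {scores[i]:.2f}")
```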

Q&A

  • What are telomeres and why extend them?
  • How do senolytic therapies work?
  • In what ways does CRISPR contribute to aging research?
  • What role does AI play in longevity drug discovery?

Scientists from City of Hope and UCLA identify a novel age-specific adipocyte progenitor cell population (CP-As) that proliferates and differentiates into fat cells in middle-aged mice. Using single-cell RNA sequencing and in vivo lineage tracing, they pinpoint the LIFR signaling pathway as critical for CP-A mediated adipogenesis. Inhibiting LIFR signaling prevents visceral fat expansion, suggesting a promising strategy to mitigate age-related obesity and metabolic dysfunction.

Key points

  • Discovery of CP-As: age-specific committed preadipocytes emerge in middle-aged adipose tissue.
  • LIFR signaling: critical driver of CP-A proliferation and differentiation into new adipocytes.
  • Lineage tracing & 3D APC transplants confirm autonomous fat-generating capacity of aged APCs.
  • Single-cell RNA sequencing delineates gene expression profiles distinguishing CP-As from other APCs.
  • Pharmacological LIFR inhibition prevents visceral fat expansion without affecting young APC adipogenesis.

Why it matters: By uncovering CP-As and their LIFR-driven adipogenesis, this work shifts the paradigm of age-related fat expansion, highlighting adipogenesis rather than hypertrophy as a major contributor. Targeting LIFR offers a precise therapeutic avenue to curb visceral obesity and metabolic disorders in middle-aged populations.
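
As a rough sketch of the single-cell workflow behind findings like these, the toy example below clusters a synthetic expression matrix and ranks marker genes; the data and gene list are invented, with Lifr standing in for the marker of interest.

```python
# Toy single-cell workflow: log-normalize a counts matrix, cluster
# cells, and rank genes enriched in one cluster, mimicking how a
# population like CP-As might be distinguished. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
genes = ["Lifr", "Pparg", "Dpp4", "Cd9", "Actb"]
counts = rng.poisson(2.0, size=(300, len(genes))).astype(float)
counts[:100, 0] += rng.poisson(6.0, size=100)  # one subpopulation overexpresses Lifr

logged = np.log1p(counts)                      # standard log1p normalization
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(logged)

# Simple marker test: mean log-expression difference between clusters
diff = logged[labels == 0].mean(axis=0) - logged[labels == 1].mean(axis=0)
for g, d in sorted(zip(genes, diff), key=lambda t: -abs(t[1])):
    print(f"{g}: mean log-expression difference {d:+.2f}")
```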

Q&A

  • What are adipocyte progenitor cells (APCs)?
  • How does the LIFR signaling pathway promote fat cell formation?
  • What distinguishes CP-As from other adipocyte progenitors?
  • Why focus on visceral fat in aging research?
Why Belly Fat Expands With Age, and How to Target It - Neuroscience News

Dr Stephanie Venn-Watson, a researcher with the US Navy Marine Mammal Program and founder of the company behind Fatty15, identifies C15:0, an odd-chain saturated fatty acid abundant in dolphin diets, as an essential longevity nutrient. Through controlled dolphin serum analyses and dietary trials, she and colleagues demonstrate C15:0’s benefits for liver function, cholesterol reduction, and mitochondrial repair. These findings underpin Fatty15, a supplement backed by peer-reviewed research and engineered to deliver bioavailable C15:0 for human metabolic health.

Key points

  • Identification of pentadecanoic acid (C15:0) as an essential longevity nutrient through dolphin serum metabolomics.
  • Correlation of dietary C15:0 intake with improved metabolic markers and reduced liver disease in bottlenose dolphins.
  • Dolphin dietary trials demonstrate C15:0’s effects on lowering cholesterol, reducing inflammation, and repairing mitochondria.
  • Development of vegan C15:0 supplement Fatty15, validated by over 100 peer-reviewed studies for bioavailability and safety.
  • Proposal to integrate C15:0 into fortified foods, beverages, and infant formulas for broader metabolic health applications.

Why it matters: This discovery challenges prevailing notions that saturated fats are uniformly detrimental by highlighting the therapeutic potential of C15:0, a previously overlooked odd-chain fatty acid. Demonstrating efficacy in a long-lived mammalian model bridges the gap between rodent studies and human application, paving the way for targeted metabolic interventions and evidence-based longevity supplements.

Q&A

  • What is C15:0?
  • Why use dolphins for this research?
  • How does Fatty15 differ from other supplements?
  • Can I get enough C15:0 from diet alone?
The longevity nutrient: how dolphins helped scientists discover a secret ingredient to help us live longer

Researchers review supervised methods such as KNN and logistic regression for heart disease, diabetes, and sepsis prediction; unsupervised clustering and PCA for ECG anomaly detection and chronic kidney disease reference intervals; and reinforcement learning frameworks for personalized treatment ranking, demonstrating how AI can enhance diagnostic accuracy and decision support in primary care.

Key points

  • Supervised models including KNN, logistic regression and decision trees achieve up to 89% accuracy in heart disease and sepsis prediction.
  • Autoencoder and clustering-based unsupervised learning identify ECG anomalies with >99% precision and recall.
  • Gaussian mixture models estimate chronic kidney disease reference intervals at 98% and 75% confidence levels.
  • Deep reinforcement learning framework PPORank personalizes treatment recommendations via continuous sequential optimization.
  • Recommended algorithms for primary care include random forests, SVMs and KNN for mixed-data diagnostic tasks.

Why it matters: Integrating these machine learning methods into primary care workflows promises to reduce diagnostic errors and enable earlier disease detection, shifting the paradigm towards proactive patient management. The comparative synthesis of AI algorithms offers clinicians actionable insights and a roadmap for deploying scalable decision-support tools.
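
A minimal sketch of the supervised setup the review covers, using synthetic tabular data in place of clinical records:

```python
# Train KNN and logistic regression on tabular patient-like features
# and compare held-out accuracy. Features and labels are synthetic
# stand-ins for clinical data, not figures from the review.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("LogReg", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    print(f"{name} accuracy: {model.score(X_te, y_te):.2f}")
```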

Q&A

  • What is supervised learning in healthcare?
  • How do unsupervised methods detect ECG anomalies?
  • What data do these ML models need?
  • How does reinforcement learning recommend treatments?
Machine Learning Applications in Healthcare and Diagnosis Prediction in Medical Practice...

Intellitron explains that quantum computers employ qubits in superposition to dramatically accelerate machine learning algorithms, strengthen data security via quantum key distribution, tackle previously intractable problems, and reduce energy consumption compared to classical systems.

Key points

  • Qubits leverage superposition to process multiple states concurrently, accelerating AI computations.
  • Quantum Key Distribution (QKD) secures AI data with physics-based encryption.
  • Quantum processors execute machine learning algorithms faster than classical hardware.
  • Quantum coherence reduces energy consumption per computation compared to traditional systems.
  • Quantum AI integration enables high-dimensional optimization and complex simulations beyond classical reach.

Why it matters: This convergence of quantum computing and AI offers orders-of-magnitude improvements in processing speed, security, and sustainability, paving the way for tackling previously unsolvable problems in pharmaceuticals, climate modeling, and beyond.
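
Superposition is easy to see in a state-vector simulation; the minimal sketch below applies a Hadamard gate in plain NumPy and shows how n qubits span 2^n amplitudes:

```python
# Minimal state-vector illustration of superposition: a Hadamard gate
# puts one qubit into an equal mix of |0> and |1>, so a single register
# carries amplitude over both basis states at once.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
zero = np.array([1.0, 0.0])                    # |0>

state = H @ zero
probs = np.abs(state) ** 2
print(f"P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")  # 0.50 each

# n qubits span 2**n amplitudes; this exponential state space is the
# root of the speedups described above.
n = 10
register = np.zeros(2**n); register[0] = 1.0
for q in range(n):                             # apply H to each qubit
    register = register.reshape([2] * n)
    register = np.tensordot(H, register, axes=([1], [q]))
    register = np.moveaxis(register, 0, q).reshape(-1)
print(f"{n} qubits -> {register.size} equal amplitudes")
```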

Q&A

  • What is a qubit?
  • How does superposition speed AI?
  • What is Quantum Key Distribution?
  • How are complex problems solved with quantum AI?

MIT’s Center for Bits and Atoms, under Neil Gershenfeld, develops morphogenesis-inspired software-to-hardware interfaces that program self-reproducing assemblers. By treating developmental programs (morphogenes) as abstract design instructions and digitizing materials into 20 elemental blocks, they merge computation with geometry to democratize advanced manufacturing worldwide.

Key points

  • Morphogenes adopt biological developmental codes to represent design functions abstractly.
  • Assemblers use 20 digitized material types to hierarchically build and replicate hardware.
  • Interior-point relaxation algorithms harness analog degrees of freedom for discrete assembly tasks.
  • Overlaying computation and geometry ensures synchronization without traditional thread management.
  • Digital fabrication scales in a Moore’s Law–like curve, enabling mass deployment of personal fab labs.

Why it matters: Merging computation, communication, and fabrication into self-replicating assemblers could redefine manufacturing by granting individuals unprecedented design and production autonomy. This paradigm shift parallels Moore’s Law in physical fabrication, promising supply-chain simplification, rapid prototyping, and new scalable AI-driven material systems.
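
The interior-point relaxation mentioned in the key points can be illustrated with a toy assignment problem: relax a discrete block-type choice to continuous weights, solve the linear program, and round. The costs and sizes below are arbitrary, not the lab's actual formulation.

```python
# Toy relaxation: choosing one of 3 block types per slot is a discrete
# problem; relax the 0/1 choice variables to [0, 1], solve with an
# interior-point LP method, then round back to a discrete choice.
import numpy as np
from scipy.optimize import linprog

n_slots, n_types = 4, 3
cost = np.array([[1.0, 2.0, 3.0],
                 [2.5, 0.5, 2.0],
                 [3.0, 1.5, 1.0],
                 [0.8, 2.2, 1.7]])   # arbitrary placement costs

# One constraint per slot: its type weights must sum to 1.
A_eq = np.zeros((n_slots, n_slots * n_types))
for s in range(n_slots):
    A_eq[s, s * n_types:(s + 1) * n_types] = 1.0
b_eq = np.ones(n_slots)

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs-ipm")
choice = res.x.reshape(n_slots, n_types).argmax(axis=1)  # round to discrete
print("block type per slot:", choice)
```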

Q&A

  • What are morphogenes?
  • How do self-reproducing assemblers work?
  • What advantage does merging computation and fabrication offer?
  • How is this different from current 3D printing?
  • What challenges remain for practical implementation?

Neuralink’s research team has developed an AI-driven robotic platform that performs intricate neurosurgical procedures, notably brain-computer electrode insertion, with superior precision and reduced operating times. By integrating real-time analytics and robotic actuators, the system minimizes human error and enhances patient outcomes.

Key points

  • AI-driven algorithms guide robotic arms for submicron electrode placement
  • Micron-level positioning uses real-time kinematic feedback to ensure precision
  • Real-time analytics adjust trajectories and minimize human variability
  • Demonstrated 5× faster insertion times and 30% lower error rates versus manual
  • Designed specifically for neurosurgical BCI electrode implantations

Why it matters: This advancement heralds a new era in surgical robotics, promising lower complication rates and broader access to high-precision procedures. By automating critical tasks, it could reduce surgeon fatigue and enable more consistent outcomes across diverse clinical settings.
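
The article gives no implementation details, but closed-loop positioning of this kind is conventionally built on feedback control; here is a generic PID sketch with invented gains and a simplistic actuator model, not Neuralink's system:

```python
# Generic closed-loop positioning sketch: a PID controller drives an
# actuator toward a target insertion depth using simulated position
# feedback. Gains and dynamics are illustrative placeholders.

def pid_step(error, integral, prev_error, dt, kp=2.0, ki=0.5, kd=0.1):
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

target_um = 50.0      # desired insertion depth in micrometers
position = 0.0
integral, prev_err = 0.0, target_um
dt = 0.01

for step in range(500):
    err = target_um - position
    command, integral = pid_step(err, integral, prev_err, dt)
    prev_err = err
    position += command * dt          # simplistic actuator model
print(f"final position: {position:.2f} um (target {target_um} um)")
```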

Q&A

  • What is a brain-computer interface?
  • How do surgical robots achieve submicron precision?
  • What safety measures are in place for robotic surgeries?
  • How does AI improve robotic surgery planning?
Robots Set to Outperform Top Surgeons in Just 5 Years!

Google's research team develops Claybrook, an AI-driven model for frontend web development focused on UI/UX coding. Leveraging advanced reinforcement learning techniques with well-defined reward functions, Claybrook iteratively refines interface designs and code quality. This approach enables creative solutions and subjective evaluation, pushing beyond simple code generation to address complex design challenges in modern web applications.

Key points

  • Claybrook uses reinforcement learning tailored to frontend UI/UX tasks.
  • It optimizes designs via well-defined reward functions guiding iterative improvements.
  • Model generates high-quality code snippets and interface layouts.
  • It addresses extended reasoning challenges by refining output through feedback loops.
  • Developed by Google, focusing on creative and subjective aspects of design.

Why it matters: By integrating reinforcement learning into frontend design, Claybrook represents a shift from static code generation to dynamic, user-centric interface optimization. This capability can streamline development workflows, reduce manual iteration, and empower designers with AI-driven insights, potentially accelerating web innovation and increasing user engagement across applications.
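
Nothing public specifies Claybrook's actual reward, but the reward-shaping idea can be sketched with invented proxy checks on generated markup:

```python
# Toy reward function for UI code generation: combine invented proxy
# checks (balanced tags, alt text, aria hints, inline-style penalty)
# into a scalar reward of the kind an RL loop could optimize.
import re

def ui_reward(html: str) -> float:
    score = 0.0
    opens = len(re.findall(r"<(?!/)(?!img)\w+", html))   # opening tags (img is void)
    closes = len(re.findall(r"</\w+>", html))
    score += 1.0 if opens == closes else -1.0            # balanced markup
    score += 0.5 * html.count('alt="')                   # accessibility hint
    score += 0.5 if "aria-" in html else 0.0
    score -= 0.1 * html.count("style=")                  # discourage inline styles
    return score

candidates = [
    '<div><img src="a.png"></div>',
    '<div><img src="a.png" alt="chart" aria-label="chart"></div>',
]
best = max(candidates, key=ui_reward)
print("preferred snippet:", best)   # picks the more accessible markup
```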

Q&A

  • What is reinforcement learning in UI/UX design?
  • How does Claybrook measure design quality?
  • What are long-chain reasoning challenges for AI models?
  • How does Claybrook differ from traditional code-generation tools?
Google Claybrook AI Model Great for UI / UX Coding and Web Development

Researchers at Neuralink have developed a minimally invasive brain–computer interface implant that interprets neural signals via high-density electrodes. This chip communicates wirelessly with external devices to augment cognitive functions, address potential AI threats, and redefine human–machine symbiosis.

Key points

  • Neuralink's implant comprises high-density electrode arrays that record and stimulate neuronal activity.
  • The BCI communicates wirelessly with external devices, enabling real-time bidirectional neural data exchange.
  • Cybernetic enhancements extend beyond implants to include prosthetic limbs and exoskeletons for strength augmentation.
  • Digital identities on social media illustrate everyday human–machine fusion and evolving self-perception.
  • Feminist cyborg theory, as proposed by Donna Haraway, challenges traditional identity boundaries and promotes affinity-based coalitions.
  • Military and medical applications leverage neuroprosthetics and exoskeletons to restore functions and enhance soldier capabilities.

Why it matters: Human–machine fusion signals a paradigm shift in longevity and cognitive enhancement, offering unprecedented therapeutic and adaptive potential. By transcending biological limits, cyborg technologies could revolutionize disease intervention, social dynamics, and our fundamental concept of self.
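
The decoding step of a BCI is often framed as regression from spike counts to intent; the sketch below uses synthetic data and a linear decoder as a stand-in for whatever Neuralink actually runs:

```python
# Sketch of BCI decoding: map binned spike counts from an electrode
# array to 2-D cursor velocity with linear regression. The synthetic
# data stands in for real neural recordings.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_samples, n_channels = 1000, 64
spikes = rng.poisson(3.0, size=(n_samples, n_channels))
W_true = rng.normal(size=(n_channels, 2))          # hidden tuning weights
velocity = spikes @ W_true + rng.normal(scale=2.0, size=(n_samples, 2))

decoder = LinearRegression().fit(spikes[:800], velocity[:800])
print(f"held-out R^2: {decoder.score(spikes[800:], velocity[800:]):.2f}")
```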

Q&A

  • What defines a cyborg?
  • How does Neuralink’s brain chip work?
  • What ethical issues surround cyborg technology?
  • Can digital identity augment human capabilities?
Will Humans Become Cyborgs in the Future? What Exactly Is a Cyborg, and Why Could It Be a Threat? | English Newstrack

Defense researchers apply Graph Neural Networks to represent battlefield assets as graph nodes and edges, using message-passing algorithms to learn network dynamics and predict vulnerabilities, enhancing real-time operational decision support under contested conditions.

Key points

  • Graph representation of battlefield assets: nodes for units and edges for communication links with weighted features.
  • Message-passing GNN layers aggregate neighbor information to learn high-order relational patterns.
  • Temporal GNN architectures capture dynamic network evolution for forecasting connectivity changes.
  • Critical node identification and vulnerability scoring guide network hardening strategies.
  • Anomaly and failure prediction improve resilience against cyberattacks and communications disruptions.

Why it matters: GNNs shift battlefield analysis from static, rule-based approaches to data-driven insights that adapt to dynamic operational conditions. Their ability to learn complex relational patterns enhances network resilience and decision-making speed, offering a substantial edge in modern, information-centric warfare.
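
A single message-passing layer is compact enough to write out; the sketch below aggregates neighbor features over a toy four-node graph with random weights:

```python
# One message-passing layer over a toy battlefield graph: each node
# (unit) averages its neighbors' feature vectors and mixes them with
# its own through weight matrices. Graph and weights are synthetic.
import numpy as np

adj = np.array([[0, 1, 1, 0],      # 4 units; edges = comm links
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.random.default_rng(3).normal(size=(4, 8))   # per-unit features

def message_pass(A, H, W_self, W_nbr):
    deg = A.sum(axis=1, keepdims=True)
    nbr_mean = (A @ H) / np.maximum(deg, 1.0)           # aggregate neighbors
    return np.maximum(H @ W_self + nbr_mean @ W_nbr, 0) # ReLU update

rng = np.random.default_rng(4)
W_self, W_nbr = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
H1 = message_pass(adj, feats, W_self, W_nbr)
print("updated node embeddings:", H1.shape)   # (4, 8)
```

Stacking such layers lets information propagate across multi-hop paths, which is what lets the model score a node's importance to overall connectivity.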

Q&A

  • What makes GNNs suitable for battlefield networks?
  • How does message passing work in GNNs?
  • What are temporal graphs and why are they needed?
  • How do GNNs detect network vulnerabilities?
Revolutionizing Battlefield Analysis: How Graph Neural Networks Offer Unprecedented Insights

Kolmogorov complexity, developed by Andrey Kolmogorov and advanced by algorithmic information theorists, measures data simplicity by the minimal program length that can recreate a dataset, guiding AI systems to optimize compression and pattern recognition.

Key points

  • Defines data complexity as the minimal program length to reproduce a string.
  • Applies Occam’s razor via compression-based model selection to prevent ML overfitting.
  • Guides autoencoder architectures to strip redundancies and enhance pattern extraction.
  • Establishes theoretical bounds for file compression formats like ZIP and JPEG.
  • Provides randomness metrics for cryptographic key evaluation and security.
  • Informs optimized coding schemes for efficient data transmission.

Why it matters: Kolmogorov complexity provides a unifying framework linking data compression, pattern recognition, and randomness evaluation, guiding AI and ML toward more efficient and interpretable models. Its application fosters advances in secure communications, algorithm design, and scalable data processing, shaping the future of intelligent systems.
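
Exact Kolmogorov complexity is uncomputable, but compressed size gives a practical upper bound, which is how systems typically approximate it; a minimal demonstration:

```python
# Compressed size as a proxy for Kolmogorov complexity: highly
# structured data has a short description, random data does not.
import os
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500            # a short program recreates this
random_bytes = os.urandom(1000)     # incompressible with high probability

for name, data in [("structured", structured), ("random", random_bytes)]:
    print(f"{name}: {len(data)} bytes -> {compressed_size(data)} compressed")
```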

Q&A

  • What defines Kolmogorov complexity?
  • How does Kolmogorov complexity differ from Shannon entropy?
  • Why is exact complexity undecidable?
  • How do AI systems approximate Kolmogorov complexity?
The Hidden Order of Information: Unlocking the Secrets of Kolmogorov Complexity

Market research from AltIndex.com and Statista predicts a 440% surge in the machine learning market to $568 billion by 2031. This forecast reflects unprecedented venture-capital inflows—$54.8 billion raised in Q1 2025—and accelerated deployment in finance, healthcare, and other sectors, cementing ML’s status as AI’s fastest-growing segment.

Key points

  • Machine learning market projected to hit $568 billion by 2031, marking 440% growth.
  • Q1 2025 venture-capital funding for ML reaches record $54.8 billion.
  • ML’s growth rate outpaces overall AI industry by 40% (440% vs. 331%).
  • U.S. ML market expected to grow 446% to $167 billion; China 444% to $117 billion.

Why it matters: These insights reveal a pivotal shift in AI investment toward machine learning as the core growth engine. With ML poised to capture over half of the total AI market by 2031, stakeholders can allocate resources to the most scalable technologies, drive innovation in predictive solutions, and outpace legacy AI applications.
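
The headline figures can be sanity-checked arithmetically; the snippet below backs out the implied baseline and compound annual growth rate, assuming a six-year forecast window (the sources may define the period differently):

```python
# Back out the implied baseline and CAGR from the article's figures:
# "440% growth to $568B by 2031". The six-year window (2025-2031) is
# an assumption, not stated by the sources.
final_bn = 568.0
growth_pct = 440.0
years = 6

base_bn = final_bn / (1 + growth_pct / 100)        # ~$105B today
cagr = (final_bn / base_bn) ** (1 / years) - 1

print(f"implied baseline: ${base_bn:.0f}B")
print(f"implied CAGR over {years} years: {cagr:.1%}")  # ~32% per year
```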

Q&A

  • What drives the machine learning market’s rapid growth?
  • How are these market projections calculated?
  • Why did VC funding spike to $54.8 billion in one quarter?
  • What explains the U.S. and China ML market race?
Machine Learning projected to grow 40% faster than AI industry average by 2031