Engineers at the University of California, Davis have developed a neuroprosthesis that combines intracortical microelectrode arrays with AI-based decoding to translate speech-related brain activity into intelligible, expressive voice output in real time, offering a new communication channel for patients with severe motor impairments.
Key points
- Four intracortical microelectrode arrays implanted in speech-related cortical areas record the neural correlates of intended speech across 256 channels.
- An AI-driven decoder translates neural activity into syllables with under one second of latency and roughly 60% word accuracy.
- Closed-loop synthesis reproduces patient-specific vocal characteristics, yielding natural, expressive speech rather than a generic synthetic voice.
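The frame-by-frame decoding described above can be sketched in a few lines. This is a toy illustration only: the class and function names below (`NeuralFrame`, `toy_decoder`, `stream_decode`) are hypothetical, the threshold decoder is a stand-in for the trained neural-network models the UC Davis system actually uses, and the token output stands in for synthesized audio. What it does show is the structural idea behind the sub-second latency: decoding short windows as they arrive instead of buffering whole sentences.

```python
# Hypothetical sketch of a streaming brain-to-voice loop.
# All names are illustrative; the real system uses trained
# deep-learning decoders and a personalized voice synthesizer.
from dataclasses import dataclass
from typing import Iterable, List

FRAME_MS = 20  # decode short windows so output lags input by well under 1 s


@dataclass
class NeuralFrame:
    """One 20 ms window of multi-channel firing-rate features."""
    t_ms: int
    features: List[float]  # one value per recording channel


def toy_decoder(frame: NeuralFrame) -> str:
    """Placeholder decoder: maps mean channel activity to a syllable token.
    A real system would run a trained sequence model here."""
    mean = sum(frame.features) / len(frame.features)
    return "ba" if mean > 0.5 else "-"  # "-" marks silence


def stream_decode(frames: Iterable[NeuralFrame]) -> List[str]:
    """Emit one token per frame as it arrives (frame-by-frame, no
    sentence-level buffering), mirroring the closed-loop design."""
    return [toy_decoder(f) for f in frames]


if __name__ == "__main__":
    # Simulated input: channels alternate between low and high activity.
    frames = [
        NeuralFrame(t_ms=i * FRAME_MS, features=[0.9 if i % 2 else 0.1] * 4)
        for i in range(4)
    ]
    print(stream_decode(frames))  # one token per 20 ms frame
```

Because each frame is decoded independently of the ones after it, latency is bounded by the frame length plus decoder compute time, which is the property that distinguishes this approach from earlier systems that waited for a complete utterance.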
Why it matters: This technology marks a paradigm shift in neuroprosthetics by enabling real-time, patient-specific speech synthesis, moving beyond the robotic-sounding voices of earlier brain-computer interfaces.
Q&A
- What is a brain-computer interface?
- How do implanted microelectrode arrays capture speech-related brain signals?
- What role does artificial intelligence play in the voice-synthesis neuroprosthesis?
- Can the system learn new words and adapt over time?