Among the many branches of systems neuroscience, the study of the human brain is one of the most primitive. We are interested in the basic organization of one of the most uniquely human faculties: speech and language. We believe that the organization of speech in the human brain shares neural mechanisms with other actions, such as reaching and grasping. Exploring these connections offers opportunities to develop new insights and understanding.
The neural circuits that perform transformations between hearing sound and producing speech are thought to reside in the dominant, typically left, hemisphere of the brain. We collaborate with Orrin Devinsky, Thomas Thesen and others at NYUMC’s Comprehensive Epilepsy Center to study how the human brain listens and speaks.
To study speech, we have developed tasks inspired by work on looking and reaching (see Sidebar). We find that neural responses at particular sites in the human brain are driven by listening and speaking. Interestingly, some sites respond to both listening and speaking, which we call sensory-motor responses. These responses predict not only what subjects are listening to but also what they will say in the future. Surprisingly, we find that the speech transforms are not confined to the dominant hemisphere: they are bilateral, present equally in both hemispheres of the brain.
Bilateral speech transformations suggest that there is an interesting distinction between speech and language: although speech transformations are bilateral, the computational system for language is lateralized. We propose that the brain systems for speech may access language through a unified sensory–motor speech interface. Our ongoing efforts aim to understand how speech and language interact through communication between groups of neurons in different regions of the brain.
Project Funded By:
National Institutes of Health