The neuroscience of understanding

Laura Gwilliams is unlocking how the brain turns sound into meaning.

By Nicholas Weiler

Laura Gwilliams. Photo: Jess Alvarenga

Humans communicate like fish swim—effortlessly, powerfully, and for our very survival. Unlike any other creature, we turn our life stories into shared wisdom, our discoveries into shared knowledge, and our ideas into shared dreams—the very foundation of human society.

Our understanding of this capability—and our ability to help those who lack it—is poised for a major leap forward, thanks to decades of advances across disparate fields like neuroscience, linguistics, deep learning, and AI. Standing at the nexus of those advances is Stanford’s Laura Gwilliams, a faculty scholar with Stanford Data Science and the Wu Tsai Neurosciences Institute, and an assistant professor in the Department of Psychology in the School of Humanities and Sciences.

In the course of her scientific training, Gwilliams has explored the linguistic theory of how language is constructed; applied cutting-edge techniques for brain recording that capture human neural activity from single cells to whole-brain circuits; and developed computational algorithms for decoding the brain’s own subtle electrical language, including the adaptation of the latest large language AI models.

In her Stanford lab, launched in 2023, Gwilliams is not only bringing all these tools to bear on the science of how we understand one another, but also bringing together researchers from across Stanford schools and departments to join her. Her approach sheds critical light on the neural algorithms that support language processing, with direct consequences for helping individuals with epilepsy, aphasia, and minimally verbal autism.

Only now, Gwilliams believes, can we begin to address some of the fundamental questions about language and cognition that are at the center of human life. 

We recently sat down with Gwilliams to discuss the research program she’s building, and the promise of understanding how we understand one another.

Laura Gwilliams discusses intracranial neural recordings—evidence of speech comprehension—with her Laboratory of Speech Neuroscience students. Photo: Jess Alvarenga

What is it about our ability to understand one another through language that you find so compelling as a scientist?

Language is the closest that we come to telepathy between people. The ability for me to have an idea and convey it to you through language is completely amazing, and yet feels so trivially easy. Part of my fascination with this topic is trying to understand what is unique in the human brain that allows us to have this cognitive ability.

How might that work impact people’s lives?

Understanding how the brain processes language is critical for developing targeted therapies to improve language impairments, and to facilitate communication.

It turns out that difficulties in language processing are one of the most common symptoms across all neurological disorders, even if that disorder is not a language disorder—Parkinson’s, epilepsy, Alzheimer’s, and autism, for example. In these disorders, the inability to talk or to understand language is often reported as one of the most challenging symptoms—more so than seizures or movement problems.

“This is the Holy Grail of language processing: How are we able to extract meaning from a string of sounds in order to create these very complex and rich conceptual experiences?”
Laura Gwilliams

Laura Gwilliams. Photo: Jess Alvarenga

Can you share an example of the impact this work can have on patients with language impairment?

Epilepsy can have fundamental effects on kids’ ability to understand and produce language. But there’s a lot of variability from person to person that we don’t entirely understand. We’re trying to use what we know about the neurotypical adult brain to work out what’s going on in the brains of children with epilepsy who have better or worse language outcomes. We want to see how we might intervene through brain stimulation [using noninvasive tools like transcranial magnetic stimulation] in ways that might ultimately improve a child’s language outcomes.

In another project, I am working with minimally verbal autistic individuals. In this population, it can be hard to determine how much language someone is understanding, because they may not outwardly express what they understand. So we are applying the computational methods I have developed to “listen in” on brain activity and read out the properties of language their brains are processing—such as the meanings of words and the rules that connect them. If we can decode this information from their neural activity, it suggests they are indeed understanding what they hear.

In this case—and with other neurological disorders—by using my methods as “biomarkers” of successful understanding, we can determine what people are understanding, and why they may not be understanding, which could pave the way to personalized clinical intervention.
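
To make the idea concrete, here is a minimal sketch of that kind of time-resolved decoding: a classifier is trained at each time point of epoched neural recordings to read out a binary word property. The data, array shapes, and the noun-versus-verb label below are simulated placeholders, not the lab’s actual pipeline.

```python
# Minimal sketch of time-resolved decoding of a linguistic feature from
# epoched neural recordings. All data here are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n_trials, n_channels, n_times = 200, 64, 120      # hypothetical epochs
rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                  # e.g. noun vs. verb label

# One classifier per time point: the resulting score curve shows *when*
# the feature becomes decodable from brain activity.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.empty(n_times)
for t in range(n_times):
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5,
                                scoring="roc_auc").mean()

print(f"peak decoding AUC {scores.max():.2f} at time index {scores.argmax()}")
```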

Tell us about this unique new brain imaging instrument—the optically pumped magnetometer (OPM)—that you and Anthony Norcia have helped bring to the Koret Human Neurosciences Community Laboratory at Wu Tsai Neuro. There are only a few on the planet—what makes it special?

I’m so excited for the system to be installed, and not just for my lab. It’s going to be used broadly by many labs across campus to produce some great science. 

I think it helps to compare the OPM to fMRI imaging, which has been the dominant technology in language neuroscience and has led to many valuable discoveries. fMRI maps the strength of neural activity during language processing, but it doesn’t capture how that neural activity changes over time. You can say that speech information is represented over here and not over there, but you can’t say too much about the computation that gave rise to it.

To figure out the actual underlying algorithm of speech comprehension, we need to look at how activity changes over time. The OPM—which lets us study language processing at high resolution in both space and time—is going to give a strong and much-needed push toward understanding how the brain “computes” during language processing, rather than just what information is present in different brain regions.

(Top photo) Gwilliams and Irmak Ergin, PhD ’29, demonstrate how they measure electrical activity in the brain. Photos: Jess Alvarenga

So the OPM will enable an unprecedented combination: noninvasive brain recording with high resolution in both time and space. What questions have you been dying to ask with it?

As I mentioned, the OPM allows us to answer questions at the level of computation. Language comprises structure at different levels: there is structure in the sounds and the order they occur in to create words, and structure in the words and the order they occur in to create phrases. With the OPM, we are going to track the brain as it builds structure at these different temporal scales and different levels of complexity and abstraction. This is important because it will tell us how the brain ultimately transforms sound into meaning—how it “understands” in real time, and flexibly moves between different formats of information.

In addition, we are going to test whether the computations at play when people are listening are the same as when they are reading. In the future I would also love to work with individuals who know American Sign Language (ASL) to assess whether visual processing of language, through reading or through ASL, follows shared computational pathways. This speaks to the universality of language processing: which neural algorithms are recycled across domains, and which, if any, are specialized to a specific domain.
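
One common way to ask which level of structure a neural signal reflects is an encoding-model comparison: predict the recording from feature sets at each level and see which predicts best. The sketch below illustrates only the logic; the feature matrices and the single simulated sensor are random stand-ins, not the lab’s actual predictors.

```python
# Illustrative encoding-model comparison: how well do features at different
# linguistic levels (sound, word, phrase) predict neural activity over time?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples = 5000                                  # time points of a recording
neural = rng.standard_normal(n_samples)           # one simulated sensor

features = {
    "acoustic": rng.standard_normal((n_samples, 20)),  # e.g. spectrogram bands
    "word":     rng.standard_normal((n_samples, 10)),  # e.g. word-level predictors
    "phrase":   rng.standard_normal((n_samples, 5)),   # e.g. phrase-level predictors
}

# Cross-validated prediction score per feature set; with real data, the level
# that predicts best hints at what the recorded region is computing.
for name, X in features.items():
    r2 = cross_val_score(Ridge(alpha=1.0), X, neural, cv=5, scoring="r2").mean()
    print(f"{name:8s} cross-validated R^2 = {r2:+.3f}")
```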

How do AI tools like large language models help you study how language works in the human brain?

Because of models like ChatGPT, there now exists for the first time a model system for language that we can probe experimentally. We have access to every [artificial computational] neuron in the AI system’s “brain,” and we can selectively turn neurons off and on again to observe the effects on that system’s behavior. This allows us to causally interfere with the computational process, to understand which processes are necessary for successful understanding and which are merely byproducts of it. We can use these insights to develop hypotheses of computational necessity in the biological system, which can then be tested with methods such as direct electrical stimulation.
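
As an illustration of that kind of “lesion” experiment in an artificial system, the sketch below silences a handful of units in one layer of GPT-2 (via the Hugging Face transformers library) and compares the model’s next-word prediction before and after. The layer and unit indices are arbitrary, chosen only for demonstration, and this is not a description of the lab’s actual experiments.

```python
# Sketch: "turn neurons off" in a small language model and observe behavior.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("The cat sat on the", return_tensors="pt")

def zero_units(units):
    """Forward hook that silences the given hidden units of a layer."""
    def hook(module, inp, out):
        out[..., units] = 0.0
        return out
    return hook

with torch.no_grad():
    baseline = model(**inputs).logits             # intact model

# "Lesion" a few units in the MLP of transformer block 6, then rerun.
handle = model.transformer.h[6].mlp.register_forward_hook(zero_units([0, 1, 2, 3]))
with torch.no_grad():
    ablated = model(**inputs).logits
handle.remove()

# Did the intervention change the model's next-word prediction?
print("intact :", tok.decode(baseline[0, -1].argmax().item()))
print("ablated:", tok.decode(ablated[0, -1].argmax().item()))
```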

[Fellow Wu Tsai Neuro Faculty Scholar] Dan Yamins and I co-advise a student in the psychology department, where we’re trying to better understand how large language models encode information compared to the human brain. We are interested in building more powerful AI language systems that learn language more like humans do: from speech rather than text. And we want to understand how those models process language versus how the human brain processes language.
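
A simple way to compare how a language model and a brain organize the same set of words is representational similarity analysis: build a matrix of pairwise distances between words in each space, then correlate the two matrices. The sketch below uses random placeholders for the model embeddings and the neural patterns; it shows the form of the analysis, not any actual result.

```python
# Illustrative representational similarity analysis (RSA): do a language
# model and the brain "group" the same words together? Data are placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_words = 50
model_embeddings = rng.standard_normal((n_words, 768))   # e.g. LLM hidden states
neural_patterns = rng.standard_normal((n_words, 100))    # e.g. electrode responses

# Representational dissimilarity matrices: pairwise distances between words
# in each space, compared with a rank correlation.
rdm_model = pdist(model_embeddings, metric="correlation")
rdm_neural = pdist(neural_patterns, metric="correlation")
rho, p = spearmanr(rdm_model, rdm_neural)
print(f"model-brain representational similarity: rho = {rho:.3f} (p = {p:.3g})")
```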

Atlas Kazemian, PhD ’30 (seated), shares her research findings. Photo: Jess Alvarenga

You joined Stanford in 2023 as a faculty scholar of two interdisciplinary institutes—Wu Tsai Neuro and Stanford Data Science—and you sit at the intersection of many fields: psychology, neuroscience, data science, computer science and machine learning, linguistics, and medicine. How do these intersections play out in your experience and your research?

My science doesn’t exist within one discipline—it’s only possible at this intersection. So being able to take advantage of all of the tools and expertise across these different disciplines very much enables research that I think will lead to the most impactful results and translations.

Stanford is an amazing place to do this kind of work, because collaborating and just talking to people in different departments is so easy. I have PhD students who are co-advised with people in other departments. I have collaborators in medicine, computer science, bioengineering, linguistics, and data science. We have these collaborative conversations and are writing research proposals together—sitting down and thinking, “Okay, this is the expertise that I bring to the table. This is the expertise that you bring to the table. How can we put our minds together to create something that’s bigger than the sum of its parts?” That wouldn’t be possible unless we were really working as a team.

Can studying language comprehension reveal something about how the meaning of our lived experience is expressed in the brain?

This is the Holy Grail of language processing: How are we able to extract meaning from a string of sounds in order to create these very complex and rich conceptual experiences? 

When I was starting out, it became clear to me that these questions just weren’t tractable with the tools available at the time. Now that we have access to research tools like the OPM and computational tools like large language models, the moment has come to tackle these questions. We are entering a new era of scientific discovery in human neuroscience, and I am proud to be working at the forefront of that discovery.  

##

To learn more, listen to Gwilliams’ recent interview on the podcast “From Our Neurons to Yours.”
