A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that the language network was less active in polyglots listening to their native language than in people who speak only one language listening to theirs.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but was not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.
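To make the design concrete, the eight listening conditions can be laid out as in the following sketch. The condition labels are our shorthand, not the study's, and Python is used purely for illustration.

```python
# Illustrative layout of the eight listening conditions per polyglot
# participant; condition names are our own shorthand, not the study's.
CONDITIONS = {
    "native": "the participant's first language",
    "high_proficiency": "a non-native language spoken very well",
    "medium_proficiency": "a language spoken moderately well",
    "low_proficiency": "a language the participant rates as weak",
    "unknown_related_1": "not spoken, but in the same family as a known language",
    "unknown_related_2": "not spoken, but in the same family as a known language",
    "unknown_unrelated_1": "not spoken and unrelated to any known language",
    "unknown_unrelated_2": "not spoken and unrelated to any known language",
}

for name, description in CONDITIONS.items():
    print(f"{name}: {description}")
```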

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when they listened to languages related to one they could understand than when they listened to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”

The researchers also found that the multiple demand network, a brain network that turns on whenever the brain performs a cognitively demanding task, becomes activated when people listen to languages other than their native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age, began speaking English as their dominant language, and became less proficient in their native language. Comparing these groups could help disentangle the effects of proficiency and age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

At the scale of individual atoms, physics gets weird. Researchers are working to reveal, harness, and control these strange quantum effects using quantum analog simulators — laboratory experiments that involve super-cooling tens to hundreds of atoms and probing them with finely tuned lasers and magnets. Scientists hope that any new understanding gained from quantum simulators will provide blueprints for designing new exotic materials, smarter and more efficient electronics, and practical quantum computers.

But in order to reap the insights from quantum simulators, scientists first have to trust them. That is, they have to be sure that their quantum device has “high fidelity” and accurately reflects quantum behavior. For instance, if a system of atoms is easily influenced by external noise, researchers could assume a quantum effect where there is none. But there has been no reliable way to characterize the fidelity of quantum analog simulators, until now.

In a study appearing today in Nature, physicists from MIT and Caltech report a new quantum phenomenon: They found that there is a certain randomness in the quantum fluctuations of atoms and that this random behavior exhibits a universal, predictable pattern. Behavior that is both random and predictable may sound like a contradiction. But the team confirmed that certain random fluctuations can indeed follow a predictable, statistical pattern.

What’s more, the researchers have used this quantum randomness as a tool to characterize the fidelity of a quantum analog simulator. They showed through theory and experiments that they could determine the accuracy of a quantum simulator by analyzing its random fluctuations.

The team developed a new benchmarking protocol that can be applied to existing quantum analog simulators to gauge their fidelity based on their pattern of quantum fluctuations. The protocol could help to speed the development of new exotic materials and quantum computing systems.

“This work would allow characterizing many existing quantum devices with very high precision,” says study co-author Soonwon Choi, assistant professor of physics at MIT. “It also suggests there are deeper theoretical structures behind the randomness in chaotic quantum systems than we have previously thought about.”

The study’s authors include MIT graduate student Daniel Mark and collaborators at Caltech, the University of Illinois at Urbana-Champaign, Harvard University, and the University of California at Berkeley.

Random evolution

The new study was motivated by an advance in 2019 by Google, where researchers had built a digital quantum computer, dubbed “Sycamore,” that could carry out a specific computation more quickly than a classical computer.

Whereas the computing units in a classical computer are “bits” that exist as either a 0 or a 1, the units in a quantum computer, known as “qubits,” can exist in a superposition of multiple states. When multiple qubits interact, they can in theory run special algorithms that solve difficult problems in far shorter time than any classical computer.
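As a back-of-the-envelope illustration (ours, not part of the study), the number of distinct measurement outcomes grows exponentially with the number of qubits; this is the counting behind the 32 five-qubit configurations discussed below.

```python
from itertools import product

def basis_states(n_qubits: int) -> list[str]:
    """Enumerate every classical bit-string outcome for n qubits."""
    return ["".join(bits) for bits in product("01", repeat=n_qubits)]

# Five qubits can settle into 2**5 = 32 configurations when measured:
# 00000, 00001, and so on.
print(len(basis_states(5)))  # 32

# A 53-qubit device such as Sycamore has 2**53 possible outcomes,
# roughly 9 quadrillion, which is why classical simulation is so hard.
print(2 ** 53)  # 9007199254740992
```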

The Google researchers engineered a system of superconducting loops to behave as 53 qubits, and showed that the “computer” could carry out a specific calculation that would normally be too thorny for even the fastest supercomputer in the world to solve.

Google also happened to show that it could quantify the system’s fidelity. By randomly changing the state of individual qubits and comparing the resulting states of all 53 qubits with what the principles of quantum mechanics predict, the researchers were able to measure the system’s accuracy.

Choi and his colleagues wondered whether they could use a similar, randomized approach to gauge the fidelity of quantum analog simulators. But there was one hurdle they would have to clear: Unlike Google’s digital quantum system, individual atoms and other qubits in analog simulators are incredibly difficult to manipulate, and therefore difficult to control in a random way.

But through some theoretical modeling, Choi realized that the collective effect of individually manipulating qubits in Google’s system could be reproduced in an analog quantum simulator by simply letting the qubits naturally evolve.

“We figured out that we don’t have to engineer this random behavior,” Choi says. “With no fine-tuning, we can just let the natural dynamics of quantum simulators evolve, and the outcome would lead to a similar pattern of randomness due to chaos.”

Building trust

As an extremely simplified example, imagine a system of five qubits. Each qubit can exist simultaneously as a 0 or a 1, until a measurement is made, whereupon the qubits settle into one or the other state. With any one measurement, the qubits can take on one of 32 different combinations: 0-0-0-0-0, 0-0-0-0-1, and so on.

“These 32 configurations will occur with a certain probability distribution, which people believe should be similar to predictions of statistical physics,” Choi explains. “We show they agree on average, but there are deviations and fluctuations that exhibit a universal randomness that we did not know. And that randomness looks the same as if you ran those random operations that Google did.”

The researchers hypothesized that if they could develop a numerical simulation that precisely represents the dynamics and universal random fluctuations of a quantum simulator, they could compare the predicted outcomes with the simulator’s actual outcomes. The closer the two are, the more accurate the quantum simulator must be; a toy version of this comparison is sketched in the code example at the end of this article.

To test this idea, Choi teamed up with experimentalists at Caltech, who engineered a quantum analog simulator comprising 25 atoms. The physicists shone a laser on the experiment to collectively excite the atoms, then let the qubits naturally interact and evolve over time. They measured the state of each qubit over multiple runs, gathering 10,000 measurements in all.

Choi and colleagues also developed a numerical model to represent the experiment’s quantum dynamics, and incorporated an equation that they derived to predict the universal, random fluctuations that should arise. The researchers then compared their experimental measurements with the model’s predicted outcomes and observed a very close match — strong evidence that this particular simulator can be trusted as reflecting pure, quantum mechanical behavior.

More broadly, the results demonstrate a new way to characterize almost any existing quantum analog simulator.

“The ability to characterize quantum devices forms a very basic technical tool to build increasingly larger, more precise and complex quantum systems,” Choi says. “With our tool, people can know whether they are working with a trustable system.”

This research was funded, in part, by the U.S. National Science Foundation, the Defense Advanced Research Projects Agency, the Army Research Office, and the Department of Energy.
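The article above does not spell out the team's actual estimator, so the following is only a minimal sketch of the kind of comparison it describes: convert repeated measurements into an empirical outcome distribution, then score its agreement with the model's predicted distribution. The total variation distance used here is our stand-in metric, not the paper's formula, and all data are toy values.

```python
from collections import Counter

def empirical_distribution(bitstrings: list[str]) -> dict[str, float]:
    """Convert repeated measurements into outcome frequencies."""
    counts = Counter(bitstrings)
    total = len(bitstrings)
    return {outcome: n / total for outcome, n in counts.items()}

def total_variation_distance(p: dict[str, float], q: dict[str, float]) -> float:
    """0.0 means the distributions match exactly; 1.0 means no overlap."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in outcomes)

# Toy stand-in data: 'measured' would hold ~10,000 bit strings read out from
# the 25-atom simulator; 'predicted' would come from the numerical model,
# including the derived universal fluctuations.
measured = ["01101", "01101", "10010", "01101", "10010"]
predicted = {"01101": 0.6, "10010": 0.3, "11111": 0.1}

distance = total_variation_distance(empirical_distribution(measured), predicted)
print(f"distance from model: {distance:.3f}")  # smaller = more trustworthy
```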
