AI is revealing the secrets of the human brain

A computer could read your thoughts if you’re willing to sit completely still for 16 hours inside a large metal tube and allow magnets to bombard your brain while listening to popular podcasts.

Or at least their rough contours. In a recent study, researchers at the University of Texas at Austin trained an AI model to capture the main idea of a small number of sentences much as the people listening to them did, pointing to a near future in which artificial intelligence may help us better understand how humans think.

The podcasts were Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed, and the study examined fMRI images of people listening to, or simply remembering, sentences from those three shows. The content of those sentences was then reconstructed from the brain-imaging data. For example, when a subject heard “I don’t have my driver’s license yet”, the algorithm analyzed the person’s brain scans and returned “He hasn’t even started learning to drive yet”: not an exact replica of the original text, but a close approximation of the idea it conveyed. The program was also able to analyze fMRI data from subjects who watched short films and produce rough summaries of particular scenes, evidence that the AI was not simply extracting words from brain scans but also underlying meanings.

The results, which were published in Nature Neuroscience earlier this month, contribute to a new area of study that inverts the usual relationship between AI and the brain. For many years, researchers have used insights from the human brain to build smart machines: layers of artificial “neurons,” collections of equations that act like nerve cells by passing their results to one another, are the foundation of programs such as ChatGPT, Midjourney, and newer voice-cloning software. Yet even though human cognition has long influenced the design of “smart” computer programs, much about how our brains actually work remains a mystery. In a reversal of that strategy, researchers are now seeking to better understand the mind by studying synthetic neural networks rather than biological ones. MIT cognitive scientist Evelina Fedorenko says that “it’s certainly leading to breakthroughs we couldn’t imagine a few years ago.”
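As a loose illustration of that idea (not the actual architecture of ChatGPT or any other product named here), a layer of artificial “neurons” can be written as a weighted sum followed by a nonlinearity, with each unit passing its result on to the next layer. The sketch below uses made-up weights and inputs purely for illustration.

```python
# A toy stack of artificial "neuron" layers: each unit takes a weighted sum of
# its inputs, applies a nonlinearity, and passes the result to the next layer.
# Weights and inputs are random placeholders, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Weighted sum per unit, then a ReLU nonlinearity.
    return np.maximum(0.0, inputs @ weights + biases)

x = rng.normal(size=4)                            # activations arriving at this layer
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # first layer: 8 units
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)     # second layer: 3 units

hidden = layer(x, w1, b1)        # each of the 8 "neurons" fires (or not)
output = layer(hidden, w2, b2)   # and sends its result on to the next layer
print(output)
```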

The AI program’s supposed ability to read minds has generated controversy on social media and in the press. But that element of the research is “more of a parlor trick,” according to Alexander Huth, a UT Austin neuroscientist and a senior author of the Nature Neuroscience paper. Most brain-scanning techniques produce extremely low-resolution data, and the models used in this study were relatively imprecise and customized to each individual participant. As a result, we’re still a long way from a program that can plug into anyone’s brain and understand what they’re thinking. The real importance of the work lies in predicting which brain regions are activated when hearing or imagining words, which may provide a deeper understanding of the precise ways in which our neurons cooperate to produce language, one of the defining characteristics of humanity.


According to Huth, the achievement of building a program that can accurately recreate the meaning of words serves mainly as “proof of principle that these models really do capture a lot about how the brain processes language.” Before this nascent AI revolution, neuroscientists and linguists relied on verbal descriptions of the brain’s language network that were vague and hard to connect directly to observable brain activity. Theories about the precise linguistic functions that various regions of the brain might be responsible for (perhaps one area handles sound recognition, another syntax, and so on) were difficult or even impossible to verify, let alone the fundamental question of how the brain learns a language. But with AI models in hand, scientists can define those processes more precisely. According to Jerry Tang, the study’s lead author and a computer scientist at UT Austin, the benefits could go beyond academic concerns, for example by helping people with particular disabilities. “Our ultimate goal is to help restore communication to people who have lost the ability to speak,” he told me.

The notion that AI can aid brain research has met significant opposition, particularly among linguistically oriented neuroscientists. That’s because neural networks, which excel at finding statistical patterns, don’t seem to possess fundamental components of how people interpret language, such as an understanding of what words mean. The differences between human and machine cognition are intuitive, too: software like GPT-4 learns by analyzing terabytes of data from books and websites, whereas children learn a language from only a tiny fraction of that many words. Even so, GPT-4 can write decent essays and perform exceptionally well on standardized tests. Neuroscientist Jean-Rémi King told me that when he was doing his research in the late 2000s, “teachers warned us that artificial neural networks are really not the same as biological neural networks. In short, this was a metaphor.” King, who now leads Meta’s brain and AI research, is one of many experts challenging that outdated notion. “We don’t think of this as a metaphor,” he told me. “We see artificial intelligence as a very useful model of how the brain processes information.”


In recent years, scientists have shown that the inner workings of sophisticated AI programs provide a promising mathematical model of how our minds process language. The neural network underlying ChatGPT or a similar program converts the sentences you type into a series of numbers. An fMRI scan can record how a subject’s neurons respond to the same words, and a computer can treat those scans as, essentially, another set of numbers. Repeating these operations over countless sentences yields two massive data sets: one describing how a machine represents language, and one describing how a human brain does. Researchers can then map the relationship between the two using an approach called an encoding model. Once trained, the encoding model can extrapolate: given the AI’s response to a new sentence, it can predict how neurons in the brain will respond.
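As a rough sketch of how such an encoding model works (the sizes, features, and responses below are illustrative stand-ins, not the study’s actual pipeline), one can fit a regularized linear map from language-model sentence embeddings to fMRI voxel responses, then score how well it predicts held-out data:

```python
# Minimal encoding-model sketch: map language-model sentence embeddings to
# fMRI voxel responses with ridge regression, then predict responses to
# held-out sentences. All data here are synthetic stand-ins; in a real study
# the features would come from a language model and the targets from scans.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sentences, n_features, n_voxels = 500, 768, 1000    # hypothetical sizes
X = rng.normal(size=(n_sentences, n_features))         # stand-in LM embeddings
true_map = rng.normal(size=(n_features, n_voxels)) * 0.05
Y = X @ true_map + rng.normal(size=(n_sentences, n_voxels))  # stand-in fMRI data

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Fit one regularized linear map from embedding space to voxel space.
encoder = Ridge(alpha=10.0)
encoder.fit(X_train, Y_train)

# Extrapolate: predict brain responses to sentences the model never saw,
# and score how well each voxel's activity is predicted.
Y_pred = encoder.predict(X_test)
voxel_corr = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(voxel_corr):.2f}")
```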

New studies that use AI to examine the brain’s language network seem to appear every few weeks. According to Nancy Kanwisher, a neuroscientist at MIT, each of these models could constitute “a computationally accurate hypothesis about what might be going on in the brain.” AI could, for example, shed light on open questions such as what exactly the human brain is trying to do while learning a language: not just the fact that a person is learning to speak, but the precise neural processes through which communication occurs. The hypothesis is that if a computer model trained for a specific goal, such as predicting the next word in a sequence or assessing a sentence’s grammatical coherence, turns out to be the best at anticipating brain responses, then the human mind may share that goal. Perhaps, like GPT-4, our minds work by figuring out which words are most likely to follow one another. In this way, a computational theory of the brain is developed from the inner workings of a language model.
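The comparison logic might look something like the sketch below: features from two hypothetical models (labels and data are made up, with the synthetic “brain response” driven by the “next-word” features by construction) are each fitted to the same responses, and the objective whose features better predict held-out data is treated as the more brain-like hypothesis.

```python
# Sketch of the model-comparison logic: whichever feature set best predicts
# held-out brain responses is treated as the stronger hypothesis about what
# the brain is computing. All data here are synthetic; by construction the
# responses are driven by the "next-word" features, so that set should win.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sentences, n_features = 400, 64

features = {
    "next-word model": rng.normal(size=(n_sentences, n_features)),
    "baseline model": rng.normal(size=(n_sentences, n_features)),
}
# Synthetic "brain response" for one region, built from the next-word features.
weights = rng.normal(size=n_features)
response = features["next-word model"] @ weights + rng.normal(size=n_sentences)

for name, X in features.items():
    score = cross_val_score(Ridge(alpha=1.0), X, response, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {score:.2f}")
```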


Because these computational methods are so new, they remain the subject of debate and competing theories. Francisco Pereira, director of machine learning at the National Institute of Mental Health, told me that there is no inherent reason why the representations learned by language models should have anything to do with how the brain interprets a sentence. But that doesn’t mean such a relationship cannot exist, and there are ways to test whether one does. Although AI algorithms are not exact replicas of the brain, they are effective research tools because, unlike the brain, they can be dissected, examined, and modified almost without limit. For example, cognitive scientists can test how different types of sentences elicit different types of brain responses and try to predict the activity of specific brain regions, in order to determine what those particular groups of neurons do. From there, researchers can begin “going into uncharted territory,” said Greta Tuckute, an expert on the relationship between the brain and language at MIT.

For now, AI’s usefulness may lie not in exactly replicating that unfamiliar neural terrain, but in creating workable abstractions of it. “If you have a map that reproduces every little detail of the world, the map is useless because it is the same size as the world,” said Anna Ivanova, a cognitive scientist at MIT, referencing a classic Borges story. “Therefore, abstraction is required.” By deciding what to keep and what to discard, choosing among streets, landmarks, and buildings, and then testing how useful the resulting map is, scientists are beginning to chart the linguistic geography of the brain.
