Can computers understand complex words and concepts?


Summary: Artificial intelligence systems can learn complex words and concepts, representing word meanings in a way that closely matches human judgments.

Source: UCLA

In “Through the Looking Glass,” Humpty Dumpty says, in rather a scornful tone, “When I use a word, it means just what I choose it to mean – neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”

The question of what words actually mean is ancient. To grasp meaning, the human mind must parse a web of detailed, flexible information and exercise sophisticated common sense.

Now, a new problem related to the meaning of words has emerged: scientists are investigating whether artificial intelligence can mimic the human mind to understand words the way people do. A new study by researchers from UCLA, MIT and the National Institutes of Health addresses this question.

The article, published in the journal Nature Human Behaviour, reports that artificial intelligence systems can indeed learn very complicated word meanings, and that the scientists discovered a simple trick to extract this complex knowledge.

They found that the AI system they studied represents the meaning of words in a way that strongly correlates with human judgment.

The AI system studied by the authors has been used frequently over the past decade to study word meaning. It learns the meanings of words by “reading” astronomical amounts of text on the internet, encompassing tens of billions of words.

When words frequently appear together – “table” and “chair,” for example – the system learns that their meanings are related. And if pairs of words very rarely appear together – like “table” and “planet” – it learns that they have very different meanings.
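A minimal sketch of that idea, using made-up three-dimensional vectors (real embedding models such as word2vec or GloVe learn hundreds of dimensions from co-occurrence statistics; the numbers here are purely illustrative):

```python
import numpy as np

# Purely illustrative 3-D word vectors; real models learn them from
# co-occurrence statistics over billions of words.
vectors = {
    "table":  np.array([0.9, 0.8, 0.1]),
    "chair":  np.array([0.8, 0.9, 0.2]),
    "planet": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: near 1 for related words, near 0 for unrelated ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["table"], vectors["chair"]))   # high: related meanings
print(cosine(vectors["table"], vectors["planet"]))  # low: unrelated meanings
```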

This approach seems like a logical starting point, but consider how well humans would understand the world if the only way to learn meaning were to count how often words occur near each other, with no ability to interact with other people or the environment.

Idan Blank, assistant professor of psychology and linguistics at UCLA and co-senior author of the study, said the researchers sought to find out what the system knows about the words it learns, and what kind of “common sense” it possesses.

Before the research began, Blank said, the system seemed to have one major limitation: “As far as the system is concerned, any two words have only a single numerical value that represents how similar they are.”

In contrast, human knowledge is much more detailed and complex.

“Consider our knowledge of dolphins and alligators,” Blank said. “When we compare the two on a scale of size, from ‘small’ to ‘large’, they are relatively similar. In terms of intelligence, they are somewhat different. In terms of how dangerous they are to us, on a scale from ‘safe’ to ‘dangerous’, they differ greatly. Thus, the meaning of a word depends on the context.

“We wanted to ask if this system actually knows about these subtle differences – if its idea of similarity is flexible in the same way that it is for humans.”

To find out, the authors developed a technique they call “semantic projection.” One can draw a line between the model’s representations of the words “big” and “small,” for example, and see where the representations of different animals fall along that line.
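A minimal sketch of how such a projection could work, again with made-up toy vectors (the study applied this to embeddings learned from web text, and the paper’s exact normalization may differ):

```python
import numpy as np

# Toy vectors for illustration only; in the study these come from an
# embedding model trained on large text corpora.
vec = {
    "small":     np.array([ 1.0, 0.2, 0.0]),
    "big":       np.array([-1.0, 0.1, 0.0]),
    "mouse":     np.array([ 0.9, 0.3, 0.1]),
    "dolphin":   np.array([-0.2, 0.5, 0.3]),
    "alligator": np.array([-0.3, 0.4, 0.2]),
}

def semantic_projection(word, low, high):
    """Project a word vector onto the line from `low` to `high`.

    Returns a scalar position: 0 lands at `low`, 1 at `high`.
    """
    line = vec[high] - vec[low]
    return float((vec[word] - vec[low]) @ line / (line @ line))

for animal in ("mouse", "dolphin", "alligator"):
    print(animal, round(semantic_projection(animal, "small", "big"), 2))
```

On this toy scale, “mouse” lands near the “small” end, while “dolphin” and “alligator” project close together toward “big” – echoing the dolphin–alligator example above.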

Using this method, the scientists studied 52 groups of words to see if the system could learn to sort out meanings – like judging animals by their size or how dangerous they are to humans, or classifying US states by weather or by overall wealth.

Other word groups included terms related to clothing, professions, sports, mythological creatures and first names. Each category was associated with several contexts or dimensions – size, danger, intelligence, age and speed, for example.

A representation of semantic projection, which can determine the similarity between two words in a specific context. This grid shows how similar some animals are based on their size. Credit: Idan Blank/UCLA

The researchers found that, across these many objects and contexts, their method produced judgments very similar to human intuition. (To make this comparison, the researchers also asked cohorts of 25 people each to make similar ratings for each of the 52 word groups.)

Remarkably, the system learned to perceive that the names “Betty” and “George” are similar in terms of being relatively “old”, but they represent different genders. And that ‘weightlifting’ and ‘fencing’ are similar in that both generally take place indoors, but differ in the amount of intelligence they require.

“It’s such a simple and completely intuitive method,” Blank said. “The line between ‘big’ and ‘small’ is like a mental scale, and we place animals on that scale.”

Blank said he didn’t expect the technique to work, but was thrilled when it did.

“It turns out that this machine learning system is a lot smarter than we thought; it contains very complex forms of knowledge, and that knowledge is organized in a very intuitive structure,” he said. “Just by keeping track of which words co-occur with one another in language, you can learn a lot about the world.”


The study’s co-authors are MIT cognitive neuroscientist Evelina Fedorenko, MIT graduate student Gabriel Grand, and Francisco Pereira, who leads the machine learning team at the National Institute of Mental Health.

Funding: The research was funded in part by the Office of the Director of National Intelligence, Intelligence Advanced Research Projects Activity (IARPA), via the Air Force Research Laboratory.

About this AI and linguistics research news

Author: Stuart Wolpert
Source: UCLA
Contact: Stuart Wolpert – UCLA
Image: Image credited to Idan Blank/UCLA

Original research: Open access.
“Semantic projection recovers rich human knowledge of multiple object features from word embeddings” by Idan Blank et al. Nature Human Behaviour


Summary

Semantic projection recovers rich human knowledge of multiple object features from word embeddings

How is knowledge of the meaning of words represented in the mental lexicon?

Current computational models infer word meaning from lexical co-occurrence patterns. They learn to represent words as vectors in a multidimensional space, in which words used in more similar linguistic contexts – that is, words that are more semantically related – are located closer to each other.

However, while inter-word proximity captures only overall semantic relatedness, human judgments are highly context-dependent. For example, dolphins and alligators are similar in size but differ in their dangerousness.

Here, we use a domain-general method to extract context-dependent relationships from word embeddings: “semantic projection” of word vectors onto lines that represent features such as size (the line connecting the words “small” and “big”) or danger (from “safe” to “dangerous”), analogous to “mental scales.” This method recovers human judgments about various categories and properties of objects.
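One plausible formalization of that projection (an illustrative reading; the paper’s exact normalization may differ) is the scalar projection of a word vector onto the feature line:

```latex
% Position of word vector \vec{w} along the line from pole \vec{a}
% (e.g. "small") to pole \vec{b} (e.g. "big"); 0 at \vec{a}, 1 at \vec{b}.
\operatorname{proj}(\vec{w}) =
  \frac{(\vec{w}-\vec{a}) \cdot (\vec{b}-\vec{a})}
       {\lVert \vec{b}-\vec{a} \rVert^{2}}
```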

Thus, the geometry of word embeddings explicitly represents a wealth of context-dependent world knowledge.
