Efforts to make text-based AI less racist and scary

In another test, Xudong Shen, a PhD student at the National University of Singapore, rated language models on how much they stereotype people by gender or by whether they identify as queer, transgender, or nonbinary. He found that larger artificial intelligence programs tend to engage in more stereotyping. Shen said the makers of large language models should correct for these flaws. OpenAI researchers have also found that language models tend to become more toxic as they get larger; they say they don't understand why that is.

The text generated by large language models comes ever closer to language that looks or sounds like it came from a human, yet it still fails to understand things that require the kind of reasoning almost everyone is capable of. In other words, as some researchers have put it, this artificial intelligence is a remarkably convincing producer of nonsense, able to persuade AI researchers and others that the machine understands the words it generates.

Alison Gopnik, a professor of psychology at the University of California, Berkeley, studies how toddlers and young people learn, and how that understanding might be applied to computing. Children, she said, are the best learners, and the way they learn language stems largely from their knowledge of, and interaction with, the world around them. By contrast, large language models have no connection to the world, which makes their output less grounded in reality.

“The definition of nonsense is that you talk a lot and it sounds plausible, but there is no common sense behind it,” Gopnik said.

Yejin Choi, an associate professor at the University of Washington who leads common-sense research at the Allen Institute for Artificial Intelligence, has run GPT-3 through dozens of tests and experiments to document how it can go wrong. Sometimes it repeats itself. Other times it devolves into producing toxic language, even when it starts from harmless text.
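
As an illustration only, not Choi's actual test suite, a probe for one of these failure modes might look like the sketch below: it asks a small open model (GPT-2 standing in for GPT-3, which is only reachable through an API) to continue a few prompts and flags continuations that repeat themselves. The prompts and the repeated-trigram check are arbitrary choices made for this example.

```python
# Minimal sketch of probing a language model for degenerate, repetitive output.
# GPT-2 is a stand-in for GPT-3; the prompts and the trigram check are
# illustrative choices, not the tests described in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def has_repeated_trigram(text: str) -> bool:
    """Crude degeneration check: does any 3-word sequence occur twice?"""
    words = text.split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    return len(trigrams) != len(set(trigrams))

prompts = ["The scientist explained that", "My neighbor told me that"]
for prompt in prompts:
    out = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    label = "repetitive" if has_repeated_trigram(out) else "ok"
    print(f"[{label}] {out!r}")
```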

To give artificial intelligence a better sense of the world, Choi and a group of researchers created PIGLeT, an AI trained in a simulated environment to understand the kind of physical experience people pick up as they grow, for example, that touching a hot stove is a bad idea. That training produced a relatively small language model that outperforms other models on common-sense reasoning tasks. These results, she said, show that scale is not the only recipe for success and that researchers should consider other ways of training models. Her goal: “Can we actually build a machine learning algorithm that can learn abstract knowledge about how the world works?”

Choi is also working on ways to reduce the toxicity of language models. Earlier this month, she and her colleagues introduced an algorithm that learns from offensive text, similar to the approach taken by Facebook AI Research; they say it reduces toxicity better than several existing techniques. Large language models can be toxic because of humans, she said. “That is the language that is out there.”
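
The article does not spell out how the algorithm works, but one family of decoding-time detoxification techniques contrasts a base model against a second model that has been exposed to offensive text, and steers generation away from tokens that second model favors. The sketch below illustrates that general idea only, not the paper's method; `gpt2` stands in for both models, and a real setup would substitute an actual "anti-expert" fine-tuned on toxic text.

```python
# Rough sketch of decoding-time steering with an "anti-expert" model.
# A generic illustration of the idea, not the algorithm from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
# Placeholder: in practice this would be a model fine-tuned on offensive text.
anti = AutoModelForCausalLM.from_pretrained("gpt2")

alpha = 2.0  # steering strength (illustrative value)
ids = tok("The new neighbors are", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        base_logits = base(ids).logits[:, -1, :]
        anti_logits = anti(ids).logits[:, -1, :]
    # Boost tokens the base model prefers relative to the anti-expert,
    # pushing generation away from what the toxic model would say.
    steered = base_logits + alpha * (base_logits - anti_logits)
    next_id = steered.argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0]))
```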

Conversely, some researchers have found that attempts to fine-tune models and remove bias can end up harming marginalized people. In a paper published in April, researchers at the University of California, Berkeley, and the University of Washington found that Black people, Muslims, and LGBT people are particularly disadvantaged.

The authors said part of the problem is that the people who judged whether language is toxic and labeled the data got it wrong, leading to bias against people whose use of language differs from that of white people. The paper's coauthors said this can cause self-stigmatization and psychological harm, and can force people to code-switch. The OpenAI researchers did not address this issue in their recent paper.

Jesse Dodge, a research scientist at the Allen Institute for Artificial Intelligence, reached a similar conclusion. He examined an effort to reduce negative stereotypes of gay and lesbian people by removing any text containing the words “gay” or “lesbian” from the training data of a large language model. He found that this kind of language filtering can produce data sets that effectively erase people with these identities, leaving the language model unable to handle text written by or about those groups.
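
A toy example, with made-up sentences rather than Dodge's actual data, shows why such blunt keyword filtering backfires: dropping every document that mentions a blocked word also removes perfectly ordinary text written by or about the people those words name.

```python
# Toy illustration of the side effect of keyword-based filtering:
# the blocked words appear here only in ordinary, non-toxic sentences,
# yet every document mentioning them is removed from the training set.
blocklist = {"gay", "lesbian"}

documents = [
    "The lesbian couple next door hosted a neighborhood barbecue.",
    "He came out as gay to his family last spring.",
    "The city council approved a new bike lane yesterday.",
]

kept = [
    doc for doc in documents
    if not blocklist & set(doc.lower().split())
]

print(kept)
# Only the bike-lane sentence survives, so a model trained on `kept`
# never sees everyday text about these identities.
```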

Dodge said the best way to deal with bias and inequality is to improve the data used to train language models, rather than trying to remove bias after the fact. He recommends better documenting the sources of training data and recognizing the limitations of text scraped from the web, which may overrepresent people who can afford internet access and have the time to make a website or post comments. He also urged documenting how content is filtered, and avoiding blanket use of block lists to filter content crawled from the web.
