Welcome to TikTok’s endless cycle of censorship and mistakes

It is not necessarily surprising that these videos become news. People make them because they work. For years, going viral has been one of the more effective strategies for pushing large platforms to fix certain problems. TikTok, Twitter, and Facebook make it easy for users to report abuse and rule violations by other users. But when these companies themselves appear to violate their own policies, people often find that the best route forward is to post about it on the platform itself, hoping to spread widely and attract enough attention to force some kind of resolution. Tyler’s two videos about his Marketplace profile, for example, were each viewed more than 1 million times.

“The content is flagged because they are from marginalized groups and they are talking about their experiences with racism. Hate speech and talking about hate speech can look very similar to an algorithm.”

Casey Fiesler, University of Colorado Boulder

“I probably get flagged about once a week,” said Casey Fiesler, an assistant professor at the University of Colorado Boulder who researches technology ethics and online communities. She is active on TikTok, with more than 50,000 followers, and while not every flag she sees strikes her as a legitimate concern, she said the app’s general problems are real. In the past few months alone there have been several such mistakes, all of them with a disproportionate impact on marginalized groups on the platform.

MIT Technology Review asked TikTok about each of these recent examples, and the responses were similar: after investigating, TikTok found that the problem had been created in error, stressed that the blocked content in question did not violate its policies, and pointed to the company’s support for the affected groups.

The question is whether this cycle, in which a technical or policy mistake sparks a viral response followed by an apology, can ever be changed.

Solve problems before they arise

“There are two harms of this kind of algorithmic content moderation that people have observed,” Fiesler said. “One is false negatives. People ask, ‘Why is there so much hate speech on this platform, and why isn’t it being taken down?’”

The other is false positives. “Their content is flagged because they are from marginalized groups and are talking about their experiences with racism,” she said. “Hate speech and talking about hate speech can look very similar to an algorithm.”

She pointed out that both categories of error hurt the same people: those who are targeted by abuse end up being algorithmically censored for speaking out about it.
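To see why the two failure modes are so entangled, consider a minimal sketch of keyword-based flagging in Python. This is a hypothetical illustration, not TikTok’s actual system; the term list and function names are invented for this example. A filter that matches on surface terms cannot tell an attack apart from a first-person account of being attacked.

    # Hypothetical keyword filter, for illustration only. Real moderation
    # systems use statistical models, but face the same context problem.
    BLOCKED_TERMS = {"racism", "hate speech"}  # invented example terms

    def is_flagged(text: str) -> bool:
        """Flag any text containing a blocked term, regardless of context."""
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    # A first-person account of abuse is flagged just like abuse itself:
    print(is_flagged("I want to talk about the racism I experienced"))  # True
    print(is_flagged("Here is a video about my garden"))                # False

Real classifiers are statistical rather than rule-based, but the problem Fiesler describes is structural: without modeling context and speaker, counter-speech shares most of its surface features with the abuse it responds to.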

TikTok’s mysterious recommendation algorithm is part of its success, but its unclear and constantly shifting boundaries have a chilling effect on some users. Fiesler noted that many TikTok creators self-censor words on the platform to avoid triggering a review. Although she is not sure how effective this tactic is, Fiesler has started doing it herself, just in case. Account bans, algorithmic mysteries, and strange moderation decisions are a constant part of the conversation on the app.
