Words are gradually disappearing from the web. Officially, it's for moderation reasons, like YouTube demonetizing videos that contain swear words. But this system penalizes the very people it claims to protect: that is the subject of this issue of Numerama's #Rule30 newsletter.

At the start of the year, YouTubers made an odd discovery: they were no longer allowed to say bad words. The cause was a change in YouTube's internal rules on content deemed "advertiser-friendly", that is, videos allowed to earn advertising revenue. Using vulgar language within the first 8 seconds of a video was no longer permitted, with rare exceptions. The change was quickly criticized by creators, who denounced opaque and oddly strict rules. Some lost the right to monetize old videos posted before the ban on vulgar words. Others pointed out the hypocrisy of the system: while YouTubers are punished for a few swear words, many hateful or seriously misleading videos escape YouTube's moderation and earn advertising revenue without issue (for recent examples, look to Brazil).

The situation caused a stir as far away as France, where creators found themselves masking their swear words with bleeps. Faced with criticism, YouTube eventually announced that it would review these rules to better "adjust" them (this also concerns French video makers). We could stop there, and I could proudly write to you: damn, I'm already done with this newsletter! Except that this story is, in my opinion, emblematic of a larger phenomenon. Little by little, words are disappearing from the web.

A video posted by videographer ProZD (SungWon Cho) in early January, criticizing the censorship of swear words on YouTube.

Automatic moderation penalizes the people it’s supposed to protect

The problem is simple: there is a lot of filthy content online. This poses two major challenges for large platforms. The first is that this content sometimes endangers their users, through harassment or violent remarks aimed at certain categories of people. But it is also a financial issue. In an online ecosystem whose business model still relies mainly on displaying advertisements, advertisers (the companies that pay to display ads on a given platform) need to be kept happy. Believe it or not, Coca-Cola doesn't particularly enjoy seeing its logo displayed next to terrorist propaganda or an antisemitic meme. In 2017, YouTube suffered an unprecedented boycott by several major brands that had discovered their campaigns running on hateful videos. And since online moderation is a huge mess, platforms generally opt for automatic detection of so-called problematic content. The idea: at best, these posts are quickly removed; at worst, they are not paired with advertisements and therefore earn no revenue.

Automatic moderation is a dysfunctional system. It does not reliably identify violent content. Worse, it is easily manipulated, and it tends to penalize the people it is technically supposed to protect (the logic being: if I ban a slur, no one can be slurred). It was Twitter that censored the words "queer" and "dykes", or the phrase "how do we get men to stop raping?". It was TikTok that, for at least a few months, blocked people living in dilapidated homes or with visible disabilities from appearing in its recommendations, officially in the name of the "fight against bullying". Add to all this the usual opacity of how social networks work, and you get generalized paranoia among Internet users, who self-censor because of real or supposed moderation rules. We have already talked here about "algospeak", a phenomenon born on TikTok, where young Internet users use more or less absurd stand-ins (seggs, le$bean, depre$$ion, etc.) to talk about subjects that are neither violent nor dangerous. It is spreading elsewhere. Some of my favorite YouTubers now resort to subterfuge to avoid saying the words sex, drugs, alcohol, or death.

Although the use of coded words to evade moderation has always existed online (anyone who hung around forums a little too much between 1990 and 2010 can attest to it), it is the automation of this practice that is worrying. Rather than fearing inclusive writing (an enrichment of our language that follows the evolution of our society), we might be more concerned about this gradual restriction of the words we can use on the web. Not because the world is changing, but because big companies are overwhelmed by their own problems. Can we still say shit to them?


The press review of the week

Free the nipples (finally)

Since 2020, Meta (Facebook's parent company) has had an independent supervisory body, the Oversight Board, responsible for examining the social network's most contested moderation decisions and sometimes overturning them. Yesterday, the organization issued a very interesting opinion on the infamous censorship of nipples on Facebook and Instagram. A trans and non-binary couple had protested the removal of a photo showing them bare-chested, but with their nipples covered. Meta considered that the photo violated its rules on sexual content. Wrongly, the Oversight Board concluded, taking the opportunity to criticize more broadly Meta's vague rules on nudity, which "hinder the freedom of expression of women, trans and non-binary people [on Instagram and Facebook] (…) with an even greater impact on LGBTQI+ people". More details over at Engage.

Alert

You may know Hoshi for her love songs. Behind the scenes, the French singer has endured violent cyberbullying for several years, organized in particular via the 18-25 forum of the jeuxvidéo.com site, already notorious for other cases of the same kind (note that she has also suffered sexist, fatphobic, and homophobic attacks from perfectly traditional media, because the internet does not have a monopoly on hate). Last week, Hoshi denounced the inability of the French justice system to find and convict her online attackers, and is now appealing to Emmanuel Macron. You can read it on the France Inter website.

Help, my AI is harassing me

Replika is, at its core, a chatbot that claims to give its users a "virtual friend" able to respond to their messages using artificial intelligence. With a paid subscription, you can have the software behave romantically, even erotically. The problem: for some users, the experience turned into sexual harassment, the software using far more aggressive language than expected. You can read about it (in English) at Vice.

Nobleman

We are straying a little from this newsletter's usual subjects, but I recommend reading this interview with the economist Cédric Durand, recently published by L'Obs, about Elon Musk's ambitions. According to the lecturer, the acquisition of Twitter by the world's richest man is part of a "technofeudal" logic, whose purpose would be to control data and the capacity to process it. I found the idea fascinating, if terrifying. Read it here (article reserved for subscribers).

Something to read/watch/listen to/play

The Empress of Salt and Fortune.

In a fantasy universe where clerics are responsible for archiving the history of the kingdom, the cleric Chih investigates the life of a recently deceased empress. They travel to her old house, where she was exiled for many years, and there meet her former servant, Rabbit. Rabbit refuses to answer direct questions. It is by examining various everyday objects that Chih manages, little by little, to unravel the mystery surrounding the Empress.

I devoured The Empress of Salt and Fortune, a copy of which was kindly sent to me by the publisher L'Atalante (thank you!). It is the first novella (a short work of fiction, usually around 20,000 words) in a cycle of three tales, Les Archives des Collines-Chantantes, by the American author Nghi Vo. The archivist Chih is a recurring character, yet they are not the hero of these stories. In The Empress of Salt and Fortune, it is the secrets of the Empress and Rabbit that hold our attention, and it is through ordinary objects, rather than glorious relics, that they are revealed: dice, kitchen utensils, storage boxes, a dress, and so on. The powerful are not necessarily who we think they are. And it is in the intimate that power takes root.

The Empress of Salt and Fortune, by Nghi Vo, L’Atalante editions
