Decapitation video left on YouTube for hours without restriction sparks debate

Police said Wednesday they have charged Justin Mohn, 32, with first-degree murder and mishandling of a corpse after he decapitated his father, Michael, in their Bucks County home and posted a 14-minute video of it on YouTube that anyone, anywhere could watch.

News of the killing, which has drawn comparisons to beheading videos posted online by members of the Islamic State group at its peak nearly a decade ago, emerged as the CEOs of Meta, TikTok and other social media companies testified before federal lawmakers frustrated by what they see as a lack of progress on children's online safety.

YouTube, owned by Google, did not attend the hearing despite being one of the most popular platforms among teenagers.

The disturbing video from Pennsylvania emerged after other horrific footage that has spread on social media in recent years, including mass shootings broadcast live from Louisville, Kentucky; Memphis, Tennessee; and Buffalo, New York; as well as massacres filmed abroad in Christchurch, New Zealand, and the German city of Halle.


Middletown Police Capt. Pete Feeney said the video was posted around 10 p.m. Tuesday and remained online for about five hours, a lapse that raises questions about whether social media platforms are enforcing moderation practices that may be more necessary than ever amid the wars in Gaza and Ukraine and an extremely contentious presidential election in the United States.

“It’s another example of the blatant failure of these companies to protect us,” said Alix Fraser, director of the Council for Responsible Social Media at the nonprofit Issue One. “We can’t trust them to grade their own homework.”

A YouTube spokesperson said the company had removed the video, deleted Mohn's channel and was tracking and removing any reposts that might appear. The video-sharing platform says it uses a combination of artificial intelligence and human moderators to monitor its content, but it did not answer questions about how the video was detected or why that didn't happen sooner.

Major social media companies moderate content with the help of powerful automated systems, which can often detect banned content before a human can. But that technology sometimes falls short when a video is violent and graphic in a new or unusual way, as in this case, explained Brian Fishman, co-founder of trust and security technology company Cinder.

That’s when human moderators are “really, really critical,” he said. “AI is getting better, but it’s not there yet.”

The Global Internet Forum to Counter Terrorism (GIFCT), a group created by technology companies to prevent such videos from spreading online, was in communication with all of its members about the incident Tuesday night, said Adelina Petit-Vouriot, a spokeswoman for the organization.

About 40 minutes after midnight Eastern time on Wednesday, the forum activated its Content Incident Protocol, formally alerting its members and other parties to a violent event that had been livestreamed or recorded. The protocol allows the platform hosting the original recording to submit a "hash," a digital fingerprint corresponding to the video, and notifies nearly two dozen partner companies so they can restrict it on their own platforms.
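The hash-and-match workflow described above can be sketched as follows. This is a simplified illustration: it uses a cryptographic digest (SHA-256), which only matches byte-identical copies, whereas real hash-sharing systems rely on perceptual hashes so that re-encoded or trimmed variants of a video still match. The function names and the blocklist entry are hypothetical.

```python
import hashlib


def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a media file, reading in 1 MiB chunks.

    Note: a cryptographic hash like this only detects exact copies;
    production systems use perceptual video hashes so altered
    re-uploads still match.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


# A partner platform could then screen incoming uploads against a
# shared set of known-violating fingerprints (hypothetical data):
shared_blocklist = set()  # populated from the hash-sharing database


def should_block(path: str) -> bool:
    """Return True if the file's fingerprint is on the shared blocklist."""
    return fingerprint(path) in shared_blocklist
```

In practice, the platform that first identifies the video contributes its hash to the shared database, and member companies check new uploads against it automatically, which is why a fingerprint, rather than the video itself, is what gets exchanged.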

But by Wednesday morning, the video had already spread to X, where a graphic clip of Mohn holding his father’s head remained on the platform for at least seven hours and received 20,000 views. The company, formerly known as Twitter, did not respond to a request for comment.

Radicalization experts say social media and the internet have lowered the barrier to entry for people to explore extremist groups and ideologies, allowing anyone who may be predisposed to violence to find a community that reinforces those ideas.

In the video released after the murder, Mohn described his father as a 20-year federal employee, espoused various conspiracy theories and railed against the government.

Most social platforms have policies to remove violent and extremist content. But they can't detect everything, and the emergence of many newer, less moderated sites has allowed violent ideas to spread unchecked, according to Michael Jensen, a senior researcher at the Consortium for the Study of Terrorism and Responses to Terrorism, based at the University of Maryland.

Despite the obstacles, social media companies must be more vigilant in regulating violent content, said Jacob Ware, a researcher at the Council on Foreign Relations.

“The reality is that social media has become the front line of extremism and terrorism,” Ware said. “That is going to require more serious and committed efforts to counter them.”

Nora Benavidez, legal counsel for the media advocacy group Free Press, said that among the reforms she would like to see from technology companies are greater transparency about which types of employees are affected by layoffs and greater investment in workers dedicated to preserving safety on digital platforms.

Google, which owns YouTube, this month laid off hundreds of employees working on its hardware, voice assistance and engineering teams. Last year, the company said it cut 12,000 workers “across Alphabet, product areas, functions, levels and regions,” without providing further details.

Source: AP

Tarun Kumar

