Meta applies measures to detect content created with Artificial Intelligence

Tech giant Meta announced that it will begin automatically identifying and tagging any artificial intelligence (AI)-generated image or audio content posted to Facebook, Instagram, and Threads that was made using third-party tools.

According to the company’s statement, the measure introduces specific markers to identify this content under the label “Made with AI,” along with an invisible watermark developed in collaboration with other companies in the sector, such as Google, OpenAI, Microsoft, Adobe, and Midjourney, to make it easier for Meta’s platforms to accurately determine whether content is real or synthetic.

The tool will begin rolling out in May of this year as part of Meta’s fight against disinformation and fake news, with the aim of informing users when content has been created with AI and avoiding confusion.

Following consultation with its content advisory council, the company owned by Mark Zuckerberg will apply this transparency approach throughout this year, with the goal of “learning much more about how people create and share AI content.”

The policy review was carried out in collaboration with more than 120 stakeholders from 34 countries across the world’s major regions, including academics and civil society organizations, and was complemented by public surveys on how AI-generated content should be handled.

100 trusted fact-checkers

Meta stated that it relies on its network of “approximately 100 independent fact-checkers” to detect “false or misleading” AI-generated content.

“This will inform industry best practice and our own approach to the future,” said Meta’s President of Global Affairs, Nick Clegg.

This update builds on the approach already applied to photorealistic images created with the company’s Meta AI tool, released in December 2023: in addition to visible markers and labels, it introduces invisible watermarks, such as metadata embedded within image files.
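As a rough illustration of how metadata-based provenance signals can work, the sketch below scans a file’s bytes for the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia`, which the industry uses to mark fully AI-generated media. This is a simplified assumption-laden example, not Meta’s actual detection pipeline; the file contents are simulated and a real detector would parse the XMP packet properly rather than scan raw bytes.

```python
# Hypothetical sketch: detecting an AI-provenance marker in embedded image
# metadata. The marker value is from the IPTC DigitalSourceType vocabulary;
# the file layout below is simulated for illustration only.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC value for fully AI-generated media

def looks_ai_generated(file_bytes: bytes) -> bool:
    """Naively scan a file's bytes (e.g. an embedded XMP packet) for the marker."""
    return AI_MARKER in file_bytes

# Simulated image file carrying an XMP packet that declares AI provenance
fake_image_with_xmp = (
    b"\x89PNG...<x:xmpmeta>"
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>"
)

print(looks_ai_generated(fake_image_with_xmp))        # True
print(looks_ai_generated(b"\x89PNG...no metadata"))   # False
```

A key limitation the article implicitly acknowledges: metadata like this is easy to strip, which is why Meta pairs it with invisible watermarks and user disclosure requirements.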

If Meta considers content to pose a high risk of deceiving the public, it may add a more prominent label to provide additional context.

Penalties for users

The American company also announced a feature that will let users disclose when they are sharing this type of content on its social networks, so that the platform can add a label.

Users who post photorealistic video or realistic-sounding audio that was digitally created or altered without disclosing it could face penalties.

Another important change is that AI-generated content will no longer be removed outright, as it had been until now under the policy Meta drafted in 2020.

The parent company of Facebook, Instagram, and Threads had already expressed in February its intention to label all AI-generated images to prevent misinformation, announcing that it was working with industry partners to identify such posts through signals embedded in the metadata produced by different generation tools, so that technology platforms can detect them.

@Lydr05

Source: With information from Europa Press, Infobae, and El Espectador

Tarun Kumar

I'm Tarun Kumar, and I'm passionate about writing engaging content for businesses. I specialize in topics like news, showbiz, technology, travel, food and more.
