Meta’s Plan to Label AI-Made Videos, Images, and Audio

In February, Meta said it would add new labels across Instagram, Facebook, and Threads to show when an image was generated by AI. Now Meta plans to apply "Made with AI" labels to AI-generated video, images, and audio, using standards developed by Meta and others and based on industry-shared signals of AI generation. (Meta already applies an "Imagined with AI" label to photorealistic images created with its own AI tools.)

In a blog post on Friday, Meta said it will begin labeling AI-generated content in May 2024 and will stop automatically removing such content in July 2024. Until now, Meta relied on its manipulated media policy to decide whether AI-created images and videos should be taken down. Meta said the change follows feedback from its Oversight Board, public opinion surveys, and consultations with experts.

(Image: Meta)

"If we determine that digitally created or altered images, video, or audio could materially deceive the public on a matter of importance, we may add a more prominent label," Meta said in its blog post. "That gives people more information and context about the content, and if they see it elsewhere, they'll know what it is."

Meta's Oversight Board, established in 2020 to review Meta's content policies, found that Meta's existing approach to AI-generated content was too narrow. Written in 2020, when AI-generated content was rare, the policy covered only videos altered by AI to make it appear that someone said something they did not say. But as AI tools have improved, the board said the policy should also cover manipulated videos that show someone doing something they never did.

The board also said that removing AI-generated media that does not otherwise violate Meta's Community Standards could unnecessarily restrict freedom of expression. Instead, it recommended that Meta label AI-generated media but leave it up. Meta and other platforms have been criticized for not doing enough to curb misinformation. Manipulated media is a particular concern in 2024, when many countries, including the US, are holding elections and fake videos and images of politicians are easy to produce.

"We want to help people know when photorealistic images have been created or edited using AI, so we'll continue to collaborate with industry peers and governments, and we'll keep reviewing our approach as technology progresses," Meta said in its post.

Source: ZDNET