February 29, 2024

Meta to Label AI-Generated Images On Social Media Platforms

Meta on Tuesday announced that it is developing tools to identify images created with generative AI. The company is working with industry partners to establish technical standards for identifying AI-generated images, and eventually videos.

The social media giant also revealed plans to deploy these tools at scale across its platforms, including Facebook, Instagram, and Threads.

Once the feature is deployed, users on Meta’s platforms will start seeing labels on AI-generated images appearing on their social media feeds.

In the blog post announcing the new feature, Meta said it aims to label images from leading companies such as OpenAI, Google, Microsoft, Midjourney, Adobe, and Shutterstock.

The tech giant would achieve this by detecting invisible watermarks, a form of metadata that these companies embed in images created with their tools. While the watermarks aren't visible to users, they act as signals that can be identified digitally.

Meta already marks photorealistic images generated using its own AI feature, attaching both invisible watermarks and visible markers.

Meta’s President of Global Affairs, Nick Clegg, said the new identification policy will apply only to AI-generated images for now, not audio or video.

Meta noted that these markers also make it easier for other platforms to identify AI-generated images, and emphasized that the marking system is central to the company’s “responsible approach” to developing generative AI features.

Meta’s IPTC metadata and invisible watermarks, both of which act as invisible markers, are in line with the best practices recommended by the Partnership on AI (PAI), the company added.
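To make the metadata-based approach concrete, here is a minimal illustrative sketch. It scans an image file's raw bytes for the IPTC DigitalSourceType vocabulary term "trainedAlgorithmicMedia", which the IPTC standard defines for fully AI-generated media. This is not Meta's actual detector: real systems parse the XMP/IPTC metadata blocks properly and also check pixel-level invisible watermarks, which a simple byte scan cannot see. The function name and the fabricated byte string are assumptions for illustration only.

```python
# Illustrative heuristic: look for the IPTC DigitalSourceType term for
# AI-generated media inside an image's raw bytes. Real detectors parse
# the embedded XMP/IPTC metadata structures rather than scanning bytes.

# IPTC's controlled-vocabulary term for fully AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain the IPTC AI-media marker."""
    return AI_MARKER in image_bytes

# Fabricated byte string standing in for a JPEG with an XMP packet:
fake_image = b"\xff\xd8\xff\xe1<xmp>trainedAlgorithmicMedia</xmp>"
print(looks_ai_generated(fake_image))  # True for this fabricated example
```

As the article notes, such markers are easy to strip, which is why Meta pairs metadata with invisible watermarks embedded in the pixels themselves.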

The image-only scope exists because Meta relies on markers added by the generative AI companies themselves, and those companies have yet to add such markers to audio and video content.

However, Clegg clarified that while identification standards for AI-generated audio and video are still lacking, Meta will add a feature that lets users disclose when they are sharing such content, enabling the company to apply AI labels based on voluntary disclosure.

With the rise of AI, there has been a significant uptick in the spread of disinformation and misinformation using AI-generated content.

The social media giant also admitted that while it is developing cutting-edge tools and standards for labeling synthetically generated content, malicious actors may still be able to remove the invisible markers.

This development comes at a crucial time, with several countries and blocs, including the US, the EU, India, and South Africa, set to hold major elections in 2024.

However, Meta is already working to counter such bad actors. The company is developing advanced classifiers that would let it automatically detect and label AI-generated content even when invisible markers are absent.

This means any AI-generated images from companies that don’t use markers can be labeled too.

“At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks,” Clegg wrote.

He also added that Meta’s AI research lab, FAIR, recently shared its research on Stable Signature, an invisible watermarking technology it is currently developing.

Describing AI as “both a sword and a shield”, Clegg explained that Meta has been using artificial intelligence to protect users from harmful content for years. The company also intends to use generative AI to deal with such content more efficiently.

