
Meta wants to identify any image generated by artificial intelligence on its social networks

By Natasha Kumar, Apr 17, 2024


AFP/Archives – Lionel BONAVENTURE

The American giant Meta wants to identify "in the coming months" any image generated by artificial intelligence (AI) that is published on its social networks, a decision taken as part of the fight against disinformation at the start of a year packed with elections.

"In the coming months, we will label images that users post on Facebook, Instagram and Threads" when industry-standard signals indicate they are AI-generated, Nick Clegg, Meta's head of global affairs, said in a blog post on Tuesday.

While Meta has already implemented these labels on images created with its Meta AI tool, launched in December, "we want to be able to do the same with content created with tools from other companies" such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, he added.

"We are building this capability now, and in the coming months we will start applying labels in all languages supported by each application," the executive added.

The announcement comes as the rise of generative AI raises fears that people could use these tools to sow political chaos, notably through disinformation, ahead of several major elections this year, including in the United States.

Beyond these ballots, the development of generative AI programs has been accompanied by a flow of degrading content, according to many experts and regulators, such as fake pornographic images ("deepfakes") of famous women, a phenomenon that also targets ordinary people.

A fake image of American superstar Taylor Swift, for example, was viewed 47 million times on X (formerly Twitter) at the end of January before being deleted. According to American media, the post remained online on the platform for approximately 17 hours.
– Digital "watermark" –

While Nick Clegg admits that this large-scale labeling "won't completely eliminate" the risk of fake images being produced, "it would certainly minimize" their proliferation "within the limits of what technology currently allows."

In concrete terms, how does it work? In addition to placing visible markers on AI-generated images, Meta relies on "watermarking," a technique that "consists of inserting an invisible mark inside the image generated" by AI so that its social networks can detect it, Gaëtan Le Guelvouit, a digital-watermarking expert at the Technological Research Institute b<>com, told AFP.

"Every time an image is posted on one of their social networks, there is some processing: image compression, resizing and so on. It doesn't cost much to add a small detection brick in there. They have the means to do it," he adds.
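The invisible-mark idea can be illustrated with a toy least-significant-bit watermark. This is a simplified sketch for intuition only, not Meta's actual scheme: real industry watermarks are designed to survive compression and resizing, which this naive approach would not.

```python
# Toy illustration of an invisible watermark: hide a short tag in the
# least-significant bits of pixel values. Changing only the lowest bit
# alters each pixel by at most 1, which is imperceptible to the eye.

def embed(pixels, tag):
    """Hide the bits of `tag` in the low bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = list(pixels)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite the lowest bit
    return out

def detect(pixels, length):
    """Read `length` bytes back out of the low bits."""
    data = bytearray()
    for start in range(0, length * 8, 8):
        byte = 0
        for i in range(8):
            byte |= (pixels[start + i] & 1) << i
        data.append(byte)
    return bytes(data)

# A flat list of grayscale pixel values stands in for an image.
image = [120, 33, 255, 7, 98, 41, 200, 180] * 10
marked = embed(image, b"AI")       # each pixel changes by at most 1
assert detect(marked, 2) == b"AI"  # the platform can recover the mark
```

Production systems instead spread the mark redundantly across the whole image in the frequency domain, precisely so that the "processing" Le Guelvouit mentions (compression, resizing) does not erase it.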

"It's not perfect, the technology isn't quite there yet, but it's the most advanced attempt by any platform so far to provide meaningful transparency to billions of people around the world," Nick Clegg told AFP.

"I really hope that by doing this and taking the lead, we will encourage the rest of the industry to work together and try to develop the common (technical) standards that we need," added the Meta executive, who says the company is ready to "share" its open technology "as widely as possible."

"According to our research, this is not an easy task, but it is probably an important element that will increase confidence in generative AI and in technology platforms," Duncan Stewart, director of technology, media and telecommunications research at Deloitte, told AFP.

“It is essential that companies collaborate and define or agree on common technical standards. Isolated solutions may be insufficient,” he adds.

OpenAI, the creator of ChatGPT, also announced in mid-January the launch of tools to combat disinformation, emphasizing that its DALL-E 3 image generator includes "safeguards" to prevent users from generating images of real people, notably candidates, for political purposes.


By Natasha Kumar

Natasha Kumar has been a reporter on the news desk since 2018. Before that she wrote about young adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining The Times Hub, Natasha Kumar worked as a staff writer at the Village Voice and a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact her at 1-800-268-7116.
