Artists Can Now Poison AI Image Generators to Stop Them Ripping Off Their Work

Image: An artwork done by an artist, of the kind that could be scraped by AI.

How do you stop AI from hoovering up creative content and spitting it back out as if it were its own? One group of researchers has come up with a radical answer: poison the source and bring AI down from the inside.

A new tool called ‘Nightshade’ allows artists to lace their work with pixel-level changes that are invisible to the human eye but can end up breaking generative AI models if the images are used in training.
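
For readers curious about the general mechanics, the minimal Python sketch below illustrates the broad idea of an imperceptible pixel perturbation. It is purely illustrative and not Nightshade’s actual method: the real tool optimises its changes against specific AI models rather than adding random noise, and the file names and perturbation budget here are assumptions for the example.

```python
# Illustrative sketch only: NOT Nightshade's actual algorithm.
# It shows the general idea of an "imperceptible" change by nudging each
# pixel by at most a few intensity levels (out of 255), a budget small
# enough to be hard to see but large enough to alter the training data.
import numpy as np
from PIL import Image

EPSILON = 4  # assumed per-channel perturbation budget

original = np.asarray(Image.open("artwork.png").convert("RGB"), dtype=np.int16)

# Toy perturbation: bounded random noise. Real poisoning tools instead
# optimise the perturbation against a target model so the image is
# misinterpreted during training.
perturbation = np.random.randint(-EPSILON, EPSILON + 1, size=original.shape)

poisoned = np.clip(original + perturbation, 0, 255).astype(np.uint8)
Image.fromarray(poisoned).save("artwork_protected.png")
```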

The tool comes from a team at the University of Chicago led by Professor Ben Zhao, which earlier this year released another tool to combat generative AI called ‘Glaze’. That tool ‘coats’ the work of human artists in a similarly invisible digital shield, one that stops AI training software from accurately learning and mimicking their style.

“Artists are afraid of posting new art,” the professor of computer science told The New York Times. He said that artists have a “fear of feeding this monster that becomes more and more like them.”

“It shuts down their business model.”

Generative AI imagery programmes like Midjourney and DALL-E have been terrifying and enraging anyone who makes a living creating artwork since they exploded onto the digital scene late last year. They enable users to generate any imagery they can think of, in any artistic style they want. This is not only problematic for artists from a competitive standpoint; it carries an extra sting given that the AI is trained to do so using their own work.

Despite the major issues generative AI is already creating, there are few, if any, protections available for artists, and the legal routes have yet to be tested.

In January, a group of artists in the US launched a class-action lawsuit against the companies running the generative AI programmes Midjourney, Stable Diffusion, and DreamUp, claiming they had not been credited or compensated, and had not given their consent, for the use of their work in training AI programmes to generate art in their styles.

Image: How Nightshade works to disrupt generative AI image generation, depending on how many ‘poisoned’ concepts the AI has been trained on.

Just last month, dozens of the world’s top authors joined a similar lawsuit after it was discovered that their work had been used to build an AI training dataset called Books3. Australian authors, including Richard Flanagan, described the practice as the “biggest act of copyright theft in history.”

At present, Australia has no specific laws pertaining to AI, while the US and the EU are each working on major legislative frameworks to address the copyright issues raised by AI.

Zhao describes himself and his team as “pragmatists” who are giving artists tools in the vastly one-sided fight against machine learning in the absence of proper regulation.

“We recognize the likely long delay before law and regulations and policies catch up. This is to fill that void,” he said.

Related: Microsoft Is Spending $5 Billion in Australia to Build an AI ‘Cyber Shield’

Related: Charlie Brooker’s Ideas About the Future of AI Sound Like a ‘Black Mirror’ Episode
