Today in Meteor
- Disrupt: Artist vs. AI: Who is right?
- Compile: A Crypto-free Super Bowl Sunday
- Beyond: Where do we go when we die? The Metaverse, of course
- Artificial Unintelligence: Women eating salad
Big ideas that change the world
Generative AI systems like DALL-E, Midjourney, and Stable Diffusion, with their uncanny ability to create professional-quality imagery, have generated a mix of delight and horror among artists.
For some creators the models spark a sense of uneasy existential dread about being replaced, while others are taking the companies to court after learning their own images were used to train the systems without consent or compensation.
A group of artists filed a class action lawsuit against the companies behind the platforms last month, alleging theft of intellectual property and seeking damages. Photo giant Getty Images filed a lawsuit seeking as much as $1.8 trillion in damages from Stability AI for scraping 12 million of its images.
What's Really Happening
One of the artists who's been vocally upset about AI (though he's not part of the lawsuit) is Greg Rutkowski. He's a highly sought-after painter for video games and fantasy-themed art. We're great fans of his work.
He's also become a popular prompt term for AI art programs, meaning users often enter his name in hopes of producing an image with some of his panache. But how the AI interprets that prompt isn't cut and dried.
Our tongue-in-cheek cover was created using Midjourney and the simple prompt "dragon by Greg Rutkowski." A beautiful fantasy image emerged. But how much of its aesthetic could be attributed to Rutkowski? It's hard to say.
Observe the triptych below.
The first image was hand-painted by Rutkowski. The second is Midjourney's response to a prompt for a dragon that included his name. The third is Midjourney asked to make a generic dragon. Are there stylistic similarities? Certainly. Is it a clear copy of the artist's work? Definitely not.
Matters get more complicated if several artist names or styles are bunched together or if an artist's name is applied to something he wouldn't normally paint, like this image made with the prompt "hamburger by Greg Rutkowski."
To help sort these matters out, Meteor contacted Kate Downing, an intellectual property attorney working with tech companies. She believes that in their rush to market, AI companies didn’t devote enough attention or resources to pruning copyrighted and other problematic images from their training sets.
“I suspect that these generative AI companies are likely seeing interest and use cases from places they didn't expect,” she says. “I think they also really wanted to get a sense for whether or not there would be interest in their products and where that interest was going to come from before they spent a lot more money on such pruning and licensing deals.”
To say the situation is messy is an understatement. How the legal system will approach questions involving copyright, derivative works and fair use in the new context of AI machine learning models that ingest petabytes of data from across the web is anyone’s guess.
What’s worse is that it could take some time before precedents are set that can apply broadly because cases involving derivative works and fair use tend to be treated very narrowly and on the specific facts of the case. One decision may tell you absolutely nothing about how a court may view a similar case.
“In large part, this becomes a game of who has the best metaphors,” Downing says. “The plaintiffs against Stability AI (& co.) paint Stable Diffusion as a mere collage tool that users are using to steal the artwork of hardworking famous artists. Stability AI will have a very different narrative to paint to the court that almost certainly argues that the types of harms described by the plaintiffs were very rare, have largely been addressed once brought to their attention, and shouldn't overshadow the creativity, productivity, accessibility and usefulness of their technology.”
Why It Matters
This is bad news in the short term for the companies behind these tools (and theoretically, Downing says, for some of the creators that use them) who may have to mount a fair use defense through expensive litigation and multiple appeals that will drag on for years.
New tools are springing up to try and reverse engineer images created by generative AI to determine which human artists’ works were used to train the model. Machine learning startup Chroma recently unveiled the beta version of Stable Attribution, which aims to take images created by Stable Diffusion and pinpoint what images scraped from the web went into generating the new picture.
A system like Stable Attribution could offer artists a way to claw back some compensation from AI companies and a means to opt in, or not, to training models in the future.
But Downing doesn’t see much of a role for such a tool in settling legal scores.
“Right now, the way Stable Diffusion works, every image that's generated is generated using the entire dataset. Telling you which parts of the dataset ‘contributed’ is just illogical -- that's not how it works… The model isn't just regurgitating what's in the dataset collage-style, it's making you an image based on what it ‘knows’ both of the subject matter you've put in the prompt AND what it knows about ‘pleasing’ art.”
Early users of Stable Attribution report its results can be pretty iffy. Its creators concede that this early version of the system is a work in progress. Still, they insist establishing the collective provenance of visual data that is mashed up in a machine learning brain and spit back out is “not an impossible problem,” just a difficult one that remains unsolved.
Stability AI did not immediately respond to a request for comment.
What we’re seeing now is the fragile chaos of the sandbox that is consumer AI. Tools barely out of alpha and beta testing suddenly saw record adoption and it seems like they’re being used in unanticipated ways with unforeseen consequences. These early sandcastles built on the beautiful beach of the AI frontier could be washed away by a single legal wave.
But if that happens, something will be built in their place. One obvious solution is new systems with more thoroughly vetted training data or, better yet, platforms that allow users to pick their own training data.
“I think we're going to see companies enter the market whose entire business is putting these together and selling them to whoever wants them,” adds Downing.
She says we shouldn’t necessarily fault AI companies for not seeing the legal broadsides coming.
“There are a lot of technological innovations that we enjoy today that probably wouldn't have succeeded if they had done everything "by the book" from the get-go… I'm not sure VCRs or things like TiVo would have ever gone to market if they had tried to iron out the legal issues first. Same goes for the likes of YouTube, Uber, and AirBnB. Until people understand how a technology can really reshape an industry or their lives, both the courts and other transactional entities are unwilling to bend their "traditional" understanding of how the law should apply to a particular technology or even engage in the conversation.”
NOTE: No create section today as our newsletter ran so long. But we'll be back with more unique art on Monday.
Quick dopamine hits
Time out for crypto ads at Super Bowl
After a calamitous 2022 for the industry and the toxic fallout from FTX's demise, don't expect high-priced crypto commercials this Sunday. Though we have to admit the Larry David spot last year was pretty good.
But, the Super Bowl WILL have an official metaverse and it is… Roblox?
Rihanna gets the big show at halftime in Arizona, but Saweetie will be performing all weekend in something called “Rhythm City.” Ok.
AI art goes to college
Bradley University will school students in one of the hottest topics of 2023.
Google breaks a leg
The debut of the search giant's new AI chatbot, Bard, didn't go quite as planned. The AI flubbed a fact about the James Webb Space Telescope, crediting it with the first image of an exoplanet when that milestone photo was actually captured back in 2004, long before Webb launched.
The not real Slim Shady
DJ David Guetta used AI to replicate Eminem’s voice for a track, but was quick to clarify he won’t be distributing the song commercially.
Web3 ain't everything
Heaven is a metaverse
The founder of Somnium Space imagines we're not far off from visiting our late loved ones in the cloud.
LiftBuild’s first “top-down” skyscraper is almost complete in Detroit
Building from the ground up is so 20th century. This company builds entire floors on the ground, assembly-line style, and then lifts them up two central spines to lock them in place.
Tonight I attempted to show a friend how easy it is to make synthetic stock photographs with AI and how much disruption is headed for that industry.
I prompted the classic "stock photo of woman eating salad" and got this nightmare fuel. Serves me right. Maybe Getty shouldn't panic yet. – Neil Katz
Thanks for reading! Please take just 30 seconds to tell us what you love or hate about this issue.