How to train your bot
Generative AI is a minefield for copyright law
Good morning! Generative artificial intelligence is nothing but a highly trained computer programme with a phenomenal memory and the capability to synthesise information fed to it. The key to its abilities is the training it gets: the more artists it is exposed to, the better the results. But is it fair for generative AI to feed on any artwork without compensating the artist? After all, it draws on artists’ creative expression for its output. Regulators in the US, where leading AI companies are based, are certain about one thing (for now): only humans can hold copyrights. Today’s story argues that the emerging legal framework for AI will have to deal with such fundamental questions. Plus, a selection of absorbing weekend reads.
The Signal is now on Telegram! We've launched a group — The Signal Forum — where we share what we’re reading and listening to through the day. Join us to be a part of the conversation!
In 2022, an AI-generated work of art won the Colorado State Fair’s art competition. The artist, Jason Allen, had used Midjourney – a generative AI system trained on art scraped from the internet – to create the piece. The process was far from fully automated: Allen went through some 900 iterations over 80 hours to create and refine his submission.
Yet his use of AI to win the art competition triggered a heated backlash online, with one Twitter user claiming, “We’re watching the death of artistry unfold right before our eyes.”
“I talked to Jason Allen, who submitted an AI-generated piece to the Colorado State Fair and won first prize. ‘Art is dead, dude,’ he told me. ‘It’s over. A.I. won. Humans lost.’” (Tweet, September 2, 2022)
As generative AI art tools like Midjourney and Stable Diffusion have been thrust into the limelight, so too have questions about ownership and authorship.
These tools’ generative ability is the result of training them with scores of prior artworks, from which the AI learns how to create artistic outputs.
Should the artists whose art was scraped to train the models be compensated? Who owns the images that AI systems produce? Is the process of fine-tuning prompts for generative AI a form of authentic creative expression?
We’re part of a team of 14 experts across disciplines that just published a paper on generative AI in Science magazine. In it, we explore how advances in AI will affect creative work, aesthetics and the media. One of the key questions that emerged has to do with U.S. copyright laws, and whether they can adequately deal with the unique challenges of generative AI.
Copyright laws were created to promote the arts and creative thinking. But the rise of generative AI has complicated existing notions of authorship.
Photography serves as a helpful lens
Generative AI might seem unprecedented, but history can act as a guide.
Take the emergence of photography in the 1800s. Before its invention, artists could only try to portray the world through drawing, painting or sculpture. Suddenly, reality could be captured in a flash using a camera and chemicals.
As with generative AI, many argued that photography lacked artistic merit. In 1884, the U.S. Supreme Court weighed in on the issue and found that cameras served as tools that an artist could use to give an idea visible form; the “masterminds” behind the cameras, the court ruled, should own the photographs they create.
From then on, photography evolved into its own art form and even sparked new abstract artistic movements.
AI can’t own outputs
Unlike inanimate cameras, AI possesses capabilities – like the ability to convert basic instructions into impressive artistic works – that make it prone to anthropomorphization. Even the term “artificial intelligence” encourages people to think that these systems have humanlike intent or even self-awareness.
This led some people to wonder whether AI systems can be “owners.” But the U.S. Copyright Office has stated unequivocally that only humans can hold copyrights.
So who can claim ownership of images produced by AI? Is it the artists whose images were used to train the systems? The users who type in prompts to create images? Or the people who build the AI systems?
Infringement or fair use?
While artists draw obliquely from past works that have educated and inspired them in order to create, generative AI relies on training data to produce outputs.
This training data consists of prior artworks, many of which are protected by copyright law and which have been collected without artists’ knowledge or consent. Using art in this way might violate copyright law even before the AI generates a new work.
For Jason Allen to create his award-winning art, Midjourney was trained on 100 million prior works.
Was that a form of infringement? Or was it a new form of “fair use,” a legal doctrine that permits the unlicensed use of protected works if they’re sufficiently transformed into something new?
While AI systems do not contain literal copies of the training data, they do sometimes manage to recreate works from the training data, complicating this legal analysis.
Will contemporary copyright law favour end users and companies over the artists whose content is in the training data?
To mitigate this concern, some scholars propose new regulations to protect and compensate artists whose work is used for training. These proposals include a right for artists to opt out of their work being used for generative AI, or a mechanism to automatically compensate artists when their work is used to train an AI.
Training data, however, is only part of the process. Frequently, artists who use generative AI tools go through many rounds of revision to refine their prompts, which suggests a degree of originality.
Answering the question of who should own the outputs requires looking into the contributions of all those involved in the generative AI supply chain.
The legal analysis is easier when an output is different from works in the training data. In this case, whoever prompted the AI to produce the output appears to be the default owner.
However, copyright law requires meaningful creative input – a standard satisfied by clicking the shutter button on a camera. It remains unclear how courts will decide what this means for the use of generative AI. Is composing and refining a prompt enough?
Matters are more complicated when outputs resemble works in the training data. If the resemblance is based only on general style or content, it is unlikely to violate copyright, because style is not copyrightable.
The illustrator Hollie Mengert encountered this issue firsthand when her unique style was mimicked by generative AI engines in a way that did not capture what, in her eyes, made her work unique. Meanwhile, the singer Grimes embraced the tech, “open-sourcing” her voice and encouraging fans to create songs in her style using generative AI.
If an output contains major elements from a work in the training data, it might infringe on that work’s copyright. Recently, the Supreme Court ruled that Andy Warhol’s use of a photograph as the basis for his own artwork was not protected by fair use. That means that using AI merely to change the style of a work – say, from a photo to an illustration – is not enough to claim ownership over the modified output.
While copyright law tends to favour an all-or-nothing approach, scholars at Harvard Law School have proposed new models of joint ownership that allow artists to gain some rights in outputs that resemble their works.
In many ways, generative AI is yet another creative tool that gives a new group of people access to image-making, just like cameras, paintbrushes or Adobe Photoshop. But a key difference is that this new set of tools relies explicitly on training data, so creative contributions cannot easily be traced back to a single artist.
The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression.
Robert Mahari – JD-PhD Student, Massachusetts Institute of Technology (MIT)
Jessica Fjeld – Lecturer on Law, Harvard Law School
Ziv Epstein – PhD Student in Media Arts and Sciences, Massachusetts Institute of Technology (MIT)
This article is republished from https://theconversation.com under a Creative Commons license. Read the original article at https://theconversation.com/generative-ai-is-a-minefield-for-copyright-law-207473
Introducing TechTonic Shift, not your average tech podcast. 👀 Join our very own Roshni Nair and Rajneil Kamath as they indulge in the kind of banter and analysis you will not find elsewhere — least of all on an Indian tech bro podcast. That is a promise.
Stormy days ahead: The monsoon is a complex weather phenomenon. It is also delicate and sensitive to changes in surface temperatures, particularly over oceans. While we know that factors such as greenhouse gas emissions and aerosols affect surface temperatures, it will become increasingly important to gauge the specific impact and magnitude of each factor to anticipate weather changes. Climate models project that the tropical Indian Ocean basin will warm by 1.2°C by mid-century relative to the recent past, with the Arabian Sea warming faster than the Bay of Bengal. That is why the western coast is now seeing more storms, such as Biparjoy. This article in The India Forum by climate researchers Chirag Dhara and Roxy Mathew Koll lays out what’s in store for the subcontinent.
A relationship forged in hubris: If there is one thing that America lacks in victory, it is grace. The American penchant for rubbing the vanquished’s nose in the mud is at the root of many a global conflict. After all, humiliation begets fury — silent perhaps at first, but explosive in the end. There was a brief period after the Soviet Union crumbled and the Berlin Wall fell when the US was the numero uno power in the world. At the time, Russian leaders, and even some US diplomats, had rather naively hoped that the US would proceed to dismantle the defence arrangements and positions erected during the Cold War. But beginning with President George HW Bush, US leaders chose not only to expand those arrangements but also to impose their will on countries far from their own borders. A more sensitive approach to a crumbling nation might not have transformed Russia into what it is today: an oligarchic economy ruled by an autocrat with wounded pride. The seeds of the Ukraine war were sown in 1989, and the US kept watering them, says this New Yorker essay.
The importance of being charitable: Many of us loved reading Charlie and the Chocolate Factory as a child. Anyone would love the story of a well-behaved, poor child who wins against a deeply unequal society to become the heir to a chocolate factory. It perhaps takes an adult to recognise that Mr Willy Wonka’s plan to hide five golden tickets in bars of his chocolate was perhaps the greatest sales growth hack ever. Giving away your factory is extravagant charity, yes, but charity couched in the relentless pursuit of profit. This story in The New York Times Magazine likens Mr Beast to Willy Wonka, chronicling his rise as one of the biggest YouTube sensations ever. The formula for his popularity is simple: make videos of extravagant charity that ensure the video goes viral, rake in the views and ad revenue, use the money to fund even more over-the-top charity in the next video, and keep the cycle going. Fans defend Mr Beast for his charity, while others question the ethics of performative giving done for the views. Whatever the debate, every Mr Beast video guarantees two things: somebody gets a golden ticket, and he rises further to the top.
Big house, tiny foundation: Failing banks and rising interest rates had investors in the US markets bracing for impact. But surprisingly, the shock hasn’t come. Instead, the S&P 500 is up over 14% this year. It’s all thanks to just seven blue-chip Big Tech stocks: Alphabet, Meta, Apple, Microsoft, Amazon, Tesla, and Nvidia. Together, they have kept broader market indices rising, like Atlas holding up the heavens. This column in the Financial Times traces how market performance in the US became so intertwined with Big Tech stocks over the years, at the expense of small-cap companies, cyclical stocks, and the winning horses of the last generation: oil and gas companies. Moreover, this concentration of market growth is hurting specialised mutual funds and their unit holders. The big question now is whether this reliance on Big Tech will help the markets in the long run or hurt them.
At the top: “The last time I saw someone on this stage was Bruce Springsteen and he did not get the welcome that Prime Minister Modi has got.” This was Australian Prime Minister Anthony Albanese addressing the Indian diaspora when Modi visited the country last month. There are over 18 million Indians living outside their homeland, and they form the highest-earning migrant group in the US. Indian migrants are making it big in politics, policy, and business; executives of Indian origin head 25 companies in the S&P 500. With such strength in numbers, India's diaspora could turn out to be more influential than most. This cover story in The Economist discusses the growing influence of NRIs and PIOs globally.
On the edge: North Korea witnessed a devastating famine in the late 1990s that killed approximately three million people. In May this year, the United Nations placed North Korea on a watchlist over food insecurity concerns. Exclusive interviews conducted by the BBC tell a dire story of starvation and fear. As the world was collectively grappling with Covid-19, the North Korean government restricted trade and travel. Under the guise of containing the virus, the authorities closed borders and did away with cross-border activity entirely, cutting off access to food, medicine, and essentials. This heartbreaking BBC report highlights the rot in Kim Jong Un's regime, which is imprisoning and starving its own people.