(A)I made it, and you should know!
Would you rather watch a movie marked with a ‘No AI’ label or one without, or do you want to know if a book was written by AI? How, if at all, should we show something was made by a human or by AI?
While people may have hoped that AI would take over tedious and boring tasks, GenAI is infiltrating creative industries and challenging human creativity. Artists, writers, musicians, directors and photographers already compete for attention in saturated markets, where more content is available than a human being can consume in a lifetime. These creatives now also need to stand out against AI-generated content.
To do so, some creators mark their work as purely human-made. For example, publishers might choose to obtain a ‘Books by People’ label, and some filmmakers add an ‘AI-free’ disclaimer in the opening credits. These labels function as prestige and status markers: they do not merely inform audiences, but actively attach a premium to human-only creation, similar to how AOC (controlled designation of origin) labels on French wines convey regional uniqueness, or Fairtrade certification signals a commitment to better business practices.
The made-by-humans paradox
The practice of self-imposed ‘human-made’ labels is ahead of the legal requirements to mark AI-generated content. It is also more optimistic about human creativity and human creators’ agency than policymakers are. The recent resolution of the European Parliament on copyright and GenAI warns that human creativity might be overshadowed by AI and urges the marking of AI-generated content. Human creators labelling their work as AI-free assume, by contrast, that it will only become more valuable.
Both types of labelling imply a somewhat paradoxical hierarchy of creative products: what is human is superior; yet, even while superior, the human still needs to be protected from artificial competition.
This human-first tendency may mean that the protection of intellectual property and creators’ rights will need not only to account for the conditions of human authorship, but also to establish procedures through which creators prove their work is human. This may become increasingly difficult as consumers begin to suspect that anything and everything is AI-generated – and possibly even more annoying than having to prove to a machine that you are human by passing a captcha when entering a website.
But how would a creator prove their work does not simply ‘mimic human creativity’, as GenAI-produced content does? One way would be to demonstrate that a valuable creative process lies behind the creative output. This is often a process of defamiliarisation, in which a creator finds new ways to look at the world around them and conveys that experience in the work. Should this process then be documented and reported, so that it can be legally protected and so that consumers gain insight into it and appreciate the time and effort that went into creation? Or should creators perhaps introduce imperfections and idiosyncrasies into their work to signal that it is authentic and not AI-produced, just as people deliberately add typos to their writing? In that case, what proportion of imperfection signals enough humanness to automatically qualify for copyright protection?
Can you legally prove that you are human?
Protecting human creativity against AI-generated content is a challenge that both creatives and lawmakers face. This challenge, conceptual and legal, is multifaceted.
First, the proof of humanness emerges as a legal burden. Current copyright regulations are built around outputs, not creators’ inner states, creative processes, or the quality of being human per se. If humanness in the creative process is to be marked and protected, we would need to consider how it is situated in people’s feelings and experiences, which are hard to turn into codified evidence. Codifying humanness would require defining what being human is. Proving that something is human would then require administering a creative-process audit and accepting that there are hard rules about what human creativity is and how novelty can be evaluated.
Second, if we were to accept that human creation can be proven by imperfection, it then becomes necessary to formalise what we consider human idiosyncrasies in creative work. Is it an odd colour, a clashing note, an imprecise choice of words? To be protected by law, all of this could technically be reduced to data, establishing typical patterns of imperfection. Ironically, this would also mean that to protect creative human imperfection against machines, we might need to subject it to methods of machine verification. And if there is something un-codifiable about human creation, how do we put that into legal language?
An alternative approach would be for the law to shift part of its focus from creative works to the conditions under which their makers create. As we move towards obligatory disclosure of AI-generated content and the countervailing voluntary human-made labelling, how human creatives are supported in their efforts to defamiliarise the world around us becomes a question of supporting both creativity itself and the cultural right to creativity and dignity.