leidenlawblog

The fallacy of reactive regulation: AI bias as an unchecked tool of systemic oppression

The rapid integration of new artificial intelligence systems into our lives has raised a number of grave human rights concerns. A proactive regulatory framework is necessary to address the potential for pervasive systemic bias in these systems.

The proliferation of technology in our lives is incontestable. So too are the grave concerns arising from our increasing dependency on technology. Defined as the simulation of human intelligence processes by machines (TechTarget), artificial intelligence (AI) is the epitome of human innovation in the 21st century.

Our furthest conception, and natural fear, of artificial intelligence, as often evinced in speculative fiction, is the eventual creation of sentient machines that would supplant or enslave humanity. However, there are present-day concerns in the realm of AI that I argue are equally, if not more, worrisome than a futuristic takeover by machines. Chief among them is bias in AI. Modern AI systems, often built as deep neural networks, are trained using large data sets with associated desired outcomes. Training can introduce bias in many ways, often through incomplete or biased training sets, e.g. a facial recognition system trained using only Caucasian faces. In the recent past, AI bias has been shown to lead to sexist hiring practices, racist predictive policing and homophobic online profiling, precipitating a fatal transposition of systems of oppression into AI and leaving in its wake a trail of human rights violations. The European regional system has, however, done little to proactively tackle these complex questions, which raises the question: will a narrow focus on reactive regulation, i.e. creating a regulatory framework based on the impact of defective AI, be our greatest error yet?
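To make the mechanism concrete, here is a minimal sketch using hypothetical, synthetic data (my own illustration, not any of the systems mentioned above): a classifier trained almost exclusively on one demographic group tends to perform markedly worse on the group it has barely seen.

```python
# Hypothetical sketch with synthetic data: a model trained almost entirely
# on one group ("A") is evaluated separately on an under-represented group ("B").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-class data; `shift` moves this group's feature distribution.
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 1.5 + shift, scale=1.0, size=(n, 2))
    return X, y

# Training set: 95% group A, 5% group B -- the "incomplete training set".
Xa, ya = make_group(1900, shift=0.0)   # over-represented group A
Xb, yb = make_group(100, shift=2.0)    # under-represented group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Measure the outcome per group: the accuracy gap is the inherited bias.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

The model is not malicious; it simply never saw enough of group B to learn it. That is precisely how an incomplete training set translates into discriminatory outcomes once a system is deployed.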

At a recent human rights forum, Jan Kleijssen, at the helm of a taskforce designated by the Council of Europe to build a regulatory and policy framework addressing both broad and specific concerns around AI by 2021, insisted that the focus of the law ought to be on holding various actors accountable for the harm caused by AI, a largely elusive endeavour thus far. However, this approach is fundamentally flawed because it is predicated on a reactive modality, skirting around the root cause of malfunctioning AI, namely bias in AI training. Simply put, to resolve AI bias, our focus must shift towards understanding and regulating the data, personnel and processes used to develop AI systems. As such, the Council of Europe’s mandate must be to scrutinise the input process before moving to regulate the output.

To begin with, diversity in AI labs should be one of the linchpins of AI regulation. If AI bias is reflective of its creators’ biases, then diversifying the creators is a logical first step towards solving the problem. Presently, AI labs and companies consist overwhelmingly of white males, which limits the diversity of perspectives and approaches to problem-solving with AI systems, potentially leading to bias. Working rigorously to guarantee that training data is not prejudicial is another. This risk can be mitigated by third-party audits of training systems and their accompanying data. Again, this can only occur in an environment grounded in social context and critically conscious of systems of oppression.
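As a rough illustration of what such a third-party audit could begin with (a sketch only, with made-up group labels and thresholds, not a prescription of any actual audit standard), even a simple check of how the training data is composed, compared against the population the system will serve, makes under-representation visible before deployment:

```python
# Hypothetical first-pass audit: compare the demographic make-up of a
# training set with the population the system will be deployed on.
# Group labels, shares and the tolerance threshold are illustrative only.
from collections import Counter

def representation_report(training_groups, population_shares, tolerance=0.10):
    """Flag groups whose share of the training data falls short of their
    share of the target population by more than `tolerance`."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "under_represented": observed < expected - tolerance,
        }
    return report

# Toy example: a training set dominated by one group.
sample = ["A"] * 900 + ["B"] * 80 + ["C"] * 20
print(representation_report(sample, {"A": 0.60, "B": 0.25, "C": 0.15}))
```

Representation is, of course, only one dimension a serious audit would examine, but even this crude check turns under-representation into something that can be reported, contested and corrected before harm occurs.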

Bias in AI systems is nuanced (MIT), and the landscape of AI technology will continue to evolve apace with AI research. Some of this work will help improve our understanding of AI bias. However, a proactive regulatory focus on diversity hiring or affirmative action policies, along with (third-party) bias analysis and reporting from AI developers, is paramount to reducing the negative social and, consequently, human rights impacts of AI bias. Whilst overly simplified, these policies, which expand the positive obligations of both State and non-State actors, go to the heart of the issue by adopting a preventative approach, thereby working to avoid replicating the large-scale suffering of minority groups at the hands of intelligent code, silicon and circuitry. The answer, then, for the Council of Europe is not to succumb to the fallacy of reactive regulation, whose track record has proven catastrophic for human rights time and time again.

1 Comment

Elroam

Great post, extremely important. We just couldn't understand how the bias of an AI would differ from that of a human being. On the contrary, AI has more potential to overcome bias, since it learns from huge data sets and keeps evolving.

Also, from the history of jurisprudence and legislation (at least so far), we can clearly learn that using more advanced technology generates more and more rigorous regulation to restrain the overwhelming effect of that new technology. As an illustration:

One cop may testify that he saw the defendant actually doing something (let's suppose something wrong, criminally wrong). That would be enough in terms of admissibility. Yet:

When one machine reaches such a conclusion, well, that's hell. Here, a very complicated set of standards will have to be established in order to prove its:

functionality, performance, accuracy, etc. That means: more machines, more precaution when applying the law and, by nature, human rights.

So it wouldn't be so easy. And see here, for example, some legal standards, like:

" Frye standard " :

https://en.wikipedia.org/wiki/Frye_standard

Or a more advanced one, like:

"Daubert standard"

Here:

https://en.wikipedia.org/wiki/Daubert_standard

Thanks
