The draft AI Act and children: Room for improvement

In 2021, the European Commission published a draft AI Act. The draft is currently before the European Parliament (EP) as part of the European AI package.

The purpose of this European regulation is to introduce harmonised rules across the European Union that address the risks posed by the use of Artificial Intelligence (AI) systems. The regulation distinguishes between unacceptable, high, medium (limited) and low (minimal) risk AI uses, with the most stringent requirements applying to high-risk AI use. To provide effective protection for individuals and their rights in the development and use of AI systems, it is essential that the Act is as clear as possible. Yet despite its good intentions regarding the protection of a particularly vulnerable group, namely children, and their rights, the draft contains quite a few ambiguities that should be avoided.

The proposal for an AI Act pays particular attention to children and their rights in view of their particular vulnerability. The original proposal bans AI practices that exploit vulnerabilities of children and others for the purpose of materially distorting their behaviour in a way that ‘causes or is likely to cause that person or another person physical or psychological harm’ (Article 5(1)(b) draft AI Act). However, the provision does not refer to children as such, but only to ‘age’. This may mean that it refers not only to children, but also to young adults (and to older people). Alternatively, it may mean that not all children are covered. On the basis of Recital 16, the first reading seems the most obvious, given that it speaks of ‘children and people due to their age’. It is recommended that the wording of Article 5(1)(b) be aligned with the recital to eliminate these ambiguities.

In addition, in the case of high-risk AI systems, the impact on children will have to be taken into account when a risk management system is implemented (Article 9). High-risk systems include, for instance, systems used for ‘assessing students in educational and vocational training institutions’, which may or may not include children (i.e. under-18s) (Annex III). An equally ambiguous reference to age appears in Article 7, which instructs the European Commission on how it may extend the list of high-risk AI systems in Annex III in the future. The Commission should consider ‘the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to [a company that uses] an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age’. In other words, age (or an age imbalance?) is considered a source of human vulnerability.

An amendment by the EP Co-Rapporteurs proposes to add the following high-risk AI system directly to Annex III of the proposal: ‘AI systems intended to be used by children in ways that have a significant impact on their development, including through personalised education or their cognitive or emotional development.’ It is indeed an important addition to include AI systems with an impact on children in the list. However, the wording of the amendment is somewhat incongruous: it combines a clear example – personalised education – with vague wording on the impact on children’s development. Furthermore, there must be a ‘significant impact’ on the development – cognitive, emotional or otherwise – of children. Such an impact is not always easy to establish in children, however, which is why the precautionary principle comes into play. This principle entails a ‘better safe than sorry’ approach: if there is a possibility that an activity has a negative impact on children’s development, it is prohibited as a precautionary measure. If children’s development has been shaped by adaptive learning systems in a way that at some point proves not to be in the best interests of the child – not an unlikely scenario, but one we will not explore further here – it may be too late to reverse the effects, and such harm must therefore be prevented. However, the amendment seems to assume that the impact must first be proven before an adaptive learning system is considered high-risk, or at least the wording in its current form gives no indication of a precautionary approach. This may be different for Article 5 of the draft AI Act on prohibited AI practices, which applies when an AI system is ‘likely to cause ... harm’. Yet, depending on the interpretation given to that provision, this may still be too high a threshold in the light of the precautionary principle.

At the same time, personalised education does not always have a negative impact on children’s development – children can also benefit from it. The impact, positive or negative, will depend on the design of the system, its use in schools and the personality of the child. In that sense, it is relevant to monitor the impact, and an approach in which the provider of adaptive learning systems is explicitly given the obligation – based on children’s rights – to carry out a child impact assessment would be a valuable one. Such an obligation is more balanced in that it not only focuses on the negative impact and potential harm of AI systems, but also prioritises the welfare of children – something that the best interests principle in Article 3 UNCRC explicitly calls for. Unfortunately, we do not find such an obligation in the draft AI Act.

In general terms, linking the protection of children to the notion of ‘harm’ (as Article 5(1)(b) does) is problematic. In legal terms (and in particular in the field of digital regulation), the notion of harm has no clear boundaries: it can range from quantifiable ‘damages’ to mere ‘negative effects’. It is no coincidence that the General Data Protection Regulation contains no reference to ‘harm’. On the other hand, the notion of ‘significant impact’ proposed by the EP is interesting, but similarly ambiguous (does it cover negative or even positive impact?). Looking at the wording of the initial draft of the AI Act, the notion of ‘adverse impact’ might offer a more balanced and reasonable standard. Impacts should be monitored on a cyclical basis, in the specific contexts of use, and not merely assessed at the outset.
