ARTIFICIAL INTELLIGENCE: THIS COMMISSION’S GDPR MOMENT?
La rentrée is upon us and with it, legislators and policy makers, back from a staycation or a carefully selected holiday destination, will be pushing ahead with one of this Commission’s flagship initiatives: regulating Artificial Intelligence.
The next three months will see the Commission wrapping up an inception impact assessment, presenting the results of its recent public consultation and drafting an impact assessment. Meanwhile, the Parliament will advance its numerous own initiative reports on various aspects of AI, setting out its initial position on the topic. All should be in place for a Q1 2021 presentation of a legislative proposal.
Regulating AI has been a political priority from day one of this Commission, with President von der Leyen promising a proposal “within the first 100 days” of its term — an unfeasible target that nevertheless set the tone.
Officials openly admit that Europe has missed out on the first wave of digitalisation, which saw the rise (and dominance?) of US and Chinese firms in the fields of social media, online search and e-commerce. Europe, for various reasons, finds itself on the back foot, squeezed between the US and China.
Caught in this situation, policy makers decided to play to their strengths: use Europe’s pool of technical expertise and leverage its market size to create the first set of rules governing AI, charting out a European way of doing AI that avoids the extremes of both sides. The extra-territorial ambition is clear and obviously modelled on the GDPR. Let’s remember the vindication many felt in Europe when its privacy rules came into force just as the Cambridge Analytica scandal broke, showing that perhaps the GDPR was not the unnecessarily heavy-handed approach some made it out to be.
High vs low risk
How will policy makers manage the task of creating universal rules for all AI applications, given their diverse uses and types, while avoiding stifling innovation in Europe? The answer lies in their distinction between high and low risk applications, outlined in February’s White Paper on AI, which remains the most likely policy option. This approach would see a strict set of rules (again, akin to the GDPR) imposed on AI applications that are used in certain high risk sectors and have the potential to create serious repercussions for the fundamental rights of citizens. Low risk applications, on the other hand, would not be subject to specific rules, but rather to an update of existing legislation on liability and safety to ensure it captures the increasing prevalence of AI.
As always, the devil is in the details. Although the White Paper laid out the two criteria determining high risk applications, translating them into legal language will be another matter. It is already becoming apparent that various industry bodies and companies want a more granular approach and a clearer differentiation between applications. The usual conundrum of putting in place legislation that strikes the right balance between universality and flexibility once again presents itself.
In it for the long haul?
Not straying too far from the GDPR theme, the AI legislative proposal will most likely occupy us for a long time. If the 1,215 responses to the public consultation are anything to go by, we will see a long and varied list of interest groups, associations, companies, civil society organisations and activists contesting each article and recital. At the same time, the impressive volume of own initiative reports and the jostling by many MEPs to be “Mr or Ms AI” suggest the involvement of many committees and legislators in the discussions, possibly drawing out the legislative process.
Stakeholders and policy makers have already put a lot of work into AI. The next three months will see everyone sharpening their positions, finessing their arguments and preparing for the long haul that will kick off in early 2021. This will be a marathon, not a sprint.