1. To regulate or not to regulate?
The Commission wants to focus on a soft law approach in the first instance. A multi-stakeholder AI Alliance will produce draft AI ethics guidelines, which will then inform the development of a suitable legal framework. The Commission rules out neither horizontal legislation nor the adaptation of existing law to tackle specific applications of AI. One thing, however, is certain: the EU is quite capable of legislating, even in the face of strong resistance from business. The unfolding Cambridge Analytica story has helped to validate Europe’s strident approach to privacy, and the EU may feel that it is, in the words of Jean-Claude Juncker, “catching the wind in their sails”.
2. Bridging the gap with international competitors
Europe is strong in science and engineering, with its industrial base and academic institutions already making a significant contribution to the development of AI. However, it is weaker in the B2C use of this technology and has been less strategic in the incubation of new technology than the US, China and Japan. It also has more limited sources of early-stage finance than the US and far fewer internet and mobile phone users than China. The EU will seek to respond to these perceived weaknesses directly, with initiatives to open up public sector data and to leverage public and private finance, but the ethics of AI and the governance surrounding it are also part of the EU’s strategy to be more competitive internationally. The Communication also notes that companies in other parts of the world enjoy a comparative advantage because of their business models, which means more pressure on the data practices of internet-born B2C platforms.
3. Wide scope for ethical governance
The Commission is setting a wide scope for its ethical approach, covering “the future of work, fairness, safety, security, social inclusion and algorithmic transparency” as well as, more broadly, the impact on fundamental rights, including privacy, dignity, consumer protection and non-discrimination. Some tools to address these issues which appeared in an earlier draft of the communication didn’t make the final version released today, such as robust risk assessment or certification schemes and audits of algorithms. However, these are ideas that are clearly under consideration. Self-regulation in respect of these issues can deliver “benchmarks”, but new or adapted regulation seems to be the ultimate objective.
4. Liability landscape undergoing change
AI and automated decision-making are fundamentally challenging the way liability rules apply today in cases of defective products or harm caused to individuals. AI-powered IoT devices mean increased interaction between multiple players along the value chain, making it difficult to assess precisely who should be held liable when things go wrong.
As well as product liability, the AI at the heart of many platform business models is putting pressure on the limited liability protections for illegal content.
5. Algorithmic transparency necessary to build trust
The Commission sees public acceptance as a prerequisite to fully reaping the benefits of AI, hence the new Strategy’s focus on creating a transparency framework around algorithms and automated decision-making. The basic principle is that consumers should know when algorithms affect decisions that concern them, and should be able to understand the rationale that underpins those decisions. A commitment to support the development of “explainable AI” through research funding shows that this is the area where the Commission believes Europe can make a difference.
Download the report here: 5 things you need to know about the EU’s Strategy on AI