Last week the European Commission released a European Data Strategy document and a white paper on AI strategy. We previously wrote an article commenting on the data strategy; here follow our comments on the AI White Paper.
On Artificial Intelligence – A European approach to excellence and trust
As part of “Shaping Europe’s digital future”, the European Commission released the White Paper on Artificial Intelligence on February 19th, 2020. The White Paper on AI is, true to its name, an invitation to a dialogue with stakeholders. EU member states, citizens, companies and NGOs are invited to comment on the paper by May 21st, 2020, and the Commission plans to present an adopted plan to the member states by the end of 2020.
The Commission will almost certainly have plenty of comments to consolidate, as many details in the white paper are missing or vague. One can only hope that the final version will be more market-ready than, for example, earlier regulations closely touching the digital economy, such as GDPR and PSD2.
The AI White Paper was released on the same day as the data strategy for a reason. Shaping Europe’s digital future requires first a data strategy that ensures data of sufficient quality and quantity is available, which in turn enables building artificial intelligence.
Building excellence
As with the EU Data Strategy, the Commission underlines the value of data to Europe’s current and future sustainable economic growth and societal well-being. AI is one of the most important applications of the data economy, and consequently the paper lays out plans for creating new excellence and testing centres and attracting over €20 billion of investment into European AI research annually, which is certainly welcome and helps keep Europe’s industry competitive.
The other side of the white paper is ensuring that AI systems are “trustworthy”. For the rest of this article we will focus on this aspect, i.e. how the Commission approaches building and earning trust for AI systems.
When used responsibly, AI can help protect citizens’ security and enable them to enjoy their fundamental rights. By the same token, the Commission also worries that AI can have unintended effects or even be used for malicious purposes. These concerns are addressed by a regulatory framework for AI technologies, which is outlined in the white paper. The challenge for the Commission and the entire European data industry is to translate this outline into a working model before the plan is released.
The paper suggests that regulation will focus on high-risk applications only. AI applications are considered high-risk when they are 1) used in specific industries, such as healthcare, transport or energy, and 2) may cause a significant effect, such as injury or death, on an individual, or significant damage to a company. Additionally, certain applications are always high-risk, such as biometric identification and recruitment. With this definition of high-risk, the Commission attempts to lighten the regulatory burden on the AI ecosystem, but is it possible to clearly define whether an application is or is not high-risk within a complex mesh of AI applications?
For high-risk applications, the following regulatory requirements are planned to take effect:
Training data
Reasonable assurances are needed that the training data used for building the model covers relevant scenarios, that data privacy is protected and that measures to avoid discrimination are in place. These are all relevant objectives that almost everyone in the industry will agree with, but how do you define what are reasonable and relevant measures for your training data?
Data and record keeping
For transparency reasons, the regulations suggest that records regarding the data sets used to train and test the AI system, or in some cases the data itself, will need to be kept. Additionally, the training methods and processes need to be documented so that compliance can be verified at any time. This potentially conflicts with other regulations, which may require data to be destroyed after a certain period.
Informing about capabilities
For transparency reasons, the paper also suggests that users of AI systems need to be informed about the capabilities and limitations of the systems, and that citizens need to be made aware when they are interacting with an AI system rather than a human. These are valid and reasonable requirements, although communicating the precise capabilities of an AI system may prove challenging. A requirement of explainable AI could be a better option for improving transparency and making systems understandable by humans.
Robustness and accuracy
An AI system must be technically robust and accurate in order to be trustworthy, which means that such systems need to be developed responsibly. These are obvious requirements for any system, whether it uses AI or not. The additional requirement is that an AI system needs to be resilient to attacks that would manipulate the data or the algorithms themselves; this is reasonable but does require extra consideration when AI-based technologies are used.
Human oversight
Next on the list of requirements is human supervision of AI systems. Humans need to have the ability to monitor, review, intervene in or cancel AI-based decisions. This is an interesting requirement, as a large number of decisions are already made by AI algorithms today, and the business case for these systems is often precisely the ability to reduce human intervention. It is unclear what human decision monitoring would mean in practice. Finally, this requirement adds that an AI system needs to stop operating if certain conditions are met (e.g. in case of malfunction), which certainly makes sense and is a good design requirement in any case.
Special requirements for biometric identification
This final high-risk requirement relates to the use of biometrics and was already debated before the paper was released. The paper refers to the current EU data protection rules and the Charter of Fundamental Rights of the EU, which can be interpreted to mean that AI can only be used for remote biometric identification where such use is duly justified, proportionate and subject to adequate safeguards. The paper invites the community to debate the circumstances under which biometric identification would be allowed.
Who do you call?
Managing the special requirements for high-risk applications is not going to be simple. In addition to the complexity of the regulatory requirements discussed above, there is the question of who should be responsible. In a larger system, the value chain may include numerous developers, integrators, distributors and sources of data. The Commission’s view is that each obligation should be addressed to the actors best placed to address any potential risks. This principle is valid, but rather complex to manage in practice. With many stakeholders and potentially millions of euros in liabilities, it is hard to foresee many volunteers coming forward. And is the definition of low- and high-risk applications clear enough to draw the line?
What happens next?
There are many positives in the white paper: the Commission has recognised the potential benefits of AI technology and wishes to support European activities in this area. The paper is a good attempt to manage the risks that arise with this new technology, and the approach of opening a dialogue through a white paper, rather than presenting a firm plan, is also welcome.
On the downside, the paper lacks a positive outlook and focuses too much on the negatives. The European data strategy document outlines clearly how Europe will go forward with using data for the benefit of the continent. We need a similarly positive spin in the AI paper.
From an industry point of view, the risk we see is that the requirements will be difficult or even impossible to implement in some cases, which would slow down the development of AI systems in Europe and thus harm the industry. Certainly there are trust issues with AI, but should they be the main focus? For example, if we can prevent the coronavirus from spreading using biometrics, should we first wait and focus on the risks? Or if a surgeon using AI-powered tools is more accurate than a doctor without them, is it fair to the patient to criminalize the doctor’s use of AI?
In summary, the AI White Paper is a good first step, but AI policy needs to become more practical, more endorsing and less limiting. To help with that and give a positive spin to the plan, send your constructive feedback to the Commission through this page by May 21st.