Digital innovation is constantly pushing the boundaries of what’s possible. In a world that moves this fast, we need the EU to update AI regulation, making it as safe as possible for individuals and member states. But, in the pursuit of protection from harm, the European Commission shouldn’t stop innovation.
Striking that balance requires the business community to come to grips with the EU’s proposed Artificial Intelligence Act. The European Council has recently adopted a common position on the legislation. Now is the time for us, as the business community, to outline the environment that will let us develop the fastest, smartest, and bravest tech without jeopardizing individual rights.
With this in mind, I’ve brought together a few essential recommendations on the forthcoming EU AI Act from a business perspective. They are based on my experience in the sector and are designed as a quick catch-up.
Make the AI Act clear
The golden rule for the EU when it comes to the AI Act is to make it clear. Laws that are uncertain and vulnerable to multiple interpretations discourage private investment. In particular, when it comes to prohibited use cases there must be no room for confusion or doubt about what is and isn’t acceptable practice. The regulatory requirements for general-purpose AI also need clarification.
As well as being explicit in its requirements for companies, the AI Act also needs to be only as intrusive as is necessary to prevent harm. In other words, the European Union must, where possible, avoid micromanaging the sector and encumbering it with unnecessary red tape that will suffocate growth.
Extend the principle of proportionality
It’s good news that the AI proposal doesn’t treat all systems equally and that particular attention is being paid to specific high-risk use cases such as facial recognition and criminal justice. Why? Because these are the areas where harm is most likely to occur.
This doctrine of proportionality is a key development. However, the approach might be extended further. As well as applying the principle to decide how to respond to high- and low-risk systems, the EU might consider using it as a means to differentiate its response between one high-risk system and another.
What does that mean? We need to consider the cumulative impact of high-risk systems. So—as Meeri Haataja and Joanna J. Bryson point out in their article on the subject—the regulation shouldn’t be the same for two identical products if one of those products is out on the open market and used by 40 million consumers while the other is an in-house AI system used by only a dozen employees.
Align the law with the GDPR and other legislation
Many of the provisions laid out in the EU’s proposal are an accompaniment to the General Data Protection Regulation. Whereas the GDPR deals with data protection, the AI Act is about artificial intelligence. With this in mind, it’s vital that the new EU legislation is clearly signposted in relation to existing laws.
Make sensible data governance requirements
In a number of places, the EU’s draft legislation is vague on details. We still don’t know the specifics of the measures that companies operating the riskier AI systems will need to take. A risk management system will need to be put in place, but how intrusive will it prove? Firms will need to provide technical documentation, but how complex will the data governance requirements be? Record keeping and post-market monitoring will need to be undertaken, but how time-consuming will teams find this?
The devil is in the details with these matters, and more details need to be presented by the EU.
Get realistic about sandboxes
One final point is about sandboxes. According to some commentators, these will be a game changer that will shape the future of business learning, but I wonder whether this might be overstating the case. While some benefits will be derived from the tools, they probably won’t be a panacea, and protecting innovation will require additional strategies and actions.
Find a balance between protection from harm and industry dynamism
The global marketplace in data and AI is changing all the time, and for European companies to keep pace they need to remain agile and take opportunities as they come. That’s why it’s so vital for the EU not to stifle industry innovation.
The groundbreaking GDPR is in many ways an important precedent for the AI Act. There, despite initial concerns, software developers learnt to factor in the EU’s data-privacy rules, and the overall outcome was that the EU became a global leader in data protection regulation. At the same time, European companies continued to flourish. Let’s hope a similar turn of events will take place over the next few years with the passing of the new EU AI Act.
Then, once the legislation is passed, the discussion in the business community about AI and ethics shouldn’t be closed down. Looking further ahead, over the next decade, as technology evolves, we as business leaders will need to answer important questions about transparency and explainability. That way, tech advances can make a positive impact on society over the long term.
There already exists a joint innovation committee made up of the European Commission, EU Member States, Norway, and Switzerland, which has put together a Coordinated Plan on Artificial Intelligence. This body, working alongside individual businesses, can create the drive needed to put the EU at the forefront of developments in AI.