June 12, 2024

How to Build Trustworthy AI? AI Ethics and Regulation

The rapid advances in AI open new opportunities at an accelerating pace. As evidenced by the numerous generative AI applications that have recently emerged in fields as different as marketing and drug discovery, AI serves as a key driver for future innovation and economic growth. At the same time, concerns about the risks of adopting these new technologies are growing. Close attention is needed to mitigate issues such as unintended bias, unexpected behaviors, or harmful use of AI systems. To realize the full potential of AI, we need to build trust in the responsible development and use of AI.

To this end, regulators and companies are working to define ways to implement AI responsibly, closing the confidence gap and securing the benefits while reducing the risks. While the need for regulation is widely recognized, different regions have taken different approaches to the topic. Despite differences in the practicalities of the regulatory initiatives, a number of common themes arise.

Alignment at the level of principles

First of all, there is broad alignment with the OECD AI Principles. These core principles are widely recognized as a standard for the ethical use and management of AI, ensuring that benefits are widely distributed, that the results produced by AI are fair, transparent, and secure, and that accountability is clearly defined. Although interpretations of the details may vary, there is high-level agreement in place.

The five OECD AI Principles and what they mean:

Inclusive growth, sustainable development and well-being: AI should contribute to overall growth and prosperity for all – individuals, society, and planet – and advance global development objectives.

Human-centred values and fairness: AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.

Transparency and explainability: There should be transparency and responsible disclosure regarding AI systems, to ensure that people understand when they are engaging with them and can challenge outcomes.

Robustness, security and safety: AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.

Accountability: Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the principles above.

The OECD AI Principles identify five complementary values-based principles for the responsible stewardship of trustworthy AI.

Secondly, regulators are widely adopting a risk-based approach in translating these principles into practice. For example, the EU AI Act, which will apply to all AI systems in use in the EU, categorizes AI systems into risk levels with different requirements for each. The strictest requirements apply to high-risk systems, such as AI solutions used in critical infrastructure, education and employment, or law enforcement, and to general-purpose AI systems, such as foundation models and generative AI systems, that could pose “systemic risks”. Beyond the EU, other countries such as the US, Canada, Brazil and Japan are also including elements of AI risk assessment in their regulatory policies.

The EU AI Act risk levels, with examples and requirements:

Unacceptable risk. Examples: behavioral manipulation; social scoring; biometric identification and categorization; real-time and remote biometric identification systems. Requirements: forbidden.

High risk. Examples: critical infrastructures; education and employment; safety components of products; essential services (e.g. credit scoring); law enforcement, administration of justice and border control. Requirements: risk assessment; data governance and data quality; traceability and documentation; robustness, security and accuracy; information and human oversight.

Limited risk. Examples: chatbots. Requirements: transparency on interacting with AI or AI-generated content.

Low or minimal risk. Examples: AI-enabled games; spam filters. Requirements: none.
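To make the risk-based approach concrete, the sketch below shows one way a company might encode such tiers as a simple lookup when triaging its own AI use cases. This is a minimal illustration under simplified assumptions: the domain keywords, obligation lists, and names such as RISK_REGISTER and triage are hypothetical, and nothing here is a legal interpretation of the Act.

```python
# Illustrative sketch only: a minimal triage of AI use cases against
# simplified risk tiers. Categories and obligations are assumptions,
# not legal definitions.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"   # forbidden practices, e.g. social scoring
    HIGH = "high"                   # e.g. credit scoring, recruitment
    LIMITED = "limited"             # e.g. chatbots: transparency obligations
    MINIMAL = "minimal"             # e.g. spam filters: no extra requirements


# Hypothetical mapping from use-case domains to risk tiers and key obligations.
RISK_REGISTER = {
    "social_scoring":   (RiskLevel.UNACCEPTABLE, ["forbidden"]),
    "credit_scoring":   (RiskLevel.HIGH, ["risk assessment", "data governance",
                                          "traceability", "human oversight"]),
    "recruitment":      (RiskLevel.HIGH, ["risk assessment", "data governance",
                                          "traceability", "human oversight"]),
    "customer_chatbot": (RiskLevel.LIMITED, ["disclose AI interaction"]),
    "spam_filter":      (RiskLevel.MINIMAL, []),
}


@dataclass
class AISystem:
    name: str
    domain: str  # one of the keys in RISK_REGISTER


def triage(system: AISystem) -> tuple:
    """Return the assumed risk tier and obligations for an AI system."""
    return RISK_REGISTER.get(system.domain, (RiskLevel.MINIMAL, []))


if __name__ == "__main__":
    level, obligations = triage(AISystem("Loan pre-screening model", "credit_scoring"))
    print(level.value, obligations)  # -> high ['risk assessment', ...]
```

In practice such a register would be maintained as part of the AI inventory discussed later in this article, with the actual tiers and obligations confirmed by legal and compliance experts.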

Differences in the practical implementation of regulations and guidelines

When it comes to shaping regulation, approaches differ, from comprehensive to domain-specific, and from binding regulation to guideline-based approaches. Unlike the EU, many countries, such as the UK and Switzerland, have opted against comprehensive AI regulation, instead amending existing laws to accommodate AI. For example, Switzerland has integrated AI transparency rules into existing data protection laws and modified, among others, product liability laws to address AI systems. In yet other regions, a combination of comprehensive and domain-specific policies is being shaped. For example, China has released both general-purpose ethical norms for AI and specific rules for, among others, online algorithms and facial recognition.

Other areas, such as cybersecurity, personal data management and data ownership, are also impacted by advances in AI. While the EU AI Act outlines a comprehensive approach to the development and governance of AI systems, related regulation also exists or is being defined around the use of personal data (General Data Protection Regulation, GDPR), access to and use of data across sectors (Data Act), digital platforms (Digital Services Act) and cybersecurity (Cyber Resilience Act). In other regions as well, the interplay between different AI-related policies and regulations is being considered.

Regulators are also looking into sandboxes as a tool to help develop trustworthy AI. Regulatory sandboxes provide environments where businesses can test and experiment with new and innovative products or services, usually in a particular area, under the supervision of a regulator for a limited period of time. These have been widely used in the finance sector but are emerging in other fields such as telecommunications and health.

Efforts toward global alignment are on the rise

The impact of AI is not limited by country borders. As the recent advances in generative AI show, the impacts of AI are global. This has also increased efforts to establish international collaboration on AI governance. In November 2023, the UK hosted the first AI Safety Summit, where 28 countries from across the globe agreed on the Bletchley Declaration on AI safety, which recognized “the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community”. This event will be followed by a virtual meeting in May 2024 and a second in-person summit at the end of 2024.

Actions for companies 

With so much going on in the regulatory field, what actions should a company take? Here are some steps to start with: 

  1. Increase understanding of AI regulation and how it impacts your business. The first step towards responsible AI governance is understanding which regulations affect your business. What are the policies and regulations in force or under development in the economies you operate in?
  2. Do an AI inventory and impact assessment. In parallel to understanding the AI policies impacting your business, it is important to understand in sufficient detail the AI algorithms and systems under development or in use in your company, and to assess their risk level and their impact on end users. Are you developing algorithms in high-risk areas, or do your AI initiatives fall into the low or limited risk category?
  3. Establish principles and practices. Guided by both the regulatory landscape and your company's AI inventory, define principles and practices for AI development and use, from development guidelines to governance and risk management models and accountability frameworks.
  4. Increase awareness across the organization. As AI impacts the whole organization, ensure via targeted training that executives, developers and users of AI alike are aware of legal and ethical considerations as well as company-level principles and guidelines on AI development and use.
  5. Set up governance structures. A risk-based approach to AI requires a systematic way of assessing and mitigating risks and of assigning accountability for AI systems. Depending on the current state, this may mean setting up completely new processes or integrating AI governance into existing data governance structures.
  6. Take suitable technical enablers into use. Developing AI models responsibly requires a systematic approach to testing and monitoring accuracy, bias and fairness. Previously considered a rather technical topic, going forward it will be important to demonstrate the results of such testing and monitoring also to a less tech-savvy audience, which puts new requirements on validation tools and techniques. A minimal sketch of such a fairness check is shown after this list.
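
As an illustration of the kind of technical enabler mentioned in step 6, the sketch below computes two commonly used group-fairness indicators, the demographic parity difference and the disparate impact ratio, for a set of binary model decisions. The data, thresholds and function names are hypothetical; in practice this kind of check would sit inside broader validation and documentation tooling.

```python
# Minimal fairness-check sketch (hypothetical data and names).
import numpy as np


def group_fairness_report(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Compare positive-outcome rates between two groups labelled 0 and 1."""
    rate_0 = y_pred[group == 0].mean()  # selection rate for group 0
    rate_1 = y_pred[group == 1].mean()  # selection rate for group 1
    return {
        "selection_rate_group_0": rate_0,
        "selection_rate_group_1": rate_1,
        # Demographic parity difference: 0 means equal selection rates.
        "parity_difference": abs(rate_0 - rate_1),
        # Disparate impact ratio: values well below 1 are often flagged for review.
        "disparate_impact_ratio": min(rate_0, rate_1) / max(rate_0, rate_1),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    predictions = rng.integers(0, 2, size=1000)  # 0/1 model decisions
    protected = rng.integers(0, 2, size=1000)    # hypothetical group membership
    for metric, value in group_fairness_report(predictions, protected).items():
        print(f"{metric}: {value:.3f}")
```

Reporting such indicators in plain language, alongside what was tested and when, is one way to make the results accessible to non-technical stakeholders such as risk and compliance teams.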

While navigating the different requirements posed by regulation, it is important not to focus on ticking boxes, but to remember that the aim is to build an environment where AI systems are not only developed and used in a responsible manner, but where this is also done transparently, so that all actors, from developers to regulators to end users, can understand how the ethical principles are being implemented and validated. Building this trust is what will make it possible to realize the full potential of AI.

References & more

Reach out to us if you want to learn more about how we can help you on your data journey.

Details

Title: How to Build Trustworthy AI? AI Ethics and Regulation
Author: DAIN Studios, Data & AI Strategy Consultancy
Updated on June 12, 2024