November 19, 2019

To what extent does Computer Vision AI need ethics?

Computer Vision as a technology employing Artificial Intelligence (AI) has been around for decades. However, the capability to process large volumes of data in an instant has increased to the point that Computer Vision technologies can be, and are, used to capture detailed, personalized information at a specific point in time. At the same time, concerns about the ethics of Computer Vision AI have emerged, requiring much broader public awareness and discussion as the technologies develop.

Computer Vision AI applied to people can have significant impacts on their lives, culture and society, both positive and negative. Algorithmic unfairness and errors may account for some of the unintended consequences of AI, but when combined with Computer Vision technology applied to humans, the potential negative consequences raise urgent ethical questions and issues. These include identity theft, malicious attacks against identity, discrimination based on identity, identity errors and misinformation, espionage and copyright infringement.

On the other hand, Computer Vision AI offers compelling opportunities for positive, life-changing technologies that can be used for social good, such as providing helpful information to blind and vision-impaired people, and it can also be used to support equality and inclusion. Emotion recognition takes the technology a step further: on the positive side, it could provide valuable feedback in education, health diagnostics and social democratization, but it could just as quickly be used to control and manipulate mass behavior, pushing society into an era of automated, quantified reality.

Saara Hyvönen, co-founder of DAIN Studios and DAIN Studios’ AI ethics expert, answers some questions about using AI in Computer Vision and facial emotion recognition, and about the discussion stimulated by Naama, DAIN’s facial emotion recognition API that will be demonstrated at Slush.

To start the discussion on the ethics of Artificial Intelligence (AI), tell us about the activities DAIN Studios has been involved in with respect to AI ethics. 

In May 2017, the Minister of Economic Affairs, Mika Lintilä, launched the Artificial Intelligence Programme to consider how to ensure that Finland becomes one of the frontrunner countries in applying Artificial Intelligence. I have been a member of the programme’s ethics subgroup. The focus of this group was to create discussion around AI ethics and to push for the renewal of operating models so that ethical guidelines are taken into consideration.

But compliance and ethics were topics of interest already before this. When innovating around data and AI, one always needs to think about the broader context: user experience, legal aspects and, yes, ethical considerations. Data and algorithms do not live in a void.

DAIN Studios will demonstrate Naama, a Facial Emotion Recognition API built on Computer Vision AI, at Slush. What was your initial reaction to the suggestion of using Computer Vision AI?

My reaction was twofold: this kind of technology is really interesting, but of course facial recognition also carries an uncomfortable “Big Brother” feeling. What we want to showcase here are the possibilities of this type of technology, but also the explainability side: how emotions are detected and what is significant to the algorithm.
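As an illustration of what that explainability could look like in practice, the sketch below computes an occlusion-based saliency map: it covers small regions of a face image with a grey patch and records how much the predicted emotion score drops, so the regions the model relies on most stand out. This is only a minimal sketch; the `predict_emotion_score` function is a hypothetical stand-in, since Naama’s actual API is not described here.

```python
import numpy as np

def predict_emotion_score(image: np.ndarray) -> float:
    """Hypothetical stand-in for an emotion classifier's confidence score.

    Any model mapping an H x W x 3 image to a probability could be plugged
    in here; this dummy simply uses the brightness of the centre region.
    """
    h, w = image.shape[:2]
    return float(image[h // 3: 2 * h // 3, w // 3: 2 * w // 3].mean() / 255.0)

def occlusion_saliency(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Slide a grey patch over the image and record how much the score drops.

    Regions that matter most to the model produce the largest drops,
    which gives a coarse 'what is significant to the algorithm' heatmap.
    """
    base = predict_emotion_score(image)
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 127  # neutral grey patch
            heatmap[i // patch, j // patch] = base - predict_emotion_score(occluded)
    return heatmap

if __name__ == "__main__":
    face = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
    print(occlusion_saliency(face).round(3))
```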

From an AI ethics viewpoint, what are some of the issues the programmers and developers using similar technologies should consider? 

In AI development, we can roughly divide ethical questions into two parts: intention and implementation. Intention deals with what we want to do with AI; this is more of a strategic and business question. Developers run more often into implementation-related questions: how we develop AI. Here it is important to think about what kinds of biases may be hidden in the data and in the outcome of the development. Is the algorithm fair? How is fairness defined, and how do we prove it?
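To make the fairness question concrete, here is a minimal sketch of one common check, the demographic parity gap, which compares positive-prediction rates across groups. It illustrates only a single, narrow definition of fairness; other definitions such as equalized odds or calibration can give different answers, which is exactly why defining fairness up front matters. The arrays and group labels below are purely illustrative.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-prediction rate per group (e.g., per gender or age bracket)."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Difference between the highest and lowest selection rate.

    0.0 means all groups receive positive predictions at the same rate
    under this one definition of fairness; other definitions may disagree.
    """
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])          # illustrative predictions
    grps = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # illustrative groups
    print(selection_rates(preds, grps))         # {'a': 0.75, 'b': 0.25}
    print(demographic_parity_gap(preds, grps))  # 0.5
```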

What are the main concerns for an AI ethics expert such as yourself, in terms of Computer Vision AI use cases that use Facial Recognition?

The right to privacy would be my main concern with the use of facial recognition software. We already know from the widely publicized case of the social credit system in China that this kind of technology can be used for mass surveillance in ways that affect all aspects of an individual’s life. This highlights the importance of thinking about what we, as a society, want, and how these types of technologies should and should not be used.

Additional information

Saara will be at Slush. Visit booth E1 on Thursday 21 November to continue the conversation and meet Naama, DAIN Studios’ facial emotion recognition API.

Details

Title: To What Extent Does Computer Vision AI Need Ethics?
Author: DAIN Studios, Data & AI Strategy Consultancy
Updated on November 23, 2023