November in Helsinki, Finland, is not a typical peak season for international visitors, but during Slush week, accommodation in Finland’s capital was hard to find. Slush, the event that continues to attract 3,500 startups, 2,000 investors and 25,000 attendees from around the world, has done the impossible: the darkest, slushiest, greyest part of the Finnish year is now associated with investors, unicorns and a competition over who can create the longest free-coffee line. The annual November event is like a small Christmas for startups, which showcase their latest innovations in a large, nightclub-esque venue where a low bass hum greets the bringers of change and disruption. The scene is typical Slush, and this year did not disappoint.
DAIN Studios had its first booth at #Slush19, demoing its facial emotion recognition application Naama (the Finnish word for face). Naama was originally intended as a novelty showcasing DAIN Studios’ talent, but it quickly became the subject of conversations along the lines of “yes, this is cool, but what if you did…? Can you do that too?”.
Although DAIN Studios does not currently focus exclusively on developing computer vision AI products, it runs its own AI Labs, where it can prototype, test and demo potential AI products using computer vision as well as other AI methods, including Natural Language Processing (NLP), Machine Learning (ML) and, for example, chatbots.
The level of interest in Naama at Slush has given us pause for thought. There are many potentially beneficial solutions to which the algorithms and technology behind Naama could be applied. At DAIN we had previously noted a few positive and concerning use cases of computer vision AI in our blog on computer vision AI ethics. What made Slush great were the many additional ideas people offered for the application once they engaged with the novelty and tried Naama. Some of our favorite potential use cases included:
- Detecting passengers’ physical and emotional wellbeing on a long trip;
- Gauging real-time audience response during a performance, play or show;
- Detecting potentially unmanageable situations, such as football fan emotions in large crowds at a highly attended match;
- Emotions in retail and the shopping experience;
- Clear-path guidance for wheelchair users who are blind or visually impaired;
- Gender and age identification for a deeper understanding of customer segmentation and preferences.
We also had many discussions about the ethics of the technology, noting that there are many use cases where it could be misused or would be inappropriate.
The evolution of Naama within DAIN Studios has been interesting – it’s probably not your typical innovation path. Naama’s development started with a series of blogs about building Computer Vision APIs and the emerging area of Explainable AI (xAI), in which DAIN’s data scientists discussed how to build something that could showcase xAI using computer vision. The internal project took off when it diplomatically combined several of their ideas, and through a process of experimentation and iteration, Naama emerged.
The work built on several open-source code repositories, and multiple feature compromises were necessary to fit the time and resource constraints. The process was special for DAIN because it brought data scientists from the three studio offices together to work on the demo for Slush.
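For readers curious what sits at the end of a facial emotion recognition pipeline like Naama’s, the final step is typically a classifier that turns raw model scores into a named emotion. The sketch below is purely illustrative: the emotion labels, the score vector and the function names are our assumptions for this example, not DAIN’s actual code or label set.

```python
import math

# Illustrative emotion labels; Naama's real label set is not public.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_emotion(logits):
    """Map a vector of raw scores (one per emotion) to the top label."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]

# A hypothetical score vector a face-classification model might emit
# for a smiling face; in a real pipeline these come from a trained network.
label, confidence = classify_emotion([2.1, -0.3, -1.2, 0.4, 0.9])
```

In a production system this step would be preceded by face detection and a trained neural network; here the scores are hard-coded only to keep the example self-contained.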