
A face says more than a thousand surveys

What do our customers really think about our products and services? And how do they experience a visit to our department store, service office or other premises? These are questions that entrepreneurs have always tried to answer, in different ways.

These questions are highly relevant today, when many companies want to become more customer-centric in their operations rather than leading with product or technology alone. Surveys, audience response buttons and interviews are common ways to collect customer feedback, but they can be resource-intensive, they rarely tell the whole truth, and the analysis of the results can easily become rather rigid. What if we could instead read our customers' facial expressions to understand what they think about their experience?

Facial Expression Reading

Anyone who has seen the TV series Lie to Me, based on the work of Paul Ekman, knows that a facial expression is worth a thousand words, and that it is a far more honest indicator of what we think about something than what we express through words. Facial expression scanning is used to detect potential terrorists at airports, although the screening there is usually carried out by human screeners. But we don't need such drastic examples to find situations where analysing a large number of facial expressions would be useful. What if you could see in which areas of your department store customers seem happiest and most satisfied, and where their expressions look nowhere near as happy? You might want to know how many visitors you have, roughly how old they are, and whether they are men or women. What if we could tailor advertising to the mood people are in? Of course, these ideas also raise privacy issues that are important to address before applying them. But the fact is that the technology to analyse faces and facial expressions is already available. We at AddPro decided to give it a try!

Face Recognition

Conveniently, Microsoft serves up a smorgasbord of services in its Azure portal. One of these, Cognitive Services, originated in Microsoft's Project Oxford and provides a number of interfaces with AI functionality. We built a simple application with the goal of finding out whether there is a face in front of the camera and, if so, determining both whether it is a man or a woman and what emotional expression the face is showing. In our test, it turned out to be a man in front of the camera, registering the emotion surprise at 76%. Other measurable emotions include joy, fear, contempt and disgust, alongside attributes such as hair, facial hair and any glasses. Extended functionality could include face and voice recognition, identification of objects in an image, and even two-way conversation with an artificial intelligence in the form of a bot. Since most of the computing power lives in the cloud, there is also nothing to stop us packaging it all into a resource-efficient IoT device, managed by another Azure service called IoT Hub.
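To give a feel for what the application works with: the Face API's detect endpoint returns, per detected face, a set of attributes including per-emotion confidence scores. The sketch below is illustrative only; the sample response is a hand-written stand-in modelled on that schema (the field names and the 76% surprise value mirror the example above), not real API output.

```python
# Sketch: picking the dominant emotion out of a Face API-style detect response.
# The sample below is an illustrative stand-in, not a real API payload.

def dominant_emotion(face: dict) -> tuple[str, float]:
    """Return the (emotion, confidence) pair with the highest score."""
    emotions = face["faceAttributes"]["emotion"]
    name = max(emotions, key=emotions.get)
    return name, emotions[name]

# One detected face, mirroring the man-with-76%-surprise example in the text.
sample_face = {
    "faceAttributes": {
        "gender": "male",
        "emotion": {
            "anger": 0.0, "contempt": 0.01, "disgust": 0.0, "fear": 0.02,
            "happiness": 0.05, "neutral": 0.14, "sadness": 0.02,
            "surprise": 0.76,
        },
    },
}

emotion, score = dominant_emotion(sample_face)
print(f"{sample_face['faceAttributes']['gender']}, {emotion}: {score:.0%}")
# prints "male, surprise: 76%"
```

In a live application, a frame from the camera would be posted to the cloud service and each face in the response run through a function like this one; all the heavy lifting happens server-side, which is what makes the lightweight IoT packaging mentioned above feasible.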

As mentioned, this was a simple experiment that was relatively quick to set up. We are now exploring the area further to find more new and interesting applications for this technology, and to work out how we can best support our customers' operations with it. Do you have an idea or a case where you think face, voice or image object identification could be useful? Or are you curious to learn more about this technology? Don't hesitate to get in touch with us at AddPro to discuss the possibilities!