The UK regulator, the Information Commissioner’s Office (ICO), has warned organisations to assess the public risks of “emotion analysis” technologies before implementing these systems. Organisations that fail to act responsibly, pose risks to vulnerable people, or fall short of ICO expectations will be investigated, the regulator said.
Emotional analysis technologies process data such as gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture.
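To illustrate the simplest of these techniques, sentiment analysis can be sketched as a naive lexicon-based scorer. The word lists and scoring below are illustrative assumptions only, not any vendor’s actual method:

```python
# Naive lexicon-based sentiment analysis: score text by counting
# matches against small positive/negative word lists. Purely
# illustrative; real emotion-analysis systems are far more complex
# and, as the ICO notes, often unreliable.
POSITIVE = {"good", "great", "happy", "calm", "confident"}
NEGATIVE = {"bad", "sad", "angry", "anxious", "stressed"}

def sentiment_score(text: str) -> int:
    """Return (positive word count) - (negative word count)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I feel happy and confident today"))  # 2
print(sentiment_score("I am anxious and stressed"))         # -2
```

Even this toy example hints at the accuracy problems the ICO raises: negation (“not happy”), sarcasm and context are entirely invisible to such word counting.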
Examples given by the ICO include monitoring the physical health of workers by offering wearable screening tools, or using visual and behavioural methods, including body position, speech, and eye and head movements, to register students for exams.
Emotion analysis relies on collecting, storing and processing a range of personal data, including subconscious behavioural or emotional responses and, in some cases, special category data. This kind of data, the ICO believes, is far riskier than the traditional biometric data used to verify or identify a person. It argues that because algorithms are not yet sufficiently developed to detect emotional cues reliably, there is a risk of systemic bias, inaccuracy and even discrimination.
“Developments in the biometrics and emotion AI market are immature,” said Deputy Commissioner, Stephen Bonner. “They may not work yet, or indeed ever.
“While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.
“The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area."
The ICO said that it will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work.
It said it will respond positively to those demonstrating good practice, whilst taking action against organisations that try to gain an unfair advantage through unlawful or irresponsible data collection technologies.
It is also developing guidance on the wider use of biometric technologies, which may include technologies that are already successfully used in industry, such as facial, fingerprint and voice recognition.
The guidance is due to be published in Spring 2023, and will be based on public dialogues held in liaison with both the Ada Lovelace Institute and the British Youth Council. These will explore public perceptions of biometric technologies and gain opinions on how biometric data is used.