Lucas P.J.J. Noldus
Hot Topic Talk -- NeuroIS Retreat 2024
On 13 March 2024, the European Parliament approved the European AI Act, an important instrument designed to promote the responsible use of AI, prevent misuse, and cultivate a safe society for all Europeans. Protecting citizens’ privacy, ensuring the transparency of AI systems, and monitoring their operation so that unexpected and unwanted effects can be detected and acted upon are of crucial importance. However, an alarming provision within the regulation is the categorical prohibition of emotion recognition technology in workplaces and educational settings, which may hinder innovations that could significantly enhance users’ comfort, wellbeing and health. While an exception is made for AI systems developed for medical or safety reasons, this leaves many beneficial applications of emotion recognition in the prohibited category. Examples of such applications include affective tools that improve the usability and accessibility of digital systems, systems that adapt to the mental state of the user in order to achieve a balanced workload and prevent burnout, affective systems that enhance the quality of interaction during video conferencing, and systems that help call center operators respond adequately to customers’ emotions.

In January and February 2024, over 120 research institutes and individual experts across the EU co-signed a letter to key persons in the European Commission, Parliament and Council, urging them to reclassify the risk level of emotion recognition from prohibited to high-risk. This reclassification did not happen, but awareness of our case has been raised. The dialogue continues with the newly established European AI Office, which will develop the Guidelines for Implementation of the AI Act. Finally, each member state must translate these guidelines into national regulations; only then will the implications of the AI Act become fully clear for developers and users of affective systems.

In the meantime, our research community should remain alert and involved, helping European and national policymakers fine-tune the guidelines with respect to emotion recognition, so that Europe can reap the benefits of these promising technologies while mitigating the risks.