What is happening? In the old days, an interface was straightforward, at least in most cases. In practice, a user interface (UI) was either text-based or a graphical user interface (GUI). And, even more important, skilled users operated it. How different is it today, and how much more different will it become?
The UI and GUI, which had a central role in MMI, are still there. However, other interfaces are knocking on the door: sensor- and actuator-based interfaces, which use sensory channels other than our visual and auditory ones. Arduino and similar kits provide commercial off-the-shelf (COTS) means to develop even high-fidelity prototypes of next-generation intuitive interfaces (II), such as an Arduino Apple Watch or a biofeedback-powered affective music player.
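To make this concrete, consider a minimal sketch of such a biofeedback prototype. This is only an illustrative example, not a reference design: the pin assignment, sampling rate, and smoothing factor are assumptions, and a real affective music player would also need calibrated sensors and a host-side mapping from physiology to music.

```cpp
// Illustrative Arduino sketch: read a skin-conductance (GSR) sensor and
// stream a smoothed value over serial, where a host application could
// map it to, e.g., music selection. All constants are assumptions.

const int GSR_PIN = A0;               // assumed analog pin for the sensor
const unsigned long SAMPLE_MS = 100;  // assumed 10 Hz sampling interval

float smoothed = 0.0;                 // exponentially smoothed reading

void setup() {
  Serial.begin(9600);                 // stream readings to a host application
}

void loop() {
  int raw = analogRead(GSR_PIN);          // raw reading, 0..1023
  smoothed = 0.9 * smoothed + 0.1 * raw;  // simple low-pass filter
  Serial.println(smoothed);               // host maps this value to music
  delay(SAMPLE_MS);
}
```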
II call for new designs as well as basic knowledge from both psychology and computer science. Many questions need to be answered from each of these perspectives, including: Should sensors and actuators be ambient, wearable, or a combination? How should and can we take context into account? Which modality is chosen, what are its sensitivity and specificity, and how does it combine with other modalities? How can input and output signals be augmented or simplified efficiently to reduce computational complexity and, in parallel, aid the user's processing?
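To illustrate the modality question: sensitivity and specificity quantify how often a recognizer catches true events and rejects non-events. A minimal, self-contained sketch, with made-up counts for a hypothetical gesture detector:

```cpp
// Illustrative only: sensitivity and specificity of a hypothetical
// gesture detector. The counts below are invented for the example.
#include <iostream>

int main() {
  double tp = 90;   // gestures correctly detected (true positives)
  double fn = 10;   // gestures missed (false negatives)
  double tn = 180;  // non-gestures correctly ignored (true negatives)
  double fp = 20;   // spurious detections (false positives)

  double sensitivity = tp / (tp + fn);  // fraction of real gestures caught
  double specificity = tn / (tn + fp);  // fraction of non-gestures rejected

  std::cout << "sensitivity = " << sensitivity
            << ", specificity = " << specificity << "\n";
  return 0;
}
```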
The hardware components for II have been there for a while already. It isn't the processing power that is hindering progress. And, with Artificial Intelligence (AI) making such (claimed) progress, algorithms should be able to adapt. So, why are the Internet of Things (IoT), Ambient Intelligence (AmI), and Ubiquitous Computing (UbiComp) not in our living rooms? Why don't they even knock on our door? Is it because we are still facing the old MMI challenges from decades ago?
Essentially, we are facing the same challenges as decades ago. Not that we have not made progress; we have! At least our hardware and software have evolved at a rapid pace. Perhaps this is best illustrated by Virtual and Augmented Reality, which has entered our living rooms this century (e.g., via VR games), whereas at the end of the previous century it was still very expensive and required high-level expertise. And, yes, we slowly understand more and more about its users, about us as humans in general, and about the complex contexts we live in. However, the changes in society, products, hardware, software, contexts, and so forth make MMI something of a moving target.
Some industrial cases make use of their fixed context. For example, new cars are full of MMI, with II powered by speech and gesture recognition. On the one hand, this illustrates technology's progress; on the other hand, it nicely illustrates the challenges we are facing. There is no single best II for cars; every brand designs its own. Perhaps this is what makes MMI such a wonderful field of science and engineering: it is highly interdisciplinary, we are chasing a moving target so we have to keep running, and it unites basic research and everyday product development.
Egon L. van den Broek
Department of Information and Computing Sciences, Utrecht University
Human-Centered Computing Consultancy, The Hague