Human Computer Interaction for digital workplaces

At Konica Minolta Laboratory Europe, we are committed to sharing new ideas on digital workplaces and to finding new academic and industrial partners to collaborate with. Recently, we took this further afield, attending the International Conference on Human-Computer Interaction, held 15–20 July in Las Vegas, USA. This 20th edition of HCI International welcomed around 1,700 participants from 74 countries, hosting 12 affiliated conferences under the auspices of 14 distinguished international boards.
The conference focused on two main thematic areas: Human-Computer Interaction (HCI) and Human Interface and the Management of Information (HIMI).
During the conference opening event, Mary Czerwinski – research manager at Microsoft with many years of experience in designing technology for health and wellbeing – gave an interesting keynote speech on using technology to support healthy habits. In her talk, Czerwinski focused on sensing patients’ emotions using mobile, ambient and wearable devices, sharing lessons learned from several efforts in this space as well as traps to avoid. Her talk was inspiring for the research we are performing on Cognitive Hub on adapting to humans’ emotional and cognitive states, as described in The Future of User Interfaces whitepaper. Moreover, she provided good suggestions on how to design engaging interventions that help users cope positively with stress and depression.
On the last day of the conference, I had the pleasure of chairing a session entitled “Smart home and work environment”.

Speaking about the Cognitive Hub, I focused on the research topics related to the HCI platform, including its architectural components, adaptation to the user’s state, multimodal interaction, the design of user interfaces and the smart transition between physical and digital content (full paper in Springer LNCS 10921). Other papers presented in this session covered a smart home framework, a smart configurable wall, IoT in the home environment and a hybrid connected physical-digital space.

The parallel sessions covered many topics relevant to the different research activities that we conduct at Konica Minolta Laboratory Europe. Here we list a few examples, just the tip of the iceberg of all the interesting papers:

  • Futuristic workstation: Rukman Senanayake from SRI International in the USA presented the design of a futuristic computer workstation and specifically tailored application models, which can yield useful insights and result in exciting ways to increase efficiency, effectiveness and satisfaction for computer users. Similar to the research we are performing on multimodal interaction, this workstation can track a user’s gaze, sense proximity to the touch surface, and support multi-touch, face recognition and more. The workstation allows the development of a rich contextual user model that is accurate enough to enable benefits such as contextual filtering, task automation, contextual auto-fill and improved understanding of team collaborations.
  • User-centred design: Jim Carter from the University of Saskatchewan in Canada talked about an ISO standard on user requirements, paying attention to the specific information that should be included in a user requirements specification and to the syntax of user requirements statements, in order to define good practices for recording, formulating and organizing user requirements in a unified way. Two types of user requirements have been identified: (a) requirements for a user to be able to recognize, select, input or receive physical entities and information, and (b) use-related quality requirements that specify criteria for outcomes such as effectiveness, efficiency, satisfaction, accessibility and user experience. A user requirements specification should also contain information about constraints, the context of use, the goals and tasks to be supported, design guidelines and any recommendations for design solutions emerging from the user requirements.
  • Computer vision: Suzan Anwar from the University of Arkansas in the USA presented EREGE, a system capable of face detection, eye detection, eye tracking, emotion recognition and gaze estimation. It uses an Active Shape Model (ASM) tracker to track 116 facial landmarks from webcam input. A Support Vector Machine (SVM) classifier is then trained to recognize seven emotions: neutral, happiness, sadness, anger, disgust, fear and surprise. Eye gaze estimation starts by creating a head model, then applies the Active Shape Model (ASM) and the Pose from Orthography and Scaling with Iterations (POSIT) algorithms for head tracking and position estimation.
  • Natural language processing and generation: Lei Wang from Beijing Normal University in China talked about a conversational user interface developed for financial applications. Currently, users of mobile banking applications have to type and click through many steps in order to complete their tasks. A conversational user interface based on intelligent voice technology was developed to simplify the interaction between the user and the application. The author presented “transfer accounts” as a research use case to show how the conversational interface was developed and applied.
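
To make the landmarks-to-emotion idea behind systems like EREGE concrete, here is a minimal sketch of such a pipeline: tracked facial landmarks are flattened into a feature vector and matched against per-emotion prototypes. This is an illustrative stand-in only; it uses a simple nearest-centroid classifier in place of the paper’s SVM, and the landmark coordinates are synthetic rather than output from a real ASM tracker.

```python
import math

# The seven emotion classes recognized by EREGE.
EMOTIONS = ["neutral", "happiness", "sadness", "anger",
            "disgust", "fear", "surprise"]

def features(landmarks):
    """Flatten (x, y) landmark pairs into one feature vector."""
    return [coord for point in landmarks for coord in point]

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def train(samples):
    """samples: {emotion: [landmark sets]} -> {emotion: prototype vector}.
    A real system would fit an SVM here; we store per-class centroids."""
    return {emo: centroid([features(lm) for lm in lms])
            for emo, lms in samples.items()}

def classify(model, landmarks):
    """Return the emotion whose prototype is nearest to the input."""
    vec = features(landmarks)
    return min(model, key=lambda emo: math.dist(vec, model[emo]))
```

In practice each face would contribute 116 landmarks (a 232-dimensional feature vector) per frame, and an SVM with a non-linear kernel would replace the centroid comparison.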