ICHMS 2021 Schedule

Magdeburg, Germany (Hybrid Event)

The main program will take place on-site in Building 22-A, Lecture Hall 2. Other rooms are noted in the schedule below.

The Sino-German Symposium on Brain-Machine-Systems will be held via Zoom, parallel to Sessions 5-8 on day 2. If you have any questions during the conference, don't hesitate to contact the organizers.

Time is Central European Summer Time (UTC+2).

16:00 - 17:00

Registration and Arrival On-Site

Building 29, Room 301

Get a coffee, pick up your name badge and goodie bag, or simply log in.

18:00 - 20:00

City Tour

Get to know the City of Otto. Jointly with Summer School.

The meeting point will be the Art Museum and Monastery »Unser Lieben Frauen«.


08:30 - 09:00

Registration and Coffee

Lobby of OVGU Building 22-A

Get a coffee, pick up your name badge and goodie bag, or simply log in.

09:00 - 09:20

Welcome (Curtain Raiser)

Official Opening with Greetings from local representatives

09:20 - 11:00

Session 1: D1.1: Human Performance Modelling

Chair: Malte Schilling

Travis Wiltshire, Dan Hudson, Max Belitsky, Philia Lijdsman, Stijn Wever and Martin Atzmueller

Examining Team Interaction using Dynamic Complexity and Network Visualizations Presentation Video (Abstract)
Given the increasing complexity of many sociotechnical work domains, effective teamwork has become increasingly crucial. While there is evidence that face-to-face communication contributes to effective teamwork, methods for understanding the time-varying nature and structure of team communication are limited. In this work, we combine sensor-based social analytics of Sociometric badges (Rhythm Badge) with two visualization techniques (Dynamic Complexity Heat Maps and Network Visualizations) to advance an intuitive way of understanding the dynamics of team interaction. To demonstrate the utility of our approach, we provide a case study that examines one team’s interaction for a Lost at Sea simulation. We were able to recover transitions in the task and team interaction as well as uncover structural changes in team member communication patterns, which we visualize using networks. Taken together, this work represents an important first step toward optimizing team effectiveness by identifying critical transitions in team interactions.

Joachim Meyer and James K. Kuchar

Maximal benefits and possible detrimental effects of binary decision aids Presentation Video (Abstract)
Binary decision aids, such as alerts, are a simple and widely used form of automation. The formal analysis of a user’s task performance with an aid sees the process as the combination of information from two detectors that both receive input about an event and evaluate it. The user’s decisions are based on the output of the aid and on the information the user obtains independently. We present a simple method for computing the maximal benefits a user can derive from a binary aid as a function of the user’s and the aid’s sensitivities. Combining the user and the aid often adds little to the performance the better detector could achieve alone. Also, if users assign non-optimal weights to the aid, performance may drop dramatically. Thus, the introduction of a valid aid can actually lower detection performance, compared to a more sensitive user working alone. Similarly, adding a user to a system with high sensitivity may lower its performance. System designers need to consider the potential adverse effects of introducing users or aids into systems.
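
The combination described here can be made concrete with standard signal detection theory. Below is a minimal sketch, assuming two independent, equal-variance Gaussian detectors combined linearly (our assumption for illustration, not necessarily the paper's exact model); it reproduces the qualitative claim that misweighting the aid can push joint performance below that of the better detector alone.

```python
import numpy as np

def combined_dprime(d_user, d_aid, w_aid):
    """Sensitivity of a weighted linear combination of two independent,
    equal-variance Gaussian detectors; w_aid is the weight on the aid."""
    w_user = 1.0 - w_aid
    mean_shift = w_user * d_user + w_aid * d_aid  # separation of the combined statistic
    noise_sd = np.sqrt(w_user**2 + w_aid**2)      # its standard deviation under noise
    return mean_shift / noise_sd

d_user, d_aid = 2.0, 1.0
w_opt = d_aid / (d_user + d_aid)  # optimal weights are proportional to each detector's d'
print(combined_dprime(d_user, d_aid, w_opt))  # sqrt(d_user**2 + d_aid**2) ~= 2.24
print(combined_dprime(d_user, d_aid, 0.8))    # over-weighting the weaker aid: ~1.46 < 2.0
```

Note how even the optimal combination adds only about 0.24 to the better detector's d' of 2.0, matching the observation that combining user and aid often adds little.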

Sayantan Polley, Tarun Gupta, Ritu Gahir, Libin Kutty, Nnamdi Ukwu and Marcus Thiel

IRTEX: Image Retrieval with Textual Explanations Presentation Video (Abstract)
In a Content Based Image Retrieval (CBIR) system, images are retrieved based on the content of the images such as color, shapes and objects. CBIR is typically accomplished by extracting features and comparing image feature vectors with a query vector, and rankings are derived using a similarity measure. However, often end users experience a semantic gap between the notion of similarity used by the ranking model versus the users’ perception about image similarity. Explainable AI (XAI) is an emerging research field that attempts to provide transparency of “black box” models, to make AI systems trustworthy and gain user trust. This work aims at building an Image Retrieval system with TEXtual explanations such as “The (global) results are similar to the query by X % due to shape, Y % due to color”. Local explanations are generated using various methods such as comparing images with respect to overlap between low level features such as color, shape and regions (MPEG-7 features). Additionally, high level features such as background-foreground segmentation, deep learned features and major key-points, objects identified (SIFT features) are used to enrich the explanations. We evaluate the quality of rankings on benchmark data-sets such as PASCAL VOC. The XAI facet of user satisfaction and usefulness of the system is evaluated in a lab based user study. Our results show that the semantic gap is better bridged using high level features, while low level features might be better suited for re-ranking the retrieved images.
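
The global "X % due to shape, Y % due to color" style of explanation can be obtained by normalizing per-aspect similarity scores. A minimal sketch of that idea (our illustration; the feature names and the cosine measure are assumptions, not the paper's implementation):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def explain_similarity(query_feats, result_feats):
    """Normalize per-aspect similarities into percentage contributions,
    yielding an 'X% due to color, Y% due to shape' style explanation."""
    sims = {name: cosine(query_feats[name], result_feats[name])
            for name in query_feats}
    total = sum(sims.values())
    return {name: round(100 * s / total, 1) for name, s in sims.items()}

# Placeholder feature vectors (e.g. a color histogram and a shape descriptor)
query = {"color": np.random.rand(64), "shape": np.random.rand(32)}
result = {"color": np.random.rand(64), "shape": np.random.rand(32)}
print(explain_similarity(query, result))  # e.g. {'color': 52.3, 'shape': 47.7}
```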

Johanna Schmidhuber, Stephan Schlögl and Christian Ploder

Cognitive Load and Productivity Implications in Human-Chatbot Interaction Presentation Video (Abstract)
The increasing progress in artificial intelligence and respective machine learning technologies has fostered the proliferation of chatbots to the point where today they are being embedded into various human-technology interaction tasks. In enterprise contexts, the use of chatbots seeks to reduce labor costs and consequently increase productivity. For simple, repetitive customer service tasks this already proves beneficial, yet more complex collaborative knowledge work seems to require a better understanding of how the technology may best be integrated. Particularly, the additional mental burden which accompanies the use of these natural language based artificial assistants often remains overlooked. To this end, cognitive load theory implies that unnecessary use of technology can induce additional extrinsic load and thus may have a contrary effect on users' productivity. The research presented in this paper thus reports on a study assessing cognitive load and productivity implications of human-chatbot interaction in a realistic enterprise setting. A/B testing of software-only vs. software + chatbot interaction and the NASA TLX were used to evaluate and compare the cognitive load of two user groups. Results show that chatbot users experienced less cognitive load and were more productive than software-only users. Furthermore, they show lower frustration levels and better overall performance (i.e., task quality) despite their slightly longer average task completion time.
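
For readers unfamiliar with this evaluation setup, an A/B comparison of NASA-TLX scores reduces to a two-sample test over the two groups. A minimal sketch with invented scores (all data below are hypothetical, for illustration only):

```python
import numpy as np
from scipy import stats

# Hypothetical overall NASA-TLX workload scores (0-100) per participant
tlx_software_only = np.array([62, 58, 71, 65, 60, 69, 64, 57])
tlx_with_chatbot = np.array([48, 52, 45, 55, 50, 47, 53, 44])

# Welch's t-test: does the chatbot group report lower cognitive load?
t_stat, p_value = stats.ttest_ind(tlx_software_only, tlx_with_chatbot, equal_var=False)
print(t_stat, p_value)
```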

An-Yu Zhuang, Yang Chen Lin, Shang-Lin Yu and Po-Chih Kuo

A brain-sensing fragrance diffuser for mental state regulation using electroencephalography Presentation Video (Abstract)
Human brain studies have shown that olfactory perception can regulate emotion and attention networks and prevent depressed mental states. Fragrance diffusers have been used as a potential appliance to reconcile mental conditions and achieve stress relief in daily life. Although perceiving fragrances is a complicated and subjective experience, studies have shown that it is possible to reveal a person's preference for fragrance from brain activity measured by electroencephalography (EEG). Moreover, using EEG to detect neural/mental states and apply them to human-machine interfaces has also been investigated for years. Therefore, this study has two aims: (1) to identify users’ preference for fragrances from EEG; (2) to develop a personalized fragrance diffuser, Aroma Box, which can detect three mental states from EEG when a user feels depressed, stressed, or drowsy and then release fragrances in real-time to help the user recover from these abnormal states. To achieve this goal, we first extracted the features and built a classifier to identify the user's preference for fragrances from EEG. Then we calculated the indicators of brain states based on EEG frequency analysis. Finally, we deployed our algorithms in an in-house developed diffuser with a consumer 32-channel EEG headset, which was further implemented in a real-life working environment, and evaluated its efficacy with two users.
11:00 - 11:30

Coffee Break

11:30 - 13:00

Session 2: D1.2: Interactive and Wearable Computing Systems (Special Session)

Chair: Giancarlo Fortino

José Luis Samper-Escudero, Sofía Coloma, Miguel Angel Olivares-Mendez, Miguel Ángel Sánchez-Urán and Manuel Ferre

Assessment of a textile portable exoskeleton for the upper limbs' flexion Presentation Video (Abstract)
Flexible exoskeletons are lightweight robots that surround the user's anatomy to either assist or oppose its motion. Their structure is made of light and flexible materials, like fabrics. Therefore, the forces created by the robot are directly transferred to the user's musculoskeletal system; this makes exosuits sensitive to the sliding of the actuation, textile perturbations and improper fitting to the user. LUXBIT is a cable-driven flexible exoskeleton that combines different fabrics and sewing patterns to promote its anatomical adaptation. The exoskeleton described is intended for bimanual assistance of daily tasks and long-term usage. To this end, the system reduces the pressures applied to the user and the misalignment with the user by stacking textile patches. The patches enhance the functioning of the base garment and promote the transference of the assistance forces. Additionally, LUXBIT has a compact actuation with deformable components to prevent restricting the user's motion. The exoskeleton is made portable via an enhanced textile backpack. This paper shows the exoskeleton's benefits for trajectory and muscle activity when the user flexes the shoulder and elbow.

Dipanwita Thakur, Antonella Guzzo and Giancarlo Fortino

t-SNE and PCA in Ensemble Learning based Human Activity Recognition with Smartwatch Presentation Video (Abstract)
Smartwatch based Human Activity Recognition (HAR) is gaining popularity due to habitual unhealthy behavior of the population and the rich in-built sensors of smartwatches. Raw sensor data is not well suited for classifiers to identify similar activity patterns. According to the HAR literature, handcrafted features are beneficial for properly identifying activities, but handcrafting them is time consuming and needs expert domain knowledge. Automatic feature extraction libraries give high-dimensional feature sets that increase the computation and memory cost. In this work, we present an Ensemble Learning framework that exploits dimensionality reduction and visualization to improve performance. Specifically, using the Time Series Feature Extraction Library (TSFEL), high dimensional features are extracted automatically. Then, to reduce the dimension of the feature set and for proper visualization, Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE) are used, respectively. The relevant features extracted using PCA are fed to an ensemble of five different Machine Learning (ML) classifiers to identify six different human physical activities. We also compare the proposed method with three popularly used shallow ML methods. Self-collected smartwatch sensor data of human activity are used to establish the feasibility of the proposed framework. We observe that the proposed framework outperforms existing state-of-the-art benchmark frameworks, with an accuracy of 96%.
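
The pipeline outlined here (automatic feature extraction, PCA for reduction, t-SNE for visualization, an ensemble of classifiers) maps naturally onto scikit-learn. A compact sketch under our own simplifications: random placeholder data stands in for TSFEL output, and three base classifiers stand in for the paper's five:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Placeholder for high-dimensional features extracted automatically (e.g. via TSFEL)
X = np.random.rand(600, 400)
y = np.random.randint(0, 6, 600)  # six physical activity labels

X_reduced = PCA(n_components=50).fit_transform(X)           # dimensionality reduction
X_embedded = TSNE(n_components=2).fit_transform(X_reduced)  # 2-D view, visualization only

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True))],
    voting="soft",
)
ensemble.fit(X_reduced, y)
```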

Raffaele Gravina, Ruperto Alexander Lopez and Pablo F. Ordoñez-Ordoñez

FaceMask: a Smart Personal Protective Equipment for Compliance Assessment of Best Practices to Control Pandemic Presentation Video (Abstract)
Disposable and reusable face masks represent one of the key items of personal protective equipment (PPE) against the COVID-19 pandemic, and their use in public environments is mandatory in many countries. According to the intended use, there exist different types of masks with varying levels of filtration. The World Health Organization (WHO) has developed a set of best practices and guidelines for the correct use of this fundamental PPE. Nevertheless, many people tend to neglect wearing the mask in the presence of other people and to unintentionally overuse the mask before replacement, which results in increased exposure to airborne infections. This paper proposes the development of a smart wearable computing system, consisting of a reusable face mask augmented with sensing elements and wirelessly connected to a personal mobile device, to recognize correct positioning on the face and to monitor other parameters such as usage time. Specifically, we realized a 3D printed mask prototype with a replaceable filter, equipped with a small electronic embedded device. The mask collects internal and external parameters including humidity, temperature, volatile organic compounds (VOC) inside the mask, inertial motion, and external temperature and light. Collected data are transmitted over Bluetooth Low Energy to a smartphone responsible for performing signal pre-processing and position classification. Two machine learning algorithms were compared, and results from real experiments showed that SVM performed slightly better than Naive Bayes, with 98% and 97% accuracy, respectively.
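
The reported SVM vs. Naive Bayes comparison is a standard two-classifier benchmark. A minimal sketch with placeholder sensor windows (the feature layout and labels are our assumptions, not the paper's dataset):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Placeholder windows of mask sensor data: humidity, temperature, VOC,
# inertial and light features; labels: mask worn correctly (1) or not (0)
X = np.random.rand(500, 8)
y = np.random.randint(0, 2, 500)

# 5-fold cross-validated accuracy for each classifier
for name, clf in [("SVM", SVC()), ("Naive Bayes", GaussianNB())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```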

Alex Gibbs, Tomias Scott, Cesar Gonzalez, Renan Barbosa, Ronald Coro, Robert Dizor and S M Mizanoor Rahman

Impedance-Based Feedforward Learning-Control for Natural Interaction between a Prosthetic Hand and the Environment Presentation Video (Abstract)
Robotic prosthetic hands or arms often do not apply appropriate force and pressure and do not provide tactile and proprioceptive feedback as accurately and precisely as a human hand, which makes prosthetic arms less user-friendly and less convenient. The lack of human-like tactile and proprioceptive feedback may also cause serious safety problems in the interaction between a prosthetic arm and the environment. This paper proposes a supervised learning-based solution to this problem, associated with a support vector machine (SVM) classifier: a method that allows the synthetic hand or prosthetic arm to properly apply forces to the environment (and react to the forces applied by the environment on the prosthetic arm in the form of tactile and proprioceptive forces or pressures). As part of the entire goal, we create a glove instrumented with piezoelectric tactile sensors that fits over one of the hands, applies forces on the environment (an object grasped by a human subject wearing the glove), and records the applied forces/pressures along with the proprioceptive and tactile feedback. In a user study, we subjectively evaluate the interaction between the environment and the human hand wearing the glove. Based on the user study results and the measured forces, we then outline a supervised learning algorithm to be applied with a support vector machine to classify natural and unnatural interaction between the glove (potential prosthetic arm) and the object (environment). The learned (trained) algorithm is then proposed to be used to develop feedforward learning-control for achieving human-like natural and intuitive interaction between prosthetic arms and the environment.

Evy van Weelden, Maryam Alimardani, Travis Wiltshire and Max Louwerse

Advancing the Adoption of Virtual Reality and Neurotechnology to Improve Flight Training Presentation Video (Abstract)
Virtual reality (VR) has been used for training purposes in a wide range of industries, including education, healthcare, and defense. VR allows users to train in a safe and controlled digital environment while being immersed and highly engaged in a realistic task. One of its advantages is that VR can be combined with multiple wearable sensing technologies, allowing researchers to study (neuro)physiological and cognitive processes elicited by dynamic environments and adapt these simulations on the basis of such processes. However, the potential of VR combined with neurotechnology to facilitate effective and efficient aviation training has not yet been fully explored. For instance, despite the growing interest in including VR as part of the training programs for military and commercial airline pilots, it is still unclear what the effectiveness of VR is in short- and long-term training of pilots. This paper provides an overview of the state-of-the-art research in VR applications for aviation training and identifies challenges and future opportunities. We particularly discuss the potential of neurotechnology in objective measurement of training progress and providing real-time feedback during VR flight tasks. Overall, VR combined with neurotechnology for flight training holds promise to maximize individual learning progress.
13:00 - 14:00

Lunch Break

14:00 - 15:00

Session 3: D1.3: Keynote

Prof. Xudong Zhang, Ph.D.

Chair: David Kaber

ADVANCING MUSCULOSKELETAL MODELS FOR DIGITAL-AGE HUMAN-MACHINE SYSTEMS AND HEALTH ENGINEERING Presentation Video (Abstract) | Remote, streamed in Building 22-A, Lecture Hall 2
Musculoskeletal modeling plays an irreplaceable role in advancing biomechanical science and addressing questions that are difficult, ineffective, or impossible to address by experimental studies. It also is enabling digital-age engineering applications that can potentially transform the design of human-machine systems (e.g., vehicles, workplaces) as well as the prevention, treatment, and management of musculoskeletal injuries. The new scientific inquiries and digital applications have however heightened the demands on levels of detail, accuracy, and computational performance for the underlying models. This keynote will demonstrate the systematic efforts by our group in advancing musculoskeletal models to meet such demands. I will first describe the classic modeling frameworks along with some basic concepts, while highlighting the critical gaps and obstacles. This will be followed by a presentation of ways to acquire experimental data with an emphasis on how contemporary multi-modality imaging technologies can improve the accuracy and specificity of structural representations as models’ building blocks. I will then use examples to compare two “put-it-all-together” modeling strategies: (1) additive multi-stage dynamic modeling to achieve computational tractability for highly complex systems, and (2) integrative multi-scale subject-specific modeling to support personalized clinical applications. I will conclude with a summary and some remarks on the promises and challenges of this line of research, particularly in the context of big data and AI.
15:00 - 15:30

Coffee Break

15:30 - 17:40

Session 4: D1.4: Autonomous and Assisted Driving

Chair: Antonio Guerrieri

Muhua Guan, Zheng Wang, Bo Yang and Kimihiko Nakano

A Classified Driver’s Lane-Change Decision-Making Model Based on Fuzzy Inference for Highly Automated Driving Presentation Video (Abstract)
Many efforts have been devoted to modeling drivers’ lane-change decision-making process. However, most of them proposed a general model and ignored drivers’ various driving habits. In this study, a classified driver’s lane-change decision-making model based on fuzzy inference was proposed. A driving experiment was conducted to determine the membership functions. To accommodate the various driving habits and preferences of drivers, the proposed model was classified into three types, namely aggressive, medium, and conservative. For model validation, a mathematical simulation was run to compare the classified fuzzy model with a conventional model proposed in a previous study. Simulation results showed that the classified fuzzy models could make differentiated lane-change decisions. Furthermore, the classified fuzzy models made more stable lane-change decisions than the conventional model. This study suggests the potential of using the proposed model for the design of highly automated driving with different driving types.
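
As a flavor of the fuzzy-inference machinery involved, here is a minimal sketch of a single Mamdani-style lane-change rule with triangular membership functions. All variables, parameters, and the rule itself are illustrative; the paper derives its membership functions from its driving experiment.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

gap = 25.0        # gap to the lead vehicle in the target lane [m] (hypothetical)
rel_speed = 3.0   # closing speed towards that vehicle [m/s] (hypothetical)

# Hypothetical memberships; the paper fits these from experimental driving data
safe_gap = tri(gap, 15, 40, 80)
closing_fast = tri(rel_speed, 0, 5, 10)

# One illustrative rule: IF gap is safe AND not closing fast THEN change lane
change_lane_degree = min(safe_gap, 1 - closing_fast)
print(change_lane_degree)  # degree of support for a lane change in [0, 1]
```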

Fred Atilla and Maryam Alimardani

EEG-based Classification of Drivers Attention using Convolutional Neural Network Presentation Video (Abstract)
Accurate detection of a driver’s attention state can help develop assistive technologies that respond to unexpected hazards in real time and therefore improve road safety. This study compares the performance of several attention classifiers trained on participants’ brain activity. Participants performed a driving task in an immersive simulator where the car randomly deviated from the cruising lane. They had to correct the deviation, and their response time was considered an indicator of attention level. Participants repeated the task in two sessions; in one session they received kinaesthetic feedback and in the other no feedback. Using their EEG signals, we trained three attention classifiers: a support vector machine (SVM) using EEG spectral band powers, and a Convolutional Neural Network (CNN) using either spectral features or the raw EEG data. Our results indicated that the CNN model trained on raw EEG data obtained under kinaesthetic feedback achieved the highest accuracy (89%). While using a participant’s own brain activity to train the model resulted in the best performance, inter-subject transfer learning still performed well (75%), showing promise for calibration-free Brain-Computer Interface (BCI) systems. Our findings show that CNN and raw EEG signals can be employed for effective training of a passive BCI for real-time attention classification.
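
For readers curious what "a CNN on raw EEG" looks like in code, here is a minimal PyTorch sketch of a small 1-D convolutional classifier over multi-channel EEG windows. The architecture and dimensions are our illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class EEGAttentionCNN(nn.Module):
    """Small 1-D CNN over raw multi-channel EEG windows (illustrative only)."""
    def __init__(self, n_channels=32, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=64, padding=32),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, padding=8),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
        )
        # Infer the flattened feature size with a dummy forward pass
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = EEGAttentionCNN()
logits = model(torch.randn(8, 32, 512))  # 8 raw EEG windows -> (8, 2) class scores
```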

Gaojian Huang and Brandon Pitts

Driver-Vehicle Interaction: The Effects of Physical Exercise and Takeover Request Modality on Automated Vehicle Takeover Performance between Younger and Older Drivers Presentation Video (Abstract)
Semi-automated vehicles still require manual takeover intervention. For older drivers, age-related declines may make takeover transitions difficult, but the current literature on takeover and aging is mixed. Non-chronological age factors, such as engagement in physical exercise, which has been shown to mitigate perceptual and cognitive declines, may be contributing to these conflicting results. The goal of this pilot study was to examine whether age, physical exercise, and takeover request alert modality influence post-takeover performance. Sixteen younger and older adults were divided into exercise and non-exercise groups, and completed takeover tasks with seven different types of takeover requests. Overall, older adults in the physical exercise group had shorter decision-making times and lower maximum resulting jerk, compared to seniors in the non-exercise group. Takeover request type did not influence takeover performance. Findings may contribute to theories on aging and inform the development of next-generation automated vehicle systems.

Takuma Yabe and Hiroaki Yano

Haptic interface for presenting enveloping force from remote obstacles in a personal vehicle Presentation Video (Abstract)
Small electric vehicles are expected to be used in Shared Spaces, where driving them safely becomes an issue. In this research, a vehicle control interface that consists of a haptic joystick and five 1-degree-of-freedom manipulators on the joystick is proposed. It conveys force to the driver's hand and fingers through the surrounding manipulators when obstacles are in the proximity area. The direction of the obstacles and the direction of haptic pressure are linked so that the driver can figure out the obstacle's direction without visual information. We show that the system enables drivers to recognize obstacles that should be watched out for. It is also revealed that the direction the HMI presents is recognizable. In addition, the system is usable when the obstacle is in a blind spot.

Chao He and Dirk Söffker

Human reliability analysis in situated driving context considering human experience using a fuzzy-based clustering approach Presentation Video (Abstract)
Although increasingly high-level advanced driver assistance systems (ADAS) are applied to driving, human driver reliability is still critical for driving safety, as human-related accidents account for the highest proportion of total accidents. Existing reliability approaches qualify human behaviors in a static manner. In this contribution, dynamically changing situations are considered: as an example, dynamic and situated driving context is used for human reliability evaluation. The dynamic and situated driving context requires dynamic solutions for human reliability evaluation. The cognitive reliability and error analysis method (CREAM) provides an evaluation method for human reliability in industrial fields; when it is applied to situated context, adaptation is required. Furthermore, human experience, as an important factor for driving safety, also needs to be considered when human driver reliability is evaluated. In this contribution, three variables are selected to evaluate human driver experience (HDE) in situated driving context. Meanwhile, a new list of common performance conditions (CPCs) in CREAM to characterize the situated driving context is generated due to the application limits of the CPCs in original CREAM. To determine the levels of the HDE variables and the newly generated CPCs, fuzzy neighborhood density-based spatial clustering of applications with noise (FN-DBSCAN) is applied to driving data to define the membership function parameters. Therefore, HDE and a human driver reliability score (HDRS) in situated driving context are calculated quantitatively. Next, a new evaluation index, the human performance reliability score (HPRS), is defined. The results show that the newly proposed method can quantify and evaluate human driver reliability in real time.

Federico Faruffini, Alessandro Correa-Victorino and Marie-Hélène Abel

Vehicle Autonomous Navigation with Context Awareness Presentation Video (Abstract)
Nowadays, many models performing global robotic navigation exist; they are capable of driving safely and autonomously and of reaching their set destination. However, most of them do not take into account the information coming from the context in which such navigation is occurring, resulting in a severe information loss. Without Context-Aware Navigation, it is not possible to build a model that lets the vehicle adapt its behaviour to the situation in the way a human driver spontaneously does. A study is therefore needed on how to connect the contextual information with the robot's control loop. For our solution we use semantic structures known as ontologies, which help the vehicle reason in real-time and change its own behaviour as a function of the given contextual information. After a definition of the Context of Navigation, in this paper we propose an approach to the problem of encoding Context Awareness in the Autonomous Navigation controller. Finally, this approach is put to the test in a simulator, and the results achieved are discussed.

Elisabeth Brandenburg, Diana Kozachek, Kathrin Konkol, Christiane Woelfel, Andreas Geiger and Rainer Stark

How Pedestrians Perceive Autonomous Buses: Evaluating Visual Signals Presentation Video (Abstract)
With the deployment of autonomous buses, sophisticated technological systems are entering our daily lives and their signals are becoming a crucial factor in human-machine interaction. The successful implementation of visual signals requires a well-researched human-centred design as a key component for the new transportation system. The autonomous vehicle we investigated in this study uses a variety of these: icons, LED panels, and text. We conducted a user study with 45 participants in a virtual reality environment in which four recurring communication scenarios between a bus driver and his passengers had to be correctly interpreted. For our four scenarios, efficiency and comprehension of each visual signal combination were measured to evaluate performance on different types of visual information. The results show that new visualization concepts such as LED panels lead to highly variable efficiency and comprehension, while text or icons were well accepted. In summary, the authors of this paper present the most efficient combinations of visual signals for four real-world scenarios.

Luca Crosato, Chongfeng Wei, Edmond Ho and Hubert Shum

Human-centric Autonomous Driving in an AV-Pedestrian Interactive Environment Using SVO Presentation Video (Abstract)
As Autonomous Vehicles (AVs) are becoming a reality, the design of efficient motion control algorithms will have to deal with the unpredictable and interactive nature of other road users. Current AV motion planning algorithms suffer from the freezing robot problem, as they often tend to overestimate collision risks. To tackle this problem and design AVs that behave human-like, we integrate a concept from Psychology called Social Value Orientation into the Reinforcement Learning (RL) framework. The addition of a social term in the reward function design allows us to tune the AV behaviour towards the pedestrian from a more reckless to an extremely prudent one. We train the vehicle agent with a state-of-the-art RL algorithm and show that Social Value Orientation is an effective tool to obtain pro-social AV behaviour.
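
A common angular formulation of Social Value Orientation weights ego and social rewards by the cosine and sine of an SVO angle; sweeping the angle moves behaviour from reckless to prudent. A minimal sketch of that idea (the paper's exact reward terms may differ):

```python
import numpy as np

def svo_reward(r_ego, r_pedestrian, svo_angle):
    """SVO-weighted reward: svo_angle = 0 is purely egoistic,
    pi/2 purely altruistic (a common angular SVO formulation)."""
    return np.cos(svo_angle) * r_ego + np.sin(svo_angle) * r_pedestrian

# The same situation rewarded under a reckless vs. a prudent AV "personality"
r_ego, r_ped = 1.0, -0.5  # progress for the AV, discomfort for the pedestrian
print(svo_reward(r_ego, r_ped, np.deg2rad(10)))  # near-egoistic: pedestrian barely counts
print(svo_reward(r_ego, r_ped, np.deg2rad(60)))  # pro-social: pedestrian discomfort dominates
```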

Welcome Reception

Get Together at Tessenow Loft, Elbauenpark, Magdeburg. Jointly with Summer School

Welcome to Day 2

Grab a cup of tea or coffee and join us online or on-site.

Session 5: D2.1: PhD Track

Chairs: Ronald Böck and Iulia Lefter

The Sino-German Symposium, Session 1

Megan Shyr and Sanjay Joshi

Validation of the Bayesian sensory uncertainty model of motor adaptation with a remote experimental paradigm Presentation Video (Abstract)
Understanding human motor learning and adaptation processes is an integral step in developing rehabilitative engineering solutions and training strategies for assistive technologies. Natural skill acquisition enables continually precise movements despite inherent noise in motor execution, sensory perception, and dynamic changes in body parameters (growth, age, etc.) and the external environment. As an initial step, motor learning research has aimed to identify the mechanisms of natural human adaptation during the acquisition of motor skills. The results presented here confirm existing literature on motor adaptation using a remote web-based experimental paradigm that could provide a valuable option for conducting additional future work with expanded, more diverse subject populations.

Zhou Gui and Andreas Harth

Towards a Data Driven Natural Language Interface for Industrial IoT Use Cases Presentation Video (Abstract)
The ubiquitous availability of sensors and smart devices makes IoT networks more and more complex to manage and control. A natural language interface (NLI) would allow users to interact with the devices via human language by translating the user command into a machine-interpretable meaning representation, often called a logical form. Despite the rapid development of conversational interfaces in smart home and personal intelligent assistant use cases, there is limited research and application in industrial sensor and actuator networks, usually referred to as the Industrial Internet of Things (IIoT). In this paper, we present an early-phase design principle of a semantic representation to express IIoT device interactions and propose a data-focused workflow for an IIoT automation system architecture.
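
To make "translating a user command into a machine-interpretable logical form" concrete, here is a purely invented illustration of what such a representation could look like for an IIoT actuator command; the schema is ours, not the authors'.

```python
# A hypothetical user utterance and a machine-interpretable "logical form" for it
command = "set the speed of conveyor 3 to 50 percent"

logical_form = {
    "action": "set_property",                  # what to do
    "device": {"type": "conveyor", "id": 3},   # which device
    "property": "speed",                       # which property
    "value": {"amount": 50, "unit": "percent"} # target value
}
```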

Luming Li

Technology Advances in Deep Brain Stimulation
10:00 - 10:10

Coffee Break & Room Change

10:10 - 11:15

Session 6: D2.2: Assistive Technologies and Virtual Reality

Chair: Travis Wiltshire

The Sino-German Symposium, Session 1

Victoria Buchholz and Stefan Kopp

Towards Adaptive Worker Assistance in Monitoring Tasks Presentation Video (Abstract)
Due to the introduction of more and more new technologies into the work environment, the role of the worker often changes from executing a task to monitoring an automated system. Monitoring tasks, however, can induce high levels of mental workload and thus lead to a decrease in the worker’s performance and consequently to an increase in the risk of serious errors. A promising solution to this problem is the use of assistance systems (AS). As the worker still remains responsible for error-free operation of the automated system, the AS should be designed in a human-centered way. Hence, it is crucial to examine the psychological factors that influence the user’s acceptance of the AS in order to ensure a successful long-term human-machine collaboration. A concept for a human-centered AS for a monitoring task is presented. Different assistance types and strategies, as well as methods to determine the right point in time to apply these strategies during the task, are considered. Finally, challenges for the development of such an AS are discussed and possible solutions are proposed.

Bernhard A. Sabel

Network-thinking of the brain

Sebastian Pimminger and Werner Kurschl

Prototyping Assistive Systems for Manual Assembly in Real Production Environments: Challenges and Lessons Learned Presentation Video (Abstract)
In current production environments, Industry 4.0 or smart production plays an essential role. The trend towards more and more individualization and an increasing number of product variants places high demands on people working in manufacturing. Assembly workers must achieve increasingly shorter cycle times and constantly learn how to assemble new products. Support from assistive systems (e.g. digital work instructions, pick-by-light, automatic component recognition for quality control) is necessary in these environments. But many systems and prototypes presented in this field work perfectly in lab environments yet may fall short when deployed in a real production environment. We developed a comprehensive and flexible assistance system for manufacturing and identified design and technical challenges which may only appear when working on the shop floor. In this paper, we present our experiences and discuss our key challenges and lessons learned.

Naveed Ahmed, Mohammad Lataifeh and Imran Junejo

Visual Pseudo Haptics for a Dynamic Squeeze / Grab Gesture in Immersive Virtual Reality Presentation Video (Abstract)
In this work, we analyze the suitability of employing visual feedback for pseudo haptics as a replacement for active haptics in an immersive virtual reality (VR) environment. Controller-free gesture interaction is widely considered to be a natural user interface in VR. As no controller is employed, the lack of active haptic feedback can often result in a frustrating experience for complex dynamic gestures, e.g., grab, squeeze, clasp. These actions are very easy to perform using specialized devices or controllers with active haptic feedback; e.g., data gloves with force feedback or controllers with analog triggers and vibrations can be utilized for immediate or continuous feedback. In contrast, these mechanisms are completely missing in a controller-free interaction. We present an on-screen visual mechanism as the pseudo haptic feedback for a dynamic squeeze / grab gesture to replace the active haptic feedback. Our proposed approach allows for the continuous visualization of a squeeze / grab gesture. We implemented an interaction mechanism to test the visualization for these dynamic gestures and compared it with a system with no pseudo haptics. The results from the user study show that an on-screen continuous visualization can be used as pseudo haptics for a dynamic squeeze / grab gesture in an immersive VR environment.

Surjo R. Soekadar

Next-generation Brain/Neural-Machine Interfaces for Restoration of Brain Function

Grzegorz Owczarek, Marcin Jachowicz, Mieszko Wodziński and Joanna Szkudlarek

Virtual reality (VR) for laser safety training Presentation Video (Abstract)
This contribution presents sample scenarios for laser safety training using virtual reality. The scenarios illustrate activities involving laser devices used in industry, medicine, didactic laboratories and laser shows. The described applications are incorporated into laser safety training conducted by the Central Institute for Labour Protection – National Research Institute (CIOP-PIB).
11:15 - 11:30

Coffee Break

Huiguang He

Is Deep Learning Brain-like? Using fMRI to reconstruct Images.
11:30 - 13:00

Session 7: D2.3: Collaborative Intelligent Systems and Applications (Special Session)

Chairs: Makoto Itoh and Weiming Shen

Jason Dekarske and Sanjay S. Joshi

Human Trust of Autonomous Agent Varies With Strategy and Capability in Collaborative Grid Search Task Presentation Video (Abstract)
Trust is an important emerging area of study in human-robot cooperation. Many studies have begun to look at the issue of robot (agent) capability as a predictor of human trust in the robot. However, the assumption that agent capability is the sole predictor of human trust could underestimate the complexity of the problem. This study aims to investigate the effects of agent strategy and agent capability in a visual search task. Fourteen subjects were recruited to partake in a web-based grid search task. They were each paired with a series of autonomous agents to search an on-screen grid to find a number of outlier objects as quickly as possible. Both the human and the agent searched the grid concurrently and the human was able to see the movement of the agent. In each trial, a different autonomous agent with its assigned capability used one of three search strategies to assist its human counterpart. After each trial, the autonomous agent reported the number of outliers it found, and the human subject was asked to determine the total number of outliers in the area. Some autonomous agents reported only a fraction of the outliers they encountered, thus encoding a varying level of agent capability. Human subjects then evaluated statements related to the behavior, reliability, and trust of the agent. The results showed increased measures of trust and reliability with increasing capability. Additionally, the most legible search strategies received the highest average ratings in a measure of familiarity. Remarkably, given no prior information about the capabilities or strategies that they would see, subjects were able to determine consistent trustworthiness of the agent. Furthermore, both the capability and strategy of the agent had statistically significant effects on the human's trust in the agent.

Jia Liu

Principles that govern the topographic organization of human visual cortex

Diogo Guimaraes, Dennis Paulino, Antonio Correia, Luis Trigo, Pavel Brazdil and Hugo Paredes

Towards a Human-AI Hybrid Framework for Inter-Researcher Similarity Detection Presentation Video (Abstract)
Understanding the intellectual landscape of scientific communities and their collaborations has become an indispensable part of research per se. In this regard, measuring similarities among scientific documents can help researchers to identify groups with similar interests as a basis for strengthening collaboration and university-industry linkages. To this end, we intend to evaluate the performance of hybrid crowd-computing methods in measuring the similarity between document pairs by comparing the results achieved by crowds and artificial intelligence (AI) algorithms. That said, in this paper we designed two types of experiments to illustrate some issues in calculating how similar an automatic solution is to a given ground truth. In the first type of experiments, we created a crowdsourcing campaign consisting of four human intelligence tasks (HITs) in which the participants had to indicate whether or not a set of papers belonged to the same author. The second type involves a set of natural language processing (NLP) processes in which we used the TF-IDF measure and the Bidirectional Encoder Representations from Transformers (BERT) model. The results of the two types of experiments carried out in this study provide preliminary insight into detecting major contributions from human-AI cooperation in similarity calculation in order to achieve better decision support. We believe that in this case decision makers can be better informed about potential collaborators based on content-based insights enhanced by hybrid human-AI mechanisms.
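
Of the two NLP baselines mentioned, the TF-IDF side is straightforward to sketch: vectorize the documents and compare them with cosine similarity. A self-contained example with made-up documents (the documents and threshold of "similar" are ours, for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Deep learning for human activity recognition with wearable sensors.",
    "Wearable sensor based activity recognition using neural networks.",
    "Medieval trade routes in the Baltic region.",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1]))  # high: similar research interests
print(cosine_similarity(tfidf[0], tfidf[2]))  # low: unrelated fields
```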

Viviane Farias, Luiz Oliveira and Jano de Souzas

A collaborative approach to support interoperability and awareness of Internet of Everything (IoE) enablers Presentation Video (Abstract)
Internet of Everything (IoE) is a promising paradigm that integrates the Internet of Things (IoT), Industrial Internet, Internet of People, and many Internet-based paradigms to transform industry, society, and people's lives. It provides seamless integration of intelligent devices - with sensing, identification, processing, communication, and networking capabilities; Big Data - with machine learning, analytics, and distributed computing; and human sensors - with collaboration, intelligent cognition and social networks. IoE brings excellent opportunities to improve society with collaborative intelligent systems. However, these new technologies also bring significant challenges and do not address major existing issues, including interoperability, reliability, and intelligence management. Awareness of these issues is required before IoE applications can be widely deployed. This work introduces an observatory for cataloging IoE applications. Registering these applications will support researchers, developers, and users in exchanging more information and designing improved IoE applications, facilitating the composition of different enablers (human and non-human). The main contributions of this work include the proposal of a technology platform, the IoE Database (IoEDB), that enables the distributed and collaborative cataloging of IoE initiatives and provides the evolution of a ‘live’ IoE knowledge-based taxonomy to support interoperability and awareness of IoE enablers.

Mustafa Demir, Mustafa Canan and Myke Cohen

Modeling Team Interaction and Interactive-Decision Making in Agile Human-Machine Teams Presentation Video (Abstract)
In a complex task environment in which team behavior emerges and evolves, team agility is one of the primary determinants of a team’s success. Agility is considered an emergent phenomenon in which lower-level system elements interact to adapt to the dynamic environment. One of the dimensions of team agility is interactive decision-making. In this study, we conceptually model individual team members’ interactive decision-making processes for their taskwork; we observe how much the choices of one team member depend on antecedent decisions and the behavior of the other team members. This also helps us understand how team members synchronize during the decision-making process in agile teams, especially when team members team up with a machine. To improve the understanding of interactive decision-making, we also propose two modeling techniques: (1) quantum cognition for the taskwork decision-making processes and (2) nonlinear dynamical systems modeling for teamwork.

Naveed Ahmed, Mohammad Lataifeh, Alaa Alhamarna, Maryam Al-Nahdi and Sara Al-Mansouri

LeARn: A Collaborative Learning Environment using Augmented Reality Presentation Video (Abstract)
This work presents “LeARn”, a new network-based collaborative learning environment that employs augmented reality to transform a real-world surface into a virtual lab. The system is a contribution towards replacing a face-to-face learning environment with an augmented collaborative setting. To demonstrate the system, a scenario with a virtual chemistry lab is presented. In this demo, any real-world surface is augmented with virtual lab equipment utilized in a chemistry experiment. The virtual lab is hosted by the instructor, and all the students can join the lab using only their mobile phones or tablets. Each member can interact with the lab equipment, which can be visualized in real-time by the instructor or fellow students. The system allows for real-time communication that fosters a true collaborative environment. The resulting system demonstrates that a complex lab experiment can be performed from a personalized space that can incorporate positive traits of a collaborative environment. The system was deployed and evaluated in an uncontrolled user study, and the results show the effectiveness of an AR-based interactive and collaborative learning environment.
13:00 - 14:00

Lunch Break

14:00 - 16:05

Session 8: D2.4: Trust and Ethics

Chair: Dan Verständig

The Sino-German Symposium, Session 2

Joshua Ho and Chien-Min Wang

Human-Centered AI using Ethical Causality and Learning Representation for Multi-Agent Deep Reinforcement Learning Presentation Video (Abstract)
Human-Centered Computing and AI are two fields devoted to several cross-intersecting interests in modern AI design. They consider human factors and machine learning algorithms to enhance compatibility and reliability for human-robot interaction and cooperation. In this work, we propose a novel design concept for the challenging issues that have raised ethical dilemmas; an augmented ethical causality with successor representation for policy gradient models the Human-Centered AI with environments. The proposed system leverages Human-Centered AI using explainable knowledge to construct ethical causations, and we show that it significantly outperformed the statistical approach and baselines alone by further considering meta-parametric Human-Centered ethical priorities, when compared to other approaches in simulated game theory environments of Deep Reinforcement Learning. The experimental results aim to efficiently and effectively assess the cause, effect and impact of multi-agent heterogeneity in DRL environments for natural, general and significant causal learning representations.

Xiao Jian Li

The new toolchain development for brain-machine interface studies in animal models

Sayantan Polley, Aditya Dey, Chandan Radhakrishna, Nishitha Nancy Lima, Suraj Shashidhar, Marcus Thiel and Andreas Nürnberger

Evaluating Reliability in Explainable Search Presentation Video (Abstract)
Explainability in AI (XAI) is being investigated in various AI-driven systems like search engines to promote trust and fairness in AI. In the context of search systems, one major XAI goal is to explain the notion of similarity in text to a non-expert user, like an avid reader searching for books in a digital library. What is the similarity between book A and B with respect to narrative time? How does the emotion change? This is often an important criterion for searching books. In this work we extend a recent explainable AI driven book search engine [1] with local explanations that visually support users in comprehending the similarity model. The contribution of this work lies in the use of topic models and sentiment-driven features to show how the plot development of a fiction book changes over narrative time. Each book is represented as a feature vector in an interpretable feature space with aspects such as sentence complexity and writing style. In the absence of ground truth similarities and explanations, the system is evaluated on the reliability of its search results and explanations in user studies. Different baselines are created, such as a pseudo-random ranker and a bag-of-words model. The baselines are compared with our back-end ranking model, keeping the user interface the same for all models. Experimental results of a lab-based user study with eye tracking indicate that the explanations help make the search system reliable for non-expert users.

Shawaiz Bhatti, Mustafa Demir, Nancy J. Cooke and Craig J. Johnson

Assessing Communication and Trust in an AI Teammate in a Dynamic Task Environment Presentation Video (Abstract)
This research examines the relationship between anticipatory pushing of information and trust in human–autonomy teaming in a remotely piloted aircraft system - synthetic task environment. Two participants and one AI teammate emulated by a confederate executed a series of missions under routine and degraded conditions. We addressed the following questions: (1) How do anticipatory pushing of information and trust change from human to human and human to autonomous team members across the two sessions? and (2) How is anticipatory pushing of information associated with the trust placed in a teammate across the two sessions? This study demonstrated two main findings: (1) anticipatory pushing of information and trust differed between human-human and human-AI dyads, and (2) anticipatory pushing of information and trust scores increased among human-human dyads under degraded conditions but decreased in human-AI dyads.

Ulf Ziemann

Real-time EEG-TMS for brain-state dependent stimulation

Zelun Tony Zhang, Yuanting Liu and Heinrich Hussmann

Pilot Attitudes Toward AI in the Cockpit: Implications for Design Presentation Video (Abstract)
As the aviation industry is actively working on adopting AI for air traffic, stakeholders agree on the need for a human-centered approach. However, automation design is often driven by user-centered intentions, while the development is actually technology-centered. This can be attributed to a discrepancy between the system designers' perspective and real-world operational complexity. The same can be observed with AI applications where most design efforts focus on the interface between humans and AI, while the overall system design is built on preconceived assumptions. To understand potential usability issues of AI-driven cockpit assistant systems from the users' perspective, we conducted interviews with four experienced pilots. While our participants did discuss interface issues, they were much more concerned about how autonomous systems could be a burden if the operational complexity exceeds their capabilities. Our results thus point toward an area that is often neglected by designers and could cause unexpected issues in real usage. Besides commonly addressed human-AI interface issues, we believe that more consideration for operational complexities on a system-design level is necessary.

Shouyan Wang

Towards precisely modulating the brain activities

William Page

Advancing Human Adoption of Technology Presentation Video (Abstract)
As technology continues to advance, humans will need to advance with it. This paper starts out by analyzing how humans adopt technology at a societal level. From there, I discuss some of the methods with which we can increase the ability of technology to be adopted, mainly pertaining to education and increasing the usability of technology in a general sense. These methods will help bring us to a future paradigm of technology that I think has great benefits: that of ubiquitous computing. Finally, I go over the ethical implications of technological adoption and address them in a contemporary sense.

Myke Cohen, Mustafa Demir, Erin Chiou and Nancy Cooke

The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming Presentation Video (Abstract)

Andrea Antal

Transcranial electrical stimulation in the clinical practice

Yunmei Liu and David Kaber

Quantitative models for automation rate and situation awareness response: A case study of levels of driving automation Presentation Video (Abstract)
Many taxonomies of levels of automation have been presented in the literature; however, the discrete and ordinal nature of these methods may limit reliable prediction of operator performance. This study defined an “automation rate” to quantify the level of automation in systems. To calculate the automation rate, it is necessary to: classify all functions in the automation system according to stages of information processing, calculate the automation rate for each stage, set weights for these automation rates, and finally obtain the overall automation rate for the system. The practicality and feasibility of this model are verified through a case study analysis. In addition, this paper proposes a new relationship between the automation rate and operator situation awareness response, based on existing empirical research findings. Through case analysis and mathematical proof, the rationality of this formulation is demonstrated. This work lays the foundation for subsequent operator performance optimization analysis.
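
One plausible reading of the stage-wise computation described above, written out as a formula. The symbols, and the split into four stages (information acquisition, analysis, decision, action, following common levels-of-automation taxonomies), are our assumptions, not taken from the paper:

```latex
% r_i: fraction of functions automated in information-processing stage i
% w_i: weight of stage i, with the weights summing to one
\[
  r_i = \frac{\text{automated functions in stage } i}{\text{total functions in stage } i},
  \qquad
  \mathrm{AR} = \sum_{i=1}^{4} w_i \, r_i,
  \qquad
  \sum_{i=1}^{4} w_i = 1 .
\]
```
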
16:00 - 16:30

Coffee Break

16:30 - 18:00

Session 9: D2.5: Panel Discussion (Jointly with Summer School)

Hosted by: Ronald Böck (OVGU) and Tiago H. Falk (MuSAE Lab, INRS Montreal)

Presentation Video
19:00 - 23:00

Conference Dinner

At Africambo Lodge, Zoo Magdeburg. Jointly with Summer School.
A bus transfer will be offered from OVGU Building 22-A at 18:15; the transfer back is scheduled for 23:00.

08:30 - 10:00

Session 10: D3.1: Coffee, Poster and Demos (Jointly with Summer School)

Chairs: Ernesto De Luca and Gyorgy Eigner

Jiahua Xu, Zheng Wu, Andreas Nürnberger and Bernhard A. Sabel

Predicting Brain Electrical Stimulation Outcome in Stroke by Clinical-inspired Hybrid Graph Convolutional Autoencoder (Abstract)
Noninvasive brain stimulation (NIBS) has gained much attention from both academia and clinical practice. Its curative effect shows positive results in different kinds of neurological and ophthalmological disorders. Stroke is one disorder that could benefit from this new technology. However, the unknown underlying mechanism of brain stimulation hinders our further exploration of brain recovery. Studies on the prediction of possible recovery rates with brain network features are rare. This study proposes a hybrid graph convolutional autoencoder (HGCAE) to predict stroke recovery potential after electrical stimulation therapy. Twenty-four occipital stroke patients were randomly assigned to one of three groups, receiving different NIBS interventions. After two months, we identified the responders based on visual performance. The results show that HGCAE based on brain network measures achieved an overall sensitivity of 91% in predicting recovery following NIBS intervention. This result may help predict the potential outcome of neuronal modulation in stroke patients and allows us to gain more insight into clinical interventions using neuromodulation.

Kenechukwu Mbanisi, Michael Gennert and Zhi Li

SocNavAssist: A Haptic Shared Autonomy Framework for Social Navigation Assistance of Mobile Telepresence Robots (Abstract)
In the rapidly evolving world of remote work, mobile telepresence robots (MTRs) have become increasingly popular, providing new avenues for people to actively engage in activities at a distance. The existing studies indicate, however, that remote navigation around humans in dense environments can be challenging for humans, resulting in a decreased level of satisfaction. Work on shared autonomy for navigation has generally addressed static environments or situations where only one pedestrian interacts with the robot. In this paper, we present our ongoing work on SocNavAssist, a haptic shared autonomy framework for navigation assistance of mobile telepresence robots in human-populated environments. It uses a modified approach of reciprocal velocity obstacles to consider social constraints in dynamic collision avoidance. We also provide visualization of system intent via predicted trajectories on an augmented visual feedback interface to enhance transparency and cooperation. In addition, we outline the proposed experiment to be used in future work to evaluate the proposed framework.

Ying Gao

Default mode network and attention network in unconscious processing (Abstract)
The critical role of unconscious processing in problem solving has been demonstrated by previous empirical evidence. However, we still do not understand how the unconscious mind processes information differently from the conscious mind. One previous study utilized an event-related potential approach to demonstrate the different processes we go through when processing an unconscious cue versus a conscious cue. In addition to this sequential and localized processing view, this study reanalyzed the data from that study and aimed to investigate how the default mode network and attention network react to and engage in the processing of an unconscious cue. Functional connectivity was measured and graph theory was utilized to obtain the network dynamics. The results showed that, compared to the no-cue condition, the conscious-cue condition did not exhibit a different communication pattern within the default mode network and the attention network. After the presentation of the unconscious cue, however, the default mode network and attention network both reacted with enhanced communication within the network. This suggests that if a stimulus is presented below the perceptual threshold, it will activate the default mode network, which might support the divergent activation of the cue. The attention network might be engaged to monitor the default mode network and inhibit some unwanted divergent activation.
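
The "functional connectivity plus graph theory" step typically reduces to thresholding a channel-by-channel connectivity matrix into a graph and computing network measures. A minimal sketch with synthetic data (the threshold and the choice of measures are illustrative, not the study's exact analysis):

```python
import numpy as np
import networkx as nx

# Synthetic stand-in for a channel-by-channel functional connectivity matrix
rng = np.random.default_rng(0)
conn = np.abs(rng.standard_normal((32, 32)))
conn = (conn + conn.T) / 2  # symmetrize
np.fill_diagonal(conn, 0)

# Threshold into a binary graph, then compute simple network measures
adjacency = (conn > 1.0).astype(int)
G = nx.from_numpy_array(adjacency)
print(nx.density(G))             # overall level of within-network communication
print(nx.average_clustering(G))  # local clustering of that communication
```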

Guoyang Liu, Di Zhang, Lan Tian and Weidong Zhou

EEG-Based Familiar and Unfamiliar Face Classification Using Differential Entropy Feature (Abstract)
This study presents a novel approach to familiar and unfamiliar face classification based on electroencephalography (EEG). Firstly, the raw EEG epoch is temporally split into three overlapping segments, and each segment is decomposed into multiple sub-bands by band-pass filters. Then, differential entropy is employed to extract discriminative EEG features. Finally, the obtained features are concatenated and classified with a support vector machine (SVM). Results on our database indicate that the proposed method achieves a mean accuracy of 76.2% over five participants. This work primarily demonstrates that differential entropy is an effective feature for EEG-based familiar and unfamiliar face classification, with the potential to be applied to other EEG-based visual task analyses.
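
The pipeline is straightforward to sketch. For a Gaussian band-limited signal, differential entropy has the closed form 0.5·ln(2πe·σ²); below it is computed per channel and band and fed to an SVM. The band choices, sampling rate, and data are our assumptions, and the split into three overlapping segments is omitted:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def differential_entropy(x):
    # closed form for a Gaussian signal: 0.5 * ln(2*pi*e*var)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(epoch, fs=250):
    """epoch: (channels, samples) -> one DE value per channel and band."""
    feats = []
    for lo, hi in BANDS.values():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, epoch, axis=-1)
        feats.extend(differential_entropy(ch) for ch in filtered)
    return np.array(feats)

# toy data: 40 epochs, 8 channels, two classes (familiar / unfamiliar)
rng = np.random.default_rng(0)
X = np.stack([de_features(rng.standard_normal((8, 500))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = SVC(kernel="rbf").fit(X, y)
```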

Yuan Chen, Lirong Yan, Jiawen Zhang, Zhizhou Guan, Yibo Wu and Fuwu Yan

Distraction detection of driver based on EEG signals in a simulated driving with alternative secondary task (Abstract)
Driving distraction is a major human factor in traffic accidents. Distraction seriously affects drivers' cognitive processes, impairing their ability to fully perceive the surrounding environment, make correct judgments, and perform the proper operations in time. It is therefore important to identify drivers' attentional state accurately and quickly during driving. The objective of this study was to develop a novel driving distraction detection method based on electroencephalographic (EEG) signals. A simultaneous driving and distraction experiment was designed, in which an alternating secondary task with a 2-back paradigm was used to induce visual or auditory distraction. The EEG signals of 22 subjects were analysed to distinguish the focused state of the driver from distraction. Results indicated that the proposed method, based on EEGNet and long short-term memory (LSTM), provided an average classification accuracy of 71.1% in three-class classification. Reducing the number of electrodes from 63 to 14 did not significantly reduce the accuracy, so a higher model efficiency could be obtained.
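
The paper's exact architecture is not given here; the sketch below shows one plausible way (our assumptions throughout) to chain an EEGNet-style convolutional front end with an LSTM for the three-class problem, using the reduced 14-electrode montage:

```python
import torch
import torch.nn as nn

class EEGNetLSTM(nn.Module):
    """Sketch: EEGNet-style temporal + depthwise spatial convolutions,
    followed by an LSTM over the resulting feature sequence."""
    def __init__(self, n_ch=14, n_classes=3):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, (1, 64), padding=(0, 32), bias=False)
        self.spatial = nn.Conv2d(8, 16, (n_ch, 1), groups=8, bias=False)
        self.bn = nn.BatchNorm2d(16)
        self.pool = nn.AvgPool2d((1, 4))
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)   # focused / visual / auditory

    def forward(self, x):                    # x: (batch, 1, channels, samples)
        h = self.pool(torch.relu(self.bn(self.spatial(self.temporal(x)))))
        h = h.squeeze(2).permute(0, 2, 1)    # (batch, time, features)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])           # classify from the last state

model = EEGNetLSTM()
logits = model(torch.randn(4, 1, 14, 256))   # 4 toy epochs
```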

Mustafa Mohammed, Heejin Jeong and Myunghee Kim

Increasing the Efficacy of a Powered Ankle-Foot Prosthesis with 3D Joint Angle Tracking (Abstract)
The goal of this research is to use joint angle tracking from depth sensors to design a real-time assistive algorithm for a powered ankle-foot prosthesis. These sensors are used primarily for computer vision tasks in rehabilitation; their use in motor feedback control, however, is limited. By extracting joint positions from a depth map, the joint angle is calculated and, together with angular velocity collected from an external IMU, used to train a neural network. This will be deployed alongside human-in-the-loop optimization to determine how the most effective prosthetic regimen can be found in the most efficient manner. The 3D joint angle method is evaluated with EMG data and the metabolic cost of prosthetic use, and is compared to current control methods.
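
The joint-angle step reduces to elementary vector geometry once three joint positions are extracted from the depth map; a minimal sketch with hypothetical coordinates:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees), given 3D positions a, b, c of three
    adjacent joints extracted from a depth map."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical knee, ankle, and toe positions in metres -> ankle angle
print(joint_angle([0.0, 0.5, 2.5], [0.0, 0.1, 2.5], [0.15, 0.0, 2.45]))
```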

Rizwan Ahmed, Muhammad Shahzad Rafiq and Imran N Junejo

Crowd Modeling using Temporal Association Rules (Abstract)
Understanding crowd behavior has attracted tremendous attention from researchers over the years. In this work, we propose an unsupervised approach for crowd scene modeling and anomaly detection using association rule mining. Using object tracklets, we identify different paths/routes, i.e., the distinct events taking place at various locations in the scene. Interval-based frequent temporal patterns characterizing the scene model are mined with a temporal mining algorithm based on Allen's interval temporal logic. The resulting frequent patterns are used to generate temporal association rules, which convey the semantic information contained in the scene. Our overall aim is to generate rules that govern the dynamics of the scene. Finally, the anomalies, both spatial and spatio-temporal, are found by considering behavioral interactions among different objects. We apply the proposed approach to a publicly available dataset and demonstrate its effectiveness.
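
Allen's interval logic, which the mining step relies on, reduces to a case analysis over interval endpoints. A compact sketch of the thirteen relations (our own helper, not the paper's miner):

```python
def allen_relation(a, b):
    """Return the Allen relation of interval a w.r.t. interval b.
    Intervals are (start, end) pairs with start < end."""
    (as_, ae), (bs, be) = a, b
    if ae < bs:   return "before"
    if be < as_:  return "after"
    if ae == bs:  return "meets"
    if be == as_: return "met-by"
    if as_ == bs and ae == be: return "equals"
    if as_ == bs: return "starts" if ae < be else "started-by"
    if ae == be:  return "finishes" if as_ > bs else "finished-by"
    if bs < as_ and ae < be: return "during"
    if as_ < bs and be < ae: return "contains"
    return "overlaps" if as_ < bs else "overlapped-by"

# event a (frames 1-4) overlaps event b (frames 3-8)
print(allen_relation((1, 4), (3, 8)))   # -> overlaps
```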

Suhita Ghosh, Andreas Krug, Georg Rose and Sebastian Stober

Perception-Aware Losses Facilitate CT Denoising and Artifact Removal (Abstract)
Concerns over radiation-related health risks associated with the increasing use of computed tomography (CT) have accelerated the development of low-dose strategies. The need for low dose is even greater in interventional applications, where repeated scanning is performed. However, on the noisier and undersampled low-dose datasets, standard reconstruction algorithms produce low-resolution images with severe streaking artifacts, which adversely affects CT-assisted interventions. Recently, variational autoencoders (VAEs) have achieved state-of-the-art results for the reconstruction of high-fidelity images. Existing VAE approaches typically use the mean squared error (MSE) as the loss because it is convex and differentiable. However, pixel-wise MSE does not capture the perceptual quality difference between the target and the model predictions. In this work, we propose two simple but effective MSE-based perception-aware losses, which facilitate a better reconstruction quality. The proposed losses are motivated by perceptual fidelity measures used in image quality assessment. One loss computes the MSE in the spectral domain; the other computes the MSE in pixel space and in the Laplacian-of-Gaussian-transformed domain. We use a hierarchical vector-quantized VAE equipped with the perception-aware losses for the artifact removal task. The best-performing perception-aware loss improves the structural similarity index measure (SSIM) from 0.74 to 0.80. Further, we analyze the role of the pertinent components of the architecture in the denoising and artifact removal task.
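
A sketch of the two loss ideas, assuming PyTorch; the kernel parameters and exact formulations are our assumptions, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def spectral_mse(pred, target):
    """MSE computed on 2D FFT magnitudes (spectral-domain loss)."""
    return F.mse_loss(torch.abs(torch.fft.fft2(pred)),
                      torch.abs(torch.fft.fft2(target)))

def log_kernel(sigma=1.0, size=7):
    """Laplacian-of-Gaussian kernel, zero-sum, shaped for conv2d."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * torch.exp(-r2 / (2 * sigma**2))
    return (k - k.mean()).view(1, 1, size, size)

def pixel_plus_log_mse(pred, target, kernel):
    """Pixel-space MSE plus MSE in the LoG-transformed domain."""
    return (F.mse_loss(pred, target)
            + F.mse_loss(F.conv2d(pred, kernel, padding=3),
                         F.conv2d(target, kernel, padding=3)))

pred, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
k = log_kernel()
print(spectral_mse(pred, target).item(),
      pixel_plus_log_mse(pred, target, k).item())
```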

Bu Fang Chang, Shih-Yi Chien and Yi-Ling Lin

The Effect of Communication Approaches on Intimacy in Human-Humanoid Robot Interaction (Abstract)
Social robots are widely applied in various contexts to provide human-like assistance and facilitate service experiences. Prior research has considered a variety of design features to explore their influence on human-robot relationships, while a robot's manner of assisting in an interaction and its consequent effects are rarely discussed. This study investigates the relationship between a robot's communication design and a human's perceived intimacy in human-humanoid robot interaction. Different levels of service proactivity (proactive vs. reactive) and types of expressive behaviors (neutral vs. intimate) were developed and empirically validated through an online survey. The findings indicate that the manipulations designed for each experimental condition can be recognized by the participants. In addition, the perception of intimacy is significantly affected when interacting with robots exhibiting different types of behaviors.
10:00 - 11:00

Session 11: D3.2: Information Interaction

Chair: Ingo Siegert

Qingyang Li, Zhiwen Yu, Huang Xu and Bin Guo

Tree-based Self-adaptive Anomaly Detection by Human-Machine Interaction Presentation Video (Abstract)
Anomaly detectors are used to distinguish abnormal from normal data, usually by evaluating and ranking an anomaly score for each instance. Static unsupervised anomaly detectors find it difficult to adjust their anomaly score calculation to streaming data. In real scenarios, anomaly detection often needs to be regulated by human feedback, which helps adjust the anomaly detector. In this paper, we propose a human-machine interactive anomaly detection method, named ISPForest, which can be adaptively updated under the guidance of human feedback. In particular, the feedback is used to adjust both the anomaly score calculation and the structure of the tree-based detector, ideally yielding more accurate anomaly scores in the future. Our main contribution is to improve the tree model so that it can be dynamically updated with respect to both the anomaly score calculation and the model's structure. Our approach is instantiated for the powerful class of tree-based anomaly detectors, and we conduct experiments on a range of benchmark datasets. The results demonstrate that human expert feedback helps improve the accuracy of anomaly detectors.
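
The ISPForest update rules are specific to the paper; as a drastically simplified stand-in, the sketch below layers per-leaf feedback weights on top of an ordinary isolation forest to illustrate the general idea of feedback-adjusted tree scores (all parameters hypothetical):

```python
import numpy as np
from collections import defaultdict
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
forest = IsolationForest(random_state=0).fit(X)

leaf_weight = defaultdict(float)            # per-leaf additive adjustment

def adjusted_scores(X):
    base = -forest.score_samples(X)         # higher = more anomalous
    adj = np.zeros(len(X))
    for est in forest.estimators_:
        leaves = est.apply(X)               # leaf index per sample
        adj += np.array([leaf_weight[(id(est), l)] for l in leaves])
    return base + adj / len(forest.estimators_)

def feedback(x, is_anomaly, lr=0.1):
    """Analyst feedback nudges the weights of every leaf x falls into."""
    delta = lr if is_anomaly else -lr
    for est in forest.estimators_:
        leaf_weight[(id(est), est.apply(x.reshape(1, -1))[0])] += delta

scores = adjusted_scores(X)
top = int(np.argmax(scores))
feedback(X[top], is_anomaly=False)          # analyst marks top hit as normal
```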

Johannes Schwerdt, Aljoscha Tersteegen, Pauline Marquardt, Achim J. Kaasch and Andreas Nuernberger

An Explorative Tool for Mutation Tracking in the Spike Glycoprotein of SARS-CoV-2 Presentation Video (Abstract)
Interactive information visualization and human-computer interaction provide useful support for inexperienced users and experts alike. Visualization offers an informative overview of the data, while interaction encourages users to explore it further. We took on the challenge of a specialized expert scenario and built a pipeline that provides interactive visualizations to encourage further exploration, choosing the complex task of a phylogenetic analysis of SARS-CoV-2 genomes for mutation tracking. The proposed pipeline hides the mathematical details while presenting complex information visually and intuitively. In our proof of concept, we analyzed four variants of concern and identified mutations in the spike glycoprotein with more than 70% precision and 77% recall with respect to the reports of the Centers for Disease Control and Prevention.

Florian Heinrich, Lovis Schwenderling, Marcus Streuber, Kai Bornemann, Kai Lawonn and Christian Hansen

Effects of Surface Visualizations on Depth Perception in Projective Augmented Reality Presentation Video (Abstract)
Depth perception is a common issue in augmented reality (AR). Projective AR, where the spatial relations between the projection surface and displayed virtual content need to be represented properly, is particularly affected. This is crucial in the medical domain, e.g., for the distance between the patient's skin and projected inner anatomical structures, but little research has been conducted in this context before. To this end, this work investigates the applicability of surface visualization techniques to support the perception of spatial relations in projective AR. Four methods previously explored in different domains were combined with the projection of inner anatomical structures onto a human torso phantom. They were evaluated in a comparative user study (n=21) with respect to a distance estimation task and a sorting task. Measures included task completion time, accuracy, total head movement, and participant confidence. Consistent results across variables show advantages of more occluding surface visualizations for the distance estimation task; opposite results were obtained for the sorting task. This suggests that the amount of surface preservation needed depends on the use case, and individual occlusion compromises need to be explored in future work.
11:00 - 11:30

Coffee Break

11:30 - 13:00

Session 12: D3.3: Interactive Robotics (Special Session)

Chairs: Norbert Elkmann and Christian Hansen

Peter Schatschneider and Norbert Elkmann

Test Stand for the Evaluation of appropriate Drives focusing on a highly-flexible Robot Manipulator Presentation Video (Abstract)
Flexible production sites will become increasingly important in the future. To open up this path for SMEs as well as larger companies, solutions are needed that do not rely on expensive special-purpose machines and can be used universally in a wide variety of application areas, such as separation or sorting. For a highly flexible gripper with sensory capabilities, currently under development at the Fraunhofer IFF, suitable drives have to be identified that meet all the required specifications while allowing a low price for the final product. This paper describes a universal test setup that can be used to evaluate combinations of motors and gearboxes from different manufacturers and product lines for their suitability for use in a robotic gripper. The measurement procedure is divided into three parts: determination of the motor constant, the torque accuracy, and the positioning accuracy of the complete drive. The method as well as the results from the investigation of one motor-gearbox combination are presented.
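
As a toy illustration of the first measurement step, the motor constant can be estimated from bench measurements by a least-squares fit of torque against current; the numbers below are invented, not from the paper:

```python
import numpy as np

# hypothetical bench measurements: phase current (A) vs. shaft torque (Nm)
current = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
torque = np.array([0.031, 0.063, 0.092, 0.124, 0.153, 0.187])

# least-squares fit through the origin: torque = k_t * current
k_t = float(current @ torque / (current @ current))
print(f"motor constant k_t ~ {k_t:.4f} Nm/A")
```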

Jose Saenz, Irene Fassi, Gerdienke B. Prange-Lasonder, Marcello Valori, Catherine Bidard, Aske B. Lassen and Jule Bessler-Etten

COVR Toolkit – Supporting safety of interactive robotics applications Presentation Video (Abstract)
Collaborative robots are increasingly finding use beyond the traditional domain of manufacturing, in areas such as healthcare, rehabilitation, agriculture and logistics. This development greatly increases the number of cobot stakeholders and the variation in their level of expertise, which becomes particularly critical considering the role of human safety in collaborative robotics applications. In order to support the wide range of cobot stakeholders, the EU-funded project COVR “Being safe around collaborative and versatile robots in shared spaces” has developed a freely available, web-based Toolkit that helps users understand how to address the safety of cobot applications. This paper describes the state of the art for ensuring safety across the various life cycle phases in the development and implementation of collaborative robotics applications and highlights how the Toolkit provides practical support during these tasks. The Toolkit aims to be the most comprehensive resource for supporting cobot stakeholders in ensuring the safety of their applications.

Christian Vogel, Christoph Walter and Norbert Elkmann

Space-time extension of the projection and camera-based technology dealing with high-frequency light interference in HRC applications Presentation Video (Abstract)
Optical sensor systems are widely used in academia and research with the aim of ensuring human safety in human-robot collaboration (HRC). To use these mainly 2D or 3D cameras as speed and separation monitoring systems in industrial HRC settings, several requirements must be fulfilled. Besides the overall functional safety of the sensor, the monitoring system must also cope with changing environmental influences. This includes not only natural illumination changes of ambient light (sunlight) but also the high-frequency pulsation of flashlights (stroboscopic effects). In recent years, an innovative optical sensor system based on camera and projector techniques was introduced. The working principle of this HRC monitoring system offers high potential to meet the standardized safety requirements. In this paper, we present the extension of the pulse-modulated light principle to a space-time approach that can handle sudden illumination changes (e.g., flashlights) without failing to danger.

Dominykas Strazdas, Jan Hintz, Aly Khalifa and Ayoub Al-Hamadi

Robot System Assistant (RoSA): concept for an intuitive multi-modal and multi-device interaction system Presentation Video (Abstract)
This paper presents RoSA, the Robot System Assistant, a concept for intuitive human-machine interaction based on speech, facial, and gesture recognition. The interaction modalities were identified and reviewed through a preceding Wizard-of-Oz study, which showed high impact for speech and pointing gestures. The system's framework is based on the Robot Operating System (ROS), allowing modularity and extensibility. This contactless concept also includes ideas for a multi-device and multi-user implementation working at different workstations.

S M Mizanoor Rahman

Weight Perception-Based Variable Admittance Control to Improve Interactions and Performance in Human-Power Assist Robot Collaborative Manipulation Presentation Video (Abstract)
In the first step, a 1-DOF power assist robotic system (PARS) is developed for object manipulation, and the dynamics of human-robot co-manipulation of objects, including weight perception, is derived. Then, an admittance control scheme with position feedback and a velocity controller is derived using the weight-perception-based dynamics. Human subjects lift objects with the system, and human-robot interactions (HRI) and system characteristics are analyzed. A comprehensive evaluation scheme is developed to evaluate HRI and performance. HRI is expressed in terms of physical HRI (maneuverability, motion, safety, stability, and naturalness) and cognitive HRI (trust and workload), and performance is expressed in terms of manipulation efficiency and precision. A constrained optimization algorithm is proposed to determine optimum HRI and performance. Results show that including weight perception in the dynamics and control is effective in producing optimum HRI and performance for a set of hard constraints. In the second step, a novel variable admittance control algorithm is proposed, which enhances the physical HRI, trust, precision and efficiency by 34.66%, 31.89%, 3.84% and 4.98% respectively, reduces overall mental workload by 35.38%, and helps achieve optimum HRI and performance for a set of soft constraints. The effectiveness of the optimization and control algorithms is then validated using a multi-DOF PARS manipulating heavy objects.
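
A minimal sketch of the underlying 1-DOF admittance law; the paper's variable admittance scheme and weight-perception model are more elaborate, and the scaling below is a hypothetical stand-in:

```python
def admittance_step(f_ext, v, m_virtual, b_virtual, dt=0.001):
    """One step of a 1-DOF admittance law: m*dv/dt + b*v = f_ext.
    Returns the commanded velocity for the inner velocity controller."""
    dv = (f_ext - b_virtual * v) / m_virtual
    return v + dv * dt

# weight-perception-based scaling (hypothetical): the virtual mass the
# operator 'feels' is a fraction of the real object mass
real_mass, perceived_fraction = 10.0, 0.4
m_virtual = real_mass * perceived_fraction

v = 0.0
for _ in range(100):                         # constant 5 N operator force
    v = admittance_step(5.0, v, m_virtual, b_virtual=2.0)
print(f"commanded velocity after 0.1 s: {v:.3f} m/s")
```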
13:00 - 14:00

Lunch Break

14:00 - 15:00
Session 13: D3.4: Keynote

Chair: Andreas Nürnberger

Prof. Dr.-Ing. Raimund Dachselt

Interactive Spaces for Ubiquitous Data – A Mobile Visualization Perspective (Abstract) | Building 22-A, Lecture Hall 2

Presentation Video

Dealing with the ever-increasing amount of information in our data-driven world is challenging not only in terms of computing infrastructure, but especially for human beings, who need to be empowered to access, explore, analyze, and make sense of data and information everywhere. Interactive data visualization has proven to be an impactful means of combining computing power with human perceptual and cognitive abilities. Accordingly, the past decade has seen an increasing research interest in visualization environments beyond traditional computers and single users. The current ubiquity of mobile devices already points to a future of even more diverse computing environments that go far beyond desktop computers.

What will be the promising computing environments for future data visualization? How can we interact with data in a comprehensible, effective, natural and enjoyable way? How can data visualization be adapted to the situation, the context of use, and collaborative scenarios? The talk will provide insights into recent research developments in this area, where human-computer interaction meets data visualization. We will focus less on single devices, such as mobile phones or visualization walls, and more on interactive spaces that combine several devices and display technologies as well as multiple interaction modalities. Using research examples from the fields of multi-display environments, mobile data visualization and Immersive Analytics, the future variety of approaches will be illustrated. We will also outline research challenges for interactive spaces as interfaces for ubiquitous data visualization.

15:00 - 15:30

Coffee Break

15:30 - 17:45

Session 14: D3.5: Data Analysis: From Features to Behavior Models

Chair: Sayantan Polley

Bram van den Berg, Sander van Donkelaar and Maryam Alimardani

Inner Speech Classification using EEG Signals: A Deep Learning Approach Presentation Video (Abstract)
Brain-computer interfaces (BCIs) provide a direct pathway of communication between humans and computers. Three major BCI paradigms are commonly employed: motor imagery (MI), event-related potentials (ERP), and steady-state visually evoked potentials (SSVEP). In our study we sought to expand this by focusing on the “Inner Speech” paradigm using EEG signals. Inner speech refers to the internalized process of imagining one's own “voice”. Using a 2D Convolutional Neural Network (CNN) based on the EEGNet architecture, we classified the EEG signals of eight subjects as they internally thought about four different words. Our results showed an average accuracy of 28.5% for word recognition, slightly above chance. We discuss the limitations and provide suggestions for future research.

Huili Cai, Xiaofeng Liu, Aimin Jiang, Rongrong Ni, Xu Zhou and Angelo Cangelosi

Combination of EOG and EEG for emotion recognition over different window sizes Presentation Video (Abstract)
Considering the use of a multi-modal framework to enhance emotion recognition, we propose to combine electroencephalography (EEG) and electrooculography (EOG) through decision-level fusion (DLF) and feature-level fusion (FLF). By segmenting the signals with different temporal window sizes, we explore the duration of emotion in the EOG and EEG signals. Window sizes that suit both the EOG and the EEG signals are then selected for segmentation and emotion recognition. The accuracy of the proposed algorithm is verified on the DEAP dataset in both subject-dependent and subject-independent settings. For the subject-dependent setting, using the feature-level fusion strategy with a window size of 6 seconds, the accuracy is 0.9562 for arousal and 0.9558 for valence. For the subject-independent setting, using the feature-level fusion strategy with a window size of 5 seconds, the accuracy is 0.8638 for arousal and 0.8542 for valence. The experimental results show that the proposed algorithm enhances emotion recognition.
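
The two fusion strategies named here are easy to contrast in a sketch: feature-level fusion concatenates EEG and EOG features before one classifier, while decision-level fusion averages the outputs of per-modality classifiers. The toy data and logistic classifiers below are our assumptions, not the paper's models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                          # arousal: low / high
X_eeg = rng.standard_normal((200, 32)) + y[:, None] * 0.3
X_eog = rng.standard_normal((200, 4)) + y[:, None] * 0.2

# feature-level fusion: concatenate modality features, train one classifier
flf = LogisticRegression(max_iter=1000).fit(np.hstack([X_eeg, X_eog]), y)

# decision-level fusion: per-modality classifiers, average the probabilities
clf_eeg = LogisticRegression(max_iter=1000).fit(X_eeg, y)
clf_eog = LogisticRegression(max_iter=1000).fit(X_eog, y)
p_fused = 0.5 * (clf_eeg.predict_proba(X_eeg)[:, 1]
                 + clf_eog.predict_proba(X_eog)[:, 1])
dlf_pred = (p_fused > 0.5).astype(int)
```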

Aly Khalifa and Ayoub Al-Hamadi

A Survey on Loss Functions for Deep Face Recognition Network Presentation Video (Abstract)
With the increased collaboration between humans and robots in daily life, face recognition has become one of the essential aspects of human-robot interaction (HRI). A robot requires a highly accurate face recognition system to operate in different environments and conditions. The key to highly accurate face recognition is learning discriminative feature representations, which largely amounts to minimizing the intra-class distance and maximizing the inter-class distance. The loss function is used in deep Convolutional Neural Networks (CNNs) to enhance this discriminative power of the deeply learned features. Softmax loss is one of the most widely used loss functions in CNNs; however, it does not provide the discriminative power needed for face recognition. Recently, many researchers have worked on developing novel loss functions to improve discriminative power, mainly the intra-class distance of deep features. The main objective of this survey is to compare the multiple loss functions used for deep face recognition networks and to highlight the weaknesses of each.
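
Many of the losses in this line of work add a margin to the softmax formulation. As one representative example (our choice of illustration, with hypothetical scale and margin values), here is a sketch of an additive angular margin, ArcFace-style, loss:

```python
import torch
import torch.nn.functional as F

def additive_angular_margin_loss(emb, labels, weight, s=30.0, m=0.5):
    """Add margin m to the target-class angle, then scale by s:
    this shrinks intra-class and widens inter-class angular distances."""
    emb_n = F.normalize(emb)                 # (batch, dim)
    w_n = F.normalize(weight)                # (classes, dim)
    cos = emb_n @ w_n.t()                    # cosine similarities
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, w_n.size(0)).bool()
    logits = torch.where(target, torch.cos(theta + m), cos) * s
    return F.cross_entropy(logits, labels)

# toy usage: 8 embeddings of dimension 128, 10 identities
emb = torch.randn(8, 128, requires_grad=True)
w = torch.randn(10, 128, requires_grad=True)
loss = additive_angular_margin_loss(emb, torch.randint(0, 10, (8,)), w)
loss.backward()
```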

Alfredo Cuzzocrea and Enzo Mumolo

Dempster-Shafer Decision Fusion of Bimodal Biometric for Identity Verification Presentation Video (Abstract)
The purpose of this paper is to describe a novel fusion algorithm for multimodal biometric identification, combining fingerprints and voice. This combination of biometrics is rarely used in verification systems, although the pair is simple to use and not too invasive. A framework for the combination of several data fusion algorithms is described. In this paper we use only two types of data fusion techniques, namely a weighted sum and a fuzzy system. Two independent identity decisions can thus be obtained, and from them two beliefs that the identity is verified can be derived. The two beliefs are combined using the Dempster-Shafer approach to obtain the final decision. The results are reported as ROC curves.
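
The final step, combining the two beliefs, can be made concrete with Dempster's rule of combination. Below is a small sketch over the frame {V = verified, N = not verified}; the mass assignments for the fingerprint and voice matchers are hypothetical:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting hypotheses and
    renormalize by 1 minus the total conflict."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = set(a) & set(b)
            if inter:
                key = "".join(sorted(inter))
                combined[key] = combined.get(key, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# hypothetical beliefs; "NV" is the uncommitted mass (either hypothesis)
m_fingerprint = {"V": 0.7, "N": 0.1, "NV": 0.2}
m_voice = {"V": 0.6, "N": 0.2, "NV": 0.2}
print(dempster_combine(m_fingerprint, m_voice))
# -> {'V': 0.85, 'N': 0.10, 'NV': 0.05}
```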

Johannes Schwerdt, Michael Kotzyba and Andreas Nuernberger

Fact-Finding or Exploration: Characterizing Reading Strategies in User’s Search Activities Presentation Video (Abstract)
Adaptive information retrieval systems could provide users with individual support for their current search activity. This requires the clear detection and understanding of such activities. In this paper we analyze reading strategies, such as scanning, skimming and 'hard' reading, for two search activities, namely fact-finding and exploratory search. We analyzed the eye-tracking data of a lab experiment and identified a positively correlated trend of 'hard' reading towards exploratory searches and of skimming towards fact-finding searches. Using the interpretation of these reading strategies, we argue that we can draw conclusions about the possible search intent during individual search activities.

Christopher Flathmann, Beau Schelble and Nathan McNeese

Fostering Human-Agent Team Leadership by Leveraging Human Teaming Principles Presentation Video (Abstract)
With human-agent teams beginning to enter the workforce, it is important that humans are well equipped to lead their future teams. Due to the addition of artificial intelligence to teams, the behavioral functions of leaders need to be critically examined to determine their fit with the future of human-agent teamwork. This paper identifies these functional behaviors as resource management behaviors and information behaviors, based on past research on teamwork. These behaviors are reviewed within the context of human-human teamwork to define human-oriented leadership behaviors. Based on this review, along with recent research on human-agent teamwork, an adaptable framework of leadership behaviors is created to guide human leaders in human-agent teams. The framework provides a foundation for future human-agent teams and supports the human leaders who must mediate the integration of agents alongside humans.

Franco Cicirelli, Antonio Guerrieri, Carlo Mastroianni, Luigi Scarcello, Giandomenico Spezzano and Andrea Vinci

Balancing Energy Consumption and Thermal Comfort with Deep Reinforcement Learning Presentation Video (Abstract)
The management of thermal comfort in a building is a challenging and multi-faceted problem because it requires considering both objective and subjective parameters that are often in conflict. Subjective parameters are tied to reaching and maintaining adequate user comfort by considering human preferences and behaviours, while objective parameters relate to other important aspects, such as the reduction of energy consumption. This paper exploits cognitive technologies based on Deep Reinforcement Learning (DRL) to automatically learn how to control the HVAC system in an office. The goal is to develop a cyber-controller able to minimize both the perceived thermal discomfort and the energy required. The learning process is driven by a cumulative reward that combines two components accounting for user comfort and energy consumption, respectively. Simulation experiments show that the adopted approach can shape the behaviour of the DRL controller and the learning process, and thereby balance the two objectives by weighing the two components of the reward.
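
A minimal sketch of such a combined reward, assuming a PMV-style comfort index as the discomfort measure and purely hypothetical weights; the paper's reward shaping may differ:

```python
def hvac_reward(pmv, energy_kwh, w_comfort=0.7, w_energy=0.3):
    """Step reward combining thermal comfort and energy use.
    pmv: Predicted Mean Vote (0 = thermally neutral);
    energy_kwh: HVAC energy consumed in this control step.
    The weights trade the two objectives off against each other."""
    discomfort = abs(pmv)              # distance from thermal neutrality
    return -(w_comfort * discomfort + w_energy * energy_kwh)

print(hvac_reward(pmv=0.8, energy_kwh=1.2))   # -> -0.92
```

Raising `w_energy` pushes the learned policy toward saving energy at the cost of comfort, and vice versa, which is exactly the balancing behaviour the experiments examine.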
17:45 - 18:15

Farewell

Chairs: Giancarlo Fortino and Andreas Nürnberger

Presentation Video
18:30 - 19:45

Visit the Elbedome

The Elbedome is a mixed reality lab for the large-scale display of interactive visualizations. As a closing event, we will visit the lab in small groups.


Sponsors