Workshop on Real World Physical and Social Human-Robot Interaction

Humanoids 2024


The workshop is held in hybrid mode. Please join the Zoom meeting: https://kth-se.zoom.us/j/67355684591 (Meeting ID: 673 5568 4591).

If you are attending, please fill out this form, along with some questions for the panelists in our panel discussions: https://forms.gle/E1KXyWf526WibZtz9

As robots increasingly enter everyday settings—from homes to workplaces—the necessity for sophisticated human-robot interaction (HRI) capabilities becomes paramount. Traditional HRI systems often rely on a single mode of interaction, which can limit the robot’s ability to understand and respond to human nuances effectively. Multimodal HRI seeks to overcome these limitations by integrating various sensory inputs such as visual, auditory, and tactile feedback, thus enabling robots to interpret and adapt to complex human behaviors and environments. However, the integration of these modalities presents significant challenges, including sensor fusion, context-aware computing, and the development of adaptive, user-centered interfaces that can handle diverse human expressions and intentions.

This workshop aims to convene leading scholars and practitioners to explore the integration of multiple modalities in robotic systems for enhanced human-robot interaction. It will highlight recent advancements in tactile feedback, visual recognition, interaction patterns, and social dynamics to create robots that can engage more naturally and effectively with humans in diverse environments. Building on the success of previous related workshops at renowned conferences such as ICRA and IROS, this session is anticipated to attract a broad audience, ranging from academic researchers and industrial practitioners to educators and policy makers. Participants will engage in a series of keynote presentations, interactive panels, and hands-on demonstrations, providing both foundational insights and innovative approaches to multimodal interaction. The workshop also features a call for papers, inviting contributions that address theoretical models, empirical studies, or state-of-the-art applications in human-robot interaction. Through this comprehensive format, the workshop will foster an inclusive dialogue aimed at shaping the future directions of research and development in the field.

Program Schedule

Time Activity
09:00 - 09:10 Introduction
09:10 - 09:50 Invited Speaker: Dr. Katja Mombaur, “Physical-social interactions between humans and assistive robots in close proximity” (30 min + 10 min Q/A)
09:50 - 10:30 Invited Speaker: Dr. Eiichi Yoshida, "Human Model for Physical Human-Robot Interaction" (30 min + 10 min Q/A)
10:30 - 11:00 Coffee Break and Poster Presentation
11:00 - 11:30 Spotlight Talks:
  • "Failure Communication in Human-Robot Collaboration with Multimodal AI and Large Language Models." (Link to the paper)
  • "Event-Based Visual Servoing for Human-Robot Navigation using Reinforcement Learning." (Link to the paper)
11:30 - 12:30 Panel Discussion: Why Should Physical and Social HRI Researchers Listen to Each Other More? (40 min + 20 min Q/A)
12:30 - 13:30 Lunch
13:30 - 14:10 Invited Speaker: Quentin Rouxel and Dionis Totsila, "LLMs, Diffusion and Humanoid Robots: Natural Language and Imitation Learning for Contact Interaction" (30 min + 10 min Q/A)
14:10 - 14:50 Invited Speaker: Enrico Mingo Hoffman, "OpenSoT: A Software Tool for Advanced Whole-Body Control." (30 min + 10 min Q/A)
14:50 - 15:30 Spotlight Talks:
  • "Feasibility Study on a Multi-Device Dexterous Hand Teleoperation System for Daily Activity Performance in HRI" (Link to the paper)
  • "Automated Gaze Labelling for Measuring Emotional and Cognitive Engagement in School-Age Children During Storytelling Activities with NAO Robot" (Link to the paper)
15:30 - 16:00 Coffee Break and Poster Presentation
16:00 - 16:40 Invited Speaker: Luca Marchionni, PAL Robotics (30 min + 10 min Q/A)
16:40 - 17:40 Panel Discussion: How to Reconcile Academia and Industry's Approach to HRI? (40 min + 20 min Q/A)
17:40 - 17:50 Concluding Remarks

Speakers

Meet our esteemed speakers from academia and industry.

Katja Mombaur

Professor, Karlsruhe Institute of Technology, Germany, and University of Waterloo, Canada

Presentation: Physical-social interactions between humans and assistive robots in close proximity

Eiichi Yoshida

Professor, Tokyo University of Science, Japan

Presentation: Human Model for Physical Human-Robot Interaction

Enrico Mingo Hoffman

ISFP Researcher, Centre Inria de l'Université de Lorraine & Loria, Nancy, France

Presentation: "OpenSoT: A Software Tool for Advanced Whole-Body Control"

Luca Marchionni

PAL Robotics

Presentation: TBD

Quentin Rouxel

Postdoctoral Researcher, Inria Nancy - Grand Est, CNRS, Université de Lorraine, Villers-lès-Nancy, France

Presentation: "LLMs, Diffusion and Humanoid Robots: Natural Language and Imitation Learning for Contact Interaction"

Accepted Papers for Spotlight Presentations

1. Event-Based Visual Servoing for Human-Robot Navigation using Reinforcement Learning

Authors: Ignacio Bugueno-Cordova, Javier Ruiz del Solar and Rodrigo Verschae
Abstract:

This work presents a visual servoing controller for social robots using event cameras and reinforcement learning, specifically designed to support safe, adaptive, and socially-aware human-robot interactions. The proposed controller enables real-time navigation and obstacle avoidance in dynamic environments by integrating event-based visual feedback and learning-based policy optimization. The results highlight the approach’s robustness in managing physical and social interaction challenges, adapting smoothly to changes in human motion and environmental conditions. A demo video can be watched at: https://youtu.be/dF8_ektJ8Nk.

Link to the paper

2. Feasibility Study on a Multi-Device Dexterous Hand Teleoperation System for Daily Activity Performance in HRI

Authors: Alessandra Sorrentino, Niccolò Alunni and Filippo Cavallo
Abstract:

This study introduces a novel, non-invasive, multi- device hand-tracking system designed for dexterous teleoperation, with an emphasis on human grasp recognition for precise control of robotic manipulators. The proposed system leverages multiple visual sensors to improve accuracy and reduce occlusion-related tracking errors, allowing reliable detection of hand movements within an extended workspace. Implemented in a ROS-based framework, the system offers adaptability and scalability to additional robotic platforms. Twenty participants evaluated the system’s usability and reliability by performing two common daily activities using teleoperated grasp gestures. Results indicate high reliability, with a 94.17% success rate across trials, and positive user feedback, with 91.67% of users completing tasks successfully. Training effects were evident, with significant decreases in task completion time between early and late repetitions, reflecting enhanced user familiarity. Analysis of workload via NASA-TLX scores showed reduced mental and effort demands in successive tasks, underscoring the system’s user-friendliness and intuitiveness with experience. Future work will explore integrating additional hand parameters, comparing performance with wearable-based teleoperation systems, and expanding control to include robotic arm movement, emulating adaptive human grasp strategies. This work provides a foundation for effective human-robot collaboration in shared environments, advancing robotic capability in intention reading and cooperative task execution.

Link to the paper

3. Automated Gaze Labelling for Measuring Emotional and Cognitive Engagement in School-Age Children During Storytelling Activities with NAO Robot

Authors: Laura Fiorini, Elena Adelucci, Stefano Scatigna, Lorenzo Pugi, Chiara Pecini and Filippo Cavallo
Abstract:

Modeling effective human-robot interaction requires the robot to detect and respond to engagement signals, which include attention, interest, and empathy. This study explores the application of automated gaze-labelling techniques to assess engagement in child-robot interactions using the NAO robot in a storytelling paradigm. A total of 72 children, aged 7 to 9, participated in structured individual sessions recorded for gaze analysis and engagement scoring. Engagement measures were manually annotated by observers using the Engagement Observation Scale and analyzed in correlation with automated gaze data processed using Gaze360 and K-means clustering for automated labelling. Results indicate high levels of engagement, with children directing their gaze toward the robot and interactive environment for most of the session time. The automatic gaze-labeling method showed a significant correlation with observer-assessed engagement scores, underscoring its potential as an efficient alternative to traditional methods.

Link to the paper

4. Failure Communication in Human-Robot Collaboration with Multimodal AI and Large Language Models

Authors: Andreas Naoum, Elmira Yadollahi, and Parag Khanna
Abstract:

This research presents a complete approach to failure communication in Human-Robot Collaboration (HRC) by integrating Multimodal AI and Large Language Models (LLMs). By combining human behavioral analysis with LLMs, the goal is to enhance the efficiency, fluidity, and naturalness of HRC interactions. The proposed framework enables robots to proactively predict and adapt the level of failure explanation based on observed human behavior, preempting potential confusion. LLMs generate responses tailored to a selected level of explanation, while follow-up interactions are designed to mitigate confusion and enhance user comprehension and trust. The proposed framework aims to refine the collaborative experience, fostering more intuitive, adaptive, and efficient interactions between humans and robots across various application domains in real-world environments.

Link to the paper

Call for Contributions

Date: November 22, 2024 (Full-day workshop)
Location: Hybrid (Nancy, France, and online), as part of the 2024 IEEE-RAS International Conference on Humanoid Robots (IEEE-Humanoids 2024).
Submission instructions: Email your contributions to whsop.realworld.hri@gmail.com
Contact for submissions: paragk@kth.se, e.yadollahi@lancaster.ac.uk

IMPORTANT DATES

All deadlines are at 23:59 Anywhere on Earth time.


SUBMISSION GUIDELINES

Manuscripts should be written in English and will undergo a single-blind review by the organizing committee. The length should be 2-4 pages excluding references. We welcome contributions that include work in progress, preliminary results, technical reports, case studies, surveys, and state-of-the-art research. Position papers are also welcome and should be at least 2 pages excluding references. These can be research project proposals or plans without results. Authors must use the Humanoids templates provided, formatted for US Letter. The templates can be downloaded below.

Manuscript Templates: LaTeX, Word

Contact

Organizers:

Parag Khanna

Parag Khanna

Doctoral Student, KTH Royal Institute of Technology, Sweden: paragk@kth.se

Elmira Yadollahi

Elmira Yadollahi

Assistant Professor, Lancaster University, United Kingdom: e.yadollahi@lancaster.ac.uk

Ziwei Wang

Ziwei Wang

Assistant Professor, Lancaster University, United Kingdom: z.wang82@lancaster.ac.uk

Angela Faragasso

Lead Research Engineer, FingerVision Inc., Japan: angela.faragaso@fingervision.biz

Barbara Bruno

Barbara Bruno

Junior Professor, Karlsruhe Institute of Technology, Germany: barbara.bruno@kit.edu

Christian Smith

Christian Smith

Associate Professor, KTH Royal Institute of Technology, Sweden: ccs@kth.se