People often hold inaccurate or overly presumptuous mental models of robots, likely due to a lack of experience and the strong influence of robot design (e.g., appearance). Thus, there is often a mismatch between users' mental models, or expectations, of robots and robots' actual capabilities. This mismatch can lead to ambiguous perceptions of robot actions, miscalibrated trust, wrongful accusations of error, ineffective human-robot collaboration, or even discontinued use. As robotic systems become increasingly complex and their presence more commonplace, HRI researchers should find ways to give users a window into what the robot is "thinking", "feeling", and "intending". We envision a future in which robots can automatically detect and correct inaccurate mental models held by users. This workshop will develop a multidisciplinary vision for the next few years of research in pursuit of that future.
This workshop aims to bring together researchers interested in helping users understand robotic systems. Our aim is to create a clearer picture of what mental model research looks like, what the different sub-areas and problems are, and how different disciplines can contribute to making robot actions more interpretable. In this light, we welcome interdisciplinary contributions from those working on estimating aspects of users' mental models, designing communicative robot actions or other interventions, and developing decision-making systems that connect all these elements to create autonomous robot behavior.
Applicants with a background in human-computer interaction, natural language processing, design, human factors, psychology, neuroscience, cognitive science, or any other related discipline are welcome to apply. We especially encourage submissions from researchers and practitioners contributing theories, methods, and applications that are only sparsely used in the HRI community, such as those from human factors, gaming, and employee education for working with industrial robots.
Please submit a 2-page extended abstract (not including references) of work relating to the workshop topic. Examples of more specific topic areas are below.
All papers should be submitted in PDF format using the ACM template for late-breaking reports (see here under the "Submission Instructions" heading; this is the same format as for full papers). Note that it is the general ACM SIG format, not the SIGCHI format you might be used to. Papers should be anonymized; please see the anonymization guidelines here. Submissions will be peer-reviewed based on their originality, relevance, technical soundness, and clarity. Please email your PDF submissions to: email@example.com
I am a scholar doing research on human-robot interaction. I currently work as a postdoc in the USC Interaction Lab directed by Prof. Maja Matarić. I received my Ph.D. in Robotics from Oregon State University in 2018 for a dissertation on privacy in human-robot interaction. My advisor was Prof. Bill Smart. I also worked extensively with Prof. Frank Bernieri from the School of Psychological Science. My undergraduate studies were also at OSU, where I received an H.B.S. degree in Mechanical Engineering in 2013.
My goal is for robots to respect human social values; so far I have focused on privacy. My current research is about how humans form mental models of a robot's perceptual capabilities, and how the robot can model this process and then act to influence it. A robot should help humans accurately understand what personal information it is recording.
I am an assistant professor in computer science at the University of Southern California. I graduated with a PhD from the CMU Robotics Institute and an MS from MIT. My research lies at the intersection of human-robot interaction, game theory, and robot planning under uncertainty. I draw upon insights from studies in economics, cognitive behavioral psychology, and human team coordination to develop mathematical models of human behavior and integrate them into robot decision making in a principled way. Ultimately, my research is motivated by real-world problems; thus, I believe strongly in the importance of models that are scalable and robust, supporting deployed systems in real-world applications.
I am a social and behavioral scientist in the multidisciplinary field of human-robot interaction. My research is motivated by my intrinsic drive to understand human behavior and its underlying psychological and cognitive processes. My past research indicates that people's anthropomorphic evaluations of robots have a strong impact on the emergence of long-term acceptance. Envisioning a future in which the social abilities of robots will only increase, my research focuses on people's social, emotional, and cognitive responses to robots, including the societal and ethical consequences of those responses. The end goal is to influence technology design and policy direction toward the development of socially acceptable robots that benefit society.
Hello. Most people call me Beth, and I am an Assistant Professor at the United States Air Force Academy in the Department of Behavioral Sciences and Leadership and in the Warfighter Effectiveness Research Center (WERC). I was recently a postdoc in the Humanity Centered Robotics Initiative (HCRI) at Brown University.
I am also the co-creator of the Anthropomorphic RoBOT (ABOT) database, a collection of images of and data about real-world human-like robots. ABOT was created as a resource to enable systematic, generalizable, and reproducible research on the psychological effects of robots’ human-like appearance.
By way of introduction, I am an Associate Professor in the School of Information (UMSI) at the University of Michigan and a core faculty member of Michigan Robotics. I am the director of the Michigan Autonomous Vehicle Research Intergroup Collaboration (MAVRIC). I am also an affiliate of the Michigan Interactive and Social Computing (MISC) Research Group.
My research in human-robot collaboration focuses on understanding how to facilitate more effective collaborations between humans and robots. Below you will find links to my two areas of study as they relate to human-robot collaboration, along with several grants and funded research projects associated with both areas.
David Sirkin is a Research Associate at Stanford University's Center for Design Research, where he focuses on design methodology, as well as the design of physical interactions between humans and robots, and autonomous vehicles and their interfaces. He is also a Lecturer in Electrical Engineering, where he teaches interactive device design. David frequently collaborates with, and consults for, local Silicon Valley and global technology companies including Siemens, SAP and Microsoft Research. He grew up in Florida, near the Everglades, and in Maine, near the lobsters.
Minae is a PhD student in the Computer Science Department. She is broadly interested in enabling robots to intelligently interact with, influence, and adapt to humans.
I am a PhD student at Linköping University, Sweden. My work focuses on the role of the intentional stance (i.e., people's common-sense or intuitive understanding of others as agents with beliefs, desires, and other so-called "intentional states") in human-robot interactions. My research aims to elucidate some of the challenges and difficulties associated with predicting and explaining the behavior of robots based on the intentional stance, and to develop appropriate methodology for studying the intentional stance toward robots empirically.