

Improving human interaction with mode-switching robots to support safety, sustainability, and success of space missions

NASA and other space agencies are designing robots to support future space exploration. In upcoming missions, astronauts are expected to increasingly work with and rely on robots such as Astrobee, a semiautonomous free-flying robot designed to perform tasks and support research on the International Space Station (ISS). Astrobee can switch between modes (e.g., operating as a tool, as an autonomous robot, or as a telepresence robot), which can increase confusion during interaction (Johnson 1990). In this project, our goals are to (1) develop design recommendations for mode-switching robots and (2) create, test, and implement software based on those recommendations. Overall, improving Astrobee's design will reduce worker error, decrease astronaut stress, and increase efficiency, safety, and mission success. We are performing this research in collaboration with NASA Ames Chief Roboticist Dr. Terry Fong.

Hospitality robot: How can robots interact with people “in the wild?”

What happens when robots enter our public spaces? How can robots best assist people in their everyday lives? In this line of research, we place robots in dining establishments and study their interactions to learn how to improve them. From using the humanoid robot Pepper as a receptionist at NMSU's own 100 W. Café to our partnership with Kiwibot and their food delivery robot, we are helping robots better serve individuals and groups of people. We are collaborating with Dr. Betsy Stringam from the School of Hotel, Restaurant, and Tourism Management.

How much will you trust your self-driving car?

The future is coming! Cars have become more autonomous over the years, and some can now navigate, change lanes, and adjust speed with only limited supervision from people. Opinions about this differ from person to person – is it exciting? Is it scary? We hope to make interactions with these self-driving cars, as well as with other automation and robots, safer by helping people trust them the right amount: not too much and not too little. We are beginning this project in collaboration with Dr. Ewart de Visser from George Mason University, Dr. Chad Tossell from the US Air Force Academy, and Dr. Gregory Funke from the US Air Force Research Laboratory.

Can social robots improve our interactions with other people?

It can be hard to reach out and connect with other people, especially during this time of increased isolation. Researchers have created social robots as companions for people, and some of the best outcomes of these robots have been connecting people with one another. In this project, our goal is to learn how robots can enhance human connection with other people. We are working on this research in collaboration with Dr. Kate Tsui from the Toyota Research Institute.

How do human-robot interaction dynamics change with groups of humans and robots?

With constantly improving technology, robots are expected to become more prevalent in everyday life in the near future, and people will increasingly interact with multiple robots simultaneously. Intergroup human-robot interaction (iHRI), or human interaction with multiple robots, has different social dynamics than one-on-one interaction and deserves further study. For example, in past research, people were kinder to robots in their own group than to humans in a different group. Further, characteristics of robot groups (e.g., "entitativity," or cohesion) affect iHRI. In this project, we examine how groups of people interact with one or more robots.


How does culture affect intergroup human-robot interaction?

Different cultures have different customs and ways of interacting with the world. Whereas many places in the USA focus on individual goals and independence (i.e., they are highly individualistic), places like Japan and Portugal focus on the well-being of the group and interdependence (i.e., they are highly collectivistic). The media also influence perceptions of robots, from dangerous robots like those in the USA's Terminator and Ex Machina to helpful robots like Japan's Astro Boy. To create robots that can be enjoyed across cultures, we need to understand these differences. In this project, we collaborate with, and travel to, researchers around the world to test cross-cultural differences in HRI.

Recent collaborators:

  • Dr. Michio Okada with the Interaction and Communication Design (ICD) Lab in Toyohashi, Japan
  • Dr. Takayuki Kanda with the Advanced Telecommunications Research Institute International (ATR) in Kyoto, Japan
  • Dr. Seita Koike who builds Mugbots at Tokyo City University in Japan
  • Dr. Jaeryoung Lee of Chubu University in Japan
  • Dr. Friederike Eyssel of the Center of Excellence Cognition Interaction Technology (CITEC) in Bielefeld, Germany
  • Dr. Ricarda Wullenkord of the Center of Excellence Cognition Interaction Technology (CITEC) in Bielefeld, Germany
  • Dr. Ana Paiva of the Intelligent Agents and Synthetic Characters Group (GAIPS) lab in Lisbon, Portugal

When do humans perceive robots as social entities?

HRI researchers have found that people often treat robots similarly to how they treat people in human-human interaction (HHI). Sometimes, humans even treat robots as ingroup or outgroup members depending on cues (e.g., the robot's origin), and do so even in arbitrarily assigned groups (the minimal groups paradigm). However, in studies with these findings, researchers identified the robots as belonging to existing human social categories (e.g., the robots were said to be Turkish) or provided other cues to the robots' sociality (e.g., the robot was humanoid, verbally greeted participants, and had supposedly performed the task that participants would perform). When robots do not have human-like characteristics, they are less likely to be treated socially. In this project, we study how social robots must be before humans will treat them as such.

How can we best measure human attitudes toward robots?

Researchers can measure attitudes explicitly, implicitly, and behaviorally, but most studies examine only one or two of these types. Explicit measures are often the most straightforward and therefore the most common, but relying on them alone can be problematic because they do not always predict actual behavior. We use explicit, implicit, and behavioral measures together to foster a deeper understanding of the relationships among these different types of measures and to better understand the complexity of behavior toward robots.