Presence News
from the International Society for Presence Research

Can eyes on self-driving cars reduce accidents? Cues from moving eyes could help pedestrians anticipate vehicle’s intentions

November 2, 2022


[Suggesting a new meaning for the acronym CASA (car, rather than computer, as social actor), a new study from The University of Tokyo indicates that giving self-driving vehicles eyes that signal the vehicle’s “attention” could improve how people interact with the vehicles and save lives. Note that the study itself was conducted via a presence-evoking experience in virtual reality. See the original story for more images and links to more information (including a 1:52 video, also available on YouTube). –Matthew]

[Image: The Gazing Car. The cart was fitted with robotic eyes which could be moved in any direction, controlled by one of the research team. The windshield was covered to give the impression that there was no driver inside. Credit: Chang et al. 2022]

Can eyes on self-driving cars reduce accidents? Cues from moving eyes could help pedestrians anticipate vehicle’s intentions

September 20, 2022

Robotic eyes on autonomous vehicles could improve pedestrian safety, according to a new study at the University of Tokyo. Participants played out scenarios in virtual reality (VR) and had to decide whether to cross a road in front of a moving vehicle or not. When that vehicle was fitted with robotic eyes, which either looked at the pedestrian (registering their presence) or away (not registering them), the participants were able to make safer or more efficient choices.

Self-driving vehicles seem to be just around the corner. Whether they’ll be delivering packages, plowing fields or busing kids to school, a lot of research is underway to turn a once futuristic idea into reality.

While the main concern for many is the practical side of creating vehicles that can autonomously navigate the world, researchers at the University of Tokyo have turned their attention to a more “human” concern of self-driving technology. “There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians. So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars,” said Professor Takeo Igarashi from the Graduate School of Information Science and Technology.

One key difference with self-driving vehicles is that the driver may become more of a passenger, not paying full attention to the road, or there may be nobody at the wheel at all. This makes it difficult for pedestrians to gauge whether a vehicle has registered their presence, as there may be no eye contact or other cues from anyone inside.

So, how could pedestrians be made aware when an autonomous vehicle has noticed them and intends to stop? Like a character from the Pixar movie Cars, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes. The researchers called it the “gazing car.” They wanted to test whether putting moving eyes on the cart would affect people’s riskier behavior, in this case whether people in a hurry would still cross the road in front of a moving vehicle.

The team set up four scenarios, two where the cart had eyes and two without. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and was going to keep driving. When the cart had eyes, the eyes would either be looking towards the pedestrian (going to stop) or looking away (not going to stop).

As it would obviously be dangerous to ask volunteers to choose whether or not to walk in front of a moving vehicle in real life (though for this experiment the cart was actually driven by a hidden driver), the team recorded the scenarios using 360-degree video cameras, and the 18 participants (nine women and nine men, aged 18–49 years, all Japanese) played through the experiment in VR. They experienced the scenarios multiple times in random order and were given three seconds each time to decide whether or not they would cross the road in front of the cart. The researchers recorded their choices and measured the error rates of their decisions, that is, how often they chose to stop when they could have crossed and how often they crossed when they should have waited.
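To make the two error types concrete, here is a minimal illustrative sketch in Python. It is not code from the study; the Trial fields and the example trial values are assumptions for illustration only. It simply tallies unsafe errors (crossing when the cart was not going to stop) and inefficient errors (waiting when the cart was going to stop).

# Illustrative sketch only (not from the study): tally the two decision-error
# types described above. The Trial fields and example values are assumptions.
from dataclasses import dataclass

@dataclass
class Trial:
    car_will_stop: bool        # cart registered the pedestrian and was going to stop
    participant_crossed: bool  # participant chose to cross within the 3-second window

def error_rates(trials):
    """Return (unsafe_error_rate, inefficient_error_rate).

    Unsafe error:      crossing when the cart was not going to stop.
    Inefficient error: waiting when the cart was going to stop.
    """
    not_stopping = [t for t in trials if not t.car_will_stop]
    stopping = [t for t in trials if t.car_will_stop]
    unsafe = sum(t.participant_crossed for t in not_stopping)
    inefficient = sum(not t.participant_crossed for t in stopping)
    return (unsafe / max(len(not_stopping), 1),
            inefficient / max(len(stopping), 1))

# Hypothetical example: four trials from one participant
trials = [
    Trial(car_will_stop=True, participant_crossed=True),    # correct (efficient)
    Trial(car_will_stop=True, participant_crossed=False),   # inefficient error
    Trial(car_will_stop=False, participant_crossed=True),   # unsafe error
    Trial(car_will_stop=False, participant_crossed=False),  # correct (safe)
]
print(error_rates(trials))  # (0.5, 0.5)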

“The results suggested a clear difference between genders, which was very surprising and unexpected,” said Project Lecturer Chia-Ming Chang, a member of the research team. “While other factors like age and background might have also influenced the participants’ reactions, we believe this is an important point, as it shows that different road users may have different behaviors and needs, that require different communication ways in our future self-driving world.

“In this study, the male participants made many dangerous road-crossing decisions (i.e., choosing to cross when the car was not stopping), but these errors were reduced by the cart’s eye gaze. However, there was not much difference in safe situations for them (i.e., choosing to cross when the car was going to stop),” explained Chang. “On the other hand, the female participants made more inefficient decisions (i.e., choosing not to cross when the car was intending to stop) and these errors were reduced by the cart’s eye gaze. However, there was not much difference in unsafe situations for them.” Ultimately the experiment showed that the eyes resulted in a smoother or safer crossing for everyone.

But how did the eyes make the participants feel? Some thought they were cute, while others found them creepy or scary. Many male participants reported that the situation felt more dangerous when the eyes were looking away, and many female participants said they felt safer when the eyes looked at them. “We focused on the movement of the eyes but did not pay too much attention to their visual design in this particular study. We just built the simplest one to minimize the cost of design and construction because of budget constraints,” explained Igarashi. “In the future, it would be better to have a professional product designer find the best design, but it would probably still be difficult to satisfy everybody. I personally like it. It is kind of cute.”

The team recognizes that this study is limited by the small number of participants playing out just one scenario. It is also possible that people might make different choices in VR compared to real life. However, “Moving from manual driving to auto driving is a huge change. If eyes can actually contribute to safety and reduce traffic accidents, we should seriously consider adding them. In the future, we would like to develop automatic control of the robotic eyes connected to the self-driving AI (instead of being manually controlled), which could accommodate different situations,” said Igarashi. “I hope this research encourages other groups to try similar ideas, anything that facilitates better interaction between self-driving cars and pedestrians, which ultimately saves people’s lives.”


PAPER:

Chia-Ming Chang, Koki Toda, Xinyue Gui, Stela H. Seo, and Takeo Igarashi, “Can Eyes on a Car Reduce Traffic Accidents?,” in Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22), September 17–20, 2022, Seoul, South Korea. ACM, New York, NY, USA, 16 pages. September 20, 2022. doi:10.1145/3543174.3546841.


Managing Editor: Matthew Lombard

The International Society for Presence Research (ISPR) is a not-for-profit organization that supports academic research related to the concept of (tele)presence. ISPR Presence News is available via RSS feed, various e-mail formats and/or Twitter. For more information, please visit us at http://ispr.info.

ISPR
2020 N. 13th Street, Rm. 205
Philadelphia, PA 19122
USA