The road ahead for semi-autonomous vehicle crash liability
The insurance industry continues to be challenged by questions around responsibility and compensation after a semi-autonomous vehicle crashes.
Semi-autonomous vehicles, which require drivers to supervise the system and be ready to take control, are becoming more common. But before we accept the risks that come with cars and trucks driving us instead of us driving them, insurers need to be prepared to address two major questions: If a vehicle drives itself, who is responsible when that vehicle crashes? And should victims expect to be compensated at the same level as in traditional auto accidents?
A few car manufacturers have promised that when their fully automated vehicles reach the market, they will take responsibility for any crashes. For assisted automation, however, Tesla has maintained that responsibility remains with the human driver, a position the automaker reaffirmed after a man riding in a Tesla Model S in “Autopilot” mode crashed into a truck that neither the driver nor the vehicle had detected.
Public perception and blame when accidents happen
A new study, “Blame Attribution Asymmetry in Human–Automation Cooperation,” published in the journal Risk Analysis, examined public perceptions of blame and responsibility in semi-autonomous vehicle (semi-AV) crashes. Its findings underscore why responsibility and compensation remain such difficult questions for the insurance industry.
Researchers led by Peng Liu, an associate professor in the College of Management and Economics at Tianjin University, conducted four experiments across two studies. Rather than approaching blame and responsibility judgments from legal, ethical, or philosophical perspectives, the study focused on the perspective of lay people, who will be the end consumers of semi-AVs.
The first objective was to determine whether people respond differently to hypothetical semi-AV crashes caused by the human driver versus the automation, across three areas: the judged severity and acceptability of the crash, the blame and responsibility attributed to the human and the automation, and the compensation owed to victims.
The experiments found that people are more willing to spread a human driver’s fault across other agents in a human-caused crash than to spread the automation’s fault in an automation-caused crash. Participants also assigned more blame and responsibility to the automation and its manufacturer when the automation caused the collision. Automation may therefore be more likely to be singled out as the agent responsible for semi-AV crashes and to bear the blame for them.
The second objective explored how responses differ between hypothetical human-caused and automation-caused crashes. Here, respondents indicated that the victim should be compensated more in an automation-caused crash than in one caused by a human driver. They also judged the automation-caused crash to be more severe and less acceptable, regardless of its outcome (injury or fatality).
Blame attribution asymmetry
In theorizing people’s responses to crashes, the researchers also uncovered previously unidentified evidence of a bias: people judge automation-caused crashes more harshly, ascribe more blame and responsibility to vehicle manufacturers, and believe victims of such crashes should be compensated more. Called blame attribution asymmetry, the bias has a direct implication for semi-AV adoption: allowing “not-safe-enough” semi-AVs on roads could backfire, because the crashes they cause may carry outsized psychological costs and deter more people from adopting the technology.
Blame attribution asymmetry also links people’s tendency to overreact to automation-caused crashes with the stronger negative emotions these crashes evoke, emotions that can in turn amplify attributions of legal responsibility and blame.
The bias suggests that policymakers and regulators need to be aware of the public’s potential overreaction to AV crashes when setting policies for deploying and regulating these vehicles, particularly with regard to financial compensation for victims injured or killed by automated systems. Insurers should also have a role in the conversation. “According to our findings, they might need to consider the possibility that to lay people, victims of AV crashes should be compensated more than commonly calculated,” the authors write.
The researchers go further, extrapolating that any policy allowing semi-AVs the public perceives as “unsafe” onto roads could backfire, as the inevitable crashes may deter more people from adopting them. To change negative attitudes toward semi-AVs, Liu argues that “public communication campaigns are highly needed to transparently communicate accurate information, dispel public misconceptions, and provide opportunities to experience semi-AVs.”
The future of semi-AVs is arriving faster than many anticipated, placing heightened responsibility on insurers to establish a nimble roadmap for what lies ahead, maintain open lines of communication with customers, and continue to reduce confusion about who is responsible for safe driving.
Research originally published by The Society for Risk Analysis, a multidisciplinary, interdisciplinary, scholarly, international society that provides an open forum for all those interested in risk analysis.
Peng Liu (pengliu@tju.edu.cn or pliutsinghua@gmail.com) is an associate professor in the College of Management and Economics at Tianjin University. Liu conducts research on human factors and ergonomics (task complexity, workload), safety engineering (human error and reliability analysis, risk analysis), and social psychology (risk perception and risky decision making) with applications in nuclear power plants and transportation.