Autonomous vehicles (AVs), like most forms of AI, raise a litany of legal, ethical, and policy issues. Because they take on a task we normally entrust to humans without being human themselves, there are many angles to consider during their development and eventual rollout for mainstream use. This exploration will not be exhaustive, but it will hopefully cover the main concerns in a helpful manner.
How safe should AVs be?
NHTSA sets out six levels of vehicle automation, where Level 0 is an entirely human-operated vehicle and Level 5 refers to a vehicle that is autonomous under all circumstances and requires no human intervention. For the purposes of this article, AVs will be characterised as vehicles "designed for travel without a human operator", or Level 5 on the scale, as per the National Law Review.
In order to roll out AVs effectively, they must first be considered safe enough by a country's safety regulators, and second must meet a publicly acceptable standard of safety. The latter is subject to variability: a survey in Scotland found no clear public consensus on what constitutes a "safe enough" standard. Consumer tastes and preferences may also work against AVs, as people may feel uncomfortable sitting in a driverless vehicle. These two factors feed into the first prerequisite, since consumer demands may in turn influence the safety regulations imposed on car companies.
The RAND Corporation, a non-profit research organisation, has found that an Improve10 policy, under which AVs are deployed once they are 10% safer than the average human driver, generally saves more lives than Improve75 and Improve90 policies, under which deployment waits until AVs are 75% and 90% safer respectively. This matters most at scale: modelled over several decades, implementing Improve10 can potentially save up to 500,000 additional lives.
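To make the intuition concrete, here is a minimal toy model of that comparison. It is not RAND's actual analysis: every figure in it (baseline fatalities, adoption curve, deployment delays, post-deployment improvement) is a hypothetical assumption chosen only to illustrate why deploying slightly safer AVs earlier can beat waiting for much safer ones.

```python
# Toy model only: this is NOT the RAND analysis. Every number below
# (baseline fatalities, adoption rate, deployment delays, learning rate)
# is a hypothetical assumption chosen purely for illustration.

BASELINE_FATALITIES_PER_YEAR = 37_000  # assumed annual road deaths with human drivers
HORIZON_YEARS = 30                     # assumed evaluation window

def lives_saved(initial_gain, years_until_deployment, adoption_rate=0.05):
    """Cumulative lives saved by AVs that start `initial_gain` safer than the
    average human driver (e.g. 0.10 = 10%), reach the road after
    `years_until_deployment` years, and take over `adoption_rate` more of the
    fleet each year (capped at 100%)."""
    saved = 0.0
    for year in range(HORIZON_YEARS):
        on_road = year - years_until_deployment
        if on_road < 0:
            continue  # not yet permitted on public roads
        fleet_share = min(1.0, (on_road + 1) * adoption_rate)
        # Assume AVs keep improving with real-world experience, up to 90% safer.
        gain = min(0.90, initial_gain + 0.03 * on_road)
        saved += BASELINE_FATALITIES_PER_YEAR * fleet_share * gain
    return saved

# Hypothetical deployment delays: a 10% improvement is reached far sooner
# than a 75% or 90% improvement.
scenarios = {"Improve10": (0.10, 2), "Improve75": (0.75, 15), "Improve90": (0.90, 25)}
for name, (gain, delay) in scenarios.items():
    print(f"{name}: ~{lives_saved(gain, delay):,.0f} lives saved over {HORIZON_YEARS} years")
```

Under these invented assumptions the earlier, less safe rollout still comes out well ahead, because it accumulates both fleet share and on-road improvement during the years the stricter policies spend waiting.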
The question is therefore whether it is justified to wait for the technology to improve before sending AVs out onto the roads. Typically, to maximise consumer comfort with new technology, advances must be introduced gradually to allow acclimatisation and receptiveness to further developments in the future. An MIT paper has reported that many consumers have adopted a "wait and see" approach towards autonomous vehicles.
However, a generic evaluation of safety is complicated by the lithium-ion batteries typically used by AVs. Their highly combustible nature means that a battery damaged in a collision can lead to uncontrollable fires. In April 2021, a driverless car crash in Texas killed two passengers and caused a fire that lasted for four hours. So while the higher safety standards applied to AVs may lead to fewer accidents overall, the long-burning nature of lithium fires, far more hazardous than a standard vehicular fire that is usually extinguished in minutes, may raise the number of fatalities per accident. Current safety calculations do not appear to account for this, which suggests AVs may be more dangerous than predicted.
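That trade-off can be sketched with a back-of-the-envelope calculation. All of the figures below are invented for illustration; the point is only that a lower crash frequency does not guarantee a lower fatality rate if each crash becomes more severe.

```python
# Hypothetical illustration: all figures are invented, not measured data.
# Deaths per mile = crashes per mile * deaths per crash, so fewer crashes
# can still mean more deaths if per-crash severity rises enough.

human_crashes_per_100m_miles = 200    # assumed crash frequency, human drivers
human_deaths_per_crash = 0.010        # assumed fatality severity per crash

av_crashes_per_100m_miles = 150       # assume AVs crash 25% less often
av_deaths_per_crash = 0.015           # assume 50% higher severity (e.g. battery fires)

human_rate = human_crashes_per_100m_miles * human_deaths_per_crash
av_rate = av_crashes_per_100m_miles * av_deaths_per_crash

print(f"Human-driven: {human_rate:.2f} deaths per 100M miles")  # 2.00
print(f"Autonomous:   {av_rate:.2f} deaths per 100M miles")     # 2.25
```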
Tune in next week for the 2nd part of this series! Thanks for reading!