37,133 Americans lost their lives to motor vehicle accidents in 2017, down from around 50,000 in 1980. According to the US Department of Transportation, 94 percent of all vehicle accidents are caused by human error rather than, say, mechanical malfunction. Now, a wild card has entered the deck: the self-driving car.
Are self-driving cars safer than human drivers? That is certainly the expectation, but at this point, self-driving cars have not logged enough total miles to allow a meaningful comparison with conventional cars. Concern about their safety will undoubtedly continue until more is known.
The Self-Driving Car Market
Many corporate heavyweights are now jumping into the self-driving car market – Google, Uber, General Motors, Ford, Tesla, and Volvo, among others, are all jockeying for position. Not all of these companies intend to get involved in vehicle manufacturing – Google, for example, intends only to develop the technology that operates self-driving cars, not the cars themselves.
On the demand side of the equation, the numbers are dizzying. The global autonomous vehicle market is valued at well over $50 billion, and it is projected to grow to half a trillion dollars by 2030. Currently, self-driving cars are relatively unusual, but they could become ordinary or even typical sooner than many people realize. Some observers expect them to be a common sight as early as 2021.
How Self-Driving Cars Work: The Wonders of Technology
Self-driving cars rely largely on LIDAR, a “light detection and ranging” sensor. LIDAR emits millions of laser pulses per second to create a constantly updating 3D image of the car’s environment. The vehicle also uses GPS signals to locate itself within a city and plan the most efficient route. To deal with other traffic, radar sensors measure the size and speed of anything that moves near the car (including vehicles and pedestrians). Meanwhile, the car’s cameras read street signs and traffic signals.
Self-driving cars also run software that can make real-time decisions, independent of human input, about how the vehicle will respond to the actions of other vehicles. This type of software can learn from its experience – in fact, it must do so in order to reach its full potential. It is for this reason that live beta testing on public roads is necessary despite its potential dangers.
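To make the idea of real-time, rules-based decision-making concrete, here is a deliberately oversimplified sketch. The function name, thresholds, and logic are all invented for illustration; real autonomous-driving software fuses many sensor streams and is vastly more complex.

```python
# Illustrative only: a toy decision rule based on time-to-collision with the
# nearest obstacle. All names and thresholds here are invented for this example.

def decide(obstacle_distance_m: float, speed_mps: float,
           reaction_buffer_s: float = 1.5) -> str:
    """Return a driving action based on estimated time-to-collision."""
    if obstacle_distance_m <= 0:
        return "emergency_brake"
    if speed_mps <= 0:
        return "proceed"
    time_to_collision = obstacle_distance_m / speed_mps
    if time_to_collision < reaction_buffer_s:
        return "emergency_brake"
    if time_to_collision < 2 * reaction_buffer_s:
        return "slow_down"
    return "proceed"

# A car traveling 19 m/s (about 43 mph) with an obstacle 25 m ahead has
# roughly 1.3 seconds to impact, so this toy logic calls for immediate braking.
print(decide(25.0, 19.0))   # emergency_brake
print(decide(200.0, 19.0))  # proceed
```

Even in this toy form, the key point is visible: the braking decision is made by code, with no human in the loop, which is why the quality and testing of that code matter so much.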
Fatalities (So Far)
As of January 2020, only six fatalities have been reported that appear to have been caused by self-driving cars – five in the US and one in China. No one doubts that there will be more. The only question is how many more. Could self-driving cars cut the accident rate in half, or double it?
The Elaine Herzberg Case
The death of Elaine Herzberg in March 2018 garnered wide publicity, in part because she was the first pedestrian ever killed by a self-driving car. Ms. Herzberg was struck while pushing her bicycle across a four-lane highway. At the time of the accident, the vehicle was operating in self-drive mode with a human backup driver in the driver’s seat.
The accident was not necessarily the sole fault of the automated system, however. According to an investigation of the accident conducted by the National Transportation Safety Board, both methamphetamine and marijuana were found in Ms. Herzberg’s bloodstream (this does not necessarily establish that she was intoxicated at the time of the accident, however). She entered the roadway in a dark area with no crosswalk, and she was wearing dark clothing.
Regardless, Herzberg’s presence on the road ahead should have resulted in near-immediate braking. Indeed, the car identified Ms. Herzberg as an object six seconds before the accident while traveling at 43 mph, but it waited until a second prior to impact before it flagged the need to brake. The investigation concluded that there was no braking before impact. Uber, which was administering the test drive, responded by suspending the test-driving of all self-driving cars in Arizona.
The Six Levels of Automated Driving
Automated driving is considerably more complex than an on/off switch. The following is a description of the six levels of automated driving:
- Level 0: The human driver is in control, but an automated system issues warnings and may intervene temporarily under certain circumstances.
- Level 1: The human driver and the automated system share control of the vehicle. Rudimentary systems like this, such as cruise control, have been around for decades. Modern innovations include parking assistance, where automation controls the steering while the driver controls the speed. The human driver is required to be prepared to resume full control of the vehicle at any moment.
- Level 2: This is the first level at which the automated system takes full control of steering, acceleration, and braking. A human driver, however, must be ready to take over at any moment – in fact, in many cases, the driver is required to keep his hands on the wheel at all times.
- Level 3: The human driver is not required to keep his hands on the wheel or his eyes on the road – he can even surf the internet the way a passenger might. The driver is still required to be ready to take over if a problem develops, but not necessarily within a split second.
- Level 4: The driver can go to sleep if he wants to, even in the back seat. He is only required to take control of the vehicle during certain limited circumstances. The automatic system handles all aspects of the journey including routing and even parking.
- Level 5: Full automation – Blade Runner-style robo-cars that do not need a human driver aboard at all.
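The six levels above can be summarized in a simple lookup table. This is an informal sketch paraphrasing the list, not an official data structure from any standard; the function name and descriptions are invented for illustration.

```python
# Informal summary of the six automation levels described above.
AUTOMATION_LEVELS = {
    0: "Human drives; system issues warnings and may intervene momentarily",
    1: "Human and system share control (e.g., cruise control, parking assist)",
    2: "System drives; human must be ready to take over at any moment",
    3: "Human may look away but must take over when the system requests",
    4: "System handles the full journey in limited conditions; human may sleep",
    5: "Full automation; no human driver needed aboard",
}

def requires_attentive_human(level: int) -> bool:
    """Levels 0 through 2 require a driver ready to take control instantly."""
    return level <= 2

print(requires_attentive_human(2))  # True
print(requires_attentive_human(4))  # False
```

The dividing line at Level 2/3 matters legally as well as technically: below it, responsibility for monitoring the road rests squarely on the human driver.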
The Dangers of Self-Driving Cars
At this point, the most obvious dangers of self-driving cars include:
- The lack of an effective error-correction mechanism when a self-driving car makes a mistake that could lead to an accident. At some point in the future, such error-correction mechanisms might be built into road infrastructure.
- At the lower levels of autonomy, the risk that a human driver will become complacent and trust the automated system too much, thereby slowing reaction time. Imagine a Level 2 driver sleeping in the back seat, for example.
- New forms of crime – for example, the remote hijacking of a self-driving car by cybercriminals or terrorists, effectively kidnapping its occupants. A criminal might also use technology to steal the vehicle’s contents or the owner’s private information.
- Software malfunctions that cause accidents. Imagine a software malfunction on an 18-wheeler truck carrying hazardous materials, for example.
The Benefits of Self-Driving Cars
If safety concerns can be effectively addressed, self-driving cars could offer many benefits, including:
- A drastic reduction in DUI and other forms of impaired driving. The owner of a self-driving car that uses higher levels of automation would always have access to a “designated driver.”
- A self-driving car can be programmed to obey all of the rules of the road at all times.
- Better fuel efficiency and lower total emissions as self-driving cars take the most efficient route to a given destination.
- Better decision-making, better reaction times, better ability to see in the dark and in adverse road conditions, and better overall safety. While this has yet to be proven, the superiority of self-driving cars already seems to be a very real possibility.
- Greater independence for blind and elderly riders.
The Legal Landscape
When it comes to self-driving cars, the regulatory landscape is still evolving. Currently, the US federal government has issued no mandatory regulations, only voluntary guidelines. Local laws vary significantly from state to state.
Connecticut State Law Concerning Self-Driving Cars
Connecticut has enacted basic legislation that mandates certain procedures for testing self-driving cars. Testing may take place only in certain designated Connecticut cities with populations of at least 100,000, for example. A human driver must be physically present at all times in any self-driving car.
A Shift to Product Liability Claims?
As stated above, at present roughly 94 percent of all accidents are caused by human error, and only a small percentage are caused by mechanical malfunction. If self-driving cars take over, however, a software malfunction will likely be classified as a mechanical malfunction rather than human error. As a consequence, when personal injury and wrongful death lawsuits are filed over vehicle accidents, product liability claims rather than negligence claims are likely to become more and more common.
Product liability cases, already fraught with technical complexity in many instances, are likely to become even more so. For example, a self-driving car might feature software and AI developed in Silicon Valley, sensors and cameras manufactured by another company, with the car itself manufactured by still another company. In some cases, all of these companies might have contributed to an accident.
The Ethics of Self-Driving Cars
One of the cutting-edge issues relating to self-driving cars is ethics, because ethics become challenging when an automated system can make decisions without human input. How will the system be programmed to respond when an ethical decision is required?
Suppose, for example, that an automated car with one occupant – the car’s owner – detects three jaywalking pedestrians too late to stop. The choices are to veer off the road into a ravine, which will certainly kill the occupant, or to hit and kill all three pedestrians. Should an automated system be programmed to respond the way a particularly altruistic driver should behave, or the way the average driver probably would behave?
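The dilemma above can be reduced to a deliberately crude sketch. The policy names and logic here are invented; no real system is known to work this way. The point is simply that the choice must be written into the software in advance.

```python
# Oversimplified illustration of the programming dilemma described above.
# Policy names and logic are invented for this example.

def choose_maneuver(occupants: int, pedestrians: int, policy: str) -> str:
    """Pick between swerving (harming occupants) and staying (harming pedestrians)."""
    if policy == "minimize_total_harm":
        return "swerve" if occupants < pedestrians else "stay"
    if policy == "protect_occupants":
        return "stay"
    raise ValueError(f"unknown policy: {policy}")

# One occupant versus three pedestrians: the two policies disagree, which is
# precisely why someone must decide, before the fact, which rule the car follows.
print(choose_maneuver(1, 3, "minimize_total_harm"))  # swerve
print(choose_maneuver(1, 3, "protect_occupants"))    # stay
```

Whatever policy is chosen, it is a human programmer – and ultimately a manufacturer – who makes the ethical decision, long before the car ever encounters the pedestrians.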
Let Us Make It Happen for You
At Berkowitz Hanna, our lawyers have decades of combined experience, and we have won dozens of multi-million-dollar verdicts and settlements. If you have suffered an injury that you believe was at least partially someone else’s fault, contact us immediately for a free case evaluation. You don’t need any money to retain us. Either we win your case or we charge you nothing. And our bill doesn’t come due until your money actually arrives.