Who is responsible for accidents once cars drive themselves?

Remember the Uber self-driving car crash that killed a pedestrian more than two years ago? The latest news is that on September 15 this year, an Arizona grand jury decided to indict Rafael Vasquez, the former safety driver of the Uber self-driving car, for negligent homicide, with a recommended sentence of two and a half years in prison. The safety driver refused to plead guilty in court, and the case may continue.

And Uber, the other party to the accident? In fact, as early as March last year, prosecutors in the US determined that Uber bore no criminal liability.

In what has been called the "first pedestrian death caused by autonomous driving," the company easily escaped responsibility, while the safety driver, who was indeed at fault, was left to bear all of the guilt. The result is hard to accept. Is this judicial corruption of the "big companies do evil, small employees take the blame" kind? Or does the decision expose a new problem for an old judicial model that does not yet know how to rule on self-driving algorithm systems and their owners?

In autonomous driving tests or commercial services monitored by a safety driver, we naturally blame the safety driver when the vehicle has an accident. Once the safety driver is removed, responsibility for accidents will naturally fall on the company that develops and operates the autonomous driving algorithm. At that point, holding autonomous driving algorithms accountable becomes far more complicated.

Before discussing that, let us go back to the details of Uber's crash and see what is controversial about the verdict. Can Uber really walk away from this? And once the safety driver is removed, how should self-driving cars and their algorithms be held accountable? These questions may seem to belong to the future, but they are already in front of us and urgently need to be considered and discussed.

Back to the scene: How did the crash happen?

In November 2019, the U.S. National Transportation Safety Board (NTSB) released a report revealing what Uber's self-driving car did in the 10 seconds before the collision. Notably, although Uber had already been ruled not responsible, the report pointed out numerous loopholes in Uber's self-driving system.

This is how the crash happened. On the night of March 18, 2018, a woman in Tempe, Arizona, was struck and killed by an Uber self-driving car traveling at more than 60 kilometers per hour while she pushed her bicycle across the street.

If this had been an ordinary vehicle, the division of responsibility would be obvious: the pedestrian crossing the road bears some responsibility, but the driver bears the main responsibility for failing to brake and avoid in time. But the car was an Uber self-driving test vehicle, with a safety driver on board whose job was to handle emergencies.

In this crash, the safety driver was clearly not doing his due diligence. According to the investigation, he had been watching an entertainment program (something like "The Voice of China") on his mobile phone while the car drove itself. The in-cabin camera caught him repeatedly looking down, and he noticed the pedestrian in front of the vehicle only about 0.5 seconds before impact, finally hitting the brakes roughly 0.7 seconds after impact, by which time the accident had already happened.

There is no problem with holding the safety driver accountable for this accident. After all, his job was to ensure the safety of the vehicle and of pedestrians on the road, and his negligence directly caused this serious accident. In everyday driving, a large share of accidents is caused by exactly this kind of negligence.

But it was the self-driving system of Uber's driverless vehicle that gave the safety driver the illusion that the car could judge the road ahead on its own, and that he could slack off and look at his phone. This is precisely the dilemma of Level 3 autonomy: the vehicle can be highly automated, yet in the event of an accident the responsibility falls on the driver. How, then, is the driver supposed to relax, let alone play games?

Going back to the Uber vehicle itself: was there really nothing wrong with it? The investigation found plenty of problems.

In the 10 seconds before the Uber vehicle hit the pedestrian, it should have been able to recognize her and avoid the crash. Instead, a series of system misjudgments caused the vehicle to hit her without slowing down at all. Several key data points appear in the report: from 9.9 to 5.8 seconds before impact, the car accelerated from 56 km/h to 70 km/h; at 5.6 seconds, the car's millimeter-wave radar detected an object ahead for the first time and classified it as a "vehicle"; at 5.2 seconds, the car's lidar detected the object for the first time, classified it as "other", and judged it to be stationary. From 4.2 to 2.7 seconds, the classification wavered back and forth between "vehicle" and "unknown", and because the system did not use the object's tracking history, it kept being treated as a stationary object.

From 2.6 to 1.2 seconds, the lidar recognized the object as a stationary bicycle, then the decision wavered yet again, until the object was re-identified as a vehicle and the system decided to brake. But actual braking began only 0.2 seconds before impact, and at a speed of 64 km/h the vehicle could no longer avoid the pedestrian. The crash happened.
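
Why did the wavering classifications matter so much? One plausible reading of the NTSB finding, sketched below as a purely hypothetical Python fragment (not Uber's actual code), is that a tracker which throws away an object's motion history every time its class label changes can never accumulate enough samples to estimate velocity, so a pedestrian steadily crossing the lane keeps looking like a stationary object.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    label: str                                      # current classification ("vehicle", "other", ...)
    positions: list = field(default_factory=list)   # (time, position) samples kept only since the last relabel

    def update(self, t: float, x: float, label: str) -> None:
        if label != self.label:
            # Hypothetical flaw: a relabel discards the motion history,
            # so velocity has to be re-estimated from scratch.
            self.positions = []
            self.label = label
        self.positions.append((t, x))

    def velocity(self) -> float:
        # With fewer than two samples since the last relabel, the object
        # looks stationary and no collision course is predicted.
        if len(self.positions) < 2:
            return 0.0
        (t0, x0), (t1, x1) = self.positions[0], self.positions[-1]
        return (x1 - x0) / (t1 - t0)

# Each relabel wipes the history, so the estimated velocity stays 0.0
# even though the object is steadily moving across the lane.
obj = TrackedObject(label="other")
for t, x, label in [(0.0, 0.0, "other"), (0.5, 0.6, "vehicle"),
                    (1.0, 1.2, "other"), (1.5, 1.8, "bicycle")]:
    obj.update(t, x, label)
    print(t, obj.label, obj.velocity())
```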

In the seconds before the crash, a great deal of time was wasted on these wavering misjudgments. According to the NTSB report, the key problem behind the accident was that the software could not correctly predict the category or the trajectory of the victim. Had the system correctly identified the object ahead as a pedestrian early on, it should have slowed down significantly or tried to steer around her.

But Uber's self-driving system did not act so cautiously. Instead, emergency braking was deliberately limited, because Uber believed that system-initiated emergency braking would make the vehicle behave erratically.

In other words, Uber treated braking by the self-driving system as the last factor to consider, which is genuinely frightening.

The self-driving system drives the car, but the safety driver takes the blame?

According to the NTSB's investigation, Uber's self-driving system had major safety flaws: first, the accuracy and timeliness of its recognition algorithm; second, how authority over emergency braking was configured. The NTSB concluded that Uber's decision to disable the automatic emergency braking system that came with the vehicle increased the risk of testing self-driving vehicles on public roads.
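
To make the second flaw concrete, here is a toy stopping-distance comparison; the reaction times and deceleration value are invented for illustration and are not taken from Uber's system. The point is simply that when system-initiated emergency braking is disabled, the stopping decision waits on a distracted human, and the distance needed to stop from roughly 70 km/h grows dramatically.

```python
# Toy numbers only: rough stopping distance under two hypothetical
# configurations of braking authority.
SPEED_MPS = 70 / 3.6        # ~70 km/h expressed in metres per second
DECEL_MPS2 = 7.0            # assumed firm braking deceleration (m/s^2)

def stopping_distance(reaction_time_s: float) -> float:
    """Distance covered while reacting plus distance needed to brake to a stop."""
    reaction = SPEED_MPS * reaction_time_s
    braking = SPEED_MPS ** 2 / (2 * DECEL_MPS2)
    return reaction + braking

# System allowed to brake as soon as it predicts a collision (short latency).
print("system brakes:", round(stopping_distance(0.3), 1), "m")

# Emergency braking left to a human who must notice, decide, and act.
print("human brakes: ", round(stopping_distance(2.0), 1), "m")
```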

According to the investigation report, the Uber car had been running in self-driving mode for about 19 minutes before the accident and had traveled at least about 22 kilometers. If the safety driver did not step on the brake once over that distance, it most likely means the automated driving system never activated the brakes either. Anyone with driving experience knows that even on a quiet street at night, it is rare to drive more than 20 kilometers at close to 70 kilometers per hour without ever needing to slow down or brake.

If Uber had really handed braking authority over to the safety driver, how could he have dared to immerse himself in an entertainment show while driving at high speed, caring nothing for his own safety or that of pedestrians?

In other words, while Uber handed braking authority to the safety driver, it did not make sure he fully understood this safety arrangement. Uber hedged its legal risk by installing safety drivers, but it neither anticipated the safety risks of the vehicle itself nor fulfilled its duty to inform, turning a human being "fudged" by the algorithm into a footnote to autonomous driving technology's entry into the real world.

Conversely, if the self-driving system does not hold braking authority in autonomous mode and must rely on a safety driver to brake, what does such an autonomous driving test even mean? Is it a fake test of autonomy?

As a former Uber engineer put it: "Uber's crash rate is simply too high. If Waymo saw this kind of behavior, it would stop testing and find out why; Uber just ignores the problem."

These issues are exactly where Uber's self-driving program draws criticism. Uber wanted to commercialize a robotaxi business through an aggressive self-driving program, while also shielding the flaws and loopholes of its self-driving system behind safety drivers. When problems eventually arose, the blame could be pinned on those employees.

Apparently, that is what Uber did. At the end of 2018, Uber resumed road testing of its unmanned vehicles in some cities, equipped each vehicle with two safety drivers, imposed stricter monitoring, and optimized its autonomous driving system.

For the local judiciary, the reason for ruling that Uber bore no responsibility was very simple: there was no legal basis on which to hold it liable.

Once the car is fully driverless, who really takes responsibility?

It was this lack of a legal basis for liability that let Uber "get away with it" this time. But by the analysis above, Uber is very much to blame as a matter of factual responsibility.

First, Uber's self-driving system did not treat safety as the first consideration; it emphasized the stability and continuity of the ride instead. That laid the groundwork for many of the safety incidents involving Uber's unmanned vehicles. If Uber pushes its driverless taxi strategy with such a "radical" algorithm in the future, it is easy to imagine vehicles that prioritize getting there fast while ignoring road safety.

Second, the testing of the driving system was flawed. A braking design that relies on a safety driver's intervention clearly departs from the original intent of autonomous driving technology, and such a system obviously cannot achieve truly unmanned commercial operation.

In the same year as Uber's fatal accident, the state of California further relaxed its supervision of driverless vehicles, allowing tests with no safety driver in the car, as long as the self-driving vehicle can be taken over remotely if a problem occurs.

In 2019, Waymo received a fully driverless testing permit from the California Department of Motor Vehicles (DMV), allowing it to test without a safety driver on board. In the driverless taxi service that followed, passengers can also hail a car with no safety driver; if they encounter sudden danger while the car is moving, they can press the in-car help button or contact a safety operator through the app.

In that case, the new questions of responsibility raised by unmanned vehicles have to be considered, because an accident can no longer simply be blamed on a remotely guiding safety operator.

Attributing responsibility is actually fairly straightforward in principle. Once ordinary traffic-accident liability has been determined and the other party's responsibility has been excluded, the accident is attributed to the driverless car. But whether the carmaker, the provider of the autonomous driving system, or the operator of the service bears that responsibility has to be divided according to the business model and the determination of the cause of the accident at the scene.

But this creates the problem of a missing legal subject of responsibility. Under normal circumstances, almost every accident has a specific person who is held responsible, and most of the time that is the driver who committed the violation. You cannot sue the owner who bought a driverless car; after all, he was not driving. Nor can you sue the engineer who designed the self-driving algorithm; no engineer works alone, and the cause of an accident cannot be attributed to a single line of code. So does it come down to the company that provides the self-driving system? No company would be willing to take on such an enormous risk alone.

Perhaps in the future there will be a liability entity jointly established by all parties involved in autonomous vehicles together with insurance companies. The manufacturers, designers, and operators would pay insurance premiums in proportion to the size of their responsibility, and owners of unmanned private cars (there will probably be very few individual owners) would also pay a certain insurance fee when purchasing the service, forming an insurance pool to cover possible accidents.

This responsible entity would handle the overall determination of liability and compensation for accidents, and would also build an internal AI measurement system to assign specific responsibility according to the crash-damage records of different carmakers, the accident rates of different autonomous driving algorithms, and the operating strategies of different operators, and then set each party's future premiums accordingly.

For example, some carmakers may prioritize the safety of the passengers inside the car; when accidents under that strategy harm pedestrians, companies using it would pay higher premiums. Conversely, if the chosen strategy leads to passengers being injured or killed, the responsible parties would pay higher premiums and higher compensation to the passengers in the car.
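
As a toy illustration of how such a jointly funded pool might split premiums (all party names, shares, and numbers below are invented), each participant's payment could be scaled by its attributed responsibility share and its recent accident rate:

```python
# Hypothetical sketch of a shared liability pool: each party's premium is
# proportional to its attributed responsibility share, scaled by its
# recent accident rate. All names and figures are invented for illustration.
POOL_TARGET = 1_000_000  # total premiums the pool aims to collect this year

parties = {
    # party: (responsibility share from past rulings, accidents per million km)
    "carmaker":         (0.30, 0.02),
    "algorithm_vendor": (0.45, 0.05),
    "fleet_operator":   (0.20, 0.03),
    "private_owners":   (0.05, 0.01),
}

# Weight = responsibility share x accident rate, normalized so premiums
# sum to the pool target.
weights = {p: share * rate for p, (share, rate) in parties.items()}
total = sum(weights.values())
premiums = {p: POOL_TARGET * w / total for p, w in weights.items()}

for party, premium in premiums.items():
    print(f"{party:17s} pays {premium:10.0f}")
```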

It is foreseeable that once self-driving unmanned vehicles become widespread, cases requiring liability determination in all sorts of complex situations will appear one after another. We should start thinking about and experimenting with legislation before that happens, rather than waiting for something to go wrong and then groping for answers. And we should not repeat the Uber case, where the blame could only be placed on one negligent human being while the self-driving algorithm system bore nothing at all.

Strict regulation of driverless cars does not mean we are pessimistic about the industry. In my view, the future of driverless cars is very bright. There will still be extreme accidents of one kind or another, but driverless cars will ultimately be safer than today's human-driven traffic.

Just look at the many accidents Waymo vehicles have been involved in: in the vast majority, the human driver of the other vehicle bore full responsibility. When driverless cars are in the majority, we will no longer need to be so wary of them; it is the human drivers we will have to watch out for. After all, a self-driving system will not watch a talent show while driving.
