Automotive

We May Be Able to Teach Cars to Drive, but Can We Teach Them to Be Ethical?

One leading argument in favor of developing autonomous vehicles is that robotic cars can react far more quickly than human reflexes allow and could cut down significantly on the roughly 30,000 fatal traffic accidents the United States experiences annually. When a traffic accident causes death or injury, the cause is sometimes judged unavoidable by human standards, but what happens in the not-too-distant future if an accident is judged to be the fault of a self-driving car? More generally, on whom does liability for self-driving cars fall: the manufacturer of the car or the software developers who program its algorithms?

When Is A Driverless Car Negligent?

According to IEEE Spectrum, the “reasonable person” legal standard for driver negligence may disappear by the 2020s, when proliferating driverless cars will have reduced crash rates by 90 percent; it may be replaced by a “reasonable robot” standard. Courts today never ask why a driver did any particular thing in the critical moments preceding a crash; the assumption is that the driver, in a panic, acted on instinct. Human ethical standards, imperfectly rendered into law, assume that a person with good judgement knows when to disregard the letter of the law in order to honor its spirit. It becomes the engineer’s job to translate that judgement into code for self-driving cars and other autonomous machines; in other words, for robots.
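How might that judgement actually look in code? One common approach in motion-planning literature is a weighted cost function, in which a large enough safety benefit can outweigh a minor legality cost. The Python sketch below is purely illustrative: every maneuver name, weight, and cost value is an assumption made for exposition, not any manufacturer’s actual logic.

```python
# Illustrative sketch only: one way an engineer might encode
# "spirit of the law" judgement as a weighted cost comparison.
# All names, weights, and values here are hypothetical.

LEGAL_VIOLATION_COST = {
    "cross_double_yellow": 10.0,  # technically illegal maneuver
    "none": 0.0,
}

def choose_maneuver(options):
    """Pick the option with the lowest combined safety + legality cost.

    Each option carries an estimated collision risk (0..1), the
    severity of harm if a collision occurs, and any traffic-law
    violation the maneuver entails.
    """
    def total_cost(opt):
        # Safety dominates legality: the large multiplier means the
        # car will break a minor rule (e.g. cross a double yellow
        # line to pass a stalled truck) when doing so clearly
        # reduces risk.
        safety = opt["collision_risk"] * opt["harm_severity"] * 1000.0
        legality = LEGAL_VIOLATION_COST[opt["violation"]]
        return safety + legality

    return min(options, key=total_cost)

# Example: wait behind a stalled truck indefinitely, or briefly cross
# the double yellow line while the oncoming lane is clear.
options = [
    {"name": "wait_in_lane", "collision_risk": 0.02,
     "harm_severity": 5.0, "violation": "none"},
    {"name": "cross_line_to_pass", "collision_risk": 0.001,
     "harm_severity": 5.0, "violation": "cross_double_yellow"},
]
print(choose_maneuver(options)["name"])  # -> cross_line_to_pass
```

The point of the multiplier is exactly the “reasonable person” trade-off described above: the rule violation carries a real cost, but a clear safety gain overrides it.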

In California, autonomous vehicles licensed for testing are required to provide all data from their sensor suites (video, ultrasonic, infrared, radar, lidar, and so forth) for the 30 seconds preceding a crash to the Department of Motor Vehicles. Engineers can thus reconstruct remarkably precise knowledge of the events surrounding a crash: what the vehicle sensed, what alternative actions it considered, and the logic behind its decisions. Following a crash, regulators and litigators will bring intense scrutiny to bear on autonomous vehicles, holding them to superhuman safety standards in manufacturing, programming, and performance.
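A “last 30 seconds” requirement implies a recorder that continuously overwrites old sensor frames and freezes the window when a collision is detected, in other words, a ring buffer. Here is a minimal sketch of that pattern; the sensor fields and the 10 Hz frame rate are assumptions for illustration, not the actual DMV specification.

```python
# Minimal sketch of a pre-crash recorder: a fixed-length ring buffer
# that always holds the most recent 30 seconds of sensor frames and
# is snapshotted when a crash is detected. The frame rate and frame
# contents are illustrative assumptions.

from collections import deque

FRAME_RATE_HZ = 10
WINDOW_SECONDS = 30

class PreCrashRecorder:
    def __init__(self):
        # A deque with maxlen silently discards the oldest frame
        # once the 30-second window is full.
        self.frames = deque(maxlen=FRAME_RATE_HZ * WINDOW_SECONDS)

    def record(self, timestamp, sensor_frame):
        """Called every control cycle with the fused sensor snapshot
        (camera, radar, lidar, ultrasonic readings, planner state)."""
        self.frames.append((timestamp, sensor_frame))

    def snapshot_on_crash(self):
        """Freeze and return the last 30 seconds for crash review."""
        return list(self.frames)

# Usage: the control loop records continuously; on impact detection
# the snapshot is persisted for regulators and crash reconstruction.
recorder = PreCrashRecorder()
for t in range(600):  # 60 seconds of driving at 10 Hz
    recorder.record(t / FRAME_RATE_HZ, {"speed_mps": 12.0})
report = recorder.snapshot_on_crash()
print(len(report))  # 300 frames == exactly the last 30 seconds
```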

Ethical Dilemmas In Real World Driving

Because all driving entails risk, distributing that risk among drivers, pedestrians, passengers, cyclists, and property will inject ethical considerations into the decisions that a driverless car makes. MIT’s Technology Review reported on a workshop held in 2015 at Stanford University in which engineers and philosophers were asked to discuss ethical dilemmas that might arise when self-driving cars are deployed in the real world.

Chris Gerdes, a Stanford professor who heads a research lab experimenting with automated driving, and Cal Poly philosophy professor Patrick Lin called on the researchers, automotive executives, and automotive engineers at the workshop to consider the ethical implications of the technology they are developing. The researchers implemented different ethical settings in the control programs of driverless cars, for example instructing a car to prioritize avoiding humans over avoiding passengerless vans or squirrels (sketched in code below). Realistic split-second scenarios included a child suddenly dashing into the street, forcing a self-driving car to choose between hitting the child and swerving into an oncoming van.
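In planning terms, such “ethical settings” can be expressed as per-class collision weights inside a trajectory scorer. The sketch below mirrors the workshop example, with humans weighted far above empty vans or squirrels; the specific classes, weights, and data layout are illustrative assumptions, not the Stanford lab’s actual code.

```python
# Sketch of configurable "ethical settings" as per-class collision
# weights in a trajectory scorer. Classes, weights, and structure
# are illustrative assumptions only.

ETHICAL_WEIGHTS = {
    "human": 1_000_000.0,  # avoiding people dominates everything else
    "occupied_vehicle": 100_000.0,
    "empty_van": 1_000.0,
    "property": 100.0,
    "squirrel": 10.0,
}

def trajectory_cost(trajectory):
    """Sum the weighted risk of every predicted collision along a
    candidate trajectory. Lower cost = preferred path."""
    return sum(
        ETHICAL_WEIGHTS[obstacle_class] * probability
        for obstacle_class, probability in trajectory["collisions"]
    )

def pick_trajectory(candidates):
    return min(candidates, key=trajectory_cost)

# The child-vs-van dilemma above, as the planner would see it:
candidates = [
    {"name": "brake_straight", "collisions": [("human", 0.9)]},
    {"name": "swerve_left", "collisions": [("occupied_vehicle", 0.8)]},
]
print(pick_trajectory(candidates)["name"])  # -> swerve_left
```

Note how the entire ethical decision collapses into the choice of weights: whoever sets those numbers, whether the engineer, the owner, or a regulator, is effectively answering the workshop’s dilemma in advance.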

Asking The General Public

Study results reported by Scientific American showed that members of the public generally preferred that autonomous vehicles sacrifice themselves and their passengers rather than injure a pedestrian; however, those same participants preferred to ride in an autonomous vehicle that would prioritize their own safety. A second consideration was whether regulation should impede or delay the rollout of driverless cars, which could prevent many thousands of fatalities annually, over the very rare deaths the cars themselves might cause.

Christopher Hart, chairman of the National Transportation Safety Board, believes that federal regulations will be required to set the basic morals of autonomous vehicles, as well as safety standards for how reliable they must be. Stringent safety standards will likely have to be in place to satisfy future providers of car insurance.

Post by http://www.Netquote.com

If you have any questions, please ask below!