
The Brains Behind A Self-Driving Car

Cars have seen significant advances in safety over the past few decades. Manufacturers now include impact-absorbing exterior panels, deformable front ends, and reinforced bumpers to protect the passengers inside. More recently, as cars have become more sophisticated, artificial intelligence (AI) has slowly been incorporated into our vehicles.

For many who grew up watching The Jetsons or Meet the Robinsons, it looks like our favorite futuristic fantasy worlds are just around the corner. However, the logistics behind a completely automated self-driving car raise real conflicts between how a human brain works and how computer code works.

So, how can this divide cause problems for the further advancement of automated vehicles? Let us take a look at the science.

The Human Brain vs. Artificial Intelligence

Currently, AI is no match for the human brain. Humans possess several abilities that computers simply cannot replicate yet, specifically logic, sensing, and decision-making.

While computers can “learn,” they get their education in a very different way from humans. One common way to explain how AI learns is through two bots: a teacher and a builder. The builder builds several bots, each running slightly different code that tells it what to do. The bots then go to the teacher bot for testing. Those that fail are discarded, and those that succeed go back to the builder bot to be reworked. This cycle repeats over many rounds, and it is roughly how photo-recognition software, social media algorithms, and several other computer systems are trained.
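To make the teacher-and-builder analogy concrete, here is a minimal, purely illustrative sketch in Python of that build-test-discard-rework loop. Everything in it, including the bots, the brake_threshold setting, and the scoring rule, is invented for this example and does not come from any real recognition or driving system.

```python
import random

# Purely illustrative: a toy "builder and teacher" loop in the spirit of the
# analogy above. The bots, the brake_threshold setting, and the scoring rule
# are all invented for this sketch.

def build_bot():
    """The builder creates a bot: here, just a random braking threshold."""
    return {"brake_threshold": random.uniform(0.0, 1.0)}

def test_bot(bot):
    """The teacher scores a bot. Pretend the ideal threshold is 0.7;
    bots closer to it score higher."""
    return 1.0 - abs(bot["brake_threshold"] - 0.7)

def rework_bot(bot):
    """Surviving bots are reworked by nudging their settings slightly."""
    return {"brake_threshold": bot["brake_threshold"] + random.uniform(-0.05, 0.05)}

population = [build_bot() for _ in range(20)]

for round_number in range(50):
    # The teacher ranks every bot; the weaker half is discarded.
    population.sort(key=test_bot, reverse=True)
    survivors = population[:10]
    # The builder reworks copies of the survivors to refill the population.
    population = survivors + [rework_bot(random.choice(survivors)) for _ in range(10)]

best = max(population, key=test_bot)
print(f"Best braking threshold after 50 rounds: {best['brake_threshold']:.3f}")
```

The point of the sketch is only the shape of the loop: nothing in it "understands" braking; the process simply keeps whatever happened to score well and reworks it again.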

This process allows computers to “learn,” but there are certain things a machine simply cannot be taught. AI cannot use logic to deduce conclusions or make rational decisions the way a person can. Because humans have these abilities, we can take in the information around us and determine the best course of action.

This ability to reason is fueled by our ability to gather information and use it to make educated decisions. Humans also have an advantage when it comes to our senses. When we are side-swiped by another car, we see and hear it whizz by. When there is a gas leak in the vehicle, we can smell that something is wrong and pull over.

Computers, by their nature, cannot feel (at least, not yet). While cameras can be installed around the vehicle, a light scratch or fender bender may not register with the computer, and the car may simply keep driving.

This is especially frightening when a living thing is hurt, such as a small animal or a child. If the car registers that life as nothing more than a pothole or speed bump, it may continue driving without ever registering the accident.

Along with better senses, humans have decision-making powers that AI systems simply do not possess. Computers function on a series of commands – if this happens, then do this, and so on. Because AI lacks common sense, it has no concept of “cause and effect,” which is essential when driving on the road.
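For illustration only, here is what that kind of if-then driving logic might look like in Python. Every condition, threshold, and action below is a hypothetical example invented for this sketch, not code from any real vehicle.

```python
# Purely illustrative: a "car" that can only follow pre-written if-then rules.
# All conditions, thresholds, and actions are hypothetical examples.

def decide_action(obstacle_ahead: bool, distance_m: float, speed_kmh: float) -> str:
    if obstacle_ahead and distance_m < 10:
        return "emergency_brake"
    if obstacle_ahead and distance_m < 30:
        return "slow_down"
    if speed_kmh > 100:
        return "ease_off_throttle"
    # Anything the programmers did not anticipate falls through to the default:
    # the car has no way to reason about a genuinely new situation.
    return "maintain_speed"

print(decide_action(obstacle_ahead=True, distance_m=25.0, speed_kmh=60.0))  # slow_down
```

The car never weighs cause and effect; it only matches its current readings against rules someone wrote in advance.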

What Does This Mean?

The current state of AI means that fully self-driving cars are still in our future, waiting on further advances in computer science. Computers may eventually catch up to our level of logical deduction, sensing, and decision-making. At technology’s current state, however, a fully self-driving car would be hazardous.

First, the car cannot truly make a decision of its own; it can only follow a series of if-then statements that encode what the humans who programmed it want the vehicle to do. And because there is no moral absolute, it is hard to decide what programmers should tell cars to do.

This is beautifully illustrated by Moral Machine, a website created by the Scalable Cooperation group at the MIT Media Lab. The website asks the user a series of questions about what decisions an automated car should make when human life is at stake. The wide variety of answers shows how nearly impossible it is for programmers to properly encode morals into a self-driving vehicle.

As for sensing, technology can only go so far. Google has explored how to give computers a sense of smell, but even reliable visual sensing is beyond today’s technology. While cameras can be mounted all over the outside of the car, it is unclear how a computer would determine the severity of a crash. The number of possibilities is simply too vast for the computer to deduce the proper response.

“Roadway safety needs to be our top priority when it comes to automated vehicles,” says Jan Dils, founder of Jan Dils Attorneys at Law. “If that means self-driving cars need to be off the road for a few more years, so be it. That is what needs to happen to protect lives.”

It will likely take another decade for self-driving cars to be a common sight. It is only a matter of time before our roadways become automated, and we need to be prepared for what a technology-fueled future will bring.
