Self-driving cars face plenty of challenges, from inclement weather to sensor malfunctions. But one hurdle stands larger than the rest. Overcoming it will be critical to bringing autonomous vehicles into the mainstream, but the tech could take well over a decade to reach that point.
Fleshy Sticking Point
Let’s just cut to the chase: humans are the single biggest challenge to autonomous cars. We’re complicated creatures, sometimes unpredictable, and often irrational. This doesn’t always square well with machines doing their methodical, logical routines.
When I’ve used cars with radar cruise control, I’ve noticed a phenomenon: the car doesn’t follow the next vehicle in the lane nearly as closely as most human drivers would. This is a safe way to operate a car, but the problem is that other drivers consistently cut in front of me. My vehicle slows down again and again, forever getting pushed around by people who take advantage of the wide berth the tech maintains between cars. In thick rush-hour traffic, I’ve found that having radar cruise control turned on is a tremendous disadvantage. Any level of autonomous tech seems vulnerable to aggressive driving, and that’s a real problem.
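The behavior described above can be sketched in a few lines. This is a minimal, hypothetical illustration of a constant time-gap following policy of the kind radar cruise control systems use; the function name, gains, and numbers are assumptions for illustration, not any automaker's actual control law.

```python
# Minimal sketch of a constant time-gap following policy, the rule behind
# radar cruise control. All names and gains here are illustrative assumptions.

def acc_speed_command(ego_speed, gap, headway=2.0, set_speed=30.0, k=0.5):
    """Return a new speed command (m/s) given the gap (m) to the car ahead.

    desired_gap grows with speed (classic constant time-gap rule), and the
    command nudges speed up or down proportionally to the gap error, capped
    at the driver's set speed.
    """
    desired_gap = headway * ego_speed
    correction = k * (gap - desired_gap)
    return max(0.0, min(set_speed, ego_speed + correction))

# Cruising with the desired gap, the command holds steady; when someone
# cuts in and the gap suddenly shrinks, the command drops sharply.
cruising = acc_speed_command(ego_speed=30.0, gap=60.0)  # gap == desired gap
cut_in = acc_speed_command(ego_speed=30.0, gap=25.0)    # someone cut in
```

Every cut-in resets the gap below target, so the controller sheds speed each time, which is exactly the "slow down again and again" experience in heavy traffic.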
Quite a few proponents of autonomous tech argue this is the very reason why we should remove humans from driving completely. After all, we’re erratic, emotional, and really the source of the problem.
The thing is, even if humans didn’t drive at all, humans would still cause problems. That may sound confusing, but read on and it will make sense.
Anyone who chooses to drive an older car without autonomous tech might behave unpredictably. In other words, they would drive like a human.
Radar, cameras, and lidar aren’t adept at picking up subtle cues from another driver’s face. Humans can see someone is angry, yelling, or even stoic as they plow through a four-way intersection out of turn. A computer at this point struggles to do the same.
Sure, we could pass legislation outlawing human driving, but that has some tricky legal consequences I’ll leave to the lawyers to explain. We might instead choose to socially demonize people who take the wheel into their own hands. After all, they’re endangering everyone else, correct? There are consequences for that choice as well. Like it or not, there will always be people who are suspicious of autonomous tech, especially when it malfunctions. No technology is infallible.
Pedestrians and Cyclists
You’ll never get rid of pedestrians and cyclists sharing the road with cars, unless the machines do truly take over in every way.
A human driver, for all their irrational thoughts, can often interpret the intentions of another human with surprising accuracy. They know if a child drops a ball that bounces into the road, chances are high the child will dart out after it. Humans can often tell if a cyclist is looking over his shoulder in preparation to turn or just out of curiosity.
Until machines can predict the many unpredictable actions of humans on bicycles or on foot, they won’t be able to avoid all accidents. Obviously, human drivers aren’t flawless at this, either, but many unabashed autonomous-car disciples claim the tech will result in zero fatalities. That seems unlikely, to say the least.
Have you ever taught a teenager to drive? I’ve had the privilege of riding with nieces and nephews learning, and one of the most harrowing parts is how they don’t know how to stop without giving everyone in the car whiplash. That slow application of the brake pedal, leading to a smooth stop, requires looking far ahead, predicting what’s going to happen, and reacting early.
Robots struggle just like teenagers. University of Michigan researchers concluded that passengers in autonomous cars are jolted by as much as 8 inches during an emergency stop. And yes, those people were wearing seatbelts, so that’s quite a bit of movement.
Researchers have found autonomous cars tend to brake harder than humans. They’re working on better seatbelt pre-tensioning or warning sounds before a hard emergency stop. In some ways, this is a case of the tech being a little too good.
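The braking trade-off above comes down to simple kinematics: reacting early lets the car stop over a longer distance, which lowers the deceleration passengers feel. The numbers below are illustrative assumptions, not figures from the Michigan study.

```python
# Back-of-the-envelope kinematics for the braking trade-off: stopping from
# the same speed over a shorter distance demands harder deceleration.
# Speeds and distances are illustrative, not taken from any study.

def constant_decel(speed_mps, stop_dist_m):
    """Deceleration (m/s^2) needed to stop from speed_mps within stop_dist_m,
    from the kinematic relation v^2 = 2 * a * d."""
    return speed_mps ** 2 / (2 * stop_dist_m)

speed = 15.0  # roughly 34 mph
early = constant_decel(speed, stop_dist_m=60.0)  # saw it coming, braked early
late = constant_decel(speed, stop_dist_m=20.0)   # reacted at the last moment
```

Tripling the available stopping distance cuts the required deceleration to a third, which is why an experienced driver's long look ahead produces a smooth stop and a late, hard brake produces whiplash.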
Pointing out the errors in autonomous vehicles isn’t to say the technology can’t be improved. In fact, the very function of criticism should be to highlight areas where the technology needs to be strengthened. Automakers and tech companies are actively working on solutions to many of these human-centered problems, but a complete solution likely won’t be forthcoming anytime soon. Until then, we as humans need to remain engaged and cautious drivers, a duty we seem to constantly resent.
As we program our cars and move towards automation, are we making sure our self-driving cars are making ethical decisions? It’s a fast-growing dilemma in the autonomous car revolution.