Our vision of a cyberpunk future hasn’t undergone significant changes since ‘Blade Runner’ debuted in 1982. The iconic film kickstarted a cult that, like a vine winding around a tree trunk, still permeates layers of society decades later.
‘The Fifth Element’, ‘I, Robot’, ‘Ready Player One’, ‘Deus Ex’, ‘Elysium’ – pick your favorite cyberpunk title and paint the imagery in your mind. Chances are, your depiction and ours share a lot of common elements.
An expanse of densely packed skyscrapers and corporate logos lighting the night skyline.
A haze of neon lights from a cluster of signs overhead.
‘Swoosh’ and ‘Whirr’ of cars. Through the glass, no steering wheels. No human drivers. Only AI.
As of this moment, we’re a far cry from fully driverless cars. What we do have works only on major highways, demands constant vigilance from the driver, and is largely confined to countries like the US, Germany, and China. Yet inch by inch, we move forward.
Recently, Elon Musk sent social media into a frenzy by claiming that Tesla will achieve a basic form of full self-driving by 2021. Whether your knee-jerk reaction is to dismiss Elon or jump on the bandwagon, some background is necessary to appreciate the context of his claim.
For starters, what obstacles stand in the way of bringing driverless cars to reality? How insurmountable do those obstacles seem?
Heck, before we wade into those waters, how do you even define a driverless vehicle? Say your car has an auto-park feature: does that count? To what degree?
5 levels of self-driving cars explained
As it turns out, there’s no single everyday definition of ‘self-driving’, despite vehicles with self-driving features becoming widespread. Take the Waymo taxi, with its fully automated ride-hailing system: that you can call self-driving. Meanwhile, if your car lets you take your hands off the steering wheel, even occasionally, people call that self-driving too.
Where is the baseline?
From the Society of Automotive Engineers (SAE) – the folks who classify motor oil into grades based on its ‘syrup-ness’ (wow, did they drink motor oil?) – comes the solution. Their J3016 standard grades driving automation from level 0 (no automation at all) up to level 5, which leaves five levels worth talking about.
Level 1
Level 1 encompasses any car with a shred of driving intelligence, like assistance for staying in your lane. The same applies if the car touts an auto-brake feature and stops whenever a particularly rule-abiding pigeon decides to cross the road.
Yes. That happens.
Level 2
The key difference between level 1 and level 2: level 1 supports either lane assist or auto-brake, while level 2 supports both at once. Level 2 is also where Tesla is today. The main limitation of level 2 lies in where it can drive, with only major highways available.
Level 3
From level 3 onwards, we slowly encroach upon the realm of cyberpunk. At this level, you don’t have to care whether it’s a highway or a city street. You punch a destination into the panel, and the on-board computer performs its magic.
The amusing bit about level 3 is that it’s billed as fully self-driving but doesn’t quite live up to the claim. Safe driving extends beyond lane assist, self-acceleration, and self-braking. Merging onto a busy highway requires the immaculate execution of a complex, very sophisticated maneuver built on timing and intuition. Just accelerating, decelerating, or turning won’t cut it, and that’s all level 3 can do. When it runs into a situation it can’t handle, it expects the human to take over.
So, if your ‘fully self-driving’ car hands control back to you mid-Netflix-binge and you don’t look up in time, that might be your last Netflix binge ever.
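Even the ‘simple’ part of that merge, judging whether a gap in traffic is big enough to slot into, hides some math. Here’s a back-of-the-envelope sketch (a toy model of our own making, nowhere near a production motion planner):

```python
# Toy merge check: is a gap in traffic long enough to slot into,
# given our current speed and how hard we can accelerate?
# (Our own simplification, not any carmaker's actual planner.)
def gap_is_safe(gap_m: float, traffic_speed: float, my_speed: float,
                accel: float = 2.0, headway_s: float = 1.5) -> bool:
    """Demand a comfortable time headway to the cars ahead and behind
    once we've matched traffic speed, plus the room we cede while
    still accelerating up to that speed."""
    t_match = max(0.0, (traffic_speed - my_speed) / accel)   # seconds to match speed
    closing_loss = (traffic_speed - my_speed) * t_match / 2  # metres lost meanwhile
    needed = 2 * headway_s * traffic_speed + closing_loss
    return gap_m >= needed

print(gap_is_safe(gap_m=120, traffic_speed=30.0, my_speed=25.0))  # True: merge
print(gap_is_safe(gap_m=60,  traffic_speed=30.0, my_speed=20.0))  # False: wait
```

And that’s before accounting for other drivers speeding up, slowing down, or changing lanes mid-merge, which is exactly the ‘intuition’ part that trips level 3 up.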
Level 4
Level 4 brings us closer to a true driverless state. Unlike level 3, level 4 doesn’t dump an unsolvable problem back onto the human; it defuses the situation on its own, usually by pulling over safely and waiting it out.
It’s still tied to mapped urban and highway landscapes, though. Take a plunge into the wilderness, or try to tackle dunes, and the tech will struggle.
Level 5
Throw a fanfare, we’ve finally reached level 5!
The ultimate, and truest driverless state!
The hallmarks of level 5 are:
The AI is always in complete control. No human intervention possible. AI does everything a human can do, only better. Heck, level 5 concept cars don’t even have a steering wheel.
Yep, you’re just gonna have to trust the AI to get you safely to work. Just try not to piss it off…by calling it “it”.
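If you prefer your taxonomies in code, here’s the whole ladder condensed into a toy lookup. It’s our paraphrase of the levels above, not the official SAE wording:

```python
# The five levels above, condensed into code (our paraphrase,
# not the official SAE J3016 definitions).
from enum import IntEnum

class AutonomyLevel(IntEnum):
    DRIVER_ASSIST = 1  # lane assist OR auto-brake
    PARTIAL = 2        # lane assist AND auto-brake; major highways only
    CONDITIONAL = 3    # drives itself, but may hand control back to you
    HIGH = 4           # defuses its own failures, e.g. pulls over safely
    FULL = 5           # no steering wheel, no human fallback, ever

def needs_human_attention(level: AutonomyLevel) -> bool:
    """Below level 4, a human must stay ready to take over."""
    return level < AutonomyLevel.HIGH

print(needs_human_attention(AutonomyLevel.PARTIAL))  # True: today's Tesla
print(needs_human_attention(AutonomyLevel.FULL))     # False: the cyberpunk dream
```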
From level 2 to level 5 – what could go wrong?
News flash: Google self-driving car hits public bus in California
Your response:
A) “The car should’ve Googled bus schedules.”
B) “I’m just glad there isn’t a self-driving car run by Microsoft because every day there’d be multiple crashes.” Badum ts.
C) “The bus was filled with a lot of Yahoos, who said they heard a loud Bing.”
D) “Blame it on Apple.”
To reiterate, we’re still at level 2 of self-driving vehicles. Disbelief towards Elon Musk’s claim of reaching level 5 by the end of 2021, therefore, makes sense. On the other hand, Elon Musk has a track record of proving skeptics wrong, the kind of record that suggests armchair tech gurus should spend less time yapping on the web and more time studying the tech. Whom should you believe?
Let’s examine the main roadblocks towards self-driving cars so that you can decide for yourself.
Roadblock No. 1: Software.
The mainstream approach to self-driving relies on a machine learning technique called the deep neural network. Think of a spider web.
All those anchor points, with silk strands spanning everything in between, form the main construct of the web. The more spider silk you feed in, the more beautiful and intricate the web becomes. Replace spider silk with real-world data, and you get the gist of it.
Neural networks have brought us this far, but they suffer from inherent limitations. They fill in the blanks between the data points they’ve seen in superb fashion, but they handle the edges of their experience poorly. Particularly brand-new edge cases: situations unlike anything in their training data.
For example, in 2016 a Tesla on Autopilot crashed into a truck it had failed to detect against a brightly-lit sky. Another incident involved a Tesla driving into a concrete barrier, killing the driver. Those outliers do eventually feed back into the training data, but real humans pay a steep price for the lesson.
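To see that failure mode in miniature, here’s a self-contained toy: a tiny neural network in plain NumPy (nothing like an actual perception stack) that nails inputs inside the range it trained on, while anything it outputs beyond that range is pure guesswork:

```python
# A toy illustration of neural nets interpolating well inside their
# training data but extrapolating poorly outside it.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the network only ever sees inputs between 0 and 1.
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2 * np.pi * X)  # the "real world" pattern it must learn

# One hidden layer, trained with plain full-batch gradient descent.
W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    err = (h @ W2 + b2) - y                # prediction error
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)         # backprop through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(x: float) -> float:
    return (np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2).item()

# Inside the training range: the prediction lands near the truth.
print(predict(0.25), "vs true", np.sin(2 * np.pi * 0.25))
# Far outside it, an "edge case" the net never saw: whatever it
# prints here is extrapolated guesswork.
print(predict(2.2), "vs true", np.sin(2 * np.pi * 2.2))
```

Swap the sine wave for ‘white truck against a bright sky’ and the stakes become obvious.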
On a related note, software is susceptible to hacking. If somebody compromises Tesla’s servers, they gain access to your car. With the touch of a button, they could send it off a cliff. Or they might mess with your navigation software.
How would you feel if you set a five-star beach resort as your destination, but your car ejected you beside a dirty, oily, lichen-infested river with no human settlement in sight?
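Part of the defense is making sure the car flat-out ignores commands that weren’t cryptographically signed. Here’s a minimal sketch of the idea using Python’s standard hmac module; the protocol, the key, and the command names are our own invention, not how Tesla actually secures its fleet:

```python
# Minimal sketch of signed remote commands: the car rejects any
# instruction that wasn't authorised with the shared secret key.
# (Toy protocol of our own; not Tesla's actual security scheme.)
import hashlib
import hmac

SECRET_KEY = b"factory-provisioned-secret"  # hypothetical per-car key

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Server side: attach an HMAC tag to an outgoing command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Car side: accept the command only if the tag checks out."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

legit = b"navigate:beach_resort"
tag = sign_command(legit)
print(verify_command(legit, tag))    # True: command accepted

forged = b"navigate:oily_river"
print(verify_command(forged, tag))   # False: tag doesn't match, rejected
```

Of course, if the attacker owns the servers that hold the keys, signing alone won’t save you, which is why the race has no finish line.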
Verdict: we don’t claim to be tech gurus, much less VPs of engineering at Tesla. The threat of hacking will always exist; it’s an ongoing race with no finish line. Meanwhile, neural networks, though poor at spotting outliers, can at least learn from them. Thus, in Elon we trust.
Roadblock No. 2: Legal.
If an AI causes a road accident, who should foot the bill? The human driver, for not staying alert, or the AI company, for its software bug? And what if a single bug causes thousands of accidents by the time engineers fix it? You wouldn’t want to be in that CEO’s shoes, that’s for sure.
Countries across the globe still struggle to hold Amazon accountable for tax avoidance, and it’s been more than 20 years since the online marketplace’s debut. Go figure how long it will take for self-driving cars.
Verdict: wait for legal and insurance professionals to devise a solution tailored to driverless vehicles (“You’re a mediocre driver, but who cares?” insurance).
Driverless cars: The Final Verdict
You don’t need us to hand down the final verdict. Your version of the truth is your own, and two different truths need not dispute each other.
We’re curious about your opinion on the matter. Should we hold our breath for a breakthrough in driverless AI in 2021 or not? Leave us a comment!
And in case you wanna see what could go wrong with driverless cars, check this out.