Sunday, April 23, 2017





A MUST READ BY EE Times! 




5 Unresolved Issues Facing Robo-cars

Tech is key to 'self-driving' — or is it?
4/13/2017



MADISON, Wis. – Almost a year ago, Elon Musk famously proclaimed: “I really consider autonomous driving a solved problem.”
Given all the advances in artificial intelligence and a rash of announcements about business and technology firms partnering to develop robo-cars, the self-driving promise seems self-evident.
Tech companies and carmakers are sticking to self-imposed deadlines to roll out their first Level 4/Level 5 autonomous cars sometime between 2019 and 2021. Nobody is publicly backpedaling — at least not yet.
The business and investment community understands — and encourages — these business aspirations for autonomous vehicles.
Under the hood, though, the engineering community is staring at multiple problems for which they don’t yet have technological solutions.
At a recent session of MIT’s “Brains, Minds and Machines” seminar series, Amnon Shashua, co-founder and CTO of Mobileye, spoke bluntly: “When people are talking about autonomous cars being just around the corner, they don’t know what they are talking about.”
But Shashua is no pessimist. As a business executive, Shashua said, “We are not waiting for scientific revolution, which could take 50 years. We are only waiting for technological revolution.”
Open questions
Given these parameters, what open questions still need a technological revolution to be answered?
Consumers have already seen pod cars scooting around Mountain View, Calif. An Uber car — in autonomous driving mode — recently collided with a left-turning SUV driven by a human in Arizona.
It’s time to separate the “science-project” (as Shashua calls it) robotic car — doing a YouTube demo on a quiet street — from the commercially viable autonomous vehicle that carmakers need but don’t have.
As EE Times listened to Mobileye’s CTO, as well as several scholars, numerous industry analysts and an entrepreneur working on “perception” in robo-cars, the list of “open issues” hobbling the autonomous vehicle industry has gotten longer.
Some issues are closely related, but in broad strokes, we can squeeze them into five bins: 1) autonomous cars’ driving behavior (negotiating in dense traffic), 2) deep “reinforcement learning” and edge cases, 3) testing and validation (can we verify the safety of AI-driven cars?), 4) security and anti-tampering (preventing a driverless car from getting hacked), and 5) the more philosophical but important question of “how good is good enough” (because autonomous cars won’t be perfect).
Let’s break it down.

1. Driving behavior
Mobileye’s Shashua calls it “driving policy.” He means autonomous cars have to “negotiate in dense traffic.” Mike Demler, senior analyst at The Linley Group, agrees but prefers the term “driving behaviors.”
“By behaviors, I mean the innumerable aspects of safe driving that we all perform every day, but that can’t be described by rules in a DMV driver’s manual,” Demler noted.
These aren’t even so-called “corner cases,” he noted.  “As we saw in the recent Uber accident, there’s more to safe driving than keeping in your lane and observing the speed limit. A human knows (or should know) to proceed very carefully into an intersection when the light is turning yellow, or if their field-of-view isn’t clear. You may not be breaking any rules, but proceeding ahead may still not be the right thing to do.”
Uber accident scene in Tempe, Ariz. (Source: Local NBC News)
Demler added, “Other examples of driving behaviors are the common interactions between drivers, for good or for bad. What do you do when you’re being tailgated? When should you slow down to let someone squeeze into your lane? How about the everyday challenge of finding a parking spot at Costco or a crowded shopping mall, without getting into a fight?”
He said, “A level 4 or 5 vehicle will need to do that too. All the demo videos show a single car in free-flowing traffic, which is comparatively easy.”
Where driving policy or driving behavior matters is when driverless cars must drive places where there are no written rules, when “feel,” or instinct, supersedes digital “book-learning.”

2. Deep ‘reinforcement learning’
The behavioral problems Demler points out are mostly software issues. But this is also precisely where deep learning starts and troubles begin.  
In this context, the issue is not “deep learning” for object detection. In computer vision, for example, such deep learning typically teaches machines to put a bounding box around objects on the street. This has proven effective.
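For illustration, here is a minimal sketch of that kind of object detection, using a generic pretrained detector from the torchvision library rather than anything Mobileye actually ships; the image file name and the score threshold are assumptions made for the example.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a generic pretrained detector (NOT Mobileye's production stack).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("street_scene.jpg")  # hypothetical camera frame

with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Each detection is a box [x1, y1, x2, y2] in pixels plus a class label and a score.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score.item() > 0.8:  # keep only confident detections
        print(f"class={label.item()} score={score.item():.2f} box={box.tolist()}")
```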
But in driving policy, the key is deep “reinforcement learning.”
Mobileye’s Shashua likes to give the example of a double-lane merge -- where there are no right-of-way rules. In this area, “we’d like to use machine learning,” Shashua said, “so that machines can learn by observing data rather than programming by rules.” 
To teach robotic cars how to drive in a no-rules situation, machine learning is an ideal tool. “It is much easier to observe and collect data than understand the underlying rules of the problem you wish to solve,” Shashua said. 
But this is also where machine learning exposes its vulnerability. 
“Machine learning is based on the statistics of the data and ability to sift through the data,” said Shashua, but it can also “fail on corner cases.”
In sum, to teach robotic cars driving policy, machine learning needs to collect “rare events” (“accidents”), which is no easy feat.
Sean Welsh, doctoral candidate in Robot Ethics at the University of Canterbury, explained reinforcement learning as follows in his recently published article:
When it comes to deep reinforcement learning, this relies on “value functions” to evaluate states that result from the application of policies.
A value function is a number that evaluates a state. In chess, a strong opening move by white, such as pawn e2 to e4, attracts a high value. A weak opening, such as pawn a2 to a3, attracts a low one.
The value function can be like “ouch” for computers. Reinforcement learning gets its name from positive and negative reinforcement in psychology.
Referring to Uber’s Arizona accident, Welsh wrote: “Until the Uber vehicle hits something and the value function of the deep learning records the digital equivalent, following that policy led to a bad state — on its side, smashed up and facing the wrong way. Ouch!” An inexperienced Uber control system might not be able to appropriately quantify the risk in time.
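To make Welsh’s point concrete, here is a toy sketch of a value function being updated by positive and negative reinforcement. The states, rewards and learning rate are invented for illustration; real driving-policy networks estimate values over continuous sensor states, but the imbalance between common and rare outcomes is the same.

```python
# Value estimates for a handful of abstract driving states (all invented).
V = {"merged_smoothly": 0.0, "blocked_traffic": 0.0, "collision": 0.0}

REWARD = {
    "merged_smoothly": +1.0,   # good outcome reinforces the policy
    "blocked_traffic": -0.1,   # mildly bad: hesitated and clogged the lane
    "collision": -100.0,       # the digital "ouch" Welsh describes
}

alpha = 0.1  # learning rate

def update(state):
    """Nudge the value estimate of a state toward its observed reward."""
    V[state] += alpha * (REWARD[state] - V[state])

# The catch: collisions are rare events, so the learner may see thousands of
# smooth merges before it ever experiences the outcome that matters most.
for outcome in ["merged_smoothly"] * 1000 + ["collision"]:
    update(outcome)

print(V)  # the collision estimate has moved only a fraction of the way to -100
```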
Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), believes AI has the ability to train driverless cars to behave more like people who sometimes, counterintuitively, use “a little bit of aggression to enable an opening to merge into, for example,” he said.
But data will remain “a big gap,” Magney said. “For AI to be acceptable you need enormous amounts of data to train your behavior models. You can go about collecting as much as you can but you are still not going to be able to test for all edge cases, nor would it be feasible in a safe way. So you have to turn to simulation data in this case.”
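A rough sketch of what Magney’s simulation point implies in practice: synthetic scenarios can be sampled far faster, and more safely, than a fleet can encounter them on the road. The scenario fields and parameter ranges below are invented for illustration; real driving simulators expose far richer interfaces.

```python
import itertools
import random

WEATHER = ["clear", "rain", "fog", "snow", "low_sun"]
ACTORS = ["cut_in_vehicle", "jaywalking_pedestrian", "stalled_truck", "cyclist"]
SPEED_LIMITS = [25, 45, 65]  # mph

def random_scenario(rng):
    """Sample one synthetic driving scenario to feed a behavior model."""
    return {
        "weather": rng.choice(WEATHER),
        "actor": rng.choice(ACTORS),
        "speed_limit": rng.choice(SPEED_LIMITS),
        "actor_distance_m": round(rng.uniform(5.0, 80.0), 1),
        "road_friction": round(rng.uniform(0.3, 1.0), 2),  # wet or icy surfaces
    }

rng = random.Random(42)
scenarios = [random_scenario(rng) for _ in range(10_000)]

# Even this crude grid hints at why real-world collection alone cannot cover it:
# the combinations multiply far faster than fleet miles accumulate.
print(len(list(itertools.product(WEATHER, ACTORS, SPEED_LIMITS))), "coarse combinations")
print(scenarios[0])
```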
Indeed, as Shashua acknowledged, flushing out corner cases can get even more unwieldy when you throw in nuances in the “culture of negotiation” that vary among cities and countries. Every new version of driving policy in different locations and different cultures requires new data collection.
The lack of global solutions is one of the open issues pointed out by Ian Riches, director of the Global Automotive Practice at Strategy Analytics.
The key question comes down to “how to guarantee safety of Machine Learning-based technology,” noted Shashua. In his mind, this is “mostly an open problem.” This is “the Achilles heel of the entire industry,” he warned.

3. Test and validation
The test and validation issue (for AI-driven cars) didn’t come up in Shashua’s lecture, but it’s on the mind of Forrest Iandola, CEO of DeepScale, a Mountain View-based startup founded in 2015.
DeepScale focuses on accelerating “perception” technology that lets vehicles understand what’s going on around them in real time.

Forrest Iandola
In academia, Iandola has already made a notable mark by developing SqueezeNet, a deep neural network (DNN) model, together with researchers at UC Berkeley — some of whom have now joined DeepScale. SqueezeNet is not designed to be applied directly to automated driving problems, Iandola says, but the team developed it “with the goal of making the model as small as possible while preserving reasonable accuracy on a computer vision dataset.”
Iandola believes testing and quality assurance in Level 4 autonomous cars are paramount. “For Level 2 ADAS cars, functional safety testing is probably OK. But for Level 4 vehicles, which need to be able to drive on their own, you need automated testing that goes beyond ISO 26262.”
ISO 26262 is good for testing engine controllers, but not for sensors that must see and constantly respond to an outside world full of variables, he explained. “Miles and miles of real-world driving testing aren’t a good measurement, either, for quality assurance,” he said, because such testing can’t possibly cover all the difficult physical conditions – weather, terrain, traffic – involving an infinity of erratic human driver behaviors.
In his opinion, that’s where good simulation-based testing becomes essential.
Iandola noted, however, real-world testing has value, “if we can devise a system under which every module inside every autonomous vehicle can consistently send ‘feedback’ with a certain confidence score” to the cloud.
Rather than waiting for autonomous vehicles to crash, the cars themselves can self-check under such a system, he noted. Each module would ask itself if it was a little confused when it handed off to a human driver, or ask how much confidence it had at that moment. It would send that data consistently. This sort of self-examination helps data annotators identify the hardest cases and build quality assurance models, he explained.
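A minimal sketch of the kind of per-module confidence feedback Iandola describes; the module names, threshold and record fields here are hypothetical, and a real vehicle would queue and upload such records to a fleet cloud rather than just printing them.

```python
import json
import time

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for flagging a frame for review

def feedback_record(module: str, confidence: float, handed_off: bool) -> dict:
    """Build one self-check record a module would send back to the cloud."""
    return {
        "timestamp": time.time(),
        "module": module,          # e.g. "lane_detection", "pedestrian_perception"
        "confidence": confidence,  # the module's own estimate for the current frame
        "handed_off": handed_off,  # did the system hand control back to a human?
        # Low confidence or a handoff marks this moment as a hard case for annotators.
        "flag_for_annotation": confidence < CONFIDENCE_THRESHOLD or handed_off,
    }

print(json.dumps(feedback_record("pedestrian_perception", 0.41, handed_off=True), indent=2))
```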
Considering the myriad tests and quality assurances that must be carried out in automated vehicles, Iandola also suggested that the industry might need to develop an autonomous vehicle platform that’s broken down into “a few bite-size chunks.” A centralized robo-car platform has advantages, but pinpointing where software glitches or hardware problems crop up is not one of them.
By no means is Iandola advocating that the industry go back to a vehicle that must handle more than 100 ECU modules supplied by different vendors. He’s simply seeking a more logical way to architect the platform in “a few chunks.”
Magney also sees testing and validation as a big issue. He said, “We are at the early stages of how to solve the testing and validation problem for Level 3+.”
How we validate safety in an AI-driven car is “still an unsolved problem,” he noted. “You must define the process and a framework in which you can test the operation of the vehicle and this has not been done yet.”
Where are the requirements for Machine Learning?
V-Model (Source: Philip Koopman's presentation)
Furthermore, Magney stressed, “You cannot prove why an AI module failed under traditional functional safety practices. You will need a comprehensive testing and validation process that tests the vehicle’s performance in real life and/or simulation.”

4. Security and anti-tampering
All regular cars driven by humans today are insecure to start with. The problem gets exponentially worse when cars become driverless. 
In a driverless car, every module is controlled by a computer with no human in the loop; the computer is the driver. A backseat passenger is at the mercy of any hardware and software problems the computer encounters. And with nobody behind the wheel to ask, “Hey, what’s going on?”, hacking becomes much easier.
A recent Wired article quoted Charlie Miller – one of the two white hats who remotely hacked a Jeep Cherokee via its Internet connection. He pointed out, “A driverless car that’s used as a taxi poses even more potential problems. In that situation, every passenger has to be considered a potential threat.”
Miller recently left Uber for a position at Chinese competitor Didi, which is just starting its own autonomous ridesharing project.
In theory, a rogue hacker entering a driverless taxi as a passenger could simply plug an Internet-connected gadget into the car’s OBD2 port, “offering a remote attacker an entry point into the vehicle’s most sensitive systems,” the story said. This is not an easy threat to avert.
And then there are the low-tech threats. The Linley Group’s Demler added, “At our conference last week, someone brought up the simple case of someone holding up a stop sign to pull over a self-driving car. What would a self-driving car do to prevent a carjacking?”

5. How good is good enough?
The eternal question about self-driving cars is more philosophical than technical, but it goes to the heart of the matter: How good is good enough?

Gill Pratt
Gill Pratt, a former MIT professor who heads the year-old Toyota Research Institute, discussed how society perceives the safety of driverless cars during Toyota’s press conference at the Consumer Electronics Show earlier this year.
Pratt noted that people, tolerant of human error, have come to accept the 35,000 traffic deaths every year in the United States. But then he asked if people would tolerate even half that number of deaths caused by robotic automobiles. "Emotionally, we don't think so," said Pratt. "People have zero tolerance for deaths caused by a machine.”
He was paraphrasing Isaac Asimov’s first rule of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Mobileye’s Shashua disagrees with both Pratt and Asimov. He sees the safety issue of autonomous cars as similar to that of air bags. “People know air bags save lives, but they can also kill people,” depending on a car’s timing, speed or trajectory. “It happens every year, and society has learned to live with it.” 
If autonomous cars can reduce the number of fatalities from 35,000 to 35, or even 350 -- hypothetically speaking -- Shashua believes people will learn to live with robotic cars.
Strategy Analytics’ Riches told EE Times, “No one is claiming an automated vehicle will be infallible. However, as an industry and society, we have not yet managed to come up with a robust yardstick for what level of failure is acceptable.”
Riches added, “Linked to this is the issue of verification. If/when we ever get a robust definition of an acceptable error rate, we are still a long way from the verification and simulation techniques necessary to prove a priori that a solution will perform to the necessary standard.”
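To get a feel for why proving an error rate a priori is so hard, here is a back-of-envelope calculation that is not from the article: it uses the standard statistical “rule of three” for a 95% upper confidence bound, and it assumes a target of roughly one fatality per hundred million miles, in the ballpark of the U.S. human-driver rate.

```python
# Assumption: target fatality rate of one per 100 million miles.
target_fatality_rate = 1 / 100_000_000

# Rule of three: after N failure-free trials, the 95% upper confidence bound
# on the failure rate is roughly 3 / N. So to bound the rate at the target
# level, you need about 3 / target_rate failure-free miles -- and every
# significant software update arguably resets the clock.
miles_needed = 3 / target_fatality_rate
print(f"{miles_needed:,.0f} failure-free miles")  # 300,000,000 miles
```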
In a recent paper on autonomous vehicle safety submitted to IEEE Intelligent Transportation Systems Magazine, co-authors Philip Koopman of Carnegie Mellon University and Michael Wagner of Edge Case Research LLC detailed the challenge of validating machine-learning based systems. They called on industry and academia to undertake the long-term task of “updating accepted practices to create an end-to-end design and validation process.” This must, they wrote, address safety concerns “in a way that is acceptable in terms of cost, risk, and ethical considerations.”

— Junko Yoshida, Chief International Correspondent, EE Times
