Whatever carmakers’ marketing may claim, scientists affirm that current self-driving technologies are still not safe enough. Various research and development projects continue to improve autonomous driving technology, and one of them claims to have identified a key element for making it more reliable and safer.
Heng “Hank” Yang, a graduate student at the Massachusetts Institute of Technology (MIT), is working with Luca Carlone, the Leonardo Career Development Associate Professor in Engineering, on something called “certifiable perception,” a project that aims to develop algorithms that can verify the quality of a robot’s perception.
The premise is that robotic systems designed to interpret their surroundings (such as those used in driverless cars) rely on algorithms to make estimates, but there is no way of establishing whether those estimates are correct. This is why a “certification” would be helpful.
A self-driving car takes snapshots of an approaching car, for example, and then tries to “match” every key point in that image with the labeled key points in a 3D car model, using a machine-learning model called a neural network. The algorithm developed by Yang’s team searches for the correct match: if a candidate match is wrong, it knows how to keep trying, and when no better solution can be found, it issues a certificate.
Ultimately, the goal is for the perception system to “know” when it has failed and, when this happens, alert the driver to take over the steering wheel.
The 3D model would also allow driverless cars to identify car shapes that are not present in their library of car models, by morphing a model until it matches the 2D snapshot.
The algorithm from Yang’s team has already won the Best Paper Award in Robot Vision at the International Conference on Robotics and Automation (ICRA) and was a Best Paper Award finalist at the Robotics: Science and Systems (RSS) conference.
The young researcher says that next-generation algorithms could be the key to achieving “trustworthy autonomy” for vehicles.