Improving autonomous driving

Driverless cars use computer vision to capture a scene’s depth and motion to navigate safely and accurately. (Image: iStock)

Self-driving cars must accurately assess and navigate a rapidly changing environment. Computer vision, which uses computation to extract information from imagery, is an important aspect of autonomous driving.

Nathan Jacobs, a professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, and a team of graduate students developed a joint learning framework to optimize stereo matching and optical flow for autonomous driving. Stereo matching generates maps of disparities between two images and is a critical step in estimating depth for obstacle avoidance. Optical flow estimates per-pixel motion between video frames and is useful for determining how objects are moving, as well as how the camera is moving relative to them.
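
The article does not detail the framework itself, but the two tasks it couples are standard computer vision problems. As a rough, purely illustrative sketch, the snippet below computes disparity and optical flow with classical OpenCV routines (semi-global block matching and the Farneback method), not the authors' learned model; the file names and camera parameters are placeholder assumptions.

```python
# Illustrative only: classical OpenCV baselines for the two tasks described
# above, NOT the learned joint framework from the paper. File names and
# camera parameters are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)            # left image of a stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)          # right image, same instant
left_next = cv2.imread("left_t1.png", cv2.IMREAD_GRAYSCALE)    # left image, next time step

# Stereo matching: per-pixel disparity between the left and right views.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point values

# Depth follows from disparity: depth = focal_length * baseline / disparity.
focal_px, baseline_m = 700.0, 0.54                # assumed camera focal length and rig baseline
depth = focal_px * baseline_m / np.maximum(disparity, 1e-6)  # invalid/zero disparities clipped

# Optical flow: per-pixel motion between consecutive frames (Farneback method).
flow = cv2.calcOpticalFlowFarneback(left, left_next, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)  # (H, W, 2) per-pixel dx/dy

print(depth.shape, flow.shape)
```

A learned joint framework would replace both classical solvers with a network trained to produce disparity and flow together, so that the two estimates share features and stay mutually consistent.
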

The framework, which Jacobs presented Nov. 23 at the British Machine Vision Conference in Aberdeen, U.K., outperforms comparable methods that handle stereo matching and optical flow estimation as separate tasks.

Read more on the McKelvey School of Engineering website.
