New research creates framework for large-scale geospatial exploration

Comparison of search pathways using visual active search (VAS, left) and the most competitive state-of-the-art approach, greedy selection (right). The VAS framework developed by McKelvey engineers quickly learns to take advantage of visual similarities between regions.

When combating complex problems like illegal poaching and human trafficking, efficient yet broad geospatial search tools can provide critical assistance in finding and stopping the activity. A visual active search (VAS) framework for geospatial exploration developed by researchers at the McKelvey School of Engineering at Washington University in St. Louis uses a novel visual reasoning model and aerial imagery to learn how to search for objects more effectively.

The team led by Yevgeniy Vorobeychik and Nathan Jacobs, professors of computer science and engineering, aims to shift computer vision — a field typically concerned with how computers learn from visual information — toward real-world applications and impact.

The team’s approach to VAS builds on prior work by collaborator Roman Garnett, an associate professor of computer science and engineering at McKelvey Engineering. It marries active search, an area in which Garnett did pioneering research, with visual reasoning and relies on teamwork between humans and artificial intelligence.
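The active-search idea described above follows a simple loop: query a region, observe whether a target is present, and use that feedback to steer the next query. The toy sketch below illustrates the concept only; the grid, the one-dimensional "visual" feature, and the similarity-based scoring are all hypothetical stand-ins, not the authors' VAS model or Garnett's algorithms.

```python
import random

random.seed(0)

def make_grid(n=100, n_targets=10):
    """Build a toy search grid. Each region gets a scalar 'visual'
    feature; targets are placed in the highest-feature regions,
    mimicking the visual similarity between target-bearing areas."""
    features = [random.random() for _ in range(n)]
    ranked = sorted(range(n), key=lambda i: -features[i])
    targets = set(ranked[:n_targets])
    return features, targets

def active_search(features, targets, budget=20):
    """Query one region per step under a fixed budget. After each
    confirmed target, re-score the remaining regions by similarity
    to the targets found so far, so feedback guides later queries."""
    unqueried = set(range(len(features)))
    positives = []
    found = 0
    for _ in range(budget):
        if positives:
            mean_pos = sum(features[i] for i in positives) / len(positives)
            # Prefer regions that look like confirmed positives.
            score = lambda i: -abs(features[i] - mean_pos)
        else:
            # No feedback yet: fall back to the raw feature prior.
            score = lambda i: features[i]
        pick = max(unqueried, key=score)
        unqueried.remove(pick)
        if pick in targets:
            positives.append(pick)
            found += 1
    return found

features, targets = make_grid()
print(active_search(features, targets))
```

In this toy setup the adaptive scorer recovers all ten targets well within the query budget, whereas a purely greedy, non-adaptive ranking has no way to correct itself when the visual prior is misleading, which is the gap the VAS framework is designed to close.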

First author Anindya Sarkar, a doctoral student in Vorobeychik’s lab, presented the findings Jan. 6 at the Winter Conference on Applications of Computer Vision in Waikoloa, Hawaii.

Read more on the McKelvey School of Engineering website.
