When combating complex problems like illegal poaching and human trafficking, efficient yet broad geospatial search tools can provide critical assistance in detecting and stopping the activity. A visual active search (VAS) framework for geospatial exploration, developed by researchers at the McKelvey School of Engineering at Washington University in St. Louis, uses a novel visual reasoning model and aerial imagery to learn how to search for objects more effectively.
The team led by Yevgeniy Vorobeychik and Nathan Jacobs, professors of computer science and engineering, aims to shift computer vision — a field typically concerned with how computers learn from visual information — toward real-world applications and impact.
The team’s approach to VAS builds on prior work by collaborator Roman Garnett, an associate professor of computer science and engineering at McKelvey Engineering. It marries active search, an area in which Garnett did pioneering research, with visual reasoning and relies on teamwork between humans and artificial intelligence.
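The core idea of active search, adaptively choosing where to look next based on what has already been found, can be illustrated with a toy sketch. The code below is an assumption-laden illustration, not the team's method: the function name, the score-boosting update rule, and the grid setup are all invented for this example, and the real VAS framework learns its search policy from aerial imagery rather than using a hand-coded rule.

```python
# Illustrative sketch only: a greedy active-search loop over a grid of
# image regions. All names and the score-update rule are assumptions;
# the actual VAS policy is learned, not hand-coded like this.

def active_search(prior_scores, targets, budget, boost=0.5):
    """Inspect regions one at a time under a fixed search budget.

    prior_scores: dict region -> prior score that it contains a target
                  (a stand-in for a visual model's output on imagery)
    targets:      set of regions that truly contain targets (the oracle)
    budget:       number of regions we may inspect
    boost:        how much a hit raises neighboring regions' scores
    """
    scores = dict(prior_scores)
    explored, found = set(), set()
    for _ in range(budget):
        # Pick the most promising region not yet inspected.
        candidates = {r: s for r, s in scores.items() if r not in explored}
        if not candidates:
            break
        region = max(candidates, key=candidates.get)
        explored.add(region)
        if region in targets:
            found.add(region)
            # Exploit spatial correlation: a hit makes neighbors
            # more promising, steering the next queries nearby.
            x, y = region
            for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if nb in scores:
                    scores[nb] += boost
    return found

# Toy 3x3 grid; the (hypothetical) visual model flags one region,
# and a second target sits next to it.
priors = {(x, y): 0.1 for x in range(3) for y in range(3)}
priors[(2, 2)] = 0.9
targets = {(2, 2), (2, 1)}
hits = active_search(priors, targets, budget=3)
print(sorted(hits))
```

Even in this simplified form, the loop shows the feedback that distinguishes active search from one-shot detection: finding the flagged region raises its neighbors' scores, so the remaining budget uncovers the second target nearby.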
First author Anindya Sarkar, a doctoral student in Vorobeychik’s lab, presented the findings Jan. 6 at the Winter Conference on Applications of Computer Vision in Waikoloa, Hawaii.
Read more on the McKelvey School of Engineering website.