In fact, the average size of the world’s wildlife populations has declined by nearly 60% over the past 40 years. If these declines continue, many of the planet’s most iconic creatures could disappear completely within our lifetimes. But for organizations working to protect these animals, knowing where to start can be a challenge in its own right: to defend at-risk species from human impact, conservationists must first identify which animals are in danger, what threats they’re facing, and where they’re headed next.
The Zoological Society of London, or ZSL—a conservation nonprofit dedicated to the protection of wildlife and their habitats—is taking a uniquely data-driven approach to this problem. They use camera traps to safeguard wildlife from poachers and to monitor changes in wildlife population numbers. The cameras use motion and heat sensors to take a picture every time an animal or human passes. This process generates a large amount of data, which, historically, ZSL has manually tagged and categorized image by image. Categorizing imagery often takes months or even years to complete, and when you’re fighting to protect species under threat of extinction, time-consuming processes like this can be costly in more ways than one.
So ZSL teamed up with the Google Cloud team to test a new product, AutoML Vision (now in public alpha), which will allow them to translate this wealth of camera data into useful insights. Using AutoML, they’re developing custom machine learning models that can identify species within camera trap data and dramatically speed up large-scale analysis. The testing is still ongoing, but the early results offer a signal of what’s possible when organizations are able to leverage the latest advancements in AI.
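To make the workflow concrete, here is a minimal sketch of how a trained AutoML Vision model could be queried from Python to label a single camera-trap image. This is not ZSL's actual pipeline: the project ID, model ID, file path, and species labels below are hypothetical placeholders, and the call assumes the `google-cloud-automl` client library with a deployed image-classification model.

```python
def classify_image(project_id, model_id, image_path):
    """Send one camera-trap image to a deployed AutoML Vision model.

    Hypothetical sketch: requires the google-cloud-automl package,
    valid credentials, and a deployed model; none are part of the article.
    """
    from google.cloud import automl_v1  # requires google-cloud-automl

    client = automl_v1.PredictionServiceClient()
    model_name = client.model_path(project_id, "us-central1", model_id)
    with open(image_path, "rb") as f:
        payload = automl_v1.ExamplePayload(
            image=automl_v1.Image(image_bytes=f.read())
        )
    response = client.predict(name=model_name, payload=payload)
    # Each annotation carries a label (e.g. a species name) and a confidence score.
    return [(a.display_name, a.classification.score) for a in response.payload]


def top_label(predictions):
    """Pick the highest-confidence label from (label, score) pairs."""
    return max(predictions, key=lambda pair: pair[1])[0]


# The scoring step alone, on illustrative made-up predictions:
sample = [("elephant", 0.91), ("human", 0.05), ("empty", 0.04)]
print(top_label(sample))  # elephant
```

A model like this, run over an entire season of camera-trap imagery, is what turns months of manual tagging into an automated batch job.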