The advance of technology over the last several decades has made capturing images of wildlife, even in the most remote places, as simple as strapping a motion-activated camera to a tree — the resulting photos providing a unique, behind-the-scenes glimpse into the everyday lives of the planet’s diverse species. When it comes to sorting these images, however, the technology has not kept pace.
That was the case, at least, until recently.
On a good day, wildlife biologists and researchers can sift through and label between 300 and 1,000 camera-trap images per hour.
Enter Google. A new artificial intelligence (AI) model developed by the Google Earth Outreach team is increasing the rate at which species can be identified from camera-trap images. The model powers Wildlife Insights, a partnership with Conservation International and six other conservation organizations; it has been trained to recognize hundreds of species from around the world and can classify images up to 3,000 times faster than humans, at a rate of 3.6 million an hour.
It begins with researchers uploading their captures, which the AI-enabled, Google Cloud-based platform then analyzes to determine whether a species is pictured and, if so, which one. Researchers can then use the platform's online tools to analyze trends and create maps, graphs and reports.
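The flow described above can be sketched roughly as follows. This is an illustrative assumption, not Wildlife Insights' actual code; names like classify_image and the toy score table are hypothetical stand-ins for the real classifier.

```python
# Hypothetical sketch of the upload-and-classify step: a model scores an
# image against known labels, and the top-scoring label (with its
# confidence) is returned. "blank" stands for no animal detected.

def classify_image(pixels, model):
    """Return (label, confidence) for the most likely species."""
    scores = model(pixels)              # maps label -> probability
    label = max(scores, key=scores.get)
    return label, scores[label]

def toy_model(pixels):
    # A fixed score table standing in for a real trained classifier.
    return {"blank": 0.05, "jaguar": 0.90, "ocelot": 0.05}

label, conf = classify_image(None, toy_model)
print(label, conf)  # jaguar 0.9
```

In the real platform this classification happens server-side during upload, at scale, rather than one image at a time.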
“Google is a leader in the AI space, and so it’s almost our duty to apply AI to solve some of the world’s most pressing environmental problems.”
The goal: to help conservationists make better, more effective conservation management decisions to aid in the protection of species. Already, Wildlife Insights is the largest and most diverse public camera-trap database in the world and allows people (even the general public) to explore millions of camera-trap images, filtered by species, country or year. And, according to Google Earth Outreach Program Manager Tanya Birch, Google is looking for more conservation organizations to join in this effort to help grow the database.
To learn more about Google’s new partnership and the technology that’s driving it, we recently spoke with Birch. She discussed Google’s role, the risks associated with using this technology, how organizations can get involved and her hopes for the future of wildlife conservation.
Q: How did this partnership come about, and where did the idea to use AI technology to identify species originate?
A: Wildlife Insights is a joint collaboration among seven leading conservation organizations, led by Conservation International, with Google as a founding technology partner. The partners recognized the need to develop a platform that leverages Google Cloud and technology like AI to drastically streamline and speed up conservation monitoring.
Biodiversity right now is in crisis. We are facing a million species going extinct in the coming decades, some even sooner than that, so we need to bring technology to bear to help solve these kinds of problems.
Q: How will this initiative help researchers better monitor and aid species, especially those that are endangered or threatened?
A: With Wildlife Insights, a conservationist can upload data to Google Cloud, which allows them to collaborate and share data across projects. At the moment, people are using camera traps to observe species in the wild, and this data can sit on disks in disparate locations all around the world. People don't have a mechanism to share that data; often there's a willingness to share it, but no underpinning technology allows them to collaborate with others in using it. So Wildlife Insights is, first and foremost, a tool to help foster that kind of collaboration across a number of different NGOs, government agencies and land-based conservancies that collect this data. These are the kinds of users we think will immediately benefit from this technology.
Secondly, processing all of this camera-trap data is incredibly tedious and can be very manual, so with Google as a leader in AI, this is where we thought we could be the most helpful with Wildlife Insights. At the moment, if you upload camera-trap images to Wildlife Insights, the AI automatically predicts, during the upload process, which species it spots in each image.
“To be able to communicate across country borders can inform where corridors need to be placed, where overpasses and underpasses need to be built, these kinds of things.”
Lastly, at the end of the day, we want to enable the general public to understand where species are, and to make species decline central to people's understanding of the largest planetary emergency we face right now. Putting this together, making public the largest and most diverse camera-trap database so that classrooms, kids, the general public, anybody can go onto Wildlife Insights and browse through pictures of wildlife doing whatever they do in the wild, is just really fun.
Q: You mentioned that it allows people to collaborate. Do you mean this will allow people to work together, say, across borders to assist in recovery efforts, for example?
A: Yeah, exactly. As we know, many species don’t follow country borders. They migrate in and out. There are some species that have huge migrations, and so to be able to communicate across country borders can inform where corridors need to be placed, where overpasses and underpasses need to be built, these kinds of things.
Q: Why is international collaboration as well as the ability to quickly identify species essential to this effort?
A: I think the main thing we're trying to do is save biologists time. We don't want people to be deterred by the manual task of looking at individual photos; sometimes photos are not even looked at, because they're sitting on a disk drive somewhere and nobody has the time to look at them.
In terms of international collaboration, I think it's extremely important that we get the big picture of the risk of extinction and the understanding that this is a global crisis; if we're just looking at it through one lens, we're not going to get the whole picture. So, bringing all of this together in one place on the map can inform biodiversity conservation, and policy.
Q: And I guess it may be able to help determine whether current recovery efforts are working?
Q: How was this AI technology developed? What factors had to be considered or worked in on the backend to ensure the accurate detection and identification of species?
A: Well, we had to identify what our goal was, and it really was around saving people time, so that's the metric we're trying to measure against. When you train any AI model, there's a process of building training data sets. Our partners, Conservation International, the Smithsonian Conservation Biology Institute, Wildlife Conservation Society, WWF and others, have been doing this kind of work for decades, manually labeling individual photos with the species they contain, and that makes a really good training data set for AI models.
The first step was building this training data set in a format we could use to train AI models. The second was actually training, then testing and evaluating, and then improving upon the model. At the moment, we've trained on over 9 million images, and this number is constantly growing as more data comes in. We have between 80 and 90.6 percent accuracy across 100 species, and we've trained on over 618 species overall. So there is a lot of room to grow, and we're constantly working on improving the AI models.
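The train-then-evaluate cycle Birch describes can be sketched in miniature. This is a deliberately simplified assumption, using a toy majority-class "model"; the real system trains deep networks on millions of labeled camera-trap images, but the held-out accuracy measurement works the same way.

```python
# Minimal sketch of the train/test/evaluate cycle: fit on one slice of
# labeled data, measure accuracy on a held-out slice.
from collections import Counter

def train(labeled):
    # labeled: list of (features, species). This toy "model" just
    # predicts the most common species it saw during training.
    majority = Counter(species for _, species in labeled).most_common(1)[0][0]
    return lambda features: majority

def accuracy(model, held_out):
    correct = sum(model(f) == species for f, species in held_out)
    return correct / len(held_out)

data = [(None, "elephant")] * 8 + [(None, "lion")] * 2
model = train(data[:6])             # train on the first six records
print(accuracy(model, data[6:]))    # 0.5 on this toy held-out split
```

Evaluation numbers like the 80 to 90.6 percent figure come from exactly this kind of held-out measurement, then feed back into the "improve the model" step.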
Q: What are the challenges or downfalls of using this technology? You used the word “predict” earlier, so is there any human oversight of this process to ensure that what the AI model identifies as the species is correct?
A: So, number one, we have a huge responsibility to get this right, and Google has AI principles that guide our work. These cover things like being socially beneficial and being accountable to people. They're public, and anybody can go and look them up.
“We’ve been really thoughtful about the potential risks of applying AI — there are risks around endangered species, there are risks around poachers — so we’re moving slowly and cautiously.”
In terms of Wildlife Insights, in the interface, you can see what the model predicted, along with the confidence of that prediction, and a human — a biologist or anybody who uploads their own images — can go in and edit that identification; that then comes back in a feedback loop to correct and improve the model. So this is just the beginning of this project.
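The human-in-the-loop workflow described above can be sketched as a simple triage step. The threshold value and function names here are illustrative assumptions, not the platform's actual parameters: high-confidence predictions are accepted, low-confidence ones go to a human reviewer, and corrections feed back into model training.

```python
# Hedged sketch of the review loop: auto-accept confident predictions,
# queue the rest for a human to verify or correct.
REVIEW_THRESHOLD = 0.8  # assumed cutoff, purely illustrative

def triage(predictions):
    """Split (image_id, label, confidence) tuples into auto-accepted
    labels and a queue for human review."""
    accepted, review_queue = [], []
    for image_id, label, confidence in predictions:
        if confidence >= REVIEW_THRESHOLD:
            accepted.append((image_id, label))
        else:
            review_queue.append((image_id, label, confidence))
    return accepted, review_queue

preds = [("img1", "gray fox", 0.95), ("img2", "bobcat", 0.55)]
accepted, queue = triage(preds)
print(len(accepted), len(queue))  # 1 1
```

Human edits to queued items would then be collected as fresh labeled examples, closing the feedback loop Birch mentions.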
I think the other thing that’s important to mention is that we’ve been really thoughtful about the potential risks of applying AI — there are risks around endangered species, there are risks around poachers — so we’re moving slowly and cautiously, and for endangered animals, we’re obfuscating their locations. Our conservation partners are in the business of protecting endangered species, so we’re following a lot of their lead and doing this in consultation with them.
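One common way to obfuscate locations, offered here purely as an assumption since Birch doesn't specify the method, is to snap coordinates to a coarse grid so the exact camera site can't be recovered from public data:

```python
# Illustrative sketch of location obfuscation by coordinate coarsening;
# the grid size and approach are assumptions, not Wildlife Insights'
# documented method.
def obfuscate(lat, lon, grid=1.0):
    """Snap a camera-trap location to a coarse grid cell (about 100 km
    at a 1-degree grid) so an endangered species' exact site is hidden."""
    return (round(lat / grid) * grid, round(lon / grid) * grid)

print(obfuscate(-1.2921, 36.8219))  # (-1.0, 37.0)
```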
Q: Currently, only researchers and organizations on the frontlines of wildlife conservation are able to upload photos. Do you foresee this changing down the road?
A: Yeah, at the moment, you're right; it's available to our core member NGOs, and we're doing a gradual rollout to more organizations. I think there's a lot of promise in making this a tool that anybody can use, and there's a lot of excitement around it. But, again, I think we have to move thoughtfully here because there are risks of abuse; there's a risk of images we don't want being uploaded to the platform, that kind of thing. What we really want comes back to our principle of saving endangered species and what the best way of doing that is.
Q: So, right now, is it just the original seven NGOs that are able to upload images?
A: There are other organizations in addition to our main, core members, and we’re accepting new users as quickly as we can.
Q: Is there a vetting process that organizations or researchers have to go through before they can upload images?
A: There’s a form where you can email us and elaborate on the kinds of data you have, the breadth of your conservation effort, what organization you’re working with, that kind of thing. We want to make sure that people have a really good experience on the platform, and it’s still young; it’s in beta. So, we want to roll it out gradually.
Q: Why is increased participation in this effort important?
A: One very concrete example is what's considered a biologist's bycatch. If I'm focusing on a certain species, let's say a greater bamboo lemur for fun, I might be completely focused on that lemur, but my cameras may catch a number of different species. If I upload this data and collaborate with other researchers, they might find a whole wealth of other information in my camera-trap images that isn't really important to me. So that's just one example of how collaboration can help and how a platform like Wildlife Insights can drive more of it.
Q: How many countries are currently involved in this effort?
A: Right now, this is all driven by our partners. They all work in a number of different countries, so we're working as quickly as possible to get data from all of those countries onto Wildlife Insights. Right now, there are around 4.5 million camera-trap records on Wildlife Insights that anyone can explore, and with continued outreach efforts, we hope that number will keep growing.
Q: Are any images being uploaded from North America?
A: In particular, the Smithsonian and the North Carolina Museum of Natural Sciences have a lot of efforts in North America. Stay tuned to see that data on Wildlife Insights.
Q: Having this technology, why is it important to Google to be able to use it to help address some of the world’s looming environmental issues?
A: I think it’s important for any company to think about how they can help address global challenges with climate change and loss of biodiversity. Google is a leader in the AI space, and so it’s almost our duty to apply AI to solve some of the world’s most pressing environmental problems — and also, help conservationists and our scientist partners through our expertise in technology.
Q: Is there anything else you want to add?
A: At the end of the day, this is super personal for me, because I've had the fortune of taking my kids to see a lot of species in the wild. When my son was 20 months old, I took him to Kenya, where he could see elephants in the wild; whether he remembers it is a whole other story. But the thought that our children could one day not have that opportunity, to see African elephants roaming the great savannas, it would be such a tragedy. And it doesn't even have to be that far-flung.
We have a camera trap in our backyard, and being able to see a gray fox and other species native to California running around really ties children to nature. Through technology, I think this is a really cool way that Google can help.