Hear that? Bioacoustics is having its moment, but the technology still needs tuning
- The use of audio to study, monitor, detect and conserve species has gained popularity in recent years.
- Passive acoustic monitoring has been found to be more efficient than traditional camera traps; however, audio can be data-heavy and laborious to analyze.
- Technological developments such as artificial intelligence have simplified audio analysis, but conservationists say there are still gaps.
Think of Sonoma County, and picturesque valleys and vineyards come to mind. But the place is also home to rich and remarkable biodiversity. Soundscapes to Landscapes, a biodiversity monitoring initiative in the county, aims to document just that.
Over the past five years, from mid-spring to late summer here in the California wine region, the initiative has collected an enormous amount of sound data by placing recorders at 1,300 locations in the county. The project, run by Sonoma State University, the conservation NGO Point Blue Conservation Science and many other partners, equipped citizen-scientist volunteers with recorders and worked with private landowners to accumulate audio, which was then processed and classified with the help of artificial intelligence.
“The idea is to detect individual species or find information that tells you something new about the types of sounds present,” Leonardo Salas, a quantitative ecologist at Point Blue Conservation Science, told Mongabay in a video interview. “We can characterize entire environments based on this.”
The methodology has proved effective in monitoring changes in ecosystems and studying wildlife patterns. Before the 2017 California fires, Soundscapes to Landscapes had placed audio recorders in a park. Reviewing the data after the fires, the team found a "preponderance" of lazuli buntings (Passerina amoena), a species of songbird that had never been seen or heard in the park before the fires. Initially, the citizen scientist who monitored the park thought it was an error in the AI models. But he later deduced that the songbirds favored burned areas and may have flown in after the fires, helping the team understand how the fires changed the park's ecosystem.
Audio data has been used for decades to monitor, study and conserve wildlife. In recent years, bioacoustics has gained importance as a non-invasive method for studying wild animals. It can be used to study entire landscapes and detect species, as Salas’s team does, but also to understand the behavioral and communicative patterns of animals.
The ability of audio recorders to collect large amounts of data can make them more efficient than traditional camera-trapping and remote monitoring methods. A study published in 2020 in the journal Methods in Ecology and Evolution found that passive acoustic monitoring is "a powerful tool for species monitoring" that detected wild chimpanzees (Pan troglodytes) in Tanzania five times faster than visual techniques. Another study, published in the journal Ecological Indicators in 2019, compared sound recorders to camera traps, finding that the advantage of the former was their larger detection areas, which were "100-7,000 times larger than those of camera traps."
However, larger coverage areas mean more data to analyze, making sound data analysis labor-intensive. Technological innovations like artificial intelligence and machine learning have helped simplify the process. But conservationists say there's still a long way to go before technology makes processing audio data faster and easier.
Salas says the artificial intelligence models used by Soundscapes to Landscapes often expose these technological gaps. In the past, models have mistaken the noise of a motorcycle engine for the cooing of a dove, and confused girls' chatter with the sounds of quail. "There is an immense ability to monitor wildlife using sound data, but the technology is not there yet," he says. "My concern is [whether] it can happen fast enough so that we can start tracking how the planet is changing."
Darren Proppe, who has been using audio data for years to study songbirds in Texas, says he is "skeptical of AI without any human ground-truthing." Human intervention, he argues, is necessary not only to detect errors but also to raise larger questions that automated analysis cannot answer.
"If I'm just looking for the presence or absence of a bird, a puma, or an insect, then the vocalizations can confirm it," Proppe, the director of the Wild Basin Creative Research Center at St. Edward's University in Texas, tells Mongabay in a video interview. "But the bigger question would be, what are you missing out on? And humans will really have to check to make sure they are not misled."
Access to real-time monitoring and affordable data transfer is another concern when it comes to managing bioacoustic data.
It is a problem that Daniela Hedwig knows all too well. As director of Cornell University's Elephant Listening Project, she and her team have for years been listening to and recording African forest elephants (Loxodonta cyclotis) that roam the rainforests of Central Africa. As a keystone species, elephants play a vital role in maintaining and shaping the forest structure. The data collected by the project is passed on to governments, which can use it to identify locations for conservation activities. The project also helps track poaching by detecting gunshots in the audio. But the inability to monitor in real time, coupled with the inefficiencies of automated detectors, makes the process slow and laborious.
The data is collected from the recorders every four months, after which Hedwig's team takes nearly three weeks to analyze the audio, which can often amount to 8 terabytes, about 1,100 hours of 4K-quality video streamed on Netflix. "The reason is that the detectors are not perfect and we have to go through each detection, examine it and decide if it was actually a gunshot or not," Hedwig tells Mongabay in a video interview.
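The review step Hedwig describes, where a person verifies every automated detection, can be sketched as a simple confidence triage. The thresholds, field names and function below are hypothetical illustrations, not the Elephant Listening Project's actual pipeline:

```python
# Illustrative sketch: triaging automated gunshot detections for human review.
# Thresholds and the detection record format are assumptions for this example.

DISCARD_BELOW = 0.50   # below this confidence, drop the hit (assumed value)
AUTO_ACCEPT = 0.95     # above this, accept without manual review (assumed value)

def triage(detections):
    """Split detector hits into accepted, needs-human-review, and discarded."""
    accepted, review, discarded = [], [], []
    for d in detections:
        if d["score"] >= AUTO_ACCEPT:
            accepted.append(d)
        elif d["score"] >= DISCARD_BELOW:
            review.append(d)   # a human decides: real gunshot or false alarm?
        else:
            discarded.append(d)
    return accepted, review, discarded

# Example detector output (fabricated values for illustration)
hits = [
    {"time_s": 12.4,  "label": "gunshot", "score": 0.98},
    {"time_s": 310.0, "label": "gunshot", "score": 0.72},
    {"time_s": 955.1, "label": "gunshot", "score": 0.31},
]
accepted, review, discarded = triage(hits)
print(len(accepted), len(review), len(discarded))  # prints: 1 1 1
```

With imperfect detectors, the middle band is where the labor goes: every borderline hit still needs a human ear, which is why a four-month batch can take weeks to clear.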
Overcoming these challenges along with the incorporation of real-time monitoring, says Hedwig, will push bioacoustics technology further. Given the immense interest the field has garnered in recent times, she says she is optimistic.
“Imagine anti-poaching units sitting in their control room, getting information on a poacher in real time and saying ‘Hey, we have to send people out and capture them,'” says Hedwig. “This will be the big turning point.”
Crunchant, A., Borchers, D., Kühl, H., & Piel, A. (2020). Listening and watching: Do camera traps or acoustic sensors more efficiently detect wild chimpanzees in an open habitat? Methods in Ecology and Evolution, 11(4), 542-552. doi:
Enari, H., Enari, H. S., Okuda, K., Maruyama, T., & Okuda, K. N. (2019). An evaluation of the efficiency of passive acoustic monitoring in detecting deer and primates in comparison with camera traps. Ecological Indicators, 98, 753-762. doi: 10.1016/j.ecolind.2018.11.062
Related audio from the Mongabay podcast: Elephant Listening Project research analyst Ana Verahrami explains the role of forest elephants as a keystone species for tropical forest survival and regeneration, and plays some of the recordings of elephant behavior and vocalizations informing the project's work, listen here: