TY - JOUR
T1 - Learning to detect an animal sound from five examples
AU - Nolasco, Ines
AU - Singh, Shubhr
AU - Morfi, Veronica
AU - Lostanlen, Vincent
AU - Strandburg-Peshkin, Ariana
AU - Vidaña-Vila, Ester
AU - Gill, Lisa
AU - Pamuła, Hanna
AU - Whitehead, Helen
AU - Kiskin, Ivan
AU - Jensen, Frants H.
AU - Morford, Joe
AU - Emmerson, Michael G.
AU - Versace, Elisabetta
AU - Grout, Emily
AU - Liu, Haohe
AU - Ghani, Burooj
AU - Stowell, Dan
N1 - Funding Information:
The authors would like to thank the participants of the successive editions of the Few-shot Bioacoustic Event Detection task at the DCASE Challenge, without whom this work would not have been possible. IN is supported by the Engineering and Physical Sciences Research Council (grant number EP/R513106/1). SS is a research student at the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, supported jointly by UK Research and Innovation [grant number EP/S022694/1] and Queen Mary University of London. VL acknowledges funding from CNRS, MITI award CAPTEO. ASP acknowledges funding from Human Frontier Science Program award RGP0051/2019. The work was also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (EXC 2117 – 422037984). ASP received additional support from the Gips-Schüle Stiftung and the Max Planck Institute of Animal Behavior. The data for meerkats were collected at the Kalahari Meerkat Project in South Africa, currently supported by a European Research Council Advanced Grant (No. 742808 and No. 294494) to Tim H. Clutton-Brock, the MAVA Foundation, and the University of Zurich. We further thank the Trustees of the Kalahari Research Centre and the Directors of the Kalahari Meerkat Project. Spotted hyena data were collected in collaboration with the MSU-Mara Hyena Project, and data collection was additionally supported by a grant from the Carlsberg Foundation to FHJ.
Publisher Copyright:
© 2023 The Authors
PY - 2023/11
Y1 - 2023/11
AB - Automatic detection and classification of animal sounds have many applications in biodiversity monitoring and animal behaviour. In the past twenty years, the volume of digitised wildlife sound available has massively increased, and automatic classification through deep learning now shows strong results. However, bioacoustics is not a single task but a vast range of small-scale tasks (such as individual ID, call type, emotional indication) with wide variety in data characteristics, and most bioacoustic tasks do not come with strongly-labelled training data. The standard paradigm of supervised learning, focussed on a single large-scale dataset and/or a generic pre-trained algorithm, is insufficient. In this work we recast bioacoustic sound event detection within the AI framework of few-shot learning. We adapt this framework to sound event detection, such that a system can be given the annotated start/end times of as few as 5 events, and can then detect events in long-duration audio, even when the sound category was not known at the time of algorithm training. We introduce a collection of open datasets designed to strongly test a system's ability to perform few-shot sound event detection, and we present the results of a public contest to address the task. Our analysis shows that prototypical networks are a very commonly used strategy and that they perform well when enhanced with adaptations for general characteristics of animal sounds. However, systems with high time-resolution capabilities perform best in this challenge. We demonstrate that widely varying sound event durations are an important factor in performance, as is non-stationarity, i.e. gradual changes in conditions throughout the duration of a recording. For fine-grained bioacoustic recognition tasks without massive annotated training data, our analysis demonstrates that few-shot sound event detection is a powerful new method, strongly outperforming traditional signal-processing detection methods in the fully automated scenario.
KW - Bioacoustics
KW - Deep learning
KW - Event detection
KW - Few-shot learning
UR - http://www.scopus.com/inward/record.url?scp=85169825272&partnerID=8YFLogxK
U2 - 10.1016/j.ecoinf.2023.102258
DO - 10.1016/j.ecoinf.2023.102258
M3 - Article
AN - SCOPUS:85169825272
SN - 1574-9541
VL - 77
JO - Ecological Informatics
JF - Ecological Informatics
M1 - 102258
ER -