Article published in Animals

Bats play a crucial role as bioindicators of environmental change, making their monitoring highly valuable. Wind energy plants in particular have been found to cause significant fatality rates among bats, as well as birds, mainly through direct collision with the rotor blades or through barotrauma. However, manual identification and classification of bats from their echolocation sounds is an expensive and time-consuming process. To address this issue, we present an automated analysis pipeline applied to a large dataset recorded over a period of two years at a wind test field. The work, published in Animals, proposes statistical methods based on convolutional neural networks and clustering techniques to examine the relationship between background noise and bat echolocation sounds. In addition, the methodology performs classification at both the genus and species level, with high accuracy for most bat classes.


Bats are widely distributed around the world, have adapted to many different environments and are highly sensitive to changes in their habitat, which makes them essential bioindicators of environmental change. Passive acoustic monitoring over long durations, such as months or years, accumulates large amounts of data, turning manual identification into a time-consuming task for human experts. Automated acoustic monitoring of bat activity is therefore an effective and necessary approach for bat conservation, especially in wind energy applications, where flying animals such as bats and birds suffer high fatality rates. In this work, we provide a neural-network-based approach for bat echolocation pulse detection with subsequent genus and species classification under real-world conditions, including various types of noise. Our supervised model is supported by an unsupervised learning pipeline that uses autoencoders to compress linear spectrograms into latent feature vectors, which are then fed into a UMAP clustering algorithm. This pipeline offers additional insight into the data properties, aiding model interpretation. We compare data collected from two locations over two consecutive years, sampled at four heights (10 m, 35 m, 65 m and 95 m). With sufficient data for each labeled bat class, our model is able to capture the full echolocation soundscape of a species or genus while remaining computationally efficient and simple by design. On a previously unseen test set, classification F1 scores range from 92.3% to 99.7% for species and from 94.6% to 99.4% for genera.
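To illustrate the unsupervised part of the pipeline, the sketch below compresses spectrogram-like frames into a low-dimensional latent space and clusters them. It is a minimal stand-in, not the paper's implementation: the data are synthetic "call types" (hypothetical), the linear autoencoder is solved via its PCA-equivalent SVD shortcut rather than trained (the paper uses a convolutional autoencoder), and a tiny k-means replaces UMAP-based clustering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for linear spectrogram frames: two synthetic "call types"
# with different peak frequencies (hypothetical data, not the paper's recordings).
n, bins = 100, 64
f = np.linspace(0.0, 1.0, bins)
calls_a = np.exp(-((f - 0.3) ** 2) / 0.01) + 0.05 * rng.standard_normal((n, bins))
calls_b = np.exp(-((f - 0.7) ** 2) / 0.01) + 0.05 * rng.standard_normal((n, bins))
X = np.vstack([calls_a, calls_b])
X -= X.mean(axis=0)  # center before computing the linear encoding

# A linear autoencoder trained with squared-error loss converges to the PCA
# subspace, so we take the SVD shortcut here; the paper instead uses a
# convolutional autoencoder whose latent vectors go into UMAP.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:2].T  # 2-D latent feature vectors

# Minimal k-means (2 clusters) in the latent space, standing in for the
# UMAP-based clustering step of the paper.
centers = Z[[0, n]].copy()  # seed one center from each call type
for _ in range(20):
    dists = ((Z[:, None, :] - centers[None]) ** 2).sum(axis=-1)
    labels = np.argmin(dists, axis=1)
    centers = np.array([Z[labels == k].mean(axis=0) for k in range(2)])

# With well-separated call types, each cluster should be dominated by one type.
purity = max(np.mean(labels[:n] == 0), np.mean(labels[:n] == 1))
print(f"cluster purity for call type A: {purity:.2f}")
```

In the latent space, frames from distinct call types fall into distinct clusters, which is the kind of structural insight the paper uses for model interpretation and for relating echolocation pulses to background noise.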

More information:

Alipek, S.; Mälzer, M.; Paumen, Y.; Schauer-Weisshahn, H.; Moll, J. An Efficient Neural Network Design Incorporating Autoencoders for the Classification of Bat Echolocation Sounds. Animals 2023 (accepted August 2023).