Documentation 2024

Drafted by the audio team at WN’s Documentation-palooza 2k24

TL;DR: we knock on beehives to agitate the bees, because the sounds they make in response tell us how the hive is doing. Shane worked on the hardware, and Richard and Ashmit worked on the preliminary ML model.

Hardware Team

This year, we mainly worked on designing a setup that goes inside a hive and records audio from multiple microphones simultaneously. The details of how this was done can be found here: Hardware Team Documentation

Once we got this working, we wanted to collect data so that we could use ML sentiment analysis algorithms to determine the status of a beehive. Richard and Ashmit worked on this and know the most about it.

Beekeepers will knock on a hive box when they want to know how their bees are doing. The sound the bees make in response can tell them a lot: whether the colony is missing a queen, whether the bees are sick with a parasite, etc. We wanted to make a device that does exactly that, accurately predicting the state of the hive with sentiment analysis. To do this, we needed to create a dataset of the sounds a hive makes when knocked. We visited a bee research facility on campus, the Robinson lab, on 4/17/24, and they gave us the idea of collecting data from hives undergoing these different stressors, so that we can use sentiment analysis to match the sounds of an arbitrary beehive to its stressor(s). They also had many other recommendations, which can be found here:
Notes from Robinson Lab Visit

Our setup currently consists of a Raspberry Pi, an Arduino, and a stepper motor. The stepper motor is mounted to the side of a hive box and has a pivoting arm that knocks on the box. The Raspberry Pi sends a signal to the Arduino whenever we want to knock on the hive, and the Arduino takes care of the knocking pattern, intensity, etc. Bees cover holes with a waxy substance called propolis (bee gunk), but for some reason they leave holes that are ~2.5–4.5 mm in diameter uncovered (as Wagglenet has found in previous work). The holes in our microphone mesh are much smaller than this, so we designed and 3D-printed plastic microphone covers (and mounts for these covers, so we can easily mount the mics in the hive).
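To make the Pi/Arduino split concrete, here is a minimal Python sketch of the Pi side. The `K,<knocks>,<intensity>` command format is a hypothetical protocol invented for this example, not our actual firmware interface; the real knocking pattern and intensity handling live on the Arduino.

```python
def knock_schedule(n_knocks=3, interval_s=0.5):
    """Return knock times in seconds from the start of the sequence.
    (Illustrative only; in our setup the Arduino owns the timing.)"""
    return [i * interval_s for i in range(n_knocks)]

def encode_knock_command(n_knocks, intensity):
    """Pack a knock request into a one-line ASCII command for the Arduino.
    'K,<knocks>,<intensity>' is a made-up protocol for illustration."""
    if not 1 <= intensity <= 255:
        raise ValueError("intensity must be 1-255")
    return f"K,{n_knocks},{intensity}\n".encode("ascii")

# On the Pi, a command like this would be written to the Arduino over
# USB serial, e.g. with pyserial (port name is an assumption):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
#       port.write(encode_knock_command(3, 128))
```

Keeping the timing loop on the Arduino means the knocks stay evenly spaced even if the Pi is busy recording audio.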

We are planning to deploy our prototype at the Robinson lab before the end of the semester, and we hope to collect enough data points to create a preliminary model. We are also using four microphones in our setup, two of which are more sensitive than the others. We are unsure of how loud the hive will be, so this redundancy should help ensure we get usable data (neither too quiet nor too loud).
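One way to exploit that redundancy downstream is to pick, per recording, the channel with the best signal level. This is a sketch under assumptions: audio as floats in [-1, 1], and quiet/clip thresholds that are illustrative guesses, not values we have calibrated.

```python
import numpy as np

def pick_best_channel(channels, quiet_rms=0.01, clip_level=0.99):
    """Return the index of the channel with the highest RMS that is
    neither too quiet nor clipped, or None if all channels fail.
    Thresholds are placeholder values for illustration."""
    best_idx, best_rms = None, -1.0
    for i, x in enumerate(channels):
        x = np.asarray(x, dtype=float)
        rms = float(np.sqrt(np.mean(x ** 2)))
        # Flag as clipped if more than 0.1% of samples sit at the rail.
        clipped = np.mean(np.abs(x) >= clip_level) > 0.001
        if rms >= quiet_rms and not clipped and rms > best_rms:
            best_idx, best_rms = i, rms
    return best_idx
```

The sensitive mics would win when the hive is quiet; the less sensitive ones take over when the sensitive pair clips.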

Software Team

We built on an ML algorithm to determine when new queen bees are emerging from the hive, so that beekeepers have a sense of when the colony might start swarming. To accomplish this, we used a lightweight neural network (YAMNet, based on MobileNet) to classify various audio sounds. We retrained this lightweight model so that it correctly classifies queen bee piping sounds with 96% accuracy on a dataset we scraped from the web. This pretrained classifier also provides a great backbone for downstream tasks: while we wait for more data to be collected, we can easily fine-tune the model to perform sentiment analysis or other classification tasks in the future.

We didn’t end up pursuing queen bee sound localization because there wasn’t much prior work we could draw on (at least for localization in such a dense and complex acoustic environment).
