Bee Tagging

Context:

UIUC’s Robinson Lab conducts research that requires studying bees tagged with tiny 2×2 mm “QR code” tags. Cameras track the tags to analyze the bees’ behavior (read more in this article by The Apiarist). Bee tagging is the process of applying these tags to the thoraxes of a large group of bees, and manually tagging a colony one by one can take researchers 12+ hours!

WaggleNet’s Bee Tagging team aims to build an automated system for tagging bees, which currently does not exist anywhere. By making this tool available to researchers, we hope to offer significant time savings and make bee tagging more feasible.

Bee Tagging consists of two sub-teams: Hardware and Software. Both sub-teams are deep in development of the first-generation Bee Tagger, and we are nearing the point of testing a fully integrated system. Please keep reading to learn more about our work!

Hardware

This team is responsible for creating all the mechanical and electrical systems.

Bee Tagger Mk1

The philosophy behind Mk1 is to use what is available to us to make a minimum viable product on a fast timeline. Our goal is for both the hardware and software teams to learn from its creation, such that both have a solid foundation of experience entering Mk2 development. Our current design repurposes an old 3D printer for precise movement across an 8″ × 12″ platform. The Tag Applicator takes the place of the printer carriage and can pick and place tags with its vacuum nozzle. Additionally, a Glue Dispenser mechanism is mounted on the bed to supply glue for applying tags. 

From left to right: Tag grid and Glue Dispenser

The Marlin-based 3D printer is an excellent base for us to build off of, as we can utilize its coordinate system by sending G-code commands directly over serial. We are working on establishing control of the mechanisms’ additional actuators with a Raspberry Pi. These actuators include stepper, DC, and servo motors, each of which requires a different connection method. An outline of our control system is shown below:
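As a rough sketch of what sending G-code over serial can look like, the snippet below formats a G1 move command and streams it to the printer with pyserial. The port name, baud rate, and feed rate here are illustrative placeholders, not values from our actual setup.

```python
# Sketch of driving a Marlin-based printer over serial.
# Port name, baud rate, and feed rate are placeholders for illustration.

def gcode_move(x_mm: float, y_mm: float, feed: int = 3000) -> str:
    """Format a G1 linear-move command for the given XY position."""
    return f"G1 X{x_mm:.2f} Y{y_mm:.2f} F{feed}\n"

def send_gcode(port: str, commands: list[str], baud: int = 115200) -> None:
    """Open the printer's serial port and stream commands one at a time."""
    import serial  # pip install pyserial
    with serial.Serial(port, baud, timeout=2) as printer:
        for cmd in commands:
            printer.write(cmd.encode("ascii"))
            printer.readline()  # Marlin replies "ok" when ready for the next command

# Example (requires connected hardware):
# send_gcode("/dev/ttyUSB0", ["G28\n", gcode_move(40.0, 25.0)])
```

Waiting for Marlin’s “ok” response between commands keeps the sender from overflowing the printer’s small command buffer.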

Bee Tagging Hardware’s current objectives are to finish wiring and control of Mk1 actuators, fully integrate with software, and prototype a bee immobilizing system to be used for Mk2. As of Feb 2026, new member applications will open soon! Please apply if one of the following sounds like you:

  1. Mechanical designer capable of complex, moving mechanisms. Strong proficiency in Fusion 360 for CAD, experience with design for 3D printing/3D printer hardware, assembly & hand tool skills. Strong communication skills in team environments, and enthusiasm + dedication towards the project 🙂
  2. Controls system designer with experience in Raspberry Pi, motor control, electrical wiring/circuit design, and Python programming. Experience with modifying 3D printers, Marlin, or Klipper is a bonus. Strong communication skills in team environments, and enthusiasm + dedication towards the project 🙂

Software

This team is responsible for designing the software that detects bees in the enclosed chamber and sends positional data to the actuators. Video feed from a camera module is sent to a Raspberry Pi where a machine learning model identifies and communicates coordinates to the actuators.

System process:

  1. The camera feed captures footage of the bee chamber and communicates it to the Raspberry Pi.
  2. The machine learning model identifies the bee and its thorax.
  3. The model places bounding boxes over both the bee and the thorax.
  4. The model computes the coordinates of the bounding boxes and communicates with the actuator to move to the tagging location.
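To make step 4 concrete, here is a minimal sketch of turning a detected thorax bounding box into a platform coordinate. The calibration values (mm-per-pixel scale and platform origin) are made-up placeholders; a real system would measure these during camera setup.

```python
# Sketch: convert a thorax bounding box from the camera frame into
# platform (machine) coordinates for the tag applicator.
# Calibration numbers below are illustrative placeholders.

def bbox_center(box: tuple[float, float, float, float]) -> tuple[float, float]:
    """Center of an (x1, y1, x2, y2) bounding box, in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def pixel_to_machine(px: float, py: float,
                     mm_per_px: float = 0.1,
                     origin_mm: tuple[float, float] = (0.0, 0.0)) -> tuple[float, float]:
    """Map a pixel coordinate to a platform XY position in millimeters,
    assuming a fixed overhead camera with a simple scale-and-offset calibration."""
    ox, oy = origin_mm
    return (ox + px * mm_per_px, oy + py * mm_per_px)

# Example: thorax detected at pixels (100, 50)-(140, 90)
cx, cy = bbox_center((100, 50, 140, 90))  # (120.0, 70.0)
target = pixel_to_machine(cx, cy)         # ≈ (12.0, 7.0) mm
```

The resulting XY target is what would be handed to the actuator control layer described in the Hardware section.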

Skills used: Computer vision, machine learning, Raspberry Pi, Linux, and a willingness to learn!

Feb 2025 Update: At this time, all Bee Tagging Software positions have been filled. Keep an eye out for future opportunities down the road!

New Recruit Ideas: Given a fully labeled dataset of images with bounding boxes classifying bees and their thoraxes, explain how you would go about training and verifying a model to accurately identify the bees (e.g., the framework, training routine, and parameters). Feel free to include code/pseudocode to explain your thought process.

Here’s something else we’re currently thinking about: upon recognizing the bee and the thorax in the camera feed, we need an algorithm that receives the positional data and relays it to the actuator. However, bees can change location erratically and flap their wings very quickly (~200 beats per second), challenging both the software and hardware implementations. What are some ways we can mitigate this?

We’d love to see your ideas in your application!