In this post, we will share an update on how we are using machine learning to identify fish in underwater videos, based on data labelled by citizen scientists.

Adi Gabay and Ohad Tayler.
As nature and wildlife enthusiasts, and as scuba divers, we have always heard how much the fish and reef populations in Eilat have declined. What used to be a vibrant home to many species has been terribly damaged by human activity and global warming, as has happened to many other reefs around the globe.

Photograph of the coral reef in Eilat, Israel.
Photograph of Adi Gabay.
When we started our computer science and statistics studies, we didn’t think that our graduation project would directly help the conservation of species and marine life. Volunteering for Wildlife.ai was an amazing opportunity to use our skills for a good cause and to have a real impact on the efforts to conserve marine species.
Today, conservation scientists analyze and assess marine life in the ocean around New Zealand by manually counting the number of fish in short videos taken underwater. A system that counts the fish automatically would greatly help this research, but devising such a system turned out to be very complicated.
The most developed technology in the area of object detection and recognition is convolutional neural networks (CNNs), which require a lot of annotated data to be trained properly, as well as massive computational resources. However, getting the annotated data is not an easy task. This is why Wildlife.ai created Spyfish Aotearoa, a citizen science website where volunteers can label photos from underwater videos.
Homepage of Spyfish Aotearoa, the citizen science website used to collect labelled images of fish
Spyfish Aotearoa has allowed us to take the following approach: train a Faster R-CNN model on the annotated data, focusing on common species that we are particularly interested in, such as snapper and blue cod, so that the model learns to recognize them.
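As a rough illustration, a detector like this can be set up with torchvision by loading a Faster R-CNN pre-trained on COCO and replacing its classification head. The species list and class count below are illustrative, not our exact configuration.

```python
# A minimal sketch of setting up a Faster R-CNN detector with torchvision.
# The species list and number of classes are illustrative placeholders.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

SPECIES = ["snapper", "blue cod"]    # example target species
NUM_CLASSES = len(SPECIES) + 1       # +1 for the background class

# Start from a detector pre-trained on COCO and replace its box predictor
# so it outputs scores for our fish classes instead of the COCO classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```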
Our goal was to create a pipeline that would allow us to run different experiments easily as the data and the hyperparameters change.
Our first challenge was processing the annotations and converting the data into the right format for our model. Each picture is tagged by several different volunteers, so in order to get the best classification for each fish in the picture, we needed a clever method to aggregate the annotations into a single classification per fish. There is a lot to take into account when aggregating, such as how close two annotations need to be before they are merged into one classification, and how we treat conflicting annotations. We also needed to consider different aggregation parameters for each fish species, as some are harder to spot than others, like the scarlet wrasse, which is very small and hard to detect.
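To make the idea concrete, here is a hypothetical sketch of one way such an aggregation could work: boxes whose centres are close enough are grouped, each group is reduced to its mean box, and conflicting labels are resolved by majority vote. The function names, distance threshold, and vote count are illustrative placeholders, and the threshold could be tuned per species.

```python
# Hypothetical sketch of aggregating several volunteers' annotations into one
# classification per fish. Boxes whose centres are close enough are grouped,
# each group becomes its mean box, and conflicting labels are resolved by
# majority vote. Thresholds here are placeholders and could vary per species.
from collections import Counter
import numpy as np

def centre(box):
    """Centre point of a [x1, y1, x2, y2] box."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def aggregate(annotations, max_dist=30, min_votes=2):
    """annotations: list of (box, label) pairs from different volunteers."""
    groups = []
    for box, label in annotations:
        for group in groups:
            gx, gy = centre(group[0][0])
            bx, by = centre(box)
            if ((gx - bx) ** 2 + (gy - by) ** 2) ** 0.5 <= max_dist:
                group.append((box, label))
                break
        else:
            groups.append([(box, label)])

    aggregated = []
    for group in groups:
        if len(group) < min_votes:               # not enough volunteers agree
            continue
        boxes = np.array([b for b, _ in group])
        label = Counter(l for _, l in group).most_common(1)[0][0]
        aggregated.append((boxes.mean(axis=0).tolist(), label))
    return aggregated
```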

Examples of two labelled underwater frames with the incorrect format.
After finding the right parameters for the data aggregation, we needed to convert the annotations into a format that our model can understand. This was the first step in our pipeline.
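Assuming a torchvision-style Faster R-CNN, this conversion can look roughly like the sketch below, where each image gets a dict of box and label tensors; the label mapping is illustrative.

```python
# A sketch of converting aggregated annotations into the target format that
# torchvision's detection models expect: one dict per image with "boxes" and
# "labels" tensors. The label mapping is illustrative.
import torch

LABEL_TO_ID = {"snapper": 1, "blue cod": 2}   # 0 is reserved for background

def to_target(aggregated):
    """aggregated: list of ([x1, y1, x2, y2], species_name) for one image."""
    boxes = [box for box, _ in aggregated]
    labels = [LABEL_TO_ID[name] for _, name in aggregated]
    return {
        "boxes": torch.tensor(boxes, dtype=torch.float32).reshape(-1, 4),
        "labels": torch.tensor(labels, dtype=torch.int64),
    }
```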
The second step was generating the train, validation, and test datasets. In order to properly train and evaluate the network, it is important to split the data so that each dataset contains all types of fish in roughly the same proportion.
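A simplified way to achieve this is a stratified split. The sketch below stratifies by the most common species in each image, which is an approximation since a frame can contain several species, but it keeps class proportions roughly balanced across the three sets.

```python
# A simplified sketch of a stratified train/validation/test split.
# Stratifying by each image's most common species is an approximation,
# since one frame can contain several species.
from collections import Counter
from sklearn.model_selection import train_test_split

def stratified_split(image_ids, image_labels, seed=42):
    """image_labels: dict mapping image id -> list of species names in it."""
    strata = [Counter(image_labels[i]).most_common(1)[0][0] for i in image_ids]
    train_ids, rest_ids, _, rest_strata = train_test_split(
        image_ids, strata, test_size=0.3, stratify=strata, random_state=seed)
    val_ids, test_ids = train_test_split(
        rest_ids, test_size=0.5, stratify=rest_strata, random_state=seed)
    return train_ids, val_ids, test_ids
```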
The third step in the pipeline was training the model while documenting its progress and occasionally evaluating it on the validation set.
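In PyTorch terms, this step can be sketched roughly as follows; the data loaders, learning rate, and evaluation helper are placeholders rather than our exact setup.

```python
# A sketch of the training step: train for one epoch at a time, log the loss,
# and periodically evaluate on the validation set. Data loaders and the
# evaluation helper are placeholders for the real pipeline components.
import torch

def train_one_epoch(model, optimizer, train_loader, device):
    """Run one pass over the training data and return the last batch loss."""
    model.train()
    for images, targets in train_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # detection models return a dict of losses in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()

def train(model, train_loader, val_loader, evaluate_fn, num_epochs=20, eval_every=5):
    """Train, log progress, and occasionally evaluate on the validation set."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    for epoch in range(num_epochs):
        epoch_loss = train_one_epoch(model, optimizer, train_loader, device)
        print(f"epoch {epoch}: loss {epoch_loss:.3f}")
        if (epoch + 1) % eval_every == 0:
            evaluate_fn(model, val_loader, device)
    return model
```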
The final step was evaluating the trained model on the test dataset, considering multiple metrics such as the confusion matrix and IoU (intersection over union).
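As an illustration, the IoU and the per-class matching that feeds a confusion matrix can be computed roughly like this; the threshold and the handling of missed and spurious detections are simplified.

```python
# An illustrative sketch of matching predicted boxes to ground-truth boxes by
# IoU, which is the basis for both the confusion matrix and the IoU score.
# Class ids run from 0 to num_classes - 1; the extra row/column of the
# confusion matrix collects missed and spurious detections.
import numpy as np

def box_iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def update_confusion(confusion, preds, truths, iou_thr=0.5):
    """preds, truths: lists of (box, class_id) for one image."""
    background = confusion.shape[0] - 1           # last index = no matching box
    matched = set()
    for p_box, p_cls in preds:
        ious = [box_iou(p_box, t_box) for t_box, _ in truths]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr:
            confusion[truths[best][1], p_cls] += 1
            matched.add(best)
        else:
            confusion[background, p_cls] += 1     # spurious detection
    for i, (_, t_cls) in enumerate(truths):
        if i not in matched:
            confusion[t_cls, background] += 1     # missed fish
    return confusion
```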

Results of the first three full-scale experiments, each with a different type of image augmentation.
Our first experiment was an overfit test: the model was given a dataset of only 20 images. Our goal was to see whether the model could memorize this small dataset; if it cannot, it is not strong enough to succeed at the larger task.
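Reusing the hypothetical train function sketched above, the overfit check can be expressed roughly like this; the subset size and epoch count are illustrative.

```python
# A sketch of the overfit sanity check: train on a tiny fixed subset and
# confirm the loss can be driven close to zero. The dataset and loader
# construction are placeholders for the real pipeline.
from torch.utils.data import DataLoader, Subset

def collate(batch):
    return tuple(zip(*batch))   # detection batches are lists, not stacked tensors

def overfit_check(model, dataset, num_images=20, num_epochs=200):
    tiny = Subset(dataset, list(range(num_images)))
    loader = DataLoader(tiny, batch_size=2, shuffle=True, collate_fn=collate)
    return train(model, loader, loader, evaluate_fn=lambda *args: None,
                 num_epochs=num_epochs, eval_every=num_epochs)
```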
We look forward to continuing to work on this project and to sharing with all of you the results of the full-scale experiments we have just started.
Along the way, we had guidance from Eran Paz, who helped us prepare for the project, build a work plan, create the pipeline from scratch, solve problems and make decisions.
Victor Anton from Wildlife.ai helped us understand the project and its cause, obtain and make sense of the data, and set up computational resources.
Thanks to Gal Hayems, who was always there for us whenever we needed advice.