Welcome to the first of two posts where I will explain how we developed landmark detection models for Archey’s frogs. Much like commonly used face-detection models, our models predict six landmarks on each frog, which ultimately facilitate the individual identification of this threatened frog species. In this post, I will share some background on the project and discuss how we prepared the data.


Archey’s frog on fern.
Photo by James Reardon.

Project background

Archey’s frog is endemic to New Zealand, is one of the most archaic amphibians in the world, and is one of only three extant species in the taxonomic family Leiopelmatidae. It is also New Zealand’s smallest endemic frog, growing up to c. 40 mm in body length.

These frogs tend to stay in specific areas, rarely travelling outside a 10-metre radius, and they have distinctive camouflage skin patterns that can be used like a fingerprint to identify each individual. These characteristics make the species well suited to monitoring.


Three different Archey’s frogs.
Photos were taken by the New Zealand Department of Conservation.

Sadly, Archey’s frog is listed as Critically Endangered on the IUCN Red List. One of its main threats is introduced mammalian predators, mainly rats. By monitoring Archey’s frogs, the New Zealand Department of Conservation (DOC) can better understand how to preserve them. Specifically, their monitoring has focused on an experimental approach that compares different rat control methods to determine the optimal tools for reducing rat numbers and promoting frog population recovery [1].


Close up of an Archey’s frog.
Photo by James Reardon.

Currently, the frog monitoring program consists of two main stages. The first takes place once or twice a year, when DOC rangers search for frogs in the wild over four consecutive nights and photograph the frogs they find. The second stage is categorising each frog photo as either a recaptured frog (i.e. previously photographed) or a new individual (i.e. photographed for the first time). This stage is done by hand and takes several months or more, making it the main bottleneck of the monitoring program. We would therefore like to replace this second stage with our AI model, turning a time- and resource-consuming process into a much more efficient one.

Our solution consists of three parts: 1) Landmark Detection; 2) Frog Identification; and 3) Size Estimation. Both Frog Identification and Size Estimation will use Landmark Detection’s output to correctly process and predict from a frog’s image. The Landmark Detection model predicts six frog landmarks in an image. These landmarks are used to morph and crop the image for the Frog Identification model, much like the face-detection models broadly used nowadays. The Size Estimation model aims to estimate different frog measurements, such as the length from snout to vent, for later research purposes.
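To make the morph-and-crop step more concrete, here is a minimal sketch of landmark-based alignment using OpenCV. The canonical template coordinates, the 256×256 output size, and the choice of a similarity transform are illustrative assumptions for this example, not our final implementation:

```python
import cv2
import numpy as np

# Hypothetical canonical (x, y) positions for the six landmarks in a
# 256x256 output image: snout, left leg, right leg, left eye, right eye, vent.
TEMPLATE = np.float32([
    [128,  40], [ 60, 120], [196, 120],
    [ 96,  70], [160,  70], [128, 230],
])

def align_frog(image, landmarks):
    """Warp the image so its detected landmarks line up with the template."""
    src = np.float32(landmarks)
    # Estimate a similarity transform (rotation + scale + translation)
    # that best maps the detected landmarks onto the template points.
    M, _ = cv2.estimateAffinePartial2D(src, TEMPLATE)
    return cv2.warpAffine(image, M, (256, 256))
```

Aligning every photo to the same template this way means the downstream models see frogs at a consistent position, scale and orientation.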


Data rangers involved in the Pepeketua ID project.
From left to right, Chen Yoffe, Bar Vinograd, Dror Assaf, Gal Gozes, Elad Carmon and Guy Hay.

Data available

Until now, all Archey’s frog images have been processed by DOC in the same manner. Rangers analyse each frog image and tag four distinctive skin features. Using these features, the ranger assigns the frog to a subgroup of frogs that share the same features. Then, by carefully comparing each frog in the subgroup, the ranger identifies the specific individual among the hundreds of previously seen frogs.


Manually labelled photos of an Archey’s frog.
Photos taken by the New Zealand Department of Conservation.

Once a frog is identified, the ranger writes the date, location and frog identification onto each photo using computer software (usually Microsoft Paint). This process changes the image size seemingly at random, resulting in a dataset with extremely varied image sizes.


Plot of all image sizes in the dataset.

Data labelling and preparation

For our Landmark Detection model, we chose six of the most important feature points for measuring a frog’s size. These landmarks make it possible to morph and crop each image so that, as far as possible, the images are standardised before we run the Frog Identification and Size Estimation models. The six feature points we chose were: 1) tip of the frog’s snout, 2) left front leg, 3) right front leg, 4) left eye, 5) right eye, and 6) vent.


Labelled image of an Archey’s frog; the green dots are the six landmarks the model must predict.

To label the data, we created a Zooniverse project in which volunteers manually labelled the landmarks in as many images as they wished, without needing to register or download any specific software. In this way we reached 1,642 labelled images, of which 170 were labelled twice.

We used the images that were labelled by at least two different volunteers to understand how accurately users labelled the photos. As can be seen in the figure below, most double-labelled images agreed closely, apart from a few outliers, which were mostly due to program errors or a misunderstanding of the correct points to label.


Difference in labels between images labelled at least twice, where the distance is normalized by image size.

The most commonly mislabelled points were the front legs, where a couple of volunteers misunderstood the assignment and labelled the knees instead. Excluding outliers, the mean difference between labels was 1.4% of the image size, with a standard deviation of 0.7%. Given the low number of images overall, we set the outlier cutoff at two standard deviations above the mean, i.e. 2.8% of the image size.

For images labelled at least twice, we took the mean of the labels as the ground truth, provided the difference between the labels (as a percentage of image size) was below this cutoff; otherwise, the image was discarded.
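As an illustration, a minimal sketch of this aggregation step might look as follows; the (6, 2) array layout and the normalisation by the image diagonal are assumptions made for the example:

```python
import numpy as np

CUTOFF = 0.028  # two std devs above the mean difference (2.8% of image size)

def aggregate_labels(label_a, label_b, img_w, img_h):
    """Merge two volunteers' labels for one image, or reject the image.

    label_a, label_b: (6, 2) arrays of (x, y) landmark coordinates.
    Returns the mean label as ground truth, or None if the volunteers
    disagree by more than the cutoff.
    """
    # Normalise the per-landmark distances by the image diagonal so the
    # threshold is comparable across the very differently sized images.
    diagonal = np.hypot(img_w, img_h)
    distances = np.linalg.norm(label_a - label_b, axis=1) / diagonal
    if distances.mean() > CUTOFF:
        return None  # outlier pair: discard the image
    return (label_a + label_b) / 2.0
```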

After aggregating the labels from multiple volunteers and dealing with the outliers, we ended up with a total of 1,633 relevant labelled images.


Examples of inaccurately labelled frogs.
Probably the result of a program malfunction (left) and of a user labelling the “knee” instead of the leg joint (right).

The data was split into train, validation, and test sets. To avoid data leakage, the images were split by frog ID, ensuring that no images of the same frog appeared in more than one set. The split was as follows: 1,124 images (70%) for training, 257 images (15%) for validation, and 252 images (15%) for testing.
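A group-aware split like this can be done, for example, with scikit-learn’s GroupShuffleSplit. This is an illustrative sketch rather than our exact code; note that the size fractions apply to frog IDs rather than to individual images, so the image counts come out only approximately 70/15/15:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_frog(image_paths, frog_ids, seed=42):
    """Train/val/test split with no frog ID shared across sets."""
    image_paths = np.asarray(image_paths)
    frog_ids = np.asarray(frog_ids)

    # First carve off ~30% of the data, grouped by frog ID.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=seed)
    train_idx, rest_idx = next(outer.split(image_paths, groups=frog_ids))

    # Then split that ~30% in half into validation and test sets.
    inner = GroupShuffleSplit(n_splits=1, test_size=0.50, random_state=seed)
    val_rel, test_rel = next(inner.split(image_paths[rest_idx],
                                         groups=frog_ids[rest_idx]))
    return (image_paths[train_idx],
            image_paths[rest_idx][val_rel],
            image_paths[rest_idx][test_rel])
```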

For computer vision problems in general, and landmark detection problems in particular, 1,633 is a relatively small number of images. Capturing all the image variation and the complexity of the task with such a limited dataset is a big challenge. Moreover, unlike in image classification, image resolution is crucial for correct predictions, especially since each prediction must be extrapolated back to the original image size. This restriction makes the feature space extremely large, and a model trained on only 1,633 images is unlikely to generalise well.

To overcome the scarcity of training examples, we created a new Python generator using the image augmentation library imgaug. The generator replicates all the variations seen in the original data: rotation, scaling, shear, and different lighting conditions.
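A simplified sketch of such a generator is shown below; the augmentation parameter ranges are illustrative assumptions, not the values we actually used:

```python
import random
import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage

# Hypothetical parameter ranges, chosen here to mimic the kinds of
# variation seen in the original photos.
seq = iaa.Sequential([
    iaa.Affine(rotate=(-30, 30), scale=(0.8, 1.2), shear=(-10, 10)),
    iaa.Multiply((0.7, 1.3)),        # overall brightness variation
    iaa.LinearContrast((0.8, 1.2)),  # lighting/contrast variation
])

def generator(images, labels):
    """Endlessly yield augmented (image, landmarks) training pairs."""
    while True:
        i = random.randrange(len(images))
        kps = KeypointsOnImage(
            [Keypoint(x=x, y=y) for x, y in labels[i]],
            shape=images[i].shape,
        )
        # imgaug applies the same random transform to image and keypoints,
        # so the landmark labels stay consistent with the augmented image.
        image_aug, kps_aug = seq(image=images[i], keypoints=kps)
        yield image_aug, [(kp.x, kp.y) for kp in kps_aug.keypoints]
```

Augmenting the keypoints together with the image is the crucial detail here: a rotated or scaled frog is only a useful training example if its landmark labels are transformed in exactly the same way.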

In the next post, I will explain the models we used, present the results we got, and discuss the next steps for our Pepeketua (frog) identification project.

Acknowledgments

I would like to thank the whole Wildlife.AI team for doing an amazing job, and especially Elad Carmon for his insightful comments, stimulating discussions and guidance through this project.
