Dispatch 03: Marine science and annotation

CoralNet

CoralNet helps scientists work through reef imagery at a scale that manual annotation alone cannot sustain.

Reef Surveys Become Label Backlogs

Coral surveys generate enormous piles of imagery, and that means they also generate a less glamorous problem: annotation debt. Once the cameras come back, someone still has to go point by point through reef images and decide what is coral, algae, sand, rubble, or something rarer. That work is slow, repetitive, and important enough that errors carry downstream into analysis.

CoralNet was built around that bottleneck. From the start, the goal was not just to publish a paper or a model checkpoint. It was to make computer vision methods actually available to reef researchers in a form they could use, which is why the project became a managed web platform rather than a local research prototype.

Where the Neural Net Helps

CoralNet describes itself very plainly: it deploys deep neural networks for fully and semi-automated annotation of benthic images. The machine-learning task is unusually concrete. For a given point in an image, the system crops an image patch and predicts a score over the mutually exclusive labels in that survey's label set. The output is not an abstract embedding for its own sake; it is a suggested annotation with a confidence profile.
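To make that task concrete, here is a minimal sketch of point-based patch classification, assuming a 224-pixel crop, a four-label example set, and a dummy scoring function standing in for the network (none of these are CoralNet's actual values):

```python
import numpy as np

PATCH_SIZE = 224                                   # assumed crop size
LABELS = ["coral", "algae", "sand", "rubble"]      # example label set

def crop_patch(image, row, col, size=PATCH_SIZE):
    """Crop a square patch centered on an annotation point, clamped to the image."""
    half = size // 2
    r0 = max(0, min(row - half, image.shape[0] - size))
    c0 = max(0, min(col - half, image.shape[1] - size))
    return image[r0:r0 + size, c0:c0 + size]

def predict_point(image, row, col, score_fn):
    """Return a label -> confidence mapping for one annotated point."""
    patch = crop_patch(image, row, col)
    logits = score_fn(patch)                       # one raw score per label
    exp = np.exp(logits - logits.max())            # softmax: labels are mutually exclusive
    probs = exp / exp.sum()
    return dict(zip(LABELS, probs))

# Toy usage: a blank image and a dummy scorer.
img = np.zeros((1024, 1365, 3), dtype=np.uint8)
scores = predict_point(img, 500, 700, lambda p: np.array([2.0, 0.5, 0.1, -1.0]))
best = max(scores, key=scores.get)                 # the suggested annotation
```

The output per point is exactly what the text describes: a suggested label plus a confidence profile over the whole label set.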

The current engine is more than a generic image model dropped into science. CoralNet's 1.0 release moved to an EfficientNet-B0 feature extractor and a multi-layer perceptron classifier, trained with a much larger set of curated examples from CoralNet users. On their reported test set, that update cut error rate by 22.4% relative to the previous beta engine. That is the sort of improvement that matters because it compounds across tens of thousands of points.

Why CoralNet Became Infrastructure

A useful detail in CoralNet's architecture is that it separates shared feature extraction from per-source classifiers. That matters because reef projects rarely have the thousands of examples per class that a fully bespoke supervised system would want. So the platform computes reusable features and then trains source-specific classifiers as each survey accumulates more human annotations and confirmed labels.
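A toy version of that split, with a frozen random projection standing in for the shared feature extractor and a nearest-centroid head standing in for the per-source classifier (both are simplified stand-ins, not CoralNet's components):

```python
import numpy as np

def shared_features(patches):
    """Stand-in for the shared feature extractor: a frozen random projection.
    This step is computed once and reused across every source."""
    flat = patches.reshape(len(patches), -1).astype(float)
    W = np.random.default_rng(42).normal(size=(flat.shape[1], 32))  # frozen weights
    return flat @ W

class SourceClassifier:
    """Per-source head trained on one survey's confirmed annotations
    (a nearest-centroid stand-in for an MLP)."""
    def fit(self, feats, labels):
        self.classes = sorted(set(labels))
        self.centroids = np.stack(
            [feats[[l == c for l in labels]].mean(axis=0) for c in self.classes])
        return self

    def predict(self, feats):
        d = ((feats[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=-1)
        return [self.classes[i] for i in d.argmin(axis=1)]

# One survey's patches: bright "coral" crops and dark "sand" crops.
patches = np.concatenate([np.full((4, 8, 8, 3), 200), np.full((4, 8, 8, 3), 10)])
feats = shared_features(patches)                 # reusable features
labels = ["coral"] * 4 + ["sand"] * 4            # human-confirmed labels
clf = SourceClassifier().fit(feats, labels)      # source-specific classifier
preds = clf.predict(feats)
```

The design choice to fix the expensive part and retrain only the cheap per-source head is what lets small surveys get usable classifiers from modest amounts of labeled data.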

That is why the site has lasted. It is not just a model endpoint. It is also a data repository, collaboration platform, annotation tool, and deployment layer. CoralNet even exposes an API for large classification jobs, describing workflows that can reach tens or hundreds of thousands of images. The AI is important, but the thing researchers actually live in is the managed system around it.
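As a sketch of what driving such an API at scale involves, the helpers below build a classification-job payload and chunk a large image list into batches. The payload shape (`data`, `url`, `points`, `row`/`column` fields) and the batching are illustrative assumptions, not CoralNet's documented schema:

```python
def build_payload(images):
    """Build one classification-job payload.
    `images` is a list of (url, [(row, col), ...]) pairs; field names are assumed."""
    return {"data": [
        {"url": url, "points": [{"row": r, "column": c} for r, c in pts]}
        for url, pts in images]}

def batches(items, size):
    """Split a large survey into API-sized chunks."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# A 250-image survey submitted 100 at a time becomes three jobs; the same
# pattern scales to the tens or hundreds of thousands of images the text mentions.
survey = [(f"https://example.org/reef_{i}.jpg", [(10, 20), (30, 40)])
          for i in range(250)]
jobs = [build_payload(batch) for batch in batches(survey, 100)]
```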

The Loop That Improves the Model

CoralNet gets stronger in exactly the way a serious scientific ML system should: more manual annotations, more confirmed automatic labels, more retraining, better source-specific performance. The model assists the humans, and the humans provide the training signal that makes the next model more useful. That feedback loop is the core product.
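One way to picture the semi-automated half of that loop: suggestions above a confidence threshold are confirmed automatically, the rest go to human annotators, and both confirmed streams become training data for the next retraining round. The threshold value here is illustrative, not a CoralNet default:

```python
def triage(predictions, threshold=0.8):
    """Split model suggestions into auto-confirmed labels and a human review queue.
    `predictions` maps point ids to (label, confidence)."""
    auto, review = {}, {}
    for pid, (label, conf) in predictions.items():
        (auto if conf >= threshold else review)[pid] = (label, conf)
    return auto, review

preds = {1: ("coral", 0.95), 2: ("algae", 0.55), 3: ("sand", 0.88)}
auto, review = triage(preds)
# Points 1 and 3 are confirmed automatically; point 2 is routed to an annotator.
# Both confirmed sets then feed the next retraining round.
```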

That is what makes this a better case study than a generic “AI for conservation” headline. The important thing here is not that machine learning touched reef science. It is that a reef-science workflow was rebuilt so that model assistance, expert review, and shared infrastructure all reinforce one another instead of fighting for control.

Sources

CoralNet
About CoralNet
A New Deep Learning Engine for CoralNet
CoralNet Deploy API