How AI Is Mapping Ocean Floor Biodiversity


The ocean floor is one of the least mapped environments on Earth. We’ve mapped the surface of Mars in greater detail than most of the seabed. The fundamental challenge isn’t getting cameras down there—remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) have been capturing deep-sea footage for decades. The challenge has always been what happens after the footage comes back to the surface.

A single deep-sea survey can produce hundreds of hours of video and tens of thousands of still images. Every frame potentially contains organisms that need to be identified, counted, and catalogued. Doing this manually requires expert marine taxonomists spending months or years painstakingly reviewing footage frame by frame. It’s slow, expensive, and creates a massive bottleneck between data collection and scientific insight.

That bottleneck is now being broken by AI.

The Scale of the Problem

To understand why AI matters here, you need to appreciate the scale of manual annotation. I worked on a benthic survey project in 2019 where our team collected approximately 40,000 still images from AUV surveys across the Coral Sea. Each image needed to be analysed for substrate type, coral cover percentage, and identification of visible organisms.

Our team of four specialists spent nearly eight months on the annotation. Eight months of looking at underwater photographs, identifying sponges, corals, echinoderms, and fish, and recording standardised measurements. It was thorough, accurate work, but the timeline made it nearly impossible to respond quickly to management questions or emerging threats.

This isn’t unusual. The Schmidt Ocean Institute estimates that less than 5% of deep-sea survey footage collected globally has been fully analysed. The rest sits in digital archives, containing potentially valuable biodiversity data that nobody has had time to examine.

What AI Is Actually Doing

The AI systems being deployed for ocean floor analysis use convolutional neural networks (CNNs) trained on annotated images. The training process is straightforward in concept: show the network thousands of images where humans have identified the organisms, and the network learns to recognise those same organisms in new images.

In practice, it’s more complex. Deep-sea imagery presents unique challenges—variable lighting, water turbidity, unusual angles, organisms that look different at different life stages, and species that closely resemble each other. The networks need substantial training data and careful validation to achieve useful accuracy.
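The validation side of this is mostly careful bookkeeping: score the trained network on a held-out set of expert-annotated images and break accuracy down by class, because (as the results below show) performance varies widely between organism types. A minimal sketch of that bookkeeping, with illustrative class names:

```python
from collections import defaultdict

def per_class_accuracy(predictions, labels):
    """Overall and per-class accuracy on a held-out validation set.

    predictions, labels: parallel lists of class names, one entry per image.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, truth in zip(predictions, labels):
        total[truth] += 1
        if pred == truth:
            correct[truth] += 1
    per_class = {cls: correct[cls] / total[cls] for cls in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_class

# Toy example: six validation images across three organism classes
labels = ["sponge", "sponge", "coral", "coral", "holothurian", "holothurian"]
preds  = ["sponge", "sponge", "coral", "sponge", "holothurian", "coral"]
overall, by_class = per_class_accuracy(preds, labels)
```

Reporting accuracy per class, rather than a single headline number, is what reveals that a model can be reliable on barrel sponges while struggling on half-buried sea cucumbers.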

Several research groups have made real progress. The FathomNet project, based at the Monterey Bay Aquarium Research Institute, has built a large open-source database of annotated deep-sea images specifically designed to train AI systems. Over 200,000 images tagged with organism identifications provide training data that individual research groups couldn’t compile alone.

Australian efforts have been significant too. CSIRO’s deep-sea research program has developed AI annotation tools that can classify benthic substrate types—sand, rubble, hard reef, soft sediment—with accuracy above 90%. Substrate classification might sound mundane, but it’s fundamental to understanding what organisms can live where.

Results in the Field

The practical impact is substantial. Tasks that took months now take weeks. A recent survey I was involved with ran 15,000 seabed images through an AI annotation pipeline in twelve days. The same task would have taken our team three to four months manually.

The AI system identified and counted sponges, soft corals, hard corals, and several categories of mobile invertebrates. Accuracy varied by organism type. Common, visually distinctive species like barrel sponges and fan corals were identified with over 85% accuracy. Smaller, less distinct organisms were harder—sea cucumbers partially buried in sediment, for instance, were correctly identified only about 60% of the time.

This variable accuracy means human review is still essential, but the workflow has changed. Instead of annotating from scratch, experts review and correct AI annotations. This is faster by an order of magnitude—confirming a correct identification takes seconds, while finding and identifying an organism from scratch in a complex image can take minutes.
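In workflow terms, this usually means routing on the model's confidence score: high-confidence predictions go to a quick confirm/reject queue, low-confidence ones go back for full annotation. A sketch of that triage step, with hypothetical field names and an illustrative threshold:

```python
def triage(annotations, confidence_threshold=0.8):
    """Split AI annotations into quick-confirm and full-review queues.

    annotations: list of dicts with keys 'image', 'label', 'confidence'.
    High-confidence predictions only need a yes/no confirmation;
    low-confidence ones are re-annotated from scratch by an expert.
    """
    confirm, review = [], []
    for ann in annotations:
        if ann["confidence"] >= confidence_threshold:
            confirm.append(ann)
        else:
            review.append(ann)
    return confirm, review

anns = [
    {"image": "img_001.jpg", "label": "barrel sponge", "confidence": 0.93},
    {"image": "img_002.jpg", "label": "sea cucumber", "confidence": 0.55},
    {"image": "img_003.jpg", "label": "fan coral", "confidence": 0.88},
]
confirm, review = triage(anns)
```

The threshold itself is a tuning decision: set it too low and errors slip through as quick confirmations, too high and the review queue swallows the time savings.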

Specialist AI teams and other organisations working in marine technology are developing increasingly sophisticated pipelines that integrate AI classification with expert review workflows. The goal isn’t to replace taxonomists but to multiply their capacity—letting each expert process ten times the data they could handle manually.

Beyond Simple Identification

The most interesting applications go beyond “what species is this?” to answering ecological questions that require pattern analysis across large datasets.

Population density mapping is one. By processing thousands of images along survey transects, AI systems can generate high-resolution maps of organism distribution. Where do sponge gardens transition to coral communities? How does species composition change with depth? These gradient patterns require analysing so many images that manual annotation often can’t provide the resolution needed.
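Once per-image counts exist, the density calculation is simple: divide organisms counted by the seabed area each image covers, averaged over bins of consecutive images along the transect. A minimal sketch, assuming a constant image footprint:

```python
def transect_density(counts_per_image, image_area_m2, bin_size=5):
    """Aggregate per-image organism counts into densities (per m^2)
    along a transect, averaged over bins of consecutive images.

    counts_per_image: ordered list of counts, one per image.
    image_area_m2: seabed footprint of a single image (assumed constant).
    """
    densities = []
    for start in range(0, len(counts_per_image), bin_size):
        bin_counts = counts_per_image[start:start + bin_size]
        area = len(bin_counts) * image_area_m2
        densities.append(sum(bin_counts) / area)
    return densities

# Ten images at 2 m^2 each: sponge counts fall off along the transect
counts = [4, 6, 5, 3, 2, 1, 0, 1, 0, 2]
density = transect_density(counts, image_area_m2=2.0, bin_size=5)
```

The bin size controls the resolution of the resulting map; AI annotation makes small bins feasible because it can supply counts for every image rather than a manually annotated subsample.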

Change detection over time is another. If you survey the same area in 2020 and 2025, can AI systems identify what’s changed? Which coral colonies have grown? Which have died? Where are new recruits establishing? This kind of temporal analysis is incredibly labour-intensive manually but relatively straightforward for well-trained AI systems if the imagery is consistent.
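One simple way to frame the comparison, assuming both surveys are georeferenced to a common grid, is nearest-position matching: colonies that pair up within a tolerance are the same individual, unmatched colonies from the earlier survey have disappeared, and unmatched colonies from the later survey are new recruits. A sketch under those assumptions:

```python
import math

def match_colonies(survey_a, survey_b, max_dist=0.5):
    """Match coral colonies between two surveys by nearest position.

    survey_a, survey_b: dicts mapping colony id -> (x, y) in metres.
    Positions within max_dist metres are treated as the same colony.
    Returns (matched pairs, disappeared ids, new ids).
    """
    matched, used_b = [], set()
    for id_a, (xa, ya) in survey_a.items():
        best, best_d = None, max_dist
        for id_b, (xb, yb) in survey_b.items():
            if id_b in used_b:
                continue
            d = math.hypot(xa - xb, ya - yb)
            if d <= best_d:
                best, best_d = id_b, d
        if best is not None:
            matched.append((id_a, best))
            used_b.add(best)
    disappeared = [i for i in survey_a if i not in {a for a, _ in matched}]
    new = [i for i in survey_b if i not in used_b]
    return matched, disappeared, new

survey_2020 = {"c1": (0.0, 0.0), "c2": (3.0, 1.0)}
survey_2025 = {"k1": (0.1, 0.1), "k2": (8.0, 8.0)}
matched, died, recruits = match_colonies(survey_2020, survey_2025)
```

This is the easy part; the hard part, as the text notes, is consistent imagery, because matching by position only works if the two surveys can be registered to the same coordinates.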

Health assessment—identifying bleaching, disease, or damage in coral communities—is an active area of development. AI systems can potentially flag stressed organisms across entire survey regions, directing human attention to areas that need it most. Early detection of disease outbreaks or bleaching events could enable faster management responses.
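The flagging logic on top of such a classifier can be very simple: aggregate per-image stress scores by site and surface the sites that exceed a threshold. A sketch with hypothetical site names and an illustrative threshold:

```python
def flag_bleaching(site_scores, threshold=0.3, min_images=3):
    """Flag survey sites whose mean per-image bleaching score exceeds
    a threshold, so expert attention goes there first.

    site_scores: dict mapping site name -> list of per-image bleaching
    fractions (0..1) produced by a classifier.
    min_images guards against flagging a site on too little evidence.
    """
    flagged = []
    for site, scores in site_scores.items():
        if len(scores) >= min_images and sum(scores) / len(scores) > threshold:
            flagged.append(site)
    return flagged

scores = {
    "north_reef": [0.05, 0.10, 0.08, 0.02],
    "south_reef": [0.45, 0.50, 0.38],
}
flagged = flag_bleaching(scores)
```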

The Limitations

I’d be doing a disservice to the field if I didn’t discuss what AI can’t do yet. Rare species are problematic—by definition, there aren’t enough training images. Novel species (previously undescribed) can’t be identified by systems trained only on known species. Cryptic organisms—those hidden in crevices, partially buried, or camouflaged—are routinely missed.

Taxonomic resolution varies. AI might identify “branching coral” but not distinguish between closely related Acropora species that even human experts disagree on without genetic analysis. For some research questions, genus-level identification is sufficient. For others, species-level precision is essential and AI isn’t there yet.

The “black box” problem also applies. When a neural network classifies an organism, it doesn’t explain its reasoning in the way a taxonomist would. If the system makes an error, it’s not always obvious why, which makes targeted improvement difficult.

Where This Is Going

The trajectory is clear: AI-assisted benthic analysis will become standard practice within five years. Not replacing human taxonomists—the expertise to train, validate, and interpret these systems will remain essential—but fundamentally changing the ratio of data collected to data analysed.

The bottleneck will shift from “we can’t analyse the footage fast enough” to “we need better cameras, more survey time, and wider coverage.” That’s a much better problem to have.

For ocean conservation, this means better data faster. Better monitoring of marine protected areas. Faster detection of ecosystem changes. More comprehensive baseline surveys. The ocean floor has secrets we haven’t found yet, not because the technology to look doesn’t exist, but because we haven’t been able to process what we’ve already seen. AI is fixing that.

The ocean floor is vast and our resources for studying it are limited. Anything that multiplies the effectiveness of those resources is worth pursuing. AI isn’t a magic solution—it’s a powerful tool that requires skilled humans to deploy, validate, and interpret. But it’s the tool this field has been waiting for, and the early results are genuinely exciting.