Accessible AI Enhances Monitoring of Coral Seeding Devices in Reef Restoration


The reefs of our planet, those living rainforests beneath the waves, are shrinking shadows of their former selves. As climate and human pressures grow, scientists search not only for ways to revive these delicate undersea worlds, but also for tools to reliably measure their recovery.

TL;DR

  • AI can rapidly spot tiny coral seeding devices in large underwater images with nearly 99% accuracy, slashing human workload.
  • This is possible with free, easy-to-use software—making advanced monitoring accessible for real-world reef restoration.

Imagine you’re a gardener in an endless field, tasked with planting and checking thousands of seedlings scattered over acres. These seedlings are actually nurseries for new life—small devices seeded with coral babies, planted onto damaged reefs in hopes of bringing the ecosystem back. But checking each one by hand, especially across hard-to-reach underwater landscapes, is painstaking and slow work—sometimes so daunting the field is left unchecked.

Enter the promise of artificial intelligence. The researchers in this study asked: could a smart, accessible, and free AI tool help us find and track these coral seeding devices (CSDs), so we could know—quickly and reliably—how restoration efforts are faring?

Their experiment unfolded across two very different regions: Palau and Australia's Great Barrier Reef. In each place, teams photographed broad swaths of reef with GoPro cameras, then stitched the images into detailed orthoimages: giant, flat maps where every coral, rock, and device could in principle be located. Using the open-source annotation software TagLab, they gave the AI just 30 example annotations of CSDs (a tiny fraction of the thousands present), spending only about ten minutes creating them. The underlying model, a deep-learning image-segmentation network called DeepLabV3+, learned to spot the devices' distinctive shapes and colors in these complex scenes.

The results? Astonishing. On a Palau reef with nearly 1,000 CSDs, the trained AI found over 98% of the devices as reliably as a human expert, but with a 95% reduction in repetitive manual labor. Remarkably, giving the AI more annotated examples actually made it less reliable—small, sharply focused training sets yielded better results. When they tested the system on seven different sites along the Barrier Reef, the “trained in ten minutes” AI still achieved over 99% recall (catching nearly every CSD) and more than 93% precision.
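Recall and precision summarize two different kinds of detection quality: recall is the fraction of real devices the AI found, while precision is the fraction of its detections that were actually devices. A minimal sketch of how these are computed from raw counts (the counts below are hypothetical, chosen only to illustrate numbers in the same ballpark as the paper's, not taken from it):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from detection counts.

    tp: true positives  (devices correctly detected)
    fp: false positives (detections that were not devices)
    fn: false negatives (devices the detector missed)
    """
    precision = tp / (tp + fp)  # how trustworthy each detection is
    recall = tp / (tp + fn)     # how complete the survey is
    return precision, recall

# Hypothetical survey: 990 devices found, 70 false alarms, 8 devices missed.
p, r = precision_recall(tp=990, fp=70, fn=8)
print(f"precision={p:.1%} recall={r:.1%}")  # → precision=93.4% recall=99.2%
```

High recall matters most here: a missed device is a restoration outpost that goes unmonitored, whereas a false alarm only costs a quick human double-check.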

Why does this matter for us? Imagine any big task that demands spotting needles in haystacks: counting tree seedlings from drone images; tracking invasive species; or monitoring construction on vast landscapes. By democratizing AI for restoration practitioners—no coding needed, just a user-friendly interface—scientists and citizens alike can gather the robust data needed to steer restoration, secure funding, and retain community trust. It’s not just for coral: the method could ripple outward, enabling smarter, scalable stewardship across many environments.

Of course, there are frontiers yet to cross. These promising AI tools must be tested in still more varied settings (murky waters, new device shapes), and their technology made even more accessible. They do not replace the irreplaceable insight of a diver or restoration ecologist; rather, they amplify it, freeing human hands and minds for the creative and complex work that still can’t be done by algorithms alone.

In the end, this study offers a glimpse of a future where clever, friendly AI stands beside us—not above us—helping to rebuild and track the health of the world’s reefs, one tiny coral outpost at a time.

Sources

License: CC BY

Citation: Stratford, J. E., Toor, M., Forster, R., Larkey, E., et al. (2025) Accessible AI Enhances Monitoring of Coral Seeding Devices in Reef Restoration. bioRxiv. doi:10.1101/2025.11.27.688254

This post summarizes scientific research. All interpretations are my own. Please refer to the original paper for full methods and context.