
Computer vision-based real-time underwater shrimp monitoring and weight estimation for sustainable aquaculture

  • Bing Chian Wu
  • Chien Kang Huang
  • Pei Shan Teng
  • Jia Zhen Yu
  • Tien Hsiung Weng
  • Shih Shun Lin
  • Han Ching Wang
  • Chu Fang Lo
  • Nai Yueh Tien

Research output: Contribution to journal › Article › peer-review

Abstract

Accurate real-time estimation of shrimp size and weight is critical for effective aquaculture management, yet traditional manual methods are labor-intensive and inefficient. Furthermore, continuous underwater video monitoring using deep-learning algorithms necessitates high-performance GPU resources to maintain real-time inference throughput, which introduces substantial challenges regarding energy consumption and storage limits. To address these bottlenecks, this study presents a fully automated shrimp monitoring framework that integrates computer vision, machine learning, and adaptive computation control to achieve real-time length and width measurement, weight prediction, and resource-aware operation. The system employs a YOLOv5-OBB (Oriented Bounding Box) detector for oriented shrimp localization and a YOLOv8-Seg (Segmentation) model for abdominal segmentation, supported by distortion correction and pixel-to-length calibration using an underwater grid plate. A Scale-Invariant Feature Transform (SIFT)-enhanced Re-identification (Re-ID) mechanism maintains identity consistency across frames, while a logistic regression-based water-clarity classifier dynamically suspends detection during turbid conditions to reduce unnecessary GPU usage and data storage. Experimental results show that image-based length and width estimation achieved Root Mean Squared Errors (RMSEs) of 4.03 mm and 0.45 mm, with Mean Absolute Percentage Errors (MAPEs) of 3.16% and 3.74%, respectively. For weight prediction, a regression model using length alone reached RMSE = 4.62 g, MAPE = 10.77%, and Coefficient of Determination (R2) = 0.970, while a multi-feature model using both length and width improved performance to RMSE = 4.29 g, MAPE = 10.58%, and R2 = 0.973. Compared to manual measurements, image-based predictions yielded slightly higher errors (≤ 2.2% MAPE difference) but remained within acceptable tolerance (< 5%). 
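The abstract reports RMSE, MAPE, and R² for weight regression from length (and length plus width), but does not give the model's functional form. A common choice for length-weight relationships in aquaculture is a power law fitted in log space; the sketch below illustrates that approach on synthetic data (all constants and noise levels are made up for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(0)

# Synthetic shrimp measurements (illustrative only, not the paper's data).
length_mm = rng.uniform(60, 140, 200)
width_mm = length_mm * 0.12 + rng.normal(0, 0.5, 200)

# Power-law ground-truth weight with multiplicative noise; a and b are invented.
weight_g = 1.2e-5 * length_mm ** 2.9 * np.exp(rng.normal(0, 0.05, 200))

# Fit log(weight) ~ log(length) + log(width), i.e. a multi-feature power law.
X = np.column_stack([np.log(length_mm), np.log(width_mm)])
y = np.log(weight_g)
model = LinearRegression().fit(X, y)

# Evaluate back in the original (gram) scale, as the paper's metrics are.
pred_g = np.exp(model.predict(X))
rmse = mean_squared_error(weight_g, pred_g) ** 0.5
mape = mean_absolute_percentage_error(weight_g, pred_g) * 100
r2 = r2_score(weight_g, pred_g)
```

Dropping the width column reproduces the length-only variant the abstract compares against; on real data the multi-feature fit would be expected to reduce error slightly, as reported.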
The water-clarity module reached 99.24% accuracy; by dynamically filtering non-actionable frames, the system achieved a 69% reduction in video storage requirements and maintained a remarkably low average GPU utilization of 14.84%. By integrating high-accuracy visual sensing with this adaptive, energy-efficient processing, the proposed system provides a scalable solution for long-term, real-time shrimp monitoring. These results highlight the system’s relevance to resource-aware and performance-efficient computing within the broader context of real-time environmental and aquaculture applications.
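The abstract names logistic regression as the water-clarity classifier but does not specify its input features. The sketch below assumes two hypothetical per-frame features (e.g. contrast and edge density) on synthetic data, purely to show how such a classifier can gate the GPU detector and skip non-actionable turbid frames:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-frame features for clear vs. turbid water (synthetic;
# the paper's actual feature set is not given in the abstract).
clear_frames = rng.normal([0.6, 0.5], 0.1, (300, 2))
turbid_frames = rng.normal([0.2, 0.15], 0.1, (300, 2))

X = np.vstack([clear_frames, turbid_frames])
y = np.array([1] * 300 + [0] * 300)  # 1 = clear, 0 = turbid
clf = LogisticRegression().fit(X, y)

def should_run_detector(frame_features):
    """Gate the detection pipeline: run the GPU detector only on clear frames."""
    return bool(clf.predict(np.atleast_2d(frame_features))[0])
```

Suspending detection and storage on frames the gate rejects is what yields the storage and GPU-utilization savings the abstract reports; the classifier itself is cheap enough to run on every frame.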

Original language: English
Article number: 173
Journal: Journal of Supercomputing
Volume: 82
Issue number: 3
Publication status: Published - 2026 Feb

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 7 - Affordable and Clean Energy
  2. SDG 12 - Responsible Consumption and Production
  3. SDG 14 - Life Below Water

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Software
  • Information Systems
  • Hardware and Architecture
