Segmentation

faCRSA: an automated pipeline for high-throughput analysis of crop root system architecture

Abstract

Optimizing root system architecture (RSA) is essential for plants because of its critical role in acquiring water and nutrients from the soil. However, the subterranean nature of roots complicates the measurement of RSA traits. Recently developed rhizobox methods allow for the rapid acquisition of root images. Nevertheless, effective and precise approaches for extracting RSA features from these images remain underdeveloped. Deep learning (DL) technology can enhance image segmentation and facilitate RSA trait extraction. However, comprehensive pipelines that integrate DL technologies into image-based root phenotyping techniques are still scarce, hampering their implementation. To address this challenge, we present a reproducible pipeline (faCRSA) for automated RSA trait analysis, consisting of three modules: (1) an RSA trait extraction module, which segments soil-root images and calculates RSA traits; a lightweight convolutional neural network (CNN) named RootSeg is proposed for efficient and accurate segmentation; (2) a data storage module, which stores image and text data from the other modules; and (3) a web application module, which allows researchers to analyze data online in a user-friendly manner. The coefficients of determination (R²) between total root length, root surface area, and root volume calculated by faCRSA and the corresponding manual measurements were 0.96**, 0.97**, and 0.93**, respectively, with root mean square errors (RMSE) of 8.13 cm, 1.68 cm², and 0.05 cm³, processed at a rate of 9.74 seconds per image, indicating satisfactory accuracy. faCRSA also performed well in dynamically monitoring root system changes under various stress conditions, such as drought or waterlogging. The detailed code and a deployable package of faCRSA are provided, giving researchers the potential to replace manual and semi-automated methods.
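The agreement between pipeline outputs and manual measurements is summarized with R² and RMSE. A minimal NumPy sketch of how such agreement statistics are computed (the measurement values below are hypothetical illustrations, not the paper's data):

```python
import numpy as np

def r2_rmse(manual, predicted):
    """Coefficient of determination (R^2) and root mean square error
    between manual reference measurements and automated predictions."""
    manual = np.asarray(manual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = manual - predicted
    ss_res = np.sum(residual ** 2)                      # residual sum of squares
    ss_tot = np.sum((manual - manual.mean()) ** 2)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residual ** 2))
    return r2, rmse

# Hypothetical total-root-length measurements (cm), for illustration only
manual = [120.0, 95.5, 143.2, 88.7, 110.4]
auto = [118.2, 97.1, 140.8, 90.3, 112.0]
r2, rmse = r2_rmse(manual, auto)
print(f"R2 = {r2:.3f}, RMSE = {rmse:.2f} cm")
```

The same computation applies to each trait (root length, surface area, volume) independently, with units carried through to the RMSE.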

OSNet: an oriented instance segmentation network of breeding plot extraction from UAV RGB imagery

Abstract

Drones have enabled large-scale breeding and cultivation experiments. However, extracting individual breeding plots from aerial images is a key prerequisite and an urgent need for extracting variety-level traits. The main difficulties in plot extraction include irregular rotation angles of the plots, ambiguous gaps both within and between plots, and variable color contrasts between the vegetation and the background. To address these challenges, a novel oriented instance segmentation network (OSNet) is proposed by leveraging a global context transformer (GCT) and an oriented region proposal network (RPN). The performance was assessed using a well-labeled dataset of 960 plots covering 160 wheat varieties across two years. Results show that OSNet achieved an AP@0.5 of 0.917, F1-score of 0.959, Accuracy of 0.966, IoU of 0.912, Recall of 0.934, and Plot-a of 0.999. OSNet outperformed five state-of-the-art (SOTA) networks with average improvements of 3.08%, 1.42%, 1.19%, 1.70%, 1.79%, and 0.04% in AP@0.5, F1-score, Accuracy, IoU, Recall, and Plot-a, respectively. Sensitivity analysis proved that OSNet consistently achieved stable segmentation accuracy across different rotation angles and growth stages. Ablation analysis showed that OSNet benefits from the oriented proposals and global information. Furthermore, OSNet can be transferred to new datasets spanning different years, crops, and data dimensions, supporting typical phenotyping tasks such as 2D wheat spike detection (r = 0.91) and 3D canopy height measurement (r = 0.89). This methodology will be a fundamental tool for processing drone imagery, accelerating phenotypic trait extraction across varieties and thereby expediting the breeding process.
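The pixel-wise segmentation metrics reported above (IoU, F1, Accuracy, Recall) are standard confusion-matrix quantities. A minimal NumPy sketch on toy binary masks (not the wheat-plot dataset) illustrates how they are derived:

```python
import numpy as np

def mask_metrics(pred, gt):
    """Pixel-wise IoU, F1, Accuracy, and Recall for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()    # foreground correctly predicted
    fp = np.logical_and(pred, ~gt).sum()   # background predicted as foreground
    fn = np.logical_and(~pred, gt).sum()   # foreground missed
    tn = np.logical_and(~pred, ~gt).sum()  # background correctly predicted
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"IoU": iou, "F1": f1, "Accuracy": accuracy, "Recall": recall}

# Toy 4x4 masks: the prediction covers 3 of the 4 ground-truth foreground pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True        # 4 foreground pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1] = True
pred[1, 2] = True          # 3 foreground pixels, all inside gt
m = mask_metrics(pred, gt)
print(m)  # IoU = 0.75, Recall = 0.75, Accuracy = 0.9375
```

AP@0.5, by contrast, is an instance-level metric: a predicted plot counts as a true positive when its mask IoU with a ground-truth plot exceeds 0.5, and average precision is computed over the ranked detections.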