Lettuce Phenotype Estimation Using Integrated RGB-Depth Image Synergy
Abstract
Accurate, automated measurement of plant phenotypic traits is crucial for applications such as breeding and cultivation. To meet the need for non-destructive, precise detection of phenotypic traits in factory-grown lettuce, RGB images and depth images collected by a depth camera were integrated: an improved DeepLabv3+ model segmented the images, and a dual-modal regression network estimated the phenotypic traits of the lettuce. In the improved segmentation model, the Xception backbone was replaced with MobileViTv2 to enhance global perception and overall performance. In the regression network, a convolutional multi-modal feature fusion module (CMMCM) was proposed to fuse the two modalities for trait estimation. Experimental results on a public dataset containing four lettuce varieties showed that the method estimated five phenotypic traits (fresh weight, dry weight, canopy diameter, leaf area, and plant height) with coefficients of determination of 0.9222, 0.9314, 0.8620, 0.9359, and 0.8875, respectively. Compared with the ResNet-10 (Dual) baseline for RGB- and depth-based phenotypic parameter estimation, which lacks the CMMCM and SE modules, the improved model raised the coefficients of determination by 2.54%, 2.54%, 1.48%, 2.99%, and 4.88%, respectively, with a detection time of 44.8 ms per image. These results demonstrate that dual-modal image fusion achieves accurate, real-time, non-destructive detection of lettuce phenotypic traits.
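The squeeze-and-excitation (SE) style channel reweighting mentioned above, applied to concatenated RGB and depth feature maps, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function name `se_fuse` and the bottleneck weights `w1` and `w2` are hypothetical, and the actual CMMCM likely uses learned convolutions rather than this plain MLP gate.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_fuse(rgb_feat, depth_feat, w1, w2):
    """Fuse RGB and depth feature maps (each of shape (C, H, W)) with a
    squeeze-and-excitation style channel gate over their concatenation."""
    x = np.concatenate([rgb_feat, depth_feat], axis=0)    # (2C, H, W)
    squeeze = x.mean(axis=(1, 2))                         # global average pool -> (2C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)                # bottleneck + ReLU
    excite = sigmoid(w2 @ hidden)                         # per-channel gates in (0, 1)
    return x * excite[:, None, None]                      # reweight each channel

# Illustrative shapes: C channels per modality, bottleneck of width 2
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
rgb = rng.standard_normal((C, H, W))
dep = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2, 2 * C))
w2 = rng.standard_normal((2 * C, 2))
fused = se_fuse(rgb, dep, w1, w2)   # shape (2C, H, W)
```

Because each gate lies in (0, 1), the module can only attenuate channels, letting the regression head emphasize whichever modality is more informative for a given trait.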
Keywords: lettuce, phenotypic estimation, modality fusion, segmentation model, RGB images, depth images