Construction and Screening of Intelligent Grading Model of Cigar Leaves Based on Deep Learning

DU Chaofan, WANG Ruiqi, WU Tianyi, SHEN Cuiyu, SHEN Fulong, LAI Rijun, LIN Xiaolu, MA Xudong, XIE Xiaofang

Chin Agric Sci Bull ›› 2025, Vol. 41 ›› Issue (34): 157-164. DOI: 10.11924/j.issn.1000-6850.casb2025-0426
Abstract

This study addresses a key challenge in the cigar leaf grading process in China: the lack of mature intelligent grading methods has led to a reliance on manual operations, resulting in low efficiency and inconsistent standards, which threatens the quality of cigar leaf products. The 'FX-01' variety, the main cultivar in Longyan, Fujian, was used as the research material, and a dataset of 8637 images covering nine commonly used acquisition grades was collected. Five state-of-the-art deep learning models (Swin Transformer, ViT, ResNet, BEiT, and ConvNeXt) were employed to develop intelligent grading models for upper, middle, and lower leaves, respectively. The results showed that all models met the requirements for everyday response speed, and the ConvNeXt and ViT models achieved the best performance on the middle-leaf test set, with an average accuracy of 93.3%. These findings demonstrate the feasibility of deep learning-based image technology for the intelligent grading of cigar wrapper leaves and provide technical support and theoretical guidance for further system improvement and mobile deployment, laying a foundation for the automation and standardization of cigar production.
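The "average accuracy" reported above is presumably the macro average of per-grade accuracies on the test set, which weights each grade equally regardless of how many samples it has. A minimal sketch of that computation, using hypothetical middle-leaf grade labels and model predictions (the grade names `M1`-`M3` are illustrative, not from the paper), might look like:

```python
from collections import defaultdict

def macro_average_accuracy(y_true, y_pred):
    """Mean of per-class accuracies, each grade weighted equally.

    A common choice of 'average accuracy' when grade classes are
    imbalanced, since rare grades count as much as frequent ones.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for true_label, pred_label in zip(y_true, y_pred):
        total[true_label] += 1
        if true_label == pred_label:
            correct[true_label] += 1
    per_class = {c: correct[c] / total[c] for c in total}
    return sum(per_class.values()) / len(per_class)

# Hypothetical grades and predictions for six middle-leaf test images
y_true = ["M1", "M1", "M2", "M2", "M3", "M3"]
y_pred = ["M1", "M1", "M2", "M3", "M3", "M3"]
print(round(macro_average_accuracy(y_true, y_pred), 3))  # → 0.833
```

Here grade M2 is only half correct while M1 and M3 are fully correct, so the macro average is (1.0 + 0.5 + 1.0) / 3 ≈ 0.833; overall (micro) accuracy would give the same value in this balanced toy example but can differ when class sizes are unequal.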

Key words

cigar / grading / image recognition / deep learning / model construction / intelligent grading model / ConvNeXt / Vision Transformer (ViT)

Cite this article

DU Chaofan, WANG Ruiqi, WU Tianyi, et al. Construction and Screening of Intelligent Grading Model of Cigar Leaves Based on Deep Learning[J]. Chinese Agricultural Science Bulletin, 2025, 41(34): 157-164. https://doi.org/10.11924/j.issn.1000-6850.casb2025-0426
