Special Issue



    Research on the optimized pest image instance segmentation method based on the Swin Transformer model
    GAO Jiajun, ZHANG Xu, GUO Ying, LIU Yukun, GUO Anqi, SHI Mengmeng, WANG Peng, YUAN Ying
    JOURNAL OF NANJING FORESTRY UNIVERSITY    2023, 47 (3): 1-10.   DOI: 10.12302/j.issn.1000-2006.202206048

    【Objective】To achieve accurate pest monitoring, an optimized instance segmentation method based on the Swin Transformer is proposed to effectively solve the difficulty of recognizing and segmenting multiple larval individuals in images under complex real scenarios.【Method】The Swin Transformer model was selected to improve the backbone network of the Mask R-CNN instance segmentation model and to identify and segment Heortia vitessoides larvae, which damage Aquilaria sinensis. The input and output dimensions of all layers of the Swin Transformer and ResNet models with different structural parameters were adjusted, and both models were set as backbone networks of Mask R-CNN for comparative experiments. The identification and segmentation performances for H. vitessoides larvae obtained with the different backbone networks were quantitatively and qualitatively analyzed to determine the best model structure.【Result】(1) Using this method, the F1 score and AP were 89.7% and 88.0%, respectively, for pest identification framing, and 84.3% and 82.2%, respectively, for pest identification and segmentation, increases of 8.75% and 8.40%, respectively, over those of the Mask R-CNN model for target framing and segmentation. (2) For small-target pest identification and segmentation tasks, the F1 score and AP were 88.4% and 86.3%, respectively, for pest identification framing, and 84.0% and 81.7%, respectively, for pest identification and segmentation, increases of 9.30% and 9.45%, respectively, over those of the Mask R-CNN model for target framing and segmentation.【Conclusion】In segmentation tasks under complex real scenarios, the recognition and segmentation effects depend to a large extent on the model's ability to extract image features. By integrating the Swin Transformer, the Mask R-CNN instance segmentation model gains a stronger feature extraction ability in the backbone network and a better overall recognition and segmentation effect. It could provide technical support for the identification and monitoring of pests, and solutions for the protection of agriculture, forestry, animal husbandry and other industrial resources.
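    The window-based local self-attention that distinguishes the Swin Transformer backbone from ResNet can be illustrated by its window-partition step, which splits a feature map into non-overlapping windows before attention is computed within each. A minimal NumPy sketch, assuming the standard Swin-T configuration (7×7 windows, 96-channel stage-1 features on a 56×56 map) rather than the exact dimensions tuned in the paper:

```python
import numpy as np

def window_partition(x, window_size):
    """Split a feature map (H, W, C) into non-overlapping windows of
    shape (window_size, window_size, C) -- the unit over which the
    Swin Transformer computes local self-attention."""
    H, W, C = x.shape
    assert H % window_size == 0 and W % window_size == 0
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    # group the two block axes together -> (num_windows, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size,
                                              window_size, C)

feat = np.zeros((56, 56, 96))     # assumed stage-1 feature map, 96 channels
wins = window_partition(feat, 7)  # assumed Swin-T window size of 7
print(wins.shape)                 # (64, 7, 7, 96)
```

    Because attention is restricted to fixed-size windows, its cost grows linearly with image size instead of quadratically, which is what makes the backbone practical for dense segmentation.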

    Plant disease and pest detection based on visual attention enhancement
    YANG Kun, FAN Xijian, BO Weihao, LIU Jie, WANG Junling
    JOURNAL OF NANJING FORESTRY UNIVERSITY    2023, 47 (3): 11-18.   DOI: 10.12302/j.issn.1000-2006.202210022

    【Objective】 Accurate detection is the key to precise control of plant diseases and pests, and building an accurate and efficient detection model provides an important basis for their early diagnosis and warning.【Method】In view of the weak generalization ability of existing plant disease and pest detection models and their high missed-detection rate for small targets, a plant disease and pest detection model based on visual attention enhancement, YOLOv5-VE (vision enhancement), was proposed. The Mosaic-9 data augmentation method was used to facilitate the detection of small targets in the experimental samples; a feature enhancement module based on the convolutional block attention module (CBAM) visual attention mechanism was designed; and the DIoU bounding box location loss function was introduced to determine the location loss of overlapping and occluded targets.【Result】The recognition and average detection accuracies of the YOLOv5-VE model on the experimental dataset reached 65.87% and 73.49%, respectively, which were 1.07% and 8.25% higher than those of the original model. The detection speed on a 1080 Ti GPU reached 35 frames per second. 【Conclusion】This method can quickly and effectively detect and identify a variety of diseases and pests in field scenes with complex backgrounds, improves detection robustness and the model's feature extraction ability for pests and diseases, reduces the interference of complex field scenes, and shows good application potential. It can be widely used for the large-scale detection of plant diseases and pests.
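    The DIoU bounding-box loss mentioned above augments the usual 1 − IoU term with a normalized center-distance penalty, which keeps the gradient informative for overlapping and occluded boxes even when the IoU alone saturates or vanishes. A minimal sketch of the standard DIoU formulation (not the authors' exact implementation):

```python
def diou_loss(box_a, box_b):
    """Distance-IoU loss for axis-aligned boxes (x1, y1, x2, y2):
    1 - IoU + (center distance)^2 / (enclosing-box diagonal)^2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared distance between the two box centers
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
       + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # squared diagonal of the smallest box enclosing both
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch
    return 1.0 - iou + (d2 / c2 if c2 > 0 else 0.0)

print(diou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 for identical boxes
```

    For disjoint boxes the IoU term is constant at 1, but the distance term still decreases as the predicted box moves toward the target, which is why DIoU localizes occluded targets better than plain IoU loss.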

    Research on recognition of Camellia oleifera leaf varieties based on deep learning
    YIN Xianming, JI Yu, ZHANG Riqing, MO Dengkui, PENG Shaofeng, WEI Wei
    JOURNAL OF NANJING FORESTRY UNIVERSITY    2023, 47 (3): 29-36.   DOI: 10.12302/j.issn.1000-2006.202112037

    【Objective】Deep learning methods were used to study leaf-based variety recognition of Camellia oleifera, developing an image recognition technique for C. oleifera strains to provide a scientific basis for variety identification.【Method】Leaves of eleven C. oleifera varieties grown under natural lighting conditions and free from pests and diseases were collected. Images of the front and back of the leaves against a white cardboard background were captured using a smartphone. Invalid images were removed by usability screening, and a dataset of C. oleifera leaf varieties with 2 791 images was constructed. Deep learning networks (GoogLeNet and ResNet) were used to identify the leaf images of the 11 C. oleifera varieties.【Result】Both the GoogLeNet and ResNet networks can meet the requirements of leaf-based C. oleifera variety recognition, with overall F1 scores of 94.0% and 80.7%, respectively. Of the two, the GoogLeNet network was more effective, with average accuracy, recall, Macro F1 and Micro F1 values of 94.1%, 94.0%, 94.0% and 96.9%, respectively, and its recognition recall for varieties No. 1 and No. 8 reached 100%.【Conclusion】Deep learning networks (GoogLeNet and ResNet) can achieve leaf-based C. oleifera variety recognition, providing a reference for rapid leaf-based identification of C. oleifera varieties.
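    The Macro F1 and Micro F1 values reported above average per-class performance in different ways: macro-F1 weights every variety equally regardless of how many leaf images it has, while micro-F1 pools true/false positives over all predictions (and, for single-label classification, equals accuracy). A small sketch of how the two are computed, illustrative rather than the paper's evaluation code:

```python
def macro_micro_f1(y_true, y_pred):
    """Macro-F1: mean of per-class F1 scores (each class counts equally).
    Micro-F1: F1 over TP/FP/FN pooled across all classes."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s, tp_all, fp_all, fn_all = [], 0, 0, 0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
    macro = sum(f1s) / len(f1s)
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)
    return macro, micro

# toy example with three classes; one sample of class 0 is misassigned
macro, micro = macro_micro_f1([0, 0, 1, 1, 2], [0, 1, 1, 1, 2])
```

    The gap between the two (96.9% micro vs 94.0% macro for GoogLeNet) indicates the model performs slightly worse on the less well-represented varieties.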

    Research on remote sensing change monitoring of urban land types based on BOVW and SVM
    HUANG Jingshu, GAO Xindan, JING Weipeng
    JOURNAL OF NANJING FORESTRY UNIVERSITY    2023, 47 (3): 37-44.   DOI: 10.12302/j.issn.1000-2006.202110007

    【Objective】 By studying changes in urban land types, we can determine the impact of urban evolution on the environment and climate, urban development and government decision-making. 【Method】 Using the NWPU-RESISC45 standard dataset with a resolution of 15-30 m alongside Landsat 8 remote sensing images of the Harbin urban area as experimental data, we created a remote sensing image dataset covering four land types: urban buildings and roads, water bodies, vegetation and bare land. Texture information was added to the experimental data to extract SIFT (scale-invariant feature transform) feature points, and a visual dictionary containing a large amount of semantic information was obtained using the K-means clustering algorithm to construct a bag of visual words (BOVW). The features extracted using the BOVW were then combined with a support vector machine (SVM) to classify the dataset. Finally, using Landsat 8 images of the same seasons in 2013 and 2019, taking the Songbei District of Harbin City as an example, the location and area change information for each land type was calculated. 【Result】 The classification results based on BOVW and SVM were compared with those of five single classification models and three "feature extraction + classifier" models. When using a visual dictionary of 550 words, the classification and change-monitoring accuracies of the model were 79.40% and 79.29%, respectively. The monitoring results, combined with specific data for Harbin City, also showed that between 2013 and 2019 the coverage areas of the urban buildings and roads type and the vegetation type in the Songbei District decreased significantly, whilst the coverage areas of the water bodies and bare land types increased. This change was in line with the five-year plans for environmental protection successively launched by the Harbin municipal government in recent years, alongside the relevant policy requirements for the reasonable control of the urban scale in the master plan. 【Conclusion】 For Landsat remote sensing images with a long time span and a low resolution, the change-monitoring models built using BOVW and SVM were effective in monitoring land-type changes. To a certain extent, this could improve the accuracy of classification and change monitoring, whilst providing a reference for land-type change monitoring.
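    Once the 550-word visual dictionary has been built by K-means, each image is reduced to a fixed-length histogram of its nearest visual words, and it is this histogram that the SVM classifies. A minimal NumPy sketch of that assignment step (the 550-word dictionary size follows the abstract; the 128-dimensional SIFT descriptor length and the random data are illustrative):

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Assign each local descriptor (n, d) to its nearest visual word
    in a K-means vocabulary (k, d) and return a normalized k-bin
    histogram -- the fixed-length feature vector fed to the SVM."""
    # squared Euclidean distances via ||a||^2 + ||b||^2 - 2 a.b
    d2 = (descriptors ** 2).sum(1)[:, None] \
       + (vocabulary ** 2).sum(1)[None, :] \
       - 2.0 * descriptors @ vocabulary.T
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
vocab = rng.normal(size=(550, 128))  # 550-word dictionary, SIFT dim 128
desc = rng.normal(size=(200, 128))   # descriptors from one image
h = bovw_histogram(desc, vocab)
print(h.shape, round(h.sum(), 6))    # (550,) 1.0
```

    Because the histogram length is fixed by the dictionary size rather than by how many feature points an image yields, images of any size become directly comparable inputs for the SVM.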

    Estimation of total nitrogen in young Aquilaria sinensis based on multi-image features
    YUAN Ying, WANG Xuefeng, WANG Tian, CHEN Feifei, HUANG Chuanteng, LIN Ling, DONG Xiaona
    JOURNAL OF NANJING FORESTRY UNIVERSITY    2023, 47 (3): 19-28.   DOI: 10.12302/j.issn.1000-2006.202107017

    【Objective】Multi-image features of young Aquilaria sinensis were extracted using computer vision technology to estimate the total nitrogen content of leaves, providing a new method for the rapid and nondestructive measurement of the nitrogen nutritional status of A. sinensis. 【Method】In this study, the best histogram entropy method (KSW entropy method) and morphological processing based on the HIS (hue-intensity-saturation) color space were used to segment images of young A. sinensis, and the color, shape and textural features of the images were extracted. Subsequently, the partial least squares (PLS) method was used to reduce the multi-image feature dimensions and generate the principal components of the image feature variables. Finally, an Elman neural network (ElmanNN), optimized using the beetle antennae search (BAS) algorithm, was used to estimate the total nitrogen content of young A. sinensis, and the validation results of the model were compared with those of other commonly used models. 【Result】The research showed the following: (1) for visible images of A. sinensis, the segmentation algorithm based on the HIS color space outperformed those based on the RGB and Lab color spaces. (2) The PLS algorithm extracted six principal components from the image features, which quickly reduced the dimensionality of the image features and effectively eliminated the multicollinearity among the feature variables. (3) The PLS-BAS-ElmanNN model proposed in this study achieved adaptive selection of model parameters and higher estimation accuracy, with an R2 of 0.740 7 and a root mean square error of only 1.265 3 g/kg; its estimation accuracy was slightly higher than that of the PLSR and PLS-GAM models. 【Conclusion】In this study, we proposed an image processing method for young A. sinensis and constructed a PLS-BAS-ElmanNN estimation model that can stably process high-dimensional image data. This provides a new idea for monitoring the nitrogen nutritional status of young A. sinensis and is of practical significance for the accurate cultivation of A. sinensis.
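    The beetle antennae search (BAS) algorithm used above to tune the ElmanNN parameters is a simple derivative-free optimizer: it probes the objective at two "antennae" placed along a random direction and steps toward the side with the lower value, with the antenna length and step size decaying geometrically. A minimal sketch on a toy quadratic; the hyperparameters and the objective are illustrative, not the paper's settings:

```python
import numpy as np

def bas_minimize(f, x0, steps=200, d0=1.0, step0=1.0, eta=0.95, seed=0):
    """Minimal beetle antennae search (BAS): sample f at x +/- d*b for
    a random unit direction b, then step toward the better antenna;
    both the antenna length d and the step size shrink each iteration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d, step = d0, step0
    for _ in range(steps):
        b = rng.normal(size=x.shape)
        b /= np.linalg.norm(b)                       # random unit direction
        f_left, f_right = f(x + d * b), f(x - d * b)
        x = x - step * b * np.sign(f_left - f_right)  # move toward better side
        d, step = d * eta, step * eta                 # geometric decay
    return x

# sanity check on a convex quadratic with minimum at (1, -2)
sol = bas_minimize(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [5.0, 5.0])
```

    Because BAS needs only objective evaluations, it can tune quantities such as network weights or learning rates for which gradients are unavailable or unreliable, which is the role it plays in the PLS-BAS-ElmanNN model.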
