• Identification Method of Kale Leaf Ball Based on Improved UperNet

    Subjects: Agriculture, Forestry, Livestock & Aquatic Products Science >> Other Disciplines of Agriculture, Forestry, Livestock & Aquatic Products Science submitted time 2024-07-17 Cooperative journals: 《智慧农业(中英文)》

    Abstract: [Objective] Kale is an important bulk vegetable crop worldwide; its main growth structures are the outer leaves and the leaf bulb. The traits of the kale leaf bulb are crucial for adjusting water and fertilizer parameters in the field to achieve maximum yield. However, various factors such as soil quality, light exposure, leaf overlap, and shading can affect the growth of kale in practical field conditions. The similarity in color and texture between leaf bulbs and outer leaves complicates the segmentation process for existing recognition models. In this paper, a method for segmenting kale outer leaves and leaf bulbs against complex field backgrounds was proposed, using pixel counts to determine leaf bulb size for intelligent field management. A semantic segmentation algorithm, UperNet-ESA, was proposed to efficiently and accurately segment the outer leaves and leaf bulbs of nodular kale in field scenes, using their morphological features to realize intelligent field management of nodular kale. [Methods] The UperNet-ESA semantic segmentation algorithm uses the unified perceptual parsing network (UperNet) as an efficient semantic segmentation framework, which is well suited to extracting crop features in complex environments because it integrates semantic information across different scales. The backbone network, responsible for feature extraction in the model, was improved using ConvNeXt. The similarity between kale leaf bulbs and outer leaves, along with leaf overlap that hinders accurate localization of target contours, posed challenges for the baseline network and led to low accuracy. ConvNeXt effectively combines the strengths of convolutional neural networks (CNNs) and Transformers, using design principles from the Swin Transformer and building upon ResNet-50 to create a highly effective network structure.
The simplicity of the ConvNeXt design not only enhances segmentation accuracy with minimal model complexity, but also positions it as a top performer among CNN architectures. In this study, the ConvNeXt-B variant was chosen based on considerations of computational complexity and the background characteristics of the nodular kale image dataset. To enhance the model’s perceptual acuity, the block ratios for the four stages were set at 3:3:27:3, with corresponding channel numbers of 128, 256, 512 and 1 024, respectively. Given the visual similarity between kale leaf bulbs and outer leaves, a high-efficiency channel attention (ECA) mechanism was integrated into the backbone network to improve feature extraction in the leaf bulb region. By incorporating attention weights into the feature mapping through residual inversion, the attention parameters were cyclically trained within each block, producing feature maps with attentional weights. This iterative process facilitated repeated training of the attentional parameters and enhanced the capture of global feature information. To address challenges arising from the direct pixel-wise addition of up-sampled and local features, which can misalign context in the feature maps and cause erroneous classifications at kale leaf boundaries, a feature alignment module and a feature selection module were introduced into the feature pyramid network to refine the extraction of target boundary information and enhance segmentation accuracy. [Results and Discussions] The UperNet-ESA semantic segmentation model outperformed the mainstream UNet, PSPNet, and DeepLabV3+ models in terms of segmentation accuracy, with mIoU and mPA reaching 92.45% and 94.32%, respectively, and an inference speed of up to 16.6 frames per second (fps).
The mPA was better than that of the UNet model, the PSPNet model, and the DeepLabV3+ models with ResNet-50, MobileNetV2, and Xception backbones, showing improvements of 11.52%, 13.56%, 8.68%, 4.31%, and 6.21%, respectively. Similarly, the mIoU exhibited improvements of 12.21%, 13.04%, 10.65%, 3.26% and 7.11% over the mIoU of the UNet model, the PSPNet model, and the DeepLabV3+ models based on the ResNet-50, MobileNetV2, and Xception backbones, respectively. This performance enhancement can be attributed to the introduction of the ECA module and the improvements made to the feature pyramid network, which strengthen the judgement of target features at each stage to obtain effective global contextual information. In addition, although the PSPNet model had the fastest inference speed, its overall accuracy was too low for developing kale semantic segmentation models. By contrast, the proposed model exhibited superior inference speed compared with the remaining network models. [Conclusions] The experimental results showed that the UperNet-ESA semantic segmentation model proposed in this study outperforms the original network. The improved model achieves the best accuracy-speed balance among the mainstream semantic segmentation networks compared. In upcoming research, the current model will be further optimized and enhanced, and the kale dataset will be expanded to include a wider range of nodular kale leaf bulb samples. This expansion is intended to provide a more robust and comprehensive theoretical foundation for intelligent kale field management.
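The efficient channel attention used in the improved backbone can be sketched as follows. This is a minimal NumPy illustration of the general ECA idea (global average pooling, a local 1-D interaction across channels, and a sigmoid gate), not the authors' implementation: in a trained model the 1-D convolution weights are learned, whereas the fixed averaging kernel below is purely illustrative.

```python
import numpy as np

def eca_attention(x, gamma=2, b=1):
    """Sketch of Efficient Channel Attention over a (C, H, W) feature map.

    Global average pooling summarises each channel, a 1-D convolution
    across channels models local cross-channel interaction, and a
    sigmoid produces per-channel weights that rescale the input.
    """
    c, h, w = x.shape
    # Adaptive odd kernel size derived from the channel count (ECA heuristic).
    k = int(abs((np.log2(c) + b) / gamma))
    k = k if k % 2 else k + 1
    # Squeeze: global average pooling -> one descriptor per channel.
    y = x.mean(axis=(1, 2))
    # 1-D convolution across the channel dimension (zero padding);
    # the averaging kernel stands in for learned weights.
    pad = k // 2
    yp = np.pad(y, pad)
    kernel = np.full(k, 1.0 / k)
    conv = np.array([np.dot(yp[i:i + k], kernel) for i in range(c)])
    # Excite: sigmoid gate, then rescale each channel of the input.
    weights = 1.0 / (1.0 + np.exp(-conv))
    return x * weights[:, None, None]

feat = np.random.rand(128, 8, 8)   # e.g. a stage-1 ConvNeXt feature map
out = eca_attention(feat)
print(out.shape)  # (128, 8, 8)
```

Because the gate stays in (0, 1), the module can only attenuate channels, which is what lets it emphasize leaf bulb channels relative to visually similar outer-leaf channels without adding fully connected layers.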

  • Recognition Method of Facility Cucumber Farming Behaviours Based on Improved SlowFast Model

    Subjects: Agriculture, Forestry, Livestock & Aquatic Products Science >> Other Disciplines of Agriculture, Forestry, Livestock & Aquatic Products Science submitted time 2024-07-17 Cooperative journals: 《智慧农业(中英文)》

    Abstract: [Objective] The identification of agricultural activities plays a crucial role in greenhouse vegetable production, particularly in the precise management of cucumber cultivation. By monitoring and analyzing the timing and procedures of agricultural operations, effective guidance can be provided for agricultural production, leading to increased crop yield and quality. However, in practical applications, the recognition of agricultural activities in cucumber cultivation faces significant challenges. The complex and ever-changing growing environment of cucumbers, including dense foliage and internal facility structures that may obstruct visibility, poses difficulties in recognizing agricultural activities. Additionally, agricultural tasks involve various stages such as planting, irrigation, fertilization, and pruning, each with specific operational intricacies and skill requirements. This requires the recognition system to accurately capture the characteristics of various complex movements to ensure the accuracy and reliability of the entire recognition process. To address these complex challenges, an innovative algorithm, SlowFast-SMC-ECA (SlowFast-Spatio-Temporal Excitation, Channel Excitation, Motion Excitation-Efficient Channel Attention), was proposed for the recognition of agricultural activity behaviors in cucumber cultivation within facilities. [Methods] This algorithm represents a significant enhancement to the traditional SlowFast model, with the goal of more accurately capturing hand motion features and crucial dynamic information in agricultural activities. The fundamental concept of the SlowFast model involved processing video streams through two distinct pathways: the Slow Pathway concentrated on capturing spatial detail information, while the Fast Pathway emphasized capturing temporal changes in rapid movements. To further improve information exchange between the Slow and Fast pathways, lateral connections were incorporated at each stage.
Building upon this foundation, the study introduced innovative enhancements to both pathways, improving the overall performance of the model. In the Fast Pathway, a multi-path residual network (SMC) concept was introduced, incorporating convolutional layers between different channels to strengthen temporal interconnectivity. This design enabled the algorithm to sensitively detect subtle temporal variations in rapid movements, thereby enhancing the recognition capability for swift agricultural actions. Meanwhile, in the Slow Pathway, the traditional residual block was replaced with the ECA-Res structure, integrating an efficient channel attention (ECA) mechanism to improve the model’s capacity to capture channel information. The adaptive adjustment of channel weights by the ECA-Res structure enriched feature expression and differentiation, enhancing the model’s understanding and grasp of key spatial information in agricultural activities. Furthermore, to address the challenge of class imbalance in practical scenarios, a balanced loss function (Smoothing Loss) was developed. By introducing regularization coefficients, this loss function could automatically adjust the weights of different categories during training, effectively mitigating the impact of class imbalance and ensuring improved recognition performance across all categories. [Results and Discussions] The experimental results clearly demonstrated the outstanding performance of the improved SlowFast-SMC-ECA model on a specially constructed agricultural activity dataset. Specifically, the model achieved an average recognition accuracy of 80.47%, representing an improvement of approximately 3.5% over the original SlowFast model. This achievement highlighted the effectiveness of the proposed improvements.
Further ablation studies revealed that replacing the traditional residual blocks with the multi-path residual network (SMC) and ECA-Res structures in the second and third stages of the SlowFast model led to superior results. This highlighted that the improvements made to the Fast Pathway and Slow Pathway played a crucial role in enhancing the model’s ability to capture details of agricultural activities. Additional ablation studies also confirmed the significant impact of these two improvements on the accuracy of agricultural activity recognition. Compared to existing algorithms, the improved SlowFast-SMC-ECA model exhibited a clear advantage in prediction accuracy. This not only validated the potential application of the proposed model in agricultural activity recognition but also provided strong technical support for the advancement of precision agriculture technology. In conclusion, through careful refinement and optimization of the SlowFast model, the model’s recognition capabilities in complex agricultural scenarios were successfully enhanced, contributing valuable technological advancements to precision management in greenhouse cucumber cultivation. [Conclusions] By introducing advanced recognition technologies and intelligent algorithms, this study enhances the accuracy and efficiency of monitoring agricultural activities and assists farmers and agricultural experts in managing and guiding the operational processes within planting facilities more efficiently. Moreover, the research outcomes are of immense value in improving the traceability system for agricultural product quality and safety, ensuring the reliability and transparency of agricultural product quality.
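The balanced Smoothing Loss described above can be sketched as a label-smoothed cross-entropy with inverse-frequency class weights. This is a hedged NumPy illustration only: the exact regularization coefficients used in the paper are not given, so the inverse-frequency weighting scheme and the eps hyperparameter here are assumptions.

```python
import numpy as np

def balanced_smoothing_loss(logits, labels, class_counts, eps=0.1):
    """Sketch of a class-balanced, label-smoothed cross-entropy.

    Rare behaviour classes receive larger weights (inverse frequency),
    and label smoothing spreads eps probability mass over the other
    classes so the model is not over-confident on majority actions.
    """
    n, k = logits.shape
    # Inverse-frequency class weights, normalised to mean 1.
    w = class_counts.sum() / (k * class_counts.astype(float))
    # Smoothed one-hot targets: each row still sums to 1.
    targets = np.full((n, k), eps / (k - 1))
    targets[np.arange(n), labels] = 1.0 - eps
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Per-sample loss, weighted by the true label's class weight.
    per_sample = -(targets * log_p).sum(axis=1) * w[labels]
    return float(per_sample.mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 2, 3])   # imbalanced batch
counts = np.array([300, 40, 40, 20])    # per-class training sample counts
loss = balanced_smoothing_loss(logits, labels, counts)
print(round(loss, 4))
```

With this weighting, a misclassified sample of the rarest class (20 samples) contributes roughly fifteen times the gradient weight of a majority-class sample, which is the mechanism the abstract attributes to improved recognition across all categories.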

  • Zero-Shot Pest Identification Based on Generative Adversarial Networks and Visual-Semantic Alignment

    Subjects: Agriculture, Forestry, Livestock & Aquatic Products Science >> Other Disciplines of Agriculture, Forestry, Livestock & Aquatic Products Science submitted time 2024-06-17 Cooperative journals: 《智慧农业(中英文)》

    Abstract: Objective  Accurate identification of insect pests is crucial for the effective prevention and control of crop infestations. However, existing pest identification methods primarily rely on traditional machine learning or deep learning techniques that are trained on seen classes. These methods falter when they encounter unseen pest species not included in the training set, due to the absence of image samples. An innovative method was proposed to address the zero-shot recognition challenge for pests. Methods  The novel zero-shot learning (ZSL) method proposed in this study was capable of identifying unseen pest species. First, a comprehensive pest image dataset was assembled, sourced from field photography conducted around Beijing over several years, and from web crawling. The final dataset consisted of 2 000 images across 20 classes of adult Lepidoptera insects, with 100 images per class. During data preprocessing, a semantic dataset was manually curated by defining attributes related to color, pattern, size, and shape for six parts: antennae, back, tail, legs, wings, and overall appearance. Each image was annotated to form a 65-dimensional attribute vector for each class, resulting in a 20×65 semantic attribute matrix with rows representing each class and columns representing attribute values. Subsequently, 16 classes were designated as seen classes, and 4 as unseen classes. Next, a novel zero-shot pest recognition method was proposed, focusing on synthesizing high-quality pseudo-visual features aligned with semantic information using a generator. The Wasserstein generative adversarial network (WGAN) architecture was strategically employed as the fundamental network backbone. Conventional generative adversarial networks (GANs) have been known to suffer from training instabilities, mode collapse, and convergence issues, which can severely hinder their performance and applicability.
The WGAN architecture addresses these inherent limitations through a principled reformulation of the objective function. In the proposed method, the contrastive module was designed to capture highly discriminative visual features that could effectively distinguish between different insect classes. It operated by creating positive and negative pairs of instances within a batch. Positive pairs consisted of different views of the same class, while negative pairs were formed from instances belonging to different classes. The contrastive loss function encouraged the learned representations of positive pairs to be similar while pushing the representations of negative pairs apart. Tightly integrated with the WGAN structure, this module substantially improved the generation quality of the generator. Furthermore, the visual-semantic alignment module enforced consistency constraints from both visual and semantic perspectives. This module constructed a cross-modal embedding space, mapping visual and semantic features via two projection layers: One for mapping visual features into the cross-modal space, and another for mapping semantic features. The visual projection layer took the synthesized pseudo-visual features from the generator as input, while the semantic projection layer ingested the class-level semantic vectors. Within this cross-modal embedding space, the module enforced two key constraints: Maximizing the similarity between same-class visual-semantic pairs and minimizing the similarity between different-class pairs. This was achieved through a carefully designed loss function that encourages the projected visual and semantic representations to be closely aligned for instances belonging to the same class, while pushing apart the representations of different classes. The visual-semantic alignment module acted as a regularizer, preventing the generator from producing features that deviated from the desired semantic information. 
This regularization effect complemented the discriminative power gained from the contrastive module, resulting in a generator that produces high-quality, diverse, and semantically aligned pseudo-visual features. Results and Discussions  The proposed method was evaluated on several popular ZSL benchmarks, including CUB, AWA, FLO, and SUN. The results demonstrated that the proposed method achieved state-of-the-art performance across these datasets, with a maximum improvement of 2.8% over the previous best method, CE-GZSL. This outcome fully demonstrated the method’s broad effectiveness across different benchmarks and its outstanding generalization ability. On the self-constructed 20-class insect dataset, the method also exhibited exceptional recognition accuracy. Under the standard ZSL setting, it achieved a recognition accuracy of 77.4%, outperforming CE-GZSL by 2.1%. Under the generalized ZSL setting, it achieved a harmonic mean accuracy of 78.3%, marking a notable 1.2% improvement. This metric provided a balanced assessment of the model’s performance across seen and unseen classes, ensuring that high accuracy on unseen classes does not come at the cost of forgetting seen classes. These results on the pest dataset, coupled with the performance on public benchmarks, firmly validated the effectiveness of the proposed method. Conclusions  The proposed zero-shot pest recognition method represents a step forward in addressing the challenges of pest management. It effectively generalized pest visual features to unseen classes, enabling zero-shot pest recognition. It can facilitate pest identification tasks that lack training samples, thereby assisting in the discovery and prevention of novel crop pests. Future research will focus on expanding the range of pest species to further enhance the model’s practical applicability.
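The contrastive module's objective described above can be sketched in a SupCon/InfoNCE style: same-class pairs in a batch act as positives and different-class pairs as negatives, with a temperature-scaled softmax over cosine similarities. This NumPy sketch is an illustration of that general objective, not the paper's exact loss; the temperature value and feature construction are assumptions.

```python
import numpy as np

def contrastive_loss(features, labels, temperature=0.1):
    """Batch-wise contrastive objective over visual features.

    Representations of positive pairs (same insect class) are pulled
    together, and those of negative pairs (different classes) are
    pushed apart, via a temperature-scaled softmax over cosine
    similarities.
    """
    # L2-normalise so dot products are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    losses = []
    for i in range(n):
        mask = np.arange(n) != i            # exclude the anchor itself
        pos = mask & (labels == labels[i])  # same-class positives
        if not pos.any():
            continue
        log_denom = np.log(np.exp(sim[i, mask]).sum())
        # Average negative log-probability over anchor i's positives.
        losses.append(-(sim[i, pos] - log_denom).mean())
    return float(np.mean(losses))

rng = np.random.default_rng(1)
labels = np.array([0, 0, 1, 1, 2, 2])
# Class-correlated toy features standing in for generator outputs.
feats = rng.normal(size=(6, 16)) + labels[:, None]
loss = contrastive_loss(feats, labels)
print(loss)
```

In the full method this term would be combined with the WGAN critic loss and the visual-semantic alignment loss, so that the generator's pseudo-visual features are simultaneously realistic, class-discriminative, and consistent with the 65-dimensional attribute semantics.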

  • Big Models in Agriculture: Key Technologies, Application and Future Directions

    Subjects: Agriculture, Forestry, Livestock & Aquatic Products Science >> Other Disciplines of Agriculture, Forestry, Livestock & Aquatic Products Science submitted time 2024-06-17 Cooperative journals: 《智慧农业(中英文)》

    Abstract: Significance  Big models, also known as foundation models, have offered a new paradigm for smart agriculture. These models, built on the Transformer architecture, incorporate vast numbers of parameters and have undergone extensive training, often showing excellent performance and adaptability, which makes them effective in addressing agricultural issues where data are limited. Integrating big models in agriculture promises to pave the way for a more comprehensive form of agricultural intelligence, capable of processing diverse inputs, making informed decisions, and potentially overseeing entire farming systems autonomously. Progress  The fundamental concepts and core technologies of big models are first elaborated from five aspects: the origin and core principles of the Transformer architecture, the scaling laws governing the growth of big models, large-scale self-supervised learning, the general capabilities and adaptations of big models, and the emergent capabilities of big models. Subsequently, the possible application scenarios of big models in the agricultural field are analyzed in detail, and the development status of big models is described for three types of models: large language models (LLMs), large vision models (LVMs), and large multi-modal models (LMMs). The progress of applying big models in agriculture is discussed, and the achievements to date are presented. Conclusions and Prospects  The challenges and key tasks of applying big model technology in agriculture are analyzed. Firstly, the datasets currently used for agricultural big models are somewhat limited, and constructing such datasets can be both expensive and potentially problematic in terms of copyright. There is a call for creating larger, more openly accessible datasets to facilitate future advancements. Secondly, the complexity of big models, due to their extensive parameter counts, poses significant challenges for training and deployment.
However, there is optimism that future methodological improvements will streamline these processes by optimizing memory and computational efficiency, thereby enhancing the performance of big models in agriculture. Thirdly, these advanced models demonstrate strong proficiency in analyzing image and text data, suggesting potential future applications in integrating real-time data from IoT devices and the Internet to make informed decisions, manage multi-modal data, and potentially operate machinery within autonomous agricultural systems. Finally, the dissemination and implementation of these big models in the public agricultural sphere are deemed crucial. The public availability of these models is expected to refine their capabilities through user feedback and alleviate the workload on humans by providing sophisticated and accurate agricultural advice, which could revolutionize agricultural practices.
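The scaling laws mentioned above can be illustrated with the power-law form popularized for language models, L(N) = (N_c / N)^α_N, where test loss falls predictably as parameter count N grows. The constants below are the fits reported for language models by Kaplan et al. and are purely illustrative here; they are not values measured for any agricultural model.

```python
# Illustrative power-law scaling of test loss with parameter count N.
# Constants are Kaplan et al.'s language-model fits, used only to
# show the qualitative trend, not agricultural-domain measurements.
ALPHA_N = 0.076   # scaling exponent for parameters
N_C = 8.8e13      # critical parameter count

def scaling_loss(n_params):
    """Predicted test loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# Each 10x increase in parameters yields a modest, predictable drop
# in loss, which is why ever-larger models keep improving.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N = {n:.0e}  predicted loss = {scaling_loss(n):.3f}")
```

The practical implication for agriculture is that, under such laws, the return on scaling is smooth and forecastable, so the limiting factor becomes the size and quality of domain datasets rather than model capacity itself.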