JOURNAL OF NANJING FORESTRY UNIVERSITY, 2025, Vol. 49, Issue (2): 194-202. DOI: 10.12302/j.issn.1000-2006.202308021


Method for aerial forest fire image recognition based on self-attention mechanism

WANG Junling1,2, FAN Xijian1,*, YANG Xubing1, YE Qiaolin1, FU Liyong3

  1. College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
    2. School of Computer and Artificial Intelligence, Nanjing University of Science and Technology Zijin College, Nanjing 210023, China
    3. Institute of Forest Resource Information Techniques, Chinese Academy of Forestry, Beijing 100091, China
  • Received: 2023-08-10; Accepted: 2024-03-15; Online: 2025-03-30; Published: 2025-03-28
  • Contact: FAN Xijian. E-mail: wangjunling@njfu.edu.cn; xijian.fan@njfu.edu.cn

Abstract:

【Objective】To address the challenges posed by small fire-point targets and complex environments in aerial forest fire images, we propose FireViT, a self-attention-based image recognition method. 【Method】The method is designed to enhance the accuracy and robustness of aerial forest fire image recognition. We used forest fire videos collected by drones in Chongli District, Zhangjiakou City, to construct a dataset through data preprocessing. A 10-layer vision transformer (ViT) was selected as the backbone network. Images were serialized using overlapping sliding windows, and the resulting tokens, with embedded positional information, were fed into the first ViT layer. Region selection modules, built from the preceding nine ViT layers, were integrated into the tenth layer through multi-head self-attention and multi-layer perceptron mechanisms, which effectively amplified minor differences between sub-images and captured the features of small targets. Finally, a contrastive feature learning strategy was employed to construct the objective loss function for model prediction. We validated the model’s effectiveness by establishing training and testing sets with sample ratios of 8∶2, 7∶3, 6∶4, and 4∶6, and compared its performance with five classical models.【Result】Under the four training/test allocation ratios, the model achieved a recognition rate of 100% and accuracies of 94.82%, 95.05%, 94.90%, and 94.80%, respectively, with an average accuracy of 94.89%, surpassing all five comparison models. The model converged rapidly, maintained high recognition accuracy, and remained stable in subsequent iterations. It also showed strong generalization ability, with recognition rates of 99.97%, 99.89%, 99.80%, and 99.77%, again higher than those of the five comparison models.【Conclusion】This research employed a model that integrates a self-attention mechanism with weakly supervised learning to reveal distinct local feature variations in aerial forest fire images across various environments. The approach exhibits strong generalization capability and robustness, which is of significance for improving the capacity, efficiency, and effectiveness of fire situation management and hazard response, and plays an important role in forest wildfire prevention.
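To make the pipeline described above more concrete, the sketch below illustrates two of the abstract's ideas in PyTorch: serializing an image with an overlapping sliding window before a ViT, and selecting the patch tokens that earlier layers attend to most so that only those are passed to the final transformer layer. All module names, sizes, and the token-selection rule are illustrative assumptions, not the authors' exact FireViT design.

```python
# Minimal sketch, assuming a standard ViT-style token layout with a [CLS] token.
import torch
import torch.nn as nn


class OverlappingPatchEmbed(nn.Module):
    """Split an image into overlapping patches (stride < patch size) and embed them."""

    def __init__(self, img_size=224, patch_size=16, stride=12, in_chans=3, dim=384):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=stride)
        n = (img_size - patch_size) // stride + 1          # patches per side
        self.pos = nn.Parameter(torch.zeros(1, n * n + 1, dim))  # learned positions
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))          # classification token

    def forward(self, x):
        tokens = self.proj(x).flatten(2).transpose(1, 2)    # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos   # (B, N + 1, dim)


def select_informative_tokens(tokens, attn_maps, k=12):
    """Keep the k patch tokens the [CLS] token attends to most, averaged over the
    attention maps recorded from the preceding layers (a stand-in for region selection)."""
    # attn_maps: list of (B, heads, N+1, N+1) attention tensors from earlier layers
    cls_attn = torch.stack([a.mean(1)[:, 0, 1:] for a in attn_maps]).mean(0)  # (B, N)
    idx = cls_attn.topk(k, dim=-1).indices                                    # (B, k)
    gather = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return torch.gather(tokens[:, 1:], 1, gather)                             # (B, k, dim)


# Usage idea: embed an image with OverlappingPatchEmbed, run the tokens through the
# first nine transformer layers while recording their attention maps, call
# select_informative_tokens, then let the tenth layer (multi-head self-attention
# plus MLP) operate on [CLS] together with only the selected tokens.
```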

Key words: aerial forest fire image, self-attention mechanism, fine-grained classification, vision transformer, forest fire prevention, unmanned aerial vehicle (UAV), Chongli District of Zhangjiakou City
