Intelligent On-Board Forest Fire Search System
DOI: https://doi.org/10.18372/1990-5548.74.17290
Keywords: fire detection, convolutional neural networks, unmanned aerial vehicles, YOLO, R-CNN, single shot MultiBox detector, classifier
Abstract
The paper analyzes the forest fire situation in Ukraine and shows that it is deteriorating every year. The need for integrated use of satellite and unmanned aerial vehicle data for forest fire monitoring is substantiated. It is shown that early detection of a fire, before it grows into a disaster, is critical to preventing catastrophic fires and saving lives and property. A fire detection approach is substantiated that is based on computer vision methods capable of working with a non-stationary camera installed on board an unmanned aerial vehicle, and an approach for detecting a fire "spot" using convolutional neural networks is proposed. For the task of detecting a forest fire with an unmanned aerial vehicle, tracking-by-detection is chosen as the model initialization method: objects are first found by a detector and then linked into tracks (association). The YOLOv4-tiny architecture was chosen as the neural network detector, as it provides high accuracy and speed of binary classification.
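The tracking-by-detection scheme described above — detect objects in each frame, then link the detections into tracks — can be sketched in pure Python with a greedy IoU (intersection-over-union) matcher. This is an illustrative simplification, not the paper's implementation: practical trackers such as SORT add Kalman motion prediction and Hungarian assignment, and the per-frame boxes would come from a YOLOv4-tiny detector rather than be supplied by hand.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


class Tracker:
    """Greedy tracking-by-detection: link per-frame detections into tracks by IoU."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}      # track_id -> last known box
        self.next_id = 0

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2) boxes from the detector.
        Returns {track_id: box} for the current frame."""
        assigned = {}
        unmatched = list(detections)
        # Match each existing track to its best-overlapping new detection.
        for tid, box in list(self.tracks.items()):
            best = max(unmatched, key=lambda d: iou(box, d), default=None)
            if best is not None and iou(box, best) >= self.iou_threshold:
                assigned[tid] = best
                unmatched.remove(best)
            else:
                del self.tracks[tid]  # track lost: no overlapping detection
        # Remaining detections start new tracks (e.g. a newly visible fire spot).
        for det in unmatched:
            assigned[self.next_id] = det
            self.next_id += 1
        self.tracks = dict(assigned)
        return assigned
```

A fire spot that drifts slightly between frames keeps its track identity, while a detection with no overlapping predecessor opens a new track, which is the association step the abstract refers to.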
References
T. Celik, H. Ozkaramanli, and H. Demirel, “Fire detection in video sequences using statistical color model,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2006.
M. Mueller, P. Karasev, I. Kolesov, and A. Tannenbaum, “Optical flow estimation for flame detection in videos,” IEEE Trans. on Image Processing, vol. 22, no. 7, 2013. https://doi.org/10.1109/TIP.2013.2258353
C.-B. Liu and N. Ahuja, “Vision based fire detection,” in Int. Conf. on Pattern Recognition, 2004. https://doi.org/10.1109/ICPR.2004.1333722
P. Gomes, P. Santana, and J. Barata, “A vision-based approach to fire detection,” International Journal of Advanced Robotic Systems, 2014. https://doi.org/10.5772/58821
C. Yu, Z. Mei, and X. Zhang, “A real-time video fire flame and smoke detection algorithm,” in Asia-Oceania Symposium on Fire Science and Technology, 2013.
B. U. Toreyin and A. E. Cetin, “Online detection of fire in video,” in IEEE Conf. on Computer Vision and Pattern Recognition, 2007.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks.” In Advances in Neural Information Processing Systems, Lake Tahoe, Nevada, USA, 2012.
R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE Conf. on Computer Vision and Pattern Recognition, Columbus, Ohio, USA, 2014. https://doi.org/10.1109/CVPR.2014.81
J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conf. on Computer Vision and Pattern Recognition, 2015. https://doi.org/10.1109/CVPR.2015.7298965
Y. H. Habiboğlu, O. Günay, and A. E. Çetin, “Covariance matrix-based fire and flame detection method in video,” Machine Vision and Applications, vol. 23, no. 6, pp. 1103–1113, 2012. https://doi.org/10.1007/s00138-011-0369-1
Q. Zhang, J. Xu, L. Xu, and H. Guo, “Deep Convolutional Neural Networks for Forest Fire Detection,” International Forum on Management, Education and Information Technology Application (IFMEITA 2016), pp. 568–575. https://doi.org/10.2991/ifmeita-16.2016.105
M. Z. Zgurovsky, V. M. Sineglazov, and O. I. Chumachenko, Artificial Intelligence Systems Based on Hybrid Neural Networks, Springer, 2020. https://doi.org/10.1007/978-3-030-48453-8
W. Luo and J. Xing, “Multiple Object Tracking: A Literature Review,” arXiv:1409.7618, 2017.
A. Bewley and Z. Ge, “Simple online and realtime tracking,” arXiv:1602.00763, 2016.
A. Sadeghian, A. Alahi, and S. Savarese, “Tracking the Untrackable: Learning to Track Multiple Cues with Long-Term Dependencies,” arXiv:1701.01909, 2017. https://doi.org/10.1109/ICCV.2017.41
N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” arXiv:1703.07402, 2017.
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788. https://doi.org/10.1109/CVPR.2016.91
J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” arXiv preprint, arXiv:1612.08242, 2016, 9 p. https://doi.org/10.1109/CVPR.2017.690
J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” Tech report, arXiv:1804.02767, 2018, 6 p.
C. M. Bishop, “Pattern Recognition and Machine Learning,” Springer-Verlag, New York, 2006, 738 p.
S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” Extended tech report, arXiv:1506.01497. 2016, 14 p.
R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, 21 p. https://doi.org/10.1109/CVPR.2014.81
R. Girshick, “Fast R-CNN,” IEEE International Conference on Computer Vision (ICCV), 2015, 9 p. https://doi.org/10.1109/ICCV.2015.169
J. R. R. Uijlings, K. E. A van de Sande, T. Gevers, and A. W. M. Smeulders, “Selective Search for Object Recognition,” International Journal of Computer Vision, vol. 104, pp. 154–171, 2013. https://doi.org/10.1007/s11263-013-0620-5
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, Ch.-Y. Fu, and A. C. Berg, “SSD: Single Shot MultiBox Detector,” “European Conference on Computer Vision (ECCV),” Springer, Cham., 2016, vol. 9905, pp. 21–37. https://doi.org/10.1007/978-3-319-46448-0_2
S. Wan, Z. Chen, T. Zhang, B. Zhang, and K. Wong, “Bootstrapping Face Detection with Hard Negative Examples,” arXiv:1608.02236, 2016, 7 p.
V. M. Sineglazov, M. G. Lutsky, and V. S. Ishchenko, “Suppression of noise in visual navigation systems,” Proceedings of 2021 IEEE 6th International Conference on Actual Problems of Unmanned Aerial Vehicles Development (APUAVD), Kyiv, Ukraine, October 19-21, 2021, pp. 7–10.
V. M. Sineglazov, “Hybrid Neural Networks for Noise Reductions of Integrated Navigation Complexes,” Artificial Intelligence, no. 1, 2022, pp. 168–180. https://doi.org/10.15407/jai2022.01.288
Z. Jiang, L. Zhao, S. Li, and Y. Jia, “Real-time object detection method for embedded devices,” 2021, 11 p.
Z. Jiao, Y. Zhang, X. Xin et al., “A deep learning based forest fire detection approach using UAV and YOLOv3,” in Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), pp. 1–5, Shenyang, China, July, 2019. https://doi.org/10.1109/ICIAI.2019.8850815
License
Authors who publish with this journal agree to the following terms:
Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).