Computer Vision Applied to Nuclear Safety: A Comparison Between YOLOv5 and YOLOv8 in the Identification of Warning Signs in Nuclear Contexts
DOI: https://doi.org/10.18540/jcecvl11iss1pp21998

Keywords: YOLOv5, YOLOv8, Nuclear Safety, Convolutional Neural Networks

Abstract
This article contributes to the improvement of autonomous visual inspection systems through the application of computer vision and artificial intelligence techniques. The proposal is situated within the context of intelligent automation for critical environments, using real-time object detection models. The research focuses on identifying visual warning signs, particularly the international symbol for ionizing radiation, found in controlled areas, industrial facilities, and locations with operational restrictions, where automatic recognition can mitigate operational and human risks. To this end, a comparative analysis was conducted between two versions of the YOLO (You Only Look Once) architecture, widely used in real-time computer vision applications: YOLOv5 and YOLOv8. The study used a dataset of 1,392 images, manually labeled on the Roboflow platform, containing variations of the radiation symbol in diverse visual contexts. The images were split into training (70%), validation (20%), and test (10%) sets. Both architectures were trained with the same baseline parameters to allow a fair performance comparison. The evaluated metrics included precision, recall, mAP@0.5, and mAP@0.5:0.95, as well as the false positive rate. The results indicated that while YOLOv8 performed better on the stricter mAP@0.5:0.95 metric, YOLOv5 achieved higher overall performance and a lower incidence of false positives, making it more suitable for autonomous systems. This research thus contributes to the advancement of intelligent embedded visual detection solutions, with applications in automated inspections, especially in hard-to-reach or operationally restricted environments.
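As a rough illustration of the experimental protocol summarized above (both models trained on the same Roboflow export with identical baseline parameters, then compared on precision, recall, mAP@0.5, and mAP@0.5:0.95), the sketch below uses the Ultralytics Python API. The hyperparameter values, the data.yaml file name, and the nano-size weights are illustrative assumptions, not the settings used in the paper.

# Minimal sketch (assumed setup, not the authors' exact script) of training
# YOLOv5 and YOLOv8 under identical baseline parameters and reading the
# metrics named in the abstract.
from ultralytics import YOLO

# Hypothetical shared hyperparameters; the paper's exact values are not given.
COMMON = dict(data="data.yaml",  # Roboflow export with the 70/20/10 split
              epochs=100, imgsz=640, batch=16, seed=0)

# YOLOv8 through the Ultralytics API.
yolov8 = YOLO("yolov8n.pt")
yolov8.train(**COMMON)
metrics_v8 = yolov8.val()  # precision, recall, mAP@0.5, mAP@0.5:0.95

# YOLOv5 can be trained the same way via the anchor-free "u" weights shipped
# by Ultralytics, or with the original repository's script:
#   python train.py --data data.yaml --weights yolov5n.pt --epochs 100 --img 640
yolov5 = YOLO("yolov5nu.pt")
yolov5.train(**COMMON)
metrics_v5 = yolov5.val()

for name, m in (("YOLOv5", metrics_v5), ("YOLOv8", metrics_v8)):
    print(f"{name}: P={m.box.mp:.3f} R={m.box.mr:.3f} "
          f"mAP@0.5={m.box.map50:.3f} mAP@0.5:0.95={m.box.map:.3f}")

For reference, mAP@0.5 counts a detection as correct at a single IoU threshold of 0.5, whereas mAP@0.5:0.95 averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two metrics.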