Publications

Daniel Saavedra

Publisher: Neural Computing and Applications

AUTHORS

Daniel Saavedra, Domingo Mery, Sandipan Banerjee

ABSTRACT

In the field of security, baggage-screening with X-rays is used as nondestructive testing for threat object detection. This is a common protocol when inspecting passenger baggage, particularly at airports. Unfortunately, the accuracy of such human inspection is around 80–90%, under optimal operator conditions. For this reason, it is quite necessary to assist human inspectors with the aid of computer vision algorithms. This work proposes a deep learning-based methodology designed to detect threat objects in (single spectrum) X-ray baggage scan images. For this purpose, our proposed framework simulates a large number of X-ray images, using a combination of PGGAN (Karras et al. in International conference on learning representations, 2018. https://openreview.net/forum?id=Hk99zCeAb) and superimposition (Mery and Katsaggelos in 2017 IEEE conference on computer vision and pattern recognition workshops (CVPRW), 2017. https://doi.org/10.1109/CVPRW.2017.37) strategies, which are used to train state-of-the-art detection models such as YOLO (Redmon et al. in You only look once: unified, real-time object detection. CoRR abs/1506.02640, 2015. http://arxiv.org/abs/1506.02640), SSD (Liu et al. in SSD: single shot multibox detector. CoRR abs/1512.02325, 2015. http://arxiv.org/abs/1512.02325) and RetinaNet (Lin et al. in Focal loss for dense object detection. CoRR abs/1708.02002, 2017. http://arxiv.org/abs/1708.02002). Our method has been tested on real X-ray images in the detection of four categories of threat objects: guns, knives, razor blades and shuriken (ninja stars). In our experiments, YOLOv3 (Redmon and Farhadi in Yolov3: An incremental improvement. CoRR abs/1804.02767, 2018. http://arxiv.org/abs/1804.02767) obtained the best mean average precision (mAP) with 96.3% for guns, 76.2% for knives, 86.9% for razor blades and 93.7% for shuriken, while the average mAP for all threat objects was 80.0%. We believe the effectiveness of our method in the detection of threat objects makes its use in checkpoints possible. Moreover, our methodology is scalable and can be easily extended to detect other categories automatically.
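The abstract describes simulating training images by superimposing (GAN-generated or cropped) threat objects onto threat-free baggage scans. Below is a minimal sketch of that idea, assuming grayscale images normalized to [0, 1] and interpreted as X-ray transmittance; the function name, the mask argument, and the YOLO-style label format are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def simulate_threat_sample(bag, threat, mask, top_left):
    """Superimpose a simulated threat crop onto a threat-free bag image.

    bag and threat are grayscale images normalized to [0, 1], read as X-ray
    transmittance (1 = fully transparent, 0 = fully absorbing). Stacking two
    absorbing layers multiplies their transmittances, so the superimposition
    is an element-wise product inside the object mask. Returns the composite
    image and a YOLO-style (x_center, y_center, width, height) box normalized
    by the image size.
    """
    out = bag.copy()
    r, c = top_left
    h, w = threat.shape
    region = out[r:r + h, c:c + w]              # view into the output image
    region[mask] = region[mask] * threat[mask]  # multiplicative blending of transmittances
    H, W = bag.shape
    box = ((c + w / 2) / W, (r + h / 2) / H, w / W, h / H)
    return out, box

# Illustrative usage (all inputs are placeholders):
# bag = ...        # threat-free X-ray image, float32 in [0, 1]
# gun, mask = ...  # simulated threat crop and its binary mask
# image, label = simulate_threat_sample(bag, gun, mask, top_left=(120, 200))
```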



AUTHORS

Alejandro Kaminetzky, Daniel Saavedra, Domingo Mery, Laurence Golborne, Susana Figueroa

ABSTRACT

In X-ray testing, the aim is to inspect those inner parts of an object that cannot be detected by the naked eye. Typical applications are the detection of targets like blow holes in casting inspection, cracks in welding inspection, and prohibited objects in baggage inspection. A straightforward solution today is the use of object detection methods based on deep learning models. Nevertheless, this strategy is not effective when the number of available X-ray images for training is low. Unfortunately, the databases in X-ray testing are rather limited. To overcome this problem, we propose a strategy for deep learning training that is performed with a low number of target-free X-ray images with superimposition of many simulated targets. The simulation is based on the Beer–Lambert law, which makes it possible to superimpose different layers. Using this method, it is very simple to generate training data. The proposed method was used to train known object detection models (e.g., YOLO, RetinaNet, EfficientDet and SSD) in casting inspection, welding inspection and baggage inspection. The learned models were tested on real X-ray images. In our experiments, we show that the proposed solution is simple (the training can be implemented with a few lines of code using open-source libraries), effective (average precision was 0.91, 0.60 and 0.88 for casting, welding and baggage inspection, respectively), and fast (training was done in a couple of hours, and testing can be performed in 11 ms per image). We believe that this strategy makes a contribution to the implementation of practical solutions to the problem of target detection in X-ray testing.
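The abstract notes that training can be implemented "with a few lines of code using open-source libraries". A hedged sketch of what such a few-line setup could look like is shown below, using the ultralytics YOLO package purely as an example library; the dataset YAML name, model checkpoint, and hyperparameters are assumptions, and the paper's actual tooling and configuration are not specified here.

```python
# Sketch of a "few lines of code" training setup on the simulated images,
# assuming they have been exported in YOLO format with a dataset YAML
# (image paths and class names). Everything here is illustrative.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                    # pretrained checkpoint
model.train(data="xray_targets.yaml", epochs=100, imgsz=640)  # train on simulated data
metrics = model.val()                                         # average precision on a validation split
results = model.predict("real_xray_images/", conf=0.25)       # test on real X-ray images
```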
