Machine Learning Image Analysis for Asset Inspection
- Nick Hayward (Total E&P UK Ltd) | Marta Portugal (Merkle)
- SPE Offshore Europe Conference and Exhibition, 3-6 September 2019, Aberdeen, UK
- Conference Paper
- Copyright 2019, Society of Petroleum Engineers
- Keywords: object classification, image analysis, data science, object detection, machine learning
Total E&P UK Ltd and Merkle have undertaken a proof-of-concept project to investigate machine learning image analysis applied to image and video data from onshore and offshore sites, in preparation for the deployment of an autonomous asset inspection ground robot in 2019. The aims are to assess the feasibility of these methods and to demonstrate the benefits of robotic inspection: improving safety and efficiency, enhancing data capture and reducing operational costs.
An object detection model was developed based on high-performance open-source algorithms. Transfer learning was applied using a custom-built image library, and the resulting model can detect a range of items in the industrial environment, including mobile equipment, process equipment, infrastructure and personnel (with and without PPE). These detections feed object-classification anomaly detection models that assess the state of selected pieces of identified equipment, such as whether a valve is open or closed; the observed state can then be compared with the expected state of the process equipment by relating it to the asset's digital twin. An additional object detection model was developed to operate as a gas leak detection system for infrared cameras.
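The routing step described above — passing confident detections of stateful equipment on to per-class anomaly models — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `ball_valve`, the confidence threshold and the `Detection` structure are all assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str                        # class name, e.g. "ball_valve", "person"
    confidence: float                 # detector score in [0, 1]
    box: Tuple[int, int, int, int]    # (x_min, y_min, x_max, y_max) in pixels

def route_to_anomaly_models(detections: List[Detection],
                            threshold: float = 0.5) -> List[Detection]:
    """Keep confident detections of equipment whose state can be classified.

    In a pipeline like the one described, image crops for these detections
    would then be passed to a per-class anomaly model (e.g. an open/closed
    classifier for ball valves).
    """
    stateful_classes = {"ball_valve"}  # hypothetical set of stateful equipment
    return [d for d in detections
            if d.confidence >= threshold and d.label in stateful_classes]

dets = [Detection("ball_valve", 0.92, (10, 10, 40, 40)),
        Detection("ball_valve", 0.31, (60, 10, 90, 40)),   # below threshold
        Detection("person", 0.88, (100, 0, 140, 120))]     # not stateful
print([d.confidence for d in route_to_anomaly_models(dets)])  # → [0.92]
```

Thresholding before state classification keeps the downstream models from wasting effort on uncertain crops; the right threshold would be tuned against the false positive and false negative rates discussed below.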
The object detection model achieved good results, and its performance was driven by the number and quality of the training images. An anomaly detection model designed to determine whether ball valves were open or closed also performed well, with high accuracy and balanced false positive and false negative rates. The overall performance of the infrared gas detection model was restricted by the limited volume and variability of its training data, although its false positive rate was very low. A significant part of the machine learning effort was devoted to building a consistently labelled image library for oil and gas equipment, infrastructure and gas leaks. Image transformations were tested, but boosting the number of training images through such augmentation gave variable results. Additional training and testing data are needed to make the models as robust as possible, especially the gas leak detection model. Once the models are productionised and in use, newly captured data can be used to retrain them periodically for improved performance.
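The evaluation quantities referred to above (accuracy, false positive rate, false negative rate) can be computed from a binary confusion matrix as in this sketch. The counts shown are hypothetical, chosen only to illustrate the "high accuracy with balanced false positive and false negative rates" pattern reported for the valve-state model.

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Summary rates for a binary model (e.g. valve open/closed or gas/no-gas).

    tp/fp/fn/tn are true positive, false positive, false negative and
    true negative counts from held-out test data.
    """
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        # share of genuine negatives wrongly flagged
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        # share of genuine positives that were missed
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

# Hypothetical valve-state test counts: balanced errors, 90% accuracy.
m = detection_metrics(tp=45, fp=5, fn=5, tn=45)
print(m)  # → {'accuracy': 0.9, 'false_positive_rate': 0.1, 'false_negative_rate': 0.1}
```

Tracking the false positive and false negative rates separately, rather than accuracy alone, matters here because the two error types carry different operational costs: a missed gas leak is far more serious than a spurious alarm.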
In addition to the machine learning algorithms, a fundamental aspect of the project is the development of the overall technical architecture supporting the data science. This includes transferring data from inspection robots or other connected camera devices into a data store in a cloud computing environment, and returning the results to dashboards that present different levels of detail depending on user requirements.
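The data flow just described — frames and model results landing in a cloud store, with dashboards reading back at different levels of detail — can be sketched with an in-memory stand-in for the object store. Everything here is an assumption for illustration: the key layout, the JSON result schema and the `detail` levels are not from the paper.

```python
import json

class CloudStore:
    """In-memory stand-in for a cloud object store (e.g. blob storage)."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def ingest_frame(store, robot_id, frame_id, jpeg_bytes, results):
    """Upload an inspection frame and its model results side by side."""
    store.put(f"{robot_id}/{frame_id}.jpg", jpeg_bytes)
    store.put(f"{robot_id}/{frame_id}.json", json.dumps(results).encode())

def dashboard_view(store, robot_id, frame_id, detail="summary"):
    """Return a coarse anomaly count or the full detections, per user needs."""
    results = json.loads(store.get(f"{robot_id}/{frame_id}.json"))
    if detail == "summary":
        return {"anomalies": sum(1 for r in results if r["anomaly"])}
    return results

store = CloudStore()
ingest_frame(store, "ugv-01", "000123", b"<jpeg bytes>",
             [{"label": "ball_valve", "anomaly": True},
              {"label": "pipe", "anomaly": False}])
print(dashboard_view(store, "ugv-01", "000123"))  # → {'anomalies': 1}
```

Storing the raw frame and the model output under parallel keys keeps the two linked for audit and for the periodic retraining mentioned earlier, while the `detail` switch lets one store serve both an operations summary board and an engineer's full-detail view.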