Detection of Plastic Greenhouses Using High Resolution RGB Remote Sensing Data and Convolutional Neural Network
Abstract
Agricultural production in greenhouses is growing rapidly in many parts of the world. This form of intensive farming requires large amounts of water and fertilizer and can have a severe impact on the environment. The number and location of greenhouses are important for applications such as spatial planning, environmental protection, agricultural statistics and taxation. With this study we therefore aim to develop a methodology for detecting plastic greenhouses in remote sensing data using machine learning algorithms.
This paper presents the results of using a convolutional neural network for automatic object detection of plastic greenhouses in high resolution remotely sensed data, working within a GIS environment that offers a graphical interface to advanced algorithms. The convolutional neural network is trained with manually digitized greenhouses and RGB images downloaded from Google Earth. The ArcGIS Pro geographic information system provides access to many of the most advanced Python-based machine learning libraries, such as Keras – TensorFlow, PyTorch, fastai and Scikit-learn, which can be used through a graphical interface within the GIS environment.
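Although the workflow described here is driven through the ArcGIS Pro graphical tools, the same deep learning framework can be scripted with the ArcGIS API for Python (the arcgis.learn module, which wraps the PyTorch/fastai back end). The listing below is only a minimal sketch of such a training run, not the exact procedure of this study: the export folder, chip size, batch size, epoch count and the choice of a Single Shot Detector model are assumptions made for illustration.

```python
# Minimal sketch of a greenhouse-detector training run with arcgis.learn,
# the Python API behind the ArcGIS Pro deep learning tools.
# Paths, chip/batch sizes and the epoch count are placeholder assumptions.
from arcgis.learn import prepare_data, SingleShotDetector

# Image chips and rectangle labels exported beforehand (e.g. with the
# "Export Training Data For Deep Learning" tool) in PASCAL VOC format.
data = prepare_data(
    r"C:\data\greenhouse_chips",          # hypothetical export folder
    dataset_type="PASCAL_VOC_rectangles",
    batch_size=8,
    chip_size=448,
)

# Single Shot Detector with a ResNet-18 backbone, one of the three
# backbones compared in this study.
ssd = SingleShotDetector(data, backbone="resnet18")

# Find a suitable learning rate, then train.
lr = ssd.lr_find()
ssd.fit(epochs=20, lr=lr)

# Report average precision on the validation chips and save the model.
print(ssd.average_precision_score())
ssd.save("greenhouse_ssd_resnet18")
```

The saved model package can then be applied to new imagery with the Detect Objects Using Deep Learning geoprocessing tool in ArcGIS Pro.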
Our research evaluated the training and inference results of three different convolutional neural networks. Experiments were run with a wide range of settings for the backbone models and hyperparameters, and the three models were compared in terms of detection accuracy and training time. The model based on the VGG_11 backbone (with dropout) reached an average accuracy of 79.2% with a relatively short training time of 90 minutes; the much more complex DenseNet121 model required 16.5 hours of training for an accuracy of 79.1%; and the ResNet18-based model achieved an average accuracy of 83.1% with a training time of 3.5 hours.
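A hedged sketch of how such a backbone comparison could be scripted is shown below. It follows the arcgis.learn workflow from the previous listing; the data path, backbone identifiers and epoch count are assumptions (the exact backbone strings available depend on the arcgis.learn version), not the configuration used in the reported experiments.

```python
# Sketch of the backbone comparison: the same detector head is trained with
# three different backbones while average precision and wall-clock training
# time are recorded. Backbone identifiers and epoch count are assumptions.
import time

from arcgis.learn import prepare_data, SingleShotDetector

# Hypothetical training chips exported from ArcGIS Pro (see previous listing).
data = prepare_data(
    r"C:\data\greenhouse_chips",
    dataset_type="PASCAL_VOC_rectangles",
    batch_size=8,
)

results = {}
for backbone in ("vgg11_bn", "densenet121", "resnet18"):
    model = SingleShotDetector(data, backbone=backbone)
    start = time.time()
    model.fit(epochs=20, lr=model.lr_find())
    results[backbone] = {
        "average_precision": model.average_precision_score(),
        "training_minutes": round((time.time() - start) / 60, 1),
    }

for backbone, metrics in results.items():
    print(backbone, metrics)
```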
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., et al. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Online available at: https://arxiv.org/pdf/1603.04467.pdf
Agüera, F., Aguilar, M.A., Aguilar, F.J. 2008. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses. ISPRS Journal of Photogrammetry and Remote Sensing 63 (6), 635–646. DOI: 10.1016/j.isprsjprs.2008.03.003
Agüera, F., Liu, G. G. 2009. Automatic greenhouse delineation from QuickBird and Ikonos satellite images. Computers and Electronics in Agriculture 66, 191–200. DOI: 10.1016/j.compag.2009.02.001
Chollet, F. 2015. Keras. Online available at: https://github.com/fchollet/keras
Davies, E.R. 2018. Computer Vision: Principles, Algorithms, Applications, Learning. Academic Press, 5th edition, 866 p. DOI: 10.1016/C2015-0-05563-0
Ding, P., Zheng, Y., Deng, J-W., Jia, P., Kuijper, A. 2018. A light and faster regional convolutional neural network for object detection in optical remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing 141, 208–218. DOI: 10.1016/j.isprsjprs.2018.05.005
ESRI 2021. ArcGIS Pro online help. Online available at: https://pro.arcgis.com/en/pro-app/latest/tool-reference/image-analyst/an-overview-of-the-deep-learning-toolset-in-image-analyst.htm
Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A. 2010. The PASCAL Visual Object Classes (VOC) Challenge. International Journal of Computer Vision 88, 303–338. DOI: 10.1007/s11263-009-0275-4
Flood, N., Watson, F., Collett, L. 2019. Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia. International Journal of Applied Earth Observation and Geoinformation 82, 101897. DOI: 10.1016/j.jag.2019.101897
Gallwey, J., Robiati, C., Coggan, J., Vogt, D., Eyre, M. 2020. A Sentinel-2 based multispectral convolutional neural network for detecting artisanal small-scale mining in Ghana: Applying deep learning to shallow mining. Remote Sensing of Environment 248, 111970. DOI: 10.1016/j.rse.2020.111970
Goodfellow, I., Bengio, Y., Courville, A. 2016. Deep Learning. MIT Press, Online available at: http://www.deeplearningbook.org
González-Yebra, Ó., Aguilar, M.A., Nemmaoui, A., Aguilar, F.J. 2018. Methodological proposal to assess plastic greenhouses land cover change from the combination of archival aerial orthoimages and Landsat data. Biosystems Engineering 175, 36–51. DOI: 10.1016/j.biosystemseng.2018.08.009
Guo, Y., Xu, Y., Li, S. 2020. Dense construction vehicle detection based on orientation-aware feature fusion convolutional neural network. Automation in Construction 112, 103124. DOI: 10.1016/j.autcon.2020.103124
Howard, J., Gugger, S. 2020. Fastai: A layered API for Deep Learning. Information 11 (2), 108. DOI: 10.3390/info11020108
Jiang, B., Ma, X., Lu, Y., Li, Y., Feng, L., Shi, Z. 2019. Ship detection in spaceborne infrared images based on Convolutional Neural Networks and synthetic targets. Infrared Physics & Technology 97, 229–234. DOI: 10.1016/j.infrared.2018.12.040
Kattenborn, T., Leitloff, J., Schiefer, F., Hinz, S. 2021. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing 173, 24–49. DOI: 10.1016/j.isprsjprs.2020.12.010
Koc-San, D. 2013. Evaluation of different classification techniques for the detection of glass and plastic greenhouses from WorldView-2 satellite imagery. Journal of Applied Remote Sensing 7 (1), 073553. DOI: 10.1117/1.JRS.7.073553
LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D. 1990. Handwritten Digit Recognition with a Back-Propagation Network. Advances in Neural Information Processing Systems 2, 396–403.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C. 2016. SSD: Single Shot MultiBox Detector. European Conference on Computer Vision 2016, 21–37. DOI: 10.1007/978-3-319-46448-0_2
McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E. 1955. A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine 27 (4), 12–14. DOI: 10.1609/aimag.v27i4.1904
Mezősi, G. 2011. Magyarország természetföldrajza (Physical geography of Hungary). Academic Press, Budapest, 393 p.
Michie, D. 1968. "Memo" Functions and Machine Learning. Nature 218 (5136), 19–22. DOI: 10.1038/218019a0
Müller, B., Reinhardt, J., Strickland, M. T. 1995. Neural Networks: An Introduction. Springer, Berlin, pp. 307.
Nemmaoui, A., Aguilar, F.J., Aguilar, M.A., Qin, R. 2019. DSM and DTM generation from VHR satellite stereo imagery over plastic covered greenhouse areas. Computers and Electronics in Agriculture 164, 104903. DOI: 10.1016/j.compag.2019.104903
Nilsson, N.J. 1980. Principles of artificial intelligence. Morgan Kaufmann, California, 475 p.
Novelli, A., Aguilar, M.A., Nemmaoui, A., Aguilar, F.J., Tarantino, E. 2016. Performance evaluation of object based greenhouse detection from Sentinel-2 MSI and Landsat 8 OLI data: A case study from Almería (Spain). International Journal of Applied Earth Observation and Geoinformation 52, 403–411. DOI: 10.1016/j.jag.2016.07.011
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Online available at: https://arxiv.org/pdf/1912.01703v1.pdf
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, É. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12, 2825–2830. Online available at: https://arxiv.org/pdf/1201.0490.pdf
Pi, Y., Nath, N.D., Behzadan, A.H. 2020. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Advanced Engineering Informatics 43, 101009. DOI: 10.1016/j.aei.2019.101009
Poirson, P., Ammirato, P., Fu, C.Y., Liu, W., Košecká, J., Berg, A.C. 2016. Fast single shot detection and pose estimation. Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 676–684. DOI: 10.1109/3DV.2016.78
Rai, A.K., Mandal, N., Singh, A., Singh, K.K. 2020. Landsat 8 OLI Satellite Image Classification using Convolutional Neural Network. Procedia Computer Science 167, 987–993. DOI: 10.1016/j.procs.2020.03.398
Schiefer, F., Kattenborn, T., Frick, A., Frey, J., Schall, P., Koch, B., Schmidtlein, S. 2020. Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing 170, 205–215. DOI: 10.1016/j.isprsjprs.2020.10.015
Simon, H.A. 1995. Artificial intelligence: an empirical science. Artificial Intelligence 77 (1), 95–127. DOI: 10.1016/0004-3702(95)00039-H
Virnodkar, S.S., Pachghare, C.V., Jha, K.S. 2020. CaneSat dataset to leverage convolutional neural networks for sugarcane classification from Sentinel-2. Journal of King Saud University – Computer and Information Sciences. DOI: 10.1016/j.jksuci.2020.09.005 (in press)
Watanabe, S., Sumi, K., Ise, T. 2018. Using deep learning for bamboo forest detection from Google Earth images. bioRxiv 351643. DOI: 10.1101/351643
Wu, C., Deng, J. S., Wang, K., Ma, L. G., Tahmassebi, A. R. S. 2016. Object-based classification approach for greenhouse mapping using Landsat-8 imagery. International Journal of Agricultural and Biological Engineering 9, 79–88. DOI: 10.3965/j.ijabe.20160901.1414
Yang, D., Chen, J., Zhou, Y., Chen, X., Chen, X., Cao, X. 2017. Mapping plastic greenhouse with medium spatial resolution satellite data: Development of a new spectral index. ISPRS Journal of Photogrammetry and Remote Sensing 128, 47–60. DOI: 10.1016/j.isprsjprs.2017.03.002
Yang, G., Xu, R., Chen, Yi., Wu, Z., Du, Y., Liu, S., Qu, Z., Guo, K., Peng, C., Chang, J., Ge, Y. 2021. Identifying the greenhouse by Google Earth Engine to promote the reuse of fragmented land in urban fringe. Sustainable Cities and Society 67, 102743. DOI: 10.1016/j.scs.2021.102743
Zhang, D., Pan, Y., Zhang, J., Hu, T., Z, J., Li, N., Chen, Q. 2020. A generalized approach based on convolutional neural networks for large area cropland mapping at very high resolution. Remote Sensing of Environment 247, 111912. DOI: 10.1016/j.rse.2020.111912