Abstract
Crop identification has long been a subject of study. Surveys carried out with satellites and manned aircraft are used to determine the areas planted with different crops for a variety of purposes, but the methods involved are expensive and time-consuming, which has restricted their use to special cases. For small farmers, determining the planting areas of different species could enable better coordination and distribution of the supply of agricultural products. The main objective of this work is to explore the use of drones to capture images and pre-trained Deep Learning models to classify crops in small areas, as a solution to problems related to estimating planting areas for horticultural products grown on a small scale. This is an exploratory work in which the Detectron2 Model Zoo was evaluated. Detectron2, developed by Facebook AI, offers a large number of pre-trained models. These models can serve many object detection tasks, and evaluating their performance can be a first step in the search for a practical solution before building a model from scratch. In this work, we did not modify the architectures of the original models, but we explored variations of their hyperparameters. We selected three classes of areas to detect in our experiments: uncultivated areas (no crop), green onion crops (onion), and foliage flower crops (foliage). The results show that it is possible to detect and classify small-scale crops with the help of Detectron2 models. However, further evaluations are needed to compare these models with other state-of-the-art methods, and the architectures would have to be modified to improve their performance. In future work, it is recommended to address aspects such as plant counting, so that quantities can be determined with greater precision and planning activities carried out more efficiently.
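As an illustration of the workflow described above, the sketch below shows how a pre-trained model from the Detectron2 Model Zoo can be fine-tuned on a three-class crop dataset. The chosen backbone (Mask R-CNN with a ResNet-50 FPN), the dataset names (crops_train, crops_val), the annotation paths, and the solver values are assumptions for illustration only, not the exact configuration used in the chapter.

```python
# Minimal fine-tuning sketch with Detectron2 (assumed configuration, not the
# authors' exact setup): a COCO-pretrained Mask R-CNN is adapted to three
# crop classes (no crop, onion, foliage).
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical dataset registration: COCO-style annotations of the UAV images.
register_coco_instances("crops_train", {}, "annotations/train.json", "images/train")
register_coco_instances("crops_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
# Load a pre-trained model from the Model Zoo; any other detection or
# segmentation config from the zoo could be substituted here.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

cfg.DATASETS.TRAIN = ("crops_train",)
cfg.DATASETS.TEST = ("crops_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3  # no crop, onion, foliage

# Hyperparameters of the kind varied in the experiments (illustrative values).
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 1500
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

The same pattern (swap the config file and checkpoint URL, keep the dataset registration and class count) is how different pre-trained architectures from the zoo would be compared without changing their internals.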
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Restrepo-Arias, J.F., Arregocés-Guerra, P., Branch-Bedoya, J.W. (2023). Crops Classification in Small Areas Using Unmanned Aerial Vehicles (UAV) and Deep Learning Pre-trained Models from Detectron2. In: Zapata-Cortes, J.A., Sánchez-Ramírez, C., Alor-Hernández, G., García-Alcaraz, J.L. (eds) Handbook on Decision Making. Intelligent Systems Reference Library, vol 226. Springer, Cham. https://doi.org/10.1007/978-3-031-08246-7_12
DOI: https://doi.org/10.1007/978-3-031-08246-7_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-08245-0
Online ISBN: 978-3-031-08246-7
eBook Packages: Intelligent Technologies and Robotics