
Crops Classification in Small Areas Using Unmanned Aerial Vehicles (UAV) and Deep Learning Pre-trained Models from Detectron2

Chapter in: Handbook on Decision Making

Abstract

Crop identification has long been a subject of study. Surveys based on satellites and manned aircraft are carried out to determine the areas planted with different crops, but these methods are expensive and time-consuming, which has restricted their use to special cases. For small farmers, determining the planting areas of different species could lead to better coordination and distribution of the supply of agricultural products. The main objective of this work is to explore the use of drones to capture images, together with pre-trained Deep Learning models, to classify crops in small areas, in order to help estimate planting areas for horticultural products grown on a small scale. This is an exploratory study in which the Detectron2 Model Zoo was evaluated. Detectron2, developed by Facebook AI, offers a large number of pre-trained models. These models serve many object detection tasks, and evaluating their performance can be a sensible first step toward a practical solution, before building a model from scratch. In this work, we did not modify the architectures of the original models, but we explored variations of the corresponding hyperparameters. We selected three classes of areas to detect in our experiments: uncultivated areas (no crop), green onion crops (onion), and foliage flower crops (foliage). The results show that small-scale crops can be detected and classified with the help of Detectron2 models. However, further evaluations are still needed to compare these models with other state-of-the-art methods, and architectural modifications may be required to improve their performance. Future work should address plant counting, so that quantities can be estimated with greater precision and planting activities planned more efficiently.
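The chapter itself does not include code, but the workflow described in the abstract maps naturally onto the Detectron2 API. The sketch below illustrates one plausible setup: fine-tuning a pre-trained Model Zoo baseline for the three classes (no crop, onion, foliage) while varying a few hyperparameters. The choice of the Mask R-CNN R50-FPN baseline, the dataset names uav_crops_train and uav_crops_val, the annotation paths, and the solver values are all illustrative assumptions, not details taken from the chapter.

    # Minimal sketch (not the authors' code): fine-tuning a Detectron2
    # Model Zoo baseline for the three classes described in the abstract.
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.data.datasets import register_coco_instances
    from detectron2.engine import DefaultTrainer

    # Hypothetical COCO-format UAV datasets; names and paths are assumptions.
    register_coco_instances("uav_crops_train", {}, "train/annotations.json", "train/images")
    register_coco_instances("uav_crops_val", {}, "val/annotations.json", "val/images")

    cfg = get_cfg()
    # Assumed baseline: the chapter evaluates several pre-trained models
    # without changing their architectures; Mask R-CNN R50-FPN is one example.
    baseline = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
    cfg.merge_from_file(model_zoo.get_config_file(baseline))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(baseline)

    cfg.DATASETS.TRAIN = ("uav_crops_train",)
    cfg.DATASETS.TEST = ("uav_crops_val",)

    # Three target classes: no crop, onion, foliage.
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3

    # Illustrative hyperparameter variations of the kind the study explores.
    cfg.SOLVER.IMS_PER_BATCH = 2
    cfg.SOLVER.BASE_LR = 0.00025
    cfg.SOLVER.MAX_ITER = 1500

    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()

Evaluation against the validation split (for example, with detectron2.evaluation.COCOEvaluator) would then yield the IoU-based detection metrics typically reported for such models.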



Author information

Correspondence to John Willian Branch-Bedoya.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Restrepo-Arias, J.F., Arregocés-Guerra, P., Branch-Bedoya, J.W. (2023). Crops Classification in Small Areas Using Unmanned Aerial Vehicles (UAV) and Deep Learning Pre-trained Models from Detectron2. In: Zapata-Cortes, J.A., Sánchez-Ramírez, C., Alor-Hernández, G., García-Alcaraz, J.L. (eds) Handbook on Decision Making. Intelligent Systems Reference Library, vol 226. Springer, Cham. https://doi.org/10.1007/978-3-031-08246-7_12
