Cities in Sight: Autonomous UAVs for 3D Maps without LiDAR


Luis Alberto Chavarría-Zamora
Pablo Soto-Quirós

Abstract

In Costa Rica, urban topographic mapping relies on LiDAR systems that are expensive and perform unreliably in adverse conditions such as fog or rain. To overcome these limitations, a low-cost autonomous UAV platform was developed. Equipped only with RGB cameras and an IMU, these UAVs generate three-dimensional urban maps. The proposal integrates hybrid monocular depth estimation techniques, combining self-supervised learning and knowledge transfer, with collaborative swarm-exploration algorithms based on Bézier curves and pheromone modeling in Neo4j. After validation in simulation (PyBullet/Pygame) and in real indoor flights, the system achieved depth accuracy within tens of centimeters, produced georeferenced point clouds, and enabled semantic segmentation of traffic and obstacles using ViT and YOLOv8. The results show that this approach offers a viable and economical alternative to traditional LiDAR, with the potential to deploy real-world swarms and optimize resources.
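The abstract mentions exploration paths based on Bézier curves. As a minimal illustrative sketch (not the authors' implementation), a smooth UAV trajectory segment can be obtained by sampling a cubic Bézier curve between four waypoints; the control points below are hypothetical:

```python
# Sketch: sample a cubic Bezier curve as a smooth UAV path segment.
# Control points p0..p3 are hypothetical (x, y, z) waypoints in meters.

def cubic_bezier(p0, p1, p2, p3, n=20):
    """Return n+1 points on the cubic Bezier defined by four control points."""
    pts = []
    for i in range(n + 1):
        t = i / n
        # Bernstein basis polynomials of degree 3
        b0 = (1 - t) ** 3
        b1 = 3 * (1 - t) ** 2 * t
        b2 = 3 * (1 - t) * t ** 2
        b3 = t ** 3
        pts.append(tuple(b0 * a + b1 * b + b2 * c + b3 * d
                         for a, b, c, d in zip(p0, p1, p2, p3)))
    return pts

path = cubic_bezier((0, 0, 10), (20, 0, 12), (20, 30, 12), (40, 30, 10))
print(path[0], path[-1])  # the curve starts at p0 and ends at p3
```

The curve interpolates only its endpoints; the two interior control points shape the path without being visited, which is what makes Bézier segments convenient for smoothing swarm waypoints.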

Article Details

How to Cite
Chavarría-Zamora, L. A., & Soto-Quirós, P. (2026). Cities in Sight: Autonomous UAVs for 3D Maps without LiDAR. Tecnología En Marcha Journal, 39(6), pp. 60–69. https://doi.org/10.18845/tm.v39i6.8573
Section
Scientific article
