DOI: 10.22389/0016-7126-2025-1018-4-40-47
1 Vasilenko D.V.
2 Permyakov R.V.
Year: 2025
№: 1018
Pages: 40-47
1, 2 Racurs
Abstract:
The authors explore the practical use of automated point cloud classification algorithms in urban planning. 3D digital modeling, which is crucial for city development, increasingly relies on photogrammetry, which generates 3D models from aerial and satellite imagery without the need for ground surveys. However, the resulting point clouds require further processing to classify their points into ground surface, buildings, vegetation, and other classes. The authors apply machine learning algorithms to this task and analyze their performance on point clouds obtained by the photogrammetric method in PHOTOMOD 8.0 from classical and multi-temporal satellite stereo imagery of urban areas in Moscow, Nizhny Novgorod, and Kaliningrad (Russia) and in Seoul (South Korea). The purpose of this work is to evaluate the practical applicability of the algorithms and to determine how the results of automatic classification of photogrammetric point clouds depend on whether classical or multi-temporal satellite stereo imagery of urban areas is used.
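To make the task described in the abstract concrete, below is a minimal Python sketch of per-point classification of a point cloud into ground, building and vegetation classes. It is an illustration only, not the PHOTOMOD 8.0 workflow and not the deep-learning methods cited in the references: the hand-crafted geometric features, the random forest classifier, the synthetic test scene and all parameters are assumptions introduced here for demonstration.

# Illustrative sketch only (assumptions throughout): a classical
# feature-plus-random-forest classifier that labels a point cloud as
# ground / building / vegetation. Not the authors' PHOTOMOD pipeline.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report


def per_point_features(xyz, k=16, radius=5.0):
    """Height above the local minimum (2D radius search) plus PCA planarity
    and verticality of the k-nearest-neighbour patch (3D search)."""
    n = len(xyz)
    feats = np.empty((n, 3))

    tree2d = cKDTree(xyz[:, :2])
    for i, nn in enumerate(tree2d.query_ball_point(xyz[:, :2], r=radius)):
        feats[i, 0] = xyz[i, 2] - xyz[nn, 2].min()   # height above local ground

    tree3d = cKDTree(xyz)
    _, idx = tree3d.query(xyz, k=k)
    for i, nn in enumerate(idx):
        w, v = np.linalg.eigh(np.cov(xyz[nn].T))     # eigenvalues ascending
        l3, l2, l1 = w + 1e-12
        feats[i, 1] = (l2 - l3) / l1                 # planarity (roofs, ground)
        feats[i, 2] = 1.0 - abs(v[2, 0])             # verticality (facades, trunks)
    return feats


def synthetic_scene(n=3000, seed=0):
    """Toy stand-in for a photogrammetric point cloud with known labels."""
    rng = np.random.default_rng(seed)
    ground = np.c_[rng.uniform(0, 100, (n, 2)), rng.normal(0.0, 0.1, n)]
    roofs = np.c_[rng.uniform(20, 40, (n, 2)), rng.normal(12.0, 0.1, n)]
    trees = np.c_[rng.uniform(60, 80, (n, 2)), rng.uniform(1.0, 8.0, n)]
    xyz = np.vstack([ground, roofs, trees])
    labels = np.repeat([0, 1, 2], n)                 # 0 ground, 1 building, 2 vegetation
    return xyz, labels


if __name__ == "__main__":
    xyz, y = synthetic_scene()
    X = per_point_features(xyz)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te),
                                target_names=["ground", "building", "vegetation"]))

The sketch conveys only the structure of the problem the article addresses: per-point geometric features go in, per-point class labels come out. The article itself evaluates classification quality on real photogrammetric point clouds derived from satellite stereo pairs.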
References: 
1.   Vasilenko D. V. Razrabotka algoritma klassifikatsii plotnykh oblakov tochek na primere gorodskoi zastroiki. Vestnik SSUGT, 2024, Vol. 29, no. 6, pp. 44–52. DOI: 10.33764/2411-1759-2024-29-6-44-52.
2.   Zakharova L. P., Kopytov A. A., Petrova E. V., Ptushkin S. V. Sozdanie fotogrammetricheskoi modeli goroda Moskvy po materialam bespilotnykh letatel'nykh apparatov. Geoprofi, 2022, no. 6, pp. 14–19.
3.   Permyakov R.V. (2021) Photogrammetric processing and application of multi-temporal satellite imagery stereopairs. Geodezia i Kartografia, 82(8), pp. 36-44. (In Russian). DOI: 10.22389/0016-7126-2021-974-8-36-44.
4.   3D-model' Nizhnego Novgoroda. Geoprofi, 2024, no. 4, p. 13.
5.   Furukawa Y., Ponce J. (2010) Accurate, dense, and robust multi-view stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, no. 8, pp. 1362–1376. DOI: 10.1109/TPAMI.2009.161.
6.   Gehrig S., Eberli F., Meyer T. (2009) A real-time low-power stereo vision engine using semi-global matching. Proceedings of Computer Vision Systems, pp. 134–143. DOI: 10.1007/978-3-642-04667-4_14.
7.   Hirschmüller H. (2008) Stereo processing by semi-global matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, no. 2, pp. 328–341. DOI: 10.1109/TPAMI.2007.1166.
8.   Hirschmüller H., Buder M., Ernst I. (2012) Memory Efficient Semi-Global Matching. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, no. I–3, pp. 371–376. DOI: 10.5194/isprsannals-I-3-371-2012.
9.   Hu Q., Yang B., Xie L., Rosa S., Guo Y., Wang Z., Trigoni N., Markham A. (2020) RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11105–11114. DOI: 10.1109/cvpr42600.2020.01112.
10.   Li Y., Bu R., Sun M., Wu W., Di X., Chen B. (2018) PointCNN: convolution on X-transformed points. Advances in Neural Information Processing Systems, pp. 828–838.
11.   Riegler G., Ulusoy A. O., Geiger A. (2016) OctNet: learning deep 3D representations at high resolutions. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6620–6629. DOI: 10.1109/CVPR.2017.701.
12.   Saouli A., Babahenini M. C., Medjram S. (2018) Accurate, dense and shading-aware multi-view stereo reconstruction using metaheuristic optimization. Multimedia Tools and Applications, no. 78, pp. 15053–15077. DOI: 10.1007/s11042-018-6904-6.
13.   Schenk T., Csatho B. (2002) Fusion of lidar data and aerial imagery for a more complete surface description. International Archives of Photogrammetry and Remote Sensing, no. 34, pp. 310–317.
14.   Tao G., Yasuoka Y. (2002) Combining high resolution satellite imagery and airborne laser scanning data for generating bare ground DEM in urban areas. Proceedings of International Workshop on Visualization and Animation of Groundscape. International Archives of Photogrammetry. Remote Sensing and Spatial Information Science, no. 34, pp. 310–317.
15.   Thomas H., Tsai Y.-H. H., Barfoot T. D., Zhang J. (2024) KPConvX: modernizing kernel point convolution with kernel attention. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5525–5535. DOI: 10.1109/CVPR52733.2024.00528.
16.   Thomas H., Qi C. R., Deschaud J.-E., Marcotegui B., Goulette F., Guibas L. (2019) KPConv: flexible and deformable convolution for point clouds. Proceedings of the IEEE International Conference on Computer Vision, pp. 6410–6419. DOI: 10.1109/iccv.2019.00651.
17.   Qi C., Su H., Mo K., Guibas L. J. (2016) PointNet: deep learning on point sets for 3D classification and segmentation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 77–85. DOI: 10.1109/CVPR.2017.16.
18.   Qi C., Yi L., Su H., Guibas L. J. (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, pp. 5105–5114.
Citation:
Vasilenko D.V., Permyakov R.V. (2025) Classification of photogrammetric point clouds obtained from stereo pairs of satellite images with PHOTOMOD DPS for solving urban planning tasks. Geodesy and cartography = Geodeziya i Kartografiya, 86(4), pp. 40-47. (In Russian). DOI: 10.22389/0016-7126-2025-1018-4-40-47
Publication History
Received: 18.12.2024
Accepted: 28.03.2025
Published: 20.05.2025