Authors:
Nicholas Karkut (1); Alexey Kiriluk (1); Zihao Zhang (2) and Zhigang Zhu (1,3)
Affiliations:
(1) Computer Science Department, The City College of New York - CUNY, New York, NY 10031, U.S.A.
(2) Graduate Landscape Architecture Program, The City College of New York - CUNY, New York, NY 10031, U.S.A.
(3) PhD Program in Computer Science, The Graduate Center - CUNY, New York, NY 10016, U.S.A.
Keyword(s):
3D Object Detection, Computer Vision, Image Segmentation, Depth Computation, Landscape Architecture.
Abstract:
This paper proposes a computer vision-based workflow that analyzes Google 360-degree street views to assess the quality of urban spaces in terms of vegetation coverage and the accessibility of urban amenities such as benches. Image segmentation methods were used to produce annotated images quantifying the amounts of vegetation, sky, and street. Two deep learning models were used -- Monodepth2 for depth estimation and YOLOv5 for object detection -- to create a 360-degree diagram of vegetation and benches at a given location. The automated workflow allows non-expert users such as planners, designers, and communities to analyze and evaluate urban environments with Google Street Views. The workflow consists of three components: (1) a user interface for location selection; (2) vegetation analysis, bench detection, and depth estimation; and (3) visualization of vegetation coverage and amenities. The resulting analysis and visualization could inform better urban design outcomes.
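The core geometry of component (2) can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes hypothetical inputs: a per-pixel segmentation mask, and bench detections given as a horizontal pixel position in an equirectangular panorama plus a depth estimate (as Monodepth2 and YOLOv5 would provide). It computes a vegetation-coverage ratio and places each bench on a polar (360-degree) diagram around the viewpoint.

```python
import math

# Hypothetical class code for vegetation pixels in the segmentation mask.
VEGETATION = 1

def vegetation_ratio(mask):
    """Fraction of pixels labeled as vegetation in a 2D segmentation mask
    (list of rows of integer class codes)."""
    total = sum(len(row) for row in mask)
    green = sum(row.count(VEGETATION) for row in mask)
    return green / total if total else 0.0

def bench_to_polar(x_center_px, depth_m, pano_width_px):
    """Map a detection's horizontal pixel position in an equirectangular
    360-degree panorama to an azimuth angle, and pair it with the estimated
    depth to get (x, y) coordinates on a top-down polar diagram."""
    azimuth_deg = (x_center_px / pano_width_px) * 360.0
    theta = math.radians(azimuth_deg)
    return azimuth_deg, (depth_m * math.cos(theta), depth_m * math.sin(theta))

# Toy example: a 2x4 mask with 3 vegetation pixels, and one bench detected
# at the horizontal midpoint of a 2048-px-wide panorama, 5 m away.
ratio = vegetation_ratio([[1, 1, 0, 0], [1, 0, 0, 0]])
azimuth, xy = bench_to_polar(1024, 5.0, 2048)
```

With these two pieces, a full-panorama sweep reduces to running segmentation and detection on the stitched street view and plotting each `(azimuth, depth)` pair on the diagram described in the abstract.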