data to the client as the analysis proceeds would increase the likelihood of repeated use by the same user.
Another possibility for future work is scenic route selection. Based on the vegetation levels detected in multiple consecutive GSV images, users could request directions from one location to another and, instead of the fastest or shortest route, choose the most scenic one. This would be especially useful for tourists visiting landscape attractions or hiking trails.
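One way such scenic routing could be realized is sketched below. The road graph, its per-edge length and greenery attributes, and the weighting parameter alpha are illustrative assumptions rather than part of the current system; the idea is simply to discount each edge's length by the average green-view ratio measured from the GSV images sampled along it.

```python
import networkx as nx  # assumed routing backend; any graph library would work

def most_scenic_route(road_graph, source, target, alpha=0.5):
    """Return a route that trades distance off against greenery.

    Edges are assumed to carry `length` (meters) and `greenery`
    (mean green-view ratio in [0, 1] over the GSV images sampled
    along the edge). `alpha` in [0, 1] sets how strongly greener
    edges are favored; all names and values are illustrative.
    """
    def scenic_cost(u, v, data):
        # Greener edges get a discounted effective length.
        return data["length"] * (1.0 - alpha * data["greenery"])

    return nx.shortest_path(road_graph, source, target, weight=scenic_cost)

# Toy example: two candidate paths from A to C, the longer one greener.
G = nx.Graph()
G.add_edge("A", "B", length=100, greenery=0.1)
G.add_edge("B", "C", length=100, greenery=0.1)
G.add_edge("A", "D", length=120, greenery=0.9)
G.add_edge("D", "C", length=120, greenery=0.9)
print(most_scenic_route(G, "A", "C"))  # ['A', 'D', 'C'], the greener detour
```

Setting alpha to 0 recovers the ordinary shortest route, while larger values increasingly favor greener but possibly longer streets.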
Finally, the current implementation uses green colors to index vegetation coverage. This method will not work when a Google Street View image is captured in winter, after deciduous trees have shed their leaves, and it does not account for urban plants that are not green. Nevertheless, using green colors as a proxy for vegetation coverage can yield an estimate that is representative enough for urban design analytics. In the future, a deep learning segmentation model such as Mask R-CNN (Abdulla, 2017) could be applied to improve vegetation segmentation accuracy. In addition, we use benches as one example of urban amenities; future work can include other types of urban amenities in public spaces.
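To make the green-pixel proxy concrete, below is a minimal sketch of how vegetation coverage could be estimated for a single image. It is not the exact procedure of the current implementation; the HSV thresholds and the file name are assumptions chosen for illustration.

```python
import cv2  # OpenCV, assumed available for color-space conversion

def green_view_ratio(image_bgr, hue_range=(35, 85), min_sat=40, min_val=40):
    """Fraction of pixels whose color falls in a 'vegetation green' band.

    A sketch of the green-pixel proxy, not the system's exact method;
    thresholds are illustrative (OpenCV hue spans 0-179).
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    green = (
        (h >= hue_range[0]) & (h <= hue_range[1])   # greenish hues
        & (s >= min_sat) & (v >= min_val)           # skip gray/dark pixels
    )
    return float(green.mean())

# Illustrative usage on a downloaded GSV tile:
# img = cv2.imread("gsv_tile.jpg")
# print(f"Estimated vegetation coverage: {green_view_ratio(img):.1%}")
```

A hue-band threshold like this is inexpensive to compute per image, which is consistent with its role as a proxy rather than a full segmentation model.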
ACKNOWLEDGMENTS
The research is supported by the 2022 CCNY
College-wide Research Vision (CRV) Award and a
2022 CUNY Interdisciplinary Research Grant (IRG).
The work is also supported in part by the US Air
Force Office of Scientific Research (AFOSR) via
Award #FA9550-21-1-0082, the US National Science
Foundation (NSF) through Awards #2131186 (CISE-
MSI) and #1827505 (PFI), and the ODNI Intelligence
Community Center for Academic Excellence (IC
CAE) at Rutgers University, USA (#HHM402-19-1-
0003 and #HHM402-18-1-0007).
REFERENCES
Abdulla, W. (2017). Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow [Source Code]. GitHub. https://github.com/matterport/Mask_RCNN
Ahmetovic, D., Manduchi, R., Coughlan, J. M., & Mascetti,
S. (2015). Zebra crossing spotter: Automatic population
of spatial databases for increased safety of blind
travelers. In Proc. of Int. ACM SIGACCESS Conf. on
Computers and Accessibility, 251–258.
Biljecki, F., & Ito, K. (2021). Street view imagery in urban
analytics and GIS: A review. Landscape and Urban
Planning, 215, 104217.
Cheng, M., Zhang, Y., Su, Y., Álvarez, J. M., & Kong, H. (2018). Curb detection for road and sidewalk detection. IEEE Transactions on Vehicular Technology, 67, 10330–10342.
Du, Y., Duan, G., & Ai, H. (2012). Context-based text
detection in natural scenes. In Proc. of the IEEE Int.
Conf. on Image Processing, 1857–1860.
Li, X., & Ratti, C. (2018). Mapping the spatial distribution
of shade provision of street trees in Boston using Google Street View panoramas. Urban Forestry & Urban Greening, 31, 109–119.
Li, X., Zhang, C., Li, W., Ricard, R., Meng, Q., & Zhang,
W. (2015). Assessing street-level urban greenery using
Google Street View and a modified green view index.
Urban Forestry & Urban Greening, 14(3), 675–685.
Liu, Y., Zhang, M., Li, Q., Zhang, T., Yang, L., & Liu, J.
(2021). Investigation on the distribution patterns and
predictive model of solar radiation in urban street
canyons with panorama images. Sustainable Cities and
Society, 75, 103275.
Nianticlabs. (2019). Monodepth2 [Source Code].
https://github.com/nianticlabs/monodepth2
Qiu, S., Psyllidis, A., Bozzon, A., & Houben, G.-J. (2019).
Crowd-mapping urban objects from street-level
imagery. In Proc. of The World Wide Web Conf. (WWW '19).
Qiu, W., Li, W., Liu, X., & Huang, X. (2021). Subjectively
measured streetscape qualities for Shanghai with large-
scale application of computer vision and machine
learning. Proc. of the 2021 DigitalFUTURES, 242–251.
Stubbings, P., Peskett, J., Rowe, F., & Arribas-Bel, D. (2019). A hierarchical urban forest index using street-level imagery and deep learning. Remote Sensing, 11(12), 1395.
Sun, J., & Jacobs, D. W. (2017). Seeing what is not there: Learning context to determine where objects are missing. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 5716–5724.
Tang, Z., Ye, Y., Jiang, Z., Fu, C., Huang, R., & Yao, D.
(2020). A data-informed analytical approach to human-
scale greenway planning: Integrating multi-sourced
urban data with machine learning algorithms. Urban
Forestry & Urban Greening, 56, 126871.
Ultralytics. (2020). YOLOv5 [Source Code]. https://github.com/ultralytics/yolov5
Velasquez-Camacho, L., Cardil, A., Mohan, M., Etxegarai, M., Anzaldi, G., & de-Miguel, S. (2021). Remotely sensed tree characterization in urban areas: A review. Remote Sensing, 13(23), 4889.
Wang, R., Liu, Y., Lu, Y., Zhang, J., Liu, P., Yao, Y., &
Grekousis, G. (2019). Perceptions of built environment
and health outcomes for older Chinese in Beijing: A big
data approach with street view images and deep
learning technique. Computers, Environment and Urban Systems, 78, 101386.
Wang, X., Chen, J., Tang, H., & Zhu, Z. (2022). MultiCLU:
Multi-stage context learning and utilization for storefront accessibility detection and evaluation. In Proc. of
ACM Int. Conf. on Multimedia Retrieval, 304–312.
Xu, G., Zhu, X., Tapper, N., & Bechtel, B. (2019). Urban
climate zone classification using convolutional neural
network and ground-level images. Progress in Physical
Geography: Earth and Environment, 43(3), 410–424.