Simultaneous Visual Context-aware Path Prediction
Haruka Iesaki, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi, Yasunori Ishii, Kazuki Kozuka, Ryota Fujimura
2020
Abstract
Autonomous cars need to understand the environment around them to avoid accidents. Moving objects such as pedestrians and cyclists affect decisions about driving direction and behavior, and pedestrians do not always appear alone. We must therefore know simultaneously how many people are in the surrounding environment, and path prediction should account for the current state of the scene. To solve this problem, we propose a path prediction method that considers the moving context obtained from dashcams. Conventional methods receive the surrounding environment and positions as input and output probability values; in contrast, our approach predicts probabilistic paths by using visual information. Our method is an encoder-predictor model based on convolutional long short-term memory (ConvLSTM), which extracts visual information from object coordinates and images. We examine two types of input image and two types of model. The images relate to person context and are made from trimmed people's positions and the background without the captured people. The two model variants differ in whether the decoder input is fed back recursively, because future images cannot be obtained. Our results show that visual context carries useful information and yields better predictions than using coordinates alone. Moreover, we show that our method easily extends to predicting multiple people simultaneously.
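To illustrate the building block the abstract names, the following is a minimal sketch of a ConvLSTM cell, in which the usual LSTM gates are computed by convolutions so the hidden state keeps its spatial layout. This is not the authors' implementation; the channel counts, 3x3 kernel, and single shared gate kernel are illustrative assumptions.

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2D cross-correlation of a (C_in, H, W) input with a
    (C_out, C_in, 3, 3) kernel, using zero padding."""
    c_out, c_in, kh, kw = k.shape
    _, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(kh):
                for dj in range(kw):
                    out[o] += k[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Minimal ConvLSTM cell (illustrative, not the paper's architecture):
    one convolution over [x; h] produces all four gates."""
    def __init__(self, c_in, c_hid, rng):
        # Gates i, f, o, g stacked along the output-channel axis.
        self.k = 0.1 * rng.standard_normal((4 * c_hid, c_in + c_hid, 3, 3))

    def step(self, x, h, c):
        z = conv2d(np.concatenate([x, h], axis=0), self.k)
        i, f, o, g = np.split(z, 4, axis=0)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
        h_new = sigmoid(o) * np.tanh(c_new)                # spatial hidden state
        return h_new, c_new
```

An encoder-predictor model of the kind the abstract describes would run this cell over the observed frames to encode the scene, then roll it forward for future time steps; in the recursive variant the predictor feeds its own output back as the next input, since future images are unavailable.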
Paper Citation
in Harvard Style
Iesaki H., Hirakawa T., Yamashita T., Fujiyoshi H., Ishii Y., Kozuka K. and Fujimura R. (2020). Simultaneous Visual Context-aware Path Prediction. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP; ISBN 978-989-758-402-2, SciTePress, pages 741-748. DOI: 10.5220/0008921307410748
in Bibtex Style
@conference{visapp20,
author={Haruka Iesaki and Tsubasa Hirakawa and Takayoshi Yamashita and Hironobu Fujiyoshi and Yasunori Ishii and Kazuki Kozuka and Ryota Fujimura},
title={Simultaneous Visual Context-aware Path Prediction},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP},
year={2020},
pages={741-748},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008921307410748},
isbn={978-989-758-402-2},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP
TI - Simultaneous Visual Context-aware Path Prediction
SN - 978-989-758-402-2
AU - Iesaki H.
AU - Hirakawa T.
AU - Yamashita T.
AU - Fujiyoshi H.
AU - Ishii Y.
AU - Kozuka K.
AU - Fujimura R.
PY - 2020
SP - 741
EP - 748
DO - 10.5220/0008921307410748
PB - SciTePress