5 CONCLUSIONS
In this paper, we proposed an efficient stereo matching method that is robust to lighting changes and offers fast matching speed. The proposed method transforms the input image into an image that is independent of lighting factors. The matching cost is then calculated using the concept of the census transform. In addition, we calculate the average pixel values of the left and right images; these values are used to select whether the original color image or the transformed image serves as the input for stereo matching before the cost computation. As a result, the proposed method computed the matching cost three times faster than ANCC, and it achieved 5.2% and 1.68% lower errors than ANCC under exposure and illumination changes, respectively.
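As a rough illustration of the census-based cost computation summarized above, the following sketch (our own illustrative code, not the paper's implementation; the window size, function names, and circular boundary handling are assumptions) computes per-pixel census codes and a Hamming-distance matching cost at a given disparity:

```python
import numpy as np

def census_transform(img, window=5):
    """Census transform: encode each pixel as a bit string that records
    which neighbours in a (window x window) patch are darker than the
    centre pixel. Depends only on relative intensity ordering."""
    r = window // 2
    census = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            # circular shift used here for simplicity (borders wrap)
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            census = (census << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return census

def hamming_cost(census_left, census_right, d):
    """Matching cost at disparity d: Hamming distance between the census
    codes of corresponding left/right pixels."""
    shifted = np.roll(census_right, d, axis=1)  # align right image by d
    xor = census_left ^ shifted
    cost = np.zeros(xor.shape, dtype=np.int32)
    while np.any(xor):  # popcount, one bit plane at a time
        cost += (xor & np.uint64(1)).astype(np.int32)
        xor >>= np.uint64(1)
    return cost
```

Because the census code encodes only the ordering of intensities within a patch, it is unchanged by any monotonic brightness change, which is the property that makes census-style costs robust to exposure and illumination variation.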
ACKNOWLEDGEMENTS
This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2014-3-00077, AI National Strategy Project) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C2087489).
REFERENCES
Zhang, K., Fang, Y., Min, D., Sun, L., Yang, S., Yan, S., &
Tian, Q. (2014). Cross-scale cost aggregation for stereo
matching. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (pp. 1590-
1597).
Sun, J., Shum, H. Y., & Zheng, N. N. (2002). Stereo
matching using belief propagation. In European
Conference on Computer Vision (pp. 510-524).
Boykov, Y., Veksler, O., & Zabih, R. (2001). Fast
approximate energy minimization via graph cuts. IEEE
Transactions on Pattern Analysis and Machine
Intelligence, 23(11), 1222-1239.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012).
ImageNet classification with deep convolutional neural
networks. In Advances in Neural Information
Processing Systems (pp. 1097-1105).
Simonyan, K., & Zisserman, A. (2014). Very deep
convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual
learning for image recognition. In Proceedings of the
IEEE Conference on Computer Vision and Pattern
Recognition (pp. 770-778).
Zbontar, J., & LeCun, Y. (2015). Computing the stereo
matching cost with a convolutional neural network. In
Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (pp. 1592-1599).
Luo, W., Schwing, A. G., & Urtasun, R. (2016). Efficient
deep learning for stereo matching. In Proceedings of the
IEEE Conference on Computer Vision and Pattern
Recognition (pp. 5695-5703).
Mayer, N., Ilg, E., Hausser, P., Fischer, P., Cremers, D.,
Dosovitskiy, A., & Brox, T. (2016). A large dataset to
train convolutional networks for disparity, optical flow,
and scene flow estimation. In Proceedings of the IEEE
Conference on Computer Vision and Pattern
Recognition (pp. 4040-4048).
Heo, Y. S., Lee, K. M., & Lee, S. U. (2010). Robust stereo
matching using adaptive normalized cross-correlation.
IEEE Transactions on Pattern Analysis and Machine
Intelligence, 33(4), 807-822.
Scharstein, D., & Pal, C. (2007). Learning conditional
random fields for stereo. In Proceedings of the IEEE
Conference on Computer Vision and Pattern
Recognition (pp. 1-8).
Finlayson, G., & Xu, R. (2003). Illuminant and gamma
comprehensive normalisation in log RGB space. Pattern
Recognition Letters, 24(11), 1679-1690.
Tomasi, C., & Manduchi, R. (1998). Bilateral filtering for
gray and color images. In Proceedings of the IEEE
International Conference on Computer Vision (pp. 839-846).
Li, G. (2012). Stereo matching using normalized cross-
correlation in LogRGB space. In IEEE Conference on
Computer Vision in Remote Sensing (pp. 19-23).
Chang, Y. J., & Ho, Y. S. (2019). Adaptive Pixel-wise and
Block-wise Stereo Matching in Lighting Condition
Changes. Journal of Signal Processing Systems, 91(11-
12), 1305-1313.
Zabih, R., & Woodfill, J. (1994). Non-parametric local
transforms for computing visual correspondence. In
European Conference on Computer Vision (pp. 151-
158).