ject may result in different classifications of its parts.
Fig. 6(d) shows that the distant region of the river is
classified as a static region, the nearer region of the
river is classified as having non-repetitive motion,
and the rest of the region is classified as having
repetitive motion.
5 CHALLENGES
There may be scenes with very large object displacements
where both the standard optical flow method and the
LDOF algorithm yield less accurate results. Even though
the scene may contain regions of non-repetitive motion,
these can go unidentified if the motion is absent from
the sampled images. The algorithm may also fail, for
instance, when there is lightning in the scene. This is
expected, as optical flow algorithms work under the
assumption of constant brightness.
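The effect of violating brightness constancy can be sketched numerically. The following illustrative example (synthetic data, not the paper's implementation) applies a 20% global brightness change to a static frame and solves the 1-D brightness constancy equation Ix·u + It = 0; the lighting change alone produces a spurious motion estimate even though nothing in the scene moved.

```python
import numpy as np

# Synthetic static scene: a smooth horizontal intensity ramp (no motion between frames).
x = np.arange(64, dtype=float)
frame1 = np.tile(x, (64, 1))
frame2_same = frame1.copy()      # same lighting, no motion
frame2_bright = 1.2 * frame1     # 20% global brightness increase, still no motion

# Spatial and temporal derivatives at the image centre.
Ix = np.gradient(frame1, axis=1)[32, 32]
It_same = (frame2_same - frame1)[32, 32]
It_bright = (frame2_bright - frame1)[32, 32]

# Brightness constancy in 1-D: Ix*u + It = 0  =>  u = -It / Ix
u_same = -It_same / Ix       # 0: no motion detected, as expected
u_bright = -It_bright / Ix   # nonzero: spurious motion caused purely by the lighting change

print(u_same, u_bright)
```

This is why a lightning flash or a fluctuating light source corrupts the flow field: the temporal derivative becomes nonzero everywhere, and the solver attributes that change to motion.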
A change in the lighting conditions of the scene therefore
leads to errors in the segmentation, since optical flow
algorithms depend on the brightness values at the pixel
locations. Such exceptional cases arise in complex
natural scenes. Finally, the use of unsupervised learning
such as K-means clustering introduces the problem that
different initializations can produce different partitions
of the same data.
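The K-means instability can be demonstrated with a minimal sketch (hypothetical 2-D points, not the motion features used in the paper): four points at the corners of a unit square admit two equally good 2-way partitions, and plain Lloyd's algorithm converges to a different one depending on which points seed the centroids.

```python
import numpy as np

def kmeans(points, init_idx, iters=10):
    """Plain Lloyd's algorithm, centroids initialised from the given point indices."""
    centroids = points[list(init_idx)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        for k in range(len(centroids)):
            centroids[k] = points[labels == k].mean(axis=0)
    return labels

# Four points at the corners of a unit square: two equally good partitions exist.
pts = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])

split_a = kmeans(pts, (0, 1))  # seeds at the left corners  -> horizontal split
split_b = kmeans(pts, (0, 2))  # seeds at the bottom corners -> vertical split

print(split_a)  # [0 1 0 1]
print(split_b)  # [0 0 1 1]
```

Both runs converge to a valid local optimum with identical cost, yet the resulting cluster memberships differ, which is exactly the ambiguity noted above.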
6 FUTURE WORK
Classification of motion in a dynamic scene remains an
open research problem when the scene is affected by
drastic illumination changes. In some of the previously
considered examples, we saw that fluctuating scene
illumination leads to poor results from the proposed
algorithm (Fig. 4). Variations in camera parameters
such as aperture, focal length, and shutter speed may
also cause problems. In future work, we plan to extend
the proposed approach for use in video synopsis and
motion magnification.
7 CONCLUSIONS
The proposed approach segments the scene into static,
repetitive-motion, and non-repetitive-motion regions,
and is effective for sampling rates between one frame
in 30 and one frame in 5. For scenes containing large
displacements, LDOF gives better results. The approach
fails in scenes where the lighting conditions change, as
the brightness constancy assumption no longer holds.
Classification is also difficult when the depth of the
object varies widely. We hope to adapt this approach to
other computer vision applications that involve
segmenting objects based on the motion they exhibit.
REFERENCES
Black, M. J. and Anandan, P. (1996). The robust estima-
tion of multiple motions: Parametric and piecewise-
smooth flow fields. Computer Vision and Image Un-
derstanding, 63(1):75–104.
Brox, T. and Malik, J. (2011). Large displacement optical
flow: descriptor matching in variational motion esti-
mation. Pattern Analysis and Machine Intelligence,
IEEE Transactions on, 33(3):500–513.
Van den Bergh, M. and Van Gool, L. J. (2012). Real-time stereo
and flow-based video segmentation with superpixels.
In WACV, pages 89–96. IEEE.
Derpanis, K. G. and Wildes, R. (2012). Spacetime tex-
ture representation and recognition based on a spa-
tiotemporal orientation analysis. Pattern Analysis
and Machine Intelligence, IEEE Transactions on,
34(6):1193–1205.
Horn, B. K. and Schunck, B. G. (1981). Determining optical
flow. Artificial Intelligence, 17(1):185–203.
Lucas, B. D. and Kanade, T. (1981). An iterative image
registration technique with an application to stereo
vision. In Proceedings of the 7th International Joint
Conference on Artificial Intelligence (IJCAI '81),
pages 674–679.
Ochs, P. and Brox, T. (2012). Higher order motion models
and spectral clustering. In Computer Vision and Pat-
tern Recognition (CVPR), 2012 IEEE Conference on,
pages 614–621. IEEE.
Peterson, B. (2010). Understanding Exposure: How to
Shoot Great Photographs with Any Camera. Amphoto
Books.
Pritch, Y., Rav-Acha, A., and Peleg, S. (2008). Nonchrono-
logical video synopsis and indexing. Pattern Analy-
sis and Machine Intelligence, IEEE Transactions on,
30(11):1971–1984.
Ren, X. and Malik, J. (2003). Learning a classification
model for segmentation. In Computer Vision, 2003.
Proceedings. Ninth IEEE International Conference
on, pages 10–17, vol. 1.
Stauffer, C. and Grimson, W. E. L. (1999). Adaptive
background mixture models for real-time tracking.
In Computer Vision and Pattern Recognition, 1999.
IEEE Computer Society Conference on, volume 2.
IEEE.
Wadhwa, N., Rubinstein, M., Durand, F., and Freeman,
W. T. (2013). Phase-based video motion processing.
ACM Trans. Graph. (Proceedings SIGGRAPH 2013),
32(4).