C3 = ]0.5, 1], C4 = ]1, 5], C5 = ]5, 20], C6 = ]20, 200], C7 = ]200, ∞[. From the obtained results, we can conclude that high values of the rotation angles are better estimated than low values; the best estimates are obtained, in decreasing order of accuracy, with line segments of categories C5, C6, C4 and C3. However, line segments of the first and seventh categories should be avoided.
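The classification of segment slopes into these categories can be sketched as follows. The bounds of C3 to C7 follow the intervals above; the exact bounds separating C1 and C2 are not given in this section, so slopes of at most 0.5 are lumped together here as an assumption.

```python
def slope_category(slope):
    """Classify the absolute slope of a line segment into the paper's
    categories. Bounds for C3..C7 follow the text; the split between
    C1 and C2 (slopes from zero up to 0.5) is not specified here."""
    s = abs(slope)
    if s > 200:
        return "C7"     # very high slope: to be avoided
    if s > 20:
        return "C6"
    if s > 5:
        return "C5"     # best-estimated category
    if s > 1:
        return "C4"
    if s > 0.5:
        return "C3"
    return "C1/C2"      # near-zero slopes: to be avoided
```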
We also studied the influence of noise on the uncertainty of the rotation-angle estimates. Strong noise decreases the accuracy of the estimation; however, we can conclude that slopes in categories C4 and C5 are the most robust to noise.
A set of images of an indoor 3D scene was taken by the camera after two rotations. Interest points are extracted using the Harris detector (Harris and Stephens, 1988). Some of these interest points are chosen to define three line segments (S1, S2, S3). We tested many combinations of the six interest points to define the three segments and applied our algorithm to these images, eliminating the combinations whose slopes are near zero (category C1) or very high (category C7). We kept only the combinations of interest points defining line segments of categories C4, C5, C3 and C6. The averages of the values of β and θ computed by this algorithm are taken as the estimated values. In our case, the errors in the values estimated from the three images are (1.59°, 0.93°).
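The selection of point combinations described above might be sketched as follows. This is a minimal illustration: the helper name is hypothetical, and the actual estimation of β and θ from the retained segments relies on the slope relation derived earlier in the paper, which is not reproduced here.

```python
from itertools import combinations

def usable_segments(points):
    """Enumerate line segments from pairs of interest points (e.g. Harris
    corners given as (x, y) tuples) and keep only those whose slope falls
    in the accepted categories C3..C6, i.e. 0.5 < |slope| <= 200."""
    kept = []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x2 == x1:                  # vertical segment: slope undefined, skip
            continue
        slope = (y2 - y1) / (x2 - x1)
        s = abs(slope)
        if s <= 0.5 or s > 200:       # near-zero (C1/C2) and C7 are avoided
            continue
        kept.append(((x1, y1), (x2, y2), slope))
    return kept
```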
5 CONCLUSIONS
In this paper we addressed the problem of camera motion estimation from line and point correspondences across multiple views. We first derived the mathematical relation between the slopes of lines in the various images acquired after a rotation of the camera.
Assuming that lines in successive images are tracked, we used this relation to estimate the rotation angles of the camera.
The advantage of the proposed method is that it does not require any knowledge of the geometric model of the camera; it uses only the slope of a line segment as 2D primitive.
The results obtained from experiments on synthetic and real images are promising and encourage us to apply the method in other applications, such as head pose estimation, where the interest points of the head move around a fixed camera.
REFERENCES
A. Biswas, P. Guha, A. M. and Venkatesh, K. (2006). In-
trusion detection and tracking with pan-tilt cameras.
In Proceedings of the Third IET International Confer-
ence on Visual Information Engineering, pp. 565-571,
Bangalore (India).
A. Yamada, Y. S. and Miura, J. (2002). Tracking players and a ball in video image sequence and estimating camera parameters for 3D interpretation of soccer games. In IEEE ICPR'02.
B. Rousso, S. Avidan, A. S. and Peleg, S. (1996). Robust recovery of camera rotation from three frames. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Bartoli, A. and Sturm, P. (2003). Multiple-view struc-
ture and motion from line correspondences. In Ninth
IEEE International Conference on Computer Vision
(ICCV’03).
C. Jonchery, F. D. and Koepfler, G. (2008). Camera motion estimation through planar deformation determination. In Journal of Mathematical Imaging and Vision, Vol. 32, pp. 73-87.
Duda, R. and Hart, P. (1988). Pattern Classification and Scene Analysis. Wiley, New York, USA, 1st edition.
Harris, C. and Stephens, M. (1988). A combined corner and edge detector. In Alvey Vision Conference, pp. 147-152.
O. Faugeras, Q. L. and Papadopoulo, T. (2000). The Geometry of Multiple Images. MIT Press, Cumberland, USA, 1st edition.
R. Ewerth, M. Schwalb, P. T. and Freisleben, B. (2004). Es-
timation of arbitrary camera motion in mpeg videos.
In Proceedings of the 17th International Conference
on Pattern Recognition (ICPR04).
Urfalioglu, O. (2004). Robust estimation of camera rota-
tion, translation and focal length at high outlier rates.
In Proceedings of the First Canadian Conference on
Computer and Robot Vision.
VISAPP 2009 - International Conference on Computer Vision Theory and Applications