2 BACKGROUND WORK
The growing use of surveillance and guarding systems in many smart areas raises numerous problems of camera placement and coverage (J. Wang and N. Zhong, 2006). In Computational Geometry, for example, considerable progress has been made on the problem of "optimal guard location" for a polygonal area, notably the Art Gallery Problem (AGP), where the task is to determine a minimal number of guards and their fixed positions such that all points in a polygon are monitored (J. Urrutia, 2000).
Subsequently, a large body of work has been devoted to the problem of optimal camera placement to obtain complete coverage of a given area. For instance, Hörster and Lienhart (R. Lienhart and E. Horster, 2006) focus on maximizing coverage with respect to a predefined "sampling rate" which guarantees that an object in the area is observed at a certain minimum resolution. However, their camera model does not have a circular sensing range; instead, it uses a triangular sensing range. In (K. Chakrabarty, H. Qi, and E. Cho, 2002) and (S. S. Dhillon and K. Chakrabarty, 2003), the environment is modelled by a grid map. The authors compute the camera placement in such a way that the desired coverage is achieved and the overall cost is minimized: the cameras are placed on grid cells such that each cell is covered by at least one camera. Murat and Sclaroff (U. Murat and S. Sclaroff, 2006) model three types of cameras: fixed perspective, Pan-Tilt-Zoom (PTZ) and omnidirectional. However, they use only one type of camera at a time. Dunn and Olague (E. Dunn, G. Olague, and E. Lutton, 2006) consider the problem of optimal camera placement for exact 3D measurement of parts located at the center of view of several cameras. They demonstrate good results in simulation for known fixed objects. In (X. Chen and J. Davis, 2000), Chen and Davis develop a resolution metric for camera placement that takes occlusions into account. In (S. Chen and Y. Li, 2004), Chen and Li describe a camera placement graph using a genetic algorithm approach. Our work is oriented in the same direction as the approaches presented above. However, in our research, we consider the simultaneous use of both fixed and PTZ cameras in the same monitored space. We perform optimal static camera placement for the detection task and optimal PTZ camera placement to guarantee the identification and recognition requirements.
3 MULTI-CAMERA
PLACEMENT PROBLEM
Our objective is to find the optimal positions, orientations and the minimum number of fixed cameras required to cover a specific area for detection requirements, and then to find the optimal positions, orientations and the minimum number of PTZ cameras required to cover the same detected area for identification and recognition requirements. This is a typical optimization problem in which the constraints are given by the characteristics of both the cameras (field of view, focal length) and the environment (size, shape, obstacles and essential zones). In our approach, the minimization step is based on a linear integer programming method (S. S. Dhillon and K. Chakrabarty, 2003), (E. Horster and R. Lienhart, 2006). For the spatial representation of the environment, we use a grid of points (S. Thrun, 2002).
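For illustration only, the following is a minimal sketch, not the authors' implementation, of how such a minimization can be posed as a binary integer program over a grid of points, here using the PuLP library; the candidate poses and the coverage matrix `covers` are hypothetical placeholders that would in practice be derived from the FOV model described below.

```python
# Minimal binary integer programming sketch for minimum-camera coverage.
# Assumption: covers[j][i] is True when a camera at candidate pose j
# (a position + orientation pair) sees grid point i; values are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

def place_min_cameras(covers):
    n_poses = len(covers)        # candidate (position, orientation) pairs
    n_points = len(covers[0])    # grid points that must be covered

    prob = LpProblem("min_camera_cover", LpMinimize)
    # x[j] = 1 if a camera is installed at candidate pose j
    x = [LpVariable(f"x_{j}", cat=LpBinary) for j in range(n_poses)]

    # Objective: minimize the number of installed cameras
    prob += lpSum(x)

    # Coverage constraints: every grid point is seen by at least one camera
    for i in range(n_points):
        prob += lpSum(x[j] for j in range(n_poses) if covers[j][i]) >= 1

    prob.solve()  # default CBC solver shipped with PuLP
    return [j for j in range(n_poses) if value(x[j]) > 0.5]

# Toy example with 3 hypothetical candidate poses and 4 grid points
covers = [[True,  True,  False, False],
          [False, True,  True,  False],
          [False, False, True,  True]]
print(place_min_cameras(covers))  # -> [0, 2]
```

Each binary variable marks whether a candidate pose is selected, and the constraints force every grid point to be seen by at least one selected camera.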
This work assumes that both the sensing model and the environment are projected onto the ground surface, defining two-dimensional models. We model the static camera field of view (FOV) by an isosceles triangle, as shown in Fig. 1, whose working distance is calculated from the detection resolution requirements. We also model the surface-projected PTZ camera FOV by an isosceles triangle, taking into consideration the extended FOV due to motion, which in our case is 360° (Fig. 2). The total FOV is divided into sectors, each sector representing one resolution task based on the identification or recognition resolution value and taking into account the zoom effect, as shown in Figs. 3 and 4. This effect is produced by the zoom lens, which is often described by the ratio of its longest to shortest focal length. For instance, a zoom lens with focal lengths from 100 mm to 400 mm may be described as a 4:1 or "4X" zoom. That is, the zoom level of a visual sensor is directly proportional to its focal length.
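As a concrete illustration of this sensing model, the sketch below (with assumed pinhole-camera parameters that are not taken from the paper) derives the working distance of a triangular FOV from a required resolution in pixels per metre, computes the zoom ratio as the ratio of longest to shortest focal length, and tests whether a ground point falls inside the resulting isosceles-triangle footprint; a test of this kind could be used to populate the coverage matrix assumed in the integer-programming sketch above.

```python
# Sketch of the 2-D triangular FOV model; all numeric values are
# illustrative assumptions, not parameters from the paper.
import math

def working_distance(focal_mm, pixel_pitch_mm, required_px_per_m):
    """Farthest distance (m) at which a pinhole camera with the given focal
    length still delivers the required resolution on the target."""
    focal_px = focal_mm / pixel_pitch_mm      # focal length expressed in pixels
    return focal_px / required_px_per_m       # px/m on target = focal_px / distance

def zoom_ratio(shortest_f_mm, longest_f_mm):
    """Zoom level described as the ratio of longest to shortest focal length."""
    return longest_f_mm / shortest_f_mm       # e.g. 400 mm / 100 mm -> 4 ("4X")

def in_fov(cam_xy, heading_rad, half_angle_rad, work_dist, point_xy):
    """True if the point lies inside the isosceles-triangle footprint whose
    apex is the camera, axis is the heading and base lies at work_dist."""
    dx, dy = point_xy[0] - cam_xy[0], point_xy[1] - cam_xy[1]
    along = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)    # depth along axis
    across = -dx * math.sin(heading_rad) + dy * math.cos(heading_rad)  # lateral offset
    return 0.0 <= along <= work_dist and abs(across) <= along * math.tan(half_angle_rad)

# Assumed example: 8 mm lens, 5 micron pixels, detection needs 50 px/m,
# 60 degree total FOV, camera at the origin looking along +x.
d_max = working_distance(8.0, 0.005, 50.0)                 # 32.0 m
print(d_max, zoom_ratio(100.0, 400.0))                     # 32.0 4.0
print(in_fov((0.0, 0.0), 0.0, math.radians(30.0), d_max, (10.0, 3.0)))  # True
```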
3.1 Static Camera
We denote the discretized sensor space to be deployed in a given area as S = {s_i}, i = 1, 2, …, N, where the area is approximated by a polygon A. In our work, we focus on discretized polygonal fields. For each deployed sensor s_i, we know its location (x_i, y_i) in the 2-D space as well as the orientation parameters required to model the static camera Field of View (FOV). We have modelled the FOV as done in (Morsly, Y.; Aouf, N.; Djouadi, M.S. and