although in other applications of the algorithm, this may be a necessary requirement.
Image Graininess. Some types of images can have a grainy structure, often due to the nature or features of the image acquisition system. This is a typical problem in cases where an image must be acquired at maximal resolution. The main problem with processing coarse-grained maps is the impracticality of detecting object boundaries: the boundaries that are detected are associated with the grains rather than with the contours of objects. A typical solution is to smooth the image using mild diffusion, in which the boundaries of the grains become fuzzy and diffuse into one another, while the contours of objects remain (albeit over a larger spatial extent). A similar effect can be obtained using the median filter. However, use of the median filter involves an inevitable loss of information in shallow details (i.e. regions of low grey-level variability). In this thesis, the Wiener filter (Wiener, 1949) is used, which is computationally efficient, robust and optimal with regard to grain diffusion and information preservation. This filter eliminates high-frequency noise and thus does not distort the edges of objects.
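As a minimal sketch of this smoothing step, the fragment below applies SciPy's adaptive Wiener filter to a grainy image and, for comparison, a median filter; the 5x5 window size and the function name smooth_grain are illustrative assumptions rather than the exact settings used in this work.

    import numpy as np
    from scipy.signal import wiener
    from scipy.ndimage import median_filter

    def smooth_grain(image, window=5):
        """Suppress grain before boundary detection.

        The adaptive Wiener filter attenuates high-frequency grain while
        largely preserving object contours; the median filter is included
        only for comparison, at the cost of shallow (low-contrast) detail.
        """
        img = np.asarray(image, dtype=float)
        wiener_smoothed = wiener(img, mysize=window)   # local-statistics Wiener filter
        median_smoothed = median_filter(img, size=window)
        return wiener_smoothed, median_smoothed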
Other solutions include preliminary de-zooming for the purpose of decreasing the grain size down to, and including, the size of a single pixel. Such a method involves a loss of shallow details; however, the size of the map (and, accordingly, the processing time) decreases. The other advantage of such a method concerns hardware implementation, e.g. the application of a nozzle to an optical system.
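A simple software form of this de-zooming is block averaging, sketched below under the assumption that the reduction factor is chosen to be roughly the grain size in pixels; the function name dezoom is illustrative.

    import numpy as np

    def dezoom(image, factor):
        """Downscale by block averaging so that each grain shrinks towards
        the size of a single pixel. The image is cropped to a multiple of
        the factor before the blocks are averaged."""
        h, w = image.shape
        h, w = h - h % factor, w - w % factor
        blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))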
In situations where the methods described here are unacceptable, it is necessary to use a more complex, higher-quality detector for boundary estimation, which is discussed below.
Geometrical Distortions. In practice, the most important geometrical distortions are directly related to the character of the image acquisition. In the majority of cases it is possible to use a standard video camera as the image sensor. However, the majority of industrial video systems use an interlaced scan technique for image capture. The image is therefore captured as interleaved even and odd lines, with a time delay between neighbouring lines equal to half the frame acquisition time.
If there is a moving object in the field of view, its position on the even and odd lines will differ, and the picture of the object will be 'washed out' in the horizontal direction. This is a particularly important problem for edge extraction, since vertical edges can no longer be extracted cleanly. The elementary solution to this problem is simply to skip the even or the odd field (preferably the even field, as the odd field carries the later information). Another way is to handle the even and odd fields separately, provided that the processing speed allows a practical implementation. If this is not possible, it is necessary to use a video system with non-interlaced (progressive) scanning.
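As a sketch of the second option, an interlaced frame can be split into its two fields by simple line slicing, after which each field is processed separately (or one of them discarded); the function name split_fields and the single-channel frame layout are assumptions.

    import numpy as np

    def split_fields(frame):
        """Separate an interlaced frame (rows x columns) into its even- and
        odd-line fields, which were acquired half a frame apart."""
        even_field = frame[0::2, :]   # lines 0, 2, 4, ...
        odd_field = frame[1::2, :]    # lines 1, 3, 5, ...
        return even_field, odd_field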
Over the past few years, with the development of digital video engineering, the capability has emerged to use digital video cameras with high resolution. A singular advantage of this is the uniformity of the picture, without the distortions discussed above. However, the RGB video matrices need to be analysed to avoid inter-colour distortions. These distortions are connected to the geometrical distribution of the RGB cells on the surface of a CCD matrix and become visible when the digital image is enlarged. Special filters need to be designed to prevent this kind of distortion.
Edge Detection. Edge detection has gone through an evolution spanning more than 20 years. Two main methods of edge detection have been apparent over this period: the first is template matching and the second is the differential gradient approach. In either case, the aim is to find where the intensity gradient magnitude g is sufficiently large to be taken as a reliable indicator of the edge of an object. Then g can be thresholded in a similar way to that in which the intensity is thresholded in binary image estimation. The two methods differ mainly in how they estimate g locally. However, there are also important differences in how they determine local edge orientation, which is an important variable in certain object detection schemes.
Each operator estimates the local intensity gradient with the aid of suitable convolution masks. In the template matching case, it is usual to employ up to 12 convolution masks capable of estimating local components of the gradient in different directions. Common edge operators are due to Sobel (J. M. S. Prewitt, 1970), Roberts (L. G. Roberts, 1965), Kirsch (R. A. Kirsch, 1971), Marr and Hildreth (Marr and E. Hildreth, 1977), Haralick (R. M. Haralick, 1980; R. M. Haralick, 1984), Nalwa and Binford (Nalwa and Binford, 1986) and Abdou and Pratt (Abdou and W. K. Pratt, 1979). In the approach considered here, the local edge gradient magnitude (or edge magnitude) is approximated by taking the maximum of the responses of the component masks:
g = max(g_i : i = 1, ..., n)
where n is usually 8 or 12. The orientation of the boundary is estimated from the index of the mask that gives the maximal gradient magnitude.
The integration of these local operators into a convex hull algorithm is the way forward to isolate the region of interest (ROI) or, in particular cases, to accurately identify the object location.
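For illustration only, the fragment below computes a conventional monotone-chain convex hull of detected edge points to bound such an ROI; this is a standard textbook hull, not the 'Spider' algorithm developed in this work, and the function name convex_hull is an assumption.

    import numpy as np

    def convex_hull(points):
        """Standard monotone-chain convex hull of an (N, 2) array of edge
        points; the returned vertices, in counter-clockwise order, bound
        the region of interest."""
        pts = sorted(map(tuple, np.asarray(points)))
        if len(pts) <= 2:
            return np.array(pts)

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:                      # lower hull, left to right
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):            # upper hull, right to left
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return np.array(lower[:-1] + upper[:-1])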