3 COMPUTATIONAL LOAD
As well as achieving high accuracy, a recognition system should also prevent the erroneous identification of non-signs, i.e., limit the number of false alarms. Moreover, even when the purpose is not oriented to real-time applications (in our case we are concerned with road maintenance tasks), the computation time should be as low as possible. The time required to process an image in a TSDRS depends on multiple factors. The most relevant are related to:
1. Image Properties. The properties of the images to capture are easily configurable through the acquisition system. The computational load of the segmentation stage is strongly influenced by the image size, especially when algorithms work in a pixelwise fashion. We can reduce the image size considering a trade-off between speed and detection probability, since small objects in the scene are difficult to detect and identify. Furthermore, in the case of a TSDRS that includes tracking, it is crucial to detect signs when they appear in the first frames of the sequence with small sizes. Another criterion to consider is whether the system works with grayscale or color images. Processing grayscale images demands a lower computational load, but color information is lost. A further alternative to reduce the amount of image analysis is to define the area to explore.
2. Number of Segmentation Algorithms. As we demonstrated in (Gómez-Moreno et al., 2010), no single algorithm is robust against all the difficulties of outdoor environments. For this reason, our TSDRS allows us to work with different algorithms in parallel, although their outputs are highly redundant and the computational load increases.
3. Complexity of the Recognition Module. In a recognition system based on SVMs, the number of support vectors grows with the number of classes and training samples.
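To make the third point concrete, the following sketch (with hypothetical figures, not measurements from our system) shows why SVM test-time cost tracks the support-vector count: evaluating a kernel SVM decision function requires one kernel computation per support vector, for every candidate object.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Gaussian (RBF) kernel between two feature vectors
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svm_decision(x, support_vectors, dual_coefs, bias, gamma=0.5):
    # One kernel evaluation per support vector: test-time cost
    # grows linearly with the number of support vectors.
    return sum(c * rbf_kernel(sv, x, gamma)
               for sv, c in zip(support_vectors, dual_coefs)) + bias

def kernel_evals_per_frame(n_support_vectors, n_candidates):
    # Every candidate object is scored against every support
    # vector, so per-frame cost is the product of the two counts.
    return n_support_vectors * n_candidates
```

Doubling either the support-vector count or the number of candidate objects therefore doubles the per-frame recognition cost.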
In order to find the main bottlenecks and optimize the system to improve its performance, we analyze the computation profile. Table 1 summarizes the profiles of the three main stages mentioned above. The rest of the processing time is dedicated to other tasks, such as image read/write operations. By simple inspection, we can observe that the computational load of the recognition process is approximately 15 and 46 times higher than that of the detection and segmentation stages, respectively. This is a consequence of the high number of support vectors that must be managed in the test phase when a realistic road sign database is considered.
Table 1: Computational load in the three sub-stages of the TSDRS.

    Process        CPU cycles
    Recognition         49363
    Detection            3118
    Segmentation         1068
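The "approximately 15 and 46 times" figures follow directly from the cycle counts in Table 1:

```python
# CPU-cycle counts taken from Table 1
cycles = {"recognition": 49363, "detection": 3118, "segmentation": 1068}

ratio_vs_detection = cycles["recognition"] / cycles["detection"]
ratio_vs_segmentation = cycles["recognition"] / cycles["segmentation"]
# ratio_vs_detection    -> about 15.8
# ratio_vs_segmentation -> about 46.2
```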
In this way, since the SVM-based recognition module is executed for every candidate object, the computational cost per frame increases linearly with the number of objects at the input of the recognition stage. Unfortunately, most of these objects are false positives. Fig. 3 shows the output of the detection module for an image to which we applied two segmentation algorithms. Note that all the detected objects are identified through their corresponding geometric shape. In the recognition stage all false alarms are discarded, but our aim in this research is to reduce the number of objects evaluated in this process because of its computational load.
Our proposal in this research is to decrease the number of false positives at the input of the recognition module in order to minimize the computational load. The aim is to implement a false alarm filter, using the Viola-Jones detector as a step previous to the SVM-based recognition module, and thus reduce the load of the TSDRS.
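As a structural sketch (not our actual implementation), a Viola-Jones style attentional cascade can be modeled as a sequence of cheap boolean stage classifiers: a candidate region is forwarded to the expensive SVM stage only if every stage accepts it, so most false alarms are rejected early at low cost. The toy stage functions below are hypothetical placeholders for boosted ensembles of Haar-like features.

```python
def cascade_filter(candidates, stages):
    """Attentional cascade in the Viola-Jones style.

    Each stage is a cheap boolean test; a candidate survives only
    if every stage accepts it, so most false alarms are rejected
    early and never reach the costly SVM recognition stage.
    """
    kept = []
    for cand in candidates:
        if all(stage(cand) for stage in stages):
            kept.append(cand)
    return kept

# Toy stages: candidates are plain scores; real stages would
# evaluate Haar-like features over the candidate's pixels.
stages = [lambda c: c > 0.2, lambda c: c > 0.5]
survivors = cascade_filter([0.1, 0.3, 0.6, 0.9], stages)  # -> [0.6, 0.9]
```

The ordering matters: because the earliest stages are the cheapest, the average cost per rejected candidate stays far below the cost of a full SVM evaluation.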
4 FALSE ALARM FILTER
In the machine learning community it is well known that more complex classification functions yield lower training errors but risk poor generalization. If the main consideration is test set error, structural risk minimization provides a formal mechanism to select a classifier with the right balance of complexity and training error. Another significant consideration in classifier design is computational complexity. Since time and error are fundamentally different quantities, no theory can simply select the optimal trade-off. Nevertheless, for many classification functions computation time is directly related to structural complexity. In this sense, temporal risk minimization is clearly related to structural risk minimization.
This direct analogy breaks down in situations where the distribution of classes is highly skewed. For example, in our TSDRS there may be dozens of false positives for only one or two traffic signs in an image. In these cases we can reach high detection rates and extremely fast classifications. The key insight is