was to find a parameter set that is optimal according to the evaluation criteria above for both models.
Initial Parameter Combinations. In their original description of DBSCAN, Ester et al. (1996) state that minPts does not have a significant influence and can be set to 4 by default. This was done for the initial validation. To find candidates for the more important ε parameter, k-distance plots for runs with and without an incident were inspected, and the ε values were chosen depending on the distance measure and flattening. For instance, the values 500, 400, 300, 200, and 100 were selected for “Euclidean distance without flattening”. For flattening, the weights 33, 50, 66 and 0 were selected. Together with the 5 distance measures (all except the relative distance), this resulted in 5 × 5 × 4 = 100 initial parameter combinations.
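As a rough illustration of how such ε candidates can be read off a k-distance plot, the following sketch computes the sorted distances to each point's k-th nearest neighbour; the data loading and the choice k = 4 (matching minPts) are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: k-distance values for choosing epsilon candidates.
# The feature matrix X (one row per section/detector sample) is assumed.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_distances(X, k=4):
    """Distances to each point's k-th nearest neighbour, sorted descending."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: each point is its own neighbour
    dist, _ = nn.kneighbors(X)
    return np.sort(dist[:, k])[::-1]

# Plotting k_distances(X) and looking for the "knee" suggests epsilon candidates,
# e.g. 500, 400, 300, 200, 100 for Euclidean distance without flattening.
```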
Only moderate detection rates were achieved, coupled with high numbers of false alarms. This indicated that the incident detection needed adjustments.
Incident Validation. To improve the discrimination between ordinary traffic fluctuations and real incidents, domain knowledge was applied: while decreased traffic is expected in the “incident section” and its downstream sections, the load should increase upstream as well as in “detour sections”. A validation mechanism based on this observation was devised, which decreased the number of false alarms significantly while preserving the detection rates.
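A minimal sketch of this validation idea is given below; the thresholds, the dictionary-based data layout and the section topology access are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: confirm an incident candidate only if the load drops
# downstream of the incident section while it rises upstream or on detours.
def validate_candidate(section, load_now, load_ref, upstream, downstream, detours,
                       drop=0.9, rise=1.1):
    """All inputs are plain dicts; the thresholds drop/rise are assumed values."""
    drop_ok = all(load_now[s] <= drop * load_ref[s] for s in downstream[section])
    rise_up = any(load_now[s] >= rise * load_ref[s] for s in upstream[section])
    rise_det = any(load_now[s] >= rise * load_ref[s] for s in detours[section])
    return drop_ok and (rise_up or rise_det)
```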
Finding Incident Indicators. To identify the reasons for the weaker detection rates, the correctly detected incidents were analysed in more detail. They were mostly associated with high traffic demand. To better detect secondary and tertiary incidents, the relative distance measure was introduced. After testing 10,000 parameter combinations, it proved to be suitable mainly for the incident indicators rather than for the validators. From then on, the relative distance was used only for indicators, while the other measures were applied only to validators.
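The exact form of the relative distance is not restated here; the sketch below shows one plausible, magnitude-normalised variant that yields values in [0, 1] (consistent with ε values such as 0.95 reported later). It is purely an assumption, not the paper's definition.

```python
# Hedged sketch: one plausible form of a relative (normalised) distance.
import numpy as np

def relative_average_distance(a, b, tiny=1e-9):
    """Average per-component difference relative to the larger magnitude, in [0, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean(np.abs(a - b) / (np.maximum(np.abs(a), np.abs(b)) + tiny)))
```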
Detection Delay. A deeper look into the detection delay showed that heavier flattening leads to a longer delay: all combinations that were optimal with regard to detection rate and false alarms included flattening, which led to significant delays of at least two control cycle periods.
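The delay effect can be illustrated with a simple recursive smoothing scheme; the exact flattening used in the paper may differ, so the interpretation of the weights (33, 50, 66 percent of the previous value) is only an assumption.

```python
# Hedged sketch: heavier flattening reacts more slowly to a sudden load change.
def flatten(series, weight_percent):
    w = weight_percent / 100.0
    out, prev = [], series[0]
    for x in series:
        prev = w * prev + (1.0 - w) * x  # larger w -> more smoothing, more lag
        out.append(prev)
    return out

# A step caused by an incident only shows up in the flattened series after
# several control cycles, matching the observed delay of two or more periods.
```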
Incident Filters. Detected incidents with certain properties common to false alarms were identified. One finding was that most of them were situated on the edge of the road model, possibly due to the way AIMSUN Next simulates the incoming traffic. These were filtered out.
Additionally, false alarms were discovered almost exclusively in scenarios that also included incidents. It turned out that some validators confirmed too many other incidents, which resulted in false alarms. Therefore, each validator was restricted to validating at most one incident.
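A minimal sketch of these two filters is shown below, under the assumption that candidates are simple records carrying their section and confirming validator; the field names are hypothetical.

```python
# Hedged sketch: drop candidates on edge sections of the road model and let
# each validator confirm at most one incident. Field names are hypothetical.
def apply_filters(candidates, edge_sections):
    inner = [c for c in candidates if c["section"] not in edge_sections]
    confirmed, used_validators = [], set()
    for c in inner:
        if c["validator"] not in used_validators:
            confirmed.append(c)
            used_validators.add(c["validator"])
    return confirmed
```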
minPts. Prior to the last optimisation step, the so-far fixed parameter minPts was investigated once more. The values 2, 3, 4, 5, 6 and 8 were tested for all sets of distance measures, ε values and flattening weights as well as for the filter, indicator and validator instances. Particularly good combinations of distance measure and ε worked equally well for all tested minPts values; since no significant changes occurred, minPts = 4 was retained.
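For illustration, such a sweep could look like the sketch below, assuming DBSCAN is run on a precomputed distance matrix (scikit-learn's min_samples corresponds to minPts); this is not the paper's actual code.

```python
# Hedged sketch: rerun DBSCAN for several minPts values on a fixed
# precomputed distance matrix and a fixed epsilon.
from sklearn.cluster import DBSCAN

def sweep_min_pts(distance_matrix, eps, min_pts_values=(2, 3, 4, 5, 6, 8)):
    labels = {}
    for m in min_pts_values:
        db = DBSCAN(eps=eps, min_samples=m, metric="precomputed")
        labels[m] = db.fit_predict(distance_matrix)
    return labels
```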
Parameter Optimisation. Finally, the optimal results for the 2x2 grid were evaluated in the 3x3 grid in order to optimise the parameters for unflattened data. The optimal parameter sets according to detection rate and false alarm count were chosen from those combinations that achieved an average detection delay of less than two control cycle durations (180 s).
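The selection rule can be summarised by the following sketch; the result record fields are assumptions, and breaking ties by fewer false alarms is one plausible reading of the criterion above.

```python
# Hedged sketch: keep only combinations with an average detection delay below
# two control cycles (180 s), then pick the best by detection rate, breaking
# ties by fewer false alarms. Field names are hypothetical.
def select_best(results, max_delay=180.0):
    eligible = [r for r in results if r["avg_delay"] < max_delay]
    return max(eligible, key=lambda r: (r["detection_rate"], -r["false_alarms"]))
```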
5.8 Results
Table 3 presents the results for the 2x2 Manhattan grid. It shows that the incident filter reduces the number of false alarms for all distance measures. Although the results had already been optimal for some combinations before introducing the filter, more combinations with different parametrisations now show nearly optimal results. An optimal result was achieved when using the average distance for validation and the relative distance for indication.
Table 4 shows the best ε values for each combination of indicator and validator distance measure with unflattened data in the 3x3 Manhattan grid. The results indicate that, in the case of unflattened data, no dependency exists between the indicator and validator parameter sets. For all validator distance measures, the same ε values work best for both relative distance measures. In turn, the relative average distance with ε set to 0.95 works best for all validator distance measures and especially outperforms the relative DTW with respect to false alarms. The validator distance measures do not differ significantly in their results, but the Euclidean distance measure with ε set