proaches take into account different possible driv-
ing scenarios for a fixed environment, i.e., for a
fixed set of ODD elements and their attributes.
However, driving scenarios are only one part of
the operating scenarios for an autonomous vehicle.
As noted in ISO 21448, to identify a scenario for a
vehicle with driving automation, we should consider
the combination of environmental factors (e.g., snowfall,
rain), driving scenarios (e.g., overtaking a vehicle,
performing a cut-in in front of a vehicle), the behaviors
of the agents involved (e.g., pedestrians, other vehicles),
road geometry (e.g., a straight road), road infrastructure
(e.g., traffic signs), and the goals/objectives of the
scenario, i.e., the tasks we want to accomplish in it
(e.g., the ego vehicle should stop when a pedestrian
is crossing the road on a city street).
Moreover, current simulation tools (e.g.,
CARLA (Dosovitskiy et al., 2017), Fortellix (Fortellix,
2020)) require operating environments to be created
before scenarios are tested, and most operating
environments have been created based on customers'
needs (Fadaie, 2019). However, whether these operating
environments cover the entire operational design
domain (ODD) defined by the engineers of autonomous
vehicles is not verified. Also, most current
simulation tools perform exhaustive testing, i.e.,
they generate all possible test cases for each scenario
from the discrete parameter values, ranges, and
increments initialized by the engineers, and run the
simulations within a fixed operating environment.
While tools such as Fortellix (Fortellix, 2020)
offer a means to combine scenarios and operating
conditions, the scenario description is restricted to
the Open M-SDL (Fortellix, 2020) specification, which
does not require all the essential ODD elements and
attributes we aim to cover to be part of the ODD.
Performing analysis with such tools therefore does
not account for overlooked ODD elements or attributes.
For example, if an attribute of a pedestrian,
such as race or gender, cannot be set in a simulation
tool despite being present in the ODD, it may be
ignored during simulation-based verification. Further,
given the complexity of the ODD for autonomous
vehicles, simulating all potential scenarios with all
operating environments might not be feasible, because
the resources needed (e.g., time and effort) grow as
the complexity of the ODD increases.
For example, let us consider the following factors
that are part of the ODD (taken from Table B.3 of
ISO 21448): climate, time of the day, road shape,
road feature (e.g., tunnel, gate), condition of the
road, lighting (e.g., glare), condition of the ego
vehicle (e.g., a sensor covered by dust), operation
of the ego vehicle (e.g., the vehicle is stopping),
surrounding vehicles (e.g., a vehicle to the left of
the ego vehicle), road participants (e.g., pedestrians),
surrounding objects off the roadway (e.g., a traffic
sign), and objects on the roadway (e.g., lane markings).
Each of these factors can take multiple values; for
example, the values of 'time of the day' can be early
morning, daytime, evening, and nighttime. Based on the
values given in Table B.3 of ISO 21448, the total
number of combinations of ODD factors is
169,554,739,200. Note that we have not yet considered
the properties of agents (e.g., the gender of a
pedestrian), vehicles (e.g., the speed of the vehicle),
or environmental attributes (e.g., the amount of snowfall).
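The multiplicative growth behind such counts can be made concrete with a short script. The per-factor value counts below are illustrative placeholders only, not the actual counts from Table B.3 of ISO 21448, so the product here is far smaller than the 169,554,739,200 combinations reported above:

```python
import math

# Illustrative per-factor value counts; placeholders, NOT the actual
# counts from Table B.3 of ISO 21448.
factor_value_counts = {
    "climate": 6,
    "time_of_day": 4,        # early morning, daytime, evening, nighttime
    "road_shape": 5,
    "road_feature": 4,       # e.g., tunnel, gate
    "road_condition": 3,
    "lighting": 3,           # e.g., glare
    "ego_condition": 4,      # e.g., a sensor covered by dust
    "ego_operation": 5,      # e.g., the vehicle is stopping
    "surrounding_vehicles": 6,
    "road_participants": 4,  # e.g., pedestrians
    "offroad_objects": 3,    # e.g., a traffic sign
    "onroad_objects": 3,     # e.g., lane markings
}

# The number of operating environments is the product of the per-factor
# counts, so every added factor multiplies the total.
total = math.prod(factor_value_counts.values())
print(total)  # 18662400 environments even with these modest counts
```

Even these modest hypothetical counts yield over 18 million environments; each additional factor or value multiplies the total again.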
These combinations only represent operating environments
and are still not complete, as multiple agent
types and vehicle types are not considered. For each
of these combinations, we need to generate instances
of scenarios to test by considering the properties of
the agents, the environmental attributes, and the ego
vehicle. Examples of such properties are the amount
of rainfall (environmental), the number of randomly
initialized pedestrians (agent-related), and the speed
range of the vehicle (ego-vehicle-related). The
property values are manually selected/initialized by
the simulation engineers and experts. Test cases for
simulation are generated from these properties by
considering all possible combinations among them.
For the example properties above, if we assume that
rainfall can range from 0 cm to 10 cm with an
increment of 0.5 cm, that the number of randomly
initialized pedestrians can range from 0 to 20 with an
increment of 1, and that the speed of the vehicle
ranges from 30 mph to 90 mph with an increment of
5 mph, then an exhaustive testing strategy results in
21 (rainfall) × 21 (pedestrians) × 13 (speed) = 5,733
tests. As the number of properties, their ranges, and
their increments grow, the number of tests grows
multiplicatively. Creating a large number of operating
environments and performing exhaustive testing can be
very expensive and is often infeasible. Moreover,
identifying the test cases that expose collisions and
near misses among such a large number of test cases is
difficult, as experts often need to manually analyze
the causes of collisions and near misses.
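The exhaustive count in the example can be checked with a few lines of code; the ranges and increments are the ones stated above (a sketch of the enumeration itself, not of any particular tool's test generator):

```python
from itertools import product

# Discretized property values from the example.
rainfall = [i * 0.5 for i in range(21)]  # 0 cm to 10 cm, step 0.5 cm
pedestrians = list(range(0, 21))         # 0 to 20 pedestrians, step 1
speeds = list(range(30, 91, 5))          # 30 mph to 90 mph, step 5 mph

# Exhaustive testing enumerates every combination of property values.
tests = list(product(rainfall, pedestrians, speeds))
print(len(tests))  # 21 * 21 * 13 = 5733
```

Every added property multiplies the suite size again, which is why exhaustive enumeration stops being practical so quickly.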
To address these limitations, we propose a
dependency-based combinatorial approach (DBCA)
for operating environment identification and test suite
optimization for analysis of scenarios. DBCA utilizes
IPOG (Lei et al., 2007; Lei et al., 2008), a widely
used combinatorial testing algorithm, to generate t-
way combinations of operating environments and test
cases for each scenario defined in those operating en-
VEHITS 2021 - 7th International Conference on Vehicle Technology and Intelligent Transport Systems
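For intuition on the reduction that t-way combination generation provides, the sketch below implements a naive greedy pairwise (t = 2) generator. This is only an illustration of the covering-array idea, not the IPOG algorithm itself, and the parameter names and values are hypothetical:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy 2-way covering-array generator (illustration only, not IPOG).

    params maps a parameter name to its list of values. Every pair of
    values from two different parameters appears in at least one test.
    """
    names = list(params)
    # All value pairs that must be covered by some test case.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}
    tests = []
    while uncovered:
        # Pick the full assignment covering the most uncovered pairs.
        best_case, best_covered = None, set()
        for values in product(*(params[n] for n in names)):
            case = dict(zip(names, values))
            covered = {((a, va), (b, vb))
                       for ((a, va), (b, vb)) in uncovered
                       if case[a] == va and case[b] == vb}
            if best_case is None or len(covered) > len(best_covered):
                best_case, best_covered = case, covered
        tests.append(best_case)
        uncovered -= best_covered
    return tests

# Hypothetical operating-environment parameters.
params = {
    "weather": ["clear", "rain", "snow"],
    "time_of_day": ["daytime", "nighttime"],
    "road_shape": ["straight", "curved"],
}
suite = pairwise_suite(params)
# Exhaustive testing needs 3 * 2 * 2 = 12 cases; pairwise needs fewer.
print(len(suite))
```

The brute-force search over the full Cartesian product inside the loop is only workable for tiny examples; IPOG instead builds the covering array incrementally, parameter by parameter, which is what makes it scale to parameter spaces of the size discussed above.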