2 RELATED WORK
Considerable effort has been devoted to developing
more efficient object detection and tracking
algorithms for specific sensors or combinations
thereof. An extensive overview is given by Sivaraman
(2013). Empirical studies have been performed for some of
these algorithms to investigate the detection
probability in specific scenarios (Held, Levinson,
& Thrun, 2012). Geese et al. (2018) recently
presented an approach to predict the performance of
an optical sensor as a function of the environmental
conditions. Another approach is the detection and
tracking of moving objects (DATMO) based on
occupancy grids, as presented, e.g., by Baig, Vu, &
Aycard (2009). However, to the best of the authors’
knowledge, no thorough theoretical analysis of the
number of detectable objects, or of the fraction of objects
contained in the LEM or the GEM, has been
conducted so far.
The contribution of the present work is an
analytical model that allows estimating the absolute
and the relative number of objects contained in both
the LEM and the GEM as a function of macroscopic
parameters, such as the linear vehicle density and the
properties of the road, vehicles, and sensors.
3 VEHICLE PERCEPTION
In order to perceive their environment, automated
vehicles have to rely on a variety of sensors.
While subsection 3.1 deals with the vehicle’s own on-
board sensors, subsection 3.2 introduces the concepts
of cooperative awareness and collective perception,
which allow utilizing the data of external sensors
shared through V2X communication.
3.1 On-board Sensors
The growing complexity of advanced driver
assistance systems (ADAS) is leading to an
increasing number of sensor systems being installed
in today’s vehicles. Fig. 1 shows some of them.
They can roughly be divided into four classes,
depending on their range: (i) ultra-short (up to 5 m):
e.g. ultrasound for parking assistance, (ii) short (~30
m): e.g. radar for blind spot detection, rear collision
warning or cross traffic alert, (iii) mid-range (~100
m): e.g. radar, LIDAR or video for surround view,
object detection, video-supported parking assistance,
traffic sign recognition, lane departure warning,
emergency braking or collision avoidance, and (iv)
long range (~200 m): radar, e.g. for adaptive cruise
control or merging assistance on a highway.
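The four range classes above can be summarized in a short sketch. The function and thresholds below are illustrative assumptions derived from the approximate ranges named in the text, not part of the original model:

```python
# Hypothetical sketch: map an on-board sensor's nominal maximum range
# (in metres) to one of the four classes named in the text.
def range_class(max_range_m: float) -> str:
    if max_range_m <= 5:
        return "ultra-short"  # e.g. ultrasound for parking assistance
    elif max_range_m <= 30:
        return "short"        # e.g. radar for blind spot detection
    elif max_range_m <= 100:
        return "mid-range"    # e.g. radar, LIDAR or video for object detection
    else:
        return "long-range"   # e.g. radar for adaptive cruise control

# Example sensor suite with assumed nominal ranges:
sensors = {"ultrasound": 4, "blind-spot radar": 30,
           "surround-view LIDAR": 100, "ACC radar": 200}
for name, rng in sensors.items():
    print(f"{name}: {range_class(rng)}")
```

The class boundaries are of course nominal; real sensor ranges vary with environmental conditions, as discussed in Section 2.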
To ensure the functional safety of highly
automated vehicles, sensor redundancy for object
detection will be necessary, ensuring that the vehicle
can still cope with adverse environmental
conditions or even the failure of a sensor system.
Figure 1: Vehicle sensors of ultra-short (grey), short
(green), medium (blue) and long range (red).
The LEM is essentially based on data registered by
the mid- and long-range sensors. To detect objects
that are further away, V2X communication is of great
use.
3.2 V2X Communication
The limited perception capabilities of on-board
sensors can be extended through V2X communication
by means of cooperative awareness and collective
perception. Cooperative awareness consists of
vehicles transmitting data about their own state via
V2X communication, such as their current position,
speed and heading. This service is implemented by
the Cooperative Awareness Message (CAM) in
Europe and by the Basic Safety Message (BSM) in
the US. Collective perception (Günther, 2016) allows
cars to inform nearby vehicles of objects detected by
their own on-board sensors. The exchange of
Collective Perception Messages (CPM) enables
vehicles to perceive objects beyond their own
sensor’s range by looking through other vehicles’
“eyes”. The collective perception service is currently
considered for standardization by the European
Telecommunications Standards Institute (ETSI) in
order to ensure its interoperability among all
equipped vehicles.
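The difference between the two services can be illustrated with a minimal data-structure sketch. The classes and field names below are simplified assumptions for illustration only; they do not reproduce the actual ETSI message definitions:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CAM:
    """Cooperative Awareness Message: a vehicle reports its own state."""
    station_id: int
    position: Tuple[float, float]  # (latitude, longitude)
    speed_mps: float
    heading_deg: float

@dataclass
class DetectedObject:
    """An object perceived by the sender's on-board sensors."""
    object_id: int
    rel_position: Tuple[float, float]  # relative to the sender, in metres
    speed_mps: float

@dataclass
class CPM:
    """Collective Perception Message: the sender additionally shares
    objects it has detected, so receivers can perceive beyond their
    own sensor range."""
    station_id: int
    sender_position: Tuple[float, float]
    detected: List[DetectedObject] = field(default_factory=list)

# A receiver can extend its environmental model with objects seen
# only through another vehicle's "eyes":
cpm = CPM(station_id=42, sender_position=(48.1, 11.6),
          detected=[DetectedObject(1, (12.0, -3.5), 8.3)])
print(len(cpm.detected))  # → 1
```

The key design point this sketch highlights is that a CAM only ever describes its sender, whereas a CPM carries a list of third-party objects, which is what enables the indirect perception discussed above.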
4 ANALYTICAL MODEL
The quality of a highly automated vehicle’s
environmental model sensitively depends on the
fraction of objects it contains. It is thus necessary to
predict this quantity as accurately as possible. With