network time reference. This could easily be achieved by changing the current camera firmware or by using a device that provides this functionality by default.
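As a purely illustrative example, the sketch below shows how camera timestamps could be mapped onto a common time base once a clock offset has been estimated; the variable names and numerical values are assumptions and do not correspond to the actual implementation.

    % Minimal sketch (illustrative assumptions only): mapping camera frame
    % timestamps onto a common network/GPS time base with a constant offset.
    t_cam   = [0.00 0.10 0.20 0.30];   % frame timestamps on the camera clock [s]
    t_ref   = 1425991230.45;           % network time of a known synchronisation event [s]
    t_event = 0.12;                    % the same event observed on the camera clock [s]

    offset     = t_ref - t_event;      % estimated clock offset
    t_cam_sync = t_cam + offset;       % camera timestamps on the common time base

In practice, the offset would have to be re-estimated periodically to compensate for clock drift.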
In order to obtain better track-matching results when time is also considered, the procedure used to update the position over time should be improved as well. First of all, it would be helpful to repeat the tests with the same algorithm but with a better GPS receiver, so that more frequent and accurate position updates are available. Secondly, more sophisticated dynamics models could be used to predict the vehicle position between updates, as sketched below. Moreover, other sensors, such as radar, lidar or a stereo camera, could replace the monocular camera or be added to the current system; in particular, these sensors would improve the accuracy of the distance measurement and lead to better matching results when time is taken into account. Given the modular design of the algorithm, this extension would not require much effort.
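As an example of such a dynamics model, the following sketch shows a standard constant-velocity prediction step of a Kalman filter; the matrices and numerical values are illustrative assumptions and are not taken from the implemented system.

    % Minimal sketch (assumptions only): propagating a target state between
    % position updates with a constant-velocity model.
    % State x = [px; py; vx; vy] (position [m], velocity [m/s]).
    dt = 0.1;                          % prediction interval [s]
    F  = [1 0 dt 0;
          0 1 0 dt;
          0 0 1 0;
          0 0 0 1];                    % constant-velocity transition matrix
    Q  = 0.5 * diag([dt^3/3, dt^3/3, dt, dt]);   % simplified process noise

    x = [10; 3; 8; 0];                 % last estimated state
    P = eye(4);                        % its covariance

    x_pred = F * x;                    % predicted state at the next time step
    P_pred = F * P * F' + Q;           % predicted covariance

A constant-turn-rate or similar model could replace the transition matrix in the same way if heading information from the CAMs is exploited.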
Finally, another direction on which future work should focus is the analysis of more scenarios. First of all, it would be extremely useful to use at least three vehicles sending and receiving CAMs, so that multiple targets are available not only from the camera but also from the C2C Communication system. Moreover, different driving scenarios should be considered, for example with the target car arriving from an intersection or during an overtaking manoeuvre.
5 CONCLUSIONS
In this work, a sensor fusion algorithm to unambiguously assign vehicles detected via C2C Communication and by on-board sensors has been designed and implemented to run in real time. All the main challenges faced during the design phase, i.e., the data collection procedure, the choice of the sensor fusion mechanism, the spatial and temporal alignment of the data from the two systems and the track generation process, have been described, and a solution to each problem has been proposed. The sensor fusion algorithm has then been developed and tested in MATLAB, using different metrics to evaluate the results and to identify the most critical parts to be improved in future work. Both simulated data and data recorded in real driving scenarios have been used in this phase, and specific tools for data acquisition and storage have been deployed for this purpose. Finally, the algorithm has been implemented inside an in-car system to demonstrate its capabilities in real time and to offer a convenient debugging environment for further research on the topic.
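To illustrate the kind of spatial alignment involved, the sketch below converts a GPS position received via a CAM into the ego-vehicle frame using a flat-earth approximation; the coordinates, heading convention and variable names are assumptions for illustration only, not the transformation actually implemented.

    % Minimal sketch (illustrative assumptions): aligning a CAM position with
    % the ego-vehicle frame via a local flat-earth approximation and a heading
    % rotation.
    R_E     = 6371000;                                       % mean Earth radius [m]
    lat_ego = 48.7758*pi/180;  lon_ego = 9.1829*pi/180;      % ego GPS fix [rad]
    lat_tgt = 48.7761*pi/180;  lon_tgt = 9.1835*pi/180;      % position from the CAM [rad]
    psi     = 30*pi/180;                                     % ego heading, clockwise from north

    dE = R_E * cos(lat_ego) * (lon_tgt - lon_ego);           % east offset [m]
    dN = R_E * (lat_tgt - lat_ego);                          % north offset [m]

    x_v =  sin(psi) * dE + cos(psi) * dN;                    % longitudinal distance (forward)
    y_v = -cos(psi) * dE + sin(psi) * dN;                    % lateral distance (left positive)

Over larger distances, a proper map projection such as UTM would be preferable to this local approximation.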
The overall results obtained with the developed algorithm are promising. In particular, the MATLAB simulations show excellent results from a spatial point of view, with successful and unambiguous detection and matching of the target vehicles. Further research should be carried out to obtain equally satisfying results when time is also considered in the calculation of the difference between tracks (a simple example of such a calculation is sketched after this paragraph); concrete ideas and possible solutions for this further research have been given. The results obtained with the real-time implementation of the algorithm are fully consistent with those from the MATLAB simulations and can likewise be considered positive and encouraging. Compared with the solutions proposed in the literature so far, the demonstrator developed in this work is novel and represents a first step towards a real-world application running in real time inside vehicles.
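As an illustration of what a time-aware track difference could look like, the following sketch interpolates one track onto the timestamps of the other before computing a mean spatial distance; the data and the metric itself are assumptions and not the measure used in this work.

    % Minimal sketch (assumed metric, not the one used in this work): comparing
    % a camera track with a C2C track while taking time into account.
    % Each track is an N-by-3 matrix [t x y] (time [s], position in the ego frame [m]).
    track_cam = [0.00 12.1 2.9; 0.10 12.9 3.0; 0.20 13.8 3.1];
    track_c2c = [0.05 12.4 3.1; 0.15 13.3 3.2; 0.25 14.2 3.2];

    % Interpolate the C2C track onto the camera timestamps
    xi = interp1(track_c2c(:,1), track_c2c(:,2), track_cam(:,1), 'linear', 'extrap');
    yi = interp1(track_c2c(:,1), track_c2c(:,3), track_cam(:,1), 'linear', 'extrap');

    % Mean Euclidean distance between the time-aligned tracks
    d_track = mean(hypot(track_cam(:,2) - xi, track_cam(:,3) - yi));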
In order to obtain a reliable product that can be used in real applications, further work on this topic is required. Nevertheless, this work represents a solid basis for future research and an important contribution to the field of ADAS applications based on sensor fusion.