Authors:
Steven Verstockt 1; Viktor Slavkovikj 1; Pieterjan De Potter 1; Jürgen Slowack 2 and Rik Van de Walle 1
Affiliations:
1 Ghent University - iMinds, Belgium; 2 Barco NV, Belgium
Keyword(s):
Multi-modal Sensing, Image Classification, Accelerometer Analysis, Geo-annotation, Mobile Vision, Machine Learning, Bike-sensing.
Related Ontology Subjects/Areas/Topics:
Biometrics and Pattern Recognition; Location Based Applications; Multimedia; Multimedia Signal Processing; Multimedia Systems and Applications; Multimodal Signal Processing; Sensors and Multimedia; Telecommunications
Abstract:
This paper presents a novel road/terrain classification system based on the analysis of volunteered geographic information gathered by bikers. Through ubiquitous collection of multi-sensor bike data, consisting of visual images, accelerometer readings, and GPS coordinates from the biker's smartphone, the proposed system is able to distinguish between six different road/terrain types. To perform this classification task, the system employs a random decision forest (RDF) fed with a set of discriminative image and accelerometer features. For every road instance (a 5-second segment), we extract these features and map the RDF result onto the GPS data of the user's smartphone. Finally, based on all the collected instances, we can annotate geographic maps with the road/terrain types and create a visualization of the route. The accuracy of the novel multi-modal bike sensing system on the 6-class road/terrain classification task is 92%. This result outperforms both visual-only and accelerometer-only classification, showing that the combination of both sensors is mutually beneficial. For the 2-class on-road/off-road classification, an accuracy of 97% is achieved, almost six percentage points above the state of the art in this domain. Since these are individual scores (measured on a single user/bike segment), collaborative aggregation of multiple users' instances is expected to improve these results even further.
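As a rough illustration of the kind of pipeline the abstract describes (not the authors' code), the sketch below fuses per-window accelerometer statistics with a simple visual descriptor and feeds them to a random forest classifier. The class names, window length handling, feature choices, and the use of scikit-learn are assumptions made for this example only.

```python
# Hypothetical sketch: per-instance (5-second window) road/terrain classification
# with fused image + accelerometer features and a random forest.
# All feature definitions and class labels below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed set of 6 road/terrain classes (not taken from the paper).
ROAD_TYPES = ["asphalt", "cobblestone", "gravel", "dirt", "grass", "sand"]

def accel_features(window):
    """Simple statistics over one accelerometer window (N x 3 array of x, y, z)."""
    mag = np.linalg.norm(window, axis=1)          # acceleration magnitude per sample
    return np.array([mag.mean(), mag.std(), mag.max(), mag.min(),
                     np.abs(np.diff(mag)).mean()])  # mean absolute jerk proxy

def image_features(image_gray):
    """Toy visual descriptor: normalized intensity histogram of the road patch."""
    hist, _ = np.histogram(image_gray, bins=16, range=(0, 255), density=True)
    return hist

def fuse(window, image_gray):
    """Concatenate accelerometer and image features for one road instance."""
    return np.concatenate([accel_features(window), image_features(image_gray)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in data: 600 road instances (accelerometer window + road image).
    X = np.vstack([fuse(rng.normal(size=(250, 3)),
                        rng.integers(0, 256, size=(64, 64)))
                   for _ in range(600)])
    y = rng.integers(0, len(ROAD_TYPES), size=600)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In a real deployment, each predicted label would then be attached to the GPS coordinates recorded for the same 5-second window so that the route can be annotated and visualized on a map.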