a daunting task because of their variety. Therefore, in this paper we answer the question: which hardware platform is best suited to perform a particular image processing task on a UAV? A hardware platform can be a simple embedded processor (e.g. a Raspberry Pi) or even a small computer like a laptop, depending on the processing power that is needed. Mounting such a platform under a UAV imposes severe constraints: it should be lightweight and small, and it should offer adequate processing power at low power consumption to maintain long flight times. To determine the effective processing speed of a particular algorithm on a specific hardware platform, one would have to implement the algorithm on each candidate platform. Acquiring a large variety of test platforms to determine the most suitable one is evidently neither time- nor cost-efficient.
Therefore, in this paper we present a framework that,
given a specific algorithm, estimates the processing
speed, power consumption and flight time on a large
set of hardware platforms, without the need to acquire
any of them. For this we rely on two benchmark algo-
rithms. This paper provides data for a number of hardware platforms, restricted only in that they are CPU-based. However, since our framework is generic, new platforms can easily be added.
An overview of the platforms that we have included
can be found in Table 1.
The framework will be evaluated on two real
cases. In the first case we track a person with a UAV
using a face detection algorithm (Viola and Jones,
2001). For this, we search for a hardware platform
that can run the face detector at 4fps while minimizing
the power consumption (i.e. maximizing flight time).
In our second case the UAV should visually navigate
through a fruit orchard corridor, running a vantage
point detection algorithm (Hulens and Vanderstegen,
2012) on-board at 10fps.
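To illustrate the kind of workload behind the first case, the sketch below (our own minimal example, not part of the proposed framework) times an OpenCV Viola-Jones cascade face detector over a fixed number of frames to check whether a given platform reaches the required 4 fps; the camera index, cascade file path and frame count are assumptions.

# Minimal benchmarking sketch: measure the achievable frame rate of an
# OpenCV Viola-Jones face detector on a candidate platform.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # assumed on-board camera at device index 0

n_frames, t_start = 0, time.time()
while n_frames < 100:  # time a fixed number of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=3)
    n_frames += 1

fps = n_frames / (time.time() - t_start)
print("average fps: %.2f" % fps)  # compare against the 4 fps requirement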
The main contributions of this paper are:
• A state-of-the-art overview of the current best CPU-based processing platforms for complex image processing on-board a UAV.
• Experimental results of benchmark computer vision algorithms on each of these state-of-the-art platforms.
• A generic model to estimate the processing speed, power consumption and UAV flight time of any given image processing algorithm on a variety of hardware platforms.
• Validation of the proposed generic model on two real cases (people detection/tracking and vision-based navigation).
This paper is structured as follows: in the next sec-
tion we give an overview of the related work on this
topic. In section 3 we briefly discuss the hardware
platforms that we used in the framework. In section 4
we present our framework, and in section 5 we validate it with experiments and present our results.
2 RELATED WORK
Currently, UAVs are often used to capture images of
the environment, which are then processed afterwards, e.g. for surveying (Siebert and Teizer, 2014). For this, the UAVs are controlled manually or by means of GPS.
However, our main focus is on autonomously flying
UAVs. To enable this, UAVs mainly rely on vision al-
gorithms. Therefore, algorithms like path planning
and obstacle avoidance (e.g. object detection) are
used to steer the UAV to a certain position (Suzuki
et al., 2011; Ferrick et al., 2012; Lin and Saripalli,
2014). Due to the computational complexity of these algorithms, on-board processing is often infeasible in practice. Therefore, in these approaches, a ground station (with a desktop computer) is used to process the images and steer the UAV. However, this severely limits the operating range.
In cases where on-board processing is currently employed, only lightweight algorithms are used. For example, (McGee et al., 2005) use sky segmentation
(color segmentation), running on a Pentium III pro-
cessor, to detect and avoid objects in the sky. (Meier
et al., 2011) use a marker detection system to follow
a predefined path. (Sa et al., 2014) use line detection, running on a Cortex-A9, for the inspection of pole-like structures. (Wenzel et al., 2011) track an IR-LED pattern mounted on a moving platform using an ATmega 644P controller, and (Anthony et al., 2014) filter laser scanner data on an Atom-based processing platform to estimate crop height.
However, our real-life test case algorithms are
much more complex. To implement such complex algorithms on a UAV, FPGAs or ASICs are often used, since they offer an optimal trade-off between weight, power consumption and processing power. (Kok et al., 2013) designed an FPGA-based path planning algorithm, and (Ehsan and McDonald-Maier, 2009) evaluate other hardware, such as ASICs, as on-board vision processing platforms.
However, translating e.g. OpenCV code (C, C++ or Python) to hardware (using e.g. VHDL) is a tedious and time-consuming task. (Nieuwenhuisen and Behnke, 2014) use a high-end processing platform for on-board path planning and obstacle avoidance. This is possible since, in their case, power consumption and weight are less relevant because they use an octacopter