the object itself (Lim, 2012). The common approach
to detecting objects in videos is to use the information
from each frame of the video. However, this method has
a high error rate. Therefore, some detection
methods use temporal information computed
from a sequence of frames to reduce the detection error
rate (Mohong, 2012).
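The idea of exploiting temporal information can be sketched with a simple confirmation filter: a per-frame detection is only accepted once it persists over several consecutive frames, which suppresses spurious single-frame hits. The window size and the stand-in detector output below are illustrative assumptions, not values from the cited work.

```python
# Sketch: reduce per-frame false positives by requiring a detection to
# persist across several consecutive frames (temporal filtering).
# Any per-frame detector returning True/False could be plugged in.

from collections import deque

def temporal_filter(per_frame_hits, window=3):
    """Report the object as present only when it was detected in
    every one of the last `window` frames."""
    recent = deque(maxlen=window)
    confirmed = []
    for hit in per_frame_hits:
        recent.append(hit)
        confirmed.append(len(recent) == window and all(recent))
    return confirmed

# A spurious single-frame detection (frame 2) is suppressed, while a
# sustained detection (frames 5-8) is confirmed from frame 7 onward.
hits = [False, False, True, False, False, True, True, True, True]
print(temporal_filter(hits))
```

The cost of this filter is a short confirmation delay of `window - 1` frames, which is the usual trade-off between latency and error rate.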
Object tracking, like object detection, is one
of the important fields in computer vision. Object
tracking can be defined as the process of following an
object through a sequence of frames or images. The
difficulty of object tracking depends on the movement of
the object, appearance changes of the object and the
background, changes in object structure, occlusion of the
object by other objects or by the background, and camera
movement. Object tracking is usually used in high-level
applications that need the location and shape of an
object in each frame (AutonomyLab, 2014). There are
three commonly known classes of object tracking
algorithms: Point Tracking, Kernel Tracking, and
Silhouette Tracking. Examples of object tracking
applications are traffic surveillance, automatic
surveillance, interaction systems, and vehicle
navigation.
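To make the Point Tracking class concrete, the simplest variant associates each detected point in the current frame with the nearest tracked point from the previous frame. The sketch below is an illustrative nearest-neighbour association only; practical point trackers add motion models and statistical gating, and the distance threshold is an assumed value.

```python
# Minimal Point Tracking sketch: greedy nearest-neighbour association of
# object centroids between two consecutive frames.

import math

def associate(prev_points, curr_points, max_dist=50.0):
    """Map each index in prev_points to the index of the nearest
    unmatched point in curr_points within max_dist pixels."""
    matches = {}
    used = set()
    for i, (px, py) in enumerate(prev_points):
        best_j, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_points):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches

prev = [(10, 10), (100, 100)]
curr = [(12, 11), (98, 103)]
print(associate(prev, curr))  # {0: 0, 1: 1}
```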
In the problems we consider in this paper, the
observer is the AR.Drone and the target is a person
or any object that moves according to its own
intentions and proceeds at a speed compatible with
the speed of the drone. We assume that the target
needs to reach a specific destination within a
confined known area, which might be outdoors or
indoors, and chooses an efficient path to do so. In
addition, the target does not perform evasive actions
by attempting to use features in the environment for
concealment. This is a plausible assumption as the
target might be cooperating with the drone or simply
unaware of its presence. As we do not deal with
object recognition, we assume that the target is
identified by a specific tag known in advance by the
drone, although the drone might fail to observe the
target even when it is in view due to its noisy
sensors. Finally, we are interested in long-term SaT
missions in wide areas, relative to the scale of the
drone.
In this research, we develop an autonomous
object detection and tracking system using the
AR.Drone. The term autonomous means that the
system can detect and track an object independently,
without interaction from a user.
Detection means that the robot is able to recognise
a certain object using its sensors; tracking means that
the robot is able to follow that object's movement.
The AR.Drone has two cameras, a frontal camera and a
vertical camera. This research focuses on the use of
computer vision algorithms as the basis of the
developed system; therefore, object detection is
carried out using the AR.Drone's frontal camera as the
main sensor.
Being a toy robot, the AR.Drone has limited
computational capacity, while image processing with
computer vision algorithms demands fairly high
resources. For this reason, all computations in this
system are executed on a computer connected to the
AR.Drone wirelessly.
The object detection program receives an image or video
stream from the AR.Drone camera. Every frame of the
video stream is processed one by one by the program.
The computer vision algorithm processes the
image and outputs information about the object, for
example that the detected object is blue. If the object
is not found, no output is given.
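The per-frame detection step just described can be sketched as a colour-threshold detector that returns the object's centroid, or nothing when too few matching pixels are found. The thresholds and the minimum pixel count below are illustrative assumptions, not the values used in our system.

```python
# Sketch: detect a blue object in an RGB frame and return its centroid,
# or None when no object is found (matching "no output is given").

import numpy as np

def detect_blue(frame, min_pixels=20):
    """frame: HxWx3 uint8 RGB array. Returns (cx, cy) or None."""
    r = frame[:, :, 0].astype(int)
    g = frame[:, :, 1].astype(int)
    b = frame[:, :, 2].astype(int)
    # "blue" = blue channel clearly dominant over red and green
    mask = (b > 100) & (b > r + 40) & (b > g + 40)
    ys, xs = np.nonzero(mask)
    if xs.size < min_pixels:
        return None  # object not found: no output
    return float(xs.mean()), float(ys.mean())

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:60, 70:90] = (0, 0, 255)   # paint a blue square
print(detect_blue(frame))            # -> (79.5, 49.5)
```

In practice one would threshold in HSV space for robustness to lighting, but the structure, a mask followed by a centroid, is the same.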
For a simple but intuitive application, we chose
to use the AR.Drone's bottom camera to help the
drone park by itself. We drew crossed blue lines on
the ground, and the AR.Drone starts from a point away
from the crossing. It first detects the straight
lines and finds the centre (average position) of the blue
pixels; this provides feedback to the AR.Drone control
system, which moves the AR.Drone in real time.
Figure 4 shows the result from the drone's camera.
Figure 4: Detecting blue colour with AR.Drone camera.
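The feedback step above can be sketched as a proportional controller: the offset of the blue-pixel centre from the image centre is mapped to a normalised velocity command. The gain is an assumed value for illustration; the AR.Drone's command interface accepts normalised roll/pitch values in [-1, 1].

```python
# Sketch: turn the blue-pixel centroid offset into a proportional
# (roll, pitch) command for the drone. Gain is an illustrative choice.

def centre_offset_to_command(cx, cy, width, height, gain=0.5):
    """Map centroid (cx, cy) in a width x height image to
    (roll, pitch) commands clamped to [-1, 1]."""
    # Normalised error: 0 at the image centre, +/-1 at the edges
    ex = (cx - width / 2) / (width / 2)
    ey = (cy - height / 2) / (height / 2)
    clamp = lambda v: max(-1.0, min(1.0, v))
    roll = clamp(gain * ex)    # sideways motion toward the crossing
    pitch = clamp(gain * ey)   # forward/backward motion toward it
    return roll, pitch

print(centre_offset_to_command(120, 60, 160, 120))  # -> (0.25, 0.0)
```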
To allow the drone to carry out a tracking mission
autonomously, we combine the abstract deliberative
skills with low-level control and vision capabilities.
We implemented different techniques for the two
phases of a tracking mission. We first give an
overview of them and then provide additional details
on their implementation.
Tracking Phase
Tag recognition: Since we assume that our target is
identified by a specific tag, the drone needs to be
able to recognise tags from a distance based on the
video stream coming from its cameras. We use
computer vision algorithms to solve this problem.
Figure 5 shows tag tracking using the
AR.Drone.
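Once the tag is recognised, its apparent size in the image also gives a rough range estimate via the pinhole camera model: distance = f * W / w, where W is the real tag width, w its width in pixels, and f the focal length in pixels. The tag size and focal length below are illustrative assumptions, not the AR.Drone's calibrated parameters.

```python
# Sketch: pinhole-model range estimate from the tag's apparent width.

def tag_distance(pixel_width, real_width_m=0.20, focal_px=560.0):
    """Estimate distance (metres) to a tag of known physical width."""
    if pixel_width <= 0:
        raise ValueError("tag not visible")
    return focal_px * real_width_m / pixel_width

# An assumed 20 cm tag spanning 56 px would be about 2 m away.
print(tag_distance(56))  # -> 2.0
```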
ICINCO 2015 - 12th International Conference on Informatics in Control, Automation and Robotics