analysis, only a few focus on specific animals, especially on lobster behavior analysis. (Kato et al., 2004) developed a computer image processing system for quantifying zebrafish behavior based on two color cameras. Later, (Qian and Chen, 2017) extended the system to track multiple fish from multi-view images. (Straw et al., 2010) used a multi-camera system for tracking flying animals; the animals were modeled as small blobs and their positions were calculated by triangulation with known camera positions. (Yan and Alfredsen, 2017b) tried to extract the gesture of a single lobster in view based on a skeleton and a distance transform. However, the algorithm requires a background whose color differs strongly from that of the lobster, and its performance relies heavily on color-based segmentation. Furthermore, an attempt at aggressive behavior analysis for stage IV European lobster juveniles was presented in (Yan and Alfredsen, 2017a).
Almost all previous studies on animal behavior use RGB cameras, and the objects are extracted from the background based on noticeable differences in color or grayscale image pattern between the objects and the background. In real applications, however, shadows and noise in the images are inevitable, and the object extraction algorithm normally places strong restrictions on the backgrounds that can be used, which complicates the setup of the animal behavior experiments.
The main contribution of our algorithm is that it is the first attempt to use an infrared depth camera for lobster behavior research, which introduces minimal disturbance to the nocturnal animal under observation. The paper also addresses the water reflection problem when tracking and determining the orientation of the animal under water using an infrared depth camera.
The structure of this paper is as follows: we describe our proposed algorithm in detail in Section 2, which contains three subsections describing each module of the automated real-time tracking system. The experiments, in which our algorithm is tested on ten wild European lobsters sized between 25 and 30 cm, are described in Section 3. Discussion and further work are given in Section 4.
2 ALGORITHM DESCRIPTION
The system configuration is shown in Figure 1(a). The infrared camera is mounted in front of a water-filled arena holding a European lobster, with the optical axis approximately perpendicular to the bottom surface of the arena. A typical depth map is shown in Figure 1(b).

Figure 1: Infrared Depth Camera System for European Lobster Tracking. (a) The system configuration. (b) Depth image.

The value on the color map bar is the distance of the point to the camera plane in millimeters. There are noticeable erroneous depth areas in the center and the bottom left of the image, caused by reflection and deflection at the water surface. Moreover, because the absorption coefficient of water for the infrared light source (825-850 nm) used in the depth camera is much higher than that of air (Pegau et al., 1997), and the infrared signal is attenuated exponentially with the distance travelled in water, the depth map is noisier at the most distant parts of the arena bottom. In the following subsections, we deal with the difficulties caused by these problems in order to track the lobster and obtain its orientation as accurately as possible.
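For reference, the exponential attenuation can be written in the standard Beer-Lambert form; the notation below is ours and is not taken from the camera's documentation:

$$
I(z) = I_0 \, e^{-\alpha_w z},
$$

where $I_0$ is the emitted infrared intensity, $z$ is the path length travelled in water, and $\alpha_w$ is the absorption coefficient of water at the source wavelength (825-850 nm). Since the emitted light must travel through the water to the arena bottom and back to the sensor, the return signal from the most distant parts of the arena is the weakest, which explains the increased depth noise there.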
2.1 Lobster Segmentation
Because the relative position between the camera and the background is fixed in the experimental setup, it is effective to use a background subtraction method:
$$
B_t(x,y) =
\begin{cases}
1, & d_a(x,y) - d_t(x,y) > T_d \\
0, & \text{otherwise}
\end{cases}
\tag{1}
$$
where $B_t$ is the primitive segmented foreground containing the lobster at time $t$, $d_t(x,y)$ is the depth map value at pixel $(x,y)$, $d_a(x,y)$ represents the depth to the bottom of the arena, obtained by averaging the first $N$ depth maps recorded prior to introducing the lobster into the arena, and $T_d$ is a depth threshold. Because the lobster is always located above the bottom of the arena, we can segment it out simply by thresholding the depth difference.
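As an illustration, a minimal NumPy sketch of this background subtraction step is given below. The function names, the example threshold value, and the masking of zero-valued pixels are our own assumptions for the sketch, not details of the published system.

```python
import numpy as np

def estimate_background(depth_maps):
    """Average the first N depth maps recorded before the lobster is
    introduced, giving the arena-bottom depth d_a(x, y) of Eq. (1).
    Zero-valued pixels are treated as invalid and ignored (assumption)."""
    stack = np.stack(depth_maps).astype(np.float64)   # shape (N, H, W), depths in mm
    valid = stack > 0                                  # zero depth = erroneous measurement
    counts = np.maximum(valid.sum(axis=0), 1)          # avoid division by zero
    return (stack * valid).sum(axis=0) / counts        # per-pixel mean of valid samples

def segment_lobster(d_t, d_a, T_d=20.0):
    """Binary foreground mask B_t from Eq. (1): a pixel is foreground when
    it lies more than T_d mm above the arena bottom. T_d = 20 mm is an
    arbitrary example value, not the threshold used in the paper."""
    d_t = d_t.astype(np.float64)
    mask = (d_a - d_t) > T_d      # the lobster sits above the bottom, i.e. closer to the camera
    mask &= d_t > 0               # drop pixels with no valid depth measurement
    return mask.astype(np.uint8)
```

A mask produced this way still fails inside the erroneous zero-depth regions, which is why the interpolation discussed next is needed.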
However, this method is not able to segment out the lobster in areas where the depth camera renders a wrong depth map due to reflection or deflection at the water surface. Normally, the areas with erroneous measurements are marked with distance zero, and we have to interpolate depths in these regions using values from the depth regions that are measured correctly by the camera. Because the arena bottom is flat, we