to tracking devices and methods, augmented reality systems in digestive surgery can be classified into two categories: vision-based and hybrid systems. In their work, (Nicolau et al., 2005b) proposed a low-cost and accurate guiding system for laparoscopic surgery, validated on an abdominal phantom. The system allows real-time tracking of surgical tools and marker-based registration by optimization of a given criterion (EPPC) (Nicolau et al., 2005a). On the other hand, (Feuerstein et al., 2008) proposed a hybrid system combining optical and electromagnetic tracking to determine the position and orientation of intra-operative imaging devices, such as a mobile C-arm, a laparoscopic camera and a flexible ultrasound probe, allowing direct superimposition of acquired patient data in minimally invasive liver resection without the need for registration.
3 PROPOSED METHOD
In this section we outline the principal components of our markerless augmented reality system for laparoscopic cholecystectomy. Taking into account temporal coherence with respect to the principal steps of a standard laparoscopic cholecystectomy, the first component detects all anatomical and pathological structures in the 2D laparoscopic view using a statistical color model of digestive organs. As a result, we obtain for each organ an initial segmentation represented by a sparse binary image. False positives are then filtered using an adaptation of the particle swarm optimization (PSO) algorithm, yielding, for each organ, a set of particles with different radii in the 2D image (a generic sketch of the PSO update loop is given below).
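For illustration only, the following Python sketch shows a generic PSO update loop of the kind such a filtering step could build on; the fitness function, particle dimensionality and hyper-parameters (w, c1, c2) are assumptions, not the adaptation actually used in this work.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, bounds=(0.0, 1.0)):
    """Generic particle swarm optimization maximizing `fitness` over [lo, hi]^dim."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))   # particle positions
    vel = np.zeros_like(pos)                               # particle velocities
    pbest = pos.copy()                                     # personal best positions
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()               # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                              # inertia / cognitive / social weights (assumed)
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()
```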
In laparoscopic cholecystectomy, the most important organ is the gallbladder, together with its vascular supply. The same principle is applied to preoperative CT-scan images to build a particle-based 3D model of the gallbladder and the liver. The proposed wavelet-based multi-resolution analysis provides coarse models of both the 3D virtual organs and the 2D images. Finally, we perform a 2D/3D registration at each resolution level, proceeding from coarse to fine (see the sketch below).
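As a rough illustration only, the following sketch shows the coarse-to-fine structure that such a multi-resolution registration can follow; the function `register_level`, the per-level particle lists and the pose representation are hypothetical placeholders, not the implementation described in this paper.

```python
def coarse_to_fine_registration(image_levels, model_levels, register_level, init_pose):
    """Refine a 2D/3D pose estimate from the coarsest to the finest resolution level.

    image_levels / model_levels: per-level particle sets, coarsest first (assumed inputs).
    register_level: hypothetical routine aligning one 3D model level to one 2D image level.
    """
    pose = init_pose
    for img_particles, model_particles in zip(image_levels, model_levels):
        # The result at the coarser level initializes the next, finer level.
        pose = register_level(model_particles, img_particles, pose)
    return pose
```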
In order to build the statistical color model, a set of 16735 laparoscopic color images (IRCAD source) extracted from a video of laparoscopic surgeries is used. The images have 240 x 320 RGB-coded pixels with 256 bins per channel (24 bits per pixel). The video sequence is acquired at a frame rate of 30 Hz.
3.1 Anatomical Color Model
For each cholecystectomy intervention workflow step (t), we construct for each anatomical region (i) a statistical color model using a histogram with 256 bins per channel in the RGB color space. Each color vector (x) is converted into a discrete probability distribution as follows:
$$P_{i,t}(x) = \frac{c_{i,t}(x)}{\sum_{j=1}^{N_{i,t}} c_{i,t}(x_j)}, \qquad t = t_1 \ldots t_6,\; i = 0 \ldots S_t. \tag{1}$$
where $c_{i,t}(x)$ gives the count in the histogram bin representing the RGB color triple $x$, and $N_{i,t}$ is the total count of RGB histogram entries over the histogram bins of the structure region (i) during the intervention step (t). The number of detected structures $S_t$ varies according to the step. According to the European standard and the common laparoscopic cholecystectomy installation and intervention workflow, the number of structure classes is limited to four. In practice, the step (t) denotes a time interval represented by a set of consecutive laparoscopic images $t = \left[ I^{t}_{v,1} \ldots I^{t}_{v,n} \right]$ in the videos (v) that compose the training dataset.
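As an illustration, a minimal Python sketch of how such a sparse normalized histogram could be accumulated is given below; the input format (`frames` yielding annotated image/mask pairs for one structure and one step) and all names are assumptions, not the paper's implementation.

```python
from collections import Counter

def color_model(frames):
    """Build a sparse normalized RGB histogram P_{i,t} for one structure (i)
    during one workflow step (t).

    `frames` yields (image, mask) pairs: `image` is an H x W x 3 uint8 array
    and `mask` a boolean H x W array marking pixels labelled as structure i
    (both assumed to come from an annotated training set)."""
    counts = Counter()
    for image, mask in frames:
        for pixel in image[mask]:          # pixels belonging to structure i
            counts[tuple(pixel)] += 1      # c_{i,t}(x): one bin per observed RGB triple
    total = sum(counts.values())           # sum of c_{i,t}(x_j) over all observed bins
    return {rgb: c / total for rgb, c in counts.items()}   # P_{i,t}(x)
```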
After analyzing the laparoscopic video, we observed that it contains at most 10017 RGB color bins over the whole sequence, with a mean of 1997 RGB triples per frame. Therefore, the RGB histogram is mostly empty: 99.94% of the $256^3$ RGB bins are never used. Figure 1 shows the evolution of the RGB bin count in the training laparoscopic cholecystectomy video.
Figure 1: Evolution of RGB bins count in the sequence.
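The sparsity figure quoted above can be checked directly from the numbers reported for the training sequence:

```python
total_bins = 256 ** 3          # all possible RGB triples: 16,777,216 bins
used_bins = 10017              # bins observed over the whole training sequence
unused = 100 * (1 - used_bins / total_bins)
print(f"{unused:.2f}% of the RGB bins are never used")   # -> 99.94%
```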
3.2 Spherelet: Wavelet-based Multi-resolution Analysis
In this section we propose a new multi-resolution analysis of 3D objects modeled as a set of elementary non-intersecting particles defined by their centers and radii. The virtual model of the anatomical structure