Figure 7: Box-plot of the variance for each of the 20 sensors.
3.4 Results
We performed a cross-validation of the network.
The dataset was randomly partitioned into 6 groups
called "folds": a single fold was used as the validation
set while the remaining 5 served as the training set. The process
was repeated 6 times, with each of the 6 folds used exactly
once as the validation set. Finally, the 6 results were
combined.
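The fold construction described above can be sketched as follows; this is a minimal standard-library illustration, and the function name `k_fold_splits` and the fixed seed are hypothetical choices, not details from the paper:

```python
import random

def k_fold_splits(n_instances, k=6, seed=0):
    """Randomly partition instance indices into k folds and yield
    (train_indices, validation_indices) pairs, one per fold."""
    indices = list(range(n_instances))
    random.Random(seed).shuffle(indices)
    # Deal shuffled indices round-robin into k folds of (near-)equal size
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i
                 for idx in fold]
        yield train, validation

# 36 instances and 6 folds, as in the paper: each fold holds 6 instances,
# and each instance appears in the validation set exactly once.
splits = list(k_fold_splits(36, k=6))
```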
Results are summarized in Table 3. The dataset
contains 36 instances, 18 from experts (2
sessions per expert subject) and 18 from
novices; 94.4% of the instances are correctly
classified, while 5.6% are incorrectly classified.
Table 3: Confusion matrix.

            Classified as expert    Classified as novice
Expert      16                      2
Novice      0                       18
The TP (true positive) rate is 0.889 for experts
and 1 for novices, while the FP (false positive) rate is
0 for experts and 0.111 for novices.
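The reported rates follow directly from Table 3; a short sketch of the arithmetic (the dictionary layout and helper names are illustrative, not the paper's code):

```python
# Confusion matrix from Table 3: rows = true class, columns = predicted class.
confusion = {
    "expert": {"expert": 16, "novice": 2},
    "novice": {"expert": 0,  "novice": 18},
}

def tp_rate(matrix, cls):
    """TP rate (recall) for a class: correct predictions over all
    instances that truly belong to the class."""
    row = matrix[cls]
    return row[cls] / sum(row.values())

def fp_rate(matrix, cls):
    """FP rate for a class: instances of the other classes that were
    (wrongly) predicted as cls, over all such instances."""
    fp = sum(row[cls] for true_cls, row in matrix.items() if true_cls != cls)
    negatives = sum(sum(row.values())
                    for true_cls, row in matrix.items() if true_cls != cls)
    return fp / negatives

accuracy = (16 + 18) / 36  # 34 of 36 instances correct, i.e. 94.4%
```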
4 CONCLUSIONS
We designed and developed a system for
evaluating the skill of a surgeon while performing
a suture. The system uses a sensory glove to
obtain the exact position of the hand and the movements
of the fingers. Features were extracted by resampling
the data from the glove in order to give the
same duration to all the gestures, and then averaging
the values of the 20 sensors in windows of 50
samples. The total number of features was reduced
using Correlation-based Feature Subset
Selection with forward selection. Finally, the
median duration of the gesture was added to
the feature set. The dataset was classified by means
of a neural network. Results of a 6-fold cross-
validation showed a correct classification rate of 94.4%.
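The resampling and window-averaging step can be sketched as below. This is a minimal standard-library illustration under stated assumptions: linear interpolation as the resampling method and a 500-sample target length are hypothetical choices; only the 50-sample window comes from the text.

```python
def resample(signal, target_len):
    """Linearly interpolate a 1-D sensor track to target_len samples,
    so every gesture ends up with the same duration."""
    n = len(signal)
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)  # fractional source index
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

def window_means(signal, window=50):
    """Collapse each consecutive block of `window` samples into its mean,
    yielding one feature per window."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal), window)]

# One sensor track of arbitrary length, resampled to 500 samples and
# averaged in windows of 50 -> 10 features for this sensor.
track = [float(v) for v in range(730)]
features = window_means(resample(track, 500), window=50)
```

With 20 sensors this would give 200 window features per gesture, which is why a feature-selection step such as CFS is applied afterwards.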
By looking at the dispersion of the acquired data,
we noticed that, in general, experts show a lower
dispersion among themselves than novices do,
underlining a more systematic approach. We
exploited this by using an algorithm that reduces the
number of features by considering only the most
effective ones. Possible future enhancements include
the analysis of the dispersion among different
repetitions in the same session: this information could
be used as an additional useful input to the classifier.
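The group-wise dispersion comparison above can be sketched as a per-feature variance computed across each group's repetitions; this is an illustrative standard-library sketch with toy numbers, not the paper's data or code:

```python
from statistics import pvariance

def per_feature_dispersion(instances):
    """Population variance of each feature across a group's repetitions;
    lower values suggest a more systematic, repeatable gesture."""
    n_features = len(instances[0])
    return [pvariance([inst[f] for inst in instances])
            for f in range(n_features)]

# Toy feature vectors (values are invented): experts cluster tightly
# around their mean, novices scatter more widely.
experts = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
novices = [[0.2, 3.5], [1.8, 0.5], [1.0, 2.0]]

expert_disp = per_feature_dispersion(experts)
novice_disp = per_feature_dispersion(novices)
```

Feeding such per-session dispersion values to the classifier as extra inputs is one way the future enhancement mentioned above could be realized.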
REFERENCES
Hall M. A. (1998). Correlation-based Feature Subset
Selection for Machine Learning. Hamilton, New
Zealand.
Law B., Atkins M. S., Fraser S., Kirkpatrick A. E., Lomax
A. J. (2004). Eye gaze patterns differentiate novice and
experts in a virtual laparoscopic surgery training
environment. Proceedings of the 2004 Symposium on
Eye Tracking Research & Applications, pp. 41-48.
Lin H., Shafran I., Yuh D., Hager G. D. (2006). Towards
automatic skill evaluation: Detection and segmentation
of robot-assisted surgical motions. Computer Aided
Surgery, September 2006; 11(5): 220-230.
Mitra S., Acharya T. (2007). Gesture
recognition: A survey. IEEE Transactions on Systems,
Man, and Cybernetics, Part C, 37.
Qiang Z. and Baoxin L. (2010). Towards Computational
Understanding of Skill Levels in Simulation-based
Surgical Training via Automatic Video Analysis.
International Symposium on Visual Computing
(ISVC).
Saggio G., Bocchetti S., Pinto G. A., Orengo G., Giannini
F. (2009a). A novel application method for wearable
bend sensors. ISABEL 2009, 2nd International
Symposium on Applied Sciences in Biomedical and
Communication Technologies, Bratislava, Slovak
Republic, November 24-27.
Saggio G., De Sanctis M., Cianca E., Latessa G., De
Santis F., Giannini F. (2009b). Long Term
Measurement of Human Joint Movements for Health
Care and Rehabilitation Purposes. Wireless Vitae09 -
Wireless Communications, Vehicular Technology,
Information Theory and Aerospace & Electronic
Systems Technology, Aalborg (Denmark), 17-20 May,
pp. 674-678.
Saggio G., Cavallo P., Fabrizio A., Obinna Ibe S. (2011a).
Gesture recognition through HITEG data glove to
provide a new way of communication. ISABEL 2011,
4th Proceedings of the International Symposium on
Applied Sciences in Biomedical and Communication
Technologies, October 26-29.
Surgical Skill Evaluation by Means of a Sensory Glove and a Neural Network