Vegetative State: Early Prediction of
Clinical Outcome by Artificial Neural Network
L. Pignolo¹, F. Riganello¹, A. Candelieri² and V. Lagani²
¹ S. Anna Institute, RAN – Research on Advanced Neuro-rehabilitation, Crotone, Italy
² Laboratory for Decision Engineering and Health Care Delivery, Department of Electronic Informatics and Systemistics, University of Calabria, Cosenza, Italy
Abstract. Residual brain function has been documented in vegetative state
patients, yet early prognosis remains difficult. The purpose of this study was to
identify, by artificial neural network procedures, the neurological signs that
correlate with and predict outcome. The best networks' test-set accuracy was
70%, 72% and 70% for the entire patient group and for the posttraumatic and
non-posttraumatic subgroups, respectively. The accuracy of the method does not
reflect perfect classification, but it is well above random or educated guessing
and is consistent with the results of previous clinical studies.
1 Introduction
The Vegetative State (VS) is a clinical condition of severely brain-damaged subjects
characterized by the absence of awareness (of self and environment), of voluntary or
purposeful behavioral responses to external stimuli, and of communication. Subjects in VS
are otherwise awake, often with wakefulness-sleep cycles [1, 2, 3, 4, 5, 6]. Recovery
(with varying residual disabilities) occurs only in a portion of patients; the resources,
staff, logistics and cost requirements for the care of these subjects are substantial
irrespective of outcome. The purpose of this study was to identify, by artificial intelligence
procedures, a model supporting decisions in the early prognosis of VS
subjects [7, 8]. It should be noted in this regard that evidence-based neurology
has identified neurological signs that correlate with, and are predictive of, outcome.
Prognosis can be modeled as a regression, classification or survival analysis problem
using traditional statistics or machine learning techniques [9]. This study aims to
demonstrate that reliable classification models predictive of vegetative state
outcome can be obtained with Artificial Neural Network (ANN) techniques.
Section 2 of this paper outlines the dataset and its pre-processing; Section 3 describes the
experimentation protocol for the training of classification models; Sections 4 and 5
summarize and discuss the results.
2 Data Collection and Pre-processing
2.1 Data Collection
Three hundred and thirty-three subjects in VS consecutively admitted to the dedicated
semi-intensive care unit of the S. Anna – RAN Institute (Crotone, Italy) over a 9-year
period (April 1998–March 2006) were considered retrospectively. The VS was
clinically defined in all subjects in compliance with the criteria suggested by the Multi-
Society Task Force and the guidelines of the London Consensus Conference (Multi-
Society Task Force, 1994).
For each patient, the following were entered into the dataset: age, sex, etiology of brain
injury (posttraumatic or non-posttraumatic), the Glasgow Coma Scale (GCS) rating [10]
at admission, and twenty-two neurological signs of established relevance in coma and
VS [11] (Table 1), assessed by the attending physician at two-week intervals
following procedures and criteria predefined as intrinsic to the UNI EN ISO
9001:2000 quality standards. Each sign was recorded as present or absent (binary attribute).

The subjects' condition at discharge was measured by the Glasgow Outcome Scale (GOS):
GOS1 = death; GOS2 = vegetative state exceeding 1 year in duration; GOS3 = recovery
with severe disabilities; GOS4 = recovery with mild disabilities; and GOS5 = full
recovery or recovery with minimal disabilities not interfering with everyday life
[12]. The GOS is widely used in the evaluation of VS outcome, but the subject's
assignment to any GOS class is subject to misclassification [13], which could affect
the training of classification models. Therefore, the first two and the last two GOS
classes were combined into the GOS1-2 and GOS4-5 classes respectively, with
a resulting sharper separation among classes. The prediction of outcome was
estimated at admission and at 50, 100 and 180 days after admission.
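This grouping of outcome classes can be expressed as a simple label mapping; a minimal sketch is given below (the helper name and dictionary are ours, introduced only for illustration).

```python
# Hypothetical helper: collapse the five GOS levels into the three outcome
# classes used for model training (GOS1-2, GOS3 and GOS4-5).
GOS_GROUPING = {1: "GOS1-2", 2: "GOS1-2", 3: "GOS3", 4: "GOS4-5", 5: "GOS4-5"}

def collapse_gos(gos_level: int) -> str:
    """Map a raw GOS level (1-5) to the merged outcome class."""
    return GOS_GROUPING[gos_level]
```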
Table 1. Clinical signs assessed at two-week intervals and entered into the artificial neural
network processing as potential prognostic factors.
Decerebration
Decortication
Conjugated gaze deviation
Skew eye deviation
Blink reflex
Cilio-spinal reflex
Tactile-oral reflex
Optic-oral reflex
Bulldog reflex
Grasping reflex
Corneal reflex
Corneal-mandibular reflex
Threat reflex
Myotactic-cervical reflex
Chewing reflex
Sucking reflex
Oculo-cephalic reflex (with disappearance of the doll’s head phenomenon)
Absence of spontaneous motility
Eye tracking
Snout Rabbit sign
Half-moon pucker sign
Klippel sign
2.2 Pre-processing
The etiology of brain injury and the pathophysiology underlying VS are known to
influence the outcome. The dataset including all patients and two data subsets of
posttraumatic (n=213) and non-posttraumatic patients were therefore considered.
Continuous numerical attributes (such as age and GCS rating) were normalized to the
interval [0, 1] for each dataset; the remaining attributes were binary and did not require
pre-processing.
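A minimal sketch of this normalization step is given below, assuming a simple per-attribute min-max rescaling (the usual way to map values into [0, 1]; the exact formula is not specified in the text, and the sample values are illustrative).

```python
import numpy as np

def min_max_normalize(column: np.ndarray) -> np.ndarray:
    """Rescale a continuous attribute (e.g. age or GCS rating) into [0, 1]."""
    lo, hi = column.min(), column.max()
    if hi == lo:                       # constant column: map every value to 0
        return np.zeros_like(column, dtype=float)
    return (column - lo) / (hi - lo)

# Example: normalize age; binary clinical signs are left untouched.
ages = np.array([23.0, 45.0, 67.0, 31.0])
print(min_max_normalize(ages))         # all values now lie in [0, 1]
```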
3 Experimentation
3.1 Parameter Configuration
The classification models were structured as classical feed-forward ANNs with one or
two hidden layers and sigmoid activation functions [14, 15]. The number of neurons
was varied among 1, 2, 4, 6, 10, 15, 20, 25, 30 and 40 for both the first and the second
hidden layer. The Stuttgart Neural Network Simulator (SNNS) was used for all the
experiments [16].

The training of the ANNs was performed using the standard Back Propagation
algorithm and the "Enhanced Back Propagation" algorithm. The latter introduces the
previous arc-weight change as a parameter for computing the new arc-weight change.
SNNS implements these algorithms with the Std_Backpropagation and
BackpropMomentum functions, respectively. In particular, the Std_Backpropagation
function requires the specification of the parameters η (learning rate) and d_max
(maximal tolerated difference between expected and calculated output for each neuron).
Besides η, the BackpropMomentum function needs the momentum μ, measuring the
influence of the previous arc-weight change on the current weight calculation. Table 2
shows the parameter configurations used for the training algorithms.
Table 2. Parameter configurations for the training algorithms.

                    Std_Backpropagation            BackpropMomentum
                    η           d_max              η           μ
                    0.1         0.1                0.1         0.2
                    0.3         0.2                0.3         0.8
                    0.5                            0.5
                    0.7                            0.7
                    0.9                            0.9
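For illustration only, the difference between the two training algorithms can be reduced to the weight-update rule; the sketch below is a generic formulation of back-propagation with and without momentum, not the SNNS implementation itself.

```python
# Generic weight-update step for back-propagation (sketch).
# grad       : partial derivative of the error with respect to the weight
# eta        : learning rate (η in Table 2)
# mu         : momentum (μ), used only by the enhanced variant
# prev_delta : weight change applied at the previous step

def std_backprop_update(weight, grad, eta):
    """Standard back-propagation: plain step against the gradient."""
    return weight - eta * grad

def momentum_backprop_update(weight, grad, eta, mu, prev_delta):
    """Enhanced back-propagation: the previous weight change contributes to
    the current one, smoothing the descent trajectory."""
    delta = -eta * grad + mu * prev_delta
    return weight + delta, delta       # the new delta is reused at the next step
```

With μ = 0 the momentum update reduces to the plain gradient step, which is why μ is the parameter that characterizes the enhanced algorithm in Table 2.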
3.2 Experimentation Protocol
We used a Training–Validation–Test (TVT) procedure to select the best parameter
configuration regulating both the network structure and the training algorithm
operation. In particular, for each dataset the following steps were applied:
1. creation of the training, validation and test sets (see Table 3);
2. for each combination of network and training algorithm parameters:
   a. execution of 200 training cycles;
   b. evaluation of the network accuracy on the validation set;
   c. if the total number of training cycles has reached 20000, stop; otherwise,
      return to step a;
3. selection of the network with the best accuracy on the validation set;
4. evaluation of its accuracy on the test set.
At the end of the TVT procedure, we obtained three trained ANNs (one for each
dataset) with their respective accuracies on the test set.
Table 3. Subdivision of instances among the training, validation and test sets (PT: posttraumatic; NPT: non-posttraumatic).
Dataset Training Validation Test
NPT Dataset 80 20 20
PT Dataset 133 30 50
Entire Dataset 200 53 80
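A compact sketch of the selection loop described above is given below, assuming a generic interface to the simulator (build_network, train_cycles and accuracy are caller-supplied stand-ins, not actual SNNS function names).

```python
import copy

CYCLES_PER_CHECK = 200     # step 2a: train in blocks of 200 cycles
MAX_CYCLES = 20000         # step 2c: stop a configuration at 20000 cycles

def tvt_select(configs, build_network, train_cycles, accuracy,
               train_set, valid_set, test_set):
    """Return the network with the best validation accuracy and its test accuracy."""
    best_model, best_valid_acc = None, -1.0
    for config in configs:                                    # step 2: each parameter combination
        model, cycles = build_network(config), 0
        while cycles < MAX_CYCLES:
            train_cycles(model, train_set, CYCLES_PER_CHECK)  # step 2a
            cycles += CYCLES_PER_CHECK
            valid_acc = accuracy(model, valid_set)            # step 2b
            if valid_acc > best_valid_acc:                    # step 3: keep the best so far
                best_model, best_valid_acc = copy.deepcopy(model), valid_acc
    return best_model, accuracy(best_model, test_set)         # step 4
```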
4 Results
The best networks' test-set accuracy was 70%, 72% and 70% for the entire patient
group and for the posttraumatic and non-posttraumatic subgroups, respectively. The best
parameter configurations are reported in Table 4.
Table 4. Parameter configurations of the best networks. BP: standard back propagation
algorithm; EBP: enhanced back propagation; N.A.: not applicable.

                        Entire dataset    PT dataset    NPT dataset
  1st hidden layer            30               1              6
  2nd hidden layer            30              N.A.           N.A.
  Training algorithm          BP              EBP             BP
  η                          0.7              0.1            0.7
  d_max                      N.A.             N.A.           0.2
  μ                          0.2              0.8            N.A.
A better understanding of the classification performance can be obtained through
the analysis of the confusion matrices (Tables 5, 6 and 7), which indicate the
misclassified elements. We decided to assign instances with an unclear evaluation
(e.g. an instance assigned to two classes at the same time with similar probability)
to a separate "misclassified" class.
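Such a decision rule can be sketched as follows; the 0.1 margin is an assumption of ours, since the threshold actually used is not reported.

```python
import numpy as np

def assign_class(outputs, class_names=("GOS1-2", "GOS3", "GOS4-5"), margin=0.1):
    """Pick the winning class, or flag the instance as 'misclassified' when the
    two highest network outputs are too close to call (assumed margin of 0.1)."""
    order = np.argsort(outputs)[::-1]               # indices sorted by activation
    if outputs[order[0]] - outputs[order[1]] < margin:
        return "misclassified"                      # unclear evaluation
    return class_names[order[0]]

print(assign_class(np.array([0.48, 0.45, 0.07])))   # -> 'misclassified'
print(assign_class(np.array([0.80, 0.15, 0.05])))   # -> 'GOS1-2'
```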
Table 5. Entire dataset confusion matrix.

  Real class     Predicted 1_2   Predicted 3   Predicted 4_5   Misclassified
  1_2                  16             1              3               2
  3                     5             2             11               0
  4_5                   1             0             38               0
Table 6. Posttraumatic dataset confusion matrix.

  Real class     Predicted 1_2   Predicted 3   Predicted 4_5   Misclassified
  1_2                   6             0              0               3
  3                     0             0              7               2
  4_5                   0             0             30               2
Table 7. Non-posttraumatic dataset confusion matrix.

  Real class     Predicted 1_2   Predicted 3   Predicted 4_5   Misclassified
  1_2                   8             2              0               0
  3                     0             5              0               1
  4_5                   1             1              1               1
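As a worked check of how these matrices relate to the accuracies reported in Section 4, accuracy is the number of correctly predicted instances over all test instances, with unclear evaluations counted as errors; the snippet below uses the posttraumatic matrix of Table 6.

```python
import numpy as np

# Rows: real class (1_2, 3, 4_5); columns: predicted 1_2, 3, 4_5, misclassified.
# Values taken from Table 6 (posttraumatic test set, 50 patients).
pt_matrix = np.array([[ 6, 0,  0, 3],
                      [ 0, 0,  7, 2],
                      [ 0, 0, 30, 2]])

correct = np.trace(pt_matrix[:, :3])      # diagonal of the 3x3 class block: 36
accuracy = correct / pt_matrix.sum()      # 36 / 50
print(f"{accuracy:.0%}")                  # -> 72%, as reported for the PT dataset
```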
5 Comment
The accuracy of the method does not reflect perfect classification, but it is well above
random or educated guessing and is consistent with the results of previous clinical
studies [11]. It should be noted that class GOS3 has a larger error estimate both in the
entire dataset and in the posttraumatic subset. The higher misclassification rate depends
on this class including all patients with a severe motor outcome (e.g. paresis of one or
more limbs), impaired consciousness (e.g. global amnesia) or both. GOS3 can therefore
be heterogeneous, and the ANNs are unable to identify a major labeling characteristic.
Interestingly, test-set patients with GOS3 in the non-posttraumatic dataset are well
classified, while the GOS4-5 subjects of the same dataset are poorly classified. The
limited size of the non-posttraumatic sample does not allow further investigation of
this phenomenon.
References
1. Jennett B, Plum F. Persistent vegetative state after brain damage: a syndrome in search of a
name. Lancet 1972;1:734-737.
2. Dolce G, Sazbon L. The posttraumatic vegetative state. Stuttgart: Thieme, 2002.
3. Laureys S. The neural correlate of (un)awareness: lessons from the vegetative state. Trends
Cogn Sci 2005;9:556-559.
4. Jennett B. The vegetative state. Cambridge, UK: Cambridge University Press, 2002.
5. Multi-Society Task Force on PVS. Statement on medical aspects of the persistent
vegetative state. N Engl J Med 1994;330:1499-1508.
6. Zeman A. Consciousness. Brain 2001;124:1263-1289 (review).
7. Braakman R, Jennett WB, Minderhoud JM. Prognosis of the posttraumatic vegetative state.
Acta Neurochirurgica 1988;95:49-52.
8. Schmutzhard E, Kampfl A, Franz G, Pfausler B, Haring HP, Ulmer H, Felber S,
Golaszewski S, Aichner F. Prediction of recovery from post-traumatic vegetative state with
cerebral magnetic-resonance imaging. Lancet 1998;351:1763-1767.
9. Rovlias A, Kotsou S. Classification and regression tree for prediction of outcome after
severe head injury using simple clinical and laboratory variables. J Neurotrauma
2004;21:886-893.
10. Teasdale G, Jennett B. Assessment of coma and impaired consciousness: a practical scale.
Lancet 1974;2:81-84.
11. Dolce G, Quintieri M, Serra S, Lagani V, Pignolo L. Clinical signs and early prognosis: a
decisional tree, data-mining study. Brain Injury 2008;22:617-623.
12. Jennett B, Bond M. Assessment of outcome after severe brain damage: a practical scale.
Lancet 1975;1:480-484.
13. Pignolo L, Quintieri M, Sannita WG. The Glasgow Outcome Scale in vegetative state: a
possible source of bias. Brain Injury 2009;23:1-2.
14. Holte RC. Very simple classification rules perform well on most commonly used
datasets. Machine Learning 1993;11:63-90.
15. Van Bemmel JH, Musen MA. Handbook of medical informatics. Berlin: Springer-Verlag,
1997.
16. Zell A, Mache N, Hubner R, Schmalzl M, Sommer T, Korb T. SNNS: Stuttgart Neural
Network Simulator, User Manual, Version 2.0. Report No. 3/92, IPVR, Universität
Stuttgart, 1992.