individual booth with a computer, a webcam and the
board with textures (described in item 3). When the student touches a texture, the system, which we call "EXAT" (an allusion to the name of its origin), uses the webcam to capture the location of the student's finger, identifies the texture and announces its name (colour adjusted). The system then stores these markings in the EXAT database, generating the profile results at the end of the subtest application. The student can also reveal the result of each test through a gestural marking of the reply, which is likewise interpreted by the computer vision system.
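The paper does not detail how EXAT maps the detected finger position to the texture beneath it. One straightforward scheme, assuming the webcam frame is rectified to the board area, is a plain grid lookup; the function name and board dimensions here are illustrative only:

```python
def locate_cell(x, y, board_w, board_h, rows=8, cols=5):
    """Map a fingertip pixel coordinate (x, y), detected in the rectified
    webcam image, to the (row, col) of the textured rectangle touched.
    The automated board described below has 8 rows of 5 rectangles."""
    if not (0 <= x < board_w and 0 <= y < board_h):
        return None  # finger is outside the board area
    col = int(x * cols / board_w)
    row = int(y * rows / board_h)
    return row, col
```

For example, on a 500×400-pixel board image, a fingertip at (310, 120) falls in row 2, column 3.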
However, for the automated version of stage 3, a new stimulus board was built. As in the original subtest, this board gathers all the stimuli on a single surface (the division into two boards was no longer necessary), since the textured rectangles do not need to be large: they carry no Braille writing. The board therefore has eight rows with five textured rectangles in each row. The textures are still produced in thermoform material (PVC film), and a coloured rectangle was placed below each texture. The computer recognizes the colour when the child presses a rectangle and announces the name of a conflicting texture.
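The colour printed below each texture is what the vision system actually recognizes. A minimal sketch of this step is nearest-colour classification of the pixel sampled under the finger; the texture names and RGB values below are illustrative assumptions, as the paper does not list the actual palette:

```python
# Illustrative reference palette: colour printed under each texture.
# (Hypothetical names and RGB values; the paper does not specify them.)
REFERENCE_COLOURS = {
    "texture A": (200, 30, 30),   # red rectangle
    "texture B": (30, 160, 30),   # green rectangle
    "texture C": (30, 30, 200),   # blue rectangle
}

def classify_colour(rgb):
    """Return the texture whose reference colour is nearest (squared
    Euclidean distance in RGB space) to the sampled pixel colour."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLOURS,
               key=lambda name: dist2(REFERENCE_COLOURS[name], rgb))
```

A sampled colour such as (210, 40, 25) would then be resolved to "texture A", whose cell the system looks up before speaking the conflicting name.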
Therefore, it is expected that automating the Expressive Attention subtest will shorten the application time, limit the applicator's intervention to cases where it is really needed, allow interaction with several students simultaneously, and have the results produced by the system at the end of the application, making the automated subtest as similar as possible to the original test.
Finally, we believe this technology becomes a tool that opens new horizons for research, contributing to new tests adapted very closely to those applied to sighted people. We emphasize that the results produced by this system are not yet conclusive; as reported earlier, the system is being finalized by the trainees of NCE/UFRJ, with conclusion expected in August 2010.
3.4.1 Prototype
The EXAT software prototype, implemented according to the information detailed above, remains faithful to the "Expressive Attention" subtest, since it preserves the simultaneity of the stimuli.
Computer vision adds a new property to the thermoform: sound. It gives a label where there is none, translating the stimulus into an accessible language. By tracking the child's finger through the webcam, the system can attach a spoken label to virtual objects.
To begin receiving the explanation of the task, the child has to press a key; to hear the explanation again, the key can be pressed as many times as needed. EXAT counts the number of times the explanation was given and calculates the child's reaction time (from the first explanation until the test begins).
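The bookkeeping just described, counting explanations and timing from the first one to the start of the test, can be sketched as follows; the class and method names are illustrative, not taken from EXAT:

```python
import time

class ExplanationTracker:
    """Counts how many times the task explanation was played and measures
    the reaction time from the first explanation to the start of the test."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock        # injectable clock, for testability
        self.explanations = 0
        self._first = None

    def key_pressed(self):
        """Child presses the key: play the explanation (again)."""
        self.explanations += 1
        if self._first is None:
            self._first = self._clock()

    def test_started(self):
        """Child begins the test; return reaction time in seconds."""
        return self._clock() - self._first
```

Injecting the clock keeps the timing logic deterministic under test while using a monotonic clock in production.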
The software also calculates the transition time from the previous position to the current one (Δt). With this data, the prognosis can be classified more objectively and used as counterproof for previous prognoses that could not be reevaluated under the manual adaptation, since no records of them exist.
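The Δt computation amounts to differencing consecutive timestamped finger positions; a minimal sketch (function name assumed, not from the paper):

```python
def transition_times(events):
    """Given a chronological list of (timestamp, position) samples of the
    finger, return the Δt between each consecutive pair of samples."""
    return [t1 - t0 for (t0, _), (t1, _) in zip(events, events[1:])]
```

For example, samples at t = 0.0, 1.5 and 4.0 seconds yield transition times of 1.5 and 2.5 seconds, which the system can log alongside the cells visited.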
4 RESULTS
The analysis consisted of grouping the raw scores of items 4, 5, 5.1, 6 and 6.1, the times of those items and the forecasts produced by the researcher during the application phase of the subtest, with the help of the statistical program Orange Canvas. These predictions were categorized into the following levels, which received numbers for statistical purposes: standard (0), lack of concentration (1), fatigue (2), difficulty in understanding (3), difficulty in identifying the textures (4), impatience (5), did not attend (6).
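The numeric coding above is a direct label-to-integer mapping, which can be expressed as:

```python
# Numeric levels assigned to the researcher's forecasts, exactly as
# listed in the text, for use in the statistical analysis.
FORECAST_LEVELS = {
    "standard": 0,
    "lack of concentration": 1,
    "fatigue": 2,
    "difficulty in understanding": 3,
    "difficulty in identifying the textures": 4,
    "impatience": 5,
    "did not attend": 6,
}

def encode(forecasts):
    """Encode a list of textual forecasts as their numeric levels."""
    return [FORECAST_LEVELS[f] for f in forecasts]
```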
Using the Orange Canvas program, a correlation was verified between the variables inherent to the subtest and the forecasts, and their veracity was checked. Through the Classification Tree Graph tool, a diagram was drawn relating the forecasts to all the aforementioned variables. Of the 64 participants in the sample, 43 were grouped for not having taken the test; the 21 others who took it fell into a group that split into two subgroups: group 1 and group 2. Of these 21 participants, 16 share the same classification (group 1): 9 classified as standard, three as poor concentration, one as tiredness and three as impatience, all with a time for item 4 ≥ 22,500 and a time for item 6 ≤ 325,500. This first group divides into two further subgroups: group 1A and group 1B. Group 1A had two participants classified as standard and three as poor concentration, all with scores for item 6 ≤ 29