points have not been placed yet: when the program stops because one of its stop conditions is still active, it only resumes once all the points have been replaced. This way, the user can skip between frames until reaching one that is worth tracking.
• Change in the size of the facial detector box: since this detector is used to correct flaws in the detection of the eyes, its size must not change much. If the change exceeds a certain limit, the user replaces the points and a new facial detector is generated.
• One of the eye boxes moves outside the facial detector box: another way to correct a failed eye detection is to verify whether the boxes created around the eyes lie inside the facial detector box. If one of them leaves the facial detector box, the user reintroduces the points.
• Loss of validity of the reference point: since the video is recorded with the laptop's webcam, its quality is not always the best, and the lighting conditions may also be poor. As a result, the tracker may not always be able to keep track of the reference point. When this happens, the user has to reintroduce the reference point.
• Total loss of the facial detector: related to the problems described above, the facial detector may have difficulty tracking every feature it needs to work properly, and it stops working when there are no longer enough feature points. When this happens, the user reintroduces the eyes' positions and the reference point, and a new face detector is created.
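The stop conditions above can be sketched as a single per-frame check. All function arguments, box formats, and thresholds below are illustrative assumptions; the paper does not state the actual size limit or minimum feature count used.

```python
# Sketch of the pause conditions described above (names and
# thresholds are illustrative, not the authors' actual code).
def should_pause(points_placed, face_box, prev_face_box,
                 eye_boxes, reference_valid, n_features,
                 size_ratio_limit=1.3, min_features=4):
    """Return True when tracking must stop and wait for user input.

    Boxes are (x, y, width, height) tuples.
    """
    # 1. Points have not been (re)placed yet.
    if not points_placed:
        return True
    # 2. Facial detector box changed size beyond the allowed limit.
    if prev_face_box is not None:
        ratio = face_box[2] / prev_face_box[2]  # width ratio
        if ratio > size_ratio_limit or ratio < 1 / size_ratio_limit:
            return True
    # 3. An eye box moved outside the facial detector box.
    fx, fy, fw, fh = face_box
    for ex, ey, ew, eh in eye_boxes:
        if ex < fx or ey < fy or ex + ew > fx + fw or ey + eh > fy + fh:
            return True
    # 4. The reference point is no longer tracked.
    if not reference_valid:
        return True
    # 5. Too few feature points left for the face detector.
    if n_features < min_features:
        return True
    return False
```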
4 RESULTS
The results are displayed as graphs. The x-axis represents the variable count, which corresponds to the frame number, and the y-axis represents the difference between the position of the eye and the position of the reference point. The reference point used was always the center of the nose, because it is equidistant from both eyes.
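The plotted quantity can be sketched as follows: for each frame, the difference between the eye-centre coordinate and the nose reference point. The function name and point format are illustrative; the vertical signal is obtained analogously from the y coordinates.

```python
# Minimal sketch of the plotted signal (illustrative names):
# per-frame horizontal difference between an eye centre and the
# nose reference point, indexed by frame count.
def difference_signal(eye_centers, reference_points):
    """eye_centers, reference_points: lists of (x, y) per frame."""
    return [ex - rx
            for (ex, _ey), (rx, _ry) in zip(eye_centers, reference_points)]
```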
The videos for this paper were recorded with the laptop's webcam, located at the bottom center of the screen, at 30.03 frames per second and 720p resolution. The distance between the screen and the recorded subject was approximately 60 cm.
The eye-tracking algorithm developed was tested on healthy adults and on healthy children under two years old.
Whenever the right eye is mentioned, it refers to the individual's own right eye; the same applies to the left eye. Likewise, a movement to the right means that the subject moved toward their own right side.
• Horizontal Movement
– Left eye:
∗ Movement to the left: The left eye moves
away from the nose, so the difference will be
higher;
∗ Movement to the right: The left eye gets closer
to the nose, so the difference will be lower;
– Right eye:
∗ Movement to the left: The right eye gets closer
to the nose, so the difference will be lower;
∗ Movement to the right: The right eye moves
away from the nose, so the difference will be
higher;
• Vertical Movement
– Both eyes: both eyes show the same kind of
response on the graph when the individual
looks up or down.
∗ Looking up: When the subject looks up, the
center of the eye moves away from the ref-
erence point, so the difference between them
will be higher.
∗ Looking down: When the subject looks down,
the center of the eye gets closer to the ref-
erence point, so the difference between them
will be lower.
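The interpretation rules above can be sketched as a small helper that labels a horizontal movement of the left eye from the change in its difference signal. The tolerance and names are illustrative assumptions; the right-eye and vertical cases follow symmetrically from the rules listed above.

```python
# Sketch of the left-eye horizontal interpretation rules
# (tolerance and names are illustrative assumptions).
def classify_left_eye_horizontal(diff_before, diff_after, tol=1.0):
    """Label a movement from the left-eye/nose difference signal."""
    delta = diff_after - diff_before
    if delta > tol:       # left eye moves away from the nose
        return "left"
    if delta < -tol:      # left eye gets closer to the nose
        return "right"
    return "still"
```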
In this paper, an example with a healthy individual is shown; the video acquisition was carried out under a controlled environment and protocol.
In this example, the subject is a brown-eyed male with no known evidence of visual impairment, seated about 60 cm from the laptop. The eye movements in the video were deliberate. The binarization threshold was 0.12 for both eyes, and two erosions were applied, followed by five dilations and, finally, three erosions.
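The segmentation step just described can be sketched as a binarization followed by the stated morphological sequence. The 3×3 square structuring element and the normalization to [0, 1] are assumptions, since the paper does not state the kernel used; the helper names are illustrative.

```python
# Hedged sketch of the pupil segmentation step: binarise the
# (normalised) grayscale eye region at 0.12, then apply two
# erosions, five dilations, and three erosions. A 3x3 square
# structuring element is an assumption.
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def segment_pupil(gray, threshold=0.12):
    """Dark-pixel mask cleaned by 2 erosions, 5 dilations, 3 erosions."""
    mask = gray / gray.max() < threshold  # dark pixels -> pupil candidates
    for _ in range(2):
        mask = erode(mask)
    for _ in range(5):
        mask = dilate(mask)
    for _ in range(3):
        mask = erode(mask)
    return mask
```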
The video lasts approximately 9.4 seconds, and the head remains still until the 8.7-second mark.
Figure 9 shows a frame taken from a video in which the subject is looking at the center. Figure 10 shows the variation over time of the horizontal position of the subject's left eye; the dot on the graph corresponds to the frame shown in Figure 9.
RehabVisual: Implementation of a Low Cost Eye Tracker without Pre-calibration