orientation and location at the crosswalk; then, it can
detect the crosswalk using edge information.
Similarly, the Zebralocalizer identifies pedestrian crossings (i.e., zebra crossings) through line analysis and localizes crosswalks using a camera and 3D accelerometers. These vision-based methods successfully localize crosswalks at intersections and guide users across them safely.
Recently, commercial products such as “Text and Walk” and “Walk and Email” have emerged, which display the road scene captured by the back camera as the application background, thereby allowing pedestrians to write SMS and e-mail messages while walking with less risk. However, it was shown in (Ophir, 2009) that users may not be aware of dangers even when they are displayed in the application background. In (Sivaraman, 2010), the authors presented a car detection system based on Haar features that exploits the back camera of a smartphone. It worked well on the resource-constrained smartphone; however, it could only detect cars when they were already very close to the pedestrian, limiting the time available to react safely.
On the other hand, none of these methods considers the situation in which the user is currently standing. In real scenarios, people require different guidance depending on the context in which they are located. For example, if a user is at an intersection, they want to locate the crosswalk; however, when the user is walking on a sidewalk, the system should lead the user to walk on the side farther from the road. Accordingly, the outdoor context in which the user is located should first be recognized.
In this paper, a novel method for automatically
recognizing a user’s current context is proposed in
order to increase pedestrian safety, particularly for
users who operate their mobile phone while walking.
Here, the context refers to the type of place where a
user is standing, which is classified as a sidewalk,
roadway, or intersection. Among these types of
contexts, discriminating between a sidewalk and an intersection is more important than recognizing a roadway.
As a key to discriminating outdoor contexts, the orientation of the boundaries between sidewalks and roadways is used: horizontally oriented boundaries
are found in images corresponding to intersections
and more vertically oriented boundaries are
observed in images corresponding to sidewalks.
Therefore, such boundaries must first be localized in the input images. In order to separate the boundaries between sidewalks and roadways from other lines, and then to discriminate these boundaries as indicating sidewalks or intersections, the color and texture properties of the images are considered, and machine-learning-based classifiers such as a support vector machine (SVM) are used. Then, in order to reduce the computational cost and improve accuracy, a multi-scale classification is adopted, in which a coarse layer first separates boundary pixels from the background and a fine layer classifies the boundary pixels into
one of the three contexts: sidewalks, intersections, or
roadways.
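The excerpt does not give implementation details of this coarse-to-fine scheme. The following is a minimal sketch of the idea, assuming per-patch color/texture feature vectors and scikit-learn SVMs; the function names, label encoding, and kernel choice are illustrative assumptions rather than the authors' implementation.

import numpy as np
from sklearn.svm import SVC

# Illustrative label encoding for the fine layer (assumed, not from the paper).
SIDEWALK, INTERSECTION, ROADWAY = 0, 1, 2

def coarse_to_fine_classify(features, coarse_svm, fine_svm):
    """Two-stage classification of per-patch feature vectors.

    features   : (N, D) array of color/texture descriptors, one per patch.
    coarse_svm : binary SVM separating boundary patches from background.
    fine_svm   : three-class SVM over sidewalk / intersection / roadway.
    Returns an (N,) label array in which background patches are marked -1.
    """
    labels = np.full(len(features), -1, dtype=int)

    # Coarse layer: keep only the patches classified as boundary.
    is_boundary = coarse_svm.predict(features) == 1
    if not np.any(is_boundary):
        return labels

    # Fine layer: the more expensive three-class decision runs only on the
    # boundary patches selected by the coarse layer.
    labels[is_boundary] = fine_svm.predict(features[is_boundary])
    return labels

# Training would look roughly as follows, given labelled feature sets:
#   coarse_svm = SVC(kernel="rbf").fit(X_all,      y_boundary_vs_background)
#   fine_svm   = SVC(kernel="rbf").fit(X_boundary, y_three_contexts)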
In order to evaluate the effectiveness of the proposed system, numerous videos were collected from real environments and used to measure its accuracy. From
the experimental results, it was found that the
average accuracy was 98.25%.
2 SYSTEM ARCHITECTURE
We propose a novel assistive device that helps mobile phone users walk and cross roads more safely. The proposed system is implemented on a smartphone; it uses the back camera to detect the user's current context and notifies the user of the recognized result through sound and vibration from the phone.
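The excerpt does not describe the processing loop in detail. Purely as a conceptual sketch, the capture-classify-notify cycle might look as follows, with OpenCV's VideoCapture standing in for the phone's back camera and classify_context / notify_user as hypothetical callbacks (the actual sound and vibration output is platform-specific and omitted here).

import cv2

def run_assistive_loop(classify_context, notify_user, camera_index=0):
    """Conceptual main loop: grab frames, recognize the context, alert the user.

    classify_context(frame) -> str   # e.g. "sidewalk", "intersection", "roadway"
    notify_user(context)             # platform-specific sound/vibration feedback
    """
    cap = cv2.VideoCapture(camera_index)
    last_context = None
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            context = classify_context(frame)
            # Notify only when the recognized context changes,
            # so the user is not alerted on every frame.
            if context != last_context:
                notify_user(context)
                last_context = context
    finally:
        cap.release()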
Then, as a key element in discriminating between sidewalks and intersections, the orientation of the boundaries between the sidewalks and roadways is used. Fig. 1 illustrates some sample intersections and sidewalks captured outdoors, where the vertical and horizontal lines along the boundaries between sidewalks and roadways can be easily observed. The images corresponding to intersections have horizontal boundaries, as shown in Fig. 1(a), whereas the images corresponding to sidewalks have boundaries that are close to vertical or otherwise non-horizontal (see Fig. 1(b)).
Figure 1: Sample outdoor images: (a) images categorized as intersections; (b) images categorized as sidewalks.
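As an illustration only of the horizontal-versus-vertical cue visible in Fig. 1 (not the authors' method, which relies on the learned boundary classification described below), the dominant orientation of line segments in a boundary region can be estimated with a Hough transform; the edge thresholds and the majority-vote rule here are assumptions.

import cv2
import numpy as np

def dominant_boundary_orientation(gray, angle_threshold_deg=30):
    """Rough check of whether detected line segments are mostly horizontal
    (intersection-like) or mostly non-horizontal (sidewalk-like).

    gray : single-channel image of the localized boundary region.
    Returns "intersection-like", "sidewalk-like", or None if no lines are found.
    """
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None

    horizontal = 0
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        angle = min(angle, 180 - angle)          # fold into [0, 90] degrees
        if angle < angle_threshold_deg:          # segment is close to horizontal
            horizontal += 1

    return ("intersection-like" if horizontal > len(lines) / 2
            else "sidewalk-like")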
Based on these observations, the proposed method was designed and developed. As seen in Fig. 1, it is critical to accurately localize the boundaries in the input images. For this, we use color and texture properties, which are learned by a machine-learning algorithm.
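The exact color and texture descriptors are not specified in this excerpt. As a hedged illustration, one plausible per-patch descriptor combines HSV color statistics with a gradient-orientation histogram; the color space, bin count, and normalization below are assumptions.

import cv2
import numpy as np

def patch_features(patch_bgr):
    """Illustrative color/texture descriptor for one image patch.

    Color  : mean and standard deviation of the H, S and V channels.
    Texture: an 8-bin histogram of gradient orientations within the patch.
    """
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    pixels = hsv.reshape(-1, 3)
    color = np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.degrees(np.arctan2(gy, gx)) % 180.0
    texture, _ = np.histogram(angles, bins=8, range=(0, 180),
                              weights=np.hypot(gx, gy))
    texture = texture / (texture.sum() + 1e-6)   # normalize magnitude-weighted bins

    return np.concatenate([color, texture])      # 6 + 8 = 14-dimensional vector

Per-patch vectors of this kind could then be fed to the coarse and fine classification layers outlined in the previous section.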