Thinning based Antialiasing Approach for Visual Saliency of Digital
Images
Olivier Rukundo
Tilburg Center for Cognition and Communication, Department of Communication and Information Sciences,
Tilburg University, Warandelaan 2, Tilburg, Netherlands
Keywords: Antialiasing, ACO, Edge Detection, Thinning, Edge-matting, Compositing, Near-realism.
Abstract: A thinning based approach for spatial antialiasing (TAA) has been proposed for visual saliency of digital
images. This TAA approach is based on edge-matting and digital compositing strategies. Prior to edge-
matting the image edges are detected using ant colony optimization (ACO) algorithm and then thinned using
a fast parallel algorithm. After the edge-matting, a composite image is created between the edge-matted and
non-antialiasing image. Motivations for adopting the ACO and fast parallel algorithm in lieu of others found
in the literature are also extensively addressed in this paper. Preliminary TAA experimental outcomes are
more promising but with debatable smoothness to some extent of the original size of the images in
comparison.
1 INTRODUCTION
Reproducing faithfully a continuous signal from
digital samples remains a main concern in digital
signal processing – especially in cases where the
samples have regions with strong spatial
discontinuities or features (Rukundo, 2012;
Rukundo and Maharaj, 2014; Rukundo, Cao and
Huang, 2012; Rukundo, Wu and Cao, 2011;
Rukundo and Cao, 2012). Failure to accurately reproduce a continuous signal results in artefacts known as aliasing (Crow, 1981). In computer vision and graphics, the aliasing effect creates
visually distracting artefacts and loss of salient
details in an image. Visual saliency is a broad term
that refers to the idea that certain parts of a scene are
pre-attentively distinctive and create some form of
immediate significant visual arousal within the early stage of the human visual system (Timor and Michael,
2001). Early approaches used edge detectors to
extract such pre-attentively distinctive parts of an
object. However, today such an extraction is
increasingly also being performed by saliency
detectors, for example, for salient map construction
for assessment applications in still image and video
processing (Tong, et al, 2010; Itti, et al, 1998; Tong,
et al, 2011; Timor and Michael, 2001). Given that
salient regions of an image attract our attention, it is
very important to improve the visual quality of an
aliased salient map or contour in an image.
Therefore, much effort has been expended in
attempts to attain near-realism by developing
advanced antialiasing techniques. Near-realism is a
word used in this paper to signify the best image
approximation of reality ever achieved digitally.
With a few computationally expensive exceptions, most approaches based on filtering before sampling have focused on weakening (if not entirely removing) the effects of image aliasing artefacts using strategies that create new pixels or use artificial, non-original information. There has been reluctance to use the non-artificial or original information of the yet-to-be-antialiased image, despite the well-known blurriness artefacts (which reduce sharpness) associated with creating new or non-original pixel values or using weighted functions. However, an edge-inferring and smoothing antialiasing technique, which defines jags as a sequence of corner points separated by a one-pixel width, has been proposed in (Bloomenthal, 1983); it remains among the rare techniques that tackle aliasing issues by paying attention to edge information, but it is prone to potential errors, especially at the edge vertices. It is
important to note that aliasing artefacts are easily
visible on the edges or highly discontinuous features
of an (aliased) image rendered, not on the main
texture or foreground. Another edge-directed
antialiasing approach has been proposed in (Iourcha,
Yang and Pomianowski, 2009) that is technically
based on the computation of the gradients of edge
information prior to deriving filtered colors.
However, it does not scan larger edged areas or
refine potential edge pixels. Following from these
earlier attempts, a new approach based on digital
thick-edge thinning is studied and presented in this
paper. This approach begins with the premise that
the simplest way of achieving an overlay image is
via compositing but, in this regard, the question
becomes what kind of image is to be composited or
blended for antialiasing purposes? A practical
answer to this question is very important if the
concern is the minimization or removal of the visual
aliasing artefacts via simultaneous elimination of
blurring artefacts in order to increase the sharpness
(of specific salient features) of the processed images.
Figure 1: (a) shows an aliased image of size 256 × 256; (b) shows an image of size 256 × 256 antialiased using the Snowbound Software.
Therefore, this paper takes care to address these
issues extensively throughout the second and third
sections, before moving on to discuss compositing in
detail. Compositing is performed between the image
with thinned edges (i.e. after the edge-matting) and
the original image or the image with aliased
artefacts. It is important to note that in order to
prevent the aliasing artefacts from occurring it is
also necessary to review the need for sampling
continuous signals as well as the isotropic use of a regular grid of pixels, but this is necessarily beyond
the scope of this paper. Figure 1 shows two images,
the aliased or original image (Figure 1 (a)) and the
image antialiased by the Snowbound Software's Virtual Viewer Alias/Anti-alias tool (Figure 1 (b)); in this paper, the latter is referred to as a traditional anti-aliasing technique. As can be seen in Figure 1, the shapes of the boundaries or edges of Figure 1 (a) are harsh; this perception of harshness (by human eyes) is believed or inferred to indicate a false representation, i.e. aliasing. In the same figure, Figure 1 (b) will appear less harsh or smoothed at a lower resolution (i.e. a resolution lower than the 256 × 256 shown in Figure 1 (b)) compared to Figure 1 (a). Readers can use their computer's photo viewer to change the resolutions of Figure 1 (a) and Figure 1 (b) by zooming out to 10% to see how the smoothness perceived in the two cases differs: at 10%, Figure 1 (a) still shows image boundaries with jagged parts, whereas Figure 1 (b) looks smoother than Figure 1 (a). This is
the simplest and most rapid efficiency assessment
adopted for the experimental part of this paper. By
considering the blurriness of Figure 1 (b), it becomes
apparent that surrounding stair steps or jagged edges
with intermediate shades of gray is not the most
promising way to achieve visual salience of an
image without further introducing blurriness in the
rendered image. A combination of strategies has been developed and applied that refines the actual sampled image signals by first detecting the strongest discontinuity zone (or area) and then applying the thinning technique to achieve a less harsh look before producing the overlay image; this combination also offers further blurriness-removal advantages.
The preliminary outcomes of this technique are
presented and discussed in this paper. Because the
history and literature around antialiasing, visual
saliency, edge detection and thinning techniques and
algorithms are already well-known and widely
documented, they will not be replicated in this
paper.
This paper is organized as follows: Part II
introduces ACO edge detection; Part III presents the
thinning concept and the algorithm adopted; Part IV presents the proposed TAA; Part V presents experimental results and discussions; Part VI presents further discussions; and Part VII offers a conclusion.
2 ACO EDGE DETECTION
Detecting thick edges or large image zones with strong discontinuities is crucial in the strategy of
edge-matting. The tool adopted for this process is
the ACO (Dorigo and Stutzle, 2004; Baterina and
Oppus, 2010; Rezaee, 2008; Lu and Chen, 2008).
The traditional edge detection methods – such as the Sobel or Canny edge filters/detectors – act like step detectors in their operations, and consequently are
unable to adequately cover large enough areas of
discontinuities (Canny, 1986; Gonzalez and Woods,
2008). Unlike the ACO edge detector results shown
in Figure 3 (a), after applying these step detectors to
the image in Figure 1 (a), the obtained results
(shown in Figure 2 (a) and Figure 2 (b)) are
ThinningbasedAntialiasingApproachforVisualSaliencyofDigitalImages
659
evidence that Sobel and Canny edge detectors cover
a smaller area of image discontinuities. As it can be
observed, the boundaries of Figure 2 (a) do not have a discernible difference in shape compared to Figure
1 (a). Under such circumstances, there is no need for
thinning – given that the single-pixel-width ridgeline
has been achieved – but without considering the
influence of potentially larger zones of image
discontinuities surrounding or adjacent to the step
edge strip detected. Also, as shown in Figure 2 (b),
the outcome of the Canny step detector technique is
unsatisfactory when applied to larger zones of image
feature discontinuities which are crucial for
achieving antialiasing of images without introducing new pixels or using artificial pixel values.
Figure 3 (a) shows the results obtained after
detecting edges of Figure 1 (a) using the ACO edge
detection method.
Figure 2: (a) edge detection using the Sobel method and (b) edge detection using the Canny method.
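For reference, the step-detector baselines in Figure 2 can be reproduced with MATLAB's edge function; a minimal sketch, assuming the aliased test image of Figure 1 (a) is available as a file (the file name is hypothetical):

    % Minimal sketch (Image Processing Toolbox); the file name is hypothetical.
    I = imread('letter_A.png');                 % aliased input, cf. Figure 1 (a)
    if size(I, 3) == 3, I = rgb2gray(I); end    % work on a grayscale image
    Es = edge(I, 'sobel');                      % step-detector result, cf. Figure 2 (a)
    Ec = edge(I, 'canny');                      % step-detector result, cf. Figure 2 (b)
    figure; imshowpair(Es, Ec, 'montage');      % display the two binary edge maps side by side

The thick-edge map of Figure 3 (a), by contrast, is produced by the ACO implementation of Tian, Yu and Xie referenced in Section 4, not by these step detectors.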
As can be observed, the boundaries are thicker and visibly different in shape from those in Figure 2 (a) and
Figure 2 (b). This demonstrates the utility of the
ACO edge detection method for the purpose of
studying larger zones of image discontinuities
(without the need for redefining the edge or the
algorithmic parameters). Following from this, the
question becomes how to reduce the large zones
covered by ACO to a ridgeline of single pixel width?
Figure 3: (a) shows the edges detected by ACO; (b) shows a zoomed-in portion of the detected edge, with a red line showing the approximation of the average shape and the estimated location of the thinned edge ridgeline.

It is important to note that the objective is to achieve an average shape that uses only the original pixels and reduces the visible harshness possibly induced by the aliasing artefacts. It is from this that the need
for application of the thinning concept emerges. In
order to reduce detected edges to a single pixel
width size, it is critical to thin the ACO detected
edges. As high computational cost is another issue in
developing digital image processors, a fast thinning
algorithm has been used in this process.
3 THINNING
A process of reducing an object in a digital image to
the minimum size necessary for machine recognition
of that object is referred to as thinning (Abe,
Mizutani and Wang, 1994; Wang and Zhang, 1989;
Zhang and Suen, 1984). This consists of converting
binary shapes obtained from edge/boundary
detection to one-pixel width lines. This is based on
deleting pixels iteratively inside the shape to shrink
it without shortening it or breaking it apart. A fast
parallel algorithm based on the work done by Zhang
and Suen in (Zhang and Suen, 1984) has been used
for this purpose and the result is shown in Figure 4.
The MATLAB source code of this algorithm is available on the MathWorks website. As can be observed in
Figure 4, the right-angledness of the stair-steps’
vertices has been changed.
Figure 4: A portion of the edge thinned using the fast parallel thinning algorithm developed by Zhang and Suen (1984).
The new image appears much more near-realistic, with reduced harshness, an outcome achieved by inducing these changes at every vertex.
Therefore, thinning the image edges detected by
ACO is the final step in the edge-matting strategy
and is also the most computationally expensive part
of the TAA.
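As a rough illustration of this edge-matting step, the following sketch thins a binary thick-edge map to a one-pixel-wide ridgeline. It uses MATLAB's built-in bwmorph 'thin' operation as a stand-in; the paper itself uses the Zhang-Suen fast parallel algorithm whose code is available from the MathWorks website, and the variable holding the ACO edge map is hypothetical.

    % Illustrative thinning sketch; aco_edges is a hypothetical binary map from the ACO detector.
    E    = aco_edges > 0;                     % ensure a logical (binary) thick-edge map
    skel = bwmorph(E, 'thin', Inf);           % iterate thinning until a one-pixel-wide ridgeline remains
    figure; imshowpair(E, skel, 'montage');   % thick-ridgeline vs. thinned edge-matted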
4 TAA
A process of extracting the object from the original
image is often referred to as matting, while the
VISAPP2015-InternationalConferenceonComputerVisionTheoryandApplications
660
process of inserting the object into another image is
called compositing (Szeliski, 2010). In this paper,
the word matting has been used in connection with
the process of extracting the thinnest ridgelines and
therefore named ‘edge-matting’. A corresponding
and specifically extracted ridgeline or shape has
been referred to as ‘edge-matted’. In Figure 5, the
light blue color indicates the thickest ridgeline
detected by the ACO or ‘thick-ridgeline’ – while a
dark blue color (crossing into the center of the thick-
ridgeline) indicates the potential final location of the
‘edge-matted’ with reference to this thick-ridgeline.
Figure 5: Example of the thick-ridgeline and edge-matted in light and dark blue, respectively, with similar thick-ridgeline boundaries.
As can be observed, the thick-ridgeline shape is similar to the estimated edge-matted shape. The edge-matted shape is inferred according to the concept of iteratively deleting pixels side by side adopted by
many image thinning and skeletonization
algorithms. Referring to the results shown in Figure
2 (a) and Figure 2 (b) and the inferred result in
Figure 5, either in light or dark blue, there is a strong
similarity in shapes. This indicates that when the thick-ridgeline boundaries are similar in shape (or evolve at the same frequency), the resulting edge-matted will have exactly the same shape, thus making it difficult to antialias using the TAA approach.

Figure 6: Example of the thick-ridgeline and 'copied' edge-matted in light and dark blue, respectively, with dissimilar thick-ridgeline boundaries.

However, when the thick-ridgeline boundaries (see Figure 6) are dissimilar in shape, the inferred edge-matted will also be dissimilar to those thick-ridgeline shapes (see
Figure 7). This phenomenon demonstrates the
importance of not ignoring larger image
discontinuity zones, especially when the concern is
to achieve the most reasonable shape of edge-
matted.
It should be noted that the ACO edge detector
was chosen due to its potential for overcoming the
limitations of traditional edge detection methods and
its ability to extract the thickest ridgeline in a more
efficient and reliable way (Baterina and Oppus,
2010).
Figure 7: Example showing approximately how iterative deletion of spurious pixels is performed. The bluish lines represent the edge-matted.
Figure 7 uses various colors to show approximately
how the iterative deletion of spurious pixels is
performed (from light yellow color to dark yellow
color and then to a mix of light and dark blue colors)
and provides evidence that, under such
circumstances, the final inferred edge-matted shape
is the average shape of (and is thereby dependent
upon) the shapes of the thick-ridgeline boundaries.
This dependence is an outstanding finding of this paper and could serve to initiate a revolutionary step
forward in developing further applications. Figure 8
shows the inferred edge-matted according to the
shape of the thick-ridgeline boundaries. In Figure 8, the shape of the edge-matted is different from that in Figure 5 and has less harsh boundaries compared to the thick-ridgeline shapes – while in Figure 5, the edge-matted and thick-ridgelines have no dissimilarities in shape. In addition, comparing the edge-matted shape to that of any of the boundaries of the thick edge, the former looks much smoother than the latter.
As these ideal cases demonstrate, the result is that
the edge-matted inferred in Figure 5 and Figure 8
have different outcomes although the thinning
ThinningbasedAntialiasingApproachforVisualSaliencyofDigitalImages
661
processes (or iterative deletion performed side by side) were the same. For antialiasing purposes,
the final compositing strategy has been divided into
two important steps, namely alpha compositing and
thresholding.
Figure 8: Example of the resulting edge-matted.
Alpha compositing was adopted as the easiest operation for combining a non-antialiasing image with an edge-matted image in order to create a new image or appearance. Thresholding has been introduced and applied to the overlay image in order to extract (or output) the refined composite image. Referring to compositing algebra, the over operator has been adopted, as it is suitable for placing one image on top of the other (or simply blending two images); it is accomplished by applying Equation (1).
C_o = C_a·α_a + C_b·α_b·(1 − α_a)                (1)

where C_o is the result of the operation or composite image, C_a is the colour of the pixel in the edge-matted image, C_b is the colour of the pixel in the original image, and α_a and α_b are the alphas of the pixels in the edge-matted and original image, respectively (Porter and Duff, 1984). Figure 9 recapitulates the
TAA and briefly illustrates the edge-matting as a
combination of edge detection and thinning. The
compositing is presented as the final strategic step in
which the overlay image is formed before applying
the threshold and outputting the results. This relative
simplicity makes the TAA approach interesting to apply for antialiasing purposes as well as for the further development of robust antialiasing techniques. In this approach, the original image is input and its edges are simultaneously detected using the algorithm developed by Tian, Yu and Xie (Tian, Yu and Xie, 2008), whose MATLAB source code is available on the MathWorks website. This is
followed by the edge thinning process using a fast
parallel algorithm for thinning developed by Zhang
and Suen (Zhang and Suen, 1984).
Figure 9: Recapitulation of the TAA.
Afterwards, the resulting edge-matted image is blended with the original image, thus forming a composite image. It should be noted that in MATLAB a similar blending of two images can be achieved with the help of the imfuse function. The obtained blended images (see Figure 13) are then refined by applying a threshold to the results of the compositing operation, according to Equation (2) and Equation (3).
C_o = 255,  if C_o > 124                (2)

C_o = 0,    if C_o ≤ 124                (3)
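A minimal sketch of the compositing and thresholding steps of Equations (1) to (3) is given below. The alpha values and variable names are illustrative assumptions (the paper does not specify them); skel and I stand for the thinned edge map and the original grayscale image from the earlier sketches.

    % Equation (1): "over" compositing of the edge-matted image (C_a) onto the original (C_b).
    Ca = 255 * double(skel);                            % edge-matted image as intensities in [0, 255]
    Cb = double(I);                                     % original image
    alphaA = 0.5; alphaB = 1.0;                         % assumed alphas (scalars here, per-pixel in general)
    Co = Ca .* alphaA + Cb .* alphaB .* (1 - alphaA);   % composite image
    % Equations (2) and (3): threshold the composite at 124.
    Co(Co >  124) = 255;
    Co(Co <= 124) = 0;
    figure; imshow(uint8(Co));
    % A comparable (unthresholded) blend can also be obtained with the imfuse 'blend' method, as noted above.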
The outcomes of preliminary experiments and those
of the Snowbound Software’s Virtual Viewer
Alias/Anti-alias tool are presented and discussed in
the following part.
5 EXPERIMENTAL RESULTS
AND DISCUSSIONS
The preliminary TAA results are displayed at a higher resolution in Figure 12 to clearly show the difference in visual output between the new TAA and the old-concept-based antialiasing. Figure 10 (a) shows an original or non-antialiasing image with very
harsh edges. The thinnest skeleton (or edge-matted)
is shown in Figure 10 (b) with less harsh edges
compared to the former. It is important to note that
what can be seen as disconnections of edges is not a
flaw specific to the TAA method, and additionally is
not a concern of this experiment, since the TAA
output is an overlay image of two images (see Figure
13 (a) and Figure 13 (b)).
Figure 10: (a) non-antialiasing and (b) thinned-edge image.

In addition, it is evident that it is not necessary for all the edge-lines/boundaries of a
non-antialiasing image to undergo antialiasing (see
and compare Figure 11 (a) and Figure 11 (b)).
Figure 11: (a) traditionally antialiased and (b) edge-matted output images.
In Figure 11 (a), it can be seen that the traditional antialiasing applied along the vertical and horizontal lines of the non-antialiasing image was not necessary. This is another advantage of the TAA technique, as it means that TAA does not spend additional computational effort calculating average pixel values to pad along such
12 (c) and Figure 12 (d), a blurring layer of the
calculated average pixel values is added to the non-
antialiasing image edge boundaries. Only when presented at a lower resolution, for example 64 × 64, would such images look smoother than images at 256 × 256 resolution – thereby tricking the human eye through the appearance of smoothed edge boundaries. A disadvantage of this
artificial-layer-adding based method is its
dependence on the images for its operation. For
example, see the results in Figure 12 (c) and Figure
12 (d). At the actual resolution, it can be observed
that they do not demonstrate the same level of
artefacts though processed by the same traditional
antialiasing method. In the case of TAA, shown in Figure 12 (e) and Figure 12 (f), the boundaries of the antialiased image look quite different, especially at the right-angled corners or vertices. In addition, the sharpness is increased with TAA as compared to the traditional technique, which remains technically
deficient. Further locally objective assessments
using any image in various photo viewers or editing
software programmes also display better
performance of the proposed TAA compared to the
Snowbound Software case presented – especially when the antialiased image size is reduced to 10% of its original size. Technically, reducing an image to a particular percentage does not mean that this should be the size at which that image remains for further processing.
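A minimal sketch of this quick 10% zoom-out check, assuming the original and TAA-antialiased results are saved to image files (the file names are hypothetical):

    % Reduce both images to 10% of their original size and compare them side by side.
    Aorig  = imread('letter_A_aliased.png');     % hypothetical file: non-antialiasing image
    Ataa   = imread('letter_A_taa.png');         % hypothetical file: TAA output
    shrink = @(X) imresize(X, 0.10);             % 10% of the original size
    figure; imshowpair(shrink(Aorig), shrink(Ataa), 'montage');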
Figure 12: (a) and (b) - two non-antialiasing images, (c)
and (d) - two traditionally-antialiased images, (e) and (f) -
two TAA antialiased images.
Such resizing is usually done to increase the size of an aliased image to a higher resolution for antialiasing activities and afterwards reduce it to a lower resolution (matching the desired size), as a method of concealing the induced visual artefacts and thus creating a visually pleasing perception of that output at a low resolution. Further studies on this
non-artificial pixels-based antialiasing concept are
necessary given the increasing interest in achieving
near-realism as well as preventing aliasing artefacts
from occurring in images processed digitally. The
ThinningbasedAntialiasingApproachforVisualSaliencyofDigitalImages
663
following section further discusses the potential of
the TAA approach and offers insights into the
process of the imfuse function when it is applied
between the edge-matted and non-antialiasing
images (see Figure 13 (a) and Figure 13 (b)) just
before the threshold is applied.
6 FURTHER DISCUSSIONS AND
APPLICATIONS
There is the potential to further develop the
proposed approach and extend its application to
refining other categories of digital images which
also require visual saliency, as well as to many other cases in which an effort to view an object's near-realism is needed.
Figure 13: (a) and (b) - overlay images obtained from the edge-matted and original images, in each case. False colour has been enabled via the 'falsecolor' option of imfuse to clearly show the boundaries between the edge-matted and original images in the overlay images.
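A Figure 13-style overlay can be produced in one line with imfuse; a minimal sketch with hypothetical variable names for the edge-matted and original images:

    % 'falsecolor' highlights where the two images differ, exposing the edge-matted boundaries.
    overlay = imfuse(edgeMatted, originalImage, 'falsecolor');
    figure; imshow(overlay);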
It should be noted that the proposed approach enables the exclusive use of original pixel information, which is not possible with traditional antialiasing methods and is particularly important when blurriness artefacts are undesired. Figure 13 (a) and Figure 13 (b) show the overlay images of each of the two processed images (of the capital and small 'A' letters) obtained by using only the imfuse function with no threshold applied.
However, the corresponding overlay images do not
automatically produce the desired visual effects and
therefore a threshold has been introduced and
applied to ensure that the widest visible rift is output
or obtained. Such a threshold has been applied to the results of the compositing operation in Equation (1), according to Equation (2) and Equation (3). The average MATLAB execution/processing times required to output the composite image (ACO_average (four kernel functions, 30 steps) = 24.51856875 sec or ACO_average (one kernel function, 30 steps) = 9.989709 sec; thinning = 13.293638 sec; compositing = 2.085868 sec), as well as other computational-complexity-related issues, will be discussed in the optimization effort in future papers.
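The per-stage timings quoted above can be gathered with MATLAB's tic/toc; a minimal sketch in which the stage functions are hypothetical wrappers around the steps described in Section 4:

    t = tic; E  = aco_edge_detect(I);          tAco  = toc(t);   % hypothetical ACO edge-detection wrapper
    t = tic; S  = bwmorph(E, 'thin', Inf);     tThin = toc(t);   % thinning stage
    t = tic; Co = composite_threshold(S, I);   tComp = toc(t);   % hypothetical Equations (1)-(3) wrapper
    fprintf('ACO %.3f s, thinning %.3f s, compositing %.3f s\n', tAco, tThin, tComp);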
7 CONCLUSIONS
The results and arguments presented in this paper
concentrated on increasing or improving the visual
saliency of an image processed digitally. The
undesired phenomenon associated with digital image
processing – aliasing – results from an unfaithful
reproduction of a continuous signal or sample by a
digital processor/software. Effort to tackle this phenomenon has led to the development of an antialiasing method that exclusively uses non-artificial pixels in its operations to avoid inducing other phenomena such as blurriness or halos (which also reduce the visual saliency). This TAA approach comprises the strategic steps of edge-matting and compositing, which have been thoroughly examined, and experimental results have been provided in the previous sections of this paper.
The key differences between the two strategies have been explored, and the preliminary experimental results demonstrate the absence of edge-vertex errors, achieved by avoiding the averaged-pixel padding process and by attaining high quality edge refinement via thinning. The quality performance measure selected served locally as an 'objective assessment', since any photo viewer or editing software can be used locally owing to the simplicity of this measure. The
sharpness achieved by TAA is very promising but
the smoothness remained debatable to some extent
compared to the other case studies mentioned.
Further investigations and improvements are expected in future research work, especially in
terms of computational complexity and accuracy as
well as comparative evaluations using more
complicated images for testing more objects.
ACKNOWLEDGEMENTS
This work was supported by the Tilburg Center for Cognition and Communication, Tilburg School of
Humanities, Tilburg University. The author would
like to thank the reviewers and editors for their
helpful comments.
REFERENCES
Rukundo, O., 2012. 灰度像插值优化方法的研究 (Optimal Methods Research on Grayscale Image Interpolation). Thesis. China National Knowledge Infrastructure.
Rukundo, O., Maharaj, B.T., 2014. Optimization of Image
Interpolation Based on Nearest Neighbour Algorithm.
In VISAPP’14, 9th International Conference on
Computer Vision Theory and Applications, Lisbon,
Portugal, pp. 641-647.
Rukundo, O., Cao, H.Q., Huang, M.H., 2012,
Optimization of Bilinear Interpolation Based on Ant
Colony Algorithm, Lecture Notes in Electrical
Engineering, Vol. 137, pp. 571-580.
Rukundo, O., Wu, K.N., Cao, H.Q., 2011. Image
Interpolation Based on the Pixel Value Corresponding
to the Smallest Absolute Difference. In IWACI’11, 4th
International Workshop on Computational
Intelligence, Wuhan, China, pp. 432-435.
Rukundo, O., Cao, H.Q., 2012. Nearest Neighbor Value
Interpolation. International Journal of Advanced
Computer Science and Applications. Vol. 3, No.4, pp.
25-30.
Crow, F.C., 1981. A Comparison of Antialiasing
Techniques. IEEE Computer Graphics and
Applications, Vol. 1, pp. 40–48.
Timor, K., Michael, B., 2001. Saliency, Scale and Image
Description. International Journal of Computer Vision,
Vol. 45, No. 2, pp. 83-105.
Tong, Y.B., Cheikh, F.A., Konik, H., Tremeau, A., 2010.
Full reference image quality assessment based on
saliency map analysis. International Journal of
Imaging Science and Technology, Vol. 54, No.3, pp.
030503-030514.
Itti, L., Koch, C., Niebur, E., 1998. A Model of Saliency-
Based Visual Attention for Rapid Scene Analysis.
IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 20, No. 11, pp.1254-1259.
Tong, Y. B., Cheikh, F. A., Guraya, F.F.E., Konik, H.,
Tremeau, A., 2011. A Spatiotemporal Saliency Model
for Video Surveillance. Journal of Cognitive
Computing, Vol.3, No.1, pp.241-263.
Bloomenthal, J., 1983. Edge Inference with Applications
to Antialiasing. In SIGGRAPH '83, 10th Annual
Conference Computer Graphics and Interactive
Techniques, Detroit, USA, pp. 157-162.
Iourcha, K., Yang, J.C., Pomianowski, A., 2009. A
directionally adaptive edge anti-aliasing filter. In HPG
'09, Conference on High Performance Graphics, New
Orleans, Louisiana, pp. 127–133.
Dorigo, M., Stutzle, T., 2004. Ant Colony Optimization.
MIT Press, Massachusetts. Illustrated Ed.
Baterina, A.V., Oppus, C., 2010. Image Edge Detection
Using Ant Colony Optimization. WSEAS Transactions
on Signal Processing, Vol. 6, pp. 58-67.
Rezaee, A., 2008. Extracting Edge of Images with Ant
Colony. Journal of Electrical Engineering. Vol. 59, pp.
57-9.
Lu, D.S., Chen, C.C., 2008. Edge Detection Improvement
by Ant Colony Optimization. Pattern Recognition
Letters. Vol. 29, pp. 416-25.
Canny, J., 1986. A Computational Approach to Edge
Detection. IEEE Trans. Pattern Analysis and Machine
Intelligence. Vol. 8, pp. 679-698.
Gonzalez, R. C., Woods, R. E., 2008. Digital Image
Processing. Pearson Prentice Hall, 3rd Edition.
Abe, K., Mizutani, F., Wang, C.H., 1994. Thinning of
Gray-scale Images with Combined Sequential and
Parallel Conditions for Pixel Removal. IEEE
Transactions on Systems, Man, and Cybernetics. Vol.
24, pp. 294-299.
Wang, P.S.P., Zhang, Y.Y., 1989. A Fast and Flexible
Thinning Algorithm. IEEE Transactions on
Computers. Vol. 38, pp. 741-744.
Zhang, T. Y., Suen, C. Y., 1984. A Fast Parallel
Algorithm for Thinning Digital Patterns.
Communication of the ACM. Vol. 27, pp. 236-239.
Szeliski, R., 2010. Computer Vision: Algorithms and
Applications. Springer London, Illustrated Ed.
Porter, T., Duff, T., 1984. Compositing Digital Images.
ACM SIGGRAPH Computer Graphics. Vol. 18, pp.
253-259.
Tian, J., Yu, W.Y., Xie, S.L., 2008. An Ant Colony
Optimization Algorithm for Image Edge Detection. In
CEC’08, Congress on Evolutionary Computation,
Hong Kong, pp. 751-756.
ThinningbasedAntialiasingApproachforVisualSaliencyofDigitalImages
665