Image Analysis based on Radon-type Integral Transforms Over Conic Sections

Dhekra El Hamdi 1,2, Mai K. Nguyen 1, Hedi Tabia 1 and Atef Hamouda 2
1 Laboratoire Équipes de Traitement de l'Information et Système (ETIS), Université de Cergy-Pontoise / ENSEA / CNRS UMR 8051, F-95000 Cergy-Pontoise, France
2 Laboratoire d'Informatique, Programmation, Algorithmique et Heuristiques (LIPAH), Faculté des Sciences de Tunis, Université de Tunis El Manar, 1068 Tunis, Tunisia
Keywords: Radon Transform, Conic Sections, Image Analysis, Feature Extraction.
Abstract: This paper presents a generalized Radon transform defined on conic sections, called the Conic Radon Transform (CRT), for image analysis. The proposed CRT extends the classical Radon transform (RT), which integrates an image function f(x, y) over straight lines. Since the CRT is capable of detecting conic sections at any position and orientation in the original image, it makes it possible to build a new descriptor based on integrating an image over conic sections. In order to test and verify the utility and performance of this new approach, we have developed, in this work, the Radon transforms defined on circles and on parabolas, and then built a descriptor combining the features extracted by the circular RT, the parabolic RT and the linear RT. This descriptor is applied to object classification. A number of experiments on both synthetic and real datasets illustrate the efficiency and the advantages of this new approach, which takes into account the global features of the different (circular, parabolic and linear) shapes present in the images under study.
1 INTRODUCTION
One of the most basic stages in image analysis is the detection of primitive features such as lines and curves in an image. The most popular method for segment recognition is the classical Radon transform (RT) (Radon, 1917), which is defined as an integral of the image function along all lines in image space. Various applications based on the RT have been implemented, such as centerline detection (Zhang and Couloigner, 2007), biometric identification such as iris identification (Bharath et al., 2014) and object recognition (Nguyen and Hoang, 2015).
So far the classical RT has been restricted to segment detection. In this paper, we focus on curve detection. Our main motivation is to achieve the two following objectives: the definition of a generalized Radon transform that detects more complex curves than lines, and the application of this new transform to feature extraction. Our purpose is to verify that integrating an object over curves rather than lines can improve the performance of object classification.
This paper is organized as follows: Section 2 presents a review of related work in the literature. Section 3 then defines the RT over conic sections. In Section 4, we discuss the use of our approach for feature extraction. We then present the experimental evaluation of our method in Section 5. Section 6 concludes the paper.
2 RELATED WORKS
In this section, we review some previous works related to the RT for feature extraction and to generalized Radon transforms.
Over the evolution of image analysis, a number of methods have been proposed for the definition of descriptors with high discrimination power.
There are two main categories of feature extraction methods: transform-based methods, which compute a global descriptor of the shape, and handcrafted methods, which aim at extracting local features such as the Scale Invariant Feature Transform (SIFT), the Gradient Localization Oriented Histogram (GLOH) and Gradient Moments (GM) (Islam and Sluzek, 2010). These descriptors are local and based on the gradient magnitude and orientation of keypoints.
In this paper, we focus on the first category, which aims to extract a global descriptor. In contrast to
local features, global features are not based on certain points of interest but describe the image as a whole.
Popular methods are based on the Fourier transform; the Generic Fourier Descriptor (GFD) proposed by Zhang and Lu is a typical Fourier descriptor which is invariant to shape rotation (Zhang and Lu, 2002).
Besides the Fourier descriptor, the classical RT has also been employed for the definition of several shape descriptors thanks to its excellent geometric properties. Hasegawa et al. proposed an RT-based method for shape recognition based on the histogram of the RT (Hasegawa and Tabbone, 2016). This approach is robust to translation, rotation and scaling, but it is not invariant under shape distortion. To address this, the authors compute an angle correlation matrix and apply dynamic time warping to the angle coordinate in order to achieve robustness to distortion transformations. Furthermore, the RT has been used for near-duplicate image detection (Lei et al., 2014): the authors proposed a family of geometric invariant features based on the linear RT, which are able to distinguish image pairs that are not near-duplicates.
Despite the efficiency of the RT for linear feature detection, it remains limited when it comes to detecting more complex features: the recognition of patterns other than linear features cannot be achieved directly by the RT.
One of the most widely used transforms for the detection of complex features is the generalized Hough Transform (GHT) (Ballard, 1981). It can recognize parameterized curves and arbitrary shapes in binary images and in grey-level images. However, the GHT is a discrete, intuitive method, unlike the RT, which rests on a mathematical foundation that allows a continuous 2D function f to be recovered from its integrals.
Recently, several works have focused on generalizing the RT to detect more complex patterns, where the straight lines are replaced by curves and weight functions are introduced into the integrals along these curves.
Elouedi et al. defined a polynomial discrete Radon transform (PDRT) for the detection of polynomial curves (Elouedi et al., 2015). This transform generalizes the classical RT by projecting the image with respect to polynomial curves. However, the use of the PDRT is limited to square images of prime size.
Our motivation in this work is to define a novel generalized Radon transform that can detect more complex forms, namely the conic sections. We present an analytical method for the generalized Radon transform which is very different from the approach mentioned above.
In the next section, we introduce the definition of a generalized Radon transform which extends the classical RT to conic sections in the plane, and we provide the mathematical framework for the integrals over conic sections.
3 RADON TRANSFORM OVER CONIC SECTIONS
Let us first recall the classical Radon transform (RT). The RT in Euclidean space represents the integration of a function f(x, y) over lines, as defined in the following equation:

R f(ρ, φ) = ∫₋∞⁺∞ ∫₋∞⁺∞ f(x, y) δ(ρ − x cos φ − y sin φ) dx dy,   (1)

where δ(.) is the Dirac delta function, ρ ∈ ]−∞, +∞[ is the distance from the origin of the coordinate system to the line and φ ∈ [0, π[ is the angle corresponding to the orientation of the line (Fig. 1).
In the Radon space, the value R f(ρ, φ) reaches a maximum (peak) at the points whose coordinates (ρ, φ) correspond to the parameters of the lines in the image (Fig. 2).
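For concreteness, the discretisation of Equation (1) can be sketched in a few lines of Python/NumPy. This is our own minimal illustration, not the implementation used in this paper; the name linear_rt and all sampling choices are ours. It sums bilinearly interpolated image samples along each line (ρ, φ):

import numpy as np

def linear_rt(img, n_phi=180, n_rho=None):
    """Approximate the linear RT of Eq. (1): for each (rho, phi), sum
    bilinearly interpolated image samples along the corresponding line."""
    ny, nx = img.shape
    if n_rho is None:
        n_rho = int(np.ceil(np.hypot(nx, ny)))
    cx, cy = nx / 2.0, ny / 2.0
    phis = np.deg2rad(np.arange(n_phi))              # phi in [0, pi[
    rhos = np.linspace(-n_rho / 2, n_rho / 2, n_rho)
    t = np.linspace(-n_rho / 2, n_rho / 2, n_rho)    # parameter along the line
    out = np.zeros((n_rho, n_phi))
    for j, phi in enumerate(phis):
        for i, rho in enumerate(rhos):
            # points on the line x*cos(phi) + y*sin(phi) = rho
            x = rho * np.cos(phi) - t * np.sin(phi) + cx
            y = rho * np.sin(phi) + t * np.cos(phi) + cy
            valid = (x >= 0) & (x < nx - 1) & (y >= 0) & (y < ny - 1)
            x, y = x[valid], y[valid]
            x0, y0 = x.astype(int), y.astype(int)
            fx, fy = x - x0, y - y0
            # bilinear interpolation of f at (x, y)
            v = (img[y0, x0] * (1 - fx) * (1 - fy)
                 + img[y0, x0 + 1] * fx * (1 - fy)
                 + img[y0 + 1, x0] * (1 - fx) * fy
                 + img[y0 + 1, x0 + 1] * fx * fy)
            out[i, j] = v.sum()
    return out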
Let T_{x0,y0}, R_{φ0} and S_α denote the geometric transformations, where u(x0, y0) is the translation vector of coordinates (x0, y0), φ0 is the rotation angle, α is the scale factor, and g(x, y) is the transformed version of the function f(x, y).
The RT offers excellent properties that are useful for
object recognition as outlined below:
• Symmetry: R f(ρ, φ) = R f(−ρ, φ ± π).
• Periodicity: R f(ρ, φ) = R f(ρ, φ + 2kπ), of period 2π, k an integer.
• Translation: a translation of f(x, y) by u(x0, y0), g = T_{x0,y0}[f], implies a shift by the distance d = x0 cos φ + y0 sin φ in the ρ coordinate: Rg(ρ, φ) = R f(ρ − x0 cos φ − y0 sin φ, φ).
• Rotation: a rotation of f(x, y) by an angle φ0, g = R_{φ0}[f], implies a shift in the φ coordinate: Rg(ρ, φ) = R f(ρ, φ + φ0).
• Scaling: a zoom of factor α ≠ 0 in f(x, y), g = S_α[f], involves a change of scale in the ρ coordinate and in the amplitude of Rg by factors α and 1/|α| respectively: Rg(ρ, φ) = (1/|α|) R f(αρ, φ).
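These properties are easy to check numerically. A minimal sketch, reusing the illustrative linear_rt above (the rectangle image and shift values are our own test data), verifies the translation property:

import numpy as np

img = np.zeros((100, 100))
img[40:60, 45:55] = 1.0                             # a small rectangle
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))   # translate by (x0, y0) = (3, 5)

R = linear_rt(img)
Rg = linear_rt(shifted)

# For phi = 0 the peak of Rg along rho should be displaced by
# d = x0*cos(phi) + y0*sin(phi) = x0 = 3 samples relative to R.
print(np.argmax(Rg[:, 0]) - np.argmax(R[:, 0]))     # approximately 3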
Since arbitrary curves cannot be detected by the RT, a generalized Radon transform over conic sections (CRT) in two dimensions may be able to detect more complex curves than lines. Whereas the classical RT integrates a function over lines, the generalized Radon transform represents its integration over conic sections.
The proposed CRT extends the formalism
Figure 1: Geometry of the Radon transform over lines.
Figure 2: (a) Initial image presenting two lines. (b) The classical RT of the image shown in (a): the coordinates of each peak correspond to the polar parameters of a line in (a).
of the RT presented in (Cormack, 1981). Cormack defined a generalized Radon transform on a general set of curves in the plane, which includes as special cases parabolas, hyperbolas, straight lines and circles through the origin, with some restrictions. In fact, these parabolas and hyperbolas are given in polar coordinates with one focus fixed at the origin of the polar coordinate system.
3.1 The CRT Formalism
Geometrically a conic section is the locus of all points
M whose distance to the focus F is equal to a constant
e (eccentricity) multiplied by the distance from M to
the directrix of the conic (Fig. 3).
The conic section in polar coordinates with the focus at the origin is defined, for a point M(r, θ), by:

r = ρ / (1 + e cos(θ − φ)),   (2)

where
Figure 3: (a) Four conics with the same focus F and directrix D: ellipses (e = 0.25, e = 0.5), a parabola (e = 1) and a hyperbola (e = 2). (b) Parabola with orientation angle φ1 = 180°. (c) Ellipse with orientation angle φ2 = 0°.
ρ = b(1 − e²)   for 0 ≤ e < 1, θ ∈ ]−π, π[
ρ = a           for e = 1, θ ∈ ]−π, π[
ρ = b(e² − 1)   for e > 1, θ ∈ ]−θ0, θ0[ ∪ ]θ0, 2π − θ0[, with cos(θ0) = −1/e

Here a is the distance from the focus to the directrix for the parabola, and b is the semi-major axis for the ellipse and the hyperbola.
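For illustration, points of such a conic can be sampled directly from Equation (2). The following Python sketch is our own hypothetical helper (the name conic_points and the cut-off eps are ours); it covers all three cases by keeping only angles where the denominator is positive:

import numpy as np

def conic_points(rho, phi, e, n=400, eps=1e-3):
    """Sample (x, y) points on the conic of Eq. (2), focus at the origin.
    Only angles with 1 + e*cos(theta - phi) > eps are kept, which selects
    one branch of the hyperbola when e > 1."""
    theta = np.linspace(-np.pi, np.pi, n)
    denom = 1.0 + e * np.cos(theta - phi)
    theta = theta[denom > eps]
    r = rho / (1.0 + e * np.cos(theta - phi))
    return r * np.cos(theta), r * np.sin(theta)

# e < 1 gives an ellipse, e = 1 a parabola, e > 1 one hyperbola branch
x, y = conic_points(rho=22.0, phi=0.0, e=0.5)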
The generalized Radon transform integrates a function f(x, y) over conic sections in the plane. It is defined as:

R_c f(x_F, y_F, ρ, φ, e) = ∫_c f(x, y) ds,   (3)

where ds denotes the integration measure on the conic section, whose points are parameterized by

x = ρ cos θ / (1 + e cos(θ − φ)),
y = ρ sin θ / (1 + e cos(θ − φ)).
Let γ = θ − φ. The arc-length element is

ds = √(dr² + r² dγ²),   with   dr² + r² dγ² = r² dγ² (1 + e² + 2 e cos γ) / (1 + e cos γ)²,

so that

ds = ρ √(1 + e² + 2 e cos γ) / (1 + e cos γ)² dγ.
We find:

R_c f(x_F, y_F, ρ, φ, e) = ∫_γ f( ρ cos(γ + φ)/(1 + e cos γ) + x_F , ρ sin(γ + φ)/(1 + e cos γ) + y_F ) ρ √(1 + e² + 2 e cos γ)/(1 + e cos γ)² dγ.   (4)
Therefore the CRT space is described by five parameters: the coordinates of the focus, x_F and y_F, the conic parameter ρ, the orientation angle φ of the conic and the eccentricity e.
3.2 Numerical Simulation Results
We describe in this section the numerical implementation of the RT over conic sections. Throughout the rest of the paper, we present only the discretisation of a class of conics with fixed focus (x_F, y_F) and fixed eccentricity e.
For the special case e = 1, x_F = y_F = 0, we integrate over parabolas with focus at the origin. The equation of the CRT becomes:
equation of CRT become:
R
c
f (ρ, φ) =
R
γ
f
ρ
1 + cos(γ)
cos(γ + φ),
ρ
1 + cos(γ)
sin(γ + φ)
ρ
p
2 + 2 cos(γ)
(1 + cos(γ))
2
dγ.
(5)
The original image function f(x, y) of size N_x × N_y is discretized as follows: N_x = N_y = 100 (arbitrary length unit), dx = dy = 1, −50 ≤ x ≤ 49 and −50 ≤ y ≤ 49. The central point of the CRT coincides with the origin of the object coordinates, (x, y) = (0, 0).
The CRT requires integrals that must be computed numerically (Equation (5)). This is performed via summations along γ using an angular discretisation step dγ = 1 rad. When points of the summation grid do not fall on those of the discrete function, linear interpolation is used to compute the values of the function at the new positions.
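A minimal sketch of this discretisation of Equation (5) is given below. It is our own illustration under stated assumptions, not the authors' code: the name parabolic_crt is ours, the focus is placed at the image centre, bilinear interpolation replaces the linear interpolation described above, and we use a finer angular step than the 1 rad quoted in the text:

import numpy as np

def parabolic_crt(img, rhos, phis_deg, dgamma=0.1, eps=1e-2):
    """Discretisation of Eq. (5): parabolic Radon transform with the
    focus at the image centre. Sums interpolated samples along each
    parabola, weighted by the arc-length element ds."""
    ny, nx = img.shape
    cx, cy = nx // 2, ny // 2
    gammas = np.arange(-np.pi + eps, np.pi - eps, dgamma)
    denom = 1.0 + np.cos(gammas)
    keep = denom > eps                    # avoid the singular directions
    gammas, denom = gammas[keep], denom[keep]
    weight = np.sqrt(2.0 + 2.0 * np.cos(gammas)) / denom**2
    out = np.zeros((len(rhos), len(phis_deg)))
    for j, phi in enumerate(np.deg2rad(phis_deg)):
        for i, rho in enumerate(rhos):
            r = rho / denom
            x = r * np.cos(gammas + phi) + cx
            y = r * np.sin(gammas + phi) + cy
            valid = (x >= 0) & (x < nx - 1) & (y >= 0) & (y < ny - 1)
            xv, yv = x[valid], y[valid]
            x0, y0 = xv.astype(int), yv.astype(int)
            fx, fy = xv - x0, yv - y0
            # bilinear interpolation of f at the sampled parabola points
            v = (img[y0, x0] * (1 - fx) * (1 - fy)
                 + img[y0, x0 + 1] * fx * (1 - fy)
                 + img[y0 + 1, x0] * (1 - fx) * fy
                 + img[y0 + 1, x0 + 1] * fx * fy)
            out[i, j] = rho * np.sum(v * weight[valid]) * dgamma
    return out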
Therefore, to treat the general CRT, we apply this discretisation iteratively, varying the eccentricity e and the focus coordinates x_F and y_F.
The proposed transform is illustrated with some nu-
merical results based on synthetic images (Fig. 4, Fig.
5).
Figure 4: (a) Initial image where three parabolas are positioned along the center row. The result of the CRT on (a) is the (x_F, y_F, ρ, φ, e) parameter space, where (b) presents a 2D view of the CRT space (e = 1, x_F = y_F = 0) in which the coordinates of each peak correspond to the parameters (φ, ρ) of the curves; ρ is the distance from the focus to the directrix and φ is the orientation of the parabola. (c) presents a 3D view of the CRT space according to (x_F, y_F, R_c f(x_F, y_F, ρ, φ, e)), where the color encodes the eccentricity e (blue: ellipse, green: parabola, red: hyperbola) and the green peaks correspond to parabolas.
Figure 5: (a) Initial image presenting two ellipses. (b) The 2D view shows the peaks corresponding to the position of one focus of each ellipse. (c) The coordinates of each maximum correspond to the parameters (x_F, y_F, R_c f(x_F, y_F, ρ, φ, e)) of the four foci of the ellipses, according to (φ = 0°, e = 0.5, ρ = 22). The peaks are blue, the color corresponding to ellipses.
Figure 6: Architecture of the proposed approach.
4 APPLICATION TO FEATURE
EXTRACTION
The proposed approach for feature extraction relies on a set of global features extracted from the Radon space. The main contribution of this work is to demonstrate the utility of integrating an image over curves other than lines. In this section, we propose a novel descriptor based on the CRT.
Our approach can be divided into two stages, as outlined in Fig. 6. In the first stage, we extract global features; in the second stage, the resulting features are concatenated and then fed to a Support Vector Machine (SVM) classifier.
The global features are extracted directly from the Radon space. In order to deal with generic shapes, we chose integration over circles and parabolas rather than lines.
For each focus, the result of the parabolic Radon transform is a (φ, ρ) Radon space. We varied the angle φ in [0°, 179°] and ρ in [1, √(N_x² + N_y²)], where N_x, N_y are the dimensions of the image. Furthermore, the result of the circular Radon transform, which is a special case of the elliptic one, is a vector indexed by the radius R; we varied R in [−√(N_x² + N_y²), √(N_x² + N_y²)].
In order to reduce the computational time of the CRT, we first applied the CRT over parabolas and circles with one fixed focus, the centroid of the image. We then varied the number of foci in order to increase the performance of our descriptor.
We flattened the parabolic Radon space into a vector (Fp) and, likewise, the circular Radon space into a vector (Fc).
Therefore, for each object, a set of global features is extracted: parabolic features (Fp), circular features (Fc) and also linear features (Fd), the vector given by the classical Radon space.
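As an illustration, the per-object feature vectors could be assembled as follows. This is a sketch reusing our hypothetical linear_rt and parabolic_crt routines from the previous sections; circular_rt stands for an analogous routine over circles that we do not spell out here:

import numpy as np

# img: one 2-D grey-level image of the object (assumed loaded beforehand)
diag = int(np.hypot(*img.shape))
Fp = parabolic_crt(img, rhos=np.arange(1, diag), phis_deg=range(180)).ravel()
Fd = linear_rt(img).ravel()
# Fc would come from an analogous circular transform, e.g.:
# Fc = circular_rt(img, radii=np.arange(1, diag)).ravel()   # hypothetical
features = np.concatenate([Fp, Fd])   # the full descriptor also appends Fc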
We applied principal component analysis (PCA) to the Fp, Fc and Fd vectors of all objects in the dataset in order to reduce the dimensionality of the vectors, and we then combined all features into one final feature vector. The goal of the feature concatenation stage is to extract discriminant information that improves the object classification accuracy compared to features extracted from the linear RT alone.

Figure 7: (a) Classes of the ETH-80 data set. (b) Classes of the MPEG-7 data set.
In the classification step, we used a standard SVM (Chang and Lin, 2011) with a radial basis function (RBF) kernel.
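A sketch of this pipeline with scikit-learn, whose SVC class wraps LIBSVM (Chang and Lin, 2011), could look as follows. The input matrices, index arrays and the number of retained components are our assumptions; C = 8 and γ = 0.0625 are the values quoted in Section 5:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Xp, Xc, Xd: one row per object (parabolic, circular and linear Radon
# features); y, train_idx, test_idx are assumed precomputed.
def reduce_and_concat(Xp, Xc, Xd, n_components=32):
    # PCA is applied per feature family before concatenation, as
    # described above; n_components = 32 is our assumption.
    parts = [PCA(n_components=n_components).fit_transform(X)
             for X in (Xp, Xc, Xd)]
    return np.hstack(parts)

X = reduce_and_concat(Xp, Xc, Xd)
# RBF-kernel SVM with C = 8 and gamma = 0.0625, as in Section 5.
clf = SVC(kernel='rbf', C=8, gamma=0.0625).fit(X[train_idx], y[train_idx])
print(clf.score(X[test_idx], y[test_idx]))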
5 EXPERIMENTS
To evaluate the effectiveness of our approach for object classification, we carried out experiments on two image data sets: the ETH-80 and MPEG-7 data sets. Some example objects from these datasets are shown in Fig. 7. The main motivation behind the selection of these two databases is that they provide a good benchmark to show how the proposed descriptor handles multi-object categorization tasks.
First, the ETH-80 dataset contains eight different object categories: apples, tomatoes, pears, toy cows, toy horses, toy dogs, toy cars and cups. Each category is represented by 10 objects, and 41 views per object are provided. For the qualitative analysis of our descriptor, we used the same split of the dataset as the methods we compare with and report our results alongside theirs. Specifically, four randomly chosen images from each of the 10 objects (40 images in total) are selected as the classification set; the remaining instances constitute the training set.
In the training phase, we set the parameters C and γ of the SVM classifier to 8 and 0.0625 respectively.
To evaluate the proposed approach, we used the F-measure, which is a robust measure of the performance of a descriptor for object class recognition. It is defined by:

F-measure = 2 tp / (po + tp + fp),   (6)
Table 1: F-measure for the ETH-80 classes.
Object SIFT GLOH GM RT CRT
Apple 0.97 0.93 0.91 0.80 0.96
Car 0.99 0.97 0.96 0.98 1
Cow 0.98 0.95 0.82 0.87 0.93
Cup 0.99 0.97 0.97 1 1
Dog 0.97 0.92 0.87 0.88 0.93
Horse 0.98 0.94 0.87 0.82 0.91
Pear 0.98 0.95 0.95 0.98 1
Tomato 0.98 0.96 0.88 0.78 0.95
Average 0.98 0.95 0.90 0.89 0.96
Table 2: Overall accuracy on the ETH-80 dataset with 50% training and 50% test.

Training | Test | SIFT (Setitra and Larabi, 2015) | RT  | CRT
50%      | 50%  | 90%                             | 86% | 90%
where tp is the number of true positives, fp is the number of false positives and po is the number of positive examples presented to the classifier for each class.
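Note that po = tp + fn, so Equation (6) is the usual F1 score 2 tp / (2 tp + fp + fn). A one-line check with illustrative numbers of our own:

def f_measure(tp, fp, po):
    # Eq. (6); since po = tp + fn this equals 2*tp / (2*tp + fp + fn)
    return 2.0 * tp / (po + tp + fp)

# e.g. 38 of 40 positives retrieved with 2 false positives:
print(f_measure(tp=38, fp=2, po=40))   # -> 0.95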
With this split of the dataset, we presented 40 images as positive examples and 40 × 7 images as negative examples for each class. The average measure of our descriptor outperforms the descriptor based only on the classical RT: the average F-measure of our approach is 0.96 versus 0.89 for the RT-based method (Table 1).
We then compared the F-measure of our descriptor with some state-of-the-art techniques, namely SIFT, GLOH and GM, which were evaluated on the same dataset. Table 1 gives the F-measure for each class of the ETH-80 dataset. It shows that SIFT has a better average score than our descriptor on ETH-80. However, the difference is slight, and for some classes the F-measure of the CRT descriptor outperforms that of the SIFT descriptor. This slight difference is mainly due to the miscategorization of dogs, horses and cows, owing to the similarity of the animals' legs.
Besides, we carried out several experiments with different sizes of the labelled set and the test set. From Table 2, it can be observed that the overall accuracy of our descriptor is similar to that of the SIFT descriptor for an equal division (50% training, 50% test).
Moreover, we evaluated our approach on the MPEG-7 dataset, which consists of 1400 binary images partitioned into 70 categories, each containing 20 different shapes. For each class, 10 images are chosen as the test set and the remaining 10 images are used for training. Despite the large number of classes, we chose to compute the overall classification accuracy to evaluate the performance of our descriptor. We then compared our results with those of the SIFT descriptor.

Table 3: Overall accuracy on the MPEG-7 dataset with 50% training and 50% test.

Training | Test | SIFT (Setitra and Larabi, 2015) | RT  | CRT
50%      | 50%  | 78%                             | 75% | 86%
Table 3 shows that the overall classification accuracy of our descriptor on the MPEG-7 dataset outperforms that of SIFT: the overall accuracy is 86%. In order to analyze the classification results, we generated a confusion matrix, which cross-tabulates the predicted class against the actual class (Fig. 8). The confusion matrix shows that several objects were classified without any mistake. However, the blue rectangles on the diagonal indicate the few classes that are badly classified: 15 (chicken), 32 (device 9) and 50 (jar).

Figure 8: Confusion matrix for the overall classification (MPEG-7 data set).
6 CONCLUSION
In this work we presented a framework for the CRT, which generalizes the classical RT by integrating an image function over conic sections. This makes possible a new image analysis approach that takes into account the global features of the different (circular, parabolic, linear) shapes in the analysed images. The interest and efficiency of the proposed approach are illustrated by numerical tests in feature extraction and object classification. The encouraging results open the way to further research directions taking into account more classes of curves (ellipses, hyperbolas, etc.) and incomplete shapes (circular arcs, broken lines, etc.) in the images under study (Nguyen and Truong, 2010; Truong and Nguyen, 2015).
REFERENCES
Radon, J. (1917). Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Akad. Wiss., 69:262–277.
Ballard, D. (1981). Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13(2):111–122.
Bharath, B. V., Vilas, A. S., Manikantan, K., and Ra-
machandran, S. (2014). Iris recognition using Radon
transform thresholding based feature extraction with
Gradient-based Isolation as a pre-processing tech-
nique. In 2014 9th International Conference on In-
dustrial and Information Systems (ICIIS), pages 1–8.
Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: A library
for support vector machines. ACM Transactions on
Intelligent Systems and Technology, 2:27:1–27:27.
Cormack, A. M. (1981). The Radon transform on a family
of curves in the plane. Proceedings of the American
Mathematical Society, 83(2):325–330.
Elouedi, I., Fournier, R., Naït-Ali, A., and Hamouda, A. (2015). The polynomial discrete Radon transform. Signal, Image and Video Processing, 9(Supplement-1):145–154.
Hasegawa, M. and Tabbone, S. (2016). Histogram of
Radon transform with angle correlation matrix for dis-
tortion invariant shape descriptor. Neurocomputing,
173:24–35.
Islam, S. and Sluzek, A. (2010). An evaluation of local
image features for object class recognition. In Pro-
ceedings of the International Conference on Computer
Vision Theory and Applications (VISIGRAPP 2010),
pages 519–523.
Lei, Y., Zheng, L., and Huang, J. (2014). Geometric in-
variant features in the Radon transform domain for
near-duplicate image detection. Pattern Recognition,
47(11):3630–3640.
Nguyen, M. K. and Truong, T. T. (2010). Inversion of a new
circular-arc Radon transform for Compton tomogra-
phy. Inverse Problems, 26:065005.
Nguyen, T. P. and Hoang, T. V. (2015). Projection-Based
Polygonality Measurement. Image Processing, IEEE
Transactions on, 24(1):305–315.
Setitra, I. and Larabi, S. (2015). SIFT descriptor for binary
shape discrimination, classification and matching. In
Computer Analysis of Images and Patterns - 16th In-
ternational Conference, CAIP 2015, Valletta, Malta,
September 2-4, 2015 Proceedings, Part I, pages 489–
500.
Truong, T. T. and Nguyen, M. K. (2015). New properties
of the v-line Radon transform and their imaging ap-
plications. Journal of Physics A: Mathematical and
Theoretical, 48(40):405204.
Zhang, D. and Lu, G. (2002). Shape-based image retrieval
using generic Fourier descriptor. Sig. Proc.: Image
Comm., 17(10):825–848.
Zhang, Q. and Couloigner, I. (2007). Accurate Centerline
Detection and Line Width Estimation of Thick Lines
Using the Radon Transform. IEEE Transactions on
Image Processing, 16(2):310–316.