In this study, grayscale images are used in the experiments. All images are first converted to 256-level grayscale in a pre-processing step. A step function then reduces the 256 gray levels to 16, and each m × n image is flattened into a 1 × mn vector. Text compression with PRDC (Pattern Representation Scheme using Data Compression) finds the longest and most frequently repeated features in text data. To exploit this property of PRDC, the images are converted to text data. However, an image is too large to convert pixel by pixel into characters; moreover, the extracted features would become too numerous, and some of them redundant. Hence,
we divide each 1 × mn vector into segments and cluster them, after which each cluster is replaced by a character; the converted image is called a text-transformed image. To obtain the text-transformed images, data compression is used in this study to represent the converted texture image. Each 1 × mn vector (of a grayscale image) is split into segments of length L, and PRDC compresses these segments into compressibility vectors. The dictionaries used for compression are constructed by applying the LZW method to a small number of randomly chosen pre-processed images.
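The steps above can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes a uniform step function (floor division into 16 bins), non-overlapping segments, and an LZW-style dictionary that is grown on training text and then frozen, with the number of emitted codes per input character standing in for the compression ratio. All helper names (`quantize`, `segment`, `lzw_train`, `lzw_compress_with`, `compressibility_vector`) are hypothetical.

```python
def quantize(pixels):
    """Step function: reduce 256 gray levels to 16, one hex symbol per pixel."""
    return "".join("0123456789abcdef"[p // 16] for p in pixels)

def segment(text, L):
    """Split the 1 x mn symbol string into non-overlapping length-L segments."""
    return [text[i:i + L] for i in range(0, len(text) - L + 1, L)]

def lzw_train(text, dict_size=4096):
    """Grow an LZW dictionary by compressing training text."""
    d = {chr(i): i for i in range(256)}
    w = ""
    for c in text:
        wc = w + c
        if wc in d:
            w = wc
        else:
            if len(d) < dict_size:
                d[wc] = len(d)   # learn the new phrase
            w = c
    return d

def lzw_compress_with(d, text):
    """Compress `text` with a frozen dictionary; return the number of codes."""
    out, w = 0, ""
    for c in text:
        wc = w + c
        if wc in d:
            w = wc
        else:
            out += 1
            w = c
    if w:
        out += 1
    return out

def compressibility_vector(seg, dicts):
    """Compression ratio of one segment under each trained dictionary."""
    return [lzw_compress_with(d, seg) / len(seg) for d in dicts]
```

A segment whose patterns already occur in a dictionary's training data compresses into few codes, so its ratio is small; the vector of ratios over all dictionaries characterizes the segment.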
K-means clustering is then performed on these compressibility vectors to group the segments. Segments belonging to the same cluster are considered to have similar properties and can therefore be replaced by a single character, which yields the text-transformed image.
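This clustering and replacement step might look as follows (a minimal sketch: plain Lloyd's k-means and a lowercase-letter alphabet; `kmeans` and `to_text` are illustrative names, not the authors' implementation):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means over the segments' compressibility vectors."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each segment to its nearest cluster centre
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):       # leave empty clusters unchanged
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def to_text(labels):
    """Replace each segment's cluster id with one character,
    giving the text-transformed image as a string."""
    return "".join(chr(ord("a") + int(l)) for l in labels)
```

With k clusters the image becomes a string over a k-letter alphabet, ready to be compressed again in the classification stage.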
Now we classify the text-transformed images based on PRDC. PRDC is used again to compress each text-transformed image into a compressibility vector, with the dictionary constructed by compressing the text-transformed images with the LZW method. Clustering is performed on the compressibility vectors in the same way. The compressibility vectors are used for the classification of similar texture images as follows. The compression dictionaries constitute a compressibility vector space, which can be represented by a compressibility table made by projecting the input data
into the compressibility vector space. Let N_i be the input data. By compressing N_i, a compression dictionary D_{N_i} is obtained. Compressing data N_j with D_{N_i} yields the compression ratio C_{N_j, D_{N_i}} = K / L_{N_j}, where L_{N_j} is the size of the input stream N_j and K is the size of the output stream produced by compressing N_j with D_{N_i}. Compressing with all of the dictionaries, we obtain a compressibility vector for each input, and for all input data we get a compressibility table. In this table, the columns correspond to the data N_j, the rows to the compression dictionaries D_{N_i} formed from that data, and the elements are the compressibilities C_{N_j, D_{N_i}} [%]. We
utilize this table to characterize data. Finally, images
are classified by the proposed approach.
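The construction of the compressibility table can be sketched as below. As a hedged stand-in for the LZW dictionaries of the paper, this sketch uses zlib's preset-dictionary DEFLATE, where each sample's raw bytes serve as the dictionary for compressing the others; `compression_ratio` and `compressibility_table` are hypothetical names.

```python
import zlib

def compression_ratio(data: bytes, dictionary: bytes) -> float:
    """C = K / L: size of the compressed output stream over the size of
    the input stream, compressing `data` with a preset dictionary."""
    co = zlib.compressobj(level=9, zdict=dictionary)
    out = co.compress(data) + co.flush()
    return len(out) / len(data)

def compressibility_table(samples):
    """Rows: dictionaries D_{N_i}; columns: data N_j;
    element [i][j] is the compressibility of N_j under D_{N_i}."""
    return [[compression_ratio(nj, ni) for nj in samples] for ni in samples]
```

Data that resemble the dictionary's source compress well (small ratio), so each column of the table characterizes one input by its similarity to every dictionary.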
3 EXPERIMENTS AND RESULTS
In this section, we evaluate the performance of the proposed approach. Experiments were carried out using real-world images. When evaluating a texture image analysis method, a number of aspects, such as changes in rotation and scale, should be considered. The performance of our approach is therefore evaluated in the following cases. Based on the authors' experience, a radius of R = 3 and P = 8 sampling points are used in the LBP operator, which represents the texture images well.
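The LBP operator with these parameters can be sketched as follows (a minimal hand-rolled version with bilinear interpolation of the circular sample points; in practice one would typically use `skimage.feature.local_binary_pattern`, and the function name `lbp` here is illustrative):

```python
import numpy as np

def lbp(img, P=8, R=3):
    """Basic LBP code per pixel: sample P neighbours on a radius-R circle
    (bilinear interpolation), threshold them against the centre pixel,
    and pack the P bits into an integer in [0, 2**P)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[R:h - R, R:w - R]
    out = np.zeros_like(center, dtype=np.int32)
    for p in range(P):
        a = 2.0 * np.pi * p / P
        dy, dx = -R * np.sin(a), R * np.cos(a)
        y = np.arange(R, h - R)[:, None] + dy
        x = np.arange(R, w - R)[None, :] + dx
        y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
        x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
        fy, fx = y - y0, x - x0
        # bilinear interpolation of the neighbour's intensity
        v = (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0, x0 + 1] * (1 - fy) * fx
             + img[y0 + 1, x0] * fy * (1 - fx) + img[y0 + 1, x0 + 1] * fy * fx)
        out |= (v >= center).astype(np.int32) << p
    return out

img = np.zeros((9, 9))
img[4, 4] = 100.0
codes = lbp(img)   # shape (3, 3): codes only for pixels away from the border
```

Pixels whose neighbours are all at least as bright as the centre receive code 2**P - 1, while an isolated bright pixel receives code 0.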
3.1 Rotation Invariance
For the case of rotation invariance, we test whether our approach can cope with rotation of the image with respect to the viewpoint.
Figure 3: Examples of texture images in (Lazebnik et al.,
2005).
We randomly select 5 unrotated and 5 rotated images from each of the 25 texture classes (Fig. 3) of the textured-surfaces dataset to obtain 250 images. The proposed approach is then applied to represent these images, and the recall of the clustering is computed. Because the initialization of k-means influences the outcome, the experiment is run 5 times and the recall averaged. The average recall reached 89 percent, which is close to the results (88.1 to 92.6 percent) obtained in (Lazebnik et al., 2005). The average recall obtained using the data compression representation alone is 72 percent. This result shows that our approach is able to deal with images changed in rotation.
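The paper does not spell out how clustering recall is computed; one common convention, sketched below under that assumption, scores each class by the fraction of its members landing in the cluster that holds the majority of that class, then averages over classes and over the 5 runs (`class_recall` and `average_recall` are hypothetical helpers):

```python
from collections import Counter

def class_recall(true_labels, cluster_labels):
    """Per-class recall: fraction of a class's members assigned to the
    cluster containing the majority of that class."""
    recalls = {}
    for cls in set(true_labels):
        members = [c for t, c in zip(true_labels, cluster_labels) if t == cls]
        recalls[cls] = Counter(members).most_common(1)[0][1] / len(members)
    return recalls

def average_recall(runs):
    """Average the mean per-class recall over several k-means runs."""
    means = [sum(r.values()) / len(r) for r in runs]
    return sum(means) / len(means)
```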
Even though the images changed in rotation, the combination of the LBP operator and the data compression representation was able to find frequent patterns where textures appear repeatedly in the images. Hence