Efficient Classification of Digital Images based on Pattern-features

Angelo Furfaro (1) and Simona E. Rombo (2)

(1) DIMES, University of Calabria, Italy
(2) DMI, University of Palermo, Italy

Keywords: Image Classification, Bidimensional Pattern Extraction, Irredundant Pattern.
Abstract: Selecting a suitable set of features, able to represent the data to be processed while retaining the relevant distinctive information, is one of the most important issues in classification problems. While many different features can be extracted from the raw data, only a few of them are actually relevant and effective for the classification process. Since the relevant features are often unknown a priori, many candidate features are usually introduced, which degrades both the speed and the predictive accuracy of the classifier due to the redundancy present in the candidate feature set. We propose a class of features for image classification based on the notion of irredundant bidimensional pair-patterns, and we present an image classification algorithm based on their extraction. The devised technique scales well on parallel multi-core architectures, as witnessed by the experimental results obtained on a benchmark image dataset.
1 INTRODUCTION
Image classification is an active research field, and various classification techniques, based on supervised or unsupervised learning, or on a mix of the two, have appeared in the literature. Surveys on the topic can be found in (Bosch et al., 2007; Lu and Weng, 2007; Nanni et al., 2012).

Traditional approaches use low-level image features, such as color or texture histograms. Other techniques rely on intermediate representations, made of local information extracted from interesting image patches referred to as keypoints (Bosch et al., 2007). Image keypoints are automatically detected using various techniques, and then represented by means of suitable descriptors. Keypoints are usually clustered based on their similarity, and each cluster is interpreted as a "visual word", which summarizes the local information pattern shared among the keypoints belonging to it (Yang et al., 2007). The set of all the visual words constitutes the visual vocabulary or codebook. For classification purposes, an image is then represented as a histogram of its local features, analogously to the bag-of-words model for text documents. Examples of commonly exploited keypoint detectors are: Difference of Gaussian (DoG) (Lowe, 2001), Sample Edge Operator (Berg et al., 2005) and Kadir-Brady (Kadir and Brady, 2001). Feature descriptors are often based on SIFT (Scale Invariant Feature Transform) (Lowe, 1999).
Like in many other classification problems, the relevant features to be employed are not known a priori. Therefore, various candidate features can be introduced, many of which are either partially or completely irrelevant or redundant with respect to the target concept (Dash and Liu, 1997). A relevant feature is neither irrelevant nor redundant to the target concept; an irrelevant feature does not affect the target concept in any way, and a redundant feature does not add anything new to it (John et al., 1994). In this work, we analyze a special kind of features for image classification, based on the concept of bidimensional irredundant pair-patterns.
Roughly speaking, bidimensional irredundant pair-patterns represent approximate repetitions between pairs of images in an input training set. In more detail, given two images I_1 and I_2, one can superimpose each rectangular sub-portion of I_1 with each rectangular sub-portion of I_2, keeping only those pieces of the two images that match. When all the possible repeated portions between I_1 and I_2 are considered, also taking into account sub-regions that are similar but not identical, the number of such features can grow exponentially with the size of I_1 and I_2, and many of the extracted patterns are irrelevant and/or redundant.
Suitable notions of maximality and irredundancy
have been introduced for digital images (Apostolico
et al., 2008) and successfully exploited both for image compression (Amelio et al., 2011) and for image classification (Furfaro et al., 2013), proving useful in encoding the relevant information of the input images while reducing their representation. However, all such approaches extract repetitions from a single input image, whereas for classification purposes the goal is to single out repetitions able to characterize a set of images representing a class.
A first contribution of the research work presented here is the extension of the notions of maximality and irredundancy introduced in (Apostolico et al., 2008) to pairs of images. In particular, the main idea is to eliminate all the extracted pair-patterns that are redundant with respect to the other ones in the candidate set. The notion of redundancy here is related to the occurrence of patterns in the images of a class: if several patterns occur in the same class, only the most informative ones are taken into account and used as features for the classification process.
As a second contribution, we propose an image classification approach based on the extraction of such bidimensional pair-patterns. In particular, given an input training set of already classified images, the irredundant pair-patterns are extracted and used to build a suitable codebook. After feature selection, image classification is then performed based on the k-Nearest Neighbour approach. The algorithm has been implemented according to the principles of parallel computing, in order to exploit modern multi-core architectures efficiently.
We tested the proposed approach on a benchmark image dataset (ZuBuD (Shao et al., 2003)), and the preliminary results show that it is able to reach high accuracy values.
The paper is organized as follows. Section 2 illustrates some preliminary notions, while Section 3 describes the proposed classification algorithm in detail. In Section 4 we show some preliminary results obtained on real datasets, also comparing them with those returned by other methods proposed in the literature. Finally, Section 5 draws some concluding remarks.
2 PRELIMINARY NOTIONS
A digitized image can be represented as a rectangular array I of N = m × n pixels, where each pixel i_ij is a character (typically encoding an integer) over an alphabet Σ, corresponding to the set of colours occurring in I (see Figure 1).
[Figure 1(a): the digitized "Lena" image.]

(b)  I_[m,n] =
        i_11  i_12  ...  i_1n
        i_21  i_22  ...  i_2n
        ...
        i_m1  i_m2  ...  i_mn

Figure 1: (a) A digitized image (Lena). (b) The corresponding image I over the alphabet of colours Σ = {c_1, c_2, ..., c_k} (each element i_ij represents a pixel of Lena with a specific colour c_l ∈ Σ).
We are interested in finding a compact descriptor for a set of images S_I = {I_1, I_2, ..., I_l}, able to capture the common features of the images in the set. To this aim, we search for the repetitive content among I_1, I_2, ..., I_l, under the assumption that, if a small block of such images is sufficiently repeated in S_I, then it represents a feature that is characteristic of the set. Such repeated blocks are not necessarily identical: they may differ in some pixels, due to different shades or lightness in the original pictures. Thus, in addition to the solid characters from Σ, we also deal with a special don't care character, denoted by '∗', that is a wildcard matching any character in Σ ∪ {∗}. Don't cares are useful in order to take into account approximate repetitions. We say that an image P defined on Σ ∪ {∗} occurs in a larger image I if P[i, j] = I[h + i − 1, k + j − 1], for some position [h, k] in I and for all positions [i, j] in P.
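To make the occurrence relation concrete, the following minimal Java sketch (our own naming and 0-based indexing, not the paper's implementation) checks whether a pattern with don't cares occurs at a given position of an image:

/** Minimal sketch of the occurrence test: a pattern P, possibly containing
 *  don't cares, occurs in an image I at position [h, k] if every solid
 *  character of P equals the pixel of I it is superimposed on. Images are
 *  int matrices over the colour alphabet; DONT_CARE encodes the wildcard
 *  '*'. Indices are 0-based, whereas the text uses 1-based positions. */
final class OccurrenceTest {
    static final int DONT_CARE = -1;

    static boolean occursAt(int[][] p, int[][] img, int h, int k) {
        if (h + p.length > img.length || k + p[0].length > img[0].length)
            return false; // P must fit entirely inside I at [h, k]
        for (int i = 0; i < p.length; i++)
            for (int j = 0; j < p[0].length; j++)
                if (p[i][j] != DONT_CARE && p[i][j] != img[h + i][k + j])
                    return false; // a solid character mismatches
        return true;
    }
}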
Given a pair of images I_a and I_b in S_I, both of size m × n, a bidimensional pair-pattern is an extended image P of size m' × n' such that:
1. m' ≤ m and n' ≤ n;
2. there is at least one solid character adjacent to each edge of P;
3. there exist positions [h_a, k_a] in I_a and [h_b, k_b] in I_b such that P occurs in both I_a and I_b.
The notion of bidimensional pair-pattern extends that of 2D motif, already introduced in (Apostolico et al., 2008) and applied to digital images in (Amelio et al., 2011; Furfaro et al., 2017). The main difference here is that the bidimensional pair-pattern (referred to simply as pattern in the following) is extracted from
pairs of images, whereas the 2D motif represents repetitions occurring within a single image.
When approximate occurrences are taken into account, and patterns with don't cares are thus considered, the number of all the possible patterns that can be extracted from an input image can grow drastically, often becoming exponential in the size of the input image. In order to limit such a growth, suitable notions of maximality and irredundancy have been proposed for bidimensional patterns (Apostolico et al., 2008; Rombo, 2009; Rombo, 2012). Since such notions concern patterns extracted from a single image, we extend them here to pairs of images. To this aim, given a pattern P occurring in some of the images in S_I (we say that P is covered on S_I), its occurrence list L_P = {1, 2, ..., k} is made of the indices of those images {I_1, I_2, ..., I_k} in S_I where P occurs.
Maximal Pattern. Let P = {P_1, P_2, ..., P_f} be a set of patterns covered on S_I, and let L_{P_1}, L_{P_2}, ..., L_{P_f} be their occurrence lists, respectively. A pattern P_i is maximal in P if and only if there exists no pattern P_j ∈ P, j ≠ i, such that P_i occurs in P_j and |L_{P_i}| = |L_{P_j}|. In other words, P_i cannot be substituted by P_j without losing some of the occurrences of P_i in S_I.
Irredundant Pattern. A pattern P_i that is maximal in P, with occurrence list L_{P_i} in S_I, is also irredundant in P if there exist no maximal patterns P_j ∈ P, j = 1, ..., h, such that P_i occurs in each P_j and L_{P_i} = L_{P_1} ∪ L_{P_2} ∪ ... ∪ L_{P_h}, up to some offsets.
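As an illustration of how the maximality condition can prune a candidate set, the sketch below discards every pattern that occurs inside another candidate having an occurrence list of the same size. The Pattern holder and occursIn are hypothetical helpers of ours, and the irredundancy test (unions of occurrence lists, up to offsets) is omitted for brevity:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Sketch of maximality-based pruning: P_i is dropped when some other
 *  candidate P_j contains an occurrence of it and |L_{P_i}| = |L_{P_j}|,
 *  i.e., P_j can replace P_i without losing occurrences. */
final class MaximalityFilter {
    static final int DONT_CARE = -1; // our encoding of '*'

    record Pattern(int[][] pixels, Set<Integer> occurrenceList) {}

    static List<Pattern> keepMaximal(List<Pattern> candidates) {
        List<Pattern> maximal = new ArrayList<>();
        for (Pattern pi : candidates) {
            boolean subsumed = false;
            for (Pattern pj : candidates) {
                if (pi != pj
                        && occursIn(pi.pixels(), pj.pixels())
                        && pi.occurrenceList().size() == pj.occurrenceList().size()) {
                    subsumed = true; // pj makes pi non-maximal
                    break;
                }
            }
            if (!subsumed) maximal.add(pi);
        }
        return maximal;
    }

    /** True if p occurs somewhere inside q; don't cares in p match anything
     *  (q's entries are treated as plain characters, a simplification). */
    static boolean occursIn(int[][] p, int[][] q) {
        for (int h = 0; h + p.length <= q.length; h++) {
            for (int k = 0; k + p[0].length <= q[0].length; k++) {
                boolean match = true;
                for (int i = 0; match && i < p.length; i++)
                    for (int j = 0; match && j < p[0].length; j++)
                        if (p[i][j] != DONT_CARE && p[i][j] != q[h + i][k + j])
                            match = false;
                if (match) return true;
            }
        }
        return false;
    }
}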
3 THE CLASSIFICATION ALGORITHM
This section describes the proposed image classification procedure, whose pseudo-code is reported in Algorithm 1.

The algorithm takes as input an image dataset S_I = {I_1, I_2, ..., I_l}, a priori partitioned into h classes C_1, C_2, ..., C_h, and a test image I for which the algorithm will predict the membership class. For each image in the input dataset, the set of irredundant bidimensional pair-patterns is extracted by overlapping it with all the other images in its class, as described by the extraction procedure (PATTERNEXTRACTION) reported in Algorithm 2. The overall set of the extracted motifs constitutes the codebook D. Then, for each image I_j, we build an array w_j having as many entries as the codebook size: w_j is the histogram of the occurrences of the codebook patterns in I_j, i.e., w_j[i] is the number of occurrences of the motif m_i in the image I_j.
The REMOVEBORDER procedure, invoked at line 12 of Algorithm 2, removes from the input pattern those border zones made up of only don't cares.

In order to classify a test image I, its histogram w with respect to the codebook patterns is computed, exactly as for the training set images.

The final step, which actually outputs the classification label for I, uses the well-known k-Nearest Neighbour (kNN) classification algorithm: the distance between each array w_j and the array w of the image to be classified is computed, and the output class is the one with the largest consensus among the k images scoring the lowest distances from I.
Algorithm 1: Classification algorithm.
Require: set S_I of l images of size m × n, partitioned into h classes C_1, C_2, ..., C_h; a test image I; an integer k
Ensure: the label x of the class predicted for I
/* training phase */
 1: D = ∅
 2: for each class C_j in C_1, C_2, ..., C_h
 3:     B_j = PATTERNEXTRACTION(C_j)
 4:     D = D ∪ B_j
 5: end for
 6: for each image I_j in S_I
 7:     let w_j be an empty array of |D| integer values
 8:     for each motif m_i in D
 9:         w_j[i] = COMPUTEOCCURRENCES(m_i, I_j)
10:     end for
11: end for
/* testing phase */
12: let w be an empty array of |D| integer values
13: for each motif m_k in D
14:     w[k] = COMPUTEOCCURRENCES(m_k, I)
15: end for
16: let E be an empty array of l real values
17: for each image I_j in S_I
18:     E[j] = dist(w_j, w)
19: end for
20: sort E in increasing order
21: let S be the set of the first k elements in E
22: return the label x of the most popular class in S
Algorithm 2: Extraction of irredundant patterns.
Require: set S_I of l images of size m × n
Ensure: set I_{S_I} of irredundant patterns of S_I
 1: I'_{S_I} = ∅
 2: for each image I_a in S_I
 3:     for each image I_b ≠ I_a in S_I
 4:         for each position [i_a, j_a] in I_a
 5:             for each position [i_b, j_b] in I_b
 6:                 extract a pattern P_ab such that
 7:                     if I_a[i_a, j_a] ≠ I_b[i_b, j_b] then
 8:                         P_ab[i, j] = ∗
 9:                     else
10:                         P_ab[i, j] = I_a[i_a, j_a]
11:                     end if
12:                 P_ab = REMOVEBORDER(P_ab)
13:                 I'_{S_I} = I'_{S_I} ∪ P_ab
14:             end for
15:         end for
16:     end for
17: end for
18: for each pattern P in I'_{S_I}
19:     compute the occurrence list of P in S_I
20: end for
21: I_{S_I} = I'_{S_I} \ {all the redundant patterns in I'_{S_I}}
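For concreteness, the core step of Algorithm 2 (lines 6-12) could be coded as in the following sketch, where the helper names are ours and images are int matrices over the colour alphabet:

/** Sketch of Algorithm 2, lines 6-12: superimpose I_a and I_b starting at
 *  [ia, ja] and [ib, jb], keep matching pixels, write don't cares elsewhere,
 *  then trim border rows/columns made only of don't cares (REMOVEBORDER). */
final class PairPatternExtractor {
    static final int DONT_CARE = -1; // our encoding of '*'

    static int[][] extract(int[][] a, int[][] b, int ia, int ja, int ib, int jb) {
        int rows = Math.min(a.length - ia, b.length - ib);
        int cols = Math.min(a[0].length - ja, b[0].length - jb);
        int[][] p = new int[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                p[i][j] = (a[ia + i][ja + j] == b[ib + i][jb + j])
                        ? a[ia + i][ja + j] : DONT_CARE;
        return removeBorder(p);
    }

    /** Trims leading/trailing rows and columns containing only don't cares,
     *  so that a solid character is adjacent to each edge of the pattern. */
    static int[][] removeBorder(int[][] p) {
        int top = 0, bottom = p.length - 1, left = 0, right = p[0].length - 1;
        while (top <= bottom && allDontCareRow(p, top)) top++;
        while (bottom >= top && allDontCareRow(p, bottom)) bottom--;
        while (left <= right && allDontCareCol(p, left, top, bottom)) left++;
        while (right >= left && allDontCareCol(p, right, top, bottom)) right--;
        if (top > bottom || left > right) return new int[0][0]; // no solid pixel
        int[][] q = new int[bottom - top + 1][right - left + 1];
        for (int i = 0; i < q.length; i++)
            for (int j = 0; j < q[0].length; j++)
                q[i][j] = p[top + i][left + j];
        return q;
    }

    private static boolean allDontCareRow(int[][] p, int r) {
        for (int v : p[r]) if (v != DONT_CARE) return false;
        return true;
    }

    private static boolean allDontCareCol(int[][] p, int c, int top, int bottom) {
        for (int r = top; r <= bottom; r++) if (p[r][c] != DONT_CARE) return false;
        return true;
    }
}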
3.1 Distance Notions
An important aspect of classification algorithms is the choice of the distance notion employed to measure the similarity among objects; in some cases, this choice has a great impact on the algorithm's performance. As discussed in the next section, the proposed classification technique has been tested with different definitions of the distance function. Some of them are normalized versions of classical distance functions, which also take into account the standard deviation of the training samples for each dimension. In particular, we used: the Euclidean distance and its normalized version, the Hamming distance, and a normalized Manhattan distance.
The formal definitions of these functions are reported in Table 1, where s_k² = (1/(l−1)) Σ_{i=1..l} (w_i[k] − w̄[k])² is the variance of the k-th dimension, w̄[k] = (1/l) Σ_{i=1..l} w_i[k] is the mean value of the k-th dimension, and δ(x, y) = 1 if x = y, 0 otherwise.
3.2 Algorithm Parallelization
The algorithm has been implemented according to the principles of parallel computing. Some details about the implementation are provided below.
Table 1: Used distances.

Euclidean:             d_2(w_i, w_j)  = sqrt( Σ_{k=1..n} (w_i[k] − w_j[k])² )
Normalized Euclidean:  d_2s(w_i, w_j) = sqrt( Σ_{k=1..n} (w_i[k] − w_j[k])² / s_k² )
Hamming:               d_H(w_i, w_j)  = Σ_{k=1..n} δ(1 − δ(w_i[k], 0), δ(w_j[k], 0))
Normalized Manhattan:  d_Ms(w_i, w_j) = Σ_{k=1..n} |w_i[k] − w_j[k]| / s_k
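A direct transcription of the distances of Table 1 might look as follows; the array s of per-dimension standard deviations is our naming, and it is assumed non-zero:

/** Sketch of the distances of Table 1 over occurrence histograms.
 *  wi, wj are histograms of equal length n; s[k] is the standard deviation
 *  of dimension k over the training set. */
final class Distances {
    static double euclidean(int[] wi, int[] wj) {
        double sum = 0;
        for (int k = 0; k < wi.length; k++) {
            double d = wi[k] - wj[k];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    static double normalizedEuclidean(int[] wi, int[] wj, double[] s) {
        double sum = 0;
        for (int k = 0; k < wi.length; k++) {
            double d = wi[k] - wj[k];
            sum += (d * d) / (s[k] * s[k]);
        }
        return Math.sqrt(sum);
    }

    /** d_H counts the dimensions where exactly one of the two histograms
     *  is zero, following the delta-based definition of Table 1. */
    static double hamming(int[] wi, int[] wj) {
        int count = 0;
        for (int k = 0; k < wi.length; k++)
            if ((wi[k] != 0) != (wj[k] != 0)) count++;
        return count;
    }

    static double normalizedManhattan(int[] wi, int[] wj, double[] s) {
        double sum = 0;
        for (int k = 0; k < wi.length; k++)
            sum += Math.abs(wi[k] - wj[k]) / s[k];
        return sum;
    }
}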
[Figure 2: Parallel execution of Algorithm 1. Training-phase tasks (PATTERNEXTRACTION per class, COMPUTEOCCURRENCES per pattern/image pair) and testing-phase tasks (COMPUTEOCCURRENCES per pattern, dist per training image) run concurrently before the class of image I is assigned.]
Both the training and the testing phases have been parallelized. In particular, a pool of n_core + 1 worker threads is created, where n_core is the number of available cores on the underlying architecture. Each iteration of the for loops can be transformed into a separate task, which can be submitted for execution and accomplished by a worker thread. In particular, regarding the for loop at line 3, for each class we generate a single task consisting in the execution of the PATTERNEXTRACTION procedure on the images of that class; similar considerations hold for the for loop at line 6, where a single task is the execution of the COMPUTEOCCURRENCES procedure on each pair made of a pattern and a training image.

As for the testing phase, the for loop at line 13 is parallelized in the same way as that at line 6. Finally, the computation of the distances between the histogram of a test image and those of the training images (line 17) is performed in parallel as well.
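The paper does not list its threading code; a minimal sketch of the task-per-iteration scheme with java.util.concurrent, under our own placeholder names, could be:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch of the task-pool scheme of Section 3.2: a fixed pool of
 *  n_core + 1 worker threads; one task per (pattern, image) pair fills one
 *  histogram entry. computeOccurrences stands in for the paper's procedure,
 *  which is not shown in the text. */
final class HistogramBuilder {
    static int computeOccurrences(int[][] motif, int[][] image) {
        return 0; // placeholder: count the occurrences of motif in image
    }

    static int[] histogram(List<int[][]> codebook, int[][] image,
                           ExecutorService pool) throws Exception {
        List<Future<Integer>> entries = new ArrayList<>();
        for (int[][] motif : codebook) {
            // for loops at lines 6/13 of Algorithm 1: one task per motif
            entries.add(pool.submit(() -> computeOccurrences(motif, image)));
        }
        int[] w = new int[codebook.size()];
        for (int i = 0; i < w.length; i++) {
            w[i] = entries.get(i).get(); // wait for the i-th task
        }
        return w;
    }

    public static void main(String[] args) {
        int nCore = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nCore + 1);
        pool.shutdown(); // the pool is reused for training and testing tasks
    }
}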
[Figure 3: Images from the ZuBuD dataset.]
4 EXPERIMENTAL RESULTS
This section describes the experimental campaign performed in order to test and analyze the results of the proposed image classification algorithm. The algorithm has been implemented in Java. A standard benchmark dataset, the ZuBuD dataset, described in the next subsection, has been used. The classifier performance has been measured from two points of view: the accuracy, and the scalability of the parallel execution. In particular, the experiments have been executed on a machine with an i7 processor with 2.00 GHz per core and 8 GB of RAM.
4.1 The ZuBuD Dataset
The ZuBuD (Zurich Building Image Database) (Shao et al., 2003) is a well-known collection of images, often used to test classification algorithms. The dataset is freely available and is considered a standard benchmark in the literature for this kind of classifier. It consists of a collection of photos of 201 buildings of Zurich. For each building, the training set contains five pictures, each representing the building from a different point of view. Each training image has a size of 640 × 480 pixels: in total, there are 1005 images in the training set. The test set consists of 115 photos of buildings, of size 320 × 240 pixels, representing some of the buildings contained in the training set under various angles and points of view.
[Figure 4: Accuracy. Classification accuracy (%) vs. number of training classes, for the Euclidean, normalized Euclidean, normalized Manhattan and Hamming distances.]
Table 2: Accuracy vs. distance (201 classes).

Distance               Accuracy
Euclidean              71.30%
Normalized Euclidean   78.26%
Normalized Manhattan   75.65%
Hamming                72.17%
In general, the photos exhibit numerous sources of heterogeneity: they were taken with two different cameras and in different periods of the year. The only condition that is consistent among all the photos of a same building is the illumination, which remains almost identical.
4.2 Accuracy Analysis
The algorithm has been tested with an increasing number of training classes and with various kinds of distances. This allows us to study how the accuracy evolves as the number of images, and consequently of extracted patterns, grows. Figure 4 shows that the accuracy tends to decrease as the training set grows up to 100 classes; it then increases and settles down to about 70%.

As can be seen from Table 2, the best score was achieved by the normalized Euclidean distance, which with 201 classes reached an accuracy of 78.26%, i.e., 90 test images out of 115 correctly classified.
4.3 Scalability
In this kind of algorithm, performance is a crucial aspect, especially with big datasets. As explained in Section 3.2, the classifier has been parallelized both in the training phase and in the testing phase: this allows it to exploit modern multi-core architectures efficiently.
[Figure 5: Speed up. Speed-up factor vs. number of executing threads (1-9).]
We also optimized all the data structures used during the execution. However, there is no guarantee that all the real-world machines on which the algorithm will run have the same number of physical cores at their disposal. It is therefore worth observing how the performance evolves as the number of threads running simultaneously on the machine varies: this aspect is summarized in the speed-up plot of Figure 5. The results refer to an experiment conducted with 10 training classes.

The speed-up chart shows that increasing the number of simultaneously running threads has a beneficial effect on the execution time. Clearly, beyond a certain number, i.e., that of the physical processors, there are no further considerable improvements. The machine on which the experiments were performed is equipped with 8 cores, and by using a pool of 8 threads we reached a speed-up factor of nearly 4.
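As a rough, back-of-the-envelope reading (ours, not part of the original analysis), a plateau of about 4 with 8 threads is consistent with Amdahl's law when roughly 86% of the execution is parallelizable:

\[
S(n) = \frac{1}{(1-p) + p/n}, \qquad S(8) = \frac{1}{(1-p) + p/8} = 4 \;\Rightarrow\; p = \frac{6}{7} \approx 0.86,
\]

where p denotes the parallelizable fraction of the execution time.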
5 CONCLUDING REMARKS AND DISCUSSION
This paper presented an image classification technique based on bidimensional motifs. The analysis of the experimental results suggests that the technique is effective and achieves good performance both in terms of accuracy and in terms of scalability. We must not, however, lose sight of the enormous complexity that the classification problem inherently presents, especially as regards the search space that can be explored in order to improve the classification accuracy. In fact, at present, the classifier can be customized through a considerable number of parameters, each of which can assume a potentially very wide range of values. In some cases, a wrong choice of these parameters can degrade performance to the point of making the execution times unacceptable. In this sense, future research should identify the configurations that maximize the percentage of correctly classified images while keeping performance reasonable. As regards the implementation, the current version of the classifier runs with maximal efficiency on a single machine with a multi-core architecture: a future step might be to provide an implementation able to distribute the computation over cluster machines or other distributed environments, for example cloud scenarios. In this way, we may improve the timing performance and study the behavior of the classifier in more complex scenarios. Also worth noting is the positive influence of the introduction of distance normalization: it led to a substantial improvement in the accuracy of the algorithm, and is indeed a solid starting point for future research. It might also be interesting to replace the final classification technique, currently k-NN, with other types of classifiers, for example statistical ones. Another open question is how non-exact chromatic matching influences the technique. Moreover, suitable invariances, such as rotations and/or translations, could be taken into account in the 2D basis generation, by extending the proposed approach to this aim. Finally, a comparison with more recent techniques for image classification (Chan et al., 2015; Kumar et al., 2017; Maggiori et al., 2017) will be the object of our future investigation.
ACKNOWLEDGEMENTS
The authors are grateful to Michele Bombardieri for his help in the implementation of a preliminary version of the software presented here. Moreover, their research has been partially supported by the INDAM-financed project "Elaborazione ed analisi di Big Data modellati come grafi in vari contesti applicativi", under the program GNCS 2018.
REFERENCES
Amelio, A., Apostolico, A., and Rombo, S. E. (2011). Image compression by 2D motif basis. In Data Compression Conference (DCC 2011), pages 153-162.
Apostolico, A., Parida, L., and Rombo, S. E. (2008). Motif patterns in 2D. Theoretical Computer Science, 390(1):40-55.
Berg, A., Berg, T., and Malik, J. (2005). Shape matching and object recognition using low distortion correspondences. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), volume 1, pages 26-33.
Bosch, A., Muñoz, X., and Martí, R. (2007). Review: Which is the best way to organize/classify images by content? Image Vision Comput., 25(6):778-791.
Chan, T., Jia, K., Gao, S., Lu, J., Zeng, Z., and Ma, Y. (2015). PCANet: A simple deep learning baseline for image classification? IEEE Trans. Image Processing, 24(12):5017-5032.
Dash, M. and Liu, H. (1997). Feature selection for classification. Intelligent Data Analysis, 1:131-156.
Furfaro, A., Groccia, M. C., and Rombo, S. E. (2013). Image classification based on 2D feature motifs. In Flexible Query Answering Systems, pages 340-351. Springer.
Furfaro, A., Groccia, M. C., and Rombo, S. E. (2017). 2D motif basis applied to the classification of digital images. Computer Journal, 60(7):1096-1109.
John, G. H., Kohavi, R., and Pfleger, K. (1994). Irrelevant features and the subset selection problem. In Machine Learning: Proceedings of the Eleventh International Conference, pages 121-129.
Kadir, T. and Brady, M. (2001). Saliency, scale and image description. Int. J. Comput. Vision, 45(2):83-105.
Kumar, A., Kim, J., Lyndon, D., Fulham, M. J., and Feng, D. D. (2017). An ensemble of fine-tuned convolutional neural networks for medical image classification. IEEE J. Biomedical and Health Informatics, 21(1):31-40.
Lowe, D. (1999). Object recognition from local scale-invariant features. In Proc. of the 7th IEEE International Conference on Computer Vision, volume 2, pages 1150-1157.
Lowe, D. (2001). Local feature view clustering for 3D object recognition. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), pages 682-688.
Lu, D. and Weng, Q. (2007). A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28(5):823-870.
Maggiori, E., Tarabalka, Y., Charpiat, G., and Alliez, P. (2017). Convolutional neural networks for large-scale remote-sensing image classification. IEEE Trans. Geoscience and Remote Sensing, 55(2):645-657.
Nanni, L., Lumini, A., and Brahnam, S. (2012). Survey on LBP based texture descriptors for image classification. Expert Syst. Appl., 39(3):3634-3641.
Rombo, S. E. (2009). Optimal extraction of motif patterns in 2D. Information Processing Letters, 109(17):1015-1020.
Rombo, S. E. (2012). Extracting string motif bases for quorum higher than two. Theoretical Computer Science, 460:94-103.
Shao, H., Svoboda, T., and Gool, L. V. (2003). ZuBuD - Zurich building database for image based recognition. Technical Report TR-260, Computer Vision Lab, Swiss Federal Institute of Technology, Switzerland.
Yang, J., Jiang, Y.-G., Hauptmann, A. G., and Ngo, C.-W. (2007). Evaluating bag-of-visual-words representations in scene classification. In Proceedings of the International Workshop on Multimedia Information Retrieval, MIR '07, pages 197-206.