themselves by trying to correctly annotate body parts
in diagnostic images.
Much work has already been done in this field. In
2003, the IRMA structure was proposed, which uses a
mono-hierarchical multi-axial classification code
that preserves the advantages of SNOMED and
DICOM and also provides advantages in content-based
image retrieval (Lehmann, Schubert, Keysers,
Kohnen & Wein, 2003). To select a region of interest,
geometric tools were proposed by Schneider and
Eberly (2003). Barrett and Mortensen (1997) introduced a
semi-automatic segmentation tool capable of
segmenting a region of interest with very little time
and effort.
For building a medical image retrieval system, a
hierarchical similarity learning method using neural
networks and support vector machines was proposed
by El-Naqa, Yang, Galatsanos, Nishikawa & Wernick
(2004). An automatic indexing and retrieval method,
based on medical concepts from the Unified Medical
Language System (UMLS), was recommended for an
image retrieval system; to learn semantics from the
images, a support vector machine is used within a
structured learning framework. In order to parse an XML
database where tags can have multiple meanings, a
novel XML TF*IDF ranking strategy was proposed
(Bao, Lu, Ling & Chen, 2010). Avni, Greenspan,
Konen, Sharon and Goldberger (2011) represented image
content by local patches using the Bag-of-Words
(BoW) model; a nonlinear kernel-based Support
Vector Machine (SVM) was used to classify the images,
and the system was able to successfully discriminate
between healthy and pathological chest radiographs.
To improve retrieval performance using
adaptive wavelets, a regression function is used to
estimate the best wavelet filter; for every possible
separable or non-separable wavelet filter, the
image characterization is computed almost instantly
using an algorithm proposed by Quellec, Lamard,
Cazuguel, Cochener and Roux (2012). In order to
rank the search results of a text-based query, a
content-based image search was proposed
(Cai, Zha, Wang, Zhang & Tian, 2014).
2 DATA
To examine the efficiency and usability of our system,
we acquired radiographs from different hospitals in
Bangladesh. A total of 4324 DICOM images were
collected. The DICOM images were downscaled to
JPEG images while preserving 80% or higher image quality.
Each image had a fixed width of 1600 pixels, and the
height was scaled proportionally to maintain the
original aspect ratio. All of the images were then
sorted manually into head-neck, body, lower limb,
and upper limb categories. Radiographs without any
human body parts were set aside as a true-negative
category. The sorting was done purely for working
convenience and does not influence the search methods.
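As an illustration of the downscaling step described above, the following is a minimal sketch that converts one DICOM radiograph to a 1600-pixel-wide JPEG at quality 80. The paper does not state which tools were used for the conversion, so pydicom, Pillow, the intensity normalisation, and the resampling filter are assumptions made only for this example.

```python
import numpy as np
import pydicom
from PIL import Image

TARGET_WIDTH = 1600   # fixed width used for all images
JPEG_QUALITY = 80     # lower bound on the preserved JPEG quality

def dicom_to_jpeg(dcm_path: str, jpg_path: str) -> None:
    """Downscale one DICOM radiograph to a 1600 px wide grayscale JPEG (illustrative)."""
    pixels = pydicom.dcmread(dcm_path).pixel_array.astype(np.float32)
    # Normalise to the 8-bit range before handing the frame to Pillow.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels *= 255.0 / pixels.max()
    img = Image.fromarray(pixels.astype(np.uint8), mode="L")
    # Keep the original aspect ratio while fixing the width at 1600 px.
    height = round(img.height * TARGET_WIDTH / img.width)
    img = img.resize((TARGET_WIDTH, height), Image.LANCZOS)
    img.save(jpg_path, format="JPEG", quality=JPEG_QUALITY)
```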
The manual sorting process revealed some
interesting information about the dataset. It became
clear that not all radiographs were perfectly taken.
Some radiographs had problems such as
blurriness and out-of-focus regions.
3 PROPOSED MODEL
In order to create the database for our proposed
system, we have created a user interface where
experts can load any JPEG image. This software
includes tools for selecting specific regions of interest
in an image and annotating them accordingly. The
information given in this front end is then
stored in an XML file. As XML tags are generic, they
can be used in almost any system. We have built the
image annotation component around the IRMA coding
structure. After annotating an image, a user can view
that image in the front end together with its annotations. A
main focus of this system is searching for images in
our database. In that regard, two types of search tools have
been integrated. Text-based searching uses the tags
created for the annotations and implements a modified
version of the tf-idf method. Content-based search uses
Gabor filters to retrieve images from the database that are
similar to a given query image. We have also added an
exam mode for comparing annotations automatically; this
comparison is based on each annotated segment of an image.
Further elaborations on these are given below. Figure 1
illustrates the front end of this system.
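To make the text-based search concrete, the sketch below ranks stored annotation-tag strings against a query using plain tf-idf with cosine similarity. The system itself uses a modified version of tf-idf, so the vectorizer, tokenisation, and scoring shown here are only a simplified stand-in.

```python
# Plain tf-idf ranking over annotation tags -- a simplified stand-in for
# the modified tf-idf used by the system (the modification is not shown here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_tags(tag_documents, query, top_k=10):
    """tag_documents: one string of annotation tags per image."""
    vectorizer = TfidfVectorizer(lowercase=True)
    doc_matrix = vectorizer.fit_transform(tag_documents)       # images x terms
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()  # similarity per image
    order = scores.argsort()[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in order]

# Example: rank two annotated images against a text query.
hits = rank_by_tags(["skull lateral fracture", "femur anterior"], "skull fracture")
```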
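Similarly, the content-based search can be pictured as comparing Gabor-filter texture signatures. The sketch below builds a small Gabor filter bank and retrieves the closest stored image by Euclidean distance; the filter parameters, the feature summary, and the distance measure are assumptions for illustration, not the exact configuration of the system.

```python
import cv2
import numpy as np

def gabor_signature(gray_img, thetas=(0, 45, 90, 135), wavelengths=(8, 16)):
    """Mean/std of Gabor responses over a small filter bank (assumed parameters)."""
    feats = []
    for theta in thetas:
        for lambd in wavelengths:
            kernel = cv2.getGaborKernel(
                ksize=(31, 31), sigma=4.0, theta=np.deg2rad(theta),
                lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(gray_img.astype(np.float32), cv2.CV_32F, kernel)
            feats.extend([response.mean(), response.std()])
    return np.array(feats)

def most_similar(query_img, database):
    """database: list of (image_id, precomputed signature) pairs."""
    q = gabor_signature(query_img)
    return min(database, key=lambda item: np.linalg.norm(q - item[1]))[0]
```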
3.1 Expert Mode
Expert Mode consists of tools that support the annotation
of radiographs. Using this interface, expert
radiologists are able to annotate selected radiographs.
Expert Mode uses an elegant annotation system and a
set of selection tools to efficiently annotate the region
of interest. After the annotation is finished, the data is
stored as an XML structure. A help button is included
to guide a new user through the annotation
process. A magnification tool is also included so that the
expert can observe fine detail in the radiographs
and make annotations accordingly. Lastly, the expert
can backtrack and view previous annotations by
pressing the show annotation button.
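To illustrate the stored XML structure, the sketch below writes one hypothetical annotation record. The element names, the polygon encoding, and the sample IRMA code are illustrative assumptions; the paper only states that the selected region and its annotation are saved as XML.

```python
# Hypothetical shape of one annotation record; element names are illustrative,
# since the paper only states that region selections and labels are stored as XML.
import xml.etree.ElementTree as ET

annotation = ET.Element("annotation", image="case_0001.jpg")
region = ET.SubElement(annotation, "region", shape="polygon")
ET.SubElement(region, "points").text = "120,80 240,80 240,200 120,200"
ET.SubElement(region, "label").text = "skull, lateral view"
ET.SubElement(region, "irma_code").text = "1121-127-720-500"   # example IRMA code
ET.ElementTree(annotation).write("case_0001.xml", encoding="utf-8",
                                 xml_declaration=True)
```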