
Title: Adopting Semantic Information of Grayscale Radiographs for Image Classification and Retrieval

Authors: Obioma Pelka 1; Felix Nensa 2 and Christoph M. Friedrich 3

Affiliations: 1 University of Applied Sciences and Arts Dortmund and University of Duisburg-Essen, Germany ; 2 University Hospital Essen, Germany ; 3 University of Applied Sciences and Arts Dortmund, Germany

ISBN: 978-989-758-278-3

Keyword(s): Biomedical Imaging, Deep Learning, Keyword Generation, Machine Learning, Multi-modal Representation, Transfer Learning, Radiographs.

Abstract: As the number of digital medical images taken daily rapidly increases, manual annotation is impractical, time-consuming and prone to errors. Hence, there is a need for systems that automatically classify and annotate medical images. The aim of the presented work is to utilize Transfer Learning to generate image keywords, which serve as a text representation for medical image classification and retrieval tasks. Text preprocessing methods such as detection and removal of compound figure delimiters, stop-words and special characters, as well as word stemming, are applied before training the keyword generation model. All images are visually represented using Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) Show-and-Tell model is adopted for keyword generation. To improve model performance, a second training phase is initiated, in which parameters are fine-tuned using the pre-trained deep learning network Inception-ResNet-V2. For the image classification tasks, Random Forest models trained with Bag-of-Keypoints visual representations were adopted. Using the proposed approach, classification prediction accuracy was higher for all classification schemes on two distinct radiology image datasets.
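As a rough illustration of the text preprocessing step described in the abstract, the sketch below removes compound-figure delimiters, special characters and stop-words from a caption and applies a crude suffix stemmer. The delimiter pattern, stop-word list and stemming rule are illustrative assumptions, not the authors' exact resources; a real pipeline would typically use an established stemmer such as Porter's.

```python
import re

# Illustrative stop-word list and compound-figure delimiter pattern
# (assumptions, not the exact resources used in the paper).
STOP_WORDS = {"a", "an", "and", "the", "of", "in", "on", "with", "is", "are"}
FIGURE_DELIMITER = re.compile(r"\(\s*[a-z]\s*\)", re.IGNORECASE)  # e.g. "(a)", "(B)"


def simple_stem(word: str) -> str:
    """Very crude suffix stripping, standing in for a real stemmer (e.g. Porter)."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


def preprocess_caption(text: str) -> list:
    """Drop figure delimiters, special characters and stop-words, then stem tokens."""
    text = FIGURE_DELIMITER.sub(" ", text)    # remove "(a)", "(b)" markers
    text = re.sub(r"[^A-Za-z\s]", " ", text)  # remove special characters and digits
    tokens = [t.lower() for t in text.split()]
    return [simple_stem(t) for t in tokens if t not in STOP_WORDS]


print(preprocess_caption("(a) Chest radiographs showing nodules in the lungs."))
```

The cleaned token list would then be paired with the CNN visual representation when training the keyword generation model.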

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Pelka, O.; Nensa, F. and Friedrich, C. (2018). Adopting Semantic Information of Grayscale Radiographs for Image Classification and Retrieval. In Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: KALSIMIS, ISBN 978-989-758-278-3, pages 179-187. DOI: 10.5220/0006732301790187

@conference{kalsimis18,
author={Pelka, O. and Nensa, F. and Friedrich, C.},
title={Adopting Semantic Information of Grayscale Radiographs for Image Classification and Retrieval},
booktitle={Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: KALSIMIS},
year={2018},
pages={179-187},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006732301790187},
isbn={978-989-758-278-3},
}

TY - CONF

JO - Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: KALSIMIS
TI - Adopting Semantic Information of Grayscale Radiographs for Image Classification and Retrieval
SN - 978-989-758-278-3
AU - Pelka, O.
AU - Nensa, F.
AU - Friedrich, C.
PY - 2018
SP - 179
EP - 187
DO - 10.5220/0006732301790187
ER  -
