Title: Balancing Performance and Effort in Deep Learning via the Fusion of Real and Synthetic Cultural Heritage Photogrammetry Training Sets

Authors: Eugene Ch’ng 1,2; Pinyuan Feng 3; Hongtao Yao 3; Zihao Zeng 3; Danzhao Cheng 1 and Shengdan Cai 1

Affiliations: 1 Digital Heritage Centre, University of Nottingham Ningbo, China; 2 NVIDIA Joint-Lab on Mixed Reality, University of Nottingham Ningbo, China; 3 School of Computer Science, University of Nottingham Ningbo, China

Keyword(s): Digital Heritage, Deep Learning, Object Detection, Data Augmentation, Photogrammetry, Fusion Dataset.

Abstract: Cultural heritage presents both challenges and opportunities for the adoption of deep learning in 3D digitisation and digitalisation endeavours. While the unique identifying features of artefacts can contribute to training performance in deep learning algorithms, obtaining adequate datasets remains laborious: training requires both diverse imagery and a range of multi-facet views of each object. One solution, and perhaps an important step towards the broader applicability of deep learning in digital heritage, is the fusion of real and virtual datasets: the automated creation of diverse training sets covering multiple views of individual objects, over a diversified range of objects, facilitated by 3D models generated through close-range photogrammetry. The open question is the ratio of real to synthetic imagery at which an inflection point occurs and performance degrades. In this research, we attempt to reduce the need for manual labour by leveraging the flexibility of automated data generation from close-range photogrammetry models, with a view to future deep-learning-facilitated cultural heritage activities such as digital identification, sorting, asset management and categorisation.
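The fusion strategy the abstract describes — composing a training set from real photographs and photogrammetry-derived synthetic renders at a controlled ratio, then sweeping that ratio to look for the inflection point — can be sketched as follows. This is an illustrative outline only, not the authors' pipeline; the function `fuse_training_set`, the ratio sweep, and the file names are all hypothetical:

```python
import random

def fuse_training_set(real_images, synthetic_images, real_ratio, size, seed=0):
    """Compose a training set of `size` samples with the given real:synthetic ratio.

    Sampling is with replacement, so any ratio can be met even when one pool
    is small (synthetic renders are cheap; real photographs are scarce).
    """
    rng = random.Random(seed)
    n_real = round(size * real_ratio)          # real samples in the fused set
    n_synth = size - n_real                    # remainder filled synthetically
    fused = ([rng.choice(real_images) for _ in range(n_real)] +
             [rng.choice(synthetic_images) for _ in range(n_synth)])
    rng.shuffle(fused)                         # interleave real and synthetic
    return fused

# Hypothetical sweep over fusion ratios to locate the performance inflection point:
real = [f"real_{i}.jpg" for i in range(20)]        # scarce real photographs
synth = [f"render_{i}.png" for i in range(500)]    # abundant photogrammetry renders
for ratio in (1.0, 0.75, 0.5, 0.25, 0.0):
    batch = fuse_training_set(real, synth, ratio, size=200)
    # each `batch` would be fed to the object detector's training loop here
```

In practice each fused set would be used to train and evaluate the same detection model, so that performance can be compared across ratios under otherwise identical conditions.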

CC BY-NC-ND 4.0


Paper citation in several formats:
Ch’ng, E.; Feng, P.; Yao, H.; Zeng, Z.; Cheng, D. and Cai, S. (2021). Balancing Performance and Effort in Deep Learning via the Fusion of Real and Synthetic Cultural Heritage Photogrammetry Training Sets. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 1: ARTIDIGH; ISBN 978-989-758-484-8; ISSN 2184-433X, SciTePress, pages 611-621. DOI: 10.5220/0010381206110621

@conference{artidigh21,
author={Eugene Ch’ng and Pinyuan Feng and Hongtao Yao and Zihao Zeng and Danzhao Cheng and Shengdan Cai},
title={Balancing Performance and Effort in Deep Learning via the Fusion of Real and Synthetic Cultural Heritage Photogrammetry Training Sets},
booktitle={Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 1: ARTIDIGH},
year={2021},
pages={611-621},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010381206110621},
isbn={978-989-758-484-8},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 1: ARTIDIGH
TI - Balancing Performance and Effort in Deep Learning via the Fusion of Real and Synthetic Cultural Heritage Photogrammetry Training Sets
SN - 978-989-758-484-8
IS - 2184-433X
AU - Ch’ng, E.
AU - Feng, P.
AU - Yao, H.
AU - Zeng, Z.
AU - Cheng, D.
AU - Cai, S.
PY - 2021
SP - 611
EP - 621
DO - 10.5220/0010381206110621
PB - SciTePress
ER -