
of middle-aged adults (Sode et al., 2011), whereas our dataset included only children under ten years old. Because of this, our dataset contains many examples of extreme motion artefact, which the QMEs calculated by Sode et al. may not accurately characterise. Nonetheless, we would still like to highlight the potential clinical applications of our algorithm. Not only would clinicians be able to immediately repeat a scan in which motion artefact has been automatically detected, but scans where artefact may be unavoidable, such as those of young children, could also be corrected and included in a study.
6 FUTURE WORK
Our preliminary results have shown that our algorithmic approach to detecting motion artefact in HRpQCT scans can distinguish between scans with high and low levels of motion artefact. However, as described above, there are limitations: our current model cannot accurately distinguish the amounts of motion contained in scans graded one to four. In further research we plan to increase the number of training samples and refine our U-Net model, to improve our sinogram segmentation and make it more robust to noise. We predict that this will improve the accuracy with which we detect jumps in the sinogram data, and therefore increase the accuracy of our numerical motion artefact measures. We plan to continue to iterate on this approach, to ensure that our quantitative results accurately reflect the amount of distortion caused by motion in the image and can be used to inform the accuracy of HRpQCT parameters derived from artefact-affected scans.
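As a concrete illustration of the jump-detection step described above, the following minimal sketch flags projection angles at which the segmented object shifts abruptly across a binary sinogram mask. The function name, the mask layout (projections by detector bins), and the threshold value are illustrative assumptions for exposition, not our exact implementation.

```python
import numpy as np

def detect_sinogram_jumps(mask, threshold=3.0):
    # mask: 2-D binary array (n_projections x n_detector_bins), e.g. a
    # U-Net sinogram segmentation (layout assumed for this sketch).
    # threshold: centroid shift, in detector bins, treated as a jump.
    bins = np.arange(mask.shape[1])
    area = np.maximum(mask.sum(axis=1), 1)  # avoid division by zero
    # Centroid of the segmented cross-section in each projection row.
    centroids = (mask * bins).sum(axis=1) / area
    # Displacement of the centroid between consecutive projections.
    shifts = np.abs(np.diff(centroids))
    # Indices of projections where the displacement exceeds threshold.
    return np.flatnonzero(shifts > threshold) + 1

# Synthetic example: a steady limb that shifts 60 detector bins at
# projection 400, mimicking a single abrupt subject movement.
mask = np.zeros((720, 1536), dtype=np.uint8)
mask[:400, 700:900] = 1
mask[400:, 760:960] = 1
print(detect_sinogram_jumps(mask))  # -> [400]
```

In practice the threshold would need to be calibrated against graded scans, since small physiological movements also perturb the centroid.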
ACKNOWLEDGEMENTS
TC is supported through a doctoral studentship at
the University of Southampton funded jointly by
the MRC Lifecourse Epidemiology Centre and the
Institute for Life Sciences. This work was sup-
ported by the MRC [MC_PC_21003; MC_PC_21001] and the National Institute for Health Research (NIHR) Southampton Biomedical Research Centre, University of Southampton, and University Hospital Southampton NHS Foundation Trust, UK.
REFERENCES
Benedikt, S., Horling, L., Stock, K., Degenhart, G., Pal-
lua, J., Schmidle, G., and Arora, R. (2023). The im-
pact of motion induced artifacts in the evaluation of
HR-pQCT scans of the scaphoid bone: an assessment
of inter- and intraobserver variability and quantitative
parameters. Quant. Imaging Med. Surg., 13(3):1336–
1349.
Harvey, N. C., Javaid, K., Bishop, N., Kennedy, S., Papa-
georghiou, A. T., Fraser, R., Gandhi, S. V., Schoen-
makers, I., Prentice, A., and Cooper, C. (2012).
MAVIDOS maternal vitamin D osteoporosis study:
study protocol for a randomized controlled trial. The MAVIDOS Study Group. Trials, 13:13.
Inskip, H. M., Godfrey, K. M., Robinson, S. M., Law, C. M.,
Barker, D. J. P., Cooper, C., and SWS Study Group
(2006). Cohort profile: The Southampton Women's Survey. Int. J. Epidemiol., 35(1):42–48.
Laib, A. (2023). SOP quality grading.
Liu, Y., Wen, T., Sun, W., Liu, Z., Song, X., He, X., Zhang,
S., and Wu, Z. (2022). Graph-based motion artifacts
detection method from head computed tomography
images. Sensors, 22(15):5666.
Pauchard, Y., Ayres, F. J., and Boyd, S. K. (2011). Auto-
mated quantification of three-dimensional subject mo-
tion to monitor image quality in high-resolution pe-
ripheral quantitative computed tomography. Phys. Med. Biol., 56(20):6523.
Pauchard, Y., Liphardt, A.-M., Macdonald, H. M., Hanley,
D. A., and Boyd, S. K. (2012). Quality control for
bone quality parameters affected by subject motion in
high-resolution peripheral quantitative computed to-
mography. Bone, 50(6):1304–1310.
Sode, M., Burghardt, A. J., Pialat, J.-B., Link, T. M., and
Majumdar, S. (2011). Quantitative characterization of
subject motion in HR-pQCT images of the distal radius
and tibia. Bone, 48(6):1291–1297.
Walle, M., Eggemann, D., Atkins, P. R., Kendall, J. J.,
Stock, K., Müller, R., and Collins, C. J. (2023). Mo-
tion grading of high-resolution quantitative computed
tomography supported by deep convolutional neural
networks. Bone, 166:116607.
Yin, X.-X., Sun, L., Fu, Y., Lu, R., and Zhang, Y. (2022).
U-Net-based medical image segmentation. J. Healthc.
Eng., 2022:4189781.