IMAGINE Dataset: Digital Camera Identification Image Benchmarking Dataset

Jarosław Bernacki (https://orcid.org/0000-0002-4488-3488) and Rafał Scherer (https://orcid.org/0000-0001-9592-262X)
Department of Intelligent Computer Systems, Częstochowa University of Technology,
al. Armii Krajowej 36, 42-200 Częstochowa, Poland
Keywords:
Digital Camera Identification, Sensor Identification, Digital Forensics, Privacy, Security, Machine Learning,
Deep Models, Convolutional Neural Networks.
Abstract:
We present the IMAGINE dataset. The proposed dataset may be used for benchmarking digital camera identification algorithms, which is an important issue in the field of digital forensics. So far, the most common image dataset seems to be the Dresden Image Database, but it contains images from relatively old devices equipped with charge-coupled device (CCD) imaging sensors. Our dataset contains a number of images coming from modern devices, including mobile devices, compact cameras, and digital single-lens reflex/mirrorless (DSLR/DSLM) cameras with Complementary Metal-Oxide-Semiconductor (CMOS) imaging sensors. An extensive experimental evaluation performed on a set of modern camera identification methods and algorithms confirmed the reliability of the IMAGINE dataset.
1 INTRODUCTION
Digital camera identification based on images has be-
come a very popular task in digital forensics in recent
years. In digital forensics, imaging sensor identifi-
cation is crucial because it can provide valuable in-
formation about the origin and authenticity of digital
images. Knowing the specific imaging sensor used
to capture an image can help forensic analysts to de-
termine whether an image has been altered or manip-
ulated, and to establish the chain of custody of the
digital evidence. Therefore, accurate imaging sensor identification is essential for maintaining the integrity of digital forensic investigations. Recognizing a camera based on images is known as “digital fingerprinting” (Goljan, 2008) or serves as a proof of presence. The most common methods for digital camera identification are based on Photo-Response Non-Uniformity (PRNU). Each camera has a unique PRNU pattern, which can be used to identify the source camera that captured an image. The PRNU pattern can be estimated from a large number of images captured by a camera and used as a reference for camera identification. PRNU-based methods are widely used for camera identification due to their robustness against post-capture processing and compression.
The most recent family of methods is based on deep learning, usually using Convolutional Neural Networks (CNNs) to extract features from an image and compare them with the features of known cameras (Bondi et al., 2017; Ding et al., 2019; Kirchner and Johnson, 2020; Li et al., 2018; Lukáš et al., 2006; Mandelli et al., 2020; Yao et al., 2018).
Digital camera identification may be considered in two variants: individual source camera identification (ISCI) and source camera model identification (SCMI). ISCI distinguishes a particular camera (a specific device) among other cameras of the same model. SCMI distinguishes a particular camera model among different models, but does not distinguish a particular copy of a camera among other cameras of the same model.
The basis for effective benchmarking of camera identification algorithms is image datasets. Such datasets should be extensive and provide a large number of images coming from multiple modern devices with ground-truth labeling, i.e., corresponding camera information such as the make and model of the camera. One of the most common image datasets is the Dresden Image Database (Gloe and Böhme, 2010). It was presented in 2010 and offers a vast number of images from different devices. However, this dataset is not up to date, which is probably its only disadvantage: it presents images coming from obsolete cameras equipped with CCD (charge-coupled device) imaging sensors. Nowadays, cameras and mobile devices are equipped with CMOS (Complementary Metal-Oxide-Semiconductor) imaging sensors.
In this paper we propose a new dataset called IMAGINE (IMAGIng seNsor idEntification) that may be used for testing digital camera identification algorithms. The dataset is suitable for statistical algorithms, machine learning, and deep models, including convolutional neural networks (CNNs). It comprises a number of JPEG images acquired by modern devices, including mobile devices (smartphones, tablets), drones, compact cameras, and digital single-lens reflex/mirrorless (DSLR/DSLM) cameras, all equipped with CMOS imaging sensors. Therefore, our dataset is well suited to a reliable examination of digital camera identification algorithms and methods. The dataset is available on the following website:
https://kisi.pcz.pl/imagine/
In summary, our main contributions are:
- We propose the IMAGINE dataset for benchmarking individual source camera identification algorithms and methods;
- We benchmark the proposed dataset with a set of modern individual source camera identification methods and show that the IMAGINE dataset allows for reliable testing of such methods; moreover, we experimentally show that the proposed dataset may speed up the training of convolutional neural networks in the ISCI aspect.
The paper is organized as follows. In the next section we recall existing image datasets. In Section 3 we describe the proposed IMAGINE dataset in detail. In Section 4 we recall state-of-the-art algorithms for individual source camera identification. In Section 5 we show the classification results of the state-of-the-art algorithms on the proposed dataset. The final section concludes this work.
2 PREVIOUS WORK
One of the most common image databases used in many papers is the Dresden Image Database (Gloe and Böhme, 2010). It contains thousands of images coming from (among others) the following cameras: Agfa, Canon, Casio, Kodak, Nikon, Olympus, Praktica, Rollei, Sony and Samsung. Moreover, the dataset includes images of the same scene shot by different copies of the same camera model, which is especially desirable for a reliable algorithm evaluation. A drawback of this dataset is that it was introduced in 2010 and covers mostly devices with charge-coupled device (CCD) imaging sensors, which are nowadays replaced by modern Complementary Metal-Oxide-Semiconductor (CMOS) sensors.
VISION (Shullani et al., 2017) is an image dataset containing images from 35 modern smartphones of the following manufacturers: Apple (iPad/iPhone), Huawei, Lenovo, LG, Microsoft, OnePlus, Samsung, Sony, Wiko and Xiaomi. VISION was published in 2017.
A set of High Dynamic Range (HDR) images called the UNIFI dataset (Shaya et al., 2018) was published in 2018. This dataset includes smartphone images of the following brands: Asus, Huawei, iPad/iPhone, OnePlus, Xiaomi and Samsung. It contains a diverse range of scenes captured with multiple exposure settings, and provides high-quality HDR content suitable for computer vision and image processing research.
The MICHE-I dataset is an iris database that was introduced in 2015 and consists of images taken by three mobile devices: Apple iPhone 5, Samsung Galaxy S4 and Samsung Galaxy Tab 2 (MICHE, 2019; De Marsico et al., 2015). The Biosec Baseline Iris Subcorpus (Fiérrez-Aguilar et al., 2007) and the IITD contact lens iris dataset (Kohli et al., 2013) also concern iris images. IITD contains 6570 images from the Image Analysis and Biometrics Lab of the IIITD. The Notre Dame Iris Cosmetic Contact Lenses dataset is provided by the Computer Vision Research Laboratory (CVRL) (Jr. et al., 2013) and consists of a few thousand images. Although such datasets are dedicated to iris recognition, they may also be used for digital camera recognition.
The Forchheim Image Database (Hadwiger and Riess, 2020) was presented in 2020 and consists of images coming from modern smartphones. Although it contains a large number of images (more than 23,000), its main weakness seems to be the lack of modern professional cameras, such as digital single-lens reflex and mirrorless cameras.
Some research is conducted with the popular image-sharing website Flickr (Flickr, 2023). Note, however, that many images published on Flickr are manipulated with image processors such as Adobe Lightroom, DxO PhotoLab, Luminar and many others. Therefore, the analysis may not be effective with such images.
3 IMAGINE DATASET DESCRIPTION
Utilized Devices. The proposed dataset contains images from 55 imaging devices of different types. In particular, it includes: 13 mobile devices (12 smartphones and one tablet), two drones, four compact cameras, one sport camera, 17 digital single-lens reflex (DSLR) cameras, and 18 digital single-lens mirrorless (DSLM) cameras. The total number of images is 2489.
The devices are equipped with imaging sensors of various physical dimensions. The imaging sensor dimensions of the used devices are presented in Table 1. The full list of device models is given in Table 2.
Table 1: Sensor dimensions of the used devices. Dim stands for sensor dimensions (in millimeters), Diag denotes the sensor's diagonal (in millimeters), i.e., the square root of the sum of the squared dimensions.

Sensor    Dim            Diag
FF        36.0 × 24.0    43.27
FX        35.9 × 24.0    43.18
FE        35.6 × 23.8    42.82
APS-C₁    22.3 × 14.8    26.76
APS-C₂    23.5 × 15.7    28.26
APS-C₃    23.6 × 15.8    28.40
1”        13.2 × 8.80    15.86
1/2.55”   6.17 × 4.55    7.67
1/2.3”    6.16 × 4.62    7.70
1/3”      4.80 × 3.60    6.00
1/3.1”    4.40 × 3.30    5.50
1/4.0”    3.20 × 2.40    4.00
Note that some devices are used as two copies of the same model (Tab. 2). It is also worth mentioning that the Samsung Galaxy A40 is equipped with two lenses, standard (wide) and ultrawide; in our dataset we use only standard (wide) images.
Images. The dataset contains JPG images coming directly from the cameras; they are not edited in any software in any way. All cameras were set to their default shooting mode with default white balance.
Image Download Script. The images of the IMAGINE dataset may be easily downloaded using the provided script, written in Bash. The script is available on the dataset's webpage and works on Microsoft Windows and Linux operating systems. Detailed instructions on how to run the script on Linux or Microsoft Windows can be found on the dataset's web page. Below we present a part of the download script.
mkdir Canon_EOS_R5
for ((n=1;n<=42;n++))
do
  wget -O Canon_EOS_R5/$n.jpg \
    https://kisi.pcz.pl/imagine/img/Canon_EOS_R5/$n.jpg
done
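
Once downloaded, the per-device directory layout makes ground-truth labeling straightforward: the directory name identifies the camera. Below is a minimal Python loading sketch, assuming the <device_name>/<n>.jpg layout produced by the script above (the function name and the local root directory "IMAGINE" are our assumptions, not part of the dataset):

from pathlib import Path

def load_image_index(root="IMAGINE"):
    # Index dataset images, using each subdirectory name as the camera label.
    # Assumes the <device_name>/<n>.jpg layout of the download script.
    root = Path(root)
    samples, labels = [], {}
    for device_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        label_id = labels.setdefault(device_dir.name, len(labels))
        for img_path in sorted(device_dir.glob("*.jpg")):
            samples.append((img_path, label_id))
    return samples, labels

samples, labels = load_image_index()
print(len(samples), "images from", len(labels), "devices")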
4 ALGORITHMS FOR INDIVIDUAL SOURCE CAMERA IDENTIFICATION
In this section we briefly recall the state-of-the-art algorithms for individual source camera identification that we benchmark with the IMAGINE dataset. We test the algorithms presented by Lukáš et al. (Lukáš et al., 2006), Bondi et al. (Bondi et al., 2017), Tuama et al. (Tuama et al., 2016), Mandelli et al. (Mandelli et al., 2020) and Kirchner & Johnson (Kirchner and Johnson, 2020). Due to space limitations, we recall only the algorithm presented by Lukáš et al. and a shallow concept of CNN-based camera identification based on the papers from Bondi et al. to Kirchner & Johnson.
Lukáš et al.'s Algorithm. The basis of Lukáš et al.'s algorithm (Lukáš et al., 2006) is the calculation of the noise residual N, defined as N = I − F(I), where F is a denoising filter and N denotes the noise residual of one image I. This procedure should be repeated for a certain number of images from a camera (it is suggested to use at least 45 images). The camera's noise residual (its reference fingerprint) is eventually calculated as the average of the computed noise residuals, i.e., K = (1/n) · Σᵢ Nᵢ for n images. Images are processed in their original resolution.
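
A minimal sketch of this procedure is given below, with a Gaussian filter standing in as the denoiser (the original algorithm uses a wavelet-based filter; the filter choice and function names here are our assumptions). Identification then attributes a test image to the camera whose fingerprint correlates best with the test residual:

import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    # N = I - F(I); gaussian_filter is a simple stand-in for the
    # wavelet-based denoising filter of the original algorithm.
    image = image.astype(np.float64)
    return image - gaussian_filter(image, sigma)

def camera_fingerprint(images):
    # Average the residuals of many same-resolution images (>= 45 suggested).
    return np.mean([noise_residual(img) for img in images], axis=0)

def correlation(a, b):
    # Normalized correlation between a test residual and a fingerprint.
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Attribution: pick the camera whose fingerprint matches the test image best.
# fingerprints = {name: camera_fingerprint(imgs) for name, imgs in refs.items()}
# residual = noise_residual(test_image)
# best = max(fingerprints, key=lambda n: correlation(residual, fingerprints[n]))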
Convolutional Neural Networks. The general idea of convolutional neural network-based camera identification in the cited papers relies on applying 3-4 convolutional layers, usually with 32-128 filters of size 4 × 4 or 5 × 5, followed by max-pooling layers with kernel size 2 and stride 2. ReLU is used for activation; fully connected layers are utilized for classification. Other parameters may be found, for instance, in Bondi's paper (Bondi et al., 2017).
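
A minimal PyTorch sketch of such a shallow architecture follows; the exact layer counts, filter sizes and input patch size vary between the cited papers, so the hyperparameters below are illustrative assumptions only, not those of any particular cited network:

import torch
import torch.nn as nn

class ShallowCameraCNN(nn.Module):
    # 3 conv layers (32-128 filters of size 4x4/5x5), 2x2 max-pooling with
    # stride 2, ReLU activations, and fully connected layers for classification.
    def __init__(self, num_cameras, patch_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        with torch.no_grad():  # infer the flattened feature size from a dummy patch
            n_feat = self.features(torch.zeros(1, 3, patch_size, patch_size)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_feat, 256), nn.ReLU(),
            nn.Linear(256, num_cameras),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: class scores for a batch of eight 64x64 patches, 55 camera classes.
# logits = ShallowCameraCNN(num_cameras=55)(torch.randn(8, 3, 64, 64))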
5 EXPERIMENTAL RESULTS – BENCHMARKING THE IMAGINE DATASET
In this section we provide classification experiments with the state-of-the-art algorithms for individual source camera identification (ISCI, recalled in the previous section), trained with images from the IMAGINE dataset. We experimentally check the classification accuracy of the following algorithms: Lukáš, Bondi, Tuama, Mandelli and Kirchner & Johnson. We run all listed algorithms with their default (original) parameters and their own classification procedures.
Table 2: Utilized devices. The * symbol denotes that the sensor size is not officially given by the manufacturer; however, one may assume that it is similar to 1/4.0”. Resolution stands for image resolution in pixels.

Symbol  Device name                   Device type   Released  Resolution   Sensor   # of devices
A01     Acer Liquid Jade S            smartphone    2014      4160 × 3120  *        1
A02     Apple iPhone 5S               smartphone    2013      3264 × 2448  1/3”     1
C01     Canon EOS 1D X Mark II        DSLR          2016      5472 × 3648  FF       1
C02     Canon EOS 5D Mark IV          DSLR          2016      6720 × 4480  FF       1
C03     Canon EOS 6D Mark II          DSLR          2017      6240 × 4160  FF       1
C04     Canon EOS 750D                DSLR          2015      6000 × 4000  APS-C₁   2
C05     Canon EOS 760D                DSLR          2015      6000 × 4000  APS-C₁   2
C06     Canon EOS M3                  DSLM          2015      6000 × 4000  APS-C₁   2
C07     Canon EOS M5                  DSLM          2016      6000 × 4000  APS-C₁   2
C08     Canon EOS M50                 DSLM          2018      6000 × 4000  APS-C₁   2
C09     Canon EOS 90D                 DSLR          2019      6960 × 4640  APS-C₁   1
C10     Canon EOS M100                DSLM          2017      6000 × 4000  APS-C₁   1
C11     Canon EOS M200                DSLM          2019      6000 × 4000  APS-C₁   1
C12     Canon EOS R                   DSLM          2018      6720 × 4480  FF       1
C13     Canon EOS R5                  DSLM          2020      8192 × 5464  FF       1
C14     Canon EOS R6                  DSLM          2020      5472 × 3648  FF       1
C15     Canon EOS RP                  DSLM          2019      6240 × 4160  FF       1
C16     Canon PowerShot G9 X Mark II  compact       2017      5472 × 3648  1”       2
C17     Canon PowerShot SX270 HS      compact       2013      4000 × 3000  1/2.3”   1
D01     DJI Spark                     drone         2017      3968 × 2976  1/2.3”   1
F01     Fujifilm X-T200               DSLM          2020      6000 × 4000  APS-C₂   1
L01     Lenovo K5 Plus                smartphone    2016      4096 × 2304  *        1
L02     LG K10                        smartphone    2016      4160 × 2336  *        1
M01     Microsoft Lumia 640           smartphone    2015      3264 × 1840  1/4.0”   1
N01     Nikon D5                      DSLR          2016      5568 × 3712  FX       1
N02     Nikon D6                      DSLR          2020      5568 × 3712  FX       1
N03     Nikon D500                    DSLR          2016      5568 × 3712  APS-C₃   1
N04     Nikon D610                    DSLR          2013      6016 × 4016  FX       1
N05     Nikon D750                    DSLR          2014      6016 × 4016  FX       2
N06     Nikon D780                    DSLR          2020      6048 × 4024  FX       1
N07     Nikon D810                    DSLR          2014      7360 × 4912  FX       1
N08     Nikon D850                    DSLR          2017      8256 × 5504  FX       1
N09     Nikon D3100                   DSLR          2010      4608 × 3072  APS-C₃   2
N10     Nikon D5600                   DSLR          2016      6000 × 4000  APS-C₃   2
N11     Nikon D7200                   DSLR          2015      6000 × 4000  APS-C₃   2
N12     Nikon P100                    compact       2010      3648 × 2736  1/2.3”   1
N13     Nikon Z6                      DSLM          2018      6048 × 4024  FX       2
N14     Nikon Z6 II                   DSLM          2020      6048 × 4024  FX       1
N15     Nikon Z7                      DSLM          2018      8256 × 5504  FX       1
N16     Nikon Z7 II                   DSLM          2020      8256 × 5504  FX       1
N17     Nokia 2.2                     smartphone    2019      4160 × 3120  1/3.1”   1
S01     Samsung Galaxy A40            smartphone    2019      4608 × 3456  1/2.8”   1
S02     Samsung Galaxy Ace 3          smartphone    2013      2560 × 1920  *        1
S03     Samsung Galaxy S7             smartphone    2016      4032 × 3024  1/2.55”  2
S04     Samsung Galaxy SIII mini      smartphone    2014      2560 × 1920  *        1
S05     Samsung Galaxy Tab A 10.1     tablet        2016      3264 × 1836  *        1
S06     Samsung Galaxy Trend 2 Lite   smartphone    2015      2048 × 1232  *        1
S07     Samsung Omnia II              smartphone    2009      2560 × 1920  *        1
S08     Sony A1                       DSLM          2021      8640 × 5760  FE       1
S09     Sony A7R III                  DSLM          2017      7952 × 5304  FE       1
S10     Sony A7S                      DSLM          2014      4240 × 2832  FE       1
S11     Sony A9                       DSLM          2017      6000 × 4000  FE       1
S12     Sony ActionCam AS200V         sport camera  2015      3104 × 1744  1/2.3”   1
S13     Sony RX100 VI                 compact       2018      5472 × 3648  1”       1
Y01     Yuneec Breeze 4K              drone         2015      4160 × 3120  1/3”     1
We also compare the proposed dataset with the Dresden Image Database (Gloe and Böhme, 2010), well known in the literature (in the remainder of the article we refer to it shortly as Dresden). For evaluation, we use the accuracy (ACC) measure, defined as

ACC = (TP + TN) / (TP + TN + FP + FN)

where TP/TN stands for “true positive/true negative” and FP/FN stands for “false positive/false negative”. TP denotes the number of instances correctly classified to a specific class; TN are cases that are correctly rejected. FP denotes cases incorrectly classified to the specific class; FN are instances incorrectly rejected.
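
As a simple illustration, the measure can be computed directly from the confusion-matrix counts (the function name and the example counts below are ours, for illustration only):

def accuracy(tp, tn, fp, fn):
    # ACC = (TP + TN) / (TP + TN + FP + FN)
    return (tp + tn) / (tp + tn + fp + fn)

# Example: 95 correct acceptances and 890 correct rejections in 1000 trials.
print(accuracy(95, 890, 8, 7))  # -> 0.985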
Due to their large dimensions, we skip the confusion matrices of the considered methods. The summary of the average accuracy of each algorithm is presented in Tab. 3.
Table 3: Identification accuracy of the tested algorithms.

Algorithm             Accuracy [%]
Lukáš                 98.0
Bondi                 98.0
Tuama                 97.0
Mandelli              98.0
Kirchner & Johnson    93.0
The results clearly indicate very high identification accuracy on the proposed image dataset. Mandelli et al.'s method obtained identification accuracy at the level of 98.0%. The other methods achieved similar results of 97.0-98.0% accuracy. Only the Kirchner & Johnson algorithm achieved a lower 93.0%. This confirms the usefulness of the proposed IMAGINE dataset.
In Figures 1-2 we present the evaluation results on the IMAGINE dataset. Fig. 1 presents the training accuracy of the considered CNNs over 50 epochs. The results indicate that all CNNs achieve comparably high training accuracy on the IMAGINE dataset. We observe slightly lower results in the case of Kirchner & Johnson's CNN; however, all networks exceed 80.0% training accuracy after 30 epochs.

Figure 1: Comparison of training accuracy of selected CNNs on the IMAGINE dataset.

In Fig. 2 we evaluate the training accuracy of Mandelli et al.'s CNN on both the IMAGINE dataset and the Dresden Image Database. The analysis shows that the proposed dataset achieves higher training accuracy with a smaller number of training epochs. Training the CNN for about 20 epochs with the IMAGINE dataset provides accuracy exceeding 80.0%, while the Dresden set obtains about 70.0%; training for at least 30 epochs achieves accuracy of about 95.0% for the IMAGINE dataset, whereas the Dresden set requires more than 40 epochs for such a result. A similar trend is observed for the other CNNs presented by Bondi, Tuama and Kirchner & Johnson (we skip the corresponding figures for clarity). This indicates that the IMAGINE dataset may enable faster model training than the Dresden Image Database.

Figure 2: Comparison of training accuracy of Mandelli et al.'s CNN on the IMAGINE dataset and the Dresden Image Database. Results for the CNNs by Bondi, Tuama and Kirchner & Johnson are similar.
We also compare the identification accuracy of Lukáš et al.'s algorithm on the IMAGINE and Dresden datasets. Similarly to the CNNs, the Lukáš algorithm obtains slightly higher identification accuracy on the proposed dataset: 98.0% on IMAGINE versus 96.0% on the Dresden set.
To sum up, the experiments demonstrated that modern state-of-the-art algorithms in the ISCI aspect achieve very high identification accuracy on the proposed dataset. The results also indicated that the considered CNNs may be trained with fewer training epochs on the IMAGINE dataset than on the Dresden Image Database.
6 CONCLUSION
We have proposed the IMAGINE dataset for benchmarking digital camera identification algorithms. Our dataset contains a number of images coming from modern CMOS-based devices. The dataset may be used for testing digital camera identification algorithms using different methodologies, including statistical methods, machine learning, and deep models with convolutional neural networks (CNNs). We have evaluated our dataset on a set of modern state-of-the-art algorithms for individual source camera identification. The results confirmed the reliability of the IMAGINE dataset.
ACKNOWLEDGEMENTS
The project was financed under the program of the Polish Minister of Science and Higher Education under the name “Regional Initiative of Excellence” in the years 2019-2022, project number 020/RID/2018/19, with the financing amount of 12,000,000.00 PLN.
The authors would like to thank the Editorial Office of the Optyczne.pl (Optyczne, 2023) website for sharing part of the images for the proposed dataset.
REFERENCES
Bondi, L., Baroffio, L., Güera, D., Bestagini, P., Delp, E. J., and Tubaro, S. (2017). First steps toward camera model identification with convolutional neural networks. IEEE Signal Process. Lett., 24(3):259–263.

De Marsico, M., Nappi, M., Riccio, D., and Wechsler, H. (2015). Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognition Letters, 57:17–23.

Ding, X., Chen, Y., Tang, Z., and Huang, Y. (2019). Camera identification based on domain knowledge-driven deep multi-task learning. IEEE Access, 7:25878–25890.

Fiérrez-Aguilar, J., Ortega-Garcia, J., Toledano, D. T., and Gonzalez-Rodriguez, J. (2007). Biosec baseline corpus: A multimodal biometric database. Pattern Recognition, 40(4):1389–1392.

Flickr (2023). Flickr, https://www.flickr.com/. Online; accessed 5 April 2023.

Gloe, T. and Böhme, R. (2010). The ‘Dresden Image Database’ for benchmarking digital image forensics. In Proceedings of the 25th Symposium On Applied Computing (ACM SAC 2010), volume 2, pages 1585–1591.

Goljan, M. (2008). Digital camera identification from images - estimating false acceptance probability. In Digital Watermarking, 7th International Workshop, IWDW 2008, pages 454–468.

Hadwiger, B. and Riess, C. (2020). The Forchheim image database for camera identification in the wild. In Bimbo, A. D., Cucchiara, R., Sclaroff, S., Farinella, G. M., Mei, T., Bertini, M., Escalante, H. J., and Vezzani, R., editors, Pattern Recognition. ICPR International Workshops and Challenges - Virtual Event, January 10-15, 2021, Proceedings, Part VI, volume 12666 of Lecture Notes in Computer Science, pages 500–515. Springer.

Jr., J. S. D., Bowyer, K. W., and Flynn, P. J. (2013). Variation in accuracy of textured contact lens detection based on sensor and lens pattern. In IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems, BTAS 2013, Arlington, VA, USA, September 29 - October 2, 2013, pages 1–7.

Kirchner, M. and Johnson, C. (2020). SPN-CNN: Boosting sensor-based source camera attribution with deep learning. CoRR, abs/2002.02927.

Kohli, N., Yadav, D., Vatsa, M., and Singh, R. (2013). Revisiting iris recognition with color cosmetic contact lenses. In International Conference on Biometrics, ICB 2013, 4-7 June, 2013, Madrid, Spain, pages 1–7.

Li, R., Li, C., and Guan, Y. (2018). Inference of a compact representation of sensor fingerprint for source camera identification. Pattern Recognition, 74:556–567.

Lukáš, J., Fridrich, J. J., and Goljan, M. (2006). Digital camera identification from sensor pattern noise. IEEE Trans. Information Forensics and Security, 1(2):205–214.

Mandelli, S., Cozzolino, D., Bestagini, P., Verdoliva, L., and Tubaro, S. (2020). CNN-based fast source device identification. IEEE Signal Process. Lett., 27:1285–1289.

MICHE (2019). MICHE database, http://biplab.unisa.it/miche/database/. Online; accessed 1 December 2019.

Optyczne (2023). Optyczne.pl, https://www.optyczne.pl/. Online; accessed 5 April 2023.

Shaya, O. A., Yang, P., Ni, R., Zhao, Y., and Piva, A. (2018). A new dataset for source identification of high dynamic range images. Sensors, 18(11):3801.

Shullani, D., Fontani, M., Iuliani, M., Shaya, O. A., and Piva, A. (2017). VISION: a video and image dataset for source identification. EURASIP J. Information Security, 2017:15.

Tuama, A., Comby, F., and Chaumont, M. (2016). Camera model identification with the use of deep convolutional neural networks. In IEEE International Workshop on Information Forensics and Security, WIFS 2016, Abu Dhabi, United Arab Emirates, December 4-7, 2016, pages 1–6. IEEE.

Yao, H., Qiao, T., Xu, M., and Zheng, N. (2018). Robust multi-classifier for camera model identification based on convolution neural network. IEEE Access, 6:24973–24982.