Qatar, February, 2020). Deep learning for RF-based
drone detection and identification: A multi-channel
1-D convolutional neural networks approach. In IEEE
International Conference on Informatics, IoT, and En-
abling Technologies, pages 112–117.
Arifin, F., Robbani, H., Annisa, T., and Ma’arof, N. (2019).
Variations in the number of layers and the number of
neurons in artificial neural networks: Case study of
pattern recognition. In Journal of Physics: Confer-
ence Series, volume 1413, page 012016.
Azari, M. M., Sallouha, H., Chiumento, A., Rajendran, S.,
Vinogradov, E., and Pollin, S. (2018). Key technolo-
gies and system trade-offs for detection and localiza-
tion of amateur drones. IEEE Communications Mag-
azine, 56(1):51–57.
Bernardini, A., Mangiatordi, F., Pallotti, E., and Capodi-
ferro, L. (2017). Drone detection by acoustic signature
identification. Electronic Imaging, 2017(10):60–64.
Berry, M. J. and Linoff, G. S. (2004). Data mining tech-
niques: for marketing, sales, and customer relation-
ship management. John Wiley & Sons.
Bisio, I., Garibotto, C., Lavagetto, F., Sciarrone, A., and
Zappatore, S. (2018). Blind detection: Advanced
techniques for WiFi-based drone surveillance. IEEE
Transactions on Vehicular Technology, 68(1):938–
946.
Blum, A. (1992). Neural networks in C++: an object-
oriented framework for building connectionist sys-
tems. John Wiley & Sons, Inc.
Boger, Z. and Guterman, H. (Orlando, FL, USA, October,
1997). Knowledge extraction from artificial neural
network models. In IEEE International Conference
on Systems, Man, and Cybernetics, Computational
Cybernetics and Simulation, volume 4, pages 3030–
3035.
Bottou, L. (Paris France, August, 2010). Large-scale ma-
chine learning with stochastic gradient descent. In In-
ternational Conference on Computational Statistics.
Busset, J., Perrodin, F., Wellig, P., Ott, B., Heutschi, K.,
Rühl, T., and Nussbaumer, T. (2015). Detection and
tracking of drones using advanced acoustic cameras.
In Unmanned/Unattended Sensors and Sensor Net-
works XI; and Advanced Free-Space Optical Commu-
nication Techniques and Applications, volume 9647,
page 96470.
Chan, W., Jaitly, N., Le, Q., and Vinyals, O. (Shanghai,
China, March, 2016). Listen, attend and spell: A
neural network for large vocabulary conversational
speech recognition. In IEEE International Conference
on Acoustics, Speech and Signal Processing.
Chang, X., Yang, C., Wu, J., Shi, X., and Shi, Z. (Sheffield,
UK, July, 2018). A surveillance system for drone
localization and tracking using acoustic arrays. In
IEEE Sensor Array and Multichannel Signal Process-
ing Workshop (SAM).
Deng, C., Liao, S., Xie, Y., Parhi, K. K., Qian, X., and Yuan,
B. (Fukuoka, Japan, October, 2018). PermDNN: Effi-
cient compressed DNN architecture with permuted di-
agonal matrices. In Annual IEEE/ACM International
Symposium on Microarchitecture (MICRO).
Graves, A., Mohamed, A.-r., and Hinton, G. (Vancouver,
Canada, May, 2013). Speech recognition with deep
recurrent neural networks. In IEEE International Con-
ference on Acoustics, Speech and Signal Processing.
He, X. and Xu, S. (2010). Artificial neural networks. Pro-
cess neural networks: theory and applications, pages
20–42.
Huang, G.-B. (2003). Learning capability and storage
capacity of two-hidden-layer feedforward networks.
IEEE Transactions on Neural Networks, 14(2):274–
281.
Jahromi, M. G., Parsaei, H., Zamani, A., and Stashuk, D. W.
(2018). Cross comparison of motor unit potential fea-
tures used in EMG signal decomposition. IEEE Trans-
actions on Neural Systems and Rehabilitation Engi-
neering, 26(5):1017–1025.
Kobayashi, T. (Cardiff, UK, September, 2019). Large mar-
gin in softmax cross-entropy loss. In British Machine
Vision Conference.
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learn-
ing. Nature, 521(7553):436–444.
Liu, W., Wen, Y., Yu, Z., and Yang, M. (New York City,
USA, June, 2016). Large-margin softmax loss for con-
volutional neural networks. In International Confer-
ence on Machine Learning.
Nasr, G. E., Badr, E., and Joun, C. (2002). Cross entropy er-
ror function in neural networks: Forecasting gasoline
demand. In FLAIRS Conference, pages 381–384.
Nielsen, M. A. (2015). Neural networks and deep learn-
ing, volume 2018. Determination Press, San Fran-
cisco, CA.
Passalis, N., Tefas, A., Kanniainen, J., Gabbouj, M., and
Iosifidis, A. (2019). Deep adaptive input normaliza-
tion for time series forecasting. IEEE Transactions on
Neural Networks and Learning Systems.
Ramamonjy, A., Bavu, E., Garcia, A., and Hengy, S. (Le
Mans, France, 2016). Détection, classification et suivi
de trajectoire de sources acoustiques par captation
pression-vitesse sur capteurs MEMS numériques [De-
tection, classification and trajectory tracking of acous-
tic sources via pressure-velocity sensing on digital
MEMS sensors]. In Congrès de la Société Française
d'Acoustique - CFA16/VISHNO.
Raschka, S. (2018). Model evaluation, model selection,
and algorithm selection in machine learning. arXiv
preprint arXiv:1811.12808.
Saranya, C. and Manikandan, G. (2013). A study on
normalization techniques for privacy preserving data
mining. International Journal of Engineering and
Technology, 5(3):2701–2704.
Sola, J. and Sevilla, J. (1997). Importance of input data
normalization for the application of neural networks
to complex industrial problems. IEEE Transactions
on Nuclear Science, 44(3):1464–1468.
Zhao, C. and Gao, X.-S. (2019). QDNN: DNN with
quantum neural network layers. arXiv preprint
arXiv:1912.12660.
Zhu, Q., He, Z., Zhang, T., and Cui, W. (2020). Improv-
ing classification performance of softmax loss func-
tion based on scalable batch-normalization. Applied
Sciences, 10(8):2950.
DATA 2021 - 10th International Conference on Data Science, Technology and Applications