6 CONCLUSION
In this paper, we have described the EMNIST benchmark dataset together with deep convolutional neural network architectures and their hyperparameters. By tuning the hyperparameters of our CNNs, we achieved results that compare favourably with other work on the same benchmark. We believe that further tuning would yield additional gains; we therefore intend to evaluate deeper CNN architectures with a broader hyperparameter search to improve performance even further.
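As a concrete illustration of the kind of search involved, the following minimal sketch (not the code used in this work) tunes two hyperparameters of a small CNN on the EMNIST Balanced split. The architecture, the tensorflow_datasets loader, and the grid values are illustrative assumptions, not the configuration reported above.

```python
# Hypothetical hyperparameter grid search on EMNIST Balanced (47 classes).
# Illustrative only: the grid, architecture, and epoch budget are assumptions.
import itertools
import tensorflow as tf
import tensorflow_datasets as tfds

def make_cnn(learning_rate, dropout_rate, num_classes=47):
    # Small CNN with batch normalization, in the spirit of the paper's models.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

def preprocess(image, label):
    # Scale pixel intensities from [0, 255] to [0, 1].
    return tf.cast(image, tf.float32) / 255.0, label

train = tfds.load("emnist/balanced", split="train", as_supervised=True)
test = tfds.load("emnist/balanced", split="test", as_supervised=True)
train = train.map(preprocess).shuffle(10_000).batch(128)
test = test.map(preprocess).batch(128)

# Exhaustive search over a tiny, hypothetical grid of two hyperparameters.
best_config, best_acc = None, 0.0
for lr, drop in itertools.product([1e-3, 1e-4], [0.25, 0.5]):
    model = make_cnn(lr, drop)
    model.fit(train, epochs=5, verbose=0)
    _, acc = model.evaluate(test, verbose=0)
    if acc > best_acc:
        best_config, best_acc = (lr, drop), acc
print("best (lr, dropout):", best_config, "test accuracy:", best_acc)
```

In practice, the same loop extends naturally to optimizer choice (e.g., RMSprop versus Adam), batch size, and network depth, which is the direction of the future work outlined above.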
REFERENCES
Baldominos, A., Saez, Y., & Isasi, P. (2019). Hybridizing
Evolutionary Computation and Deep Neural
Networks: An Approach to Handwriting Recognition
Using Committees and Transfer Learning.
Complexity, 2019, 1–16.
https://doi.org/10.1155/2019/2952304
Cavalin, P., & Oliveira, L. (2019). Confusion Matrix-
Based Building of Hierarchical Classification. In R.
Vera-Rodriguez, J. Fierrez, & A. Morales (Eds.),
Progress in Pattern Recognition, Image Analysis,
Computer Vision, and Applications (Vol. 11401, pp.
271–278). Springer International Publishing.
https://doi.org/10.1007/978-3-030-13469-3_32
Ciresan, D. C., Meier, U., Gambardella, L. M., &
Schmidhuber, J. (2011). Convolutional Neural
Network Committees for Handwritten Character
Classification. 2011 International Conference on
Document Analysis and Recognition, 1135–1139.
https://doi.org/10.1109/ICDAR.2011.229
Cohen, G., Afshar, S., Tapson, J., & van Schaik, A.
(2017). EMNIST: An extension of MNIST to
handwritten letters. ArXiv:1702.05373 [Cs].
http://arxiv.org/abs/1702.05373
Grother, P. (1995). NIST Special Database 19: Handprinted Forms and Characters Database. National Institute of Standards and Technology. https://www.semanticscholar.org/paper/NIST-Special-Database-19-Handprinted-Forms-and-Grother/1ea788f1f4334095d215afd4c137936ff89d7f68
Hinton, G. (2012). Lecture Notes On RMSprop. http://www.cs.toronto.edu/~hinton/coursera/lecture6/lec6.pdf
Hussain, R., Raza, A., Siddiqi, I., Khurshid, K., & Djeddi,
C. (2015). A comprehensive survey of handwritten
document benchmarks: Structure, usage and
evaluation. EURASIP Journal on Image and Video
Processing, 2015(1), 46.
https://doi.org/10.1186/s13640-015-0102-5
Ioffe, S., & Szegedy, C. (2015). Batch Normalization:
Accelerating Deep Network Training by Reducing
Internal Covariate Shift. ArXiv:1502.03167 [Cs].
http://arxiv.org/abs/1502.03167
Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S.
(2020). A survey of the recent architectures of deep
convolutional neural networks. Artificial Intelligence
Review, 53(8), 5455–5516.
https://doi.org/10.1007/s10462-020-09825-6
Kingma, D. P., & Ba, J. (2017). Adam: A Method for
Stochastic Optimization. ArXiv:1412.6980 [Cs].
http://arxiv.org/abs/1412.6980
Le Cun, Y., Jackel, L. D., Boser, B., Denker, J. S., Graf,
H. P., Guyon, I., Henderson, D., Howard, R. E., &
Hubbard, W. (1989). Handwritten digit recognition:
Applications of neural network chips and automatic
learning. IEEE Communications Magazine, 27(11),
41–46. https://doi.org/10.1109/35.41400
Misra, D. (2019). Mish: A Self Regularized Non-Monotonic Activation Function. ArXiv:1908.08681 [Cs, Stat]. http://arxiv.org/abs/1908.08681
Peng, Y., & Yin, H. (2017). Markov Random Field Based
Convolutional Neural Networks for Image
Classification. In H. Yin, Y. Gao, S. Chen, Y. Wen, G.
Cai, T. Gu, J. Du, A. J. Tallón-Ballesteros, & M.
Zhang (Eds.), Intelligent Data Engineering and
Automated Learning – IDEAL 2017 (Vol. 10585, pp.
387–396). Springer International Publishing.
https://doi.org/10.1007/978-3-319-68935-7_42
Ruder, S. (2017). An overview of gradient descent
optimization algorithms. ArXiv:1609.04747 [Cs].
http://arxiv.org/abs/1609.04747
Sen Sharma, A., Ahmed Mridul, M., Jannat, M.-E., &
Saiful Islam, M. (2018). A Deep CNN Model for
Student Learning Pedagogy Detection Data Collection
Using OCR. 2018 International Conference on Bangla