Authors:
Abdelkader Ghis 1,2; Kamel Smiri 1,3; and Abderezzak Jemai 4
Affiliations:
1 Laboratory of SERCOM, University of Carthage/Polytechnic School of Tunisia, 2078, La Marsa, Tunisia
2 University of Tunis El Manar, Tunis, 1068, Tunisia
3 University of Manouba/ISAMM, Manouba, 2010, Tunisia
4 Carthage University, Tunisia Polytechnic School, SERCOM-Lab., INSAT, 1080, Tunis, Tunisia
Keyword(s):
Neural Network, Learning, Back-propagation, Data Partitioning, Neural Network Distribution.
Abstract:
Neural network learning is a persistent optimization problem: its complexity and size increase considerably with system size. Different optimization approaches have been introduced to reduce the memory occupation and time consumption of neural network learning. However, as modern systems become increasingly scalable and complex in nature, the existing solutions in the literature need to be updated to respond to these changes. For this reason, we propose a mixed software/hardware optimization approach for accelerating neural network learning. The proposed approach combines software improvement with hardware distribution: the data are partitioned in a way that avoids the problem of local convergence, the weights are updated in a manner that overcomes the latency problem, and the learning process is distributed over multiple processing units to minimize time consumption. The proposed approach is tested and validated by exhaustive simulations.
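To make the abstract's idea of combining data partitioning with distributed gradient computation concrete, the following is a minimal illustrative sketch, not the paper's actual method: it partitions a dataset into shards, has each simulated worker run back-propagation (here, the gradient of a single-layer logistic model) on its shard, and synchronously averages the local gradients into one weight update. All names (`partition`, `local_gradient`, `distributed_train`) and the averaging rule are assumptions for illustration.

```python
import numpy as np

def partition(X, y, n_workers):
    """Split the dataset into roughly equal shards, one per worker
    (illustrative partitioning; the paper's scheme may differ)."""
    idx = np.array_split(np.arange(len(X)), n_workers)
    return [(X[i], y[i]) for i in idx]

def local_gradient(w, X, y):
    """Gradient of the cross-entropy loss for a single sigmoid layer,
    computed on one worker's shard."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # forward pass (sigmoid)
    return X.T @ (p - y) / len(y)      # backward pass: dL/dw

def distributed_train(X, y, n_workers=4, lr=0.5, epochs=200):
    """Data-parallel training loop: each worker computes a gradient on
    its shard; a master averages them and applies one synchronous update."""
    w = np.zeros(X.shape[1])
    shards = partition(X, y, n_workers)
    for _ in range(epochs):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
        w -= lr * np.mean(grads, axis=0)
    return w

# Small synthetic experiment: linearly separable labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w > 0).astype(float)

w = distributed_train(X, y)
acc = np.mean(((X @ w) > 0) == y)
```

Because every shard contributes to each synchronous update, the averaged gradient equals the full-batch gradient, so this sketch trades communication per step for parallel gradient computation; asynchronous or latency-tolerant update rules, as the abstract alludes to, would relax that synchronization.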