Sherif Eissa

Eindhoven University of Technology

EDL P165 P7: Efficient Deep Learning Platforms 

Research assignment
Ultra-low power deep learning acceleration for mobile platforms

Deep learning (DL) has driven a recent revolution in Artificial Intelligence. This surge has sharply increased research interest in deep learning acceleration using dedicated hardware architectures. Despite this interest, with many tech companies introducing heterogeneous architectures with dedicated AI cores, DL still faces obstacles to deployment in edge applications, especially under constrained power budgets. For these reasons, research is heading towards a new generation of deep learning architectures that can provide a variety of useful applications under such budgets. One type of network that holds this promise is the Spiking Neural Network (SNN), which aims to combine computational paradigms of the human neocortex with deep learning applications. To achieve the low-power promise that SNNs hold, dedicated hardware needs to be designed in close collaboration between hardware architecture and network architecture design. The field of SNN hardware design, which aims to create silicon more similar to the brain’s “wetware”, is rightfully called “Neuromorphic Computing”. In my PhD, I dive into the world of neuromorphic computing, with the aim of creating scalable, efficient, low-power hardware that can run SNNs on constrained power budgets (typically less than 1 Watt).

In my first year, besides gaining broad knowledge of the field and writing a comprehensive literature review for my own benefit and that of other students and researchers at my institute, I optimized event-based computational hardware by aggressively approximating its most complicated arithmetic operation, the exponential decay function. I submitted a paper draft discussing three lossy approximation techniques, including one completely novel implementation that achieved competitive results. I also showed that SNNs can quickly adapt and relearn when using extremely simplified exponential decay functions.
Figure: Single stage of our novel exponential decay approximation (logarithmic scaling).
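
To illustrate the general idea behind lossy exponential decay approximation (not the paper's exact scheme), a common hardware-friendly trick is to replace the multiplication by exp(-dt/tau) with a decay factor of the form 1 - 2^(-k), which reduces to a subtract-and-shift. The sketch below is illustrative only; the function name and parameters are assumptions, not taken from the publication.

```python
def shift_decay(v: int, k: int) -> int:
    """Approximate v * exp(-dt/tau) by v * (1 - 2**-k), computed as a
    subtract-and-shift. Cheap in hardware: one shifter and one subtractor,
    no multiplier. Illustrative sketch, not the paper's implementation."""
    return v - (v >> k)

# Example: an integer membrane potential decaying over five time steps.
v = 1024
trace = []
for _ in range(5):
    v = shift_decay(v, 3)  # decay factor of 1 - 1/8 = 0.875 per step
    trace.append(v)
print(trace)  # [896, 784, 686, 601, 526]
```

The exact value after five steps would be 1024 * 0.875^5 ≈ 525.2, so the integer approximation stays close while avoiding both floating point and multiplication, which is exactly the kind of cost reduction event-based hardware benefits from.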


Hardware Approximation of Exponential Decay for Spiking Neural Networks.
Sherif Eissa, Sander Stuijk, Henk Corporaal. Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), June 2021. No open access.

Personal information: