Sponsor: TBD
To cope with the computational and storage complexity of deep neural networks, this project focuses on a training method that enables a radically different approach to realizing deep neural networks through Boolean logic minimization. This realization completely eliminates the energy-hungry step of accessing memory to fetch model parameters, consumes about two orders of magnitude fewer computing resources than realizations based on floating-point operations, and has substantially lower latency.
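To make the idea concrete, the sketch below tabulates a single binarized neuron as a truth table and then runs two-level Boolean minimization on it. This is an illustrative toy under assumed conventions (±1 weights, {0,1} inputs, step activation; the names binarized_neuron, weights, and threshold and the parameter values are made up here), with SymPy's SOPform standing in for a production minimizer such as ESPRESSO; it is not the project's actual tool flow.

```python
# Minimal sketch (not the project's implementation): tabulate one binarized
# neuron as a truth table, then minimize it into two-level logic. All names
# and parameter values here are illustrative assumptions.
from itertools import product

from sympy import symbols
from sympy.logic import SOPform

def binarized_neuron(x, weights, threshold):
    """Neuron with +/-1 weights, {0,1} inputs, and a step activation."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= threshold)

weights, threshold = (+1, -1, +1), 1  # toy parameters

# Exhaustively enumerate the input space: the neuron becomes a fixed
# Boolean function, so no weight memory is touched at inference time.
minterms = [list(x) for x in product((0, 1), repeat=len(weights))
            if binarized_neuron(x, weights, threshold)]

# Two-level minimization (SymPy's Quine-McCluskey implementation stands in
# for an industrial tool such as ESPRESSO); the resulting expression maps
# directly onto fixed-function combinational logic, e.g., FPGA LUTs.
a, b, c = symbols('a b c')
print(SOPform([a, b, c], minterms))
```

Once every neuron is captured this way, inference reduces to evaluating fixed Boolean expressions, which is what removes the parameter-fetch memory traffic described above.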
Related work:
- M. Nazemi, A. Fayyazi, A. Esmaili, A. Khare, S. N. Shahsavani, and M. Pedram, "NullaNet Tiny: Ultra-low-latency DNN Inference Through Fixed-function Combinational Logic," Proc. 29th IEEE Int'l Symp. on Field-Programmable Custom Computing Machines (FCCM), May 2021.
- M. Nazemi, G. Pasandi, and M. Pedram, "Energy-efficient, low-latency realization of neural networks through Boolean logic minimization," Proc. Asia and South Pacific Design Automation Conference (ASP-DAC), 2019. (Best Paper Award)