Category: Realizing DNNs with Fixed-Function Combinational Logic
-
Algorithm-Architecture Co-Design for Energy-Efficient and Reliable Machine Learning Models
Sponsor: National Science Foundation (NSF) The broader scope of this research includes: a) energy-efficient architecture and algorithm co-design for DNN training to yield compressed models, b) efficient model compression that preserves model robustness, c) model compression of brain-inspired deep SNNs.
Related work:
-
Energy-Efficient, Low-Latency Realization of Neural Networks Through Boolean Logic Minimization
Sponsor: TBD To cope with the computational and storage complexity of deep neural networks, this project focuses on a training method that enables a radically different realization of deep neural networks through Boolean logic minimization. This realization completely removes the energy-hungry step of accessing memory to obtain model parameters, and consumes about two orders…
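To give a flavor of the idea, here is a minimal, hypothetical sketch (not the project's actual method): a single binarized neuron with fixed toy weights is exhaustively enumerated into a truth table, whose minterms define a sum-of-products Boolean expression that a logic minimizer could then reduce to a handful of gates. Once realized as fixed combinational logic, no weight memory access is needed at inference time.

```python
from itertools import product

# Hypothetical toy example: a 3-input binarized neuron with weights in
# {-1, +1} and inputs in {0, 1}. The weights and threshold below are
# assumed for illustration only.
weights = [+1, -1, +1]
threshold = 1  # neuron fires if the weighted sum >= threshold


def neuron(bits):
    # Map input bits {0, 1} to activations {-1, +1}, as in binarized networks.
    s = sum(w * (2 * b - 1) for w, b in zip(weights, bits))
    return int(s >= threshold)


# Truth table: every input pattern maps to a fixed 0/1 output,
# so the neuron is just a fixed Boolean function of its inputs.
table = {bits: neuron(bits) for bits in product([0, 1], repeat=3)}

# Minterms (input patterns producing 1) give a sum-of-products expression;
# a logic minimizer (e.g. Espresso) could reduce it to fewer gates.
minterms = [bits for bits, out in table.items() if out == 1]
expr = " OR ".join(
    "(" + " AND ".join(f"x{i}" if b else f"NOT x{i}"
                       for i, b in enumerate(bits)) + ")"
    for bits in minterms
)
print(expr)
```

Because the network's parameters are baked into the logic itself, the resulting circuit is purely combinational: there are no parameter fetches, only gate evaluations.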