VINE: A Variational Inference-Based Bayesian Neural Network Engine

Sponsor: Defense Advanced Research Projects Agency.

The primary goal is to develop a Bayesian Neural Network (BNN) with an integrated Variational Inference (VI) engine that performs inference and learning under uncertain or incomplete input and output features. A secondary goal is to enable robust decision making under noise and variability in the observed data, without reference to a ground truth.

Algorithm/Hardware Co-Design of Input Dimension Reduction Module

We developed a new approximation to gradient-descent optimization that is suitable for a hardware implementation of an input dimension reduction module based on independent component analysis (ICA). The proposed approximation enables a hardware implementation that operates at an order of magnitude higher clock frequency than prior work and achieves a two-orders-of-magnitude improvement in throughput.

Related work:

  • M. Nazemi, S. Nazarian, and M. Pedram, “High-Performance FPGA Implementation of Equivariant Adaptive Separation via Independence Algorithm for Independent Component Analysis,” IEEE International Conference on Application-specific Systems, Architectures and Processors, July 2017.
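The hardware-friendly approximation itself is detailed in the paper above rather than reproduced here; as a point of reference, the following is a minimal NumPy sketch of the baseline EASI (Equivariant Adaptive Separation via Independence) update that such an implementation accelerates. The step size, source signals, and mixing matrix are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def easi_step(W, x, mu=5e-3, g=np.tanh):
    # One serial EASI update of the unmixing matrix W for sample x.
    # The relative-gradient form needs no matrix inversion, which is
    # part of what makes fixed-point hardware realizations tractable.
    y = W @ x
    I = np.eye(y.size)
    G = (np.outer(y, y) - I) + (np.outer(g(y), y) - np.outer(y, g(y)))
    return W - mu * G @ W

# Toy demo: unmix two instantaneously mixed sources.
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sign(np.sin(13 * t)),          # square wave
               rng.uniform(-1, 1, t.size)])      # uniform noise
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
X = A @ S
W = np.eye(2)
for x in X.T:
    W = easi_step(W, x)
Y = W @ X    # recovered sources, up to permutation and scale
```

Because each update touches W only through outer products and one matrix multiply, the per-sample work maps naturally onto a fixed datapath, which is the structure a pipelined FPGA design can exploit.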

Medical Dataset Construction with Severity Scores and Comorbidities

We investigated physiological measurements from MIMIC-III, a publicly available, de-identified medical dataset comprising 58,976 distinct hospital admissions. Nine commonly used severity scores (SAPS-II, APS-III, LODS, SOFA, qSOFA, SIRS, OASIS, MLODS, and SAPS) and the Elixhauser comorbidities are extracted from the original dataset. The extracted features are used for a mortality prediction application in a BNN and achieve competitive results.
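A minimal sketch of how the extracted features could be assembled into a design matrix for the mortality prediction task. The table below is a hypothetical stand-in with made-up values; the real MIMIC-III extraction involves its own table schemas and column names, which are not reproduced here.

```python
import pandas as pd

# Hypothetical admission-level table: severity scores plus Elixhauser
# comorbidity flags (values are illustrative, not real MIMIC-III data).
df = pd.DataFrame({
    "hadm_id":       [1, 2, 3],
    "sofa":          [4, 9, 2],       # severity scores
    "saps_ii":       [30, 52, 18],
    "elix_chf":      [0, 1, 0],       # Elixhauser: congestive heart failure
    "elix_diabetes": [1, 1, 0],       # Elixhauser: diabetes
    "mortality":     [0, 1, 0],       # prediction target
})

score_cols = ["sofa", "saps_ii"]
comorb_cols = [c for c in df.columns if c.startswith("elix_")]

# Feature matrix (scores + comorbidity flags) and label vector for the BNN.
X = df[score_cols + comorb_cols].to_numpy(dtype=float)
y = df["mortality"].to_numpy()
```

Concatenating scalar severity scores with binary comorbidity indicators yields one fixed-length feature vector per admission, which is the input format a BNN classifier expects.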

Gaussian Random Number Generator in Hardware

One crucial component for a hardware implementation of a BNN is the Gaussian random number generator. We developed a RAM-based Linear Feedback Gaussian Random Number Generator (RLF-GRNG) inspired by the properties of the binomial distribution and linear feedback logic. The proposed RLF-GRNG requires minimal, sharable extra logic for control and indexing, making it well suited to parallel random number generation.
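The RAM-based indexing and logic-sharing details of the RLF-GRNG are not reproduced here; the following is a minimal software sketch of the underlying principle only: summing bits from a linear feedback shift register so that the binomial sum approximates a Gaussian. The LFSR width, taps, and bits-per-sample count are illustrative choices.

```python
import numpy as np

def lfsr16_bits(n, seed=0xACE1):
    # 16-bit maximal-length Fibonacci LFSR (taps at bits 16, 14, 13, 11).
    state = seed
    out = np.empty(n, dtype=np.int64)
    for i in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out[i] = state & 1
    return out

def grng_samples(n_samples, bits_per_sample=32, seed=0xACE1):
    # Sum of k Bernoulli(1/2) bits ~ Binomial(k, 1/2), which approaches
    # N(k/2, k/4) by the central limit theorem; shift and scale to ~N(0, 1).
    bits = lfsr16_bits(n_samples * bits_per_sample, seed)
    s = bits.reshape(n_samples, bits_per_sample).sum(axis=1)
    return (s - bits_per_sample / 2) / np.sqrt(bits_per_sample / 4)

z = grng_samples(1000)
```

In hardware, the adder tree over LFSR bits replaces the transcendental functions (log, sqrt, sin/cos) that methods like Box-Muller require, and the feedback logic can be shared across parallel generators.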

Knowledge Transfer Framework

Knowledge transfer is widely adopted in learning applications to allow different domains, distributions, and tasks to be used during training and testing. Since a conventional BNN cannot be used for action recommendation without encountering the “no ground truth” problem (i.e., the best action is not present in the training data), we propose a knowledge transfer framework that first constructs a BNN for outcome prediction only (i.e., latent model construction) and then transfers the learned knowledge to the domain of action recommendation.
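The two-stage idea can be sketched as follows: once a latent outcome model is trained, actions are recommended by scoring candidates under that model rather than by imitating labeled "best actions." The `outcome_model` below is a hypothetical stand-in for the trained BNN's predictive mean, not the actual framework.

```python
import numpy as np

def recommend(outcome_model, state, candidate_actions):
    # Stage 2: score each candidate action with the stage-1 outcome model
    # and pick the best. This sidesteps the "no ground truth" problem,
    # since the recommended action need not appear in the training data.
    scores = [outcome_model(np.append(state, a)) for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]

# Hypothetical stand-in for the trained BNN's predictive mean over
# (state, action) inputs: here, outcome improves with larger action.
outcome_model = lambda v: float(v @ np.array([0.5, 1.0]))

best = recommend(outcome_model,
                 state=np.array([1.0]),
                 candidate_actions=[0.0, 2.0, 1.0])
```

The separation matters: the outcome model is trained purely on observed (state, action, outcome) triples, while the recommendation step only queries it, so no supervision signal for "best action" is ever needed.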