
Mixed Signal Approaches to Machine Learning Hardware Accelerator for Inference Engines
The rapid advancements in computing, communication, and networking technologies, enabled by transistor feature-size scaling, have not only made connected devices, i.e., the Internet of Things (IoT), possible but have also generated enormous volumes of data. This data has driven rapid advances in big-data analytics, ushering in a new era of artificial intelligence and machine learning hardware for smart connected devices.
We have reached an inflection point in the design of such smart connected devices: the machine learning hardware designer must look beyond conventional digital computing blocks and possibly revive analog computing. This shift has produced a new generation of analog designers who combine conventional analog circuits with approximate-computing techniques, using both conventional CMOS and CMOS-compatible emerging devices, to build energy-efficient systems.
This talk begins with an overview of neural network architectures and the computing blocks needed to realize them. It then presents FPGA-based Convolutional Neural Network (CNN) architectures for energy-efficient inference engines targeting image/depth-image classification and seizure prediction; these designs also reduce the sensor-interface front-end power and the energy-per-bit needed to transmit sensor data to the inference engine. Hardware techniques for memory-augmented neural networks (MANNs) using novel magneto-electric FETs (MeFETs) are covered as well, along with analog dot-product computation methodologies, viz., conductance-based, charge-based, and gm-based. The talk concludes with an oscillator-based mixed-signal Spiking Neural Network (SNN) architecture and techniques that facilitate energy-efficient training on the edge.
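To give a flavor of the conductance-based dot-product idea mentioned above, the sketch below models an idealized resistive crossbar: inputs are encoded as row voltages, weights as cell conductances, and Kirchhoff's current law sums the per-cell currents on each column, so every column current is one weighted sum. The function name, array values, and units here are illustrative assumptions, not details from the talk.

```python
import numpy as np

def crossbar_dot_product(voltages, conductances):
    """Ideal column currents of a resistive crossbar: I_j = sum_i V_i * G_ij.

    In an actual analog array, Ohm's law produces each product V_i * G_ij
    at a cell, and the shared column wire sums the currents "for free";
    this matrix-vector product is the idealized model of that physics.
    """
    return voltages @ conductances

# Hypothetical example: 3 input rows, 2 output columns.
V = np.array([0.1, 0.2, 0.3])           # input voltages (V)
G = np.array([[1e-6, 2e-6],             # cell conductances (S)
              [3e-6, 4e-6],
              [5e-6, 6e-6]])
I = crossbar_dot_product(V, G)          # column currents (A)
print(I)
```

Charge-based and gm-based schemes follow the same pattern but accumulate the products as sampled charge or transconductance currents instead of static resistive currents; nonidealities such as wire resistance and device variation, which this ideal model omits, are what make the circuit design challenging.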