Presentation Type
Lecture

Tutorial - NeuroSim: A Benchmark Framework of Compute-in-Memory Hardware Accelerators from Devices/Circuits to Architectures/Algorithms

Presenter

Shimeng Yu

Country
USA
Affiliation
Georgia Institute of Technology, USA

Abstract

DNN+NeuroSim is an integrated framework for benchmarking compute-in-memory (CIM) accelerators for deep neural networks (DNNs), with hierarchical design options spanning the device, circuit, and algorithm levels. NeuroSim is a C++ based circuit-level macro model that enables fast early-stage design exploration compared to a full SPICE simulation. It takes design parameters including memory type (SRAM, RRAM/PCM, or FeFET), non-ideal device parameters, transistor technology node (from 130 nm down to 7 nm), memory array size, and training dataset and traces to estimate area, latency, dynamic energy, and leakage power. A Python wrapper interfaces NeuroSim with the deep learning platforms PyTorch and TensorFlow, supporting flexible network topologies such as VGG and ResNet on CIFAR and ImageNet. The framework supports weight/activation/gradient/error quantization at the algorithm level and accounts for the non-ideal properties of synaptic devices and peripheral circuits to estimate training and inference accuracy. The framework is open-sourced and publicly available on GitHub. DNN+NeuroSim's user community is growing and includes industry researchers from Intel, TSMC, Samsung, and SK Hynix. It is therefore timely to offer a tutorial that educates the broader community and helps researchers use and modify the code more flexibly for their own research purposes.
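
To illustrate the kinds of design knobs and algorithm-level quantization the framework deals with, the sketch below shows a minimal PyTorch example: a uniform weight/activation quantizer plus a dictionary of illustrative CIM design parameters (memory type, technology node, array size, bit precisions). All names here are hypothetical placeholders, not the actual NeuroSim configuration interface; the real options are documented in the GitHub repository.

    import torch

    def uniform_quantize(x: torch.Tensor, n_bits: int) -> torch.Tensor:
        # Clip to [-1, 1] and round to the nearest of 2^(n_bits-1)-1 uniform levels.
        levels = 2 ** (n_bits - 1) - 1
        x = torch.clamp(x, -1.0, 1.0)
        return torch.round(x * levels) / levels

    # Hypothetical CIM design knobs of the kind NeuroSim takes as input
    # (keys are illustrative, not the actual NeuroSim configuration names).
    design = {
        "memory_type": "RRAM",    # SRAM, RRAM/PCM, or FeFET
        "tech_node_nm": 32,       # transistor technology node, 130 nm down to 7 nm
        "subarray_rows": 128,     # memory array size
        "subarray_cols": 128,
        "weight_bits": 8,         # weight precision used when mapping to the array
        "activation_bits": 8,     # activation precision at the array input
    }

    # Quantize a random weight tensor to the chosen precision before it would be
    # mapped onto the (simulated) memory array.
    w = torch.randn(64, 64)
    w_q = uniform_quantize(w, design["weight_bits"])
    print(w_q.unique().numel(), "distinct weight levels used")

In the actual framework, parameters of this kind feed the C++ macro model for area/latency/energy estimation, while the quantized weights and activations propagate through the PyTorch/TensorFlow wrapper to estimate training and inference accuracy under device non-idealities.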