Energy Efficient Sparse Machine Learning Processor
Sparsity is prevalent in modern neural networks, and supporting it in hardware is an important direction for improving the energy efficiency of machine learning chips. This talk begins with an introduction to pruning algorithms for obtaining sparse neural networks (i.e., unstructured and structured sparsity). We then review up-to-date architectures and chips for efficient inference and training of sparse neural networks, covering both spatial-domain and time-domain sparsity. Finally, we discuss challenges and future directions for supporting sparsity in computing-in-memory artificial intelligence chips.
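
To make the distinction between the two pruning styles concrete, below is a minimal NumPy sketch (not from the talk; the toy matrix and thresholds are illustrative assumptions). Unstructured pruning zeros individual small-magnitude weights, yielding an irregular pattern that needs dedicated sparse hardware to exploit; structured pruning removes whole rows (e.g., output channels), yielding a smaller dense layer that even conventional hardware benefits from.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 8))  # toy weight matrix of one layer

    # Unstructured pruning: zero the individual weights with the
    # smallest magnitudes (here, the bottom 50%). The sparsity
    # pattern is irregular.
    threshold = np.quantile(np.abs(W), 0.5)
    W_unstructured = np.where(np.abs(W) >= threshold, W, 0.0)

    # Structured pruning: drop entire rows (e.g., output channels)
    # with the smallest L2 norms. The result stays dense but smaller.
    row_norms = np.linalg.norm(W, axis=1)
    keep = row_norms >= np.quantile(row_norms, 0.5)
    W_structured = W[keep, :]

    print(f"unstructured sparsity: {np.mean(W_unstructured == 0):.0%}")
    print(f"structured: kept {W_structured.shape[0]} of {W.shape[0]} rows")

This trade-off (irregular but fine-grained sparsity versus regular but coarser sparsity) is exactly what drives the hardware design choices the talk surveys.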