Deep Learning and Neuromorphic Computing - Technology, Hardware, and Implementation
As big data processing becomes pervasive and ubiquitous in our lives, the desire for embedded-everywhere and human-centric information systems calls for an intelligent computing paradigm capable of handling large volumes of data through massively parallel operations under limited hardware and power resources. This demand, however, is unlikely to be satisfied by traditional computer systems, whose performance is greatly hindered by the widening performance gap between CPU and memory as well as fast-growing power consumption. Inspired by the working mechanism of the human brain, a neuromorphic system naturally possesses a massively parallel architecture with closely coupled memory, offering a great opportunity to break the "memory wall" of the von Neumann architecture. The tutorial will start with the evolution of neural networks, followed by their acceleration on conventional platforms. I will then introduce neuromorphic system designs, including approaches based on CMOS and emerging nanotechnologies. The latest research outcomes on hardware implementation optimization, reliability and robustness control schemes, and new training methodologies that take hardware constraints into consideration will then be presented. Finally, new applications and challenges arising in deep learning and neuromorphic computing will be discussed.