Designing Low-Power “Intelligent” Chips in the Face of Statistical Variations of Nanoscale Devices: The Neuromorphic Solution
As CMOS technology has scaled down over the last decade, statistical variations (or component mismatch) have had an increasingly prominent impact on circuit design. In addition, new nanoscale devices such as memristors and spin-based devices such as domain-wall memories have emerged as candidates for neuromorphic computing at energy levels below those of CMOS; however, they too suffer from variability and mismatch. In this talk, I will present work from our group that takes inspiration from neuroscience and shows new approaches to performing machine learning at low energy using low-resolution, mismatched components. First, I will discuss “combinatoric learning” using binary, or 1-bit, synapses: an alternative to weight-based learning in neural networks that is inspired by structural plasticity in our brains. Second, I will present an example of exploiting component mismatch to perform part of the computation itself, an instance of algorithm-hardware co-design involving random-projection algorithms such as Reservoir Computing and the Extreme Learning Machine. Lastly, I will show an application of such a low-power machine learner to intention decoding in low-power brain-machine interfaces.
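To make the random-projection idea concrete, the following is a minimal sketch of an Extreme Learning Machine in Python. It is not the speaker's implementation; it only illustrates the general principle the abstract alludes to: the hidden layer is a fixed random projection (which, in hardware, device mismatch can provide "for free"), and only a linear readout is trained by least squares. The network size and toy regression task are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=64):
    """Train an Extreme Learning Machine: random hidden layer, linear readout."""
    # Fixed random input weights and biases -- never trained.
    # In a mismatched-hardware realization, these would come from
    # device-to-device variation rather than a software RNG.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from samples.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)
W, b, beta = elm_train(X, Y)
err = np.mean((elm_predict(X, W, b, beta) - Y) ** 2)
```

Because only the readout weights are learned, training reduces to one linear solve, which is what makes this family of algorithms attractive for low-power, mismatch-tolerant hardware.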