Presentation Type
Webinar

Computation-in-Memory Architectures for Edge-AI

Presenter

Abstract

Emerging IoT-edge applications are extremely demanding in terms of storage, computing power, and energy efficiency in order to enable the deployment of AI and hence generate the “information” locally rather than communicating the data to, e.g., the cloud. On the other hand, both today’s computer architectures and device technologies face major challenges that make them incapable of delivering the required functionalities and features at an economically affordable cost. In order for computing systems to continue delivering sustainable benefits to society for the foreseeable future, alternative computing architectures and notions have to be explored in the light of emerging new device technologies. This tutorial addresses the potential, design, and test of computation-in-memory (CIM) architectures based on non-volatile (NV) devices such as ReRAM, PCM, and STT-MRAM as alternative low-power hardware architectures that could enable edge-AI. First, the talk briefly explains the limitations of both CMOS scaling and today’s computing architectures. Then it classifies the state-of-the-art computer architectures and highlights how the trend is moving toward CIM architectures in order to eliminate and/or significantly reduce the limitations of today’s technologies. The concept of CIM based on NV devices is discussed, and logic and arithmetic circuit designs using such devices, and how they enable such architectures, are covered; silicon measurement data are shown to demonstrate the CIM concept. The strong dependence of the appropriate CIM architecture and its building blocks on the application domain, as well as the huge potential of CIM (in realizing order-of-magnitude improvements in computing and energy efficiency), are illustrated using several case studies. Finally, research directions in different aspects of computation-in-memory are highlighted.
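
To give a flavor of the CIM concept referred to in the abstract, the sketch below models the analog matrix-vector multiplication that a resistive (e.g., ReRAM) crossbar performs: weights are stored as device conductances, inputs are applied as word-line voltages, and the bit-line currents sum the products via Ohm’s and Kirchhoff’s laws. This is a minimal, idealized illustration written for this summary, not code from the tutorial; the conductance range, the differential weight mapping, and all names (program_crossbar, crossbar_mvm, g_min, g_max) are assumptions made here.

```python
import numpy as np

# Idealized model of one ReRAM crossbar tile computing y = W @ x "in memory".
# Assumptions (not from the talk): signed weights are mapped linearly onto a
# differential pair of conductances per cell, between g_min and g_max.

g_min, g_max = 1e-6, 1e-4  # assumed device conductance range (siemens)

def program_crossbar(weights):
    """Map a weight matrix onto per-cell conductances (g_pos/g_neg pair)."""
    w_max = np.max(np.abs(weights))
    scale = (g_max - g_min) / (w_max if w_max > 0 else 1.0)
    g_pos = g_min + scale * np.clip(weights, 0, None)   # positive part
    g_neg = g_min + scale * np.clip(-weights, 0, None)  # negative part
    return g_pos, g_neg, scale

def crossbar_mvm(g_pos, g_neg, scale, x_volts):
    """Bit-line currents sum I = G @ V (Kirchhoff); the differential read-out
    recovers the signed result, rescaled back to the weight domain."""
    i_pos = g_pos @ x_volts
    i_neg = g_neg @ x_volts
    return (i_pos - i_neg) / scale

# Usage: a 4x3 weight matrix and a 3-element input vector.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

g_pos, g_neg, scale = program_crossbar(W)
y_cim = crossbar_mvm(g_pos, g_neg, scale, x)
print(np.allclose(y_cim, W @ x))  # True: the crossbar reproduces the MVM
```

In a real NV-device crossbar the same multiply-accumulate happens in the analog domain without moving the weights, which is the source of the energy-efficiency gains the tutorial discusses; device non-idealities and ADC/DAC overheads, omitted here, are part of the design and test challenges it covers.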

Description