Feed Your Mind - Practical Aspects of Machine Learning Circuits and Systems

Geographic Location
200 W. Cezar Chavez Austin, Texas United States 78701 Building: Silicon Labs
IEEE Region
Region 05 (Southwestern U.S.)
The IEEE Central Texas CASS & SSCS Joint Chapter,
IEEE Circuits and Systems Society Outreach Program,
Silicon Laboratories



What is behind the buzzwords Machine Learning and Artificial Intelligence? In this lecture, Prof. H. Li and A. Sanyal will present practical aspects of machine learning as applied to circuits and systems solutions for everyday life.

AI Models for Edge Computing: Hardware-aware Optimizations for Efficiency

Abstract: As artificial intelligence (AI) transforms various industries, state-of-the-art models have exploded in size and capability. The growth in AI model complexity is rapidly outstripping hardware evolution, so deploying these models on edge devices remains challenging. To enable advanced AI locally, models must be optimized to fit within hardware constraints. In this presentation, we will first discuss how computing hardware design affects the effectiveness of commonly used AI model efficiency optimizations, including techniques such as quantization and pruning. We will then present several methods, such as hardware-aware quantization and structured pruning, that demonstrate the significance of software/hardware co-design. We will also show how these methods can be understood through a straightforward theoretical framework, which facilitates their integration into practical applications and their extension to distributed edge computing. We will conclude by sharing our insights and vision for achieving efficient and robust AI at the edge.
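The talk's specific methods are not detailed in the abstract; as an illustrative sketch of the two named baseline techniques, assuming plain NumPy and synthetic weights, post-training symmetric int8 quantization and unstructured magnitude pruning might look like:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of weights to int8.

    Illustrative sketch only; hardware-aware schemes as discussed in
    the talk would choose scales per channel and account for the
    target accelerator's datapath.
    """
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def prune_magnitude(w, sparsity=0.5):
    """Unstructured magnitude pruning: zero out the smallest weights.

    Structured pruning (mentioned in the talk) would instead remove
    whole rows/channels so the hardware sees dense, smaller tensors.
    """
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_deq = q.astype(np.float32) * scale     # dequantize to inspect error
w_sparse = prune_magnitude(w, sparsity=0.5)
```

Dequantizing back to float shows the worst-case rounding error is bounded by half the quantization step (scale/2), which is one simple lens on why model accuracy depends on the weight distribution the hardware's number format must cover.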

Health Management Using Intelligent Wearables with Mixed-Signal AI

Abstract: As medical wearables become more widely adopted for at-home diagnosis, early diagnosis, and health surveillance, the volume of data produced by these devices is expected to reach thousands of petabytes per month. Transmitting this volume of data to the cloud for processing will emerge as a communication bottleneck and increase the latency of decisions. Naively transmitting all data generated by a wearable medical device is also costly in power and energy: the transmitter is usually the largest energy consumer in a sensor node, using at least 10-20x more energy than sensing. Key to addressing this data deluge is increasing the capability of wearable devices to process information locally and perform on-device inference, for example by embedding AI capabilities that extract key information from the sensor data. A balance must be struck between what can be processed locally at low power and how much data the device should communicate (to the cloud, for example). The barrier to this approach lies in the computational complexity of AI algorithms, which makes it challenging to fit AI models on wearables with limited resources. Some of the answers may lie in returning to the early days of signal processing in silicon: developing analog circuit techniques for AI, which will require collaborative innovation in both AI model development and analog circuit design. In this talk, I will present our research on developing analog AI circuits and their demonstration with patient data, with use cases from cardiovascular health monitoring and sepsis onset detection.
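The local-versus-transmit trade-off can be illustrated with back-of-the-envelope arithmetic. All constants below are assumptions chosen only to match the stated 10-20x transmit/sense energy ratio, not measurements from the talk:

```python
# Illustrative energy budget for a wearable sensor node.
# Numbers are assumptions for the sketch, in arbitrary energy units.
E_SENSE_PER_SAMPLE = 1.0    # cost to acquire one sample
E_TX_PER_SAMPLE = 15.0      # radio: ~10-20x the sensing cost
E_INFER_PER_SAMPLE = 2.0    # assumed cost of on-device inference

def energy_stream_all(n_samples):
    """Sense everything and transmit every raw sample to the cloud."""
    return n_samples * (E_SENSE_PER_SAMPLE + E_TX_PER_SAMPLE)

def energy_infer_locally(n_samples, event_rate=0.01):
    """Run inference on-device; transmit only flagged events (1%)."""
    return n_samples * (E_SENSE_PER_SAMPLE + E_INFER_PER_SAMPLE
                        + event_rate * E_TX_PER_SAMPLE)

n = 1_000_000
saving = energy_stream_all(n) / energy_infer_locally(n)
print(saving)  # roughly a 5x saving under these assumed numbers
```

Even with inference assumed twice as expensive as sensing, transmitting only rare events rather than the raw stream cuts total energy by about 5x here, which is the motivation for pushing AI capability onto the device itself.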