Theme 3: Circuits and Architectures for Highly Energy-Efficient Computing

Theme Leader: Naresh Shanbhag

Today’s AI workloads are built on deep neural networks and other machine learning algorithms that are computationally intensive and must process large volumes of data. When implemented on von Neumann architectures such as Graphics Processing Units (GPUs), the resulting data movement between processor and memory dominates latency and energy costs, a consequence of the so-called “memory wall”: an off-chip memory access costs orders of magnitude more energy than the arithmetic operation it feeds. To overcome these drawbacks, we are seeking radically different circuits, architectures, and cross-layer solutions that increase compute density and energy efficiency by more than 100x over the state of the art.
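
To make the data-movement argument concrete, the back-of-envelope sketch below compares compute energy with weight-fetch energy for a single fully connected layer. The per-operation energy figures (E_MAC_PJ, E_DRAM_PJ) are rough, illustrative assumptions in the spirit of commonly cited per-access estimates, not measurements.

    # Back-of-envelope illustration of the memory wall: the per-operation
    # energies below are rough, illustrative assumptions in picojoules
    # (off-chip DRAM access is commonly cited as two to three orders of
    # magnitude costlier than an on-chip arithmetic operation).
    E_MAC_PJ = 1.0      # assumed energy of one multiply-accumulate
    E_DRAM_PJ = 640.0   # assumed energy of one off-chip DRAM access

    # Fully connected layer: y = W @ x with W of size (out_dim, in_dim)
    out_dim, in_dim = 1024, 1024
    macs = out_dim * in_dim            # arithmetic work
    weight_fetches = out_dim * in_dim  # every weight streamed from DRAM once

    compute_energy = macs * E_MAC_PJ
    movement_energy = weight_fetches * E_DRAM_PJ
    print(f"compute : {compute_energy / 1e6:.2f} uJ")
    print(f"movement: {movement_energy / 1e6:.2f} uJ")
    print(f"movement/compute ratio: {movement_energy / compute_energy:.0f}x")

Under these assumptions, fetching the weights costs hundreds of times more energy than computing with them, which is why architectures that keep computation close to the data are attractive.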

Examples of new architectural approaches include in-memory computing, analog and brain-inspired computing, and stochastic computing platforms. Ideas that leverage co-design, in which investigations at different levels of the design hierarchy inform and guide one another to yield system-level solutions of maximum benefit, are highly encouraged. Circuits and architectures based on heterogeneous materials and devices, as well as those that exploit emergent physics in non-silicon substrates, are of special interest.
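
As one concrete illustration of the first approach, the sketch below is a minimal behavioral model of an analog in-memory matrix-vector multiply, in which weights stored as crossbar conductances produce bitline currents proportional to dot products (Ohm’s law plus current summation), so the multiply-accumulate happens where the data lives. The function names (quantize, crossbar_mvm) and the resolution and noise parameters are hypothetical choices for illustration, not a reference to any specific hardware.

    import numpy as np

    # Behavioral sketch of an analog in-memory matrix-vector multiply.
    # All parameter values are illustrative assumptions, not measured data.
    rng = np.random.default_rng(0)

    def quantize(x, bits, x_max):
        """Uniform quantizer modeling a DAC/ADC of the given resolution."""
        levels = 2 ** bits - 1
        x = np.clip(x, -x_max, x_max)
        return np.round((x + x_max) / (2 * x_max) * levels) / levels * (2 * x_max) - x_max

    def crossbar_mvm(W, x, dac_bits=8, adc_bits=8, sigma=0.01):
        """y = W @ x computed 'in memory' with quantized I/O and conductance noise."""
        G = W + sigma * rng.standard_normal(W.shape)  # programmed conductances (noisy)
        v = quantize(x, dac_bits, x_max=1.0)          # input voltages from a DAC
        i = G @ v                                     # currents sum along bitlines
        return quantize(i, adc_bits, x_max=np.abs(W).sum(axis=1).max())  # ADC readout

    W = rng.uniform(-1, 1, size=(4, 16))
    x = rng.uniform(-1, 1, size=16)
    print("ideal  :", W @ x)
    print("analog :", crossbar_mvm(W, x))

The model makes the accuracy side of the trade-off visible: tightening dac_bits, adc_bits, or sigma shows how analog non-idealities perturb the result relative to the ideal digital product.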