Webinar: Theme 3 - Circuits and Architectures for Highly Energy-Efficient Computing

3 pm CT, March 23, 2022
Zoom

 


Topic 1: Scalable Resistive Memory Based Processing Using Memory

Speakers: Nam Sung Kim and Saugata Ghose 

Several resistive memories support processing using memory (PUM), in which memory cells interact with one another to perform bitwise primitive functions in situ. However, the parallelism that PUM exploits demands a large amount of current flowing through bit-line and/or word-line interconnects, which limits array sizes and thus performance. In addition, the growing size of AI/ML models requires scaling beyond a single PUM array. This talk will focus on our initial work, dubbed RACER, a cost-effective PUM architecture built from small arrays. RACER uses bit-pipelining, which pipelines bit-serial w-bit computation across w small tiles. We find that RACER with BEOL-integrated NOR-capable ReRAM cells provides 107x the performance of a 16-core CPU with 189x energy savings. We will also discuss future cross-stack research opportunities within the ASAP Center to make RACER more scalable, considering analog- and digital-based approximate computing as well as multi-chip dataflow accelerator architectures and chip-to-chip communication interfaces.
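To make the bit-serial, NOR-based computation concrete, here is a minimal sketch of our own (not taken from the talk or from RACER itself) showing how w-bit addition can be composed entirely from NOR primitives, with each bit position handled by a separate "tile" as in bit-pipelining:

```python
# Illustrative sketch only: bit-serial addition built from NOR primitives,
# one bit position per "tile", to show how NOR-capable cells compose
# richer arithmetic. Not the RACER implementation.

def NOR(a: int, b: int) -> int:
    return 1 - (a | b)

def NOT(a: int) -> int:
    return NOR(a, a)

def OR(a: int, b: int) -> int:
    return NOT(NOR(a, b))

def AND(a: int, b: int) -> int:
    return NOR(NOT(a), NOT(b))

def XOR(a: int, b: int) -> int:
    return AND(OR(a, b), NOT(AND(a, b)))

def bit_serial_add(a_bits, b_bits):
    """Add two little-endian bit vectors; bit position i is handled by 'tile' i."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = XOR(XOR(a, b), carry)
        carry = OR(AND(a, b), AND(carry, XOR(a, b)))
        out.append(s)
    return out, carry

# Example: 6 + 3 with w = 4 bits (little-endian)
bits, cout = bit_serial_add([0, 1, 1, 0], [1, 1, 0, 0])
assert bits == [1, 0, 0, 1] and cout == 0   # 9
```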

Topic 2: Energy-efficient Computing based on Ferroelectric Devices

Speakers: Wenjuan Zhu and Qing Cao 

A key challenge of current computing systems is the memory bottleneck, in which system performance is limited by the time and power required to access memory rather than by computation itself. To address this problem, we propose to create logic/analog devices with embedded ferroelectric memories and to develop ferroelectric network connectors with tunable weights based on two-dimensional materials and silicon nanomembranes. Furthermore, we will explore neural networks based on 3D monolithic integration of these devices. These neural circuits will be able to support highly parallel, large-scale network computing with low energy consumption and will be able to learn and adapt to new tasks. These devices and circuits will serve as core fabrics in future data-centric computing architectures.
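As an illustration of the idea of storing weights directly in tunable nonvolatile devices, the following sketch (our own simplification; the number of states and weight range are assumptions, not the speakers' design) models a fully connected layer whose weights are programmed onto a limited set of discrete conductance states:

```python
import numpy as np

# Illustrative sketch only: a layer whose weights are stored as a small
# number of discrete, nonvolatile conductance states, mimicking a
# ferroelectric "network connector" with tunable weight.

N_STATES = 32                      # assumed number of programmable states
W_MIN, W_MAX = -1.0, 1.0           # assumed weight range of the device

def program_weights(w_ideal: np.ndarray) -> np.ndarray:
    """Quantize ideal weights onto the device's discrete states."""
    levels = np.linspace(W_MIN, W_MAX, N_STATES)
    idx = np.abs(w_ideal[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def layer_forward(x: np.ndarray, w_prog: np.ndarray) -> np.ndarray:
    """The in-memory layer computes y = W x directly in the array."""
    return w_prog @ x

rng = np.random.default_rng(0)
w = program_weights(rng.normal(scale=0.5, size=(4, 8)))
y = layer_forward(rng.normal(size=8), w)
print(y)   # 4 outputs computed with quantized, nonvolatile weights
```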

Topic 3: Cross-Stack Design of Non-von-Neumann Computers

Speakers: Qing Cao and Saugata Ghose 

Electrochemical random-access memory (ECRAM), whose resistance is tuned by a gate-controlled redox reaction in which metal ions shuttle between the gate electrolyte and the intercalatable channel, shows great promise as an analog nonvolatile memory element in neuromorphic computing architectures, thanks to its low power consumption and near-symmetric weight update in response to pulsed input. In this talk, we will present our latest progress in ECRAM fabrication and our plans to integrate scaled ECRAMs into one-transistor-one-cell arrays. These arrays will be used to perform both the vector-matrix multiplication and the weight update of the back-propagation algorithm, allowing us to assess their performance at the circuit level and benchmark it against simulation results. Co-design spanning the materials, device, and circuit levels will be adopted to improve overall performance.
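For readers unfamiliar with how an analog array can carry out both steps, here is a minimal sketch (our own, under assumed device parameters such as the per-pulse conductance step; not the ASAP implementation) of a crossbar that performs the forward-pass vector-matrix multiplication and an outer-product weight update:

```python
import numpy as np

# Illustrative sketch only: a one-transistor-one-cell crossbar modeled as a
# conductance matrix G. The array performs the forward-pass vector-matrix
# multiplication and an outer-product weight update applied as pulse counts.
# DELTA_G is an assumed per-pulse conductance change, not a measured value.

DELTA_G = 1e-3

class ECRAMCrossbar:
    def __init__(self, rows: int, cols: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.G = rng.uniform(-0.1, 0.1, size=(rows, cols))  # cell conductances

    def vmm(self, x: np.ndarray) -> np.ndarray:
        """Forward pass: column currents sum to y = G^T x."""
        return self.G.T @ x

    def update(self, x: np.ndarray, err: np.ndarray, lr: float = 0.1):
        """Backprop update: outer product of input and error, as pulses."""
        pulses = np.round(lr * np.outer(x, err) / DELTA_G)
        self.G += pulses * DELTA_G

# Example: one training step of a linear model y = G^T x
xb = np.array([1.0, 0.5, -0.2])
target = np.array([0.3, -0.1])
arr = ECRAMCrossbar(rows=3, cols=2)
err = target - arr.vmm(xb)
arr.update(xb, err)
```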