Presentation

Lookup Table-based Multiplication-free All-digital DNN Accelerator Featuring Self-Synchronous Pipeline Accumulation
Description
Deep neural networks (DNNs) are now widely deployed, yet the power consumed by their large-scale matrix computations remains a critical challenge. MADDNESS is a known approach that improves energy efficiency by substituting table lookup operations for matrix multiplication. Prior work has used large analog computing circuits to convert inputs into LUT addresses, which limits area efficiency and computational accuracy. This paper proposes a novel MADDNESS-based all-digital accelerator featuring a self-synchronous pipeline accumulator, yielding compact, energy-efficient, and PVT-invariant computation. Post-layout simulation in a commercial 22nm process shows 2.5x higher energy efficiency (174 TOPS/W) and 5x higher area efficiency (2.01 TOPS/mm2) compared to a conventional accelerator.
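The core idea behind MADDNESS-style acceleration can be sketched in a few lines: inputs are encoded into prototype indices, per-prototype dot products with the weights are precomputed into lookup tables, and inference reduces to table reads and additions. Below is a minimal NumPy sketch under simplifying assumptions — it uses nearest-prototype encoding with prototypes sampled from the data, whereas real MADDNESS learns a hash-tree encoder; all sizes and names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: inputs A (N x D), weights B (D x M),
# D split into C codebooks of K prototypes, each spanning S features.
N, D, M = 8, 16, 4
C, K = 4, 16
S = D // C

A = rng.standard_normal((N, D))
B = rng.standard_normal((D, M))

# 1) Prototypes per subspace (stand-in: random sub-rows of A;
#    real MADDNESS learns a tree of cheap splits instead).
protos = np.stack([A[rng.choice(N, K), c * S:(c + 1) * S] for c in range(C)])  # (C, K, S)

# 2) Encode: map each input sub-vector to its nearest prototype index.
codes = np.empty((N, C), dtype=np.int64)
for c in range(C):
    dist = ((A[:, None, c * S:(c + 1) * S] - protos[c][None]) ** 2).sum(-1)  # (N, K)
    codes[:, c] = dist.argmin(1)

# 3) Precompute LUTs: dot product of every prototype with every weight column.
luts = np.einsum('cks,csm->ckm', protos, B.reshape(C, S, M))  # (C, K, M)

# 4) "Multiplication-free" inference: sum of table lookups only.
approx = sum(luts[c, codes[:, c]] for c in range(C))  # (N, M)

# Compare against the exact product to gauge approximation error.
exact = A @ B
err = np.abs(approx - exact).mean() / np.abs(exact).mean()
print(approx.shape, float(err))
```

The LUT build (step 3) still multiplies, but it runs once offline; the per-inference path (steps 2 and 4) needs only comparisons, indexing, and accumulation — which is what the paper's self-synchronous pipeline accumulator implements in digital logic.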
Event Type
Research Manuscript
Time
Tuesday, June 24, 11:30am - 11:45am PDT
Location
3003, Level 3
Topics
Design
Tracks
DES2A: In-memory and Near-memory Computing Circuits