Presentation

Unveiling the Core Truth: Advanced Glitch Power Analysis and Optimization Using Statistical Methodology
Description

Glitch power is a critical concern in digital design. When the signal timing paths in a combinational circuit are imbalanced, race conditions arise, causing glitches along those paths. Research indicates that glitch power can account for up to 40% of total power consumption, posing a significant challenge for designs with large-scale combinational logic, such as CPUs, GPUs, and AI processors.
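As a minimal illustration of the mechanism (not taken from the paper), consider a 2-input XOR gate whose inputs toggle simultaneously but arrive through paths of different delay. Under a simple transport-delay model, the output emits a transient pulse whose width equals the path imbalance; the function below is a hypothetical sketch of that relationship:

```python
# Hypothetical sketch: a glitch at a 2-input XOR caused by imbalanced path delays.
# Under a pure transport-delay model, if both inputs switch at t=0 but reach the
# gate at t=d_a and t=d_b, the XOR output pulses for |d_a - d_b| time units
# before settling, dissipating dynamic power on every such toggle.

def xor_glitch_width(d_a, d_b):
    """Width of the transient output pulse for a simultaneous input toggle."""
    return abs(d_a - d_b)

print(xor_glitch_width(100, 100))  # balanced paths: no glitch -> 0
print(xor_glitch_width(100, 400))  # imbalanced paths: a 300-unit glitch -> 300
```

Every such pulse charges and discharges downstream capacitance without contributing to the computed result, which is why imbalanced combinational paths translate directly into wasted power.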

To optimize glitch power, two key factors must be addressed: the magnitude of glitch power within the design and its distribution across functional blocks. While existing tools can identify glitches and measure their power impact using accurate delay data and gate-level netlists, this analysis typically occurs during the placement and routing (P&R) stage, which is too late for effective design optimization. Current EDA flows attempt to address this by estimating wire delays during the RTL stage using P&R engines. However, this approach is time-consuming, requires RTL designers to have in-depth P&R knowledge, and may still yield discrepancies when compared to the final tape-out netlist.

In this paper, we propose an alternative approach that addresses these challenges at the RTL design stage. Our methodology leverages a uniform delay-aware engine to estimate glitch power caused by imbalances in combinational logic. Additionally, a statistical scaling factor is applied to account for actual-delay effects. We validated this approach across eight different design blocks and three technology corners. The results demonstrate less than 10% variance in total power compared to gate-level netlist power, with a well-matched glitch power distribution.

This level of accuracy is sufficient for identifying glitch risks and optimizing critical combinational logic blocks. Furthermore, our solution is faster and more accessible for RTL designers, eliminating the need for extensive P&R expertise.