Presentation
Enhancing LLMs for HDL Code Optimization using Domain Knowledge Injection
Description
Optimizing Hardware Description Language (HDL) code is essential for improving power, performance, and area (PPA) metrics. Despite progress in HDL code generation, challenges remain in RTL optimization. We first introduce RTLOpt, an evaluation dataset of Verilog examples for pipelining and clock gating. Additionally, we propose Mascot, a multi-agent framework that integrates domain knowledge into LLMs for RTL optimization. Mascot employs iterative feedback loops to refine HDL code based on syntax, functionality, and PPA metrics. Empirical results show Mascot improves PPA by 20% for larger LLMs and 10% for smaller ones, establishing it as a foundational approach for HDL code optimization.
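As a rough illustration of the feedback-loop idea described above (not the authors' implementation), the sketch below shows how syntax, functionality, and PPA checks might drive iterative LLM refinement of RTL. Every helper name here (check_syntax, check_equivalence, estimate_ppa, llm_refine) is a hypothetical placeholder standing in for real tools such as a Verilog linter, an equivalence checker, a synthesis flow, and an LLM call.

```python
# Minimal sketch of an iterative feedback loop for RTL optimization.
# All helpers are hypothetical stubs, not part of Mascot or RTLOpt.

def check_syntax(rtl: str) -> tuple[bool, str]:
    """Placeholder: run a Verilog linter and return (ok, message)."""
    return True, ""

def check_equivalence(rtl: str, reference: str) -> tuple[bool, str]:
    """Placeholder: formally compare candidate RTL against the reference."""
    return True, ""

def estimate_ppa(rtl: str) -> dict:
    """Placeholder: synthesize and report power/delay/area numbers."""
    return {"power": 1.0, "delay": 1.0, "area": 1.0}

def llm_refine(rtl: str, feedback: str) -> str:
    """Placeholder: ask an LLM to rewrite the RTL given textual feedback."""
    return rtl

def optimize(reference_rtl: str, max_iters: int = 5) -> str:
    """Iteratively refine RTL: fix syntax, preserve function, improve PPA."""
    candidate = reference_rtl
    baseline = estimate_ppa(reference_rtl)
    for _ in range(max_iters):
        ok, msg = check_syntax(candidate)
        if not ok:
            candidate = llm_refine(candidate, f"Fix syntax error: {msg}")
            continue
        ok, msg = check_equivalence(candidate, reference_rtl)
        if not ok:
            candidate = llm_refine(candidate, f"Restore functionality: {msg}")
            continue
        ppa = estimate_ppa(candidate)
        if all(ppa[k] <= baseline[k] for k in baseline):
            break  # candidate meets or beats the baseline on every metric
        candidate = llm_refine(candidate, f"Improve PPA, current: {ppa}")
    return candidate
```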
Event Type
Networking
Work-in-Progress Poster
Time
Monday, June 23, 6:00pm - 7:00pm PDT
Location
Level 2 Lobby