Presentation
SystemVerilog Assertion Syntax Correction with Knowledge Distillation: Toward LLM-Guided Automated Hardware Verification
Description
Automating syntax correction in SystemVerilog assertions (SVAs) is essential to streamline hardware verification workflows, reducing the need for laborious, time-consuming manual error correction. However, using proprietary large language models (LLMs) for this task poses challenges due to high costs, privacy concerns, limited access, and slow inference times. Additionally, a substantial performance gap exists between large proprietary models and smaller open-source alternatives in correcting syntax errors in SVAs. To address these challenges, we propose a knowledge distillation (KD)-based approach to transfer the syntax correction capabilities of a larger model to a smaller, open-source model. Our fine-tuned model achieves success rates of 97.77% and 95.69% across two benchmarks, proving it to be a precise, fast, cost-effective, and secure alternative to large proprietary models.
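The abstract does not specify whether the distillation is logit-based or sequence-level. One standard formulation (Hinton et al.'s soft-label KD) trains the student to match the teacher's temperature-softened output distribution. The sketch below illustrates that loss in pure Python; the logits, vocabulary, and temperature are hypothetical and are not taken from the presented work:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature flattens the
    # distribution, exposing the teacher's "dark knowledge" over wrong tokens.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # scaled by T^2 so gradients keep a consistent magnitude across T.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Illustrative next-token logits over a toy 3-token vocabulary.
teacher = [4.0, 1.0, 0.5]
student = [3.0, 1.5, 0.5]
loss = distillation_loss(teacher, student)
print(f"soft-label KD loss: {loss:.4f}")
```

In practice this term is combined with the ordinary cross-entropy loss on ground-truth corrected SVAs; for LLM fine-tuning, sequence-level distillation (training the student directly on teacher-generated corrections) is an equally common alternative.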
Event Type: Networking, Work-in-Progress Poster
Time: Monday, June 23, 6:00pm - 7:00pm PDT
Location: Level 2 Lobby