BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250625T183018Z
LOCATION:3001\, Level 3
DTSTART;TZID=America/Los_Angeles:20250624T144500
DTEND;TZID=America/Los_Angeles:20250624T150000
UID:dac_DAC 2025_sess105_RESEARCH1131@linklings.com
SUMMARY:PyraNet: A Multi-Layered Hierarchical Dataset for Verilog
DESCRIPTION:Bardia Nadimi, Ghali Omar Boutaib, and Hao Zheng (University o
 f South Florida)\n\nRecently, there has been a growing interest in leverag
 ing Large Language Models for Verilog code generation. \nHowever, the curr
 ent quality of the generated Verilog code remains suboptimal. \nThis is la
 rgely due to the absence of well-defined, well-organized datasets with hig
 h-quality samples, as well as a lack of innovative fine-tuning methods and
  models specifically trained on Verilog. \nIn this paper, we introduce a n
 ovel open-source dataset and a corresponding fine-tuning technique, which 
 utilizes a multi-layered structure that we refer to as PyraNet. \nOur expe
 riments demonstrate that employing the proposed dataset and fine-tuning ap
 proach leads to a more accurate fine-tuned model, producing syntactically 
 and functionally correct Verilog code.\nThe evaluation results show improv
 ements of up to 32.6% compared to the CodeLlama-7B baseline model and up t
 o 16.7% compared to state-of-the-art models on the VerilogEval evaluation
  platform.\n\nTopics: AI\n\nTracks: AI2: AI/ML Applicati
 on and Infrastructure\n\nSession Chairs: Luis Guerra e Silva (IST Tecnico 
 / ULisboa, INESC-ID) and Yutaka Masuda (Nagoya University)\n\n
END:VEVENT
END:VCALENDAR
