Introduction to Foundation AI Model and Its EDA Applications
Description
The objectives of this tutorial are to provide a solid foundation in understanding large language models and their applications, equip participants with current AI knowledge so they can apply self-supervised learning techniques effectively in their own target applications, demonstrate the integration of multimodal data for enhanced AI capabilities, and discuss strategies for improving the efficiency of large-scale models. The tutorial is designed for researchers, industry practitioners, and students interested in the latest advances in AI model development and deployment. Our target audience comes from a range of backgrounds, including but not limited to: EDA researchers and engineers, especially those interested in AI for EDA; computer architecture researchers and engineers, especially those working on AI accelerator design; and algorithm researchers and engineers, especially those working on AI algorithms, applications, and products. The tutorial will cover basic large language model (LLM) techniques, including the transformer architecture and retrieval-augmented generation (RAG); self-supervised pre-training techniques such as contrastive learning; multimodal representation learning; the efficiency of large foundation models; and applications of foundation AI models in EDA.
Section 1: Basic large language model (LLM) techniques (Zhiyao Xie)
Section 2: Self-supervised pre-training techniques (Wei Wen)
Section 3: Multimodal representation techniques (Wei Wen)
Section 4: Efficiency of large foundation models (Ang Li)
Section 5: Application of foundation models in EDA (Zhiyao Xie)