First International Workshop on Synergizing AI and Circuit-System Simulation
Description

Hardware tape-outs are prohibitively expensive and time-consuming, making circuit-and-system (CAS) simulators crucial for verifying designs efficiently and cost-effectively prior to fabrication. An extensive array of simulators exists today, tailored to various CAS applications: Verilog simulators for digital integrated circuits (ICs), SPICE-based simulators for analog ICs, Verilog-AMS simulators for mixed-signal systems, and electromagnetic simulators for high-frequency circuits and antennas. Despite decades of development and the high degree of maturity CAS simulators have achieved, the recent surge of artificial intelligence (AI) is rekindling interest from both software and hardware perspectives. On the hardware front, the exceptional parallelism of GPUs can be harnessed to expedite CAS simulations, as in GPU-accelerated SPICE and logic-gate simulation. On the software side, deep learning (DL) algorithms are being integrated into CAS simulators as surrogate models or sources of initial guesses, reducing computational workloads and improving efficiency.

Conversely, the principles of CAS simulation are catalyzing novel AI models. One prominent example is the use of ordinary differential equations (ODEs), which have long been a cornerstone of time-domain analog circuit simulation in SPICE, with the adjoint method used for gradient computation. In the DL community, these techniques have evolved into Neural ODEs, a class of models that parameterize ODE dynamics with neural networks. Neural ODEs have proven especially effective for time-series forecasting and are closely linked to the development of generative diffusion models. Similarly, state-space models (SSMs), once the bedrock of linear time-invariant systems, now underpin architectures such as Mamba, designed for efficient natural language processing.
Another notable adaptation of classical circuit principles in modern AI is Kirchhoff's current law (KCL), which has been leveraged to construct analog neural networks, such as memristor crossbar arrays and KirchhoffNet. Furthermore, Fourier transforms, widely used in frequency-domain CAS simulations for signal processing, have been reimagined as neural operators. This adaptation has led to breakthroughs in AI-driven scientific applications, such as weather forecasting.
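The Neural ODE connection above can be made concrete in a few lines: both a SPICE transient analysis and a Neural ODE solver time-step a system dx/dt = f(x); the difference is only whether f comes from device equations or from learned parameters. The sketch below uses forward Euler and a tiny random-weight MLP as an illustrative stand-in for a trained dynamics network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer MLP parameterizing the dynamics f(x; theta),
# playing the role that device equations play in a SPICE transient run.
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def f(x):
    """Learned right-hand side dx/dt = f(x)."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def odeint_euler(x0, t0, t1, steps=100):
    """Forward-Euler integration: the simplest of the time-stepping
    schemes shared by circuit simulators and Neural ODE solvers."""
    x, h = x0.astype(float), (t1 - t0) / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

x1 = odeint_euler(np.array([1.0, 0.0]), 0.0, 1.0)
print(x1.shape)  # state keeps its dimension: (2,)
```

In practice both communities use higher-order and implicit integrators; the adjoint method mentioned above backpropagates through exactly this kind of solve without storing every intermediate step.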

The similarities between CAS simulation and AI are profound, yet no dedicated platform exists for researchers, engineers, and practitioners to discuss this interdisciplinary topic. Recognizing this critical need, the First International Workshop on Synergizing AI and Circuit-System Simulation aims to bring together experts to explore innovative methodologies that leverage the synergies between these fields. The workshop will provide a platform to discuss recent advancements and foster interdisciplinary collaboration.

The workshop features five talks, each scheduled for 45 minutes.

Title: GPU Accelerated Simulation: From RTL to Gate-Level, From Opportunities to Success
Contributors: Yanqing Zhang, Mark (Haoxing) Ren, Nvidia
Abstract: In this talk, we will present a brief history of accelerated simulation and motivate why GPUs are an attractive platform for accelerating this critically important EDA application. We will go through several simulation abstraction levels: RTL, gate-level, and re-simulation, along with the unique challenges each poses when accelerated. Next, we go into a detailed discussion of recent research that aims to attack these challenges, centered around three projects: GEM (GPU-accelerated RTL simulation), GL0AM (GPU-accelerated gate-level simulation), and GATSPI (GPU-accelerated re-simulation). Finally, we provide some analysis and insight into where the remaining opportunities for improvement and research lie (and why), and which challenges have been successfully conquered.
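The core pattern GPU gate-level simulators exploit can be sketched without any GPU code: levelize the netlist topologically so that all gates within a level are mutually independent and can be evaluated in parallel. The toy netlist and gate encoding below are purely illustrative, not the data layout of any of the tools named in the talk.

```python
# Toy combinational netlist: net id -> (gate type, fanin net ids).
# GPU simulators levelize the netlist so every gate within a level has no
# dependence on the others and can be evaluated concurrently.
gates = {
    2: ("AND", (0, 1)),
    3: ("NOT", (0,)),
    4: ("OR",  (2, 3)),
}
# Gates 2 and 3 depend only on primary inputs 0 and 1, so they share a level.
levels = [[2, 3], [4]]

def simulate(primary_inputs):
    vals = dict(primary_inputs)          # net id -> logic value
    for level in levels:                 # levels run sequentially...
        for g in level:                  # ...gates within a level are independent
            op, fanin = gates[g]
            a = [vals[i] for i in fanin]
            vals[g] = {"AND": all, "OR": any}.get(op, lambda v: not v[0])(a)
    return vals

out = simulate({0: True, 1: True})
print(out[4])  # OR(AND(1,1), NOT(1)) = OR(1,0) = True
```

On a GPU, the inner loop over a level becomes one kernel launch; the sequential outer loop over levels is the irreducible dependency chain that makes deep logic cones hard to accelerate.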

Title: AI on functions
Contributors: Kamyar Azizzadenesheli, Nvidia
Abstract: Artificial intelligence is rapidly advancing, with neural networks powering breakthroughs in computer vision and natural language processing. Yet, many scientific and engineering challenges—such as material science, climate modeling, and quantum chemistry—rely on data that are not words or images, but functions. Traditional neural networks are not equipped to handle these functional data.
To overcome this limitation, we introduce neural operators, a new paradigm in AI that generalizes neural networks to learn mappings between function spaces. Neural operators enable AI to process and reason about functional data directly, opening new frontiers for scientific discovery and technological innovation across diverse disciplines.
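A minimal sketch of the idea, in the spirit of a Fourier neural operator layer: transform the sampled function to the frequency domain, apply learned weights to a truncated set of low-frequency modes, and transform back. The random weights stand in for trained parameters; the point is that the layer acts on a function sampled on a grid, not on a fixed-size feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(u, weights, modes=8):
    """One spectral-convolution layer in the spirit of a Fourier neural
    operator: FFT -> scale the lowest `modes` frequencies by learned
    weights -> inverse FFT. `weights` is an illustrative stand-in for
    trained parameters."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights[:modes]  # act only on low modes
    return np.fft.irfft(out_hat, n=len(u))

n = 64
u = np.sin(2 * np.pi * np.arange(n) / n)   # an input *function* sampled on a grid
w = rng.normal(size=n // 2 + 1) + 1j * rng.normal(size=n // 2 + 1)
v = fourier_layer(u, w)
print(v.shape)  # output lives on the same grid: (64,)
```

Because the learned weights act on frequency modes rather than grid points, the same parameters can be applied to the function sampled at a different resolution, which is what distinguishes operator learning from an ordinary convolutional layer.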

Title: Machine Learning for EDA, or EDA for Machine Learning?
Contributors: Zheng Zhang, University of California at Santa Barbara
Abstract: The rapid advancement of machine learning (especially deep learning) in the past decade has impacted, both positively and negatively, many research fields. Driven by the great success of machine learning in image and speech domains, there has been increasing interest in “Machine Learning for EDA”. In the first part of the talk, I will explain the main challenge of data sparsity when applying existing machine learning techniques to EDA. Then I will show how data-efficient scientific machine learning techniques, specifically uncertainty quantification and physics-constrained operator learning, can be utilized to build high-fidelity surrogate models for variability analysis and for 3D-IC thermal analysis, respectively. These techniques can greatly reduce the number of required device/circuit simulation data samples. Another important but largely overlooked direction is “EDA for Machine Learning”. Five decades of EDA research have produced a huge body of solid theory and efficient algorithms for analyzing, modeling, and optimizing complex electronic systems. Many of these white-box EDA ideas may be leveraged to solve black-box AI problems. In the second part of the talk, I will show how the self-healing idea and compact modeling idea from EDA can be utilized to improve the trustworthiness and sustainability of deep learning models (including large language models).
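The surrogate-modeling idea at the heart of the first part of the talk can be illustrated with a deliberately tiny example. The talk's actual techniques (uncertainty quantification, physics-constrained operator learning) are far more sophisticated; here an analytic function stands in for an expensive device/circuit simulator, and a cheap polynomial fit on a handful of samples stands in for the surrogate used in variability analysis.

```python
import numpy as np

# Stand-in for an expensive device/circuit simulation: one process
# parameter in, one performance metric out. Purely illustrative.
def simulate(p):
    return np.sin(3 * p) + 0.5 * p**2

# Only a handful of simulation samples -- the data-sparsity regime the
# talk describes. Fit a cheap polynomial surrogate to them.
p_train = np.linspace(-1, 1, 7)
y_train = simulate(p_train)
surrogate = np.poly1d(np.polyfit(p_train, y_train, deg=4))

# Variability analysis (Monte Carlo over process variation) now queries
# the surrogate instead of the simulator: 10,000 evaluations, 7 simulations.
p_mc = np.random.default_rng(0).normal(0.0, 0.3, 10_000)
metric_mean = surrogate(p_mc).mean()
print(round(metric_mean, 3))
```

The economics are the whole point: the Monte Carlo loop costs thousands of evaluations, but only seven hit the "simulator"; data-efficient surrogates push that sample count down while controlling the approximation error.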

Title: Optimization Meets Circuit Simulation
Contributors: Aayushya Agarwal, Larry Pileggi, Carnegie Mellon University
Abstract: Optimization is central to the design and analysis of modern engineering systems. But as systems scale in complexity, traditional optimization tools, often rooted in purely mathematical representations, can struggle to reliably find feasible solutions. In this talk, we explore a new approach that bridges mathematical optimization with circuit simulation. This approach maps optimization problems onto analog circuits, where optimization components are modeled as equivalent circuit devices connected through a network. This reframes the development of optimization algorithms as the design and simulation of circuits, allowing us to leverage principles from linear networks, nonlinear device physics, and solution techniques from SPICE and its many derivatives. The result is a class of physics-inspired methods tailored to the nonlinearities and structure of each optimization problem that would be far less intuitive without the view through a circuit-model lens. We demonstrate the efficacy of the equivalent-circuit methods for real-world applications, including training machine-learning models and optimizing power grids.
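One way to see the optimization-as-circuit correspondence: gradient flow dx/dt = -∇f(x) behaves like an RC circuit relaxing to equilibrium, and time-stepping it is a transient analysis whose steady state is the minimizer. The sketch below is a generic illustration of that correspondence, not the talk's specific equivalent-circuit construction.

```python
import numpy as np

# Toy quadratic objective f(x) = 0.5 x'Qx - b'x standing in for an
# optimization problem.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(x):
    return Q @ x - b  # gradient of f

# Gradient flow dx/dt = -grad(x) viewed as a circuit transient: node
# "voltages" x relax until no "current" (gradient) flows. Explicit Euler
# time-stepping below mirrors a simple transient solve.
x, h = np.zeros(2), 0.05
for _ in range(2000):
    x = x - h * grad(x)

x_star = np.linalg.solve(Q, b)        # closed-form minimizer for comparison
print(np.round(x, 4), np.round(x_star, 4))
```

The equivalent-circuit view pays off precisely where this toy does not: on nonlinear, constrained problems, circuit-simulation machinery (implicit integration, homotopy, device-level limiting) supplies robust steps where plain gradient iterations stall.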

Title: Oscillator Ising Machines: Principles to Working Hardware
Contributors: Jaijeet Roychowdhury, University of California at Berkeley
Abstract: Modern society has become increasingly reliant on rapid and routine solution of hard discrete optimization problems. Over the past decade, fascinating analog hardware approaches have arisen that combine principles of physics and computer science with optical, electronic, and quantum engineering to solve combinatorial optimization problems in new ways---these have come to be known as Ising machines. Such approaches leverage analog dynamics and physics to find good solutions to discrete optimization problems, potentially with advantages over traditional algorithms. Underlying these approaches is the Ising model, a simple but powerful graph formulation with deep historical roots in physics. About eight years ago, we discovered that networks of analog electronic oscillators can solve Ising problems “naturally”. This talk will cover the principles and practical development of these oscillator Ising machines (OIMs). We will touch upon specialized EDA tools for oscillator-based systems and note the role of novel nanodevices. Applied to the MU-MIMO detection problem in modern wireless communications, OIMs yield near-optimal symbol-error rates (SERs), improving on the industrial state of the art by 20x in some scenarios.
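The OIM principle can be sketched with Kuramoto-style phase dynamics: coupled oscillator phases descend an energy landscape aligned with the Ising objective, while a second-harmonic injection-locking term pushes each phase toward 0 or π so it reads out as a binary spin. The coefficients and problem instance below are illustrative, not taken from any specific OIM hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny Ising instance: an antiferromagnetic triangle (frustrated), with
# energy E(s) = -sum_{i<j} J_ij s_i s_j over spins s_i in {-1, +1}.
J = np.array([[ 0., -1., -1.],
              [-1.,  0., -1.],
              [-1., -1.,  0.]])

def ising_energy(s):
    return -0.5 * s @ J @ s

# Kuramoto-style phase dynamics plus a sin(2*theta) injection-locking
# term that binarizes each phase toward 0 or pi. Coefficients are
# illustrative choices, not hardware-calibrated values.
theta = rng.uniform(0, 2 * np.pi, 3)
h, K, lam = 0.01, 1.0, 0.5
for _ in range(5000):
    coupling = np.array([np.sum(J[i] * np.sin(theta[i] - theta))
                         for i in range(3)])
    theta = theta + h * (-K * coupling - lam * np.sin(2 * theta))

spins = np.sign(np.cos(theta))       # read phases out as Ising spins
print(spins, ising_energy(spins))
```

In the frustrated triangle no assignment satisfies all three couplings, so the best achievable energy is -1; the network's analog relaxation settles into one of those ground states rather than the all-equal configuration (energy +3).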