Presentation

Machine Learning-Accelerated, Distributed-Computing Place-and-Route Framework for High-Performance CPU Designs
Description

Digital design optimization is a crucial aspect of modern design flows, particularly in electronic design automation (EDA), as designs become increasingly complex. By leveraging ML techniques and tools, digital design optimization can be significantly improved, yielding better power, performance, and area (PPA). However, ML-based automation flows have their own challenges: ML algorithms require large, high-quality datasets to train effectively, which can introduce arbitrariness and uncertainty and may require more iterations to converge. These algorithms also vary design parameters in the implementation tools, which can produce diverse results and force trade-offs between power and performance. In this paper, we present an ML-accelerated implementation framework that trains models to predict optimal design parameters and then applies ensemble methods to combine models trained from different initializations, improving overall performance and robustness and ultimately leading to more efficient and effective machine learning workflows. Using the Cadence ML tool Cerebrus integrated with implementation tools such as Genus, Innovus, and Tempus, we achieved a 5% power gain, a 20% timing gain, and a 30% improvement in overall design-cycle time through a one-time investment in distributed computing, which increased the exploration space by a factor of 10, boosting PPA and productivity with less manual intervention.
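The ensemble idea mentioned above can be illustrated with a minimal sketch: several regressors are trained from different random initializations on the same data, and their predictions are averaged to reduce the variance introduced by any single initialization. Everything here is hypothetical and illustrative (the synthetic "tool parameter" features, the `train_model` helper, and the linear target standing in for a PPA metric); it is not the Cerebrus API or the actual framework.

```python
import numpy as np

# Synthetic data: 4 made-up tool parameters predicting a PPA-like metric
# (e.g., a timing-slack proxy). Purely illustrative, not real EDA data.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))
true_w = np.array([0.5, -1.2, 0.8, 0.3])
y = X @ true_w + 0.05 * rng.normal(size=200)

def train_model(X, y, seed, lr=0.1, steps=500):
    """Fit linear weights by gradient descent from a seed-dependent
    random initialization (the 'different initializations' of the text)."""
    r = np.random.default_rng(seed)
    w = r.normal(size=X.shape[1])          # different starting point per seed
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Ensemble: average the predictions of models started from different seeds.
weights = [train_model(X, y, seed) for seed in range(5)]
ensemble_pred = np.mean([X @ w for w in weights], axis=0)
mse = float(np.mean((ensemble_pred - y) ** 2))
print(f"ensemble MSE: {mse:.4f}")
```

Averaging across initializations smooths out seed-dependent outliers, which is the same rationale the framework uses when combining differently initialized parameter-prediction models.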