Cadence Cerebrus to Enable Chip Design with ML: PPA Optimization in Hours, not Months
by Dr. Ian Cutress on July 22, 2021 10:45 AM EST

The design of most leading-edge processors and ASICs relies on stages of optimization, with the three key optimization points being Performance, Power, and Area (and sometimes Cost). Once the architecture of a chip is planned, it comes down to designing the silicon of that chip for a given process node technology; however, there are many different ways to lay the design out. Normally this can take a team of engineers several months to get a good result, even with algorithmic tools and simulation, but that role is gradually being taken over by machine learning methods. Cadence today is announcing Cerebrus, its new integrated ML design tool to assist with PPA optimization – production-level silicon is already being made with key partners, and the tool integrates directly into Cadence workflows.
Place and Route: The Next Stage of Automation
The act of finding the best layout for a chip, or even for part of a chip such as a macro or a library, has already been automated for many years – engineers plug in details about the parts of the design, set a variety of parameters, and run overnight simulations to find the best layout. These algorithmic implementations of ‘Place and Route’ have grown very complex over time, but they rely on equations and if/then statements to try to predict the best design. Very often this is a slow process, with the engineering team having to go back, tweak the design, and try again. The candidate designs are then simulated for expected performance and power to find which is best. The software does no ‘learning’: the algorithm is defined by hard and fast rules.
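To make that distinction concrete, a classic placer boils down to a cost function plus a fixed acceptance rule. The sketch below is a deliberately minimal simulated-annealing placer in Python – a toy illustration of the ‘hard and fast rules’ approach, not any EDA vendor’s actual algorithm, with half-perimeter wire length standing in for a full timing/power cost model:

```python
import math
import random

def wire_length(placement, nets):
    """Total half-perimeter wire length: a standard proxy for routability.
    placement maps cell name -> (x, y); each net is a list of cell names."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(placement, nets, steps=10000, t0=100.0):
    """Rule-based optimization: swap two cells, keep the swap if a fixed
    acceptance rule says so. Nothing here ever 'learns' -- the rule is static."""
    cells = list(placement)
    cost = wire_length(placement, nets)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        a, b = random.sample(cells, 2)
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = wire_length(placement, nets)
        # Hard-coded if/then acceptance rule (Metropolis criterion)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:  # undo the swap
            placement[a], placement[b] = placement[b], placement[a]
    return placement, cost
```

Calling `anneal({"a": (0, 0), "b": (3, 4), "c": (1, 2)}, [["a", "b"], ["b", "c"]])` shuffles cell positions until the wire-length cost settles; at no point does the loop change its own rule.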
The advancement of machine learning this decade has put a new twist on traditional Place and Route algorithms. Companies that build EDA (Electronic Design Automation) tools have been researching the best way to integrate machine learning into their algorithms, with the hope that the software alone can understand what it is doing, make iterative improvements, and essentially be left to its own devices to get the best result. Beyond this, it allows for parallel analysis across many systems – one of the main limitations of traditional EDA test and simulation is that it is single-thread limited and doesn’t scale, whereas ML allows for more parallel testing and simulation.
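That scaling argument can be pictured as a batch of independent candidate evaluations. A hypothetical sketch – the `simulate_ppa` stub and the candidate format are invented for illustration, and a real flow would be dispatching hours-long simulation jobs rather than a toy function:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_ppa(candidate):
    """Stub standing in for a full timing/power simulation of one layout.
    In a real flow this is hours of largely single-threaded EDA runtime."""
    return sum(candidate["parameters"]) ** 0.5  # toy cost value

def evaluate(candidate):
    return candidate["id"], simulate_ppa(candidate)

def evaluate_in_parallel(candidates, workers=8):
    # Each candidate layout is independent, so a batch of trials scales
    # across cores or machines -- the step a serial legacy flow cannot.
    # Call from inside an `if __name__ == "__main__":` guard on platforms
    # that spawn worker processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(evaluate, candidates))
```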
An example of ML-assisted design from Google
According to the EDA companies discussing ML tools, the main benefit of this technology is that it creates a simpler workflow while also producing better processors – a gain almost equivalent to a whole process node. What would take a team of a dozen engineers half a year to reach a good design can be superseded by one or two engineers working for a couple of weeks, ending up with a better PPA than the humans using non-ML methods could ever achieve.
How To Enable Machine Learning on EDA
Today’s announcement is from Cadence, one of the top EDA tool vendors, with the launch of its new Cerebrus technology. Integrated directly into the Cadence toolchain, Cerebrus can work at any level of the design stack, from high-level definitions in SystemC down to standard cells, macros, RTL, and signoff, and it allows an engineer to hand it objects with defined specifications at any level and optimize each one. The automated floorplanning lets the engineer specify optimization points beyond regular PPA, such as wire length, wire delay, power grid distribution, IR drop, IO placement with respect to physical chip boundaries, and other parameters.
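One way to picture ‘optimization points beyond regular PPA’ is as a weighted cost function the engineer configures per block. The metric names and weights below are hypothetical – Cadence has not published Cerebrus’ internal cost model – but they illustrate the general shape of the knob:

```python
# Hypothetical per-candidate metrics, normalized to comparable scales.
# The weights express the engineer's priorities for this particular block.
DEFAULT_WEIGHTS = {
    "timing_slack":  -3.0,   # negative weight: more slack is better
    "total_power":    2.0,
    "area":           1.0,
    "wire_length":    0.5,
    "ir_drop":        1.5,   # power-grid / IR-drop quality
    "io_boundary_violations": 10.0,  # IO placement vs. physical chip edge
}

def candidate_cost(metrics, weights=DEFAULT_WEIGHTS):
    """Scalar score the optimizer minimizes; lower is better."""
    return sum(weights[k] * metrics.get(k, 0.0) for k in weights)
```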
Cadence’s Cerebrus tool uses reinforcement learning for its optimization process – the technology is already in the hands of key customers and in use in chip design, although today’s announcement makes it available to the wider customer base. Cadence states that the machine learning workflow is designed such that it can start from an untrained model and find an optimized point in 50-200 iterations, and within an organization models can be reused if a number of constraints are met (same process node PDK, similar design structure), reducing that time even further. In theory an organization can build a library of pre-trained models and have Cerebrus attempt the best-matching one for the task; if that fails, it can start anew and still get a great result.
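In outline, that workflow is a propose-simulate-update loop. The sketch below compresses it to its essence, with a trivial best-so-far search standing in for the real learned policy; the interfaces are invented for illustration and are not Cadence’s:

```python
import random

class BestSoFarPolicy:
    """Placeholder 'model': perturb the best-known knob settings.
    A real tool would use a learned policy here; this trivial hill
    climber only illustrates the propose/update interface."""
    def __init__(self, n_knobs=16):
        self.best_knobs = [0.5] * n_knobs   # e.g. floorplan aspect ratios,
        self.best_reward = float("-inf")    # utilization targets, etc.

    def propose(self):
        return [k + random.gauss(0.0, 0.1) for k in self.best_knobs]

    def update(self, knobs, reward):
        if reward > self.best_reward:       # keep what scored well
            self.best_knobs, self.best_reward = knobs, reward

def optimize(simulate, iterations=200):
    """Propose-simulate-update loop; Cadence quotes 50-200 iterations
    when starting from an untrained model."""
    policy = BestSoFarPolicy()
    for _ in range(iterations):
        knobs = policy.propose()            # model suggests decisions
        reward = -simulate(knobs)           # lower PPA cost -> higher reward
        policy.update(knobs, reward)
    return policy.best_knobs, policy.best_reward
```

Here `simulate` stands in for a full place-and-route plus signoff run returning a scalar PPA cost (e.g. `optimize(lambda k: sum(x * x for x in k))` as a toy test); warm-starting from a model library would amount to seeding the policy with a previously trained model’s state, given the same process node PDK and a similar design structure.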
One of the common questions I ask about these new advancements is how well the end design can be fed back to engineers to help with higher-level design – it is all very well for the ML portion of the tool to run on reinforcement learning, but is there anything that can assist the engineer in understanding their own architectural implementation? In speaking with Cadence’s Kam Kittrell, he explained that a key value of the tool is a replay feature: it records each iteration of the reinforcement learning process, allowing engineers to step through what each cycle decided to do, and so understand why the end result is the way it is. I haven’t heard of any other EDA company offering this feature at this time.
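Mechanically, a replay feature of this kind amounts to journaling every iteration so it can be stepped through after the fact. A minimal, hypothetical sketch of such a journal – Cadence has not published its actual format:

```python
import json

class ReplayLog:
    """Record each optimization iteration so an engineer can later step
    through what the tool tried and why the result looks the way it does."""
    def __init__(self, path):
        self.path, self.entries = path, []

    def record(self, iteration, knobs, metrics, accepted):
        self.entries.append({
            "iteration": iteration,
            "knobs": knobs,          # the decisions the model made
            "metrics": metrics,      # simulated PPA results for this step
            "accepted": accepted,    # whether the step improved the design
        })

    def save(self):
        with open(self.path, "w") as f:
            json.dump(self.entries, f, indent=2)

    def step_through(self):
        # Replay: walk the journal in order, as an engineer would.
        yield from self.entries
```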
Cadence Cerebrus Case Studies
As part of today’s announcement, two of Cadence’s partners contributed quotes attesting to the efficacy of the technology; however, it is the case studies provided that are worth looking over.
First up is a 5nm mobile CPU, which we believe to be part of Cadence’s partnership with Samsung Foundry. According to the information given, Cerebrus helped a single engineer achieve a 3.5 GHz mobile CPU in 10 days while also reducing leakage power and total power and improving transistor density. Compared to a projected timeline of almost a dozen engineers working over several months, Cerebrus is claimed to have improved on the best hand-tuned design by +420 MHz, while saving 26 mW of leakage power and 62 mW of total power.
62 mW of total power, quoted as a 3% saving, implies roughly a 2 W chip or core (0.062 W / 0.03 ≈ 2.07 W). Samsung does not currently have a 3.5 GHz 5nm mobile processor in the market, but the figures suggest that future designs will be more optimized than before.
The second case study involves floorplan optimization and implementation optimization performed concurrently. In this instance Cadence says a customer wanted a 12nm CPU core running at 2 GHz with the lowest possible power and area, and Cerebrus was able to optimize for that 2 GHz point, reducing wire delay timing by 83% and leakage power by 17%.
Samsung Foundry is already rolling out Cerebrus as part of its DTCO program for partners that have a Cadence-based workflow.
The Future of ML-enhanced EDA Tools
We recently reported a similar story from the other heavyweight in the EDA industry, Synopsys, about its DSO.ai software. Synopsys has a keynote titled ‘Does Artificial Intelligence Require Artificial Architects?’ at this year’s Hot Chips conference, where we expect to hear more about the work it is doing with customers. In a similar light, we expect Cadence to also discuss more wins with its Cerebrus tools.
However, a question I put to both companies concerns the evolution of the software. There are ultimately two roadmaps for software like DSO.ai and Cerebrus: function and performance. To a certain extent it is easy to talk about a roadmap of function, as the companies research and enable ML tools across more of the toolchain (potentially even crossing standard abstraction layer boundaries). Performance is the bigger question – while scaling out performance with more tests is ‘easy’ to build for, developing ML algorithms that are better at finding the best layouts is going to be a very wide field to explore. Floorplans have millions of degrees of freedom to optimize over, and one of the limitations of human involvement is getting stuck going down one particular design route; with so much to explore, neither company is yet discussing its plans to ensure that ML-assisted design can overcome these potential obstacles. Most likely, as the technology becomes more widely adopted, how that development coalesces into actual research and product roadmaps will become more tangible.
Related Reading
- New Cadence Transient EM Simulation Tools: 3D Clarity
- Cadence DDR5 Update: Launching at 4800 MT/s, Over 12 DDR5 SoCs in Development
- Samsung’s 5nm EUV Technology Gets Closer: Tools by Cadence & Synopsys Certified
- Cadence Announces Tensilica Vision Q7 DSP
- Cadence Tapes Out GDDR6 IP on Samsung 7LPP Using EUV
- Using AI to Build Better Processors: Google Was Just the Start, Says Synopsys
19 Comments
mode_13h - Sunday, July 25, 2021 - link
> real world example of that would be Apple SoC clock down to extend useable life of older models

I thought that was to extend battery life.
Don't AMD CPUs or GPUs have some way of measuring circuit aging, and dialing back their boost clocks to compensate?
Oxford Guy - Monday, July 26, 2021 - link
What Apple claims and what Apple does... not always the same.

My venerable anecdote about using extreme bait and switch fraud to sell the original Mac to the tech press is not an isolated example.
If you want a more recent one... the force-feeding of the APFS file system onto Macs using hard disks (the file system is incompatible, causing extreme slowness) was designed, very obviously, to force people to buy replacement machines. It was a ploy that used rubbish about how terrible HFS+ is, due to its age. HFS+ is a million times better for Macs with hard disks.
Spunjji - Monday, July 26, 2021 - link
Thanks for that explanation - I'd read it as a reduction in paths actually failing entirely to meet the required timing for the clock, rather than failing to meet the required margins. That makes much more sense!

mode_13h - Thursday, July 22, 2021 - link
lol. For about the first third of the article, I thought the news was that they'd partnered with Cerebras to use their Wafer Scale Engine to run these tools in the cloud.

However, that does raise the question of how much faster these tools would work, when run on such hardware. Then, of course, you'd use the tools to *build* the next gen WSE, which can also be used to research new semiconductor materials & fabrication techniques. And pretty soon, we reach the singularity. Or so they say.
chipxman7 - Friday, July 30, 2021 - link
I would be a bit skeptical about some of these marketing claims. They tend to cherry pick the best result. They also are likely comparing timing and other metrics to the initial reference design.

Machine learning does offer some interesting technological gains. You have to train a machine learning algorithm. And that takes time. It is not free. To be most accurate, you are training on your existing design or a similar design. But once it is established, it may fix issues found later on. And that often is not easy to do. Modern chip and software design is very complex.
mode_13h - Saturday, July 31, 2021 - link
Your skepticism is understandable. However, if these claims prove hollow, their customers will inevitably find out. And that's some serious reputational damage, since these business relationships typically last many years and involve multi-million dollar contracts. So, it's not really in their interest to be too short-term focused, here.

Also, building chips involves such large costs and long timescales that it's unlikely customers won't use caution when approaching any big changes in how they operate.
> You have to train a machine learning algorithm. And that takes time.
They seem to account for that.
> To be most accurate, you are training on your existing design or a similar design.
I'm not sure about that. I think it learns (or at least refines its model) from simulation feedback on its layouts, making it a fundamentally iterative process. This is also what enables you to tweak its priorities (e.g. frequency, power, area).
MetalPenguin - Sunday, August 1, 2021 - link
Yeah I'm pretty sure they are cherry picking results, but the interesting thing here is whether a model trained on a decently large set of designs is going to be able to produce good results on any other given new design. My personal opinion is that this might end up working out pretty well because there are a bunch of design rules, or best practices, which apply to a lot of designs. So I would not be surprised if their system does a decent job of learning those.

I work in the semiconductor industry and I'm personally excited about this because I see it as a way of amplifying engineer productivity, not necessarily taking away work from people. I work at a smaller company, but we are resource constrained so more intelligent and efficient EDA tools would just allow us to take on more work which we may have to refuse because we don't have enough people resources to do everything.