THE LATEST NEWS
AI Agents Will Work with AI Agents For Chip Design in 2025

Last week, Synopsys’ lead on AI technology strategy, Stelios Diamantidis, stated that AI will start collaborating with AI in 2025, ushering in the next phase of AI deployment. He said that AI agents—which started out as simple bots performing rudimentary tasks using predefined rules and decision trees—have evolved into sophisticated systems that can understand human language, generate content, and continuously learn and adapt their behavior.

Today’s agents may be built for specific use cases and isolated within individual applications, but that will soon change as one AI agent begins to collaborate with another. In a blog post, Diamantidis added that AI agents are being trained for greater integration and collaboration, including in chip design.

To highlight its own internal use of this, Synopsys told EE Times, “Based on results from pilot programs, Synopsys internal GenAI applications are expected to yield 250,000 hours of employee capacity this coming year, freeing our teams to focus more of their time on high-value activities for our customers.”

In his blog, Diamantidis added, “Highly specialized AI agents could combine and analyze incalculable amounts of information spanning software workloads, architecture, data flow, timing, power, parasitics, manufacturing rules and other parameters. This AI-to-AI collaboration would help identify unseen patterns and correlations, develop new solutions for longstanding problems, and provide detailed recommendations for optimizing chip design and performance.”

AI is not just about compute: it is also about power efficiency

While we all get excited about AI and generative AI, we are also constantly hearing about AI’s huge energy demands. At several conferences over the last two years, I have heard Mark Papermaster, CTO of AMD, warn that the explosive growth of AI could outstrip the energy available to power it in the near future, and there is constant talk of getting more compute from less energy to address the problem.

There are, of course, many companies innovating here, from improved AI compute, interconnect and memory architectures to entirely new approaches such as compute-in-memory (for example, the recently emerged startup Sagence AI, with its analog in-memory compute aimed at more efficient AI inference).

William Ruby, a senior director at Synopsys focused on power analysis, told EE Times, “We need to make power and energy efficiency one of the primary considerations when you start looking at the architecture [in chip design].” Ruby has extensive experience in low-power IC design and design methodology.

We sat down with Ruby at Synopsys’ headquarters in Mountain View, Calif., earlier this month to chat about the impact of AI, the need for power efficiency, and how Synopsys is helping to address that requirement.

More from EE Times:

SoCs Get a Helping Hand from AI Platform FlexGen
Huang Talks Tokens, Reveals Roadmap at GTC 2025
BYD Ignites EV Race With Ultra-Fast Charging