What will EDA and chip design look like in the year 2020? Prognostications from the ICCAD panel

Last night, half a dozen ICCAD panelists attacked the topic “2020 Vision: What the recent history of EDA will look like in nine years.” That’s such a convoluted and hard-to-parse title that the panelists chose to discuss the state of the industry in the year 2020 instead. There were a lot of really interesting opinions and even a few surprise prognostications.

Patrick Groeneveld went first. Patrick is Magma’s Chief Technologist and he’s the General Chair for DAC 2012. He’s also been a full professor of electrical engineering at the Eindhoven University of Technology. I’ve known Patrick long enough to put a lot of faith in his underlying understanding of chip design and EDA. So when he delivered his assessment of the industry, I paid attention.

“We’ve not seen much fundamental change in the EDA industry in 20 years,” said Groeneveld as he put up a slide showing the rate of major conceptual jumps in EDA falling off.

Then he said that when we get to the year 2020, design scale will be the #1 problem. “Exit Verilog. Hello SystemC,” he quipped. With those four words, Groeneveld was stating his belief that Verilog (or any HDL, for that matter) will be unable to take chip designers to the next level of abstraction. That next level, which appears to be transaction-level modeling (TLM), requires descriptions written well above the register-transfer level.

SystemC allows such higher-level descriptions, but at a cost measured in lost fidelity: a weaker correspondence with “reality.” You cannot exactly describe what's happening down at the gate level with SystemC. Not coincidentally, you can't do it with C either. C-based languages employ a sequential machine model that describes algorithms well and complex hardware less well. However, you can describe block behavior with enough fidelity to run system-level simulations. Then what? Groeneveld said that our abstraction levels are increasingly inaccurate and the problem is slowly getting worse, forcing us to rethink our design flows.
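To make that jump in abstraction concrete, here is a minimal sketch of my own (not from Groeneveld's talk) of what a higher-level SystemC description looks like: two blocks hand whole values to each other through a FIFO channel, with only approximate timing and no pins, registers, or gates anywhere in sight. The module names and the 10ns delay are purely illustrative.

    // Two loosely timed SystemC blocks communicating by transactions, not pins.
    #include <systemc.h>

    SC_MODULE(Producer) {
        sc_fifo_out<int> out;                   // transaction-level port
        void run() {
            for (int i = 0; i < 4; ++i) {
                out.write(i * i);               // one "transaction" per payload
                wait(10, SC_NS);                // approximate timing, no clock edges
            }
        }
        SC_CTOR(Producer) { SC_THREAD(run); }
    };

    SC_MODULE(Consumer) {
        sc_fifo_in<int> in;
        void run() {
            while (true) {
                int v = in.read();              // blocks until the producer writes
                std::cout << sc_time_stamp() << ": got " << v << std::endl;
            }
        }
        SC_CTOR(Consumer) { SC_THREAD(run); }
    };

    int sc_main(int, char*[]) {
        sc_fifo<int> channel(2);                // bounded channel between the blocks
        Producer p("p");
        Consumer c("c");
        p.out(channel);
        c.in(channel);
        sc_start(100, SC_NS);
        return 0;
    }

That, in a nutshell, is the fidelity trade-off: the model is plenty good enough to exercise system-level behavior, but it says nothing about how the hardware will actually be clocked, pipelined, or wired.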

We’re taking two paths to handling the added complexity, said Groeneveld: partitioning and reuse. Both are evil, he said, because they introduce inefficiencies into the overall design. I'm not sure I can agree with that position, however. “Divide and conquer” was supposedly introduced to engineering in the days of the Roman Empire, so we've been using it for at least 2000 years. It seems to be a pretty good approach for even moderately complex problems; at the least, it's an approach that has withstood the test of time. However, a divide-and-conquer strategy does indeed lead to suboptimal design in terms of efficient resource use. I just don't know of any engineering discipline that avoids such inefficiencies when tackling projects of comparable complexity. Is it hubris to think that electrical engineering and chip design are somehow different?

Other EDA and design problems Groeneveld discussed:

  • Analog design automation. Wasn’t solved 10 years ago. Isn’t solved now. Won’t be solved in another 10 years.
  • 3D IC design will be mainstream by 2020 but it doesn’t really need a lot of EDA help.
  • Multicore, GPU, and cloud computing for EDA will prove to be so much hype, like DFM, Internet CAD, and plug-and-play EDA tools of the past.

Pei-Hsin Ho, a Synopsys Fellow, flew down from Portland and took a more lighthearted approach to the evening's panel. He said that by 2020 leading-edge IC design will have reached 6nm design rules but that delay scaling will have slowed (it seems to me it has slowed already, with Dennard scaling dead at 90nm).
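For readers who want the back-of-the-envelope version of why that parenthetical matters (my aside, not Ho's), classical Dennard scaling works roughly like this:

    dynamic power per gate:   P ≈ C · V² · f
    classic scaling by 1/S:   C → C/S,   V → V/S,   f → f·S
    net effect:               P → P/S², while device density rises by S², so power density stays flat

Once supply voltages stopped shrinking, at roughly that 90nm point when leakage forced Vdd to level off near a volt, the V² term quit helping, frequency could no longer climb for free, and delay scaling stalled along with it.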

Ho also said we'd be adding new metrics, such as reliability, to the price/power/performance mix, and that we'll need to develop new design methodologies such as fault-tolerant design and asynchronous design. Ho then predicted that the year 2020 will mark the beginning of the end for today's logic devices and process technologies, and that new devices will need to come on line by then. (Memristors? Graphene transistors? Quantum electronics? Optical logic elements?) New devices will lead to new design paradigms, which in turn will lead to the need for new EDA tools. Ho said he was therefore predicting a new EDA Renaissance by the year 2020.

I particularly enjoyed Ho's whimsical look at the ICCAD 2020 conference program, which includes talks such as:

  • Artificial intelligence techniques for the physical design of microprocessors
  • Reliable design for unreliable devices
  • Physical design and verification for emerging logic devices
  • Design automation for wearable and implantable devices
  • Formal verification of privacy and security
  • Verification and implementation techniques for inference computing, quantum computing, and other non-Von-Neumann architectures
  • Special section: recent breakthroughs in physics: cold fusion, superconductivity, string theory, and their implications for EDA

Amusing, and likely pretty spot-on, I think.

Suk Lee, Director of TSMC's Design Infrastructure Marketing Division, started out by noting that TSMC has seen more rapid adoption of its 28nm process technology than it saw for 40nm, and he showed a chart comparing the two ramps.

He then said that TSMC would have 20nm chips in production in 2012 and 14nm FinFET shuttles operating in 2014. Plenty of EDA challenges there.

Lee differed from Groeneveld in his opinion about 3D. TSMC is currently making silicon interposers for the Xilinx Virtex-7 2000T FPGA and now has an active 3D program in place. (See “3D Thursday: Generation-jumping 2.5D Xilinx Virtex-7 2000T FPGA delivers 1,954,560 logic cells using 6.8 BILLION transistors (UPDATED!)”) With interposer thicknesses heading to 50 microns, there will be many new mechanical and electrical problems for EDA to solve, according to Lee. Ditto for FinFETs.

Lee also provided a list of predictions for 2020:

  • Moore: Novel new transistors
  • More than Moore: Massively integrated systems on silicon via silicon interposer and 3D IC
  • A healthy EDA ecosystem driven by customers and silicon innovation

Sani Nassif at the IBM Research Laboratory in Austin capped the evening’s festivities by saying that he’d gone on eBay and purchased a time machine, gone to 2020, and then come back to tell us what he saw.

“It’s not ‘What is the chip you’re going to make?’ by then, it’s ‘What is the app you’re going to write?’”

In other words, the year 2020 will fully be an apps-driven world. The EDA360 Insider couldn’t agree more.


4 Responses to What will EDA and chip design look like in the year 2020? Prognostications from the ICCAD panel

  1. Patrick Groeneveld (from Magma) states that “cloud computing for EDA will prove to be so much hype, like DFM, Internet CAD, and plug-and-play EDA tools of the past.”

    And how exactly will EDA cope with another 20x-60x complexity increase by 2020? We're already reaching the limit, and the big semiconductor companies are using (private) cloud computing routinely. Pushing some EDA tools onto the cloud looks like a good answer to the scalability issue. Nobody wants to pay $100M for its own giant server farm that is used only for peak processing loads. And burst computing is only one aspect of EDA in the cloud: security, shutting down software piracy, and hourly-rate licenses will change the EDA landscape for the better.

  2. I agree that by 2020 almost everything will be app driven, considering the boom in the mobile industry. However, design rules reaching down to 6nm seems a distant horizon to me. We still have design implementation challenges, such as double patterning at 20nm, that still need fine tuning. I guess the eBay time machine may also need calibration.

  3. sleibson2 says:

    With TSMC saying it will be running 14nm shuttles by 2014, it’s not a big stretch to get to 6nm in another six years. However, e-beam litho had better kick in by then or there will be trouble getting there. If it does, then double-patterning will no longer be an issue. We will have many other issues, to be sure. Who knows, by the time we get to 6nm, we might be working with 2D graphene FETs and memristor RAM.

  4. Avri Shenwald says:

    I liked this article.
    I think that talk about cloud computing, which is more of a “buzz word” today, is like “looking for the missing coin under the lamp.” It is already here, but still immature, especially for EDA purposes. Even if the technology becomes trustworthy, I am not sure how the industry will benefit from it. In the end this is an economic issue, and no one wants to lose money…
    Looking for more efficient parallel-processing EDA algorithms is more challenging and might contribute more to the EDA industry, as would artificial intelligence.
    Maybe 3D chips will satisfy Moore's Law; the technology is already here, and EDA might contribute by making 3D chip design more efficient.
    Since mask design today is becoming harder and more complex, I think that one of the main challenges for the fabs is to come up with new mask production technologies (different wavelength lighting, or even a completely new method of mask generation) that will simplify mask design and bring it back to sanity.
    New materials and efficient asynchronous designs combined with synchronous interfaces would be good directions.
