What’s driving 3D IC design? Do 2D EDA tools need a total overhaul to support 3D design?

The Electronic Design Process Symposium (EDPS) held last week in Monterey devoted most of Friday to a discussion of 3D design. I’ll be devoting several EDA360 Insider blog entries to this important topic. Today’s entry summarizes the presentation by Rahul Deokar, a Product Marketing Director at Cadence. Among other things, Deokar is responsible for 3D IC design tools.

Here’s what Deokar discussed in his presentation:

Everyone’s pretty much in agreement: 2D IC design is running out of steam. Electrical, physical, and manufacturing issues, along with increasingly expensive masks (a 4x cost increase at 40nm and below), all impede continued progress along Moore’s Law in the 2D world. At 20nm, we need to use double-patterning lithography. At 14nm, we’ll need triple patterning, so mask costs are becoming prohibitive. Consequently, perhaps only one, two, or a handful of companies will be able to pursue Moore’s Law along the 2D development curve we’ve followed over five decades of IC development. It’s simply becoming cost prohibitive to go further down this path, and medium- and small-sized semiconductor companies will not be able to afford it.

Another factor to consider is time to market. At 65nm, it took 7-8 months to ramp production to high volumes. At 45nm, it took 12-13 months. At 32/28nm, it’s taking much longer, perhaps a 2x increase in time to market compared to a 65nm design. It’s clear that you cannot ride the IC process roadmap and shrink time to market at the same time.

If you want to get your products to market in time to take advantage of the latest and greatest in integration technology, you’ve got to look beyond the 2D version of Moore’s Law; otherwise, you’ll need 2-3 years to get your products to market using the latest IC process technology. For a typical consumer product, a 6-to-9 month delay translates into about $5 billion in lost revenue, so the financial implications of pursuing Moore’s-Law scaling in the 2D domain are substantial.

Because of these cost and time-to-market issues, more and more vendors are now looking to 3D ICs to continue the progress in integration.

Here’s how 3D ICs help:

1. 3D IC design allows you to do heterogeneous integration. This is a big deal. You can keep your older custom, analog, or memory designs at an existing process node where the design is proven, and then couple that silicon to the latest CPU or GPU built with a more advanced process technology such as 32/28nm. This approach gets you to market much faster than recasting existing, working silicon that needs no performance boost or area shrink into a new process technology.

2. 3D IC design can also provide some unique benefits with respect to power consumption and heat dissipation.

However, it’s not going to be smooth sailing for 3D IC design and assembly. There are many problems yet to be solved. For example, there are thermal/mechanical issues caused by the mismatched thermal-expansion coefficients of silicon and the copper in through-silicon vias (TSVs). This mismatch creates stress that you’d prefer to avoid because it can produce microcracks in the silicon and can cause the microbump interconnections to fail mechanically and electrically. The industry is looking to EDA providers to help solve this particular issue.
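As a rough back-of-the-envelope sketch (not from Deokar’s presentation), the stress from this CTE mismatch scales with the difference in expansion coefficients and the temperature swing the stack sees during processing. All the material values and the fully constrained biaxial approximation below are textbook assumptions, not measured TSV data:

```python
# Rough CTE-mismatch stress estimate for a copper TSV in silicon.
# Material values are typical textbook numbers (assumptions, not from the talk).
E_CU = 117e9      # Young's modulus of copper, Pa
NU_CU = 0.34      # Poisson's ratio of copper
CTE_CU = 17.0e-6  # thermal-expansion coefficient of copper, 1/K
CTE_SI = 2.6e-6   # thermal-expansion coefficient of silicon, 1/K
DELTA_T = 250.0   # assumed cool-down from anneal to room temperature, K

# Fully constrained biaxial approximation: sigma = E/(1-nu) * d_alpha * dT
sigma = E_CU / (1.0 - NU_CU) * (CTE_CU - CTE_SI) * DELTA_T
print(f"Estimated thermal stress: {sigma / 1e6:.0f} MPa")
```

Even this crude estimate lands at hundreds of MPa, which is why stress-aware placement and keep-out zones around TSVs matter.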

Testing presents yet another challenge. Existing DFT technologies need to be extended and standardized for 3D IC design because the multiple dies in a 3D IC stack can be supplied by multiple IP providers. How do you test an assembled stack built like that? Who’s responsible for creating and verifying the test?

Typically, 2D chip designs are done by a chip designer responsible for the digital design and by custom/analog designers who are responsible for a portion of the IC design. There’s a package design team as well. In the 2D IC-design world, these three groups have a sort of “over-the-wall” relationship. That same sort of relationship cannot continue in the world of 3D IC design without severe economic penalties. If you really want to reduce the cost of the overall package and gain all of the benefits of combining digital, memory, and analog components into a 3D design, then all of these design groups must work together, and their tools must work together as well.

That need places unique requirements on the EDA supplier. It forces all the tools for these three groups to communicate through a common database so that concurrent design can occur, so that optimizations can cross boundaries to get the best performance at the lowest power.

Here are the design steps for a typical 3D IC design flow:

1. System-level exploration. This is where you analyze cost, performance, and power based on estimates and make your big, macro-level choices: What type of memory to use? What process technology to use on each die? How should the dies be ordered in the stack? These questions are already important in 2D system design, and they become even more important in the 3D realm, particularly when you assemble 7, 8, or 9 dies into a 3D stack.

2. Once you’ve made the big choices, the next step is to optimize the design across the multiple dies in the stack. This step presents floorplanning tools with new challenges beyond the 2D realm. You need to make sure the power TSVs are placed and optimized to minimize power losses. You need to make sure that circuit elements that need to be close to one another are not widely separated across the die stack. These sorts of floorplanning decisions become critical with 3D design. Again, EDA tool providers can help with this step.

3. Once the floor plan is defined, you need to place the various blocks and TSVs on the multiple dies to minimize routing lengths.

4. Once everything is placed and routed, you need to extract and analyze the electrical and thermal characteristics of the stack for timing and thermal analysis. Stacking chips will create hot spots or hot pockets within the stack. You’ll want to perform optimizations to minimize or eliminate these hot spots by adding thermal chimneys, by reducing switching activity through architectural optimization, and by physically separating heat-generating blocks. There are also cross-characteristic factors to consider. For example, thermal effects can change timing. As a result, critical paths can fail.

5. You will want to ensure that the DFT structures integrated into the design will permit the multiple dies in the 3D stack to be tested.

6. At the same time, silicon/package codesign becomes more important because of the extra thermal considerations and because of the increase in the number of I/O pins that you generally encounter with a 3D IC design. If you move to 3D IC design without considering silicon/package codesign, the overall assembly and packaging costs can easily exceed your budget.
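The talk presented these steps as a flow, not as code, but the stack-ordering question from step 1 can be sketched as a toy exhaustive search. Everything here (the die list, the power numbers, and the distance-times-power scoring heuristic) is an illustrative assumption, not a real exploration tool:

```python
from itertools import permutations

# Hypothetical per-die power estimates in watts (illustrative only).
dies = {"cpu": 4.0, "gpu": 6.0, "dram": 0.8, "analog": 0.5}

def thermal_score(order):
    """Toy heuristic: a die's heat penalty grows with its distance
    from the heat sink (position 0), weighted by its power."""
    return sum(dist * dies[name] for dist, name in enumerate(order))

# Exhaustively score every stack ordering and keep the coolest one.
best = min(permutations(dies), key=thermal_score)
print("Best stack (heat sink side first):", best)
```

Even this toy version shows the shape of the problem: the search space grows factorially with the number of dies, which is why real system-level exploration tools rely on estimates and pruning rather than brute force.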

Here’s yet another consideration when migrating from 2D to 3D IC design: There are new structures to deal with for 3D design such as TSVs and microbumps. These new structures translate into new design rules, new constraints, new electrical and mechanical models, and new layout rules for die-to-die alignment. The tools for 3D IC design must account for all of these additions. Existing design-file database formats need to be enhanced to accommodate these changes.
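As one concrete (and purely hypothetical) example of such a new rule, a 3D rule deck might enforce a keep-out zone around each TSV because of the stress the via induces in nearby silicon. The distance value and the check below are illustrative sketches, not an actual foundry rule:

```python
# Hypothetical TSV keep-out rule: active devices must sit at least
# KEEP_OUT_UM microns from any TSV center (value is illustrative).
KEEP_OUT_UM = 5.0

def violates_keep_out(tsv_xy, device_xy):
    """Return True if a device sits inside the TSV keep-out zone."""
    dx = tsv_xy[0] - device_xy[0]
    dy = tsv_xy[1] - device_xy[1]
    return (dx * dx + dy * dy) ** 0.5 < KEEP_OUT_UM

print(violates_keep_out((0.0, 0.0), (2.0, 2.0)))  # inside the zone
```

A placement or DRC tool extended for 3D would run checks of this general shape against every TSV, which is exactly the kind of new rule the design database must be able to represent.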

Once these new factors become part of the design database, then 3D design becomes a natural extension of the existing design tools. At Cadence, we believe that the tool modifications all fall into one of three facets: design intent, design abstraction, and design convergence. These three facets are the unifying force for 3D tool development at Cadence.

The design intent facet captures the designers’ expected behavior for the 3D IC design. All of the EDA tools can make use of the captured constraints and expected behaviors without the need for manual file transfers between tools. Automated sharing of design intent across tools through the unified design database minimizes the chance that this information will be ignored by the designers performing various tasks.

Design abstraction allows each design team to use design information from the other design teams without drowning in detail that’s not relevant to a particular design task and without needlessly bogging the EDA tools down with massive data files. Abstractions of various blocks can be shared among the teams very efficiently using this concept.

Design convergence allows implementation and verification to proceed quickly, always driving towards project goals. One way Cadence tools accomplish this is by embedding in-design signoff engines for power, timing, and thermal analysis within the implementation tools, minimizing iterations between implementation and verification.

So 2D design tools need not be completely overhauled to produce 3D IC design tools. What is critical, we believe, is the incorporation of design intent, design abstraction, and design convergence into a unified design database to more closely couple the various silicon- and package-design tools so that the overall design effort can optimize the 3D IC design across the many available degrees of freedom.


About sleibson2

EDA360 Evangelist and Marketing Director at Cadence Design Systems (blog at http://eda360insider.wordpress.com/)
This entry was posted in 3D, Design Abstraction, Design Convergence, Design Intent, EDA360, Low Power, Packaging, Silicon Realization, SoC Realization, System Realization, Verification.
