Last week at the Global Technology Forum held at the Santa Clara Convention Center in Silicon Valley, Cadence VP of R&D Frank Leu discussed what we have learned about 20nm IC manufacturing, what we are learning about 14nm, and where we go after that. Leu started by saying that 20nm is the next process node. By that, he meant that the EDA issues for 28nm are pretty much resolved: we know how to get 28nm designs taped out and ready for production because a lot of them already have.
Jumping to the next IC process node gives semiconductor vendors two big advantages:
- You get 100% more transistors per square millimeter with each jump
- You get a 1.6x performance increase with each jump
These two advantages help vendors meet PPA (power, performance, area) goals for new designs.
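For a rough sense of where the transistor-density figure comes from (my back-of-the-envelope arithmetic, not Leu's): a full node jump corresponds to roughly a 0.7x linear shrink, so each transistor occupies about half the area it did before:

```latex
% Rule-of-thumb scaling, not a figure from the talk:
% a ~0.7x linear shrink per full node jump halves the area per transistor,
% which is the "100% more transistors per square millimeter" claim.
0.7 \times 0.7 \approx 0.5
\quad\Longrightarrow\quad
\frac{1}{0.5} = 2\times \ \text{transistors per mm}^2
```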
Leu then said that three things are important for 20nm development and beyond:
- Design and process discontinuities
- Strong ecosystems with collaboration and partnerships because of the…
- Huge required investment
The most important discontinuity at 20nm is the need for double patterning, which we must use to get wire pitch below 80nm—and at 20nm geometries we want a 64nm wire pitch. Double patterning requires the use of “coloring” to split the polygons onto the two pattern masks. Along with the double patterning requirement you also get “more than 400 new design rules.”
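To make the coloring idea concrete, here is a minimal, hypothetical sketch (my own illustration, not Cadence's FlexColor algorithm) that treats mask decomposition as 2-coloring a conflict graph: any two polygons closer than the minimum same-mask spacing get an edge, and an odd loop in that graph means there is no legal two-mask split.

```python
from collections import deque

def assign_masks(polygons, conflicts):
    """2-color a double-patterning conflict graph.

    polygons  -- list of polygon IDs
    conflicts -- (a, b) pairs whose spacing is below the same-mask minimum,
                 so a and b must land on different masks
    Returns {polygon: 0 or 1}, or raises ValueError on an odd loop.
    """
    neighbors = {p: [] for p in polygons}
    for a, b in conflicts:
        neighbors[a].append(b)
        neighbors[b].append(a)

    color = {}
    for start in polygons:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in neighbors[p]:
                if q not in color:
                    color[q] = 1 - color[p]   # alternate masks along the chain
                    queue.append(q)
                elif color[q] == color[p]:
                    # odd loop: this layout cannot be split onto two masks
                    raise ValueError(f"odd coloring loop at {p}-{q}")
    return color

# Example: four wires where A-B, B-C, C-D conflict (an even chain is colorable)
print(assign_masks(["A", "B", "C", "D"], [("A", "B"), ("B", "C"), ("C", "D")]))
```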
In addition to these complexities, some additional challenges appear at 20nm. First, transistor leakage is worse. Design complexity grows because a 20nm chip might encompass 10 billion transistors or more. Power density is hard to control across all of those transistors, and that problem is also getting worse. Finally, the number of I/O pins is increasing, which exacerbates power problems.
Three factors help us attack these challenges:
- DFM to deal with layout-dependent effects, process variability, and the 400 new design rules
- Concurrent PPA optimization
- Increasing design productivity (but can we maintain the trend?)
Leu discussed some of the DFM features Cadence has incorporated into its EDA tools to deal with 20nm designs. The first is FlexColor, the technology built into the Cadence NanoRoute Advanced Digital Router, which generates correct-by-construction, area-efficient patterns that are right the first time. There's also In-Design physical verification based on coloring and odd-loop detection, which speeds final signoff. Finally, there's In-Design Extraction, which permits interactive analysis of electromigration in wires (a growing problem at 20nm). These productivity aids emphasize interactive In-Design activity rather than batch-and-fix design methodologies.
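As a rough illustration of the kind of check an in-design extraction and analysis step enables (a toy model of my own, not the Cadence implementation, with a made-up current-density limit): compare each wire's current density against a process limit and flag the violators.

```python
def em_violations(wires, j_max_ma_per_um2=1.0):
    """Flag wires whose current density exceeds a (hypothetical) EM limit.

    wires -- iterable of (name, current_mA, width_um, thickness_um)
    Current density J = I / (width * thickness); the limit j_max is
    process-specific and gets harder to meet as wires narrow at 20nm.
    """
    bad = []
    for name, i_ma, w_um, t_um in wires:
        j = i_ma / (w_um * t_um)
        if j > j_max_ma_per_um2:
            bad.append((name, round(j, 2)))
    return bad

# Example: a narrow wire carrying too much current gets flagged early,
# while it can still be widened or the driver resized interactively.
print(em_violations([("clk_spine", 0.30, 0.10, 0.09),
                     ("data_bus", 0.005, 0.10, 0.09)]))
```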
Leu then discussed some of the techniques used to accelerate PPA optimization. The first is the integration of optimization paths through analog/custom and digital IC design flows, coupled with 2.5D/3D assembly and packaging design. These three significant design silos cannot be optimized individually without giving up some global optimization ability. In addition, said Leu, you also need to manage power with a low-power design flow that runs throughout the design process. Leu cited the Cadence GigaOpt Engine as an example of a tool designed specifically for PPA optimization.
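To illustrate what "concurrent" PPA optimization means in spirit (a toy sketch of my own, not how the GigaOpt Engine works), consider choosing a drive strength for one cell by scoring power, delay, and area together instead of fixing one metric at a time:

```python
def pick_variant(variants, weights=(1.0, 1.0, 1.0)):
    """Choose the cell variant with the lowest weighted PPA cost.

    variants -- list of (name, power_uW, delay_ps, area_um2)
    weights  -- relative importance of (power, performance, area)
    Scoring all three metrics at once is the point: a delay-only pick
    would always take the biggest cell and pay for it in power and area.
    """
    wp, wd, wa = weights
    return min(variants, key=lambda v: wp * v[1] + wd * v[2] + wa * v[3])

# Hypothetical drive-strength choices for one cell
cells = [("x1", 2.0, 95.0, 1.0), ("x2", 3.5, 70.0, 1.6), ("x4", 30.0, 55.0, 4.0)]
print(pick_variant(cells))                    # balanced weights pick x2
print(pick_variant(cells, (0.2, 2.0, 0.2)))   # performance-heavy weights pick x4
```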
To maintain the trend in design productivity gains, the philosophy is to prevent design errors before they occur rather than letting them happen and fixing them later. Finding and fixing errors takes time, which costs money and delays tapeout.
Instead, you should approach design using correct-by-construction techniques and In-Design signoff. “It’s a long-term philosophy,” said Leu. “Don’t wait for the end to find problems.” He then described some of the Cadence EDA tools that reflect this philosophy, including GigaScale hierarchical design, which he said will be the norm at advanced nodes, and the color-aware design described above.
These concepts are not just nice theories, said Leu. There are successes to back up the ideas:
- A project with IBM to develop a 22nm test chip using the ARM Cortex-M0 processor core and 20nm, 9-track libraries. This project employed the Cadence RTL-to-GDS design flow and was completed more than a year ago.
- A project with Samsung, again based on the ARM Cortex-M0 processor, and on Samsung 20nm libraries. The design taped out early.
- A project with GLOBALFOUNDRIES using ARM 20nm libraries.
Finally, Leu turned his attention to 14nm. We are already working on new 14nm device models, he said, and these models feed directly into our OpenAccess database. That becomes part of the 14nm design infrastructure. In addition, he said, IP creation must deal with process variability, double or even triple patterning, more design methodology changes, and enhancements to correct-by-construction design techniques.
For additional information, see “On-Line Presentation: 20nm Design Challenges, and a Look Ahead to 14nm” in Richard Goering’s Industry Insights blog.
“You get a 1.6x performance increase with each jump”
Is this still true? Or does it depend on what you mean by “performance”?
Increasingly (and depending on path length measured in process units) speed is limited by metallisation; so we are tending to the point where process shrink just leaves the speed constant*. Indeed, I suspect we are not far from the point where the dimensions of medium-range signal routing cannot be further reduced without reducing circuit speed…
If we mean power-per-transition, performance advantages should continue on this route for a few more generations
*In some cases speed increases remain possible, but require changes to system and to sub-system architectures…
George, see my latest blog on Tom Beckley’s ISQED keynote for more in-depth numbers on 20nm performance.
–Steve