This week at the ISQED Symposium in Silicon Valley, Tom Beckley, Senior VP of R&D for Custom IC and Signoff at Cadence, opened the conference with a keynote covering the industry’s challenges and progress at 20nm and below. Yesterday, I wrote a blog entry covering the first half of his keynote. (See “Scaling the 20nm peaks to look at the 14nm cliff, Part 1: Tom Beckley from Cadence maps the challenges of advanced node design.”) Today’s blog entry provides you with the second half of Beckley’s keynote speech in detail. Normally, I’d summarize the talk for you here in the EDA360 Insider, but Beckley’s presentation really doesn’t need my help at all, so here’s the second part of the speech as delivered:
Taming the Challenges of Advanced-Node Design, Part 2
When attacking the challenges of 20nm process technology, we must look at both design tools and design methodology. The 22nm, 20nm, and 14nm process nodes represent a seismic shift, so changes will be needed in both.
New devices and tools emerging at 20nm: FinFETs will rule by 14nm
The planar CMOS transistor is running out of steam from the perspectives of both low power and high performance. Luckily, a replacement is available. Invented more than a decade ago, the FinFET or Tri-gate FET has finally come into its own as a way of combating many of the inherent problems associated with planar CMOS transistors. Our experience suggests that although planar transistors may see some manufacturers through the 20nm node, the FinFET will rule the world at 14nm and below.
But FinFETs bring their own set of challenges. Via resistances and metal resistances are so high at these advanced process nodes that driving signals with FinFETs is challenging. Routing parasitics are problematic as well, FinFET leakage remains high, and FinFET gate-to-source and gate-to-drain capacitance ratios pose difficulties of their own.
Extraction technology is changing significantly on multiple fronts to handle these new device structures and new interconnect layers. Extractors must handle the new process rules, but they are also changing so they can extract the critical parasitic information out of the new structures and layers emerging at this node. Without this information, chip designs will fail. In addition, extractors must handle significantly more data and still run at acceptable speed.
Here is an example of a new structure that challenges existing extraction technology:
In this example, the new resistive structure uses fractured vias to accommodate MOL (middle of the line) interconnect routing. Due to double patterning, the number of corner files that need verification has effectively doubled, and this via structure is considerably more complex than via structures used in previous process nodes. That all adds up to a lot of computation that did not need to occur at previous nodes.
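As a back-of-the-envelope illustration of why fractured via structures add extraction work, each via cut can be modeled as a resistor: cuts landing on the same pad combine in parallel, and stacked layers combine in series. The resistance value and the stack topology below are invented for illustration; a real extractor derives them from the foundry technology file.

```python
# Toy model of a fractured MOL via stack (values are assumptions, not
# real process data): parallel cuts on one pad, then a series cut above.

def parallel(resistances):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

R_CUT = 20.0  # ohms per individual via cut (assumed)

# Two fractured cuts at the bottom layer act in parallel...
r_bottom = parallel([R_CUT, R_CUT])   # 10.0 ohms

# ...then a single cut up to the next layer adds in series.
r_total = r_bottom + R_CUT            # 30.0 ohms

print(r_total)  # 30.0
```

Even this trivial stack needs a small network solve; multiply that by every fractured via and every double-patterning corner and the added extraction burden becomes clear.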
Detecting DFM and litho problems also must be approached differently. The sheer number of rules and massive circuit sizes quickly overwhelm model-based simulation, the way it’s mostly done today. Pattern matching was introduced at the 40nm node as a way of quickly identifying litho problems. Here’s an example of the patterns used for pattern matching.
The pattern library is created from process hotspot patterns found on wafers. These patterns can be classified into a library and pattern recognition can quickly identify design trouble spots.
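The classify-then-match flow described above can be sketched in a few lines. In this hypothetical example, layout windows are rasterized into small binary grids and hashed against a library of known hotspot patterns; the names `HOTSPOT_LIBRARY` and `scan_layout`, and the grids themselves, are illustrative and not any actual DFM tool’s API or data.

```python
# Hypothetical library-based pattern matching: rasterize layout windows
# into binary grids (1 = metal present) and look them up in a library of
# hotspot patterns previously classified from wafer data.

def grid_key(grid):
    """Canonical hashable key for a rasterized layout window."""
    return tuple(tuple(row) for row in grid)

# Library of known process-hotspot patterns (invented example entry).
HOTSPOT_LIBRARY = {
    grid_key([[1, 0, 1],
              [1, 0, 1],
              [0, 0, 1]]): "pinch-prone line end",
}

def scan_layout(windows):
    """Return (index, diagnosis) for each window matching a known hotspot."""
    hits = []
    for i, window in enumerate(windows):
        diagnosis = HOTSPOT_LIBRARY.get(grid_key(window))
        if diagnosis:
            hits.append((i, diagnosis))
    return hits

windows = [
    [[0, 0, 0], [0, 0, 0], [0, 0, 0]],  # clean region
    [[1, 0, 1], [1, 0, 1], [0, 0, 1]],  # matches the library entry
]
print(scan_layout(windows))  # [(1, 'pinch-prone line end')]
```

A dictionary lookup per window is what makes this so much faster than simulating each candidate region against lithography models.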
Process hotspot repair flows will become common at 20nm and below because they boost yield. But there is more we can do with pattern matching technology—more on that below.
The first thing to realize is that tossing designs “over the wall” will no longer work. The circuit design cannot be done without a deep understanding of the physical implementation and the physical implementation cannot be done without an understanding of the manufacturing implications.
Everything is now intertwined. The key to a successful 20nm design methodology is the ability to share, exchange, and analyze information between the front end and the back end continuously. Here’s a graphic depiction of the kinds of information that the design tools must continuously interchange to achieve a successful design.
Circuit designers need to be aware of layout-dependent effects that can cause electrical and mismatch problems so they can develop the design and direct the implementation accordingly. Layout engineers will need to place and route circuit elements using dummies and new layers to ready the design for manufacturing. Circuit designers and layout engineers working together must be able to analyze the mountains of new data that will come their way. They cannot rely on the batch methods that have predominated at previous process nodes, because the design will never be finished if they do.
To facilitate this cross communication, circuit designers need to quickly prototype a layout to understand the layout-dependent effects on the circuit. These effects include placement, routing, the use of dummies, and compliance with new placement and routing rules. To do this, the circuit designer needs higher-level custom building blocks to simplify the design challenge. Furthermore, these building blocks should be simulated and extracted.
At Cadence, we have been investing heavily in building advanced custom design building-block solutions to enable rapid prototyping. Automatic placement and routing of these complex module generators—we call them “modgens”—must incorporate the new local interconnect rules for the 20nm node; they must handle double patterning; and they must deal with parasitics and layout-dependent effects. The resulting modules can then be fed into an analog placer that works within the context of an overall floorplan. This solution is constraint-driven from the schematic and keeps the schematic and layout in sync, which is critical for speeding initial design, verifying the final design, and for handling engineering changes.
This process starts with automatic recognition of basic bricks (current mirrors, differential pairs, buffers, inverters, capacitor arrays, etc.) on the schematic, which are synthesized into layouts, then simulated and extracted, while accounting for layout-dependent effects. These layouts are used in blocks, and those blocks are then assembled into the floorplan. The relative placement is saved as a set of constraints. These constraints enable a form of IP re-use and are also used for ECOs.
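The idea of saving relative placement as constraints can be sketched simply: record each block’s position as an offset from an anchor block, so that moving the anchor (during an ECO, for instance) re-derives the whole placement. The data model below is hypothetical, not Cadence’s actual constraint format, and the block names and offsets are invented.

```python
# Illustrative relative-placement constraints: each block is stored as an
# (dx, dy) offset from an anchor brick, so one anchor move replays the
# whole placement -- a toy version of constraint-driven ECO handling.

constraints = {
    "current_mirror": (0, 0),    # anchor brick
    "diff_pair":      (12, 0),   # 12 units to the right of the anchor
    "cap_array":      (0, 20),   # 20 units above the anchor
}

def place(anchor_xy, constraints):
    """Resolve absolute positions from an anchor point plus offsets."""
    ax, ay = anchor_xy
    return {name: (ax + dx, ay + dy) for name, (dx, dy) in constraints.items()}

original = place((100, 100), constraints)
eco      = place((150, 100), constraints)  # floorplan shift; offsets preserved

print(original["diff_pair"])  # (112, 100)
print(eco["diff_pair"])       # (162, 100)
```

Because the constraints, not the coordinates, are the saved artifact, the same set can be replayed on a new floorplan, which is one simple form of the IP re-use mentioned above.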
Once FinFETs are employed, their nuances must also be factored in. For the physical design engineer, understanding how to color the circuit to maintain lithographic validity will be crucial in successfully manufacturing the final device.
At Cadence, we believe that extraction, DFM, and physical verification for advanced nodes can no longer be performed simply as a sign-off step. Remember, the layout engineer has an additional 400 complex rules to manage at 20nm. You just can’t wait for the design to be finished for sign-off. In-design sign-off greatly improves throughput here.
To be practical, in-design sign-off requires the ability to efficiently call sign-off verification engines at any time during design. However, this mode of working will only be acceptable if the verification engines can deliver immediate, sign-off-quality feedback during the implementation phase.
20nm is a journey
Getting to the 20nm node is a journey. While some things may be clearly deterministic, there are multiple methodology, design, and lithography issues that still need to be resolved as we learn more about the process. Let me highlight a couple of these issues.
Much of the 20nm development to date has been based on preliminary process design-rule manuals. Foundry and fabrication processes are still undergoing significant tuning to enhance yield. The DRM, PDKs, and DDKs are constantly under revision and these changes ripple through the associated design tools and methodologies.
Take the issue of double-patterning coloring for example. We have done a substantial amount of work with IBM looking at the issue of persistent color and whether or not it needs to be part of and move with the design. The design could simply be handed to the foundry as “gray” data with coloring done by the foundry. However, the only way to allow the foundry to decompose the design using double patterning post-layout is to enforce even more restrictive design rules, with possible negative effects on performance and area.
So is coloring a design problem or solely a manufacturing problem?
IBM’s conclusion is that custom design requires control of double patterning throughout the design process because coloring needs to be handled continuously, up and down through the design hierarchy, all the time. This is especially critical for ECO management, lithographic efficiency, and to enable rapid time to market.
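At its core, double-patterning decomposition is a graph 2-coloring problem: shapes closer than the single-mask pitch share a “conflict” edge and must land on different masks, and the layout is decomposable exactly when the conflict graph is bipartite. The following is a minimal sketch of that check, with invented shape names and conflict data; production decomposition tools handle stitches, hierarchy, and ECOs on top of this.

```python
# Minimal double-patterning check: 2-color the conflict graph with BFS.
# Shapes joined by a conflict edge must go on different masks; an odd
# cycle means the layout cannot be split across two masks as drawn.

from collections import deque

def color_masks(shapes, conflicts):
    """Assign mask 0/1 to each shape, or return None on an odd cycle."""
    adjacency = {s: [] for s in shapes}
    for a, b in conflicts:
        adjacency[a].append(b)
        adjacency[b].append(a)
    color = {}
    for start in shapes:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: redesign or stitch required
    return color

shapes = ["m1", "m2", "m3"]
print(color_masks(shapes, [("m1", "m2"), ("m2", "m3")]))
# {'m1': 0, 'm2': 1, 'm3': 0} -- decomposable
print(color_masks(shapes, [("m1", "m2"), ("m2", "m3"), ("m3", "m1")]))
# None -- a three-way conflict cannot be split across two masks
```

The odd-cycle failure is precisely why post-layout “gray” decomposition forces more restrictive design rules: the foundry can only guarantee bipartite conflict graphs by forbidding the geometries that create them.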
Earlier, I discussed pattern matching. Today, the design process is still very traditional in that layout problems are found and fixed after the fact. But what if the problem could be avoided? Let’s look at an example.
Here we have a simple shape built with proper spacing rules as defined in the technology file. This layout is DRC correct but it cannot be manufactured in 20nm due to the line pitch. We want to use minimum spacing between the blue and green lines, but what do we do with the grey rectangle?
We could split the shape and then try to stitch it back together leaving enough overlap in the two masks so that the shape will effectively fuse as one, as shown below.
However, stitching requires some pretty tricky maneuvering with the mask sets to ensure that the overlap and the overlay are perfectly aligned.
The other way to solve this problem is to teach the router about this pattern and to prevent the router from laying down the line too close to other objects in the first place, as shown below.
We want to avoid this problem in the first place, so placers and routers must survey the design and extend shapes when necessary so that the lithographic problem never occurs. This approach is more difficult for the placers and routers because it means a lot of patterns will need to be checked while routing, but taking this approach may well prove to be preferable to the traditional “rip up and retry” paradigm.
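The “avoid it up front” approach can be sketched as a pre-commit legality check: before laying down a wire, the router tests candidate positions against a minimum-pitch rule and rejects those that would create the unmanufacturable configuration, rather than ripping up and retrying later. The geometry below is reduced to 1-D routing tracks and the numbers are invented; real routers check full 2-D pattern libraries.

```python
# Toy router-side avoidance: reject wire positions that violate minimum
# pitch before committing them, instead of fixing violations afterward.

MIN_PITCH = 4  # minimum center-to-center spacing, in routing tracks (assumed)

def legal_track(candidate, occupied, min_pitch=MIN_PITCH):
    """True if a wire on `candidate` keeps min pitch to all occupied tracks."""
    return all(abs(candidate - track) >= min_pitch for track in occupied)

def route(preferred, occupied):
    """Take the preferred track if legal; else walk outward to a legal one."""
    for offset in range(16):
        for candidate in (preferred + offset, preferred - offset):
            if legal_track(candidate, occupied):
                return candidate
    return None  # nothing legal nearby; fall back to rip-up-and-retry

occupied = [0, 4]          # the blue and green lines at minimum pitch
print(route(5, occupied))  # 8 -- the third line is pushed to a legal track
```

Every candidate position costs a check, which is the extra routing work mentioned above, but the violation simply never enters the database.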
And there are many more challenges and advances yet to unfold in the 20nm realm.
With the end of optical lithography nearing, EUV’s significantly shorter wavelength appears to set the stage for a new manufacturing process. However, there are many challenges in terms of cost and throughput. For example, the EUV light is derived from a plasma light source, which is not yet bright enough for production-level manufacturing throughput, and EUV exposure must be performed in an extreme vacuum, whereas today’s optical lithography can occur in ambient atmosphere.
If EUV cannot cost effectively provide the required throughput, then conventional 193nm lithographic equipment will need to be extended. More multiple patterning requirements and potentially more exotic forms of structure deposition like directed self-assembly (DSA) will be needed. (With DSA, a block copolymer or polymer blend is deposited on a substrate and subjected to an annealing process that “directs” it to form ordered structures.) Researchers say DSA is compatible with conventional 193nm lithography equipment and would eliminate the need for double patterning.
Fulfillment of Moore’s Law continues, but it comes at a high cost for new equipment, new foundry techniques, and new design tools and solutions. As the manufacturing challenges are resolved, designers and their EDA vendors will also need to adapt. Developing all of these new tools and methods involves a lot more collaboration among the interested parties. Collaboration on 20nm process development has been underway for several years and design solutions and new methodologies are already emerging. Many 20nm test chips have already been completed. In addition, advanced work on 14nm process technology, FinFETs, and Tri-gate structures has been underway for some time. This work also requires substantial collaboration.
I’ve got a unique opportunity to merge technologies into solutions based on customer and foundry partnerships with a team of 800 people at Cadence covering custom design, simulation, extraction, physical verification, DFM, and lithography. In addition, my team works closely with their digital brethren enabling across-the-board SoC solutions for all process nodes. We are also working on these advanced nodes along with the foundries and other interested parties. There’s no other way.
Ultimately, consumers are demanding low-cost, low-power mobility solutions. That demand drives the quest for new 20nm ICs; it will drive the industry to 14nm; and the discussions regarding 10nm are already starting to unfold.
For Richard Goering’s take on Tom Beckley’s keynote, see “ISQED Keynote: 20nm From a Custom/Analog Perspective” in Richard’s Industry Insights blog.