Science and technology | Difference engine

Beyond Moore's law

Even after Moore’s law ends, chip costs could still halve every few years

LOS ANGELES

THERE is a popular misconception about Moore’s law (the observation that the number of transistors on a chip doubles every two years): that it is a law of nature, doomed sooner or later by physics. That belief has led many to conclude that the 50-year-old prognostication is due to end shortly. The doubling of processing power, for the same cost, has continued apace since Gordon Moore, one of Intel’s founders, observed the phenomenon in 1965. At the time, a few hundred transistors could be crammed on a sliver of silicon. Today’s chips can carry billions.
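For a sense of scale, the arithmetic below simply compounds that two-year doubling over the five decades since Mr Moore’s observation. The starting figure of 500 transistors is a round stand-in for “a few hundred”; everything else follows from the doubling itself.

```python
# Rough compounding check: a few hundred transistors in 1965,
# doubling every two years, gives billions five decades later.
start_year, end_year = 1965, 2015
start_count = 500                          # "a few hundred" (assumed round figure)
doublings = (end_year - start_year) / 2    # one doubling every two years

final_count = start_count * 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly {final_count:,.0f} transistors")
# ~25 doublings -> roughly 16,777,216,000 transistors, i.e. tens of billions
```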

Whether Moore’s law is coming to an end is moot. As far as physical barriers to further shrinkage are concerned, there is no question that, having been made smaller and smaller over the decades, crucial features within transistors are approaching the size of atoms. Indeed, quantum and thermodynamic effects that occur at such microscopic dimensions have loomed large for several years.

Until now, integrated circuits have used a two-dimensional (planar) structure, with a metal gate mounted across a flat, conductive channel of silicon. The gate controls the current flowing from a source electrode at one end of the channel to a drain electrode at the other end. A small voltage applied to the gate lets current flow through the transistor. When there is no voltage on the gate, the transistor is switched off. These two binary states (on and off) are the ones and zeros that define the language of digital devices.

However, when transistors are shrunk beyond a certain point, electrons flowing from the source can tunnel their way through the insulator protecting the gate, instead of flowing direct to the drain. This leakage current wastes power, raises the temperature and, if excessive, can cause the device to fail. Leakage becomes a serious problem when insulating barriers within transistors approach thicknesses of 3 nanometres (nm) or so. Below that, leakage increases exponentially, rendering the device pretty near useless.
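How sharply leakage rises can be seen with a rough quantum-tunnelling estimate. The sketch below uses the textbook exponential factor for a rectangular barrier; the barrier height and effective electron mass are generic values often quoted for silicon dioxide, chosen only to illustrate the trend, and are not figures from Intel or from this article.

```python
import math

# Rough WKB-style tunnelling estimate through a rectangular oxide barrier.
# Barrier height and effective mass are generic textbook values for SiO2,
# chosen only to illustrate the exponential trend, not any real gate stack.
HBAR = 1.055e-34          # reduced Planck constant, J*s
M_EFF = 0.4 * 9.11e-31    # effective electron mass in the oxide, kg (assumed)
BARRIER_EV = 3.1          # Si/SiO2 barrier height, eV (assumed)

kappa = math.sqrt(2 * M_EFF * BARRIER_EV * 1.602e-19) / HBAR  # decay constant, 1/m

for thickness_nm in (3.0, 2.0, 1.0, 0.5):
    transmission = math.exp(-2 * kappa * thickness_nm * 1e-9)
    print(f"{thickness_nm:>4} nm barrier -> relative leakage ~ {transmission:.1e}")
# Shaving a couple of nanometres off the barrier multiplies the leakage
# by many orders of magnitude.
```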

Features this small are increasingly common in today’s transistors. Intel’s latest Broadwell chips, for instance, are manufactured using 14nm process technology. (Such nomenclature refers to the smallest “half-pitch” between identical features on a chip.) Within the transistor itself, however, some of the features are considerably smaller than their half-pitch. In the latest Intel processors, the gate’s insulating layer is claimed to be just 0.5nm thick. This is little more than the width of a couple of silicon atoms.

Intel, which sets the pace for the semiconductor industry, started preparing for the leakage problem several “nodes” (changes in feature size) ago. At the time, it was still making 32nm chips. The solution adopted was to turn a transistor’s flat conducting channel into a vertical fence (or fin) that stood proud of the substrate. Instead of just one small contact patch, this gave the gate straddling the fence three contact areas (a large one on either side of the fence and a smaller one across the top). With more control over the current flowing through the channel, leakage is reduced substantially. Intel reckons “Tri-Gate” processors switch 37% faster and use 50% less juice than conventional ones.

Having introduced the Tri-Gate transistor design (now known generically as FinFET) with its 22nm node, Intel is using the same three-dimensional architecture in its current 14nm chips, and expects to do likewise with its 10nm ones, due out later this year and in mainstream production by the middle of 2016. Beyond that, Intel says it has some ideas about how to make 7nm devices, but has yet to reveal details. The company’s road map shows question marks next to future 7nm and 5nm nodes, and peters out shortly thereafter.

At a recent event celebrating the 50th anniversary of Moore’s law, Intel’s 86-year-old chairman emeritus said his law would eventually collapse, but that “good engineering” might keep it afloat for another five to ten years. Mr Moore was presumably referring to further refinements in Tri-Gate architecture. No doubt he was also alluding to advanced fabrication processes, such as “extreme ultra-violet lithography” and “multiple patterning”, which seemingly achieve the impossible by being able to print transistor features smaller than the optical resolution of the printing system itself.

But, sooner or later, all good things come to an end. As far as physical limits are concerned, most see the doubling of transistors per unit area ceasing around 2022, with 7nm being the last commercial processing node. For bragging rights, some chip-makers may pull out all the stops to produce 5nm devices—even though they are unlikely to offer any real performance gains over the previous generation. But after 5nm, nothing—at least, as far as silicon technology is concerned.

It is a mistake, however, to view Moore’s law as a prophecy about physical phenomena, doomed in the face of immutable laws of physics. If truth be told, Moore’s law was never anything more than an economic rule of thumb that morphed into a self-fulfilling axiom about process engineering. Essentially, it was more a way of scheduling manufacturing targets than a means of forecasting the performance of future processors. As such, Moore’s law has served as a metronome that lets Intel set the tempo of product announcements—and thereby encourages computer makers to keep coming back every couple of years for ever-more-powerful processors. Like it or not, the rest of the semiconductor industry, usually a node or two behind, has been obliged to follow Intel’s lead.

Another thing often overlooked is that Moore’s law has always been as much about reductions in the cost of transistors as about increases in their performance. When transistor density doubles, individual chips get smaller, allowing more of them to be printed on each silicon wafer. After the finished wafer is sliced and diced, individual chips then cost less. Because leads and contacts on a chip cannot be shrunk as easily as transistors, doubling the density of the latter does not quite halve the price of individual devices. But it comes pretty close.

Or it does up to a point. Unfortunately, as transistors get smaller, more defects creep in. There is thus a trade-off between complexity and cost. And, while the cost per transistor is almost inversely proportional to the number of transistors crammed in a chip, there comes a point where the decrease in yield (percentage of good chips on a wafer) begins to outweigh the benefits of the chip’s increasing complexity. In short, a minimum transistor cost exists for each particular node of processing technology.

And here’s the crunch: that minimum cost per transistor has been rising since 28nm chips hit the market several years ago, says Henry Samueli, co-founder and chairman of Broadcom, a fabless semiconductor firm based in Irvine, California. That is partly a result of decreasing yields, but also because of the escalating cost of the photo-lithography equipment needed to fabricate ever-smaller integrated circuits. “The cost-effectiveness seems to have hit a sweet spot at about 28nm,” says Dr Samueli.
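The shape of that trade-off can be captured in a toy calculation. In the sketch below, transistor density doubles at each node while yields fall and wafer-processing costs rise; every number is invented purely to show how a cost-per-transistor minimum can emerge around a node like 28nm, and none of it is industry data.

```python
# Toy model of cost per transistor across successive nodes.
# Every figure below is invented purely to illustrate the shape of the
# trade-off described in the text; none of it is industry data.
nodes_nm       = [65,   45,   32,   28,   20,   14]
rel_density    = [1,    2,    4,    8,    16,   32]    # doubling per node
rel_wafer_cost = [1.0,  1.1,  1.3,  1.5,  2.6,  4.5]   # pricier lithography (assumed)
yield_fraction = [0.95, 0.93, 0.90, 0.88, 0.65, 0.45]  # more defects as features shrink (assumed)

for nm, dens, wafer, y in zip(nodes_nm, rel_density, rel_wafer_cost, yield_fraction):
    # Cost per good transistor: wafer cost spread over (density x yield).
    rel_cost = wafer / (dens * y)
    print(f"{nm:>3} nm: relative cost per transistor = {rel_cost:.2f}")
# Early shrinks come close to halving the cost each time; once falling yields
# and dearer equipment dominate, the curve turns upward -- here at about 28nm.
```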

There is a lot to be said for sticking with legacy technology like 28nm. Transistor shrinkage over the years has left spare room on the chip to add specialised processing units for handling such services as graphics, video and cryptography.

As it is, the popularity of mobile communications and computing has encouraged semiconductor firms to embed as many features as possible in their processors. Such devices, known as “systems on a chip” (SoC), tend to devote around 65% of their real estate to memory, with the rest for everything else—including all the processor’s logic gates (transistors), the necessary input/output circuitry, and numerous analogue functions needed to run a phone, tablet, laptop or whatever. While it is possible to shrink the size of the logic on an SoC, the memory components do not scale anywhere near as well, and the analogue circuitry barely at all.
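A rough sum shows how little a full shrink buys such a chip. The 65% memory share comes from the paragraph above; the split of the remainder between logic and analogue circuitry, and the scaling factors assigned to each block, are illustrative assumptions only.

```python
# Rough sum: how much does a full node shrink actually buy an SoC?
# The 65% memory share is from the text; the 25% logic / 10% analogue
# split and the per-block scaling factors are illustrative assumptions.
blocks = {
    # name:     (share of die area, area scaling at the next node)
    "memory":   (0.65, 0.8),   # memory cells scale poorly (assumed factor)
    "logic":    (0.25, 0.5),   # logic gets the full ~2x density benefit
    "analogue": (0.10, 1.0),   # analogue and I/O barely scale at all
}

new_area = sum(share * scale for share, scale in blocks.values())
print(f"Die area after a full node shrink: {new_area:.1%} of the original")
# Roughly three-quarters of the original -- nowhere near the ~50%
# that a logic-only chip would enjoy.
```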

That means Moore’s law affects only a small portion of an SoC. As such a device is never going to gain significant cost or performance benefits from shrinking further, there is good reason to design SoCs around mature process technologies like 28nm, with their minimal costs. The SoC used in Apple’s latest iPhone employs 20nm technology. But then, Apple is willing to pay a small premium to shrink the overall size of the package further, gaining the crucial millimetres that make its phones yet slimmer and more elegant.

Will integrated circuits continue to see costs halve every few years, even if transistor densities no longer double? With SoC devices based on mature process technology, that is a distinct possibility. That said, silicon will, sooner or later, have to make room for gallium arsenide and for materials with even higher electron mobility, such as graphene.

As it is, a technology known as POET, developed over the past 20 years by a team at the University of Connecticut, promises to power the next wave of innovation in integrated circuits—by using gallium arsenide to combine optics and electronics on a single chip. The developers claim considerable improvements in power, speed and cost over today’s silicon-based chips. One way or another, it seems the 50-year era of driving semiconductor costs down through improvements in process technology is about to be superseded by a new age of making chips cheaper, faster and better through smarter design. If so, Moore’s law could be given a whole new lease of life.
