Thanks to this article from TechEye.Net, I was alerted to an extremely interesting roundtable discussion organized by the Kavli Foundation among three long-range nanotechnology research experts. The three experts are HP Research Fellow Stan Williams, director of the company’s Cognitive Systems Laboratory, who is working on memristor commercialization; Michelle Simmons, Scientia Professor and Director of the Australian Centre of Excellence for Quantum Computation and Communication Technology at the University of New South Wales; and Paul Weiss, Kavli Professor at UCLA and Director of the California NanoSystems Institute. All three have deep insight into the frontiers of electronics that lie beyond the fringes of current semiconductor practice. What caught my eye were the following statements by Williams about the irrelevance of Moore’s Law:
Williams’ comments from the roundtable writeup:
- We have such a long way to go. People talk about reaching the end of Moore’s Law, but really, it’s irrelevant.
- Transistors are not a rate-limiting factor in today’s computers. We could improve transistors by a factor of one thousand and it would have no impact on the modern computer.
- The rate-limiting parts are how you store and move information. These are visible targets and we know what we have to do to get there.
- We can continue to improve data centers and computers at Moore’s Law rates — doubling performance every 18 months — for at least another 20 years without getting into something like quantum or neuronal computing.
- That said, it’s a lot of fun to think about building things that compute more like a brain, and we’re doing some of that. But it will be 20 years before any of that is actually needed, because there are so many other improvements we can make first.
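To put Williams’ “doubling performance every 18 months for at least another 20 years” claim in perspective, here is a quick back-of-the-envelope calculation of the cumulative improvement that rate implies. This is my own arithmetic for scale, not a figure from the roundtable:

```python
# Back-of-the-envelope: cumulative improvement implied by
# "doubling performance every 18 months for another 20 years".
# (My own arithmetic for perspective, not a number from the roundtable.)

doubling_period_years = 1.5   # 18 months per doubling
horizon_years = 20.0          # the time span Williams cites

doublings = horizon_years / doubling_period_years   # about 13.3 doublings
improvement_factor = 2 ** doublings                 # roughly 10,000x

print(f"{doublings:.1f} doublings -> ~{improvement_factor:,.0f}x overall improvement")
```

In other words, staying on that curve for two more decades would mean roughly four orders of magnitude of improvement before anything like quantum or neuronal computing becomes necessary.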
Williams also made two more cogent observations:
- By the time you get into systems, people don’t publish. I see that myself. Our group publishes about our devices, but practically nothing at the systems level. There’s a good reason for that. Systems are where lots of know-how and differentiation come from. When things start to move into systems, it’s like a black hole, they just disappear from sight.
- The other thing I learned is that even the most revolutionary new technology has to be introduced as evolutionary because the market doesn’t like disruption.
I commend the entire roundtable discussion to your attention.
If you, like me, have never heard of the Kavli Foundation, it’s worth a few minutes of your time to investigate it. Fred Kavli, a Norwegian-born U.S. citizen who was a physicist, entrepreneur, business leader, innovator, and philanthropist, founded the Kavlico Corporation in Los Angeles in the late 1950s. The company became a major supplier of sensors for aeronautic, automotive, and industrial applications. He later started the Kavli Foundation, which is dedicated to supporting research and education that has a positive, long-term impact on the human condition.
In addition, Kavli endowed two chairs in engineering at the University of California, Santa Barbara: the Fred Kavli Chair in Nanotechnology and the Chair in Optoelectronics and Sensors. Through the Foundation, he has also endowed chairs in neuroscience at Columbia University, Earth systems sciences at the University of California, Irvine, nanoscience at the University of California, Los Angeles, and theoretical physics at the California Institute of Technology.
Our limitation is our brain.
Nothing less, nothing more.
Please look at “organic circuits”:
http://en.wikipedia.org/wiki/Organic_electronics
“new technology has to be introduced as evolutionary because the market doesn’t like disruption”
I have no idea what this can mean. The basic definition of “Disruptive Innovation” is an innovation that is so popular that it entirely disrupts the market. Examples include the Model T, the Walkman, the iPad interface…
Obviously, designers will resist learning completely new approaches unless they can see significant benefit – but if they perceive that a technology requiring those new approaches has significant advantages, it will break through, and the designers and the market will love it. That is the meaning of “disruptive technology”.
I can identify just two groups that do not like disruptive innovation: the majority of marketeers, because anything truly new is unpredictable and they believe they are measured on prediction accuracy; and people with excess financial commitment to technologies that the disruptive technologies will displace.
To take an example where Williams uses this argument as a reason for delayed introduction, the memory market: I don’t believe that financial commitment is relevant, because new fab investment is needed regardless of the technology chosen.
In this case, I see two possible reasons for the slowness: either supplier conservatism (driven by marketeers) or a technology that is not yet ready to be disruptive. Either Williams is overstating the capability of the technology, or he is failing to convince the relevant managers of its disruptive potential.