The Self-Imposed Limits of Computing
For more than five decades, computing power has doubled roughly every two years, but in recent years companies have been prioritizing short-term profits over innovation.
Does your computer from only a couple of years ago feel like an archeological relic of a lost generation? Compare its speed to that of today's machines and the old computer seems like a tortoise. Top technology companies such as Apple and AMD have been doubling the performance of their previous generation every year or two by shrinking transistors, the basic building blocks of processors, and packing more of them together. The exponential growth in performance over the past five decades has been mind-boggling. A smartphone from a couple of years ago, for example, has over 100,000 times the computing power and 1,000,000 times the memory of the computer used for the Apollo missions. Intel co-founder Gordon Moore first described this trend, which came to be known as Moore's law: the number of transistors on a chip doubles roughly every two years, and computing power doubles with it. While Moore's law has largely held to this day, traditional computing now seems to be approaching its performance limit.
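To get a feel for what doubling every two years means, here is a minimal back-of-envelope sketch in Python. It assumes Intel's 1971 4004, with roughly 2,300 transistors, as a starting point and simply applies the doubling rule; real industry numbers vary, so treat the output as an illustration rather than data.

```python
# Back-of-envelope illustration of Moore's law: transistor counts
# doubling every two years, starting from Intel's 4004 (1971, ~2,300 transistors).

BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300        # Intel 4004
DOUBLING_PERIOD_YEARS = 2       # Moore's law as stated in the article

def projected_transistors(year: int) -> float:
    """Project a transistor count for `year` under ideal Moore's-law doubling."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1981, 2001, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

# 2021 projects to roughly 77 billion transistors -- the same order of
# magnitude as today's largest chips, which is why five decades of
# doubling feels so dramatic.
```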
Though more transistors are being packed onto processors, the chips themselves have stayed the same size or even shrunk. This counterintuitive trend has been possible only because the transistors themselves keep getting smaller, thanks to billions of dollars poured into research and development. In the 1970s, the 10-micron process was the state of the art; today, the most advanced fabrication plants build transistors at the so-called 3-nanometer node, a shrink of more than a factor of 3,000. With features approaching the size of an atom, electrons start misbehaving and tunneling through transistor gates in unexpected ways, which makes reliable computing at this scale nearly impossible. Traditional computing is running into limits set by atoms themselves.
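As a rough sanity check on that shrink, the two-line calculation below is only a sketch: "3 nm" is a marketing node name rather than a literal transistor dimension, and silicon's 0.543 nm lattice constant is included just to show how few atoms such a feature would span.

```python
# Rough scale comparison between the 1970s process and today's node names.
MICRON_1970S_NM = 10_000      # 10-micron process, expressed in nanometers
NODE_TODAY_NM = 3             # "3 nm" node (a marketing label, not a literal size)
SILICON_LATTICE_NM = 0.543    # silicon lattice constant, for a sense of atomic scale

print(f"Shrink factor: ~{MICRON_1970S_NM / NODE_TODAY_NM:,.0f}x")      # ~3,333x
print(f"'3 nm' spans only ~{NODE_TODAY_NM / SILICON_LATTICE_NM:.1f} "
      f"silicon lattice constants")                                    # ~5.5
```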
In fact, Moore himself predicted that exponential growth could not be sustained forever. Similarly, Nvidia CEO Jensen Huang considers Moore's law to be dead, citing cost as a major reason. When it takes more than 10 billion dollars to develop a new process, company executives are reluctant to invest for the long term, because there are no immediate returns for investors. So instead of simply moving to a smaller, more efficient process, companies are chasing performance by other means, most notably by sharply increasing power consumption. The previous consensus was that each doubling of computing power would come at no additional power cost, but the last several years have proven that prediction false. Manufacturers now raise power targets to produce leaps in computing power and maximize appeal to consumers and investors. Intel's flagship central processing unit (CPU) in late 2015, the i7-6700K, was built on the 14 nm process and drew around 120 watts under load. Today a chip that is no longer even Intel's newest flagship, the i9-12900K, is more than twice as powerful and draws over 250 watts. The trend with graphics processing units (GPUs) is even worse: Nvidia's current flagship carries a factory power rating of over 450 watts, a drastic jump from the roughly 250 watts of its predecessors. Many of the high-end graphics cards Nvidia and AMD ship today draw more than 350 watts, and some exceed 400 watts. That's more than some small space heaters, and it's only one of the processors in your computer!
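Using the article's own figures, a quick performance-per-watt comparison shows why this power creep matters. The "more than twice as powerful" multiplier is taken at face value here, so the numbers are illustrative rather than measured.

```python
# Illustrative performance-per-watt comparison using the figures cited above.
# Performance is expressed relative to the i7-6700K (set to 1.0).

old_perf, old_watts = 1.0, 120   # i7-6700K, ~120 W under load (article's figure)
new_perf, new_watts = 2.0, 250   # i9-12900K, "more than twice as powerful", >250 W

old_efficiency = old_perf / old_watts
new_efficiency = new_perf / new_watts

print(f"Performance gain: {new_perf / old_perf:.1f}x")
print(f"Power increase:   {new_watts / old_watts:.1f}x")
print(f"Perf-per-watt:    {new_efficiency / old_efficiency:.2f}x")
# ~0.96x: nearly all of the headline speedup was bought with extra power,
# not with more efficient transistors.
```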
The culprit is Moore's second law, better known as Rock's law: the cost of a fabrication plant for a new manufacturing process doubles every four years. The latest 3 nm process already carries a steep price of over $15 billion per fabrication plant. Even if the size limitations could be overcome and an even smaller process put into production, the price tag would be far higher. On top of that, the research and design for these chips costs billions more, and manufacturers are already struggling to keep up. Instead of improving their transistors, Nvidia, Intel, and AMD are all choosing to crank up power targets. As long as these companies continue to seek short-term profits, power targets will keep rising and transistor progress will remain stagnant. Beyond the obvious increase in electricity bills, computers that draw more power also dump more of that energy back into the room as heat.
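Rock's law compounds just as relentlessly as Moore's. The short sketch below assumes the roughly $15 billion figure quoted for a 3 nm fab as today's baseline and simply applies the four-year doubling, so the projections are illustrative only.

```python
# Rock's law sketch: fab cost doubling every four years,
# anchored to the ~$15 billion figure quoted for a 3 nm plant.

BASE_COST_BILLION = 15
DOUBLING_PERIOD_YEARS = 4

def projected_fab_cost(years_from_now: int) -> float:
    """Projected fab cost (in billions of dollars) under Rock's law."""
    return BASE_COST_BILLION * 2 ** (years_from_now / DOUBLING_PERIOD_YEARS)

for years in (4, 8, 12):
    print(f"+{years} years: ~${projected_fab_cost(years):.0f} billion per fab")
# +4 years: ~$30B, +8 years: ~$60B, +12 years: ~$120B -- which is why
# executives balk at funding the next process generation.
```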
However, all hope is not lost for the future of computing. When companies decide to innovate again, they will have plenty of workarounds and new technologies for getting past the physical limits of computing. For instance, researchers are studying a new class of two-dimensional materials, only atoms thick, that could sidestep the size limits of traditional three-dimensional silicon transistors and be stacked in layers for greater density. More power-efficient architectures such as ARM and RISC-V are also on the rise, offering an alternative path for improving mainstream computing power without requiring customers to own a small nuclear power station to run their computers. Regardless, that isn't the future Nvidia, Intel, and AMD are mainly choosing. While they may say they're still innovating, the truth is clear: cranking up power consumption is a cop-out that maximizes short-term profits instead of solving a technological problem.