CPU and GPU clock speeds. Cores. Physical constraints. And the future of conventional computing. (Not quantum)

OK. So, many years back, when we were reaching clock frequencies of around 3 GHz, I had assumed that because of the physical constraints of chip design and manufacturing, namely that one cannot etch a single transistor on a silicon wafer any smaller than a single atom wide with one’s etching laser, it was all about to be over: computers weren’t going to be getting any quicker, and that would be the end of all that.

Of course, what happened next was that computer engineers simply added a second core to the CPU and dialled down the frequency of each core, so that the chip as a whole could be just a little bit faster than the previous year’s chip, and that’s essentially what has been happening ever since.

Extra cores get added, and the overall performance of the CPU product as a whole has kept increasing along the same Moore’s-law curve ever since, just as it had done for some decades previously, with the incremental improvements in speed every two years being kept stable, likely for economic, capitalistic reasons I expect, but that’s not the discussion I want people to have here.
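
As a very rough back-of-the-envelope illustration of what I mean (all the core counts, clock speeds and instructions-per-clock figures below are invented, not real product specs), total throughput is roughly cores × clock × instructions per clock, so adding cores keeps the total climbing even when clocks stay flat:

Code:
# Illustrative only: made-up generations showing how aggregate throughput
# can keep rising while per-core clock speed stays roughly flat.
def aggregate_throughput(cores: int, clock_ghz: float, ipc: float) -> float:
    """Crude proxy: total instructions per second across all cores."""
    return cores * clock_ghz * 1e9 * ipc

generations = [
    # (label, cores, clock GHz, instructions per cycle) -- hypothetical values
    ("single core", 1, 3.0, 1.0),
    ("dual core",   2, 2.4, 1.2),
    ("quad core",   4, 3.2, 1.8),
    ("8 core",      8, 3.6, 2.5),
    ("16 core",    16, 3.8, 3.0),
]

for label, cores, ghz, ipc in generations:
    tput = aggregate_throughput(cores, ghz, ipc)
    print(f"{label:12s} ~{tput / 1e9:6.1f} G instructions/s")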

However, I read a headline the other month saying that a senior Nvidia engineer had recently stated that they had reached a ‘brick wall’ with regard to processing power.

Is it the case that chip manufacturers are now unable to add any further physical cores to our CPUs and GPUs, and has the industry reached the point where we are, or will soon be, no longer able to manufacture faster computers? And what do people think this would mean for the industry? Not just the hardware industry, but also the gaming industry, which I think we can safely assume has been one of its primary driving forces.

I’m hoping this will generate an interesting, thought-provoking and informative discussion, as I know there are a whole bunch of smart people here on these forums who know far more about computer engineering than I do, but there will also be a lot of people who know far less than me and who would be just as well enlightened by reading such a discussion and hearing other people’s better-informed viewpoints on the matter.

Some of the questions I would posit are: do we really need faster computers now anyway? And should we instead be trying to make computers that are more energy-efficient rather than faster? Or perhaps with better recyclability in mind, rather than cheaper and more efficient disposability being the primary manufacturing goal?

Obviously, a business computer that just needs to run a web browser together with some office software doesn’t need the same processing power that a gaming rig requires, so energy can be conserved there by not needing such fast clock speeds or as much memory and storage. The business computer I am typing this document on consumes at most 30 W including the 26” monitor it runs with, and it’s still way too over-powered for what I do on it, IMHO. But so many computer games no longer require such high-powered graphics, being more focused on gameplay, and keeping development time efficient is an important factor too. So does this now mean that the graphics of our computer games, which have been improved upon every year since the very first computer game and have always been the primary selling point, are no longer as important as they once were, and are no longer able to drive the hardware industry into manufacturing ever more powerful and graphically capable computers anyway?

Please let’s stick to conventional computing as we know it, and not deviate too much into the realms of quantum computing or any other unusual, esoteric computer types that may only be theoretical and not actually practical or grounded in reality.
 
In modern processors, only about 10% of the power is actually used to calculate things. The other ~90% is used for transporting data, for memory, etc. All this heat is stuck in the processor, and that is one of the main problems today. We can stack cores, but only up to a certain limit, because the heat has no way to escape the processor. This is just one of the ways today's processors can improve apart from making smaller transistors.
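
As a quick back-of-the-envelope example of that heat problem (the die area and per-layer power below are made-up numbers, purely for illustration): stacking dies multiplies the power, but the footprint the cooler can pull heat through stays the same.

Code:
# Illustrative only: why stacking dies runs into a thermal wall.
die_area_mm2 = 150.0      # assumed footprint of one die layer
power_per_layer_w = 60.0  # assumed power per die layer

for layers in (1, 2, 4):
    total_power_w = power_per_layer_w * layers
    # The cooler still only sees the same top-surface area,
    # so the power density it must remove scales with layer count.
    density = total_power_w / die_area_mm2
    print(f"{layers} layer(s): {total_power_w:5.0f} W total, "
          f"{density:4.2f} W/mm^2 through the same footprint")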

Some of the other ways we could improve computing power are to use different architectures, like RISC-V, for CPUs, or to start using different materials, like germanium (not going to happen soon).

Gaming is not really the main driver for the industry. Look at Nvidia's quarterly numbers:

[Attachment: chart of Nvidia's quarterly revenue by segment]


So gaming is at most $3.7 billion... I asked Copilot and it answered the following: "NVIDIA's gaming revenue for Q4 2025 was $2.54 billion".
That is 6-7% of Nvidia's revenue.
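
For what it's worth, here is the rough arithmetic behind that percentage; the $39.3 billion total is my assumption for the same quarter (roughly what NVIDIA reported for Q4 FY2025) and may be slightly off.

Code:
# Sanity check on the "6-7%" figure. Gaming revenue is taken from the
# Copilot answer quoted above; total revenue is an assumed figure.
gaming_revenue_b = 2.54   # $ billions (quoted above)
total_revenue_b = 39.3    # $ billions (assumed quarterly total)

share = gaming_revenue_b / total_revenue_b * 100
print(f"Gaming is roughly {share:.1f}% of total revenue")  # ~6.5%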
 
It would still be interesting to know how many of those data centres are running game servers, not just YouTube.
(I did have a quick Google for that, but found nothing conclusive.)
 
However, I read a headline the other month saying that a senior Nvidia engineer had recently stated that they had reached a ‘brick wall’ with regard to processing power.

Not quite a brick wall yet, but it's getting dramatically harder to shrink transistors as well as to make chips larger economically (we're at the reticle limit).
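
For anyone unfamiliar with the term: the reticle limit is the maximum area a lithography scanner can expose in one shot, about 26 mm × 33 mm, so a monolithic die tops out around 858 mm². A rough sketch (the example die sizes are assumptions, not specific products):

Code:
# The reticle limit: one exposure field is roughly 26 mm x 33 mm.
reticle_field_mm2 = 26 * 33  # ~858 mm^2 maximum monolithic die size

example_dies = [
    ("mid-range GPU (assumed)",       300),
    ("large gaming GPU (assumed)",    600),
    ("near-reticle AI die (assumed)", 820),
]

for name, die_mm2 in example_dies:
    frac = die_mm2 / reticle_field_mm2
    print(f"{name:30s} {die_mm2:4d} mm^2  ({frac:4.0%} of the reticle field)")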

Ballooning costs are also leading to a major lack of competition that could drive further advancements. The barriers to entry in the semiconductor fabrication industry are much higher than they used to be. We are down to a single tier-1 semiconductor manufacturer (TSMC) and a handful of tier-2s (Samsung, Intel, and maybe a few others if one counts DRAM).

Some of the questions I would posit are: do we really need faster computers now anyway?

Yes. Well, not everyone does, obviously, but there is real demand for more processing power across all kinds of markets for all kinds of reasons.

And should we instead be trying to make computers that are more energy-efficient rather than faster?

They're not really separate issues. Most hurdles to increased performance are also barriers to efficiency, and vice versa.

Also, the main driving force for semiconductor progress is cost reduction; it's just that cost reduction implies more efficiency, which can be traded for more performance within given budgetary constraints (power and money).
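
As a rough illustration of that coupling, here is a sketch using the textbook dynamic-power relation P ≈ C·V²·f; the capacitance, voltage and frequency values are invented, chosen only to show the shape of the trade-off.

Code:
# Illustrative only: efficiency and performance trade through voltage and
# frequency. Dynamic power scales roughly as P = C * V^2 * f.
def dynamic_power_w(cap_nf: float, volts: float, freq_ghz: float) -> float:
    return (cap_nf * 1e-9) * volts**2 * (freq_ghz * 1e9)

cap_nf = 5.0  # assumed switched capacitance per core

# One fast, high-voltage core vs. two slower, lower-voltage cores:
one_fast = dynamic_power_w(cap_nf, volts=1.20, freq_ghz=5.0)
two_slow = 2 * dynamic_power_w(cap_nf, volts=0.95, freq_ghz=3.5)

print(f"1 core  @ 5.0 GHz, 1.20 V: {one_fast:5.1f} W, ~5.0 G cycles/s")
print(f"2 cores @ 3.5 GHz, 0.95 V: {two_slow:5.1f} W, ~7.0 G cycles/s total")

With these made-up numbers the two slower cores deliver about 40% more total cycles per second for slightly less power, which is exactly the kind of efficiency-for-performance trade described above (assuming the workload parallelises).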

Or perhaps with better recyclability in mind, rather than cheaper and more efficient disposability being the primary manufacturing goal?

Profit trumps sustainability. Manufacturers and suppliers don't want durability, repairability, or recyclability. Planned obsolescence driving continued sales is the name of the game. Policy changes will be needed to rein in the anti-consumer e-waste factories that most system integrators have become.

But so many computer games no longer require such high-powered graphics, being more focused on gameplay, and keeping development time efficient is an important factor too. So does this now mean that the graphics of our computer games, which have been improved upon every year since the very first computer game and have always been the primary selling point, are no longer as important as they once were, and are no longer able to drive the hardware industry into manufacturing ever more powerful and graphically capable computers anyway?

GPU hardware is rapidly being priced beyond the reach of gamers because that hardware, or hardware that can be made the same way, commands dramatically higher margins.

NVIDIA can devote their TSMC 4N wafer allocation to making RTX gaming video cards, or they can order AI accelerators that can be sold for five times as much per unit area instead. If this trend continues, games will have to adapt to stagnating consumer GPU performance.
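
A back-of-the-envelope sketch of that trade-off; the die sizes, yields and selling prices below are my assumptions (picked so the AI silicon earns roughly five times as much per unit of area, as described above), not NVIDIA's actual figures.

Code:
# Illustrative only: revenue per wafer for gaming dies vs. AI dies,
# with assumed die sizes, yields and prices.
wafer_area_mm2 = 70_000  # ~300 mm wafer, ignoring edge losses

def revenue_per_wafer(die_mm2: float, yield_frac: float, price_usd: float) -> float:
    good_dies = (wafer_area_mm2 / die_mm2) * yield_frac
    return good_dies * price_usd

gaming = revenue_per_wafer(die_mm2=380, yield_frac=0.80, price_usd=600)
ai     = revenue_per_wafer(die_mm2=800, yield_frac=0.60, price_usd=6_300)

print(f"Gaming GPU dies    : ~${gaming:,.0f} per wafer")
print(f"AI accelerator dies: ~${ai:,.0f} per wafer")

Even with fairly conservative assumptions, the AI wafer brings in several times the revenue, so the incentive to divert wafer allocation away from gaming is obvious.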

As far as hardware manufacturers are concerned, gaming is a low-priority sideshow.
 