How Much VRAM Do Gamers Need? 8GB, 12GB, 16GB or MORE?

I doubt the average population needs to care much about this. It's a question for high-end use cases.
If we discount people who don't play even moderately demanding games on PC, I'd argue that for those who do, now that 180 Hz QHD monitors can be bought for 200€, the question of "does 8 GB of VRAM suffice?" is no longer only for high-end users. It's for the average person who plays video games and cares about smooth frametimes and picture quality, who maybe wants to run the likes of God of War and Assassin's Creed Valhalla at 60-90 FPS on QHD at high settings without stutters, texture pop-in and other nasties.

The question could become even more pertinent when LLMs and other AI shenanigans start being used in AAA games. The RX 7800 XT I have has 120 built-in AI accelerators, twice as many as it has CUs, and game makers will take advantage of them sooner or later🤪

Esports players running 1080p at low settings and 360 Hz are the only players who don't need to care about VRAM, since they are mostly CPU-limited; and retro gamers have no hardware limitations at all, except for the availability of the original hardware if they want to use that instead of emulation🙃
 
It certainly affects those who are addicted to flashy shows, many of which barely qualify as games anymore. Don't confuse the triple A trash for games. It certainly is a money-milking machine and pushes hardware limits, but only as a means of marketing: artificial superlatives with zero gameplay relevance.
 
I certainly like high res and playing with 2x supersampling instead of using AA or even crutches like FSR and DLSS, but since Nvidia killed SLI that has become hard.
 
Don't confuse the triple A trash for games.
There's trash in every budget category. Games like TLOU, Horizon: Forbidden West, Elden Ring, Death Stranding and others are far from trash.

Also, demanding != AAA. Unreal Engine allows for visually stunning indie games that take advantage of all the latest hardware capabilities.
 
Most upcoming UE5 games turn me off. No doubt the engine can do a lot, but most of these games just look the same. Only a few stand out from the crowd, where the devs went the extra mile to tweak the engine to fit their game.
 
That's the beauty and the curse of UE. It makes creating stunningly photorealistic games so easy, but if you want your game to look different from all the other stunningly photorealistic games, you need to go the extra mile🙂

And even then, UE has this distinct look to its lighting and materials. Although, frankly, the same applies (to a degree) to other game engines too, and even stylized indie games tend to fall into the camps of "simple low-poly aesthetics" and "pixel art 2D".

Going off on a tangent: if you want truly unique-looking games, look at Cosmo D's creations. They playfully discard all the conventional notions of "good graphics" and go with a DIY punk mindset mixed with beatnik ideals and Tomato aesthetics.
 
There's trash in every budget category. Games like TLOU, Horizon: Forbidden West, Elden Ring, Death Stranding and others are far from trash.
Triple A has been notorious for pushing gfx at the cost of gameplay for the last two decades. It's been an arms race that has also eliminated many competitors. I wouldn't count console games and conversions in the mix - they are a different story, and all the games you list have some console relation. Maybe the weaker hardware has forced console games to offer a modicum of more and better gameplay?
 
Triple A has been notorious for pushing gfx at the cost of gameplay for the last two decades.
IMO only if you look at the "usual suspects": Ubisoft, EA et al; and franchises like Assassin's Creed. It tends to be the usual corporate "we made a thing that happened to become really popular, now let's try to milk as much profit from it as we can!" and "don't change a formula that worked once!"
 
Yeah, that's another perk of triple A - the risk aversion that keeps them from doing anything new and genuine. I think it was around the millennium mark when Hollywood discovered the gold mine that the Marvel comic universe adapted for movies would be. Twenty years of remakes, "platform conversions" and redos - it hasn't really been a creative festival. But it ties together with the gfx arms race - publishers are risk averse because new stuff gets ever more expensive. CGI, pushing hardware for more gfx - it costs more capital and puts it at risk, so why risk a chunky failure when you can just redo an old success?
 
But it ties together with the gfx arms race - publishers are risk averse because new stuff gets ever more expensive.
...and at the same time, tech like photogrammetry, motion capture and audio recording is becoming more and more democratized, and Unreal Engine makes it easier than ever to tie these assets into a game. Big studios can't keep competing on pretty graphics alone forever when a tight small indie team with vision and will can beat them at everything, from writing and gameplay to art direction and graphical fidelity.
 
Every couple of years the cards are shuffled again, but the standards have risen. Maybe there will be a day when some equilibrium has been reached - when the human eye cannot discern any finer resolution? Until then I think there will always be a push to market some new technology a game doesn't need, just to beat the competition.
 
Source: https://www.youtube.com/watch?v=P2qs2lLdWHY


The most interesting takeaway from this video is something I've noticed in my own testing... memory allocation at the same settings can occasionally vary considerably between cards with different VRAM capacities, even if they're using the same drivers, without a corresponding difference/loss of performance. Allocation is usually a pretty good indicator that significantly less VRAM may be problematic, but it isn't damning evidence in all cases, especially borderline ones. There are probably plenty of assets that can be evicted with negligible performance impact in most games.
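
If anyone wants to reproduce that kind of comparison themselves, here's a minimal sketch (assuming an NVIDIA card with nvidia-smi on the PATH; the 5-second poll interval and GPU index are arbitrary) that logs allocation over time while a game runs, so the curves from an 8 GB and a 16 GB card at identical settings can be compared:

```python
# Minimal VRAM allocation logger - a sketch, not a benchmarking tool.
# Assumes an NVIDIA GPU and that nvidia-smi is available on the PATH;
# note it reports memory *allocated*, not memory the game actively needs.
import subprocess
import time


def vram_used_mib(gpu_index: int = 0) -> tuple[int, int]:
    """Return (used, total) VRAM in MiB as reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", f"--id={gpu_index}",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    used, total = (int(x) for x in out.split(","))
    return used, total


if __name__ == "__main__":
    while True:
        used, total = vram_used_mib()
        print(f"{time.strftime('%H:%M:%S')}  {used}/{total} MiB allocated")
        time.sleep(5)
```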
 
Most current games, especially the games reviewed... like he mentions at the end of the video... are predominantly console targeted, the likes of Horizon FW or Ghost of Tsushima having come over from PS4/5. Current-gen mainstream consoles (Xbox Series S/X or PS5) have a total of 16 GB of unified RAM to play with, so around 12 GB (variable, given console architecture) used for VRAM is the norm for graphically intensive console titles.
 

The Xbox Series X has a split 10/6 GiB memory pool and the PS5 has a unified 16 GiB pool, but yes, most games developed for consoles will be targeting under 12 GiB of VRAM, and this is certainly a reason why so few games seem to benefit from more. However, even console ports occasionally get more detailed assets (or never ran that well on the consoles in the first place, with memory constraints being a possible contributor), and there are PC-specific options/features that can inflate VRAM utilization.
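
For a rough sense of where that sub-12 GiB target comes from, here's the back-of-the-envelope budget (the OS reserve figures are approximate, commonly reported numbers, not official specs):

```python
# Back-of-the-envelope console memory budgets (approximate, reported figures).
PS5_TOTAL_GIB = 16.0        # unified GDDR6 pool
PS5_OS_RESERVE_GIB = 2.5    # roughly what the OS is reported to keep for itself
ps5_game_budget = PS5_TOTAL_GIB - PS5_OS_RESERVE_GIB   # ~13.5 GiB for the entire game

XSX_FAST_POOL_GIB = 10.0    # "GPU-optimal" higher-bandwidth partition
XSX_SLOW_POOL_GIB = 6.0     # slower partition shared with the OS and CPU-side data

# Once a few GiB of the game budget go to CPU-side data, the GPU is left with
# something on the order of 10-12 GiB - which is roughly what console-first
# titles end up "needing" as VRAM when ported to PC.
print(f"PS5 game budget: ~{ps5_game_budget} GiB; XSX fast pool: {XSX_FAST_POOL_GIB} GiB")
```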

This is one of the reasons why I'm becoming more hesitant to recommend the higher-end AD104 parts with 12 GiB of VRAM (specifically the RTX 4070 Super and the RTX 4070 Ti). These parts are getting into the territory where they're fast (and expensive) enough that maximum-quality textures should be a given, and where people are going to want to use ray tracing and DLSS frame generation where available (otherwise it would be more rational to buy the AMD equivalents that offer the same rasterization performance for less money), all of which inflate VRAM utilization. VRAM use is not going to get lighter either; 12 GiB is a good place, for a while, for a card that can't do more than 1440p or manage heavy RT effects, but dropping 600-700 bucks on a card that one will need to start turning texture quality down on in a year or two is not going to appeal to many people. I mean, I replace my main GPU every two years or so, but not everyone is inclined to do this.

More is better...

When all other things are equal, absolutely. However, there are plenty of cases where the trade-off, past a certain point, isn't worthwhile vs. a part with less, but still adequate, VRAM.

The 4060 Ti 16GB is a prime example of this. It has very few meaningful advantages over the now much cheaper RX 7700 XT and gets the stuffing beaten out of it by the barely more expensive RX 7800 XT (which also has 16 GiB) or the moderately more expensive RTX 4070 (with 12 GiB). If a game or combination of settings really needs more than 12 GiB of VRAM, the RTX 4060 Ti 16GB is unlikely to even deliver a playable experience. NVIDIA's use of VRAM capacity to artificially segment GPUs is just a big middle finger to consumers in general, and a bait-and-switch for the less technically knowledgeable ones.
 
I am pretty sure I am a member of that group. By segmenting do you mean they pop the same stuff on, then make it 32GB and that turns a 4060 into a 4070?

They have a series of price segments they have to fill to round out a lineup and maximize revenue. Typically the prices of higher tiers are justified by greater performance and features. However, there are cases where the price jump is there, but the product is fundamentally the same, except for some largely arbitrary handicap in the cheaper part, or some irrelevant pseudo-advantage for the more expensive one.

The RTX 4060 Ti and RTX 4060 Ti 16GB are the exact same GPU on the exact same board; it's just that the latter has four more memory ICs on the back in addition to the four on the front. The ninety seconds of (entirely robotic) labor and four parts with a spot price of about two dollars apiece are the only physical differences here. NVIDIA has decided they need a ~100 buck spread between mainstream parts to fill out their line-up. They've chosen to do that in this segment by charging $100 more for the card that costs $10-15 more to make. They then funnel people toward the more expensive card by giving the cheaper part a very borderline VRAM capacity, and away from competitor parts by playing up the almost uselessly large pool of the pricier card in this segment, while trying to avoid cannibalizing their faster and more expensive cards (which have an intermediate amount of VRAM) by not mentioning VRAM too prominently when it comes to those. People frequently buy parts based on VRAM capacity, when it's only tangential to the performance of that part.
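
Spelling the margin math out with the figures above (the assembly/test overhead is my own rough guess, and GDDR6 spot prices obviously move around):

```python
# Rough upsell margin on the 4060 Ti 16GB vs the 8GB card - illustrative only.
EXTRA_ICS = 4               # additional GDDR6 packages on the back of the PCB
IC_SPOT_PRICE_USD = 2.00    # approximate spot price per IC (quoted above)
OVERHEAD_USD = 5.00         # assumed allowance for placement, test and warranty risk

bom_delta = EXTRA_ICS * IC_SPOT_PRICE_USD + OVERHEAD_USD    # ~13 USD extra to build
price_delta = 100.00                                        # USD gap between the MSRPs

print(f"extra cost to build: ~${bom_delta:.0f}, extra charged: ${price_delta:.0f}")
print(f"incremental margin on the upsell: ~${price_delta - bom_delta:.0f}")
```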

I mean everyone plays these market segmentation and product nomenclature games, but this is one of the standout examples of where deliberate lack of clarity can lead consumers to make expensive mistakes.
 
Somewhere in my layman's knowledge there is a residue that claims it's the number of transistors, or whatever the logic units are called, that mainly determines GPU performance. The performance boost of old calculator chips wasn't due to memory - it was always down to the chips that ran instructions. More instructions, more results, more pixels.
 
Somewhere in my layman's knowledge there is a residue that claims it's the number of transistors, or whatever the logic units are called, that mainly determines GPU performance.

Generally, this is true.

Each generation of CPUs or GPUs is loosely confined to a given architecture, all parts of which are typically built on a given process node. The actual chips are built as different die flavors, from which different SKUs are designated by testing, sorting, and disabling parts as needed. The RTX 4000 series are all Ada Lovelace parts, of which there are five different designs (AD102, 103, 104, 106, and 107), as a trade-off between performance needs and production costs.

The RTX 4060 Ti and 4060 Ti 16GB are both AD106-351 (the second most enabled version of the second smallest die flavor), with the exact same number of functional units (and thus active transistors) and the same clock speeds. The only meaningful difference is the number of memory ICs attached to each memory channel (the channel counts are the same). No other RTX 4000 series part is segmented in this way. The various versions of the RTX 4070, for example, cover three different die flavors and six different combinations of functional units; there are only two memory capacity options, dictated by the number of active memory controllers.
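
To make that concrete, a minimal sketch of how the two variants land on 8 GB and 16 GB with the same silicon (the channel count and per-IC density here are the widely reported figures for the 4060 Ti's 128-bit configuration):

```python
# VRAM capacity = memory channels x ICs per channel x capacity per IC.
# The 4060 Ti has a 128-bit bus, i.e. four 32-bit channels, populated with
# 2 GiB (16 Gbit) GDDR6 ICs; the 16GB card simply runs two ICs per channel
# in clamshell instead of one.

def vram_capacity_gib(channels: int, ics_per_channel: int, gib_per_ic: int) -> int:
    return channels * ics_per_channel * gib_per_ic

rtx_4060_ti_8gb = vram_capacity_gib(channels=4, ics_per_channel=1, gib_per_ic=2)   # -> 8
rtx_4060_ti_16gb = vram_capacity_gib(channels=4, ics_per_channel=2, gib_per_ic=2)  # -> 16
print(rtx_4060_ti_8gb, rtx_4060_ti_16gb)
```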

The naming conventions are uselessly arcane to most laymen (e.g. RTX 4070/RTX 4070 AD103, RTX 4070 Super, RTX 4070 Ti, RTX 4070 Ti Super/RTX 4070 Ti Super AD102, in order of slowest to fastest), so all people really have to go off at first glance is pricing... which is extremely misleading in a few cases.
 
They've chosen to do that in this segment by charging $100 more for the card that costs $10-15 more to make.
That's precisely like Apple (or these days any laptop manufacturer, really) charging several hundred dollars more for an extra 512 GiB of SSD capacity and 8 GiB of RAM. It's dirt cheap for the manufacturer to solder bigger-capacity NAND and RAM chips onto the board, but the end product has to be hundreds of dollars more expensive to fit the next price point. Or put another way: profit margins are very, very fat in the high-end segment. It's price gouging people who absolutely need more RAM/SSD, while giving people on tighter budgets at best marginal performance.
People frequently buy parts based on VRAM capacity, when it's only tangential to the performance of that part.
Remember the bad old days of bargain-bin GPUs boasting 2 GB of memory while lower-midrange cards like the ATI HD 5770 had 1 GiB? Small print: the bargain bin got GDDR3 or even glacially slow DDR3🤡 And now we have the exact opposite: cards that I would call the end of midrange and the start of high end (the Nvidia 4070 and its variants) have a marginal amount of VRAM that I suspect will become insufficient when the next console generation launches in a few years, just like 8 GB started to become insufficient when the current gen launched*. The GTX 1080 Ti was a mistake Nvidia will never repeat.

*Sidenote: the venerable AMD RX 570 4GB I used till last week was still a quite capable 1080p 60 FPS card even in 2020, but quickly became completely inadequate after the current-gen consoles launched. My suspicion is that the same will happen with the current 8 GB cards, which are fine as of now at 1080p or upscaled to QHD.
 