What GPU should I get: Nvidia or AMD?

So I'm deciding to get a new graphics card. I have my eye on either the GTX 970 or the R9 290X/390X. I read on a different thread that Nvidia drivers are causing systems to crash. I have never used Nvidia before, but it seems to be a lot more energy efficient compared to AMD. Currently I'm running an HD 6990, which is a power guzzler, so any new card will save me on electricity, but I am worried I am going to have regular problems with the Nvidia drivers. I am running 3 monitors: 2 DVI and 1 VGA. I know I should probably invest in a new monitor, and I also know I will not be able to run the VGA monitor on AMD's latest cards without an active DP to VGA adapter. But first the GPU, then I will think about that. Any help and advice would be great.
 
- both AMD and Nvidia are bringing out the next generation of cards in the summer, so if you can wait, I would if I were you (but of course when they are released they will be "full price" and it will take a while for the prices to drop... then again, the pence per watt should be much more favourable than with current cards)

- I have a 390X, and let me put it this way: I've not had the heating on in my office all winter. I'm not looking forward to gaming in the summer evenings, but I bought the card knowing it would be a stop-gap until the next gen came out. I don't want to put you off this card or AMD, but make sure your case has good airflow and/or you don't mind a bit of noise from the case fans (the MSI Twin Frozr design does an excellent job given the circumstances). The general consensus seems to be that the 970 is a cooler, more efficient GPU.

- there is no such thing as a perfect graphics driver, whether Nvidia or AMD. ED hangs for me if I use one of the factory OC settings, and I know this has been happening on 970s as well (and I think some 980s). It seems that ED is very sensitive to cards not running at stock.
 
Just saw a similar thread on the subreddit. The R9 390X is definitely the better card, and apparently the newer AMD drivers are meant to be much more energy efficient when using the built-in frame rate control (according to their blurb), which should bring things a bit more in line with Nvidia's power consumption.

 
Just saw a similar thread on the subreddit. The R9 390X is definitely the better card, and apparently the newer AMD drivers are meant to be much more energy efficient when using the built-in frame rate control (according to their blurb), which should bring things a bit more in line with Nvidia's power consumption.

Yeah, good point. I forgot to mention previously that the 970 and 390X aren't directly in competition with each other: from a price POV, the 390 is the main competitor to the 970. The 390X is 50-100 GBP more expensive than the 970, depending on which model you look at, and is competing against the 980.
 
I use a GTX 760 and get great results.

When the next generation of cards comes out, Pascal or whatever, I'll hopefully have the funds to get one.

But we shall see... :)
 
Just saw a similar thread on the subreddit. The R9 390X is definitely the better card, and apparently the newer AMD drivers are meant to be much more energy efficient when using the built-in frame rate control (according to their blurb), which should bring things a bit more in line with Nvidia's power consumption.

I'm confused: wouldn't an in-game frame limiter or vsync do the exact same thing?
 
I'm confused: wouldn't an in-game frame limiter or vsync do the exact same thing?

I guess it depends what's going on under the hood. Perhaps under certain thresholds they have it doing x, y, or z to the clocks and/or voltages, which wouldn't normally apply to vsync or frame rate limiting. It would be good to see an in-depth comparison.
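For comparison, here's a minimal sketch of all an application-side limiter can really do, assuming a simple render loop (every name here is illustrative, not any real engine's or driver's API):

```cpp
// Minimal sketch of an application-side frame limiter (illustrative only).
#include <chrono>
#include <thread>

void render_scene() { /* stand-in for the game's draw calls */ }
void present()      { /* stand-in for the swap/present call */ }

int main() {
    using Clock = std::chrono::steady_clock;
    const double target_fps = 60.0;
    const auto frame_budget = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / target_fps));
    auto next_deadline = Clock::now() + frame_budget;

    for (int frame = 0; frame < 600; ++frame) {  // ~10 seconds at 60 fps
        render_scene();  // the GPU still races through each frame at full clocks
        present();
        // All the application can do is sleep until the next frame is due;
        // it has no way to drop GPU clocks or voltages in the meantime.
        std::this_thread::sleep_until(next_deadline);
        next_deadline += frame_budget;
    }
}
```

An in-game limiter or vsync just makes the software wait like this; a driver-level control such as AMD's frame rate target sits below that and could, in principle, also manage clocks and voltages during the idle gaps, which would be where the extra power saving comes from.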
 
I'm confused: wouldn't an in-game frame limiter or vsync do the exact same thing?

I thought there might be a latency benefit, but from a quick search I might have given AMD more credit than they deserve! Was I imagining it? I thought it included some logic so that, for a given situation, instead of draw, wait, show, it was wait, draw, show, thus reducing the time between draw and show. At any decent frame rate such a benefit would be small anyway, and the penalty if you get it wrong could be worse than not doing it at all.
 
Thanks for all the great advice, but to make things a little clearer: I am planning to buy either the GTX 970 or the R9 390/290X, and I have around 300 pounds to spend. What I'm really asking is whether the GTX 970 generates a lot of heat as well, because it might seem strange but it's true that with AMD cards you do not need to have the heating on much, and I have plenty of household fans to cool down during summer play. I probably will not be overclocking any of these cards, because the stock levels will usually be fine for me. Thanks guys, great stuff :)
 
I guess it depends what's going on under the hood. Perhaps under certain thresholds they have it doing x, y, or z to the clocks and/or voltages, which wouldn't normally apply to vsync or frame rate limiting. It would be good to see an in-depth comparison.
Sounds similar to target framerate?
I thought there might be a latency benefit, but from a quick search I might have given AMD more credit than they deserve! Was I imagining it? I thought it included some logic so that, for a given situation, instead of draw, wait, show, it was wait, draw, show, thus reducing the time between draw and show. At any decent frame rate such a benefit would be small anyway, and the penalty if you get it wrong could be worse than not doing it at all.
Hrm, that sounds like it would end up causing some horrific tearing?
 
Hrm, that sounds like it would end up causing some horrific tearing?

Let's say you have a given amount of time between showing each frame. If you're running below the maximum frame rate, you will spend part of that window waiting for the right time to show. The safe way to do it is to start rendering the scene as soon as you can, then wait for the right time to show it. But if you can predict how long the scene takes to render, you could delay a bit before starting, so the frame represents what is happening that fraction of a second later than in the original case. The difficult part would be knowing how long you need to render, so you don't delay too much and run out of time.
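Roughly, in code, it would be something like this sketch; predict_render_time() is hypothetical, and a real implementation would presumably estimate it from recent frame times plus a safety margin:

```cpp
// Sketch of "wait, draw, show" frame scheduling (illustrative only).
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical: estimate how long the next frame will take to render,
// e.g. from a moving average of recent frame times.
Clock::duration predict_render_time() { return std::chrono::milliseconds(8); }

void render_scene() { /* stand-in for the game's draw calls */ }
void present()      { /* stand-in for the swap/present call */ }

void frame(Clock::time_point show_deadline) {
    const auto margin = std::chrono::milliseconds(1);  // slack for misprediction

    // "Draw, wait, show" would render immediately and then sit on the finished
    // frame. Instead, delay the draw so it finishes just before the deadline,
    // sampling fresher input and game state:
    std::this_thread::sleep_until(show_deadline - predict_render_time() - margin);

    render_scene();
    present();  // if the prediction was too low, we miss the deadline: stutter
}
```

If predict_render_time() underestimates, the frame misses its slot, which is exactly the "run out of time" penalty described above.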
 
Sounds similar to target framerate?

I guess it depends on what access your typical frame rate limiting application has. I envisage it being like the difference between tuning an application based on basic "top" output and tuning it via much more in-depth "dtrace" output; the results would be substantially different. Seeing as AMD obviously have access to the source code, they can play around with a lot more variables.
 
Wondering if the OP has an update and whether he has made a purchase yet. I am going through the same decision-making process right now. I'm currently running an Nvidia Quadro 4000, so I think it is time for an upgrade, even though it does run the game fine IMO (about 40-60 fps depending on the situation). The hardware is old and needs upgrading anyway, but I want to make sure I don't go down in quality.

Any advice is welcomed and appreciated.
 