Hardware & Technical: GTX 970 to 1080 Ti comparison

This is just a little bit of subjective feedback that might help anyone considering a GPU upgrade now the bitcoin boom has died down.

I built a new PC in March last year, carrying over my existing GPU & PSU from the old one. I planned to buy a new card a few months later but the prices went too high, so I've stuck with my Asus GTX 970 OC for much longer than I planned to. I've just fitted an EVGA 1080 Ti FTW3 into my existing system:

Intel i7 7700 (not K)
16GB DDR4-2133 SDRAM
Gigabyte Z270 motherboard
500GB M.2 SSD (not NVMe)
AOC 2460PF 24" full HD monitor
There are other accessories including two more monitors but nothing relevant to ED :)

It's all wrapped up in a Phanteks Enthoo Pro case. It's nicely made & thoughtfully designed, bigger than I need right now, but I have the room and I'm really happy with it.


Playing ED I'm not CPU or RAM limited; my CPU sits around 40-45% usage while playing, rising to around 70% at most in a CZ or busy station instance, so the mobo, CPU & RAM aren't overclocked at all atm.

With the GTX 970 I tended to overclock it to around 1450MHz core, with only a slight memory increase from 7010MHz stock to 7040MHz. Over several years this light overclock was stable; if I pushed it much more the temps would start to rise without much benefit.

I would normally run ED in full HD with no custom tweaks, just everything maxed out in the in-game graphics settings & 1.0 supersampling. The most taxing repeatable scenario for me is being landed on a planet with the SRV just dropped (pointing at the belly of my Conda), so I'll use that to compare.

Normally I'd see a drop to just below 60fps on the 970, with graphics memory peaking at around 3.7GB.
Switching the DSR factor from 1.0 to 4.0 (4k resolution downscaled to full HD) I'd see 28fps with the headlights on, 30 with them off.
After installing the new card, with DSR 4.0 I'm seeing 90fps lights on, just under 100fps lights off, and just shy of 5GB graphics memory usage.

The DSR factor is effectively giving me pretty expensive anti-aliasing and the image quality is much improved; the occasional stutters have completely disappeared (for now, it's only been a few hours) and the CPU & RAM usage are unchanged.

The 970 is an excellent card for full HD resolution, but the 4GB RAM was starting to become an issue. The 1080ti is both a generation newer & a couple of steps further up the range, but at £719 delivered I'm happy with the price/performance ratio.

Fingers crossed this one will remain useful for as long as the 970 did. It's now sitting in my son's PC, replacing his 2GB 750ti.
 
I also have a 7700, though it's a K running at 4.8GHz, and this March I made an interim upgrade from a 960 4GB to a 1060 6GB - overclocked to about 2100MHz core, memory at 4600MHz.

The only scenario that dips the fps is complex planetary structures on the lit side, but that drops it to 30fps with some stuttering. All settings in Elite are maxed out, running at 1.5x supersampling plus anti-aliasing (SMAA, IIRC).

Since I am looking at upgrading to VR, I decided to wait for a new gen card.
 
Re DSR 4 - does that actually work?

Let me explain my viewpoint, and I'm not doubting, I'm truly curious (and will try it myself when I get the chance).

4k is exactly 1080p x 4: double the width in pixels, double the height in pixels, giving exactly four times the number of pixels. So, with that in mind, one of the biggest things I hear TV reviewers say (stay with me on this) is "wow, it scales 1080p content to 4k really well". Of course it does. It doesn't have to do anything; it turns each pixel into four pixels, and we're done. Try to scale 720p to 4k though, then tell me how it goes - that's the test...

Anyway, pushing 4k down to 1080 seems to me like adding work to the GPU for no real gain, as you're forcing it to combine every 4 pixels into one. It's not too taxing, as it's a fairly easy maths process in the grand scheme of things: each 2x2 pixel set becomes a single pixel, averaging out the colour being displayed. Do that for every group of 4 pixels, and you're done.
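(To make that averaging concrete, here's a minimal sketch in Python/NumPy of the naive 2x2 box filter described above - purely illustrative, not a claim about what the driver or the game actually does.)

```python
# Minimal sketch of the 2x2 averaging described above: every 2x2 block of
# the 4k render becomes one 1080p pixel by averaging the four colour values.
import numpy as np

def box_downscale_2x(image_4k: np.ndarray) -> np.ndarray:
    """image_4k: (2160, 3840, 3) float array -> (1080, 1920, 3)."""
    h, w, c = image_4k.shape
    blocks = image_4k.reshape(h // 2, 2, w // 2, 2, c)
    return blocks.mean(axis=(1, 3))  # average each 2x2 group of pixels

# Example: a stand-in "4k frame" collapses to exactly a quarter of the pixels.
frame = np.random.rand(2160, 3840, 3)
print(box_downscale_2x(frame).shape)  # (1080, 1920, 3)
```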

However, I'd assume the in-game engine should be doing this, and... if the GPU is doing a better job than the engine, wouldn't that speak of some flaw in the engine? To me, in reality, if the coding is right, there should be no difference. I'm coming at this as someone who works in a field where calibrating screens is part of my job, so I have a good understanding of image generation, but not such a good understanding of game engines and how they work.

Still, the screens for both my gaming rigs are 4k (one being a 49" 4k Sony TV, the other a 65" 4k Samsung screen - both can handle 60fps). Both rigs are near identical, being 6700K i7, 16GB DDR4 3200MHz RAM, Asus Gene VIII mobo, and NVMe SSDs, both in Corsair Air 240 cases, except one has a GTX 1080, the other a GTX 1080 Ti. I'll give your settings a shot in 1080 mode, just because I am super curious here.

Z...
 

Interesting ;) An AB comparison between 1080 with 4xDSR vs native 4k would be cool, particularly how much better a framerate 4k native gives. I don't have room for a large monitor, and I have three of them so I'm kind of invested in the 24" size. Plus the 144Hz is more important to me than 4k, and I can't afford a monitor with both.


For clarity the DSR factor I mentioned isn't set in the game directly, it's in the Nvidia control panel (I'm sure you understood this Zeeman) along with a DSR Smoothness factor - mine is currently still on the default 33% (for the figures above). As I mentioned in my OP it's a sort of expensive anti-aliasing. In-game all I do is select a higher resolution than 1920x1080 from the list.

In space, no one can hear you scream.

Sorry. In space, the graphics card isn't taxed and 144fps is very easy to maintain; it was on my 970 too. I spend a lot of time in space, and over the years I've experimented with various forms of AA both in-game and in the Nvidia control panel to make use of the available capability. I like the result DSRx4 gives (I used around 25% smoothness before, but I forgot to make a note before swapping the cards); it seems similar in quality to SMAA x2 but with a better framerate.

But of course as I approach a planet the framerate tanks, which is where my old 970 struggled, so if I was going to be planetside for a while I'd swap back to native resolution. The 1080 Ti is intended to last me a while, so while DSRx4 is wasteful atm it's at least making use of the extra power in a way that's beneficial to me, and the new card is still the bottleneck. And of course now I don't have to compromise on the shiny for the sake of a smooth framerate.

If anyone has any suggestions for improving image quality at 1920x1080 in a more optimal way I'd love to try them out. I'd like to see if I can make use of more of my graphics memory too if it improves the experience.
 
Are you selling your old card?

The 970 has gone into my son's PC, his 750ti has gone into the media PC to replace the onboard graphics.

The 970 is a great value-for-money second-hand card; you can get them for around £120-£200 on eBay. But a new 1050 Ti would probably be as good a bet or better, and right now Nvidia are offering a free SSD too. That would be my choice if I didn't already have suitable hardware in the house ;)
 
Interesting ;) An AB comparison between 1080 with 4xDSR vs native 4k would be cool, particularly how much better a framerate 4k native gives.
Back when I bought my GTX 980 Ti and was waiting for the 4k monitor to arrive, I ran E:D with in-game supersampling at 2.0 (same effect as 4xDSR), with exactly the same settings as previously on the regular GTX 780 at 1080p. Performance felt pretty much the same, and the FPS counter confirmed that.
Switching to native 4k (and of course setting in-game SS to 1.0) when the 4k monitor arrived didn't feel much different in performance - maybe a slight improvement, so the gain was about a few per cent. Picture quality improved dramatically though, IMO at least as much as enabling SS 2.0 in game did over regular 1080p.
 

I think you've confirmed what I thought might be the case - the process of downscaling & filtering has a small cost in terms of GPU workload, and of course native 4k looks better overall.

I've been mucking about with the EVGA Precision XOC software this morning. The Asus ROG software I used before works on this card, but the EVGA card has way more things to fiddle with that the Asus software didn't expose. For example I can now control all three fan speeds independently (not sure why I would want to do this) and, most importantly, I was able to switch off the LEDs ;)
 
Anyway, pushing 4k down to 1080 seems to me like adding work to the GPU for no real gain, as you're forcing it to combine every 4 pixels into one. It's not too taxing, as it's a fairly easy maths process in the grand scheme of things: each 2x2 pixel set becomes a single pixel, averaging out the colour being displayed. Do that for every group of 4 pixels, and you're done.

However, I'd assume the in-game engine should be doing this, and... if the GPU is doing a better job than the engine, wouldn't that speak of some flaw in the engine? To me, in reality, if the coding is right, there should be no difference. I'm coming at this as someone who works in a field where calibrating screens is part of my job, so I have a good understanding of image generation, but not such a good understanding of game engines and how they work.

Linear filtering does work fine for 4k to 1080p, and this is what you get with DSR set to 0% smoothness. However, for resolutions that aren't whole-number multiples, you need some smoothing or it looks like crap.

The native resolution of my main display is 1440p, but I use DSR to render at 4k. The default 33% smoothing works well here: more than this and I lose too much image crispness; much less, say below 20-25% or so, and scaling artifacts become distracting.

In-game supersampling is both objectively slower and subjectively worse in IQ (no better at preserving detail, not as good at softening jaggies).
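(DSR's actual filter isn't public, but conceptually the smoothness slider behaves like a small pre-blur before resampling, which is what hides the artifacts on non-integer factors like 4k down to 1440p. A rough, purely illustrative sketch - the function and parameter names are mine, not Nvidia's:)

```python
# Conceptual sketch only: downscale a high-res render by a non-integer factor,
# with an optional Gaussian pre-blur standing in for the "smoothness" control.
import numpy as np
from scipy import ndimage

def downscale(image: np.ndarray, factor: float, smoothness: float) -> np.ndarray:
    """factor < 1 shrinks the image; smoothness 0..1 controls pre-blur strength."""
    sigma = smoothness * (1.0 / factor)  # more blur for bigger downscale steps
    blurred = ndimage.gaussian_filter(image, sigma=sigma) if smoothness > 0 else image
    return ndimage.zoom(blurred, zoom=factor, order=1)  # linear resampling

frame_4k = np.random.rand(2160, 3840)                        # stand-in for a rendered frame
sharp = downscale(frame_4k, factor=2 / 3, smoothness=0.0)    # 0% smoothing: crisper, more aliasing
smooth = downscale(frame_4k, factor=2 / 3, smoothness=0.33)  # ~33% smoothing: softer, fewer artifacts
print(sharp.shape, smooth.shape)                             # both (1440, 2560)
```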

Interesting ;) An AB comparison between 1080 with 4xDSR vs native 4k would be cool, particularly how much better a framerate 4k native gives.

DSR's performance hit (vs. using the same native resolution) is near zero.

If anyone has any suggestions for improving image quality at 1920x1080 in a more optimal way I'd love to try them out. I'd like to see if I can make use of more of my graphics memory too if it improves the experience.

DSR 4x (double internal x and y resolution) generally looks best with 0% smoothing. A lower DSR value (2.00 to 3.00), plus 25-33% smoothing and SMAA or FXAA are probably the best ways to reduce jaggies in ED without totally destroying performance.
 
Well I've been using this new card for a little while now and I've settled on the following:

60fps cap (I'll go into why shortly)
DSRx4 (4k resolution downscaled) with 0% smoothing
FXAA selected in-game.

There are two 'modes' of Elite that I wanted to balance: station/planetside instances that are GPU heavy, and supercruise, where GPU use is low & I want to use the spare capacity to maximise AA on stuff like orbital lines.

The new card is quite noisy when running maxed out, and in part I now regret buying this card instead of an equivalent Asus or other 2.5 slot card. I can live with it, but it's a minor irritation.

The card is capable of running at 144Hz with the above settings, but in a station instance CPU load goes up from a normal 40-45% to around 80%, the framerate drops to around 80-90fps (CPU limited I guess), and this seems to be the cause of DirectX 'cannot lock a buffer' errors that crash the game when entering the instance. At 60Hz I don't get this crash at all, and CPU usage remains normal. Clearly the physics engine is tied to the framerate, and the high CPU load is causing something (game or DirectX, I don't know) to fall over. Capped at 60fps it's fine.

The only other game I have installed that's as demanding is GTA5 which doesn't have this problem as far as I can tell.

On the bright side everything looks beautiful, with no detectable shimmering in any circumstances, and the extra framerate is only important in time-critical situations which don't happen often. It'd be nice to be able to set the cap at something between 60 & 100Hz but 120 is my only option in-game & is enough to cause the crash.

Fingers crossed the buffer locking thing is a fixable game bug so I can use the higher capabilities of my GPU & monitor. In the mean time I just have a bit of an overhead for futureproofing ;)
 
I've seen one or two buffer lock errors in my last thousand hours of game time and my CPU isn't any better for ED than yours. I also don't run any sort of sync or frame rate cap. I have a feeling issues are more common on Windows 10, but I barely play on Windows 10 so haven't done a direct comparison.

Anyway, as for the noise of the card, if you are willing to try a custom voltage/frequency curve (in MSI AB, for example), you can cut power and heat substantially while maintaining the same performance.
 

There are so many possible causes & potentially significant differences between my PC & yours (or anyone else's) that pinning down a cause is going to be difficult. I consistently get the lock-up when jumping into a station instance for the first time - I jumped 10kLy back home just fine & the game crashed on entering the station instance; I rejoined, jumped to another system & the game locked up on entering the next station instance. I switch to 60Hz & it doesn't happen.

For now I've left tweaking the GPU alone; I have adjusted the fan curve to make it quiet at idle, but whatever I do it's never going to be quieter than a card with a bigger heatsink, I think. Thanks for the suggestion though, I'll have a play with Afterburner. Not tried it on this card.
 
Interesting ;) An AB comparison between 1080 with 4xDSR vs native 4k would be cool, particularly how much better a framerate 4k native gives.

https://forums.frontier.co.uk/showt...c-Super-Resolution-and-Supersampling-in-1080p

also, non-elite:

4k vs 4k DSR 1080p
https://www.youtube.com/watch?v=8i0N0-iGNdA

same for Watch Dogs:
https://www.youtube.com/watch?v=MrP28FEOd0M

And here is the Nvidia article explaining why it isn't just "turning 4 pixels into 1" - TL;DR: you have much more information to downscale from than you do rendering at the lower resolution and applying anti-aliasing.
 
There are so many possible causes & potentially significant differences between my PC & yours (or anyone else's) that pinning down a cause is going to be difficult.

Yes.

For now I've left tweaking the GPU alone, I have adjusted the fan curve to make it quiet at idle but whatever I do it's never going to be quieter than a card with a bigger heatsink I think. Thanks for the suggestion though, I'll have a play with afterburner. Not tried it on this card.

Most 1080 Ti's can handle ~2GHz core with ~1 volt at ~65C; beyond this there are fairly extreme diminishing returns. They are also fairly sensitive to temperature (and NVIDIA's GPU Boost takes this into account). However, the default frequency/voltage curve tends to give too much voltage, which increases power consumption, which increases heat production, which increases temperature... generally resulting in both more noise and less performance than can be had with manual tweaking, sometimes even less stability.

As an example, here are my 'summer clocks':
[screenshot: GPU monitoring graph showing the custom frequency/voltage curve]


The time graphed was while I was playing ED, flying from a starport to an outpost in another system and turning in a mission. You can see that the GPU is pegged at 2012MHz with 1.012v. Now contrast that with the default curve, which is the darker green line. To reach 1999MHz, it will try to apply 1.063v by default, provided it doesn't reach temps where GPU Boost moves down a speed bin/multiplier. In practice this will never happen, because 1.063v tends to run very warm on Pascal. So, with the default frequency/voltage curve I'll either hit the temp limit or the power limit long before it reaches 2GHz, even if I keep my custom fan curve.

Now this is a non-reference part with a better cooler and firmware with a higher power limit than the reference Founder's Edition, but the same premise applies: move that frequency/voltage curve to the left and you get a faster, more consistent, cooler, quieter part. Of course, some testing is required to find optimal settings, but if you are content with the performance you have now, you could probably just make the curve flatten out after about 0.95 to 0.975v and ~1900MHz to get your fan speed down considerably.
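(For a rough sense of why the voltage matters so much: dynamic switching power scales roughly with V² x f, so a small undervolt buys a disproportionate drop in heat and fan noise. A quick back-of-envelope in Python using the figures from this post - approximate, and ignoring leakage and temperature effects:)

```python
# Rough back-of-envelope: dynamic power ~ V^2 * f, relative to the default
# 1.063v @ 1999MHz point mentioned above. Not a measurement of any real card.
def relative_dynamic_power(volts: float, mhz: float,
                           ref_volts: float = 1.063, ref_mhz: float = 1999) -> float:
    return (volts ** 2 * mhz) / (ref_volts ** 2 * ref_mhz)

print(relative_dynamic_power(1.012, 2012))  # ~0.91 -> roughly 9% less switching power, same-ish clock
print(relative_dynamic_power(0.975, 1900))  # ~0.80 -> roughly 20% less, for the quieter option
```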
 
I have a 970 but I've never overclocked before. Any advice? Also, just upgraded the CPU fan and added fresh thermal paste, so I wouldn't mind tweaking the CPU as well.
 

Do you know what brand of 970? And what CPU & motherboard?

CPU:

There are two ways to overclock a CPU - multiplier & bus speed. Most CPUs are multiplier locked (Intel ones anyway); the 'K' variants are unlocked. Most motherboards don't allow bus speed overclocking, and even if yours does (in the BIOS or maybe via a Windows utility) the RAM sticks need to be able to run at the higher speed for stability. So know your hardware.

For example my processor is multiplier locked, so I can only change the bus speed. My motherboard supports this, but my RAM is only 2133MHz, so unless I step down its multiplier any bus speed increase would take it over spec, and it would probably be the weakest link. I could relax the RAM timings (CAS latency etc., the motherboard also supports this) but this is getting complicated. It really depends on a lot of things, but realistically any small benefit is likely to be outweighed by decreased stability.

As a general rule a processor is fairly tolerant of being run faster than design (just don't overheat it); memory is not, IME. If you increase your bus speed, every component in your computer will be running faster (SSD, hard drive, sound & network cards etc.), so there are lots of things that could be the weak link.
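(A quick illustration of why the RAM is usually the first casualty of a bus speed overclock - hypothetical numbers assuming a locked i7 7700 at its 42x turbo multiplier with DDR4-2133, since everything hangs off the same base clock:)

```python
# Everything derived from the base clock scales together: a small BCLK bump
# takes the RAM past its rated speed before the CPU gains anything meaningful.
def effective_clocks(bclk_mhz: float, cpu_multiplier: int, ram_ratio: float):
    cpu = bclk_mhz * cpu_multiplier   # core clock in MHz
    ram = bclk_mhz * ram_ratio        # effective memory data rate in MT/s
    return cpu, ram

print(effective_clocks(100.0, 42, 21.33))  # stock: (4200.0, 2133.0)
print(effective_clocks(103.0, 42, 21.33))  # +3% BCLK: (4326.0, ~2197) - RAM now above its 2133 rating
```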


For the GPU (graphics card):

Most 970s are well built with plenty of overhead in terms of heatsink design & build quality - there is probably quite a bit of extra performance you can release here compared to stock. Download a Windows utility (MSI Afterburner is a popular one; I used the Asus one that came with my card). If you get the utility from your card's manufacturer there is probably a simple 'gaming mode/quiet mode/OC mode' option to get a bit more out of it easily, or you can use manual sliders to independently increase GPU & RAM clock speeds. Change one a few clicks at a time & observe any performance/temp changes over time; if it's stable increase it a little more, & once it starts to become unstable back off a bit & try changing another variable.
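(The 'few clicks at a time' routine is really just a loop. A rough sketch in Python - the two helper functions are stand-ins for whatever your OC utility and stress test actually are, not a real API:)

```python
# Sketch of incremental overclocking: step the offset up, test, stop at the
# first failure, then back off one step for a safety margin.
def apply_core_offset(offset_mhz: int) -> None:
    print(f"setting core offset to +{offset_mhz} MHz")  # stub: set this in Afterburner/Precision

def stress_test_passes() -> bool:
    return True  # stub: run a benchmark or play for a while, watch for artifacts/crashes

def find_stable_offset(step_mhz: int = 15, max_offset_mhz: int = 150) -> int:
    stable = 0
    for offset in range(step_mhz, max_offset_mhz + 1, step_mhz):
        apply_core_offset(offset)
        if not stress_test_passes():
            break                                     # first instability: stop climbing
        stable = offset
    safe = max(0, stable - step_mhz)                  # step back from the edge
    apply_core_offset(safe)
    return safe

print(find_stable_offset())
```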
 
...if you are content with the performance you have now, you could probably just make the curve flatten out after about 0.95 to 0.975v and ~1900MHz to get your fan speed down considerably.

Still more mucking about, & the only remaining issue is ED's inability to cope with a framerate higher than 60fps. I played around with the overclocking & voltages; I got it over 2000MHz, but it added little to an already under-stressed card, so I've put it back to stock for now.

In the game I can max out the graphics settings with no problems, but if I raise the FPS cap above 60 (in-game menu), when I enter a station instance it will almost certainly CTD or hang with Direct3D failing to lock a buffer - on one occasion it actually caused the PC to reset. But at 60fps it's fine for hours.

One problem I did solve was the constant flickering at 60fps on a 144Hz monitor - I play in borderless window (I have multiple monitors; this lets me take focus away from the game while leaving it on-screen). I changed the in-game refresh rate to 60Hz too and the image is smooth at 60fps again. I have vertical sync switched off.

So I can play the game, it looks really pretty & it's more than capable of running at 144fps (dipping down to around 75-90fps in a station instance when it doesn't crash).

Googling the 'failing to lock a buffer' thing, all the top results are discussing Elite Dangerous, going back as far as 2015. I tried all the suggestions I found, including re-seating the card, checking power line voltages and re-installing the drivers; nothing has helped yet. I noticed some audio crackling (on-board 5.1 sound) in the second or two after entering a station instance before locking up, which might be indicative of something, but I don't know what.

I don't know which buffer can't be locked; I suppose it could be sound or even USB related. The 970 was a much less capable card but it never had this issue at any framerate.

BTW when I say station instance I mean a large space station of any kind. It hasn't crashed at an outpost or a normal planetary base, although it did crash once as I approached an engineer base.
 
Do you know what brand of 970? And what CPU & motherboard?


To give you as much info as I can, I have an i5 3450 @ 3.1GHz (Ivy Bridge) running a GTX 970 (EVGA GeForce) and 16GB of DDR3 2400MHz RAM.

Does that help?

I'm currently using EVGA Precision XOC to manage things. I've set the Power and Temp targets to max and the fan setting to Aggressive (based on online recommendations).

I have the GPU and MEM Clock Offset boosted by 50MHz for now, with a +12mV power increase. I tried 100MHz on GPU and MEM, but given that the benchmark test crashed I figured that was too high (or I'm doing something else wrong?)

(BTW, don't mistake the way I'm relaying this for actual competence on my part. I'm amazed that my computer is still working after I installed a new CPU cooler (Hyper 212EVO). I don't know the lingo per se and tend to stumble into solutions in 90% of my hardware/software adventures ;) )

But that's just the GPU; I still don't know how (or how much) to tweak the CPU, if I can. Most of the online help I find (forums, videos) is just beyond my ken, in that I get the gist of what they're saying, but not quite enough to implement.


Anyway, enough of me rambling. Hopefully I've given you enough info to work with :)
 

GPU: You've already done nearly as much as I did on my 970; you could try bumping the sliders up 5 units at a time rather than 50 ;) Find the limit (for each in turn), then take a step back & do an extended test to see if it still crashes. If it doesn't, you're golden; if it does, go back a little more.

CPU: You have a similar setup to my son's PC; he has a socket 1155 i7 3770 with 8GB DDR3-1333 (yours is 1600MHz), with a Hyper 212 EVO. It's a great cooler for the money & more than enough for your CPU, and probably the next few too ;-)

Your processor is multiplier locked, just as my son's (and mine) is, but your memory is faster. There's probably nothing you can do short of buying a faster CPU - an i7 3770 or ideally an i7 3770K.

Check in your BIOS (or with a Windows utility) that your RAM is running at 1600MHz; your mobo might support 1333/1600, or it might only support 1066/1333 (my son's H61 chipset board only supports 1333 max).

If your mobo only supports 1333MHz RAM max, and if it supports bus speed overclocking, you could try bumping the bus up 2-3MHz at a time, but realistically you're going to see minimal change.

Socket 1155 CPUs are cheap, plentiful & still excellent. If I had your system I'd look for an i7 to stick in there for around £100-£150 and see if I could sell the 3450 on for £25-£30. A cheaper option might be an i5 2500K for around £40.

Socket 1155 is the last round of Intel chipsets that still supported PCI slots for older expansion cards. If that's important to you, it's worth hanging on to.
 