I'm probably doing something wrong, because my GPU gets 922 points. I have a dual-core Core 2 with an IDE HDD, yet the game plays just fine. 1280x1024 with FSAA, full screen, medium settings.
What graphics card do you have? It could be that your CPU is the bottleneck, being only a dual core...
The only bottleneck he'll probably experience is the neck on the bottle of beer he's drinking. "Bottlenecking" is such a freaking overused term, and guess what, with PCI Express it's pretty much obsolete. Anyway, instead of going on my regular 3-page dissertation on how you don't know what you're talking about, I'll just let you see this.
http://www.youtube.com/watch?v=DAgpvWc4VBM
Cheers.
Please go ahead... I would like to read 3 pages of crap instead of the 2 lines you published... It would be very entertaining.
Even the video you shared was flawed; it sounds like you don't know what you're talking about either. Maybe you should leave it to the professionals next time instead of posting a weak clip like that!!!
Sigh, the professionals... never mind that I've been building computers and setting up networks since 1999, and that I did it professionally for the DoD for nine years before I moved to the private sector. Admittedly, most of my experience is on the networking end, in commercial data centers, but I do know a thing or two about PC builds, the underlying architecture, and what causes performance issues.
Your processor does not have much of an impact on your video card's ability to process data. All the processor does is tell the GPU what to queue up next. In a game that is more GPU-intensive than CPU-intensive, such as Elite: Dangerous, the hit on the processor is minimal, but the GPU does a lot of work. The PCIe lanes connecting the processor and the GPU are where you would most likely take a performance hit, but that simply isn't the case here, since we still aren't using full PCIe 2.0 bandwidth, let alone PCIe 3.0 bandwidth.
Read: Theoretical vs. Actual Bandwidth: PCIe and Thunderbolt
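If you want to see why PCIe bandwidth isn't the problem, here's a quick back-of-the-napkin calc (Python, numbers straight from the PCIe specs) of the theoretical per-direction bandwidth. Nothing a game sends over the bus comes anywhere near these figures:

[CODE]
# Theoretical per-direction PCIe bandwidth, from the spec numbers.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding (80% efficient)
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (~98.5% efficient)

def pcie_bandwidth_gbs(gt_per_s, encoding_efficiency, lanes):
    """Usable GB/s per direction for a PCIe link."""
    return gt_per_s * encoding_efficiency * lanes / 8  # bits -> bytes

print(pcie_bandwidth_gbs(5.0, 8 / 10, 16))     # PCIe 2.0 x16 -> 8.0 GB/s
print(pcie_bandwidth_gbs(8.0, 128 / 130, 16))  # PCIe 3.0 x16 -> ~15.75 GB/s
[/CODE]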
Just to prove how little of a hit the processor takes in this game, I took some screenshots of my G19's monitor, which I keep up all the time. I know I have a high-end gaming desktop that is currently overkill, but the underlying theory is the same, and the hit should scale relatively well to other systems. To set up this test, I first took a screenshot without Elite: Dangerous running, just to get a benchmark of what my system is doing at near idle (I have 3 monitors, and I had 4 tabs open in Chrome, plus Skype and TeamSpeak running along with some other background processes).
This is what I got as a baseline:
View attachment 568
13% CPU usage and 4% GPU utilization.
The next thing I did was run Elite: Dangerous and go somewhere that would stress both the processor and the GPU. I figured most people get frame drops in Super Cruise, so I went near a star and throttled up. Below are the results of that adventure:
View attachment 569
25% CPU utilization and 29% GPU utilization, for a net of 12% CPU and 25% GPU utilization by the game. To be honest, I was expecting more GPU utilization, so I went and found myself a space station, thinking that surely with more ships and the station spinning I would stress the GPU more. I ended up at Boodt Gateway in the Li Quang system, which yielded the results below:
View attachment 563
That's a little closer to what I was expecting for CPU utilization, but it's still not maxing out feeding data to the GPU; we're still only at 37% CPU and 37% GPU utilization, for a net increase of 24% and 33% over idle.
Then, because I wanted to further prove that I was not experiencing any kind of bottleneck, I went ahead and ran Unigine Heaven, which really only stresses the GPU but still has to use the CPU to feed information to it. Once more, here are the results:
View attachment 564
At maximum draw my CPU was at 34% and my GPU was pegged (I know it reads as 99%, but the scale goes from 0-99, so it should read 100%; it's a software limitation), for a net increase of 21% load on the CPU and 96% on the GPU.
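If you want to play with the numbers yourself instead of squinting at my screenshots, here's a quick Python sketch that just re-crunches the readings above by subtracting the idle baseline from each one:

[CODE]
# Net CPU/GPU load per scenario = reading minus the idle baseline.
# All percentages are the ones from my G19 screenshots above.
IDLE_CPU, IDLE_GPU = 13, 4

scenarios = {
    "Super Cruise near a star": (25, 29),
    "Boodt Gateway station": (37, 37),
    "Unigine Heaven": (34, 100),  # reads 99 only because the scale tops out at 99
}

for name, (cpu, gpu) in scenarios.items():
    print(f"{name}: net CPU {cpu - IDLE_CPU}%, net GPU {gpu - IDLE_GPU}%")
[/CODE]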
For system reference, I use an i7-4770K overclocked at stock power to 4.4 GHz and a GTX 980 overclocked at stock power to 1508 MHz.
So that is 8 threads through the CPU to the graphics card. Honestly, looking back, I wish I had put GPU RAM usage up on my monitor, but hindsight is 20/20; I assume the game would use as much of the frame buffer as it could no matter what, so that would be interesting to see. I also run the game in borderless window mode at 1080p on high settings.
Now then, let's make some bold assumptions: that CPU usage is uniform across all systems, that all threads are prioritized the same, and that computing power across threads running in parallel is additive. The aggregate CPU usage of the game at its peak on max settings is then 8.448 GHz of processing power (4.4 GHz max CPU speed × 0.24 max net usage increase × 8 threads). Divide that by 4 to get a per-core load of 2.112 GHz, which suggests a quad core in the 2.6-3.0 GHz range (a little less if it's hyper-threaded) as the minimum processor for max settings.
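If you'd rather see that napkin math as code than as prose, here it is, with the same caveats baked in:

[CODE]
# Aggregate CPU load attributable to the game, spread back over the cores.
# Assumes uniform usage, equal thread priority, and additive parallel throughput.
MAX_CLOCK_GHZ = 4.4  # my i7-4770K overclock
THREADS = 8          # 4 cores + Hyper-Threading
CORES = 4

net_usage = 0.24  # peak net CPU usage on high settings (from above)
aggregate_ghz = MAX_CLOCK_GHZ * net_usage * THREADS  # 8.448 GHz
per_core_ghz = aggregate_ghz / CORES                 # 2.112 GHz per core
print(aggregate_ghz, per_core_ghz)
[/CODE]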
But wait!!!! MARPATdroid, you forgot this whole thing started over a dual-core processor question! He wanted to know if it would hold back his GPU... Well, I tested with the settings on the low preset. I could have taken them lower by adjusting the draw distance, but I left that at max, where it defaults to.
This is what I got at the same max utilization point I used for the previous calculation:
View attachment 565
29% CPU usage and 14% GPU usage, for a net of 16% CPU and 10% GPU. So we go back to our trusty CPU power formula (4.4 GHz max CPU speed × 0.16 max net usage increase × 8 threads) for an aggregate CPU usage of 5.632 GHz on low. Divide that by 2 and that gives us a per-core load of 2.816 GHz. I am willing to bet that Pendra (the wonderful commander in question) is playing on a Core 2 Duo E7600 (3.06 GHz) or better, which uses the LGA 775 socket, which means his GPU is most likely slotted into a PCIe 2.0 x8 or x16 slot. He is neither maxing out his CPU nor using the full bandwidth of his PCIe lanes.
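Same formula in code form for the low preset, this time spread over a dual core like Pendra's:

[CODE]
# Low preset: aggregate load from my machine, spread over a dual core.
MAX_CLOCK_GHZ = 4.4  # my i7-4770K overclock
THREADS = 8

net_usage = 0.16  # peak net CPU usage on the low preset
aggregate_ghz = MAX_CLOCK_GHZ * net_usage * THREADS  # 5.632 GHz
per_core_ghz = aggregate_ghz / 2                     # 2.816 GHz per core on a dual core
print(aggregate_ghz, per_core_ghz)
[/CODE]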
While I have enjoyed writing this post, it is 1:30 am and this has seriously cut into my drinking and space exploration time. I'm gonna go get a beer and fly around to the other side of the pill. I'm also going to turn my graphics back up to high, because the game looks weird set to low.
TL;DR: The CPU is most likely not bottlenecking his GPU, especially in an artificial benchmark, since all it has to do is send instructions to the GPU rather than do heavy processing itself. Moreover, he says the game runs fine for him, so it really doesn't matter.
Beer