ED on ATI & Nvidia... The shaders thing!

Hello to all. This question may seem technical, but maybe it's common knowledge among gamers.
I have two computers with ED installed: one with an ATI card and the other with an Nvidia one.

The computer with the ATI card, after installation and configuration, starts ED in no time.
The computer with the Nvidia card, with ED installed and configured, takes almost a minute "preparing shaders" when I start the game. Is that normal?

Is there some file I could edit to stop the shaders thing from happening over and over, every time I want to play ED on it? The strange thing is, no other game does this! It's a Steam box with Windows 8.1, connected to a TV in the living room (Alienware Alpha, i7, 2TB, 8GB RAM, 2GB Nvidia).

I don't run Elite Dangerous through Steam on either machine. Clean installation!


Kind regards to all,
Xanix

*Oh, and what's going on with the ED iOS app? It keeps losing my login configuration and credentials!
 
This is the "shader warning" to help prevent stuttering

It can be turned off (I think) in the appconfig.xml file ( someone correct me if I'm wrong here)
It used to be on for AMD as well, but was recently disabled for those cards because they don't seem to have as much of a stutter problem.

I could explain better but I am typing On my phone
 
I have AMD cards in my main system and enable shader warming manually.

As Fenster states, this can be done from within the AppConfig.xml located in the game's install directory.
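If it helps, the relevant attribute looks like this (a minimal sketch, assuming the stock AppConfig.xml layout quoted later in this thread; set the value to "0" to skip the warming pass):

<PlanetNoiseTextures
    Enabled="1"
    ShaderWarmingEnabled="0"
    ShaderWarmingDialogAnimTimeInMs="600">
    <!-- existing AutoDisableOnOsx entries stay as they are -->
</PlanetNoiseTextures>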
 
This is the "shader warning" to help prevent stuttering

It can be turned off (I think) in the appconfig.xml file ( someone correct me if I'm wrong here)
It used to be on for AMD as well, but was recently disabled for those cards because they don't seem to have as much of a stutter problem.

I could explain better but I am typing On my phone

I have AMD cards in my main system and enable shader warming manually.

As Fenster states, this can be done from within the AppConfig.xml located in the game's install directory.

Hi Commanders, thank you for taking the time to reply. I've already edited the XML file. No more warnings!
Regards,
Xanix
 


This is because AMD/ATI/Radeon are made for high resolutions and gorgeous textures, whereas Nvidia/EVGA are made purely for speed (but fall off at higher resolutions). AMD is a bit slower, but looks better and processes graphics better. Nvidia is faster at lower settings (1080p etc.), but fails at higher resolutions unless you're running top-of-the-line Nvidia GPUs. And even then, ATI/Radeon will still look better.

On average, an AMD card has 10x-20x more shader processors than an Nvidia card of the same line. Nvidia has more speed. When tested at 2K+, AMD starts to shine, and at 4K it blows Nvidia away in both looks and speed. At 1080p Nvidia is much faster than AMD, but AMD will still look better.

Source: I build top-tier gaming PCs for a living, both Intel/Nvidia and AMD/ATI based.
 
I think they removed it from AMD cards because they don't believe AMD cards stutter as much as Nvidia ones; however, since they removed it I'm personally experiencing more stutter again, so I'm not convinced, tbh.
 

Are you including the Titan-X in this appraisal?
 
Are you including the Titan-X in this appraisal?
I run a Titan X with an i7. I only get small stutters if I fly too fast past bodies.

The rest of the game runs buttery smooth,
and I don't have V-sync on, because I have a 1440p G-Sync monitor.
 


AppConfig.xml

Set ShaderWarmingEnabled to 0 to disable it. When set to Auto, it's enabled on GeForce and disabled on AMD; I think that's because of a bug in Elite on AMD hardware, but I'm unsure.


<PlanetNoiseTextures
    Enabled="1"
    ShaderWarmingEnabled="1"
    ShaderWarmingDialogAnimTimeInMs="600">

    <AutoDisableOnOsx>ATI Radeon HD 2400</AutoDisableOnOsx>
    <AutoDisableOnOsx>ATI Radeon HD 2600 Pro</AutoDisableOnOsx>
    <AutoDisableOnOsx>ATI Radeon HD 5770</AutoDisableOnOsx>
    <AutoDisableOnOsx>ATI Radeon HD 5870</AutoDisableOnOsx>
    <AutoDisableOnOsx>ATI Radeon HD 6800 Series</AutoDisableOnOsx>
    <AutoDisableOnOsx>AMD Radeon HD 6970M</AutoDisableOnOsx>
    <AutoDisableOnOsx>AMD Radeon HD 6870 Series</AutoDisableOnOsx>
    <AutoDisableOnOsx>AMD Radeon HD 6xxx</AutoDisableOnOsx>
    <AutoDisableOnOsx>Intel HD Graphics 3000</AutoDisableOnOsx>
    <AutoDisableOnOsx>NVIDIA GeForce GT 120</AutoDisableOnOsx>

</PlanetNoiseTextures>
 
Are you including the Titan-X in this appraisal?

He should be. You can really see the difference when you start using workstation cards; Nvidia just can't cut it in single- or double-precision accuracy compared to AMD's cards. That's not to say they're bad, far from it, and for the gaming market you're really not going to notice much of a difference. But if you're going in for an operation involving scans... ask if they're using Nvidia, and if yes... RUN!
 
This is because AMD/ATI/Radeon are made for high resolutions and gorgeous textures, whereas Nvidia/EVGA are made purely for speed (but fall off at higher resolutions). AMD is a bit slower, but looks better and processes graphics better. Nvidia is faster at lower settings (1080p etc.), but fails at higher resolutions unless you're running top-of-the-line Nvidia GPUs. And even then, ATI/Radeon will still look better.

This is incorrect.

The only difference in IQ when running the same options is that NVIDIA has a slightly worse LOD bias with its default driver settings... which is almost completely imperceptible. Both brands will produce effectively identical images when using the same options.

On average, an AMD card has 10x-20x more shader processors than an Nvidia card of the same line.

This is a significant exaggeration and hasn't been the case for a long time, regardless.

Furthermore, back when it was the case, the fundamental architectures of NVIDIA and AMD graphics processors were different enough that comparing the number of shader processors reveals essentially nothing about performance.

Prior to Kepler, NVIDIA GPUs ran a smaller number of higher clocked and more complex processors. As of the 600 series, NVIDIA's strategy regarding shader count is very similar to AMD's.
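To put rough numbers on that, here's a minimal Python sketch using the commonly published reference specs of a pre-Kepler pairing (a GTX 580 and an HD 6970; neither card is mentioned above, they're just illustrative):

# peak FP32 throughput = shaders * clock * 2 (one fused multiply-add per shader per clock)
def peak_gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000.0

print(peak_gflops(512, 1544))   # GTX 580 (Fermi): ~1581 GFLOPS from 512 hot-clocked shaders
print(peak_gflops(1536, 880))   # HD 6970 (VLIW4): ~2703 GFLOPS from 1536 slower shaders
# Despite a 3x gap in shader count, the two cards traded blows in games of that era,
# which is why raw shader counts compared across architectures say so little.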
 
Are you including the Titan-X in this appraisal?

I said unless it's a high-end Nvidia card. I include the Titans (X, Z) and the 980 Ti in that "high-end" category. Maybe the FTW 980... maybe.

Please turn your attention to these figures:

Effective Memory Clock Speed (frequency at which memory can be read from and written to):
Radeon Fury X: 1,000 MHz / GeForce GTX 980 Ti: 7,012 MHz

Pixel Rate (number of pixels a graphics card can render to the screen every second):
Radeon Fury X: 67.2 GPixel/s / GeForce GTX 980 Ti: 96 GPixel/s

Floating Point Performance (how fast the GPU can crunch numbers):
Radeon Fury X: 8,602 GFLOPS / GeForce GTX 980 Ti: 5,632 GFLOPS

Shading Units (subcomponents of the GPU; these run in parallel to enable fast pixel shading):
Radeon Fury X: 4,096 / GeForce GTX 980 Ti: 2,816

Texture Mapping Units (built into each GPU; these resize and rotate bitmaps for texturing scenes):
Radeon Fury X: 256 / GeForce GTX 980 Ti: 176

Render Output Processors (GPU components responsible for transforming pixels as they flow between memory buffers):
Radeon Fury X: 64 / GeForce GTX 980 Ti: 96

These are the two top-end cards from AMD and Nvidia. Notice the graphics and pixel production of the AMD card is almost double that of the 980 Ti. I assure you this is the case for all cards in the same line, AMD vs Nvidia.

Also, the Fury's clock speed is lower because it uses new HBM (High Bandwidth Memory); effectively its clock speed is very close to that of the Nvidia card when in use.

The 980 Ti is unique in that it actually produces more than the Fury as far as GPixel/s is concerned. But there are lower-tier AMD cards which outperform the 980 Ti in this area too, such as the 390X (probably should have used that one, actually).

But in all honesty, these two cards are about as close in experience as you'll get. You won't notice the difference, as they are both beasts; the only time you might notice is on ultra-high graphics at 4K, where the 980 Ti struggles a hair vs the Fury X.
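For context on where those GFLOPS numbers come from: they're just the theoretical peak, shading units × core clock × 2 (one fused multiply-add per unit per clock), not a measured result. A quick sketch, assuming the usual reference clocks of roughly 1050 MHz for the Fury X and 1000 MHz base for the 980 Ti (the clocks aren't listed above):

def peak_gflops(shading_units, core_clock_mhz):
    # 2 FLOPs per shading unit per clock (one fused multiply-add)
    return shading_units * core_clock_mhz * 2 / 1000.0

print(peak_gflops(4096, 1050))  # Fury X -> 8601.6, i.e. the 8,602 GFLOPS listed above
print(peak_gflops(2816, 1000))  # 980 Ti -> 5632.0, i.e. the 5,632 GFLOPS listed above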
 
While your listed specs are technically correct, it's your interpretation of them that I find lacking.

More often than not the 980Ti performs better than a Fury X. The Fury X has more memory bandwidth, more theoretical shader power, and better texture fill rate, but its raw pixel fill rate is constrained, and it's clocked notably lower. In practice, it does not compete well until you hit 4K resolutions, where the superior memory bandwidth allows it to more or less trade blows with the 980Ti.

These are the two top-end cards from AMD and Nvidia. Notice the graphics and pixel production of the AMD card is almost double that of the 980 Ti. I assure you this is the case for all cards in the same line, AMD vs Nvidia.

The Fury is the exception, not the rule here.

AMD Tahiti and Hawaii GPUs were contemporaneous with Kepler and Maxwell. Kepler peaked at 2880 shaders and Tahiti at 2048, while Maxwell peaked at 3072 and Hawaii at 2816. The releases are staggered, so the generations don't line up perfectly, but NVIDIA has actually had a trend of more shaders in the same segment than AMD with the last two generations of GPUs.

Fiji, the GPU in the Fury and the Fury X, is new and does exceed all the generations that came before it in shader count, but this number will be matched fairly soon by NVIDIA's Pascal.

Not that such academic figures necessarily translate into gaming performance.

Also, the Fury's clock speed is lower because it uses new HBM (High Bandwidth Memory); effectively its clock speed is very close to that of the Nvidia card when in use.

Clock speeds are vastly different, but the Fury and Fury X (512 GB/s) actually have significantly more memory bandwidth than the 980Ti (336.5 GB/s), because it's a 4096-bit-wide HBM interface on Fiji vs. a 384-bit-wide GDDR5 interface on GM200. Still, even the 980Ti is not memory-bandwidth limited in most scenarios, so this advantage is irrelevant most of the time.
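Those bandwidth figures fall straight out of bus width × effective memory clock; a quick sketch using the numbers already mentioned in this thread (4096-bit HBM at 1,000 MHz effective vs. 384-bit GDDR5 at 7,012 MHz effective):

def mem_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    # bytes per transfer * transfers per second
    return (bus_width_bits / 8) * effective_clock_mhz / 1000.0

print(mem_bandwidth_gbs(4096, 1000))  # Fiji (Fury / Fury X) -> 512.0 GB/s
print(mem_bandwidth_gbs(384, 7012))   # GM200 (980 Ti)       -> ~336.6 GB/s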
 
I feel a lot more informed about newer high-end graphics cards now, thanks.

I have been using Nvidia for years now; I may jump ship next time I upgrade.
 
Well put, have some rep on me. A perfect response to some ill-advised "in the know" posts. What the dude said about clock speed on the Fury X was probably in reference to memory clocks.

If people are really inclined to check out the differences between the GCN and Kepler/Maxwell architectures (CUDA cores etc.), head over to TechReport or AnandTech; these sites do a good job of explaining the architectures Morbad has mentioned.

Also, Nvidia has traditionally had more ROP units than ATI. This means that Nvidia cards at the higher end have more ROP throughput, which can translate to better performance at higher resolutions.
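To put numbers on the ROP point, pixel fill rate is roughly ROPs × core clock. A back-of-the-envelope sketch (the ~1000/1050 MHz reference clocks are assumptions; the ROP counts come from the comparison above):

def pixel_rate_gpixels(rops, core_clock_mhz):
    # at most one pixel written per ROP per clock
    return rops * core_clock_mhz / 1000.0

print(pixel_rate_gpixels(96, 1000))   # GTX 980 Ti -> 96.0 GPixel/s
print(pixel_rate_gpixels(64, 1050))   # Fury X     -> 67.2 GPixel/s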
 