Frontier, are you aware of this?

http://wccftech.com/geforce-radeon-gpus-utilizing-mantle-directx-12-level-api-combine-video-memory/

With the advent of low-level APIs like DirectX 12 and Vulkan, SLI/Crossfire users could gain access to both cards' memory, not just one card's. The potential is mind-boggling: instead of buying a single thousand-dollar graphics card just to scratch the surface of 4K gaming and VR, you could save a lot by buying two cheaper cards, running them in SLI/Crossfire, and getting both the added GPU performance AND the combined memory. As an example, two GTX 960s with 2 GB each would give you 4 GB of accessible memory with this optimization! Two R7 360s would yield the same result at an even greater saving. Not to mention the performance gains in upcoming seasons like Horizons, which looks very memory heavy.

The point of this thread is simple: will Elite Dangerous live up to the promise of "future proofing" by optimizing the game to take advantage of this relatively unknown by-product of low-level APIs? GPU VRAM stacking doesn't come for free with Vulkan/DX12; the devs need to optimize the game to allow it, and I suspect FD don't even know it's possible. So just suggesting you guys look into this! :D
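To make the idea concrete, here is a minimal sketch (purely my own illustration, nothing from FD or the article) of what an explicit API like Vulkan exposes: the application can enumerate every GPU in the machine and see each card's device-local VRAM heaps separately, which is what would let a developer decide which resources live on which card instead of mirroring everything.

// Illustration only: list every Vulkan GPU and its device-local memory heaps.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app = {};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ici = {};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpus[i], &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpus[i], &mem);
        printf("GPU %u: %s\n", i, props.deviceName);
        for (uint32_t h = 0; h < mem.memoryHeapCount; ++h) {
            // Only the heaps that live in the card's own VRAM.
            if (mem.memoryHeaps[h].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                printf("  local heap %u: %llu MB\n", h,
                       (unsigned long long)(mem.memoryHeaps[h].size >> 20));
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}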
 
DirectX 12 isn't the holy grail. How much speed-up SLI and Crossfire give is a function of the kinds of things you are doing, just in the same way that having a 12-core CPU doesn't automatically give you 12 times the speed for everything you do. There are lots of bottlenecks when it comes to GPUs; mainly it is getting the information on and off the card.

Put it one way... the concept of using multiple GPUs for extra performance can be traced back to the Voodoo2 era, and no one has quite perfected it yet. And as you should know, one does not simply 'switch' APIs. It is not a simple rewrite; often it is a ground-up rewrite and reimplementation. If such a thing were easy, Horizons would probably be coming to OS X in Metal now.

It is the same kind of misunderstanding people often have about quantum computing, or the idea that the PS3's Cell chip was some kind of super-computer, when really it was a six-core PowerPC-based chip which simply featured very fast floating point. Write-ups of all these things, new APIs and so on, are always hyped like nuts...

Most games using DirectX 10 or even 11 look perfectly fine and perform perfectly well on one GPU, IF the programmers do it right.
 
I think one of the more hopeful improvements in the latest DirectX implementation is the ability to use any GPU in the system, not just your dedicated one. So all those wasted Intel video transistors sitting on the CPU die might finally be useful for those of us (i.e. most of us) with external dedicated AMD / nVidia cards.

Provided of course that ED is written to be capable of taking advantage of this extra functionality. ;)
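For what it's worth, here is a rough sketch (again just an illustration, not anything from ED's renderer) of how DXGI/D3D12 lets an application see every adapter in the box, the discrete card and the Intel iGPU alike, and create a device on each one.

// Illustration only: enumerate every hardware adapter and create a D3D12 device on each.
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip the software WARP adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Adapter %u: %ls (%llu MB dedicated VRAM)\n",
                    i, desc.Description,
                    (unsigned long long)(desc.DedicatedVideoMemory >> 20));
            // Each device could be handed its own workload, e.g. the iGPU
            // doing post-processing while the discrete card renders the scene.
        }
    }
    return 0;
}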
 
OpenCL could already do this, and it is a pretty awesome feature as you said, Dextrovix... doing a 3D render in a ray tracer using the CPU, the on-die GPU and the dedicated GPU in my MacBook Pro is pretty awesome. Heat-inducing, but awesome :)
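Something like this is all it takes on the host side: a minimal OpenCL sketch (assuming an OpenCL runtime is installed; on macOS the header is <OpenCL/opencl.h> instead) that lists every device on every platform, which is how the CPU, the on-die GPU and the discrete GPU can all be handed work at once.

// Illustration only: enumerate every OpenCL platform and device in the system.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, nullptr);
            printf("Platform %u, device %u: %s (%s)\n", p, d, name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
        }
    }
    return 0;
}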
 

Not sure about data transfer to and from the card. PCIe 3.0 allows roughly 1 GB/s effective per lane, which at x16 is about 16 GB/s; PCIe 4.0 doubles that. Of course, just as you say, there are bottlenecks either side of that link.
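A quick back-of-envelope check of those figures (my arithmetic, worth double-checking): PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, which works out to just under 1 GB/s per lane and roughly 15.8 GB/s at x16.

// Back-of-envelope check of the PCIe figures above.
#include <cstdio>

int main() {
    const double gt_per_s    = 8.0;                       // giga-transfers per second per lane (PCIe 3.0)
    const double encoding    = 128.0 / 130.0;             // 128b/130b line-code overhead
    const double gb_per_lane = gt_per_s * encoding / 8.0; // bits -> bytes
    printf("PCIe 3.0: %.2f GB/s per lane, %.1f GB/s at x16\n",
           gb_per_lane, gb_per_lane * 16);                // ~0.98 and ~15.8
    printf("PCIe 4.0 doubles that: ~%.0f GB/s at x16\n", gb_per_lane * 16 * 2);
    return 0;
}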
 

Yeah, shame AMD's implementation doesn't seem to be a patch on OpenCL. SLI seems like an area where heavy rendering or physics crunching could really shine!
 
I'll support any non-corporate-controlled standard. It means I'll get my Linux gaming station up all the quicker.
 

It used to be per cycle, and the per-cycle limit is the problem; it made transfers somewhat bumpy. I remember this from a conversation with a few people doing the OpenCL route on a GPU; basically it made for some interesting issues. So yes, the theoretical transfer limits for the GPU or PCIe are good, but a GPU is limited by how the card works, and if it is only addressing the PCIe bus every so often to transfer things like textures, then loading and offloading can be a very slow process.

Full-cycle use, yes, is very fast, but typically I don't think a GPU does it like that; it still works like a machine, doing certain operations in sequence, in a rhythm.

It's the same for memory addressing and transfers, given that you take a 32- or 64-bit register at a time, regardless of what you are doing...

So if your process uses (for argument's sake) information contained in 64 bits, on a 32-bit system you need two operations to grab it, and those operations take time; on a 64-bit system, boom, you do it in one go, so we are at double speed! Toot toot... But when you get to many megabytes of data it gets more complicated, because the number of operations depends entirely on where the information is stored, since it is mostly not contiguous. Anyway, this is getting a bit weird, haha.

Please do correct me if I'm totally wrong, though.
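A toy illustration of the 32-bit vs 64-bit point above (nothing more than a counting exercise of my own): moving the same block of data takes half as many load/store operations when you can grab 64 bits at a time instead of 32, though caches and data layout matter far more in practice.

// Toy counting exercise: how many register-width copies does 64 MB take?
#include <cstdint>
#include <cstdio>

int main() {
    const std::size_t bytes = 64ull * 1024 * 1024;       // 64 MB of data
    const std::size_t ops32 = bytes / sizeof(uint32_t);  // copies needed 32 bits at a time
    const std::size_t ops64 = bytes / sizeof(uint64_t);  // copies needed 64 bits at a time
    printf("32-bit ops: %zu\n64-bit ops: %zu\n", ops32, ops64);
    // Real hardware complicates this: caches, burst transfers and whether the
    // data is actually contiguous matter far more than the register width.
    return 0;
}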
 
I've been using SLI since the Voodoo 2 days, and I still own one as it was the first SLI card. No problems apart from the odd driver over the years. As long as a custom SLI flag exists, I get no problems whatsoever with SLI, even when using quad GPUs. The fact is most people don't know what they're doing with SLI, which is why they cannot get the best out of their second GPU. AMD cards and CFX are a different ballgame altogether and DO NOT work as well as Nvidia cards in SLI... FACT!

Most SLI profiles are released in Nvidia's drivers before the games themselves are even released.

SLI anti-aliasing bits thread (Nvidia Inspector); this is why you never use the Nvidia control panel:
https://docs.google.com/spreadsheets/d/1ekUZsK2YXgd5XjjH1M7QkHIQgKO_i4bHCUdPeAd6OCo/pub?gid=6
 

As I stated, the point is not "speed" per se, but rather capacity. If FD were to develop for a feature/optimization like this, it would allow for more economical 4K gaming and a more economical VR experience.

SLI/Crossfire gaming as we know it today means dual/triple/quad GPUs, but you still only end up with the limited VRAM of ONE card. I have always seen this as an extremely inefficient use of memory; having access to all the VRAM from all the cards would let you tap not only the GPUs' calculations but also their memory storage.

The reason is that today each card has to mirror all the data in its VRAM. However, with low-level-access APIs, developers could implement split-frame rendering plus per-GPU frame-buffer handling, i.e. partitioning the screen so each GPU renders its own part, similar to having different GPUs handle different eyes for VR. That would let multiple GPUs act as one monolithic GPU, harnessing all the resources of all the cards, not just the compute performance of their GPU cores.
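As a sketch of what that split-frame idea looks like in practice (my own toy example with made-up numbers, not anything FD has said they will do): each GPU gets a horizontal strip of the frame, so the colour and depth targets it owns only cover its strip rather than being a full mirrored copy of the screen.

// Toy split-frame layout: give each GPU a horizontal strip of a 4K frame
// and estimate the render-target memory its strip needs.
#include <cstdio>
#include <vector>

struct Strip { int x, y, width, height; };

// Divide a frame of width x height into one strip per GPU.
std::vector<Strip> split_frame(int width, int height, int gpu_count) {
    std::vector<Strip> strips;
    int y = 0;
    for (int i = 0; i < gpu_count; ++i) {
        int h = height / gpu_count + (i < height % gpu_count ? 1 : 0);
        strips.push_back({0, y, width, h});
        y += h;
    }
    return strips;
}

int main() {
    const int gpus = 2;
    auto strips = split_frame(3840, 2160, gpus);   // 4K split across two cards
    const int bytes_per_pixel = 4 + 4;             // e.g. RGBA8 colour + 32-bit depth
    for (int i = 0; i < gpus; ++i) {
        double mb = strips[i].width * (double)strips[i].height * bytes_per_pixel / (1024.0 * 1024.0);
        printf("GPU %d renders y=%d..%d, needs ~%.1f MB for its colour+depth targets\n",
               i, strips[i].y, strips[i].y + strips[i].height, mb);
    }
    // Textures and geometry still have to live somewhere; how much of that can
    // avoid being duplicated is exactly the part the developer has to design for.
    return 0;
}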
 
It will require a lot of development and consume a lot of time/money, and while it may sound cool it does not seem to be that useful. How many people run multi-GPU setups? How many of them use mid-range cards? And even if you want to use two GTX 960s, you can buy the 4 GB version; it is only a tiny bit more expensive.
And I am almost 100% sure that things are not as good/easy as they sound; there will be drawbacks which will make it even less attractive.
 
Meh, I use two 1020 USD graphics cards. :p But yeah, moving toward an open standard like Vulkan would be nice. DirectX 12, not so much.
 

Or you could use two of those 4 GB 960s to make a total of 8 GB rather than buying a Titan Black. The point is it can be done, as outlined in the article. In any case this involves API work, which is no cakewalk anyway, so it's up to the developers themselves to weigh the cost, the return on investment and the prioritisation. FD may simply be uninformed; I just want to put this out there so they know.
 
As far as I can tell, Mantle and Vulkan are also low-level APIs, allowing the same functions.

Mantle is more or less no longer in development and has taken a back seat to Vulkan, which AMD helped create by basically giving Mantle to the Vulkan project. That project is run by a big industry consortium of well-known hardware and software companies, the Khronos Group, the same guys who do the OpenGL standard.

I for one do not plan on updating to Windows 10, and my PCs are making the transition to Linux where and when they can. I'm still running a lot of custom software, as well as some games, on Windows 7 for now though. Needless to say, I'd rather developers widely use Vulkan instead of DirectX 12, which is proprietary and will only work on Windows 10. Vulkan will work on Windows 10 too, thanks to the various hardware vendors who will support it through their drivers, much as they do OpenGL now.

Check this out for more info: https://www.khronos.org/assets/uploads/developers/library/overview/2015_vulkan_v1_Overview.pdf
 

Nice. I really do hope they roll out Vulkan as a standard just like OpenGL, since I too would rather not upgrade to Windows 10; it seems like a pointless upgrade.
 