Raytracing in ED for everyone!

Vega 20 managed 73MHz more before it was further reduced, and twice the HBM2 stacks, going from 14nm to 7nm, while still being a 300W toaster. Plus its cost went up significantly, so even at $700 it was sold at a $100-200 loss. The biggest problem with lower nodes is that after 14nm they cost more, transistor for transistor, so even plain shrinks increase cost.

Going OT, AMD wants a slice of Intel's iGPU market with the Ryzen 2xxxG line. Smaller sizes mean they can pack in a low-power GPU along with more CPU cores. That's why AMD wants to go smaller.

The 1660 didn't have anything to do with AMD; it was to make Nvidia users upgrade. It's not a static market, you know.

The 1660 is the sweet spot where GPU vendors make money, hitting AMD where they have the value cards. They see AMD starting to gain traction and they strip away the dubious value bits.

Not that node names have any relevance anymore; it's pure PR, renamed whenever needed. TSMC's 20nm node, for example, managed to become a 12nm node. The last time node names had any real meaning was 20 years ago. Today you have to look at the actual density and electrical properties; otherwise it's just Samsung bananas vs TSMC mangos vs Intel pineapples. TSMC also just this week renamed a 7nm node to 6nm because... Samsung had put a 5nm node on its PR slides and TSMC had already used that number ;)

Like I said, the trend is for AMD to cram more into a CPU / APU and compete with Intel on volumes. The future for them is custom integrated.

Sony is also slowly preparing people for the PS5 costing quite a bit more than the PS4.

Every recent console launch is the same, and then they'll pare back the next model trying to trim the excesses of the first model.
 
Going OT, AMD wants a slice of Intel's iGPU market with the Ryzen 2xxxG line. Smaller sizes mean they can pack in a low-power GPU along with more CPU cores. That's why AMD wants to go smaller.

There is no market or benefit in the IGP/APU segment, and it's been the same story since 2011. They are and always will be "too slow", and Intel is going to dGPUs as well for the same reason. It's something you give away for free. The problem for AMD is they don't have a mobile solution that isn't bad.

The 1660 is the sweet spot where GPU vendors make money, hitting AMD where they have the value cards. They see AMD starting to gain traction and they strip away the dubious value bits.

The sweet spot is either the 1660 Ti or the 2060, and consumers so far go for the 2060.

Like I said, the trend is for AMD to cram more into a CPU / APU and compete with Intel on volumes. The future for them is custom integrated.

Cramming more in costs more money on a product that already can't sell? AMD doesn't have any ambitions in that matter either. Lisa still dreams of servers, the thing AMD has been punished by since Hector. It's their nemesis, and EPYC has already lost more than Opteron ever did. And it's a shame, because the company could have done much better instead of sinking ATI/RTG and abandoning the solid business model they had under Sanders.

Every recent console launch is the same, and then they'll pare back the next model trying to trim the excesses of the first model.

These are permanent price hikes due to higher production costs. This is also why everyone is trying cloud gaming, including Sony and Microsoft, as a way to offset the cost so future "console" users can essentially share it. Physical consoles will go away due to their cost structure; it's already determined. The only question left is when.
 
There is no market or benefit in the IGP/APU segment, and it's been the same story since 2011. It's something you give away for free. The problem for AMD is they don't have a mobile solution that isn't bad.

Er, are you sure? Intel makes money on laptops with iGPUs - for the first time they have fumbled the ball and can't make enough.

AMD wants the same, and uses its GPU technology to make APUs that are really nice for what they offer. In the end it's the total package that counts - Intel even used a Vega GPU in some of its processors (Kaby Lake G).

The 2200G and 2400G are good enough to replace a dedicated GPU for esports (and even ED). Since more will get sold, they make more money through volume.
 
Er, are you sure? Intel makes money on laptops with iGPUs - for the first time they have fumbled the ball and can't make enough.

AMD wants the same, and uses its GPU technology to make APUs that are really nice for what they offer. In the end it's the total package that counts - Intel even used a Vega GPU in some of its processors (Kaby Lake G).

The 2200G and 2400G are good enough to replace a dedicated GPU for esports (and even ED). Since more will get sold, they make more money through volume.

Intel's capacity restriction currently has nothing to do with the consumer space; it has to do with the datacenter segment, and they automatically prioritize higher-paying products over lower. 99% of Intel's consumer CPUs have had an IGP for the last 10 years or so.

If you want an example of how worthless IGPs are, try comparing two Intel SKUs with and without the IGP enabled.

The 2200G and 2400G for esports? You have been reading too much into the PR slides. Even the G models with boosted graphics can't do anything previous IGPs didn't do in their time. And most of the time you still sit at 720p, with some 1080p, more often than not at sub-30FPS. A 15-35W CPU with a 20-30W ancient dGPU (GT 1030) runs circles around a 65W 2400G. For the AMD equivalent you can use an RX 550, which is also faster. Perhaps even an RX 540. How is Elite even doing on the 2400G? Drops to 30FPS on medium settings at 1080p?

There are three big problems with getting any worthwhile IGP/APU performance: one is TDP, another is memory bandwidth and the last is cost structure. The dGPU always wins.
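To put the memory bandwidth point in numbers, here is a back-of-the-envelope comparison in Python. It is only a sketch under my own assumptions: a 2400G fed by dual-channel DDR4-2933 versus a GT 1030 with 64-bit GDDR5 at 6 GT/s, with the figures taken from the public spec sheets.

    # Peak memory bandwidth: bus width (bytes) x transfer rate.
    def bandwidth_gb_s(bus_width_bits, transfers_per_s):
        return bus_width_bits / 8 * transfers_per_s / 1e9

    # APU: two 64-bit DDR4-2933 channels, and the GPU has to *share* them
    # with the CPU cores.
    apu_shared = bandwidth_gb_s(2 * 64, 2933e6)      # ~46.9 GB/s

    # GT 1030: 64-bit GDDR5 at 6 GT/s, all of it dedicated to the GPU.
    gt1030_dedicated = bandwidth_gb_s(64, 6000e6)    # ~48.0 GB/s

    print(f"APU system RAM (shared with CPU): {apu_shared:.1f} GB/s")
    print(f"GT 1030 VRAM (dedicated):         {gt1030_dedicated:.1f} GB/s")

So even a bottom-of-the-barrel dGPU gets a similar pool all to itself, while the APU's graphics have to fight the CPU cores for theirs.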

Hence you see some attempts now and then that always fail hard. Example:
[Image: Kaby Lake G package with AMD Radeon graphics]


Besides this you can go back to SidePort memory and so on down the long list.
 
Intel's capacity restriction currently has nothing to do with the consumer space; it has to do with the datacenter segment, and they automatically prioritize higher-paying products over lower. 99% of Intel's consumer CPUs have had an IGP for the last 10 years or so.

It's because Intel has prioritized datacenters that it has lost money, dragging desktop and laptop sales down. And Intel graphics have always been poor.

The 2200G and 2400G for esports? You have been reading too much into the PR slides. Even the G models with boosted graphics can't do anything previous IGPs didn't do in their time. And most of the time you still sit at 720p, with some 1080p, more often than not at sub-30FPS.

That's what they are designed for though - 720p gaming with non-demanding titles. Low cost, with the option to upgrade later.

A 15-35W CPU with a 20-30W ancient dGPU (GT 1030) runs circles around a 65W 2400G. For the AMD equivalent you can use an RX 550, which is also faster. Perhaps even an RX 540.

Not as cheap in the low-price consumer bracket, where any extra cost matters.

There are three big problems with getting any worthwhile IGP/APU performance: one is TDP, another is memory bandwidth and the last is cost structure. The dGPU always wins.

Again, it's price per performance, and that has not stopped AMD selling APUs like Jaguar for a fat pile of money, and it looks to do it again with the PS5.
 
It's because Intel has prioritized datacenters that it has lost money, dragging desktop and laptop sales down. And Intel graphics have always been poor.

Lose money? I think you need to check Intel's finances: record after record. AMD, on the other hand, is still making the same revenue it did in... 2011.

And I think you don't understand the purpose of an IGP. It's not for playing games at good settings and a good resolution.

That's what they are designed for though - 720p gaming with non-demanding titles. Low cost, with the option to upgrade later.

So same as always, no change.

Not as cheap in the low-price consumer bracket, where any extra cost matters.

IGPs and APUs are not cheap when you try to increase their anemic speeds.

Again, it's price per performance, and that has not stopped AMD selling APUs like Jaguar for a fat pile of money, and it looks to do it again with the PS5.

Consoles have compromised heavily and depend on a closed ecosystem to sell. You couldn't sell the consoles if it wasn't for the closed ecosystem and exclusive titles. Often the games pay extra towards the hardware on top.

It's about equal to pronouncing the old Tegra in the Switch a massive success because it sells like hotcakes and makes money. Consoles are revenue, but at low margins. AMD posted a $6 million loss in Q4 for its combined Enterprise, Embedded and Semi-Custom division, on a meagre $433M of revenue.

But maybe we should take this to PM, since it's way OT and the topic itself seems debated out.
 
Consoles have compromised heavily and depend on a closed ecosystem to sell. You couldn't sell the consoles if it wasn't for the closed ecosystem and exclusive titles.
As someone who started as a PC gamer and then switched to console back in the PS3 days, I disagree. The forum archives have plenty of posts on why we console gamers chose this over PC, if you are inclined to research the topic. Now granted, exclusive titles have affected (though not exclusively, haha) which console I committed to. Exclusives do have the advantage of being highly optimized for the target platform, making for some amazing games on admittedly modest hardware, so I'm not opposed to them as long as they are for my platform, LOL.
 
In short, it's all about the BVH.
You can read more about it here in section D:

But in short, the RT cores handle the BVH traversal, while the Tensor cores do the denoising (and DLSS if applied). This is why RTX cards are many times faster at raytracing than any other card.

[Image: GeForce RTX vs GTX - DXR introduces new workloads]
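For anyone wondering what "the RT cores do the BVH" actually means, here is a minimal Python sketch of the ray-vs-bounding-box slab test that BVH traversal runs at every node (names and the toy example are mine, purely illustrative). The RT cores run this kind of box/triangle test in fixed-function hardware instead of shader code, which is where the speedup comes from.

    # Ray vs axis-aligned bounding box (AABB) "slab" test -- the inner loop of
    # BVH traversal. inv_dir holds 1/direction per axis (precomputed per ray).
    def ray_hits_aabb(origin, inv_dir, box_min, box_max):
        t_near, t_far = 0.0, float("inf")
        for axis in range(3):
            t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
            t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far  # the ray overlaps the box somewhere ahead of it

    # Traversal only descends into child boxes the ray actually hits, so
    # millions of triangles shrink to a handful of triangle tests per ray.
    ray_origin = (0.0, 0.0, 0.0)
    ray_inv_dir = (1.0, 1.0, 1.0)   # direction (1, 1, 1), already inverted
    print(ray_hits_aabb(ray_origin, ray_inv_dir, (1, 1, 1), (2, 2, 2)))       # True
    print(ray_hits_aabb(ray_origin, ray_inv_dir, (-3, -3, 1), (-2, -2, 2)))   # False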


By the way, for those wondering how fast RTX cards are vs non-RTX cards in rays alone: it's 6-10 billion rays for RTX cards, while Pascal, non-RTX Turing, Vega and Navi are in the 200-500 million ray area. No denoise, no DLSS. This is also why AMD hasn't added DXR support: it would only fuel RTX sales, as DXR on GTX cards did.
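To give a feel for what those throughput numbers mean, here is a rough ray-budget calculation. The 1080p/60FPS target and four rays per pixel are my own illustrative assumptions; the per-card throughput figures are simply the ones quoted above.

    # How many rays per second does a game actually need?
    width, height, fps = 1920, 1080, 60
    rays_per_pixel = 4                 # e.g. 1 primary + 2 shadow + 1 reflection

    rays_needed = width * height * fps * rays_per_pixel   # ~0.5 billion rays/s

    quoted_throughput = {
        "RTX class (upper figure)": 10e9,
        "RTX class (lower figure)": 6e9,
        "Pascal/Vega class (mid figure)": 0.35e9,
    }
    for card, rays_per_s in quoted_throughput.items():
        print(f"{card}: {rays_per_s / rays_needed:.1f}x the budget")

By this crude count an RTX card has headroom even before denoising helps, while the older cards fall short of even this modest budget.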

From the sales of RTX cards, I could imagine we will start to see games 2-3 years from now that can't run without RT/Tensor cores (in any practical way). It's so much easier from the developer side and the result is so much better.

"It just works"

I wish Nvidia had made the RTX cards beefier. The 2080 Ti should have been priced the same as or a little more than the 1080 Ti at launch, and performance should have been at least 150% of the 1080 Ti. The RT cores should have had twice as many units; the tensor cores seem adequate. And finally, instead of SLI over NVLink, Nvidia should have abolished SLI altogether. Shared GPU memory should be the standard: each card should be able to access the other's memory asynchronously (of course with locks), and each card should know what the other is doing.
 
"It just works"

I wish Nvidia had made the RTX cards beefier. The 2080 Ti should have been priced the same as or a little more than the 1080 Ti at launch, and performance should have been at least 150% of the 1080 Ti. The RT cores should have had twice as many units; the tensor cores seem adequate. And finally, instead of SLI over NVLink, Nvidia should have abolished SLI altogether. Shared GPU memory should be the standard: each card should be able to access the other's memory asynchronously (of course with locks), and each card should know what the other is doing.

And the cards would be 5000€? ;)

Shared GPU memory will not work across cards unless you start to use interconnects between the cards at 500GB/sec or more. NVLink 2.0, to compare, does 150GB/sec and is only useful for HPC compute. All the multi-GPU dreams at this point are dead.

NVLink is on the cards because the same chips are also used for Tesla cards. It's just the last hurrah for SLI before it's gone, just like Crossfire.
 
Personally, I think raytracing will open new possibilities around certain lighting limitations the engine has (single starlight source, better shadows, reflections, etc.); as for performance, well, it's not going to be good on the 10 series lol.

But as mentioned before, I believe raytracing is here to stay, because it will increase realism in the behaviour of light and other things related to graphics I personally can't think of right now. Elite might see some sort of RT implementation, who knows.

I remember when 3D shadows in games were so demanding that your GPU would die a bit lol. Nowadays shadows are meh for performance in general; the same thing will happen with raytracing.
 
And the cards would be 5000€? ;)

Shared GPU memory will not work across cards unless you start to use interconnects between the cards at 500GB/sec or more. NVLink 2.0, to compare, does 150GB/sec and is only useful for HPC compute. All the multi-GPU dreams at this point are dead.

NVLink is on the cards because the same chips are also used for Tesla cards. It's just the last hurrah for SLI before it's gone, just like Crossfire.

The cards would be dear, that's for sure, unless AMD gets their act together and gives Nvidia real competition.

And as far as NVLink only doing 150GB/sec goes, the technology is always evolving. I would not be surprised to see >500GB/sec in the future, connected by optical interconnects.

SLI should never have been a thing. Shared memory is so much better. You want multiple GPUs to act and present themselves to the OS as one.
 
The cards would be dear, that's for sure, unless AMD gets their act together and gives Nvidia real competition.

And as far as NVLink only doing 150GB/sec goes, the technology is always evolving. I would not be surprised to see >500GB/sec in the future, connected by optical interconnects.

SLI should never have been a thing. Shared memory is so much better. You want multiple GPUs to act and present themselves to the OS as one.

Competition is something you may get from Intel next year, but you certainly won't see it from AMD. AMD is currently trying to sell overpriced, slow, power-hungry and feature-lacking cards that only get slower over time. And they have no product roadmap or R&D budget allocated that shows this will change; rather, it just gets worse. Windows 1903 and VRS, for example, is another thing that will move Intel and Nvidia 10-15% ahead of AMD. Also, rising cost is just natural as the economics behind nodes are dead. Don't expect competition to lower prices, but rather to increase them as R&D, IC design and mask costs get distributed more.

The problem with NVLink and 150GB/sec is that the goal keeps moving. Even the slowest Turing card today, in the shape of the GTX 1650 at 75W and $150, has 112GB/sec of bandwidth. By the time NVLink reaches, say, 600GB/sec, you will need 1500-2000GB/sec. Tom Henderson, a brilliant former Nvidia guy, now at Intel, also said that, while SLI would improve due to NVLink, the dream of shared memory for gamers is and will continue to be a dream. Even getting such crude concepts as SLI and Crossfire to somewhat work without stutter and whatnot required a huge amount of work. Try to expand that, even if you had the bandwidth (which you don't and never will have), and you are multitudes higher up in complexity. Also, at this point 99%+ of the user base wouldn't even consider a second GPU as an option, for many reasons: mobility, size, power consumption, cost, etc.
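To illustrate the moving goalposts in numbers: the NVLink 2.0 and GTX 1650 figures below are the ones quoted above, the 2080 Ti VRAM figure is from its spec sheet, and this is only a rough sketch.

    # The link between two cards vs. the VRAM each card already has locally.
    nvlink_2_0 = 150        # GB/s over the bridge (figure quoted above)
    gtx1650_vram = 112      # GB/s of local VRAM on the *slowest* Turing (quoted above)
    rtx2080ti_vram = 616    # GB/s of local VRAM on a card that actually has NVLink

    print(f"Remote reads over NVLink are ~{rtx2080ti_vram / nvlink_2_0:.1f}x "
          f"slower than the 2080 Ti's own VRAM")
    print(f"Even the cheapest Turing's local VRAM is within "
          f"~{nvlink_2_0 / gtx1650_vram:.1f}x of the entire link")

Any "one big shared pool" scheme would spend most of its time waiting on that bridge, which is the whole problem.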

Optical links don't do anything special as such; their main benefit is distance (think much longer than your PC, or even your neighbourhood) and, to some extent, power usage.

If you want multiple GPUs, the only way you are going to get that in any meaningful way will be die stacking. But then it's practically one GPU.
 
Ray tracing is not just some simple effect. It simulates light and how it reacts to surfaces.
Global illumination is the future, and everyone saying otherwise doesn't want the industry to move forward.
With ray tracing you won't "need" things like ambient occlusion, screen space reflections or indirect lighting (which are all effects that mimic global illumination anyway, just in a fundamentally wrong way); you get these effects because of how light actually behaves. Ray tracing is what deferred rendering was a few years ago.
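As a concrete illustration of shadows simply falling out of the model, here is a tiny Python sketch. All the names, the sphere blocker and the two-light scene are my own toy example, not anything from an actual engine: each shaded point just casts one occlusion ray per light, so a second light automatically means a second, correct shadow.

    import math

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def length(a): return math.sqrt(dot(a, a))
    def normalize(a):
        l = length(a)
        return tuple(x / l for x in a)

    # Standard quadratic ray/sphere intersection, limited to (epsilon, max_dist).
    def ray_hits_sphere(origin, direction, center, radius, max_dist):
        oc = sub(origin, center)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return False
        t = (-b - math.sqrt(disc)) / 2.0
        return 1e-4 < t < max_dist

    # Direct lighting: one shadow ray per light. Every light that reaches the
    # point contributes; every light that is blocked leaves its own shadow.
    def direct_lighting(point, normal, lights, blockers):
        total = 0.0
        for light_pos, intensity in lights:
            to_light = sub(light_pos, point)
            dist = length(to_light)
            direction = normalize(to_light)
            blocked = any(ray_hits_sphere(point, direction, c, r, dist)
                          for c, r in blockers)
            if not blocked:
                total += intensity * max(0.0, dot(normal, direction))
        return total

    # A point on the floor, one sphere hanging above it, two lights: the sphere
    # blocks the overhead light but not the one off to the side.
    lights = [((0.0, 5.0, 0.0), 1.0), ((5.0, 5.0, 0.0), 1.0)]
    blockers = [((0.0, 2.0, 0.0), 0.5)]
    print(direct_lighting((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), lights, blockers))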

Would Elite benefit from it? Absolutely! Multiple light sources would give true, multiple shadows, which people have been asking for since launch.
Is it worth it for FD to implement it into COBRA?
Not in a few years.

Rasterization has gotten really good at what it does. It's fast and it can even look great. But it will NEVER look right. Rasterization is the technique we currently have; ray tracing is the technique we have always been striving for.
 