Sony is also slowly preparing people for the fact that the PS5 will cost quite a bit more than the PS4.
I do think this seems likely, at least for the launch window.
Vega 20 only managed about a 73MHz clock bump (before it was reduced further) and twice the HBM2 stacks going from 14nm to 7nm, while still being a 300W toaster. Plus its cost went up significantly, so even at $700 it was sold at a $100-200 loss. The biggest problem with newer nodes is that after 14nm they cost more, transistor for transistor. So even plain shrinks increase cost.
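To put that cost problem in numbers, here's a toy per-die calculation. Every figure below is made up purely for illustration (real wafer prices and yields are confidential), but it shows how a 2x density gain can still lose on cost once wafer price and early-node yields are factored in:

```cpp
// Toy illustration of "shrinks no longer cut cost per transistor".
// ALL numbers below are invented for illustration; real wafer
// prices and yields are confidential and vary by contract.
#include <cstdio>

int main() {
    // Hypothetical: the newer node doubles density, but the wafer
    // costs ~2x more and yields worse early on.
    double wafer14 = 4000.0,  wafer7 = 9000.0;   // $/wafer (made up)
    double dies14  = 300.0,   dies7  = 600.0;    // same chip, 2x density
    double yield14 = 0.80,    yield7 = 0.60;     // young node yields worse

    double cost14 = wafer14 / (dies14 * yield14);
    double cost7  = wafer7  / (dies7  * yield7);
    printf("per good die: 14nm $%.2f vs 7nm $%.2f\n", cost14, cost7);
    // ~$16.7 vs ~$25.0 -- the "shrunk" chip got MORE expensive.
}
```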
The 1660 didn't have anything to do with AMD; it was there to make Nvidia users upgrade. It's not a static market, you know.
Not that node names have any relevance anymore; it's pure PR, renamed whenever needed. TSMC's 20nm node, for example, managed to become a 12nm node. The last time node names meant anything was 20 years ago. Today you have to look at the actual density and electrical properties; otherwise it's just Samsung bananas vs TSMC mangos vs Intel pineapples. TSMC also just this week renamed a 7nm node to 6nm because... Samsung had put a 5nm node on its PR slides and TSMC had already used that number.
Going OT: AMD want a slice of Intel's iGPU market with the Ryzen 2xxxG line. Smaller nodes mean they can pack in a low-power GPU along with more CPU cores. That's why AMD want to go smaller.
The 1660 is the sweet spot where GPU vendors make money, hitting AMD where they have the value cards. They see AMD starting to gain traction and they strip away the dubious value bits.
Like I said, the trend is for AMD to cram more into a CPU / APU and compete with Intel on volumes. The future for them is custom integrated.
Every recent console launch is the same, and then they'll pare back the next model trying to trim the excesses of the first model.
There is no market or benefit in the IGP/APU segment, and it's been the same story since 2011 now. It's something you give away for free. The problem for AMD is they don't have a mobile solution that isn't bad.
Er, are you sure? Intel makes money on laptops with iGPUs; for the first time they have fumbled the ball and can't make enough.
AMD want the same, and use their GPU technology to make APUs that are really nice for what they offer. In the end it's the total package that counts; Intel even used a Vega GPU in some of its processors (Kaby Lake G).
The 2200G and 2400G are good enough to replace a dedicated GPU for esports (and even ED). Since more will get sold, they make more money through volume.
Intel's capacity restrictions currently have nothing to do with the consumer space; they're about the datacenter segment, and Intel automatically prioritizes higher-paying products over lower. 99% of Intel's consumer CPUs have had an IGP for the last 10 years or so.
The 2200G and 2400G for esports? You have been reading too much into the PR slides. Even the G models with boosted graphics can't do anything previous IGPs couldn't do in their time. And most of the time you still sit at 720p, with some 1080p, more often than not at sub-30FPS.
A 15-35W CPU with a 20-30W ancient dGPU (GT 1030) runs circles around a 65W 2400G. For an AMD equivalent you can use an RX 550, which is also faster; perhaps even an RX 540 as well.
There are three big problems in getting any worthwhile IGP/APU performance: one is TDP, another is memory bandwidth, and the last is cost structure. The dGPU always wins.
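A quick back-of-envelope on the bandwidth point, using nominal published peaks (real-world figures are lower, and the APU has to share its pool with the CPU cores):

```cpp
// Why memory bandwidth strangles IGPs/APUs: theoretical peaks
// from published specs (2400G: dual-channel DDR4-2933;
// GT 1030 GDDR5: 64-bit bus at 6 GT/s effective).
#include <cstdio>

int main() {
    // 2 channels x 8 bytes/channel x 2933 MT/s
    double apu_gbs = 2 * 8 * 2933e6 / 1e9;        // ~46.9 GB/s, shared with CPU

    // 64-bit bus / 8 bits-per-byte x 6 GT/s
    double gt1030_gbs = (64 / 8) * 6e9 / 1e9;     // ~48 GB/s, GPU-exclusive

    printf("2400G system RAM: %.1f GB/s (shared with CPU)\n", apu_gbs);
    printf("GT 1030 VRAM:     %.1f GB/s (dedicated)\n", gt1030_gbs);
}
```

Similar raw numbers on paper, but the dGPU keeps all of its bandwidth to itself.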
It's Intel's prioritizing of datacenters that has made them lose money, dragging desktop and laptop sales down. And Intel graphics have always been poor.
That's what they are designed for, though: 720p gaming with non-demanding titles. Low cost, with the option to upgrade later.
Not so cheap in the low-price consumer bracket, where any extra cost matters.
Again, it's price per performance, and that has not stopped AMD selling APUs like the Jaguar for a fat pile of money, and it looks set to do it again with the PS5.
As someone who started as a PC gamer and then switched to console back in the PS3 days, I disagree. The forum archives have plenty of posts on why we console gamers chose this over PC, if you are inclined to research the topic. Now granted, exclusive titles have affected (though not exclusively, haha) which console I committed to. Exclusives do have the advantage of being highly optimized for the target platform, making for some amazing games on admittedly modest hardware, so I'm not opposed to them as long as they are for my platform, LOL.

Consoles have compromised heavily and depend on a closed ecosystem to sell. You couldn't sell the consoles if it wasn't for the closed ecosystem and exclusive titles.
In short it's all about BVH.
You can read more about it here in section D:
But in short, the RT cores do the BVH traversal and the Tensor cores do the denoising (and DLSS if applied). This is why RTX cards are many times faster at raytracing than any other card.
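For anyone wondering what "doing the BVH" actually means, here's a minimal CPU-side sketch of a bounding volume hierarchy traversal. The data layout here is my own toy version, not NVIDIA's RT-core internals (those are fixed-function hardware doing exactly this kind of inner loop):

```cpp
// Minimal sketch: iterative BVH traversal with a ray/AABB slab test.
// Illustrative only -- RT cores do this loop in dedicated silicon.
#include <cstdio>
#include <vector>
#include <algorithm>

struct Node {
    float lo[3], hi[3];   // axis-aligned bounding box
    int left, right;      // child node indices (-1 at a leaf)
    int prim;             // primitive index at a leaf, else -1
};

// Classic slab test: does the ray enter the box before tMax?
// (IEEE inf from dividing by a zero direction component is fine here.)
bool hitBox(const Node& n, const float o[3], const float invD[3], float tMax) {
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float tNear = (n.lo[a] - o[a]) * invD[a];
        float tFar  = (n.hi[a] - o[a]) * invD[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative traversal with an explicit stack -- the hot inner loop
// that RT cores accelerate in hardware.
int traverse(const std::vector<Node>& tree, const float o[3], const float d[3]) {
    float invD[3] = {1.0f / d[0], 1.0f / d[1], 1.0f / d[2]};
    int stack[64], sp = 0;
    stack[sp++] = 0;                          // start at the root
    while (sp > 0) {
        const Node& n = tree[stack[--sp]];
        if (!hitBox(n, o, invD, 1e30f)) continue;
        if (n.prim >= 0) return n.prim;       // leaf: real code tests triangles here
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
    return -1;                                // missed everything
}

int main() {
    // Tiny two-leaf tree: a root box spanning two child boxes.
    std::vector<Node> tree = {
        { {-2,-1,-1}, {2,1,1},  1,  2, -1 },  // root
        { {-2,-1,-1}, {0,1,1}, -1, -1,  0 },  // left leaf  -> primitive 0
        { { 0,-1,-1}, {2,1,1}, -1, -1,  1 },  // right leaf -> primitive 1
    };
    float origin[3] = {1, 0, -5}, dir[3] = {0, 0, 1};
    printf("hit primitive: %d\n", traverse(tree, origin, dir)); // expect 1
}
```

The point of the tree is that whole branches get skipped the moment a box test misses, instead of testing every triangle in the scene.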
Btw, for those wondering how fast RTX cards are vs non-RTX cards in rays alone: it's 6-10 billion rays/sec for RTX cards, while Pascal/non-RTX Turing/Vega/Navi sit in the 200-500 million rays/sec range, roughly a 12-50x gap, with no denoise and no DLSS. This is also why AMD haven't added DXR support; it would only fuel RTX sales, as DXR on GTX cards did.
From the sales of RTX cards, I could imagine we'll start to see games 2-3 years from now that can't run without RT/Tensor cores (in any practical way). It's so much easier on the developer side and the result is so much better.
"It just works"
I wish nVidia made the RTX cards beefier. The 2080 Ti should have cost the same as, or a little more than, the 1080 Ti at launch, and performance should have been at least 150% of the 1080 Ti. The RT cores should have twice as many units; the tensor cores seem adequate. And finally, instead of SLI over NVLink, Nvidia should have abolished SLI altogether. Shared GPU memory should be the standard: each card should be able to access the other's asynchronously (with locks, of course), and each card should know what the other is doing.
And the cards would be 5000€?
Shared GPU memory will not work across cards unless you start using interconnects between the cards at 500GB/sec or more. NVLink 2.0, for comparison, does 150GB/sec and is only useful for HPC compute. All the multi-GPU dreams are dead at this point.
NVLink is on the cards because it's also used for the Tesla cards. It's just the last hurrah for SLI before it's gone, just like Crossfire.
The cards would be dear, that's for sure, unless AMD gets their act together and gives nVidia real competition.
And as far as NVLink only doing 150GB/sec, the technology is always evolving. I would not be surprised to see >500GB/sec in the future over optical interconnects.
SLI should never have been a thing; shared memory is so much better. You want multiple GPUs to act and present themselves to the OS as one.
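For contrast, this is roughly what "shared memory across cards" looks like today via CUDA's peer-to-peer API: explicit and opt-in, with the NVLink/PCIe link as the bandwidth ceiling, nothing like the transparent single-GPU model described above. A minimal sketch, with error checking trimmed:

```cpp
// Minimal sketch of CUDA peer-to-peer access over NVLink/PCIe.
// Today's cross-card "shared memory" is explicit and opt-in,
// not a transparent single-GPU view. Error checks trimmed.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    if (n < 2) { printf("need two GPUs\n"); return 0; }

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    printf("GPU0 can map GPU1 memory: %s\n", canAccess ? "yes" : "no");
    if (!canAccess) return 0;

    // Let GPU0 dereference pointers living in GPU1's memory.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    // Allocate on each card, then copy directly GPU1 -> GPU0.
    // This traffic crosses NVLink (or PCIe), so the ~150 GB/s link,
    // not the several-hundred-GB/s local VRAM, is the ceiling.
    const size_t bytes = 64 << 20;
    void *d0, *d1;
    cudaSetDevice(0); cudaMalloc(&d0, bytes);
    cudaSetDevice(1); cudaMalloc(&d1, bytes);
    cudaMemcpyPeer(d0, 0, d1, 1, bytes);
    cudaDeviceSynchronize();

    cudaFree(d1);
    cudaSetDevice(0); cudaFree(d0);
    printf("peer copy of %zu MB done\n", bytes >> 20);
}
```

Nothing here makes two cards present as one device to the OS; the application still has to place data and route every transfer by hand, which is exactly the gap being argued about.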