AMD Mantle Support?

The R9 285 is a red-headed stepchild in all regards; it can't be used as a standard for comparison for anything, let alone Mantle support.

You can't say that. It's a product sold today with users sitting and playing games. Next-gen GPUs from AMD will be based on GCN 1.2 as well. And you simply can't depend on game developers to make patches for future GPUs just because someone took some shortcuts with an API. When even games like BF4 and Thief, made in close collaboration with AMD, don't fix it, the chance of other developers fixing such issues is close to nonexistent at best.

It's unprofessional and goes against everything an API is designed to do.
 
ED isn't that demanding yet, because we don't have a lot of the things coming up, like walking around in the ships and stations, planetary exploration, etc. Expect your framerate to tank when that stuff appears.
The R9 285 is a red-headed stepchild in all regards; it can't be used as a standard for comparison for anything, let alone Mantle support.

I sit at 144 fps constantly with G-Sync at 1440p. I can cope with quite a bit more graphical intensity before I need to drop settings ;)

You make my point for me, though: given all that other stuff still to implement, rewriting COBRA to support a completely different render path, then going on to support two render paths for all future dev work, just to improve the framerate by 10% for the 10% of the 29% of the playerbase that have a high-end AMD GPU but a crap CPU, is a complete waste of effort.

And it's not the R9 285 per se, as much as ANY new GPU with Mantle support. Read here, about halfway down the page:

http://www.anandtech.com/show/8460/amd-radeon-r9-285-review/6

We don't need Mantle; we need cross-vendor APIs. DX12 is set to bring the benefits of Mantle without the vendor lock-in. Why anyone would consider Mantle a better option than that, given that DX12 will be free for anyone not on XP or Vista (and those on XP need to upgrade NOW), I have no idea.
 
Think of it from the game engine provider's perspective: if you adopt this rendering API and take advantage of its features, your games run faster with lower latency and you can add more oomph to your game.

As AMD are providing the GPU/CPU solutions to both consoles, Mantle is probably going to be a go-to API for AAA developers.

The API is open, so there is nothing stopping NVIDIA from adopting it.

It sounds to me that, due to legacy and complexity issues, the potential performance you can get from a multi-core CPU and a modern GPU is being limited by DirectX and OpenGL.

Maybe Mantle will be limited to AMD / Xbox One / PS [n?].

But hopefully it will kick-start DX and OpenGL into providing similar features and adapting so they are not getting in the way of performance.

End result: we should be getting faster, better-looking games. :D

Except you're missing the market segment...

AMD's market share only just beats NV's GPU share if you include their APUs.

If anything were worth doing, it would be DX12 - all three vendors implement it (or will) and it is a de facto standard. Of course, if a dev is so inclined, then Mantle, like Glide in the old days, or anything else, is just a time/money trade-off.

This is from AnandTech in August last year: http://www.anandtech.com/show/8446/the-state-of-pc-graphics-sales-q2-2014

                AMD      Intel    Nvidia
All GPUs        17.94%   67.34%   14.72%
Discrete GPUs   37.9%    -        62.0%
Steam           29.8%    18.89%   50.93%

As for OpenGL - even Apple recognizes that GL is a CPU hog with all the state validation etc., which is why they are pushing "Metal".

AMD is like Ford - cheap and cheerful, but being eaten alive by Asian manufacturers (Intel). NV is BMW - they like their profits; this is likely the reason why the PS4 and XBone are both AMD-powered - NV didn't see enough profit in it.

If you want to mine bitcoins, knock yourself out on AMD. If you don't care about fan noise and/or heat, buy AMD. If you want cheap, buy AMD.
If you want performance and can afford it - get NV. As for GPGPU compute - there is a reason why IBM has partnered with NV for their supercomputers.
 
It's even worse than the numbers you list. Q3 2014:
CPUs with IGP + discrete GPUs: [chart]
Discrete GPUs: [chart]

And it's only going to get much worse in Q4 and Q1: a 13% revenue drop for AMD in Q4 and an expected 15% revenue drop in Q1, according to their CEO Lisa Su.
 
You can't say that. It's a product sold today with users sitting and playing games. Next-gen GPUs from AMD will be based on GCN 1.2 as well. And you simply can't depend on game developers to make patches for future GPUs just because someone took some shortcuts with an API. When even games like BF4 and Thief, made in close collaboration with AMD, don't fix it, the chance of other developers fixing such issues is close to nonexistent at best.

It's unprofessional and goes against everything an API is designed to do.

By that logic, let's extrapolate DX12's future success from Nvidia's 660 Ti and 970, with their defective manufacturing. Same difference.
 
By that logic, let's extrapolate DX12's future success from Nvidia's 660 Ti and 970, with their defective manufacturing. Same difference.

Elaborate?

Just take a look at the GPUs supporting the DX12 API: Haswell, Broadwell, GCN, Fermi, Kepler and Maxwell. Maxwell is the only fully featured DX11.3/DX12 part with conservative rasterization etc., but the rest fully support the API overhead reduction. It's broad adoption from day one, with a huge percentage of the installed base ready to support it.
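
To illustrate the distinction (this is my own sketch, not anything posted in the thread), here is a minimal C++ capability query, assuming the Windows 10 SDK headers and d3d12.lib: the API-overhead reduction comes with any driver that can create a DX12 device, while optional hardware features such as conservative rasterization are reported per GPU through CheckFeatureSupport.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a device on the default adapter at the lowest feature level the
    // DX12 runtime accepts; any GPU with a DX12 (WDDM 2.0) driver gets this
    // far and therefore gets the reduced API overhead.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No DX12-capable device/driver found.\n");
        return 1;
    }

    // Optional hardware features are reported separately; only some parts
    // report a conservative rasterization tier above "not supported".
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts)))) {
        std::printf("Conservative rasterization tier: %d\n",
                    static_cast<int>(opts.ConservativeRasterizationTier));
        std::printf("Resource binding tier:           %d\n",
                    static_cast<int>(opts.ResourceBindingTier));
        std::printf("Rasterizer-ordered views:        %s\n",
                    opts.ROVsSupported ? "yes" : "no");
    }

    device->Release();
    return 0;
}
```

A conservative rasterization tier of 0 simply means "not supported", which, going by the post above, is what the non-Maxwell parts would report.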
 
Mantle performance is stunning... strictly speaking, amazing, and you're well advised to try the Star Swarm test on a decent AMD card to see what the engine is capable of...

Fun fact: after the 337.50 NV driver update, NV's DX11 performance in Star Swarm is actually better than AMD's Mantle performance. And AMD's DX11 perf there is waaaaay behind anything else.
 
Defective manufacturing? Because they are harvested parts? I don't understand.

Gimped memory controllers that cause reduced bandwidth on the last 500 MB of GDDR5. Google it, and read the bits that were written before the fanbois jumped in to save their poor pal Nvidia from any criticism.
 
Gimped memory controllers that cause reduced bandwidth on the last 500 MB of GDDR5. Google it, and read the bits that were written before the fanbois jumped in to save their poor pal Nvidia from any criticism.

I am sure someone like you would have paid more attention. It's already been debunked; it was a flawed CUDA benchmark.

But it was quite obvious it was a fluke in Nai's benchmark, because none of the reviews, all hitting 4 GB usage, experienced any issues.

Also, it's a common practice. AMD did it as well with all harvested parts before they decoupled the ROPs.
 
Gimped memory controllers that cause reduced bandwidth on the last 500 MB of GDDR5. Google it, and read the bits that were written before the fanbois jumped in to save their poor pal Nvidia from any criticism.


That's by design with the reduction in crossbar resources etc. AnandTech has shown that it makes at most a 1% difference, and few games use more than the 3.5 GB that works fine at full speed anyway.

The 970 is still an extremely competitive card, as shown by benchmarks at release and since. Personally I went 980, then SLI. I have absolutely no regrets about dumping my old AMD config (7970 + 7990).

That's still no argument for Mantle support.
 
That's by design with the reduction in crossbar resources etc. AnandTech has shown that it makes at most a 1% difference, and few games use more than the 3.5 GB that works fine at full speed anyway.

The 970 is still an extremely competitive card, as shown by benchmarks at release and since. Personally I went 980, then SLI. I have absolutely no regrets about dumping my old AMD config (7970 + 7990).

That's still no argument for Mantle support.

AnandTech didn't show anything; they regurgitated a statement from Nvidia saying that it made a 1% difference. Pretty much everyone else has shown that trying to use the last 500 MB has an extremely good chance of causing stuttering, extreme framerate drops and crashes. Note that I mentioned the 660 Ti; that's because this is the second time Nvidia has done this.
 
AnandTech didn't show anything; they regurgitated a statement from Nvidia saying that it made a 1% difference. Pretty much everyone else has shown that trying to use the last 500 MB has an extremely good chance of causing stuttering, extreme framerate drops and crashes. Note that I mentioned the 660 Ti; that's because this is the second time Nvidia has done this.

You couldn't be more wrong. Not only is your statement about stuttering wrong, so is the number of products.

All products with harvested parts from AMD (until they decoupled the ROPs in GCN) and nVidia have this "issue".

The last 500 MB is full speed. The memory is simply partitioned into two segments. When Nai's benchmark reached 3500 MB, it started to use system memory over PCIe, hence the speed reduction. The flaw is in how Nai's benchmark is coded (the source code is available).
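
To make the mechanism concrete, here is a rough C++ sketch of the kind of per-chunk probe being argued about (my own illustration using the CUDA runtime host API, not Nai's actual code; the 128 MiB chunk size is arbitrary). It grabs VRAM in chunks until allocation fails and times a device-to-device copy inside each one; a chunk that sits in a slower segment, or that the driver has spilled to system memory over PCIe, shows up as a drop in reported bandwidth for the last chunks.

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>   // link against cudart

int main() {
    const size_t chunkBytes = 128ull << 20;   // 128 MiB per chunk (arbitrary)
    std::vector<char*> chunks;

    // Grab device memory in chunks until the allocator refuses.
    for (;;) {
        char* p = nullptr;
        if (cudaMalloc(reinterpret_cast<void**>(&p), chunkBytes) != cudaSuccess)
            break;
        chunks.push_back(p);
    }

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time a device-to-device copy within each chunk; a chunk sitting in a
    // slower segment (or paged out to system RAM) reports lower bandwidth.
    for (size_t i = 0; i < chunks.size(); ++i) {
        const size_t half = chunkBytes / 2;
        cudaEventRecord(start);
        cudaMemcpy(chunks[i] + half, chunks[i], half, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        const double gbPerSec = (2.0 * half / 1e9) / (ms / 1e3);  // read + write
        std::printf("chunk %2zu (%4zu MiB allocated): %7.1f GB/s\n",
                    i, (i + 1) * (chunkBytes >> 20), gbPerSec);
    }

    for (char* p : chunks) cudaFree(p);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

Note that a probe like this only shows that the last chunks are slower; it can't by itself tell whether that is a slower VRAM segment or the driver paging the overflow to system RAM, which is exactly what this argument is about.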
 
Except you're missing the market segment...

AMD's market share only just beats NV's GPU share if you include their APUs.

Once again I'll say it - Nvidia's "discrete" market share is largely made up of extreme low-end graphics cards like the GT 730 and below. AMD competes with those directly with APUs - if AMD did not have APUs there, they would have a much higher discrete market share.


AMD is like Ford - cheap and cheerful, but being eaten alive by Asian manufacturers (Intel). NV is BMW - they like their profits; this is likely the reason why the PS4 and XBone are both AMD-powered - NV didn't see enough profit in it.

As for GPGPU compute - there is a reason why IBM has partnered with NV for their supercomputers.

What would be the reason for Apple using AMD in the Mac Pro then? And AMD's professional market share continues to rise at Nvidia's expense - more than doubled in 7 years.

[Chart: AMD FirePro market share]

You have to realise that AMD's priorities have changed. They sold 30 million very large GPU SoCs in the XBone and PS4. The sheer manufacturing clout required to do that is mind-boggling - these two alone likely require more wafers than Nvidia's entire yearly manufacturing capability.

So yes, something has to give somewhere, and it's discrete desktop GPUs. There really isn't a lot of money in those - most of the money is in professional - Nvidia's profits are still largely due to professional graphics and payoff money from Intel, I believe.
 
Once again I'll say it - Nvidia's "discrete" market share is largely made up of extreme low-end graphics cards like the GT 730 and below. AMD competes with those directly with APUs - if AMD did not have APUs there, they would have a much higher discrete market share.

That's utter rubbish, and AMD's financials and CEO and CFO statements contradict this.




What would be the reason for Apple using AMD in the Mac Pro then? And AMD's professional market share continues to rise at Nvidia's expense - more than doubled in 7 years.

You have to realise that AMD's priorities have changed. They sold 30 million very large GPU SoCs in the XBone and PS4. The sheer manufacturing clout required to do that is mind-boggling - these two alone likely require more wafers than Nvidia's entire yearly manufacturing capability.

So yes, something has to give somewhere, and it's discrete GPUs. There really isn't a lot of money in those - most of the money is in professional - Nvidia's profits are still largely due to professional graphics and payoff money from Intel, I believe.

Apple switches between nVidia and AMD based on cost.

Consoles are extremely low-margin (~15%) semi-custom products, while PC products command a 35-65% margin. Those margins are what pay for future R&D, something AMD keeps cutting. Their R&D budget is now lower than it was 10 years ago.
nVidia will most likely surpass AMD in revenue in Q1. That's how badly it's going. nVidia is already close to twice the revenue of AMD's CPU and graphics division. The console designs are all that keep them afloat.

Gross margins pretty much show who sells the gold:
AMD 29%.
nVidia 55.1%.
Intel 64.5%.
 
You couldn't be more wrong. Not only is your statement about stuttering wrong, so is the number of products.

All products with harvested parts from AMD (until they decoupled the ROPs in GCN) and nVidia have this "issue".

The last 500 MB is full speed. The memory is simply partitioned into two segments. When Nai's benchmark reached 3500 MB, it started to use system memory over PCIe, hence the speed reduction. The flaw is in how Nai's benchmark is coded (the source code is available).

Nai's benchmark and half the AAA titles of last year, apparently, which along with a few other benchmarks just flat-out refuse to allocate assets to the last 500 MB. I'm not here to argue about hardware, though. The point is that AMD is not on the shaky ground all of the FUD-mongers, including yourself, would like others to believe, and Mantle is going to be around for a good long while.

That's utter rubbish, and AMD's financials and CEO and CFO statements contradict this.

Link?

Apple switches between nVidia and AMD based on cost.

Right, so Nvidia couldn't just give Apple a sweet deal for the Mac Pro?

Consoles are extremely low-margin (~15%) semi-custom products, while PC products command a 35-65% margin. Those margins are what pay for future R&D, something AMD keeps cutting. Their R&D budget is now lower than it was 10 years ago.

As mentioned before, AMD's R&D budget is down due to no longer paying for SOI at GloFo (they moved their entire stack to bulk), no longer paying for R&D at TSMC (AMD were ALWAYS first to a new node, which costs R&D) and getting paid R&D expenses by Sony, Microsoft etc. There is no cut in actual future product R&D, contrary to your personal wet dream.

How much of Nvidia's profits are down to the Intel payoff again? $63 million a quarter - Nvidia's finances wouldn't look so healthy without that little bonus (running out in 2017), would they?
 
Gimped memory controllers that cause reduced bandwidth on the last 500 MB of GDDR5. Google it, and read the bits that were written before the fanbois jumped in to save their poor pal Nvidia from any criticism.

Lol. I love the AMD "fanbois"... try running a 60 Hz 4K single-stream display on your AMD GPU sometime, or try a "4K 60 Hz dual-stream" display and look down the vertical centre of the screen.
 

http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-reportsannual


Right, so Nvidia couldn't just give Apple a sweet deal for the Mac Pro?

As mentioned before, AMD's R&D budget is down due to no longer paying for SOI at GloFo (they moved their entire stack to bulk), no longer paying for R&D at TSMC (AMD were ALWAYS first to a new node, which costs R&D) and getting paid R&D expenses by Sony, Microsoft etc. There is no cut in actual future product R&D, contrary to your personal wet dream.

That's just pure nonsense. Let's at least keep to reality, OK?

[YCharts: AMD vs. INTC] [YCharts: AMD vs. NVDA]

And let's look at revenue.
Computing and Graphics division:
Q4 2013: $888M ($15M loss)
Q4 2014: $662M ($56M loss)
This is what AMD looks like without consoles.
In the same period, nVidia grew 11% to $1,225M revenue for the quarter.
AMD's R&D budget is going down because they can't afford it. 7% layoffs in Q4 as well.
 