The Cost of Supersampling - Windows vs. OS X

So in light of all the performance issues being brought up, I decided to do a big, comprehensive test to see just how big a difference there is between the Windows and Mac OS X builds of Elite: Dangerous. And what better way to do that than to jack up the settings all the way?

For those who are unfamiliar with it, supersampling is a technique that renders the game at a resolution much higher than what your monitor can actually display, then filters and downscales the result to fit the monitor.

Elite: Dangerous's supersampling setting doesn't tell you the exact resolution the game renders at; it just gives you a multiplication factor, with 1.0x being your monitor's native resolution and what you should be using by default. For the sake of example, I'm going to use 1920x1080, or "1080p", as the base resolution for this test. That means a supersampling factor of 1.5x has the game rendering at 2880x1620 ("3k"), and 2.0x supersampling works out to 3840x2160 ("4k").
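
Just to spell out the arithmetic (a quick sketch of my own, assuming the factor scales each axis linearly, which is what the numbers above imply):

```swift
// Sketch of the factor-to-resolution arithmetic described above:
// the supersampling factor scales each axis of the native resolution.
func renderResolution(native: (w: Int, h: Int), factor: Double) -> (w: Int, h: Int) {
    return (Int(Double(native.w) * factor), Int(Double(native.h) * factor))
}

let native = (w: 1920, h: 1080)
for factor in [1.0, 1.5, 2.0] {
    let r = renderResolution(native: native, factor: factor)
    print("\(factor)x -> \(r.w)x\(r.h)")
    // 1.0x -> 1920x1080, 1.5x -> 2880x1620, 2.0x -> 3840x2160
}
```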

Naturally, this is extremely hard on any computer's GPU. So why punish your video card and force the game to run at 4k+ resolutions when you're stuck on a 1080p monitor? In short, it's a way to get around the problems of Anti-Aliasing. Anti-Aliasing, or "AA", is a filter placed over the screen to reduce the jagged lines left over when the computer (badly) approximates the world's geometry. The lower the resolution, the less math the GPU has to do to render the scene, but that also means it's less accurate at doing so. The higher the resolution, the more accurate the world looks. AA gets around the need for that extra resolution by applying a "filter" over the screen at the cost of a bit more processing power.

There are a great many different types and applications of AA, but each and every one of them comes with its own faults. Some produce a blurring effect, while others carry insane performance penalties for very little visual gain.

Assuming you have the horsepower to spare, rendering the game at a straight-up higher resolution negates the need for AA altogether. At the higher resolution, lines that would zigzag across the pixels as they cross the screen at an angle come out much straighter, which is why supersampling is much preferred over any type of Anti-Aliasing method.
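
To make that concrete, here's a toy sketch of my own (not anything from the game or the drivers) showing what downscaling a 2.0x supersampled image does to a hard diagonal edge: the averaged sub-samples turn the stair-steps into intermediate shades instead of hard on/off pixels.

```swift
// Toy sketch: render a hard diagonal edge at 2x resolution, then box-filter
// each 2x2 block of sub-samples down to one display pixel. Pixels the edge
// passes through come out as fractional coverage instead of a hard 0 or 1,
// which is exactly the smoothing supersampling buys you.
let factor = 2                       // 2.0x supersampling
let displaySize = 4                  // a tiny 4x4 "screen" for illustration

// "Scene": everything below the line y = x is lit (1.0), everything else dark (0.0).
func sampleScene(_ x: Int, _ y: Int) -> Double {
    return y > x ? 1.0 : 0.0
}

var display = [[Double]](repeating: [Double](repeating: 0, count: displaySize),
                         count: displaySize)
for py in 0..<displaySize {
    for px in 0..<displaySize {
        var sum = 0.0
        for sy in 0..<factor {       // average the factor*factor sub-samples
            for sx in 0..<factor {
                sum += sampleScene(px * factor + sx, py * factor + sy)
            }
        }
        display[py][px] = sum / Double(factor * factor)
    }
}
for row in display { print(row) }    // edge pixels print as 0.25 here, not a hard 0 or 1
```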

I will be conducting the test on the following system:

i7-4770K @ 4.3GHz
Corsair H100i aftermarket cooler
Gigabyte Z87X-OC
32GB Corsair Vengeance LP 1866MHz RAM
2x EVGA GTX 770 Superclocked with ACX cooling (SLI)
Corsair HX750 80+ Gold PSU
Samsung 840 Pro SSD - 128GB

Note that I have disabled SLI on Windows in order to achieve a fair test using a single GPU on both OS's.

First up, Mac OS X at the native 1920x1080 resolution:

View attachment 32222

Next up, Windows 7 at 1080p:

View attachment 32223

Already, the Windows build looks quite a bit nicer. The lighting and particle effects are much better; the Mac build seems to be missing a lot of the shaders used on Windows. The game also runs a lot faster on Windows 7, at 147 fps. And yes, I had the gamma settings exactly the same both in-game and at the OS level. The difference in color is due to the different color schemes used by the stations: I'm at Meuron Station in the Windows build and Dalton Orbital in the Mac build. I didn't have time to fly all the way back to Dalton Orbital, so I found the closest match I could.

Next is OS X at 4k via supersampling:

View attachment 32224

Compare that to Windows 7 at 4k:

View attachment 32225

The changes are immediately apparent. Rendering the game at 4k, even on a 1080p monitor, looks astonishingly better. Right off the bat, you can see the lines on the outer frame of the Viper in the foreground are much more defined at 4k, while the far end of the station gets smoother lines along its structure.

Though again, Windows holds the clear lead here. A single GTX 770 can render the game at 3840x2160 and still maintain 60+ fps. The Mac OS X build gets barely half that, and that's despite leaving out many of the lighting and shading effects present on Windows. With those effects absent, the Mac fps should be quite a bit higher than it is.


Let's try it again:

Mac OS X at 1080p:

View attachment 32229

And Windows 7 at 1080p:

View attachment 32230

Mac OS X at 4k:

View attachment 32227

Windows 7 at 4k:

View attachment 32228

Again, the differences are pretty obvious, mostly around the holo-ads and the Viper's weapon mounts. And again we see that Mac OS X shows gross deficiencies in lighting and particle effects, as well as a pitiful frame rate.

Overall, it was a fun experiment, but I'm really hoping the Mac version starts showing major improvements in its lighting, shading, and performance in future builds. I took a lot more screenshots and video on both OS's, but sadly those will have to wait a while to be uploaded.

- - - Updated - - -

Not sure why the attachments aren't loading in the post, but the links work, it seems.
 
Is it even possible to get parity though? I always assumed OpenGL was a little behind D3D, and that Macs never really have cutting edge graphics drivers.

But an interesting read, and rep. I hope FD can squeeze as much as possible into the port for Mac users.
 
Is it even possible to get parity though? I always assumed OpenGL was a little behind D3D, and that Macs never really have cutting edge graphics drivers.

But an interesting read, and rep. I hope FD can squeeze as much as possible into the port for Mac users.

The idea of OpenGL being behind Direct3D in performance hasn't been true for a long time. OpenGL 3.3 matched DirectX 10.1, and OpenGL 4.0+ matched Direct3D 11. In fact, OpenGL 4.3 through 4.5 have been benchmarked to be FASTER than Direct3D 11 in some cases. It's why AMD and nVidia have been pushing against DirectX with Mantle and Vulkan, to get Microsoft to get its rear in gear.

However, you're right that the drivers in Mac OS X are behind. The latest release, OS X "Yosemite", only supports up to OpenGL 4.2, though I was using the nVidia Web Drivers from their website, which are a lot further along. I believe Frontier said they were aiming to use OpenGL 4.1 in the Mac build of the game. From my testing, though, I got the impression that the Cobra engine the game runs on simply has a bit more optimizing to do.

What IS interesting, though, is that the Mac build's CPU-intensive Galaxy Map runs more smoothly than the Windows version. The game also seems to be much better optimized for multithreading on the CPU: it will utilize 2... MAYBE 3 CPU cores on the Windows build, but on OS X it makes pretty equal use of at least 4 cores, and I've seen it use as many as 6. This is likely due to OS X's "Grand Central Dispatch", a system framework that helps developers automatically spread an application's work across multiple threads.
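
For anyone curious, here's a minimal sketch of the sort of thing GCD does (my own toy example in Swift, nothing from the Cobra engine): you hand it independent work items and it spreads them across however many cores the machine exposes, without the app managing threads itself.

```swift
import Foundation   // Dispatch (GCD) and NSLock come along on Apple platforms

// Toy example: eight independent chunks of math handed to GCD, which spreads
// them across the available cores for you. The workload is a stand-in,
// nothing to do with the Cobra engine.
let chunkCount = 8
var partialSums = [Double](repeating: 0, count: chunkCount)
let lock = NSLock()

DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
    // Each iteration may land on a different core; GCD sizes its thread pool
    // to the hardware automatically.
    var sum = 0.0
    for i in 0..<2_000_000 {
        sum += Double(chunk * 2_000_000 + i).squareRoot()
    }
    lock.lock()                      // guard the shared array while writing back
    partialSums[chunk] = sum
    lock.unlock()
}

print("Total:", partialSums.reduce(0, +))
```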
 
Supersampling IS a very heavy thing to drive, and yeah, Mac gaming isn't exactly something Apple has been working on much, so I would guess it's a matter of driver optimization rather than the game itself. More to the point, Macs aren't really intended for gaming at 4k resolution.
 
Supersampling IS a very heavy thing to drive, and yeah, Mac gaming isn't exactly something Apple has been working on much, so I would guess it's a matter of driver optimization rather than the game itself. More to the point, Macs aren't really intended for gaming at 4k resolution.

I wouldn't go that far. Blizzard titles are pretty well known for having almost perfect parity between the two OS's. A bit of optimization from Frontier could go a long way.
 
The idea of OpenGL being behind Direct3D in performance hasn't been true for a long time. OpenGL 3.3 matched DirectX 10.1, and OpenGL 4.0+ matched Direct3D 11. In fact, OpenGL 4.3 through 4.5 have been benchmarked to be FASTER than Direct3D 11 in some cases. It's why AMD and nVidia have been pushing against DirectX with Mantle and Vulkan, to get Microsoft to get its rear in gear.

However, you're right that the drivers in Mac OS X are behind. The latest release, OS X "Yosemite", only supports up to OpenGL 4.2, though I was using the nVidia Web Drivers from their website, which are a lot further along. I believe Frontier said they were aiming to use OpenGL 4.1 in the Mac build of the game. From my testing, though, I got the impression that the Cobra engine the game runs on simply has a bit more optimizing to do.

What IS interesting, though, is that the Mac build's CPU-intensive Galaxy Map runs more smoothly than the Windows version. The game also seems to be much better optimized for multithreading on the CPU: it will utilize 2... MAYBE 3 CPU cores on the Windows build, but on OS X it makes pretty equal use of at least 4 cores, and I've seen it use as many as 6. This is likely due to OS X's "Grand Central Dispatch", a system framework that helps developers automatically spread an application's work across multiple threads.

Very interesting info.

However, we must bear in mind that the Mac version is still in beta. This test should really be performed on a finished Mac version, and even then it may be biased towards Windows, as the Windows version has been constantly updated and patched up to the present. So the Windows build of ED will always have an advantage over the Mac version, having had more time to mature and get optimized. A reliable comparison can only be made after the official Mac version is released and many updates have been applied, so probably a few months following release :)

Nevertheless, it was an interesting test. One question, though: I have noticed that in the Windows version, even at normal 1080p, there are far fewer jagged lines than in the Mac one. Are you sure you had internal and/or external (Nvidia control panel) AA disabled in the Windows version at 1080p?
 
I'm actually impressed by the Mac performance.
You can really play games on these things nowadays!

Is the cursor lag fixed yet? Last time I checked, in 2010, it was still there.
 
Nevertheless, it was an interesting test. One question, though: I have noticed that in the Windows version, even at normal 1080p, there are far fewer jagged lines than in the Mac one. Are you sure you had internal and/or external (Nvidia control panel) AA disabled in the Windows version at 1080p?

I turned off AA in both the in-game settings and the control panel, so I'm not sure why it looks that way. That particular station was darker than most, though; the game usually looks to be about the same brightness as the Mac pictures.

I'm actually impressed by the Mac performance.
You can really play games on these things nowadays!

Is the cursor lag fixed yet? Last time I checked, in 2010, it was still there.

The cursor lag hasn't been there for years.

And for the record, Elite: Dangerous's performance is something of an oddity on Mac OS. As I mentioned above, games on Steam, and especially Blizzard's games, have much closer parity with their Windows counterparts. OS X has come a very long way in gaming performance in the last few years, which is why I'm so sure Frontier still has a lot of optimizing to do.
 
What IS interesting, though, is that the Mac build's CPU-intensive Galaxy Map runs more smoothly than the Windows version. The game also seems to be much better optimized for multithreading on the CPU: it will utilize 2... MAYBE 3 CPU cores on the Windows build, but on OS X it makes pretty equal use of at least 4 cores, and I've seen it use as many as 6. This is likely due to OS X's "Grand Central Dispatch", a system framework that helps developers automatically spread an application's work across multiple threads.

Now that is interesting! For my new PC I am debating whether to go for a quad-core 4790K or a hex-core 5820K (the old "single-thread speed vs. more, slower cores" debate), and I wonder if some of these optimisations might make it into the Windows version.
 
Now that is interesting! For my new PC I am debating whether to go for a quad-core 4790K or a hex-core 5820K (the old "single-thread speed vs. more, slower cores" debate), and I wonder if some of these optimisations might make it into the Windows version.

As a follow-up, I decided to take a screenshot of the game's Galaxy Map while running at 4k via supersampling. The game was switched to "Windowed Mode" in order to show the performance stats alongside the game, but otherwise the settings are exactly the same as yesterday's 4k test: AA off, but everything else set to max, with supersampling set to 2.0x, or 3840x2160.

OS X:
View attachment 32379


Windows 7:
View attachment 32378

Just to do a bit of translating: OS X displays its usage percentage per individual core rather than for the chip as a whole, so since my CPU presents 8 logical cores (4 physical cores with Hyper-Threading on the 4770K), 100% usage is just maxing out one core, and 800% would be full CPU usage. Windows simply displays full usage of all 8 logical cores as 100%.

Right away we see that ED doesn't use the CPU very much on Windows: 13% usage translates to roughly 104% in Mac OS X terms, or barely a single core. ED's Mac build, on the other hand, uses over twice that, with about 2.6 of the CPU's 8 logical cores being utilized. And as you can see from the CPU History chart, the load is spread pretty evenly across four cores, with the remaining four doing a bit of work of their own.
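
For reference, here's the conversion I'm doing between the two readouts (a tiny sketch of my own; the 8 logical cores and the 13%/260% figures are the ones from this test):

```swift
// Sketch of converting between the two CPU-usage conventions.
// Activity Monitor (OS X): 100% = one fully loaded logical core, so 800% = the whole chip.
// Task Manager (Windows): 100% = the whole chip.
let logicalCores = 8.0

func windowsToMacPercent(_ windowsPercent: Double) -> Double {
    return windowsPercent * logicalCores
}

func macPercentToBusyCores(_ macPercent: Double) -> Double {
    return macPercent / 100.0
}

print(windowsToMacPercent(13))      // 104.0 -> barely one core busy on Windows
print(macPercentToBusyCores(260))   // 2.6   -> the Mac build's usage in the screenshot
```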

We can see all that extra work being put to use via the game's frame rate. The Galaxy Map is the most CPU-intensive part of Elite: Dangerous, and just before I took the screenshot I toggled on the "plot routes" option to peg the CPU harder for the screenshots. This is the one spot in Elite: Dangerous where the Mac build outpaces the Windows version: 56.3 fps vs. 30.2 fps.

The GPU is being pretty equally abused on both OS's, running at its normal 1200MHz core clock and 7GHz memory clock on each. If anything, OS X again comes out ahead by running a more aggressive fan curve: even at the same clocks, the GPU runs several degrees cooler under OS X, 68°C as opposed to 75°C on Windows. The GPU doesn't start throttling back until it hits 90°C, so that's not a factor here.

This has certainly been an interesting test, and I'll be sure to add more of my findings as I go. Ignore the Anaconda wallpaper behind the game's window in the Mac screenshot; that was from my trip to Canis Majoris.
 
Though again, Windows holds the clear lead here. A single GTX 770 can render the game at 3840x2160 and still maintain 60+ fps. The Mac OS X build gets barely half that, and that's despite leaving out many of the lighting and shading effects present on Windows. With those effects absent, the Mac fps should be quite a bit higher than it is.

Something very wrong with those numbers...

Native 4k is far more taxing than your numbers suggest. A 770 would never be able to sustain 60+ FPS at 4k. I run an R9 290X at native 4k and it sits around 40-50 FPS, with small peaks to 60 on limited occasions and, more importantly, it sometimes chugs down into the low 20s. No FSAA or other filters used.

The R9 290X is a much faster card than the GTX 770. A GTX 980 might be able to run in the 60+ FPS range at 4k, but not a 770.

It's pretty obvious that the "supersampling" is really not doing what you think it's doing, but is some kind of internal jiggery-pokery that simulates rendering at 4k.

This article has a good set of comparisons of games running at 4k: http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/17 and you'll notice that even the 980 rarely gets over 50 FPS in most games, while the 770 struggles to sustain 30-ish!
 
I'm not sure what to tell you, EDbubba; you saw the screenshot for yourself. The GTX 770 wouldn't stand a chance running more demanding modern games at 4k at those frame rates, but Elite: Dangerous isn't all that intensive a game. It's just rendering open space most of the time, and the stations aren't exactly complicated to render compared to something like a forest.

Though in the interest of full disclosure, on the Windows build I tried out BOTH the in-game supersampling option and nVidia's DSR, which is nVidia's brand of supersampling applied from the driver control panel, and both methods had almost exactly the same impact on the game's performance.

With that said, here's an update for Mac Beta 4.0:

Well, it seems Frontier did quite a bit of optimizing in the latest build, because performance is waaayyy up. No more micro-stuttering, and I'm getting as much as 45 fps inside the stations and 60+ everywhere else, even at 4k via supersampling. Lighting and particle effects also seem to be vastly improved.

Small note: I lightened up the gamma a bit since the screenshots came out so much darker than my actual display last time, but otherwise the settings are the same.

OS X at 1080p:

View attachment 32511

OS X at 4k:

View attachment 32512

OS X 4k settings:

Screen Shot 2015-04-29 at 7.16.46 AM.jpg

That's an increase of almost 20 fps in most locations. Well done, Frontier!
 
+1 I've got a decent card and a 4k monitor. Is it worth using super sampling?

Honestly, it depends entirely on you and the quality of your monitor. The higher the monitor's resolution, the less AA matters. If jagged lines along the edges of textures really bother you, you hate the blurriness that traditional AA brings, and you can't afford a better monitor... then supersampling might be worth it.

In all honesty, when I know I'll be playing the game for extended periods of time, I'll actually turn AA and supersampling off, because I'm simply not going to notice the jaggies on the ships and asteroids while I'm neck-deep in combat. Supersampling is much more useful in games with a lot of grass, trees, and other foliage, where leaves look like a pixelated mess at 1080p. Rendering the game at 4k can make a MASSIVE difference in those kinds of scenes.

In the end it comes down to how picky you are. If you look at the pictures I've posted above and you would never have noticed a difference otherwise, then it might be a waste of processing power, because you "are" adding more wear and tear to your video card. If you're concerned about the card's longevity, you may not want to do it.
 
+1 I've got a decent card and a 4k monitor. Is it worth using super sampling?
That really depends on the pixel density. If it's 4k on an average-sized monitor, you probably only need a little AA, or none at all. Aliasing is pretty much non-existent when playing 3D games on phone screens; I'm not sure how it would look on a normal monitor, but my guess is you don't need much AA, if any at all, unless you're taking screenshots and want the images to look smooth at full size on standard-resolution screens (which, these days, probably means 1080p).

That said, the more AA you have, the better your image quality overall. For standard play, though, AA methods at the higher end of the performance-cost spectrum probably won't make much of a difference, but play around with the settings and see what best suits you.

I use an ultrawide at 2560x1080 (full HD, but with more horizontal pixels in a 21:9 aspect ratio instead of standard 16:9 widescreen) with 1.5x SSAA, and it drastically improves image quality by removing the jagged edges I see on the angled edges of my ship's windows and on converging lines in the station. Eventually, though, monitors will have resolutions so high that aliasing won't even be a problem, probably within a few years' time (at least on regular-sized computer displays).
 