[WMR] Clarity improvement "hack"

That does not explain renderTargetScale.
It's not supersampling, it's a WMR-related parameter inaccessible from the Steam settings or advanced settings.
 
So I tried this last night but with no noticeable changes. I think I may be doing something wrong though. I've used Notepad to edit the file, don't know if that's correct or not. I also have two different Steam folders, one on each drive, C: & D: (SSD & HDD). The folder on C: doesn't have the MixedRealityVRDriver folder, so I don't know if that affects anything. I'm going to try copying it over from D: to see if that makes a difference.

Right now I'm noticing colors are more vibrant on the Odyssey+, and the blacks are much darker than on the Rift, but the text is blurry enough to be annoying. I'm hoping this can help clear that up some, but we'll see.
 
That does not explain renderTargetScale.
It's not supersampling, it's a WMR-related parameter inaccessible from the Steam settings or advanced settings.

But it is supersampling. The increased intermediate frame buffer target render size corrects for the barrel distortion applied to the rectilinear image coming from the application and is a fundamental part of any VR rendering pipeline regardless of device or platform.

"Windows Mixed Reality immersive headsets contain lenses which distort the presented image to give higher pixel density in the center of view, and lower pixel density in the periphery. In order to have the highest visual fidelity on Windows Mixed Reality Ultra devices, we set the render target’s pixel density to match the highly-dense center of the lens area. As this high pixel density is constant across the whole render target, we end up with a higher resolution than the headset's display."

You can see a code example setting the equivalent variable in the documentation.

https://docs.microsoft.com/en-us/wi...rsive-headset-apps#default-render-target-size

Code:
auto holographicDisplay = Windows::Graphics::Holographic::HolographicDisplay::GetDefault();
if (nullptr != holographicDisplay)
{
	double targetRenderScale = 1.0;
	auto systemInfo = ref new SystemInfoHelper::SystemInfo(holographicDisplay->AdapterId);
	SystemInfoHelper::RenderScaleOverride^ renderScaleOverride = systemInfo->ReadRenderScaleSpinLockSync();
	if (renderScaleOverride == nullptr || renderScaleOverride->MaxVerticalResolution != (int)holographicDisplay->MaxViewportSize.Height)
	{
		if (renderScaleOverride != nullptr)
		{
			// this deletes the file where we read the override values 
			// it is async but we shouldn't have to wait for it to finish 
			systemInfo->InvalidateRenderScaleAsync();
		}
		/// You may insert logic here to help you determine what your resolution 
		/// should be if you don't have one saved. SystemInfoHelper has some  
		/// functions that may be useful for this
		// Performance constrained systems that are throttled by the OS will have a
		// max resolution of 1280x1280
		if (holographicDisplay->MaxViewportSize.Height < 1300.0)
		{
			// Set default render scale for performance constrained  systems here
			targetRenderScale = 0.8;
		}
	}
	else
	{
		targetRenderScale = renderScaleOverride->RenderScaleValue;
	}
	CoreApplication::Properties->Insert("Windows.Graphics.Holographic.RenderTargetSizeScaleFactorRequest", targetRenderScale);
}
 
Here a Microsoft dev explains what renderTargetScale does:
https://www.reddit.com/r/WindowsMR/..._to_prove_that_20_render_does_make_a/ed5zzs9/

Let me try to provide more details. What renderTargetScale does is it increases the internal limit on resolution before the downsample/upsample occurs.
More specifically:
On an Acer, HP, Lenovo, Dell, ASUS or other headset (1440x1440 per eye resolution):
renderTargetScale == 1: Limit is 1764x1764 per eye
renderTargetScale == 2: Limit is 2206x2206 per eye (this is the max; setting renderTargetScale > 2 won’t have an effect).
On a Samsung Odyssey or Odyssey+ headset (1440x1600 per eye resolution):
renderTargetScale == 1: Limit is 1657x2065 per eye
renderTargetScale == 2: Limit is 2072x2582 per eye (this is the max; setting renderTargetScale > 2 won’t have an effect).
Setting renderTargetScale will use slightly more video memory, but not much.
Geoff
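
For anyone wondering where this actually gets set when you do it by hand: on my install it's in the Windows MR for SteamVR driver's default.vrsettings file (under Steam\steamapps\common\MixedRealityVRDriver\resources\settings\ here). The path, section name and exact formatting below are just what my copy looks like, so treat it as a rough sketch and check it against your own file before editing:

Code:
{
	"driver_Holographic_Experimental" : {
		"renderTargetScale" : 2.0
	}
}

The file has other sections and keys in it as well; the only change for this tweak is the renderTargetScale value, and as Geoff says, anything above 2 has no further effect.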
 
Makes me wonder why it isn't set up like this by default?

What? And make it easy?!!! You'd think it was plug and play or something... :D

Actually, it's likely due to marginal cases where someone's computer might not be up to snuff... But you would think they could run a setup routine to determine that.
 
Well, the latest Windows MR for SteamVR update changed renderTargetScale to 2 as the default setting because of the clarity increase. The notes had a thank-you to the community as well 👍.

Only bad thing about this is that the clarity upgrade to the Pimax won't be that noticeable 😂
 
Well, the latest Windows MR for SteamVR update changed renderTargetScale to 2 as the default setting because of the clarity increase. The notes had a thank-you to the community as well 👍.

Only bad thing about this is that the clarity upgrade to the Pimax won't be that noticeable 😂
Good news, and it's promising for future development and support that they've made the change.
 
Well, the latest Windows MR for SteamVR update changed renderTargetScale to 2 as the default setting because of the clarity increase. The notes had a thank-you to the community as well 👍.

Only bad thing about this is that the clarity upgrade to the Pimax won't be that noticeable 😂

I agree, that is good news. Cheers PetroVitallini.
 
I tested it with Steam SS at 200% and, holy cow, how beautiful it is on the Odyssey+. Too bad reprojection kicks in. But now I can't unsee how beautiful ED can be in VR and have to play with reprojection. Time to order a 2080 Ti, it seems.
 
I tested it with Steam SS at 200% and, holy cow, how beautiful it is on the Odyssey+. Too bad reprojection kicks in. But now I can't unsee how beautiful ED can be in VR and have to play with reprojection. Time to order a 2080 Ti, it seems.

Still won’t be enough!
 
Well, according to the WMR reprojection indicator, my bottleneck is the CPU (i5-8400). Never thought the CPU mattered in modern gaming.
 
Well, according to the WMR reprojection indicator, my bottleneck is the CPU (i5-8400). Never thought the CPU mattered in modern gaming.

Elite likes threads, but I'm surprised a modern 6-core i5 is bottlenecking it. What GPU and settings are you using? Also not sure what you mean by indicator?
 
My GPU is a GTX 1070. I enabled the motion reprojection indicator as described here: Microsoft docs. Now the indicator shows exactly what causes reprojection. To my surprise I found out that my CPU is the bottleneck, not the GPU (which I know is not powerful enough for SteamVR supersampling). I checked SteamVR's advanced frame timing, and it reports that my CPU is the limiting factor too. So I'm a little bit confused right now. I thought the i5-8400 was just about the perfect CPU for gaming.
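
In case anyone else wants to turn the indicator on: as far as I remember it lives in the same default.vrsettings file as the renderTargetScale tweak. The key names below are from my memory of the Microsoft docs, so please verify them against the linked page and your own driver version before editing:

Code:
{
	"driver_Holographic_Experimental" : {
		"motionReprojectionMode" : "auto",
		"motionReprojectionIndicatorEnabled" : true
	}
}

With it enabled you get a small overlay whose colour indicates why reprojection kicked in (app CPU bound, app GPU bound, and so on), which is how I spotted the CPU limit here.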
 
There is no possible way that a 970 can run ED "smoothly" with a per-eye resolution of 2015x2519 (the SteamVR value), in-game SS 1, HMD SS 1 + everything else on max settings.
Only if you consider a slide show as smooth...

I started playing ED in VR with a 970 and an i5 2500K, and I have extensively tested what that rig can handle.
My current i7 2700K @ 4.5GHz + 1080 @ +140MHz core, 300MHz RAM could handle the above settings "smoothly", which for me is around 55fps in stations and even less when approaching a planet to land on.
I use a Pimax 4K, so my FPS is limited to 60.
 
There is no possible way that a 970 can run ED "smoothly" with a per-eye resolution of 2015x2519 (the SteamVR value), in-game SS 1, HMD SS 1 + everything else on max settings.
Only if you consider a slide show as smooth...

I started playing ED in VR with a 970 and an i5 2500K, and I have extensively tested what that rig can handle.
My current i7 2700K @ 4.5GHz + 1080 @ +140MHz core, 300MHz RAM could handle the above settings "smoothly", which for me is around 55fps in stations and even less when approaching a planet to land on.
I use a Pimax 4K, so my FPS is limited to 60.
Smooth is subjective of course. I have a 970; I use Steam set to 150% (above that it's jerky), in-game SS 1, HMD SS 1, AA off, AO off, most other things on high. Works really well. Seems to be the sweet spot for me.
 
My GPU is a GTX 1070. I enabled the motion reprojection indicator as described here: Microsoft docs. Now the indicator shows exactly what causes reprojection. To my surprise I found out that my CPU is the bottleneck, not the GPU (which I know is not powerful enough for SteamVR supersampling). I checked SteamVR's advanced frame timing, and it reports that my CPU is the limiting factor too. So I'm a little bit confused right now. I thought the i5-8400 was just about the perfect CPU for gaming.

Do you have lots of services running in the background? Is the reprojection just near stations or in other areas too?

If you haven't already seen them, these threads may be useful for some CPU info for Elite.

https://forums.frontier.co.uk/showthread.php/466411-Ryzen-R5-2600x-for-VR
https://forums.frontier.co.uk/showthread.php/468687-Potential-CPU-bottleneck

Edit: Re-reading those threads, no one has first-hand experience with the 6-core i5 (strange, as it's one of the most common gaming processors).
 
Thanks for the links! Actually, I have no explanation for how it is possible that supersampling is CPU-bound. From my understanding, SS is all about raw GPU power. But the VR runtime is doing lots of stuff, so VR gaming is very different from conventional gaming.
 