Even in Gamma, performance in the Rift is ridiculously slow; play at your own risk

I have an MSI 770 Lightning, and I've overvolted and overclocked it well beyond what Afterburner allows for that card; I'm probably slowly cooking it. I still consider it pretty borderline in Elite Dangerous.

There are a lot of variables. In a sidewinder I can have the Oculus Quality slider at max and most of the other settings at high - I'll get 75fps in stations, and occasional dips below that flying around planets, in asteroid rings etc. If I keep the same graphics settings but switch to an Eagle or Cobra it's judder-city in stations, especially if I move my head. Technically it's 'playable' but it really kills the immersion (for me) and even the illusion of 3D to a large degree.

I'll probably be upgrading to a 970 soon :)

Well, I'm sitting at 1202 MHz on my GPU, so my options are either to buy a GTX 970 or spend my ED days in a Sidewinder? I mean, from a role-play aspect the Sidewinder can be good enough :D
 
Quietman,
Great and informative posts, though I do think you're presuming the readers have technical knowledge of GPU programming equal to your own. It may be better to write less technically and more pedagogically.

- Of course 400 billion systems aren't stored client side, the galaxy has been pre-generated server side and it feeds local slices, (infinitesimally small), to each island host, (multiplayer), or individual client, (solo); the client then generates a new scene based on that data.


I just thought this one quote was slightly inaccurate, and might mislead and confuse a lot of people. Remember, this game worked in offline mode for quite a while. The galaxy is indeed collated and pre-generated server-side, specifically so the developer has the ability to make changes to the galaxy and its economy / politics. But when you log in to the game and start it up, the galaxy data is not sent to the client as assets. No 3D models of planets, and very little galactic matrix point data, are transferred between server and client. The only time point data and 3D assets are transferred is with each download or upgrade. It's really all about the procedural generation seed. Once the developers make changes to the procedural generation algorithms, for whatever phenomena they want to reproduce, update or correct, they send the procedural seeds to each client. The galaxy is then generated on each client using the seed data from the server. Anything else would be madness.
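Purely as an illustration of that last point (a toy sketch, nothing to do with Frontier's actual code or data formats): a deterministic generator only needs the seed to rebuild the same system on every client, so the seed is effectively all the server has to hand out.

```cpp
#include <cstdint>
#include <cstdio>

// Tiny deterministic PRNG (splitmix64-style): the same seed always yields the
// same sequence on every machine, which is the property described above.
struct SeededRng {
    uint64_t state;
    explicit SeededRng(uint64_t seed) : state(seed) {}
    uint64_t next() {
        state += 0x9E3779B97F4A7C15ULL;
        uint64_t z = state;
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4B0DAULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }
    double next01() { return (next() >> 11) * (1.0 / 9007199254740992.0); }
};

// Hypothetical "system" built entirely from the seed the server hands out.
struct StarSystem {
    int    planetCount;
    double starMassSolar;
};

StarSystem generateSystem(uint64_t serverSeed) {
    SeededRng rng(serverSeed);
    StarSystem s;
    s.planetCount   = 1 + static_cast<int>(rng.next01() * 12);   // 1..12 planets
    s.starMassSolar = 0.1 + rng.next01() * 9.9;                  // 0.1..10 solar masses
    return s;
}

int main() {
    uint64_t seedFromServer = 0xC0FFEE42ULL;       // the only per-system data the server must send
    StarSystem a = generateSystem(seedFromServer);
    StarSystem b = generateSystem(seedFromServer); // same seed -> identical system, no assets transferred
    std::printf("planets: %d vs %d, star mass: %.2f vs %.2f\n",
                a.planetCount, b.planetCount, a.starMassSolar, b.starMassSolar);
    return 0;
}
```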
 
Quietman,
Great and informative posts, though I do think you're presuming the readers have technical knowledge of GPU programming equal to your own. It may be better to write less technically and more pedagogically.

I do get carried away sometimes, there is no doubt about that, but it is also extremely difficult to convey most of these technical aspects in layman's terms. The O.P. seems to have a decent grasp of the basic functionality of a GPU, so I was really having a one on one conversation with him, (obviously anyone else is welcome to join in as you did :)), hence my warning at the start, (although I could have added a technical jargon component to the warning as well).

I just thought this one quote was slightly inaccurate, and might mislead and confuse a lot of people. Remember, this game worked in offline mode for quite a while. The galaxy is indeed collated and pre-generated server-side, specifically so the developer has the ability to make changes to the galaxy and its economy / politics. But when you log in to the game and start it up, the galaxy data is not sent to the client as assets. No 3D models of planets, and very little galactic matrix point data, are transferred between server and client. The only time point data and 3D assets are transferred is with each download or upgrade. It's really all about the procedural generation seed. Once the developers make changes to the procedural generation algorithms, for whatever phenomena they want to reproduce, update or correct, they send the procedural seeds to each client. The galaxy is then generated on each client using the seed data from the server. Anything else would be madness.

I used generic terms like slices and data, I never intended for those to mean 3D assets or GPU resources, as you say that would be madness :). I could have been clearer and specified what that data is, but I don't know what it is; it's obviously solar system based and, as you say, it could be seeds that plug into a client-side procedural generator.

Do you know for a fact that this is how a solar system is created when you enter it? What about the hand-crafted 160,000-odd systems, are they built into the client executable and client assets?

Or, does the server just send solar system body attributes, to each client as they enter a new solar system for the first time, i.e. stellar body types, positions, sizes, orbits, names, etc? I could see this being possible as obviously the amount of data is fairly small and only needs to be sent every time a new solar system is procedurally created on the client's side, (i.e. the slice of the galaxy I mentioned in the previous post).
 
I just thought this one quote was slightly inaccurate, and might mislead and confuse a lot of people. Remember, this game worked in offline mode for quite a while. The galaxy is indeed collated and pre-generated server-side, specifically so the developer has the ability to make changes to the galaxy and its economy / politics. But when you log in to the game and start it up, the galaxy data is not sent to the client as assets. No 3D models of planets, and very little galactic matrix point data, are transferred between server and client. The only time point data and 3D assets are transferred is with each download or upgrade. It's really all about the procedural generation seed. Once the developers make changes to the procedural generation algorithms, for whatever phenomena they want to reproduce, update or correct, they send the procedural seeds to each client. The galaxy is then generated on each client using the seed data from the server. Anything else would be madness.

I like the idea that procedural generation is just a very, very, very lossy compression algorithm. You could argue that using it on such a grand scale is madness too, mostly because artists and game designers lose a lot of control. In the example you give, modifying the seed even slightly can result in a completely different environment. Different in detail, I mean; it would look the same on average. This is a problem that the guys working on Infinity had too: every planet looked the same but different. In order to add uniqueness you have to add new procedural generators that layer on top of the existing ones. Every time you do that you lose some control over the outcome, of course. So to a great extent you have to add real data back in to get the kind of game you want. It's nice to have 400 billion stars, but unless the developers have specifically added detail, there won't be any uniqueness.
 
The 780 Ti is a great card, and the same goes for the 980 (they are over 500 USD if I remember correctly); performance-wise they are about 30% faster than a stock 770, so I would expect those cards to give no issues.

I may add a second card in SLI, since 770s are pretty cheap now, although it is a bit of a waste to have two 770s when the card is old technology. SLI makes sense for the new generation, but not so much for the old one, since for the price of two 770s you can get a newer card. I am planning to stay with what I have; I will just hope that Oculus improves their technology so the SDK gives a bit more performance, because we already know that Frontier won't do anything to improve things. I would prefer they focus on planetary landings and a first-person mode to get out of the ship.

Plus, I am not investing in something just to support an incomplete development kit :) Probably in March at GDC we will see either the DK3 or the release candidate for the final version of the Rift; and if they bump the resolution to 1080p per eye (meaning the display is roughly 4K; I can't even imagine the price of such a thing), you will need twice as much power to render the same game as now, and a 790 or a 980 will become about as useful as my 770 is now :)

VR was complicated for a reason; now it is a bit easier, not because someone figured out a way to make it better, but because parts are cheaper and customers are willing to spend 4K on a computer to play a game with an HMD. As usual there is no innovation, just iterative and derivative solutions.
 
strange.....

I have an i5 2500K too, and only a GTX 670 (GTX 980 on the way).

First off, whilst I do overclock the balls off my rig, I would still say yours should be OK with Elite on minimum details. Do bear in mind, however, that your GPU is right on the advertised minimum spec for the Oculus Rift DK2 (I am below it). Like it or not, the Rift takes a lot of juice to run right.

Rift-wise, the sweet spot for me on my current rig is extended mode at 72Hz with vsync turned off.

At 60Hz it's too blurry, and at 75Hz (especially with vsync on) it's a bit stuttery, but I can (and do) play Elite in the Rift for hours with no ill effects.

Bear in mind the final version of the Rift will be even more punishing on our hardware.

It works at minimum detail, but if you try the fight training mission, where you have plenty of asteroids, the system just can't keep up.

I have a 770 OC from Asus; I could overclock the video card and processor to get some more performance, but I doubt the effort would give me anything beyond a 5% increase, which is a waste considering the strain you put on your system with OC (and the related risk of failures... OC was useful when you had a 33 MHz chip and needed to go faster, because the architecture was more forgiving of voltage and temperature variations beyond spec... nowadays the number of components you have per square mm is insane, which results in more extensive damage if you push the OC too far).

The new Rift, if they increase the res, will have 1080p per eye (roughly a 4K display in total), which means that to run that thing you will need twice as much power from your video card... basically something like a Titan to play at min or medium settings. That's scary.
 
The 780 Ti is a great card, and the same goes for the 980 (they are over 500 USD if I remember correctly); performance-wise they are about 30% faster than a stock 770, so I would expect those cards to give no issues.

I may add a second card in SLI, since 770s are pretty cheap now, although it is a bit of a waste to have two 770s when the card is old technology. SLI makes sense for the new generation, but not so much for the old one, since for the price of two 770s you can get a newer card. I am planning to stay with what I have; I will just hope that Oculus improves their technology so the SDK gives a bit more performance, because we already know that Frontier won't do anything to improve things. I would prefer they focus on planetary landings and a first-person mode to get out of the ship.

Plus, I am not investing in something just to support an incomplete development kit :) Probably in March at GDC we will see either the DK3 or the release candidate for the final version of the Rift; and if they bump the resolution to 1080p per eye (meaning the display is roughly 4K; I can't even imagine the price of such a thing), you will need twice as much power to render the same game as now, and a 790 or a 980 will become about as useful as my 770 is now :)

VR was complicated for a reason; now it is a bit easier, not because someone figured out a way to make it better, but because parts are cheaper and customers are willing to spend 4K on a computer to play a game with an HMD. As usual there is no innovation, just iterative and derivative solutions.

True, true, but one of their goals was also to keep the price tag around $300, so many people would be able to afford it.
 
Don't read this unless you have some serious spare time, you have been warned!

No worries; if I am not playing I am here reading ;) My reply is also quite long, sorry about that.

- When you say you have no issues playing on the monitor at all, what FPS do you get in stations, in asteroid fields and in deep space with vsync/gsync off? If you're not getting 120+ FPS in stations with a 2D monitor, then you have little chance of using the DK2 without stutter; how tolerable that stutter is, is a personal experience for each user.


I need to double check, but I believe I am running at max settings, all on max, every single option on, at 110 FPS, on a 50Hz refresh rate (that's what my 1080p LCD uses).
Vsync is not doing much, since I use a 60Hz display... if you output 120 FPS, your monitor can still only display at 60Hz, which is about 60 screens per second, so 60 FPS. Locking your card to the refresh rate of the monitor helps to avoid tearing, because the image output from the graphics card is higher, which forces the monitor to do blending, since it can't go faster than that.
Many times people forget that the monitor limits the number of frames you can display (refresh rate: how many times per second the raster goes from the top-left pixel to the bottom-right; FPS: frames drawn per second by the video card).


- "Glue and duct tape"; I assume you are talking about the Occulus Rift SDK, (correct me if I am wrong)? Have you actually built anything against the Rift SDK? I am developing with it daily and it has it's good and bad points, but for a beta release API it is more than acceptable.

Yup, referring to the Oculus SDK. I made two demos with Unity 3D and one with pure C++ code; I am no expert with it but I know my way around it. Like any API that is still a work in progress, it has limitations to overcome; some you can circumvent with your coding skills, but for others you can't use anything that is not exposed by the public interface. I am waiting for the final release of the SDK before marking it as a bad SDK. It is usable, if that's what you mean... after all, many people can drag and drop the Rift controller into Unity and 70% of the work is done for them.
- Let's say by release FD manage to optimise their rendering engine by another 20%, (improbable at this late stage), will that make a difference to your performance issues? I am guessing not, therefore it's up to you to determine why your machine isn't performing like others with similar setups.

Agreed, it is almost impossible, since this is effectively the release candidate at this point; no EPM in their right mind would allow any changes that risk introducing bugs.
If there were such a thing as a 20% performance increase from the engine, that MAY improve my performance, but it is hard to say, since we are dealing with a complex system that has three main variables: the SDK from Oculus, the 3D engine of the game and the OS.

The only high-level analysis I can make is that other games work fine with no stutter and no ghosting, and they are just as intensive (graphics-wise; physics is handled mostly by the CPU, not the GPU, unless you tell the GPU to use spare cycles), so if the other variables are the same (same OS, same SDK), it means that the variable influencing the overall outcome is the game engine.

- ED uses a DirectX-based renderer; this has nothing to do with OpenGL.

True, my bad; if you live on Windows, yes.
There is no such thing as DX on OSX; there the SDK uses OGL. I just moved back to Windows after 14 years of OSX; I am a bit rusty on DX so I need to read more. I recall that DX also does context switching like OGL, but it hides it from the user, so you don't have to switch context yourself and just send draw commands once done. I will look into it once I have a moment.

- You as the programmer don't normally switch contexts in DirectX or OpenGL; it is one thread per context. The system switches between contexts for different windows on a desktop or when switching processes. If you use an API like Equalizer for OpenGL, then yes you can use multiple contexts, but that is irrelevant for this discussion. Also, DX11 supports multi-threading and deferred contexts, but once again it is one thread per context; there is no unbinding/rebinding of contexts at the singular thread level.

Not sure how you could do it otherwise; this is the basics of how OGL works: if you don't declare each context, load the framebuffer, render the scene and switch to display it, there is nothing on screen.

You are probably confusing user mode and kernel mode; in that case the graphics scheduler operates the context switch on the GPU pipeline.

- Remember, switching contexts used to be an extremely costly operation for the GPU and to some extent it still is; all its onboard pipelines and caches get flushed, the driver flushes its shader caches and re-initialises its memory manager, etc, etc.

This is actually what happens every frame... load, process, draw, switch context, display and flush. Some info is cached, like light raycasts, mesh data (if relevant to the culling area) and such.

At least this is how it used to be in 2.0; I have not touched my OGL bible since 2009 :) If I am behind, feel free to point me to some technical resources so I can update myself.
- There are no wait states between GPU context switches; the GPU is effectively stalled whilst the old context is being flushed and the new one is being created. The process of sending draw calls to the GPU is already a deeply pipelined operation at the API level, at the driver level and even internally on the GPU. You NEVER want to stall the GPU; in an ideal game the GPU runs at 100%, (no VSync), all the time and the CPU cores run less than 100%, happily feeding it batched draw calls, handling input, streaming resources, performing AI, procedurally generating content, (in ED's case), etc.

There are in fact wait cycles; the CPU sends a semaphore to the GPU, and once the GPU is done and switches context, the CPU resumes operation, releasing the lock and sending new data from main memory to GPU memory. This can be done in chunks or multiple frames at a time (depending on how many threads you have). You would be surprised how many cycles are wasted when the CPU and GPU communicate with each other. This is the nature of any IC communication protocol: you cannot send the next batch of instructions unless you get confirmation from the "slave" that the previous operation has completed. This is true for network protocols, for the internal bus between core and cache memory, or between CPU and memory... and obviously between the CPU and GPU.

What changes is the number of pipelines you can use, the storage size and the threads a system can handle. We don't go faster only because we reduced the wait time, but also because we have more lanes to communicate over and more threads running concurrently; which is a dissertation on its own :)

- What exactly does Star Citizen do that ED doesn't with respect to the DK2?

Simple answer: it just works.

More in-depth answer: no jitter/blur, no stuttering. When I move my head, it is smooth; when I am dogfighting, I can rotate my head as if I were there.
In ED I can't get this all the time, even at minimum detail. I understand that ED has more objects to move compared to the tinderbox map size of SC, but the models in SC are much more complex, polygon- and texture-wise, than ED's, and it uses one of the most power-hungry engines ever made. The expectation was that ED would run smoothly, while SC would stutter and ghost.

- Of course 400 billion systems aren't stored client side, the galaxy has been pre-generated server side and it feeds local slices, (infinitesimally small), to each island host, (multiplayer), or individual client, (solo); the client then generates a new scene based on that data.

Yes, that's obvious :) Each instance is a defined size; you can tell since you can't go from system A to system B just with supercruise, and also the number of players per instance is limited.
I can't put a number on what I see, but I suspect that each instance is pretty big (although mostly empty... there is not much besides planetoids and space stations), while the data to draw each instance I suspect is quite small. Travelling through systems is done by changing instance (covered by the hyperdrive animation); not different from what SC does when you land on a planet.

- With the DK2, you have two separate cameras; your scene graph/render lists need to be traversed, culled and converted into draw primitives twice, there are no shortcuts, no wait states, nothing, it's just twice the work. You can completely cheat by rendering a normal single image into a render buffer and then reconstruct a left and right view from the depth buffer by generating position information and generating a new left and right image in post-processing shaders, but it is a hack and usually gives poor results.

True, each camera draws on its own, which is the same effect you may experience in any racing game that uses rear-view mirrors. When you look from camera A you are in the cockpit, while camera B is behind the car mesh, rendering to a surface (usually a texture, which is what the Rift SDK does too), so you can see two different camera outputs at the same time. This technique has been used for 25 years or so; nothing new under the sun :)

As you pointed out, there are two cameras, which means you need to render the same scene twice; and as you switch context each time, your wait cycles double. The only saving grace is that our eyes are tied together, so there is a defined space between their fields of vision, and most of it is also overlapping. This helps to avoid drawing two completely distinct scenes, since you can interpolate what A sees, transfer it to B and just recalculate what is different between the two. It is less expensive than having A and B pointing at two different spots (like in a car game, which is why the mirrors are usually so small), but it still takes a toll on the system.

On top of that, the monitor pipeline has to deal with twice the traffic from two cameras, and the Rift does not use the most advanced screen. If you have two monitors, the data runs in parallel on two different pipelines. This just adds on top of everything else.

- I am 47 years old; when I first received my DK2 kit, the A lenses that I tried were horrible, (I am slightly near-sighted and have an IPD of 68mm). After a couple of weeks of experimenting, I modified the B lenses, (to be 68mm apart), and rebuilt the DK2 examples with a modified HMDInfo.LensSeparationInMeters = 0.068 and guess what, voila, everything was suddenly in focus and I could finally see the actual Pentile pattern as the screen door effect that a lot of people had been whining about on the Oculus forums. The CV1 without a shadow of a doubt needs physically adjustable IPD, (just like a set of binoculars), that the API can monitor or be adjusted to; the current optic system with its sweet spot at 63.5mm +/- 2mm just doesn't cut it in my opinion.

- Now I use VRGear Interceptor, which is a small Windows app that intercepts the DK2 API's messages and reports a user-defined screen separation that matches my modified B lenses and once again, voila, I now have perfectly in-focus DK2 imagery for all applications, including ED.

http://www.vr-gear.com/

- Note I am not affiliated with them in any way, and even though I did end up ordering the attachments, I ended up using my own previously modified B lenses with their software.

You are older than me then; I am 40. I wear corrective glasses, so when I play with the Rift I need my contact lenses. I can play without glasses, since I am not that blind, but I get tired quickly. My vision problem is that one eye sees far but not close, while the other sees close but not far; I can barely see 3D, which doesn't help either. I tried both lenses and I prefer the A.

I have had the DK2 since the end of September; besides the demos, I play regularly with DCS, ETS2, Assetto Corsa, HL2 and SC. I do not play a lot, so my sessions are often an hour at most. I like it a lot; it is like being miles away every time I put the Rift on.

BTW you could just run the config setup and change the IPD there, no need to rebuild. The issue, as you mentioned, is that there is no physical way to change the IPD, because in binoculars there are two lenses... here you deal with one monitor, and we already lose part of the center area behind the contraption of the Rift. Another reason why I would rather have two smaller 1080p monitors, one for each eye, instead of one big monitor divided in two :)

Interesting, I will take a look at this VRGear Interceptor. BTW how do you resolve the issue with tilted planes and aliasing? Once you rotate a plane, its edges alias at the current resolution; you can't really do much unless you blur the line (which affects the readability of text).


- Weird analogy; here's how I see it, there is no absolute truth, there is a majority truth, there are multiple minority truths and then there is your truth. Your truth is an isolated experience, what you have observed and experienced may have been observed and experienced by others in the minority groups, but not by the majority group.

- How do I know this? Well, it's extremely simple logic: if the majority experienced what you are experiencing, then the DK2 just would not work, and it plainly does, for a lot of people, including me.

The logic is sound. I was not referring to an absolute truth that is accepted by all, but to the fact that people have emotions and their experiences may differ, so you may have a bunch of people influenced emotionally, which would modify their perception. Just look at religion, or politics, for an easy and realistic example. I am OK with accepting that if most people do not have a problem, it can be considered the truth in that case, but this won't make the problem disappear; don't you agree?

As I read the comments, owners of cards similar to mine have problems, which makes me believe that there is no workaround. It is not that I configured it wrong; it is purely and simply a limitation of the hardware. This is actually what I was trying to accomplish with my post... I was not aiming to cause any kind of animosity.

- I understand your experience and I am not trying to minimise it, as I expressed in my earlier post; but you are projecting your experiences onto others. Go through the huge VR thread and collate how many users share experiences similar to yours and how many share experiences similar to mine.

Now, after so many comments, I believe that my issue is legit, and caused by a limitation in the hardware. Who's at fault is a whole different story, but at least I know that I have not become so old that I can't configure a computer or a peripheral :)


- I won't argue this point with you, it just has too much flammable potential, so I will leave it at that.

It was just a side note, since I know that some may be influenced more emotionally than others. I try to see things from a 40,000-foot perspective and to work on cause and effect, rather than on how much I like or care about something or someone. Otherwise it is your emotions talking, not you. At least this is what I believe; it is understandable if this view is not shared by others, and I respect that.


- I think you are really making light of the work and optimisation that has already gone into ED. I have been creating 3D engines for a very long time, right from the pre-hardware days where you scan-line filled the individual triangles yourself on the CPU. I don't share your views; I don't think it is perfect, but from what I have seen it is mostly there in terms of optimisation, especially given the server-side size of the galaxy and the real-time generation of procedural content client-side.

On the contrary! I am not saying that they did a bad job, but that there is that extra mile that is not done, because "it is OK as is". I remember those days; I was involved in a game with a proprietary engine (around the time of Dark Age of Camelot, to give an idea), which we built from the ground up. I also remember the time when there was no 3D and the programming was done in ASM-One, moving blitter and copper data in assembly; I'm not sure if it was more tedious than dealing with vertices nowadays, since everything is 3D.
I just mourn the loss of that attitude, where code was heavily optimised, because even though there were graphics accelerators and memory expansions, the average machine was either an Amiga 1200 with a 14MHz 68EC020 and 2 MB of RAM, or a PC AT with a 386 DX and at most 4 MB of RAM. You wouldn't write code and be sloppy on optimisation, hoping that people would upgrade; that was not the mentality at that time, and I am sure you remember it, since you are older than me.

If this is progress, so be it, but I can't really see this approach as fair. A good programmer is one who uses everything they have available; a mediocre programmer simply blames a lack of speed or RAM, laughing at anyone who tries to claw back even 16K of memory at a time, or a couple of CPU cycles per routine.

This is an enjoyable discussion, I hope it continues!

Likewise :) I am happy to learn from anyone who has anything to teach; this is the pinnacle of constructive communication!

- - - - - Additional Content Posted / Auto Merge - - - - -

I think the Rift DK2 already has 1080p per eye; you mean 2K per eye, right?

Nope, the full display used in the DK2 is 1080p (1920x1080, right?), which is divided in two, giving you a resolution of about 720p per eye.
If you want a real 1080p resolution per eye, the screen has to be twice that resolution, so a 4K display or so.
 
I think the Rift DK2 already has 1080p per eye; you mean 2K per eye, right?

The DK2 has one 1920x1080@75Hz screen for both eyes, so 960x1080 per eye, but applications/games have to render at 2564x1461, (for both eyes), to counteract the optic distortion and shader correction if they want a 1:1 correspondence for the center pixels.

The CV1 is most likely going to be 2560x1440@90Hz for both eyes, so 1280x1440 per eye, but applications/games might have to render at ~3420x1948 to counteract the optic distortion and shader correction, (assuming CV1 does it in the same way as the DK2).

Anything higher than that in resolution and especially at that synchronised vertical frequency is just not feasible on today's hardware and maybe even on the next generation of hardware, (for AAA games).

If Nvidia and AMD get their act together on supporting one GPU per eye as an extra SLI/Crossfire mode, then it's probably all doable now.
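Just to spell the arithmetic out (using the figures quoted above; the oversampling factor is inferred from them, not an official Oculus constant):

```cpp
#include <cstdio>

int main() {
    // DK2 panel shared by both eyes.
    const int panelW = 1920, panelH = 1080;
    const int eyeW = panelW / 2, eyeH = panelH;      // 960x1080 visible per eye

    // Render size quoted above for both eyes: 2564x1461, i.e. 1282x1461 per eye
    // before the distortion pass squeezes it back down onto the panel.
    const int renderW = 2564 / 2, renderH = 1461;

    std::printf("oversampling: %.2fx horizontal, %.2fx vertical\n",
                renderW / double(eyeW), renderH / double(eyeH));   // ~1.34x, ~1.35x
    return 0;
}
```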
 
True, true, but one of their goals was also to keep the price tag around $300, so many people would be able to afford it.

Yes, but if I need to spend 300 plus 600 for the video card, the experience costs me 900 dollars...

They piggyback on the fact that many gamers already have expensive rigs, so their solution is simplified by that.
Making a controller that would have to deal with two screens instead of one is a complication they didn't want.
 
I think that might have been the longest post I've ever seen.

I have a 3960X (I think it's called) hex-core at 3GHz, a GTX 780 and 16GB of RAM.
I usually play on medium settings and it's mostly good. It's less awesome in stations, but sometimes it's like butter in an outpost station. I find it to be very playable. The worst frame rate I've had was in a mining field close to a star. Slideshow.

Also, I have noticed big differences in performance between beta versions, the current version being the best.
 
The DK2 has one 1920x1080@75Hz screen for both eyes, so 960x1080 per eye, but applications/games have to render at 2564x1461, (for both eyes), to counteract the optic distortion and shader correction if they want a 1:1 correspondence for the center pixels.

The CV1 is most likely going to be 2560x1440@90Hz for both eyes, so 1280x1440 per eye, but applications/games might have to render at ~3420x1948 to counteract the optic distortion and shader correction, (assuming CV1 does it in the same way as the DK2).

Anything higher than that in resolution and especially at that synchronised vertical frequency is just not feasible on today's hardware and maybe even on the next generation of hardware, (for AAA games).

If Nvidia and AMD get their act together on supporting one GPU per eye as an extra SLI/Crossfire mode, then it's probably all doable now.

1280x720 is the 720p resolution, so it is not even 720p per eye; good point.

Two Titans might be able to fit the bill for a 4K/2 resolution, to give a real 1080p per eye... not sure how many people would be interested in spending 2200 USD just for a few games though ;)

- - - - - Additional Content Posted / Auto Merge - - - - -

I think that might have been the longest post I've ever seen.

I have a 3960X (I think it's called) hex-core at 3GHz, a GTX 780 and 16GB of RAM.
I usually play on medium settings and it's mostly good. It's less awesome in stations, but sometimes it's like butter in an outpost station. I find it to be very playable. The worst frame rate I've had was in a mining field close to a star. Slideshow.

Also, I have noticed big differences in performance between beta versions, the current version being the best.


I apologize for the wall of text; got carried away.

The 780 is a superior card to the 770; the price difference was about 200 dollars, I believe. A 780 is actually as good as, if not better than, a current-gen 970.

I think you can play at minimum settings even in the mining field without issues; your card is beefy enough to take it.
 
Well, I just picked up an EVGA GTX 980 SC tonight and tossed it in a Dell workstation for a test, since it has a nice Xeon E5-2650 @ 2.00 GHz and 32 GB of RAM. Since ED is 32-bit the RAM doesn't do anything, but what the heck. I did tests with my GTX 580 before and it was horrible, stutter everywhere even though the reported frame rate wasn't too bad (mid 60s).

The 980 allowed me to instantly set everything to High and max res 1920x1080 without any stutter issues, so that was a nice plus. All I have now is a slight "motion blurring" effect when moving my head that appears to be "on purpose" but is very irritating (not sickening thankfully, just annoying). I've tried turning blur off but no luck; I'll experiment some more.

Eventually I need to rebuild my gaming rig, as it's currently an old 780i with a Q6600 watercooled to 3.0GHz and two GTX 580s in SLI. It plays non-Rift ED at max settings just fine, but not with the Rift lol. Time to save up for an X99 setup.
 
I used generic terms like slices and data, I never intended for those to mean 3D assets or GPU resources, as you say that would be madness :). I could have been clearer and specified what that data is, but I don't know what it is; it's obviously solar system based and, as you say, it could be seeds that plug into a client-side procedural generator.

Do you know for a fact that this is how a solar system is created when you enter it? What about the hand-crafted 160,000-odd systems, are they built into the client executable and client assets?

Or, does the server just send solar system body attributes, to each client as they enter a new solar system for the first time, i.e. stellar body types, positions, sizes, orbits, names, etc? I could see this being possible as obviously the amount of data is fairly small and only needs to be sent every time a new solar system is procedurally created on the client's side, (i.e. the slice of the galaxy I mentioned in the previous post).

I do not know it for a fact. But the only thing that makes sense to me is that the point data of the ca. 170,000 known star systems is in the installed package and in the upgrade patches (for any corrected or new astronomical telemetry). I do believe the only data sent server-to-client is updates to the economy and politics of each system. Any solar system entered for the first time (not among the 170K known stars) has data that is generated from the seeds. I suspect each planet's surface texture is generated from a set of base types (Terran, Desert, Volcanic etc), and that the locations and orbits are all generated based on the seeds. I suspect these are nested seeds (seeds within seeds). Again, I do not know it, but I can take an educated guess since I know a little about procedural things. :)
 
I like the idea that procedural generation is just a very, very, very lossy compression algorithm. You could argue that using it on such a grand scale is madness too, mostly because artists and game designers lose a lot of control. In the example you give, modifying the seed even slightly can result in a completely different environment. Different in detail, I mean; it would look the same on average. This is a problem that the guys working on Infinity had too: every planet looked the same but different. In order to add uniqueness you have to add new procedural generators that layer on top of the existing ones. Every time you do that you lose some control over the outcome, of course. So to a great extent you have to add real data back in to get the kind of game you want. It's nice to have 400 billion stars, but unless the developers have specifically added detail, there won't be any uniqueness.

This is exactly the problem. The only way to properly convey the great variety on a planet such as a terran planet is to use nested procedurals. When you have nested procedural functions inside each other, the magnitude of data to be handled increases exponentially.

You can imagine for a terran planet, first a procedural which handles the distribution of the climates:
Ice, Desert, Green

This is the base procedural

And within Ice, you have everything from:
Polar ice caps
Ice sheets on mountains
Your basic snow
Completely flat icy lakes.
etc

This would be its own procedural just for the "ice"

For a Desert biome, you would have to procedurally generate all the different types of dunes.
Crescentic
Linear
Star
Dome
Parabolic
Longitudinal
Reversing
Lithified (turned to stone)

These can be done with mathematics, though. With some tweaking, you can do this with cosine and sine functions. Put some Boolean functions in there as well for dunes intersecting each other (star dunes, for example).

For the Green biome, you would have to include all the potential ways in which plant life could develop. Here comes a bit of exobiology. But if we just take what we have on Earth for reference:
Jungle
Rain forest
Tundra (pressed down by ice sheets from the many ice ages)
Plains with grass
Cacti
Pine tree forests
Giant Redwood forests (Northern California, or alternatively Endor ;) )

Strange as it sounds, there is actually procedural generation software made for this specific job. There are plugins for Houdini, there is Xfrog, and the most famous one, http://www.onyxtree.com/
Using something like this, we might even end up with plant life that looks nothing like on Earth.

On top of that, the hardest part of procedural generation would be to generate wildlife that we have never seen before. The greatest challenge is to generate lifeforms which have a tenable centre of gravity with regard to their limb structure and locomotion, as well as procedural animation. But these are not unheard of. Procedural animation has already been done, and is being used more than people think. Much of GTA V's animation system is actually generated semi-procedurally (non-linear animation-wise).

So these would be the "nests" of procedurals as I see them, off the top of my head.
Quite a challenge even for the most powerful computers on the planet.
But not impossible I think.
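As a toy sketch of that nesting (value noise layered inside a climate selector; purely illustrative and nothing to do with ED's or any commercial package's actual generators): the outer procedural picks the climate, and each climate then runs its own procedural layer derived from the same planet seed.

```cpp
#include <cstdint>
#include <cstdio>
#include <cmath>

// Cheap deterministic hash -> [0,1) sample for a grid cell.
double cellNoise(int x, int y, uint32_t seed) {
    uint32_t h = seed;
    h ^= static_cast<uint32_t>(x) * 0x85EBCA6Bu;
    h ^= static_cast<uint32_t>(y) * 0xC2B2AE35u;
    h ^= h >> 16; h *= 0x27D4EB2Fu; h ^= h >> 15;
    return (h & 0xFFFFFFu) / double(0x1000000);
}

// Outer procedural: climate from latitude plus a bit of noise.
enum class Climate { Ice, Desert, Green };
Climate climateAt(int x, int y, uint32_t seed) {
    double latitude = std::fabs(y / 50.0);           // 0 at the equator, ~1 at the poles
    double wobble   = 0.25 * cellNoise(x, y, seed);  // break up the neat bands
    double t = latitude + wobble;
    if (t > 0.8) return Climate::Ice;
    if (t < 0.3) return Climate::Desert;
    return Climate::Green;
}

// Nested procedurals: each climate runs its own layer with its own derived seed.
const char* featureAt(int x, int y, uint32_t seed) {
    switch (climateAt(x, y, seed)) {
        case Climate::Ice: {
            double n = cellNoise(x, y, seed ^ 0x1CE1CE1Cu);
            return n > 0.7 ? "ice sheet" : "flat icy lake";
        }
        case Climate::Desert: {
            // Sine-based dune pattern, as suggested above, perturbed by noise.
            double dune = std::sin(x * 0.4 + 3.0 * cellNoise(x, y, seed ^ 0xD07E5u));
            return dune > 0.0 ? "crescentic dune" : "gravel flat";
        }
        case Climate::Green: {
            double n = cellNoise(x, y, seed ^ 0x6EEE7u);
            return n > 0.6 ? "forest" : "grass plain";
        }
    }
    return "unknown";
}

int main() {
    const uint32_t planetSeed = 1234567u;   // one seed drives every nested layer
    for (int y = -50; y <= 50; y += 25)
        std::printf("y=%+3d -> %s\n", y, featureAt(0, y, planetSeed));
    return 0;
}
```

Every extra layer is another function like featureAt() keyed off the same seed, which is where both the variety and the loss of hand-authored control come from.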
 
The same warning that applied to my last behemoth applies to this monstrosity, but doubly so; you have been warned. If you do decide to take up the challenge, there will be a multiple-choice exam in the morning!

No worries; if I am not playing I am here reading ;) My reply is also quite long, sorry about that.

No need to apologise, as I said earlier I am enjoying this discussion.

I need to double check, but I believe I am running at max settings, all on max, every single option on, at 110 FPS, on a 50Hz refresh rate (that's what my 1080p LCD uses).

Is this internal to a station or out in deep space, (not in an asteroid field)? Because if this is what you're getting in deep space, then that is not enough, especially at 50Hz.

Vsync is not doing much, since I use a 60Hz display... if you output 120 FPS, your monitor can still only display at 60Hz, which is about 60 screens per second, so 60 FPS. Locking your card to the refresh rate of the monitor helps to avoid tearing, because the image output from the graphics card is higher, which forces the monitor to do blending, since it can't go faster than that.
Many times people forget that the monitor limits the number of frames you can display (refresh rate: how many times per second the raster goes from the top-left pixel to the bottom-right; FPS: frames drawn per second by the video card).

Yup, that's pretty much how it works; but the point of turning VSync off is to get a rough idea of what the raw rendering performance of your system is with respect to ED's graphics renderer.

The monitor doesn't do any blending when your game outputs more frames than the monitor can handle per second, remember the GPU's synchronisation circuits and pixel shift registers are all perfectly matched to the monitor's input characteristics, i.e. Pixel dot clock, HSync, VSync, HBlank, VBlank, etc. What actually happens is that your game renders to a section of the GPU's memory, (backbuffer, rendertarget, etc), at say 120Hz and then the GPU's display controller reads the contents of the frontbuffer, (after a backbuffer to frontbuffer swap), out at whatever frequency it is locked in with the monitor, e.g. 60Hz. The tearing that occurs on the monitor in this situation is actually happening in local memory of the GPU, as the ROPs write the final pixel values to memory at one rate, while the display controller reads them out at a lower rate.
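For anyone who wants to make that raw measurement outside of ED, a minimal sketch looks like this (GLFW is used here purely as an example windowing library, not anything ED uses): with the swap interval at 0 the GPU presents as fast as it can render and you simply count frames yourself.

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(1280, 720, "raw fps probe", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);   // one context, created once, used for the whole run
    glfwSwapInterval(0);           // VSync off: present as fast as the GPU can render

    double last = glfwGetTime();
    int frames = 0;
    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);   // stand-in for the real scene rendering
        glfwSwapBuffers(win);           // backbuffer/frontbuffer swap, not a context switch
        glfwPollEvents();

        ++frames;
        double now = glfwGetTime();
        if (now - last >= 1.0) {        // report the uncapped frame rate once a second
            std::printf("%d fps\n", frames);
            frames = 0;
            last = now;
        }
    }
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```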

Yup, referring to the Oculus SDK. I made two demos with Unity 3D and one with pure C++ code; I am no expert with it but I know my way around it. Like any API that is still a work in progress, it has limitations to overcome; some you can circumvent with your coding skills, but for others you can't use anything that is not exposed by the public interface. I am waiting for the final release of the SDK before marking it as a bad SDK. It is usable, if that's what you mean... after all, many people can drag and drop the Rift controller into Unity and 70% of the work is done for them.

All my work is C/C++ and as I said I find the Rift SDK more than adequate for a beta release; I have in fact used much worse fully released APIs, *coughrenderwarecough*, in the past.

If there were such a thing as a 20% performance increase from the engine, that MAY improve my performance, but it is hard to say, since we are dealing with a complex system that has three main variables: the SDK from Oculus, the 3D engine of the game and the OS.

Agreed, but there is one more crucial fundamental that you have already mentioned in a roundabout way, and that is your personal experience. One of the issues with the DK2 is that it is a much more subjective experience than a normal 2D display, for a whole myriad of technical, biological and psychological reasons. Have you actually tried the DK2 with ED on a top-of-the-line system? Maybe VR in its current form just doesn't work for you, no matter how good the performance is? Maybe you are overly sensitive to even the most minute amount of stutter, as ED is not stutter-free even in deep space on my system, (it's close but not 100% smooth), though those other games you mention are 100% smooth and locked at 75Hz.

OK, never mind, you answered these questions in your next paragraph, (apart from the stutter sensitivity), but I will still leave the previous paragraph intact as I think it is an important, but mostly undefined metric; it may never be able to be measured and may always remain a completely subjective and personal experience.

The only high-level analysis I can make is that other games work fine with no stutter and no ghosting, and they are just as intensive (graphics-wise; physics is handled mostly by the CPU, not the GPU, unless you tell the GPU to use spare cycles), so if the other variables are the same (same OS, same SDK), it means that the variable influencing the overall outcome is the game engine.

The games you have already mentioned work fantastically on my system as well, but I'd be curious to see how you go with something like Euro Truck Simulator 2, as that is notorious for stuttering even on high-end systems, (when you max the graphics options).

True, my bad; if you live on Windows, yes.

Well this discussion is about ED and yeah it currently lives on Windows ;) At work I write GLSL shaders all day for Linux/OSG/OpenGL and at home I still tinker with my DirectX 3D/HLSL engine and Android/OpenGL ES engine and DK2.

There is no such thing as DX on OSX; there the SDK uses OGL. I just moved back to Windows after 14 years of OSX; I am a bit rusty on DX so I need to read more. I recall that DX also does context switching like OGL, but it hides it from the user, so you don't have to switch context yourself and just send draw commands once done. I will look into it once I have a moment.

Not sure how you could do it otherwise; this is the basics of how OGL works: if you don't declare each context, load the framebuffer, render the scene and switch to display it, there is nothing on screen.

I think there is some miscommunication going on here; in a normal game/application there is only ever one context, (forget Equalizer for OpenGL and deferred contexts for DirectX), each game/application/window has its own singular context, (sure, for some special situations an app can create multiple contexts but let's just disregard that for now). Here is a link to OpenGL's description of a context, https://www.opengl.org/wiki/FAQ; as you can see you only need to create it once at application initialisation and then use it to interact with the OpenGL stack.

I think you are getting confused between the terms context and rendertarget/FBO/backbuffer; backbuffers are the surfaces that are swapped after a present/flush in a normal game's render loop. Of course if the system swaps your application out for another one, (that has its own context), then the system flushes the driver's state/caches/memory manager and the GPU's pipelines/state/caches and gives control to the new application and its context; this is the only time contexts are being swapped, (this is not strictly true, but for this discussion it is accurate enough).
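A minimal sketch of that structure (again GLFW/OpenGL purely as an illustrative setup, not ED's code): the context is created exactly once, and the only things that change per frame are which framebuffer is bound and which backbuffer gets presented.

```cpp
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(1280, 720, "one context", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);   // the ONE context, created at init and never switched again

    // ... create FBOs, textures, shaders and vertex buffers once, up front ...

    while (!glfwWindowShouldClose(win)) {
        // Per-frame work only rebinds render targets, it never touches the context, e.g.:
        //   glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo);  // draw shadow map / eye buffer
        //   glBindFramebuffer(GL_FRAMEBUFFER, 0);             // draw the final image

        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(win);   // backbuffer <-> frontbuffer swap: cheap, not a context switch
        glfwPollEvents();
    }
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```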

This is actually what happens every frame... load, process, draw, switch context, display and flush. Some info is cached, like light raycasts, mesh data (if relevant to the culling area) and such.

There should never be any loading happening on a frame-by-frame basis, maybe streaming of resources into dynamic resources in an advanced engine, (like ED). But once you get to the top of the render loop all your static assets, (vertex buffers, index buffers, textures, shaders, etc), had better be in a defined and hardware-optimised manner in local memory, (GPU memory), or you are going to be in for a world of performance hurt. Dynamic assets are obviously different, they have completely separate access methods and code paths within the driver as to their usage and control, but again if you overuse them your engine will suffer, as your GPU will start stalling as it waits on CPU synchronisation of these dynamic resources.

In fact, one of the major ways of decreasing performance is not optimising your resources for the API/GPU you are currently working with; some simple examples:

- Using non-hardware-based, non-compressed textures; this can cause 2x-8x more GPU memory to be used and a proportional increase in fill rate and TMU, (Texture Mapping Unit), requirements.

- Shader patching; changing resource formats on the fly for shaders, (such as texture formats), can cause the vendor specific driver to have to recompile the shader on the fly and store it as a new entry in the shader cache. Not only is the compiling of shaders a heavyweight operation, but it places additional strain on the shader caching mechanism.

- Too many programs/shaders being switched per frame, (this can be devastating), as it can stall the GPU; shader switching and constant re-loading are not lightweight operations and cause shader cache thrashing.
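As an illustration of that last point (a generic sketch, not ED's renderer): many engines sort the visible draw calls by shader/program so that the expensive switches happen as rarely as possible.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical draw-call record: just enough state to sort on.
struct DrawCall {
    unsigned programId;   // shader program (expensive to switch)
    unsigned textureId;   // texture binding (cheaper, but still worth grouping)
    unsigned meshId;
};

void submit(const DrawCall& dc, unsigned& boundProgram) {
    if (dc.programId != boundProgram) {
        // glUseProgram(dc.programId);   // the heavyweight call we are trying to minimise
        boundProgram = dc.programId;
        std::printf("switch to program %u\n", dc.programId);
    }
    // bind texture, bind vertex buffers, issue the draw for dc.meshId ...
}

int main() {
    std::vector<DrawCall> visible = {
        {3, 7, 1}, {1, 2, 2}, {3, 5, 3}, {1, 2, 4}, {2, 9, 5},
    };

    // Sort by program first, then texture, so identical state ends up adjacent.
    std::sort(visible.begin(), visible.end(), [](const DrawCall& a, const DrawCall& b) {
        return a.programId != b.programId ? a.programId < b.programId
                                          : a.textureId < b.textureId;
    });

    unsigned boundProgram = 0;   // 0 = nothing bound yet
    for (const auto& dc : visible)
        submit(dc, boundProgram);   // 3 program switches instead of the 5 the unsorted order costs
    return 0;
}
```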

Here is the real issue we have been discussing, "switch context": the render loop does not switch context, (a heavyweight operation), it swaps backbuffers with the frontbuffer, (a lightweight operation), which is at best a pointer swap in the backbuffer swap chain for fullscreen DirectX applications or worst case a GPU fast blit for windowed applications in both DirectX and OpenGL. This swapping of backbuffers is orders of magnitude more efficient than a worst-case context switch.

There are in fact wait cycles; the CPU sends a semaphore to the GPU, and once the GPU is done and switches context, the CPU resumes operation, releasing the lock and sending new data from main memory to GPU memory. This can be done in chunks or multiple frames at a time (depending on how many threads you have). You would be surprised how many cycles are wasted when the CPU and GPU communicate with each other. This is the nature of any IC communication protocol: you cannot send the next batch of instructions unless you get confirmation from the "slave" that the previous operation has completed. This is true for network protocols, for the internal bus between core and cache memory, or between CPU and memory... and obviously between the CPU and GPU.

Semaphores aren't really being sent anywhere, they are just variables, (usually implemented with non-interruptible CPU instructions), that act as arbitrators and counters for resources that are accessible by multiple threads/processes, (I know I am being picky here, but I think it's important to make sure we are on the same page).

Locking can happen at multiple stages of the entire rendering process:

- Within your game.

- Within the API, OpenGL and DirectX, (at user and Kernel levels).

- Within the GPU vendor specific driver at the kernel level.

Let's use an extremely simple example of a game that is single threaded and all resources are static, (there are plenty of games that are still written this way by the way, even AAA titles): the static assets are loaded during the initialisation phase, converted to appropriate hardware formats and uploaded into the GPU's local memory via the PCIE bus, before even one iteration of the main rendering loop has commenced.

- There is no locking required at the application level whatsoever, because the application is not using multiple threads and not using dynamic GPU data structures, (i.e. lockable and modifiable GPU resources). This application has effectively signed an unwavering contract that it will never attempt to modify the static resources it has uploaded to the GPU during its initialisation phase. The GPU can then make assumptions about where to store the assets in its local memory, in whatever formats are most optimal, (i.e. usually hardware optimised formats), and that the application will never attempt to access them again, (apart from freeing them at application destruction).

- Locking within the API will be minimal to non-existent, internal data structures such as queues and command buffers are filled until a present/flush, these data structures are double buffered, so that while the vendor specific driver, (which is multi-threaded), starts converting them into vendor specific operations, the application switches to a discarded queue and begins filling it with new GPU commands. This pipeline can be increased in size even further with the Flip Queue Size parameter, this allows the API to collect multiple presents/flushes before passing them onto the vendor specific driver. This makes the huge pipeline even bigger and more efficient, (in reducing GPU stalls), but adds more latency to the user input of the application.

- Locking at the kernel level within the vendor specific driver is mostly an unknown, (well to me anyway), but this is where traditionally a poorly written application will waste a lot of time, as shaders are recompiled, shader cache thrashing rears its ugly head, resource formats are converted into hardware specific formats, application specific hacks are applied, (i.e. you would be surprised how many shaders are replaced within the driver by vendor specific "optimised" versions for popular games, to get better FPS in benchmarks and reviews) and excessive memory management of GPU resources occurs. Any locking that does occur is at the kernel thread level within the vendor specific driver and does not affect the application rendering thread.

The major point here is that there is minimal locking in a simple and well written application, if any, (I'm not talking about initialisation and creation, but obviously about the rendering loop itself), and what locking there is only locks the particular thread involved and does not stall the GPU as long as the CPU is still filling the massive pipeline with draw calls, which in a balanced system with no configuration issues on a well behaved application, (i.e. fast GPU with a CPU capable of feeding it), is happening 100% of the time.

Even if locking does occur at the kernel level in the multi-threaded vendor specific driver, this doesn't stop the application's thread/s and API, (DirectX/OpenGL), threads from continuing their work.

In ED I can't get this all the time, even at minimum detail. I understand that ED has more objects to move compared to the tinderbox map size of SC, but the models in SC are much more complex, polygon- and texture-wise, than ED's, and it uses one of the most power-hungry engines ever made. The expectation was that ED would run smoothly, while SC would stutter and ghost.

I would expect the opposite given that ED is about to release with a fully featured galaxy, multi-player support and real time generation of procedural content; SC really still is in the prototyping and demoing stage of small isolated and controlled "rooms".

Maybe you should check out the VR multi-stuttering thread, there are some useful experiments going on over there and being able to show FRAPS graphs of what you are experiencing may allow others to offer suggestions. At the very least you could compare your FRAPS graphs to the DK2 graphs that I and others posted.

Yes, that's obvious :) Each instance is a defined size; you can tell since you can't go from system A to system B just with supercruise, and also the number of players per instance is limited.
I can't put a number on what I see, but I suspect that each instance is pretty big (although mostly empty... there is not much besides planetoids and space stations), while the data to draw each instance I suspect is quite small. Travelling through systems is done by changing instance (covered by the hyperdrive animation); not different from what SC does when you land on a planet.

Yup I see it the same way, with the additional detail that it seems that the current displayable area per user is limited to a solar system and that multiplayer islands are probably much smaller still.

True, each camera draws on its own, which is the same effect you may experience in any racing game that uses rear-view mirrors. When you look from camera A you are in the cockpit, while camera B is behind the car mesh, rendering to a surface (usually a texture, which is what the Rift SDK does too), so you can see two different camera outputs at the same time. This technique has been used for 25 years or so; nothing new under the sun :)

No, it's nothing new under the sun in terms of implementation, but I feel you are missing the point. Rear view mirrors, shadow casting cameras, remote view drones, etc, usually take advantage of huge optimisation possibilities so that they can be rendered in a small percentage of the total render time. e.g. You are never going to code rear view mirrors that double your render time; you would use a small render target resolution, small FOVs, low detail shaders, low detail LODs, etc, so that the rear view mirror might increase your render time by 10%-20%. The DK2 by necessity requires a doubling of your scene graph traversal/render list traversal/culling and draw primitive generation times and then the actual vertex processing by the GPU. It also requires nearly a doubling of fill rate requirements because of the extra burden of the enlarged eye render targets for optics correction.

As you pointed out, there are 2 cameras, which means you need to render the same scene twice; and since you switch context each time, your wait cycles double. The only saving grace is that our eyes are tied together, so there is a fixed spacing between their fields of vision and most of it overlaps. This helps you avoid drawing 2 completely distinct scenes, since you can interpolate what A sees, transfer it to B and just recalculate what is different between the 2. It is less expensive than having A and B pointing at 2 different spots (like in a car game, which is why the mirrors are usually so small), but it still takes a toll on the system.

You aren't switching contexts, you are switching FBO's/render targets for the two cameras, which is an extremely lightweight operation, (on modern hardware); there are no wait cycles, on the GPU or on the CPU. Switching FBO/render target is just another GPU call; it's not free or instant, but it is measured in nanoseconds.

You also can't cheat by interpolating, (well you can, by using the depth buffer of the original non DK2 scene), but the results are horrible, (I've tried it). The scale is non-linear, so it doesn't look or feel right, and there are quantisation errors due to the finite screen pixel fidelity which just don't work when projecting back into world/camera space. You have to render both scenes separately and uniquely as if they are two separate cameras, because that is what they are. So it is not "less expensive than having A and B pointing at 2 different spots", it is exactly the same as having A and B pointing at 2 different spots.
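
To nail the point down, here's a bare-bones sketch, (the scene functions are hypothetical stubs, this is not ED's or the Oculus SDK's code): each eye is a complete camera pass with its own cull and draw submission, and the only per-eye state change on top of that is a single render target bind.

// Minimal sketch (hypothetical engine functions): stereo rendering really is
// two complete camera passes. Only the render-target bind differs per eye,
// and that is a single cheap API call.
#include <cstdio>

struct Vec3   { float x, y, z; };
struct Camera { Vec3 pos; float fovDeg; };

// Stubs standing in for the expensive per-camera work described above.
int  cullScene(const Camera& cam)            { (void)cam; return 1200; }       // visible objects
void drawScene(const Camera& cam, int count) { (void)cam; (void)count; }        // submit draw calls
void bindEyeRenderTarget(int eye)            { (void)eye; }                     // e.g. one glBindFramebuffer call

int main() {
    const float ipd = 0.064f;                 // ~64 mm between the eyes
    Camera head{{0.0f, 1.7f, 0.0f}, 95.0f};

    for (int eye = 0; eye < 2; ++eye) {
        Camera eyeCam = head;
        eyeCam.pos.x += (eye == 0 ? -0.5f : 0.5f) * ipd;   // offset each eye camera

        bindEyeRenderTarget(eye);             // nanosecond-scale state change
        int visible = cullScene(eyeCam);      // full traversal/culling, per eye
        drawScene(eyeCam, visible);           // full draw submission, per eye
        std::printf("eye %d: %d draw calls submitted\n", eye, visible);
    }
}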

On top of that, the monitor pipeline has to deal with twice the traffic from the 2 cameras, and the Rift does not use the most advanced screen. If you have 2 monitors, the data runs in parallel down 2 different pipelines. This just adds on top of everything else.

It doesn't matter if you use two monitors or one, the actual work done by the GPU is effectively the same; the only things that run in parallel with two monitors are the display controllers built into the GPU, (which I mentioned earlier), and the electrical signals driving the monitors themselves. Now if you had two GPU's, (and the appropriate VR SLI/Crossfire driver), each driving one half of the display or driving two individual displays, then yes, you would get work done in parallel and a corresponding performance increase.

BTW you could just run the config setup and change the IPD there, no need to rebuild.

This is incorrect, the IPD is the distance between your pupils when your eyes are focused at infinity. The HMDInfo.LensSeparationInMeters parameter is a value returned by the DK2 itself as a measurement of the distance between its lenses. The SDK uses these values in different ways, (check the source code, it's all there). IPD is adjusted for world scale and user comfort; HMDInfo.LensSeparationInMeters has to be the physical distance between the centres of the lenses.
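
As a rough illustration of why the two values live in different places, (the numbers and formulas below are illustrative only, loosely modelled on the older DK1-era SDK documentation; read the SDK source for the real thing): the IPD scales the camera separation in world units, while the lens separation pins the projection/distortion centre to the fixed optics.

// Illustrative only, not copied from the SDK: IPD feeds the per-eye camera
// offset, lens separation feeds the projection/distortion centre offset.
#include <cstdio>

int main() {
    // Reported by the headset / user profile (example values):
    const float lensSeparationM = 0.0635f;   // physical distance between lens centres
    const float ipdM            = 0.064f;    // distance between the user's pupils
    const float hScreenSizeM    = 0.14976f;  // DK1-ish horizontal screen size, purely for illustration
    const float worldScale      = 1.0f;      // world units per metre in the game

    // IPD drives the camera separation, scaled to the game's world units.
    const float eyeOffset = 0.5f * ipdM * worldScale;

    // Lens separation drives where the projection/distortion centre sits on the
    // panel, because the optics are fixed even if your pupils are not.
    const float viewCenter      = 0.25f * hScreenSizeM;
    const float projectionShift = viewCenter - 0.5f * lensSeparationM;
    const float centerOffsetNdc = 4.0f * projectionShift / hScreenSizeM;

    std::printf("per-eye camera offset: %.4f world units\n", eyeOffset);
    std::printf("projection centre offset: %.4f (NDC)\n", centerOffsetNdc);
}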

The issue, as you mentioned, is that there is no physical way to change the IPD, because in a binocular there are 2 lenses...here you deal with 1 monitor, and we already lose part of the centre area behind the contraption of the Rift. Another reason why I would rather have 2 smaller 1080p monitors, one for each eye, instead of one big monitor divided in 2 :)

A display for each eye is definitely a viable alternative, but it has its cons apart from its obvious pros.

- It's more expensive than one display, obviously you need two.

- It's a LOT more expensive than one standard mobile phone display, because now you have two non standard displays that have to be manufactured purely for the Rift, rather than using mass produced consumer technology that is already available.

- Synchronising two displays EXACTLY is, I would guess, more problematic than it at first seems; I would assume that the eyes are extremely sensitive to differences in the flickering images presented to them individually. You can imagine the beat frequencies that would present themselves even if the differences were only minor, (this is just conjecture).

Interesting, I will take a look at this VRGear Interceptor. BTW, how do you resolve the issue with tilted planes and aliasing? Once you rotate a plane, the line aliases at the current resolution; you can't really do much unless you blur the line (which affects the readability of text).

There is no issue with tilting and aliasing, what happens is that once the lenses match the actual distance between your pupils, your eyes start using the optically accurate part of the lenses. So the image attains the maximum focus and clarity possible; it's the same as if you used the DK2 stock, but you physically moved your eyes to be 63.5mm apart.

This is actually what I was trying to accomplish with my post...I was not aiming to cause any kind of animosity.

I may have over-reacted in my first post to your frustration, but as I keep saying, this is a thoughtful and logical discussion worth having; there is no issue with any perceived animosity on my end.

On the contrary! I am not saying that they did a bad job, but that there is that extra mile that has not been covered, because "it is ok as is". I remember those days; I was involved in a game with a proprietary engine (at the time of Dark Age of Camelot, to give an idea), which we built from the ground up. I also remember the time when there was no 3D, and the programming was done in ASMOne, moving blitter and copper data in assembly; I'm not sure if it was more tedious than dealing with vertices nowadays, since everything is 3D.

I had an Amiga 500, 1200 and 3000, and I remember coding in asm for the blitter, (Agnus), and the display coprocessor, (Copper; from memory it had 3 instructions, move, wait and skip, right? It's been a long time). I never released any commercial games on it though, but I did complete my master's in pattern recognition on TMS32010 fixed point DSP and TMS34010 display processor boards that I designed, built and wrote the software for, which hooked up to my Amiga 3000 as the controlling UI.

I just mourn the loss of that attitude, where code was optimised to the extreme, because even if there were graphics accelerators and memory expansions, the average machine was either an Amiga 1200 with a 14MHz 68EC020 and 2 MB of RAM, or a PC AT with a 386 DX and at most 4 MB of RAM. You wouldn't write code and be sloppy on optimisation, hoping that people would upgrade; that was not the mentality at the time, and I am sure you remember it, since you are older than me.

There is no doubt that software engineering has changed in that regard; back in the day you were forced to optimise because of the severe hardware limitations, (and to some extent software limitations), you were faced with. What inevitably separated the great games from the rest, (apart from great gameplay), was the extent the developer went to in wringing out every last CPU, Blitter, Copper and DMA cycle to perform what seemed like miracles at the time.

Now we live in a world where there are so many layers of software between you and the hardware, (with a corresponding exponential increase in bugs), that it is much more difficult to hit the metal, let alone know how it works. Hopefully new API's like Mantle and DX12 will change that to some extent, and the difference between games developed by gurus and games developed in engines like Unity will be like the difference between chalk and cheese.

P.S. Sorry I took so long to post this, I had some other matters I had to attend to and of course this took a not insubstantial amount of time to write as well :)

I do not know it for a fact. But the only thing that makes sense to me is that the point data of the ca. 170,000 known star systems is in the installed package and in the upgrade patches (for any corrected or new astronomical telemetry). I do believe the only data sent server-to-client is updates to the economy and politics of each system. Any solar system entered for the first time (not among the 170K known stars) has data that is generated from the seeds. I suspect each planet's surface texture is generated from a set of base types (Terran, Desert, Volcanic etc), and the locations and orbits are all generated based on the seeds. I suspect these are nested seeds (seeds within seeds). Again, I do not know it, but I can take an educated guess since I know a little about procedural things. :)
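
For anyone wondering what I mean by "seeds within seeds", here's a toy sketch, (purely illustrative, nothing to do with FD's actual generator): one galaxy seed deterministically produces per-system seeds, which produce per-planet seeds, so every client can regenerate identical systems from a handful of numbers instead of downloading assets.

// Toy illustration of nested seeds (not FD's generator): a single galaxy seed
// deterministically yields per-system seeds, which yield per-planet seeds.
#include <cstdint>
#include <cstdio>
#include <random>

// Derive a child seed from a parent seed and an index (simple hash mix).
uint64_t childSeed(uint64_t parent, uint64_t index) {
    uint64_t x = parent ^ (index * 0x9E3779B97F4A7C15ull);
    x ^= x >> 30; x *= 0xBF58476D1CE4E5B9ull;
    x ^= x >> 27; x *= 0x94D049BB133111EBull;
    return x ^ (x >> 31);
}

int main() {
    const uint64_t galaxySeed = 20141216ull;   // the only data a client would need

    for (uint64_t sys = 0; sys < 3; ++sys) {
        const uint64_t sysSeed = childSeed(galaxySeed, sys);     // per-system seed
        std::mt19937_64 sysRng(sysSeed);
        const int planetCount = 1 + static_cast<int>(sysRng() % 8);
        std::printf("system %llu: %d planets\n", (unsigned long long)sys, planetCount);

        for (int p = 0; p < planetCount; ++p) {
            std::mt19937_64 planetRng(childSeed(sysSeed, p));    // seed within a seed
            const char* types[] = {"Terran", "Desert", "Volcanic", "Ice"};
            const double orbitAU = 0.3 + (planetRng() % 1000) / 100.0;
            std::printf("  planet %d: %s at %.2f AU\n", p, types[planetRng() % 4], orbitAU);
        }
    }
}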

Aaaah ok, nice, I didn't realise that the known part of the galaxy was actually within the client's resources.

I think your educated guess is spot on.
 
The 980 allowed me to instantly set everything to High and max res 1920x1080 without any stutter issues, so that was a nice plus. All I have now is a slight "motion blurring" effect when moving my head that appears to be "on purpose" but is very irritating (not sickening thankfully, just annoying). I've tried turning blur off but no luck; I'll experiment some more.
Sounds like low persistence is not kicking in. There are two low persistence modes, one for 72 FPS and the other for 75 FPS. If you drop below these frame rates you get smearing like on the DK1 (which is different from the black-to-purple smearing that the DK2 suffers from in certain situations). Your CPU at 2.0 GHz is possibly the culprit, but see if you can measure your frame rate using Afterburner or similar.
 
It works at min detail, but if you try the fight training mission, where you have plenty of asteroids, then the system just can't keep up with it.

I have a 770 OC from Asus; I could overclock the video card and processor to get some more performance, but I doubt the effort would give me anything beyond a 5% increase; which is a waste, considering the strain that you put on your system with OC (and the relative risk of failures...OC was useful when you had a 33 MHz CPU and needed to go faster, because the architecture was more permissive of voltage and temperature variations beyond spec...nowadays the number of components you have per square mm is insane, which results in more extensive damage if you push the OC too far).

The new Rift, if they increase the res, will have 1080p per eye (a combined 3840x1080 panel); which means that to run that thing you will need twice as much power from your video card...basically something like a Titan to play at min or med settings. That's scary.
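
Quick back-of-the-envelope on that "twice as much power" figure, (assumed numbers only; the real cost is higher because the per-eye render targets are oversized for distortion correction):

// Rough arithmetic only: raw panel pixel throughput of the DK2 (1920x1080
// shared between the eyes) vs a hypothetical 1080p-per-eye headset, both at 75 Hz.
#include <cstdio>

int main() {
    const double hz        = 75.0;
    const double dk2       = 1920.0 * 1080.0 * hz;         // ~156 Mpix/s
    const double perEye1080 = 2.0 * 1920.0 * 1080.0 * hz;  // ~311 Mpix/s
    std::printf("DK2 panel:      %.0f Mpix/s\n", dk2 / 1e6);
    std::printf("1080p per eye:  %.0f Mpix/s (%.1fx)\n", perEye1080 / 1e6, perEye1080 / dk2);
}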

Each to their own, but I disagree.
Again, as I said, on my rig the game is perfectly playable on min even in all those scenarios. Something IS wrong at your end, and if you can find out what, I think your experience will improve.

Overclocking is still really beneficial, and if done well offers no real risk - it does not even void the warranty on your gear. So long as you do not do anything stupid, your hardware will protect itself anyway and throttle back if there is an issue. I have been running an i5 since feb 2011 and it has never run at stock speeds. I expect the chip now to be to all intents and purposes as healthy now as it was the day I bought it.

Your final point about performance...... if CV1 is 1440p and comes out this time next year, sure, if you want full bubble you will be looking at a very high end PC, but the thing is, that is the beauty of PCs. Moore's law sadly is no longer true, but even so, hardware is still advancing reasonably rapidly, and prices are coming down.

What is expensive top tier in December 2015 will be a mid range gaming PC in December 2016, entry level in 2017, and probably around mobile phone performance some time after that.

We are on the edge of the curve right now, but by the time the Rift is out and settled down, I do not think the hardware will be THAT high a barrier.

As for all the programming stuff....... I am sad to say I skipped most of it as it went right over my head. I am envious of you guys however. I am currently trying to learn Python (right at the beginner level). I missed the boat; programming is something I started to do back in the 80's with Input magazines on my Spectrum 48K, but I gave up. I truly believe that to be great at this stuff you have to learn when you are young.

I would love to understand even half of what you guys are chatting about :)
 