Here is where my research has gotten me so far. First, some background; you can skip the following four paragraphs if you already know a lot about the TV/monitor differences.
The screen buffer in video RAM has 8-bit color depth, that is 256 shades of each color red, green and blue. As far as I know, game engines produce the complete range from full black (0,0,0) to max white (255,255,255). Digital computer monitors are all about displaying this full range, so that's what almost all of us are used to. That's possible because the syncing of the signal is not embedded in the content signal itself, which is the reason for the many wires in (even analogue VGA) monitor cables: some of them are dedicated sync wires.
TVs, on the other hand, have a huge analogue legacy, and although most have gone digital nowadays, all the material recorded and all the signal-producing equipment over the years needs to stay compatible. When TVs were analogue there was no stepping in the colors, since the signal wasn't digital, but the syncing had to be carried in the same signal as the content. Values below and above the visible color range were used for this, and some headroom was needed as well. So the signal varies in voltage within certain ranges.
When analogue TV went digital it took decades until everything was exchanged for digital versions (some people still have analogue TVs), so these outside-the-content values needed to be preserved in the digital signal. That's how we got 16-235 for the content instead of 0-255 in the signal, creating, among other problems, these full/limited color range settings in both signal-producing and display devices when mixing TV tech with computers. Nvidia cards, for example, have a bad habit of trying to follow the HDMI specs, which say to use limited range for HDTV resolutions, even though it's a computer video card.
The TV expands that digital 16-235 signal to the range its screen can use (voltages, whatever range the panel has), so TV content can look as good as a computer monitor and usually does. But the color resolution is lower than on an RGB computer display (there are filters and image processing smoothing things out).
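To make that mismatch concrete, here's a little Python sketch (my own illustration, nothing official) of what the full and limited range settings actually do to an 8-bit value:

```python
# A minimal sketch of the full/limited range mismatch described above.
# Plain 8-bit values; real GPUs and TVs do this per pixel in hardware.

def full_to_limited(v):
    """Compress full-range 0-255 into TV limited range 16-235."""
    return round(16 + v * (235 - 16) / 255)

def limited_to_full(v):
    """Expand limited-range 16-235 back to 0-255, clamping the head/footroom."""
    return min(255, max(0, round((v - 16) * 255 / (235 - 16))))

# Matched settings: compress then expand -> roughly what you started with.
print(limited_to_full(full_to_limited(0)), limited_to_full(full_to_limited(255)))  # 0 255

# Mismatch: the card sends limited range but the display treats it as full,
# so black is shown as 16 and white as 235.
print(full_to_limited(0), full_to_limited(255))  # 16 235
```

The second print is the mismatch case: black sits at 16 and white at 235, which is exactly the washed-out look people describe.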
Enough background, on to the Rift.
As far as I know, the Rift CV1 can take a 0-255 sRGB signal, but its display technology, OLED, which isn't perfected yet, has both advantages and some drawbacks. In the dark areas of the image, it's hard to manufacture panels that are uniform in color reproduction. The display manufacturers' solution, if you want an accurate image, is to take a high-res photo of the lit panel, measure its variations, store those variations in memory and use the values to pre-distort the signal so that all pixels display evenly once it reaches the imperfect panel. This is what Oculus calls SPUD. Again, as far as I know.
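I don't know the real SPUD math or format, but per-pixel uniformity correction in general looks something like this toy Python/numpy sketch (everything in it is my own assumption):

```python
import numpy as np

# Purely illustrative: not Oculus' actual SPUD math or data, just the
# general idea of per-pixel uniformity correction as I understand it.

h, w = 4, 4                                    # tiny "panel" for the example
rng = np.random.default_rng(0)

# Pretend we photographed the panel showing a flat grey and measured how
# bright each pixel really is, relative to ideal (1.0 = perfect).
measured_gain = 1.0 + rng.normal(0.0, 0.05, size=(h, w))

frame = np.full((h, w), 100.0)                 # the image we want the eye to see

# Pre-distort: divide by the measured gain so that after the panel applies
# its own (uneven) gain, every pixel lands back on ~100.
predistorted = frame / measured_gain
displayed = predistorted * measured_gain

print(np.allclose(displayed, frame))           # True: the panel now looks uniform
# In a real pipeline the pre-distorted values get quantized back to the
# panel's discrete levels, and dark detail is the easiest thing to lose there.
```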
Then there is another problem with OLED displays: completely turned-off pixels take slightly longer to turn on than already-lit pixels take to change color. This is what causes the black smear that was quite visible in the DK2. The solution is to never turn any pixels fully off.
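In code terms that fix is basically a clamp; the floor value below is made up by me, just to show the idea:

```python
BLACK_FLOOR = 11   # hypothetical: the lowest level the panel is allowed to show

def anti_smear(v):
    """'Black' pixels stay slightly lit so they can change state quickly."""
    return max(BLACK_FLOOR, v)

print(anti_smear(0), anti_smear(180))   # 11 180
```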
When you combine the need for uniformity with the need to remove black smearing, we end up with what we have: no real true blacks, and since we can't use the lowest lit-to-unlit part of the pixel range, we end up with a lower color resolution compared to our monitors (just like the TVs did). Not quite as bad as TV's 16-235, but maybe 11-250 or something (just a guess). Here is an Oculus employee admitting to this:
https://forums.oculus.com/developer/discussion/comment/362953/#Comment_362953
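Just to put numbers on that guess (and 11-250 really is only my guess), the back-of-the-envelope arithmetic looks like this:

```python
# Rough level counting. The 11 and 250 below are my guesses, not official figures.
full_levels  = 256            # a normal 0-255 monitor pipeline
tv_levels    = 235 - 16 + 1   # classic TV limited range: 220 levels
guessed_rift = 250 - 11 + 1   # my guessed floor/ceiling: 240 levels

# Squeezing 256 input values into fewer output values means neighbouring
# inputs collapse onto the same output -> visible steps (banding),
# most noticeable in slow, dark gradients like nebulas and coronas.
print(full_levels - tv_levels)      # 36 input levels lost on a TV
print(full_levels - guessed_rift)   # 16 input levels lost with my guess
```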
When people use their DK2 with the drivers intended for the CV1 they will see this clearly: a quite washed-out image without any true blacks (or maybe there were?) and fewer steps in gradients (banding). The CV1 does its best to hide these drawbacks from us and usually succeeds quite well for the average Joe. But those of us who seek the highest fidelity possible, who are used to high-quality monitors, see the lower color range and miss the true blacks of the DK2. Unfortunately, in Elite's nebulas and coronas it's easily spotted.
I believe people are drawing a lot of different conclusions based on individual visual ability along with a lot of other factors:
* Some are getting the HDMI limited range because of an Nvidia bug, which they fix by reinstalling the Nvidia drivers or by using an adapter to DisplayPort. After such a big improvement they tell others online they've solved it! But they've only solved one of the problems.
* Maybe using DisplayPort improves the image just a little. Hard to notice for some, easy for others.
* Some are unlucky and get bad panels that need lots of SPUD correction.
* Some get the red spots because of buggy SPUD.
* Some who try without SPUD and see the true blacks are happy and don't notice the other artifacts it introduces.
I'm seeing some banding myself and a sharp cut to black in some colors. I miss the true blacks of the DK2, but I don't think I can do anything more about this until Oculus lets us adjust how much SPUD is applied. I once saw the red bands that people are talking about, and another time, when I was hooking up some other display devices, the Rift suddenly seemed to have gotten a deep color range with true blacks, but I might have stared into something bright just before, who knows :-/. It was gone later so I can't be 100% sure. But I remember my reaction and it was significant.
With people's eyes and technical experience possibly being quite different from each other, I think it's hard to determine the exact causes of a bad picture in the CV1 unless studies are made in a lab.
There are still many questions:
Why did the DK2 look so good in the blacks? Was it only because black smear was allowed? Were those screens hardware calibrated and/or of better uniformity?
How come I can get (what looks like) better blacks and less banding in the HTC Vive? Other things about the HTC Vive are worse than the Rift, of course.
Why can't Oculus give us a reference test scene with settings for how much SPUD to apply (or turn it off entirely) and for adjusting brightness, gamma and contrast? (There's a sketch of a home-made test pattern after these questions.)
Is there some way to alter the SPUD correction tables by hacking the SPUD files? Can someone make a tool?
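Until we get such a reference scene from Oculus, here is the kind of home-made test pattern I have in mind: a Python sketch (needs numpy and Pillow; the file name is just my choice) that writes a grayscale ramp plus a row of near-black patches to view full screen in the headset:

```python
# A home-made substitute for a reference test scene: a PNG with a smooth
# 0-255 ramp plus near-black patches, to check for banding and where
# "black" actually clips.
import numpy as np
from PIL import Image

W, H = 1024, 512
img = np.zeros((H, W), dtype=np.uint8)

# Top half: horizontal ramp 0 -> 255. Any visible stripes = banding.
ramp = np.linspace(0, 255, W).astype(np.uint8)
img[: H // 2, :] = ramp

# Bottom half: 16 patches with values 0..15. On a display with a raised
# black floor, the first few patches become indistinguishable.
patch_w = W // 16
for i in range(16):
    img[H // 2 :, i * patch_w : (i + 1) * patch_w] = i

Image.fromarray(img, mode="L").save("vr_black_test.png")
print("wrote vr_black_test.png - view it full screen in the headset")
```

If the ramp shows distinct stripes, that's the banding; if several of the darkest patches look identical, that's the raised black floor.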

As for the SPUD files, mine seem to be here: C:\Users\[username]\AppData\Local\Oculus\Spud
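I don't know anything about the file format, but if someone wants to start poking at them, a first step could be something like this (pure guesswork, it only lists the files and dumps their first bytes):

```python
# Exploratory only: lists whatever is in the Spud folder and prints each
# file's size plus a hex preview of its first 32 bytes.
import glob
import os

spud_dir = os.path.expandvars(r"C:\Users\%USERNAME%\AppData\Local\Oculus\Spud")

for path in glob.glob(os.path.join(spud_dir, "*")):
    with open(path, "rb") as f:
        head = f.read(32)
    print(os.path.basename(path), os.path.getsize(path), head.hex())
```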
I'm no expert and I can't guarantee that everything above is exactly correct; it's just a subject I can't let go of until my VR image improves.