I think most people have noticed that things don't take up as much visual space the farther away they are. No need for a Father Ted 'Far Away' YouTube video. The moon doesn't look the size of a pea though, does it?
Yes, like when blowing up models and superimposing them onto a full-size scene (the back-projected King Kong, for instance). The footage has to be slowed down, because our brains already infer height from the rate at which something falls.
I'm not knocking everything you're saying, WeComeInPeace, but there is a noticeable change in perceivable scale when adjusting some settings. It's not just that if you fly past a large object at extraordinary speed, it will look smaller because the brain has certain presumptions. Have you seen a nature documentary where a helicopter or plane spots a blue whale? There are no reference points (unless you know the size of the waves), yet you can see: 'That is huge.' And that's in 2D.
When I was a kid, in the '80s, HMS Hermes was in the Solent. You must know that beyond a certain distance the eyes no longer need to converge to focus (I assume from your comments about animation that you have at least an elementary idea of depth of field and depth of focus). From the beach, with the Isle of Wight ahead and no convergence cues at all for my adolescent eyes, I knew it was huge. My Dad sailed me out there on a Leader dinghy to prove it. It was massive, and scary in a 14-foot boat, but as big as it looked from the coast.
I've been simulating reality in 3D for more than 25 years, so I do know about DOF and POV. I have also studied human perception, consciousness and the senses. I've even tried to figure out what aesthetics is, from a "scientific" POV, now that everybody uses the word so often.
For me, the big difference came with VR. That gives you a much better sense of size and scale. Before ED I simraced for many years, and finding the proper braking point at the end of the straight became a lot easier when I switched to VR, with the added depth perception.
I also played NMS, but I haven't tried it in VR. However, I remember it as being a lot less realistic than ED. I think you're spot on about the brain making assumptions. The scale of the galaxy in ED is pretty close to correct. If you flew to the Moon, you would have some sort of expectation about what you would perceive along the way, yet we have no idea what it would actually look like IRL, especially at the speeds ED is capable of. Try calculating the G forces we're exposed to in the game.
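Just for fun, here's a rough back-of-the-envelope version of that calculation. The numbers are hypothetical (FDev don't publish exact supercruise figures): assume a ship crossing the Earth-Moon distance from a standing start in 10 seconds under constant acceleration.

```python
G = 9.81              # m/s^2, one Earth gravity
distance = 3.844e8    # metres, roughly Earth to Moon
trip_time = 10.0      # seconds, assumed trip time (illustrative only)

# From d = 0.5 * a * t^2, solve for the constant acceleration a.
acceleration = 2 * distance / trip_time ** 2
g_force = acceleration / G

print(f"about {g_force:,.0f} g")
```

That comes out in the hundreds of thousands of g, which is why no human intuition about what such a trip "should" look like can be trusted.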
Normally we also perceive something like atmosphere between ourselves and the object whose size we're trying to estimate. That, however, is not usable in space. Have you ever tried looking at the Andromeda Galaxy through binoculars? You get a sense that it's far away, but not 2.5 million ly, and you get no idea how big it is. That one is not an illusion or a simulation. Our brain works in mysterious ways, and there are countless examples of optical illusions concerning scale; those illusions, by the way, are often used to simulate scale.
Source: https://youtu.be/hCV2Ba5wrcs
Another example is this image. You have no idea of scale until you recognize the animal in there:
I didn't mean to sound like a dad, but according to Shannon's information theory and linguistics, nomenclature can be seen as pure noise if the receiving party of the communication doesn't understand the meaning of the words being sent. And because any communication uses some sort of channel with a limited bandwidth, even speech or writing, any noise will limit the amount of information being transferred over time. Since that dawned on me, I have tried to tone down the nomenclature I use, even though the result is that people sometimes find me patronizing.
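That bandwidth-and-noise point is basically the Shannon-Hartley theorem, C = B log2(1 + S/N). A quick sketch, with telephone-line numbers that are purely illustrative:

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley: maximum error-free bit rate over a noisy channel.

    bandwidth_hz: channel bandwidth in Hz
    snr: linear signal-to-noise ratio (not dB)
    """
    return bandwidth_hz * math.log2(1 + snr)

# A telephone-grade channel: ~3100 Hz of bandwidth, SNR of 1000 (30 dB).
print(channel_capacity(3100, 1000))   # roughly 31 kbit/s

# Halve the SNR (more "noise", e.g. jargon the listener doesn't get)
# and the capacity drops, even though the channel itself is unchanged.
print(channel_capacity(3100, 500))
```

The analogy is loose, of course, but it's why noise in the channel, linguistic or electrical, caps how much actually gets through per unit time.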
I started my 3D career on a monochrome PC XT, using simple math like the circle equation to calculate the vertices of 3D wireframe models rotated and moved in all three directions, and I also taught myself to simulate hidden lines. That was in the early 1980s. After that I worked as a VFX supervisor for many years, so I allow myself to consider myself somewhat of an "expert" on the subject.
One thing that still puzzles me slightly, though, is that in VR you get a sense of DOF even though your eyes are focusing on the exact same set of screens through the exact same set of lenses. It seems that some of the blur I would normally attribute to the optics of the eye is actually created by the brain, not in the eye. So much for the "expert".

It kind of resembles the captured images from brain scans showing how the brain sees stuff, which in turn resemble the way AI perceives faces, letters and numbers. That isn't so mysterious, since neural networks were designed to mimic the human brain; the "mysterious" part is that those images look far from what we feel we see when looking at something.