The first claim appears to be inconsistent to the point of incoherence. We don't want objects to be indistinguishable; that would make them perfect clones, which is precisely the opposite of what we want. Right now we have objects whose major, common features are patently repeated, not just similar. We want the system to be flexible enough that even if it does reuse static assets, the reuse isn't obvious.

I beg to differ... The desire for every object to be indistinguishable from the other billion approachable objects is exactly what is being aimed at here. Otherwise the fidelity has to be compromised... (see below)
I... LOVE... Orbiter. For realism and having to use your noggin, it's just the bee's knees.
However, to get the kind of fidelity that works in Orbiter, it's somewhere in the region of half a terabyte JUST for the solar system.
The second is also mistaken. We do not need immense datastores for notable differentiation; we only need a procedural generation function with a seed space wide enough that no two seeds produce obviously repeated output. The counting argument shows this only requires enough seeds to uniquely identify the objects, which is easily accomplished (for starters, they all have unique names and placements as it is). But more than that, we don't want each seed to merely present an unrepeated arrangement of a handful of easily distinguished surface features. The features must mutate or blend sufficiently. In many of the examples shown, merely rotating the rocks a little would likely have sufficed.
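To make the counting argument concrete, here's a minimal sketch (the naming scheme and parameter set are illustrative, not from any particular game): hash each object's unique name into a 256-bit digest, then carve that digest into per-object parameters. The seed space vastly exceeds any plausible object count, so no two objects need share rotation, scale, or blend weights.

```python
import hashlib
import struct

def object_params(name: str, num_params: int = 4) -> list[float]:
    """Derive deterministic per-object parameters in [0, 1) from a unique name.

    SHA-256 gives a 256-bit seed space, so even billions of uniquely
    named objects will not collide in practice.
    """
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    params = []
    for i in range(num_params):
        # Take 8 bytes per parameter and map the 64-bit integer into [0, 1).
        chunk = digest[i * 8:(i + 1) * 8]
        value = struct.unpack(">Q", chunk)[0]
        params.append(value / 2**64)
    return params

# Each named object gets its own rotation, scale, and blend weights,
# so a reused base mesh no longer reads as an obvious clone.
rotation, scale, blend_a, blend_b = object_params("rock_variant/0042")
```

Because the parameters are derived from the name rather than stored, the differentiation costs nothing on disk: the "datastore" is just the hash function.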
The Book of Shaders provides a large number of decent starting points, particularly in the chapter on generative designs.
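In the same spirit as that chapter, the usual building block for features that blend rather than tile is hash-based value noise: hash integer lattice points to pseudorandom values, then interpolate smoothly between them. A minimal 1-D sketch in Python (the hash constants are arbitrary mixing values, not from the book):

```python
import math

def hash01(n: int) -> float:
    # Integer hash mapped into [0, 1); the shift and multiplier just mix bits.
    n = (n ^ (n >> 13)) * 1274126177
    return (n & 0xFFFFFFFF) / 2**32

def value_noise(x: float) -> float:
    """1-D value noise: hash the two nearest lattice points and blend them."""
    i = math.floor(x)
    f = x - i
    t = f * f * (3 - 2 * f)  # smoothstep fade for a continuous blend
    return hash01(i) * (1 - t) + hash01(i + 1) * t
```

Summing a few octaves of this at different frequencies is what gives surface features that mutate continuously across objects instead of repeating a fixed set of decals.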