My point is that there isn't some required number of distances that will guarantee a good result. In some cases, 5 will work fine; in other cases, you'll need more than 5. It really depends on the positioning of the references, and even on where the star happens to fall within the +/- .005 range. For example, you get a stronger one-sided constraint if the star happens to be near an edge of the +/- .005 range rather than in the middle. So you might get 2 distances in the same general direction with strong one-sided constraints on opposite sides (i.e. the star is close to the -.005 side of one distance and close to the +.005 side of the other), and the combination of the 2 gives a tight constraint for that dimension.
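To make that concrete, here's a quick 1-D sketch in Python (all numbers are hypothetical): two references in the same direction whose reported distances happen to land the star near opposite edges of their +/- .005 intervals, so intersecting the two intervals pins the coordinate far more tightly than either interval alone.

```python
# 1-D sketch of the edge effect (hypothetical numbers).
# Each reported distance is accurate to +/- 0.005, so a reference at
# position r with reported distance d pins the star's coordinate x
# (taking x = r + true_distance) to an interval of width 0.01.
def x_interval(r, d, half_width=0.005):
    return (r + d - half_width, r + d + half_width)

# Suppose the star's true x is 3.4551 (unknown to the solver):
a = x_interval(0.0000, 3.46)  # true dist 3.4551, near the -0.005 edge -> (3.4550, 3.4650)
b = x_interval(0.0003, 3.45)  # true dist 3.4548, near the +0.005 edge -> (3.4453, 3.4553)

lo, hi = max(a[0], b[0]), min(a[1], b[1])
print(f"x pinned to [{lo:.4f}, {hi:.4f}], width {hi - lo:.4f}")  # width 0.0003
```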
By doing a search of the candidate region, you can conclusively determine whether you have enough data or not (at least, assuming the data is correct).
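Something like the following brute-force scan is what I have in mind (a rough Python sketch; the box bounds, grid step, and 3-D assumption are all placeholders, not part of the actual problem):

```python
import itertools
import math

def matches(point, references, reported, tol=0.005):
    """True if point is consistent with every reported distance, within tol."""
    return all(abs(math.dist(point, ref) - d) <= tol
               for ref, d in zip(references, reported))

def search_region(references, reported, lo, hi, step=0.01):
    """Exhaustively scan the candidate box [lo, hi]^3 at a fixed grid step.

    Returns every grid point consistent with all reported distances.
    A single tight cluster means the data pins the star; multiple
    disjoint clusters (or an extended region) means more distances
    are needed. In practice you'd refine coarse-to-fine rather than
    scan the whole box at full resolution.
    """
    n = int(round((hi - lo) / step)) + 1
    axis = [lo + i * step for i in range(n)]
    return [p for p in itertools.product(axis, repeat=3)
            if matches(p, references, reported)]
```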
That actually brings up another point: with the lower-accuracy distances, it will also be harder to spot bad data. I suspect it's now more likely that a small typo in the data could result in a single "correct" position that matches the incorrect data but isn't actually the correct position of the star. Previously, an incorrect distance would more likely result in no positions that exactly match the data, and thus raise a flag that something was wrong.
One constraint I can think of is that we should ensure we get enough distances for a given star that, if any 1 distance is removed, searching the candidate region still yields the same single matching coordinate. I think this would guarantee enough redundancy in the data to detect 1 incorrect distance. I'm not sure how onerous a requirement that would be, though.
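Building on the search sketch above, that check would look something like this (again just a sketch, reusing the hypothetical search_region helper):

```python
def robust_to_one_error(references, reported, lo, hi):
    """True if the full data set yields a unique match AND dropping any
    single distance still yields that same unique match. With that much
    redundancy, one typo'd distance would conflict with the match implied
    by the others, produce zero matches overall, and so raise a flag."""
    full = search_region(references, reported, lo, hi)
    if len(full) != 1:
        return False
    return all(
        search_region(references[:i] + references[i+1:],
                      reported[:i] + reported[i+1:],
                      lo, hi) == full
        for i in range(len(reported))
    )
```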
So on second thought, I think you're right: 5 probably isn't "optimal" once you start taking error detection into account.