No, you guys still aren't getting it. The point I'm trying to make is that there's no point in a programmer doing all this magical doubling and halving manually, because that's what the "floating" in floating-point is. Eight of a float32's 32 bits are literally that trick, built right into the number format: the exponent field.
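You can poke at those bits yourself. Here's a quick sketch in plain Python (the helper name is mine), reinterpreting the bytes with struct: doubling a number leaves the 23 mantissa bits completely alone and just bumps the 8-bit exponent field by one.

```python
import struct

def float32_bits(x: float) -> str:
    """Show x as an IEEE 754 float32: 1 sign bit | 8 exponent bits | 23 mantissa bits."""
    [as_int] = struct.unpack(">I", struct.pack(">f", x))
    b = f"{as_int:032b}"
    return f"{b[0]} {b[1:9]} {b[9:]}"

print(float32_bits(1.36))  # 0 01111111 01011100001010001111011
print(float32_bits(2.72))  # 0 10000000 01011100001010001111011  <- same mantissa, exponent +1
```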
I'll try a made-up decimal version: say we invent a fictional format that has seven decimal digits (instead of bits) of mantissa, and two of exponent. Here's how some numbers might get represented (in metres):
1.36cm | 1.360000 * 10^-2 | (works fine) |
20km | 2.000000 * 10^4 | (so far so good) |
20km + 1.36cm | 2.000001 * 10^4 | (oh dear) |
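And just to show the toy format isn't being unfair, here's the same kind of collision in an actual float32. A quick check, rounding through struct since plain Python floats are doubles (the f32 helper is mine):

```python
import struct

def f32(x: float) -> float:
    """Round x to the nearest float32 and back, simulating float32 storage."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

total = f32(f32(20000.0) + f32(0.0136))  # 20km + 1.36cm, in metres
print(total)            # 20000.013671875 -- close, but not 20000.0136
print(total - 20000.0)  # the 1.36cm came back as ~1.367cm
```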
So the format's lost us 3.6mm (and a real float32 does the same kind of rounding, as the sketch above shows). This isn't great, so we decide to shrink everything 100 times to "regain" the precision:
1.36cm / 100 | 1.360000 * 10^-4 |
20km / 100 | 2.000000 * 10^2 |
20km / 100 + 1.36cm / 100 | 2.000001 * 10^2 |
That literally doesn't touch the mantissa at all; it just subtracts two from the exponent. Precision hasn't got better, and it hasn't got worse. If there's some code somewhere that wants to, I dunno, sort objects into buckets, and it was assuming 1m intervals were a good granularity, you've confused the hell out of that, but it'd have been confused by adding a moon several million km away too, so it needed fixing anyway.
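Same demo in real (binary) floats: since the exponent is base 2, the equivalent of dividing by 100 is dividing by a power of two, say 128. math.frexp splits a float into mantissa-times-2^exponent, and you can watch the mantissa not move:

```python
import math

x = 20000.0136
print(math.frexp(x))        # (0.61035197..., 15) -- i.e. x = mantissa * 2**15
print(math.frexp(x / 128))  # (0.61035197...,  8) -- same mantissa, exponent down by 7
```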
If you do a "fourteen decimal place conversion" instead, 20km + 1.36cm gets represented as 2.0000013600000 * 10
4 and everything's dandy, besides people a strange recurring bug where people insist you didn't do it.
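For what it's worth, a plain Python float is already a float64 (a "double": 11 exponent bits, 52 mantissa bits, good for roughly 15-17 significant decimal digits), which is the fourteen-digit format from above and then some:

```python
# Plain Python floats are float64, so no struct tricks needed here.
x = 20000.0 + 0.0136    # 20km + 1.36cm, in metres
print(x)                # 20000.0136 -- the 1.36cm survives intact
print(x == 20000.0136)  # True
```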