Saturday, October 31, 2020

Yet another alternative to floating-point numbers

Floating-point numbers are fine. They are decently designed, well standardized, and they provide a good compromise between performance and precision. They work great most of the time. Until the day they suddenly don't, and nobody knows why.

For me, that day came when I encountered a bug in the Taubin estimator. Taubin smoothing takes a triangle mesh, a pair of magic numbers called λ and μ, a little time to think, and it makes the input mesh smooth. It also makes sure that the mesh isn't reduced to a perfectly smooth but rather uninteresting sphere or a point.

The first magic number λ is something between 0 and 1. When it's 0 — nothing happens at all, and when it's 1 — nothing good happens either. It should be somewhere in between. Nobody really knows how to pick this number but an experienced engineer can at least make an educated guess.

The second magic number governs shrinkage compensation. It's a negative number slightly greater than λ in absolute value. Nobody knows how to pick it either, but this time even an educated guess is a challenge.
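
For context, here is roughly how the two numbers enter the smoothing itself. This is only a sketch with a made-up mesh representation (vertex positions plus per-vertex neighbor lists), not the actual code: every pass moves each vertex towards the average of its neighbors by λ, then pushes it back out by μ.

#include <array>
#include <cstddef>
#include <vector>

using Point = std::array<double, 3>;

// one Laplacian step: move every vertex towards the average
// of its neighbors, scaled by the given factor
void laplacian_step(std::vector<Point>& points,
                    const std::vector<std::vector<std::size_t>>& neighbors,
                    double factor) {
    std::vector<Point> moved = points;
    for(std::size_t i = 0; i < points.size(); ++i) {
        if(neighbors[i].empty())
            continue;
        Point average = {0., 0., 0.};
        for(std::size_t j : neighbors[i])
            for(std::size_t k = 0; k < 3; ++k)
                average[k] += points[j][k];
        for(std::size_t k = 0; k < 3; ++k) {
            average[k] /= neighbors[i].size();
            moved[i][k] = points[i][k] + factor * (average[k] - points[i][k]);
        }
    }
    points = moved;
}

// Taubin smoothing alternates a shrinking step (lambda, positive)
// with an inflating step (mu, negative and slightly larger in magnitude)
void taubin_smooth(std::vector<Point>& points,
                   const std::vector<std::vector<std::size_t>>& neighbors,
                   double lambda, double mu, std::size_t iterations) {
    for(std::size_t i = 0; i < iterations; ++i) {
        laplacian_step(points, neighbors, lambda);
        laplacian_step(points, neighbors, mu);
    }
}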

That's why you need an estimator. Something that picks the μ for you. And we have one. And it was bugged. And I was the one to fix it.

The first day of the investigation showed that the estimator was in fact entirely correct. The second day showed that the tests covering the bug were correct as well. The third day brought a promising hypothesis: the whole story is just a bad dream, and all it takes to solve the mystery is to wake up. The fourth morning killed that hypothesis and left me completely clueless.

Luckily, I have a friend who is smarter than me, and he advised me to fuzz the input a little and see what happens. I did. And something happened. The bug didn't go away, but μ changed a lot, way out of proportion to the small changes in the input. Fascinating!

We can expect a computation to work with some error. That's fine; we get our input data from sensors, and it carries some error to begin with. We print out our finished models, and the printers have limited precision as well, so a small error doesn't upset anyone. It's all good while the error is small. But what if it isn't?

As it turns out, in my case, a completely correct algorithm had a computational error of about 1. Not 1e-16 or 1e-5 but 1. Just 1. So if you expect your μ to be somewhat close to -0.7, and the estimator says it's 0.3, it is in fact correct. It is still a correct estimation within its error range.

Correct but entirely useless.

Challenge your intuition with a “Guess the Error” game

Ok, so it's not the computational error per se that causes trouble but only the unexpected and unpredictable error. Can we predict it though?

Well, of course, we can. There is a whole field of numerical error analysis for that but let's be honest, most of the time we use our intuition instead. So how good is our intuition exactly?

I propose a game to find out. Let's take a cubic equation solver. It's a relatively complex computation with a very simple way to validate its precision. We will pick some roots first and then we will generate the cubic equation for these roots. Then we'll make the cubic solver find our roots back through the computation. The difference between the original and the computed value for each root will be our measurable error. This error divided by the original value will be our relative error.

Here is the exact code for the experiment. It's also available on GitHub.

#include <array>
#include <cmath>
#include <limits>

// find roots for ax^3 + bx^2 + cx + d = 0
std::array<double, 3> roots_for_cubic(std::array<double, 4> abcd) {
    double PI = std::atan(1.) * 4.;

    // normalize to the monic form x^3 + a1*x^2 + a2*x + a3 = 0
    double a1 = abcd[1] / abcd[0];
    double a2 = abcd[2] / abcd[0];
    double a3 = abcd[3] / abcd[0];
    double q = (a1 * a1 - 3. * a2) / 9.;
    double sq = -2. * std::sqrt(q);
    double r = (2. * a1 * a1 * a1 - 9. * a1 * a2 + 27. * a3) / 54.;
    double z = r * r - q * q * q;

    std::array<double, 3> roots;
    if(z <= 0.) {
        // three real roots, found trigonometrically
        double t = std::acos(r / std::sqrt(q * q * q));
        roots[0] = sq * std::cos(t / 3.) - a1 / 3.;
        roots[1] = sq * std::cos((t + 2. * PI) / 3.) - a1 / 3.;
        roots[2] = sq * std::cos((t + 4. * PI) / 3.) - a1 / 3.;
    } else {
        // one real root; the other two are complex and reported as NaNs
        roots[0] = std::pow(std::sqrt(z) + std::abs(r), 1. / 3.);
        roots[0] += q / roots[0];
        roots[0] *= (r < 0.) ? 1. : -1.;
        roots[0] -= a1 / 3.;
        roots[1] = std::numeric_limits<double>::quiet_NaN();
        roots[2] = std::numeric_limits<double>::quiet_NaN();
    }
    return roots;
}

// find polynomial coefficients a, b, c, and d for the given roots
// using Vieta's formulas
std::array<double, 4> cubic_for_roots(std::array<double, 3> xs) {
    return {1. // the leading coefficient is a free choice; keep it 1
    , - (xs[0] + xs[1] + xs[2])
    , + (xs[0] * xs[1] + xs[1] * xs[2] + xs[2] * xs[0])
    , - (xs[0] * xs[1] * xs[2])};
}
        

Now if we pick our roots as 1, 2, and 3, the equation produced will be:

(x - 1) (x - 2) (x - 3) = 0

Expanding the brackets we get this:

x³ - 6x² + 11x - 6 = 0

And when we run our solver we get:

x = 1, x = 2, x = 3

Well, yes. On the equation this simple there is no error whatsoever. The computation works flawlessly. So let's start making it worse.
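
To have something to guess against, the error can be measured with a small driver like the sketch below. It assumes roots_for_cubic and cubic_for_roots from above are in scope; it sorts both the picked and the computed roots so they can be matched pairwise, and it reports the worst relative error. The exact numbers it prints depend on the compiler and platform.

#include <algorithm>
#include <cmath>
#include <iostream>

double max_relative_error(std::array<double, 3> picked) {
    std::sort(picked.begin(), picked.end());
    std::array<double, 3> computed = roots_for_cubic(cubic_for_roots(picked));
    std::sort(computed.begin(), computed.end());
    double worst = 0.;
    for(int i = 0; i < 3; ++i)
        worst = std::max(worst,
            std::abs(computed[i] - picked[i]) / std::abs(picked[i]));
    return worst;
}

int main() {
    // the example from above and the three rounds of the game below
    std::cout << max_relative_error({1., 2., 3.}) << "\n";
    std::cout << max_relative_error({0.001, 2., 3000.}) << "\n";
    std::cout << max_relative_error({1e-6, 2., 3e6}) << "\n";
    std::cout << max_relative_error({1e-9, 2., 3e9}) << "\n";
}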

Round 1

You get to pick an interval on a logarithmic scale. The interval may start as low as 1e-12 and end as high as 1e12. It's impossible to predict the exact error down to a single number, so you should just pick an interval that covers the error. Of course, you can pick the whole range, too. That estimation is guaranteed to be correct. And entirely useless.

Let's say we have the roots 0.001, 2, and 3 000. So to which interval, in your opinion, does the maximum relative error of the cubic solver belong?


It's measured as 3.3675e-08, so an estimate between 1e-8 and 1e-7 would be more than adequate.

Round 2

Now let's make our roots even more diverse in magnitude. How about 1e-6, 2, and 3e6?

Where do you think the error might be?


It's significantly more than 1. Since it's a relative error, this means that the error for the smallest root is larger than the root itself. So the computation is now essentially useless.

While we're talking about a synthetic case made just to test our solver, it is frighteningly close to real-world numbers. Imagine you're a multi-billionaire and you want to put your net worth and a few cents into one cubic equation. 10 000 000 000 and 0.01 have the same difference in magnitude as the roots in our synthetic case. The difference is large, but it's not inconceivable.

Round 3

And what about 1e-9, 2, and 3e9?


The error is much, much larger than the smallest root. It's even larger than the second smallest root. Essentially, what we get as the output is more noise than the values we expect to compute.

Note that the computation is still correct. Within, of course, some margin of error. It's only the scale of this margin that makes the computation useless.

Meet the rational bounds

It's not the error per se that is bad. Error is omnipresent. Every measuring device has its error; every 3D printer and every milling machine has its limited precision. The error is fine.

It's only the unpredictability that makes it unpleasant.

People address errors in multiple ways. In metrology, a measured value is supplemented with its absolute error. You don't say that the temperature outside is 10°, you say it's 10° ± 0.5°. This might seem redundant since this is a small error for the weather outside, but in some other context, this very error might become significant. If you're measuring body temperature, a 1-degree difference is enough to tell a sick person from a perfectly healthy one. You cannot afford a whole 1-degree error in this scenario.

If you have this input error written down, you can pull it through your computation to see if the resulting error is still tolerable. To do that, you need to exchange your measured numbers for intervals. A number with error becomes an interval: 10° ± 0.5° becomes [9.5, 10.5]°.

You can sum these intervals just as you add numbers: the lower bounds add up, and so do the upper bounds. For example, adding our 10° ± 0.5° measurement to itself gives:

[9.5, 10.5] + [9.5, 10.5] = [19, 21]

You can multiply them as well. The product interval spans the smallest and the largest of the products of the bounds, which for positive intervals are simply the product of the lower bounds and the product of the upper bounds:

[9.5, 10.5] × [9.5, 10.5] = [90.25, 110.25]
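
In code, an interval type with these two operations can be as small as the sketch below. The names are made up for the example, and a serious implementation would also round the computed bounds outward so that the floating-point error of the bound arithmetic itself never shrinks the interval.

#include <algorithm>
#include <iostream>

struct Interval {
    double low;
    double high;
};

Interval operator+(Interval a, Interval b) {
    return {a.low + b.low, a.high + b.high};
}

Interval operator*(Interval a, Interval b) {
    // the product spans the smallest and the largest
    // of the four possible products of the bounds
    double p[] = {a.low * b.low, a.low * b.high, a.high * b.low, a.high * b.high};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}

int main() {
    Interval t = {9.5, 10.5};    // 10° ± 0.5°
    Interval sum = t + t;        // [19, 21]
    Interval square = t * t;     // [90.25, 110.25]
    std::cout << sum.low << " " << sum.high << "\n";
    std::cout << square.low << " " << square.high << "\n";
}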

So with intervals, you can accommodate the input error. You can drag it through the computation and see how it plays along with the other values and their own errors. In the end, you get your computed value within some interval, and the size of this interval indicates the output error. But what about the error of the computation itself?

A computational error occurs when we can't store the operation result in the same type we keep our operands in and we're forced to throw some data away. Let's say we have Python-style “unlimited” length integers. This means that if our computation consists only of addition, subtraction, and multiplication, we will never face any computational error.

Of course, with division, things get a little different. E.g. we can't store 16¾ in integers, so we have to truncate it to 16. In this case, the whole fractional part becomes our error.

We can use pairs of integers to represent rational numbers. This solves the division issue. However, this kind of rational number, while it does not accumulate error, accumulates digits. The more digits it has, the slower the computation goes. At some point, it may simply become impractically slow. Just like uncontrollable error, uncontrollable size becomes an issue.
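
The digit growth is easy to see with a toy rational type like the one below. It's just a sketch with 64-bit integers standing in for the unlimited ones, and it only implements addition; the sum stays perfectly exact, but watch the denominator grow with every step.

#include <cstdint>
#include <iostream>
#include <numeric>

struct Rational {
    std::int64_t num;
    std::int64_t den;
};

// keep the fraction in its lowest terms
Rational reduced(Rational r) {
    std::int64_t g = std::gcd(r.num, r.den);
    return {r.num / g, r.den / g};
}

// exact addition: no error, but the numerator and denominator grow
Rational operator+(Rational a, Rational b) {
    return reduced({a.num * b.den + b.num * a.den, a.den * b.den});
}

int main() {
    // sum 1/2 + 1/3 + ... + 1/30 exactly
    Rational sum = {0, 1};
    for(std::int64_t n = 2; n <= 30; ++n) {
        sum = sum + Rational{1, n};
        std::cout << sum.num << " / " << sum.den << "\n";
    }
}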

We want a compromise. Some entity that grows its error controllably yet still operates at a constant speed. And we can get it, for example, by merging the two ideas: intervals and rational numbers.

Let's say we want to write a number down as an interval of tenths. The most intuitive way to do that is to throw away all the digits after the first one past the decimal point to get the lower bound, and then add one tenth to get the upper bound. For example, π = 3.14159… becomes [3.1, 3.2].

Just like we can do this with decimal fractions, we can do it with any rational numbers. We can put a number between a pair of rational numbers made of bounded integers. In this example, every integer is in the range [0, 1, ..., 99].

Ok, this is not entirely true. If the number is greater than 99, we can't really store it in our tiny rationals, but we can at least say that it's larger than 99 by assigning a zero denominator to its upper bound. Of course, it's just a dirty hack, but let's leave it like that.

Now, if we want to accommodate both the input error and the computational error, all we have to do is represent the input interval with these finite rational numbers: a finite rational lower bound of the interval and a finite rational upper bound of the same interval.

This entity, a pair of finite rational bounds, represents a real number along with both its input error and its representational error. It also accumulates computational error explicitly, while its own size remains constant.

The implementation example is available on GitHub. Please note that it's just a proof of concept and not the best possible solution in terms of either error minimization or speed.
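
For a taste of how this can look in code, here is an independent sketch of the same idea, not the proof of concept from GitHub. It only handles non-negative numbers and only shows addition: the bounds are computed exactly in integers, and then the lower bound is rounded down and the upper bound is rounded up to the nearest rationals made of integers in [0 .. 99].

#include <cstdint>
#include <iostream>

struct Ratio {          // p / q with both p and q in [0 .. 99]
    std::int64_t p;
    std::int64_t q;     // q == 0 means "larger than we can represent"
};

// is a strictly less than b?
bool less(Ratio a, Ratio b) {
    if(a.q == 0) return false;  // "infinity" is never less than anything
    if(b.q == 0) return true;   // anything finite is less than "infinity"
    return a.p * b.q < b.p * a.q;
}

// the largest representable ratio that does not exceed a/b
Ratio round_down(std::int64_t a, std::int64_t b) {
    Ratio best = {0, 1};
    for(std::int64_t q = 1; q <= 99; ++q) {
        std::int64_t p = a * q / b;           // floor, so p/q <= a/b
        if(p > 99) p = 99;
        if(less(best, {p, q})) best = {p, q};
    }
    return best;
}

// the smallest representable ratio that is not below a/b, or "infinity"
Ratio round_up(std::int64_t a, std::int64_t b) {
    Ratio best = {0, 0};
    for(std::int64_t q = 1; q <= 99; ++q) {
        std::int64_t p = (a * q + b - 1) / b; // ceiling, so p/q >= a/b
        if(p <= 99 && less({p, q}, best)) best = {p, q};
    }
    return best;
}

struct Bounds {         // the true value lies somewhere in [low, high]
    Ratio low;
    Ratio high;
};

Bounds operator+(Bounds x, Bounds y) {
    Bounds sum;
    sum.low = round_down(x.low.p * y.low.q + y.low.p * x.low.q,
                         x.low.q * y.low.q);
    if(x.high.q == 0 || y.high.q == 0)
        sum.high = {0, 0};  // an unbounded operand makes the sum unbounded
    else
        sum.high = round_up(x.high.p * y.high.q + y.high.p * x.high.q,
                            x.high.q * y.high.q);
    return sum;
}

int main() {
    Bounds temperature = {{19, 2}, {21, 2}};  // the 10° ± 0.5° from above
    Bounds pi_ish = {{22, 7}, {22, 7}};       // pretend it's exactly 22/7
    Bounds sum = temperature + pi_ish;
    // the exact bounds 177/14 and 191/14 don't fit into two-digit integers,
    // so they get rounded outward to 63/5 and 41/3; the error grows,
    // but it grows explicitly while the representation stays the same size
    std::cout << sum.low.p << "/" << sum.low.q << " .. "
              << sum.high.p << "/" << sum.high.q << "\n";
}

Subtraction, multiplication, and division would follow the same pattern: do the exact arithmetic on the small integer bounds, then round the result outward back into the bounded representation.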

Conclusion

Computational error may become a problem at the least appropriate moment. Both rational numbers and intervals are well-known alternatives to floating-point numbers for when the error becomes an issue.

Rational bounds, being a chimeric product of both ideas, let you manage both measurement error and computational error in a coherent way without compromising performance all that much.


