
Any need for second order expansions computations? #7

bluenote10 opened this issue Feb 23, 2020 · 3 comments

bluenote10 commented Feb 23, 2020

While working on robustifying rust-geo-booleanops I came across something, which I'll leave here as a note -- maybe it can be useful to others.

One of the challenges of computing boolean ops on polygons is the numerical stability of line segment intersections. To study the stability, I was running many instances of random line segment intersections like this:

[figure: random line segment intersection instances]

(The left plot is a zoomed-in view of the intersection area, iterating over the raw float "grid" via nextafter calls. The color map shows the summed distance to the two lines, computed with an arbitrary precision library.)
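Stepping through adjacent representable floats for such a scan can be done by incrementing the bit pattern directly. A minimal sketch, valid only for finite positive values (a full nextafter also has to handle sign, zero, NaN, and infinity):

```rust
/// Next representable f64 toward +infinity; finite positive inputs only.
fn next_up(x: f64) -> f64 {
    debug_assert!(x.is_finite() && x > 0.0);
    f64::from_bits(x.to_bits() + 1)
}

fn main() {
    // Walk a few steps of the raw float "grid" starting at 1.0.
    let mut x = 1.0_f64;
    for _ in 0..4 {
        println!("{:.20}", x);
        x = next_up(x);
    }
}
```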

I computed the intersection point both with an arbitrary precision library (GMP via rug) to get an exact ground truth, and with plain f64 (called fast1 and fast2 in the plot, just minor variations of the common line intersection formulas). Due to cancellation effects in the perp product formulas, it is not uncommon to see rather large deviations of the fast computations from the exact solution.
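To make the cancellation concrete, here is a minimal sketch of the kind of formula meant above (illustrative only, not the actual fast1/fast2 benchmark code). Both the perp products and the endpoint differences can cancel catastrophically when the segments are nearly parallel or the endpoints are close together:

```rust
/// Intersect the lines p + t*r and q + u*s (2D points/vectors as tuples).
/// Illustrative only; not the benchmarked fast1/fast2 variants.
fn intersect(p: (f64, f64), r: (f64, f64), q: (f64, f64), s: (f64, f64)) -> Option<(f64, f64)> {
    // Perp (2D cross) product: cancels badly for near-parallel directions.
    let cross = |a: (f64, f64), b: (f64, f64)| a.0 * b.1 - a.1 * b.0;
    let denom = cross(r, s);
    if denom == 0.0 {
        return None; // parallel or collinear
    }
    // Endpoint differences: another source of cancellation.
    let qp = (q.0 - p.0, q.1 - p.1);
    let t = cross(qp, s) / denom;
    Some((p.0 + t * r.0, p.1 + t * r.1))
}
```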

This raised the question of whether it's possible to leverage the ideas behind the robust crate to get more stable results. Instead of computing results with adaptive precision, I went for a simpler approach: work with "second order expansions" only. I.e., every value is just a struct with a major and a minor component (where the major component represents the normal result of an operation and the minor component keeps track of the round-off error). The implementations of the Add/Sub/Mul/Div traits internally use the functions of the robust crate, e.g. adding two second-order expansions produces a temporary fourth-order expansion, which then gets compressed back to second order before returning. Thus the expansions cannot grow arbitrarily long, but compared to plain floats this propagates round-off errors with decent precision.
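As a tiny worked illustration of why the minor component helps (values chosen for demonstration, using the SOE type from the implementation attached below): plain f64 rounds 1.0 + 2^-60 back to 1.0, while the SOE keeps the lost 2^-60 in its minor component:

```rust
fn main() {
    let tiny = 2f64.powi(-60);
    assert_eq!(1.0 + tiny, 1.0); // the addend is rounded away in plain f64
    let s = SOE::from_add(1.0, tiny);
    assert_eq!((s.x_maj, s.x_min), (1.0, tiny)); // two_sum recovers it exactly
}
```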

It looks like this additional precision is typically enough to get the correct result in line segment intersection. The following plot shows the deltas to the exact solution for 10000 random intersections:

[figure: deltas to the exact solution for 10000 random intersections]

In all cases the second order expansion solution was identical to using full arbitrary precision.

In terms of performance, using second order expansions is obviously significantly slower than raw floats, but in my benchmarks it was still an order of magnitude faster than using GMP.

Currently it looks like I don't need this feature for rust-geo-booleanops though, so I probably won't have the time to finish it up, verify its correctness, etc. I'll just leave this here in case someone else has a need for precision in between normal floats and arbitrary precision -- if so, feel free to ping me ;).

In case we want to turn it into a library we could either:

  • Include it directly into this crate. The code is surprisingly short, I've attached it below. It's basically just a wrapper around a few internal functions.
  • Have it in a separate crate. In this case it would make sense to expose a few more functions like two_sum, two_product, etc. in the public interface.
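For reference, the two primitives mentioned in the second point are the textbook error-free transformations (the robust crate keeps its own versions private; two_product is written via FMA here for brevity, whereas Shewchuk's original uses Dekker splitting):

```rust
/// Knuth's TwoSum: returns (s, e) with s = fl(a + b) and a + b == s + e exactly.
fn two_sum(a: f64, b: f64) -> (f64, f64) {
    let s = a + b;
    let b_virt = s - a;
    let a_virt = s - b_virt;
    (s, (a - a_virt) + (b - b_virt))
}

/// Exact product: returns (p, e) with p = fl(a * b) and a * b == p + e exactly.
fn two_product(a: f64, b: f64) -> (f64, f64) {
    let p = a * b;
    (p, a.mul_add(b, -p)) // single-rounding FMA yields the exact error term
}
```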
SOE Implementation

```rust
// Relies on the robust crate's internal Shewchuk primitives:
// two_sum, two_diff, two_product, two_two_sum, two_two_diff,
// scale_expansion_zeroelim.

/// A second order expansion: an unevaluated sum of a major component
/// (the rounded result of an operation) and a minor component (the
/// accumulated round-off error).
#[derive(Copy, Clone, Debug, PartialEq)]
pub struct SOE {
    pub x_maj: f64,
    pub x_min: f64,
}

impl SOE {

    pub fn from_f64(x: f64) -> SOE {
        SOE{x_maj: x, x_min: 0.}
    }

    pub fn from_add(a: f64, b: f64) -> SOE {
        let (x_maj, x_min) = two_sum(a, b);
        SOE{x_maj, x_min}
    }

    pub fn from_sub(a: f64, b: f64) -> SOE {
        let (x_maj, x_min) = two_diff(a, b);
        SOE{x_maj, x_min}
    }

    pub fn from_mul(a: f64, b: f64) -> SOE {
        let (x_maj, x_min) = two_product(a, b);
        SOE{x_maj, x_min}
    }

    /// Compresses a four-component expansion (x3 largest in magnitude,
    /// x0 smallest) down to two components: the largest prefix sum that
    /// is still exact becomes the major part, the rest the minor part.
    pub fn from_expansion(x3: f64, x2: f64, x1: f64, x0: f64) -> SOE {
        let t = x3 + x2;
        if t - x3 - x2 != 0. {
            // x3 + x2 already rounds: keep x3 alone as the major part.
            return SOE { x_maj: x3, x_min: x0 + x1 + x2 };
        }

        let t = x3 + x2 + x1;
        if t - x3 - x2 - x1 != 0. {
            return SOE { x_maj: x2 + x3, x_min: x0 + x1 };
        }

        SOE { x_maj: x1 + x2 + x3, x_min: x0 }
    }

    pub fn from_scale_expansion(x: SOE, b: f64) -> SOE {
        let mut temp = [0f64; 4];
        // Shewchuk's expansions are ordered by increasing magnitude, so the
        // input goes in minor-first and the output comes back smallest-first;
        // hand the components to from_expansion largest-first.
        let _ = scale_expansion_zeroelim(&[x.x_min, x.x_maj], b, &mut temp);
        SOE::from_expansion(temp[3], temp[2], temp[1], temp[0])
    }

    pub fn to_f64(self) -> f64 {
        self.x_maj + self.x_min
    }
}

use std::fmt::Display;

impl Display for SOE {
    // Uses the ryu crate for shortest round-trip f64 formatting.
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        let mut buffer1 = ryu::Buffer::new();
        let mut buffer2 = ryu::Buffer::new();
        let a = buffer1.format(self.x_maj);
        let b = buffer2.format(self.x_min);
        write!(f, "({}, {})", a, b)
    }
}

use std::ops::{Add, Div, Mul, Sub};

impl Add for SOE {
    type Output = Self;

    #[inline]
    fn add(self, that: Self) -> Self {
        let (x3, x2, x1, x0) = two_two_sum(self.x_maj, self.x_min, that.x_maj, that.x_min);
        SOE::from_expansion(x3, x2, x1, x0)
    }
}

impl Sub for SOE {
    type Output = Self;

    #[inline]
    fn sub(self, that: Self) -> Self {
        let (x3, x2, x1, x0) = two_two_diff(self.x_maj, self.x_min, that.x_maj, that.x_min);
        SOE::from_expansion(x3, x2, x1, x0)
    }
}

impl Mul for SOE {
    type Output = Self;

    #[inline]
    fn mul(self, that: Self) -> Self {
        // Distribute (maj + min) * (maj + min) into four exact partial
        // products and sum them smallest-first.
        let a = SOE::from_mul(self.x_maj, that.x_maj);
        let b = SOE::from_mul(self.x_maj, that.x_min);
        let c = SOE::from_mul(self.x_min, that.x_maj);
        let d = SOE::from_mul(self.x_min, that.x_min);
        d + c + b + a
    }
}

impl Div for SOE {
    type Output = Self;

    #[inline]
    fn div(self, that: Self) -> Self {
        // One step of quotient refinement: take the plain f64 quotient,
        // compute the remainder in SOE arithmetic, and use the remainder's
        // quotient as the minor correction term.
        let fac1 = self.to_f64() / that.to_f64();
        let tmp = that * SOE { x_maj: fac1, x_min: 0. };
        let rem = self - tmp;
        let fac2 = rem.to_f64() / that.to_f64();
        SOE { x_maj: fac1, x_min: fac2 }
    }
}
```
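A hypothetical usage sketch (det2 is illustrative and not part of the code above): evaluating the cancellation-prone 2x2 determinant a*d - c*b, the building block of the perp product, in SOE arithmetic, rounding back to f64 only at the end:

```rust
/// Illustrative helper, not part of the attached implementation.
fn det2(a: f64, b: f64, c: f64, d: f64) -> f64 {
    let ad = SOE::from_mul(a, d); // exact partial product
    let cb = SOE::from_mul(c, b);
    (ad - cb).to_f64() // the subtraction happens on expansions, not raw floats
}
```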
bluenote10 changed the title from "Add library for second order expansions computations?" to "Any need for second order expansions computations?" on Feb 23, 2020

daef commented Aug 7, 2020

I don't know whether this could be incorporated into 'robust'; nevertheless I wanted to drop a reference to Herbie here: https://herbie.uwplse.org/

bluenote10 commented

@daef Interesting project as well. Not sure how relevant it is for 'robust', which actually makes use of round-off errors deliberately, but still something to keep in mind for numerical stability in general.

urschrei commented

@daef @bluenote10 There are Herbie bindings for Rust (https://github.com/mcarton/rust-herbie-lint), but they're based on a very outdated compiler plugin approach and have long since stopped working. I've mentioned the idea of updating them before, and one of Herbie's authors offered to help, but of course nobody (i.e. me) ever has time. I'd be up for reviving the idea if either of you is interested. The plugin worked very well for me when I used it (four years ago now)!
