Strange discrepancy in correction factor calculations #601

Open
mreineck opened this issue Jan 6, 2025 · 3 comments

@mreineck (Collaborator) commented Jan 6, 2025

Lines

int q = (int)(2 + 3.0 * J2); // not sure why so large? cannot exceed MAX_NQUAD

and

int q = (int)(2 + 2.0 * J2); // > pi/2 ratio. cannot exceed MAX_NQUAD

read slightly differently, but I don't really understand why they should be different at all. The first version agrees with the FINUFFT paper (see the line below eq 3.10), but the reduced accuracy caused by the second version doesn't seem to cause any test failures either ... strange.

Given the comment in the source, I'm apparently not the first one wondering about this.
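
For a quick feel for how much the two variants differ, here is a minimal standalone sketch (editor's illustration, not FINUFFT code). It prints both node counts over a range of spreader widths, assuming J2 is half the kernel width ns/2 as in the surrounding routines, and assuming a MAX_NQUAD cap of 100:

// Standalone sketch (editor's illustration, not FINUFFT code): compare the two
// quadrature node counts. Assumptions: J2 = ns/2 (half the spreader width) and
// MAX_NQUAD = 100.
#include <algorithm>
#include <cstdio>

int main() {
  const int MAX_NQUAD = 100;                     // assumed cap
  for (int ns = 2; ns <= 16; ++ns) {             // typical spreader widths
    double J2 = ns / 2.0;
    int q3 = std::min((int)(2 + 3.0 * J2), MAX_NQUAD);  // first variant (as in the paper)
    int q2 = std::min((int)(2 + 2.0 * J2), MAX_NQUAD);  // second variant
    std::printf("ns=%2d   2+3*J2 -> q=%2d   2+2*J2 -> q=%2d\n", ns, q3, q2);
  }
  return 0;
}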

@ahbarnett (Collaborator) commented Jan 7, 2025 via email

@mreineck (Collaborator, Author) commented Jan 7, 2025

Hi Alex, thanks a lot for the explanation! You are right, there is no need to speed up the correction factor calculation for types 1 and 2, where it should require negligible time (except perhaps in 1D).
It's true, I'm thinking about ways to make the precalculation faster for type 3, since its cost is not too far from the spreading/interpolation cost itself, with the difference, of course, that it is only required during plan generation and not during every execution.
One thing I have been wondering about: in the range where we need the correction function, it is pretty smooth and well-behaved, right? So perhaps we could just use the (piecewise?) polynomial approximation trick there as well? This could give about an order of magnitude acceleration, but of course requires lots of testing to gather sufficient confidence.
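
As an illustration of the piecewise polynomial idea (editor's sketch, not FINUFFT code): split the interval into a few pieces, fit a low-degree Chebyshev interpolant on each, and evaluate it with the Clenshaw recurrence. The smooth function exp(cos(x)) below is only a stand-in for the actual correction factor, and the interval, number of pieces and degree are arbitrary choices:

// Sketch of a piecewise Chebyshev approximation of a smooth function on [a,b].
// Editor's illustration, not FINUFFT code; exp(cos(x)) stands in for the
// correction factor, and all parameters below are arbitrary.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

static const double PI = 3.14159265358979323846;

struct PiecewiseCheb {
  double a, b;                         // approximation domain
  int npieces, deg;                    // number of pieces, polynomial degree
  std::vector<std::vector<double>> c;  // Chebyshev coefficients per piece

  template <class F>
  PiecewiseCheb(F f, double a_, double b_, int npieces_, int deg_)
      : a(a_), b(b_), npieces(npieces_), deg(deg_), c(npieces_) {
    int n = deg + 1;
    double h = (b - a) / npieces;
    for (int p = 0; p < npieces; ++p) {
      double lo = a + p * h, hi = lo + h;
      std::vector<double> fv(n);
      for (int j = 0; j < n; ++j) {    // sample f at Chebyshev nodes of this piece
        double t = std::cos(PI * (j + 0.5) / n);
        fv[j] = f(0.5 * (lo + hi) + 0.5 * (hi - lo) * t);
      }
      c[p].assign(n, 0.0);
      for (int k = 0; k < n; ++k) {    // discrete Chebyshev transform -> coefficients
        double s = 0.0;
        for (int j = 0; j < n; ++j) s += fv[j] * std::cos(PI * k * (j + 0.5) / n);
        c[p][k] = 2.0 * s / n;
      }
    }
  }

  double eval(double x) const {        // locate piece, then Clenshaw recurrence
    double h = (b - a) / npieces;
    int p = std::min((int)((x - a) / h), npieces - 1);
    double lo = a + p * h, hi = lo + h;
    double t = (2.0 * x - lo - hi) / (hi - lo);   // map x to [-1,1]
    double u1 = 0.0, u2 = 0.0;
    for (int k = deg; k >= 1; --k) {
      double u0 = 2.0 * t * u1 - u2 + c[p][k];
      u2 = u1; u1 = u0;
    }
    return t * u1 - u2 + 0.5 * c[p][0];
  }
};

int main() {
  auto f = [](double x) { return std::exp(std::cos(x)); };  // smooth stand-in
  PiecewiseCheb approx(f, 0.0, PI, 8, 7);   // 8 pieces, degree 7
  double maxerr = 0.0;
  for (int i = 0; i <= 1000; ++i) {
    double x = PI * i / 1000.0;
    maxerr = std::max(maxerr, std::fabs(approx.eval(x) - f(x)));
  }
  std::printf("max abs error of the piecewise approximation: %.3e\n", maxerr);
  return 0;
}

Whether the same accuracy and speed would hold over the range and precision regime FINUFFT needs would of course have to be tested.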

@SepandKashani (Contributor) commented

> One thing I have been wondering about: in the range where we need the correction function, it is pretty smooth and well-behaved, right? So perhaps we could just use the (piecewise?) polynomial approximation trick there as well? This could give about an order of magnitude acceleration, but of course requires lots of testing to gather sufficient confidence.

I have tried using the ppoly approximator to compute the deconvolution factors in fourier_toolkit, and I could not see any significant rel-error difference compared to the analytic form. I did not use it for this task in the end to avoid having yet another knob in the system, but I confirm it is doable.

You can test this easily by replacing this line https://github.com/SepandKashani/fourier_toolkit/blob/15dfbdb260cd7959c64088de21d0a67bd5229c77/src/fourier_toolkit/nu2nu.py#L1127

phiF = ftk_kernel.KaiserBesselF(beta)

with

phiF = ftk_kernel.PPoly.from_kernel(
    ftk_kernel.KaiserBesselF(beta),
    B=B_ppoly,
    N=N_ppoly,
    sym=True,
)
