In #29 the Lorentz correction is computed in event mode and applied to the data. I think a cheaper approach would be something like:
```python
data = data.bin(dspacing, theta)  # no more pixels; or data.groupby(theta).bins.concat('pixel')
lorentz_factor = sc.midpoints(dspacing) ** 4 * sc.sin(sc.midpoints(theta))
scale = lorentz_factor / vanadium  # vanadium depends only on dspacing
data *= scale
```
This would result in only a single event-data op (which we would have to do anyway for the vanadium normalization). Binning in theta would have some extra cost, but we may need it anyway for other purposes. Computing the Lorentz correction itself would then be essentially free and independent of the event count.
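For illustration, the per-bin (rather than per-event) application can be sketched with plain NumPy; the event arrays and bin edges here are made up, and the real code would of course use scipp's binned data as in the snippet above:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical event data: per-event d-spacing and scattering angle theta.
d_events = rng.uniform(0.5, 3.0, size=10_000)
theta_events = rng.uniform(0.1, 1.4, size=10_000)

# The single event-data op: histogram events into (d, theta) bins.
d_edges = np.linspace(0.5, 3.0, 101)
theta_edges = np.linspace(0.1, 1.4, 21)
counts, _, _ = np.histogram2d(d_events, theta_events, bins=(d_edges, theta_edges))

# Lorentz factor evaluated once per bin, at the bin midpoints.
d_mid = 0.5 * (d_edges[:-1] + d_edges[1:])
theta_mid = 0.5 * (theta_edges[:-1] + theta_edges[1:])
lorentz = d_mid[:, None] ** 4 * np.sin(theta_mid)[None, :]

# Applying the correction is a dense-array multiply whose cost depends
# only on the number of bins, not on the number of events.
corrected = counts * lorentz
```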
If that becomes a problem then one could always apply it at the "finest" sensible $d$-spacing binning. There is probably also some point where one exceeds the instrument resolution, i.e., any finer binning is not useful.
One potential problem may be the bias that one would get when using a binned representation: A bin on the rising or falling edge of a peak should actually use a slightly larger or smaller (respectively) value of $d$ than the bin center, since more events are in the right or left part (respectively) of the bin.
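The size of that bias can be estimated with a small NumPy experiment (the Gaussian peak and the bin edges are invented for illustration): on the rising edge of a peak, events pile up toward the upper bin edge, so the exact per-event mean of $d^4$ exceeds the value computed at the bin center.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Gaussian peak at d = 2.0 with width 0.01.
d = rng.normal(loc=2.0, scale=0.01, size=100_000)

# One bin on the rising edge of the peak.
lo, hi = 1.97, 1.99
in_bin = d[(d >= lo) & (d < hi)]

center = 0.5 * (lo + hi)
exact = np.mean(in_bin ** 4)  # per-event correction (d^4 part only)
approx = center ** 4          # binned approximation at the bin center

# The binned value underestimates the exact one on the rising edge.
rel_bias = (exact - approx) / exact
print(exact, approx, rel_bias)
```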
According to the latest information, (a) other software might apply this correction (which implies binning in $d$), and (b) we might only need to apply the $\sin\theta$ part, which varies less. I think we can therefore conclude that the precision problems are likely not relevant, and I would suggest moving ahead with this. It will also simplify making the correction "configurable", e.g., choosing whether only the $\sin\theta$ part gets applied.
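A configurable correction could look roughly like the following; the function name, signature, and flag are hypothetical, not existing scipp API:

```python
import numpy as np

def lorentz_factor(d, theta, include_d4=True):
    """Hypothetical configurable Lorentz factor.

    Always applies the sin(theta) part; the d^4 part is optional,
    e.g. when other software already applies it.
    """
    factor = np.sin(theta)
    if include_d4:
        factor = factor * d ** 4
    return factor

# Full correction vs. sin(theta)-only, at bin midpoints.
full = lorentz_factor(2.0, np.pi / 2)
angular_only = lorentz_factor(2.0, np.pi / 2, include_d4=False)
```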