add ultranest import #313
Conversation
Hi @JohannesBuchner,
thanks for setting this up!
In the interest of keeping things slim, we try to keep only the minimal set of example_data files that anesthetic needs to produce its dataframes.
So, e.g., in the case of PolyChord, which produces all kinds of output files, we only use the following three files (see the sketch after the list):
<root>_dead-birth.txt
<root>.paramnames
<root>_phys_live-birth.txt
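For context, a rough sketch (my own illustration; the root path "tests/example_data/pc" and the read_chains entry point are assumptions about anesthetic 2.x, not part of this PR) of how those three files are enough to rebuild the dataframes:

```python
from anesthetic import read_chains

# Hypothetical root prefix: read_chains picks up <root>_dead-birth.txt,
# <root>.paramnames and <root>_phys_live-birth.txt from it.
samples = read_chains("tests/example_data/pc")

print(samples.columns)  # parameter names/labels come from <root>.paramnames
print(samples.logZ())   # evidence estimate built from the birth/death contours
```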
OK, done
The h5py import is failing because anesthetic has no dependency on it. If you want to keep it optional, we can move it into the read_ultranest function; commit caca2f2 does that.
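As a rough illustration of the deferred-import pattern being described (a sketch under assumptions, not the actual diff in caca2f2; the HDF5 path and dataset name are guesses at UltraNest's on-disk layout), importing h5py inside the reader keeps it an optional dependency:

```python
import numpy as np


def read_ultranest(root, *args, **kwargs):
    """Sketch of an UltraNest reader keeping h5py optional (illustrative only)."""
    # Deferred import: anesthetic itself need not depend on h5py; users only
    # need it installed if they actually read UltraNest output.
    try:
        import h5py
    except ImportError as e:
        raise ImportError(
            "h5py is required to read UltraNest output: pip install h5py"
        ) from e

    # Path and dataset name are assumptions about UltraNest's output layout.
    with h5py.File(f"{root}/results/points.hdf5", "r") as f:
        points = np.asarray(f["points"])
    return points  # the real reader would build a NestedSamples from this
```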
…ches between key and value
Codecov Report
Patch coverage:

Additional details and impacted files

@@           Coverage Diff            @@
##            master      #313   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           32        33     +1
  Lines         2764      2790    +26
=========================================
+ Hits          2764      2790    +26

☔ View full report in Codecov by Sentry.
Looking good. Many thanks @JohannesBuchner and @lukashergt. Only minor comments/adjustments/organisation to be addressed before merging.
This PR should remind us that it really would be helpful to finish #158 for dynesty support; although until dynesty returns birth contours we'll need to allow constructors to provide a dynamic nlive, in the same way that you can currently provide a fixed nlive in place of logL_birth.
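For reference, a minimal sketch of that existing behaviour (toy data; assuming the anesthetic 2.x NestedSamples constructor, which accepts an integer number of live points where the logL_birth column would normally go):

```python
import numpy as np
from anesthetic import NestedSamples

# Toy dead points from a 2d Gaussian likelihood.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))
logL = -(data**2).sum(axis=1) / 2

# Without birth contours, pass a fixed number of live points in place of
# the logL_birth column.
samples = NestedSamples(data=data, logL=logL, logL_birth=125)  # fixed nlive=125
```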
If I understand correctly, the only thing we need from this is the paramnames entry, although it is nice that it has a maximum likelihood point. Presuming this is found by an optimisation procedure (like the one PolyChord's settings.maximise=true polishing gives), this could be useful in future iterations.
This is the last live point discarded, but for the default frac_remain=0.01 this is good enough.
Theoretically I disagree -- the rule of thumb is that logLmax ~ <logL>_P + d/2, and the width of the typical set in logL space is sqrt(d/2), so in even moderate dimensions the true likelihood peak lies some distance away from where nested sampling terminates, regardless of stopping criterion (see figure 2). But this isn't relevant to this PR!
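To make the rule of thumb concrete, here is a small numerical check (my own illustration, not from the thread) using a d-dimensional unit-Gaussian likelihood, for which logLmax - <logL>_P = d/2 exactly and the spread of logL over the posterior is sqrt(d/2):

```python
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 10, 50):
    theta = rng.normal(size=(100_000, d))   # posterior samples
    logL = -(theta**2).sum(axis=1) / 2      # logL_max = 0 at theta = 0
    gap = 0.0 - logL.mean()                 # logL_max - <logL>_P
    print(f"d={d:3d}  gap={gap:6.2f} (d/2={d/2:.1f})  "
          f"width={logL.std():.2f} (sqrt(d/2)={np.sqrt(d/2):.2f})")
```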
I have been wondering if we could somehow make use of PolyChord's maximise=true output in anesthetic...
- What happens when one appends the list of nested samples with the maximum likelihood point at the very end?
- For safety we could do it with zero weight...?
- Or is there a way of turning the maximum likelihood point into the final nested sampling point?
I think in practice all of these would work fine for a run that had reached convergence. I would worry if you also started trying to use it at different temperatures.
As an API, a better solution would be for the maximum to be appended with a logL_birth=logL=logL_max, so it's officially 'beyond the last live point'.
We'd then have to write the volume calculation to ensure anesthetic sets these to zero or nan weight by default, since there's no way to determine the volume if there's a gap.
I could imagine gaps occurring naturally in the case of importance weighting.
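A toy numpy sketch of that convention (my own illustration, not anesthetic's actual volume calculation): a point appended with logL_birth == logL == logL_max has no live points associated with its contour, so there is no way to assign it a prior volume, and the sketch gives it zero weight by default:

```python
import numpy as np


def toy_weights(logL, logL_birth):
    """Toy nested-sampling weights (a sketch, not anesthetic's code).

    nlive at each death is the number of points born strictly below that
    contour and dying at or above it; the prior volume shrinks by n/(n+1)
    per death.  A point with logL_birth == logL has nlive == 0, so its
    enclosed volume is undefined and it gets zero weight by default.
    """
    order = np.argsort(logL)
    logL, logL_birth = logL[order], logL_birth[order]
    nlive = np.array([np.sum((logL_birth < l) & (logL >= l)) for l in logL])

    X = 1.0
    weights = np.empty(len(logL))
    for i, (l, n) in enumerate(zip(logL, nlive)):
        if n == 0:                 # 'beyond the last live point'
            weights[i] = 0.0       # or np.nan -- no volume can be assigned
            continue
        X_new = X * n / (n + 1)
        weights[i] = np.exp(l) * (X - X_new)
        X = X_new
    return weights


# Toy run with 5 live points, plus an appended maximum-likelihood point
# born at its own contour (logL_birth == logL == logL_max):
rng = np.random.default_rng(2)
logL = np.sort(rng.exponential(size=50))
logL_birth = np.concatenate([np.full(5, -np.inf), logL[:-5]])
logL, logL_birth = np.append(logL, 10.0), np.append(logL_birth, 10.0)
print(toy_weights(logL, logL_birth)[-1])   # -> 0.0
```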
Issue raised in #317.
> Theoretically I disagree -- the rule of thumb is that logLmax ~ <logL>_P + d/2, and the width of the typical set in logL space is sqrt(d/2), so in even moderate dimensions the true likelihood peak lies some distance away from where nested sampling terminates, regardless of stopping criterion (see figure 2), but this isn't relevant to this PR!
As an aside, I have been working through this analytically and experimentally this afternoon, and this statement only holds in higher dimensions.
Here is an analytic result for f=0.01, n=1000 showing how close you get in log-likelihood to the maximum as a function of dimension:
I won't have time to look at this before my visit to Cambridge in September; lots of travel and conferences at the moment.
…DataFrame._set_labels` is a dict, it should be expected to be a dict, and if it is not, then it should fail and not silently ignore `labels`
…ls()` as `make_2d_axes` expects a dict and not a list
…tially assigned labels
…o MultiNest files
…ta generating code next to its output
Nice. I'm happy for this to be merged -- not sure who wants to claim the github credits -- it was initiated by @JohannesBuchner (who may be too out-of-action for the next 6 weeks to press the button), but @lukashergt has probably by this point adjusted more lines.
Whoever does it should press 'squash and merge'.
I don't have the commit bit on this repo ...
I've just sent you a new invitation which should allow you to.
It should ideally be @JohannesBuchner who squashes and merges (which should hopefully be possible now), ideally sooner than in 6 weeks' time. However, no obligation if things are too busy; in that case I'll do it in a couple of days.
I pressed buttons ¯\_(ツ)_/¯
Great! Thanks @JohannesBuchner, and thanks to MaxEnt 2023, which made this happen.
Description
Fixes #311
Adds a read_ultranest reader for UltraNest output.
Checklist:
- flake8 anesthetic tests
- pydocstyle --convention=numpy anesthetic
- python -m pytest