Addressing review comments #99
Codecov Report

    @@           Coverage Diff                              @@
    ##      newapi/dispersion_estimators      #99      +/-  ##
    ==========================================================
    - Coverage                     71.95%   70.51%   -1.44%
    ==========================================================
      Files                            31       33       +2
      Lines                           756      770      +14
    ==========================================================
    - Hits                            544      543       -1
    - Misses                          212      227      +15
> Naming (normalization interface)

We don't have to do it this way; we can still have a "single function" without introducing any new stuff like `Renyi`. The key is that in Julia, functions are their own types. Thus, we can have a single function

    function normalize(entropy_function, args...; kwargs...)
    end

which returns the normalized entropy. Instead of the common name `normalize` we could have `entropy_normalized` instead. You define method dispatch like so:

    function entropy_normalized(entropy_function::typeof(entropy_renyi), args...; kwargs...)
    end

and this is a new method.
There are some advantages to this approach:
- We define a single API function with a single and rather concise docstring. It practically says that the entropy is normalized versus the entropy of a uniform distribution across the alphabet.
- If we add more entropy methods in the future, we can add them to this existing function instead.
- We get cleaner code by defining this one function in its own separate file.
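To make the "functions are their own types" point concrete, here is a small self-contained toy of the dispatch mechanism (the names below are placeholders for illustration, not the package's API):

```julia
# Toy illustration: dispatching on a specific function's own type.
# `myentropy` and `normalize_toy` are placeholder names, not Entropies.jl API.
myentropy(p) = -sum(x -> x * log(x), p)            # stand-in "entropy function"

normalize_toy(f, p) = f(p) / log(length(p))        # generic fallback method
# Specialized method, selected because `myentropy` has its own concrete type:
normalize_toy(f::typeof(myentropy), p) = f(p) / log(length(p))

normalize_toy(myentropy, fill(0.25, 4))            # ≈ 1.0 for the uniform distribution
```

In practice each specialized method would divide by the corresponding maximum entropy; the point here is only that `typeof(myentropy)` is a valid dispatch target.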
I am thinking that we need to define only a single function. Given the alphabet length, we can generate a `Probabilities` vector with all entries equal and pass that to the entropy function directly, no matter what the entropy function is! (This throws an error if the alphabet length cannot be obtained.)
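A quick sketch of that idea, assuming (as elsewhere in this thread) that `Probabilities` normalizes its input and that `entropy_renyi` accepts a `Probabilities` directly:

```julia
using Entropies

L = factorial(4)              # alphabet length, e.g. for SymbolicPermutation(m = 4)
np = Probabilities(ones(L))   # uniform distribution over the alphabet
entropy_renyi(np)             # ≈ log(L): the value to normalize against
```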
Actually, in the docs you say it returns `nothing` already. So the question is, what should it do by default? At the moment it errors by default, which I'm still not sure about. I guess it's okay to leave it as is.
    julia> est = SymbolicPermutation(m = 4)
    SymbolicPermutation{typeof(Entropies.isless_rand)}(1, 4, Entropies.isless_rand)
Put a `;` here at the end of the `est` assignment?
    alphabet_length(est::SymbolicPermutation)::Int = factorial(est.m)
    alphabet_length(est::SymbolicWeightedPermutation)::Int = factorial(est.m)
    alphabet_length(est::SymbolicAmplitudeAwarePermutation)::Int = factorial(est.m)
This function should be extended in the same file where the `probabilities` function is extended, i.e., in the file of the estimator.
    function alphabet_length end

    alphabet_length(est) =
These two definitions should be combined. Also, the second definition should type-annotate `est::ProbabilitiesEstimator`.

Here's my idea for a generic function:

    function normalize(entropy_function, est::ProbabilitiesEstimator, args...; kwargs...)
        L = alphabet_length(est)
        np = Probabilities(ones(L))
        return entropy_function(est, args...; kwargs...) / entropy_function(np, args...; kwargs...)
    end
Excellent. I didn't think of that at all. I will adjust the PR accordingly and ping you when ready.
The citation should be included in the docstring of this function, I guess.
Thinking a bit more about this, I'm not convinced that your pseudocode above is the best way to go. There are two reasons for this:

    function normalize(x, entropy_function, est::ProbabilitiesEstimator, args...; kwargs...)
        return entropy_function(x, est, args...; kwargs...) / alphabet_length(est)
    end
No, we never assume this. We return the entropy normalized versus the uniform distribution. We don't make claims about what the uniform distribution does.
No, because maximizing was never the goal here. Normalizing versus an established standard is.
The cost of computing the entropy includes computing the probabilities. Here we just create a small vector, which I bet would have minimal cost versus that of actually computing the probabilities... But sure, if you are analytically certain that for all entropy methods, regardless of
Sure, but the whole point of normalisation is to ensure that the computed value falls in the range `[0, 1]`. For other entropies, this might not be the case. If there is suddenly an entropy that isn't maximised by the uniform distribution, then the normalized value is no longer guaranteed to be on `[0, 1]`.

Of course, we can just define normalization in Entropies.jl to mean normalization to the entropy of a uniform distribution (as you suggest), and leave it to the user to find out if this normalization is meaningful to them. However, I think we should strive to provide "meaningful" output out-of-the-box, which makes having a completely generic
Based on the references I found, this is the case for Renyi and Tsallis entropies. Not sure about the Kaniadakis and entropies, though. But this is a potential issue regardless of whether we normalize to

EDIT: we can of course just state in the docstring of

EDIT 2: I think an informative docstring message about what the normalization actually does is the better approach for now, since I too don't want to delve too deeply into all these generalized entropies at the moment.
That we are spending way too much time thinking about this :P Let's just say, clearly, in the docstring that we divide by
New entropies will get the same treatment, the same division, and the user needs to be educated about whether this makes sense for them, I think...
Fair enough 😆
Ok, done. I'll tag you once I've implemented the new interface.
@Datseris Changes since the last comments:
    entropy_normalized(f::Function, p::Probabilities, est::ProbabilitiesEstimator, args...;
        kwargs...)

    Normalize the entropy, as returned by the entropy function `f` called with the given
    arguments (i.e. `f(p, args...; kwargs...)`), to the entropy of a uniform distribution,
    inferring [`alphabet_length`](@ref) from `est`.
I'm not convinced that this method is useful... If the user has the `est`, they may as well use the first method.
We can just remove it. The user can just normalize to `log(N)` manually if they need to.
Wait, I'm not so sure of that. If we remove this method, then:

Say I have a set of probabilities `p = Probabilities([0.2, 0.2, 0.3, 0.3])` that I have already computed with a known estimator, but for which I don't have the original data. I now want to compute 1) the Renyi entropy and 2) the normalized Renyi entropy. Without this method, it is easy to compute 1), but there is no convenient way to compute 2) without re-implementing precisely what this function does myself. That is, I'd always have to start from raw data if I want to compute normalized entropy.
What do you mean? You do `entropy_renyi(p)/log(alphabet_length(est))`, since you have the estimator but not the original data.
What do you mean? You do `entropy_renyi(p)/log(length(p))`. What's the problem? :D
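Concretely, for the probabilities from the example above, that manual normalization is (a sketch using the `Probabilities` and `entropy_renyi` calls already shown in this thread):

```julia
using Entropies

p = Probabilities([0.2, 0.2, 0.3, 0.3])
entropy_renyi(p) / log(length(p))   # normalized Renyi (q = 1, i.e. Shannon) entropy, in [0, 1]
```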
    See also [`entropy_renyi`](@ref), [`entropy_normalized`](@ref).
    """
    maxentropy_renyi(N::Int; base = MathConstants.e) = log(base, N)
Why would we need such a function? I'm not convinced that providing a wrapper function over `log` is useful either :D
Because maximum entropy in itself is a useful thing to be able to compute. For Renyi entropy, it is trivially `log(N)`, but for other entropies, it is not trivial at all. For stretched exponential entropy, for example, which I've prepared in a non-committed PR, the maximum entropy is a function of the parameters of the method and involves a complicated expression with gamma and incomplete gamma functions. For Tsallis entropy, to be valid for all parameter values, the maximum entropy is no longer `log(N)` but a more complicated expression too.

I think there should be a convenient way for the user to compute this quantity, to also be able to work with the entropies from a theoretical standpoint. All analytical tests for the stretched exponential entropy, and the example reproducing the original paper, for example, involve using `maxentropy_stretched_exponential`.

The maximum entropy is a fundamental property of entropies. I don't see why we shouldn't provide this functionality. This is, after all, a package, or "library", dedicated specifically to computing entropies and related quantities. I don't think pruning is necessary just because we don't see an immediate use for the method at the moment.

When we end up with a complete library of generalized entropies (I think I've seen ca. 10 different ones until now), it would be weird not to provide functionality to get relevant parameters/values for all of them (i.e. not providing `maxentropy` for Renyi, but for all other entropies).
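To illustrate why the Tsallis maximum is not simply `log(N)`: for Tsallis entropy with order `q` and constant `k`, the maximum over an alphabet of `N` symbols is `k*(N^(1-q) - 1)/(1-q)`, which recovers `log(N)` as `q → 1`. A minimal sketch (the function name and signature are illustrative, not necessarily the package's):

```julia
# Illustrative sketch: maximum Tsallis entropy for an alphabet of N symbols,
# attained at the uniform distribution (for q > 0).
function maxentropy_tsallis_sketch(N::Int; q = 2.0, k = 1.0)
    q ≈ 1.0 && return k * log(N)              # q → 1 limit recovers Shannon's log(N)
    return k * (N^(1 - q) - 1) / (1 - q)
end

maxentropy_tsallis_sketch(4; q = 2.0)   # 0.75
maxentropy_tsallis_sketch(4; q = 1.0)   # ≈ 1.386 == log(4)
```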
Okay, I see.

How about a compromise between providing such methods and this library not reaching 1,000 exported names, by extending an already existing function? Specifically, `maximum`. So the interface would be

    maximum(entropy_function, estimator)

and we add dispatch for the `entropy_function`.

Actually, this whole business makes me wonder if we should make "entropy types". We can have a single `entropy(...)` function, and "entropy types" such as `Renyi()` and `Tsallis()`. So the Renyi entropy for a given estimator would be `entropy(Renyi(), x, est)`. This way we can actually call `maximum(Renyi(), PermutationX())` instead of passing functions. The entropy types can have arguments such as `Renyi(q)` for the order `q`.
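A rough sketch of what that entropy-types design could look like (the type names, fields, and `entropy` signature here are illustrative only, not the implemented API):

```julia
# Illustrative sketch of the proposed "entropy types" + single `entropy` function design.
abstract type EntropyType end

Base.@kwdef struct Renyi <: EntropyType
    q::Float64 = 1.0
    base::Float64 = MathConstants.e
end

# One entry point; dispatch on the entropy type picks the formula.
function entropy(e::Renyi, p::AbstractVector{<:Real})
    e.q ≈ 1.0 && return -sum(x -> x > 0 ? x * log(e.base, x) : 0.0, p)  # Shannon limit
    return log(e.base, sum(x -> x^e.q, p)) / (1 - e.q)
end

# The proposed `maximum` extension would then also dispatch on the entropy type, e.g.
# Base.maximum(e::Renyi, est) = log(e.base, alphabet_length(est))

entropy(Renyi(q = 2.0), fill(0.25, 4))   # ≈ log(4) for the uniform distribution
```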
I vote for the last suggestion, and I am now more convinced of the entropy types, which you suggested at some point but which we decided not to go with...
If you also vote for that I'd like to do it after I merge this PR.
> Actually, this whole business makes me think if we should make "entropy types". We can have a single `entropy(...)` function, and we have "entropy types" such as `Renyi()` and `Tsallis()`. So the Renyi entropy for a given estimator would be `entropy(Renyi(), x, est)`. This way we can actually call `maximum(Renyi(), PermutationX())` instead of passing functions. The estimators can have arguments such as `Renyi(q)` for the order `q`.
Yes, going for the dispatch version would perhaps be better once the codebase grows. I wasn't aware to begin with of how many types of entropies there actually are, but they are in fact plentiful.

If the rest of this PR looks OK to you, maybe we just merge this PR (in its current state) and the original one into `main`. Then, as you say, we can implement the multiple-dispatch `entropy(EntropyType(), Estimator())` version in a separate PR.

If this is merged quickly, I can put together a PR tonight for the new version, because I'd like to get this off the table as soon as possible and arrive at a stable API.
I'd still cut down on content here. E.g., I don't think providing

See comment above.

Let me do some pruning here, and if I prune too much, you let me know and we add back the things you believe are useful.
@kahaaga I'd prefer to make the first draft for the new API, if that's okay with you, and you can review it instead. I'll have it done by tonight.

@kahaaga Actually, wait, I'm confused: where did this PR just get merged into? Not at the

This was merged into the `newapi/dispersion_estimators` branch.

Perfect.

Forget everything I said, I hadn't realized that

Yes, precisely.
* Dispersion and reverse dispersion probability estimators
* Fix tests
* ReverseDispersion should be a complexity measure, plus addressing some comments
* Export reverse_dispersion
* Reverse dispersion
* Remove reference to `ReverseDispersion`
* Update docstring
* Add analytical tests
* Improve tests and add doc examples
* Fix tests
* Better docs and doctests
* Remove file
* Fix tests
* Update src/complexity_measures/reverse_dispersion_entropy.jl

Co-authored-by: George Datseris <[email protected]>

* Addressing review comments (#99)
* Address review comments
* Throw error
* Remove non-used field
* Fix tests
* Merge docs and move individual methods to their respective files
* Tsallis reduces to Shannon entropy for q -> 1.
* Normalized entropy API, including utility functions
* Analytical test cases for Tsallis
* Include `k` in `maxentropy_tsallis`

Co-authored-by: George Datseris <[email protected]>
Hey, @Datseris. I've tried to address all your comments on #96 in this PR.

- `alphabet_length` function that dispatches on estimators (throws error if alphabet length cannot be obtained).
- `q`.
- `distance_to_whitenoise` part of public API.

Some things to potentially discuss are:

Naming (normalization interface)

- Instead of a single `entropy_normalized` function, I used `entropy_renyi_norm`, `entropy_tsallis_norm` and `entropy_shannon_norm`. This mirrors the `entropy_renyi` and similar functions we already have. The alternative is to have `entropy_normalized(x, est = Renyi(); kwargs...)` or something along those lines. As we already discussed elsewhere, I don't think the latter approach is a good idea.
- (`entropy_permutation_norm`), or leave it up to the user to figure out how to do so?

Normalization/alphabet length

Where to indicate which estimators implement `alphabet_length`? Alternatives are:

- `alphabet_length` for estimators that don't support normalization, e.g. `throw("unimplemented")`. I favor explicitly throwing an informative error instead of silently returning `nothing` when an estimator doesn't implement `alphabet_length`, because this reduces the chance of errors for users.
- In the docstring of e.g. `renyi_entropy_norm`.
- Under an `## Implements` heading.

For user-friendliness, I think either of the latter is best. For code maintenance purposes, the first alternative is best (which is what I've done in this PR).
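A minimal sketch of the "informative error by default" behaviour described above (the error message and the fallback signature are illustrative, not the package's exact implementation):

```julia
# Fallback: estimators without a known alphabet get an informative error
# instead of silently returning `nothing`.
alphabet_length(est) = throw(ArgumentError(
    "`alphabet_length` is not implemented for $(typeof(est)); " *
    "normalized entropies cannot be computed with this estimator."))

# Estimators with a countable alphabet extend it, as in the diff above:
# alphabet_length(est::SymbolicPermutation)::Int = factorial(est.m)
```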