PR #1027: added psi calculation to categorical columns
```diff
@@ -1,6 +1,7 @@
 """Contains class for categorical column profiler."""
 from __future__ import annotations

+import math
 from collections import defaultdict
 from operator import itemgetter
 from typing import cast
```

```diff
@@ -304,7 +305,14 @@ def diff(self, other_profile: CategoricalColumn, options: dict = None) -> dict:
                 other_profile._categories.items(), key=itemgetter(1), reverse=True
             )
         )

+        if cat_count1.keys() == cat_count2.keys():
+            total_psi = 0.0
+            for key in cat_count1.keys():
+                perc_A = cat_count1[key] / self.sample_size
+                perc_B = cat_count2[key] / other_profile.sample_size
+                total_psi += (perc_B - perc_A) * math.log(perc_B / perc_A)
+
+            differences["statistics"]["psi"] = total_psi

         differences["statistics"][
             "categorical_count"
         ] = profiler_utils.find_diff_of_dicts(cat_count1, cat_count2)
```

Review comment (on the new PSI block): I think we should raise a warning (or at least post to the logger) saying that PSI was not calculated, and why, in this section of the code. I was looking at L704 in test_categorical_profile.py, and that would be a case (L704 - L732) where the condition at L308 is covered; we should assert that none of this code runs (i.e. that a warning or logger call is made instead).
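The added branch can be read as a standalone function. Here is a minimal sketch of that logic; the function name and the use of summed counts in place of the profiler's `sample_size` attribute are assumptions for illustration, not part of the PR:

```python
import math


def population_stability_index(count_a: dict, count_b: dict) -> float:
    """PSI between two categorical count dicts sharing the same keys.

    Mirrors the loop added in the diff above:
    PSI = sum over categories of (perc_B - perc_A) * ln(perc_B / perc_A).
    """
    if count_a.keys() != count_b.keys():
        # The PR silently skips PSI in this case; raising here is a choice
        # made for this sketch only.
        raise ValueError("category sets differ; PSI not computed")
    size_a = sum(count_a.values())
    size_b = sum(count_b.values())
    total_psi = 0.0
    for key in count_a:
        perc_a = count_a[key] / size_a
        perc_b = count_b[key] / size_b
        total_psi += (perc_b - perc_a) * math.log(perc_b / perc_a)
    return total_psi
```

Restricting the computation to identical key sets conveniently sidesteps zero counts: a category present in only one profile would make `perc_B / perc_A` zero or undefined, and `math.log` would fail.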
```diff
@@ -1,4 +1,5 @@
 import json
+import math
 import os
 import unittest
 from collections import defaultdict
```

```diff
@@ -756,6 +757,44 @@ def test_categorical_diff(self):
         }
         self.assertDictEqual(expected_diff, profile.diff(profile2))

+        # Test diff with psi enabled
+        df_categorical = pd.Series(["y", "y", "y", "y", "n", "n", "n", "maybe"])
+        profile = CategoricalColumn(df_categorical.name)
+        profile.update(df_categorical)
+
+        df_categorical = pd.Series(["y", "maybe", "y", "y", "n", "n", "maybe"])
+        profile2 = CategoricalColumn(df_categorical.name)
+        profile2.update(df_categorical)
+
+        # Calculate expected_psi
+        expected_psi = 0
```

Review comment (on `expected_psi = 0`): What about other test cases for non-zero?

Reply: It is non-zero.

Review comment: could we make …
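The reply above can be checked directly: plugging the two bin-percentage lists from the test into the PSI formula yields a strictly positive value, so starting from `expected_psi = 0` is just the accumulator's initial value. This snippet only reproduces the test's own arithmetic:

```python
import math

bin_perc = [4 / 8, 3 / 8, 1 / 8]    # "y", "n", "maybe" in the first series
bin_perc_2 = [3 / 7, 2 / 7, 2 / 7]  # same categories in the second series

expected_psi = 0.0
for perc_A, perc_B in zip(bin_perc, bin_perc_2):
    expected_psi += (perc_B - perc_A) * math.log(perc_B / perc_A)

print(expected_psi)  # roughly 0.168, so clearly non-zero
```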
```diff
+        bin_perc = [4 / 8, 3 / 8, 1 / 8]
+        bin_perc_2 = [3 / 7, 2 / 7, 2 / 7]
+        for perc_A, perc_B in zip(bin_perc, bin_perc_2):
+            expected_psi += (perc_B - perc_A) * math.log(perc_B / perc_A)
+
+        # chi2-statistic = sum((observed-expected)^2/expected for each category in each column)
+        # df = categories - 1
+        # p-value found through using chi2 CDF
+        expected_diff = {
+            "categorical": "unchanged",
+            "statistics": {
+                "unique_count": "unchanged",
+                "unique_ratio": -0.05357142857142855,
+                "chi2-test": {
+                    "chi2-statistic": 0.6122448979591839,
+                    "df": 2,
```

Review comment (on `"df": 2`): Outside the scope of this PR: does `df` stand for dataframe here? Looks like it's an int.

Reply: No, it's supposed to be an int; it stands for degrees of freedom. But I agree that it is not a great choice of name.
```diff
+                    "p-value": 0.7362964551863367,
+                },
+                "categories": "unchanged",
+                "gini_impurity": -0.059311224489795866,
+                "unalikeability": -0.08333333333333326,
+                "psi": expected_psi,
+                "categorical_count": {"y": 1, "n": 1, "maybe": -1},
+            },
+        }
+        self.assertDictEqual(expected_diff, profile.diff(profile2))
+
     def test_unalikeability(self):
         df_categorical = pd.Series(["a", "a"])
         profile = CategoricalColumn(df_categorical.name)
```
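As an aside on the chi2 comments in the test: for two degrees of freedom, the chi-square distribution is the exponential distribution with rate 1/2, so its survival function is exp(-x / 2) and the hard-coded p-value can be checked with the standard library alone. The closed form here is a mathematical identity for df = 2, not something taken from the PR:

```python
import math

chi2_statistic = 0.6122448979591839
df = 2  # degrees of freedom, not a DataFrame

# For df = 2, P(X > x) = exp(-x / 2), so the p-value is:
p_value = math.exp(-chi2_statistic / 2)
print(p_value)  # ~0.7362964551863367, the value hard-coded in expected_diff
```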
Review comment (on the `if cat_count1.keys() == cat_count2.keys():` condition): Also, is there a case where the keys would be equal but the default shouldn't be 0? Thinking of when `.keys()` on both is empty (i.e. `{}.keys()` returns `dict_keys([])`). The issue is not that the loop will do much on that iteration; it's that it will still set `psi` to `0.0` — but should it really? Or should we say it is uncalculable? Should we add a condition requiring a minimum of `len() == 1` keys?

Reply: If the categories are equal and of equal count, the PSI is zero. So if there are no categories (and by extension no counts, hence no percentages to calculate), a couple of points: `psi` of nothing compared to nothing should be zero, and `psi` is used to measure change between two datasets; if nothing changed because there is nothing in both profiles, returning `0.0` for `psi` is a good thing, right?
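The edge case being debated can be demonstrated in a few lines: with two empty count dicts, the keys compare equal, the loop body runs zero times, and the accumulator is reported as 0.0. This is a sketch of the behavior under discussion, not code from the PR:

```python
cat_count1: dict = {}
cat_count2: dict = {}

# {}.keys() == {}.keys() evaluates to True, so the PR's branch is entered...
total_psi = None
if cat_count1.keys() == cat_count2.keys():
    total_psi = 0.0
    for key in cat_count1:  # ...but this loop iterates zero times
        pass

print(total_psi)  # 0.0 — "nothing compared to nothing" reports no shift
```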