PyArrow: Avoid buffer-overflow by avoiding a sort #1555
Conversation
Second attempt of #1539.

This was already being discussed back here: apache#208 (comment)

This PR changes from doing a sort followed by a single pass over the table to an approach where we determine the unique partition tuples and then filter on them one by one. Fixes apache#1491, because the sort caused buffers to be joined to the point where they would overflow in Arrow. I think this is an issue on the Arrow side, and it should automatically break the result up into smaller buffers. The `combine_chunks` method does this correctly.

Now:

```
Run 0 took: 0.42877754200890195
Run 1 took: 0.2507691659993725
Run 2 took: 0.24833179199777078
Run 3 took: 0.24401691700040828
Run 4 took: 0.2419595829996979
Average runtime of 0.28 seconds
```

Before:

```
Run 0 took: 1.0768639159941813
Run 1 took: 0.8784021250030492
Run 2 took: 0.8486490420036716
Run 3 took: 0.8614017910003895
Run 4 took: 0.8497851670108503
Average runtime of 0.9 seconds
```

So it comes with a nice speedup as well :)
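As an illustration of the new approach, here is a minimal PyArrow sketch (the table and the `part` column are made up for the example; the actual PR works on partition tuples derived from the table's partition spec):

```python
import pyarrow as pa
import pyarrow.compute as pc

# Illustrative table; "part" stands in for a partition column.
table = pa.table({"part": ["a", "b", "a", "c"], "value": [1, 2, 3, 4]})

# Instead of sorting the whole table (which forces Arrow to concatenate
# chunk buffers and can overflow an int32 offset buffer), determine the
# unique partition tuples first ...
unique_parts = table.group_by(["part"]).aggregate([]).to_pylist()

# ... then filter the table once per partition tuple.
for row in unique_parts:
    mask = pc.equal(table["part"], row["part"])
    partition = table.filter(mask)
    print(row["part"], partition.num_rows)
```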
LGTM
@kevinjqliu I think the test is a bit too much; according to your comment here #1539 (comment), the test allocates almost 5 GB 😀
2^32 (4_294_967_296) is around 4 GB; we just need to test a scenario greater than that.
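A rough sketch of how such a scenario can be constructed (sizes are illustrative and this is deliberately memory-hungry; it is not the actual test from the PR):

```python
import pyarrow as pa

# Each batch carries ~1 MiB of string payload.
chunk = pa.array(["x" * 1024] * 1024)
batch = pa.RecordBatch.from_arrays([chunk], names=["s"])

# ~5 GiB in total across chunks: more than the 2**32 bytes a single
# pa.string() array can address with its int32 offsets. The batches
# share buffers, so building the table itself is cheap.
table = pa.Table.from_batches([batch] * (5 * 1024))

# combine_chunks allocates the full ~5 GiB, and handles the overflow by
# keeping the result split across multiple chunks rather than joining
# everything into one oversized buffer.
combined = table.combine_chunks()
```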
pyiceberg/partitioning.py (Outdated)
```python
if not isinstance(value, int):
    # When adding files, it can be that we still need to convert
    # from logical types to physical types
    value = _to_partition_representation(iceberg_type, value)
transformed_value = partition_field.transform.transform(iceberg_type)(value)
```
This is causing bugs; I'm going to revisit this to fix it properly.
Yes, so I got to the bottom of it. It has to do with the return types of the transforms. E.g. when we apply the bucket transform, the result is always an int, which is great. The problem is with the identity transform, where the destination type is equal to the source type: so when a date comes in, a date also comes out.

I think in the end it is better to remove the `_to_partition_representation` call and see if we can consolidate this somewhere, but that's a different PR.
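A minimal sketch of the behavior being described, using pyiceberg's public transforms API (the example values are made up):

```python
import datetime
from pyiceberg.transforms import BucketTransform, IdentityTransform
from pyiceberg.types import DateType

# The bucket transform always yields an int, regardless of the source type.
bucket = BucketTransform(num_buckets=10).transform(DateType())
print(bucket(18628))  # 18628 days since epoch (2021-01-01): an int bucket id

# The identity transform returns its input unchanged, so if a
# datetime.date slips in instead of the physical int representation,
# a datetime.date comes back out. That is the issue discussed above.
identity = IdentityTransform().transform(DateType())
print(identity(datetime.date(2021, 1, 1)))  # datetime.date(2021, 1, 1)
```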
> So when a date comes in, a date also comes out.

Is it due to not having support for a datetime literal? #1542
Also, if it's just for adding files, perhaps we can do something special just for that path.
> Also, if it's just for adding files, perhaps we can do something special just for that path.

Yes, that's exactly what I went for. I think we can simplify the logic in subsequent PRs :)
Following up on #1555, which commented out tests in `tests/integration/test_partitioning_key.py`. This PR uncomments those tests; they run successfully.