[DRAFT][PYTHON] Improve Python UDF Arrow Serializer Performance #51225
base: master
Conversation
From a cursory look, this seems to make sense.
python/pyspark/worker.py
Outdated
arrow_return_type = to_arrow_type(
    return_type, prefers_large_types=use_large_var_types(runner_conf)
)

def wrap_arrow_array_iter_udf(f, return_type, runner_conf):
Let's get rid of those white spaces, though.
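For reference, a minimal usage sketch of to_arrow_type with the prefers_large_types keyword shown in this diff (the StringType example mapping is an assumption, not taken from this PR):

```python
from pyspark.sql.pandas.types import to_arrow_type
from pyspark.sql.types import StringType

# Maps a Spark SQL type to its PyArrow equivalent. With
# prefers_large_types=True, variable-width types are assumed to map to
# their large Arrow variants (e.g. large_string instead of string).
arrow_type = to_arrow_type(StringType(), prefers_large_types=True)
```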
def test_complex_input_types(self):
    for pandas_conversion in [True, False]:
        with self.subTest(pandas_conversion=pandas_conversion), self.sql_conf(
            {"spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled": str(pandas_conversion).lower()}
It seems we can move the config setting into the setUpClass method.
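For illustration, a rough sketch of what that refactor could look like, assuming the mixin's usual ReusedSQLTestCase setup (class names beyond the mixin are hypothetical):

```python
from pyspark.testing.sqlutils import ReusedSQLTestCase

class ArrowPythonUDFLegacyTests(ArrowPythonUDFLegacyTestsMixin, ReusedSQLTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # Pin the legacy pandas-conversion path once for the whole class,
        # instead of toggling it per test via self.sql_conf(...).
        cls.spark.conf.set(
            "spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled", "true"
        )

    @classmethod
    def tearDownClass(cls):
        cls.spark.conf.unset(
            "spark.sql.legacy.execution.pythonUDF.pandas.conversion.enabled"
        )
        super().tearDownClass()
```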
@unittest.skipIf(
    not have_pandas or not have_pyarrow, pandas_requirement_message or pyarrow_requirement_message
)
class ArrowPythonUDFLegacyTestsMixin(BaseUDFTestsMixin):
TODO: also add parity tests in pyspark.sql.tests.connect.arrow.test_parity_arrow_python_udf
elif isinstance(packed, list):
    # multiple array UDFs in a projection
    arrs = [self._create_array(t[0], t[1], self._arrow_cast) for t in packed]
elif isinstance(packed, tuple) and len(packed) == 3:
It seems the conditions in the Arrow-optimized UDF are more complicated.
In what case will this branch be chosen?
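For context, a hedged illustration of the list-shaped case: per the diff's own comment, `packed` becomes a list when multiple Arrow-optimized UDFs are evaluated in a single projection, one (result, arrow_type) pair per UDF. The example below assumes an active SparkSession named `spark`; the UDF names are made up for illustration.

```python
from pyspark.sql.functions import col, udf

@udf("long", useArrow=True)  # Arrow-optimized Python UDF
def plus_one(x):
    return x + 1

@udf("long", useArrow=True)
def times_two(x):
    return x * 2

# Two Arrow UDFs in one projection: the serializer presumably receives
# one entry per UDF, packed into a list, which this branch unpacks.
df = spark.range(3).select(plus_one(col("id")), times_two(col("id")))
df.show()
```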
What changes were proposed in this pull request?
This PR removes the pandas <> Arrow <> pandas conversion in Arrow-optimized Python UDFs by using PyArrow directly.
Why are the changes needed?
The Python UDF Arrow serializer has significant overhead from converting Arrow batches into pandas Series and converting UDF results back into a pandas DataFrame.
We can instead convert Python objects directly to Arrow, avoiding the expensive pandas conversion.
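A minimal sketch of the idea, not the actual serializer code (function names and the per-element loop are illustrative assumptions):

```python
import pyarrow as pa

def eval_udf_via_pandas(arr: pa.Array, f, return_type: pa.DataType) -> pa.Array:
    # Legacy path: Arrow -> pandas Series -> UDF -> pandas -> Arrow.
    series = arr.to_pandas()
    result = series.map(f)
    return pa.Array.from_pandas(result, type=return_type)

def eval_udf_direct(arr: pa.Array, f, return_type: pa.DataType) -> pa.Array:
    # Direct path: Arrow -> Python objects -> UDF -> Arrow, skipping pandas.
    results = [f(v) for v in arr.to_pylist()]
    return pa.array(results, type=return_type)

batch = pa.array([1, 2, 3], type=pa.int64())
assert eval_udf_direct(batch, lambda x: x + 1, pa.int64()).to_pylist() == [2, 3, 4]
```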
Does this PR introduce any user-facing change?
Yes, the type coercion behavior of Arrow-optimized Python UDFs changes:
Legacy type coercion (Arrow batch eval):
New type coercion (Arrow batch eval):
How was this patch tested?
Added tests for both the legacy and the new code path, for Arrow-batch evaluation.
Was this patch authored or co-authored using generative AI tooling?
No