Read: fetch file_schema directly from pyarrow_to_schema #597
Conversation
LGTM!
I want to summarize my understanding, based on the comment from #584.
When reading the parquet files, we use the projected version of the parquet file's schema; the arrow table that is created is then "casted" to the Iceberg schema. This mapping is based on field_id:
iceberg-python/pyiceberg/io/pyarrow.py
Line 1026 in 5039b5d
return to_requested_schema(projected_schema, file_project_schema, arrow_table)
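For context, a minimal sketch of that read path, assuming the pyiceberg APIs referenced in this PR; `path` and `projected_schema` are placeholders, and the real `_task_to_table` also prunes the file schema to the projected field IDs before reading:

```python
# Simplified sketch, not the actual implementation: read the parquet
# file into an arrow table, then cast it to the requested Iceberg
# schema, matching columns by field_id rather than by name.
import pyarrow.dataset as ds
from pyarrow.fs import LocalFileSystem
from pyiceberg.io.pyarrow import pyarrow_to_schema, to_requested_schema

fs = LocalFileSystem()  # placeholder; in practice this comes from the FileIO
arrow_format = ds.ParquetFileFormat()
with fs.open_input_file(path) as fin:
    fragment = arrow_format.make_fragment(fin)
    # The Iceberg view of the file's schema, derived from the physical
    # parquet schema so that column names always match the file.
    file_schema = pyarrow_to_schema(fragment.physical_schema)
    arrow_table = fragment.to_table()

# Cast to the schema requested by the scan; the file -> requested
# column mapping is resolved via field_id.
result = to_requested_schema(projected_schema, file_schema, arrow_table)
```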
@@ -966,20 +965,15 @@ def _task_to_table(
     with fs.open_input_file(path) as fin:
         fragment = arrow_format.make_fragment(fin)
         physical_schema = fragment.physical_schema
-        schema_raw = None
-        if metadata := physical_schema.metadata:
-            schema_raw = metadata.get(ICEBERG_SCHEMA)
My initial intent was that it was probably faster to deserialize the schema rather than run the visitor, but this shows it is not worth the additional complexity :)
Thanks @kevinjqliu and @Fokko for reviewing! Thanks @kevinjqliu for the integration test!
#584 (comment)
I think we do correctly project by IDs. The real problem is the way that we sanitize the column names.

In #83, we added sanitization of the `file_schema` in `_task_to_table`, under the assumption that the column names in the parquet file follow the Avro naming spec. However, I think the "sanitization" should be more general here: it should just ensure that the final `file_project_schema` contains the same column names as the parquet file's schema.

The names in `file_schema` can differ from the actual column names in the parquet file because we first try to load the file schema from the JSON string stored in the parquet file metadata (link). Parquet files written by Iceberg Java contain this metadata JSON string; it represents the Iceberg table schema at the time the file was written, and therefore contains un-sanitized column names.

Since we always need to run a visitor to sanitize/ensure that the column names match, how about we just get the `file_schema` directly from the pyarrow physical schema? This way, we can ensure that the column names match, and thus do not need to sanitize the column names later.

I have verified that this change fixes both the sanitization issue in #83 and the issue here. Given that we want to align the writing behavior with the Java implementation, we should also proceed with #590.
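To make the proposal concrete, here is a hedged sketch of the replacement for the lines removed in the diff above; this is the shape of the change, not the exact patch:

```python
from pyiceberg.io.pyarrow import pyarrow_to_schema

# `fragment` as in the diff above: arrow_format.make_fragment(fin)
physical_schema = fragment.physical_schema

# Previously, file_schema was deserialized from the ICEBERG_SCHEMA json
# in the parquet footer, i.e. the (un-sanitized) table schema at write
# time, which can disagree with the file's actual column names.
# Proposed: run the pyarrow_to_schema visitor over the physical schema,
# so the names are guaranteed to match; projection still happens by
# field_id either way.
file_schema = pyarrow_to_schema(physical_schema)
```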
Borrowed the integration test from #590