Use write.parquet.compression-{codec,level} #358

Merged (9 commits) on Feb 5, 2024
Changes from 5 commits
43 changes: 36 additions & 7 deletions pyiceberg/io/pyarrow.py
@@ -26,6 +26,7 @@
from __future__ import annotations

import concurrent.futures
import fnmatch
import itertools
import logging
import os
@@ -1720,13 +1721,14 @@ def write_file(table: Table, tasks: Iterator[WriteTask]) -> Iterator[DataFile]:
except StopIteration:
pass

parquet_writer_kwargs = _get_parquet_writer_kwargs(table.properties)

file_path = f'{table.location()}/data/{task.generate_data_file_filename("parquet")}'
file_schema = schema_to_pyarrow(table.schema())

collected_metrics: List[pq.FileMetaData] = []
fo = table.io.new_output(file_path)
with fo.create(overwrite=True) as fos:
with pq.ParquetWriter(fos, schema=file_schema, version="1.0", metadata_collector=collected_metrics) as writer:
with pq.ParquetWriter(fos, schema=file_schema, version="1.0", **parquet_writer_kwargs) as writer:
writer.write_table(task.df)

data_file = DataFile(
@@ -1745,14 +1747,41 @@ def write_file(table: Table, tasks: Iterator[WriteTask]) -> Iterator[DataFile]:
key_metadata=None,
)

if len(collected_metrics) != 1:
# One file has been written
raise ValueError(f"Expected 1 entry, got: {collected_metrics}")

fill_parquet_file_metadata(
data_file=data_file,
parquet_metadata=collected_metrics[0],
parquet_metadata=writer.writer.metadata,

Contributor:
I checked this through the debugger, and this looks good. Nice change @jonashaag 👍

Contributor Author:
You can also tell from the PyArrow code that it's identical :)
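
Aside (not part of the PR, a minimal sketch under made-up paths): PyArrow appends the low-level writer's metadata to the metadata_collector only when the writer is closed, so reading writer.writer.metadata after the with block yields the same FileMetaData and the collector becomes redundant.

import pyarrow as pa
import pyarrow.parquet as pq
from typing import List

df = pa.table({"id": [1, 2, 3]})

# Old approach: collect FileMetaData through the metadata_collector hook.
collected_metrics: List[pq.FileMetaData] = []
with pq.ParquetWriter("/tmp/with_collector.parquet", df.schema, metadata_collector=collected_metrics) as writer:
    writer.write_table(df)

# New approach: read the metadata straight off the closed writer.
with pq.ParquetWriter("/tmp/without_collector.parquet", df.schema) as writer:
    writer.write_table(df)

assert collected_metrics[0].num_rows == writer.writer.metadata.num_rows == 3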

stats_columns=compute_statistics_plan(table.schema(), table.properties),
parquet_column_mapping=parquet_path_to_id_mapping(table.schema()),
)
return iter([data_file])


def _get_parquet_writer_kwargs(table_properties: Properties) -> Dict[str, Any]:
def _get_int(key: str) -> Optional[int]:
value = table_properties.get(key)
if value is None:
return None
else:
return int(value)
jonashaag marked this conversation as resolved.

for key_pattern in [

Contributor:
Do we want to blow up if one of the properties isn't set?

Contributor Author:
We want to raise if one of the properties is set. But I guess we should check for None

Contributor Author:
None is not allowed, reverted my changes

"write.parquet.row-group-size-bytes",
"write.parquet.page-row-limit",
"write.parquet.bloom-filter-max-bytes",
"write.parquet.bloom-filter-enabled.column.*",
]:
unsupported_keys = fnmatch.filter(table_properties, key_pattern)
if unsupported_keys:
raise NotImplementedError(f"Parquet writer option(s) {unsupported_keys} not implemented")
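
Aside (illustrative values, not from the PR): fnmatch.filter accepts any iterable of names, and iterating the properties dict yields its keys, so the trailing * picks up the per-column bloom-filter properties:

import fnmatch

table_properties = {
    "write.parquet.compression-codec": "zstd",
    "write.parquet.bloom-filter-enabled.column.id": "true",
}

# Only the wildcard bloom-filter key matches, so the guard above would raise.
print(fnmatch.filter(table_properties, "write.parquet.bloom-filter-enabled.column.*"))
# ['write.parquet.bloom-filter-enabled.column.id']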

compression_codec = table_properties.get("write.parquet.compression-codec")

Contributor:
Suggested change:
  compression_codec = table_properties.get("write.parquet.compression-codec")
to:
  compression_codec = table_properties.get("write.parquet.compression-codec", "zstd")

How about adding the default value here? RestCatalog backend and HiveCatalog explicitly set the default codec at catalog level.

DEFAULT_PROPERTIES = {'write.parquet.compression-codec': 'zstd'}

But other catalogs, such as Glue and SQL, do not set this explicitly when creating new tables. In general, for tables that have no write.parquet.compression-codec key in their properties, we still want to use the default codec zstd when writing Parquet.
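
A sketch of the suggested fallback (assumed behaviour, not what this revision of the PR implements): a table-level setting wins, otherwise the zstd default from the catalog-style DEFAULT_PROPERTIES applies.

DEFAULT_PROPERTIES = {"write.parquet.compression-codec": "zstd"}

def effective_codec(table_properties: dict) -> str:
    # A table-level codec overrides the default; otherwise fall back to zstd.
    return {**DEFAULT_PROPERTIES, **table_properties}["write.parquet.compression-codec"]

assert effective_codec({}) == "zstd"
assert effective_codec({"write.parquet.compression-codec": "gzip"}) == "gzip"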

compression_level = _get_int("write.parquet.compression-level")
if compression_codec == "uncompressed":
compression_codec = "none"

return {
"compression": compression_codec,
"compression_level": compression_level,
"data_page_size": _get_int("write.parquet.page-size-bytes"),
"dictionary_pagesize_limit": _get_int("write.parquet.dict-size-bytes"),
}
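
Rough usage sketch of the new helper (property values are illustrative): Iceberg string properties are translated into pyarrow.parquet.ParquetWriter keyword arguments, with unset integer options passed through as None.

kwargs = _get_parquet_writer_kwargs({
    "write.parquet.compression-codec": "zstd",
    "write.parquet.compression-level": "3",
    "write.parquet.page-size-bytes": "1048576",
})
assert kwargs == {
    "compression": "zstd",
    "compression_level": 3,
    "data_page_size": 1048576,
    "dictionary_pagesize_limit": None,
}
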
6 changes: 5 additions & 1 deletion tests/integration/test_reads.py
@@ -224,7 +224,11 @@ def test_ray_all_types(catalog: Catalog) -> None:
@pytest.mark.parametrize('catalog', [pytest.lazy_fixture('catalog_hive'), pytest.lazy_fixture('catalog_rest')])
def test_pyarrow_to_iceberg_all_types(catalog: Catalog) -> None:
table_test_all_types = catalog.load_table("default.test_all_types")
fs = S3FileSystem(endpoint_override="http://localhost:9000", access_key="admin", secret_key="password")
fs = S3FileSystem(
endpoint_override=catalog.properties["s3.endpoint"],
access_key=catalog.properties["s3.access-key-id"],
secret_key=catalog.properties["s3.secret-access-key"],
)
data_file_paths = [task.file.file_path for task in table_test_all_types.scan().plan_files()]
for data_file_path in data_file_paths:
uri = urlparse(data_file_path)
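
For context (an assumption about the integration setup, not part of this diff): the catalog fixtures are expected to carry explicit S3 settings, roughly as below, which is what makes catalog.properties a reliable source for the endpoint and credentials; the URI is assumed, and the credentials are the docker-compose defaults that were previously hard-coded.

from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "rest",
    **{
        "uri": "http://localhost:8181",  # assumed REST catalog endpoint
        "s3.endpoint": "http://localhost:9000",
        "s3.access-key-id": "admin",
        "s3.secret-access-key": "password",
    },
)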