Description
What feature are you trying to implement?
Apache DataFusion Comet is an Apache Spark accelerator with Apache Iceberg support. We would like to enhance that support by leveraging Iceberg-Rust. You can find the details of this effort in the POC PR apache/datafusion-comet#2528 and in slides presented at the 10/9/25 Iceberg-Rust community call.
The short version is that Comet will rely on Apache Iceberg's Java integration with Apache Spark for planning, and then pass the generated FileScanTasks to Iceberg-Rust via a new DataFusion IcebergScan operator in Comet. We need a number of new (or simply public) APIs in the ArrowReader, since we are bypassing the Table interface to avoid redundant (and possibly incorrectly partitioned) planning. I will accumulate those efforts here.
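To illustrate the JVM-to-native handoff described above, here is a minimal sketch using a hypothetical, simplified stand-in for iceberg-rust's `FileScanTask` and a made-up line-based wire format (the real struct also carries the file schema, projected field IDs, predicate, and associated delete files; Comet's actual serialization may differ):

```rust
// Hypothetical, simplified stand-in for iceberg-rust's FileScanTask.
// The real type also carries schema, projection, predicate, and deletes.
#[derive(Debug, Clone, PartialEq)]
struct FileScanTask {
    data_file_path: String,
    start: u64,  // byte offset of this split within the Parquet file
    length: u64, // number of bytes in this split
}

// Sketch of the JVM -> native handoff: the tasks produced by Iceberg's
// Java planner are serialized (one tab-separated line per task here, an
// illustrative wire format only) and decoded on the Rust side before
// being handed to the Arrow reader.
fn decode_tasks(wire: &str) -> Vec<FileScanTask> {
    wire.lines()
        .filter(|l| !l.trim().is_empty())
        .map(|line| {
            let mut parts = line.split('\t');
            FileScanTask {
                data_file_path: parts.next().unwrap().to_string(),
                start: parts.next().unwrap().parse().unwrap(),
                length: parts.next().unwrap().parse().unwrap(),
            }
        })
        .collect()
}

fn main() {
    // Two splits of the same data file, as Java-side planning would emit.
    let wire = "s3://bucket/data/f1.parquet\t0\t4194304\n\
                s3://bucket/data/f1.parquet\t4194304\t4194304\n";
    let tasks = decode_tasks(wire);
    assert_eq!(tasks.len(), 2);
    assert_eq!(tasks[1].start, 4_194_304);
    println!("decoded {} tasks", tasks.len());
}
```

The point of this shape is that the Rust side never re-plans: it consumes exactly the splits the Java planner chose, which is why the reader APIs below need to be callable without going through the Table interface.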
One benefit of this approach is that I can run the Iceberg Java tests against Iceberg Rust's reader. There are gaps in features, so I hope to rapidly iterate on improving Iceberg Rust's reader to support them. I am not using Iceberg Rust's table interface or planning, so others will need to fill the gaps there, but I think this will greatly improve and harden Iceberg Rust's reader.
- Make `ArrowReaderBuilder::new` `pub` instead of `pub(crate)`.
- Expose `ArrowReaderOptions` in `ArrowReaderBuilder`. This likely requires a new Iceberg-Rust Cargo feature, like in DataFusion, to enable the `encryption` feature of the Parquet crate.
- Read Parquet files without field ID metadata (migrated tables)
- Read Parquet files with both equality and position deletes
- Filter row groups when FileScanTask includes byte ranges
- Equality deletes with partial schemas
- `Date32` support in `RecordBatchTransformer`
- Support complex types in pushdown filters
- Support binary, fixedSizeBinary, and decimal(28+) partition values
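For the byte-range item above, one common split convention (the one Spark uses for Parquet; assumed here, not necessarily what iceberg-rust will adopt) is to assign each row group to the split containing its midpoint, so that two splits over the same file never read a row group twice. A std-only sketch with a hypothetical `RowGroupSpan` type:

```rust
// Hypothetical byte range of one row group, as read from Parquet footer
// metadata.
#[derive(Debug, Clone, Copy)]
struct RowGroupSpan {
    offset: u64, // byte offset of the row group's first byte
    len: u64,    // total compressed byte length of the row group
}

// Pick the row-group indices a FileScanTask covering
// [start, start + length) should read, using the midpoint convention:
// a row group belongs to exactly the split that contains its midpoint.
fn row_groups_in_range(groups: &[RowGroupSpan], start: u64, length: u64) -> Vec<usize> {
    let end = start + length;
    groups
        .iter()
        .enumerate()
        .filter(|(_, g)| {
            let mid = g.offset + g.len / 2;
            mid >= start && mid < end
        })
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    // Three 100-byte row groups; two splits of 150 bytes each.
    let groups = [
        RowGroupSpan { offset: 0, len: 100 },
        RowGroupSpan { offset: 100, len: 100 },
        RowGroupSpan { offset: 200, len: 100 },
    ];
    // Group 1's midpoint (150) lands in the second split, so the splits
    // partition the row groups with no overlap.
    assert_eq!(row_groups_in_range(&groups, 0, 150), vec![0]);
    assert_eq!(row_groups_in_range(&groups, 150, 150), vec![1, 2]);
    println!("ok");
}
```

Whatever convention the reader adopts, the key property to preserve is that the union of all splits reads every row group exactly once.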
Willingness to contribute
I can contribute to this feature independently