Docs: Add rewrite-table-path in spark procedure #12115

docs/docs/spark-procedures.md (88 additions, 1 deletion)
Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
```

## Table Replication

The `rewrite_table_path` procedure assists in moving or copying an Iceberg table from one location to another.

### `rewrite_table_path`

This procedure writes a new copy of the Iceberg table's metadata files where every path has had its prefix replaced.
The newly rewritten metadata files enable moving or copying an Iceberg table to a new location.
After copying both metadata and data files to the desired location, the replicated Iceberg
table will appear identical to the source table, including snapshot history, schema, and partition specs.

!!! info
    This procedure only creates metadata for an existing Iceberg table, rewritten for a new location. The produced file list (`file_list_location` in the output) can be used to copy the rewritten metadata files and the data files to the new location.
    Copying or moving the metadata and data files to the new location is not part of this procedure.


| Argument Name      | Required? | Default                                         | Type   | Description                                                               |
|--------------------|-----------|------------------------------------------------|--------|-------------------------------------------------------------------------|
| `table` | ✔️ | | string | Name of the table |
| `source_prefix` | ✔️ | | string | The existing prefix to be replaced |
| `target_prefix` | ✔️ | | string | The replacement prefix for `source_prefix` |
| `start_version` | | first metadata.json in table's metadata log | string | The name or path to the chronologically first metadata.json to rewrite. |
| `end_version`      |           | latest metadata.json                            | string | The name or path to the chronologically last metadata.json to rewrite.  |
| `staging_location` |           | new directory under the table's metadata directory | string | The output location for the rewritten metadata files                 |


#### Modes of operation

- Full Rewrite:

By default, the procedure operates in full rewrite mode, rewriting all reachable metadata files. This includes metadata.json, manifest lists, manifests, and position delete files.

- Incremental Rewrite:

If `start_version` is provided, the procedure will only rewrite metadata files created between `start_version` and `end_version`. `end_version` defaults to the latest metadata location of the table.

#### Output

| Output Name | Type | Description |
|----------------------|--------|-------------------------------------------------------------------------------------|
| `latest_version` | string | Name of the latest metadata file rewritten by this procedure |
| `file_list_location` | string | Path to a file containing a listing of comma-separated source and destination paths |

Example file list content:

```csv
sourcepath/datafile1.parquet,targetpath/datafile1.parquet
sourcepath/datafile2.parquet,targetpath/datafile2.parquet
stagingpath/manifest.avro,targetpath/manifest.avro
```

#### Examples

Full rewrite of the paths of table `my_table` from a source location in HDFS to a target location in an S3 bucket.
This produces a new set of metadata files using the `s3a` prefix, written to the default staging location under the table's metadata directory.

```sql
CALL catalog_name.system.rewrite_table_path(
  table => 'db.my_table',
  source_prefix => 'hdfs://nn:8020/path/to/source_table',
  target_prefix => 's3a://bucket/prefix/db.db/my_table'
);
```

Incremental rewrite of a table's path from a source location to a target location between metadata versions
`v2.metadata.json` and `v20.metadata.json`, with files written to an explicit staging location.

```sql
CALL catalog_name.system.rewrite_table_path(
  table => 'db.my_table',
  source_prefix => 's3a://bucketOne/prefix/db.db/my_table',
  target_prefix => 's3a://bucketTwo/prefix/db.db/my_table',
  start_version => 'v2.metadata.json',
  end_version => 'v20.metadata.json',
  staging_location => 's3a://bucketStaging/my_table'
);
```

Once the rewrite completes, third-party tools
(e.g. [Distcp](https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html)) can be used to copy the newly created
metadata files and data files to the target location.
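
When a bulk copy tool is not convenient, the file list can also be consumed directly. Below is a minimal PySpark sketch, not part of the procedure, that runs the full-rewrite example from above and copies each listed file. It assumes the Spark driver's Hadoop configuration can reach both the source and target filesystems, and it uses Spark's internal `_jvm`/`_jsc` accessors to reach the Hadoop `FileSystem` API.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Run the rewrite (same arguments as the full-rewrite example above) and
# capture its single output row: latest_version, file_list_location.
result = spark.sql("""
    CALL catalog_name.system.rewrite_table_path(
      table => 'db.my_table',
      source_prefix => 'hdfs://nn:8020/path/to/source_table',
      target_prefix => 's3a://bucket/prefix/db.db/my_table'
    )
""").collect()[0]

# The file list is a plain text file with one "source,target" pair per line.
pairs = [row.value.split(",") for row in
         spark.read.text(result.file_list_location).collect() if row.value]

# Copy each file through the Hadoop FileSystem API exposed by the JVM gateway.
conf = spark._jsc.hadoopConfiguration()
Path = spark._jvm.org.apache.hadoop.fs.Path
FileUtil = spark._jvm.org.apache.hadoop.fs.FileUtil

for source, target in pairs:
    src, dst = Path(source), Path(target)
    FileUtil.copy(src.getFileSystem(conf), src,
                  dst.getFileSystem(conf), dst,
                  False, conf)  # deleteSource=False: copy, do not move
```

For large tables, a distributed copy such as Distcp will be much faster than this sequential loop on the driver.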

Lastly, the [register_table](#register_table) procedure can be used to register the copied table in the target location with a catalog.
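
A hedged sketch of that final step from the same PySpark session is shown below. The table name and metadata file path are illustrative: register the rewritten `latest_version` file as it exists under the target prefix (the values below follow the incremental example above).

```python
# Register the copied table with the catalog. The metadata file is the rewritten
# `latest_version` located under the target prefix; the table name and path are
# illustrative only.
spark.sql("""
    CALL catalog_name.system.register_table(
      table => 'db.my_table_copy',
      metadata_file => 's3a://bucketTwo/prefix/db.db/my_table/metadata/v20.metadata.json'
    )
""")
```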

!!! warning
    Iceberg tables with statistics files are not currently supported for path rewrite.