
Commit b22f1c9

Author: Martin
docs: address review feedback for Arrow introduction guide
Technical improvements:

- Clarify Arrow adoption: native systems (DataFusion, Polars) vs interchange converters (DuckDB, Spark, pandas)
- Add note that read_json expects newline-delimited JSON
- Fix reference link capitalization to match in-text usage

Documentation cleanup:

- Remove unnecessary REFERENCES header from dataframe.md
- Previously discarded prettier changes to autogenerated scalar_functions.md

Addresses feedback from @Jefffrey and @comphead in PR apache#18051
1 parent 8245f5c commit b22f1c9

File tree

2 files changed: +51 −72 lines changed


docs/source/user-guide/arrow-introduction.md

Lines changed: 51 additions & 68 deletions
@@ -24,7 +24,7 @@
 :depth: 2
 ```

-This guide helps DataFusion users understand Arrow and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.
+This guide helps DataFusion users understand [Arrow] and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.

 **Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators.
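The cross-format claim in that paragraph is straightforward to demonstrate. A minimal sketch of such a query (the `orders.csv` / `customers.parquet` files and their columns are hypothetical; assumes a `tokio` runtime):

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Both sources are decoded into Arrow RecordBatches on read,
    // so the join below never sees "CSV rows" or "Parquet pages".
    ctx.register_csv("orders", "orders.csv", CsvReadOptions::new())
        .await?;
    ctx.register_parquet("customers", "customers.parquet", ParquetReadOptions::default())
        .await?;

    ctx.sql(
        "SELECT c.name, COUNT(*) AS order_count
         FROM orders o JOIN customers c ON o.customer_id = c.id
         GROUP BY c.name",
    )
    .await?
    .show()
    .await?;
    Ok(())
}
```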

@@ -66,38 +66,27 @@ Traditional Row Storage: Arrow Columnar Storage:

 ### Why This Matters

+- **Unified Type System**: All data sources (CSV, Parquet, JSON) convert to the same Arrow types, enabling seamless cross-format queries
 - **Vectorized Execution**: Process entire columns at once using SIMD instructions
-- **Better Compression**: Similar values stored together compress more efficiently
-- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data
+- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data into CPU cache
 - **Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead

-DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can exchange data without expensive serialization/deserialization steps.
+Arrow has become the universal standard for in-memory analytics precisely because of its **columnar format**—systems that natively store or process data in Arrow (DataFusion, Polars, InfluxDB 3.0), and runtimes that convert to Arrow for interchange (DuckDB, Spark, pandas), all organize data by column rather than by row. This cross-language, cross-platform adoption of the columnar model enables seamless data flow between systems with minimal conversion overhead.

-## What is a RecordBatch? (And Why Batch?)
-
-A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema.
-
-### Why Not Process Entire Tables?
+Within this columnar design, Arrow's standard unit for packaging data is the **RecordBatch**—the key to making columnar format practical for real-world query engines.

-- **Memory Constraints**: A billion-row table might not fit in RAM
-- **Pipeline Processing**: Start producing results before reading all data
-- **Parallel Execution**: Different threads can process different batches
-
-### Why Not Process Single Rows?
+## What is a RecordBatch? (And Why Batch?)

-- **Lost Vectorization**: Can't use SIMD instructions on single values
-- **Poor Cache Utilization**: Jumping between rows defeats CPU cache optimization
-- **High Overhead**: Managing individual rows has significant bookkeeping costs
+A **[`RecordBatch`]** cleverly combines the benefits of columnar storage with the practical need to process data in chunks. It represents a horizontal slice of a table, but critically, each column _within_ that slice remains a contiguous array.

-### RecordBatches: The Sweet Spot
+Think of it as having two perspectives:

-RecordBatches typically contain thousands of rows—enough to benefit from vectorization but small enough to fit in memory. DataFusion streams these batches through operators, achieving both efficiency and scalability.
+- **Columnar inside**: Each column (`id`, `name`, `age`) is a contiguous array optimized for vectorized operations
+- **Row-oriented outside**: The batch represents a chunk of rows (e.g., rows 1-1000), making it a manageable unit for streaming

-**Key Properties**:
+RecordBatches are **immutable snapshots**—once created, they cannot be modified. Any transformation produces a _new_ RecordBatch, enabling safe parallel processing without locks or coordination overhead.

-- Arrays are immutable (create new batches to modify data)
-- NULL values tracked via efficient validity bitmaps
-- Variable-length data (strings, lists) use offset arrays for efficient access
+This design allows DataFusion to process streams of row-based chunks while gaining maximum performance from the columnar layout. Let's see how this works in practice.

 ## From files to Arrow
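The "immutable snapshot" and "two perspectives" framing in the new text can be seen directly in code. A small sketch (column names are illustrative; uses the `arrow` re-exports that ship with `datafusion`):

```rust
use std::sync::Arc;
use datafusion::arrow::array::{ArrayRef, Int32Array, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::record_batch::RecordBatch;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int32, false),
        Field::new("name", DataType::Utf8, true),
    ]));
    let ids: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3, 4]));
    let names: ArrayRef = Arc::new(StringArray::from(vec![
        Some("a"), None, Some("c"), Some("d"),
    ]));
    let batch = RecordBatch::try_new(schema, vec![ids, names])?;

    // "Row-oriented outside": carve off rows 1..3 as a new batch.
    // "Columnar inside": the slice shares the same immutable column
    // buffers as the original (zero-copy); nothing is mutated.
    let window = batch.slice(1, 2);
    assert_eq!(window.num_rows(), 2);
    assert_eq!(batch.num_rows(), 4); // original batch is unchanged
    Ok(())
}
```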

@@ -115,7 +104,7 @@ async fn main() -> datafusion::error::Result<()> {
     // Pick ONE of these per run (each returns a new DataFrame):
     let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?;
     // let df = ctx.read_parquet("data.parquet", ParquetReadOptions::default()).await?;
-    // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature
+    // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature; expects newline-delimited JSON
     // let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature

     let batches = df
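Since the newline-delimited requirement added here is easy to miss, a sketch of the expected file layout (the `data.ndjson` path and its contents are illustrative; requires the `json` feature and a `tokio` runtime):

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // One complete JSON object per line -- NOT a top-level JSON array:
    //   {"id": 1, "name": "alice"}
    //   {"id": 2, "name": "bob"}
    std::fs::write(
        "data.ndjson",
        "{\"id\": 1, \"name\": \"alice\"}\n{\"id\": 2, \"name\": \"bob\"}\n",
    )?;

    let ctx = SessionContext::new();
    let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?;
    df.show().await?; // schema is inferred from the NDJSON lines
    Ok(())
}
```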
@@ -150,9 +139,7 @@ In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flo

 Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`].

-You'll notice [`Arc`] ([Atomically Reference Counted](https://doc.rust-lang.org/std/sync/struct.Arc.html)) is used frequently—this is how Arrow enables efficient, zero-copy data sharing. Instead of copying data, different parts of the query engine can safely share read-only references to the same underlying memory. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`, representing a reference to any Arrow array type.
-
-Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap.
+Note: You'll see [`Arc`] used frequently in the code—DataFusion's async architecture requires wrapping Arrow arrays in `Arc` (atomically reference-counted pointers) to safely share data across tasks. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`.

 ```rust
 use std::sync::Arc;
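As a tiny illustration of the `Arc` point in the rewritten note (a sketch; the array values are arbitrary):

```rust
use std::sync::Arc;
use datafusion::arrow::array::{Array, ArrayRef, Int32Array};

fn main() {
    // ArrayRef is an alias for Arc<dyn Array>: cloning it copies a
    // pointer and bumps a reference count -- the underlying buffer
    // is shared, never duplicated.
    let a: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let b: ArrayRef = Arc::clone(&a);

    assert_eq!(b.len(), 3);
    assert_eq!(Arc::strong_count(&a), 2); // two handles, one allocation
}
```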
@@ -250,52 +237,48 @@ The DataFrame API handles all the Arrow details under the hood - reading files i

 ## Further reading

-- [Arrow introduction](https://arrow.apache.org/docs/format/Intro.html)
-- [Arrow columnar format (overview)](https://arrow.apache.org/docs/format/Columnar.html)
-- [Arrow IPC format (files and streams)](https://arrow.apache.org/docs/format/IPC.html)
-- [arrow_array::RecordBatch (docs.rs)](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html)
-- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine (Paper)](https://dl.acm.org/doi/10.1145/3626246.3653368)
-
-- DataFusion + Arrow integration (docs.rs):
-  - [datafusion::common::arrow](https://docs.rs/datafusion/latest/datafusion/common/arrow/index.html)
-  - [datafusion::common::arrow::array](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/index.html)
-  - [datafusion::common::arrow::compute](https://docs.rs/datafusion/latest/datafusion/common/arrow/compute/index.html)
-  - [SessionContext::read_csv](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv)
-  - [read_parquet](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet)
-  - [read_json](https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json)
-  - [DataFrame::collect](https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect)
-  - [SendableRecordBatchStream](https://docs.rs/datafusion/latest/datafusion/physical_plan/type.SendableRecordBatchStream.html)
-  - [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html)
-  - [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html)
-- Deep dive (memory layout internals): [ArrayData on docs.rs](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/struct.ArrayData.html)
-- Parquet format and pushdown: [Parquet format](https://parquet.apache.org/docs/file-format/), [Row group filtering / predicate pushdown](https://arrow.apache.org/docs/cpp/parquet.html#row-group-filtering)
-- For DataFusion contributors: [DataFusion Invariants](../contributor-guide/specification/invariants.md) - How DataFusion maintains type safety and consistency with Arrow's dynamic type system
-
-[`arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html
-[`arrayref`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html
-[`field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html
-[`schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html
-[`datatype`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html
-[`int32array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html
-[`stringarray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html
-[`int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32
-[`int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64
-[ extension points]: ../library-user-guide/extensions.md
-[`tableprovider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html
-[custom table providers guide]: ../library-user-guide/custom-table-providers.md
-[user-defined functions (udfs)]: ../library-user-guide/functions/adding-udfs.md
-[custom optimizer rules and operators]: ../library-user-guide/extending-operators.md
+**Arrow Documentation:**
+
+- [Arrow Format Introduction](https://arrow.apache.org/docs/format/Intro.html) - Official Arrow specification
+- [Arrow Columnar Format](https://arrow.apache.org/docs/format/Columnar.html) - In-depth look at the memory layout
+
+**DataFusion API References:**
+
+- [RecordBatch](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) - Core Arrow data structure
+- [DataFrame](https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html) - High-level query interface
+- [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html) - Custom data source trait
+- [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html) - In-memory table implementation
+
+**Academic Paper:**
+
+- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) - Published at SIGMOD 2024
+
+[arrow]: https://arrow.apache.org/docs/index.html
+[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html
+[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html
+[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html
+[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html
+[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html
+[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html
+[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html
+[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32
+[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64
+[extension points]: ../library-user-guide/extensions.md
+[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html
+[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md
+[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md
+[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md
 [`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table
 [`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql
 [`.show()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show
-[`memtable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html
-[`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html
-[`csvreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html
-[`parquetreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html
-[`recordbatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html
+[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html
+[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html
+[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html
+[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html
+[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html
 [`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv
 [`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet
 [`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json
 [`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro
-[`dataframe`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html
+[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html
 [`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect

docs/source/user-guide/dataframe.md

Lines changed: 0 additions & 4 deletions

@@ -111,10 +111,6 @@ async fn main() -> Result<()> {
 }
 ```

----
-
-# REFERENCES
-
 [pandas dataframe]: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html
 [spark dataframe]: https://spark.apache.org/docs/latest/sql-programming-guide.html
 [`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html
