docs/source/user-guide/arrow-introduction.md
51 additions & 68 deletions
@@ -24,7 +24,7 @@
:depth: 2
```
-This guide helps DataFusion users understand Arrow and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.
+This guide helps DataFusion users understand [Arrow] and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.
**Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators.
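To make the cross-format claim concrete, here is a minimal sketch (the file names, table names, and columns are hypothetical, not taken from this guide) of a single query joining a CSV source with a Parquet source. Both are exposed through the same `SessionContext`, and by the time the join runs everything is Arrow `RecordBatch`es, so the original file formats no longer matter.

```rust
use datafusion::error::Result;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Hypothetical files: a CSV of users and a Parquet file of orders.
    // Each source is converted to Arrow RecordBatches as it is scanned.
    ctx.register_csv("users", "users.csv", CsvReadOptions::new()).await?;
    ctx.register_parquet("orders", "orders.parquet", ParquetReadOptions::default())
        .await?;

    // The join operates on Arrow data; the input formats are irrelevant here.
    let df = ctx
        .sql(
            "SELECT u.name, SUM(o.amount) AS total \
             FROM users u JOIN orders o ON u.id = o.user_id \
             GROUP BY u.name",
        )
        .await?;
    df.show().await?;
    Ok(())
}
```

The same pattern extends to JSON or any other registered source: once scanned, every input is just Arrow data.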
@@ -66,38 +66,27 @@ Traditional Row Storage: Arrow Columnar Storage:
### Why This Matters
-**Unified Type System**: All data sources (CSV, Parquet, JSON) convert to the same Arrow types, enabling seamless cross-format queries
-**Vectorized Execution**: Process entire columns at once using SIMD instructions
-**Better Compression**: Similar values stored together compress more efficiently
-**Cache Efficiency**: Scanning specific columns doesn't load unnecessary data
-**Cache Efficiency**: Scanning specific columns doesn't load unnecessary data into CPU cache
-**Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead
-DataFusion, DuckDB, Polars, and Pandas all speak Arrow natively—they can exchange data without expensive serialization/deserialization steps.
+Arrow has become the universal standard for in-memory analytics precisely because of its **columnar format**—systems that natively store or process data in Arrow (DataFusion, Polars, InfluxDB 3.0), and runtimes that convert to Arrow for interchange (DuckDB, Spark, pandas), all organize data by column rather than by row. This cross-language, cross-platform adoption of the columnar model enables seamless data flow between systems with minimal conversion overhead.
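To give the "process entire columns at once" idea a concrete shape, the sketch below (using the arrow crate that DataFusion re-exports; the values are made up) sums an entire column with a single vectorized kernel call, and the null is handled through the validity bitmap rather than per-row branching in user code.

```rust
use datafusion::arrow::array::Int32Array;
use datafusion::arrow::compute;

fn main() {
    // One column of data: a contiguous Int32 buffer plus a validity bitmap for the null.
    let ages = Int32Array::from(vec![Some(31), Some(25), None, Some(40)]);

    // A single kernel call processes the whole column; the null is skipped
    // via the validity bitmap instead of per-value branching in user code.
    let total = compute::sum(&ages);
    assert_eq!(total, Some(96));
    println!("sum of ages = {total:?}");
}
```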
-## What is a RecordBatch? (And Why Batch?)
-A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays sharing the same schema.
-### Why Not Process Entire Tables?
+Within this columnar design, Arrow's standard unit for packaging data is the **RecordBatch**—the key to making columnar format practical for real-world query engines.
-**Memory Constraints**: A billion-row table might not fit in RAM
-**Pipeline Processing**: Start producing results before reading all data
-**Parallel Execution**: Different threads can process different batches
-### Why Not Process Single Rows?
+## What is a RecordBatch? (And Why Batch?)
-**Lost Vectorization**: Can't use SIMD instructions on single values
-**Poor Cache Utilization**: Jumping between rows defeats CPU cache optimization
-**High Overhead**: Managing individual rows has significant bookkeeping costs
+A **[`RecordBatch`]** cleverly combines the benefits of columnar storage with the practical need to process data in chunks. It represents a horizontal slice of a table, but critically, each column _within_ that slice remains a contiguous array.
-### RecordBatches: The Sweet Spot
+Think of it as having two perspectives:
-RecordBatches typically contain thousands of rows—enough to benefit from vectorization but small enough to fit in memory. DataFusion streams these batches through operators, achieving both efficiency and scalability.
+- **Columnar inside**: Each column (`id`, `name`, `age`) is a contiguous array optimized for vectorized operations
+- **Row-oriented outside**: The batch represents a chunk of rows (e.g., rows 1-1000), making it a manageable unit for streaming
-**Key Properties**:
+RecordBatches are **immutable snapshots**—once created, they cannot be modified. Any transformation produces a _new_ RecordBatch, enabling safe parallel processing without locks or coordination overhead.
-- Arrays are immutable (create new batches to modify data)
-- NULL values tracked via efficient validity bitmaps
-- Variable-length data (strings, lists) use offset arrays for efficient access
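As a rough sketch of the two perspectives and of immutability (hand-rolled with made-up values; the guide's own construction example appears further below): the batch is a chunk of three rows on the outside, each column inside is one contiguous array, and "modifying" it really means deriving a new batch, here via the zero-copy `slice`, while the original stays intact.

```rust
use std::sync::Arc;

use datafusion::arrow::array::{Int32Array, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::error::ArrowError;
use datafusion::arrow::record_batch::RecordBatch;

fn main() -> Result<(), ArrowError> {
    let schema = Arc::new(Schema::new(vec![
        Field::new("id", DataType::Int32, false),
        Field::new("name", DataType::Utf8, true),
    ]));

    // Row-oriented outside: this batch is a chunk of three rows.
    // Columnar inside: each column is one contiguous array.
    let batch = RecordBatch::try_new(
        schema,
        vec![
            Arc::new(Int32Array::from(vec![1, 2, 3])),
            Arc::new(StringArray::from(vec![Some("alice"), None, Some("carol")])),
        ],
    )?;
    assert_eq!(batch.num_rows(), 3);
    assert_eq!(batch.num_columns(), 2);

    // The batch is immutable: "changing" it means producing a new batch.
    // slice() is zero-copy; the new batch shares the same underlying buffers.
    let first_two = batch.slice(0, 2);
    assert_eq!(first_two.num_rows(), 2);
    assert_eq!(batch.num_rows(), 3); // the original is untouched
    Ok(())
}
```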
+This design allows DataFusion to process streams of row-based chunks while gaining maximum performance from the columnar layout. Let's see how this works in practice.
// let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature
let batches = df
@@ -150,9 +139,7 @@ In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flo
Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`].
-You'll notice [`Arc`] ([Atomically Reference Counted](https://doc.rust-lang.org/std/sync/struct.Arc.html)) is used frequently—this is how Arrow enables efficient, zero-copy data sharing. Instead of copying data, different parts of the query engine can safely share read-only references to the same underlying memory. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`, representing a reference to any Arrow array type.
-Notice how nullable columns can contain `None` values, tracked efficiently by Arrow's internal validity bitmap.
+Note: You'll see [`Arc`] used frequently in the code—DataFusion's async architecture requires wrapping Arrow arrays in `Arc` (atomically reference-counted pointers) to safely share data across tasks. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`.
```rust
use std::sync::Arc;
@@ -250,52 +237,48 @@ The DataFrame API handles all the Arrow details under the hood - reading files i
- Deep dive (memory layout internals): [ArrayData on docs.rs](https://docs.rs/datafusion/latest/datafusion/common/arrow/array/struct.ArrayData.html)
- Parquet format and pushdown: [Parquet format](https://parquet.apache.org/docs/file-format/), [Row group filtering / predicate pushdown](https://arrow.apache.org/docs/cpp/parquet.html#row-group-filtering)
- For DataFusion contributors: [DataFusion Invariants](../contributor-guide/specification/invariants.md) - How DataFusion maintains type safety and consistency with Arrow's dynamic type system
- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) - Published at SIGMOD 2024