Commit 70d1ed5

Author: Martin (committed)
docs: Enhance Arrow introduction guide with batch size config and improved examples
- Add configuring batch size section with Rust and SQL examples
- Add reference link for datafusion.execution.batch_size configuration
- Improve introduction with clearer Arrow explanation
- Enhance code examples with better imports and error handling
- Add links to arrow2 guide for deeper understanding
- Improve extension points section with ExecutionPlan details
- Reorganize Further Reading section with better categorization
- Hope I haven't overlooked any comments

Addresses feedback from PR apache#18051
1 parent b22f1c9 commit 70d1ed5

File tree

1 file changed: +77 −70 lines changed


docs/source/user-guide/arrow-introduction.md

Lines changed: 77 additions & 70 deletions
@@ -17,40 +17,24 @@
under the License.
-->

-# A Gentle Introduction to Arrow & RecordBatches (for DataFusion users)
+# Introduction to `Arrow` & RecordBatches

```{contents}
:local:
:depth: 2
```

-This guide helps DataFusion users understand [Arrow] and its RecordBatch format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.
+This guide helps DataFusion users understand [Apache Arrow]—a language-independent, in-memory columnar format and development platform for analytics. It defines a standardized columnar representation that enables different systems and languages (e.g., Rust and Python) to share data with zero-copy interchange, avoiding serialization overhead. A core building block is the [`RecordBatch`] format. While you may never need to work with Arrow directly, this knowledge becomes valuable when using DataFusion's extension points or debugging performance issues.

**Why Arrow is central to DataFusion**: Arrow provides the unified type system that makes DataFusion possible. When you query a CSV file, join it with a Parquet file, and aggregate results from JSON—it all works seamlessly because every data source is converted to Arrow's common representation. This unified type system, combined with Arrow's columnar format, enables DataFusion to execute efficient vectorized operations across any combination of data sources while benefiting from zero-copy data sharing between query operators.
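For instance, a minimal sketch of such a cross-format query (the file names `sales.csv` and `customers.parquet` and their column names are hypothetical) might look like this:

```rust
use datafusion::error::Result;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Both sources are decoded into Arrow RecordBatches internally,
    // so they can be joined without any manual format conversion.
    ctx.register_csv("sales", "sales.csv", CsvReadOptions::new()).await?;
    ctx.register_parquet("customers", "customers.parquet", ParquetReadOptions::default())
        .await?;

    let df = ctx
        .sql(
            "SELECT c.name, SUM(s.amount) AS total \
             FROM sales s JOIN customers c ON s.customer_id = c.id \
             GROUP BY c.name",
        )
        .await?;
    df.show().await?;
    Ok(())
}
```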

## Why Columnar? The Arrow Advantage

-Apache Arrow is an open **specification** that defines how analytical data should be organized in memory. Think of it as a blueprint that different systems agree to follow, not a database or programming language.
+Apache Arrow is an open **specification** that defines a common way to organize analytical data in memory. Think of it as a set of best practices that different systems agree to follow, not a database or programming language.

### Row-oriented vs Columnar Layout

-Traditional databases often store data row-by-row:
-
-```
-Row 1: [id: 1, name: "Alice", age: 30]
-Row 2: [id: 2, name: "Bob", age: 25]
-Row 3: [id: 3, name: "Carol", age: 35]
-```
-
-Arrow organizes the same data by column:
-
-```
-Column "id": [1, 2, 3]
-Column "name": ["Alice", "Bob", "Carol"]
-Column "age": [30, 25, 35]
-```
-
-Visual comparison:
+Quick visual: row-major (left) vs Arrow's columnar layout (right). For a deeper primer, see the [arrow2 guide].

```
Traditional Row Storage:           Arrow Columnar Storage:
@@ -71,23 +55,44 @@ Traditional Row Storage: Arrow Columnar Storage:
- **Cache Efficiency**: Scanning specific columns doesn't load unnecessary data into CPU cache
- **Zero-Copy Data Sharing**: Systems can share Arrow data without conversion overhead

-Arrow has become the universal standard for in-memory analytics precisely because of its **columnar format**—systems that natively store or process data in Arrow (DataFusion, Polars, InfluxDB 3.0), and runtimes that convert to Arrow for interchange (DuckDB, Spark, pandas), all organize data by column rather than by row. This cross-language, cross-platform adoption of the columnar model enables seamless data flow between systems with minimal conversion overhead.
+Arrow is widely adopted for in-memory analytics precisely because of its **columnar format**—systems that natively store or process data in Arrow (DataFusion, Polars, InfluxDB 3.0), and runtimes that convert to Arrow for interchange (DuckDB, Spark, pandas), all organize data by column rather than by row. This cross-language, cross-platform adoption of the columnar model enables seamless data flow between systems with minimal conversion overhead.

Within this columnar design, Arrow's standard unit for packaging data is the **RecordBatch**—the key to making columnar format practical for real-world query engines.

## What is a RecordBatch? (And Why Batch?)

-A **[`RecordBatch`]** cleverly combines the benefits of columnar storage with the practical need to process data in chunks. It represents a horizontal slice of a table, but critically, each column _within_ that slice remains a contiguous array.
+A **[`RecordBatch`]** represents a horizontal slice of a table—a collection of equal-length columnar arrays that conform to a defined schema. Each column within the slice is a contiguous Arrow array, and all columns have the same number of rows (length). This chunked, immutable unit enables efficient streaming and parallel execution.

Think of it as having two perspectives:

- **Columnar inside**: Each column (`id`, `name`, `age`) is a contiguous array optimized for vectorized operations
-- **Row-oriented outside**: The batch represents a chunk of rows (e.g., rows 1-1000), making it a manageable unit for streaming
+- **Row-chunked externally**: The batch represents a chunk of rows (e.g., rows 1-1000), making it a manageable unit for streaming

RecordBatches are **immutable snapshots**—once created, they cannot be modified. Any transformation produces a _new_ RecordBatch, enabling safe parallel processing without locks or coordination overhead.

This design allows DataFusion to process streams of row-based chunks while gaining maximum performance from the columnar layout. Let's see how this works in practice.
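As a rough sketch of that streaming model (assuming the `futures` crate for `StreamExt` and a local `data.csv`), a query's RecordBatches can be pulled one at a time instead of collected all at once:

```rust
use datafusion::error::Result;
use datafusion::prelude::*;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();
    let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?;

    // Execute the plan and consume RecordBatches as they are produced.
    let mut stream = df.execute_stream().await?;
    while let Some(batch) = stream.next().await {
        let batch = batch?; // each stream item is a Result<RecordBatch>
        println!("got a batch with {} rows", batch.num_rows());
    }
    Ok(())
}
```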

+### Configuring Batch Size
+
+DataFusion uses a default batch size of 8192 rows per RecordBatch, balancing memory efficiency with vectorization benefits. You can adjust this via the session configuration or the [`datafusion.execution.batch_size`] configuration setting:
+
+```rust
+use datafusion::execution::config::SessionConfig;
+use datafusion::prelude::*;
+
+let config = SessionConfig::new().with_batch_size(8192); // default value
+let ctx = SessionContext::with_config(config);
+```
+
+You can also query and modify this setting using SQL:
+
+```sql
+SHOW datafusion.execution.batch_size;
+SET datafusion.execution.batch_size TO 1024;
+```
+
+See [configuration settings] for more details.
+

## From files to Arrow

When you call [`read_csv`], [`read_parquet`], [`read_json`] or [`read_avro`], DataFusion decodes those formats into Arrow arrays and streams them to operators as RecordBatches.
@@ -104,7 +109,7 @@ async fn main() -> datafusion::error::Result<()> {
    // Pick ONE of these per run (each returns a new DataFrame):
    let df = ctx.read_csv("data.csv", CsvReadOptions::new()).await?;
    // let df = ctx.read_parquet("data.parquet", ParquetReadOptions::default()).await?;
-    // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature; expects newline-delimited JSON
+    // let df = ctx.read_json("data.ndjson", NdJsonReadOptions::default()).await?; // requires "json" feature; expects newline-delimited JSON (NDJSON)
    // let df = ctx.read_avro("data.avro", AvroReadOptions::default()).await?; // requires "avro" feature

    let batches = df
@@ -139,14 +144,15 @@ In this pipeline, [`RecordBatch`]es are the "packages" of columnar data that flo

Sometimes you need to create Arrow data programmatically rather than reading from files. This example shows the core building blocks: creating typed arrays (like [`Int32Array`] for numbers), defining a [`Schema`] that describes your columns, and assembling them into a [`RecordBatch`].

-Note: You'll see [`Arc`] used frequently in the code—DataFusion's async architecture requires wrapping Arrow arrays in `Arc` (atomically reference-counted pointers) to safely share data across tasks. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`.
+Note: You'll see [`Arc`] used frequently in the code—Arrow arrays are wrapped in `Arc` (atomically reference-counted pointers) to enable cheap, thread-safe sharing across operators and tasks. [`ArrayRef`] is simply a type alias for `Arc<dyn Array>`.

```rust
use std::sync::Arc;
+use arrow_schema::ArrowError;
use arrow_array::{ArrayRef, Int32Array, StringArray, RecordBatch};
use arrow_schema::{DataType, Field, Schema};

-fn make_batch() -> arrow_schema::Result<RecordBatch> {
+fn make_batch() -> Result<RecordBatch, ArrowError> {
    let ids = Int32Array::from(vec![1, 2, 3]);
    let names = StringArray::from(vec![Some("alice"), None, Some("carol")]);

@@ -205,8 +211,8 @@ When working with Arrow and RecordBatches, watch out for these common issues:

- **Schema consistency**: All batches in a stream must share the exact same [`Schema`]. For example, you can't have one batch where a column is [`Int32`] and the next where it's [`Int64`], even if the values would fit
- **Immutability**: Arrays are immutable—to "modify" data, you must build new arrays or new RecordBatches. For instance, to change a value in an array, you'd create a new array with the updated value
-- **Buffer management**: Variable-length types (UTF-8, binary, lists) use offsets + values arrays internally. Avoid manual buffer slicing unless you understand Arrow's internal invariants—use Arrow's built-in compute functions instead
-- **Type mismatches**: Mixed input types across files may require explicit casts. For example, a string column `"123"` from a CSV file won't automatically join with an integer column `123` from a Parquet file—you'll need to cast one to match the other
+- **Row-by-row processing**: Avoid iterating over arrays element by element where possible; use Arrow's built-in compute functions instead
+- **Type mismatches**: Mixed input types across files may require explicit casts. For example, a string column `"123"` from a CSV file won't automatically join with an integer column `123` from a Parquet file—you'll need to cast one to match the other. Use Arrow's [`cast`] kernel where appropriate (see the sketch after this list)
- **Batch size assumptions**: Don't assume a particular batch size; always iterate until the stream ends. One file might produce 8192-row batches while another produces 1024-row batches
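A minimal sketch of such a cast (using the `arrow_cast` crate directly; the same kernel is re-exported as `arrow::compute::cast`, and the values here are made up for illustration):

```rust
use std::sync::Arc;

use arrow_array::{Array, ArrayRef, StringArray};
use arrow_cast::cast;
use arrow_schema::{ArrowError, DataType};

fn align_types() -> Result<ArrayRef, ArrowError> {
    // A column that arrived as strings (e.g., from a CSV file)...
    let csv_ids: ArrayRef = Arc::new(StringArray::from(vec!["1", "2", "123"]));

    // ...cast to Int64 so it can be compared or joined with an integer column.
    let as_ints = cast(&csv_ids, &DataType::Int64)?;
    assert_eq!(as_ints.data_type(), &DataType::Int64);
    Ok(as_ints)
}
```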

## When Arrow knowledge is needed (Extension Points)
@@ -219,66 +225,67 @@ These APIs are where you can plug your own code into the engine, and they often

- **[User-Defined Functions (UDFs)]**: If you need to perform a custom transformation on your data that isn't built into DataFusion, you can write a UDF. Your function will receive data as Arrow arrays (inside a [`RecordBatch`]) and must produce an Arrow array as its output.

-- **[Custom Optimizer Rules and Operators]**: For advanced use cases, you can even add your own rules to the query optimizer or implement entirely new physical operators (like a special type of join). These also operate on the Arrow-based query plans.
+- **[Custom Optimizer Rules and Physical Operators]**: For advanced use cases, you can add your own optimizer rules or implement entirely new physical operators by implementing the [`ExecutionPlan`] trait. While optimizer rules primarily reason about schemas and plan properties, ExecutionPlan implementations directly produce and consume streams of [`RecordBatch`]es during query execution.

In short, knowing Arrow is key to unlocking the full power of DataFusion's modular and extensible architecture.

-## Next Steps: Working with DataFrames
-
-Now that you understand Arrow's RecordBatch format, you're ready to work with DataFusion's high-level APIs. The [DataFrame API](dataframe.md) provides a familiar, ergonomic interface for building queries without needing to think about Arrow internals most of the time.
-
-The DataFrame API handles all the Arrow details under the hood - reading files into RecordBatches, applying transformations, and producing results. You only need to drop down to the Arrow level when implementing custom data sources, UDFs, or other extension points.
-
-**Recommended reading order:**
-
-1. [DataFrame API](dataframe.md) - High-level query building interface
-2. [Library User Guide: DataFrame API](../library-user-guide/using-the-dataframe-api.md) - Detailed examples and patterns
-3. [Custom Table Providers](../library-user-guide/custom-table-providers.md) - When you need Arrow knowledge
-
## Further reading

**Arrow Documentation:**

-- [Arrow Format Introduction](https://arrow.apache.org/docs/format/Intro.html) - Official Arrow specification
-- [Arrow Columnar Format](https://arrow.apache.org/docs/format/Columnar.html) - In-depth look at the memory layout
+- [Arrow Format Introduction](https://arrow.apache.org/docs/format/Intro.html) - Understand the Arrow specification and why it enables zero-copy data sharing
+- [Arrow Columnar Format](https://arrow.apache.org/docs/format/Columnar.html) - Deep dive into memory layout for performance optimization
+- [Arrow Rust Documentation](https://docs.rs/arrow/latest/arrow/) - Complete API reference for the Rust implementation
+
+**DataFusion Extension Guides:**
+
+- [Custom Table Providers](../library-user-guide/custom-table-providers.md) - Build custom data sources that stream RecordBatches
+- [Adding UDFs](../library-user-guide/functions/adding-udfs.md) - Create efficient user-defined functions working with Arrow arrays
+- [Extending Operators](../library-user-guide/extending-operators.md) - Implement custom ExecutionPlan operators for specialized processing

-**DataFusion API References:**
+**Key API References:**

-- [RecordBatch](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) - Core Arrow data structure
-- [DataFrame](https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html) - High-level query interface
-- [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html) - Custom data source trait
-- [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html) - In-memory table implementation
+- [RecordBatch](https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html) - The fundamental data structure for streaming columnar data
+- [ExecutionPlan](https://docs.rs/datafusion/latest/datafusion/physical_plan/trait.ExecutionPlan.html) - Trait for implementing custom physical operators
+- [TableProvider](https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html) - Interface for custom data sources
+- [ScalarUDF](https://docs.rs/datafusion/latest/datafusion/logical_expr/struct.ScalarUDF.html) - Building scalar user-defined functions
+- [MemTable](https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html) - Working with in-memory Arrow data

**Academic Paper:**

-- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) - Published at SIGMOD 2024
-
-[arrow]: https://arrow.apache.org/docs/index.html
-[`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html
-[`ArrayRef`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html
-[`Field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html
-[`Schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html
-[`DataType`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html
-[`Int32Array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html
-[`StringArray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html
-[`Int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32
-[`Int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64
+- [Apache Arrow DataFusion: A Fast, Embeddable, Modular Analytic Query Engine](https://dl.acm.org/doi/10.1145/3626246.3653368) - Architecture and design decisions (SIGMOD 2024)
+
+[apache arrow]: https://arrow.apache.org/docs/index.html
+[`arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html
+[`arrayref`]: https://docs.rs/arrow-array/latest/arrow_array/array/type.ArrayRef.html
+[`cast`]: https://docs.rs/arrow/latest/arrow/compute/fn.cast.html
+[`field`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Field.html
+[`schema`]: https://docs.rs/arrow-schema/latest/arrow_schema/struct.Schema.html
+[`datatype`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html
+[`int32array`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.Int32Array.html
+[`stringarray`]: https://docs.rs/arrow-array/latest/arrow_array/array/struct.StringArray.html
+[`int32`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int32
+[`int64`]: https://docs.rs/arrow-schema/latest/arrow_schema/enum.DataType.html#variant.Int64
[extension points]: ../library-user-guide/extensions.md
-[`TableProvider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html
-[Custom Table Providers guide]: ../library-user-guide/custom-table-providers.md
-[User-Defined Functions (UDFs)]: ../library-user-guide/functions/adding-udfs.md
-[Custom Optimizer Rules and Operators]: ../library-user-guide/extending-operators.md
+[`tableprovider`]: https://docs.rs/datafusion/latest/datafusion/datasource/trait.TableProvider.html
+[custom table providers guide]: ../library-user-guide/custom-table-providers.md
+[user-defined functions (udfs)]: ../library-user-guide/functions/adding-udfs.md
+[custom optimizer rules and physical operators]: ../library-user-guide/extending-operators.md
+[`ExecutionPlan`]: https://docs.rs/datafusion/latest/datafusion/physical_plan/trait.ExecutionPlan.html
[`.register_table()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.register_table
[`.sql()`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.sql
[`.show()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.show
-[`MemTable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html
-[`SessionContext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html
-[`CsvReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html
-[`ParquetReadOptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html
-[`RecordBatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html
+[`memtable`]: https://docs.rs/datafusion/latest/datafusion/datasource/struct.MemTable.html
+[`sessioncontext`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html
+[`csvreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.CsvReadOptions.html
+[`parquetreadoptions`]: https://docs.rs/datafusion/latest/datafusion/execution/options/struct.ParquetReadOptions.html
+[`recordbatch`]: https://docs.rs/arrow-array/latest/arrow_array/struct.RecordBatch.html
[`read_csv`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_csv
[`read_parquet`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_parquet
[`read_json`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_json
[`read_avro`]: https://docs.rs/datafusion/latest/datafusion/execution/context/struct.SessionContext.html#method.read_avro
-[`DataFrame`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html
+[`dataframe`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html
[`.collect()`]: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.collect
+[arrow2 guide]: https://jorgecarleitao.github.io/arrow2/main/guide/arrow.html#what-is-apache-arrow
+[configuration settings]: configs.md
+[`datafusion.execution.batch_size`]: configs.md#setting-configuration-options
