
Commit 259ac6d

Exclude zappend.levels from coverage (3)

Committed Sep 17, 2024 · 1 parent e61af00

File tree

3 files changed: +26 −2 lines
 

CHANGES.md (+16)
@@ -1,3 +1,19 @@
+## Version 0.8.0 (in development)
+
+* Added experimental function `zappend.levels.write_levels()` that generates
+  datasets using the
+  [multi-level dataset format](https://xcube.readthedocs.io/en/latest/mldatasets.html)
+  as specified by
+  [xcube](https://github.com/xcube-dev/xcube).
+  It resembles the `store.write_data(cube, "<name>.levels", ...)` method
+  provided by the xcube filesystem data stores ("file", "s3", "memory", etc.).
+  The zappend version may be used for potentially very large datasets in terms
+  of dimension sizes or for datasets with a very large number of chunks.
+  It is considerably slower than the xcube version (which basically uses
+  `xarray.to_zarr()` for each resolution level), but should run robustly with
+  stable memory consumption.
+  The function requires the `xcube` package to be installed. (#19)
+
 ## Version 0.7.1 (from 2024-05-30)
 
 * The function `zappend.api.zappend()` now returns the number of slices
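
For context, here is a minimal usage sketch of the new function. The diff only documents `append_dim` (defaulting to `"time"`) and that the `xcube` package must be installed; the `source_path` and `target_path` parameter names and the `.levels` output path below are illustrative assumptions, not confirmed by this commit:

```python
# Hedged sketch: parameter names other than `append_dim` are assumed.
from zappend.levels import write_levels  # requires the `xcube` package

write_levels(
    source_path="input/cube.zarr",     # assumed parameter name
    target_path="output/cube.levels",  # assumed parameter name
    append_dim="time",                 # documented default
)
```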

zappend/__init__.py (+1 −1)
@@ -2,4 +2,4 @@
 # Permissions are hereby granted under the terms of the MIT License:
 # https://opensource.org/licenses/MIT.
 
-__version__ = "0.7.1"
+__version__ = "0.8.0.dev0"

zappend/levels.py (+9 −1)
@@ -32,7 +32,15 @@ def write_levels(
     as specified by
     [xcube](https://github.com/xcube-dev/xcube).
 
-    The source dataset is opened and subdivided into dataset slices
+    It resembles the `store.write_data(cube, "<name>.levels", ...)` method
+    provided by the xcube filesystem data stores ("file", "s3", "memory", etc.).
+    The zappend version may be used for potentially very large datasets in terms
+    of dimension sizes or for datasets with a very large number of chunks.
+    It is considerably slower than the xcube version (which basically uses
+    `xarray.to_zarr()` for each resolution level), but should run robustly with
+    stable memory consumption.
+
+    The function opens the source dataset and subdivides it into dataset slices
     along the append dimension given by `append_dim`, which defaults
     to `"time"`. The slice size in the append dimension is one.
     Each slice is downsampled to the number of levels and each slice level
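
The docstring above describes slicing along the append dimension with a slice size of one, then downsampling each slice per resolution level. The sketch below illustrates that scheme; it is not zappend's actual implementation, and the spatial dimension names `x`/`y` and the factor-of-2 subsampling are assumptions:

```python
# Illustrative sketch only; NOT the zappend implementation.
import xarray as xr


def iter_slice_levels(source: xr.Dataset,
                      append_dim: str = "time",
                      num_levels: int = 3):
    """Yield (level, slice) pairs: one size-1 slice per step along
    `append_dim`, subsampled by a factor of 2 per level (x/y assumed)."""
    for i in range(source.sizes[append_dim]):
        # Slice size in the append dimension is one, as in the docstring.
        dataset_slice = source.isel({append_dim: slice(i, i + 1)})
        for level in range(num_levels):
            step = 2 ** level
            # Naive strided subsampling stands in for proper pyramid
            # resampling; level 0 is the full-resolution slice.
            yield level, dataset_slice.isel(
                x=slice(None, None, step),
                y=slice(None, None, step),
            )
```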
