Releases: xcube-dev/xcube
0.13.0.dev6
Changes in 0.13.0.dev6
Changes in 0.13.0.dev5
Other
- xcube server Python scripts can now import modules from
  the script's directory.
- Loading of dynamic cubes is now logged.
0.13.0.dev5
Changes in 0.13.0.dev5
Intermediate changes
- xcube server Python scripts can now import modules from
  the script's directory.
- Loading of dynamic cubes is now logged.
Changes in 0.13.0.dev4
- `xcube serve` correctly resolves relative paths to datasets (#758)
Changes in 0.13.0.dev3
- A new function `compute_tiles()` has been refactored out from
  function `xcube.core.tile.compute_rgba_tile()`.
- Added method `get_level_for_resolution(xy_res)` to
  abstract base class `xcube.core.mldataset.MultiLevelDataset`.
- Removed outdated example resources from `examples/serve/demo`.
- Account for different spatial resolutions in x and y in
  `xcube.core.geom.get_dataset_bounds()`.
- Make code robust against 0-size coordinates in
  `xcube.core.update._update_dataset_attrs()`.
Changes in 0.13.0.dev2
Intermediate changes
- Fixed unit test w.r.t. change in 0.13.0.dev1
- xcube now tries to prevent indexing timezone-naive variables with
  timezone-aware indexers, or vice versa.
Changes in 0.13.0.dev1
Intermediate changes
- Include package data `xcube/webapi/meta/res/openapi.html`.
Changes in 0.13.0.dev0
Enhancements
- xcube Server has been rewritten almost from scratch.

- Introduced a new endpoint `${server_url}/s3` that emulates
  an AWS S3 object storage for the published datasets. (#717)
  The `bucket` name can be either:

  - `s3://datasets` - publishes all datasets in Zarr format.
  - `s3://pyramids` - publishes all datasets in a multi-level `levels`
    format (multi-resolution N-D images)
    that comprises level datasets in Zarr format.

  Datasets published through the S3 API are slightly
  renamed for clarity. For bucket `s3://pyramids`:

  - if a dataset identifier has suffix `.levels`, the identifier remains;
  - if a dataset identifier has suffix `.zarr`, it will be replaced by
    `.levels` only if such a dataset doesn't exist;
  - otherwise, the suffix `.levels` is appended to the identifier.

  For bucket `s3://datasets` the opposite is true:

  - if a dataset identifier has suffix `.zarr`, the identifier remains;
  - if a dataset identifier has suffix `.levels`, it will be replaced by
    `.zarr` only if such a dataset doesn't exist;
  - otherwise, the suffix `.zarr` is appended to the identifier.

  With the new S3 endpoints in place, xcube Server instances can be used
  as xcube data stores as follows:

  ```python
  store = new_data_store(
      "s3",
      root="datasets",  # bucket "datasets", use also "pyramids"
      max_depth=2,  # optional, but we may have nested datasets
      storage_options=dict(
          anon=True,
          client_kwargs=dict(
              endpoint_url='http://localhost:8080/s3'
          )
      )
  )
  ```
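
  To illustrate the renaming rule for the `pyramids` bucket described above,
  here is a hypothetical sketch (illustrative only, not the server's actual code):

  ```python
  def pyramids_data_id(data_id: str, existing_ids: set) -> str:
      """Illustrative only: map a dataset identifier to its 'pyramids' bucket name."""
      if data_id.endswith(".levels"):
          return data_id
      if data_id.endswith(".zarr"):
          candidate = data_id[: -len(".zarr")] + ".levels"
          if candidate not in existing_ids:
              return candidate
      return data_id + ".levels"

  print(pyramids_data_id("demo.zarr", set()))            # 'demo.levels'
  print(pyramids_data_id("demo.zarr", {"demo.levels"}))  # 'demo.zarr.levels'
  ```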
- The limited `s3bucket` endpoints are no longer available and are
  replaced by `s3` endpoints.

- The `--show` option of `xcube serve` is no longer available. (#750)
  We may reintroduce it, but then with a packaged build of
  xcube Viewer that matches the current xcube Server version.
- xcube Server's colormap management has been improved in several ways:

  - Colormaps are no longer managed globally. E.g., on server configuration
    change, new custom colormaps are reloaded from files.
  - Colormaps are loaded dynamically from the underlying
    matplotlib and cmocean registries, and custom SNAP color palette files.
    That means the latest matplotlib colormaps are now always available. (#687)
  - Colormaps can now be reversed (name suffix `"_r"`),
    can have alpha blending (name suffix `"_alpha"`),
    or both (name suffix `"_r_alpha"`).
  - Loading of custom colormaps from SNAP `*.cpd` files has been rewritten.
    Now also the `isLogScaled` property of the colormap is recognized. (#661)
  - The module `xcube.util.cmaps` has been redesigned and now offers
    three new classes for colormap management:
    - `Colormap` - a colormap
    - `ColormapCategory` - represents a colormap category
    - `ColormapRegistry` - manages colormaps and their categories
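
  As an illustration of the name-suffix convention only (this helper is
  hypothetical and not part of `xcube.util.cmaps`):

  ```python
  # Hypothetical helper: split a colormap name such as "viridis_r_alpha"
  # into its base name and the documented suffix flags.
  def parse_cmap_name(name: str):
      alpha = name.endswith("_alpha")
      if alpha:
          name = name[: -len("_alpha")]
      reverse = name.endswith("_r")
      if reverse:
          name = name[: -len("_r")]
      return name, reverse, alpha

  print(parse_cmap_name("viridis_r_alpha"))  # ('viridis', True, True)
  ```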
Other
- Deprecated CLI `xcube tile` has been removed.

- Deprecated modules, classes, methods, and functions
  have finally been removed:
  - `xcube.core.geom.get_geometry_mask()`
  - `xcube.core.mldataset.FileStorageMultiLevelDataset`
  - `xcube.core.mldataset.open_ml_dataset()`
  - `xcube.core.mldataset.open_ml_dataset_from_local_fs()`
  - `xcube.core.mldataset.open_ml_dataset_from_object_storage()`
  - `xcube.core.subsampling.get_dataset_subsampling_slices()`
  - `xcube.core.tiledimage`
  - `xcube.core.tilegrid`

- The following classes, methods, and functions have been deprecated:
  - `xcube.core.xarray.DatasetAccessor.levels()`
  - `xcube.util.cmaps.get_cmap()`
  - `xcube.util.cmaps.get_cmaps()`

- Fixed problem with `xcube gen` raising `FileNotFoundError`
  with Zarr >= 2.13.
Full Changelog: v0.13.0.dev4...v0.13.0.dev5
0.13.0.dev4
Changes in 0.13.0.dev4
Fixes
- xcube server Python scripts can now import modules from
  the script's directory.
- `xcube serve` correctly resolves relative paths to datasets (#758)
0.13.0.dev3
Changes in 0.13.0.dev3
Other
- A new function `compute_tiles()` has been refactored out from
  function `xcube.core.tile.compute_rgba_tile()`.
- Added method `get_level_for_resolution(xy_res)` to
  abstract base class `xcube.core.mldataset.MultiLevelDataset`.
- Removed outdated example resources from `examples/serve/demo`.
- Account for different spatial resolutions in x and y in
  `xcube.core.geom.get_dataset_bounds()`.
- Make code robust against 0-size coordinates in
  `xcube.core.update._update_dataset_attrs()`.
0.13.0.dev2
Changes in 0.13.0.dev2
Intermediate changes
- Fixed unit test w.r.t. change in 0.13.0.dev1
Changes in 0.13.0.dev1
Intermediate changes
- Include package data `xcube/webapi/meta/res/openapi.html`.
Changes in 0.13.0.dev0
Enhancements
- xcube Server has been rewritten almost from scratch.

- Introduced a new endpoint `${server_url}/s3` that emulates
  an AWS S3 object storage for the published datasets.
  The `bucket` name can be either:

  - `s3://datasets` - publishes all datasets in Zarr format.
  - `s3://pyramids` - publishes all datasets in a multi-level `levels`
    format (multi-resolution N-D images)
    that comprises level datasets in Zarr format.

  Datasets published through the S3 API are slightly
  renamed for clarity. For bucket `s3://pyramids`:

  - if a dataset identifier has suffix `.levels`, the identifier remains;
  - if a dataset identifier has suffix `.zarr`, it will be replaced by
    `.levels` only if such a dataset doesn't exist;
  - otherwise, the suffix `.levels` is appended to the identifier.

  For bucket `s3://datasets` the opposite is true:

  - if a dataset identifier has suffix `.zarr`, the identifier remains;
  - if a dataset identifier has suffix `.levels`, it will be replaced by
    `.zarr` only if such a dataset doesn't exist;
  - otherwise, the suffix `.zarr` is appended to the identifier.

  With the new S3 endpoints in place, xcube Server instances can be used
  as xcube data stores as follows:

  ```python
  store = new_data_store(
      "s3",
      root="datasets",  # bucket "datasets", use also "pyramids"
      max_depth=2,  # optional, but we may have nested datasets
      storage_options=dict(
          anon=True,
          client_kwargs=dict(
              endpoint_url='http://localhost:8080/s3'
          )
      )
  )
  ```
- The limited `s3bucket` endpoints are no longer available and are
  replaced by `s3` endpoints.
- xcube Server's colormap management has been improved in several ways:

  - Colormaps are no longer managed globally. E.g., on server configuration
    change, new custom colormaps are reloaded from files.
  - Colormaps are loaded dynamically from the underlying
    matplotlib and cmocean registries, and custom SNAP color palette files.
    That means the latest matplotlib colormaps are now always available. (#687)
  - Colormaps can now be reversed (name suffix `"_r"`),
    can have alpha blending (name suffix `"_alpha"`),
    or both (name suffix `"_r_alpha"`).
  - Loading of custom colormaps from SNAP `*.cpd` files has been rewritten.
    Now also the `isLogScaled` property of the colormap is recognized. (#661)
  - The module `xcube.util.cmaps` has been redesigned and now offers
    three new classes for colormap management:
    - `Colormap` - a colormap
    - `ColormapCategory` - represents a colormap category
    - `ColormapRegistry` - manages colormaps and their categories
Other
- Deprecated CLI `xcube tile` has been removed.

- Deprecated modules, classes, methods, and functions
  have finally been removed:
  - `xcube.core.geom.get_geometry_mask()`
  - `xcube.core.mldataset.FileStorageMultiLevelDataset`
  - `xcube.core.mldataset.open_ml_dataset()`
  - `xcube.core.mldataset.open_ml_dataset_from_local_fs()`
  - `xcube.core.mldataset.open_ml_dataset_from_object_storage()`
  - `xcube.core.subsampling.get_dataset_subsampling_slices()`
  - `xcube.core.tiledimage`
  - `xcube.core.tilegrid`

- The following classes, methods, and functions have been deprecated:
  - `xcube.core.xarray.DatasetAccessor.levels()`
  - `xcube.util.cmaps.get_cmap()`
  - `xcube.util.cmaps.get_cmaps()`

- Fixed problem with `xcube gen` raising `FileNotFoundError`
  with Zarr >= 2.13.
0.13.0.dev1
Changes in 0.13.0.dev1
Intermediate dev changes (remove from CHANGES.md in 0.13.0 release)
- Include package data `xcube/webapi/meta/res/openapi.html`.
Changes in 0.13.0.dev0
Enhancements
- xcube Server has been rewritten almost from scratch.

- Introduced a new endpoint `${server_url}/s3` that emulates
  an AWS S3 object storage for the published datasets.
  The `bucket` name can be either:

  - `s3://datasets` - publishes all datasets in Zarr format.
  - `s3://pyramids` - publishes all datasets in a multi-level `levels`
    format (multi-resolution N-D images)
    that comprises level datasets in Zarr format.

  Datasets published through the S3 API are slightly
  renamed for clarity. For bucket `s3://pyramids`:

  - if a dataset identifier has suffix `.levels`, the identifier remains;
  - if a dataset identifier has suffix `.zarr`, it will be replaced by
    `.levels` only if such a dataset doesn't exist;
  - otherwise, the suffix `.levels` is appended to the identifier.

  For bucket `s3://datasets` the opposite is true:

  - if a dataset identifier has suffix `.zarr`, the identifier remains;
  - if a dataset identifier has suffix `.levels`, it will be replaced by
    `.zarr` only if such a dataset doesn't exist;
  - otherwise, the suffix `.zarr` is appended to the identifier.

  With the new S3 endpoints in place, xcube Server instances can be used
  as xcube data stores as follows:

  ```python
  store = new_data_store(
      "s3",
      root="datasets",  # bucket "datasets", use also "pyramids"
      max_depth=2,  # optional, but we may have nested datasets
      storage_options=dict(
          anon=True,
          client_kwargs=dict(
              endpoint_url='http://localhost:8080/s3'
          )
      )
  )
  ```
- The limited `s3bucket` endpoints are no longer available and are
  replaced by `s3` endpoints.
- xcube Server's colormap management has been improved in several ways:

  - Colormaps are no longer managed globally. E.g., on server configuration
    change, new custom colormaps are reloaded from files.
  - Colormaps are loaded dynamically from the underlying
    matplotlib and cmocean registries, and custom SNAP color palette files.
    That means the latest matplotlib colormaps are now always available. (#687)
  - Colormaps can now be reversed (name suffix `"_r"`),
    can have alpha blending (name suffix `"_alpha"`),
    or both (name suffix `"_r_alpha"`).
  - Loading of custom colormaps from SNAP `*.cpd` files has been rewritten.
    Now also the `isLogScaled` property of the colormap is recognized. (#661)
  - The module `xcube.util.cmaps` has been redesigned and now offers
    three new classes for colormap management:
    - `Colormap` - a colormap
    - `ColormapCategory` - represents a colormap category
    - `ColormapRegistry` - manages colormaps and their categories
Other
- Deprecated CLI `xcube tile` has been removed.

- Deprecated modules, classes, methods, and functions
  have finally been removed:
  - `xcube.core.geom.get_geometry_mask()`
  - `xcube.core.mldataset.FileStorageMultiLevelDataset`
  - `xcube.core.mldataset.open_ml_dataset()`
  - `xcube.core.mldataset.open_ml_dataset_from_local_fs()`
  - `xcube.core.mldataset.open_ml_dataset_from_object_storage()`
  - `xcube.core.subsampling.get_dataset_subsampling_slices()`
  - `xcube.core.tiledimage`
  - `xcube.core.tilegrid`

- The following classes, methods, and functions have been deprecated:
  - `xcube.core.xarray.DatasetAccessor.levels()`
  - `xcube.util.cmaps.get_cmap()`
  - `xcube.util.cmaps.get_cmaps()`

- Fixed problem with `xcube gen` raising `FileNotFoundError`
  with Zarr >= 2.13.
0.13.0.dev0
Changes in 0.13.0.dev0
Enhancements
- xcube Server has been rewritten almost from scratch.

- Introduced a new endpoint `${server_url}/s3` that emulates
  an AWS S3 object storage for the published datasets.
  The `bucket` name can be either:

  - `s3://datasets` - publishes all datasets in Zarr format.
  - `s3://pyramids` - publishes all datasets in a multi-level `levels`
    format (multi-resolution N-D images)
    that comprises level datasets in Zarr format.

  Datasets published through the S3 API are slightly
  renamed for clarity. For bucket `s3://pyramids`:

  - if a dataset identifier has suffix `.levels`, the identifier remains;
  - if a dataset identifier has suffix `.zarr`, it will be replaced by
    `.levels` only if such a dataset doesn't exist;
  - otherwise, the suffix `.levels` is appended to the identifier.

  For bucket `s3://datasets` the opposite is true:

  - if a dataset identifier has suffix `.zarr`, the identifier remains;
  - if a dataset identifier has suffix `.levels`, it will be replaced by
    `.zarr` only if such a dataset doesn't exist;
  - otherwise, the suffix `.zarr` is appended to the identifier.

  With the new S3 endpoints in place, xcube Server instances can be used
  as xcube data stores as follows:

  ```python
  store = new_data_store(
      "s3",
      root="datasets",  # bucket "datasets", use also "pyramids"
      max_depth=2,  # optional, but we may have nested datasets
      storage_options=dict(
          anon=True,
          client_kwargs=dict(
              endpoint_url='http://localhost:8080/s3'
          )
      )
  )
  ```
- The limited `s3bucket` endpoints are no longer available and are
  replaced by `s3` endpoints.
- xcube Server's colormap management has been improved in several ways:

  - Colormaps are no longer managed globally. E.g., on server configuration
    change, new custom colormaps are reloaded from files.
  - Colormaps are loaded dynamically from the underlying
    matplotlib and cmocean registries, and custom SNAP color palette files.
    That means the latest matplotlib colormaps are now always available. (#687)
  - Colormaps can now be reversed (name suffix `"_r"`),
    can have alpha blending (name suffix `"_alpha"`),
    or both (name suffix `"_r_alpha"`).
  - Loading of custom colormaps from SNAP `*.cpd` files has been rewritten.
    Now also the `isLogScaled` property of the colormap is recognized. (#661)
  - The module `xcube.util.cmaps` has been redesigned and now offers
    three new classes for colormap management:
    - `Colormap` - a colormap
    - `ColormapCategory` - represents a colormap category
    - `ColormapRegistry` - manages colormaps and their categories
Other
- Deprecated CLI `xcube tile` has been removed.

- Deprecated modules, classes, methods, and functions
  have finally been removed:
  - `xcube.core.geom.get_geometry_mask()`
  - `xcube.core.mldataset.FileStorageMultiLevelDataset`
  - `xcube.core.mldataset.open_ml_dataset()`
  - `xcube.core.mldataset.open_ml_dataset_from_local_fs()`
  - `xcube.core.mldataset.open_ml_dataset_from_object_storage()`
  - `xcube.core.subsampling.get_dataset_subsampling_slices()`
  - `xcube.core.tiledimage`
  - `xcube.core.tilegrid`

- The following classes, methods, and functions have been deprecated:
  - `xcube.core.xarray.DatasetAccessor.levels()`
  - `xcube.util.cmaps.get_cmap()`
  - `xcube.util.cmaps.get_cmaps()`

- Fixed problem with `xcube gen` raising `FileNotFoundError`
  with Zarr >= 2.13.
0.12.1
Changes in 0.12.1
Enhancements
- Added a new package `xcube.core.zarrstore` that exports a number of
  useful Zarr store implementations and Zarr store utilities:
  - `xcube.core.zarrstore.GenericZarrStore` comprises user-defined, generic
    array definitions. Arrays will compute their chunks either from a
    function or a static data array.
  - `xcube.core.zarrstore.LoggingZarrStore` is used to log Zarr store access
    performance and is therefore useful for runtime optimisation and
    debugging.
  - `xcube.core.zarrstore.DiagnosticZarrStore` is used for testing Zarr store
    implementations.
- Added an xarray dataset accessor `xcube.core.zarrstore.ZarrStoreHolder`
  that enhances instances of `xarray.Dataset` by a new property `zarr_store`.
  It holds a Zarr store instance that represents the dataset as a key-value
  mapping. This will prepare later versions of xcube Server for publishing
  all datasets via an emulated S3 API. A usage sketch follows below.
  In turn, the classes of module `xcube.core.chunkstore` have been deprecated.
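
  A minimal usage sketch; the `get()` accessor method used here is an
  assumption, so consult the package documentation:

  ```python
  import numpy as np
  import xarray as xr

  import xcube.core.zarrstore  # noqa: F401  (assumed to register the accessor)

  ds = xr.Dataset({"sst": (("time", "lat", "lon"), np.zeros((1, 3, 4)))})
  # Assumption: the holder exposes the Zarr store via get(); the store then
  # behaves like a key-value mapping of Zarr keys.
  store = ds.zarr_store.get()
  print(sorted(store)[:3])
  ```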
- Added a new function `xcube.core.select.select_label_subset()` that is
  used to select dataset labels along a given dimension using user-defined
  predicate functions.
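
  The function's exact signature is not shown here; the following is a
  concept illustration of predicate-based label selection using plain
  xarray and numpy only:

  ```python
  import numpy as np
  import xarray as xr

  ds = xr.Dataset(
      {"v": ("time", np.arange(5.0))},
      coords={"time": np.arange(5)},
  )
  predicate = lambda label: label % 2 == 0       # user-defined predicate per label
  mask = [bool(predicate(t)) for t in ds["time"].values]
  subset = ds.isel(time=np.flatnonzero(mask))
  print(subset["time"].values)                   # [0 2 4]
  ```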
- The xcube Python environment now requires `xarray >= 2022.6` and
  `zarr >= 2.11` to ensure sparse Zarr datasets can be written using
  `dataset.to_zarr(store)`. (#688)
- Added new module `xcube.util.jsonencoder` that offers the class
  `NumpyJSONEncoder` used to serialize numpy-like scalar values to JSON.
  It also offers the function `to_json_value()` to convert Python objects
  into JSON-serializable versions. The new functionality is required to
  ensure that dataset attributes are JSON-serializable. For example, the
  latest version of the `rioxarray` package generates a `_FillValue`
  attribute with data type `np.uint8`.
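
  A minimal sketch, assuming `NumpyJSONEncoder` follows the standard
  `json.JSONEncoder` protocol and `to_json_value()` takes a single Python
  object (both are assumptions; consult the module for the actual API):

  ```python
  import json

  import numpy as np

  from xcube.util.jsonencoder import NumpyJSONEncoder, to_json_value

  attrs = {"_FillValue": np.uint8(255), "scale_factor": np.float32(0.01)}
  # Assumption: the encoder is passed via the standard cls argument.
  print(json.dumps(attrs, cls=NumpyJSONEncoder))
  # Assumption: to_json_value() converts one value into a JSON-serializable one.
  print(to_json_value(np.uint8(255)))
  ```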
Fixes
- The filesystem-based data stores for the "s3", "file", and "memory"
  protocols can now provide `xr.Dataset` instances from image pyramid
  formats, i.e. the `levels` and `geotiff` formats.
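
  For example (a sketch; the root path and data identifiers are placeholders):

  ```python
  from xcube.core.store import new_data_store

  # "file" data store rooted at a local directory (placeholder path).
  store = new_data_store("file", root="/path/to/data")

  # GeoTIFF / COG and multi-level ".levels" datasets can now be opened
  # as xr.Dataset instances.
  dataset = store.open_data("example.tif")
  cube = store.open_data("example.levels")
  ```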
Full Changelog: v0.12.0...v0.12.1
0.12.0
Changes in 0.12.0
Enhancements
- Allow xcube Server to work with any OIDC-compliant auth service such as
  Auth0, Keycloak, or Google. Permissions of the form
  `"read:dataset:<dataset>"` and `"read:variable:<dataset>"` can now be
  passed by two id token claims:
  - `permissions` must be a JSON list of permissions;
  - `scope` must be a space-separated character string of permissions.

  It is now also possible to include id token claim values into the
  permissions as template variables. For example, if the currently
  authenticated user is `demo_user`, the permission
  `"read:dataset:$username/*"` will effectively be
  `"read:dataset:demo_user/*"` and only allow access to datasets
  with resource identifiers having the prefix `demo_user/`.

  With this change, server configuration has changed.

  Example of OIDC configuration for Auth0.
  Please note, there must be a trailing slash in the "Authority" URL.

  ```yaml
  Authentication:
    Authority: https://some-demo-service.eu.auth0.com/
    Audience: https://some-demo-service/api/
  ```

  Example of OIDC configuration for Keycloak.
  Please note, no trailing slash in the "Authority" URL.

  ```yaml
  Authentication:
    Authority: https://kc.some-demo-service.de/auth/realms/some-kc-realm
    Audience: some-kc-realm-xc-api
  ```
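
  As a concept illustration only (this is not xcube Server code), the
  `$username` substitution described above behaves like Python's
  `string.Template` expansion:

  ```python
  from string import Template

  # Hypothetical illustration of expanding a permission template with an
  # id token claim value.
  permission = Template("read:dataset:$username/*")
  claims = {"username": "demo_user"}  # claim value taken from the id token
  print(permission.substitute(claims))  # read:dataset:demo_user/*
  ```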
- Filesystem-based data stores like "file" and "s3" support reading
  GeoTIFF and Cloud Optimized GeoTIFF (COG). (#489)
- `xcube serve` now also allows publishing 2D datasets
  such as those opened from GeoTIFF / COG files.
- Removed all upper version bounds of package dependencies.
  This increases compatibility with existing Python environments.
- A new CLI tool `xcube patch` has been added. It allows for in-place
  metadata patches of Zarr data cubes stored in almost any filesystem
  supported by `fsspec`, including the protocols "s3" and "file".
  It also allows patching xcube multi-level datasets (`*.levels` format).
- In the configuration for `xcube serve`, datasets defined in `DataStores`
  may now have user-defined identifiers. In case the path does not
  unambiguously define a dataset (because it contains wildcards), providing
  a user-defined identifier will raise an error.
Fixes
- xcube Server did not find any grid mapping if a grid mapping variable
  (e.g. `spatial_ref` or `crs`) encodes a geographic CRS
  (CF grid mapping name "latitude_longitude") and the related geographical
  1-D coordinates were named "x" and "y". (#706)
- Fixed typo in metadata of demo cubes in `examples/serve/demo`.
  Demo cubes now all have consolidated metadata.
- When writing multi-level datasets with file data stores, i.e.,
  `store.write_data(dataset, data_id="test.levels", use_saved_levels=True)`,
  and where `dataset` has different spatial resolutions in x and y,
  an exception was raised. This is no longer the case.
- xcube Server can now also compute spatial 2D datasets from users'
  Python code. In former versions, spatio-temporal 3D cubes were enforced.
0.12.0.dev0
Changes in 0.12.0 (in development)
Enhancements
- Allow xcube Server to work with any OIDC-compliant auth service such as
  Auth0, Keycloak, or Google. Permissions of the form
  `"read:dataset:<dataset>"` and `"read:variable:<dataset>"` can now be
  passed by two id token claims:
  - `permissions` must be a JSON list of permissions;
  - `scope` must be a space-separated character string of permissions.

  It is now also possible to include id token claim values into the
  permissions as template variables. For example, if the currently
  authenticated user is `demo_user`, the permission
  `"read:dataset:$username/*"` will effectively be
  `"read:dataset:demo_user/*"` and only allow access to datasets
  with resource identifiers having the prefix `demo_user/`.

  With this change, server configuration has changed.

  Example of OIDC configuration for Auth0.
  Please note, there must be a trailing slash in the "Authority" URL.

  ```yaml
  Authentication:
    Authority: https://some-demo-service.eu.auth0.com/
    Audience: https://some-demo-service/api/
  ```

  Example of OIDC configuration for Keycloak.
  Please note, no trailing slash in the "Authority" URL.

  ```yaml
  Authentication:
    Authority: https://kc.some-demo-service.de/auth/realms/some-kc-realm
    Audience: some-kc-realm-xc-api
  ```
- Filesystem-based data stores like "file" and "s3" support reading
  GeoTIFF and Cloud Optimized GeoTIFF (COG). (#489)
- `xcube serve` now also allows publishing 2D datasets
  such as those opened from GeoTIFF / COG files.
- Removed all upper version bounds of package dependencies.
  This increases compatibility with existing Python environments.
- A new CLI tool `xcube patch` has been added. It allows for in-place
  metadata patches of Zarr data cubes stored in almost any filesystem
  supported by `fsspec`, including the protocols "s3" and "file".
  It also allows patching xcube multi-level datasets (`*.levels` format).
- In the configuration for `xcube serve`, datasets defined in `DataStores`
  may now have user-defined identifiers. In case the path does not
  unambiguously define a dataset (because it contains wildcards), providing
  a user-defined identifier will raise an error.
Fixes
- xcube Server did not find any grid mapping if a grid mapping variable
  (e.g. `spatial_ref` or `crs`) encodes a geographic CRS
  (CF grid mapping name "latitude_longitude") and the related geographical
  1-D coordinates were named "x" and "y". (#706)
- Fixed typo in metadata of demo cubes in `examples/serve/demo`.
  Demo cubes now all have consolidated metadata.
- When writing multi-level datasets with file data stores, i.e.,
  `store.write_data(dataset, data_id="test.levels", use_saved_levels=True)`,
  and where `dataset` has different spatial resolutions in x and y,
  an exception was raised. This is no longer the case.
- xcube Server can now also compute spatial 2D datasets from users'
  Python code. In former versions, spatio-temporal 3D cubes were enforced.