Releases: xcube-dev/xcube

0.6.2.dev1 (Pre-release)

02 Mar 16:03

Includes xcube gen2 improvements:

  • Adapted store pool config to new "cost_params" structure
  • Now printing remote output and remote traceback if any on error

0.6.2.dev0 (Pre-release)

22 Feb 09:12

Changes in 0.6.2 (in development)

  • The S3 data store xcube.core.store.stores.s3.S3DataStore now implements the describe_data() method.
    It can therefore also be used as a data store from which data is queried and read.

  • The xcube gen2 data cube generator tool has been hidden from
    the set of "official" xcube tools. It is considered an internal tool
    that is subject to change at any time until its interface has stabilized.
    Please refer to xcube gen2 --help for more information.

  • Added coords property to DatasetDescriptor class.
    The data_vars property of the DatasetDescriptor class is now a dictionary.

  • Removed function reproject_crs_to_wgs84() and tests (#375) because

    • it seemed to no longer be working with GDAL 3.1+;
    • there was no direct use in xcube itself;
    • xcube plans to get rid of GDAL dependencies.
  • CLI tool xcube gen2 may now also ingest non-cube datasets.

  • Fixed unit tests broken by accident. (#396)

  • Added new context manager xcube.util.observe_dask_progress() that can be used
    to observe tasks that are known to be dominated by Dask computations:

    with observe_dask_progress('Writing dataset', 100):
        dataset.to_zarr(store)  
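The behaviour of such a context manager can be approximated with a minimal, stdlib-only sketch (observe_progress below is a hypothetical stand-in, not xcube's implementation):

```python
import time
from contextlib import contextmanager

@contextmanager
def observe_progress(label: str, total_work: int):
    # Hypothetical stand-in for xcube.util.observe_dask_progress():
    # report when a labelled task starts and how long it took.
    start = time.perf_counter()
    print(f"{label}: started ({total_work} units of work)")
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: finished after {elapsed:.3f} s")

with observe_progress("Writing dataset", 100):
    sum(range(100_000))  # stand-in for dataset.to_zarr(store)
```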
  • The xcube normalisation process, which ensures that a dataset meets the requirements
    of a cube, internally requested a lot of data, causing the process to be slow and
    expensive in terms of memory consumption. This problem was resolved by avoiding
    reading in these large amounts of data. (#392)

0.6.1

10 Dec 11:29

Changes in 0.6.1

All changes relate to maintenance of xcube's Python environment requirements in environment.yml:

  • Removed explicit blas dependency (which required MKL as of blas =*.*=mkl)
    for better interoperability with existing environments.
  • Removed restrictions of fsspec <=0.6.2 which was required due to
    Zarr #650. As #650 has been fixed,
    zarr=2.6.1 has been added as new requirement. (#360)

0.6.0

25 Nov 15:34

Changes in 0.6.0

Enhancements

  • Added four new Jupyter Notebooks about xcube's new Data Store Framework in
    examples/notebooks/datastores.

  • CLI tool xcube io dump now has new --config and --type options. (#370)

  • New function xcube.core.store.get_data_store() and new class xcube.core.store.DataStorePool
    allow for maintaining a set of pre-configured data store instances. This will be used
    in future xcube tools that utilise multiple data stores, e.g. "xcube gen", "xcube serve". (#364)

  • Replaced the concept of type_id used by several xcube.core.store.DataStore methods
    by a more flexible type_specifier. Documentation is provided in docs/source/storeconv.md.

    The DataStore interface changed as follows:

    • class method get_type_specifiers() replaces get_type_id();
    • new instance method get_type_specifiers_for_data();
    • replaced keyword-argument in get_data_ids();
    • replaced keyword-argument in has_data();
    • replaced keyword-argument in describe_data();
    • replaced keyword-argument in get_search_params_schema();
    • replaced keyword-argument in search_data();
    • replaced keyword-argument in get_data_opener_ids().

    The WritableDataStore interface changed as follows:

    • replaced keyword-argument in get_data_writer_ids().
  • The JSON Schema classes in xcube.util.jsonschema have been extended:

    • date and date-time formats are now validated along with the rest of the schema
    • the JsonDateSchema and JsonDatetimeSchema subclasses of JsonStringSchema have been introduced,
      including a non-standard extension to specify date and time limits
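The kind of format checking this adds can be sketched with Python's stdlib (illustrative only; the actual validation lives in xcube.util.jsonschema):

```python
from datetime import date, datetime

def check_string_format(value: str, fmt: str) -> bool:
    # Illustrative sketch of "date" / "date-time" format validation.
    try:
        if fmt == "date":
            date.fromisoformat(value)
        elif fmt == "date-time":
            # fromisoformat() in older Pythons does not accept a trailing "Z"
            datetime.fromisoformat(value.replace("Z", "+00:00"))
        else:
            return False
    except ValueError:
        return False
    return True

print(check_string_format("2020-11-25", "date"))  # a valid ISO date
print(check_string_format("2020-13-01", "date"))  # month out of range
```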
  • Extended xcube.core.store.DataStore docstring to include a basic convention for store
    open parameters. (#330)

  • Added documentation for the use of the open parameters passed to
    xcube.core.store.DataOpener.open_data().

Fixes

  • xcube serve no longer crashes if the configuration lacks a Styles entry.

  • xcube gen can now interpret start_date and stop_date from NetCDF dataset attributes.
    This is relevant for using xcube gen for Sentinel-2 Level 2 data products generated and
    provided by Brockmann Consult. (#352)

  • Fixed both xcube.core.dsio.open_cube() and open_dataset() which failed with message
    "ValueError: group not found at path ''" if called with a bucket URL but no credentials given
    in case the bucket is not publicly readable. (#337)
    The fix for that issue now requires an additional s3_kwargs parameter when accessing datasets
    in public buckets:

    from xcube.core.dsio import open_cube 
      
    public_url = "https://s3.eu-central-1.amazonaws.com/xcube-examples/OLCI-SNS-RAW-CUBE-2.zarr"
    public_cube = open_cube(public_url, s3_kwargs=dict(anon=True))
  • xcube now requires s3fs >= 0.5 which implies using faster async I/O when accessing object storage.

  • xcube now requires gdal >= 3.0. (#348)

  • xcube now only requires matplotlib-base package rather than matplotlib. (#361)

Other

  • Restricted s3fs version in environment.yml in order to use a version which can handle pruned xcube datasets.
    This restriction will be removed once changes in zarr PR zarr-developers/zarr-python#650
    are merged and released. (#360)
  • Added a note in the xcube chunk CLI help, saying that there is a possibly more efficient way
    to (re-)chunk datasets through the dedicated tool "rechunker", see https://rechunker.readthedocs.io
    (thanks to Ryan Abernathey for the hint). (#335)
  • For xcube serve dataset configurations where FileSystem: obs, users must now also
    specify Anonymous: True for datasets in public object storage buckets. For example:
    - Identifier: "OLCI-SNS-RAW-CUBE-2"
      FileSystem: "obs"
      Endpoint: "https://s3.eu-central-1.amazonaws.com"
      Path: "xcube-examples/OLCI-SNS-RAW-CUBE-2.zarr"
      Anonymous: true
      ...
    - ...
  • In environment.yml, removed unnecessary explicit dependencies on proj4
    and pyproj and restricted gdal version to >=3.0,<3.1.

0.5.1

10 Jul 15:34

Changes in 0.5.1

  • normalize_dataset now ensures that latitudes are decreasing.

0.5.0

01 Jul 07:48

Changes in 0.5.0

New in 0.5.0

  • xcube gen2 CONFIG will generate a cube from a data input store and a user-given cube configuration.
    It will write the resulting cube to a user-defined output store.

    • Input Stores: CCIODP, CDS, SentinelHub
    • Output stores: memory, directory, S3
  • xcube serve CUBE will now use the last path component of CUBE as dataset title.

  • xcube serve can now be run with AWS credentials (#296).

    • In the form xcube serve --config CONFIG, a Datasets entry in CONFIG
      may now contain the two new keys AccessKeyId: ... and SecretAccessKey: ...
      given that FileSystem: obs.
    • In the form xcube serve --aws-prof PROFILE CUBE
      the cube stored in bucket with URL CUBE will be accessed using the
      credentials found in section [PROFILE] of your ~/.aws/credentials file.
    • In the form xcube serve --aws-env CUBE
      the cube stored in bucket with URL CUBE will be accessed using the
      credentials found in environment variables AWS_ACCESS_KEY_ID and
      AWS_SECRET_ACCESS_KEY.

Enhancements in 0.5.0

  • Added the possibility to specify packing of variables within the configuration of
    xcube gen (#269). The user may now specify a different packing per variable,
    which can be useful for reducing the storage size of data cubes.
    Currently it is only implemented for the zarr format.
    Packing parameters may be passed as follows:

    output_writer_params: 
    
      packing: 
        analysed_sst: 
          scale_factor: 0.07324442274239326
          add_offset: -300.0
          dtype: 'uint16'
          _FillValue: 0.65535
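The arithmetic behind such packing follows the CF scale_factor/add_offset convention; a small sketch using the numbers from the example above (pack/unpack are illustrative helpers, not xcube functions):

```python
SCALE_FACTOR = 0.07324442274239326
ADD_OFFSET = -300.0

def pack(value: float) -> int:
    # Store a float as a small integer: packed = (value - add_offset) / scale_factor
    return round((value - ADD_OFFSET) / SCALE_FACTOR)

def unpack(packed: int) -> float:
    # Recover the physical value: value = packed * scale_factor + add_offset
    return packed * SCALE_FACTOR + ADD_OFFSET

sst = 291.5  # hypothetical analysed_sst value in kelvin
packed = pack(sst)
assert 0 <= packed <= 65535                            # fits the uint16 range
assert abs(unpack(packed) - sst) <= SCALE_FACTOR / 2   # quantisation error bound
```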
  • Example configurations for xcube gen2 were added.

Fixes

  • From 0.4.1: Fixed time-series performance drop (#299).

  • Fixed xcube gen CLI tool to correctly insert time slices into an
    existing cube stored as Zarr (#317).

  • When creating an ImageGeom from a dataset, correct the height if it would
    otherwise give a maximum latitude >90°.

  • Disable the display of warnings in the CLI by default, only showing them if
    a --warnings flag is given.

  • xcube has been extended by a new Data Store Framework (#307).
    It is provided by the xcube.core.store package.
    Its usage is currently documented only in the form of Jupyter Notebook examples,
    see examples/store/*.ipynb.

  • During the development of the new Data Store Framework, some
    utility packages have been added:

    • xcube.util.jsonschema - classes that represent JSON Schemas for types null, boolean,
      number, string, object, and array. Schema instances are used for JSON validation,
      and object marshalling.
    • xcube.util.assertions - numerous assert_* functions that are used for function
      parameter validation. All functions raise ValueError in case an assertion is not met.
    • xcube.util.ipython - functions that can be called for better integration of objects with
      Jupyter Notebooks.
  • Fixed a regression when running "xcube serve" with cube path as parameter (#314)

  • From 0.4.3: Extended xcube serve by reverse URL prefix option.

0.4.2

05 May 08:13

Changes in 0.4.2

  • Fixed a problem during release process. No code changes.

Changes in 0.4.1

  • Fixed time-series performance drop (#299).

0.4.1

05 May 06:58

Changes in 0.4.1

  • Fixed time-series performance drop (#299).

0.4.0

08 Apr 06:12

Changes in 0.4.0

New

  • Added new /timeseries/{dataset}/{variable} POST operation to xcube web API.
    It extracts time-series for a given GeoJSON object provided as body.
    It replaces all of the /ts/{dataset}/{variable}/{geom-type} operations.
    The latter are still maintained for compatibility with the "VITO viewer".
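A request against this operation might be assembled as follows (the server URL, dataset and variable names are placeholders; only the GeoJSON body shape is taken from the description above):

```python
import json
from urllib import request

# Placeholder server and identifiers -- adjust to your deployment.
url = "http://localhost:8080/timeseries/my_dataset/my_variable"

# The POST body is a GeoJSON object, here a simple point.
geojson = {"type": "Point", "coordinates": [10.5, 54.0]}
body = json.dumps(geojson).encode("utf-8")

req = request.Request(url, data=body,
                      headers={"Content-Type": "application/json"},
                      method="POST")
# response = request.urlopen(req)  # not executed here: requires a running server
```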

  • The xcube web API provided through xcube serve can now serve RGBA tiles using the
    dataset/{dataset}/rgb/tiles/{z}/{y}/{x} operation. The red, green, blue
    channels are computed from three configurable variables and normalisation ranges,
    the alpha channel provides transparency for missing values. To specify a default
    RGB schema for a dataset, a colour mapping for the "pseudo-variable" named rgb
    is provided in the configuration of xcube serve:

    Datasets:
      - Identifier: my_dataset
        Style: my_style
        ...
      ...
    Styles:
      - Identifier: my_style
        ColorMappings:
          rgb:
            Red:
              Variable: rtoa_8
              ValueRange: [0., 0.25]
            Green:
              Variable: rtoa_6
              ValueRange: [0., 0.25]
            Blue:
              Variable: rtoa_4
              ValueRange: [0., 0.25]
          ...

    Note that this concept works nicely in conjunction with the new Augmentation feature (#272) used
    to compute new variables that could be input to the RGB generation.

  • Introduced new (ortho-)rectification algorithm allowing reprojection of
    satellite images that come with (terrain-corrected) geo-locations for every pixel.

    • new CLI tool xcube rectify
    • new API function xcube.core.rectify.rectify_dataset()
  • The xcube gen tool now utilizes the new rectification. It is now the default
    reprojection method in xcube.core.gen.iproc.XYInputProcessor and
    xcube.core.gen.iproc.DefaultInputProcessor, if ground control points are not
    specified, i.e. the input processor is configured with xy_gcp_step=None. (#206)

  • Tile sizes for rectification in xcube gen are now derived from output_writer_params if given in configuration and
    if it contains a chunksizes parameter for 'lat' or 'lon'. This will force the generation of a chunked xcube dataset
    and will utilize Dask arrays for out-of-core computations. This is very useful for large data cubes whose time slices
    would otherwise not fit into memory.

  • Introduced new function xcube.core.select.select_spatial_subset().

  • Renamed function xcube.core.select.select_vars() into xcube.core.select.select_variables_subset().

  • Now supporting xarray and numpy functions in expressions used by the
    xcube.core.evaluate.evaluate_dataset() function and in the configuration of the
    xcube gen tool. You can now use xr and np contexts in expressions, e.g.
    xr.where(CHL >= 0.0, CHL). (#257)

  • The performance of the xcube gen tool can now be improved for the case that
    expressions or expression parts are reused across multiple variables.
    Such expressions can now be assigned to intermediate variables and loaded
    into memory, so they are not recomputed again.
    For example, if the expression quality_flags.cloudy and CHL > 25.0 occurs often
    in the configuration, this is how recomputation can be avoided:

      processed_variables:
        no_cloud_risk:
          expression: not (quality_flags.cloudy and CHL_raw > 25.0)
          load: True
        CHL:
          expression: CHL_raw
          valid_pixel_expression: no_cloud_risk
        ...        
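The effect of load: True can be sketched as evaluating each expression once into a shared namespace, so later expressions reuse the cached result (a simplified model, not xcube's actual evaluator):

```python
# Expressions are evaluated in order; earlier results become inputs to later ones.
expressions = {
    "no_cloud_risk": "not (cloudy and CHL_raw > 25.0)",
    "CHL": "CHL_raw if no_cloud_risk else None",
}

namespace = {"CHL_raw": 20.0, "cloudy": True}  # hypothetical pixel values
for name, expr in expressions.items():
    # eval() with empty builtins keeps the sketch close to a restricted evaluator
    namespace[name] = eval(expr, {"__builtins__": {}}, namespace)

print(namespace["no_cloud_risk"], namespace["CHL"])
```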
    
  • Added the ability to write xcube datasets in Zarr format into an object storage bucket
    using the xcube Python API xcube.core.dsio.write_cube(). (#224)
    The user needs to provide credentials via

    client_kwargs = {'provider_access_key_id': 'user_id', 'provider_secret_access_key': 'user_secret'}
    

    and write to an existing bucket by executing

    write_cube(ds1, 'https://s3.amazonaws.com/upload_bucket/cube-1-250-250.zarr', 'zarr',
                         client_kwargs=client_kwargs)
    
  • Added new CLI tool xcube tile which is used to generate a tiled RGB image
    pyramid from any xcube dataset. The format and file organisation of the generated
    tile sets conforms to the TMS 1.0 Specification
    (#209).

  • The configuration of xcube serve has been enhanced to support
    augmentation of data cubes by new variables computed on-the-fly (#272).
    You can now add a section Augmentation into a dataset descriptor, e.g.:

      Datasets:
        - Identifier: abc
          ...
          Augmentation:
            Path: compute_new_vars.py
            Function: compute_variables
            InputParameters:
              ...
        - ...

    where compute_variables is a function that receives the parent xcube dataset
    and is expected to return a new dataset with new variables.

  • The xcube serve tool now provides basic access control via OAuth2 bearer tokens (#263).
    To configure a service instance with access control, add the following to the
    xcube serve configuration file:

      Authentication:
        Domain: "<your oauth2 domain>"
        Audience: "<your audience or API identifier>"
    

    Individual datasets can now be protected using the new AccessControl entry
    by configuring the RequiredScopes entry whose value is a list
    of required scopes, e.g. "read:datasets":

      Datasets:
        ...
        - Identifier: <some dataset id>
          ...
          AccessControl:
            RequiredScopes:
              - "read:datasets"
    

    If you want a dataset to disappear for unauthorized requests, set the
    IsSubstitute flag:

      Datasets:
        ...
        - Identifier: <some dataset id>
          ...
          AccessControl:
            IsSubstitute: true
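A minimal model of how RequiredScopes could gate access (illustrative only; the real check happens inside xcube serve against the scopes granted by the OAuth2 token):

```python
def is_access_granted(granted_scopes, required_scopes) -> bool:
    # Access requires every configured scope to be present in the token.
    return set(required_scopes).issubset(set(granted_scopes))

assert is_access_granted({"read:datasets", "write:datasets"}, ["read:datasets"])
assert not is_access_granted(set(), ["read:datasets"])
```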
    

Enhancements

  • The xcube serve tool now also allows for per-dataset configuration
    of chunk caches for datasets read from remote object storage locations.
    Chunk caching avoids repeatedly fetching remote data chunks for the same
    region of interest.
    It can be configured as default for all remote datasets at top-level of
    the configuration file:

    DatasetChunkCacheSize: 100M
    

    or in individual dataset definitions:

    Datasets: 
       - Identifier: ...
         ChunkCacheSize: 2G
         ...
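Sizes like 100M or 2G are human-readable byte counts; a hypothetical parser for such values could look like this (not xcube's actual implementation):

```python
def parse_size(text: str) -> int:
    # Interpret a trailing K/M/G suffix as binary units (1024-based).
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    text = text.strip().upper()
    if text and text[-1] in units:
        return int(text[:-1]) * units[text[-1]]
    return int(text)

print(parse_size("100M") == 100 * 1024 ** 2)
print(parse_size("2G") == 2 * 1024 ** 3)
```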
    
  • Retrieval of time series in Python API function xcube.core.timeseries.get_time_series()
    has been optimized and is now much faster for point geometries.
    This enhances time-series performance of xcube serve.

    • The log output of xcube serve now contains more details about time-series requests,
      so performance bottlenecks can be identified more easily from xcube-serve.log
      if the server is started with the flag --traceperf.
  • CLI command xcube resample has been enhanced by a new value for the
    frequency option: --frequency all.
    With this value it is possible to create the mean, max, std, etc. of the whole dataset,
    in other words, to create an overview of a cube.
    Contributed by Alberto S. Rabaneda.

  • The xcube serve tool now also serves dataset attribution information which will be
    displayed in the xcube-viewer's map. To add attribution information, use the DatasetAttribution
    entry in your xcube serve configuration. It can be used at top level (for all datasets)
    or on individual datasets. Its value may be a single text entry or a list of texts.
    For example:

    DatasetAttribution: 
      - "© by Brockmann Consult GmbH 2020, contains modified Copernicus Data 2019, processed by ESA."
      - "Funded by EU H2020 DCS4COP project."
  • The xcube gen tool now always produces consolidated xcube datasets when the output format is zarr.
    Furthermore, when appending to an existing zarr xcube dataset, the output will now be consolidated as well.
    In addition, xcube gen can now append input time slices to existing optimized (consolidated) zarr xcube datasets.

  • The unchunk_coords keyword argument of Python API function
    xcube.core.optimize.optimize_dataset() can now be a name, or list of names
    of the coordinate variable(s) to be consolidated. If boolean True is used
    all variables will be consolidated.
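The accepted argument shapes can be modelled with a small normalisation helper (a sketch of the documented behaviour, not the actual code):

```python
def normalize_unchunk_coords(unchunk_coords, all_coord_names):
    # True -> all coordinate variables; a single name -> one-element list;
    # otherwise assume an iterable of names.
    if unchunk_coords is True:
        return list(all_coord_names)
    if isinstance(unchunk_coords, str):
        return [unchunk_coords]
    return list(unchunk_coords)

coords = ["time", "lat", "lon"]
assert normalize_unchunk_coords(True, coords) == ["time", "lat", "lon"]
assert normalize_unchunk_coords("time", coords) == ["time"]
assert normalize_unchunk_coords(["lat", "lon"], coords) == ["lat", "lon"]
```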

  • The xcube serve API operations datasets/ and datasets/{ds_id} now also
    return the metadata attributes of a given dataset and its variables in a property
    named attrs. For variables we added a new metadata property htmlRepr that is
    a string returned by a variable's var.data._repr_html_() method, if any.

  • Renamed default log file for xcube serve command to xcube-serve.log.

  • xcube gen now immediately flushes logging output to standard out.

0.3.0

21 Nov 15:13

Changes in 0.3.0

New

  • Added a new parameter to xcube gen called --no_sort. With --no_sort,
    the input file list won't be sorted before creating the xcube dataset,
    i.e. the order of the input list will be kept.
    The parameter --sort is deprecated; input files are now sorted
    by default.
  • xcube now discovers plugin modules by module naming convention
    and by Setuptools entry points. See new chapter
    Plugins
    in xcube's documentation for details. (#211)
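Discovery by module naming convention can be sketched with pkgutil (the xcube_plugin_ prefix here is an assumption made for illustration; see the Plugins chapter for the actual conventions):

```python
import pkgutil

def find_plugin_modules(prefix: str = "xcube_plugin_"):
    # Scan importable top-level modules and keep those matching the prefix.
    return sorted(m.name for m in pkgutil.iter_modules()
                  if m.name.startswith(prefix))

plugins = find_plugin_modules()
print(f"{len(plugins)} plugin module(s) found")
```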
  • Added new xcube compute CLI command and xcube.core.compute.compute_cube() API
    function that can be used to generate an output cube computed from a Python
    function that is applied to one or more input cubes. Replaces the formerly
    hidden xcube apply command. (#167)
  • Added new function xcube.core.geom.rasterize_features()
    to rasterize vector-data features into a dataset. (#222)
  • Extended CLI command xcube verify and API function xcube.core.verify.verify_cube to check whether spatial coordinate variables and their associated bounds variables are equidistant. (#231)
  • Made xarray version 0.14.1 the minimum requirement due to the deprecation of xarray's
    Dataset.drop method, and replaced its usage with drop_sel and drop_vars accordingly.

Enhancements

  • CLI commands execute much faster now when invoked with the --help and --info options.
  • Added serverPID property to response of web API info handler.
  • Functions and classes exported by following modules no longer require data cubes to use
    the lon and lat coordinate variables, i.e. using WGS84 CRS coordinates. Instead, the
    coordinates' CRS may be a projected coordinate system and coordinate variables may be called
    x and y (#112):
    • xcube.core.new
    • xcube.core.geom
    • xcube.core.schema
    • xcube.core.verify
  • Sometimes the cell bounds coordinate variables of a given coordinate variable are not in a proper,
    CF-compliant order, e.g. for decreasing latitudes lat the respective bounds coordinate
    lat_bnds is decreasing for lat_bnds[:, 0] and lat_bnds[:, 1], but lat_bnds[i, 0] < lat_bnds[i, 1]
    for all i. xcube is now more tolerant with respect to such wrong ordering of cell boundaries and will
    compute the correct spatial extent. (#233)
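Tolerance to mis-ordered bounds columns amounts to taking the min/max over both columns; a sketch (not the actual xcube code):

```python
def bounds_extent(bnds):
    # Works whether each [lower, upper] row is ordered or swapped,
    # and whether rows run in increasing or decreasing order.
    lo = min(min(row) for row in bnds)
    hi = max(max(row) for row in bnds)
    return lo, hi

# Decreasing latitudes whose bound columns violate lat_bnds[i, 0] < lat_bnds[i, 1]
lat_bnds = [[51.0, 50.0], [50.0, 49.0], [49.0, 48.0]]
assert bounds_extent(lat_bnds) == (48.0, 51.0)
```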
  • For xcube serve, any undefined color bar name will default to "viridis". (#238)

Fixes

  • xcube resample now correctly re-chunks its output. By default, chunking of the
    time dimension is set to one. (#212)

Incompatible changes

The following changes introduce incompatibilities with former xcube 0.2.x
versions.

  • The function specified by xcube_plugins entry points now receives a single argument of
    type xcube.api.ExtensionRegistry. Plugins are asked to add their extensions
    to this registry. As an example, have a look at the default xcube_plugins entry points
    in ./setup.py.

  • xcube.api.compute_dataset() function has been renamed to
    xcube.api.evaluate_dataset(). This has been done in order to avoid confusion
    with new API function xcube.api.compute_cube().

  • xcube's package structure has been drastically changed:

    • all of xcube's __init__.py files are now empty and no longer
      have side effects such as sub-module aggregations.
      Therefore, components need to be imported from individual modules.
    • renamed xcube.api into xcube.core
    • moved several modules from xcube.util into xcube.core
    • the new xcube.constants module contains package level constants
    • the new xcube.plugin module now registers all standard extensions
    • moved contents of module xcube.api.readwrite into xcube.core.dsio.
    • removed functions read_cube and read_dataset as open_cube and open_dataset are sufficient
    • all internal module imports are now absolute, rather than relative