- New function `azure_store` and added support for virtual chunks in Azure. Virtual chunks must have URLs starting with `azure://`, `az://` or `abfs://`.
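As a sketch of the scheme requirement above, the check can be pictured with a tiny helper (`is_azure_virtual_url` is a hypothetical name, not part of the Icechunk API):

```python
from urllib.parse import urlsplit

# Hypothetical helper illustrating the URL scheme rule for Azure virtual
# chunks; not part of the Icechunk API.
AZURE_SCHEMES = {"azure", "az", "abfs"}

def is_azure_virtual_url(url: str) -> bool:
    """True if `url` uses a scheme accepted for Azure virtual chunks."""
    return urlsplit(url).scheme in AZURE_SCHEMES
```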
- Internal testing and documentation build fixes
- Compatibility with `Xarray<2025.06.0`.
- Add `authorized_virtual_container_prefixes` to `Repository` to access the prefixes currently whitelisted for virtual chunk access
- Improvements for `anonymous` access to GCS Storage
- This release removes the `windows x86` wheels from the PyPI release.
- Support for `anonymous` access to GCS Storage
- New `Session.flush` method allows creating new snapshots without updating the current branch. This is useful to store temporary updates that can later be made permanent by pointing a tag to them (`Repository.create_tag`), a new branch (`Repository.create_branch`), or an existing branch (`Repository.reset_branch`).
- Added `from_snapshot_id` argument to `reset_branch`. This allows safely resetting a branch, conditional on its current tip snapshot.
- Added support for `align_chunks` and `split_every` arguments in `to_icechunk`.
- `Store.list_dir` is more than an order of magnitude faster in repositories with thousands of groups/arrays. `Store.list_prefix` is also faster.
- Increased default snapshot cache size to 500k groups/arrays. This is a better default as we see people creating larger Icechunk repositories. Of course, this can be modified using `icechunk.CachingConfig`.
- `S3Options` getters and setters added to the interface stub file for proper type checking.
- `Storage.new_tigris` now uses the new Tigris endpoint `t3.storage.dev` by default.
- Repositories in the Tigris object store now use the `X-Tigris-Consistent` header for better consistency.
- `ForkSession` can now be used without serializing it.
- Significant speed improvements for `Store.list_prefix` and `Store.getsize_prefix` when the prefix is not empty.
- Add synchronous `Repository.reopen()` method. There was previously only an async version, `Repository.reopen_async()`.
- S3 Storage now uses the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`.
- Emit a warning if `icechunk` is misspelled when configuring logs.
- Better Ceph Object Gateway compatibility.
- Msgpack format error when trying to pickle distributed stores in repositories with virtual chunk containers.
- Fix issue retrieving groups and arrays with certain special characters in their names
- New `config` getter in `Session` objects to get the repository configuration.
- Per-repo network concurrency limit. The zarr-python + dask + Icechunk stack can be too eager in launching concurrent HTTP requests. This can overload the machine, timing out some requests and producing an error. There is a new `max_concurrent_requests` field in `RepositoryConfig` to limit this; concurrent HTTP requests will be capped at this value. The default is 256. See the docs for more.
- Ability to tune and disable network stream stall detection. `s3_storage`-like functions now take an optional `network_stream_timeout_seconds` with a 60 second default. If configured to 0, stream stall detection is disabled completely.
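The effect of a concurrency cap like `max_concurrent_requests` can be sketched with an `asyncio.Semaphore` (illustrative names; not Icechunk's internals):

```python
import asyncio

async def fetch_all(keys: list[str], max_concurrent_requests: int = 256) -> list[str]:
    # A semaphore caps how many "requests" are in flight at once, which is
    # the idea behind the new RepositoryConfig field.
    sem = asyncio.Semaphore(max_concurrent_requests)

    async def fetch(key: str) -> str:
        async with sem:
            await asyncio.sleep(0)  # stand-in for an actual HTTP request
            return f"data-for-{key}"

    return list(await asyncio.gather(*(fetch(k) for k in keys)))
```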
- New `Repository.inspect_snapshot` method returns a JSON representation of a snapshot, including its properties and the manifests it points to.
- Improved credential refresh logic, avoiding refresh timeouts.
- Icechunk now has an asynchronous API.
- Icechunk internals continue to be fully asynchronous. Most "normal" use cases don't need the async API; the synchronous API delivers the same performance.
- The async API is useful for getting optimal concurrency in operations involving multiple repos or multiple sessions. An example would be users who run Icechunk in the context of a service accessing multiple repositories.
- Not every method in Icechunk has an async version, only those that can benefit because they do I/O.
- The new methods have the same name as the synchronous ones, with an `_async` suffix. They can be invoked on the same instances as usual.
- Some examples: `Repository.create_async()`, `Repository.open_async()`, `Repository.garbage_collect_async()`, `Repository.total_chunks_storage_async()`, `Repository.lookup_tag_async()`, `Repository.readonly_session_async()`, `Repository.writable_session_async()`, `Session.commit_async()`, `Session.rebase_async()`.
- There are many more; check the API reference.
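The pairing convention can be pictured with a toy class where the synchronous method simply drives its `_async` sibling to completion (a sketch of the naming pattern only, not Icechunk's implementation):

```python
import asyncio

class Repo:
    # Toy illustration of the `_async` suffix convention.
    def lookup_tag(self, name: str) -> str:
        # The sync variant runs the async one to completion.
        return asyncio.run(self.lookup_tag_async(name))

    async def lookup_tag_async(self, name: str) -> str:
        await asyncio.sleep(0)  # stand-in for real I/O
        return f"snapshot-for-{name}"
```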
- Icechunk's default log level is now `warn`, instead of `error`.
- Emit a log warning and recommendation when manifests are too large for the configured cache size, which makes Icechunk less performant.
- Add property accessors to `ManifestFileInfo`
- We increased the size of the default asset caches:
  - Snapshot nodes: 10k -> 30k
  - Chunk references: 5M -> 15M
- Validate URLs on `set_virtual_ref`
There are two minor API breaking changes that will affect only virtual dataset users:
- To improve security, the `url_prefix` of virtual chunk containers must now be declared with a final `/` character. This protects, for example, users from authorizing access to the `foo` prefix and inadvertently authorizing access to `foo-production`.
- `set_virtual_ref` and `set_virtual_refs` now default to `validate_container = True`. This improves usability for repository writers, with an early error when they forget to create their virtual chunk containers.
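To see why the trailing `/` matters, note that plain string-prefix matching also matches sibling paths that merely share the prefix as a substring (an illustrative check, not Icechunk's implementation):

```python
def is_authorized(url: str, url_prefix: str) -> bool:
    # Naive prefix check, used here only to demonstrate the pitfall.
    return url.startswith(url_prefix)
```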
- Bad byte range addressing for inlined chunks in sharded arrays.
- Significantly improved write performance in low-latency object stores, for single-threaded concurrent tasks.
- Fix issue where some GCS and Azure credentials could not be pickled
- Garbage collection can now be executed in dry run mode with `Repository.garbage_collection(..., dry_run=True)`.
- Support for Ceph Object Gateway.
- Better request retries for GCS object store.
- Better error message when local clock has drifted too far.
- Various documentation improvements and fixes.
- Significant improvements to the performance of `Repository.garbage_collection` and `Repository.total_chunks_storage`. Both methods now accept optional arguments for parallelization. Performance has improved 30x for some real-world datasets with many manifests.
Icechunk 1.0 is here! This version represents our commitment to stability, performance, and reliability.
Whether you're processing satellite imagery, running ML pipelines on massive datasets, or building the next generation of scientific computing applications, Icechunk provides the transactional, versioned, cloud-native storage layer you need.
This is a release candidate for Icechunk 1.0
This version has some minor API changes, but it is on-disk format compatible with any version in the 0.2.x series. Repositories written with previous 0.2.x versions are fully compatible with repositories written with this release candidate.
Changes to code may be needed for users who use distributed coordinated sessions or virtual datasets. Please refer to our migration guide.
- New `Repository.transaction` context manager creates and commits write sessions upon exit.
- Easier manifest preload and split configuration, combining conditions with the `|` and `&` magic methods.
- More secure and explicit control over virtual chunk resolution. This improvement motivated the API changes in `Repository.open` and `Repository.create` for virtual datasets.
- Data loss under certain conditions when distributed sessions are started with a "dirty" session. This issue motivated the API change introducing `Session.fork`.
- Implement new Zarr method: `Store.with_read_only`.
- Update Spec documentation to the modern on-disk format based on flatbuffers.
This is a feature-packed release, and yet the most important change is in the performance section. We have put a lot of work into implementing manifest splitting, which significantly improves performance for very large arrays.
- Virtual chunks can now be resolved using the HTTP(S) protocol. This extends virtual datasets to data stored outside of an object store.
- Users can now configure how they want to retry failed requests to the object store. See StorageRetriesSettings.
- Support dynamically changing log levels. See the `set_logs_filter` function.
- Support for anonymous access to GCS buckets.
- The first snapshot now has a known id. This allows checking whether a directory is an Icechunk repo by looking for the `1CECHNKREP0F1RSTCMT0` path in the `snapshots` subdir.
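For a repo on a local filesystem, that check can be sketched as a simple existence probe (hypothetical helper; on object stores the equivalent would be an existence request on the same key):

```python
import os
import tempfile  # used only in the demonstration below

FIRST_SNAPSHOT_ID = "1CECHNKREP0F1RSTCMT0"

def looks_like_icechunk_repo(root: str) -> bool:
    # The first snapshot has a fixed id, so its presence under `snapshots/`
    # marks the directory as an Icechunk repository.
    return os.path.exists(os.path.join(root, "snapshots", FIRST_SNAPSHOT_ID))
```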
- Large arrays can now have multiple manifests, significantly improving memory usage, read, and write time performance. See the relevant section in the documentation.
- GCS sessions that use bearer token can now be serialized.
- Log error when deletion fails during garbage collection.
- Multipart upload for objects > 100MB on Icechunk-native Storage instances. `object_store` Storage instances are not supported yet.
- Compatibility with `dask>=2025.4.1`
- `Session.commit` can now do a `rebase`, by passing a `rebase_with` argument.
- `get_credentials` functions can now be instructed to scatter the first set of credentials to all pickled copies of the repo or session. This speeds up short-lived distributed tasks, which no longer need to execute the first call to `get_credentials`.
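The idea can be sketched with a toy credentials holder: with scattering enabled, the first token is fetched eagerly and travels inside the pickle, so copies skip their first fetch. All names here are illustrative, not Icechunk's API:

```python
import pickle

calls = {"n": 0}

def fetch_credentials() -> str:
    # Stand-in for an expensive user-provided get_credentials function.
    calls["n"] += 1
    return f"token-{calls['n']}"

class RefreshableCredentials:
    def __init__(self, scatter_initial_credentials: bool = False):
        # With scattering, the first credentials are fetched now and get
        # carried along inside every pickled copy.
        self._cached = fetch_credentials() if scatter_initial_credentials else None

    def current(self) -> str:
        if self._cached is None:
            self._cached = fetch_credentials()
        return self._cached

creds = RefreshableCredentials(scatter_initial_credentials=True)
copy = pickle.loads(pickle.dumps(creds))  # the copy carries the token
```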
- Honor Storage settings during repository configuration update
- Object's storage class can now be configured for S3-compatible repos
- Fix a performance regression in writes to GCS and local file storage
- Distributed writes can now use Dask's native reduction.
- Disallow creating tags/branches pointing to non-existing snapshots.
- `SnapshotInfo` now contains information about the snapshot manifests.
- More logging for garbage collection and expiration.
- Fix type annotations for the `Diff` type.
- Extra commit metadata can now optionally be set on the repository itself. Useful for properties such as commit author.
- New `Repository.lookup_snapshot` helper method.
- Garbage collection and expiration now produce logs.
- More aggressive commit squashing during snapshot expiration.
- Garbage collection cleans the assets cache, so the same repository can be reused after GC.
- Bug in snapshot expiration that created a commit loop for out-of-range input timestamps.
This version is only partially released, not all Python wheels are released to PyPI. We recommend upgrading to 0.2.10.
- Add support for virtual chunks in Google Cloud Storage. Currently, credentials are needed to access GCS buckets, even if they are public. We'll allow anonymous access in a future version.
- New `Repository.total_chunks_storage` method to calculate the space used by all chunks in the repo, across all versions.
- The Rust library is compiled using rustc 1.85.
- Up to 3x faster chunk upload for small chunks on GCS.
- CLI issues a 0 exit code when using `--help`
- Install Icechunk CLI by default.
This is Icechunk's second 1.0 release candidate. This release is backwards compatible with repositories created using any Icechunk version in the 0.2.X series.
- Icechunk got a new CLI executable that allows basic management of repos. Try it by running `icechunk --help`.
- Use new Tigris features to provide consistency guarantees on par with other object stores.
- New `force_path_style` option for S3 Storage. Used by certain object stores that cannot use the more modern addressing style.
- More documentation.
This is Icechunk's first 1.0 release candidate. This release is backwards compatible with repositories created using any Icechunk version in the 0.2.X series.
- Result of garbage collection informs how many bytes were freed from storage.
- Executable Python documentation.
- Support for `allow_pickling` in nested contexts.
- Fixes a bug where object storage paths were incorrectly formatted when using Windows.
- `Repository` can now be pickled.
- `icechunk.print_debug_info()` now prints out relevant information about the installed version of icechunk and related dependencies.
- `icechunk.Storage` now supports `__repr__`. Only configuration values will be printed, no credentials.
- Fixes a missing export for Google Cloud Storage credentials.
- Added the ability to checkout a session `as_of` a specific time. This is useful for replaying what the repo would have looked like at a specific point in time.
- Support for refreshable Google Cloud Storage credentials.
- Fix a bug where the clean prefix detection was hiding other errors when creating repositories.
- The API now consistently uses `snapshot_id` instead of `snapshot`.
- Only write `content-type` to metadata files if the target object store supports it.
- Users can now override consistency defaults. With this, Icechunk is usable in a larger set of object stores, including those without support for conditional updates. In this setting, Icechunk loses some of its consistency guarantees. These configuration variables are for advanced users only, and should only be changed if necessary for compatibility.

  ```python
  class StorageSettings:
      ...

      @property
      def unsafe_use_conditional_update(self) -> bool | None: ...

      @property
      def unsafe_use_conditional_create(self) -> bool | None: ...

      @property
      def unsafe_use_metadata(self) -> bool | None: ...
  ```
This release is focused on stabilizing Icechunk's on-disk serialization format. It's a non-backwards compatible change, hopefully the last one. Data written with previous versions must be reingested to be read with Icechunk 0.2.0.
- `Repository.ancestry` now returns an iterator, allowing interrupting the traversal of the version tree at any point.
- New on-disk format using flatbuffers makes it easier to document and implement (de-)serialization. This enables the creation of alternative readers and writers for the Icechunk format.
- `Repository.readonly_session` interprets its first positional argument as a branch name:

  ```python
  # before:
  repo.readonly_session(branch="dev")

  # after:
  repo.readonly_session("dev")

  # still possible:
  repo.readonly_session(tag="v0.1")
  repo.readonly_session(branch="foo")
  repo.readonly_session(snapshot_id="NXH3M0HJ7EEJ0699DPP0")
  ```

- Icechunk is now more resilient to changes in the Zarr metadata spec, and can handle Zarr extensions.
- More documentation.
- We have improved our benchmarks, making them more flexible and effective at finding possible regressions.
- New `Store.set_virtual_refs` method allows setting multiple virtual chunks for the same array. This significantly speeds up the creation of virtual datasets.
- Fix a bug in clean prefix detection
- Repositories can now evaluate the `diff` between two snapshots.
- Sessions can show the current `status` of the working copy.
- Adds the ability to specify bearer tokens for authenticating with Google Cloud Storage.
- Don't write `dimension_names` to the zarr metadata if no dimension names are set. Previously, `null` was written.
- Improved error messages. Exceptions raised by Icechunk now include much more information on what happened and what Icechunk was doing when the exception was raised.
- Icechunk now generates logs. Set the environment variable `ICECHUNK_LOG=icechunk=debug` to print debug logs to stdout. Available "levels" in order of increasing verbosity are `error`, `warn`, `info`, `debug`, `trace`. The default level is `error`.
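The filter string follows the familiar `target=level` shape; a toy parser shows the structure (the real filter syntax is richer, this is only a sketch):

```python
LEVELS = ("error", "warn", "info", "debug", "trace")  # increasing verbosity

def parse_log_filter(spec: str) -> dict[str, str]:
    """Parse a filter like "icechunk=debug" into {"icechunk": "debug"}."""
    out: dict[str, str] = {}
    for part in spec.split(","):
        target, _, level = part.partition("=")
        if level in LEVELS:
            out[target.strip()] = level
    return out
```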
- Icechunk can now be installed using `conda`:

  ```shell
  conda install -c conda-forge icechunk
  ```
- Optionally delete branches and tags that point to expired snapshots:

  ```python
  def expire_snapshots(
      self,
      older_than: datetime.datetime,
      *,
      delete_expired_branches: bool = False,
      delete_expired_tags: bool = False,
  ) -> set[str]: ...
  ```
- More documentation. See the Icechunk website.
- Faster `exists` zarr `Store` method.
- Implement `Store.getsize_prefix` method. This significantly speeds up `info_complete`.
- Default regular expression to preload manifests.
- Session deserialization error when using distributed writes
- Expiration and garbage collection. It's now possible to maintain only recent versions of the repository, reclaiming the storage used exclusively by expired versions.
- Allow an arbitrary map of properties on commits. Example:

  ```python
  session.commit("some message", metadata={"author": "icechunk-team"})
  ```

  These properties can be retrieved via `ancestry`.

- New `chunk_coordinates` function to list all initialized chunks in an array.
- It's now possible to delete tags. New tags with the same name won't be allowed, to preserve the immutability of snapshots pointed to by a tag.
- Safety checks on distributed writes via opt-in pickling of the store.
- More safety around snapshot timestamps, blocking commits if there is too much clock drift.
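The drift guard can be pictured as a simple threshold comparison between the local clock and a trusted reference time (hypothetical names and threshold; not Icechunk's actual values):

```python
from datetime import datetime, timedelta, timezone

MAX_DRIFT = timedelta(minutes=5)  # illustrative threshold

def check_clock_drift(local_now: datetime, reference_now: datetime) -> None:
    # Refuse to commit when the local clock disagrees too much with a
    # trusted reference (e.g. timestamps returned by the object store).
    if abs(local_now - reference_now) > MAX_DRIFT:
        raise RuntimeError("local clock has drifted too far; refusing to commit")
```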
- Don't allow creating repositories in dirty prefixes.
- Experimental support for the Tigris object store: it currently requires the bucket to be restricted to a single region to obtain the Icechunk consistency guarantees.
- This version is the first candidate for a stable on-disk format. At the moment, we are not planning to change the on-disk format prior to releasing v1, but we reserve the right to do so.
- Users must now opt in to pickling and unpickling of Session and IcechunkStore, using the `Session.allow_pickling` context manager.
- `to_icechunk` now accepts a Session, instead of an IcechunkStore.
- Preload small manifests that look like coordinate arrays on session creation.
- Faster `ancestry` in an async context via `async_ancestry`.
- Bad manifest split in unmodified arrays
- Documentation was updated to the latest API.
- Add a constructor to `RepositoryConfig`
- Now each array has its own chunk manifest, speeding up reads for large repositories.
- The snapshot now keeps track of the chunk space bounding box for each manifest.
- Configuration settings can now be overridden on a field-by-field basis. Example:

  ```python
  config = icechunk.RepositoryConfig(inline_chunk_threshold_byte=0)
  storage = ...
  repo = icechunk.Repository.open(
      storage=storage,
      config=config,
  )
  ```

  This will use 0 for `inline_chunk_threshold_byte`, but all other configuration fields will come from the repository's persistent config. If the persistent config is not set, configuration defaults will take its place.

- In preparation for on-disk format stability, all metadata files include extensive format information, including a set of magic bytes, file type, spec version, compression format, etc.
- Zarr's `getsize` got orders of magnitude faster because it's implemented natively and without any I/O.
- We added several performance benchmarks to the repository.
- Better configuration for metadata asset caches, now based on their sizes instead of their number.
- `from icechunk import *` no longer fails
- New `Repository.reopen` function to open a repo again, overwriting its configuration and/or virtual chunk container credentials.
- Configuration classes are now mutable and easier to use:

  ```python
  storage = ...
  config = icechunk.RepositoryConfig.default()
  config.storage.concurrency.ideal_concurrent_request_size = 1_000_000
  repo = icechunk.Repository.open(
      storage=storage,
      config=config,
  )
  ```

- The `ancestry` function can now receive a branch/tag name or a snapshot id.
- `set_virtual_ref` can now validate that the virtual chunk container exists.
- Better concurrent download of big chunks, both native and virtual
- We no longer allow the `main` branch to be deleted
- Adds support for Azure Blob Storage
- Manifests now load faster, due to an improved serialization format
- The store now releases the GIL appropriately in multithreaded contexts
- Large chunks are fetched concurrently
- `IcechunkStore.list_dir` is now significantly faster
- Support for Zarr 3.0 and xarray 2025.1.1
- Transaction logs and snapshot files are compressed
- Manifests compression using Zstd
- Large manifests are fetched using multiple parallel requests
- Functions to fetch and store repository config
- Faster `list_dir` and `delete_dir` implementations in the Zarr store
- Credentials from environment in GCS
- New Python API using `Repository`, `Session` and `Store` as separate entities
- New Python API for configuring and opening `Repositories`
- Added support for object store credential refresh
- Persistent repository config
- Commit conflict resolution and rebase support
- Added experimental support for Google Cloud Storage
- Add optional checksums for virtual chunks, using either ETag or last-updated-at
- Support for multiple virtual chunk locations using virtual chunk containers concept
- Added function `all_virtual_chunk_locations` to `Session` to retrieve all locations where the repo has data
- Refs were stored in the wrong prefix
- Allow overwriting existing groups and arrays in Icechunk stores
- Fixed an error during commits where chunks would get mixed between different arrays
- Sync with zarr 3.0b2. The biggest change is that the `mode` param on `IcechunkStore` methods has been simplified to `read_only`.
- Changed `IcechunkStore::distributed_commit` to `IcechunkStore::merge`, which now does not commit, but attempts to merge the changes from another store back into the current store.
- Added a new `icechunk.dask.store_dask` method to write a dask array to an icechunk store. This is required for safely writing dask arrays to an icechunk store.
- Added a new `icechunk.xarray.to_icechunk` method to write an xarray dataset to an icechunk store. This is required for safely writing xarray datasets with dask arrays to an icechunk store in a distributed or multi-processing context.
- The `StorageConfig` methods have been correctly typed.
- `IcechunkStore` instances are now set to `read_only` by default after pickling.
- When checking out a snapshot or tag, the `IcechunkStore` will be set to read-only. If you want to write to the store, you must call `IcechunkStore::set_writable()`.
- An error will now be raised if you try to checkout a snapshot that does not exist.
- Added `IcechunkStore::reset_branch` and `IcechunkStore::async_reset_branch` methods to point the head of the current branch to another snapshot, changing the history of the branch
- Zarr metadata will now only include the attributes key when the attributes dictionary of the node is not empty, aligning Icechunk with the python-zarr implementation.
- Initial release