
Compositionality in Time Series: A Proof of Concept using Symbolic Dynamics and Compositional Data Augmentation

Abstract

This work investigates whether time series of natural phenomena can be understood as being generated by sequences of latent states which are ordered in systematic and regular ways. We focus on clinical time series and ask whether clinical measurements can be interpreted as being generated by meaningful physiological states whose succession follows systematic principles. Uncovering the underlying compositional structure will allow us to create synthetic data to alleviate the notorious problem of sparse and low-resource data settings in clinical time series forecasting, and deepen our understanding of clinical data. We start by conceptualizing compositionality for time series as a property of the data generation process, and then study data-driven procedures that can reconstruct the elementary states and composition rules of this process. We evaluate the success of these methods using two empirical tests originating from a domain adaptation perspective. Both tests infer the similarity of the original and synthetic time series distributions from the similarity of the expected risk of time series forecasting models trained and tested on original and synthesized data in specific ways. Our experimental results show that the test set performance achieved by training on compositionally synthesized data is comparable to training on original clinical time series data, and that evaluating models on compositionally synthesized test data yields results similar to evaluating on original test data. In both experiments, performance based on compositionally synthesized data far surpasses that based on synthetic data created by randomization-based data augmentation. An additional downstream evaluation on the task of predicting sequential organ failure assessment (SOFA) scores shows significant performance gains when model training is entirely based on compositionally synthesized data compared to training on original data, with improvements increasing with the size of the synthesized training set.

Important Remarks

To create this repository, we used code from https://github.com/sindhura97/STraTS/ and https://github.com/cure-lab/LTSF-Linear.

However, the STraTS code received a major update between the submission and acceptance of this paper: whereas it previously contained a Keras implementation of STraTS, it now contains a PyTorch implementation. In our work, we performed our own conversion of the STraTS code to PyTorch.

We additionally used code from https://github.com/StatNLP/mlhc_2024_prediction_of_causes.

Installation

In theory, the following should be enough to reproduce our environment:

conda create --name strats_pytorch --file env.txt
conda activate strats_pytorch
pip install -r requirements.txt

Data Preprocessing

Preprocess MIMIC-III DB:

  1. Download MIMIC-III DB
  2. Run db-preprocessing/mimic3_01_preprocess_icu.py
  3. Run db-preprocessing/mimic3_02_preprocess_pickle.py
  4. Train an embedding model with db-preprocessing/train_embedding_model-mimic.py
  5. Run db-preprocessing/embeddings_mimic-full_clean.py

This will give you the gold train|valid|test splits in dense representation (NumPy arrays).
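For convenience, steps 2–5 can be chained in a single shell session. The following is a minimal sketch, assuming the scripts are run from the repository root inside the activated environment, read their paths from internal configuration, and take no required command-line arguments (check each script for its actual options):

# assumes: repo root, active strats_pytorch environment, no required arguments
conda activate strats_pytorch
python db-preprocessing/mimic3_01_preprocess_icu.py
python db-preprocessing/mimic3_02_preprocess_pickle.py
python db-preprocessing/train_embedding_model-mimic.py
python db-preprocessing/embeddings_mimic-full_clean.py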

Based on these files, you can use the following scripts, in order, to generate synthetic data (see the sketch after this list):

  1. preprocessing.py
  2. symbolize_embeddings.py
  3. symbolize_input.py
  4. synthesize_cutmix.py
  5. synthesize_cds.py
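These scripts can likewise be chained in order. A minimal sketch, under the same assumptions as above (repository root, active environment, no required arguments):

# assumes: repo root, active strats_pytorch environment, no required arguments
python preprocessing.py
python symbolize_embeddings.py
python symbolize_input.py
python synthesize_cutmix.py
python synthesize_cds.py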

After that, you can re-run all experiments with the scripts located in experiments/.
