This is a from-scratch implementation of PointMAE (arxiv.org/abs/2203.06604) that I used to generate the results for my presentation at AGU2024. It doesn't accompany a paper, but I've made it public so I can share it with people who've asked.
You can read some quick bullets about my findings below, or see the slides here. I put most of this work together from scratch in the ~6-8 weeks before the conference, so the graphics aren't super polished, but the information is there.
Findings:
- MAE-style pretraining on patches of fixed radius improved performance in all cases (see the patch-extraction sketch below).
- The improvement was similar or greater on regions that weren't in the pretraining data.
- A custom LR schedule with gradual unfreezing improved performance substantially, likely because labelled data was scarce (see the unfreezing sketch after this list).
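
For context on the first bullet, here is a minimal sketch of what fixed-radius patch extraction can look like. It assumes point clouds as (N, 3) NumPy arrays; the function name, random centre sampling, and default parameters are illustrative assumptions, not this repo's actual code:

```python
# Sketch of fixed-radius patch extraction for MAE-style pretraining.
# All names and defaults here are illustrative, not the repo's own.
import numpy as np
from scipy.spatial import cKDTree


def fixed_radius_patches(points, num_patches=64, radius=1.0, patch_size=32, rng=None):
    """Group an (N, 3) point cloud into patches of fixed metric radius."""
    rng = rng or np.random.default_rng()
    # Sample patch centres (random here; farthest-point sampling is a common alternative).
    centres = points[rng.choice(len(points), num_patches, replace=False)]
    tree = cKDTree(points)
    patches = []
    for c in centres:
        idx = tree.query_ball_point(c, r=radius)
        if not idx:
            idx = [int(tree.query(c)[1])]  # fall back to the nearest point
        # Resample each patch to a fixed size so patches batch cleanly.
        idx = rng.choice(idx, patch_size, replace=len(idx) < patch_size)
        patches.append(points[idx] - c)  # express points relative to the centre
    return centres, np.stack(patches)  # (P, 3) and (P, patch_size, 3)
```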
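And for the last bullet, a minimal sketch of the gradual-unfreezing idea, assuming a transformer-style model with an `encoder.blocks` ModuleList and a task `head`; the attribute names and the stage length are assumptions, not the exact recipe used here:

```python
# Sketch of gradual unfreezing during fine-tuning. `encoder.blocks` and
# `head` are assumed attribute names; the schedule is illustrative.
import torch.nn as nn


def set_trainable(module: nn.Module, trainable: bool) -> None:
    for p in module.parameters():
        p.requires_grad = trainable


def apply_unfreeze_schedule(model: nn.Module, epoch: int, epochs_per_stage: int = 5) -> None:
    """Unfreeze one more encoder block every `epochs_per_stage` epochs, deepest first."""
    blocks = list(model.encoder.blocks)
    n_unfrozen = min(len(blocks), epoch // epochs_per_stage)
    set_trainable(model.encoder, False)  # freeze the whole encoder first
    for block in blocks[len(blocks) - n_unfrozen:]:
        set_trainable(block, True)  # then unfreeze the deepest blocks
    set_trainable(model.head, True)  # the task head always trains
```

Called at the start of each epoch, this pairs naturally with per-parameter-group learning rates (e.g. smaller LRs for earlier blocks).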
Install Miniconda from here and then run the following commands to create the tlspt environment:
```bash
conda env create -f environment.yml
conda activate tlspt
```

Next, install the package:
```bash
pip install -e .
```

or, if you want development dependencies as well:
```bash
pip install -e .[dev]
```

Install pre-commit by running the following command to automatically run code formatting and linting before each commit:
```bash
pre-commit install
```

If using pre-commit, each time you commit your code will be formatted and linted, and checked for imports, merge conflicts, and more. If any of these checks fail, the commit will be aborted.
To add a new package to the environment, open the pyproject.toml file and add the package name to the "dependencies" list. Then run the following command to install it:
```bash
pip install -e .  # or .[dev]
```

Before training, point the `CACHE_DIR` environment variable at a directory to use for caching:

```bash
CACHE_DIR=/path/to/cache/dir
```

The training script is parameterised using Hydra. You can see the existing configs under `configs/example-config`.
The training script can then be run using:

```bash
python train.py --config-path /path/to/config
```
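
For orientation, here is a minimal sketch of what a Hydra entry point like train.py typically looks like; the printed config and the `CACHE_DIR` fallback are illustrative assumptions, not this repo's actual schema:

```python
# Minimal Hydra entry-point sketch. The printed config and the CACHE_DIR
# fallback are illustrative assumptions, not this repo's actual schema.
import os

import hydra
from omegaconf import DictConfig, OmegaConf


@hydra.main(version_base=None, config_path="configs", config_name="example-config")
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))  # show the fully composed config
    cache_dir = os.environ.get("CACHE_DIR", "/tmp/tlspt-cache")
    print(f"Caching to {cache_dir}")
    # ... build the datasets and model from cfg, then launch training ...


if __name__ == "__main__":
    main()
```

Hydra also lets you override individual config fields from the command line, e.g. `python train.py some.field=value`.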