A PyTorch Lightning re-implementation of Point-MAE [ECCV 2022, arxiv.org/abs/2203.06604] for pretraining and semantic segmentation of forest point clouds.


tlsPT

This is the scratch implementation of Point-MAE (arxiv.org/abs/2203.06604) that I used to generate the results for my presentation at AGU 2024. It doesn't belong to a paper, but I've made it public so that I can share it with people who've asked.

You can read some quick bullet points on my findings below, or see the slides here. I did most of this work from scratch within ~6-8 weeks of the conference, so the graphics aren't super polished, but the information is there.

Findings:

  • MAE-style pretraining on patches of fixed radius improved performance in all cases.
  • The improvement was similar or greater on regions that were not in the pretraining data.
  • A custom LR scheduler with gradual unfreezing improved performance considerably, likely because labels were scarce.
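The gradual-unfreezing idea in the last bullet can be sketched in plain Python. Group names and epoch thresholds below are hypothetical, not the values used in tlsPT; the pattern is simply to unfreeze parameter groups from the task head downward as training progresses:

```python
# Hypothetical gradual-unfreezing schedule: the segmentation head trains
# from the start, then deeper encoder blocks are unfrozen over time.
# Epoch thresholds are illustrative, not the values used in tlsPT.
UNFREEZE_AT = {
    "seg_head": 0,
    "encoder.block3": 5,
    "encoder.block2": 10,
    "encoder.block1": 15,
}


def trainable_groups(epoch):
    """Return the parameter groups that should be unfrozen at `epoch`."""
    return sorted(g for g, start in UNFREEZE_AT.items() if epoch >= start)
```

At each epoch a training loop would set `requires_grad = True` only for parameters belonging to the returned groups, typically alongside a lower learning rate for the newly unfrozen (pretrained) blocks.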


Installation

Install Miniconda from here, then run the following commands to create the tlspt environment:

conda env create -f environment.yml

conda activate tlspt

Next, install the package:

pip install -e .

or if you want development dependencies as well:

pip install -e .[dev]

Optional, but highly recommended

Install pre-commit with the following command so that code formatting and linting run automatically before each commit:

pre-commit install

If pre-commit is installed, each time you commit your code will be formatted and linted, and checked for import issues, merge conflicts, and more. If any of these checks fail, the commit is aborted.
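The hooks themselves are defined in a .pre-commit-config.yaml at the repository root. As an illustration only (the repo's actual hook set may differ), such a file typically looks like:

```yaml
# Illustrative .pre-commit-config.yaml -- not the repo's actual hook set
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-merge-conflict
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
```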

Adding a new package

To add a new package to the environment, open the pyproject.toml file and add the package name to the "dependencies" list. Then run the following command to install it:

pip install -e . # or .[dev]
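For illustration, the relevant part of pyproject.toml might look like this (the package names are placeholders, not the repo's actual dependency set):

```toml
[project]
name = "tlspt"
dependencies = [
    "torch",
    "pytorch-lightning",
    "hydra-core",
    "mynewpackage",  # newly added package goes here
]
```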

Environment variables

Set the CACHE_DIR environment variable to the directory where data should be cached, e.g. in a .env file:

CACHE_DIR=/path/to/cache/dir

Running train.py

The training script is parameterised using hydra. You can see the existing configs under configs/example-config.

The training script can then be run using

python train.py --config-path /path/to/config
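Hydra configs are plain YAML. A rough sketch is below; the keys and values are hypothetical (see configs/example-config for the real structure), and the `oc.env` resolver shown is how OmegaConf can pull in an environment variable such as CACHE_DIR:

```yaml
# Hypothetical hydra config sketch -- not an actual tlsPT config
model:
  name: pointmae
  mask_ratio: 0.6            # fraction of patches masked during pretraining
data:
  cache_dir: ${oc.env:CACHE_DIR}
trainer:
  max_epochs: 100
```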
