
Parameter finetuning vs Output finetuning #14

Open
carsonswope opened this issue Oct 5, 2022 · 0 comments

@carsonswope
It seems that running gradient descent on the depth prediction network accounts for the majority of this method's runtime. The current MiDaS implementation (v3?) has about 1.3 GB of parameters, most of which belong to the DPT-Large (https://github.com/isl-org/DPT) backbone.

In your research, did you experiment with the performance difference between 'parameter finetuning' and simple 'output finetuning' of the depth predictions (as discussed in the GLNet paper, https://arxiv.org/pdf/1907.05820.pdf)?
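To make sure I'm describing the distinction clearly, here is a rough PyTorch-style sketch of what I mean; `model`, `frames`, and `consistency_loss` are placeholders on my part, not references to the actual code in this repo:

```python
import torch

# Parameter finetuning: every test-time optimization step backpropagates
# through the full network, so all weights (including the large DPT backbone)
# are updated against the consistency loss.
def parameter_finetune(model, frames, consistency_loss, steps=20, lr=1e-5):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        depths = [model(f) for f in frames]
        loss = consistency_loss(depths)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Output finetuning (GLNet-style): run the network once, then treat the
# predicted depth maps themselves as the optimization variables while the
# network weights stay fixed, avoiding repeated passes through the backbone.
def output_finetune(model, frames, consistency_loss, steps=20, lr=1e-2):
    with torch.no_grad():
        depths = [model(f) for f in frames]
    depths = [torch.nn.Parameter(d) for d in depths]
    opt = torch.optim.Adam(depths, lr=lr)
    for _ in range(steps):
        loss = consistency_loss(depths)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return [d.detach() for d in depths]
```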

As a middle ground, I would also be curious whether finetuning just the 'head' of the MiDaS network would be sufficient, leaving the much larger set of backbone parameters frozen.
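For example, something along these lines is what I have in mind; the attribute names are an assumption based on my reading of the DPT code and may not match this repo exactly:

```python
import torch

# Hypothetical middle ground: lock the large ViT backbone and only optimize
# the head. The attribute names (model.pretrained for the backbone,
# model.scratch for the refinement/output head) are assumptions.
def freeze_backbone(model):
    for p in model.pretrained.parameters():
        p.requires_grad = False

def head_only_optimizer(model, lr=1e-5):
    head_params = [p for p in model.scratch.parameters() if p.requires_grad]
    return torch.optim.Adam(head_params, lr=lr)
```

That would keep the per-step gradient computation small while still letting the depth predictions adapt to each scene.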

Thanks!
