The following code works with:
- float16 or float32 on CUDA
- float32 on MPS

When I execute it with float16 on MPS, however, all the patch embeddings come out identical.
```python
features = self.embedder.get_intermediate_layers(
    processed_frame,
    n=1,
    reshape=True,
    return_class_token=False,
    norm=True,
)
embedding = features[0].squeeze().detach()
```
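
For reference, here is a minimal standalone sketch of the comparison I am describing. It assumes the embedder is a DINOv2 backbone loaded via torch.hub (the `get_intermediate_layers` kwargs above match that API); the `dinov2_vits14` model name and the 224x224 random input are placeholders, not my actual setup:

```python
import torch

# Assumption: a DINOv2 backbone from torch.hub; swap in your own embedder.
device = "mps"
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model = model.to(device).eval()

x = torch.rand(1, 3, 224, 224, device=device)  # 224/14 = 16x16 patch grid

for dtype in (torch.float32, torch.float16):
    with torch.no_grad():
        feats = model.to(dtype).get_intermediate_layers(
            x.to(dtype),
            n=1,
            reshape=True,
            return_class_token=False,
            norm=True,
        )
    patches = feats[0].squeeze().flatten(1)  # (C, H*W): one column per patch
    # If every patch embedding is identical, the per-channel std across
    # patches is ~0; a healthy run shows a clearly nonzero spread.
    print(dtype, patches.float().std(dim=1).max().item())
```

If the std is near zero for float16 but not for float32 on MPS, that reproduces the collapse. A possible workaround (an assumption on my part, not a confirmed fix) is to keep the model and input in float32 on MPS and downcast only the resulting embedding, since half-precision coverage on the MPS backend is less mature than on CUDA.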