v1.0.0 #390
KevinMusgrave started this conversation in General
Reference embeddings for tuple losses
You can now separate the source of anchors and positives/negatives. In the example below, anchors are selected from `embeddings`, and positives/negatives are selected from `ref_emb`.
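A minimal sketch, assuming `ContrastiveLoss` and random tensors in place of real model outputs:

```python
import torch
from pytorch_metric_learning import losses

loss_func = losses.ContrastiveLoss()

# anchors come from this batch of embeddings
embeddings = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

# positives/negatives come from this separate set of embeddings
ref_emb = torch.randn(64, 128)
ref_labels = torch.randint(0, 10, (64,))

loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```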
Efficient mode for DistributedLossWrapper
- `efficient=True`: each process uses its own embeddings for anchors, and the gathered embeddings for positives/negatives. Gradients will not be equal to those in non-distributed code, but the benefit is reduced memory usage and faster code.
- `efficient=False`: each process uses the gathered embeddings for both anchors and positives/negatives. Gradients will be equal to those in non-distributed code, but at the cost of doing unnecessary operations (i.e. computations where neither anchors nor positives/negatives have gradients).

The default is `False`. You can set it to `True` like this:
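A minimal sketch of the wrapper call (the loss choice is a placeholder, and this would run inside each distributed process):

```python
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

loss_func = losses.ContrastiveLoss()
# wrap the loss so embeddings are gathered across processes;
# efficient=True makes each process use only its own embeddings as anchors,
# with the gathered embeddings as positives/negatives
loss_func = pml_dist.DistributedLossWrapper(loss_func, efficient=True)
```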
Documentation: https://kevinmusgrave.github.io/pytorch-metric-learning/distributed/
Customizing k-nearest-neighbors for AccuracyCalculator
You can use a different type of faiss index:
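For example, a sketch passing a `FaissKNN` object with a non-default index class (the inner-product index and GPU list here are illustrative choices):

```python
import faiss
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
from pytorch_metric_learning.utils.inference import FaissKNN

# use an inner-product faiss index instead of the default L2 index,
# and run the k-nn search on GPUs 0 and 1
knn_func = FaissKNN(index_init_fn=faiss.IndexFlatIP, gpus=[0, 1])
ac = AccuracyCalculator(knn_func=knn_func)
```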
You can also use a PML distance object:
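A sketch using `CustomKNN` with a PML distance object (`SNRDistance` is just an example choice):

```python
from pytorch_metric_learning.distances import SNRDistance
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
from pytorch_metric_learning.utils.inference import CustomKNN

# compute k-nearest-neighbors with a PML distance object instead of faiss
knn_func = CustomKNN(SNRDistance())
ac = AccuracyCalculator(knn_func=knn_func)
```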
Relevant docs: see the AccuracyCalculator documentation.
Issues resolved
#204
#251
#256
#292
#330
#337
#345
#347
#349
#353
#359
#361
#362
#363
#368
#376
#380
Contributors
Thanks to @yutanakamura-tky and @KinglittleQ for pull requests, and @mensaochun for providing helpful code in #380.
This discussion was created from the release v1.0.0.