CONTENTS.md
| [**IntraPairVarianceLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#intrapairvarianceloss) | [Deep Metric Learning with Tuplet Margin Loss](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yu_Deep_Metric_Learning_With_Tuplet_Margin_Loss_ICCV_2019_paper.pdf)
| [**LargeMarginSoftmaxLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#largemarginsoftmaxloss) | [Large-Margin Softmax Loss for Convolutional Neural Networks](https://arxiv.org/pdf/1612.02295.pdf)
| [**ManifoldLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) | [Ensemble Deep Manifold Similarity Learning using Hard Proxies](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aziere_Ensemble_Deep_Manifold_Similarity_Learning_Using_Hard_Proxies_CVPR_2019_paper.pdf)
| [**MarginLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#marginloss) | [Sampling Matters in Deep Embedding Learning](https://arxiv.org/pdf/1706.07567.pdf)
| [**MultiSimilarityLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#multisimilarityloss) | [Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf)
| [**NormalizedSoftmaxLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#normalizedsoftmaxloss) | - [NormFace: L2 Hypersphere Embedding for Face Verification](https://arxiv.org/pdf/1704.06369.pdf) <br/> - [Classification is a Strong Baseline for Deep Metric Learning](https://arxiv.org/pdf/1811.12649.pdf)
| [**NPairsLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#npairsloss) | [Improved Deep Metric Learning with Multi-class N-pair Loss Objective](http://www.nec-labs.com/uploads/images/Department-Images/MediaAnalytics/papers/nips16_npairmetriclearning.pdf)
| [**NTXentLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss) | - [Representation Learning with Contrastive Predictive Coding](https://arxiv.org/pdf/1807.03748.pdf) <br/> - [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/pdf/1911.05722.pdf) <br/> - [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/abs/2002.05709)
| [**P2SGradLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss) | [P2SGrad: Refined Gradients for Optimizing Deep Face Models](https://arxiv.org/abs/1905.02479)
| [**PNPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss) | [Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough](https://arxiv.org/pdf/2102.04640.pdf)
| [**ProxyAnchorLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyanchorloss) | [Proxy Anchor Loss for Deep Metric Learning](https://arxiv.org/pdf/2003.13911.pdf)
| [**ProxyNCALoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyncaloss) | [No Fuss Distance Metric Learning using Proxies](https://arxiv.org/pdf/1703.07464.pdf)
README.md
## News
**June 18**: v2.2.0
- Added [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) and [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss).
- Added a `symmetric` flag to [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss).
- See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v2.2.0).
- Thanks to contributors [domenicoMuscill0](https://github.com/domenicoMuscill0) and [interestingzhuo](https://github.com/interestingzhuo).
**January 29**: v2.0.0
- Added SelfSupervisedLoss, plus various API improvements. See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v2.0.0).
- Thanks to contributor [cwkeam](https://github.com/cwkeam).
## Documentation
Thanks to the contributors who made pull requests!
|[mlopezantequera](https://github.com/mlopezantequera)| - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
|[cwkeam](https://github.com/cwkeam)| - [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss) <br/> - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) <br/> - BaseLossWrapper|
* **loss**: The loss per positive pair in the batch. Reduction type is ```"pos_pair"```.
## ManifoldLoss
[Ensemble Deep Manifold Similarity Learning using Hard Proxies](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aziere_Ensemble_Deep_Manifold_Similarity_Learning_Using_Hard_Proxies_CVPR_2019_paper.pdf)
```python
losses.ManifoldLoss(
    l: int,
    K: int = 50,
    lambdaC: float = 1.0,
    alpha: float = 0.8,
    margin: float = 5e-4,
    **kwargs
)
```
**Parameters**
* **l**: embedding size.
* **K**: number of proxies.
* **lambdaC**: regularization weight. Used in the formula `loss = intrinsic_loss + lambdaC*context_loss`. If `lambdaC=0`, then it uses only the intrinsic loss. If `lambdaC=np.inf`, then it uses only the context loss.
* **alpha**: parameter of the random walk. Must be in the range `(0,1)`. It specifies the amount of similarity between neighboring nodes.
* **margin**: margin used in the calculation of the loss.
Example usage:
```python
loss_fn = ManifoldLoss(128)

# use random cluster centers
loss = loss_fn(embeddings)

# or specify indices of embeddings to use as cluster centers
loss = loss_fn(embeddings, indices_tuple=indices)
```
**Important notes**
`labels`, `ref_emb`, and `ref_labels` are not supported for this loss function.
In addition, `indices_tuple` is **not** for the output of miners. Instead, it is for a list of indices of embeddings to be used as cluster centers.
**Default reducer**:
- This loss returns an **already reduced** loss.
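Below is a self-contained sketch of the usage above. The batch size, embedding size, number of proxies, and the way the center indices are sampled are illustrative assumptions, not values prescribed by the paper or the library.

```python
import torch
from pytorch_metric_learning import losses

# Illustrative batch: 32 embeddings of size 128.
embeddings = torch.randn(32, 128, requires_grad=True)

loss_fn = losses.ManifoldLoss(l=128, K=10)

# Option 1: let the loss pick random cluster centers from the batch.
loss = loss_fn(embeddings)

# Option 2: pass explicit indices of embeddings to use as cluster centers.
# Note that indices_tuple is a list of center indices here, not miner output.
center_indices = torch.randperm(32)[:10].tolist()
loss = loss_fn(embeddings, indices_tuple=center_indices)

loss.backward()  # the returned loss is already reduced to a scalar
```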
## MarginLoss
[Sampling Matters in Deep Embedding Learning](https://arxiv.org/pdf/1706.07567.pdf){target=_blank}
## P2SGradLoss

**Parameters**

* **num_classes**: The number of classes in your training dataset.
Example usage:
```python
loss_fn = P2SGradLoss(128, 10)
loss = loss_fn(embeddings, labels)
```
**Important notes**
`indices_tuple`, `ref_emb`, and `ref_labels` are not supported for this loss function.
**Default reducer**:
- This loss returns an **already reduced** loss.
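A self-contained version of the example above, for orientation; the batch size, embedding size, and number of classes are illustrative assumptions.

```python
import torch
from pytorch_metric_learning import losses

# Illustrative setup: 128-dimensional embeddings and 10 classes.
embeddings = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 10, (32,))

loss_fn = losses.P2SGradLoss(128, 10)  # embedding size, number of classes
loss = loss_fn(embeddings, labels)
loss.backward()  # the returned loss is already reduced to a scalar
```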
## PNPLoss
[Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough](https://arxiv.org/pdf/2102.04640.pdf){target=_blank}
## SelfSupervisedLoss
A common use case is to have `embeddings` and `ref_emb` be augmented versions of each other. For most losses, you have to create labels to indicate which `embeddings` correspond with which `ref_emb`.
`SelfSupervisedLoss` is a wrapper that takes care of this by creating labels internally. It assumes that:
- `ref_emb[i]` is an augmented version of `embeddings[i]`.
- `ref_emb[i]` is the only augmented version of `embeddings[i]` in the batch.
**Parameters**

* **symmetric**: If `True`, then the embeddings in both `embeddings` and `ref_emb` are used as anchors. If `False`, then only the embeddings in `embeddings` are used as anchors.
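A minimal sketch of the wrapper with the `symmetric` flag; wrapping `TripletMarginLoss` and the tensor shapes are illustrative choices, not requirements.

```python
import torch
from pytorch_metric_learning import losses

# Wrap any compatible loss; TripletMarginLoss is used purely as an illustration.
loss_fn = losses.SelfSupervisedLoss(losses.TripletMarginLoss(), symmetric=True)

# Two augmented views of the same 32 inputs (shapes are illustrative).
embeddings = torch.randn(32, 128, requires_grad=True)  # view 1
ref_emb = torch.randn(32, 128, requires_grad=True)     # view 2: ref_emb[i] pairs with embeddings[i]

loss = loss_fn(embeddings, ref_emb)  # labels are created internally
loss.backward()
```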