
Commit

DmitryRyumin committed Dec 20, 2023
1 parent 0295daf commit 7ac9cb0
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -239,7 +239,7 @@ Contributions to improve the completeness of this list are greatly appreciated.
<a href="https://github.com/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/54-1D7FBF" alt="Open Code"></a>
</td>
<td>
-<a href="/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/43-FF0000" alt="Videos"></a>
+<a href="/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/44-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
2 changes: 1 addition & 1 deletion sections/self-supervised-or-unsupervised-representation-learning.md
@@ -14,7 +14,7 @@

## Self-Supervised or Unsupervised Representation Learning

-![Section Papers](https://img.shields.io/badge/Section%20Papers-71-42BA16) ![Preprint Papers](https://img.shields.io/badge/Preprint%20Papers-58-b31b1b) ![Papers with Open Code](https://img.shields.io/badge/Papers%20with%20Open%20Code-54-1D7FBF) ![Papers with Video](https://img.shields.io/badge/Papers%20with%20Video-43-FF0000)
+![Section Papers](https://img.shields.io/badge/Section%20Papers-71-42BA16) ![Preprint Papers](https://img.shields.io/badge/Preprint%20Papers-58-b31b1b) ![Papers with Open Code](https://img.shields.io/badge/Papers%20with%20Open%20Code-54-1D7FBF) ![Papers with Video](https://img.shields.io/badge/Papers%20with%20Video-44-FF0000)

| **Title** | **Repo** | **Paper** | **Video** |
|-----------|:--------:|:---------:|:---------:|
@@ -63,7 +63,7 @@
| Texture-guided Saliency Distilling for Unsupervised Salient Object Detection | [![GitHub](https://img.shields.io/github/stars/moothes/A2S-v2)](https://github.com/moothes/A2S-v2) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhou_Texture-Guided_Saliency_Distilling_for_Unsupervised_Salient_Object_Detection_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2207.05921-b31b1b.svg)](http://arxiv.org/abs/2207.05921) | :heavy_minus_sign: |
| Multi-Realism Image Compression with a Conditional Generator | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Agustsson_Multi-Realism_Image_Compression_With_a_Conditional_Generator_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2212.13824-b31b1b.svg)](http://arxiv.org/abs/2212.13824) | :heavy_minus_sign: |
| Understanding Masked Autoencoders via Hierarchical Latent Variable Models <br /> ![CVPR - Highlight](https://img.shields.io/badge/CVPR-Highlight-FFFF00) | [![GitHub](https://img.shields.io/github/stars/martinmamql/mae_understand)](https://github.com/martinmamql/mae_understand) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Kong_Understanding_Masked_Autoencoders_via_Hierarchical_Latent_Variable_Models_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2306.04898-b31b1b.svg)](https://arxiv.org/abs/2306.04898) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=004ojgVKYtA) |
-| GeoMAE: Masked Geometric Target Prediction for Self-Supervised Point Cloud Pre-Training | [![GitHub](https://img.shields.io/github/stars/Tsinghua-MARS-Lab/GeoMAE)](https://github.com/Tsinghua-MARS-Lab/GeoMAE) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Tian_GeoMAE_Masked_Geometric_Target_Prediction_for_Self-Supervised_Point_Cloud_Pre-Training_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2305.08808-b31b1b.svg)](http://arxiv.org/abs/2305.08808) | :heavy_minus_sign: |
+| GeoMAE: Masked Geometric Target Prediction for Self-Supervised Point Cloud Pre-Training | [![GitHub](https://img.shields.io/github/stars/Tsinghua-MARS-Lab/GeoMAE)](https://github.com/Tsinghua-MARS-Lab/GeoMAE) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Tian_GeoMAE_Masked_Geometric_Target_Prediction_for_Self-Supervised_Point_Cloud_Pre-Training_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2305.08808-b31b1b.svg)](http://arxiv.org/abs/2305.08808) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=ZJ7ukv1-WEk) |
| Siamese DETR | [![GitHub](https://img.shields.io/github/stars/Zx55/SiameseDETR)](https://github.com/Zx55/SiameseDETR) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Huang_Siamese_DETR_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.18144-b31b1b.svg)](http://arxiv.org/abs/2303.18144) | :heavy_minus_sign: |
| Generalizable Implicit Neural Representations via Instance Pattern Composers <br /> ![CVPR - Highlight](https://img.shields.io/badge/CVPR-Highlight-FFFF00) | [![GitHub](https://img.shields.io/github/stars/kakaobrain/ginr-ipc)](https://github.com/kakaobrain/ginr-ipc) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Kim_Generalizable_Implicit_Neural_Representations_via_Instance_Pattern_Composers_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2211.13223-b31b1b.svg)](http://arxiv.org/abs/2211.13223) | :heavy_minus_sign: |
| Pose-Disentangled Contrastive Learning for Self-Supervised Facial Representation | [![GitHub](https://img.shields.io/github/stars/DreamMr/PCL)](https://github.com/DreamMr/PCL) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Liu_Pose-Disentangled_Contrastive_Learning_for_Self-Supervised_Facial_Representation_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2211.13490-b31b1b.svg)](http://arxiv.org/abs/2211.13490) | :heavy_minus_sign: |
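The counts in the badges above are not computed anywhere; each one is baked directly into a shields.io static-badge URL, which is why bumping the video tally from 43 to 44 has to touch both README.md and this section file. A minimal sketch of how such a URL is assembled (the `static_badge` helper is hypothetical; note that a literal dash inside the badge text would additionally need escaping as `--`):

```python
def static_badge(*parts: str) -> str:
    """Build a shields.io static-badge URL.

    shields.io encodes the badge text in the URL path as
    <label>-<message>-<color>, or just <message>-<color> when no label
    is given; spaces are written as %20.
    """
    encoded = [p.replace(" ", "%20") for p in parts]
    return "https://img.shields.io/badge/" + "-".join(encoded)

# The two video-count badges touched by this commit, after the bump to 44:
assert static_badge("44", "FF0000") == "https://img.shields.io/badge/44-FF0000"
assert (static_badge("Papers with Video", "44", "FF0000")
        == "https://img.shields.io/badge/Papers%20with%20Video-44-FF0000")
```

Because the same count appears in two files, regenerating both URLs from a single number is one way to keep them in sync.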
