Commit

DmitryRyumin committed Dec 20, 2023
1 parent 2843263 commit f08a3f1
Showing 2 changed files with 2 additions and 2 deletions.
README.md (2 changes: 1 addition & 1 deletion)
@@ -239,7 +239,7 @@ Contributions to improve the completeness of this list are greatly appreciated.
<a href="https://github.com/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/57-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/46-FF0000" alt="Videos"></a>
<a href="/DmitryRyumin/CVPR-2023-Papers/blob/main/sections/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/47-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
sections/self-supervised-or-unsupervised-representation-learning.md (2 changes: 1 addition & 1 deletion)
@@ -86,6 +86,6 @@
| ToThePoint: Efficient Contrastive Learning of 3D Point Clouds via Recycling | [![GitHub](https://img.shields.io/github/stars/Lyccl/Tothepoint)](https://github.com/Lyccl/Tothepoint) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_ToThePoint_Efficient_Contrastive_Learning_of_3D_Point_Clouds_via_Recycling_CVPR_2023_paper.pdf) | :heavy_minus_sign: |
| MetaViewer: Towards a Unified Multi-View Representation | [![GitHub](https://img.shields.io/github/stars/xxLifeLover/MetaViewer)](https://github.com/xxLifeLover/MetaViewer) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_MetaViewer_Towards_a_Unified_Multi-View_Representation_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.06329-b31b1b.svg)](http://arxiv.org/abs/2303.06329) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=RbFTH8G-w1U) |
| Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture | [![GitHub](https://img.shields.io/github/stars/facebookresearch/ijepa)](https://github.com/facebookresearch/ijepa) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Assran_Self-Supervised_Learning_From_Images_With_a_Joint-Embedding_Predictive_Architecture_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2301.08243-b31b1b.svg)](http://arxiv.org/abs/2301.08243) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=gPlXDlFn0U4) |
- | Understanding Masked Image Modeling via Learning Occlusion Invariant Feature <br /> ![CVPR - Highlight](https://img.shields.io/badge/CVPR-Highlight-FFFF00) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Kong_Understanding_Masked_Image_Modeling_via_Learning_Occlusion_Invariant_Feature_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2208.04164-b31b1b.svg)](http://arxiv.org/abs/2208.04164) | :heavy_minus_sign: |
+ | Understanding Masked Image Modeling via Learning Occlusion Invariant Feature <br /> ![CVPR - Highlight](https://img.shields.io/badge/CVPR-Highlight-FFFF00) | :heavy_minus_sign: | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Kong_Understanding_Masked_Image_Modeling_via_Learning_Occlusion_Invariant_Feature_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2208.04164-b31b1b.svg)](http://arxiv.org/abs/2208.04164) | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/watch?v=rqyhxBz_xYg) |
| CHMATCH: Contrastive Hierarchical Matching and Robust Adaptive Threshold Boosted Semi-Supervised Learning | [![GitHub](https://img.shields.io/github/stars/sailist/CHMatch)](https://github.com/sailist/CHMatch) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Wu_CHMATCH_Contrastive_Hierarchical_Matching_and_Robust_Adaptive_Threshold_Boosted_Semi-Supervised_CVPR_2023_paper.pdf) | :heavy_minus_sign: |
| Regularize Implicit Neural Representation by Itself <br /> ![CVPR - Highlight](https://img.shields.io/badge/CVPR-Highlight-FFFF00) | [![GitHub](https://img.shields.io/github/stars/YannickStruempler/inr_based_compression)](https://github.com/YannickStruempler/inr_based_compression) | [![thecvf](https://img.shields.io/badge/pdf-thecvf-7395C5.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_Regularize_Implicit_Neural_Representation_by_Itself_CVPR_2023_paper.pdf) <br /> [![arXiv](https://img.shields.io/badge/arXiv-2303.15484-b31b1b.svg)](http://arxiv.org/abs/2303.15484) | :heavy_minus_sign: |
