
Commit 5c24e4b

Add ISVC2024 Supplementary Materials
Parent: 2362481

File tree

7 files changed: +17205 −0 lines


isvc2024/accjacmax.html

Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script src="js/jquery-latest.js" type="text/javascript"></script>
<script src="js/bootstrap.min.js" type="text/javascript"></script>
<link rel="stylesheet" href="css/bootstrap.css">
<title>ISVC 2024: Unsupervised Effectiveness Estimation Measure Based on Rank Correlation for Image Retrieval</title>
<style>
    .text {
        text-align: justify;
    }
</style>
</head>
<body>
<h1><b>Unsupervised Effectiveness Estimation Measure Based on Rank Correlation for Image Retrieval</b></h1>

<h4><b>Authors: Thiago César Castilho Almeida, <a href="https://lucasvalem.com" target="_blank">Lucas Pascotti Valem</a>, <a href="https://www.ic.unicamp.br/~dcarlos/" target="_blank">Daniel Carlos Guimarães Pedronette</a></b></h4>

<h4><b>In the 19th International Symposium on Visual Computing (<a href="https://isvc.net" target="_blank">ISVC 2024</a>), Lake Tahoe, NV, USA</b></h4>

<p class="text">
<b>Abstract:</b> In recent years, the amount of image data has increased exponentially, driven by advancements in digital technologies. As the volume of data expands, the effort required for labeling also escalates, which is costly and time-consuming.
This highlights the critical need for methods capable of delivering effective results with few or no labels at all.
In unsupervised retrieval, the task of Query Performance Prediction (QPP) is crucial and challenging, as it involves estimating the effectiveness of a query without labeled data.
Despite being promising, QPP approaches are still largely unexplored for image retrieval.
Additionally, recent approaches require training and do not exploit rank correlation to model the data.
To address this gap, we propose a novel QPP measure named Accumulated JaccardMax, which considers contextual similarity information and innovates by exploiting a recent rank correlation measure to assess the effectiveness of ranked lists.
It provides a robust estimation by analyzing the ranked lists at different neighborhood depths and does not require any training or labeled data.
Extensive experiments were conducted across 5 datasets and over 20 different features, including hand-crafted descriptors (e.g., color, shape, and texture) and deep learning models (e.g., Convolutional Networks and Vision Transformers).
The results reveal that the proposed unsupervised measure exhibits a high correlation with the Mean Average Precision (MAP) in most cases, achieving results that are better than or comparable to the baseline approaches in the literature.
</p>
<h4><b>Supplementary Files:</b></h4>

<p class="text">
You can access the supplementary material PDF, which includes comprehensive results and detailed illustrations.
The code for our proposed approach is also available on GitHub.
</p>

<center>
<a href="content/supmat_accjacmax.pdf" target="_blank" class="btn btn-primary btn-xl"><b>Supplementary Material (PDF)</b></a>&nbsp;&nbsp;
<a href="https://github.com/lucasPV/AccJacMax" target="_blank" class="btn btn-success btn-xl"><b>Code Available (GitHub)</b></a>&nbsp;&nbsp;
</center>

<br>
<h4><b>Citation:</b></h4>
<p>
If you use this work, please cite it as follows:
</p>
<pre>
@inproceedings{Almeida2024AccJacMax,
  author    = {Thiago César Castilho Almeida and Lucas Pascotti Valem and Daniel Carlos Guimarães Pedronette},
  title     = {Unsupervised Effectiveness Estimation Measure Based on Rank Correlation for Image Retrieval},
  booktitle = {19th International Symposium on Visual Computing (ISVC)},
  year      = {2024},
  address   = {Lake Tahoe, NV, USA},
}</pre>
</body>

</html>
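
For context, the abstract above describes an unsupervised QPP measure that accumulates a Jaccard-based rank-correlation score over several neighborhood depths. The Python snippet below is only a rough sketch of that general idea; the function names, the max-over-neighbors aggregation, and the toy data are illustrative assumptions and do not reproduce the exact AccJacMax formulation, which is available in the GitHub repository linked on the page.

# Illustrative sketch only (NOT the exact AccJacMax formulation from the paper):
# an unsupervised QPP-style score that accumulates, over several neighborhood
# depths, the best Jaccard overlap between a query's top-k list and the
# top-k lists of its own neighbors. Names and aggregation are assumptions.

from typing import Dict, List


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0


def effectiveness_estimate(query: int,
                           ranked_lists: Dict[int, List[int]],
                           depths: List[int]) -> float:
    """Estimate ranked-list effectiveness for one query without labels."""
    score = 0.0
    for k in depths:
        top_k = ranked_lists[query][:k]
        # Compare the query's neighborhood with each neighbor's neighborhood
        # (the query itself is skipped) and keep the best overlap at depth k.
        overlaps = [jaccard(set(top_k), set(ranked_lists[n][:k]))
                    for n in top_k if n != query]
        score += max(overlaps) if overlaps else 0.0
    return score / len(depths)


# Toy usage: ranked lists (each item retrieves itself first) for 4 items.
ranked = {
    0: [0, 1, 2, 3],
    1: [1, 0, 3, 2],
    2: [2, 3, 0, 1],
    3: [3, 2, 1, 0],
}
print(effectiveness_estimate(0, ranked, depths=[2, 4]))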

isvc2024/ccl.html

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script src="js/jquery-latest.js" type="text/javascript"></script>
<script src="js/bootstrap.min.js" type="text/javascript"></script>
<link rel="stylesheet" href="css/bootstrap.css">
<title>ISVC 2024: Contrastive Loss based on Contextual Similarity for Image Classification</title>
<style>
    .text {
        text-align: justify;
    }
</style>
</head>
<body>
<h1><b>Contrastive Loss based on Contextual Similarity for Image Classification</b></h1>

<h4><b>Authors: <a href="https://lucasvalem.com" target="_blank">Lucas Pascotti Valem</a>, <a href="https://www.ic.unicamp.br/~dcarlos/" target="_blank">Daniel Carlos Guimarães Pedronette</a>, <a href="http://w3.uqo.ca/allimo01/" target="_blank">Mohand Said Allili</a></b></h4>

<h4><b>In the 19th International Symposium on Visual Computing (<a href="https://isvc.net" target="_blank">ISVC 2024</a>), Lake Tahoe, NV, USA</b></h4>

<p class="text">
<b>Abstract:</b> Contrastive learning has been extensively exploited in self-supervised and supervised learning due to its effectiveness in learning representations that distinguish between similar and dissimilar images.
It offers a robust alternative to cross-entropy by yielding more semantically meaningful image embeddings.
However, most contrastive losses rely on pairwise measures to assess the similarity between elements, ignoring more general neighborhood information that can be leveraged to enhance model robustness and generalization.
In this paper, we propose the Contextual Contrastive Loss (CCL) to replace pairwise image comparison by introducing a new contextual similarity measure using neighboring elements.
The CCL yields a more semantically meaningful image embedding, ensuring better separability of classes in the latent space.
Experimental evaluation on three datasets (Food101, MiniImageNet, and CIFAR-100) has shown that CCL yields superior results, achieving up to 10.76% relative gains in classification accuracy, particularly for fewer training epochs and limited training data.
This demonstrates the potential of our approach, especially in resource-constrained scenarios.
</p>
<h4><b>Supplementary Files:</b></h4>

<p class="text">
You can access the supplementary material PDF, which includes comprehensive results and detailed illustrations.
The code for our proposed approach is also available on GitHub.
</p>

<center>
<a href="content/supmat_ccl.pdf" target="_blank" class="btn btn-primary btn-xl"><b>Supplementary Material (PDF)</b></a>&nbsp;&nbsp;
<a href="https://github.com/lucasPV/CCL" target="_blank" class="btn btn-success btn-xl"><b>Code Available (GitHub)</b></a>&nbsp;&nbsp;
</center>

<br>
<h4><b>Citation:</b></h4>
<p>
If you use this work, please cite it as follows:
</p>
<pre>
@inproceedings{Valem2024CCL,
  author    = {Lucas Pascotti Valem and Daniel Carlos Guimarães Pedronette and Mohand Said Allili},
  title     = {Contrastive Loss based on Contextual Similarity for Image Classification},
  booktitle = {19th International Symposium on Visual Computing (ISVC)},
  year      = {2024},
  address   = {Lake Tahoe, NV, USA},
}</pre>
</body>

</html>
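
For context, the abstract above describes a contrastive loss that replaces purely pairwise comparison with a contextual similarity built from neighboring elements. The Python snippet below is only a rough sketch of that general idea; the top-k neighborhood overlap, the Jaccard-style weighting, and the supervised-contrastive aggregation are illustrative assumptions and do not reproduce the exact CCL formulation, which is available in the GitHub repository linked on the page.

# Illustrative sketch only (NOT the exact CCL formulation from the paper):
# a supervised-contrastive-style loss in which positive pairs are weighted by
# a contextual similarity built from top-k neighborhood overlap rather than
# by raw pairwise similarity alone. All design choices here are assumptions.

import torch
import torch.nn.functional as F


def contextual_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                k: int = 5,
                                temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) features; labels: (N,) integer class ids."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                      # pairwise similarities

    # Contextual similarity: Jaccard-like overlap between top-k neighbor sets.
    n = z.size(0)
    topk = sim.topk(k, dim=1).indices                  # (N, k) neighbor ids
    nb = torch.zeros(n, n, device=z.device).scatter_(1, topk, 1.0)
    inter = nb @ nb.t()                                # |N_i ∩ N_j|
    context = inter / (2 * k - inter).clamp(min=1)     # values in [0, 1]

    # Supervised-contrastive aggregation with context-weighted positives.
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')),
                                     dim=1, keepdim=True)
    weights = context * pos
    return -((weights * log_prob).sum(1) / weights.sum(1).clamp(min=1e-8)).mean()


# Toy usage with random features and 4 classes of 2 samples each.
feats = torch.randn(8, 16)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(contextual_contrastive_loss(feats, labels).item())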
2.47 MB — binary file not shown

isvc2024/content/supmat_ccl.pdf

862 KB — binary file not shown
