<html xmlns="http://www.w3.org/1999/xhtml"><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>
JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting
</title>
<link href="css/index.css" rel="stylesheet" type="text/css">
<!-- Global site tag (gtag.js) - Google Analytics -->
</head>
<body>
<div class="container">
<p><span class="title">JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting</span></p>
<br>
<table border="0" align="center" class="authors">
<tbody><tr align="center">
<td><a href="https://rmokady.github.io/">Ron Mokady</a></td>
<td><a href="mailto:rotemtzaban@mail.tau.ac.il">Rotem Tzaban</a></td>
<td><a href="https://sagiebenaim.github.io/">Sagie Benaim</a></td>
<td><a href="https://www.cs.tau.ac.il/~amberman/">Amit Bermano</a></td>
<td><a href="https://www.cs.tau.ac.il/~dcor/">Daniel Cohen-Or</a></td>
</tr>
</tbody></table>
<p class="section"> </p>
<table width="200" border="0" align="center">
<tbody><tr>
<td><img src="images/teaser.png" width="950" alt=""></td>
</tr>
<tr>
<td class="caption"><p>Motion retargeting results from a single video pair. The input videos (top) are used to generate the retargeting (bottom). As can be seen, each generated frame corresponds to the motion portrayed by the source, while keeping the style of the target.</p></td>
</tr>
</tbody></table>
<div class="container">
<video class="teaser_video" controls>
<source src="videos/cat_fox_vid.mp4" type="video/mp4"/>
</video>
</div>
<p class="section"> </p>
<p><span class="section">Abstract</span></p>
<p>The task of unsupervised motion retargeting in videos has seen substantial advancements through the use of deep neural networks. While early works concentrated on specific object priors such as a human face or body, recent work considered the unsupervised case. When the source and target videos, however, are of different shapes, current methods fail. To alleviate this problem, we introduce JOKR - a JOint Keypoint Representation that captures the motion common to both the source and target videos, without requiring any object prior or data collection. By employing a domain confusion term, we enforce the unsupervised keypoint representations of both videos to be indistinguishable. This encourages disentanglement between the parts of the motion that are common to the two domains, and their distinctive appearance and motion, enabling the generation of videos that capture the motion of the one while depicting the style of the other. To enable cases where the objects are of different proportions or orientations, we apply a learned affine transformation between the JOKRs. This augments the representation to be affine invariant, and in practice broadens the variety of possible retargeting pairs. This geometry-driven representation enables further intuitive control, such as temporal coherence and manual editing. Through comprehensive experimentation, we demonstrate the applicability of our method to different challenging cross-domain video pairs. We evaluate our method both qualitatively and quantitatively, and demonstrate that our method handles various cross-domain scenarios, such as different animals, different flowers, and humans. We also demonstrate superior temporal coherency and visual quality compared to state-of-the-art alternatives, through statistical metrics and a user study.<br>
</p>
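To make the domain confusion term concrete, the following is a minimal illustrative sketch in PyTorch. It is our own assumption of how such a term could look, not the authors' implementation: a small discriminator tries to tell source keypoints from target keypoints, and the (flipped-label) loss below would be minimized by the keypoint encoders so that the two keypoint distributions become indistinguishable. The names `DomainDiscriminator` and `confusion_loss` are hypothetical.

```python
import torch
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    """Predicts which video (source vs. target) a set of 2D keypoints came from."""
    def __init__(self, n_keypoints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_keypoints * 2, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # one logit: source vs. target
        )

    def forward(self, kp):
        # kp: (batch, n_keypoints, 2) -> flatten to (batch, n_keypoints * 2)
        return self.net(kp.flatten(1))

def confusion_loss(disc, kp_src, kp_tgt):
    """Adversarial term for the keypoint encoders: labels are flipped so that
    minimizing this loss pushes the discriminator toward confusion, i.e. the
    two keypoint representations become indistinguishable."""
    bce = nn.functional.binary_cross_entropy_with_logits
    ones = torch.ones(kp_src.size(0), 1)    # source keypoints labeled as "target"
    zeros = torch.zeros(kp_tgt.size(0), 1)  # target keypoints labeled as "source"
    return bce(disc(kp_src), ones) + bce(disc(kp_tgt), zeros)
```

In a full pipeline the discriminator itself would be trained with the true labels, alternating with the encoders, as is standard for adversarial objectives.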
<p class="section">Cross-Domain Motion Retargeting</p>
<p>We demonstrate our results on unsupervised cross-domain motion retargeting, compared against other methods; for more details, refer to the paper.<br>
</p>
<div class="container">
<video class="comparison_video" controls>
<source src="videos/comparisons/cat_fox.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/comparisons/chi_fox.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/comparisons/horse_deer.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/comparisons/tiger_cat2.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/comparisons/cow_fox.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/comparisons/zebra_deer.mp4" type="video/mp4"/>
</video>
</div>
<p class="section">Paper</p>
<table border="0">
<tbody>
<tr>
<td><a href="paper.pdf"><img src="images/paper.png" alt="" width="200"></a></td>
<td> </td>
<td><p>[<a href="paper.pdf">PDF</a>]</p></td>
</tr>
</tbody>
</table>
<p class="section">Code</p>
<table border="0">
<tbody>
<tr>
<td><a href="https://github.com/rmokady/JOKR"><img src="images/code.png" alt="" width="200"></a></td>
<td> </td>
<td><p>[<a href="https://github.com/rmokady/JOKR">Link</a>]</p></td>
</tr>
</tbody>
</table>
<!--
<p class="section">5 Minute Video</p>
<iframe width="960" height="540" src="images/video.mp4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
-->
<p class="section">Other Results</p>
<p>We also show that our method generalizes to various other domains.<br>
</p>
<div class="container">
<video class="comparison_video" controls>
<source src="videos/additional_videos/gif_726_stack.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/additional_videos/gif_279.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/comparisons/edn_full.mp4" type="video/mp4"/>
</video>
<video class="comparison_video" controls>
<source src="videos/additional_videos/flowers.mp4" type="video/mp4"/>
</video>
</div>
<p class="section">Keypoint Interpretability</p>
<p>To demonstrate that our keypoint representation is interpretable, we present editing results obtained by moving a single keypoint or a pair of keypoints. <br>
</p>
<div class="container">
<video width="400" height="400" controls>
<source src="videos/editing/cat_front_leg2.mp4" type="video/mp4">
</video>
<video width="400" height="400" controls>
<source src="videos/editing/cat_head_tail2.mp4" type="video/mp4">
</video>
<video width="400" height="400" controls>
<source src="videos/editing/cat_rear_leg2.mp4" type="video/mp4">
</video>
<video width="400" height="400" controls>
<source src="videos/editing/fox_front_leg2.mp4" type="video/mp4">
</video>
<video width="400" height="400" controls>
<source src="videos/editing/deer_head2.mp4" type="video/mp4">
</video>
<video width="400" height="400" controls>
<source src="videos/editing/horse_leg2.mp4" type="video/mp4">
</video>
</div>
<p class="section">Temporal Regularization Ablation Study</p>
<p>The following video shows the effect of omitting the temporal regularization.<br>
</p>
<div class="container">
<video width="400" height="400" controls>
<source src="videos/temporal_consistency/temporal_consistency.mp4" type="video/mp4">
</video>
</div>
<p class="section"> </p>
<p align="center" class="date">Last updated: June 2021</p>
</div>
</body></html>