fix
rmokady committed Jun 16, 2021
1 parent e763e7f commit 8108f5b
Showing 2 changed files with 32 additions and 35 deletions.
20 changes: 10 additions & 10 deletions .idea/workspace.xml


47 changes: 22 additions & 25 deletions index.html
</tr>
</tbody></table>

<p class="section">&nbsp;</p>
<table width="200" border="0" align="center">
<tbody><tr>
<td><img src="images/teaser.png" width="950" alt=""></td>
</tr>
<tr>
<td class="caption"><p>Motion retargeting results from a single video pair. The input videos (top) are used to generate the retargeting (bottom). As can be seen, each generated frame corresponds to the motion portrayed by the source, while keeping the style of the target.</p></td>
</tr>
</tbody></table>
<p class="section">&nbsp;</p>
<p><span class="section">Abstract</span></p>
<p>The task of unsupervised motion retargeting in videos has seen substantial advancements through the use of deep neural networks. While early works concentrated on specific object priors such as a human face or body, recent work considered the unsupervised case. When the source and target videos, however, are of different shapes, current methods fail. To alleviate this problem, we introduce JOKR - a JOint Keypoint Representation that captures the motion common to both the source and target videos, without requiring any object prior or data collection. By employing a domain confusion term, we enforce the unsupervised keypoint representations of both videos to be indistinguishable. This encourages disentanglement between the parts of the motion that are common to the two domains, and their distinctive appearance and motion, enabling the generation of videos that capture the motion of the one while depicting the style of the other. To enable cases where the objects are of different proportions or orientations, we apply a learned affine transformation between the JOKRs. This augments the representation to be affine invariant, and in practice broadens the variety of possible retargeting pairs. This geometry-driven representation enables further intuitive control, such as temporal coherence and manual editing. Through comprehensive experimentation, we demonstrate the applicability of our method to different challenging cross-domain video pairs. We evaluate our method both qualitatively and quantitatively, and demonstrate that our method handles various cross-domain scenarios, such as different animals, different flowers, and humans. We also demonstrate superior temporal coherency and visual quality compared to state-of-the-art alternatives, through statistical metrics and a user study.<br>
</p>
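The abstract above names two concrete ingredients: a domain confusion term that makes the two videos' keypoint representations indistinguishable, and an affine transformation that aligns the JOKRs when the objects differ in proportion or orientation. The sketch below is written from the abstract alone, not from the paper's code: the function names, toy keypoints, and the closed-form least-squares fit (standing in for the affine transform that JOKR learns during training) are all illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_confusion_loss(logits):
    """Cross-entropy of a domain discriminator's predictions against a
    uniform (1/2, 1/2) target. It is minimal exactly when the discriminator
    outputs 0.5 everywhere, i.e. cannot tell which video a keypoint
    representation came from. Illustrative, not the paper's exact term."""
    p = np.clip(sigmoid(np.asarray(logits, dtype=float)), 1e-12, 1 - 1e-12)
    return float(np.mean(-0.5 * (np.log(p) + np.log(1.0 - p))))

def fit_affine(src, dst):
    """Least-squares 2D affine (A, t) such that dst ~= src @ A.T + t,
    for (n, 2) keypoint arrays. A closed-form stand-in for the learned
    affine map between the two JOKRs."""
    ones = np.ones((src.shape[0], 1))
    W, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    return W[:2].T, W[2]  # A is (2, 2), t is (2,)

# Toy keypoints: the "target" pose differs from the "source" pose by a
# rotation, a uniform scale, and a shift -- proportions/orientation only.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 2))
c, s = np.cos(0.3), np.sin(0.3)
A_true = 1.5 * np.array([[c, -s], [s, c]])
t_true = np.array([2.0, -1.0])
dst = src @ A_true.T + t_true

# Recover the affine map that aligns the two keypoint sets.
A, t = fit_affine(src, dst)
```

At the optimum of the confusion term the discriminator is maximally uncertain (loss log 2), and the recovered affine map reproduces the target keypoints exactly on this noise-free toy data.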
<!--
<p class="section">5 Minute Video</p>
<iframe width="960" height="540" src="images/video.mp4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
-->
<p class="section">Paper</p>

<table border="0">
<tbody>
<tr>
<td><a href="paper.pdf"><img src="images/paper.png" alt="" width="200"></a></td>
<td>&nbsp;</td>
<td><p>[<a href="paper.pdf">PDF</a>]</p></td>
</tr>
</tbody>
</table>
</tr>
</tbody>
</table>
<p class="section">Applications</p>
<table border="0" align="center">
<tbody>
<tr>
<td>As detailed in the paper, our method can be used to generate structurally aligned output, as well as in different applications. Here are a few examples; see the paper for additional results.</td>
</tr>
<tr>
<td><br><img src="images/results.jpg" alt="" width="1000"></td>
</tr>
</tbody>
</table>
<p class="section">&nbsp;</p>
<p><span class="section">Abstract</span></p>
<p>The task of unsupervised motion retargeting in videos has seen substantial advancements through the use of deep neural networks. While early works concentrated on specific object priors such as a human face or body, recent work considered the unsupervised case. When the source and target videos, however, are of different shapes, current methods fail. To alleviate this problem, we introduce JOKR - a JOint Keypoint Representation that captures the motion common to both the source and target videos, without requiring any object prior or data collection. By employing a domain confusion term, we enforce the unsupervised keypoint representations of both videos to be indistinguishable. This encourages disentanglement between the parts of the motion that are common to the two domains, and their distinctive appearance and motion, enabling the generation of videos that capture the motion of the one while depicting the style of the other. To enable cases where the objects are of different proportions or orientations, we apply a learned affine transformation between the JOKRs. This augments the representation to be affine invariant, and in practice broadens the variety of possible retargeting pairs. This geometry-driven representation enables further intuitive control, such as temporal coherence and manual editing. Through comprehensive experimentation, we demonstrate the applicability of our method to different challenging cross-domain video pairs. We evaluate our method both qualitatively and quantitatively, and demonstrate that our method handles various cross-domain scenarios, such as different animals, different flowers, and humans. We also demonstrate superior temporal coherency and visual quality compared to state-of-the-art alternatives, through statistical metrics and a user study.<br>
</p>
<!--
<p class="section">5 Minute Video</p>
<iframe width="960" height="540" src="images/video.mp4" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
-->



<p class="section">Cross-Domain Motion Retargeting</p>
<p>The task of unsupervised motion retargeting in videos<br>
