Markdown files and example colab for Conv3d-based inbetweening TFHub modules for BAIR and KTH.

PiperOrigin-RevId: 273249523
TensorFlow Hub Authors authored and andresusanopinto committed Oct 7, 2019
1 parent fe1965a commit a5484c6
254 changes: 254 additions & 0 deletions examples/colab/tweening_conv3d.ipynb
@@ -0,0 +1,254 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "wC0PtNm3Sa_T"
},
"source": [
"**bold text**##### Copyright 2019 The TensorFlow Hub Authors.\n",
"\n",
"Licensed under the Apache License, Version 2.0 (the \"License\");"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "hgOqPjRKSa-7"
},
"outputs": [],
"source": [
"# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# http://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License.\n",
"# =============================================================================="
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "oKAkxAYuONU6"
},
"source": [
"# Demo TF-Hub for video generation using Inbetweening 3d Convolutions\n",
"\n",
"Yunpeng Li, Dominik Roblek, and Marco Tagliasacchi. From Here to There: Video Inbetweening Using Direct 3D Convolutions, 2019.\n",
"\n",
"https://arxiv.org/abs/1905.10240\n",
"\n",
"\n",
"Current Hub characteristics:\n",
"- has models for BAIR Robot pushing videos and KTH action video dataset (though this colab uses only BAIR)\n",
"- BAIR dataset already available in Hub. However, KTH videos need to be supplied by the users themselves.\n",
"- only evaluation (video generation) for now\n",
"- batch size and frame size are hard-coded\n",
"- Tensorflow 1.x only\n",
"\n",
"IMPORTANT NOTE: If an error of \"Not enough disk space\" appears, it's likely due to the downloading of the full dataset (~30GB). In that case, we recommend either connecting via a local runtime or downloading the dataset to Google Drive and loading it manually instead of calling tfds.load()."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "GhIKakhc7JYL"
},
"outputs": [],
"source": [
"from __future__ import absolute_import, division, print_function\n",
"\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"import tensorflow_hub as hub\n",
"import tensorflow_datasets as tfds"
]
},
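{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"The next cell is an optional sanity check, not part of the original demo: since the modules are TensorFlow 1.x only, it simply verifies the runtime's TensorFlow version."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code"
},
"outputs": [],
"source": [
"# Optional sanity check: these Hub modules require TensorFlow 1.x.\n",
"print('TensorFlow version:', tf.__version__)\n",
"assert tf.__version__.startswith('1.'), 'This colab requires TensorFlow 1.x.'"
]
},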
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "iaGU8hhBPi_6"
},
"source": [
"## BAIR: Demo based on numpy array inputs"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "both",
"colab": {},
"colab_type": "code",
"id": "IgWmW8YzEiDo"
},
"outputs": [],
"source": [
"# @title Load some example data (BAIR).\n",
"batch_size = 16\n",
"with tf.Graph().as_default():\n",
" # If unable to download the dataset automatically due to \"not enough disk space\", please download manually to Google Drive and\n",
" # load using tf.data.TFRecordDataset.\n",
" ds = tfds.load('bair_robot_pushing_small', split='test')\n",
" test_videos = ds.batch(batch_size).make_one_shot_iterator().get_next()['image_aux1'][:, ::15]\n",
" with tf.train.MonitoredSession() as sess:\n",
" input_frames = sess.run(test_videos)"
]
},
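{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"*Optional:* the next cell is a hedged sketch of the manual-loading path mentioned in the comment above: reading BAIR test TFRecords from Google Drive with `tf.data.TFRecordDataset` instead of `tfds.load()`. The Google Drive path and the per-frame feature keys (`<t>/image_aux1/encoded`, raw `uint8` bytes) are assumptions about the commonly used BAIR TFRecord layout, not part of the original demo; adjust them to match your copy of the data."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code"
},
"outputs": [],
"source": [
"# (Optional) Sketch: load BAIR test data manually from Google Drive.\n",
"# Assumptions: the raw BAIR TFRecords were copied to the (hypothetical) Drive path\n",
"# below and use the common per-frame feature layout '<t>/image_aux1/encoded' with\n",
"# raw uint8 bytes. Mount Drive first with:\n",
"#   from google.colab import drive; drive.mount('/content/drive')\n",
"RECORD_GLOB = '/content/drive/My Drive/bair/softmotion30_44k/test/*.tfrecords'\n",
"\n",
"def parse_first_and_last_frame(serialized_example):\n",
"  \"\"\"Extracts frames 0 and 15 of the aux1 camera stream as uint8 images.\"\"\"\n",
"  frame_ids = [0, 15]\n",
"  features = {\n",
"      '%d/image_aux1/encoded' % t: tf.FixedLenFeature([1], tf.string)\n",
"      for t in frame_ids\n",
"  }\n",
"  parsed = tf.parse_single_example(serialized_example, features)\n",
"  frames = [\n",
"      tf.reshape(\n",
"          tf.decode_raw(parsed['%d/image_aux1/encoded' % t][0], tf.uint8),\n",
"          [64, 64, 3]) for t in frame_ids\n",
"  ]\n",
"  return tf.stack(frames)  # Shape: [2, 64, 64, 3].\n",
"\n",
"# Uncomment to use instead of the tfds-based cell above.\n",
"# with tf.Graph().as_default():\n",
"#   manual_ds = tf.data.TFRecordDataset(tf.gfile.Glob(RECORD_GLOB))\n",
"#   manual_videos = (manual_ds.map(parse_first_and_last_frame)\n",
"#                    .batch(batch_size).make_one_shot_iterator().get_next())\n",
"#   with tf.train.MonitoredSession() as sess:\n",
"#     input_frames = sess.run(manual_videos)"
]
},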
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"cellView": "form",
"colab": {},
"colab_type": "code",
"id": "96Jd5XefGHRr"
},
"outputs": [],
"source": [
"# @title Visualize loaded videos start and end frames.\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import seaborn as sns\n",
"\n",
"print('Test videos shape [batch_size, start/end frame, height, width, num_channels]: ', input_frames.shape)\n",
"sns.set_style('white')\n",
"plt.figure(figsize=(4, 2*batch_size))\n",
"\n",
"for i in range(batch_size)[:4]:\n",
" plt.subplot(batch_size, 2, 1 + 2*i)\n",
" plt.imshow(input_frames[i, 0])\n",
" plt.title('Video {}: First frame'.format(i))\n",
" plt.axis('off')\n",
" plt.subplot(batch_size, 2, 2 + 2*i)\n",
" plt.imshow(input_frames[i, 1])\n",
" plt.title('Video {}: Last frame'.format(i))\n",
" plt.axis('off')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "w0FFhkikQABy"
},
"source": [
"### Load Hub Module"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "N3__RcL9J5aG"
},
"outputs": [],
"source": [
"tf.reset_default_graph()"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "cLAUiWfEQAB5"
},
"outputs": [],
"source": [
"hub_handle = 'https://tfhub.dev/google/tweening_conv3d_bair/1'\n",
"module = hub.Module(hub_handle)"
]
},
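{
"cell_type": "markdown",
"metadata": {
"colab_type": "text"
},
"source": [
"Optionally (not part of the original demo), the loaded module's signature can be inspected to confirm the hard-coded batch and frame sizes mentioned above; `get_signature_names()`, `get_input_info_dict()` and `get_output_info_dict()` are standard accessors of the TF1-style `hub.Module`."
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code"
},
"outputs": [],
"source": [
"# Optional: print the expected input/output tensor shapes of the module.\n",
"print('Signatures:', module.get_signature_names())\n",
"print('Inputs:', module.get_input_info_dict())\n",
"print('Outputs:', module.get_output_info_dict())"
]
},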
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "PVHTdXnhbGsK"
},
"source": [
"### Generate and show the videos"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "04LPFDmRQAB7"
},
"outputs": [],
"source": [
"# Expected inputs as a placeholder.\n",
"inputs_ph = tf.placeholder(dtype=tf.float32, shape=(None, 2, None, None, 3))\n",
"outputs = module(inputs_ph)\n",
"\n",
"# Run graph for given numpy arrays.\n",
"with tf.train.MonitoredSession() as sess:\n",
" filled_frames = sess.run(outputs, feed_dict={inputs_ph: input_frames}).astype(np.uint8)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "tVesWHTnSW1Z"
},
"outputs": [],
"source": [
"# Show sequences of generated video frames.\n",
"\n",
"# Concatenate start/end frames and the generated filled frames for the new videos.\n",
"generated_videos = np.concatenate([input_frames[:, :1], filled_frames, input_frames[:, 1:]], axis=1)\n",
"\n",
"for i in range(4):\n",
" fig = plt.figure(figsize=(10*2, 2))\n",
" for j in range(1, 16):\n",
" ax = fig.add_axes([j*1/16., 0, (j+1)*1/16., 1], xmargin=0, ymargin=0)\n",
" ax.imshow(generated_videos[i, j])\n",
" ax.axis('off')"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"last_runtime": {
"build_target": "//learning/brain/python/client:colab_notebook",
"kind": "private"
},
"name": "Inbetweening TF-Hub Module.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 2",
"name": "python2"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
