WaveNet FP32 inference

Description

This document has instructions for running WaveNet FP32 inference using Intel-optimized TensorFlow.

Quick Start Scripts

Script name         Description
fp32_inference.sh   Runs inference with a pretrained model

Run the model

Set up your environment using the instructions below. Note that AI Kit does not currently support TF 1.15.2 models, so this model is run without AI Kit.

To run without AI Kit you will need:

  • Python 3.6 or 3.7
  • git
  • numactl
  • wget
  • intel-tensorflow==1.15.2
  • librosa==0.5
  • A clone of the Model Zoo repo
    git clone https://github.com/IntelAI/models.git
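Since intel-tensorflow==1.15.2 requires Python 3.6 or 3.7, it can help to check the active interpreter before installing the dependencies. A minimal sketch (not part of the official instructions):

```shell
# Check that the active Python is 3.6 or 3.7, as required by
# intel-tensorflow==1.15.2.
pyver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
case "$pyver" in
  3.6|3.7) echo "Python $pyver is supported" ;;
  *)       echo "Python $pyver will not work with intel-tensorflow==1.15.2" ;;
esac
```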

In addition to the requirements specified above, you will also need a clone of the tensorflow-wavenet repo with pull request #352 for the CPU optimizations. The TF_WAVENET_DIR environment variable needs to be set to the path of the cloned repo before running a quickstart script.

git clone https://github.com/ibab/tensorflow-wavenet.git
cd tensorflow-wavenet/

git fetch origin pull/352/head:cpu_optimized
git checkout cpu_optimized
export TF_WAVENET_DIR=$(pwd)

cd ..
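As a quick sanity check (not part of the documented steps), you can confirm that TF_WAVENET_DIR points at a checkout of the cpu_optimized branch:

```shell
# Confirm TF_WAVENET_DIR points at the cpu_optimized branch of
# tensorflow-wavenet before running the quickstart script.
branch=$(git -C "$TF_WAVENET_DIR" symbolic-ref --short HEAD)
if [ "$branch" = "cpu_optimized" ]; then
  echo "tensorflow-wavenet checkout OK"
else
  echo "expected branch cpu_optimized, found: $branch" >&2
fi
```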

Download and extract the pretrained model checkpoint files.

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/wavenet_fp32_pretrained_model.tar.gz
tar -xvf wavenet_fp32_pretrained_model.tar.gz
export PRETRAINED_MODEL=$(pwd)/wavenet_checkpoints

Navigate to your Model Zoo directory, set the OUTPUT_DIR environment variable to the directory where logs will be written, and ensure that the TF_WAVENET_DIR and PRETRAINED_MODEL variables are set. Once this setup is done, you can run the fp32_inference.sh quickstart script.

# cd to your model zoo directory
cd models

export OUTPUT_DIR=<directory where log files will be written>
export TF_WAVENET_DIR=<tensorflow-wavenet directory>
export PRETRAINED_MODEL=<path to the downloaded and extracted checkpoints>

./quickstart/text_to_speech/tensorflow/wavenet/inference/cpu/fp32/fp32_inference.sh

Additional Resources