This document has instructions for running WaveNet FP32 inference using Intel-optimized TensorFlow.
| Script name | Description |
| --- | --- |
| fp32_inference.sh | Runs inference with a pretrained model |
Set up your environment using the instructions below, depending on whether you are using AI Kit:
| Setup using AI Kit | Setup without AI Kit |
| --- | --- |
| AI Kit does not currently support TF 1.15.2 models | To run without AI Kit you will need: |
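As a rough sketch of a suitable bare-metal environment, the steps below create a Python 3 virtual environment with an Intel-optimized TF 1.15.2 build; the exact package set (intel-tensorflow==1.15.2 and librosa for the WaveNet audio code) is an assumption, not a confirmed requirements list for this model:
```
# Hypothetical environment setup: package names and versions are assumptions,
# not a confirmed requirements list for this model.
python3 -m venv wavenet-env
source wavenet-env/bin/activate
pip install intel-tensorflow==1.15.2   # Intel-optimized TF 1.15.2 build
pip install librosa                    # audio library used by tensorflow-wavenet
```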
In addition to the requirements specified above, you will also need a clone
of the tensorflow-wavenet repo with pull request #352 applied for the CPU
optimizations. Set the TF_WAVENET_DIR environment variable to the path of
the cloned repo before running a quickstart script.
```
git clone https://github.com/ibab/tensorflow-wavenet.git
cd tensorflow-wavenet/
git fetch origin pull/352/head:cpu_optimized
git checkout cpu_optimized
export TF_WAVENET_DIR=$(pwd)
cd ..
```
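As an optional sanity check (not part of the original instructions), you can confirm that the CPU-optimized branch is checked out and that the variable points at the clone:
```
# Confirm the cpu_optimized branch is active in the cloned repo
git -C "$TF_WAVENET_DIR" rev-parse --abbrev-ref HEAD   # expected output: cpu_optimized
echo "$TF_WAVENET_DIR"                                 # should print the repo path
```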
Download and extract the pretrained model checkpoint files.
```
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/wavenet_fp32_pretrained_model.tar.gz
tar -xvf wavenet_fp32_pretrained_model.tar.gz
export PRETRAINED_MODEL=$(pwd)/wavenet_checkpoints
```
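Optionally, verify the extraction; the exact file names are not listed in this document, but a TensorFlow checkpoint directory typically contains a checkpoint file plus .index, .meta, and .data-* shards:
```
# List the extracted checkpoint directory to confirm PRETRAINED_MODEL is valid
ls -l "$PRETRAINED_MODEL"
```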
Navigate to your model zoo directory, then set the OUTPUT_DIR environment
variable to the directory where logs will be written, and ensure that the
TF_WAVENET_DIR and PRETRAINED_MODEL variables are set. Once this setup is
done, you can run the fp32_inference.sh quickstart script.
```
# cd to your model zoo directory
cd models

export OUTPUT_DIR=<directory where log files will be written>
export TF_WAVENET_DIR=<tensorflow-wavenet directory>
export PRETRAINED_MODEL=<path to the downloaded and extracted checkpoints>

./quickstart/text_to_speech/tensorflow/wavenet/inference/cpu/fp32/fp32_inference.sh
```
- To run more advanced use cases, see the instructions here for calling the launch_benchmark.py script directly; a sketch of such a call follows this list.
- To run the model using Docker, see the oneContainer workload container: https://software.intel.com/content/www/us/en/develop/articles/containers/wavenet-fp32-inference-tensorflow-container.html
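For reference, a direct launch_benchmark.py call might look like the sketch below; the flag set shown is assumed from the Model Zoo's common benchmarking options, not confirmed by this document, so treat the linked instructions as authoritative:
```
# Hypothetical direct invocation; flags are assumptions based on the
# Model Zoo's common launch_benchmark.py options, not confirmed here.
cd benchmarks
python launch_benchmark.py \
  --model-name wavenet \
  --precision fp32 \
  --mode inference \
  --framework tensorflow \
  --model-source-dir ${TF_WAVENET_DIR} \
  --checkpoint ${PRETRAINED_MODEL} \
  --output-dir ${OUTPUT_DIR}
```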