
add github workflow for performance benchmarking #1269

Merged
3 commits merged on Jan 14, 2021

.github/workflows/benchmarks.yml: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
name: API Performance Benchmarks

on:
  push:
    branches:
      - master
  schedule:
    - cron: "0 12 * * *"
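  # i.e. run on every push to master and once a day at 12:00 UTC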

jobs:

  macos:
    name: macOS ${{ matrix.python }} + ${{ matrix.version }}
    runs-on: macos-latest
    strategy:
      fail-fast: false
      matrix:
        python: ['3.8']
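        # each version entry is a "<tensorflow pin>:<tensorflow-io package to benchmark>" pair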
        version: ['tensorflow==2.4.0:tensorflow-io-nightly', 'tensorflow==2.4.0:tensorflow-io']
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python }}
      - name: Setup macOS
        run: |
          set -x -e
          python -m pip install -U wheel setuptools
          python --version
      - name: Test macOS
        run: |
          set -x -e
          python --version
          df -h
          rm -rf tensorflow_io
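          # split the colon-separated matrix entry: install the pinned tensorflow first,
          # then the tensorflow-io package under test (--no-deps so the tensorflow pin is kept)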
          echo ${{ matrix.version }} | awk -F: '{print $1}' | xargs python -m pip install -U
          echo ${{ matrix.version }} | awk -F: '{print $2}' | xargs python -m pip install --no-deps -U
          python -m pip install pytest-benchmark scikit-image
          python -m pip freeze
          python -c 'import tensorflow as tf; print(tf.version.VERSION)'
          python -c 'import tensorflow_io as tfio; print(tfio.version.VERSION)'
          python -m pytest --benchmark-only -v --import-mode=append $(find . -type f \( -iname "test_*_eager.py" ! \( -iname "test_bigquery_eager.py" \) \))
Review comment (Member):
why is test_bigquery_eager.py handled differently?

Reply (Member):
@burgerkingeater I remember that was due to grpc/grpc#20034


  linux:
    name: Linux ${{ matrix.python }} + ${{ matrix.version }}
    runs-on: ubuntu-20.04
    strategy:
      fail-fast: false
      matrix:
        python: ['3.8']
        version: ['tensorflow==2.4.0:tensorflow-io-nightly', 'tensorflow==2.4.0:tensorflow-io']
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python }}
      - name: Setup Linux
        run: |
          set -x -e
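          # build.space.sh is expected to free up disk space on the hosted runner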
          bash -x -e .github/workflows/build.space.sh
      - name: Test Linux
        run: |
          set -x -e
          python --version
          df -h
          rm -rf tensorflow_io
          echo ${{ matrix.version }} | awk -F: '{print $1}' | xargs python -m pip install -U
          echo ${{ matrix.version }} | awk -F: '{print $2}' | xargs python -m pip install --no-deps -U
          python -m pip install pytest-benchmark scikit-image
          python -m pip freeze
          python -c 'import tensorflow as tf; print(tf.version.VERSION)'
          python -c 'import tensorflow_io as tfio; print(tfio.version.VERSION)'
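          # unlike the macOS job, write results to benchmark.json so the next step can publish them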
          python -m pytest --benchmark-only --benchmark-json benchmark.json -v --import-mode=append $(find . -type f \( -iname "test_*_eager.py" ! \( -iname "test_bigquery_eager.py" \) \))
      - name: Store benchmark result
        uses: rhysd/github-action-benchmark@v1
        with:
          tool: 'pytest'
          output-file-path: benchmark.json