Add benchmarks to CI (pytorch#479)
Summary:
## Types of changes

- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Docs change / refactoring / dependency upgrade

## Motivation and Context / Related issue

## How Has This Been Tested (if it applies)

## Checklist

- [ ] The documentation is up-to-date with the changes I made.
- [X] I have read the **CONTRIBUTING** document and completed the CLA (see **CONTRIBUTING**).
- [ ] All tests passed, and additional code has been covered with new tests.

Pull Request resolved: pytorch#479

Differential Revision: D38999201

Pulled By: moaradwan

fbshipit-source-id: b1999d6f4fca53fa6e3816f062a653565bbb5521
Attia Radwan authored and facebook-github-bot committed Aug 25, 2022
1 parent 892e8e8 commit 2757765
Showing 3 changed files with 35 additions and 12 deletions.
20 changes: 20 additions & 0 deletions .circleci/config.yml
@@ -238,6 +238,24 @@ commands:
       - store_artifacts:
           path: runs/charlstm/test-reports

+  benchmark_integration_test:
+    description: "Runs benchmark end to end"
+    parameters:
+      device:
+        default: "cpu"
+        type: string
+    steps:
+      - run:
+          name: benchmarks
+          command: |
+            mkdir -p benchmarks/results/raw
+            echo "Using $(python -V) ($(which python))"
+            echo "Using $(pip -V) ($(which pip))"
+            python benchmarks/run_benchmarks.py --batch_size 16 --layers embedding gsm_embedding --config_file ./benchmarks/config.json --root ./benchmarks/results/raw/
+            python -c "import pickle, sys; dp = pickle.load(open('./benchmarks/results/raw/gsm_embedding_bs_16_runs_100_repeats_20_seed_None.pkl', 'rb')); vanilla = pickle.load(open('./benchmarks/results/raw/embedding_bs_16_runs_100_repeats_20_seed_None.pkl', 'rb')); ratio = sum(i['runtime'] for i in dp['results']) / sum(i['runtime'] for i in vanilla['results']); print(ratio); sys.exit(0 if ratio < 3 else 1)"
+          when: always
+      - store_artifacts:
+          path: benchmarks/results/raw/
 # -------------------------------------------------------------------------------------
 # Jobs
 # -------------------------------------------------------------------------------------
@@ -315,6 +333,8 @@ jobs:
           device: "cuda"
       - dcgan_integration_test:
           device: "cuda"
+      - benchmark_integration_test:
+          device: "cuda"

   unittest_multi_gpu:
     machine:
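Aside: the inline python -c check above gates CI on the ratio of total DP runtime to vanilla runtime, failing the step when the benchmarked DP layer is at least 3x slower. Below is a minimal standalone sketch of the same logic; check_ratio.py is a hypothetical name, and the pickle layout (a dict whose "results" list holds per-run dicts with a "runtime" key) is inferred from the one-liner.

    # check_ratio.py (hypothetical helper, not part of this commit)
    import pickle
    import sys

    def total_runtime(path):
        # Layout inferred from the inline check: {"results": [{"runtime": ...}, ...]}
        with open(path, "rb") as f:
            data = pickle.load(f)
        return sum(run["runtime"] for run in data["results"])

    root = "./benchmarks/results/raw/"
    dp = total_runtime(root + "gsm_embedding_bs_16_runs_100_repeats_20_seed_None.pkl")
    vanilla = total_runtime(root + "embedding_bs_16_runs_100_repeats_20_seed_None.pkl")
    ratio = dp / vanilla
    print(ratio)
    sys.exit(0 if ratio < 3 else 1)  # non-zero exit fails the CI step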
16 changes: 7 additions & 9 deletions benchmarks/benchmark_layer.py
@@ -62,15 +62,13 @@ def run_layer_benchmark(
     )

     # benchmark.Timer performs its own warmups
-    try:
-        timer = benchmark.Timer(
-            stmt="benchmark_fun()",
-            globals={"benchmark_fun": benchmark_fun},
-            num_threads=1,
-        )
-        runtime = timer.timeit(num_repeats).mean
-    except RuntimeError:
-        runtime = float("nan")
+    timer = benchmark.Timer(
+        stmt="benchmark_fun()",
+        globals={"benchmark_fun": benchmark_fun},
+        num_threads=1,
+    )
+    runtime = timer.timeit(num_repeats).mean
+

     # get max memory allocated and reset memory statistics
     memory_stats["max_memory"] = reset_peak_memory_stats(device).prev_max_mem
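Note: removing the try/except means a RuntimeError (for example, a CUDA out-of-memory error) now fails the benchmark run loudly instead of being recorded as a NaN runtime, which is the behavior you want in CI. For context, here is a minimal self-contained sketch of the torch.utils.benchmark.Timer pattern used above; the Linear layer, input shape, and repeat count are illustrative stand-ins, not values from the benchmark suite.

    import torch
    import torch.utils.benchmark as benchmark

    layer = torch.nn.Linear(10, 10)
    x = torch.randn(16, 10)

    def benchmark_fun():
        layer(x)

    # Timer performs its own warmup before measuring.
    timer = benchmark.Timer(
        stmt="benchmark_fun()",
        globals={"benchmark_fun": benchmark_fun},
        num_threads=1,
    )
    # timeit(n) runs the statement n times and returns a Measurement;
    # .mean is the average time per call in seconds.
    runtime = timer.timeit(100).mean
    print(f"{runtime * 1e6:.1f} us per call")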
11 changes: 8 additions & 3 deletions benchmarks/config.json
@@ -1,8 +1,8 @@
 {
     "linear": {
         "input_shape": [],
-        "in_features": 512,
-        "out_features": 512
+        "in_features": 10,
+        "out_features": 10
     },
     "conv": {
         "in_channels": 64,
@@ -29,13 +29,18 @@
         "num_embeddings": 20000,
         "embedding_dim": 100
     },
+    "gsm_embedding": {
+        "input_shape": [],
+        "num_embeddings": 20000,
+        "embedding_dim": 100
+    },
     "mha": {
         "source_seq_len": 128,
         "targ_seq_len": 64,
         "embed_dim": 100,
         "num_heads": 4
     },
-    "rnn_base": {
+    "dpmha": {
         "seq_len": 128,
         "input_size": 100,
         "hidden_size": 100
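The new gsm_embedding entry mirrors embedding exactly, so the DP variant (gsm presumably stands for GradSampleModule) is benchmarked under identical hyperparameters, which keeps the runtime-ratio check in CI meaningful. A minimal sketch of reading these entries follows; it is purely illustrative, since the actual loader in benchmarks/run_benchmarks.py may differ.

    import json

    # Load the benchmark layer configurations.
    with open("benchmarks/config.json") as f:
        config = json.load(f)

    # Identical settings for the vanilla and DP variants make the
    # runtime ratio a like-for-like comparison.
    for name in ("embedding", "gsm_embedding"):
        cfg = config[name]
        print(name, cfg["num_embeddings"], cfg["embedding_dim"])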
