
Commit e9cee79

Authored by vmoens and Silvestre Bahi (silvestrebahi)

[Doc] Add coverage banner (#533)

* add codecov orb to circleci config.yml
* Add codecov badge to Readme
* Revert "[BugFix] Changing the dm_control import to fail if not installed (#515)" (reverts commit d194735)
* codecov coverage w/o orb in circleci
* Revert "Revert "[BugFix] Changing the dm_control import to fail if not installed (#515)"" (reverts commit d0dc7de)
* [CI] generation of coverage reports (#534): update test scripts to add coverage
* [CI] Add xml coverage reports for codecov (#537): generate xml file for coverage; lint end of file in run_test.sh
* fix permissions

Co-authored-by: Silvestre Bahi <silvestrebahi@fb.com>
Co-authored-by: silvestrebahi <silvestre.bahi@gmail.com>

1 parent 497ad8f

File tree

8 files changed (+55, -22 lines)

.circleci/config.yml

Lines changed: 26 additions & 0 deletions
The new "Codecov upload" steps call Codecov's bash uploader: `-Z` makes the step exit non-zero if the upload fails, and `-F` attaches a flag (e.g. `linux-cpu`) so reports from different CI jobs can be distinguished in the Codecov UI.

@@ -276,6 +276,12 @@ jobs:
       - run:
           name: Run tests
           command: .circleci/unittest/linux/scripts/run_test.sh
+
+      - run:
+          name: Codecov upload
+          command: |
+            bash <(curl -s https://codecov.io/bash) -Z -F linux-cpu
+
       - run:
           name: Post process
           command: .circleci/unittest/linux/scripts/post_process.sh
@@ -331,6 +337,10 @@ jobs:
           name: Run tests
           command: bash .circleci/unittest/linux/scripts/run_test.sh
           # command: docker run --env-file ./env.list -t --gpus all -v $PWD:$PWD -w $PWD "${image_name}" .circleci/unittest/linux/scripts/run_test.sh
+      - run:
+          name: Codecov upload
+          command: |
+            bash <(curl -s https://codecov.io/bash) -Z -F linux-gpu
       - run:
           name: Post Process
           command: docker run -t --gpus all -v $PWD:$PWD -w $PWD "${image_name}" .circleci/unittest/linux/scripts/post_process.sh
@@ -386,6 +396,10 @@ jobs:
           name: Run tests
           command: bash .circleci/unittest/linux_optdeps/scripts/run_test.sh
           # command: docker run --env-file ./env.list -t --gpus all -v $PWD:$PWD -w $PWD "${image_name}" .circleci/unittest/linux/scripts/run_test.sh
+      - run:
+          name: Codecov upload
+          command: |
+            bash <(curl -s https://codecov.io/bash) -Z -F linux-outdeps-gpu
       - run:
           name: Post Process
           command: docker run -t --gpus all -v $PWD:$PWD -w $PWD "${image_name}" .circleci/unittest/linux_optdeps/scripts/post_process.sh
@@ -433,6 +447,10 @@ jobs:
       - run:
           name: Run tests
           command: .circleci/unittest/linux_stable/scripts/run_test.sh
+      - run:
+          name: Codecov upload
+          command: |
+            bash <(curl -s https://codecov.io/bash) -Z -F linux-stable-cpu
       - run:
           name: Post process
           command: .circleci/unittest/linux_stable/scripts/post_process.sh
@@ -488,6 +506,10 @@ jobs:
           name: Run tests
           command: bash .circleci/unittest/linux_stable/scripts/run_test.sh
           # command: docker run --env-file ./env.list -t --gpus all -v $PWD:$PWD -w $PWD "${image_name}" .circleci/unittest/linux/scripts/run_test.sh
+      - run:
+          name: Codecov upload
+          command: |
+            bash <(curl -s https://codecov.io/bash) -Z -F linux-stable-gpu
       - run:
           name: Post Process
           command: docker run -t --gpus all -v $PWD:$PWD -w $PWD "${image_name}" .circleci/unittest/linux_stable/scripts/post_process.sh
@@ -532,6 +554,10 @@ jobs:
       - run:
           name: Run tests
           command: .circleci/unittest/linux/scripts/run_test.sh
+      - run:
+          name: Codecov upload
+          command: |
+            bash <(curl -s https://codecov.io/bash) -Z -F macos-cpu
       - run:
           name: Post process
           command: .circleci/unittest/linux/scripts/post_process.sh

.circleci/unittest/linux/scripts/environment.yml

Lines changed: 1 addition & 0 deletions
@@ -27,3 +27,4 @@ dependencies:
   - dm_control
   - mlflow
   - av
+  - coverage

.circleci/unittest/linux/scripts/run_test.sh

Lines changed: 4 additions & 3 deletions
@@ -18,6 +18,7 @@ lib_dir="${env_dir}/lib"
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$lib_dir
 export MKL_THREADING_LAYER=GNU
 
-pytest test/smoke_test.py -v --durations 20
-pytest test/smoke_test_deps.py -v --durations 20
-pytest --instafail -v --durations 20
+coverage run -m pytest test/smoke_test.py -v --durations 20
+coverage run -m pytest test/smoke_test_deps.py -v --durations 20
+coverage run -m pytest --instafail -v --durations 20
+coverage xml

.circleci/unittest/linux_optdeps/scripts/environment.yml

Lines changed: 1 addition & 0 deletions
@@ -15,3 +15,4 @@ dependencies:
   - pyyaml
   - scipy
   - hydra-core
+  - coverage

.circleci/unittest/linux_optdeps/scripts/run_test.sh

Lines changed: 2 additions & 1 deletion
@@ -17,4 +17,5 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/root/project/.mujoco/mujoco210/bin
 export MKL_THREADING_LAYER=GNU
 
 #MUJOCO_GL=glfw pytest --cov=torchrl --junitxml=test-results/junit.xml -v --durations 20
-MUJOCO_GL=egl pytest --instafail -v --durations 20
+MUJOCO_GL=egl coverage run -m pytest --instafail -v --durations 20
+coverage xml
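Note the `MUJOCO_GL=egl` prefix kept in front of the new `coverage run` command: in POSIX shells, `VAR=value command` sets the variable only in that one command's environment, so the coverage-wrapped pytest run sees `MUJOCO_GL=egl` without the variable leaking into the rest of the script. A self-contained illustration (variable name invented):

```shell
# VAR=value cmd scopes the variable to that single command.
DEMO_GL=egl sh -c 'echo "inside: $DEMO_GL"'   # prints: inside: egl
echo "after: ${DEMO_GL:-unset}"               # prints: after: unset
```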

.circleci/unittest/linux_stable/scripts/environment.yml

Lines changed: 1 addition & 0 deletions
@@ -28,3 +28,4 @@ dependencies:
   - dm_control
   - mlflow
   - av
+  - coverage

.circleci/unittest/linux_stable/scripts/run_test.sh

Lines changed: 4 additions & 3 deletions
@@ -18,6 +18,7 @@ lib_dir="${env_dir}/lib"
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$lib_dir
 export MKL_THREADING_LAYER=GNU
 
-pytest test/smoke_test.py -v --durations 20
-pytest test/smoke_test_deps.py -v --durations 20
-pytest --instafail -v --durations 20
+coverage run -m pytest test/smoke_test.py -v --durations 20
+coverage run -m pytest test/smoke_test_deps.py -v --durations 20
+coverage run -m pytest --instafail -v --durations 20
+coverage xml
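For reference, the `coverage run` / `coverage xml` pair used by these scripts can also be driven from Python through coverage.py's API. The sketch below is an illustration under stated assumptions (the `coverage` package is installed; `demo_mod` is an invented module), not code from the repository:

```python
# Sketch of coverage.py's API, mirroring `coverage run` + `coverage xml`.
# Assumes the `coverage` package is installed; demo_mod is invented.
import os
import pathlib
import sys
import textwrap

import coverage

sys.path.insert(0, os.getcwd())  # make the demo module importable from cwd

# Create a tiny module to measure.
pathlib.Path("demo_mod.py").write_text(textwrap.dedent("""\
    def double(x):
        return 2 * x
"""))

cov = coverage.Coverage(include=["*demo_mod*"])
cov.start()                              # equivalent of `coverage run`
import demo_mod
demo_mod.double(3)
cov.stop()
cov.save()
cov.xml_report(outfile="coverage.xml")   # equivalent of `coverage xml`
print(pathlib.Path("coverage.xml").exists())
```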

README.md

Lines changed: 16 additions & 15 deletions
Besides the new Codecov badge at the top, the remaining hunks apparently just strip trailing whitespace from otherwise blank lines (each `-`/`+` pair below replaces a blank line carrying trailing spaces with a clean one).

@@ -1,4 +1,5 @@
 [![pytorch](https://circleci.com/gh/pytorch/rl.svg?style=shield)](https://circleci.com/gh/pytorch/rl)
+[![codecov](https://codecov.io/gh/pytorch/rl/branch/main/graph/badge.svg?token=HcpK1ILV6r)](https://codecov.io/gh/pytorch/rl)
 
 # TorchRL
 
@@ -32,7 +33,7 @@ one object to another without friction.
 algorithms. For instance, here's how to code a rollout in TorchRL:
 <details>
 <summary>Code</summary>
-
+
 ```diff
 - obs, done = env.reset()
 + tensordict = env.reset()
@@ -57,7 +58,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 TensorDict abstracts away the input / output signatures of the modules, env, collectors, replay buffers and losses of the library, allowing its primitives
 to be easily recycled across settings.
 Here's another example of an off-policy training loop in TorchRL (assuming that a data collector, a replay buffer, a loss and an optimizer have been instantiated):
-
+
 ```diff
 - for i, (obs, next_obs, action, hidden_state, reward, done) in enumerate(collector):
 + for i, tensordict in enumerate(collector):
@@ -73,7 +74,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 optim.zero_grad()
 ```
 Again, this training loop can be re-used across algorithms as it makes a minimal number of assumptions about the structure of the data.
-
+
 TensorDict supports multiple tensor operations on its device and shape
 (the shape of TensorDict, or its batch size, is the common arbitrary N first dimensions of all its contained tensors):
 ```python
@@ -96,11 +97,11 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 </details>
 
 Check our [TensorDict tutorial](tutorials/tensordict.ipynb) for more information.
-
+
 - An associated [`TensorDictModule` class](torchrl/modules/tensordict_module/common.py) which is [functorch](https://github.com/pytorch/functorch)-compatible!
 <details>
 <summary>Code</summary>
-
+
 ```diff
 transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
 + td_module = TensorDictModule(transformer_model, in_keys=["src", "tgt"], out_keys=["out"])
@@ -111,7 +112,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 + td_module(tensordict)
 + out = tensordict["out"]
 ```
-
+
 The `TensorDictSequential` class allows to branch sequences of `nn.Module` instances in a highly modular way.
 For instance, here is an implementation of a transformer using the encoder and decoder blocks:
 ```python
@@ -123,7 +124,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 assert transformer.in_keys == ["src", "src_mask", "tgt"]
 assert transformer.out_keys == ["memory", "output"]
 ```
-
+
 `TensorDictSequential` allows to isolate subgraphs by querying a set of desired input / output keys:
 ```python
 transformer.select_subsequence(out_keys=["memory"]) # returns the encoder
@@ -144,7 +145,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 A common pytorch-first class of [tensor-specification class](torchrl/data/tensor_specs.py) is also provided.
 <details>
 <summary>Code</summary>
-
+
 ```python
 env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True)
 env_parallel = ParallelEnv(4, env_make) # creates 4 envs in parallel
@@ -159,7 +160,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 learning (although the "dataloader" -- read data collector -- is modified on-the-fly):
 <details>
 <summary>Code</summary>
-
+
 ```python
 env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True)
 collector = MultiaSyncDataCollector(
@@ -182,7 +183,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 - efficient<sup>(2)</sup> and generic<sup>(1)</sup> [replay buffers](torchrl/data/replay_buffers/replay_buffers.py) with modularized storage:
 <details>
 <summary>Code</summary>
-
+
 ```python
 storage = LazyMemmapStorage( # memory-mapped (physical) storage
     cfg.buffer_size,
@@ -205,7 +206,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 which process and prepare the data coming out of the environments to be used by the agent:
 <details>
 <summary>Code</summary>
-
+
 ```python
 env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True)
 env_base = ParallelEnv(4, env_make, device="cuda:0") # creates 4 envs in parallel
@@ -237,7 +238,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 - various [architectures](torchrl/modules/models/) and models (e.g. [actor-critic](torchrl/modules/tensordict_module/actors.py))<sup>(1)</sup>:
 <details>
 <summary>Code</summary>
-
+
 ```python
 # create an nn.Module
 common_module = ConvNet(
@@ -291,7 +292,7 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 [modules](torchrl/modules/models/exploration.py) to easily swap between exploration and exploitation<sup>(1)</sup>:
 <details>
 <summary>Code</summary>
-
+
 ```python
 policy_explore = EGreedyWrapper(policy)
 with set_exploration_mode("random"):
@@ -308,15 +309,15 @@ algorithms. For instance, here's how to code a rollout in TorchRL:
 
 <details>
 <summary>Code</summary>
-
+
 ### Loss modules
 ```python
 from torchrl.objectives.costs import DQNLoss
 loss_module = DQNLoss(value_network=value_network, gamma=0.99)
 tensordict = replay_buffer.sample(batch_size)
 loss = loss_module(tensordict)
 ```
-
+
 ### Advantage computation
 ```python
 from torchrl.objectives.returns.functional import vec_td_lambda_return_estimate

0 commit comments