Commit 89e4804

Removing done from the llapi doc (#3810)
1 parent 89b5959 commit 89e4804

File tree

1 file changed: +0 −8 lines changed

docs/Python-API.md

Lines changed: 0 additions & 8 deletions
@@ -149,8 +149,6 @@ A `DecisionSteps` has the following fields :
   `env.step()`).
 - `reward` is a float vector of length batch size. Corresponds to the
   rewards collected by each agent since the last simulation step.
-- `done` is an array of booleans of length batch size. Is true if the
-  associated Agent was terminated during the last simulation step.
 - `agent_id` is an int vector of length batch size containing unique
   identifier for the corresponding Agent. This is used to track Agents
   across simulation steps.
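For context, a minimal sketch of reading these batched fields through the low-level API (assuming an already-built Unity environment and a valid `behavior_name`; both names are placeholders, not part of this change):

```python
from mlagents_envs.environment import UnityEnvironment

# Connect to a Unity environment (file_name=None waits for the Editor's Play button).
env = UnityEnvironment(file_name=None)
env.reset()

behavior_name = "MyBehavior?team=0"  # placeholder; use a name reported by the environment
decision_steps, _ = env.get_steps(behavior_name)

# Batched fields: one entry per agent that requested a decision this step.
print(decision_steps.reward)    # float vector, length = batch size
print(decision_steps.agent_id)  # int vector of unique agent identifiers
```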
@@ -174,8 +172,6 @@ A `DecisionStep` has the following fields:
   (Each array has one less dimension than the arrays in `DecisionSteps`)
 - `reward` is a float. Corresponds to the rewards collected by the agent
   since the last simulation step.
-- `done` is a bool. Is true if the Agent was terminated during the last
-  simulation step.
 - `agent_id` is an int and an unique identifier for the corresponding Agent.
 - `action_mask` is an optional list of one dimensional array of booleans.
   Only available in multi-discrete action space type.
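Continuing the sketch above, a single `DecisionStep` can be looked up by agent id (again assuming `decision_steps` obtained from `env.get_steps(behavior_name)`; the loop body is illustrative):

```python
# DecisionSteps is a mapping from agent_id to DecisionStep.
for agent_id in decision_steps:
    step = decision_steps[agent_id]       # a single DecisionStep
    print(step.reward, step.agent_id)     # scalar reward, int identifier
    if step.action_mask is not None:      # only present for multi-discrete actions
        print([mask.shape for mask in step.action_mask])
```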
@@ -197,8 +193,6 @@ A `TerminalSteps` has the following fields :
   `env.step()`).
 - `reward` is a float vector of length batch size. Corresponds to the
   rewards collected by each agent since the last simulation step.
-- `done` is an array of booleans of length batch size. Is true if the
-  associated Agent was terminated during the last simulation step.
 - `agent_id` is an int vector of length batch size containing unique
   identifier for the corresponding Agent. This is used to track Agents
   across simulation steps.
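As this change reflects, there is no `done` field here: an agent's termination is signalled by its presence in `TerminalSteps` at all. A minimal sketch, with the same assumed `env` and `behavior_name` as above:

```python
env.step()  # advance the simulation by one step
decision_steps, terminal_steps = env.get_steps(behavior_name)

# Every agent listed in terminal_steps ended its episode during this step.
print(terminal_steps.agent_id)  # int vector of terminated agents
print(terminal_steps.reward)    # final rewards collected since the last step
```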
@@ -219,8 +213,6 @@ A `TerminalStep` has the following fields:
   (Each array has one less dimension than the arrays in `TerminalSteps`)
 - `reward` is a float. Corresponds to the rewards collected by the agent
   since the last simulation step.
-- `done` is a bool. Is true if the Agent was terminated during the last
-  simulation step.
 - `agent_id` is an int and an unique identifier for the corresponding Agent.
 - `max_step` is a bool. Is true if the Agent reached its maximum number of
   steps during the last simulation step.
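And per-agent access to a `TerminalStep`, using `max_step` to tell a step-limit cutoff apart from a regular end of episode (same assumed objects as above):

```python
for agent_id in terminal_steps:
    step = terminal_steps[agent_id]  # a single TerminalStep
    if step.max_step:
        # Episode was cut off by the step limit rather than ending naturally.
        print(f"Agent {step.agent_id} was interrupted, reward {step.reward}")
    else:
        print(f"Agent {step.agent_id} finished, reward {step.reward}")

env.close()
```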
