This issue is rather a general RL question. Given N environments and a `FixedNormal` distribution with 3 outputs, the log probabilities are summed up in the following line: https://github.com/ikostrikov/pytorch-a2c-ppo-acktr/blob/12c39346f789d714c4a5cd793cc6266757616d80/distributions.py#L25

This works when the distribution is applied once per environment per step. But what if you want to apply the same distribution multiple times in a single step? For example, suppose you have M observations per timestep and want to infer M actions. Processing them individually, you end up with an (N, M, 3) tensor for computing your log probabilities. Is it enough to also sum over the second dimension, or does this kind of policy need to be handled completely differently? This could be interpreted as multi-agent but single-policy training, where the policy is the same for all agents but their observations are not, and there is no direct communication between the agents.
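If the M agents act independently given their observations, the joint log probability under one shared policy is indeed the sum over both the agent dimension and the action dimension, for the same reason the repo sums over the action dimension alone. A minimal NumPy sketch (the diagonal-Gaussian density stands in for `FixedNormal`; the sizes are illustrative assumptions):

```python
import numpy as np

def gaussian_log_prob(x, mean, std):
    # Per-dimension log density of an independent (diagonal) Gaussian,
    # playing the role of FixedNormal.log_prob in the repo.
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi)

N, M, D = 4, 5, 3  # environments, agents per env, action dims (hypothetical sizes)
rng = np.random.default_rng(0)
actions = rng.standard_normal((N, M, D))
mean = np.zeros((N, M, D))
std = np.ones((N, M, D))

per_dim = gaussian_log_prob(actions, mean, std)  # (N, M, D)
per_agent = per_dim.sum(axis=-1)                 # (N, M): repo-style sum over action dims
joint = per_dim.sum(axis=(-2, -1))               # (N,): additionally sum over agents
```

Summing `per_dim` over the last two axes gives one scalar log probability per environment, which is what the PPO ratio needs; it is equivalent to first summing per agent and then over agents, since the per-agent terms are independent log densities.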