Description
Motivation
In the original PER paper, the parameter beta is annealed over the course of training, so it would be desirable to be able to run similar experiments with torch-rl's Prioritized Experience Replay.
Solution
Allow manually changing beta (or both alpha and beta) by calling a method on the replay buffer or by modifying its properties, for example from the training loop (see the sketch below).
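A minimal sketch of the intended usage; the `buffer.beta` attribute and the buffer constructor are hypothetical placeholders, not the current torch-rl API:

```python
# Hypothetical usage sketch: `buffer.beta` is a placeholder name for
# whatever property/setter the replay buffer would expose.

def linear_beta(step, total_steps, beta_start=0.4, beta_end=1.0):
    """Linearly anneal beta from beta_start to beta_end over training."""
    frac = min(step / total_steps, 1.0)
    return beta_start + frac * (beta_end - beta_start)

# buffer = PrioritizedReplayBuffer(..., alpha=0.6, beta=0.4)  # hypothetical ctor
# for step in range(total_steps):
#     buffer.beta = linear_beta(step, total_steps)  # proposed: mutate beta directly
#     batch = buffer.sample(batch_size)
```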
Alternatives
OpenAI baselines' implementation does this by accepting an additional beta parameter at sampling time (while alpha stays fixed). It would also be possible to implement this with a scheduler (similar to the learning-rate schedulers in torch.optim); a rough sketch of that alternative follows.
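A rough sketch of the scheduler-style alternative, analogous to torch.optim.lr_scheduler; the `BetaScheduler` class and the `buffer.beta` attribute are hypothetical, not existing torch-rl API:

```python
class BetaScheduler:
    """Anneals the buffer's beta towards beta_end each time step() is called."""

    def __init__(self, buffer, total_steps, beta_start=0.4, beta_end=1.0):
        self.buffer = buffer          # replay buffer exposing a (hypothetical) beta attribute
        self.total_steps = total_steps
        self.beta_start = beta_start
        self.beta_end = beta_end
        self.step_count = 0

    def step(self):
        # Linear interpolation between beta_start and beta_end, clamped at beta_end.
        frac = min(self.step_count / self.total_steps, 1.0)
        self.buffer.beta = self.beta_start + frac * (self.beta_end - self.beta_start)
        self.step_count += 1

# scheduler = BetaScheduler(buffer, total_steps=1_000_000)
# for step in range(total_steps):
#     batch = buffer.sample(batch_size)
#     scheduler.step()  # updates buffer.beta, mirroring lr_scheduler.step()
```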
Checklist
- I have checked that there is no similar issue in the repo (required)