Releases: salesforce/warp-drive
v2.7 Release
v2.6 Release
Extended WarpDrive to easily support single-agent frameworks. We have started adding gym.classic_control environments as examples: CartPole, MountainCar, and Acrobot are included, and each can run up to 100K concurrent replicas.
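To see why these classic-control environments scale to so many replicas, note that stepping them reduces to batched array arithmetic. The sketch below is purely conceptual (it is not WarpDrive code, and `step_batch` is a hypothetical helper): it steps a batch of CartPole-style replicas at once using the standard Gym CartPole dynamics, the same pattern a GPU backend parallelizes.

```python
import numpy as np

# Conceptual sketch (not WarpDrive code): stepping many CartPole-style
# replicas at once is batched array arithmetic, which is exactly the
# workload that GPUs (here, vectorized NumPy) parallelize well.
# Constants follow the classic Gym CartPole formulation.
GRAVITY, MASS_CART, MASS_POLE = 9.8, 1.0, 0.1
LENGTH, FORCE_MAG, DT = 0.5, 10.0, 0.02
TOTAL_MASS = MASS_CART + MASS_POLE
POLE_MASS_LENGTH = MASS_POLE * LENGTH

def step_batch(state, action):
    """state: (N, 4) array of [x, x_dot, theta, theta_dot]; action: (N,) in {0, 1}."""
    x, x_dot, theta, theta_dot = state.T
    force = np.where(action == 1, FORCE_MAG, -FORCE_MAG)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    temp = (force + POLE_MASS_LENGTH * theta_dot**2 * sin_t) / TOTAL_MASS
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        LENGTH * (4.0 / 3.0 - MASS_POLE * cos_t**2 / TOTAL_MASS))
    x_acc = temp - POLE_MASS_LENGTH * theta_acc * cos_t / TOTAL_MASS
    # One Euler integration step for every replica at once.
    return np.stack([x + DT * x_dot,
                     x_dot + DT * x_acc,
                     theta + DT * theta_dot,
                     theta_dot + DT * theta_acc], axis=1)

num_envs = 100_000  # the release notes mention up to 100K concurrent replicas
states = np.zeros((num_envs, 4))
actions = np.ones(num_envs, dtype=int)  # push right in every replica
next_states = step_batch(states, actions)
```

On a GPU the same per-replica arithmetic runs in one kernel launch, which is what makes 100K replicas practical.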
v2.5 Release
Introduce random resets from a pre-defined reset pool. Users can provide a pool of reset data for the corresponding data array (i.e., reset_target). During reset, the target data array for each environment replica independently picks a random entry from the reset pool.
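The mechanics can be sketched in a few lines of NumPy. This is an illustration of the idea only, not WarpDrive's actual API: `reset_pool`, `reset_target`, and `random_reset` are hypothetical names, with each replica drawing its reset state independently from the pool.

```python
import numpy as np

rng = np.random.default_rng(0)

num_envs = 4    # number of environment replicas
state_dim = 3   # size of the target data array per replica
pool_size = 5   # number of candidate reset states in the pool

# Hypothetical reset pool: each row is one candidate reset state.
reset_pool = rng.normal(size=(pool_size, state_dim))

# The target data array holding the current state of every replica.
reset_target = np.zeros((num_envs, state_dim))

def random_reset(target, pool, rng):
    """Pick a pool entry independently for each replica."""
    idx = rng.integers(0, pool.shape[0], size=target.shape[0])
    target[:] = pool[idx]
    return idx

chosen = random_reset(reset_target, reset_pool, rng)
```

Because each replica samples independently, different replicas generally restart from different states, which diversifies the trajectories collected in one rollout.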
v2.4 Release
- Introduce new device context management and autoinit_pycuda
- As a result, torch (any version) no longer conflicts with PyCUDA in the GPU context
v2.3 Release
Release 2.3 (2023-03-22)
- Add ModelFactory class to manage custom models
- Add Xavier initialization for the model
- Improve trainer.fetch_episode_states() so it can fetch (s, a, r) tuples and replay an episode with argmax action selection.
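The argmax-replay idea can be shown with a small sketch. This is not the trainer's actual implementation; `replay_with_argmax` and the logits array are hypothetical, standing in for the per-step policy outputs of a fetched episode: instead of sampling from the policy distribution, the replay takes the highest-scoring action at every step, making it deterministic.

```python
import numpy as np

def replay_with_argmax(logits_per_step):
    """Deterministic replay: pick the argmax action at each step.

    logits_per_step: array of shape (T, num_actions), the policy's
    per-step action scores recorded over an episode of length T.
    """
    return np.argmax(logits_per_step, axis=-1)

# Hypothetical episode data: T steps, 3 possible actions.
T, num_actions = 5, 3
rng = np.random.default_rng(1)
logits = rng.normal(size=(T, num_actions))

greedy_actions = replay_with_argmax(logits)
```

Greedy replay is useful for evaluation and visualization, since repeated replays of the same episode data produce identical action sequences.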
Release 2.2 (2022-12-20)
- Refactor the data loading for placeholders and batches (obs, actions, and rewards) in the trainer.
Release 2.1 (2022-10-26)
- v2 trainer integration with PyTorch Lightning
v2.0 release
- Supports dual backends: CUDA C and JIT-compiled Numba.
- Supports end-to-end simulation and training on multiple GPUs with either CUDA C or Numba.
- Fully backward compatible with v1.0.
v1.6 release
Using the extreme parallelization capability of GPUs, WarpDrive enables orders-of-magnitude faster RL compared to CPU simulation + GPU model implementations.
- It is extremely efficient, as it avoids back-and-forth data copying between the CPU and the GPU.
- It runs simulations across multiple agents and multiple environment replicas in parallel.
- It provides auto-scaling tools to achieve the optimal throughput per device (version 1.3).
- It performs distributed asynchronous training across multiple GPU devices (version 1.4).
- It combines multiple GPU blocks for one environment replica (version 1.6).