# ray-0.6.1
## Core
- Added an experimental option to limit Redis memory usage. #3499
- Added an option for restarting failed actors. #3332
- Fixed a memory leak in the Plasma TensorFlow operator. #3448
- Fixed a compatibility issue between TensorFlow and PyTorch. #3574
- Miscellaneous code refactoring and cleanup. #3563 #3564 #3461 #3511
- Documentation improvements. #3427 #3535 #3138
- Several stability improvements. #3592 #3597
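The "restarting failed actors" option can be pictured as a supervisor that rebuilds a stateful worker when a call on it fails. The sketch below is plain Python, not the Ray API: `FlakyCounter` and `supervised_call` are illustrative names, and note that, as with a restarted actor, the rebuilt worker starts from fresh state.

```python
class FlakyCounter:
    """A stateful worker that simulates a crash on its second call."""
    def __init__(self):
        self.n = 0
        self.calls = 0

    def incr(self):
        self.calls += 1
        if self.calls == 2:
            raise RuntimeError("worker died")
        self.n += 1
        return self.n

def supervised_call(make_worker, worker, method, max_restarts=1):
    """Call `method` on `worker`; on failure, rebuild the worker and retry."""
    for attempt in range(max_restarts + 1):
        try:
            return worker, getattr(worker, method)()
        except RuntimeError:
            if attempt == max_restarts:
                raise
            # "Restart" the worker: new instance, state reset to initial values.
            worker = make_worker()

w = FlakyCounter()
w, v1 = supervised_call(FlakyCounter, w, "incr")  # succeeds: returns 1
w, v2 = supervised_call(FlakyCounter, w, "incr")  # crashes, restarts, returns 1 again
```

The second call returns 1 rather than 2 because the restarted worker lost the original counter state, which is the trade-off actor restart makes in exchange for availability.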
## RLlib
- Multi-GPU support for multi-agent PPO. #3479
- Unclipped actions are now sent to the learner. #3496
- `rllib rollout` now also preprocesses observations. #3512
- Added a basic Offline Data API. #3473
- Improvements to metrics reporting in DQN. #3491
- AsyncSampler no longer auto-concatenates sample batches. #3556
- QMIX implementation (experimental). #3548
- IMPALA performance improvements. #3402
- Better error messages. #3444
- PPO performance improvements. #3552
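The "unclipped actions" change (#3496) reflects a common split in continuous-control training: the policy's raw sample is what the learner should train on, while the environment should only ever see an action inside its bounds. A minimal pure-Python sketch of that split (illustrative names, not RLlib code):

```python
def step_with_unclipped_logging(raw_action, low, high, batch, env_step):
    """Record the raw sampled action for the learner, but hand the
    environment a copy clipped to its action-space bounds."""
    batch.append(raw_action)                   # learner sees the raw sample
    clipped = max(low, min(high, raw_action))  # env sees a valid action
    return env_step(clipped)

batch = []
# A Gaussian policy can sample outside the [-1, 1] action space:
obs = step_with_unclipped_logging(1.7, -1.0, 1.0, batch, lambda a: a)
# batch[0] is 1.7 (unclipped), while the env received 1.0
```

Training on the clipped value instead would bias the policy-gradient estimate, since the clipped action is not the one the policy's distribution actually produced.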
## Autoscaler
## Ray Tune
- Lambdas now require the `tune.function` wrapper. #3457
- Custom loggers, sync functions, and trial names are now supported. #3465
- Improvements to fault tolerance. #3414
- Clarified the Variant Generator docs. #3583
- `trial_resources` has been renamed to `resources_per_trial`. #3580
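For the `trial_resources` rename, existing experiment specs need their key updated. The helper below is a hypothetical migration shim, not part of Ray Tune; it shows the shape of the change on a plain spec dict:

```python
def migrate_tune_config(spec):
    """Return a copy of a Tune experiment spec with the deprecated
    `trial_resources` key renamed to `resources_per_trial`."""
    spec = dict(spec)  # shallow copy so the caller's dict is untouched
    if "trial_resources" in spec and "resources_per_trial" not in spec:
        spec["resources_per_trial"] = spec.pop("trial_resources")
    return spec

old = {"run": "PPO", "trial_resources": {"cpu": 1, "gpu": 0}}
new = migrate_tune_config(old)
# `new` carries "resources_per_trial"; `old` is left unchanged
```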
## Modin
- Modin 0.2.5 is now bundled with Ray.
- Support for data larger than memory in the object store. #3450
## Known Issues
- Object broadcasts on large clusters are inefficient. #2945