Closed
Description
After #5864, the async_api stops working. Here is the test session log; the warning message seems weird.
ray/experimental/test ➦ 02931e08f pytest -v -s async_test.py
Test session starts (platform: darwin, Python 3.6.5, pytest 4.0.1, pytest-sugar 0.9.2)
cachedir: .pytest_cache
rootdir: /Users/simonmo/Desktop/ray/ray/python, inifile:
plugins: timeout-1.3.3, sugar-0.9.2, rerunfailures-7.0, flaky-3.6.1
collecting ... 2019-10-25 14:05:58,448 INFO resource_spec.py:216 -- Starting Ray with 1.56 GiB memory available for workers and up to 0.78 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2019-10-25 14:06:08,888 WARNING worker.py:1382 -- The actor or task with ID 7d58f415c89effffffff01000000 is pending and cannot currently be scheduled. It requires {CPU: 1.000000} for execution and {CPU: 1.000000} for placement, but this node only has remaining {CPU: 4.000000}, {node:192.168.1.7: 1.000000}, {object_store_memory: 0.537109 GiB}, {memory: 1.562500 GiB}. In total there are 1 pending tasks and 0 pending actors on this node. This is likely due to all cluster resources being claimed by actors. To resolve the issue, consider creating fewer actors or increase the resources available to this Ray cluster. You can ignore this message if this Ray cluster is expected to auto-scale.
... then the test session hangs forever.