Maintain a respectful queue of jobs to be run on Quantum Engine #2821
Comments
Yes!
Can't we achieve the same thing using

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=num_workers) as executor:
    for param in params:
        executor.submit(func, *param)
```
I swear I scoured the Python async docs looking for something like this!
I didn't know about this until @mrwojtek showed me the other day 😛.
The `concurrent.futures.ThreadPoolExecutor` is conceptually orthogonal to the asyncio library; they aim to solve slightly different problems. By design, asyncio is a single-threaded library that allows for concurrent execution of Python code. `ThreadPoolExecutor` allows for actual parallel execution, where different functions run on different threads. This doesn't matter much for pure Python code, since it's protected by a global lock anyway, but it matters a lot in our case, where network I/O happens. Matthew, your code looks nice, but it depends on how the `func` callback is implemented. If all `func` instances run on the same thread, they'll block on the same I/O operation. Could you give an example of what `func` looks like in your case?
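For concreteness, here's a stand-in for the kind of `func` where the thread pool pays off: any blocking network call releases the GIL while it waits, so the worker threads genuinely overlap. The URL fetch below is just an illustrative placeholder, not an Engine call:

```python
import urllib.request

def func(url: str) -> bytes:
    # Blocking network read: the GIL is released while the socket waits,
    # so other worker threads can issue their own requests in parallel.
    with urllib.request.urlopen(url, timeout=30) as response:
        return response.read()
```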
I'd be curious to see what you put in `func`.
Really, it would be sweet if we had an auto-batcher that uses async trickery to build up a local queue until it collects enough jobs to batch up and send.
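Something in that spirit might look like the rough sketch below (untested; `send_batch`, `batch_size`, and `max_wait` are all assumed names, and a real Engine batching hook would look different):

```python
import asyncio
from typing import Any, Callable, List, Optional, Tuple

class AutoBatcher:
    """Collects submitted jobs until `batch_size` accumulate (or `max_wait`
    seconds pass), then sends them upstream in a single call."""

    def __init__(
        self,
        send_batch: Callable[[List[Any]], List[Any]],
        batch_size: int = 10,
        max_wait: float = 1.0,
    ) -> None:
        self._send_batch = send_batch
        self._batch_size = batch_size
        self._max_wait = max_wait
        self._pending: List[Tuple[Any, asyncio.Future]] = []
        self._timer: Optional[asyncio.Task] = None

    async def submit(self, job: Any) -> Any:
        """Queue one job and wait for its result from the batched call."""
        future = asyncio.get_running_loop().create_future()
        self._pending.append((job, future))
        if len(self._pending) >= self._batch_size:
            await self._flush()
        elif self._timer is None:
            # First job of a new batch: start the max_wait countdown.
            self._timer = asyncio.create_task(self._flush_after_wait())
        return await future

    async def _flush_after_wait(self) -> None:
        await asyncio.sleep(self._max_wait)
        self._timer = None
        await self._flush()

    async def _flush(self) -> None:
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None
        batch, self._pending = self._pending, []
        if not batch:
            return
        jobs = [job for job, _ in batch]
        try:
            # Run the blocking batched call off the event loop thread.
            results = await asyncio.to_thread(self._send_batch, jobs)
        except Exception as exc:
            for _, future in batch:
                future.set_exception(exc)
            return
        for (_, future), result in zip(batch, results):
            future.set_result(result)
```

Calling code would just `await batcher.submit(job)` from many tasks and the submissions would coalesce behind the scenes.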
Has this been superseded by the feature testbed stuff, @mpharrigan?
That would be a natural place for it, although it doesn't exist yet. Probably blocked by #5023 |
@verult Do we think this feature is obsolete now that we have streaming? |
Sometimes you have a whole host of jobs to run. Instead of submitting all of them at once and filling up the queue, you could run one at a time, but you can save on latency / classical-processing overhead by keeping a respectful queue: only a few jobs in flight at any moment. I've been using a function for this.
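A minimal sketch of the shape it can take, assuming a blocking `run_job` callable that submits a single job and waits for its result (`max_in_flight` caps how many jobs sit in the remote queue at once):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_respectfully(run_job, jobs, max_in_flight=2):
    # The executor never runs more than `max_in_flight` jobs at a time,
    # and it starts the next job the moment one finishes, so the remote
    # queue sees a small, steady trickle rather than a flood.
    with ThreadPoolExecutor(max_workers=max_in_flight) as executor:
        futures = {executor.submit(run_job, job): job for job in jobs}
        for future in as_completed(futures):
            yield futures[future], future.result()  # completion order
```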
Would something like this be welcome inside cirq.google?