
support loops wrapped by Threads.@threads #38

Open
mileslucas opened this issue Jan 19, 2021 · 1 comment

Comments


mileslucas commented Jan 19, 2021

It would be nice if the @progress macro were thread-safe and supported the Threads.@threads macro call.

This would require a small rewrite, since the current _progress function does not update the fraction in a thread-safe way; the fraction variables would need to use atomics. In my own code, the following pattern has worked in a thread-safe manner:

@withprogress begin
	it = Threads.Atomic{Int}(0)
	N = length(iter)
	Threads.@threads for i in 1:N
		# body
		Threads.atomic_add!(it, 1)
		@logprogress it[] / N
	end
end
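
For completeness, here is a self-contained version of the same pattern. It is only a sketch: iter, results, and the loop body are placeholders, and it assumes ProgressLogging plus a progress-aware frontend (e.g. TerminalLoggers in the REPL; Pluto displays these progress logs as well).

using ProgressLogging

iter = 1:1_000                                  # placeholder workload
results = Vector{Float64}(undef, length(iter))

@withprogress begin
	done = Threads.Atomic{Int}(0)               # shared, thread-safe counter
	N = length(iter)
	Threads.@threads for i in 1:N
		results[i] = sum(sin, 1:iter[i])        # stand-in for the real work
		Threads.atomic_add!(done, 1)            # count completed iterations
		@logprogress done[] / N                 # log the updated fraction
	end
end

Only the counter update is atomic; the @logprogress calls themselves can race and occasionally report a slightly stale fraction, but the count stays correct and the bar ends at 100%.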

Parsing the input should be as simple as

if ex.head == :macrocall && ex.args[1] == :(Threads.var"@threads")
	_forloop = ex.args[end]  # args[2] is a LineNumberNode; the for loop is the last argument
	# ...
end

I'm not really sure what is required beyond that to make it work, or I would have given it a shot myself.
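
To sketch what the rest might look like (purely hypothetical, not the package's actual internals; the function name and the _it/_n variables are made up, and a real macro would gensym them for hygiene): take the extracted for expression, append the counter bookkeeping to its body, and wrap the result back up under @withprogress and Threads.@threads.

# Hypothetical rewrite step: `forloop` is the `for` expression pulled out of the
# Threads.@threads macrocall above. Assumes a single `i in collection` iteration
# spec and that ProgressLogging is loaded wherever the result is evaluated.
function _rewrite_threaded_loop(forloop::Expr)
	iterspec = forloop.args[1]         # e.g. :(i = 1:N)
	body = forloop.args[2]             # the original loop body
	iterable = iterspec.args[2]        # the collection being iterated
	newbody = quote
		$body
		Threads.atomic_add!(_it, 1)
		ProgressLogging.@logprogress _it[] / _n
	end
	newloop = Expr(:for, iterspec, newbody)
	return quote
		_it = Threads.Atomic{Int}(0)
		_n = length($iterable)
		ProgressLogging.@withprogress Threads.@threads $newloop
	end
end

# Usage sketch (with ProgressLogging loaded):
# eval(_rewrite_threaded_loop(:(for i in 1:100
#     sum(sin, 1:10_000)
# end)))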

@adannenberg

Is there any plan to implement this? If not, can someone post a more detailed working example of @mileslucas's solution? I've tried to incorporate it into my code and it's not working - which isn't surprising, since I don't understand it :< Fwiw, I'm trying to use ProgressLogging in Pluto, where I've parallelized a compute-intensive for loop with Threads.@threads. It works great with the single-threaded for loop...
