Progress bar #435
I think this'll be tricky in general. You could imagine some kind of hack to increment a tqdm inside trace_fn, but this probably won't work if you wrap sampling in tf.function (which you almost certainly should, if you're doing anything sufficiently non-trivial, which you probably are because you want a progress bar in the first place :)). I strongly doubt we will add this functionality in tfp.mcmc directly; it would be hard for the same reasons described above, and feels pretty far "out of scope" for a library of fundamental MCMC building blocks. That said, @SiegeLordEx has some nascent design ideas on sampling drivers (with some sort of compositionality, I expect?), and it's conceivable that something like this could fit into that picture, somehow.
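For concreteness, here is roughly what that trace_fn hack can look like in eager mode. This is a minimal sketch, not an endorsed API: the standard-normal target and the HMC settings are placeholder choices, and the Python side effect silently stops firing per step once you wrap sampling in tf.function (it then runs only at trace time).

```python
import tensorflow as tf
import tensorflow_probability as tfp
from tqdm import tqdm

num_results = 1000
pbar = tqdm(total=num_results)

def trace_fn(current_state, kernel_results):
  pbar.update(1)  # Python side effect: only fires per step in eager mode.
  return kernel_results.is_accepted

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=lambda x: -x**2 / 2.,  # Stand-in target.
    step_size=0.1,
    num_leapfrog_steps=3)

samples, is_accepted = tfp.mcmc.sample_chain(
    num_results=num_results,
    current_state=tf.zeros([]),
    kernel=kernel,
    trace_fn=trace_fn)
pbar.close()
```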
Yeah, I concur with @csuter. In general, MCMC will be running inside a @tf.function or even an XLA function, e.g. the entire computation will often be on a GPU. I think in this case, it'd be appropriate to do short runs of the chain, updating the progress bar after each one. See the discussion in #356 for some code examples. There the question is largely the same, except that there you're dumping summaries to TensorBoard.
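To make the chunked approach concrete, here's a minimal sketch (the kernel, target, and chunk sizes are placeholder choices): compile one short run as a tf.function, loop over chunks in ordinary Python, resume each chunk from the previous one's final kernel results, and tick the progress bar between chunks.

```python
import tensorflow as tf
import tensorflow_probability as tfp
from tqdm import tqdm

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=lambda x: -x**2 / 2.,  # Stand-in target.
    step_size=0.1,
    num_leapfrog_steps=3)

@tf.function
def run_chunk(state, kernel_results):
  # One compiled short run; final_kernel_results lets the next chunk resume.
  return tfp.mcmc.sample_chain(
      num_results=100,
      current_state=state,
      previous_kernel_results=kernel_results,
      kernel=kernel,
      trace_fn=None,
      return_final_kernel_results=True)

state = tf.zeros([])
kernel_results = kernel.bootstrap_results(state)
chunks = []
for _ in tqdm(range(10)):  # 10 chunks x 100 steps; bar ticks per chunk.
  result = run_chunk(state, kernel_results)
  state = result.all_states[-1]
  kernel_results = result.final_kernel_results
  chunks.append(result.all_states)
samples = tf.concat(chunks, axis=0)
```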
I have hacked sample_chain at some point to work with tqdm, at least in eager mode. Fairly trivially, tf.function could also be supported using tf.py_function. XLA would be the most difficult here. For TPU, there's an outside_compilation mechanism we could use. Or, as Pavel suggests, we could just compile chunks of steps and update tqdm around those.
Brian Patton | Software Engineer | bjp@google.com
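For reference, a minimal sketch of the tf.py_function route (again with a placeholder kernel and target): the py_function hops back into Python on every step to update the bar. This relies on the op being stateful so it isn't pruned from the graph, and, as noted above, it won't work under XLA.

```python
import tensorflow as tf
import tensorflow_probability as tfp
from tqdm import tqdm

num_results = 1000
pbar = tqdm(total=num_results)

def trace_fn(current_state, kernel_results):
  # tf.py_function re-enters Python on each step, even inside tf.function.
  tf.py_function(lambda: pbar.update(1), inp=[], Tout=[])
  return kernel_results.is_accepted

@tf.function
def run():
  return tfp.mcmc.sample_chain(
      num_results=num_results,
      current_state=tf.zeros([]),
      kernel=tfp.mcmc.RandomWalkMetropolis(
          target_log_prob_fn=lambda x: -x**2 / 2.),  # Stand-in target.
      trace_fn=trace_fn)

samples, is_accepted = run()
pbar.close()
```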
Is it possible to add a progress bar to MCMC training?