[Feature Request] Zero-Inflated Poisson and Negative Binomial distributions #1134

Are there any plans to add a Zero-Inflated Poisson (ZIP) and a Zero-Inflated Negative Binomial (ZINB) distribution to TFP? These are very common in other statistical packages and shouldn't be hard to implement.

Comments
Hi @minaskar, if at all useful, I've coded up e.g. zero-inflated Poisson stuff as a Mixture of a Deterministic and a Poisson before. Something like this:

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

zero_prob = 0.3
poisson_log_rate = 2.5

zero_inflated_poisson = tfd.Mixture(
    cat=tfd.Categorical(probs=[zero_prob, 1.0 - zero_prob]),
    components=[tfd.Deterministic(loc=0.0), tfd.Poisson(log_rate=poisson_log_rate)],
)

samples = zero_inflated_poisson.sample(1_000)
values, counts = np.unique(samples, return_counts=True)
plt.bar(values, counts)
plt.grid()
plt.show()
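As a quick sanity check (this snippet is an addition, not part of the original comment), the mixture's log_prob can be compared against the closed-form ZIP pmf, P(0) = pi + (1 - pi) * exp(-lambda) and P(k) = (1 - pi) * Poisson(k; lambda) for k > 0; a minimal sketch reusing the same zero_prob and poisson_log_rate as above:

import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions

zero_prob = 0.3
poisson_log_rate = 2.5

zip_dist = tfd.Mixture(
    cat=tfd.Categorical(probs=[zero_prob, 1.0 - zero_prob]),
    components=[tfd.Deterministic(loc=0.0), tfd.Poisson(log_rate=poisson_log_rate)],
)

poisson = tfd.Poisson(log_rate=poisson_log_rate)
k = np.arange(10.0, dtype=np.float32)

# Closed-form ZIP pmf: P(0) = pi + (1 - pi) * Poisson(0), P(k) = (1 - pi) * Poisson(k) for k > 0.
closed_form = np.where(
    k == 0,
    zero_prob + (1.0 - zero_prob) * poisson.prob(0.0).numpy(),
    (1.0 - zero_prob) * poisson.prob(k).numpy(),
)

np.testing.assert_allclose(zip_dist.log_prob(k).numpy(), np.log(closed_form), rtol=1e-4)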
Hi @jeffpollock9, this looks very nice! How would that work as a layer? I tried the following but it doesn't work:

tfpl.DistributionLambda(
    make_distribution_fn=lambda t: tfd.Mixture(
        cat=tfd.Categorical(probs=[t[0], 1.0 - t[0]]),
        components=[tfd.Deterministic(loc=0.0), tfd.Poisson(log_rate=t[1])],
    ),
    convert_to_tensor_fn=lambda s: s.sample(),
)
Consider instead using Categorical(logits=[0, t[0]]), assuming you have no
activation function applied to the incoming tensor.
I'm getting an error message saying:
I'm not 100% sure as I don't use those layers, but I think you need to capture any batch dimensions in t:

tfpl.DistributionLambda(
    make_distribution_fn=lambda t: tfd.Mixture(
        cat=tfd.Categorical(logits=[t[..., 0], 0.0]),
        components=[
            tfd.Deterministic(loc=0.0),
            tfd.Poisson(log_rate=t[..., 1]),
        ],
    ),
    convert_to_tensor_fn=lambda s: s.sample(),
)

At least that appears to be the pattern in https://www.tensorflow.org/probability/examples/Probabilistic_Layers_Regression#case_4_aleatoric_epistemic_uncertainty
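For what it's worth, a fuller sketch of that pattern inside a Keras model is below; it is an illustration only, and the two-unit Dense head, the parameter ordering, and the explicit tf.stack of the categorical logits (to keep the batch shape aligned with targets of shape [batch, 1]) are assumptions rather than anything confirmed in this thread:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpl = tfp.layers

def make_zip(t):
    # Assumed parameterisation: t[..., :1] is an unconstrained logit for the
    # extra-zero mass, t[..., 1:] is the Poisson log-rate. Slicing with :1/1:
    # keeps the trailing dimension so log_prob lines up with [batch, 1] targets.
    zero_logit = t[..., :1]
    log_rate = t[..., 1:]
    return tfd.Mixture(
        cat=tfd.Categorical(
            logits=tf.stack([zero_logit, tf.zeros_like(zero_logit)], axis=-1)),
        components=[
            tfd.Deterministic(loc=tf.zeros_like(log_rate)),
            tfd.Poisson(log_rate=log_rate),
        ],
    )

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),  # no activation: both outputs stay unconstrained
    tfpl.DistributionLambda(make_zip),
])

# Fit by minimising the negative log-likelihood of the observed counts.
model.compile(optimizer="adam", loss=lambda y, dist: -dist.log_prob(y))

The main point, as in the linked tutorial, is that every parameter tensor inside make_distribution_fn carries the batch dimension via t[..., i]-style indexing.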
BTW if anyone wants to send a PR to add some zero-inflated discrete distributions, sampling and log_prob should not be too complicated. There might even be a case for a generic ZeroInflated(underlying, prob) meta-distribution.
@jeffpollock9 Yes, this is exactly what I tried next, and I still get the same error message. @brianwa84 the log_prob has a closed form for both distributions, so it shouldn't be very hard.
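For reference, the closed form is indeed short; here is a generic sketch (the helper name and signature are hypothetical, not an existing TFP API) of the zero-inflated log_prob for an arbitrary count distribution:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def zero_inflated_log_prob(underlying, inflate_prob, x):
    # Hypothetical helper: log pmf of a zero-inflated version of `underlying`.
    #   x == 0: log(pi + (1 - pi) * underlying.prob(0))
    #   x  > 0: log(1 - pi) + underlying.log_prob(x)
    log_pi = tf.math.log(inflate_prob)
    log1m_pi = tf.math.log1p(-inflate_prob)
    zero_case = tf.reduce_logsumexp(
        tf.stack([log_pi + tf.zeros_like(x),
                  log1m_pi + underlying.log_prob(tf.zeros_like(x))], axis=-1),
        axis=-1)
    return tf.where(tf.equal(x, 0.0), zero_case, log1m_pi + underlying.log_prob(x))

x = tf.constant([0.0, 1.0, 2.0, 3.0])
print(zero_inflated_log_prob(tfd.Poisson(log_rate=2.5), tf.constant(0.3), x))

Sampling is similarly simple: draw from the underlying distribution and zero out each draw with probability pi.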
Hey @brianwa84, if no one else has already started working on this, I would have a look and implement them.
No one has started, feel free to have a go at it.
Hey guys! I might need a zero-inflated Poisson as a loss ASAP and have this code so far, which throws an error:

Any input on whether this implementation is wrong or what could be causing the error would be very much appreciated!
I think you have the batch dimension wrong. Something like

zip = tfd.Mixture(
    cat=tfd.Categorical(probs=tf.stack([nonzero_prob, 1 - nonzero_prob], -1)),
    components=[tfd.Deterministic(tf.zeros_like(rate)), tfd.Poisson(rate)],
)

at least has the right shape, but I might be parsing the batch and event shape of this problem wrong...
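As a shape check (the concrete values below are made up for illustration), this construction gives one ZIP per element of rate, with a scalar event shape:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

nonzero_prob = tf.constant([0.2, 0.5, 0.8])
rate = tf.constant([1.0, 5.0, 10.0])

zip_dist = tfd.Mixture(
    cat=tfd.Categorical(probs=tf.stack([nonzero_prob, 1 - nonzero_prob], -1)),
    components=[tfd.Deterministic(tf.zeros_like(rate)), tfd.Poisson(rate)],
)

print(zip_dist.batch_shape)                                    # [3]
print(zip_dist.event_shape)                                    # []
print(zip_dist.log_prob(tf.constant([0.0, 2.0, 7.0])).shape)   # [3]

Note that with this ordering the first weight, nonzero_prob, is attached to the Deterministic zero component, so the naming may be the reverse of what's intended.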
I just edited the code but seem to be running into the same issue... thanks for the help though!
@shtoneyan do you have any further updates about using this?