Logp inference fails for exponentiated distributions #7717

Open
@jessegrabowski

Description

Suppose I want to make a log-normal "by hand":

import pymc as pm
import pytensor.tensor as pt

def d(mu, sigma, size=None):
    # Exponentiate a Normal, i.e. build a LogNormal "by hand"
    return pt.exp(pm.Normal.dist(mu=mu, sigma=sigma, size=size))

with pm.Model() as m:
    y = pm.CustomDist('y', 0, 1, dist=d, observed=[0., 1.])

m.logp().eval() # array(nan)

This should return -inf, since 0 is outside the support of the (exponentiated) distribution. The same is true for any other support-constraining transformation.
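
For comparison, here is what the built-in LogNormal does in the same situation (a minimal check, assuming the usual way of evaluating a logp via pm.logp):

import pymc as pm

# The built-in LogNormal logp is -inf at 0, which is what the
# CustomDist above should produce as well:
pm.logp(pm.LogNormal.dist(mu=0, sigma=1), 0.0).eval()  # array(-inf)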

A non-trivial use case is a mixture model with LogNormal and DiracDelta(0) components. That works fine, but if I want to explore a more fat-tailed distribution for the nonzero component (like a LogStudentT), it fails; see the sketch below.
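
A rough sketch of that use case (an exponentiated StudentT stands in for LogStudentT, which has no built-in; the parameter values are illustrative, and this is where the same logp inference failure shows up):

import pymc as pm
import pytensor.tensor as pt

def log_student_t(nu, mu, sigma, size=None):
    # "LogStudentT" built by hand: exponentiate a StudentT
    return pt.exp(pm.StudentT.dist(nu=nu, mu=mu, sigma=sigma, size=size))

with pm.Model() as mix:
    w = pm.Dirichlet('w', a=[1.0, 1.0])
    components = [
        pm.DiracDelta.dist(0.0),                          # point mass at zero
        pm.CustomDist.dist(5, 0, 1, dist=log_student_t),  # fat-tailed nonzero part
    ]
    # Logp inference for the exponentiated component fails here,
    # just as in the minimal example above
    y = pm.Mixture('y', w=w, comp_dists=components, observed=[0.0, 1.5])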
