Our logps include normalization terms that are unnecessary for sampling, since samplers work fine with an unnormalized logp. We can therefore speed up model logp evaluation by taking the Theano computation graph and applying a custom graph optimization that identifies and removes these terms. This should give a nice speed-up, as it avoids unnecessary computation on every logp call.
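To make the motivation concrete, here is a minimal sketch (plain Python, not PyMC/Theano code) showing why dropping constant terms is safe: logp *differences*, which are what MH-style acceptance ratios use, are unchanged when terms that do not depend on the value are removed. The assumption here is that `mu` and `sigma` are fixed, so `-0.5*log(2*pi)` and `-log(sigma)` are constants:

```python
import math

def normal_logp(x, mu, sigma):
    # Fully normalized Normal log-density.
    return (-0.5 * math.log(2 * math.pi)
            - math.log(sigma)
            - 0.5 * ((x - mu) / sigma) ** 2)

def unnormalized_logp(x, mu, sigma):
    # Same density with the terms constant w.r.t. x dropped
    # (valid only while mu and sigma are held fixed).
    return -0.5 * ((x - mu) / sigma) ** 2

# The logp difference between two proposed values is identical,
# so an acceptance ratio computed from either version agrees.
d_full = normal_logp(1.0, 0.0, 1.0) - normal_logp(2.0, 0.0, 1.0)
d_unnorm = unnormalized_logp(1.0, 0.0, 1.0) - unnormalized_logp(2.0, 0.0, 1.0)
```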
I think the way to do this would be to traverse the logp graph, break the large sum into its individual terms, and throw out any terms that do not depend on value nodes.
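The pruning step could be sketched as follows. This is not Theano graph-rewrite code; it is a toy stand-in where each summand is a `Term` tagged with the variables it depends on, and the hypothetical `prune_constant_terms` helper keeps only terms reachable from a value node. A real implementation would walk the Theano graph's `owner`/`inputs` structure instead:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str            # human-readable form of the summand
    inputs: frozenset    # names of the variables the term depends on

def prune_constant_terms(terms, value_nodes):
    # Keep only summands that depend on at least one value node;
    # the rest are constants w.r.t. sampling and can be dropped.
    return [t for t in terms if t.inputs & value_nodes]

# Summands of a Normal logp, assuming mu and sigma are fixed inputs
# and "x" is the only value node:
logp_terms = [
    Term("-0.5*log(2*pi)", frozenset()),
    Term("-log(sigma)", frozenset({"sigma"})),
    Term("-0.5*((x - mu)/sigma)**2", frozenset({"x", "mu", "sigma"})),
]
kept = prune_constant_terms(logp_terms, value_nodes={"x"})
```

Note that if `sigma` were itself a free variable with its own value node, `-log(sigma)` would have to be kept, so the pruning must be done against the full set of value nodes in the model.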
For more information on how to optimize the Theano graph, see https://theano-pymc.readthedocs.io/en/latest/extending/optimization.html and https://theano-pymc.readthedocs.io/en/latest/optimizations.html.