Description
Because we don't save the transformed (unconstrained) variables in the returned InferenceData (why not?), it's not easy to evaluate the model logp once we have a trace.
One could rewrite the model without transforms (and we could do this automatically for the user). This is possible with https://www.pymc.io/projects/docs/en/stable/api/model/generated/pymc.model.transform.conditioning.remove_value_transforms.html
But someone might still want to evaluate it in the original model (with jacobians and all that).
One dirty implementation is given here: https://discourse.pymc.io/t/logp-questions-synthetic-dataset-to-evaluate-modeling/12129/6?u=ricardov94
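For context, a minimal sketch of the first route, assuming a recent PyMC (5.x) and a toy model; the model and names like m_noxform are just for illustration:

```python
import pymc as pm
from pymc.model.transform.conditioning import remove_value_transforms

# Toy model for illustration
with pm.Model() as m:
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", 0.0, sigma, observed=[0.1, -0.3, 0.5])
    idata = pm.sample(draws=50, chains=1, progressbar=False)

# Rewrite the model without value transforms so the logp can be evaluated
# directly on the constrained draws stored in the trace (no Jacobian terms)
m_noxform = remove_value_transforms(m)
logp_fn = m_noxform.compile_logp()

# Look up the value-variable name instead of hard-coding it,
# then evaluate the joint logp at one posterior draw
value_name = m_noxform.rvs_to_values[m_noxform["sigma"]].name
draw = float(idata.posterior["sigma"].values[0, 0])
print(logp_fn({value_name: draw}))
```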
Activity
ricardoV94 commented on Dec 16, 2023
Results should be saved in https://python.arviz.org/en/latest/schema/schema.html#unconstrained-posterior
We should make sure there's an option in pm.sample to store those, besides allowing users to populate them afterwards with a helper, as initially suggested in this issue.
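A rough sketch of what such a helper could look like (hypothetical, not an existing PyMC API; it assumes shape-preserving transforms and uses arviz's add_groups to attach the unconstrained draws):

```python
import pytensor.tensor as pt
import xarray as xr


def add_unconstrained_posterior(idata, model):
    """Hypothetical helper: add an "unconstrained_posterior" group holding
    the forward-transformed draws of every free variable."""
    unconstrained = {}
    for rv in model.free_RVs:
        draws = idata.posterior[rv.name]  # dims: (chain, draw, ...)
        transform = model.rvs_to_transforms[rv]
        if transform is None:
            unconstrained[rv.name] = draws
            continue
        # Elementwise transforms can be applied to the whole (chain, draw, ...)
        # array at once; shape-changing transforms (e.g. simplex) would need
        # extra care with dims/coords.
        value = pt.as_tensor(draws.values)
        forward = transform.forward(value, *rv.owner.inputs)
        unconstrained[rv.name] = xr.DataArray(
            forward.eval(), dims=draws.dims, coords=draws.coords
        )
    idata.add_groups({"unconstrained_posterior": xr.Dataset(unconstrained)})
    return idata
```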
Interval forward transformation #7193

pipme commented on Feb 26, 2025
Hi, any updates on this? Or do you have any suggestions for vectorizing the transformation of parameters between the constrained and unconstrained space?
I am currently doing an inefficient for-loop, which also feels a bit hacky:
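Something along these lines (a minimal sketch with an assumed toy model; calling .eval() once per draw is what makes it slow):

```python
import numpy as np
import pymc as pm

# Assumed toy model with one transformed free variable
with pm.Model() as model:
    sigma = pm.HalfNormal("sigma", 1.0)
    idata = pm.sample(draws=200, chains=2, progressbar=False)

transform = model.rvs_to_transforms[model["sigma"]]
constrained = idata.posterior["sigma"].values  # shape (chain, draw)

# Per-draw loop: constrained -> unconstrained via the transform's forward map.
# Each .eval() builds and runs a tiny graph, hence slow and hacky.
unconstrained = np.empty_like(constrained)
for c in range(constrained.shape[0]):
    for d in range(constrained.shape[1]):
        unconstrained[c, d] = transform.forward(constrained[c, d]).eval()
```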
Thanks!
pipme commented on Feb 26, 2025
The above seems to work for transforming from the unconstrained to the constrained space. Inside _postprocess_samples, jax.vmap is leveraged for vectorization. But the below doesn't work for going from the constrained to the unconstrained space.

Is it possible to have such a jaxified function?
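One possible route, sketched under the assumption that the internal pymc.sampling.jax.get_jaxified_graph helper is an acceptable entry point (the model and variable names are illustrative):

```python
import jax
import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pymc.sampling.jax import get_jaxified_graph

# Illustrative model with one transformed free variable
with pm.Model() as model:
    sigma = pm.HalfNormal("sigma", 1.0)

rv = model["sigma"]
transform = model.rvs_to_transforms[rv]

# Symbolic constrained input and its unconstrained image (forward transform)
constrained = pt.scalar("sigma_constrained")
unconstrained = transform.forward(constrained, *rv.owner.inputs)

# Compile the PyTensor graph into a JAX-callable function...
jax_forward = get_jaxified_graph(inputs=[constrained], outputs=[unconstrained])

# ...which can then be vmapped over many draws at once
draws = np.abs(np.random.normal(size=1000))
unconstrained_draws = jax.vmap(lambda x: jax_forward(x)[0])(draws)
```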