This idea emerged in a discussion with @canyon289 about how to do posterior predictive sampling on new random variables.
The current solution is to add a new variable after sampling, as in:
```python
import pymc as pm

with pm.Model() as m:
    x = pm.Normal('x')
    y = pm.Normal('y', x, 1, observed=data)
    trace = pm.sample()

with m:
    new_y = pm.Gamma('new_y', mu=x, sigma=2)
    ppc = pm.sample_posterior_predictive(trace, var_names=['new_y'])
```
The problem with that approach, as mentioned by @michaelosthege, is that you only have one shot at getting it right: there is no API to delete a variable from a model, so if you decide to try something else you have to extend the model with yet another variable under a new name.
A nested model does not work either, because the new variables are added to the original model (and it is untested territory):
```python
with pm.Model() as m:
    x = pm.Normal('x')
    y = pm.Normal('y', x, 1, observed=data)
    trace = pm.sample()

with m:
    with pm.Model() as extended_m:
        new_y = pm.Gamma('new_y', mu=x, sigma=2)
        ppc = pm.sample_posterior_predictive(trace, var_names=['new_y'])
```
It would be great to have something that extends an existing model without affecting it:
```python
with pm.Model() as m:
    x = pm.Normal('x')
    y = pm.Normal('y', x, 1, observed=data)
    trace = pm.sample()

with pm.Extend(m) as extended_m:
    new_y = pm.Gamma('new_y', mu=x, sigma=2)
    ppc = pm.sample_posterior_predictive(trace, var_names=['new_y'])
```
The nested syntax could also work, as long as we ensure the original model is not affected.