Update posterior_predictive notebook #2853
Conversation
Looks good, nitpicks:
Can do – that's a funny example because the PPC is just sampling outcomes, not probabilities. I would almost rather put a disc at y=0 and y=1, scaled by the number of samples, to view these draws.
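A minimal sketch of that disc idea, assuming `ppc['n']` holds binary posterior predictive draws (the `samples` array below is a hypothetical stand-in, not the notebook's data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for the posterior predictive draws of a binary
# outcome (ppc['n'] in the thread); the shape is an assumption.
samples = np.random.binomial(1, 0.3, size=500)

counts = np.bincount(samples, minlength=2)  # number of draws at 0 and at 1

fig, ax = plt.subplots()
# One disc at y=0 and one at y=1, with area scaled by the number of samples.
ax.scatter([0, 0], [0, 1], s=counts, alpha=0.6)
ax.set_yticks([0, 1])
ax.set_ylim(-0.5, 1.5)
ax.set_ylabel("sampled outcome")
plt.show()
```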
Agree – for discrete observed RVs I also prefer to sample the latent continuous parameters. Currently that is not straightforward to do: you either need to write the forward function yourself or use …
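A rough sketch of the "write the forward function yourself" route, using a toy Bernoulli model invented for illustration (not the notebook's model):

```python
import numpy as np
import pymc3 as pm

# Toy model invented for illustration -- not the notebook's model.
data = np.random.binomial(1, 0.3, size=100)

with pm.Model() as model:
    p = pm.Beta("p", alpha=1.0, beta=1.0)
    pm.Bernoulli("obs", p=p, observed=data)
    trace = pm.sample(1000)

# Option 1: look at the latent continuous parameter directly.
latent_p = trace["p"]

# Option 2: write the forward function yourself, pushing each posterior
# draw of p through the likelihood to get outcome samples.
ppc_manual = np.random.binomial(1, latent_p[:, None],
                                size=(len(latent_p), len(data)))
```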
Just a detail, but we can avoid importing seaborn if we use `ax.hist([n.mean() for n in ppc['n']], alpha=0.5)` instead of `sns.distplot([n.mean() for n in ppc['n']], kde=False, ax=ax)`.
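Both one-liners in context, with a fake `ppc` dict standing in for the notebook's (its shape is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake posterior predictive dict standing in for the notebook's ppc;
# shape (draws, observations) is an assumption.
ppc = {"n": np.random.binomial(10, 0.5, size=(200, 100))}

means = [n.mean() for n in ppc["n"]]  # one mean per posterior predictive draw

fig, ax = plt.subplots()
ax.hist(means, alpha=0.5)             # plain matplotlib, no seaborn needed
# Seaborn equivalent, at the cost of an extra import:
# import seaborn as sns
# sns.distplot(means, kde=False, ax=ax)
plt.show()
```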
Seaborn plots look a lot nicer than MPL though |
Fair enough in this case. In general seaborn produces nicer plots with less code, but here I agree there is no real difference.
I'd prefer some error bars, though.
There are two problems with the error bars, at different levels of difficulty: 1. …
If you are plotting the error of the mean, yes. But you can also show the [2.5, 97.5] percentiles to represent uncertainty, right?
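A sketch of that percentile approach, again with fake draws standing in for `ppc['n']` (the shape is assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake draws standing in for ppc['n']; shape (draws, observations) is assumed.
ppc_n = np.random.binomial(10, 0.5, size=(500, 100))

means = ppc_n.mean(axis=1)                  # one mean per posterior draw
lo, hi = np.percentile(means, [2.5, 97.5])  # central 95% interval

fig, ax = plt.subplots()
ax.hist(means, alpha=0.5)
# Mark the [2.5, 97.5] percentile interval as a simple uncertainty summary.
ax.axvline(lo, color="k", linestyle="--")
ax.axvline(hi, color="k", linestyle="--")
plt.show()
```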
Hrm... I don't think I understand where you would calculate that. It seems like there is only uncertainty in the …
Reran with @junpenglao's styles from #2834, and also updated grammar, style, and links a bit (we quote extensively from an Edward tutorial that has since been removed).