refix #1895 #2130

Merged
merged 1 commit on May 4, 2017
@@ -17,9 +17,9 @@
    "source": [
     "** More formally, as explained in [the original post](http://mc-stan.org/documentation/case-studies/divergences_and_bias.html) (in markdown block, same below):** \n",
     "> \n",
-    "Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $$ \\mathbb{E}_{\\pi} [ f ] = \\int \\mathrm{d}q \\, \\pi (q) \\, f(q), $$ using the states of a Markov chain, $\\{ q_{0}, \\ldots, q_{N} \\}$, $$ \\mathbb{E}_{\\pi} [ f ] \\approx \\hat{f}_{N} = \\frac{1}{N + 1} \\sum_{n = 0}^{N} f(q_{n}). $$ \n",
-    "\n",
-    ">These estimators, however, are guaranteed to be accurate only asymptotically as the chain grows to be infinitely long, $$ \\lim_{N \\rightarrow \\infty} \\hat{f}_{N} = \\mathbb{E}_{\\pi} [ f ]. $$\n",
+    "Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $$ \\mathbb{E}_{\\pi} [ f ] = \\int \\mathrm{d}q \\, \\pi (q) \\, f(q), $$ using the states of a Markov chain, $\\{ q_{0}, \\ldots, q_{N} \\}$, $$ \\mathbb{E}_{\\pi} [ f ] \\approx \\hat{f}_{N} = \\frac{1}{N + 1} \\sum_{n = 0}^{N} f(q_{n}). $$ \n",
+    "> \n",
+    ">These estimators, however, are guaranteed to be accurate only asymptotically as the chain grows to be infinitely long, $$ \\lim_{N \\rightarrow \\infty} \\hat{f}_{N} = \\mathbb{E}_{\\pi} [ f ]. $$ \n",
     "> \n",
     "To be useful in applied analyses, we need MCMC estimators to converge to the true expectation values sufficiently quickly that they are reasonably accurate before we exhaust our finite computational resources. This fast convergence requires strong ergodicity conditions to hold, in particular geometric ergodicity between a Markov transition and a target distribution. Geometric ergodicity is usually the necessary condition for MCMC estimators to follow a central limit theorem, which ensures not only that they are unbiased even after only a finite number of iterations but also that we can empirically quantify their precision using the MCMC standard error.\n",
     "> \n",
@@ -33,7 +33,9 @@
   {
    "cell_type": "code",
    "execution_count": 1,
-   "metadata": {},
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
     "import numpy as np\n",
@@ -112,7 +114,9 @@
   {
    "cell_type": "code",
    "execution_count": 4,
-   "metadata": {},
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
     "with pm.Model() as Centered_eight:\n",
@@ -352,7 +356,9 @@
   {
    "cell_type": "code",
    "execution_count": 41,
-   "metadata": {},
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
     "# A small wrapper function for displaying the MCMC sampler diagnostics as above\n",
@@ -1012,7 +1018,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.5.2"
+   "version": "3.5.1"
   }
  },
  "nbformat": 4,