
Commit a7f8dee

Junpeng Lao authored and twiecki committed
refix #1895 (#2130)
1 parent eb866e4 commit a7f8dee

File tree

1 file changed: +13 -7 lines changed


docs/source/notebooks/Diagnosing_biased_Inference_with_Divergences.ipynb

+13 -7
@@ -17,9 +17,9 @@
     "source": [
     "**More formally, as explained in [the original post](http://mc-stan.org/documentation/case-studies/divergences_and_bias.html) (in markdown block, same below):** \n",
     "> \n",
-    "Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $$ \\mathbb{E}_{\\pi} [ f ] = \\int \\mathrm{d}q \\, \\pi (q) \\, f(q), $$ using the states of a Markov chain, $\\{ q_{0}, \\ldots, q_{N} \\}$, $$ \\mathbb{E}_{\\pi} [ f ] \\approx \\hat{f}_{N} = \\frac{1}{N + 1} \\sum_{n = 0}^{N} f(q_{n}). $$ \n",
-    "\n",
-    ">These estimators, however, are guaranteed to be accurate only asymptotically as the chain grows to be infinitely long, $$ \\lim_{N \\rightarrow \\infty} \\hat{f}_{N} = \\mathbb{E}_{\\pi} [ f ]. $$\n",
+    "Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $$ \\mathbb{E}_{\\pi} [ f ] = \\int \\mathrm{d}q \\, \\pi (q) \\, f(q), $$ using the states of a Markov chain, $\\{ q_{0}, \\ldots, q_{N} \\}$, $$ \\mathbb{E}_{\\pi} [ f ] \\approx \\hat{f}_{N} = \\frac{1}{N + 1} \\sum_{n = 0}^{N} f(q_{n}). $$ \n",
+    "> \n",
+    ">These estimators, however, are guaranteed to be accurate only asymptotically as the chain grows to be infinitely long, $$ \\lim_{N \\rightarrow \\infty} \\hat{f}_{N} = \\mathbb{E}_{\\pi} [ f ]. $$ \n",
     "> \n",
     "To be useful in applied analyses, we need MCMC estimators to converge to the true expectation values sufficiently quickly that they are reasonably accurate before we exhaust our finite computational resources. This fast convergence requires strong ergodicity conditions to hold, in particular geometric ergodicity between a Markov transition and a target distribution. Geometric ergodicity is usually the necessary condition for MCMC estimators to follow a central limit theorem, which ensures not only that they are unbiased even after only a finite number of iterations but also that we can empirically quantify their precision using the MCMC standard error.\n",
     "> \n",
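The quoted passage defines the MCMC estimator hat{f}_N = 1/(N+1) * sum_{n=0}^{N} f(q_n). A minimal numerical sketch of that estimator (mine, not the notebook's; i.i.d. draws stand in for the Markov chain states, which in a real chain are correlated):

```python
import numpy as np

# Sketch of the estimator hat{f}_N = 1/(N+1) * sum_{n=0}^{N} f(q_n).
# Assumption: i.i.d. draws from the target stand in for the chain
# states q_0, ..., q_N.
rng = np.random.default_rng(0)

def f_hat(f, states):
    """Empirical MCMC estimator of E_pi[f] over the chain states."""
    return np.mean([f(q) for q in states])

states = rng.normal(0.0, 1.0, size=100_000)  # target pi = N(0, 1)
estimate = f_hat(lambda q: q ** 2, states)   # E_pi[q^2] = 1 exactly
```

With 100,000 draws the estimate should sit close to the true value 1, illustrating the asymptotic guarantee the quoted text describes.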
@@ -33,7 +33,9 @@
   {
    "cell_type": "code",
    "execution_count": 1,
-   "metadata": {},
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
     "import numpy as np\n",
@@ -112,7 +114,9 @@
   {
    "cell_type": "code",
    "execution_count": 4,
-   "metadata": {},
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
     "with pm.Model() as Centered_eight:\n",
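The cell above opens the centered eight-schools model (`Centered_eight`); the notebook's subject is that a centered parameterization, theta ~ N(mu, tau), can be rewritten in non-centered form as theta = mu + tau * theta_tilde with theta_tilde ~ N(0, 1), which samples the same distribution with geometry that HMC/NUTS handles with fewer divergences. A hedged numpy sketch of that equivalence (parameter values assumed for illustration, not taken from the notebook):

```python
import numpy as np

# Assumed illustration (not the notebook's code): the centered form
# draws theta ~ N(mu, tau) directly; the non-centered form draws
# theta_tilde ~ N(0, 1) and sets theta = mu + tau * theta_tilde.
# Both describe the same distribution.
rng = np.random.default_rng(1)
mu, tau = 4.0, 2.0  # hypothetical population mean and scale

theta_tilde = rng.normal(0.0, 1.0, size=200_000)
theta = mu + tau * theta_tilde  # non-centered draw of theta ~ N(mu, tau)
```

The sample mean and standard deviation of `theta` should match `mu` and `tau`, confirming the reparameterization changes the sampler's geometry but not the distribution.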
@@ -352,7 +356,9 @@
   {
    "cell_type": "code",
    "execution_count": 41,
-   "metadata": {},
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
     "# A small wrapper function for displaying the MCMC sampler diagnostics as above\n",
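The new cell begins with a comment describing a small wrapper for displaying sampler diagnostics. A hypothetical sketch of such a wrapper (names and input layout are my assumptions, not the notebook's actual code; PyMC3 exposes per-sample divergence flags as a boolean array, e.g. `trace['diverging']`):

```python
import numpy as np

# Hypothetical diagnostics helper (not the notebook's code): summarize
# how many transitions ended in a divergence, given a boolean array of
# per-sample divergence flags.
def report_divergences(diverging):
    diverging = np.asarray(diverging, dtype=bool)
    n_div = int(diverging.sum())
    frac = n_div / diverging.size
    print(f"{n_div} divergences ({frac:.1%} of samples)")
    return n_div, frac

n_div, frac = report_divergences([False, True, False, True])
```

A nonzero count is the signal the notebook's title refers to: divergent transitions indicate regions of the posterior the sampler cannot explore, and hence potentially biased inference.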
@@ -1012,7 +1018,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.5.2"
+   "version": "3.5.1"
   }
  },
  "nbformat": 4,

0 commit comments