diff --git a/docs/source/notebooks/Diagnosing_biased_Inference_with_Divergences.ipynb b/docs/source/notebooks/Diagnosing_biased_Inference_with_Divergences.ipynb index 8876c8de26..373f00ded3 100644 --- a/docs/source/notebooks/Diagnosing_biased_Inference_with_Divergences.ipynb +++ b/docs/source/notebooks/Diagnosing_biased_Inference_with_Divergences.ipynb @@ -17,9 +17,9 @@ "source": [ "** More formally, as explained in [the original post](http://mc-stan.org/documentation/case-studies/divergences_and_bias.html) (in markdown block, same below):** \n", "> \n", - "Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $$ \\mathbb{E}{\\pi} [ f ] = \\int \\mathrm{d}q \\, \\pi (q) \\, f(q), $$ using the states of a Markov chain, ${q{0}, \\ldots, q_{N} }$, $$ \\mathbb{E}{\\pi} [ f ] \\approx \\hat{f}{N} = \\frac{1}{N + 1} \\sum_{n = 0}^{N} f(q_{n}). $$ \n", - "\n", - ">These estimators, however, are guaranteed to be accurate only asymptotically as the chain grows to be infinitely long, $$ \\lim_{N \\rightarrow \\infty} \\hat{f}{N} = \\mathbb{E}{\\pi} [ f ]. $$\n", + "Markov chain Monte Carlo (MCMC) approximates expectations with respect to a given target distribution, $$ \\mathbb{E}_{\\pi} [ f ] = \\int \\mathrm{d}q \\, \\pi (q) \\, f(q), $$ using the states of a Markov chain, $\\{ q_{0}, \\ldots, q_{N} \\}$, $$ \\mathbb{E}_{\\pi} [ f ] \\approx \\hat{f}_{N} = \\frac{1}{N + 1} \\sum_{n = 0}^{N} f(q_{n}). $$ \n", + "> \n", + ">These estimators, however, are guaranteed to be accurate only asymptotically as the chain grows to be infinitely long, $$ \\lim_{N \\rightarrow \\infty} \\hat{f}_{N} = \\mathbb{E}_{\\pi} [ f ]. $$ \n", "> \n", To be useful in applied analyses, we need MCMC estimators to converge to the true expectation values sufficiently quickly that they are reasonably accurate before we exhaust our finite computational resources. 
This fast convergence requires strong ergodicity conditions to hold, in particular geometric ergodicity between a Markov transition and a target distribution. Geometric ergodicity is usually the necessary condition for MCMC estimators to follow a central limit theorem, which ensures not only that they are unbiased even after only a finite number of iterations but also that we can empirically quantify their precision using the MCMC standard error.\n", "> \n", @@ -33,7 +33,9 @@ { "cell_type": "code", "execution_count": 1, - "metadata": {}, + "metadata": { + "collapsed": true + }, "outputs": [], "source": [ "import numpy as np\n", @@ -112,7 +114,9 @@ { "cell_type": "code", "execution_count": 4, - "metadata": {}, + "metadata": { + "collapsed": true + }, "outputs": [], "source": [ "with pm.Model() as Centered_eight:\n", @@ -352,7 +356,9 @@ { "cell_type": "code", "execution_count": 41, - "metadata": {}, + "metadata": { + "collapsed": true + }, "outputs": [], "source": [ "# A small wrapper function for displaying the MCMC sampler diagnostics as above\n", @@ -1012,7 +1018,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.5.2" + "version": "3.5.1" } }, "nbformat": 4,
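The estimator $\hat{f}_{N}$ quoted in the diff above can be sanity-checked with a minimal NumPy sketch. This is an illustration only, not part of the notebook: it uses i.i.d. standard-normal draws as a stand-in for a well-mixing chain, and the choices of target, $f(q) = q^2$ (whose true expectation under $\mathcal{N}(0, 1)$ is 1), and sample size are assumptions made here for the example.

```python
import numpy as np

# Illustrative assumption: i.i.d. draws stand in for the chain states
# q_0, ..., q_N; a real MCMC chain would be autocorrelated.
rng = np.random.default_rng(0)
N = 100_000
q = rng.standard_normal(N + 1)

# \hat{f}_N = (1 / (N + 1)) * sum_{n=0}^{N} f(q_n), with f(q) = q**2.
# The true value of E_pi[f] under a standard normal target is 1.
f_hat = np.mean(q**2)

# MCMC standard error (i.i.d. case), used to quantify the estimator's
# precision when a central limit theorem holds.
mcmc_se = np.std(q**2, ddof=1) / np.sqrt(N + 1)
print(f_hat, mcmc_se)
```

With this many draws the estimate lands close to the true expectation, and the standard error gives a rough scale for the remaining Monte Carlo noise; the notebook's point is that divergences signal regimes where this CLT-based reasoning breaks down.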