%! TEX root = TA.tex
% vim: tw=50
% 05/03/2024 12PM
If we think of the continued fraction expansion as a machine for producing digits, it is natural to compare it with the decimal expansion. Recall
\[ N(x) = \left\lfloor \frac{1}{x} \right\rfloor, \qquad Tx = \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor \]
for continued fractions. In comparison, we have
\[ Dx = \left\lfloor 10x \right\rfloor, \qquad Sx = 10x - \left\lfloor 10x \right\rfloor \]
($0 < x \le 1$) for decimal expansions.
\begin{remark*}
If we put the uniform density on $(0, 1]$ and choose $X$ at random from $(0, 1]$, then $SX$ has the same uniform distribution. Indeed, $X, SX, S^2X, \ldots$ all have the same distribution, and so $DX, DSX, DS^2X, \ldots$ are IID random variables.
\end{remark*}
\vspace{-1em}
\begin{flashcard}[gauss-cfe-density]
Gauss observed that if we take the density
\[ f(x) = \cloze{\frac{1}{\log 2} \frac{1}{1 + x}} \]
then $X$ and $TX$ have the same distribution.
\begin{proof}
\cloze{Let $X$ have density function $f(x) = \frac{1}{\log 2} \frac{1}{1 + x}$.
Then
\begin{align*}
\PP(TX \le t) &= \PP \left( \frac{1}{X} - \left\lfloor \frac{1}{X} \right\rfloor \le t \right) \\
&= \sum_{n = 1}^\infty \PP \left( 0 \le \frac{1}{X} - n \le t \right) \\
&= \sum_{n = 1}^\infty \int_{(t + n)^{-1}}^{n^{-1}} f(x) \dd x \\
&= \sum_{n = 1}^\infty \int_{\frac{1}{n + t}}^{\frac{1}{n}} \frac{1}{\log 2} \frac{1}{x + 1} \dd x \\
&= \frac{1}{\log 2} \sum_{n = 1}^\infty [\log(1 + x)]_{\frac{1}{t + n}}^{\frac{1}{n}} \\
&= \frac{1}{\log 2} \sum_{n = 1}^\infty \log \left( 1 + \frac{1}{n} \right) - \log \left( 1 + \frac{1}{t + n} \right) \\
&= \frac{1}{\log 2} \sum_{n = 1}^\infty (\log(n + 1) - \log(n)) - (\log(n + t + 1) - \log(n + t)) \\
&= \frac{1}{\log 2} \lim_{N \to \infty} \sum_{n = 1}^N (\log(n + 1) - \log(n) - \log(n + t + 1) + \log(n + t)) \\
&= \frac{1}{\log 2} \lim_{N \to \infty} (\log(N + 1) - \log(N + t + 1) + \log(1 + t)) \\
&= \frac{1}{\log 2} \lim_{N \to \infty} \left( \log \frac{N + 1}{N + t + 1} + \log(1 + t) \right) \\
&= \frac{1}{\log 2} \log(1 + t)
\end{align*}
so $TX$ has density
\[ \frac{1}{\log 2} \frac{1}{t + 1} . \qedhere \]}
\end{proof}
\end{flashcard}
Thus if we use the density function
\[ f(t) = \frac{1}{\log 2} \frac{1}{1 + t} \]
then, since $N(T^n X)$ is the $(n + 1)$-th digit and $T^n X$ has the same distribution as $X$,
\begin{align*}
\PP(N(T^n X) = j) &= \PP(N(X) = j) \\
&= \frac{1}{\log 2} \int_{\frac{1}{j + 1}}^{\frac{1}{j}} \frac{1}{1 + x} \dd x \\
&= \frac{1}{\log 2} [\log(1 + x)]_{\frac{1}{j + 1}}^{\frac{1}{j}} \\
&= \frac{1}{\log 2} \left[ \log \left( \frac{j + 1}{j} \right) - \log \left( \frac{j + 2}{j + 1} \right) \right] \\
&= \frac{1}{\log 2} \log \left( \frac{(j + 1)^2}{j(j + 2)} \right) \\
&= \frac{1}{\log 2} \log \left( 1 + \frac{1}{j(j + 2)} \right) \\
&\approx \frac{1}{\log 2} \frac{1}{j(j + 2)} \\
&\approx \frac{1}{\log 2} \frac{1}{j^2}
\end{align*}
where the approximations assume $j$ is large (using the fact that $\log(1 + x) \approx x$ for small $x$).
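As a numerical sanity check on the digit probabilities just derived, here is a small Python sketch (function names are mine, not from the notes): it integrates the Gauss density over the interval $[\frac{1}{j+1}, \frac{1}{j}]$ on which the first digit equals $j$, compares with the closed form $\frac{1}{\log 2} \log(1 + \frac{1}{j(j+2)})$, and checks that a large partial sum of the probabilities is close to $1$.

```python
import math

def gauss_digit_prob(j):
    # P(first digit = j) = integral of f(x) = 1/(log 2) * 1/(1 + x)
    # over [1/(j+1), 1/j]; the antiderivative is log(1 + x) / log 2.
    return (math.log(1 + 1 / j) - math.log(1 + 1 / (j + 1))) / math.log(2)

def closed_form(j):
    # The closed form derived above: (1/log 2) * log(1 + 1/(j(j+2))).
    return math.log(1 + 1 / (j * (j + 2))) / math.log(2)

# The two expressions agree for every digit j.
for j in range(1, 100):
    assert abs(gauss_digit_prob(j) - closed_form(j)) < 1e-12

# Digit 1 alone already carries about 41.5% of the mass.
print(round(gauss_digit_prob(1), 4))  # log_2(4/3) ≈ 0.415

# The probabilities telescope, so a large partial sum is close to 1.
total = sum(gauss_digit_prob(j) for j in range(1, 10**5))
assert abs(total - 1) < 1e-4
```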
\subsubsection*{* Non-examinable material}
Using the above observation and some more work, one can prove that if $a_j$ is the $j$-th term of the continued fraction expansion, then the geometric mean
\[ (a_1 a_2 \cdots a_n)^{1/n} \]
converges as $n \to \infty$ for almost every $x$ (in fact to a constant independent of $x$, known as Khinchin's constant). The proof uses ergodic theory -- see \courseref[Probability and Measure]{PM}.

** This is the end of the non-examinable comments.

\subsubsection*{What about convergence?}
We show that if $a_0 \in \ZZ$ and $a_j \in \ZZ$ with $a_j \ge 1$ for $j \ge 1$, then writing
\[ \frac{p_n}{q_n} = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{\ddots + \frac{1}{a_n}}}} \]
we have that $\frac{p_n}{q_n}$ converges as $n \to \infty$. It is then easy to show that the continued fraction expansion algorithm applied to $x$ yields a sequence which converges to $x$. Note that we shall always take $p_n, q_n$ coprime. $\frac{p_n}{q_n}$ is called the $n$-th convergent. Our discussion starts from the fact that we usually produce continued fractions downwards but we evaluate them upwards. So, for fixed $n$ and $0 \le k \le n$, write $\frac{r_k}{s_k}$ (with $r_k, s_k$ coprime) for the value of the tail
\[ a_k + \frac{1}{a_{k + 1} + \frac{1}{\ddots + \frac{1}{a_n}}} , \]
evaluated from the bottom up. Then for $k < n$,
\[ \frac{r_k}{s_k} = a_k + \frac{1}{\frac{r_{k + 1}}{s_{k + 1}}} = a_k + \frac{s_{k + 1}}{r_{k + 1}} = \frac{a_k r_{k + 1} + s_{k + 1}}{r_{k + 1}} \]
and since $r_{k + 1}, s_{k + 1}$ are coprime, we have
\begin{align*} r_k &= a_k r_{k + 1} + s_{k + 1} \\ s_k &= r_{k + 1} \end{align*}
We write our result in matrix form:
\[ \begin{pmatrix} r_k \\ s_k \end{pmatrix} = \begin{pmatrix} a_k & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_{k + 1} \\ s_{k + 1} \end{pmatrix} \]
with
\[ \begin{pmatrix} r_n \\ s_n \end{pmatrix} = \begin{pmatrix} a_n \\ 1 \end{pmatrix} .\]
We find that
\[ \begin{pmatrix} p_n \\ q_n \end{pmatrix} = \begin{pmatrix} r_0 \\ s_0 \end{pmatrix} = \begin{pmatrix} a_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} a_{n - 1} & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_n \\ 1 \end{pmatrix} .\]
Similarly,
\begin{align*} \begin{pmatrix} p_{n - 1} \\ q_{n - 1} \end{pmatrix} &= \begin{pmatrix} a_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} a_{n - 2} & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_{n - 1} \\ 1 \end{pmatrix} \\ &= \begin{pmatrix} a_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} a_{n - 2} & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_{n - 1} & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{align*}
and combining the two columns,
\[ \begin{pmatrix} p_n & p_{n - 1} \\ q_n & q_{n - 1} \end{pmatrix} = \begin{pmatrix} a_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} a_n & 1 \\ 1 & 0 \end{pmatrix} .\]
Thus
\[ \begin{pmatrix} p_n & p_{n - 1} \\ q_n & q_{n - 1} \end{pmatrix} = \begin{pmatrix} p_{n - 1} & p_{n - 2} \\ q_{n - 1} & q_{n - 2} \end{pmatrix} \begin{pmatrix} a_n & 1 \\ 1 & 0 \end{pmatrix} \]
so
\begin{align*} p_n &= a_n p_{n - 1} + p_{n - 2} \\ q_n &= a_n q_{n - 1} + q_{n - 2} \end{align*}
\begin{remark*}
$q_0 = 1$, $q_1 \ge 1$ and $q_n \ge q_{n - 1} + q_{n - 2}$ (since
$a_n \ge 1$ for $n \ge 1$), so $q_n$ grows at least as fast as the Fibonacci numbers; in particular $q_n \to \infty$.
\end{remark*}
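The recurrences for $p_n$ and $q_n$ can be checked against direct bottom-up evaluation of the fraction. Here is a short Python sketch (function names are mine); exact rational arithmetic via \verb|Fraction| avoids any floating-point issues for long digit strings.

```python
from fractions import Fraction

def convergents(a):
    # Forward recurrence derived from the matrix identity:
    # p_n = a_n p_{n-1} + p_{n-2}, q_n = a_n q_{n-1} + q_{n-2},
    # started from p_0 = a_0, q_0 = 1 (with p_{-1} = 1, q_{-1} = 0).
    p_prev, p = 1, a[0]
    q_prev, q = 0, 1
    out = [(p, q)]
    for an in a[1:]:
        p, p_prev = an * p + p_prev, p
        q, q_prev = an * q + q_prev, q
        out.append((p, q))
    return out

def evaluate_upwards(a):
    # Bottom-up evaluation of a_0 + 1/(a_1 + 1/(... + 1/a_n)).
    x = Fraction(a[-1])
    for an in reversed(a[:-1]):
        x = an + 1 / x
    return x

# Example: digits [1, 1, ..., 1] give ratios of consecutive Fibonacci
# numbers (the convergents of the golden ratio).
a = [1] * 10
cs = convergents(a)
p, q = cs[-1]
assert Fraction(p, q) == evaluate_upwards(a)
print(p, q)  # 89 55

# The matrix product above has determinant (-1)^{n+1}, so successive
# convergents satisfy p_n q_{n-1} - p_{n-1} q_n = ±1; in particular
# p_n and q_n are automatically coprime.
p1, q1 = cs[-2]
assert p * q1 - p1 * q in (1, -1)
```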