%! TEX root = FR.tex
% vim: tw=50
% 24/01/2025 10AM
\newpage
\section{What is Fourier Restriction Theory?}
Main object: $f : \Rbb^d \to \Cbb$, $f(x) = \sum_{\xi \in \mathcal{R}} b_\xi e^{2\pi i x \cdot \xi}$, $b_\xi \in \Cbb$.
\begin{notation*}
We will write $e(x \cdot \xi) = e^{2\pi i x \cdot \xi}$.
\end{notation*}
$x \in \Rbb^d$ is the spatial variable, and $\xi \in \Rbb^d$ is the frequency variable. The frequencies (i.e. the Fourier transform) of $f$ are \emph{restricted} to a set $\mathcal{R}$ (where $\mathcal{R}$ will always be finite -- so no need to worry about convergence issues).

Goal: Understand the behaviour of $f$ in terms of properties of $\mathcal{R}$.

\begin{example*}
\phantom{}
\begin{enumerate}[(i)]
\item Schr\"odinger equation:
\[ u(x, t) = \sum_{n = 1}^{N} b_n e(nx + n^2 t) .\]
Easy: $(2\pi i \partial_t - \Delta) u = 0$, with initial data $u(x, 0) = \sum_{n = 1}^{N} b_n e(nx)$. Write $(x, t) = (x_1, x_2)$. Then since $e(nx + n^2 t) = e((n, n^2) \cdot (x, t))$, we might consider $\mathcal{R} = \{(n, n^2) : n = 1, \ldots, N\}$.
\item Dirichlet polynomials:
\[ D(t) = \sum_{n = N}^{2N} b_n e^{i(\log n)t} .\]
With $b_n \equiv 1$, these are partial sums of the Riemann zeta function. We might consider $\mathcal{R} = \left\{\frac{1}{2\pi}\log n\right\}_{n = N}^{2N}$.
\end{enumerate}
Both examples avoid linear structure: $\{\log n\}$ is a concave set (the points get closer and closer together), and $\{(n, n^2)\}$ lies on a parabola.
\end{example*}

Guiding principle: if an object avoids (linear) structure, then we expect some random or average behaviour. The above examples avoid linear structure using some notion of curvature. See Bourgain's $\Lambda(p)$ paper: $\to$ extra behaviour.

Square root cancellation: if we add $\pm 1$ randomly $N$ times, then we expect a quantity of size $N^{\half}$.
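As a quick numerical illustration (a sketch, not part of the lecture): summing $N$ independent random signs typically produces a quantity of size about $N^{\half}$, which we can see by averaging $|S|^2$ over many trials.

```python
import random

# Monte Carlo illustration of square root cancellation:
# the sum S of N independent +-1 signs typically has size ~ sqrt(N).
random.seed(0)

N = 10_000
trials = 200

# Average |S|^2 over many trials; for independent signs, E|S|^2 = N exactly.
mean_sq = sum(
    sum(random.choice((-1, 1)) for _ in range(N)) ** 2
    for _ in range(trials)
) / trials

rms = mean_sq ** 0.5
print(f"N = {N}, root-mean-square of S = {rms:.1f}, sqrt(N) = {N ** 0.5:.1f}")
```

The root-mean-square size comes out close to $\sqrt{N} = 100$, far below the trivial bound $N$.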
\begin{fcthm}[Khinchin's inequality]
\label{thm:khinchin}
Assuming:
- $\{\eps_n\}_{n = 1}^N$ are IID random variables with $\Pbb(\eps_n = 1) = \Pbb(\eps_n = -1) = \half$
- $1 < p < \infty$
- $x_1, \ldots, x_N \in \Cbb$
Then:
\[ \left( \Ebb \left| \sum_{n = 1}^{N} \eps_n x_n \right|^p \right)^{\frac{1}{p}} \sim_p \left( \sum_{n = 1}^{N} |x_n|^2 \right)^{\half} = \|x\|_2 .\]
\end{fcthm}
\begin{notation*}
$\sim_p$ means $\sim$ but the constant may depend on $p$.
\end{notation*}
\begin{proof}
Without loss of generality, $x_1, \ldots, x_N \in \Rbb$. Without loss of generality, $\|x\|_2 = 1$.

$p = 2$: want to show $\Ebb \left( \left| \sum_n \eps_n x_n \right|^2 \right) \sim 1$.
\[ \Ebb \left( \sum_n \eps_n x_n \ol{\sum_m \eps_m x_m} \right) = \sum_{n, m} \Ebb(\eps_n \eps_m x_n \ol{x_m}) = \sum_n |x_n|^2 + \sum_n \sum_{m \neq n} x_n \ol{x_m} \ub{\Ebb \eps_n}_{= 0} \Ebb \eps_m .\]
What about general exponents $p$?
\[ \Ebb \left( \left| \sum_n \eps_n x_n \right|^p \right) = \int_0^\infty \Pbb \left( \left| \sum_n \eps_n x_n \right|^p > \alpha\right) \dd \alpha .\]
The equality here is the layer cake formula, which is true for any $p \in (0, \infty)$.

Let $\lambda > 0$. Study the random variable $e^{\lambda \sum_n \eps_n x_n} \in (0, \infty)$.
\[ \Ebb \left( e^{\lambda \sum_n \eps_n x_n} \right) = \Ebb \left( \prod_n e^{\lambda \eps_n x_n} \right) = \prod_n \Ebb e^{\lambda \eps_n x_n} = \prod_n \left(\half e^{\lambda x_n} + \half e^{-\lambda x_n}\right) .\]
Fact: $\half e^z + \half e^{-z} \le e^{\frac{z^2}{2}}$ (to check, use the Taylor series).
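For completeness, the Taylor series check can be written out by comparing coefficients term by term:
\[ \half e^z + \half e^{-z} = \sum_{n = 0}^{\infty} \frac{z^{2n}}{(2n)!} \le \sum_{n = 0}^{\infty} \frac{z^{2n}}{2^n n!} = e^{\frac{z^2}{2}} ,\]
since $(2n)! = \left( (n + 1) \cdots (2n) \right) \cdot n! \ge 2^n n!$.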
So we can get
\begin{align*}
\alpha \Pbb (e^{\lambda \sum_n \eps_n x_n} > \alpha) &\le \Ebb(e^{\lambda \sum_n \eps_n x_n}) &&(\text{Chebyshev's inequality})\\
&\le \prod_n e^{\lambda^2 |x_n|^2 / 2} \\
&= e^{\lambda^2 / 2}
\end{align*}
(using $\|x\|_2 = 1$ in the last step). By symmetry (replacing $\eps_n$ by $-\eps_n$),
\[ \alpha \Pbb(e^{\lambda \left| \sum_n \eps_n x_n \right|} > \alpha) \lesssim e^{\lambda^2 / 2} .\]
Choose $\alpha = e^{\lambda^2}$:
\[ \Pbb\left(\left| \sum_n \eps_n x_n \right| > \lambda \right) = \Pbb\left(\left| \sum_n \eps_n x_n \right|^p > \lambda^p \right) \lesssim e^{-\lambda^2 / 2} .\]
Use in the layer cake formula:
\[ \Ebb \left( \left| \sum_n \eps_n x_n \right|^p \right) \lesssim \int_0^\infty e^{-\alpha^{2 / p} / 2} \dd \alpha \sim_p 1 .\]
Lower bound: use H\"older's inequality. Let $X = \sum_n \eps_n x_n$ and $\frac{1}{p} + \frac{1}{q} = 1$. Then
\[ \ub{\Ebb(X \ol{X})}_{= 1} = \Ebb(|X| \cdot |X|) \le (\Ebb(|X|^p))^{1 / p} \ub{(\Ebb |X|^q)^{1 / q}}_{\lesssim_p 1} ,\]
which gives $(\Ebb(|X|^p))^{1 / p} \gtrsim_p 1$.
\end{proof}

Can you find a more intuitive proof? E-mail Dominique Maldague.

\begin{corollary*}
$\Ebb \left( \int \left| \sum_{n = 1}^N \eps_n f_n(x) \right|^p \dd x \right) \sim_p \int \left( \sum_{n = 1}^{N} |f_n(x)|^2 \right)^{p / 2} \dd x$.
\end{corollary*}
Useful for exercises!

Return to the Fourier restriction context.
\begin{align*}
f(x) &= \sum_{n = 1}^N e(nx) && \mathcal{R} &= \{1, \ldots, N\} \\
g(x) &= \sum_{n = 1}^{N} e(n^2 x) && \mathcal{R} &= \{1^2, 2^2, \ldots, N^2\}
\end{align*}
Both $f, g$ are $1$-periodic, so we study them on $\Tbb = [0, 1]$. $f(0) = N$, and $|f(x)| \sim N$ for $x \in \left[ 0, c \frac{1}{N} \right]$. $g(0) = N$, and $|g(x)| \sim N$ for $x \in \left[ 0, c \frac{1}{N^2} \right]$.
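A quick numerical sanity check of the behaviour of $f$ near $0$ (a sketch; the sample point $0.1/N$ stands in for the constant $c$, and $x = 1/2$ is just one generic point exhibiting cancellation):

```python
import cmath
import math

def f(x, N):
    """The exponential sum f(x) = sum_{n=1}^N e(nx), where e(t) = exp(2*pi*i*t)."""
    return sum(cmath.exp(2j * math.pi * n * x) for n in range(1, N + 1))

N = 100
# At x = 0 all N phases align, so |f(0)| = N.
print(abs(f(0, N)))
# For x of size ~ 1/N the phases still roughly align, so |f(x)| ~ N.
print(abs(f(0.1 / N, N)))
# At a generic point the phases cancel: here the terms alternate +-1 (N even).
print(abs(f(0.5, N)))
```

The first two values are close to $N = 100$, while the last is essentially $0$.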
\[ \int_{[0, 1]} |f(x)|^2 \dd x = \sum_{n, m} \int_{[0, 1]} e((n - m) x) \dd x = N .\]
\[ \int_{[0, 1]} |g(x)|^2 \dd x = \sum_{n, m} \int_{[0, 1]} e((n^2 - m^2) x) \dd x = N .\]
\begin{center}
\includegraphics[width=0.6\linewidth]{images/cab612ba286f4e61.png}
\end{center}
\[ \int_{[0, 1]} |f(x)|^p \dd x \gtrsim N^{p / 2} + N^{p - 1} \]
\[ \int_{[0, 1]} |g(x)|^p \dd x \gtrsim N^{p / 2} + N^{p - 2} .\]
(The $N^{p - 1}$ and $N^{p - 2}$ terms come from the intervals near $0$ where $|f| \sim N$ and $|g| \sim N$ respectively; the $N^{p / 2}$ terms come from the $L^2$ mass, via H\"older's inequality.) For the first one, $N^{p - 1}$ (organised behaviour) dominates as soon as $p > 2$, while for the second one, $N^{p / 2}$ dominates for all $2 \le p \le 4$ (``square root cancellation behaviour lasts for longer'').
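Expanding the fourth power and using orthogonality as in the $L^2$ computations above, $\int_{[0,1]} |f|^4 \dd x$ counts quadruples with $n_1 + n_2 = n_3 + n_4$, and $\int_{[0,1]} |g|^4 \dd x$ counts quadruples with $n_1^2 + n_2^2 = n_3^2 + n_4^2$, $n_i \in \{1, \ldots, N\}$. A numerical sketch (not from the lecture) comparing the two counts at $p = 4$, where the bounds above predict roughly $N^{p-1} = N^3$ for $f$ but only about $N^{p/2} = N^2$ (up to logarithms) for $g$:

```python
from collections import Counter

def fourth_moment(freqs):
    """Number of quadruples (a, b, c, d) in freqs^4 with a + b = c + d.
    By orthogonality, this equals the integral of |sum e(freq * x)|^4 over [0, 1]."""
    freqs = list(freqs)
    sums = Counter(a + b for a in freqs for b in freqs)
    return sum(r * r for r in sums.values())

N = 20
linear = fourth_moment(range(1, N + 1))                   # frequencies 1, ..., N
squares = fourth_moment(n * n for n in range(1, N + 1))   # frequencies 1, 4, ..., N^2

print(linear, squares)
```

The curvature of the parabola kills most coincidences: `squares` stays close to the trivial count $2N^2 - N$ coming from quadruples with $\{a, b\} = \{c, d\}$, while `linear` is an order of magnitude larger.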