%! TEX root = PC.tex
% vim: tw=80 ft=tex
% 05/12/2025 09AM
\textbf{This lecture is non-examinable.}
\begin{fcthm}[]
\label{thm:graphcont}
For all $C > 0$, there exists $\delta > 0$ (one may take $\delta = \frac{1}{4C}$) such that the following holds. Let $G$ be a graph on $n$ vertices with average degree $d$ and $\Delta(G) \le Cd$. Then there exists $\mathcal{C} \subset \mathcal{P}(V(G))$ such that:
\begin{enumerate}[(1)]
\item $|\mathcal{C}| \le {n \choose \le \frac{2\delta n}{d}}$.
\item For every $C \in \mathcal{C}$ we have $|C| \le (1 - \delta) n$.
\item Every independent set in $G$ is contained in some $C \in \mathcal{C}$.
\end{enumerate}
\end{fcthm}
This theorem is due to Kleitman and Winston in the 1980s.
\begin{notation*}
${n \choose \le k} = \sum_{i = 0}^k {n \choose i}$.
\end{notation*}
Given $G$ and an independent set $I$, we run an algorithm that produces sets $F(I)$, $A(I)$ satisfying
\[ \ub{F(I)}_{\text{``the fingerprint''}} \subset I \subset \ub{F(I) \cup A(I)}_{\text{$C(I)$ is the container}} \]
\textbf{Graph Containers algorithm.} We maintain, as the algorithm runs, a partition
\[ V(G) = \ub{A}_{\text{the vertices}} \cup \ub{B}_{\text{bin}} \cup \ub{F}_{\text{fingerprint so far}} ,\]
and we start with $A = V(G)$, $B = \emptyset$, $F = \emptyset$. While $|B| < \delta n$, we loop the following: let $v \in A$ be a vertex of maximum degree in $G[A]$ (breaking ties according to some fixed ordering of $V(G)$ specified in advance, so that the algorithm is deterministic and reproducible; this will be useful for an observation later).
\begin{center}
\includegraphics[width=0.6\linewidth]{images/7032b924b7b64855.png}
\end{center}
\begin{enumerate}[(1)]
\item If $v \notin I$, then just move $v$ into $B$.
\item If $v \in I$, then move $v$ into $F$ and move all of $N(v) \cap A$ into $B$.
\end{enumerate}
When the algorithm stops, we define $A(I) = A$ and $F(I) = F$.

\textbf{Observation:} $F(I) \subset I \subset F(I) \cup A(I)$.

\begin{proof}
The first inclusion holds by definition. For the second: we never move a vertex of $I$ into $B$, since in step (1) the chosen vertex is not in $I$, and in step (2) the vertices moved into $B$ are neighbours of $v \in I$, so none of them lie in $I$ because $I$ is independent.
\end{proof}

\textbf{Observation:} $A(I)$ is determined by $F(I)$.

\begin{proof}
Run the algorithm again with $F(I)$ in place of $I$. The chosen vertex at each step is determined (here we use the fixed ordering of $V(G)$ that makes the algorithm deterministic and reproducible), and a chosen vertex lies in $I$ if and only if it lies in $F(I)$, so this run makes exactly the same moves as the original one, and in particular produces the same final set $A$.
\end{proof}

\textbf{Observation:} $|F(I)| \le \frac{2\delta n}{d}$.

\begin{proof}
We show that each time we move a vertex into $F$, we move at least $\frac{d}{2}$ vertices from $A$ into $B$. Note that at all times during the algorithm,
\begin{align*}
e(G[A]) &\ge e(G) - |B| \cdot \Delta(G) \\
&\ge \frac{dn}{2} - \delta n \cdot Cd \\
&\ge \frac{dn}{4}
\end{align*}
so the average degree of $G[A]$ is at least $\frac{2e(G[A])}{n} \ge \frac{d}{2}$, and hence $\Delta(G[A]) \ge \frac{d}{2}$ at every step. In particular, when a vertex $v$ is moved into $F$, its at least $\frac{d}{2}$ neighbours in $A$ are moved into $B$. So if $|F(I)| > \frac{2\delta n}{d}$, then $|B| > \frac{2\delta n}{d} \cdot \frac{d}{2} = \delta n$, contradiction.
\end{proof}

\begin{proof}[Proof of \cref{thm:graphcont}]
We define
\[ \mathcal{C} = \{A(I) \cup F(I) : \text{$I$ \gls{ind} in $G$}\} .\]
Property (3) holds by the first observation. For (2), note that $|A(I) \cup F(I)| = n - |B| \le (1 - \delta)n$, since $|B| \ge \delta n$ when the algorithm stops. For property (1), note that by the third observation the number of possible $F(I)$'s is at most ${n \choose \le \frac{2\delta n}{d}}$, and each $F(I)$ determines $F(I) \cup A(I)$ by the second observation.
\end{proof}

Informal explanation of the proof: suppose Alice and Bob both know the structure of a certain graph. Suppose Alice is given an independent set $I$, and wants to communicate its structure to Bob by giving him a list of some vertices from $I$. It makes sense for Alice to start by telling Bob the vertex of $I$ with the highest degree, since this tells him that many vertices (all the neighbours of that vertex) are not in $I$:
\begin{center}
\includegraphics[width=0.6\linewidth]{images/4e940b2400244edb.png}
\end{center}
For the next vertex, Alice shouldn't just tell Bob the vertex of next-highest degree, because its neighbourhood might overlap a lot with that of the first vertex, in which case telling Bob about this vertex wouldn't give him much new information.
So instead, it makes more sense to pick the vertex with the highest degree into the set of vertices that Bob has not yet discarded; this gives the algorithm described above. If none of the vertices in $I$ have large degree, then Bob can still gain a lot of information from Alice, since Alice is (implicitly) saying ``this vertex $v$ is in $I$, and it is the most informative vertex I could have told you about''. Using this strategy, Bob gains useful information no matter what: if $I$ contains a large-degree vertex, then Bob can immediately discard a lot of vertices. If it doesn't, then when Alice tells Bob a vertex, Bob learns that $I$ doesn't contain any high-degree vertices, which is in itself very valuable information.

Generalising to hypergraphs: we employ a similar strategy.
\begin{center}
\includegraphics[width=0.6\linewidth]{images/3ee799c8169040d9.png}
\end{center}
In this case, when Alice tells Bob about a vertex, he can't immediately discard any vertices. Instead we have the following: if $\{u, v, w\}$ is an edge and Alice tells Bob that $v \in I$, then Bob knows that it can't be the case that both of $u$ and $w$ are in $I$. Bob can keep track of this information in a graph, to which we can apply ideas from the proof of graph containers. The proof will be more complex, and we will need to make use of the condition on $\Delta_2$.

\textbf{Container algorithm for $3$-uniform hypergraphs.} We maintain a partition
\[ V(\mathcal{H}) = A \cup B \cup F .\]
We also maintain an auxiliary graph $G$ on $V(\mathcal{H})$, updated throughout the algorithm. We start with $A = V(\mathcal{H})$, $B = F = \emptyset$, and $G$ with no edges, and we repeat the following until $|A| < (1 - \delta)N$.
\begin{enumerate}[(1)]
\item If $\Delta(G[A]) \ge c\sqrt{d}$, then choose a vertex $x \in A$ of maximum degree in $G[A]$ (breaking ties deterministically as before). If $x \notin I$, then move it to $B$. If $x \in I$, then move it into $F$ and move $N_G(x) \cap A$ to $B$.
\item If $\Delta(G[A]) < c\sqrt{d}$, then let $x \in A$ be a vertex of maximum degree in $\mathcal{H}[A]$. If $x \notin I$, then move $x$ to $B$.
If $x \in I$, move $x$ to $F$, and add all the edges
\[ \{yz : \{x, y, z\} \in \mathcal{H}[A]\} \]
to $G$. Then remove from $\mathcal{H}$ all edges that contain an edge of $G$.
\end{enumerate}
When the loop terminates, we set $F(I) = F$ and $A(I) = A$. The proof that this algorithm works is somewhat similar to the proof for graph containers, but more complicated.
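To make the graph container algorithm from the start of the lecture concrete, here is a minimal Python sketch (an illustration only, not part of the lecture; the function and variable names are my own). It takes an adjacency list, an independent set $I$ and the parameter $\delta$, and returns the fingerprint $F(I)$ and the leftover set $A(I)$.

```python
# A minimal sketch of the graph container algorithm (illustration only;
# helper names are mine, not from the lecture). Vertices are 0, ..., n-1
# and `adj` maps each vertex to its set of neighbours. Ties are broken
# by smallest label, so the run is deterministic: this is what makes
# A(I) recoverable from F(I).

def containers_run(adj, I, delta):
    """Return (F, A) with F <= I <= F | A."""
    n = len(adj)
    A = set(range(n))
    B = set()
    F = set()
    I = set(I)
    while A and len(B) < delta * n:
        # vertex of maximum degree in G[A], smallest label first
        v = max(A, key=lambda u: (len(adj[u] & A), -u))
        A.remove(v)
        if v not in I:
            B.add(v)             # rule (1): v goes into the bin
        else:
            F.add(v)             # rule (2): v joins the fingerprint
            moved = adj[v] & A   # neighbours of v are not in I,
            A -= moved           # since I is independent
            B |= moved
    return F, A

# Example: the 6-cycle with independent set {0, 2, 4} and delta = 1/2.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
F, A = containers_run(adj, {0, 2, 4}, 0.5)
assert F <= {0, 2, 4} <= F | A   # F(I) subset I subset F(I) u A(I)
```

Because the tie-breaking is fixed, rerunning the algorithm with $F(I)$ in place of $I$ reproduces the same moves, which is exactly the observation that $A(I)$ is determined by $F(I)$.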