Entropy Methods in Combinatorics
Daniel Naylor

Contents

1  The Khinchin (Shannon?) axioms for entropy
2  A special case of Sidorenko’s conjecture
3  Brégman’s Theorem
4  Shearer’s lemma and applications
5  The union-closed conjecture
6  Entropy in additive combinatorics
7  A proof of Marton’s conjecture in 𝔽₂ⁿ
Index

1 The Khinchin (Shannon?) axioms for entropy

Note. In this course, “random variable” will mean “discrete random variable” (unless otherwise specified).

All logarithms will be base 2 (unless otherwise specified).

Definition (Entropy). The entropy of a discrete random variable X is a quantity H[X] that takes real values and has the following properties:

  • (i) Normalisation: If X is uniform on {0,1} then H[X]=1.
  • (ii) Invariance: If X takes values in A, Y takes values in B, f is a bijection from A to B, and for every a ∈ A we have ℙ[X=a] = ℙ[Y=f(a)], then H[Y] = H[X].
  • (iii) Extendability: If X takes values in a set A, and B is disjoint from A, Y takes values in A ∪ B, and for all a ∈ A we have ℙ[Y=a] = ℙ[X=a], then H[Y] = H[X].
  • (iv) Maximality: If X takes values in a finite set A and Y is uniformly distributed in A, then H[X] ≤ H[Y].
  • (v) Continuity: H depends continuously on X with respect to total variation distance (defined by: the distance between X and Y is sup_E |ℙ[X ∈ E] − ℙ[Y ∈ E]|).

For the last axiom we need a definition:

Let X and Y be random variables. The conditional entropy H[X|Y] of X given Y is

∑_y ℙ[Y=y] H[X|Y=y].

  • (vi) Additivity: H[X,Y]=H[Y]+H[X|Y].

Lemma 1.1. Assuming that:

  • X and Y are independent random variables

Then
H[X,Y]=H[X]+H[Y].

Proof. H[X|Y] = ∑_y ℙ[Y=y] H[X|Y=y].

Since X and Y are independent, the distribution of X is unaffected by knowing Y (so by invariance, H[X|Y=y]=H[X]), so

H[X|Y=y]=H[X]

for all y, which gives the result.

Corollary 1.2. If X1,,Xn are independent, then

H[X₁,…,Xₙ] = H[X₁] + ⋯ + H[Xₙ].

Proof. Lemma 1.1 and obvious induction.

Lemma 1.3 (Chain rule). Assuming that:

  • X1,,Xn are random variables

Then
H[X₁,…,Xₙ] = H[X₁] + H[X₂|X₁] + H[X₃|X₁,X₂] + ⋯ + H[Xₙ|X₁,…,Xₙ₋₁].

Proof. The case n=2 is additivity. In general,

H[X₁,…,Xₙ] = H[X₁,…,Xₙ₋₁] + H[Xₙ|X₁,…,Xₙ₋₁]

so we are done by induction.

Lemma 1.4. Assuming that:

  • Y=f(X)

Then
H[X,Y]=H[X].

Also,

H[Z|X,Y]=H[Z|X].

Proof. The map g : x ↦ (x, f(x)) is a bijection, and (X,Y) = g(X). So the first statement follows by invariance. For the second statement:

H[Z|X,Y] = H[Z,X,Y] − H[X,Y]   (by additivity)
= H[Z,X] − H[X]   (by first part)
= H[Z|X]   (by additivity)

Lemma 1.5. Assuming that:

  • X takes only one value

Then H[X]=0.

Proof. X and X are independent. Therefore, by Lemma 1.1, H[X,X]=2H[X]. But by invariance, H[X,X]=H[X]. So H[X]=0.

Proposition 1.6. Assuming that:

  • X is uniformly distributed on a set of size 2^n

Then H[X]=n.

Proof. Let X₁,…,Xₙ be independent random variables uniformly distributed on {0,1}. By Corollary 1.2 and normalisation, H[X₁,…,Xₙ] = n. But (X₁,…,Xₙ) is uniformly distributed on {0,1}ⁿ, so by invariance, the result follows.

Proposition 1.7. Assuming that:

  • X is uniformly distributed on a set A of size n

Then H[X] = log n.

Reminder: log here is to the base 2 (which is the convention for this course).

Proof. Let r be a positive integer and let X1,,Xr be independent copies of X.

Then (X₁,…,X_r) is uniform on A^r and

H[X₁,…,X_r] = rH[X].

Now pick k such that 2^k ≤ n^r ≤ 2^{k+1}. Then by invariance, maximality, and Proposition 1.6, we have that

k ≤ rH[X] ≤ k+1.

So

k/r ≤ log n ≤ (k+1)/r and k/r ≤ H[X] ≤ (k+1)/r, so |H[X] − log n| ≤ 1/r for every r.

Therefore, H[X] = log n as claimed.

Notation. We will write p_a = ℙ[X=a].

We will also use the notation [n] = {1,2,…,n}.

Theorem 1.8 (Khinchin). Assuming that:

  • H satisfies the Khinchin axioms

  • X takes values in a finite set A

Then
H[X] = ∑_{a∈A} p_a log(1/p_a).

Proof. First we do the case where all pa are rational (and then can finish easily by the continuity axiom).

Pick n such that for all a, there is some non-negative integer m_a such that p_a = m_a/n.

Let Z be uniform on [n]. Let (E_a : a ∈ A) be a partition of [n] into sets with |E_a| = m_a. By invariance we may assume that X = a ⟺ Z ∈ E_a. Then

log n = H[Z] = H[Z,X] = H[X] + H[Z|X] = H[X] + ∑_{a∈A} p_a H[Z|X=a] = H[X] + ∑_{a∈A} p_a log(m_a) = H[X] + ∑_{a∈A} p_a (log p_a + log n)

Hence

H[X] = −∑_{a∈A} p_a log p_a = ∑_{a∈A} p_a log(1/p_a).

By continuity, since this holds if all pa are rational, we conclude that the formula holds in general.
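As a quick sanity check, the formula just proved is easy to compute directly; a minimal sketch (the helper name `entropy` is ours, not from the notes):

```python
import math

def entropy(dist):
    """Shannon entropy (base 2) of a distribution given as {outcome: probability}."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

# Uniform on {0,1}: the normalisation axiom gives H = 1.
assert abs(entropy({0: 0.5, 1: 0.5}) - 1.0) < 1e-9

# Uniform on a 4-element set: H = log 4 = 2, as in Proposition 1.6.
assert abs(entropy({a: 0.25 for a in "abcd"}) - 2.0) < 1e-9
```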

Corollary 1.9. Assuming that:

  • X and Y random variables

Then H[X] ≥ 0 and H[X|Y] ≥ 0.

Proof. Immediate consequence of Theorem 1.8.

Corollary 1.10. Assuming that:

  • Y=f(X)

Then H[Y] ≤ H[X].

Proof. H[X] = H[X,Y] = H[Y] + H[X|Y]. But H[X|Y] ≥ 0.

Proposition 1.11 (Subadditivity). Assuming that:

  • X and Y be random variables

Then H[X,Y] ≤ H[X] + H[Y].

Proof. Note that for any two random variables X, Y we have

H[X,Y] ≤ H[X] + H[Y] ⟺ H[X|Y] ≤ H[X] ⟺ H[Y|X] ≤ H[Y]

Next, observe that H[X|Y] ≤ H[X] if X is uniform on a finite set. That is because

H[X|Y] = ∑_y ℙ[Y=y] H[X|Y=y] ≤ ∑_y ℙ[Y=y] H[X]   (by maximality) = H[X]

By the equivalence noted above, we also have that H[X|Y] ≤ H[X] if Y is uniform.

Now let p_ab = ℙ[(X,Y)=(a,b)] and assume that all p_ab are rational. Pick n such that we can write p_ab = m_ab/n with each m_ab an integer. Partition [n] into sets E_ab of size m_ab. Let Z be uniform on [n]. Without loss of generality (by invariance), (X,Y) = (a,b) ⟺ Z ∈ E_ab.

Let E_b = ⋃_a E_ab for each b. So Y = b ⟺ Z ∈ E_b. Now define a random variable W as follows: if Y = b, then W ∈ E_b, with W uniformly distributed in E_b and independent of X (or Z if you prefer).

So W and X are conditionally independent given Y, and W is uniform on [n].

Then

H[X|Y] = H[X|Y,W]   (by conditional independence)
= H[X|W]   (as W determines Y)
≤ H[X]   (as W is uniform)

By continuity, we get the result for general probabilities.

Corollary 1.12. Assuming that:

  • X a random variable

Then H[X]0.

Proof (Without using formula). By Subadditivity, H[X|X] ≤ H[X]. But H[X|X] = 0.

Corollary 1.13. Assuming that:

  • X1,,Xn are random variables

Then
H[X₁,…,Xₙ] ≤ H[X₁] + ⋯ + H[Xₙ].

Proof. Induction using Subadditivity.
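Subadditivity is easy to test numerically; a hypothetical toy joint distribution (all names and numbers are ours):

```python
import math

def H(dist):
    """Base-2 entropy of a distribution {outcome: probability}."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

# A toy joint distribution for (X, Y) on {0,1}^2 (deliberately not independent).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
pX = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
pY = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

# Subadditivity: H[X,Y] <= H[X] + H[Y].
assert H(joint) <= H(pX) + H(pY) + 1e-12
```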

Proposition 1.14 (Submodularity). Assuming that:

  • X,Y,Z are random variables

Then
H[X|Y,Z] ≤ H[X|Z].

Proof. Calculate:

H[X|Y,Z] = ∑_z ℙ[Z=z] H[X|Y,Z=z] ≤ ∑_z ℙ[Z=z] H[X|Z=z] = H[X|Z]

Submodularity can be expressed in many ways.

Expanding using additivity gives the following inequalities:

H[X,Y,Z] − H[Y,Z] ≤ H[X,Z] − H[Z]
H[X,Y,Z] ≤ H[X,Z] + H[Y,Z] − H[Z]
H[X,Y,Z] + H[Z] ≤ H[X,Z] + H[Y,Z]

Lemma 1.15. Assuming that:

  • X,Y,Z random variables

  • Z=f(Y)

Then
H[X|Y] ≤ H[X|Z].

Proof.

H[X|Y] = H[X,Y] − H[Y] = H[X,Y,Z] − H[Y,Z] ≤ H[X,Z] − H[Z]   (Submodularity) = H[X|Z]

Lemma 1.16. Assuming that:

  • X,Y,Z random variables

  • Z=f(X)=g(Y)

Then
H[X,Y] + H[Z] ≤ H[X] + H[Y].

Proof. Submodularity says:

H[X,Y,Z] + H[Z] ≤ H[X,Z] + H[Y,Z].

Since Z is a function of X and also a function of Y, we have H[X,Z] = H[X], H[Y,Z] = H[Y] and H[X,Y,Z] = H[X,Y], which gives the result.

Lemma 1.17. Assuming that:

  • X takes values in a finite set A

  • Y is uniform on A

  • H[X]=H[Y]

Then X is uniform.

Proof. Let pa=[X=a]. Then

H[X] = ∑_{a∈A} p_a log(1/p_a) = |A| · 𝔼_{a∈A} p_a log(1/p_a)

The function x ↦ x log(1/x) is concave on [0,1]. So, by Jensen’s inequality this is at most

|A| (𝔼_a p_a) log(1/𝔼_a p_a) = log(|A|) = H[Y].

Equality holds if and only if a ↦ p_a is constant – i.e. X is uniform.

Corollary 1.18. Assuming that:

  • X,Y random variables

  • H[X,Y]=H[X]+H[Y]

Then X and Y are independent.

Proof. We go through the proof of Subadditivity and check when equality holds.

Suppose that X is uniform on A. Then

H[X|Y] = ∑_y ℙ[Y=y] H[X|Y=y] ≤ H[X]

with equality if and only if X conditioned on Y = y is uniform on A for every y (by Lemma 1.17), which implies that X and Y are independent.

At the last stage of the proof we used

H[X|Y] = H[X|Y,W] = H[X|W] ≤ H[X]

where W was uniform. So equality holds only if X and W are independent, which implies (since Y is a function of W) that X and Y are independent.

Definition (Mutual information). Let X and Y be random variables. The mutual information I[X:Y] is

H[X] + H[Y] − H[X,Y] = H[X] − H[X|Y] = H[Y] − H[Y|X]

Subadditivity is equivalent to the statement that I[X:Y] ≥ 0 and Corollary 1.18 implies that I[X:Y] = 0 if and only if X and Y are independent.

Note that

H[X,Y] = H[X] + H[Y] − I[X:Y].

Definition (Conditional mutual information). Let X, Y and Z be random variables. The conditional mutual information of X and Y given Z, denoted by I[X:Y|Z] is

∑_z ℙ[Z=z] I[X|Z=z : Y|Z=z] = ∑_z ℙ[Z=z] (H[X|Z=z] + H[Y|Z=z] − H[X,Y|Z=z]) = H[X|Z] + H[Y|Z] − H[X,Y|Z] = H[X,Z] + H[Y,Z] − H[X,Y,Z] − H[Z]

Submodularity is equivalent to the statement that I[X:Y|Z] ≥ 0.
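The identities above are easy to check numerically; the helper `mutual_information` below is our own illustration, not notation from the notes:

```python
import math

def H(dist):
    """Base-2 entropy of a distribution {outcome: probability}."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I[X:Y] = H[X] + H[Y] - H[X,Y] for a joint distribution {(x, y): p}."""
    pX, pY = {}, {}
    for (x, y), p in joint.items():
        pX[x] = pX.get(x, 0) + p
        pY[y] = pY.get(y, 0) + p
    return H(pX) + H(pY) - H(joint)

# Independent coordinates: I[X:Y] = 0.
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
assert abs(mutual_information(indep)) < 1e-12

# Perfectly correlated coordinates: I[X:Y] = H[X] = 1.
corr = {(0, 0): 0.5, (1, 1): 0.5}
assert abs(mutual_information(corr) - 1.0) < 1e-12
```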

2 A special case of Sidorenko’s conjecture

Let G be a bipartite graph with (finite) vertex sets X and Y and density α (defined to be |E(G)|/(|X||Y|)). Let H be another (think of it as ‘small’) bipartite graph with vertex sets U and V and m edges.

Now let ϕ : U → X and ψ : V → Y be random functions. Say that (ϕ,ψ) is a homomorphism if ϕ(u)ψ(v) ∈ E(G) for every uv ∈ E(H).

Sidorenko conjectured that: for every G,H, we have

ℙ[(ϕ,ψ) is a homomorphism] ≥ α^m.

Not hard to prove when H is Kr,s. Also not hard to prove when H is K2,2 (use Cauchy Schwarz).

Theorem 2.1. Sidorenko’s conjecture is true if H is a path of length 3.

Proof. We want to show that if G is a bipartite graph of density α with vertex sets X, Y of sizes m and n, and we choose x₁, x₂ ∈ X and y₁, y₂ ∈ Y independently at random, then

ℙ[x₁y₁, x₂y₁, x₂y₂ ∈ E(G)] ≥ α³.

It would be enough to let P be a P₃ chosen uniformly at random and show that H[P] ≥ log(α³m²n²).

Instead we shall define a different random variable taking values in the set of all P3s (and then apply maximality).

To do this, let (X₁,Y₁) be a random edge of G (with X₁ ∈ X, Y₁ ∈ Y). Now let X₂ be a random neighbour of Y₁ and let Y₂ be a random neighbour of X₂.

It will be enough to prove that

H[X₁,Y₁,X₂,Y₂] ≥ log(α³m²n²).

We can choose the edge X₁Y₁ in three equivalent ways: as a uniform random edge of G; by choosing Y₁ = y with probability d(y)/|E(G)| and then X₁ a uniform random neighbour of Y₁; or the same with the two sides swapped.

It follows that Y₁ = y with probability d(y)/|E(G)|, so X₂Y₁ is uniform in E(G); hence X₂ = x with probability d(x)/|E(G)|, so X₂Y₂ is uniform in E(G).

Therefore,

H[X₁,Y₁,X₂,Y₂] = H[X₁] + H[Y₁|X₁] + H[X₂|X₁,Y₁] + H[Y₂|X₁,Y₁,X₂]
= H[X₁] + H[Y₁|X₁] + H[X₂|Y₁] + H[Y₂|X₂]
= H[X₁] + (H[X₁,Y₁] − H[X₁]) + (H[X₂,Y₁] − H[Y₁]) + (H[Y₂,X₂] − H[X₂])
= 3H[U_{E(G)}] − H[Y₁] − H[X₂]
≥ 3H[U_{E(G)}] − H[U_Y] − H[U_X]
= 3 log(αmn) − log m − log n = log(α³m²n²)

So we are done by maximality.

Alternative finish (to avoid using log!):

Let X′, Y′ be uniform in X, Y, independent of each other and of X₁,Y₁,X₂,Y₂. Then:

H[X₁,Y₁,X₂,Y₂,X′,Y′] = H[X₁,Y₁,X₂,Y₂] + H[U_X] + H[U_Y] ≥ 3H[U_{E(G)}]

So by maximality,

#P₃s × |X| × |Y| ≥ |E(G)|³.
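The final inequality #P₃s × |X| × |Y| ≥ |E(G)|³ can be brute-forced on small graphs; a sketch (function and variable names are ours):

```python
import itertools
import random

def count_p3_homs(X, Y, E):
    """Count quadruples (x1, y1, x2, y2) with x1y1, x2y1, x2y2 all edges,
    i.e. homomorphisms of the 3-edge path into the bipartite graph E."""
    E = set(E)
    return sum(1 for x1, x2 in itertools.product(X, repeat=2)
                 for y1, y2 in itertools.product(Y, repeat=2)
                 if (x1, y1) in E and (x2, y1) in E and (x2, y2) in E)

random.seed(0)
X, Y = range(5), range(5)
E = [(x, y) for x in X for y in Y if random.random() < 0.5]
# Sidorenko for P3: (#homs) * |X| * |Y| >= |E|^3.
assert count_p3_homs(X, Y, E) * len(X) * len(Y) >= len(E) ** 3
```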

3 Brégman’s Theorem

Definition (Permanent of a matrix). Let A be an n×n matrix over ℝ. The permanent of A, denoted per(A), is

∑_{σ∈Sₙ} ∏_{i=1}^n A_{iσ(i)},

i.e. “the determinant without the signs”.

Let G be a bipartite graph with vertex sets X, Y of size n. Given (x,y) ∈ X × Y, let

A_{xy} = 1 if xy ∈ E(G), and A_{xy} = 0 if xy ∉ E(G),

ie A is the bipartite adjacency matrix of G.

Then per(A) is the number of perfect matchings in G.

Brégman’s theorem concerns how large per(A) can be if A is a 01 matrix and the sum of entries in the i-th row is d_i.

Let G be a disjoint union of complete bipartite graphs K_{a_i,a_i} for i = 1,…,k, with a₁ + ⋯ + a_k = n.

Then the number of perfect matchings in G is

∏_{i=1}^k a_i!.

Theorem 3.1 (Bregman). Assuming that:

  • G a bipartite graph with vertex sets X,Y of size n

Then the number of perfect matchings in G is at most
∏_{x∈X} (d(x)!)^{1/d(x)}.

Proof (Radhakrishnan). Each matching corresponds to a bijection σ : X → Y such that xσ(x) ∈ E(G) for every x. Let σ be chosen uniformly from all such bijections.

H[σ] = H[σ(x₁)] + H[σ(x₂)|σ(x₁)] + ⋯ + H[σ(xₙ)|σ(x₁),…,σ(xₙ₋₁)],

where x1,,xn is some enumeration of X.

Then

H[σ(x₁)] ≤ log d(x₁)
H[σ(x₂)|σ(x₁)] ≤ 𝔼_σ log d^σ_{x₁}(x₂)

where

d^σ_{x₁}(x₂) = |N(x₂) ∖ {σ(x₁)}|.

In general,

H[σ(x_i)|σ(x₁),…,σ(x_{i−1})] ≤ 𝔼_σ log d^σ_{x₁,…,x_{i−1}}(x_i),

where

d^σ_{x₁,…,x_{i−1}}(x_i) = |N(x_i) ∖ {σ(x₁),…,σ(x_{i−1})}|.

Key idea: we now regard x1,,xn as a random enumeration of X and take the average.

For each xX, define the contribution of x to be

log d^σ_{x₁,…,x_{i−1}}(x_i)

where xi=x (note that this “contribution” is a random variable rather than a constant).

We shall now fix σ. Let the neighbours of x be y1,,yk.


Then one of the y_j will be σ(x), say y_h. Note that d^σ_{x₁,…,x_{i−1}}(x_i) (given that x_i = x) is

d(x) − |{j : σ⁻¹(y_j) comes earlier than x = σ⁻¹(y_h)}|.

All positions of σ1(yh) are equally likely, so the average contribution of x is

(1/d(x)) (log d(x) + log(d(x)−1) + ⋯ + log 1) = (1/d(x)) log(d(x)!).

By linearity of expectation,

H[σ] ≤ ∑_{x∈X} (1/d(x)) log(d(x)!),

so the number of matchings is at most

∏_{x∈X} (d(x)!)^{1/d(x)}.
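For small graphs the permanent can be computed by brute force and compared with Brégman’s bound; a sketch assuming a 0/1 bipartite adjacency matrix (names are ours):

```python
import itertools
import math

def count_perfect_matchings(adj):
    """adj[i][j] = 1 iff x_i ~ y_j; the count equals per(adj)."""
    n = len(adj)
    return sum(1 for sigma in itertools.permutations(range(n))
                 if all(adj[i][sigma[i]] for i in range(n)))

# Two disjoint copies of K_{2,2}: 2! * 2! = 4 matchings, meeting Bregman with equality.
adj = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
bound = math.prod(math.factorial(sum(row)) ** (1 / sum(row)) for row in adj)
assert count_perfect_matchings(adj) == 4
assert count_perfect_matchings(adj) <= bound + 1e-9
```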

Definition (1-factor). Let G be a graph with 2n vertices. A 1-factor in G is a collection of n disjoint edges.

Theorem 3.2 (Kahn-Lovasz). Assuming that:

  • G a graph with 2n vertices

Then the number of 1-factors in G is at most
∏_{x∈V(G)} (d(x)!)^{1/2d(x)}.

Proof (Alon, Friedman). Let M be the set of 1-factors of G, and let (M₁,M₂) be a uniform random element of M². For each M₁, M₂, the union M₁ ∪ M₂ is a collection of disjoint edges and even cycles that covers all the vertices of G.


Call such a union a cover of G by edges and even cycles.

If we are given such a cover, then the number of pairs (M₁,M₂) that could give rise to it is 2^k, where k is the number of even cycles.

Now let’s build a bipartite graph G₂ out of G. G₂ has two vertex sets (call them V₁, V₂), both copies of V(G). Join x ∈ V₁ to y ∈ V₂ if and only if xy ∈ E(G).


By Brégman, the number of perfect matchings in G₂ is at most ∏_{x∈V(G)} (d(x)!)^{1/d(x)}. Each matching gives a permutation σ of V(G) such that xσ(x) ∈ E(G) for every x ∈ V(G).

Each such σ has a cycle decomposition, and each cycle gives a cycle in G. So σ gives a cover of V(G) by isolated vertices, edges and cycles.

Given such a cover with k cycles, each cycle can be directed in two ways, so the number of σ that give rise to it is 2^k, where k is the number of cycles.

So there is an injection from M2 to the set of matchings of G2, since every cover by edges and even cycles is a cover by vertices, edges and cycles.

So

|M|² ≤ ∏_{x∈V(G)} (d(x)!)^{1/d(x)}.

4 Shearer’s lemma and applications

Notation. Given a random variable X = (X₁,…,Xₙ) and A = {a₁,…,a_k} ⊆ [n] with a₁ < a₂ < ⋯ < a_k, write X_A for the random variable (X_{a₁}, X_{a₂}, …, X_{a_k}).

Lemma 4.1 (Shearer). Assuming that:

  • X=(X1,,Xn) a random variable

  • 𝒜 a family of subsets of [n] such that every i ∈ [n] belongs to at least r of the sets A ∈ 𝒜

Then
H[X₁,…,Xₙ] ≤ (1/r) ∑_{A∈𝒜} H[X_A].

Proof. For each a ∈ [n], write X_{<a} for (X₁,…,X_{a−1}).

For each A ∈ 𝒜, A = {a₁,…,a_k} with a₁ < ⋯ < a_k, we have

H[X_A] = H[X_{a₁}] + H[X_{a₂}|X_{a₁}] + ⋯ + H[X_{a_k}|X_{a₁},…,X_{a_{k−1}}]
≥ H[X_{a₁}|X_{<a₁}] + H[X_{a₂}|X_{<a₂}] + ⋯ + H[X_{a_k}|X_{<a_k}]   (Lemma 1.15)
= ∑_{a∈A} H[X_a|X_{<a}]

Therefore,

∑_{A∈𝒜} H[X_A] ≥ ∑_{A∈𝒜} ∑_{a∈A} H[X_a|X_{<a}] ≥ r ∑_{a=1}^n H[X_a|X_{<a}] = rH[X]

Alternative version:

Lemma 4.2 (Shearer, expectation version). Assuming that:

  • X=(X1,,Xn) a random variable

  • A ⊆ [n] a randomly chosen subset of [n], according to some probability distribution (we don’t need any independence conditions!)

  • for each i ∈ [n], ℙ[i ∈ A] ≥ μ

Then
H[X] ≤ μ⁻¹ 𝔼_A H[X_A].

Proof. As before,

H[X_A] ≥ ∑_{a∈A} H[X_a|X_{<a}].

So

𝔼_A H[X_A] ≥ 𝔼_A ∑_{a∈A} H[X_a|X_{<a}] ≥ μ ∑_{a=1}^n H[X_a|X_{<a}] = μH[X]

Definition (P_A). Let E ⊆ ℤⁿ and let A ⊆ [n]. Then we write P_A E for the set of all u ∈ ℤ^A such that there exists v ∈ ℤ^{[n]∖A} with [u,v] ∈ E, where [u,v] is u and v suitably interleaved (i.e. u ∪ v as functions).

Corollary 4.3. Assuming that:

  • E ⊆ ℤⁿ

  • 𝒜 a family of subsets of [n] such that every i ∈ [n] is contained in at least r sets A ∈ 𝒜

Then
|E| ≤ ∏_{A∈𝒜} |P_A E|^{1/r}.

Proof. Let X be a uniform random element of E. Then by Shearer,

H[X] ≤ (1/r) ∑_{A∈𝒜} H[X_A].

But X_A takes values in P_A E, so

H[X_A] ≤ log |P_A E|,

so

log|E| ≤ (1/r) ∑_{A∈𝒜} log |P_A E|.

If 𝒜 = {[n]∖{i} : i = 1,…,n} we get

|E| ≤ ∏_{i=1}^n |P_{[n]∖{i}} E|^{1/(n−1)}.

This case is the discrete Loomis-Whitney theorem.
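The discrete Loomis–Whitney inequality is easy to verify directly in ℤ³; a hypothetical check (names are ours):

```python
import math
import random

def loomis_whitney_3d(points):
    """Check |E| <= (|P_xy E| * |P_xz E| * |P_yz E|)^(1/2) for finite E in Z^3."""
    pxy = {(x, y) for x, y, z in points}
    pxz = {(x, z) for x, y, z in points}
    pyz = {(y, z) for x, y, z in points}
    return len(points) <= math.sqrt(len(pxy) * len(pxz) * len(pyz)) + 1e-9

random.seed(1)
E = {(random.randrange(4), random.randrange(4), random.randrange(4)) for _ in range(30)}
assert loomis_whitney_3d(E)
```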

Theorem 4.4. Assuming that:

  • G a graph with m edges

Then G has at most (2m)^{3/2}/6 triangles.

Is this bound natural? Yes: if m = n(n−1)/2 and we consider the complete graph on n vertices, then (2m)^{3/2}/6 ≈ n³/6, which is approximately the number of triangles.

Proof. Let (X1,X2,X3) be a random ordered triangle (without loss of generality G has a triangle so that this is possible).

Let t be the number of triangles in G. By Shearer,

log(6t) = H[X₁,X₂,X₃] ≤ ½(H[X₁,X₂] + H[X₁,X₃] + H[X₂,X₃]).

Each pair (X_i,X_j) is supported in the set of edges of G given a direction, so H[X_i,X_j] ≤ log(2m), i.e.

½(H[X₁,X₂] + H[X₁,X₃] + H[X₂,X₃]) ≤ (3/2) log(2m).
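The bound can be spot-checked by exhaustive triangle counting; a small sketch (names are ours):

```python
import itertools

def triangle_bound_holds(edges):
    """Check #triangles <= (2m)^(3/2) / 6 for a graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    t = sum(1 for a, b, c in itertools.combinations(sorted(adj), 3)
              if b in adj[a] and c in adj[a] and c in adj[b])
    m = len(edges)
    return t <= (2 * m) ** 1.5 / 6 + 1e-9

# K5 has m = 10 edges and 10 triangles; the bound is (20)^(3/2)/6 ≈ 14.9.
k5 = list(itertools.combinations(range(5), 2))
assert triangle_bound_holds(k5)
```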

Definition. Let X be a set of size n and let G be a set of graphs with vertex set X. Then G is Δ-intersecting (read as “triangle-intersecting”) if for all G1,G2G, G1G2 contains a triangle.

Theorem 4.5. Assuming that:

  • G is a Δ-intersecting set of graphs with vertex set X, where |X| = n

Then G has size at most 2^{n(n−1)/2 − 2}.

Proof. Let X be chosen uniformly at random from G. We write V^{(2)} for the set of (unordered) pairs of elements of V. Think of any G ∈ G as a function from V^{(2)} to {0,1}. So X = (X_e : e ∈ V^{(2)}).

For each R ⊆ V, let G_R be the graph K_R ∪ K_{V∖R}.


For each R, we shall look at the projection X_{G_R} = (X_e : e ∈ E(G_R)), which we can think of as taking values in the family {G ∩ G_R : G ∈ G}.

Note that if G₁, G₂ ∈ G and R ⊆ [n], then G₁ ∩ G₂ ∩ G_R ≠ ∅: indeed, G₁ ∩ G₂ contains a triangle, and by the Pigeonhole Principle two of the triangle’s three vertices lie both in R or both outside R, giving an edge of G_R.

Thus {G ∩ G_R : G ∈ G} is an intersecting family of subsets of E(G_R), so it has size at most 2^{|E(G_R)|−1}. By Shearer (expectation version, with μ = ½),

H[X] ≤ 2𝔼_R H[X_{G_R}]   (since each e belongs to G_R with probability ½)
≤ 2𝔼_R (|E(G_R)| − 1) = 2(½ · n(n−1)/2 − 1) = n(n−1)/2 − 2

Definition (Edge-boundary). Let G be a graph and let A ⊆ V(G). The edge-boundary ∂A of A is the set of edges xy ∈ E(G) such that x ∈ A and y ∉ A.

If G = ℤⁿ or {0,1}ⁿ (with the usual graph structure) and i ∈ [n], then the i-th boundary ∂_i A is the set of edges xy ∈ ∂A such that x − y = ±e_i, i.e. ∂_i A consists of the boundary edges pointing in direction i.

Theorem 4.6 (Edge-isoperimetric inequality in Zn). Assuming that:

  • A ⊆ ℤⁿ a finite set

Then |∂A| ≥ 2n|A|^{(n−1)/n}.

Proof. By the discrete Loomis-Whitney inequality,

|A| ≤ ∏_{i=1}^n |P_{[n]∖{i}} A|^{1/(n−1)} = (∏_{i=1}^n |P_{[n]∖{i}} A|^{1/n})^{n/(n−1)} ≤ ((1/n) ∑_{i=1}^n |P_{[n]∖{i}} A|)^{n/(n−1)}

But |∂_i A| ≥ 2|P_{[n]∖{i}} A|, since each fibre contributes at least 2 boundary edges.

So

|A| ≤ ((1/2n) ∑_{i=1}^n |∂_i A|)^{n/(n−1)} = ((1/2n)|∂A|)^{n/(n−1)},

which rearranges to the claimed bound.

Theorem 4.7 (Edge-isoperimetric inequality in the cube). Assuming that:

  • A ⊆ {0,1}ⁿ (where we take the usual graph)

Then |∂A| ≥ |A|(n − log|A|).

Proof. Let X be a uniform random element of A and write X = (X₁,…,Xₙ). Write X_{−i} for (X₁,…,X_{i−1},X_{i+1},…,Xₙ). By Shearer (applied to the sets [n]∖{i}),

H[X] ≤ (1/(n−1)) ∑_{i=1}^n H[X_{−i}] = (1/(n−1)) ∑_{i=1}^n (H[X] − H[X_i|X_{−i}])

Hence

∑_{i=1}^n H[X_i|X_{−i}] ≤ H[X].

Note that

H[X_i|X_{−i}=u] = 1 if both points of the fibre above u lie in A, and 0 if exactly one does.

The number of points x ∈ A of the second kind is |∂_i A|, so H[X_i|X_{−i}] = 1 − |∂_i A|/|A|. So

H[X] ≥ ∑_{i=1}^n (1 − |∂_i A|/|A|) = n − |∂A|/|A|

Also, H[X]=log|A|. So we are done.
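The cube inequality is tight for subcubes, which a direct computation confirms; a sketch (helper names are ours):

```python
import math

def cube_boundary(A, n):
    """Size of the edge boundary of A inside the hypercube {0,1}^n."""
    A = set(A)
    count = 0
    for x in A:
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if y not in A:
                count += 1
    return count

n = 4
# A 2-dimensional subcube of {0,1}^4: |A| = 4, boundary = 4 * (4 - log 4) = 8.
A = [(a, b, 0, 0) for a in (0, 1) for b in (0, 1)]
assert cube_boundary(A, n) == 8
assert cube_boundary(A, n) >= len(A) * (n - math.log2(len(A))) - 1e-9
```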

Definition (Lower shadow). Let 𝒜 be a family of sets of size d. The lower shadow ∂𝒜 is {B : |B| = d−1 and B ⊂ A for some A ∈ 𝒜}.

Notation. Let h(x) = x log(1/x) + (1−x) log(1/(1−x)) (for x ∈ [0,1]).

Theorem 4.8 (Kruskal-Katona). Assuming that:

  • |𝒜| = (t choose d) = t(t−1)⋯(t−d+1)/d! for some real number t

Then |∂𝒜| ≥ (t choose d−1).

Proof. Let X = (X₁,…,X_d) be a random ordering of the elements of a uniformly random A ∈ 𝒜. Then

H[X] = log(d! (t choose d)).

Note that (X₁,…,X_{d−1}) is an ordering of the elements of some B ∈ ∂𝒜, so

H[X₁,…,X_{d−1}] ≤ log((d−1)! |∂𝒜|).

So it’s enough to show

H[X₁,…,X_{d−1}] ≥ log((d−1)! (t choose d−1)).

Also,

H[X] = H[X₁,…,X_{d−1}] + H[X_d|X₁,…,X_{d−1}]

and

H[X] = H[X₁] + H[X₂|X₁] + ⋯ + H[X_d|X₁,…,X_{d−1}].

We would like an upper bound for H[X_d|X_{<d}]. Our strategy will be to obtain a lower bound for H[X_k|X_{<k}] in terms of H[X_{k+1}|X_{<k+1}]. We shall prove that

2^{H[X_k|X_{<k}]} ≥ 2^{H[X_{k+1}|X_{<k+1}]} + 1 for every k.

Let T be chosen independently of X₁,…,X_{k+1} with

T = 0 with probability p, and T = 1 with probability 1−p

(p will be chosen and optimised later).

Given X₁,…,X_{k−1}, let

X = X_{k+1} if T = 0, and X = X_k if T = 1

Note that Xk and Xk+1 have the same distribution (given X1,,Xk1), so X does as well. Then

H[X_k|X_{<k}] = H[X|X_{<k}]
≥ H[X|X_{≤k}]   (Submodularity)
= H[X, T|X_{≤k}]   (X_k and X determine T)
= H[T|X_{≤k}] + H[X|T, X_{≤k}]   (additivity)
= H[T] + pH[X_{k+1}|X₁,…,X_k] + (1−p)H[X_k|X₁,…,X_k]
= h(p) + ps

where h(p) = p log(1/p) + (1−p) log(1/(1−p)) and s = H[X_{k+1}|X₁,…,X_k] (note that H[X_k|X₁,…,X_k] = 0).

It turns out that this is maximised when p = 2^s/(2^s+1). Then we get

(2^s/(2^s+1)) (log(2^s+1) − s) + (1/(2^s+1)) log(2^s+1) + s · 2^s/(2^s+1) = log(2^s+1).

This proves the claim.

Let r = 2^{H[X_d|X₁,…,X_{d−1}]}. Then, by the claim, 2^{H[X_k|X_{<k}]} ≥ r + d − k for every k, so

H[X] = H[X₁] + ⋯ + H[X_d|X₁,…,X_{d−1}] ≥ log(r+d−1) + ⋯ + log(r+1) + log r = log((r+d−1)!/(r−1)!) = log(d! (r+d−1 choose d))

Since H[X] = log(d! (t choose d)), it follows that

(r+d−1 choose d) ≤ (t choose d), so r+d−1 ≤ t, i.e. r ≤ t+1−d.

It follows that

H[X₁,…,X_{d−1}] = H[X] − log r = log(d! (t choose d)) − log r ≥ log(d! (t choose d)/(t+1−d)) = log((d−1)! (t choose d−1))

5 The union-closed conjecture

Definition (Union-closed). Let A be a (finite) family of sets. Say that A is union closed if for any A,BA, we have ABA.

Conjecture. If A is a non-empty union-closed family, then there exists x that belongs to at least |A|/2 of the sets in A.

Theorem (Justin Gilmer). There exists c>0 such that if A is a union-closed family, then there exists x that belongs to at least c|A| of the sets in A.

Justin Gilmer’s constant was about 1/100.

His method has a “natural barrier” of (3−√5)/2.

We will briefly and “informally” discuss this.

A reason for this is that if we weaken the property union-closed to “almost union-closed” (if we pick two elements randomly, then with high probability the union is in the family), then (3−√5)/2 is the right bound.

Let A=[n](pn)[n]((2pp2o(1))n). With high probability, if A,B are random elements of [n](pn), then |AB|(2pp2o(1))n.

If 1(2pp2o(1))=p then almost all of A is [n](pn).

PIC

One of the roots of the quadratic 13p+p2=0 is p=352.

If we want to prove Justin Gilmer’s Theorem, it is natural to let A, B be independent uniformly random elements of A and to consider H[A∪B]. Since A is union-closed, A∪B ∈ A, so H[A∪B] ≤ log|A|. Now we would like to get a lower bound for H[A∪B] assuming that no x belongs to more than p|A| sets in A.

h(xy) ≥ c(xh(y) + yh(x)),   h(x²) ≥ 2cxh(x).

Lemma 5.1. Assuming that:

  • c>0 is such that

    h(xy) ≥ c(xh(y) + yh(x))

    for every x,y[0,1]

  • A is a family of sets such that every element (of A) belongs to fewer than p|A| members of A

Then H[A∪B] > c(1−p)(H[A] + H[B]).

Proof. Think of A, B as characteristic functions. Write A_{<k} for (A₁,…,A_{k−1}) etc. By the Chain rule it is enough to prove for every k that

H[(A∪B)_k|(A∪B)_{<k}] > c(1−p)(H[A_k|A_{<k}] + H[B_k|B_{<k}]).

By Submodularity,

H[(A∪B)_k|(A∪B)_{<k}] ≥ H[(A∪B)_k|A_{<k}, B_{<k}].

For each u, v ∈ {0,1}^{k−1} write p(u) = ℙ[A_k=0|A_{<k}=u], q(v) = ℙ[B_k=0|B_{<k}=v].

Then

H[(A∪B)_k|A_{<k}=u, B_{<k}=v] = h(p(u)q(v))

which by hypothesis is at least

c(p(u)h(q(v)) + q(v)h(p(u))).

So

H[(A∪B)_k|(A∪B)_{<k}] ≥ c ∑_{u,v} ℙ[A_{<k}=u] ℙ[B_{<k}=v] (p(u)h(q(v)) + q(v)h(p(u))).

But

∑_u ℙ[A_{<k}=u] ℙ[A_k=0|A_{<k}=u] = ℙ[A_k=0]

and

∑_v ℙ[B_{<k}=v] h(q(v)) = ∑_v ℙ[B_{<k}=v] H[B_k|B_{<k}=v] = H[B_k|B_{<k}].

Similarly for the other term, so the RHS equals

c(ℙ[A_k=0] H[B_k|B_{<k}] + ℙ[B_k=0] H[A_k|A_{<k}]),

which by hypothesis is greater than

c(1−p)(H[A_k|A_{<k}] + H[B_k|B_{<k}])

as required.

This shows that if A is union-closed, then c(1−p) ≤ ½, so p ≥ 1 − 1/(2c). This is non-trivial as long as c > ½.

We shall obtain the constant c = ϕ/2, where ϕ is the golden ratio. We start by proving the diagonal case – i.e. when x = y.

Lemma 5.2 (Boppana). For every x[0,1],

h(x²) ≥ ϕxh(x).

Proof. Write ψ for ϕ⁻¹ = (√5−1)/2. Then ψ² = 1−ψ, so h(ψ²) = h(1−ψ) = h(ψ), and ϕψ = 1, so h(ψ²) = ϕψh(ψ). So equality holds at x = ψ, and also when x = 0, 1.


Toolkit:

ln 2 · h(x) = −x ln x − (1−x) ln(1−x)
ln 2 · h′(x) = −ln x − 1 + ln(1−x) + 1 = ln(1−x) − ln x
ln 2 · h″(x) = −1/x − 1/(1−x)
ln 2 · h‴(x) = 1/x² − 1/(1−x)²

Let f(x) = h(x²) − ϕxh(x). Then

f′(x) = 2xh′(x²) − ϕh(x) − ϕxh′(x)
f″(x) = 2h′(x²) + 4x²h″(x²) − 2ϕh′(x) − ϕxh″(x)
f‴(x) = 4xh″(x²) + 8xh″(x²) + 8x³h‴(x²) − 3ϕh″(x) − ϕxh‴(x)
    = 12xh″(x²) + 8x³h‴(x²) − 3ϕh″(x) − ϕxh‴(x)

So

ln 2 · f‴(x) = −12/(x(1−x²)) + 8(1−2x²)/(x(1−x²)²) + 3ϕ/(x(1−x)) − ϕ(1−2x)/(x(1−x)²)
= [−12(1−x²) + 8(1−2x²) + 3ϕ(1−x)(1+x)² − ϕ(1−2x)(1+x)²] / (x(1−x)²(1+x)²)

This is zero if and only if

−12 + 12x² + 8 − 16x² + 3ϕ(1+x−x²−x³) − ϕ(1−3x²−2x³) = 0

which simplifies to

−ϕx³ − 4x² + 3ϕx − 4 + 2ϕ = 0.

Since this is a cubic with negative leading coefficient and negative constant term, it has a negative root, so it has at most two roots in (0,1). It follows (using Rolle’s theorem) that f has at most five roots in [0,1], counted with multiplicity.

But

f′(x) = 2x(log(1−x²) − log x²) + ϕ(x log x + (1−x) log(1−x)) − ϕx(log(1−x) − log x).

So f′(0) = 0; since also f(0) = 0, f has a double root at 0.

We can also calculate (using ψ² = 1−ψ and ϕψ = 1):

f′(ψ) = 2ψ(log ψ − 2 log ψ) + ϕ(ψ log ψ + ψ² · 2 log ψ) − ϕψ(2 log ψ − log ψ)
= −2ψ log ψ + (1 + 2ψ) log ψ − log ψ
= 2 log ψ (1 − ϕψ) = 0

So, since f(ψ) = 0 as well, f has a double root at ψ.

Also, note f(1)=0.

So f is either non-negative on all of [0,1] or non-positive on all of [0,1].

If x is small,

f(x) = x² log(1/x²) + (1−x²) log(1/(1−x²)) − ϕx(x log(1/x) + (1−x) log(1/(1−x))) = 2x² log(1/x) − ϕx² log(1/x) + O(x²)

and 2 − ϕ > 0, so there exists x such that f(x) > 0. Hence f ≥ 0 on all of [0,1].

Lemma 5.3. The function f(x,y) = h(xy)/(xh(y) + yh(x)) is minimised on (0,1)² at a point where x = y.

Proof. We can extend f continuously to the boundary by setting f(x,y)=1 whenever x or y is 0 or 1. To see this, note first that it’s valid if neither x nor y is 0.


If either x or y is small, then

h(xy) = −xy(log x + log y) + O(xy)

and

xh(y) + yh(x) = x(−y log y + O(y)) + y(−x log x + O(x)) = −xy(log x + log y) + O(xy).

So the ratio tends to 1 again.

One can check that f(1/2, 1/2) < 1, so f is minimised somewhere in (0,1)².

Let (x,y) be a minimum with f(x,y)=α.

Let g(x) = h(x)/x and note that

f(x,y) = g(xy)/(g(x) + g(y)).

Also,

g(xy) − α(g(x) + g(y)) ≥ 0

with equality at (x,y). So the partial derivatives of LHS are both 0 at (x,y).

y g′(xy) − αg′(x) = 0
x g′(xy) − αg′(y) = 0

Multiplying the first by x and the second by y gives xg′(x) = yg′(y). So it’s enough to prove that x ↦ xg′(x) is an injection. g′(x) = (xh′(x) − h(x))/x², so

xg′(x) = h′(x) − h(x)/x = log(1−x) − log x + (x log x + (1−x) log(1−x))/x = log(1−x)/x

Differentiating gives

(−x/(1−x) − ln(1−x)) / (x² ln 2) = (−x − (1−x) ln(1−x)) / (x²(1−x) ln 2).

The numerator differentiates to −1 + ln(1−x) + 1 = ln(1−x), which is negative everywhere on (0,1). Also, the numerator equals 0 at x = 0. So the numerator has a constant (negative) sign, and x ↦ xg′(x) is indeed injective.

Combining this with Lemma 5.2, we get that

h(xy) ≥ (ϕ/2)(xh(y) + yh(x)).

This allows us to take p = 1 − 1/ϕ = 1 − (√5−1)/2 = (3−√5)/2.
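The combined inequality h(xy) ≥ (ϕ/2)(xh(y) + yh(x)) can be spot-checked numerically; a sketch (names are ours):

```python
import math
import random

def h(x):
    """Binary entropy (base 2)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

phi = (1 + math.sqrt(5)) / 2
random.seed(2)
# Spot-check h(xy) >= (phi/2) * (x*h(y) + y*h(x)) on random points of (0,1)^2.
for _ in range(1000):
    x, y = random.random(), random.random()
    assert h(x * y) >= (phi / 2) * (x * h(y) + y * h(x)) - 1e-9
```

Equality is approached at x = y = ψ, so the constant ϕ/2 cannot be improved in this inequality.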

6 Entropy in additive combinatorics

We shall need two “simple” results from additive combinatorics due to Imre Ruzsa.

Definition (Sum set / difference set / etc). Let G be an abelian group and let A, B ⊆ G.

The sumset A+B is the set {x+y : x ∈ A, y ∈ B}.

The difference set A−B is the set {x−y : x ∈ A, y ∈ B}.

We write 2A for A+A, 3A for A+A+A, etc.

Definition (Ruzsa distance). The Ruzsa distance d(A,B) is

|A−B| / (|A|^{1/2} |B|^{1/2}).

Lemma 6.1 (Ruzsa triangle inequality). d(A,C) ≤ d(A,B) d(B,C).

Proof. This is equivalent to the statement

|A−C| |B| ≤ |A−B| |B−C|.

For each x ∈ A−C, pick a(x) ∈ A, c(x) ∈ C such that a(x) − c(x) = x. Define a map

ϕ : (A−C) × B → (A−B) × (B−C),   (x,b) ↦ (a(x)−b, b−c(x))

Adding the coordinates of ϕ(x,b) gives x, so we can calculate a(x) (and c(x)) from ϕ(x,b), and hence can calculate b. So ϕ is an injection.

Lemma 6.2 (Ruzsa covering lemma). Assuming that:

  • G an abelian group

  • A,B finite subsets of G

Then A can be covered by at most |A+B|/|B| translates of B−B.

Proof. Let {x1,,xk} be a maximal subset of A such that the sets xi+B are disjoint.

Then if a ∈ A, there exists i such that (a+B) ∩ (x_i+B) ≠ ∅. Then a ∈ x_i + B − B.

So A can be covered by k translates of B−B. But since the sets x_i + B are disjoint and {x₁,…,x_k} + B ⊆ A + B,

|B| k = |{x₁,…,x_k} + B| ≤ |A+B|.

Let X, Y be discrete random variables taking values in an abelian group. What is the distribution of X+Y when X and Y are independent?

For each z, ℙ[X+Y=z] = ∑_{x+y=z} ℙ[X=x] ℙ[Y=y]. Writing p_x and q_y for ℙ[X=x] and ℙ[Y=y] respectively, this gives ∑_{x+y=z} p_x q_y = (p∗q)(z), where p(x) = p_x and q(y) = q_y.

So, sums of independent random variables correspond to convolutions.

Definition (Entropic Ruzsa distance). Let G be an abelian group and let X, Y be G-valued random variables. The entropic Ruzsa distance d[X;Y] is

H[X′ − Y′] − ½H[X] − ½H[Y]

where X′, Y′ are independent copies of X and Y.

Lemma 6.3. Assuming that:

  • A, B are finite subsets of G

  • X, Y are uniformly distributed on A, B respectively

Then
d[X;Y] ≤ log d(A,B).

Proof. Without loss of generality X, Y are independent. Then

d[X;Y] = H[X−Y] − ½H[X] − ½H[Y] ≤ log|A−B| − ½ log|A| − ½ log|B| = log d(A,B)
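Lemma 6.3 can be verified numerically for small integer sets; a sketch (function names are ours):

```python
import math
from itertools import product

def H(dist):
    """Base-2 entropy of a distribution {outcome: probability}."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

def entropic_ruzsa_uniform(A, B):
    """d[U_A; U_B] for independent uniforms on finite sets of integers A, B."""
    diff = {}
    for a, b in product(A, B):
        diff[a - b] = diff.get(a - b, 0) + 1 / (len(A) * len(B))
    return H(diff) - 0.5 * math.log2(len(A)) - 0.5 * math.log2(len(B))

A, B = {0, 1, 2, 3}, {0, 5, 10}
d_comb = math.log2(len({a - b for a in A for b in B})) \
         - 0.5 * math.log2(len(A)) - 0.5 * math.log2(len(B))
# Lemma 6.3: d[U_A; U_B] <= log d(A, B).
assert entropic_ruzsa_uniform(A, B) <= d_comb + 1e-9
```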

Lemma 6.4. Assuming that:

  • X, Y are G-valued random variables

Then
H[X+Y] ≥ max{H[X], H[Y]} − I[X:Y].

Proof.

H[X+Y] ≥ H[X+Y|Y]   (by Subadditivity)
= H[X+Y, Y] − H[Y]
= H[X,Y] − H[Y]
= H[X] + H[Y] − H[Y] − I[X:Y]
= H[X] − I[X:Y]

By symmetry we also have

H[X+Y] ≥ H[Y] − I[X:Y].

Corollary. Assuming that:

  • X, Y are G-valued random variables

Then:

H[X−Y] ≥ max{H[X], H[Y]} − I[X:Y].

Corollary 6.5. Assuming that:

  • X, Y are G-valued random variables

Then
d[X;Y] ≥ 0.

Proof. Without loss of generality X, Y are independent. Then I[X:Y] = 0, so

H[X−Y] ≥ max{H[X], H[Y]} ≥ ½(H[X] + H[Y])

Lemma 6.6. Assuming that:

  • X, Y are G-valued random variables

Then d[X;Y]=0 if and only if there is some (finite) subgroup H of G such that X and Y are uniform on cosets of H.

Proof.

Recall Lemma 1.16: If Z=f(X)=g(Y), then:

H[X,Y]+H[Z]H[X]+H[Y].

Lemma 6.7 (The entropic Ruzsa triangle inequality). Assuming that:

  • X, Y, Z are G-valued random variables

Then
d[X;Z] ≤ d[X;Y] + d[Y;Z].

Proof. We must show (assuming without loss of generality that X, Y and Z are independent) that

H[X−Z] − ½H[X] − ½H[Z] ≤ H[X−Y] − ½H[X] − ½H[Y] + H[Y−Z] − ½H[Y] − ½H[Z],

i.e. that

H[X−Z] + H[Y] ≤ H[X−Y] + H[Y−Z].   (∗)

Since X−Z is a function of (X−Y, Y−Z) and is also a function of (X, Z), we get using Lemma 1.16 that

H[X−Y, Y−Z, X, Z] + H[X−Z] ≤ H[X−Y, Y−Z] + H[X, Z].

This is the same as

H[X,Y,Z] + H[X−Z] ≤ H[X,Z] + H[X−Y, Y−Z].

By independence, cancelling common terms and Subadditivity, we get ().

Lemma 6.8 (Submodularity for sums). Assuming that:

  • X, Y, Z are independent G-valued random variables

Then
H[X+Y+Z] + H[Z] ≤ H[X+Z] + H[Y+Z].

Proof. X+Y+Z is a function of (X+Z,Y) and also a function of (X,Y+Z). Therefore (using Lemma 1.16),

H[X+Z, Y, X, Y+Z] + H[X+Y+Z] ≤ H[X+Z, Y] + H[X, Y+Z].

Hence

H[X,Y,Z] + H[X+Y+Z] ≤ H[X+Z] + H[Y] + H[X] + H[Y+Z].

By independence and cancellation, we get the desired inequality.

Lemma 6.9. Assuming that:

  • G an abelian group

  • X a G-valued random variable

Then
d[X; −X] ≤ 2d[X; X].

Proof. Let X₁, X₂, X₃ be independent copies of X. Then

d[X; −X] = H[X₁+X₂] − ½H[X₁] − ½H[X₂]
≤ H[X₁+X₂−X₃] − H[X]
≤ H[X₁−X₃] + H[X₂−X₃] − H[X₃] − H[X]   (Submodularity for sums)
= 2d[X; X]

(as X₁, X₂, X₃ are all copies of X).

Corollary 6.10. Assuming that:

  • X and Y are G-valued random variables

Then
d[X; −Y] ≤ 5d[X; Y].

Proof.

d[X; −Y] ≤ d[X; Y] + d[Y; −Y]
≤ d[X; Y] + 2d[Y; Y]
≤ d[X; Y] + 2(d[Y; X] + d[X; Y])
= 5d[X; Y]
Conditional Distances

Definition (Conditional distance). Let X,Y,U,V be G-valued random variables (in fact, U and V don’t have to be G-valued for the definition to make sense). Then the conditional distance is

d[X|U; Y|V] = ∑_{u,v} ℙ[U=u] ℙ[V=v] d[X|U=u; Y|V=v].

The next definition is not completely standard.

Definition (Simultaneous conditional distance). Let X,Y,U be G-valued random variables. The simultaneous conditional distance of X to Y given U is

d[X; Y ∥ U] = ∑_u ℙ[U=u] d[X|U=u; Y|U=u].

We say that X′, Y′ are conditionally independent trials of X, Y given U if:

  • X′ is distributed like X.

  • Y′ is distributed like Y.

  • For each value u of U, X′|U=u is distributed like X|U=u.

  • For each value u of U, Y′|U=u is distributed like Y|U=u.

  • For each value u of U, X′|U=u and Y′|U=u are independent.

Then

d[X; Y ∥ U] = H[X′−Y′|U] − ½H[X|U] − ½H[Y|U]

(as can be seen directly from the formula).

Lemma 6.11 (The entropic BSG theorem). Assuming that:

  • A and B are G-valued random variables

Then
d[A; B ∥ A+B] ≤ 3I[A:B] + 2H[A+B] − H[A] − H[B].

Remark. The last few terms look like 2d[A;B]. But they aren’t equal to it, because A and B aren’t (necessarily) independent!

Proof.

d[A; B ∥ A+B] = H[A′−B′|A+B] − ½H[A|A+B] − ½H[B|A+B]

where A′, B′ are conditionally independent trials of A, B given A+B. Now calculate

H[A′|A+B] = H[A|A+B] = H[A, A+B] − H[A+B] = H[A,B] − H[A+B] = H[A] + H[B] − I[A:B] − H[A+B]

Similarly, H[B′|A+B] is the same, so ½H[A′|A+B] + ½H[B′|A+B] is also the same. Also,

H[A′−B′|A+B] ≤ H[A′−B′].

Let (A₁,B₁) and (A₂,B₂) be conditionally independent trials of (A,B) given A+B. Then H[A′−B′] = H[A₁−B₂]. By Submodularity,

H[A₁−B₂] ≤ H[A₁−B₂, A₁] + H[A₁−B₂, B₁] − H[A₁−B₂, A₁, B₁]

H[A₁−B₂, A₁] = H[A₁, B₂] ≤ H[A₁] + H[B₂] = H[A] + H[B]

H[A₁−B₂, B₁] = H[A₂−B₁, B₁]   (since A₁+B₁ = A₂+B₂)
= H[A₂, B₁] ≤ H[A] + H[B]

Finally,

H[A₁−B₂, A₁, B₁] = H[A₁, B₁, A₂, B₂]
= H[A₁, B₁, A₂, B₂ | A+B] + H[A+B]
= 2H[A,B | A+B] + H[A+B]   (by conditional independence of (A₁,B₁) and (A₂,B₂))
= 2H[A,B] − H[A+B]
= 2H[A] + 2H[B] − 2I[A:B] − H[A+B]

Adding or subtracting as appropriate all these terms gives the required inequality.

7 A proof of Marton’s conjecture in 𝔽₂ⁿ

We shall prove the following theorem.

Theorem 7.1 (Green, Manners, Tao, Gowers). There is a polynomial p with the following property:

If n ∈ ℕ and A ⊆ 𝔽₂ⁿ is such that |A+A| ≤ C|A|, then there is a subspace H ≤ 𝔽₂ⁿ of size at most |A| such that A is contained in the union of at most p(C) translates of H. (Equivalently, there exists K ⊆ 𝔽₂ⁿ with |K| ≤ p(C) such that A ⊆ K + H.)

This is known as “Polynomial Freiman–Ruzsa”.

In fact, we shall prove the following statement.

Theorem 7.2 (Entropic Polynomial Freiman–Ruzsa). There exists an absolute constant α satisfying the following: Let G = 𝔽₂ⁿ and let X, Y be G-valued random variables. Then there exists a subgroup H of G such that

d[X; U_H] + d[U_H; Y] ≤ α d[X;Y]

where UH is the uniform distribution on H.

Lemma 7.3. Assuming that:

  • X a discrete random variable (and write p_x for ℙ[X=x])

Then there exists x such that p_x ≥ 2^{−H[X]}.

Proof. If not, then

H[X] = ∑_x p_x log(1/p_x) > H[X] ∑_x p_x = H[X],

contradiction.
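Lemma 7.3 is easy to confirm on a concrete distribution; a toy check (names and numbers are ours):

```python
import math

def H(dist):
    """Base-2 entropy of a distribution {outcome: probability}."""
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

dist = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
# Lemma 7.3: some outcome has probability at least 2^(-H[X]).
assert max(dist.values()) >= 2 ** (-H(dist)) - 1e-12
```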

Proposition 7.4. Theorem 7.2 implies Theorem 7.1.

Proof. Let A ⊆ 𝔽₂ⁿ with |A+A| ≤ C|A|. Let X and Y be independent copies of U_A. Then by Theorem 7.2, there exists a subgroup H such that

d[X; U_H] + d[U_H; X] ≤ α d[X;Y]

so

d[X; U_H] ≤ (α/2) d[X;Y].

But

d[X;Y] = H[X−Y] − H[U_A] = H[X+Y] − H[U_A]   (characteristic 2) ≤ log(C|A|) − log|A| = log C

So d[X; U_H] ≤ (α log C)/2. Therefore

H[X+U_H] ≤ ½H[X] + ½H[U_H] + (α log C)/2 = ½ log|A| + ½ log|H| + (α log C)/2

Therefore, by Lemma 7.3, there exists z such that

ℙ[X+U_H = z] ≥ |A|^{−1/2} |H|^{−1/2} C^{−α/2}.

But

ℙ[X+U_H = z] = |A ∩ (z−H)| / (|A||H|) = |A ∩ (z+H)| / (|A||H|)

(using characteristic 2). So there exists zG such that

|A ∩ (z+H)| ≥ C^{−α/2} |A|^{1/2} |H|^{1/2}.

Let B = A ∩ (z+H). By the Ruzsa covering lemma, we can cover A by at most |A+B|/|B| translates of B+B. But B ⊆ z+H, so B+B ⊆ (z+H)+(z+H) = H (characteristic 2), so A can be covered by at most |A+B|/|B| translates of H.

But using BA,

|A+B| ≤ |A+A| ≤ C|A|.

So

|A+B| / |B| ≤ C|A| / (C^(−α/2) |A|^(1/2) |H|^(1/2)) = C^(α/2+1) |A|^(1/2) |H|^(−1/2).

Since B is contained in z+H,

|H| ≥ |B| ≥ C^(−α/2) |A|^(1/2) |H|^(1/2)

so |H| ≥ C^(−α)|A|, and hence

C^(α/2+1) |A|^(1/2) |H|^(−1/2) ≤ C^(α+1).

If |H| ≤ |A| then we are done. Otherwise, since B ⊆ A,

|A| ≥ |B| ≥ C^(−α/2) |A|^(1/2) |H|^(1/2)

so |H| ≤ C^α |A|.

Pick a subgroup H′ of H of size between |A|/2 and |A|. Then H is a union of at most 2C^α translates of H′, so A is a union of at most 2C^(2α+1) translates of H′.

Now we reduce further. We shall prove the following statement:

Theorem 7.5 (EPFR). There is a constant η>0 such that if X and Y are any two 𝔽2n-valued random variables with d[X;Y]>0, then there exist 𝔽2n-valued random variables U and V such that

d[U;V]+η(d[U;X]+d[V;Y])<d[X;Y].

Proposition 7.6. EPFR(η) ⟹ EPFR(η^(−1)).

Proof. By compactness we can find U, V such that

τX,Y[U;V]=d[U;V]+η(d[U;X]+d[V;Y])

is minimised. If d[U;V] ≠ 0 then by EPFR(η) there exist Z, W such that τU,V[Z;W] < d[U;V].

But then

τX,Y[Z;W] = d[Z;W] + η(d[Z;X] + d[W;Y])
≤ d[Z;W] + η(d[Z;U] + d[W;V]) + η(d[U;X] + d[V;Y]) (by The entropic Ruzsa triangle inequality)
= τU,V[Z;W] + η(d[U;X] + d[V;Y])
< d[U;V] + η(d[U;X] + d[V;Y]) = τX,Y[U;V]

Contradiction.

It follows that d[U;V]=0. So there exists H such that U and V are uniform on cosets of H, so

η(d[UH;X]+d[UH;Y])<d[X;Y],

which gives us EPFR(η^(−1)).

Definition. Write τX,Y[U|Z;V|W] for

∑z,w ℙ[Z=z] ℙ[W=w] τX,Y[U|Z=z; V|W=w]

Definition. Write τX,Y[U;V ∥ Z] for

∑z ℙ[Z=z] τX,Y[U|Z=z; V|Z=z]

Remark. If we can prove EPFR for conditional random variables, then by averaging we get it for some pair of random variables (e.g. of the form U|Z=z and V|W=w).

Lemma 7.7 (Fibring lemma). Assuming that:

  • G and H are abelian groups

  • ϕ:G → H a homomorphism

  • let X, Y be G-valued random variables.

Then
d[X;Y] = d[ϕ(X);ϕ(Y)] + d[X|ϕ(X); Y|ϕ(Y)] + I[X−Y : ϕ(X),ϕ(Y) | ϕ(X)−ϕ(Y)].

Proof.

d[X;Y] = H[X−Y] − ½H[X] − ½H[Y]
= H[ϕ(X)−ϕ(Y)] + H[X−Y | ϕ(X)−ϕ(Y)] − ½H[ϕ(X)] − ½H[X|ϕ(X)] − ½H[ϕ(Y)] − ½H[Y|ϕ(Y)]
= d[ϕ(X);ϕ(Y)] + d[X|ϕ(X); Y|ϕ(Y)] + H[X−Y | ϕ(X)−ϕ(Y)] − H[X−Y | ϕ(X),ϕ(Y)]

But the last line of this expression equals

H[X−Y | ϕ(X)−ϕ(Y)] − H[X−Y | ϕ(X),ϕ(Y),ϕ(X)−ϕ(Y)] = I[X−Y : ϕ(X),ϕ(Y) | ϕ(X)−ϕ(Y)].
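The identity can be confirmed numerically on a small example. The sketch below is illustrative only (the toy choice G = 𝔽2^2 with ϕ projecting onto one coordinate, and all helper names, are assumptions, not from the notes); in characteristic 2 subtraction is XOR:

```python
import itertools, math, random

def H(pmf):
    """entropy (base 2) of a pmf given as a dict value -> probability"""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

phi = lambda x: x & 1          # homomorphism F_2^2 -> F_2 (keep last coordinate)

random.seed(0)
def rand_dist():
    w = [random.random() for _ in range(4)]
    s = sum(w)
    return [x / s for x in w]

pX, pY = rand_dist(), rand_dist()
# joint law of independent X, Y taking values in F_2^2 = {0, 1, 2, 3}
joint = {(x, y): pX[x] * pY[y] for x, y in itertools.product(range(4), repeat=2)}

def law(f):
    """pushforward of the joint law under f(x, y)"""
    out = {}
    for (x, y), p in joint.items():
        out[f(x, y)] = out.get(f(x, y), 0.0) + p
    return out

def cH(f, g):
    """conditional entropy H[f | g]"""
    return H(law(lambda x, y: (f(x, y), g(x, y)))) - H(law(g))

A = lambda x, y: x ^ y                 # X - Y (subtraction is XOR in char. 2)
B = lambda x, y: (phi(x), phi(y))
C = lambda x, y: phi(x) ^ phi(y)       # phi(X) - phi(Y)

d_XY   = H(law(A)) - 0.5 * H(law(lambda x, y: x)) - 0.5 * H(law(lambda x, y: y))
d_phi  = H(law(C)) - 0.5 * H(law(lambda x, y: phi(x))) \
                   - 0.5 * H(law(lambda x, y: phi(y)))
d_cond = cH(A, B) - 0.5 * cH(lambda x, y: x, lambda x, y: phi(x)) \
                  - 0.5 * cH(lambda x, y: y, lambda x, y: phi(y))
I_term = cH(A, C) + cH(B, C) - cH(lambda x, y: (A(x, y), *B(x, y)), C)

# the fibring identity holds exactly
assert abs(d_XY - (d_phi + d_cond + I_term)) < 1e-9
```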

We shall be interested in the following special case.

Corollary 7.8. Assuming that:

  • G=𝔽2n and X1,X2,X3,X4 are independent G-valued random variables

Then
d[(X1,X2);(X3,X4)] = d[X1;X3] + d[X2;X4]
= d[X1+X2;X3+X4] + d[X1|X1+X2; X3|X3+X4]
  + I[X1+X3,X2+X4 : X1+X2,X3+X4 | X1+X2+X3+X4](∗)

Proof. Apply Lemma 7.7 with X=(X1,X2), Y=(X3,X4) and ϕ(x,y)=x+y.

We shall now set W=X1+X2+X3+X4.

Recall that Lemma 6.11 says

d[X;Y ∥ X+Y] ≤ 3I[X:Y] + 2H[X+Y] − H[X] − H[Y].

Equivalently,

I[X:Y] ≥ ⅓(d[X;Y ∥ X+Y] + H[X] + H[Y] − 2H[X+Y]).

Applying this to the information term (∗), we get that it is at least

⅓(d[X1+X3,X2+X4; X1+X2,X3+X4 ∥ X2+X3, W] + H[X1+X3,X2+X4 | W]
  + H[X1+X2,X3+X4 | W] − 2H[X2+X3,X2+X3 | W])

which simplifies to

⅓(d[X1+X3,X2+X4; X1+X2,X3+X4 ∥ X2+X3, W] + H[X1+X3 | W]
  + H[X1+X2 | W] − 2H[X2+X3 | W])

So Corollary 7.8 now gives us:

d[X1;X3] + d[X2;X4] ≥ d[X1+X2;X3+X4] + d[X1|X1+X2; X3|X3+X4]
  + ⅓(d[X1+X2; X1+X3 ∥ X2+X3, W] + H[X1+X2 | W] + H[X1+X3 | W] − 2H[X2+X3 | W])

Now apply this to (X1,X2,X3,X4), (X1,X2,X4,X3) and (X1,X4,X3,X2) and add.

We look first at the entropy terms. We get

(H[X1+X2|W] + H[X1+X3|W] − 2H[X2+X3|W]) + (H[X1+X2|W] + H[X1+X4|W] − 2H[X2+X4|W])
  + (H[X1+X4|W] + H[X1+X3|W] − 2H[X3+X4|W]) = 0

where we made heavy use of the observation that if i,j,k,l are some permutation of 1,2,3,4, then

H[Xi+Xj|W]=H[Xk+Xl|W].

This also allowed us, e.g., to replace

d[X1+X2,X3+X4; X1+X3,X2+X4 ∥ X2+X3, W]

by

d[X1+X2; X1+X3 ∥ X2+X3, W].

Therefore, we get the following inequality:

Lemma 7.9.

2d[X1;X3] + 2d[X2;X4] + d[X1;X4] + d[X2;X3]
 ≥ 2d[X1+X2;X3+X4] + d[X1+X4;X2+X3]
  + 2d[X1|X1+X2; X3|X3+X4] + d[X1|X1+X4; X2|X2+X3]
  + ⅓(d[X1+X2; X1+X3 ∥ X2+X3, W] + d[X1+X2; X1+X4 ∥ X2+X4, W]
  + d[X1+X4; X1+X3 ∥ X3+X4, W])

Proof. Above.
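The observation H[Xi+Xj|W] = H[Xk+Xl|W] used above is easy to check numerically: conditioned on W, the two sums determine one another. A small sketch (the toy group 𝔽2^2 and all helper names are illustrative only):

```python
import itertools, math, random

def H(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

random.seed(0)
def rand_dist(m=4):            # a random distribution on F_2^2
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return [x / s for x in w]

dists = [rand_dist() for _ in range(4)]
# joint law of four independent F_2^2-valued variables X1, X2, X3, X4
joint = {xs: math.prod(p[x] for p, x in zip(dists, xs))
         for xs in itertools.product(range(4), repeat=4)}

def cond_entropy(f, g):
    """H[f | g] from the joint law; f, g are functions of (x1, x2, x3, x4)"""
    pfg, pg = {}, {}
    for xs, p in joint.items():
        fv, gv = f(*xs), g(*xs)
        pfg[(fv, gv)] = pfg.get((fv, gv), 0.0) + p
        pg[gv] = pg.get(gv, 0.0) + p
    return H(pfg) - H(pg)

W = lambda a, b, c, d: a ^ b ^ c ^ d
h12 = cond_entropy(lambda a, b, c, d: a ^ b, W)   # H[X1+X2 | W]
h34 = cond_entropy(lambda a, b, c, d: c ^ d, W)   # H[X3+X4 | W]
assert abs(h12 - h34) < 1e-9
```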

Now let X1,X2 be copies of X and Y1,Y2 copies of Y and apply Lemma 7.9 to (X1,X2,Y1,Y2) (all independent) to obtain the following.

Lemma 7.10. Assuming that:

  • X1,X2,Y1,Y2 satisfy: X1 and X2 are copies of X, Y1 and Y2 are copies of Y, and all of them are independent

Then
6d[X;Y] ≥ 2d[X1+X2;Y1+Y2] + d[X1+Y2;X2+Y1]
  + 2d[X1|X1+X2; Y1|Y1+Y2] + d[X1|X1+Y1; X2|X2+Y2]
  + ⅔ d[X1+X2; X1+Y1 ∥ X2+Y1, X1+Y2]
  + ⅓ d[X1+Y1; X1+Y2 ∥ X1+X2, Y1+Y2]


Proof. Use above.

Recall that we want (U,V) such that

τX,Y[U;V]=d[U;V]+η(d[U;X]+d[V;Y])<d[X;Y]

Lemma 7.10 gives us a collection of distances (some conditioned), at least one of which is at most (6/7)d[X;Y] (since the coefficients on the right-hand side sum to 7). So it will be enough to show that for all of them we get

d[U;X] + d[V;Y] ≤ C d[X;Y],

for some absolute constant C. Then we can take η < 1/(7C).

Definition (C-relevant). Say that (U,V) is C-relevant to (X,Y) if

d[U;X] + d[V;Y] ≤ C d[X;Y].

Lemma 7.11. (Y,X) is 2-relevant to (X,Y).

Proof. d[Y;X]+d[X;Y]=2d[X;Y].

Lemma 7.12. Assuming that:

  • U,V,X be independent 𝔽2n-valued random variables

Then
d[U+V;X] ≤ ½(d[U;X] + d[V;X] + d[U;V]).

Proof.

d[U+V;X] = H[U+V+X] − ½H[U+V] − ½H[X]
= (H[U+V+X] − H[U+V]) + ½H[U+V] − ½H[X]
≤ ½H[U+X] − ½H[U] + ½H[V+X] − ½H[V] + ½H[U+V] − ½H[X]
= ½(d[U;X] + d[V;X] + d[U;V])
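Lemma 7.12 can also be checked by direct computation over a small group. The following randomised sketch (helper names illustrative, not from the notes) tests the inequality for distributions on 𝔽2^3, where addition is XOR:

```python
import math, random

def H(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def add_xor(p, q):
    """law of A + B for independent A ~ p, B ~ q on F_2^n (addition = XOR)"""
    r = [0.0] * len(p)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            r[a ^ b] += pa * qb
    return r

def d(p, q):
    """entropic Ruzsa distance d[A;B] for independent A ~ p, B ~ q"""
    return H(add_xor(p, q)) - H(p) / 2 - H(q) / 2

random.seed(0)
def rand_dist(m=8):            # a random distribution on F_2^3
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return [x / s for x in w]

for _ in range(100):
    U, V, X = rand_dist(), rand_dist(), rand_dist()
    assert d(add_xor(U, V), X) <= 0.5 * (d(U, X) + d(V, X) + d(U, V)) + 1e-9
```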

Corollary 7.13. Assuming that:

  • (U,V) is C-relevant to (X,Y)

  • U1,U2,V1,V2 are independent copies of U,V

Then (U1+U2,V1+V2) is 2C-relevant to (X,Y).

Proof.

d[U1+U2;X] + d[V1+V2;Y] ≤ ½(2d[U;X] + d[U;U] + 2d[V;Y] + d[V;V]) (by Lemma 7.12)
≤ 2(d[U;X] + d[V;Y]) (by The entropic Ruzsa triangle inequality)
≤ 2C d[X;Y]

Corollary 7.14. (X1+X2,Y1+Y2) is 4-relevant to (Y,X).

Proof. (X,Y) is 2-relevant to (Y,X), so by Corollary 7.13 we’re done.

Corollary. Assuming that:

  • U,V are independent random variables

  • (U,V) is C-relevant to (X,Y)

Then (U+V,U+V) is (3C+2)-relevant to (X,Y).

Proof. By Lemma 7.12,

d[U+V;X] + d[U+V;Y] ≤ ½(d[U;X] + d[V;X] + d[U;Y] + d[V;Y] + 2d[U;V]) (by Lemma 7.12)
≤ ½(2d[U;X] + 4d[U;V] + 2d[V;Y])
≤ ½(6d[U;X] + 6d[V;Y] + 4d[X;Y])
≤ (3C+2) d[X;Y]

Corollary 7.15. Assuming that:

  • U,V are independent random variables

  • (U,V) is C-relevant to (X,Y)

Then (U+V,U+V) is 2(C+1)-relevant to (X,Y).

Proof.

d[U+V;X] ≤ ½(d[U;X] + d[V;X] + d[U;V])
≤ ½(d[U;X] + d[V;Y] + d[X;Y] + d[U;X] + d[X;Y] + d[V;Y])
= d[U;X] + d[V;Y] + d[X;Y]

Similarly for d[U+V;Y]. Adding the two bounds gives at most 2(d[U;X] + d[V;Y] + d[X;Y]) ≤ 2(C+1) d[X;Y].

Lemma 7.16. Assuming that:

  • U,V,X are independent 𝔽2n-valued random variables

Then
d[U|U+V;X] ≤ ½(d[U;X] + d[V;X] + d[U;V]).

Proof.

d[U|U+V;X] = H[U+X | U+V] − ½H[U | U+V] − ½H[X]
≤ H[U+X] − ½H[U] − ½H[V] + ½H[U+V] − ½H[X]

(using H[U+X | U+V] ≤ H[U+X] and H[U | U+V] = H[U] + H[V] − H[U+V]).

But d[U|U+V;X]=d[V|U+V;X], so it’s also

H[V+X] − ½H[U] − ½H[V] + ½H[U+V] − ½H[X].

Averaging the two inequalities gives the result (as earlier).
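The conditional version can likewise be tested numerically. In the sketch below (helper names illustrative; 𝔽2^3 as a toy group), d[U|U+V;X] is computed as an average of Ruzsa distances over the fibres of U+V:

```python
import math, random

def H(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def add_xor(p, q):
    """law of A + B for independent A ~ p, B ~ q on F_2^n (addition = XOR)"""
    r = [0.0] * len(p)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            r[a ^ b] += pa * qb
    return r

def d(p, q):
    """entropic Ruzsa distance d[A;B] for independent A ~ p, B ~ q"""
    return H(add_xor(p, q)) - H(p) / 2 - H(q) / 2

def d_cond(pU, pV, pX):
    """d[U | U+V ; X] for independent U, V, X on F_2^n"""
    m, total = len(pU), 0.0
    for s in range(m):
        ps = sum(pU[u] * pV[u ^ s] for u in range(m))        # P[U+V = s]
        if ps > 0:
            cond = [pU[u] * pV[u ^ s] / ps for u in range(m)]  # law of U given U+V=s
            total += ps * d(cond, pX)
    return total

random.seed(0)
def rand_dist(m=8):
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return [x / s for x in w]

for _ in range(100):
    U, V, X = rand_dist(), rand_dist(), rand_dist()
    assert d_cond(U, V, X) <= 0.5 * (d(U, X) + d(V, X) + d(U, V)) + 1e-9
```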

Corollary 7.17. Assuming that:

  • U,V are independent random variables

  • (U,V) is C-relevant to (X,Y)

Then (U1|U1+U2; V1|V1+V2) is 2C-relevant to (X,Y), and (U1|U1+V1; U2|U2+V2) is 2(C+1)-relevant to (X,Y), where U1,U2 and V1,V2 are independent copies of U and V.

Proof. Use Lemma 7.16. Then as soon as it is used, we are in exactly the situation we were in when bounding the relevance of (U1+U2,V1+V2) and (U1+V1,U2+V2).

It remains to tackle the last two terms in Lemma 7.10. For the fifth term we need to bound

d[X1+X2|X2+Y1,X1+Y2;X]+d[X1+Y1|X2+Y1,X1+Y2;Y].

But the first term of this is at most (by Lemma 7.12, applied conditionally)

½(d[X1|X2+Y1,X1+Y2;X] + d[X2|X2+Y1,X1+Y2;X] + d[X1;X2 ∥ X2+Y1,X1+Y2]).

By The entropic Ruzsa triangle inequality and independence, this is at most

d[X1|X1+Y2;X]+d[X2|X2+Y1;X]=2d[X|X+Y;X]

Now we can use Lemma 7.16, and similarly for the other terms.

In this way, we get that the fifth and sixth terms have relevances bounded above by λC for an absolute constant λ.


Index

C-relevant

additivity

entropy

conditional mutual information

conditionally independent trials

continuity

entropy

extendability

Justin Gilmer’s Theorem

invariance

maximality

mutual information

normalisation

1-factor

discrete Loomis-Whitney

Δ-intersecting

union-closed