2 The Ramsey numbers R(3,k)

Theorem 2.1 (Erdős–Szekeres, 1935). $R(l,k) \le \binom{k+l-2}{l-1}$.

Proof. Induction, using $R(l,k) \le R(l-1,k) + R(l,k-1)$.

Note. $R(k) = R(k,k) \le \binom{2(k-1)}{k-1} \le \frac{c\,4^k}{\sqrt{k}}$.

Corollary 2.2. $R(3,k) \le k^2$.
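This follows from Theorem 2.1 by taking $l = 3$; the short calculation:

```latex
R(3,k) \;\le\; \binom{k+3-2}{3-1} \;=\; \binom{k+1}{2}
\;=\; \frac{k(k+1)}{2} \;\le\; k^2 \qquad (k \ge 1).
```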

Idea 1: If we colour $K_n$ randomly, we get $\frac{1}{8}\binom{n}{3}$ red triangles in expectation.  :(

Idea 2: We want to bias our distribution in favour of one colour, say “red”: take $G \sim G(n,p)$ with the edges of $G$ red and the non-edges blue. Sad news: if $p \gg \frac{1}{n}$ then $G$ contains a $K_3$ with high probability.  :(

Theorem 2.3 (Erdős, 1960s). $R(3,k) \ge c\left(\frac{k}{\log k}\right)^{3/2}$ for some $c > 0$.

Proof idea: Sample $G \sim G(n,p)$, then define $\tilde{G} = G - (\text{a vertex from each triangle})$. For this to work we need $\frac{\#\text{vertices}}{2} > \#\text{triangles}$, i.e. $\frac{n}{2} > p^3\binom{n}{3}$. So take $p \approx n^{-2/3}$.
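The deletion step can be sketched in code (a toy illustration with hypothetical small parameters, not a proof): sample $G \sim G(n,p)$ with $p = n^{-2/3}$ and greedily remove one vertex from every surviving triangle.

```python
# Deletion method sketch: sample G(n, p), then delete one vertex from each
# triangle so that the surviving induced subgraph is triangle-free.
import itertools
import random

def sample_gnp(n, p, rng):
    """Edge set of G(n, p) on {0, ..., n-1} (pairs stored sorted)."""
    return {pair for pair in itertools.combinations(range(n), 2)
            if rng.random() < p}

def delete_triangles(n, edges):
    """Greedily remove a vertex from every triangle; return survivors."""
    alive = set(range(n))
    for a, b, c in itertools.combinations(range(n), 3):
        if {a, b, c} <= alive and {(a, b), (a, c), (b, c)} <= edges:
            alive.discard(c)  # kill one vertex of this triangle
    return alive

rng = random.Random(0)          # fixed seed, illustrative run
n = 60
p = n ** (-2 / 3)               # the scaling from the proof idea
edges = sample_gnp(n, p, rng)
alive = delete_triangles(n, edges)
print(len(alive), "of", n, "vertices survive")
```

The induced subgraph on `alive` is triangle-free by construction; the work in the proof is to show that, for this choice of $p$, few vertices get deleted while the independence number stays below $k$.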

Remark. This is called the “deletion method”.

Fact (Markov). Let $X$ be a non-negative random variable and let $t > 0$. Then

\[ \mathbb{P}(X \ge t) \le \frac{\mathbb{E}X}{t}. \]

That is,

\[ \mathbb{P}(X \ge s\,\mathbb{E}X) \le \frac{1}{s} \]

for any $s > 0$.
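As a quick numerical sanity check, one can verify Markov's bound at every integer threshold for a hypothetical choice of non-negative $X$, here $X \sim \mathrm{Binomial}(20, 0.3)$ computed exactly:

```python
# Sanity check of Markov's inequality for X ~ Binomial(20, 0.3),
# a hypothetical non-negative random variable; any such X works.
from math import comb

n, p = 20, 0.3
pmf = [comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n + 1)]
mean = sum(j * pmf[j] for j in range(n + 1))  # E[X] = np = 6.0

for t in range(1, n + 1):
    tail = sum(pmf[j] for j in range(t, n + 1))  # P(X >= t)
    assert tail <= mean / t + 1e-12              # Markov: P(X >= t) <= E[X]/t
print("Markov verified for all integer t in [1, 20]")
```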

Fact. For $1 \le k \le n$ we have

\[ \binom{n}{k} \le \left(\frac{en}{k}\right)^k \]

and

\[ \binom{n}{k} \ge \left(\frac{n}{k}\right)^k. \]
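Both bounds are easy to confirm numerically over a (hypothetical) small parameter grid:

```python
# Verify (n/k)^k <= C(n,k) <= (en/k)^k for all 1 <= k <= n < 40.
from math import comb, e

for n in range(1, 40):
    for k in range(1, n + 1):
        assert (n / k) ** k <= comb(n, k) <= (e * n / k) ** k
print("both binomial bounds hold for all 1 <= k <= n < 40")
```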

Definition 2.4 (Independence number). Let $G$ be a graph. A set $I \subseteq V(G)$ is independent if $xy \notin E(G)$ for all distinct $x, y \in I$.

Let $\alpha(G)$ denote the size of the largest independent set in $G$.


Note. To prove Theorem 2.3, we are trying to construct a triangle-free graph on $k^{\frac{3}{2} - o(1)}$ vertices with $\alpha(G) < k$.

Lemma 2.5. Assuming that:

  • $\frac{1}{n} \ll p \le \frac{1}{2}$

  • $G \sim G(n,p)$

Then
\[ \alpha(G) \le (2 + o(1))\frac{\log np}{p}, \]

with probability tending to $1$ as $n \to \infty$ (known as “with high probability” (w.h.p.)).

Remark. We actually have equality in this lemma.

Proof. Let $k = \frac{(2+\delta)\log np}{p}$ for some $\delta > 0$. Let $X$ be the random variable that counts the number of independent $k$-sets in $G$. Using $1 - x \le e^{-x}$,

\[\begin{aligned}
\mathbb{E}X &= \mathbb{E}\sum_{I \in [n]^{(k)}} \mathbb{1}_{I \text{ is independent}} = \sum_{I \in [n]^{(k)}} \mathbb{P}(I \text{ is independent}) = \binom{n}{k}(1-p)^{\binom{k}{2}} \\
&\le \left(\frac{en}{k}\right)^k e^{-p\binom{k}{2}} = \left(\frac{en}{k}\,e^{-\frac{(k-1)p}{2}}\right)^k \to 0
\end{aligned}\]

by plugging in $k$. So by Markov,

\[ \mathbb{P}(X \ge 1) \le \mathbb{E}X \to 0. \]
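The collapse of the first moment is easy to see numerically: evaluate $\log \mathbb{E}X = \log\binom{n}{k} + \binom{k}{2}\log(1-p)$ for illustrative (hypothetical) parameters $n = 10^4$, $p = 0.01$, and watch it flip from hugely positive to hugely negative as $k$ grows past the order of $\frac{\log(np)}{p}$.

```python
# log E[X] = log C(n,k) + C(k,2) log(1-p) for the count X of independent
# k-sets in G(n, p); illustrative parameters n = 10^4, p = 0.01,
# where 2 log(np) / p is about 921.
from math import comb, log

n, p = 10_000, 0.01
log_ex = {k: log(comb(n, k)) + comb(k, 2) * log(1 - p)
          for k in (400, 600, 900, 1100)}
for k in sorted(log_ex):
    print(k, round(log_ex[k], 1))
```

For the smaller values of $k$ the expectation is astronomically large; for the larger ones it is vanishingly small, so Markov kills the probability that any independent set of that size exists.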

We have a method for bounding the probability that $X$ is large: Markov says that $\mathbb{P}(X \ge t) \le \frac{\mathbb{E}X}{t}$. What about bounding the probability that it is small?

Fact (Chebyshev): Let $X$ be a random variable with $\mathbb{E}X$ and $\operatorname{Var}X$ finite. Then for all $t > 0$,

\[ \mathbb{P}(|X - \mathbb{E}X| \ge t) \le \frac{\operatorname{Var}X}{t^2}. \]

Reminder:

\[ \operatorname{Var}X = \mathbb{E}X^2 - (\mathbb{E}X)^2 = \mathbb{E}(X - \mathbb{E}X)^2. \]

Lemma 2.6. Assuming that:

  • $\frac{1}{n} \ll p \le 1$

  • $G \sim G(n,p)$

Then
\[ \#\text{triangles in } G = (1 + o(1))\,p^3\binom{n}{3} \]

with high probability.

Proof. Let $X$ be the random variable counting the number of triangles in $G$. Note

\[ X = \sum_{T \in [n]^{(3)}} \mathbb{1}_{T^{(2)} \subseteq G}. \]

The mean is straightforward:

\[ \mathbb{E}X = p^3\binom{n}{3}. \]

The variance is a little more tricky. First:

\[ X^2 = \sum_{S,T \in [n]^{(3)}} \mathbb{1}_{S^{(2)},\, T^{(2)} \subseteq G}. \]

We can write

\[ \mathbb{E}X^2 = \sum_{S,T \in [n]^{(3)}} \mathbb{P}(T^{(2)}, S^{(2)} \subseteq G), \qquad (\mathbb{E}X)^2 = \sum_{S,T \in [n]^{(3)}} \mathbb{P}(T^{(2)} \subseteq G)\,\mathbb{P}(S^{(2)} \subseteq G). \]

Since the pairs of events are independent whenever $|S^{(2)} \cap T^{(2)}| = 0$, and since $|S^{(2)} \cap T^{(2)}| = 2$ is impossible, we have


\[\begin{aligned}
\operatorname{Var}X &\le \sum_{\substack{S,T \in [n]^{(3)} \\ |S^{(2)} \cap T^{(2)}| = 1}} \mathbb{P}(S^{(2)}, T^{(2)} \subseteq G) \;+ \sum_{\substack{S,T \in [n]^{(3)} \\ |S^{(2)} \cap T^{(2)}| = 3}} \mathbb{P}(S^{(2)}, T^{(2)} \subseteq G) \\
&\le p^5 n^4 + \mathbb{E}X.
\end{aligned}\]

Now note that for any fixed $\delta > 0$ (with $C = C(\delta)$ a constant),

\[\begin{aligned}
\mathbb{P}(|X - \mathbb{E}X| \ge \delta\,\mathbb{E}X) &\le \frac{\operatorname{Var}X}{\delta^2 (\mathbb{E}X)^2} \le \frac{p^5 n^4}{\delta^2 (\mathbb{E}X)^2} + \frac{1}{\delta^2\,\mathbb{E}X} \\
&\le C\,\frac{p^5 n^4}{(n^3 p^3)^2} + \frac{1}{\delta^2\,\mathbb{E}X} = C\,\frac{1}{n^2 p} + \frac{1}{\delta^2\,\mathbb{E}X} \to 0.
\end{aligned}\]
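The concentration in Lemma 2.6 can be probed by simulation; here is a minimal Monte Carlo check with hypothetical parameters ($n = 60$, $p = 0.3$, fixed seed), comparing the average triangle count over a few samples against $p^3\binom{n}{3}$.

```python
# Monte Carlo: triangle counts in G(n, p) cluster around p^3 * C(n, 3).
import itertools
import random
from math import comb

def triangle_count(n, p, rng):
    """Sample G(n, p) and count its triangles by brute force."""
    edge = {pair: rng.random() < p
            for pair in itertools.combinations(range(n), 2)}
    return sum(1 for a, b, c in itertools.combinations(range(n), 3)
               if edge[(a, b)] and edge[(a, c)] and edge[(b, c)])

rng = random.Random(1)
n, p = 60, 0.3
mean = p ** 3 * comb(n, 3)   # the Lemma 2.6 prediction
samples = [triangle_count(n, p, rng) for _ in range(20)]
avg = sum(samples) / len(samples)
print(round(avg, 1), round(mean, 1))
```

Of course $n = 60$ is far from asymptotic, so the agreement is rough; the lemma says the relative error tends to $0$ as $np \to \infty$.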

Recall: In Lemma 2.5 we showed that for $G \sim G(n,p)$ with $\frac{1}{n} \ll p \le \frac{1}{2}$, we have

\[ \alpha(G) \le (2 + o(1))\frac{\log(np)}{p} \]

with high probability.

Proof of Theorem 2.3, a lower bound on $R(3,k)$. Set $n = \left(\frac{k}{2\log k}\right)^{3/2}$ and $p = n^{-2/3} = \frac{2\log k}{k}$. Let $G \sim G(n,p)$ and let $\tilde{G}$ be $G$ with a vertex deleted from each triangle. Then clearly $\alpha(\tilde{G}) \le \alpha(G)$, so with high probability we have

\[ \alpha(\tilde{G}) \le \alpha(G) \le (2 + o(1))\frac{\log np}{p} \le \left(\tfrac{2}{3} + o(1)\right) n^{2/3}\log n \le \left(\tfrac{2}{3} + o(1)\right)\frac{k}{2\log k}\left(\log k^{3/2}\right) \le (1 + o(1))\frac{k}{2} < k. \tag{$*$} \]

We also have with high probability:

\[ |V(\tilde{G})| \ge n - (\#\text{triangles in } G) \ge n - (1 + o(1))\frac{p^3 n^3}{6} = n - (1 + o(1))\frac{n}{6} \ge \frac{n}{2}. \tag{$**$} \]

So with high probability ($*$) and ($**$) hold: if ($*$) holds with probability at least $1 - \varepsilon_1 \to 1$ and ($**$) holds with probability at least $1 - \varepsilon_2 \to 1$, then the probability that they both hold is at least $1 - \varepsilon_1 - \varepsilon_2 \to 1$. So there must exist a graph $\tilde{G}$ on $\frac{n}{2} = \frac{1}{2}\left(\frac{k}{2\log k}\right)^{3/2}$ vertices with no triangles and independence number $< k$.

Definition (Chromatic number). Let $G$ be a graph. The chromatic number of $G$ is the minimum $k$ such that there exists $\chi : V(G) \to [k]$ with no edge having both ends of the same colour.


$\chi(G)$ denotes the chromatic number.

Definition 2.7 (Girth). The girth of a graph $G$ is the length of the shortest cycle in $G$.
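To make both definitions concrete, here is a brute-force computation on a standard example, the Petersen graph, which has girth $5$ and chromatic number $3$:

```python
# Girth and chromatic number of the Petersen graph, by brute force.
import itertools
from collections import deque

# Outer 5-cycle 0..4, spokes, inner pentagram 5..9.
edge_list = ([(i, (i + 1) % 5) for i in range(5)]
             + [(i, i + 5) for i in range(5)]
             + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
adj = {v: set() for v in range(10)}
for u, v in edge_list:
    adj[u].add(v)
    adj[v].add(u)

def girth():
    """Min over edges uv of (shortest u-v path avoiding uv) + 1."""
    best = float("inf")
    for u, v in edge_list:
        dist, q = {u: 0}, deque([u])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if {x, y} == {u, v} or y in dist:
                    continue  # skip the edge uv itself / visited vertices
                dist[y] = dist[x] + 1
                q.append(y)
        if v in dist:
            best = min(best, dist[v] + 1)
    return best

def chromatic_number():
    """Least k admitting a proper colouring chi: V -> [k]."""
    for k in range(1, 11):
        for col in itertools.product(range(k), repeat=10):
            if all(col[u] != col[v] for u, v in edge_list):
                return k

print(girth(), chromatic_number())  # -> 5 3
```

So the Petersen graph already pushes girth and chromatic number up simultaneously a little; Erdős's theorem below says both can be made arbitrarily large at once.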

Using a method similar to the proof of Theorem 2.3 that we just saw, Erdős proved:

Theorem (Erdős). There exist graphs $G$ with arbitrarily large chromatic number and arbitrarily large girth, i.e. for all $g, k$ there exists $G$ with girth $> g$ and $\chi(G) > k$.

This theorem is somewhat surprising, because if you think of examples of graphs with large chromatic number, the first things you try have small girth (for example, $K_n$). It shows that the answer to the question “what kind of necessary conditions are there for chromatic number to be large?” is not so simple.