Special Topics in Complexity Theory, Fall 2017. Instructor: Emanuele Viola

### 1 Lectures 16-17, Scribe: Tanay Mehta

In these lectures we prove the corners theorem for pseudorandom groups, following Austin [Aus16]. Our exposition has several non-major differences with that in [Aus16], which may make it more computer-science friendly. The instructor suspects a proof can also be obtained via certain local modifications and simplifications of Green's exposition [Gre05b, Gre05a] of an earlier proof for the abelian case. We focus on the case $G = \mathrm{SL}_2(q)$ for simplicity, but the proof immediately extends to other pseudorandom groups.

**Theorem 1.** Let $G = \mathrm{SL}_2(q)$. Every subset $A \subseteq G^2$ of density $\mu(A) \ge 1/\log^a |G|$, for a small enough absolute constant $a > 0$, contains a corner, i.e., a set of the form $\{(x,y), (xz,y), (x,zy)\}$ for some $z \neq 1$.
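To make the corner pattern concrete, here is a minimal brute-force sketch. It works in a toy cyclic group $\mathbb{Z}_n$ written additively, as a stand-in for the multiplicative pattern $\{(x,y), (xz,y), (x,zy)\}$; the function name and the choice of group are illustrative, not part of the theorem (which concerns $\mathrm{SL}_2(q)$).

```python
from itertools import product

def find_corner(A, n):
    """Brute-force search for a corner in a subset A of Z_n x Z_n:
    a triple (x, y), (x + z, y), (x, z + y) with z != 0 -- an additive
    toy stand-in for the pattern {(x, y), (xz, y), (x, zy)}."""
    members = set(A)
    for x, y, z in product(range(n), repeat=3):
        if z != 0 and (x, y) in members \
                and ((x + z) % n, y) in members \
                and (x, (z + y) % n) in members:
            return (x, y, z)
    return None
```

The theorem says that once the density of $A$ passes an inverse-polylogarithmic threshold, such a search always succeeds; the point of the proof is to certify this without enumeration.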

#### 1.1 Proof Overview

For intuition, suppose $A$ is a product set, i.e., $A = B \times C$ for $B, C \subseteq G$. Let's look at the quantity

$$\mathbb{E}_{x,y,z \leftarrow G}\big[A(x,y) \cdot A(xz,y) \cdot A(x,zy)\big]$$

where $A(x,y) = 1$ iff $(x,y) \in A$. Note that the random variable in the expectation is equal to $1$ exactly when $x, y, z$ form a corner in $A$. We'll show that this quantity is greater than $1/|G|$, which implies that $A$ contains a corner (where $z \neq 1$). Since we are taking $A = B \times C$, we can rewrite the above quantity as

$$\mathbb{E}_{x,y,z}\big[B(x)C(y) \cdot B(xz)C(y) \cdot B(x)C(zy)\big] = \mathbb{E}_{x,y,z}\big[B(x)C(y)B(xz)C(zy)\big] = \mathbb{E}_{x,y,z}\big[B(x)C(y)B(z)C(x^{-1}zy)\big],$$

where the last line follows by replacing $z$ with $x^{-1}z$ in the uniform distribution. If $\mu(B) \ge \delta$ and $\mu(C) \ge \delta$, then $B(x) = C(y) = B(z) = 1$ happens with probability at least $\delta^3$. Condition on $B(x) = 1$, $C(y) = 1$, $B(z) = 1$. Then the distribution of $x^{-1}zy$ is a product of three independent distributions, each uniform on a set of measure at least $\delta$. By pseudorandomness, $x^{-1}zy$ is close to uniform in statistical distance. This implies that the above quantity equals

$$\mu(B)^2\,\mu(C)\left(\mu(C) \pm |G|^{-\Omega(1)}\,\delta^{-O(1)}\right) > \frac{1}{|G|}$$

for every $\delta$ that is not too small.

Given this, it is natural to try to write an arbitrary $A$ as a combination of product sets (with some error). We will make use of a more general result.

#### 1.2 Weak Regularity Lemma

Let $U$ be some universe (we will take $U = G^2$). Let $f : U \to [-1,1]$ be a function (for us, $f = 1_A$). Let $D \subseteq \{d : U \to [-1,1]\}$ be some set of functions, which can be thought of as "easy functions" or "distinguishers."

**Theorem 2.** [Weak Regularity Lemma] For all $\epsilon > 0$, there exists a function $g := \sum_{i \le s} c_i \cdot d_i$ where $d_i \in D$, $c_i \in \mathbb{R}$, and $s = 1/\epsilon^2$, such that for all $d \in D$

$$\Big|\mathbb{E}_{x \leftarrow U}\big[(f(x) - g(x)) \cdot d(x)\big]\Big| \le \epsilon.$$

The lemma is called 'weak' because it came after Szemerédi's regularity lemma, which has a stronger distinguishing conclusion. However, the lemma is also 'strong' in the sense that Szemerédi's regularity lemma has $s$ as a tower of exponentials of height $\mathrm{poly}(1/\epsilon)$, whereas here we have $s$ polynomial in $1/\epsilon$. The weak regularity lemma is also simpler. There also exists a proof of Szemerédi's theorem (on arithmetic progressions) which uses weak regularity as opposed to the full regularity lemma used initially.

Proof. We will construct the approximation $g$ through an iterative process producing functions $g_0, g_1, \ldots$. We will show that $\mathbb{E}[(f - g_i)^2]$ decreases by $\epsilon^2$ each iteration.

**Start**: Define $g_0 := 0$ (which can be realized setting $c_i = 0$).

**Iterate**: If not done, there exists $d \in D$ such that $|\mathbb{E}[(f - g) \cdot d]| > \epsilon$. Assume without loss of generality $\mathbb{E}[(f - g) \cdot d] > \epsilon$.

**Update**: $g' := g + \lambda d$, where $\lambda \in \mathbb{R}$ shall be picked later.

Let us analyze the progress made by the algorithm.

$$\begin{aligned}
\mathbb{E}[(f - g')^2] &= \mathbb{E}[(f - g - \lambda d)^2] \\
&= \mathbb{E}[(f - g)^2] + \lambda^2\,\mathbb{E}[d^2] - 2\lambda\,\mathbb{E}[(f - g) \cdot d] \\
&\le \mathbb{E}[(f - g)^2] + \lambda^2 - 2\lambda\epsilon \\
&= \mathbb{E}[(f - g)^2] - \epsilon^2,
\end{aligned}$$

where the last line follows by taking $\lambda := \epsilon$. Therefore, there can only be $1/\epsilon^2$ iterations, because $\mathbb{E}[(f - g_0)^2] = \mathbb{E}[f^2] \le 1$.
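The iterative process above is effectively an algorithm. Below is a boosting-style sketch of it in Python (the function name and list-based representation are illustrative; $\lambda = \epsilon$ as in the analysis):

```python
def weak_regularity(f, D, eps):
    """Iteratively build g = sum_i c_i * d_i with |E[(f - g) * d]| <= eps
    for every d in D.  f and each d are lists of reals over a finite
    universe.  This mirrors the Start/Iterate/Update steps of the proof."""
    n = len(f)
    g = [0.0] * n  # Start: g_0 = 0
    steps = 0
    while True:
        # Iterate: look for a distinguisher still correlated with f - g.
        found = None
        for d in D:
            corr = sum((fi - gi) * di for fi, gi, di in zip(f, g, d)) / n
            if abs(corr) > eps:
                found, sign = d, (1.0 if corr > 0 else -1.0)
                break
        if found is None:
            return g, steps
        # Update: g' = g + lambda * d (signed), which decreases the
        # potential E[(f - g)^2] by at least eps^2.
        g = [gi + eps * sign * di for gi, di in zip(g, found)]
        steps += 1
```

On a toy universe with rectangle distinguishers the loop terminates within $1/\epsilon^2$ iterations, matching the potential argument above.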

#### 1.3 Getting more for rectangles

Returning to the lower bound proof, we will use the weak regularity lemma to approximate the indicator function $1_A$ for arbitrary $A \subseteq G^2$ by rectangles. That is, we take $D$ to be the collection of indicator functions for all sets of the form $S \times T$ for $S, T \subseteq G$. The weak regularity lemma gives us $1_A$ as a linear combination $g$ of rectangles. These rectangles may overlap. However, we ideally want $g$ to be a linear combination of *non-overlapping* rectangles.

**Claim 3.** Given a decomposition of $g$ into rectangles from the weak regularity lemma with $s$ functions, there exists a decomposition with $2^{O(s)}$ rectangles which don't overlap.

Proof. Exercise.
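One way to approach the exercise: group the points of $G$ that have the same membership pattern across the $s$ sets $S_i$ (resp. $T_i$); products of these groups give at most $2^s \cdot 2^s$ non-overlapping rectangles refining the original ones. A sketch (function names are illustrative):

```python
from itertools import product

def refine(rectangles, universe):
    """Turn s possibly-overlapping rectangles (S, T) over universe^2 into
    non-overlapping ones: group elements by their membership signature
    across all the S's (resp. the T's); products of the signature classes
    give at most 2^s * 2^s rectangles, each lying entirely inside or
    entirely outside every original rectangle."""
    def classes(sides):
        groups = {}
        for u in universe:
            signature = tuple(u in side for side in sides)
            groups.setdefault(signature, []).append(u)
        return list(groups.values())
    rows = classes([S for S, _ in rectangles])
    cols = classes([T for _, T in rectangles])
    return [(R, C) for R, C in product(rows, cols)]
```

Since every original indicator $1_{S_i \times T_i}$ is constant on each new rectangle, any linear combination $g$ of the original rectangles is constant on each new rectangle as well.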

In the above decomposition, note that it is natural to take the coefficient of a rectangle to be the density of the points of $A$ that are in the rectangle. This gives rise to the following claim.

**Claim 4.** The weights of the rectangles in the above claim can be taken to be the average of $f$ in each rectangle, at the cost of doubling the distinguisher error.

Consequently, we have that $f = g + h$, where $g$ is the sum of $2^{O(s)}$ non-overlapping rectangles with coefficients equal to the average of $f$ inside each rectangle, and $h := f - g$ satisfies $|\mathbb{E}[h \cdot d]| \le 2\epsilon$ for every rectangle distinguisher $d$.

Proof. Let $g = \sum_i c_i 1_{R_i}$ be a partition decomposition with arbitrary weights $c_i$. Let $g' = \sum_i c'_i 1_{R_i}$ be a partition decomposition with weights $c'_i$ being the average of $f$ on $R_i$. It is enough to show that for all rectangle distinguishers $d$

$$\big|\mathbb{E}[(g - g') \cdot d]\big| \le \epsilon.$$

By the triangle inequality, we have that

$$\big|\mathbb{E}[(g - g') \cdot d]\big| \le \big|\mathbb{E}[(f - g) \cdot d]\big| + \big|\mathbb{E}[(f - g') \cdot d]\big| \le \epsilon + \big|\mathbb{E}[(f - g') \cdot d]\big|.$$

To bound $|\mathbb{E}[(f - g') \cdot d]|$, note that the error is maximized for a $d$ that respects the decomposition in non-overlapping rectangles, i.e., $d$ is the union of some non-overlapping rectangles from the decomposition. This can be argued using that, unlike $f$, the value of $g$ and $g'$ on a rectangle from the decomposition is fixed. But, for such $d$, $\mathbb{E}[(f - g') \cdot d] = 0$! More formally, $\mathbb{E}[(f - g') \cdot d] = \sum_{R_i \subseteq d} \mathbb{E}[(f - c'_i) \cdot 1_{R_i}] = 0$, since $c'_i$ is the average of $f$ on $R_i$.

We need to get a little more from this decomposition. The conclusion of the regularity lemma holds with respect to distinguishers that can be written as $d(x,y) = d_1(x) \cdot d_2(y)$ where $d_1$ and $d_2$ map $G$ to $\{0,1\}$. We need the same guarantee for $d_1$ and $d_2$ with range $[-1,1]$. This can be accomplished paying only a constant factor in the error, as follows. Let $d_1$ and $d_2$ have range $[-1,1]$. Write $d_1 = d_1^+ - d_1^-$ where $d_1^+$ and $d_1^-$ have range $[0,1]$, and the same for $d_2$. The error for distinguisher $d_1 \cdot d_2$ is at most the sum of the errors for the distinguishers $d_1^+ \cdot d_2^+$, $d_1^+ \cdot d_2^-$, $d_1^- \cdot d_2^+$, and $d_1^- \cdot d_2^-$. So we can restrict our attention to distinguishers $d_1 \cdot d_2$ where $d_1$ and $d_2$ have range $[0,1]$. In turn, a function $d_1$ with range $[0,1]$ can be written as an expectation $\mathbb{E}_a[d_1^a]$ for functions $d_1^a$ with range $\{0,1\}$, and the same for $d_2$. We conclude by observing that

$$\Big|\mathbb{E}_{x,y}\big[(f - g)(x,y) \cdot \mathbb{E}_a[d_1^a(x)] \cdot \mathbb{E}_b[d_2^b(y)]\big]\Big| \le \mathbb{E}_{a,b}\Big|\mathbb{E}_{x,y}\big[(f - g)(x,y) \cdot d_1^a(x) \cdot d_2^b(y)\big]\Big| \le \epsilon.$$

#### 1.4 Proof

Let us now finish the proof by showing a corner exists for sufficiently dense sets $A \subseteq G^2$. We'll use three types of decompositions for $f = 1_A$, with respect to the following three types of distinguishers, where $d_1$ and $d_2$ have range $\{0,1\}$:

- $d(x,y) = d_1(x) \cdot d_2(y)$,
- $d(x,y) = d_1(xy) \cdot d_2(y)$,
- $d(x,y) = d_1(x) \cdot d_2(xy)$.

The last two distinguishers can be visualized as parallelograms with a 45-degree angle between two segments. The same extra properties we discussed for rectangles hold for them too.

Recall that we want to show

$$\mathbb{E}_{x,y,z}\big[f(x,y) \cdot f(xz,y) \cdot f(x,zy)\big] > \frac{1}{|G|}.$$

We'll decompose the $i$-th occurrence of $f$ via the $i$-th decomposition listed above. We'll write this decomposition as $f = g_i + h_i$. We do this in the following order:

$$\begin{aligned}
\mathbb{E}_{x,y,z}\big[f(x,y) \cdot f(xz,y) \cdot f(x,zy)\big]
&= \mathbb{E}\big[f \cdot f \cdot (g_3 + h_3)\big] \\
&= \mathbb{E}\big[f \cdot (g_2 + h_2) \cdot g_3\big] + \mathbb{E}\big[f \cdot f \cdot h_3\big] \\
&= \mathbb{E}\big[g_1 \cdot g_2 \cdot g_3\big] + \mathbb{E}\big[h_1 \cdot g_2 \cdot g_3\big] + \mathbb{E}\big[f \cdot h_2 \cdot g_3\big] + \mathbb{E}\big[f \cdot f \cdot h_3\big].
\end{aligned}$$

We first show that the main term is big (i.e., at least inverse polylogarithmic) in the next two claims. Then we show that the expectations of the terms involving the $h_i$ are small.

**Claim 5.** For all $g_1, g_2, g_3$ as above, the values $\mathbb{E}_{x,y}\big[g_1(x,y)\,g_2(xz,y)\,g_3(x,zy)\big]$ are the same (over the choice of $z$) up to an error of $2^{O(s)} \cdot |G|^{-\Omega(1)}$.

Proof. We just need to get error $|G|^{-\Omega(1)}$ for any product of three functions of the three distinguisher types; the claim then follows by expanding each $g_i$ into its at most $2^{O(s)}$ non-overlapping terms with bounded coefficients and applying the triangle inequality. Collect the factors of such a product according to whether they depend on $x$, on $y$, or on $xzy$, writing the product as $B(x) \cdot C(y) \cdot D(xzy)$. By the standard pseudorandomness argument we saw in previous lectures,

$$\Big|\mathbb{E}_{x,y}\big[B(x)\,C(y)\,D(xzy)\big] - \mathbb{E}[B] \cdot \mathbb{E}[C] \cdot \mathbb{E}[D]\Big| \le |G|^{-\Omega(1)},$$

and the right-hand side does not depend on $z$.

Recall that we start with a set of density $\delta$.

**Claim 6.** $\mathbb{E}_{x,y}\big[f(x,y)\,g_2(x,y)\,g_3(x,y)\big] \ge \delta^3$.

Proof. By the previous claim, we can fix $z = 1$. We will relate the expectation over $x, y$ to $\delta$ by a trick using the Hölder inequality: for non-negative random variables $X, Y, Z$,

$$\mathbb{E}[X] \le \mathbb{E}[XYZ]^{1/3} \cdot \mathbb{E}\big[X\,Y^{-1/2}\,Z^{-1/2}\big]^{2/3}.$$

To apply this inequality in our setting, write

$$X := f, \qquad Y := g_2, \qquad Z := g_3,$$

all evaluated at a uniform $(x,y)$. By the Hölder inequality, we get that

$$\delta = \mathbb{E}[f] \le \mathbb{E}\big[f\,g_2\,g_3\big]^{1/3} \cdot \mathbb{E}\big[f\,g_2^{-1/2}\,g_3^{-1/2}\big]^{2/3}.$$

Note that

$$\mathbb{E}\big[f\,g_2^{-1/2}\,g_3^{-1/2}\big] \le \mathbb{E}\big[f\,g_2^{-1}\big]^{1/2} \cdot \mathbb{E}\big[f\,g_3^{-1}\big]^{1/2} \qquad \text{and} \qquad \mathbb{E}\big[f\,g_i^{-1}\big] = \mathbb{E}_{x,y}\Big[f(x,y) \cdot \mathbb{E}\big[f \mid V(x,y)\big]^{-1}\Big] \le 1,$$

where the first inequality is by Cauchy–Schwarz and $V(x,y)$ is the set in the partition that contains $(x,y)$. Finally, by non-negativity of $f$, we have that $f$ vanishes on any set of the partition where $g_i$ vanishes, so such sets contribute nothing to the expectations above. Combining the bounds gives $\mathbb{E}[f\,g_2\,g_3] \ge \delta^3$. This concludes the proof.
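The Hölder trick used above, $\mathbb{E}[X] \le \mathbb{E}[XYZ]^{1/3}\,\mathbb{E}[X Y^{-1/2} Z^{-1/2}]^{2/3}$ for positive $X, Y, Z$ (an instance of Hölder with exponents $3$ and $3/2$), can be sanity-checked numerically:

```python
import random

def mean(v):
    return sum(v) / len(v)

# Numeric check of E[X] <= E[X*Y*Z]^(1/3) * E[X / sqrt(Y*Z)]^(2/3)
# for positive random variables on a finite sample space.
random.seed(1)
for _ in range(200):
    n = 50
    X = [random.uniform(0.1, 2.0) for _ in range(n)]
    Y = [random.uniform(0.1, 2.0) for _ in range(n)]
    Z = [random.uniform(0.1, 2.0) for _ in range(n)]
    lhs = mean(X)
    rhs = (mean([x * y * z for x, y, z in zip(X, Y, Z)]) ** (1 / 3)
           * mean([x / (y * z) ** 0.5 for x, y, z in zip(X, Y, Z)]) ** (2 / 3))
    assert lhs <= rhs + 1e-12
```

Equality holds when $Y$ and $Z$ are constant, which is the reason the bound is tight enough to extract $\delta^3$.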

We’ve shown that the term is big. It remains to show the other terms are small. Let be the error in the weak regularity lemma with respect to distinguishers with range .

**Claim 7.** $\big|\mathbb{E}_{x,y,z}\big[f(x,y)\,f(xz,y)\,h_3(x,zy)\big]\big| \le \epsilon'^{1/4}$.

Proof. Replace $z$ with $x^{-1}z$ in the uniform distribution to get

$$\mathbb{E}_{x,y,z}\big[f(x,y)\,f(xz,y)\,h_3(x,zy)\big] = \mathbb{E}_{x,y,z}\big[f(x,y)\,f(z,y)\,h_3(x,x^{-1}zy)\big] \le \left(\mathbb{E}_{x,y,z,z'}\big[f(z,y)\,f(z',y)\,h_3(x,x^{-1}zy)\,h_3(x,x^{-1}z'y)\big]\right)^{1/2},$$

where the first inequality is by Cauchy–Schwarz (over $x$ and $y$, using $f^2 \le 1$).

Now remove the two remaining $f$ terms and reason in the same way, applying Cauchy–Schwarz over $y, z, z'$:

$$\mathbb{E}_{x,y,z,z'}\big[f(z,y)\,f(z',y)\,h_3(x,x^{-1}zy)\,h_3(x,x^{-1}z'y)\big] \le \left(\mathbb{E}_{y,z,z',x,x'}\big[h_3(x,x^{-1}zy)\,h_3(x,x^{-1}z'y)\,h_3(x',x'^{-1}zy)\,h_3(x',x'^{-1}z'y)\big]\right)^{1/2}.$$

Replace $z$ with $zy^{-1}$ and $z'$ with $z'y^{-1}$ to rewrite the expectation as

$$\mathbb{E}_{x,x',z,z'}\big[h_3(x,x^{-1}z)\,h_3(x,x^{-1}z')\,h_3(x',x'^{-1}z)\,h_3(x',x'^{-1}z')\big].$$

We want to view the last three terms in the product as a distinguisher against $h_3(x,x^{-1}z)$. First, note that $h_3$ has range $[-1,1]$. This is because $h_3 = f - g_3$ and $g_3$ has range $[0,1]$.

Fix $x'$ and $z'$ maximizing the expectation. The last term in the expectation becomes a constant $c \in [-1,1]$. The second term only depends on $x$, and the third only on $z$. Hence for appropriate functions $d_1$ and $d_2$ with range $[-1,1]$ this expectation can be rewritten as

$$c \cdot \mathbb{E}_{x,z}\big[h_3(x,x^{-1}z)\,d_1(x)\,d_2(z)\big] = c \cdot \mathbb{E}_{x,y}\big[h_3(x,y)\,d_1(x)\,d_2(xy)\big] \le \epsilon',$$

which concludes the proof.

There are similar proofs to show the remaining terms are small. For $\mathbb{E}[f \cdot h_2 \cdot g_3]$, we can perform simple manipulations and then reduce to the above case. For $\mathbb{E}[h_1 \cdot g_2 \cdot g_3]$, we have a slightly easier proof than above.

##### 1.4.1 Parameters

Suppose our set has density $\delta \ge 1/\log^a |G|$ for a small constant $a > 0$. We apply the weak regularity lemma for error $\epsilon := \delta^C$ for a big constant $C$. This yields the number of functions $s = 1/\epsilon^2 = 1/\delta^{2C}$. For say $a < 1/(2C)$, we can bound the main term from below by the same expectation with $z$ fixed to $1$, up to an error $2^{O(s)} \cdot |G|^{-\Omega(1)} = 2^{O(\log^{2aC}|G|)} \cdot |G|^{-\Omega(1)} \le |G|^{-\Omega(1)}$. Then, the expectation at $z = 1$ is at least $\delta^3$. The expectation of the terms with $h_i$ is less than $O(\epsilon^{1/4}) = O(\delta^{C/4})$, which is less than $\delta^3/2$ for $C$ big enough. So the proof can be completed for all sufficiently small $a$.
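A small numeric illustration of how the parameters balance (the concrete values of $a$ and $C$ below are illustrative choices, with all $O(\cdot)$ constants suppressed):

```python
import math

def params(log2_G, a, C):
    """Illustrative parameter balance: delta = 1/log^a |G|, eps = delta^C,
    s = 1/eps^2.  The error 2^{O(s)} * |G|^{-Omega(1)} stays negligible
    as long as s = log^{2aC} |G| is much smaller than log |G|, i.e. 2aC < 1."""
    log_G = log2_G * math.log(2)  # natural log of |G|
    delta = 1.0 / log_G ** a      # density 1/log^a |G|
    eps = delta ** C              # regularity error delta^C
    s = 1.0 / eps ** 2            # number of functions 1/eps^2
    return delta, eps, s

delta, eps, s = params(log2_G=1000, a=0.05, C=4)
assert 2 * 0.05 * 4 < 1               # 2aC < 1
assert s < 1000 * math.log(2)         # s is much smaller than log |G|
```

With $2aC < 1$ the quantity $2^{O(s)}$ grows like $2^{\log^{2aC}|G|}$, which is dominated by any fixed power of $|G|$, so the pseudorandomness error does not swallow the $\delta^3$ main term.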

### References

[Aus16] Tim Austin. Ajtai-Szemerédi theorems over quasirandom groups. In Recent trends in combinatorics, volume 159 of IMA Vol. Math. Appl., pages 453-484. Springer, Cham, 2016.

[Gre05a] Ben Green. An argument of Shkredov in the finite field setting, 2005. Available at people.maths.ox.ac.uk/greenbj/papers/corners.pdf.

[Gre05b] Ben Green. Finite field models in additive combinatorics. Surveys in Combinatorics, London Math. Soc. Lecture Notes 327, 1-27, 2005.