Section 2 Duke's theorem
Recall 1.2 and 1.9.
We will now prove this, not via Linnik's proof, but via an analogue of Weyl's criterion. Strategy:
- Weyl's criterion \(\leadsto\) bounding exponential sums.
- To bound these sums we will use automorphic methods.
We will begin with 1.2.
So we will be working on \(S^2\text{.}\) Recall 1.28, which was for \(\lb 0,1)\text{.}\) There we used exponentials. Why? Because they form a dense, convenient basis: Fourier theory.
To replace these on the sphere \(S^2\) we use the spherical harmonics, i.e. (restrictions of) homogeneous harmonic polynomials, in analogy with the \(S^1\) case:
\begin{equation*}
\left( \frac{x+iy}{|x+iy|}\right)^m = e(m\theta)
\end{equation*}
the same spherical harmonic construction works for \(S^n\text{.}\)
We will show
\begin{equation*}
\frac{1}{\# \Omega_n} \sum_{x\in \Omega_n} P(x) \to 0
\end{equation*}
for \(\deg P(x) \gt 0\text{.}\) i.e.
\begin{equation}
\sum_{\alpha \in \ZZ^3,|\alpha|^2 = n} P\left(\frac\alpha{|\alpha|}\right) = o( r_3(n))\label{eqn-spherical-harmonic-bd}\tag{2.1}
\end{equation}
where
\begin{equation*}
r_3(n) = \# \{ a^2 + b^2 + c^2 = n: a,b,c\in \ZZ\}\text{.}
\end{equation*}
Connection to automorphic forms
\(\theta\) functions, \(P\) spherical harmonics as before.
Definition 2.1
\begin{equation*}
\theta_P(z) = \sum_{\alpha \in \ZZ^3} P(\alpha) e ( |\alpha|^2 z)
\end{equation*}
for \(z\in \HH\) this converges.
\begin{equation*}
\theta_P(z) = \sum_{n=0}^\infty r_3(n,P) e(nz)
\end{equation*}
\begin{equation*}
r_3(n,P) =\sum_{|\alpha|^2 = n} P (\alpha)\text{.}
\end{equation*}
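As a quick numerical sanity check of the cancellation we are after (a throwaway sketch, not part of the argument; the brute-force search and the particular degree-4 harmonic are chosen only for illustration):
\begin{verbatim}
# Python: brute-force r_3(n) and r_3(n, P) for a degree-4 spherical harmonic.
import math

def lattice_points(n):
    # all (a, b, c) in Z^3 with a^2 + b^2 + c^2 = n
    m = math.isqrt(n)
    pts = []
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            r = n - a * a - b * b
            if r < 0:
                continue
            c = math.isqrt(r)
            if c * c == r:
                pts.append((a, b, c))
                if c != 0:
                    pts.append((a, b, -c))
    return pts

def P(x, y, z):
    # harmonic (Laplacian 0), homogeneous of degree 4, and invariant under permuting
    # and negating coordinates, so its small average is not forced by symmetry alone
    r2 = x * x + y * y + z * z
    return x ** 4 + y ** 4 + z ** 4 - 0.6 * r2 * r2

for n in [11, 101, 1009, 10009]:
    pts = lattice_points(n)
    r3, r3P = len(pts), sum(P(*p) for p in pts)
    # normalized Weyl sum (1/r_3(n)) sum P(alpha/|alpha|) = r_3(n, P) / (n^2 r_3(n))
    print(n, r3, round(r3P, 1), round(r3P / (n ** 2 * r3), 4))
\end{verbatim}
The last column should shrink as \(n\) grows, which is exactly the statement (2.1) for this \(P\text{.}\)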
Fact 2.2
- This is a modular form of weight \(\frac 32 + \deg P\) for \(\Gamma_0(4)\text{.}\)
- It is a cusp form if \(\deg P \gt 0\text{.}\)
To show (2.1):
- Show that
\begin{equation*}
r_3(n) \gg_\epsilon n^{1/2 - \epsilon}
\end{equation*}
(Gauss-Siegel)
- Show that
\begin{equation*}
r_3(n,P) \ll_\delta n^{k/2 - 1/4 - \delta}
\end{equation*}
for some \(\delta \gt 0\text{.}\)
Why?
\begin{equation*}
\sum_{|\alpha|^2 = n} P\left( \frac{\alpha}{|\alpha|} \right) = n^{-\deg(P)/2} \sum_{|\alpha|^2 = n} P(\alpha)
\end{equation*}
\begin{equation*}
= n^{-\deg(P)/2} r_3(n,P)
\end{equation*}
note:
\begin{equation*}
\frac k2 - \frac14 - \delta = \frac 12 + \frac{\deg(P)}{2} - \delta\text{.}
\end{equation*}
So a bound \(r_3(n,P) \ll n^{(1+\deg(P))/2 - \delta}\) implies
\begin{equation*}
\sum_{|\alpha|^2 = n} P\left( \frac{\alpha}{|\alpha|} \right) \ll n^{\frac 12 - \delta}\text{.}
\end{equation*}
If we knew a half-integral weight Ramanujan conjecture:
\begin{equation*}
f \in S_k(N) ,\, k \in \frac 12 + \ZZ_{\gt 0}
\end{equation*}
for squarefree \(n\)
\begin{equation*}
a_f(n) = O(n^{(k-1)/2 +\epsilon})\text{.}
\end{equation*}
Digression (\(\theta\)-functions)
Recall: Integral weight modular forms
\begin{equation*}
\Gamma = \SL_2(\ZZ) = \Gamma(1)
\end{equation*}
\begin{equation*}
\Gamma_\infty = \left\{ \begin{pmatrix} 1\amp n \\ 0 \amp 1\end{pmatrix} : n \in \ZZ\right\}
\end{equation*}
In general \(\Gamma\) congruence subgroup \(\Gamma(N) \subseteq \Gamma \subseteq \Gamma(1)\) for some \(N\text{.}\)
\begin{equation*}
f\colon \HH \to \CC
\end{equation*}
is called a modular form of weight \(k\) for \(\Gamma\) if \(f\) is holomorphic everywhere, including at the cusps, and
\begin{equation*}
f(\gamma z) = (cz+ d)^k f(z) \,\forall \gamma \in \Gamma\text{.}
\end{equation*}
If \(f (z) = \sum _{n=1}^\infty a_nq^n\) (i.e. there is no constant term) then \(f\) is cuspidal; the space of such forms is \(S_k(\Gamma)\text{,}\) where S stands for the German Spitzenform (Spitze means cusp, kinda like a pointy spit).
Example 2.4
\begin{equation*}
\Delta(q) = q\prod_{n=1}^\infty (1-q^n)^{24}
\end{equation*}
\begin{equation*}
E_k(z)= \sum_{(c,d) = 1} \frac{1}{(cz +d)^{2k}}
\end{equation*}
Conjecture 2.5 Ramanujan (here theorem of Deligne)
\(f\in S_k(\Gamma)\)
\begin{equation*}
a_n = O_\epsilon( n^{(k-1)/2 + \epsilon}) \,\forall \epsilon \gt 0\text{.}
\end{equation*}
Classically
\begin{equation*}
\widetilde \theta(z) = \sum_{m\in \ZZ} e^{i \pi m^2z}
\end{equation*}
converges absolutely on \(z\in \HH\)
\begin{equation*}
\widetilde \theta(z + 2) =\widetilde \theta(z)
\end{equation*}
\begin{equation*}
\widetilde \theta(-1/z ) = \sqrt{-iz} \widetilde \theta(z)
\end{equation*}
in general for \(\gamma \in \Gamma_0(4)\)
\begin{equation*}
\widetilde \theta(\gamma z ) = j(\gamma ; z) \widetilde \theta(z)
\end{equation*}
where \(j(\gamma; z) = \legendre cd \epsilon_d^{-1} (cz+d)^{1/2}\text{,}\) with \(\epsilon_d = 1\) if \(d\equiv 1 \pmod 4\) and \(\epsilon_d = i\) if \(d\equiv 3 \pmod 4\) (the sign of a Gauss sum), and \(\legendre cd\) the Kronecker symbol, an extension of the Legendre symbol.
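Both transformation laws above are easy to confirm numerically; a throwaway sketch (series truncated, test point arbitrary):
\begin{verbatim}
# Python: check theta~(z+2) = theta~(z) and theta~(-1/z) = sqrt(-iz) theta~(z).
import cmath

def theta(z, M=60):
    return sum(cmath.exp(1j * cmath.pi * m * m * z) for m in range(-M, M + 1))

z = complex(0.31, 0.47)                                        # a point in the upper half plane
print(abs(theta(z + 2) - theta(z)))                            # ~ 1e-16
print(abs(theta(-1 / z) - cmath.sqrt(-1j * z) * theta(z)))     # ~ 1e-15
\end{verbatim}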
Recall: \(P\) spherical harmonic degree \(l \text{.}\) Aim to show:
\begin{equation*}
\sum_{|x|^2 = n} P\left( \frac{x}{|x|} \right) = o( r_3(n))
\end{equation*}
where
\begin{equation*}
r_3(n) = \# \{ a^2 + b^2 + c^2 = n: a,b,c\in \ZZ\}\text{.}
\end{equation*}
Took
\begin{equation*}
\theta(z; P) = \sum_{n \ge 0} r_3(n; P) e(nz)\text{.}
\end{equation*}
A modular form of weight \(3/2 + l\) for \(\Gamma_0 (4)\text{.}\) Cusp form if \(l \gt 0\text{.}\)
The strategy is then to show
\begin{equation*}
r_3(n) \gg n^{1/2- \epsilon}\,\forall \epsilon \gt 0
\end{equation*}
\begin{equation*}
r_3(n, P) \ll n^{k/2 -1/4 - \delta}\,\text{ for some}\,\delta \gt 0\text{.}
\end{equation*}
Definition 2.6 Half-integral weight modular forms
Let \(N \equiv 0 \pmod 4\text{.}\) A modular form of half-integral weight \(k \in \frac 12 + \ZZ_{\ge 0}\) for \(\Gamma_0(N)\) is a holomorphic function on \(\HH\) s.t.
\begin{equation*}
f(\gamma z ) = j(\gamma; z)^{2k} f(z)
\end{equation*}
- \(f\) is holomorphic at the cusps
If \(\chi\begin{pmatrix}a\amp b \\c \amp d \end{pmatrix} = \chi (d)\) and we instead require \(f(\gamma z) = \chi(d) j(\gamma ; z)^{2k} f(z)\text{,}\) we get the space \(M_k(\Gamma_0(N), \chi)\text{.}\)
Where do these things come from?
A construction due to Schoenberg 1939, Pfetzer 1953, Shimura 1973 is as follows:
\(A\) an \(n \times n\) positive definite integral matrix, \(N \in \ZZ\) s.t. \(N A\inv\) is integral, \(P\) a spherical harmonic relative to \(A\text{,}\) i.e. \(P\) homogeneous of degree \(v\) with
\begin{equation*}
\sum_{i,j} \tilde a_{ij} \frac{\partial ^2 P}{\partial x_i\partial x_j} = 0,\qquad [\tilde a_{ij}] = A \inv\text{.}
\end{equation*}
Definition 2.7
Let \(h\in \ZZ^n\text{,}\) set
\begin{equation*}
\tilde \theta_P (z,h,N) = \sum_{m \equiv h \pmod N} P(m) e\left( \frac{(m^\transpose A m)z}{2N^2} \right)\text{.}
\end{equation*}
Fact 2.8 Poisson summation
\begin{equation*}
\tilde \theta_P (\gamma z, h, N) = e \left( \frac{ab(h^\transpose Ah) }{2N^2} \right)\left( \frac{\det A}{d} \right) \left( \frac{2c}{d}\right)^n \epsilon_d^{-n} (cz+d)^{k/2} \tilde\theta_P(z; ah, N)
\end{equation*}
\begin{equation*}
\gamma = \begin{pmatrix}a\amp b \\ c \amp d\end{pmatrix} \in \SL_2(\ZZ) : b \equiv 0 \pmod 2,c\equiv 0 \pmod{2N}
\end{equation*}
\begin{equation*}
k = n +2v
\end{equation*}
Example 2.9
\(n = 1, N = 1\) \(P(m) = m^v\) \(v= 0,1\)
\begin{equation*}
\tilde \theta_P(z) = \sum_{m\in \ZZ} m^v e(m^2z/2)
\end{equation*}
for \(v=0\) this is the classical \(\theta\text{;}\) for \(v = 1\) it is a cusp form on \(\Gamma_0(8)\) of weight \(3/2\text{;}\) we could also twist by a character mod 4.
Example 2.10
\(A = I_{n \times n}\) \(P\) spherical harmonic of degree \(v\)
\begin{equation*}
\tilde \theta_P(z; 0, 1) = \sum _{m \in \ZZ^n} P(m) e(|m|^2 z/2)
\end{equation*}
Replacing \(z\) by \(2z\) gives
\begin{equation*}
\theta_P(z) = \sum_{m\in \ZZ^n} P(m) e(|m|^2 z)\text{.}
\end{equation*}
Example 2.12
\(A = 4\times 4\) integral positive definite
\begin{equation*}
Q = Q_A(x) = x^\transpose A x
\end{equation*}
\begin{equation*}
r_Q(n) = \# \{ x \in \ZZ^4:Q(x) = n\}
\end{equation*}
\begin{equation*}
\theta_Q(z) = \sum_{n=0}^\infty r_Q(n) e(nz) \in M_2(\Gamma_0(N))
\end{equation*}
with \(N = 4\det(A)\text{.}\)
Half-integral weight Ramanujan conjecture (Metaplectic)
Naively we would like to mimic the integral case and say
\begin{equation*}
a_f(n) = O(n^{(k-1)/2 + \epsilon})
\end{equation*}
where \(f\in S_k(\Gamma)\text{.}\)
But this is not true as stated:
Example 2.13
Let
\begin{equation*}
\theta(z; \chi) = \sum_{m\in \ZZ} m \chi(m) e(m^2 z)
\end{equation*}
for odd \(\chi\) this is a cusp form in \(S_k(\Gamma, \chi)\text{.}\) For \(k=3/2\text{,}\) let \(f =\theta\text{;}\) then
\begin{equation*}
a_f(m^2) \sim m\chi(m)\text{,}
\end{equation*}
so \(a_f(n)\) is of size \(\sqrt n\) for \(n\) a square. But \((k -1)/2= 1/4\text{,}\) and \(\sqrt n\) is not \(O(n^{1/4 + \epsilon})\) for any \(\epsilon \lt 1/4\text{.}\)
So we avoid these or stick to squarefree \(n\text{.}\)
Conjecture 2.14 Half-integral weight Ramanujan conjecture
\begin{equation*}
a_f(n) = O(n^{(k-1)/2 + \epsilon})
\end{equation*}
for all square-free \(n\) and \(f\in S_k(\Gamma, \chi)\text{.}\)
Why this exponent? For integral weight: Representation theory gives \(a_f(m)\)'s which correspond to Hecke eigenvalues which under Langlands are tempered for \(\GL_N\text{.}\)
Digression:
Proposition 2.15 Hecke bound
\(f \in S_k(\Gamma)\) then \(a_f(n) = O(n^{k/2})\text{.}\)
Proof
\(y = \Im(z)\text{,}\) \(y(\gamma z) = |cz+d| ^{-2} y\text{,}\) so
\begin{equation*}
F(z) = y^{k/2} |f(z)|
\end{equation*}
is invariant under \(\Gamma\) and bounded at the cusps (as \(f\) is cuspidal), say \(|F(z)| \le M\text{.}\) So
\begin{equation*}
|a_f(n)| e^{-2 \pi n y} = \left| \int_0^1 f(x + iy) e(-nx) \diff x \right| \le \int_0^1 y^{-k/2} M \diff x = O(y^{-k/2})
\end{equation*}
so \(a_f(n) = O(y^{-k/2} e^{2\pi n y})\text{;}\) taking \(y = 1/n\) gives \(a_f(n) = O(n^{k/2})\text{.}\)
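For instance, with \(f = \Delta\) from Example 2.4 (weight 12) the bound reads \(\tau(n) = O(n^6)\text{;}\) a rough numerical illustration (a sketch; the truncation at \(q^{400}\) is arbitrary):
\begin{verbatim}
# Python: Hecke bound a_f(n) = O(n^{k/2}) for f = Delta = q prod (1 - q^n)^24, k = 12.
N = 400
tau = [0] * (N + 1)
tau[1] = 1                                  # start from q
for m in range(1, N + 1):                   # multiply by (1 - q^m)^24, truncated at q^N
    for _ in range(24):
        for j in range(N, m - 1, -1):
            tau[j] -= tau[j - m]
print([tau[n] for n in range(1, 11)])       # 1, -24, 252, -1472, 4830, ...
print(max(abs(tau[n]) / n ** 6 for n in range(1, N + 1)))   # stays bounded
\end{verbatim}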
Another digression:
What is the distribution of rational points on \(S^2\text{?}\)
\begin{equation*}
\{x_1^2 + x_2^2 + x_3^2 = 1: x_i \in \QQ\}
\end{equation*}
Points with common denominator \(n\) correspond to \(\alpha \in \ZZ^3\) with \(|\alpha|^2 = n^2\text{,}\) i.e. to \(r_3(n^2)\text{.}\)
Proposition 2.16 Hurwitz
A generating function for this is given by
\begin{equation*}
\sum_{n =1}^\infty \frac{r_3(n^2)}{n^s} = 6(1-2^{1-s}) \frac{\overbrace{\zeta(s) \zeta(s-1)}^{=\sum \sigma(n)/n^s}}{L\left(s, \legendre{-1}{\cdot}\right)}
\end{equation*}
Exercise 2.17
David Fried's proof of Hurwitz on \(r_3(n^2)\)
Without proof Hurwitz stated
\begin{equation*}
r_3(N^2) = 6 P \prod_{q^a \| Q} ( q^a + 2q^{a-1} + \cdots + 2q + 2)
\end{equation*}
where \(N = 2^k PQ\text{,}\) each prime factor of \(P\) is \(\equiv 1 \pmod 4\text{,}\) and each prime factor of \(Q\) is \(\equiv -1 \pmod 4\text{.}\) He suggested a proof along the lines of his published note on
\begin{equation*}
r_5(N^2)\text{,}
\end{equation*}
here is such a proof:
We denote \(P = P(N), Q = Q(N)\text{.}\) We may suppose \(k = 0\) since
\begin{equation*}
r_3(4n) = r_3(n)
\end{equation*}
(each solution of \(4n = x^2 + y^2 + z^2\) has \(x,y,z\) all even). So \(N\) is odd and each solution of \(N^2 = x^2 + y^2 + z^2\) has two of \(x,y,z\) even. Hence
\begin{equation*}
r_3(N^2) = \frac 32 \sum_{x \text{ even}} r_2( N^2- x^2) = \frac 32 \sum_{a+b = 2N,\,a,b\text{ odd}} r_2(ab)
\end{equation*}
now \(r_2(n)\) is the number of Gaussian integers \(z\) with \(z \bar z = n\text{.}\) As \(\ZZ \lb i \rb\) is a PID with 4 units \(\pm 1, \pm i\text{,}\) the function \(\rho(n) = \frac 14 r_2(n)\) is multiplicative!
\begin{equation*}
\rho (ab) = \rho(a)\rho(b) \text{ if } a,b \text{ coprime}
\end{equation*}
clearly
\begin{equation*}
r_3(N^2) = 6 \sum_{a+b = 2N,\,a,b\,\text{odd}}\rho (ab)\text{.}
\end{equation*}
To evaluate \(\rho(ab)\) we need some standard functions,
\begin{equation*}
\tau(n)= \# \text{divisors }d\text{ of }n,\qquad \sigma(n) = \sum_{d|n} d
\end{equation*}
the Möbius function \(\mu(n)\text{,}\) and
\begin{equation*}
\square (n) = \frac 12 r_1(n)
\end{equation*}
\begin{equation*}
w(n) = \mu(P)|\mu(Q)|
\end{equation*}
The multiplicativity of \(\rho\) generalises as follows:
Lemma 2.18
If \(a,b\) are odd with \(g= \gcd(a,b)\) then
\begin{equation*}
\rho(ab) = \sum_{d|g} w(d) \rho\left(\frac ad \right) \rho\left(\frac bd \right)
\end{equation*}
Proof
Each prime \(p \equiv 1 \pmod 4\) is \(p = a^2 + b^2\) with \(a\gt b \gt 0\) uniquely (Euler). Let \(z(p) = a+bi\text{.}\) For \(n\) odd write \(n = PQ\) as above; then \(r_2(n) = 0\) unless \(Q = \square\text{,}\) in which case the solutions of \(n = z \bar z\) are
\begin{equation*}
z = i^k \sqrt Q \prod_p z(p)^j \overline{z(p)}^{\alpha- j}
\end{equation*}
where \(p^\alpha || P\text{,}\) \(k \in \ZZ/4\text{.}\) Thus
\begin{equation*}
\rho(n) = \tau(P) \square (Q)\text{.}
\end{equation*}
When \(\rho(ab) = 0\) every term on the right vanishes: then \(Q(ab)\) is not a square, so
\begin{equation*}
Q\left( \frac ad \right),\,Q\left( \frac bd \right)
\end{equation*}
cannot both be squares.
When \(\rho(ab) \ne 0\) we have \(Q(ab) = m^2\) for some \(m\text{.}\) There is exactly one squarefree \(\gamma\) dividing \(m\) and \(g\) such that
\begin{equation*}
Q\left(\frac a\gamma \right),\, Q\left(\frac b\gamma \right)
\end{equation*}
are squares. The term
\begin{equation*}
w(d) \rho(a/d) \rho(b/d)
\end{equation*}
is nonzero just when \(d = \gamma\delta\) where \(\delta\) is squarefree and \(\delta|P(g)\text{,}\) so the sum reduces to
\begin{equation*}
\sum_\delta \mu(\delta) \tau(P(a/\gamma\delta)) \tau(P(b/\gamma\delta))
\end{equation*}
\begin{equation*}
= \sum_\delta \mu(\delta) \tau(P(a)/\delta) \tau(P(b)/\delta)
\end{equation*}
but
\begin{equation*}
\sum_{d|r,d|s} \mu(d) \tau(r/d) \tau(s/d) = \tau(rs)
\end{equation*}
by multiplicativity of \(\tau\) this reduces to the case \(r = \pi^k,s= \pi^l\) for a prime \(\pi\text{,}\) and then \(1+ k + l = (1+k)(1+l) - kl\text{.}\) Here \(r=P(a), s=P(b),\gcd(r,s) = P(g)\text{.}\) So the sum over \(\delta\) equals
\begin{equation*}
\tau(P(a)P(b)) = \tau(P(ab)) \square (Q(ab)) = \rho(ab)
\end{equation*}
as desired.
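A quick numerical spot-check of the lemma (a sketch; sympy is used only for factorization and divisors, and the test pairs are arbitrary):
\begin{verbatim}
# Python: check rho(ab) = sum_{d | g} w(d) rho(a/d) rho(b/d) for odd a, b, g = gcd(a,b).
from math import gcd, isqrt
from sympy import factorint, divisors

def rho(n):   # (1/4) #{(x, y) in Z^2 : x^2 + y^2 = n}
    return sum(1 for x in range(-isqrt(n), isqrt(n) + 1)
                 for y in range(-isqrt(n), isqrt(n) + 1)
                 if x * x + y * y == n) // 4

def w(n):     # mu(P) |mu(Q)| for odd n = P * Q as in the text
    val = 1
    for p, e in factorint(n).items():
        if e >= 2:
            return 0
        if p % 4 == 1:
            val = -val
    return val

for a, b in [(13, 65), (25, 65), (45, 117), (9, 15), (21, 35)]:
    g = gcd(a, b)
    rhs = sum(w(d) * rho(a // d) * rho(b // d) for d in divisors(g))
    print(a, b, rho(a * b), rhs)    # the last two columns agree
\end{verbatim}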
Using this we find
\begin{equation*}
r_3(N^2) = 6 \sum_{d|N} w(d) S \left(\frac Nd\right)
\end{equation*}
where for odd \(n\)
\begin{equation*}
S(n) = \sum_{a+b = 2n,\,a,b\,\text{odd}} \rho(a) \rho(b)
\end{equation*}
but
\begin{equation*}
S(n) = \frac{1}{16}\#(2n = \underbrace{w^2 + x^2}_{\text{odd}}+ \underbrace{y^2 + z^2}_{\text{odd}})
\end{equation*}
\begin{equation*}
= \frac{1}{4}\#(2n = 4w^2 + 4x^2+y^2 + z^2)
\end{equation*}
\begin{equation*}
= \sigma(n)
\end{equation*}
by Jacobi.
So
\begin{equation*}
r_3(N^2)= 6\sum_{d|N} w(d) \sigma\left(\frac Nd\right)
\end{equation*}
\begin{equation*}
= 6(w \ast \sigma) (N)
\end{equation*}
where \(\ast\) denotes Dirichlet convolution.
Passing to Dirichlet series
\begin{equation*}
\sum_{N\text{ odd}} \frac{ r_3 (N^2)} {N^s} = 6 \sum _{d\text{ odd}} \frac{w(d)}{d^s } \sum _{m\text{ odd}} \frac{\sigma(m)}{m^s }
\end{equation*}
\begin{equation*}
= 6\underbrace{ \prod_p(1- p^{-s})\prod_q (1+ q^{-s})}_{L(s, \legendre{-4}{\cdot})\inv}\underbrace{\sum_{k \text{ odd}} \frac{1}{k^s}}_{\zeta_{\text{odd}}(s)}\underbrace{\sum_{l \text{ odd}} \frac{l}{l^s}}_{\zeta_{\text{odd}}(s-1)}
\end{equation*}
\begin{equation*}
= 6\prod_p \sum_a \frac{p^a}{p^{as}} \prod_q \sum_a \frac{q^a + 2q^{a -1} + \cdots + 2q + 2} {q^{as}}
\end{equation*}
which gives
\begin{equation*}
r_3(N^2) = 6 \prod_{p^a||N} p^a \prod_{q^a||N} (q^a + 2q^{a-1} + \cdots + 2q + 2)
\end{equation*}
as in Hurwitz and
\begin{equation*}
\sum_{n=1}^\infty \frac{r_3(n^2)}{n^s} = 6 L(s, \legendre{-4}{\cdot}) \inv \zeta(s) \zeta_{\text{odd}}(s-1)
\end{equation*}
as in Duke.
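As a numerical check of the closed formula (a sketch; sympy only for factorization, test values of \(N\) arbitrary):
\begin{verbatim}
# Python: compare Hurwitz's formula for r_3(N^2), N odd, with a brute-force count.
from math import isqrt
from sympy import factorint

def r3(n):
    cnt = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        for b in range(-isqrt(n), isqrt(n) + 1):
            c2 = n - a * a - b * b
            if c2 >= 0 and isqrt(c2) ** 2 == c2:
                cnt += 1 if c2 == 0 else 2
    return cnt

def hurwitz(N):
    val = 6
    for p, a in factorint(N).items():
        if p % 4 == 1:
            val *= p ** a
        else:                       # p = 3 mod 4: p^a + 2 p^(a-1) + ... + 2p + 2
            val *= p ** a + 2 * sum(p ** i for i in range(a))
    return val

for N in [3, 5, 9, 15, 21, 35, 45]:
    print(N, r3(N * N), hurwitz(N))     # the two columns agree
\end{verbatim}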
Corollary 2.19
This shows
\begin{equation*}
r_3(n^2) \gg n\text{.}
\end{equation*}
Proof (sketch)
Expand the formula, take \(n=p\) an odd prime
\begin{equation*}
\zeta(s) \zeta(s-1) = \sum \frac{\sigma(n)}{n^s}
\end{equation*}
(exercise)
\begin{equation*}
L(s, \chi_{-4})\inv = \prod_p(1- \frac{\chi_{-4}(p)}{p^s})
\end{equation*}
so
\begin{equation*}
\sum \frac{r_3(n^2)}{n^s} = 6 (1-2^{1-s}) \left(\sum \frac{\sigma(m_1)}{m_1^s}\right) \prod_p \left(1 - \frac{\chi_{-4}(p)}{p^s}\right)
\end{equation*}
\begin{equation*}
= 6 (1-2^{1-s}) \left(\sum \frac{\sigma(m_1)}{m_1^s}\right) \left(\sum \frac{\mu(m_2) \chi_{-4}(m_2)}{m_2^s}\right)
\end{equation*}
so for an odd prime \(p\)
\begin{equation*}
r_3(p^2) = 6(\sigma(p) - \chi_{-4}(p)) \ge 6p\text{.}
\end{equation*}
The next big breakthrough came much later, showing something similar for \(r_3(n, P)\text{,}\) but now we hope to see cancellation.
Theorem 2.20 Shimura '71
Let
\begin{equation*}
f \in S_{k/2}(N,\chi)
\end{equation*}
be a half integral weight cusp form, \(k\) odd, \(4|N\text{.}\)
\begin{equation*}
f(z) = \sum_{n=1}^\infty a_f(n) e(nz)\text{,}
\end{equation*}
assume \(f\) is a common eigenfunction for all \(T_{p^2}\text{.}\)
Let \(\widetilde \chi(m) = \chi(m) \chi_{-4}(m)\) and \(\lambda = (k-1)/2\text{.}\) Set
\begin{equation*}
\sum_{n=1}^\infty \frac{A(n)}{n^s} = L(s+1 - \lambda , \widetilde \chi) \cdot \left( \sum_{m=1}^\infty \frac{a_f(m^2)}{m^s} \right)
\end{equation*}
\begin{equation*}
F(z) = \sum_{n=1}^\infty A(n) e(nz)\text{.}
\end{equation*}
Then
\begin{equation*}
F(z) \in S_{2\lambda}(N_1, \chi^2)\text{.}
\end{equation*}
Recall:
\begin{equation*}
\theta_P(z) = \sum r_3(n; P) e(nz) \in S_{3/2 + l} (\Gamma_0(4))
\end{equation*}
is a weight \(3/2 +l\) form if \(l= \deg P\text{.}\)
Shimura gives us
\begin{equation*}
F_P(z) \in S_{2l+2}(\Gamma_0(2))
\end{equation*}
then we get
Corollary 2.21
\begin{equation*}
r_3(n^2, P) = \sum_{d|n} A_P(d) \mu\left(\frac nd \right) \chi_{-4}\left(\frac nd\right) \left(\frac nd\right)^l
\end{equation*}
hence if \(A_P(d) \ll d^{l+1 -\delta}\) then we are done.
The Hecke bound implies \(A_P(d) \ll d^{l +1}\text{.}\)
Detour: bounding Fourier coefficients
There are various approaches; ours is via Poincaré series: express their Fourier coefficients in terms of Kloosterman sums, and bound those to get a bound on the Fourier coefficients.
If \(f\in S_{k_1} (N_1)\) we will write down a spanning set for \(S_{k_1}(N_1)\text{,}\) then bound the Fourier coefficients of each of these guys, which will suffice.
Poincaré series:
If we were just looking for a modular form we might try Eisenstein series
\begin{equation*}
E_k(z) = \sum_{c,d} \frac{1}{(cz+d)^{2k}}
\end{equation*}
these are not cuspidal. But if we twist this a little bit
\begin{equation*}
P_m(z; k) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} j(\gamma; z)^{-2k} e(m \gamma z)\text{.}
\end{equation*}
Where \(\Gamma_\infty = \left\{ \begin{pmatrix} 1\amp n \\ 0 \amp 1 \end{pmatrix}\right\}\text{.}\)
\begin{equation*}
j(\gamma; z) = \left( \frac cd \right) \epsilon_d \inv (cz+d)^{1/2}
\end{equation*}
they converge absolutely for \(k \gt 2\text{.}\)
Some properties:
Lemma 2.22
\begin{equation*}
P_m(\tau z; k) = j(\tau; z)^{2k} P_m(z; k)
\end{equation*}
Proof
Exercise, \(j\) satisfies a cocycle relation
\begin{equation*}
j(\gamma ; \tau z) j(\tau; z) = j(\gamma \tau; z)\text{.}
\end{equation*}
Proposition 2.23
\begin{equation*}
P_m(z) \in S_k(\Gamma)
\end{equation*}
Proof
Lemma 2.24
Let \(f(z) = \sum a_f(n) e(nz)\)
\begin{equation*}
\pair{P_m(z; k) }{ f}_\mathrm{Pet} =\frac{\overline a_f(m) }{(4 \pi m)^{k-1}}\Gamma(k-1)\text{.}
\end{equation*}
Proof
\begin{equation*}
\pair{P_m(z; k) }{f}_{\mathrm{Pet}} = \int_{\Gamma \backslash \HH} P_m(z; k) \overline f(z) y^k \frac{\diff x \diff y }{y^2}
\end{equation*}
\begin{equation*}
= \int_{\Gamma\backslash \HH} \sum_{\gamma \in \pm \Gamma_\infty \backslash \Gamma} j(\gamma; z)^{-2k} e(m \gamma z) \overline f(z) y^k \frac{\diff x \diff y }{y^2}
\end{equation*}
\begin{equation*}
= \sum_{\gamma \in \pm \Gamma_\infty \backslash \Gamma}\int_{\Gamma\backslash \HH} j(\gamma; z)^{-2k} e(m \gamma z) \overline f(z) y^k \frac{\diff x \diff y }{y^2}
\end{equation*}
\begin{equation*}
= \int_{\Gamma_\infty\backslash \HH} e(m z) \overline f(z) y^k \frac{\diff x \diff y }{y^2}
\end{equation*}
\begin{equation*}
= \int_0^1 \int_0^\infty e(m z) \overline f(z) y^k \frac{\diff x \diff y }{y^2}
\end{equation*}
\begin{equation*}
= \overline a_f(m) \int_0^\infty e^{-4 \pi m y} y^k \frac{\diff y }{y^2}
\end{equation*}
\begin{equation*}
= \frac{\overline a_f(m) }{(4 \pi m)^{k-1}}\Gamma(k-1)
\end{equation*}
Corollary 2.25
\(P_m(z; k)\) span \(S_k(\Gamma)\) as \(m\) varies through integers.
Proof
If \(f \perp P_m(z, k)\) for all \(m\) then every \(a_f(m) = 0\) by Lemma 2.24, so \(f= 0\text{.}\)
Proposition 2.27
\begin{equation*}
\pair{P_m(z,k)}{P_n(z,k)} = \frac{\Gamma(k-1)}{(4\pi m)^{k-1}} \left( \frac mn \right)^{(k-1)/2} \left(\delta_{m,n} + \frac{2\pi}{i^k} \sum_{c\equiv 0 \pmod N} J_{k-1} \left( \frac{4\pi \sqrt{mn}}{c} \right)\frac{K(m,n; c)}{c}\right)
\end{equation*}
where
\begin{equation*}
J_k \text{ is the Bessel function of the first kind}
\end{equation*}
\begin{equation*}
N\text{ s.t. } \Gamma = \Gamma_0(N)
\end{equation*}
\begin{equation*}
K(m,n; c) = \sum _{d\in (\ZZ/c)^\times } \left(\frac cd \right)^{2k} \epsilon_d^{-2k} e\left(\frac {m\overline d + nd}{c}\right)
\end{equation*}
\begin{equation*}
\delta_{m,n} = \begin{cases} 1 \amp m=n,\\ 0 \amp m\ne n\end{cases}\text{.}
\end{equation*}
Proof
\begin{align*}
\int_0^1 P_m(z,k) e(-nz) \diff z \amp= \int_0^1 \left( \sum_{\gamma \in \pm \Gamma_\infty \backslash \Gamma} j(\gamma ; z) ^{-2k} e(m\gamma z)\right) e(-nz) \diff z\\
\amp= \int_0^1 \underbrace{j(1; z)^{-2k} e((m-n)z)}_{\leadsto\delta_{m,n}} \diff z + \text{rest.}
\end{align*}
Note:
\begin{equation*}
e(\begin{pmatrix} 1\amp n \\ 0 \amp 1 \end{pmatrix} z) = e(z)\text{.}
\end{equation*}
So
\begin{equation*}
\int_0^1\sum_{\gamma \in \Gamma_\infty\backslash \Gamma} j(\gamma; z)^{-2k} e(m \gamma z) e(-nz) \diff z
\end{equation*}
\begin{equation*}
= \int_0^1\sum_{\gamma \in \Gamma_\infty\backslash \Gamma /\Gamma_\infty} \sum_{\gamma_\infty \in \Gamma_\infty} j(\gamma \gamma_\infty; z)^{-2k} e(m\gamma \gamma_\infty z) e(-nz) \diff z
\end{equation*}
\begin{equation*}
= \int_0^1 \sum_{\gamma \in \Gamma_\infty\backslash \Gamma /\Gamma_\infty} \sum_{\alpha \in \ZZ} j(\gamma ; z+ \alpha )^{-2k} e(m\gamma (z + \alpha)) e(-nz) \diff z
\end{equation*}
\begin{equation*}
= \int_{-\infty}^{\infty} \sum_{\gamma \in \Gamma_\infty\backslash \Gamma /\Gamma_\infty} j(\gamma ; z)^{-2k} e(m\gamma z-nz) \diff z
\end{equation*}
Note:
\begin{equation*}
m\gamma z = m \frac{az+ b}{cz+d} = m \left( \frac ac - \frac{1}{c(cz+d)}\right)
\end{equation*}
\begin{equation*}
\pm \Gamma_\infty \backslash \Gamma / \Gamma_\infty =\{ (a,d,c) : c \gt 0 ,\,a,d \in (\ZZ/c)^\times,\,ad \equiv 1 \pmod c\}
\end{equation*}
So our main integral is
\begin{equation*}
\sum_{c\gt 0 ,\,d\in \ZZ/c^\times} \epsilon_d^{-2k} \left(\frac cd \right)^{2k} \int_{-\infty}^\infty \frac{1}{(cz+d)^k} e\left(m\frac ac - \frac{m}{c(cz+d)} -nz\right) \diff z
\end{equation*}
\begin{equation*}
\sum_{c\gt 0 ,\,d\in \ZZ/c^\times,\,ad\equiv 1 \pmod c} \frac{e((ma+nd)/c)}{c^k} \epsilon_d^{-2k} \left(\frac cd \right)^{2k} \int_{-\infty}^\infty \frac{1}{z^k} e\left( - \left(zn + \frac{m}{c^2z} \right)\right) \diff z
\end{equation*}
Then use the following integral representation
\begin{equation*}
\int_{-\infty + iA}^{\infty + iA} w^{-k} e^{-(\mu_1 w + \mu_2 w\inv)} \diff w
\end{equation*}
\begin{equation*}
= 2\pi \left(\frac {\mu_1}{\mu_2}\right)^{(k-1)/2} e^{i k \pi / 2} J_{k-1} (4 \pi \sqrt{\mu_1\mu_2})
\end{equation*}
To recap:
\begin{equation*}
P_m(z) = \sum_{n=1}^\infty \hat P_m(n) e(nz)
\end{equation*}
\begin{equation*}
\hat P_m(n) = \left( \frac nm \right)^{(k-1)/2} \left\{ \delta_{m,n} + \frac{2\pi}{i^k} \sum_{c\equiv 0 \pmod N,\,c \gt 0} J_{k-1} \left( \frac {4\pi \sqrt{mn}}{c} \right) \frac{K(m,n; c)}{c} \right\}
\end{equation*}
Digression: Let \(\Gamma = \SL_2(\ZZ)\text{,}\) \(k= 12\) then \(S_k(\Gamma) = \langle \Delta \rangle\)
\begin{equation*}
\Delta(z) = (2\pi)^{12} \sum_{n=1}^\infty \tau(n) e(nz)\text{.}
\end{equation*}
As \(\Delta = \eta^{24}\) we have \(\tau(n) \in \ZZ\text{.}\)
As
\begin{equation*}
P_{m}(z; 12) \in S_{12} (1) = \langle \Delta\rangle
\end{equation*}
\begin{equation*}
\implies P_{m}(z; 12) = \kappa(m) \Delta
\end{equation*}
to calculate \(\kappa(m)\)
\begin{equation*}
\pair{P_m(z)}{\Delta} = \frac{\overline a_\Delta(m) }{(4 \pi m)^{k-1}} \Gamma(k-1) = \frac{(2 \pi)^{12} \tau(m)}{(4 \pi m)^{11}}\, 10!
\end{equation*}
so
\begin{equation*}
P_m(z) = \frac{(2\pi)^{12}\, 10!\,\tau(m)}{(4\pi m )^{11}} \frac{\Delta(z)}{\| \Delta\|^2_{\mathrm{Pet}}}\text{.}
\end{equation*}
Another consequence:
Theorem 2.28 Petersson trace formula
For any \(m,n \ge 1 \)
\begin{equation*}
\frac{\Gamma(k-1)}{ ( 4\pi \sqrt{mn})^{k-1}} \sum_{f\text{ o.n. basis for }S_k(\Gamma)} \overline a_f(m) a_f(n) = \delta_{m,n} + \frac{2\pi}{i^k} \sum_{c\equiv 0 \pmod N,\,c \gt 0} J_{k-1} \left( \frac {4\pi \sqrt{mn}}{c} \right) \frac{K(m,n; c)}{c}\text{.}
\end{equation*}
Proof
Let \(P_m(z) = \sum_f \pair{P_m(z)}{f}f\) then
\begin{equation*}
\pair{P_m(z)}{P_n(z)} = \sum_f \overline {a_f(m)} a_f(n) \frac{\Gamma(k-1)^2}{(4 \pi m)^{k-1} (4\pi n )^{k-1}}
\end{equation*}
on the other hand
\begin{equation*}
\pair{P_m(z)}{P_n(z)} = \frac{\Gamma(k-1)}{(4\pi m)^{k-1}} \left(\frac mn\right)^{(k-1)/2} \left\{ \cdots \right\}\text{.}
\end{equation*}
Recall Shimura gave us
\begin{equation*}
\sum_{n=1}^\infty \frac{r_3(n^2, P)}{n^s} = \frac{\sum_{n=1}^\infty \frac{A_P(n)}{n^s}}{L(s-l, \chi_{-4})}
\end{equation*}
where the numerator is the \(L\)-function of a modular form of weight \(2l + 2\) where \(l = \deg P\text{.}\)
We want to show \(A_P(n) \ll n^{1+l-\delta}\) for some \(\delta \gt 0\text{.}\)
Idea: Poincaré series span cusp forms, so it's enough to bound Fourier coefficients of these.
We know
\begin{equation*}
\hat P_m(n) = \left(\frac nm \right)^{(k-1)/2} \left( \delta_{m,n} + 2 \pi i^{-k} \sum_{c \equiv 0 \pmod N,\, c \gt 0}\overbrace{\frac{K(m,n,c)}{c}}^{\sum_{x \pmod{^* c}} e((x m + \overline x n)/c)} J_{k-1} \left(\frac {4 \pi \sqrt{mn}}{c}\right) \right)\text{.}
\end{equation*}
Try: Input 1:
\begin{equation*}
J_{k-1}(x) \ll \min\{x^{k-1},1/\sqrt{x}\}\text{.}
\end{equation*}
Trivial input:
\begin{equation*}
| K(m, n; c) | \lt c\text{.}
\end{equation*}
For
\begin{equation*}
c \gg \sqrt{mn} \leadsto J_{k-1}\left( \frac{4 \pi \sqrt{mn}}{c} \right) \sim \left( \frac{\sqrt{mn}}{c} \right)^{k-1}
\end{equation*}
\begin{equation*}
\lt \left(\frac nm \right)^{(k-1)/2} \sum_{c \gg \sqrt{nm}} \frac{1}{c^{k-1}} (mn)^{(k-1)/2}
\end{equation*}
\begin{equation*}
\lt n^{k-1} \frac{1}{\sqrt{mn}^{k-2}} = O(n^{k-1 - (k-2)/2})
\end{equation*}
for
\begin{equation*}
c \ll \sqrt{mn} \leadsto J_{k-1}\left( \frac{4 \pi \sqrt{mn}}{c} \right) \sim \frac{\sqrt c}{(mn)^{1/4}}
\end{equation*}
\begin{equation*}
\lt \left( \frac nm \right)^{(k-1)/2} \sum_{c\ll \sqrt{nm}} \frac{\sqrt{c}}{(mn)^{1/4}} = O (n^{k/2})\text{.}
\end{equation*}
Note \(k/2 = l+1\text{:}\) So we are just short, we have \(O(n^{l+1})\) rather than \(O(n^{l+1-\delta})\) for some \(\delta \gt 0\text{,}\) just by using a stupid bound for \(K(m,n; c)\text{.}\)
More major input (Weil bound):
\begin{equation*}
|K(m,n ; c ) | \ll_\epsilon c^{1/2 +\epsilon}
\end{equation*}
\begin{equation*}
\implies | \hat P_m(n) | = O(n^{k/2 - 1/4 + \epsilon})
\end{equation*}
so for any \(\epsilon \lt \frac 14\) this does the job.
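To see the square-root cancellation concretely, here is a throwaway numerical check of the Weil bound \(|K(m,n;p)| \le 2\sqrt p\) at primes (a sketch; the trivial bound would only give \(p-1\)):
\begin{verbatim}
# Python: classical Kloosterman sums at primes and the Weil bound.
import cmath
from math import sqrt

def K(m, n, p):
    e = lambda t: cmath.exp(2j * cmath.pi * t)
    return sum(e((m * x + n * pow(x, -1, p)) / p) for x in range(1, p))

m, n = 1, 2
for p in [101, 211, 401, 809, 1009]:
    k = abs(K(m, n, p))
    print(p, round(k, 3), round(k / (2 * sqrt(p)), 3))   # last column is <= 1
\end{verbatim}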
So we have finished proving Linnik's problem for \(n^2\text{;}\) it wasn't too bad: the major inputs were the Shimura correspondence and the Weil bound.
When we move to the general case things will be totally different.
Lemma 2.30
Let \(p\) be an odd prime \(p\nmid mn\text{.}\)
\begin{equation*}
\sum_{xy \equiv 1 \pmod p} e\left( \frac{mx + ny}{p}\right) = \sum_{x^2 - n \overline m \equiv y^2 \pmod p} e\left( \frac { 2mx}{p} \right)
\end{equation*}
Proof
Reparameterise \(xy =1\text{,}\) \(x= a+b,y= a-b\text{.}\) Exercise.
Lemma 2.31
\begin{equation*}
\sum_{x \pmod p} \legendre{x^2 - n \overline m}{p} e\left( \frac {2m x}{p}\right) = \sum_{x^2 - \overline m n \equiv y^2 \pmod p} e\left( \frac{2xm}{p}\right)\text{.}
\end{equation*}
Proof
\begin{equation*}
\sum_{x \pmod p} \legendre{x^2 - n \overline m}{p} e\left( \frac {2m x}{p}\right) = \sum_{x\pmod p} \left(\legendre{x^2 - n \overline m}{p} + 1 \right) e\left( \frac {2m x}{p}\right)
\end{equation*}
since \(\sum_{x \pmod p} e(2mx/p) = 0\text{,}\) and \(1 + \legendre{x^2 - n \overline m}{p}\) counts the \(y \pmod p\) with \(y^2 \equiv x^2 - n \overline m\text{.}\)
So
\begin{equation*}
h(\QQ(\sqrt{u^2 - 4\overline m n})) \sim L\left( 1, \legendre{u^2 - 4 \overline m n}{\cdot} \right)
\end{equation*}
\begin{equation*}
\text{Selberg trace formula} \leftrightarrow_{\text{Fourier transform}} \text{Petersson trace formula}\text{.}
\end{equation*}
This is because of the subtlety of the Shimura correspondence.
More precisely, for \(n\) squarefree we want a bound of the shape \(n^{k/2 - 1/4 - \delta}\text{.}\)
\begin{equation*}
\theta(z; P) \text{ weight } \frac 32 + \deg P = k
\end{equation*}
\begin{equation*}
r_3(n) \gg n^{1/2 - \delta}
\end{equation*}
want
\begin{equation*}
r_3(n; P ) \ll n^{k/2 - 1/4 - \delta}
\end{equation*}
\begin{equation*}
\sum_{|\alpha|^2 = n} P\left( \frac{\alpha}{|\alpha|}\right) = n^{-\deg P / 2} r_3(n , P)
\end{equation*}
\begin{equation*}
r_3(n, P ) \lt n^{k/2 - 1/4 - \delta} = n^{3/4 + \deg P/2 - 1/4 - \delta}
\end{equation*}
for \(n^2\)
\begin{equation*}
r_3(n^2) \gg n
\end{equation*}
have Shimura:
\begin{equation*}
\theta(z; P) \text{ of weight } k \leadsto F_P \text{ of weight } 2 \deg P + 2
\end{equation*}
whose Fourier coefficients are the \(A_P(n)\text{.}\)
Back to \(r_3(n,P)\)
For \(n\) squarefree we don't have Shimura, but we still have the Fourier expansion of \(P_m(z)\text{.}\)
Strategy: we will first exploit a closed form of these Kloosterman sums for squarefree discriminant: the Salié sums. Major point:
\begin{equation*}
\sum_c \frac{K(m, n ; c)}{c}
\end{equation*}
will be bounded not term by term, but by showing that the angles of these sums cancel. In essence we deduce our equidistribution result from equidistribution of Kloosterman sums.
Main issue
\begin{equation*}
\sum_c \frac{K(m,n ; c)}{c} J_{k-1} \left(\frac{4\pi \sqrt{mn}}{c}\right)
\end{equation*}
saw that bounding each individual \(K(m, n; c)\) falls short of what is needed.
Detour (Salié sums)
Recall for \(k = \frac 32 + l\) the sums we are getting are
\begin{equation*}
K_k(m, n ; c) = \sum_{d \pmod{^* c}} \epsilon_d^{-2k} \left(\frac cd \right) e\left( \frac{ md + n \overline d}{c} \right)
\end{equation*}
where
\begin{equation*}
\legendre{c}{d} \text{ Kronecker symbol}
\end{equation*}
\begin{equation*}
\epsilon_d = \begin{cases} 1 \amp d \equiv1 \pmod 4,\\ i \amp d \equiv -1 \pmod 4\end{cases}
\end{equation*}
It will turn out that these sums can be calculated in elementary terms. Analogy
\begin{equation*}
K_k(m, n ; c) \leftrightarrow J_k
\end{equation*}
for \(k \in \frac 12 + \ZZ\text{,}\) \(J_k(x)\) is expressible in terms of elementary functions, e.g.
\begin{equation*}
J_{1/2}(x) = \sqrt{\frac{2}{\pi x}} \cdot \sin(x)
\end{equation*}
\begin{equation*}
J_{3/2}(x) = \sqrt{\frac{2}{\pi}} \cdot\left(\frac{\sin(x)}{x^{3/2}} -\frac{\cos(x)}{\sqrt x} \right)
\end{equation*}
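These closed forms are easy to confirm against a library implementation (a sketch assuming scipy is available):
\begin{verbatim}
# Python: half-integral order Bessel functions are elementary.
import numpy as np
from scipy.special import jv

x = np.linspace(0.5, 20, 40)
print(np.max(np.abs(jv(0.5, x) - np.sqrt(2 / (np.pi * x)) * np.sin(x))))
print(np.max(np.abs(jv(1.5, x) - np.sqrt(2 / np.pi)
                    * (np.sin(x) / x ** 1.5 - np.cos(x) / np.sqrt(x)))))
# both differences are at the level of rounding error (~1e-16)
\end{verbatim}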
Calculation: Reductions:
Lemma 2.34
Let \(c = rq\) with \((r,q) = 1\) and \(4\mid r\text{,}\)
\begin{equation*}
K_k(m,n; c) = K_{ k- q + 1}(m \overline q, n \overline q ; r) S(m \overline r, n \overline r; q)
\end{equation*}
where
\begin{equation*}
S(m, n; q) = \sum_{x \pmod{^* q}} \legendre{x}{q} e\left(\frac{mx + n \overline x}{q}\right)
\end{equation*}
is the Salié sum.
Exercise 2.35
Lemma 2.36
\(q\) prime
\begin{equation*}
S(m,n ; q) = \legendre mq S(1, mn; q)
\end{equation*}
-
\begin{equation*}
S(1, m; q) = 0
\end{equation*}
unless \(\legendre{m}{q} = 1\text{.}\)
\begin{equation*}
S(1, n^2; q) = \underbrace {\epsilon_q \sqrt q}_{G(q)} \sum_{x^2 \equiv1 \pmod q} e\left(\frac{2xn}{q}\right)
\end{equation*}
Proof
\begin{equation*}
x = \overline m y
\end{equation*}
\begin{equation*}
x = m \overline y
\end{equation*}
-
\begin{equation*}
S(1, n^2; q) = \sum_{x\pmod{^* q}} \legendre{x}{q} e\left( \frac{x + n^2 \overline x}{q}\right)
\end{equation*}
recall DFT
\begin{equation*}
f(u) = \frac 1q \sum_{\alpha \pmod q} \hat f(\alpha) e\left( \frac{\alpha u}{q}\right)
\end{equation*}
\begin{equation*}
\hat f(\alpha) = \sum _{u\pmod q} f (u) e\left( \frac{-\alpha u}{q}\right)
\end{equation*}
Apply this with \(f(u) = S(1, u^2; q)\text{:}\)
\begin{equation*}
\hat f(\alpha) = \sum_{u\pmod q} \sum _{x\pmod{^* q}} \legendre xq e \left( \frac{x + u^2 \overline x}{q} \right) e\left(\frac{-\alpha u}{q} \right)
\end{equation*}
\begin{equation*}
= \sum_{u \pmod q,\,x\pmod{^* q}} \legendre xq e\left(\frac xq\right) e\left( \frac{u^2 \overline x - \alpha u}{q}\right)
\end{equation*}
\begin{equation*}
= \sum_{x\pmod{^* q}} \legendre xq e\left( \frac{x(1- \overline 4\alpha^2)}{q}\right)\underbrace{ \sum_{u \pmod q} e\left(\frac{\overline x (u - \overline 2\alpha x)^2}{q}\right)}_{\legendre xq G(q)}
\end{equation*}
\begin{equation*}
= G(q) \sum_{x \pmod{^* q}} e\left( \frac{ x (1 - \overline 4 \alpha^2)}{q}\right)
\end{equation*}
we need some more results to conclude!
Lemma 2.37
Let
\begin{equation*}
c_q(r) = \sum_{x\pmod{^* q}} e\left( \frac{rx}{q}\right)
\end{equation*}
then
\begin{equation*}
c_q(r) = \sum_{d|\gcd(q,r)} \mu\left(\frac qd \right) d\text{.}
\end{equation*}
Proof
Let
\begin{equation*}
F(q) = \sum_{x\pmod{q }}e\left( \frac{xr}{q} \right)\text{.}
\end{equation*}
Grouping \(x\) by \(\gcd(x,q) = d\) gives
\begin{equation*}
F(q) =\sum_{d|q} \sum_{a \pmod {^* q/d}} e\left( \frac{ar}{q/d}\right) =\sum_{d|q} c_{q/d} (r) =\sum_{d|q} c_d(r)
\end{equation*}
so by Möbius inversion
\begin{equation*}
c_q(r) = \sum_{d|q} \mu\left( \frac qd\right) F(d)\text{.}
\end{equation*}
But
\begin{equation*}
F(d) = \sum_{x\pmod d} e\left(\frac{xr}{d} \right) = \begin{cases} d \amp \text{ if } d|r,\\0 \amp \text{otw} \end{cases}
\end{equation*}
which gives the claim.
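A quick spot-check of the lemma (a sketch; the Möbius function is computed by hand from sympy's factorization, and the test pairs are arbitrary):
\begin{verbatim}
# Python: check c_q(r) = sum_{d | gcd(q, r)} mu(q/d) d.
import cmath
from math import gcd
from sympy import factorint, divisors

def mu(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def c(q, r):  # Ramanujan sum, an integer
    s = sum(cmath.exp(2j * cmath.pi * r * x / q) for x in range(1, q + 1) if gcd(x, q) == 1)
    return round(s.real)

for q, r in [(12, 8), (15, 10), (30, 12), (36, 24), (17, 5)]:
    rhs = sum(mu(q // d) * d for d in divisors(gcd(q, r)))
    print(q, r, c(q, r), rhs)   # the last two columns agree
\end{verbatim}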
This lemma implies that in the proof above we have
\begin{equation*}
G(q) c_{q} (1- \overline 4 \alpha^2) = G(q) \sum_{d|\gcd(q,\,1-\overline 4 \alpha^2)} \mu \left(\frac qd \right) d
\end{equation*}
so putting everything together
\begin{equation*}
f(u) = \frac 1q \sum_{\alpha \pmod q} \left( G(q) \sum_{d|\gcd(q,\,1-\overline 4 \alpha^2)} \mu \left(\frac qd \right) d \right) e\left(\frac{\alpha u}{q} \right)
\end{equation*}
\begin{equation*}
= \frac{G(q)}q \sum_{d|q} \mu\left(\frac qd\right) d \sum_{\alpha \pmod q,\,\overline 4 \alpha^2 - 1 \equiv 0 \pmod d} e\left(\frac{\alpha u}{q} \right)
\end{equation*}
Lemma 2.39
\begin{equation*}
\sum_{\alpha \pmod q,\overline 4 \alpha^2 - 1 \equiv 0 \pmod d} e\left(\frac{\alpha u}{q} \right) = 0 \text{ if } d\ne q
\end{equation*}
Proof
If \(d \ne q\) then \(de = q\) with \(e \ne 1\text{;}\) writing \(\alpha = \alpha_1 + d\alpha_2\text{,}\)
\begin{equation*}
\sum_\alpha = \sum_{\alpha_1 \pmod d,\,\overline 4 \alpha_1 ^2 \equiv 1 \pmod d} e\left( \frac{\alpha_1 u}{q}\right) \sum_{\alpha_2 \pmod e} e\left( \frac{\alpha_2 u}{e}\right) = 0
\end{equation*}
since \(e \nmid u\) (for \((u,q) = 1\)).
So only \(d = q\) contributes, giving \(f(u) = G(q) \sum_{\alpha^2 \equiv 4 \pmod q} e(\alpha u/q) = \epsilon_q \sqrt q \sum_{x^2 \equiv 1 \pmod q} e(2xu/q)\text{,}\) as claimed. This finishes the proof for Salié sums.
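A numerical confirmation of the closed form just proved, \(S(1, n^2; q) = \epsilon_q \sqrt q\,(e(2n/q) + e(-2n/q))\) (a sketch; sympy for the Legendre symbol, test values arbitrary with \(q \nmid n\)):
\begin{verbatim}
# Python: check the Salie sum evaluation at odd primes q.
import cmath
from sympy.ntheory import legendre_symbol

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def salie(m, n, q):
    return sum(legendre_symbol(x, q) * e((m * x + n * pow(x, -1, q)) / q) for x in range(1, q))

for q in [7, 11, 13, 23, 101]:
    eps = 1 if q % 4 == 1 else 1j
    for n in [1, 2, 5]:
        lhs = salie(1, n * n, q)
        rhs = eps * cmath.sqrt(q) * (e(2 * n / q) + e(-2 * n / q))
        print(q, n, abs(lhs - rhs) < 1e-9)   # True
\end{verbatim}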
Theorem 2.40 Major theorem (Iwaniec '87)
Use this calculation + Petersson formula + genus theory. Let \(f\in S_k(N)\) be of half-integral weight \(k\text{,}\) so \(2k \ge 1\) is odd. Then for \(n \) squarefree and \(\forall \epsilon \gt 0\text{:}\)
\begin{equation*}
a_f(n) \ll_\epsilon n^{k/2 - 1/4 - 1/28 + \epsilon}\text{.}
\end{equation*}
Corollary 2.41
\begin{equation*}
r_3(n, P) \ll n^{k/2 - 1/4 - 1/28 + \epsilon}
\end{equation*}
for all \(\deg(P) \gt 0\) and \(n\) squarefree.
Final input: Lower bound (Gauss, Siegel): Gauss showed
\begin{equation*}
r_3(n) = \frac{24}{w(d)} h(d) \left( 1- \legendre d2 \right)
\end{equation*}
where \(h(d)\) is the class number of \(\QQ(\sqrt{-n})\text{,}\) \(d = \disc (\QQ(\sqrt{-n}))\text{,}\) and \(w(d)\) is the number of roots of unity in this field.
Exercise 2.42
Question 2.43
How large is \(h(d)\text{?}\)
Euler, Siegel
Theorem 2.44
\begin{equation*}
h(D) \gg_\epsilon |D|^{1/2 - \epsilon}
\end{equation*}
\begin{equation*}
L(1,\chi_D) \gg_\epsilon |D|^{-\epsilon}
\end{equation*}
Last time: for \(f\in S_k(\Gamma)\text{,}\) \(k \in \frac 12 + \ZZ\text{,}\) and \(n \) squarefree, Iwaniec implies \(|a_f(n)| \ll_\epsilon n^{k/2 - 1/4 - 1/28 + \epsilon}\text{.}\) This gives \(r_3(n; P) \ll n^{k/2 - 1/4 - \delta}\text{.}\)
Finishing the argument: \(P\) a spherical harmonic, \(\deg (P) = l\) and \(k= l + 3/2\text{.}\)
\begin{equation*}
r_3(n; P) = \sum_{|\alpha|^2 = n} P(\alpha)\text{.}
\end{equation*}
Today:
\(P \equiv 1\) main contribution.
\(P \not \equiv 1\) error.
\(P \equiv 1\)
\begin{equation*}
r_3(n ) = \sum_{|\alpha|^2= n }1
\end{equation*}
(with no local obstructions, i.e. \(n \) squarefree and \(n \not \equiv 7 \pmod 8\)).
Theorem 2.45 Gauss
\begin{equation*}
r_3(n) = \frac{24 h(\tilde n)}{w(\tilde n)} \left(1 - \legendre {\tilde n}{2} \right)
\end{equation*}
where
\begin{equation*}
\tilde n = \disc(\QQ(\sqrt{-n}))
\end{equation*}
\begin{equation*}
w(\tilde n) = \# \text{roots of 1 in }\QQ(\sqrt{-n})
\end{equation*}
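Gauss's formula is pleasant to verify numerically (a sketch; the class number is computed by counting reduced forms, and the test values of \(n\) are arbitrary squarefree numbers):
\begin{verbatim}
# Python: check r_3(n) = (24 h(d) / w(d)) (1 - (d/2)) for squarefree n, d = disc(Q(sqrt(-n))).
from math import isqrt, gcd

def r3(n):
    cnt = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        for b in range(-isqrt(n), isqrt(n) + 1):
            c2 = n - a * a - b * b
            if c2 >= 0 and isqrt(c2) ** 2 == c2:
                cnt += 1 if c2 == 0 else 2
    return cnt

def h(d):  # class number via reduced primitive forms (a, b, c), b^2 - 4ac = d < 0
    cnt, a = 0, 1
    while a * a <= -d / 3:
        for b in range(-a + 1, a + 1):
            if (b * b - d) % (4 * a) == 0:
                c = (b * b - d) // (4 * a)
                if c >= a and gcd(gcd(a, abs(b)), c) == 1 and not (a == c and b < 0):
                    cnt += 1
        a += 1
    return cnt

for n in [1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 19, 23, 30]:
    d = -n if n % 4 == 3 else -4 * n                            # fundamental discriminant
    w = 6 if d == -3 else (4 if d == -4 else 2)                 # roots of unity
    chi2 = 0 if d % 2 == 0 else (1 if d % 8 in (1, 7) else -1)  # Kronecker symbol (d/2)
    print(n, r3(n), (24 // w) * h(d) * (1 - chi2))              # the two counts agree
\end{verbatim}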
Exercise 2.47
We now wish to demonstrate a lower bound on \(h(\tilde n)\text{.}\)
Theorem 2.48 Siegel '35
\(\forall \epsilon \gt 0\) have
\begin{equation*}
L(1, \chi_d) \gg_\epsilon |d|^{-\epsilon}\text{.}
\end{equation*}
Corollary 2.49
\begin{equation*}
h(\tilde n) \gg_\epsilon n^{1/2 - \epsilon}\text{.}
\end{equation*}
Exercise 2.50
Prove the corollary (Analytic class number formula)
Finally Siegel + Iwaniec implies
\begin{equation*}
\frac{1}{\sqrt n} \{ \alpha \in \ZZ^3 : |\alpha|^2 = n\}
\end{equation*}
gets equidistributed on \(S^2\) as \(n \to \infty\text{.}\)
Recall this above was problem 1.
Problem 2
\(\Gamma = \SL_2(\ZZ)\)
\begin{equation*}
\frac{1}{\# \Lambda_d} \sum_{z_Q \in \Lambda_d} f(z_Q) \to \int_{\Gamma \backslash \HH} f(z) \diff \mu (z)
\end{equation*}
where, for \(Q = ax^2 + bxy + cy^2\) positive definite,
\begin{equation*}
z_Q = \frac{-b + \sqrt{b^2 - 4ac}}{2a} \in \HH\text{.}
\end{equation*}
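For instance (a small sketch; \(d = -23\) is an arbitrary fundamental discriminant with \(h(d) = 3\)), enumerating reduced forms gives the Heegner points directly, and they land in the usual fundamental domain:
\begin{verbatim}
# Python: Heegner points z_Q from the reduced forms of discriminant d = -23.
from math import gcd, sqrt

def reduced_forms(d):
    forms, a = [], 1
    while a * a <= -d / 3:
        for b in range(-a + 1, a + 1):
            if (b * b - d) % (4 * a) == 0:
                c = (b * b - d) // (4 * a)
                if c >= a and gcd(gcd(a, abs(b)), c) == 1 and not (a == c and b < 0):
                    forms.append((a, b, c))
        a += 1
    return forms

d = -23
for (a, b, c) in reduced_forms(d):
    zQ = complex(-b, sqrt(-d)) / (2 * a)
    print((a, b, c), zQ, abs(zQ.real) <= 0.5 and abs(zQ) >= 1)   # 3 classes, all True
\end{verbatim}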
Theorem 2.52 Duke '88
Problem 2 is true.
Duke's proof
- What do we replace the exponential sums (i.e. the \(P\)'s) with?
- Follow the same strategy and bound the nontrivial sums from above (\(P \not \equiv 1\)), trivial sums from below (\(P \equiv 1\)).
Recall \(\lb 0 ,1) \leadsto P = e(nx)\text{,}\) \(S^2 \leadsto P\) spherical harmonics, \(\Gamma \backslash \HH \leadsto\) automorphic forms \(E(z,s)\) or Maass cusp forms.
Digression: \(\Delta_{\text{hyp}} \acts \Gamma\backslash \HH\text{.}\) Compare: the Fourier transform on \(\RR\) gives \(f(x) = \int_{\RR} \hat f(\alpha) e(\alpha x) \diff \alpha\text{;}\) on
\begin{equation*}
S^1 \implies f(x) = \sum_{n\in \ZZ} \hat f (n) e(nx)\text{;}
\end{equation*}
here, analogously,
\begin{equation*}
L^2( \Gamma\backslash \HH) = \int_\RR E\left( z, \frac 12 + it\right) \diff \mu (t) \oplus \bigoplus_\lambda \phi_\lambda\text{,}
\end{equation*}
\(\phi_\lambda\) a Maass cusp form of eigenvalue \(\lambda\text{.}\)
Detour: Maass forms and Eisenstein series
Definition 2.53 Maass form
\(f\colon \HH \to \CC\) is a Maass form if \(\forall \gamma \in \Gamma = \SL_2(\ZZ)\text{.}\)
\begin{equation*}
f(\gamma z) = f(z)
\end{equation*}
\(f\) is an eigenfunction for
\begin{equation*}
\Delta_{\text{hyp}} = -y^2 \left( \frac{\partial^2}{\partial x^2 } + \frac{\partial^2}{\partial y^2 }\right)
\end{equation*}
\begin{equation*}
f(z) = O(y^N)
\end{equation*}
for some \(N\text{.}\)
Cusp form if
\begin{equation*}
\int_0^1 f(x + iy) \diff x = 0\text{.}
\end{equation*}
Example 2.54 Eisenstein series
\begin{equation*}
E(z,s) = \sum_{(c,d)=1} \frac{y^s}{|cz+d|^{2s}}
\end{equation*}
\begin{equation*}
= \sum_{\gamma \in \Gamma_\infty\backslash \Gamma} \im(\gamma z)^s
\end{equation*}
Fact 2.55
- \(E(z,s)\) converges for \(\Re(s) \gg1\text{.}\)
- Has meromorphic continuation to \(\CC\) in the \(s\)-variable.
- Has a simple pole at \(s =1\) which is a constant \(\Res_{s=1} E(z,s) = \frac 12\text{.}\)
- \(E(z,s) = E(z,1-s)\text{.}\)
- \(E(z,s) \sim y^\sigma\) where \(\sigma = \max \{ \Re(s), \Re(1-s)\}\text{.}\)
Back on track
\begin{equation*}
\sum_{|\alpha|^2 = n} P\left( \frac {\alpha}{|\alpha|}\right)
\end{equation*}
\begin{equation*}
\sum_{z_Q \in \Lambda_d} 1
\end{equation*}
\begin{equation*}
\sum_{z_Q \in \Lambda_d} E(z_Q, \frac 12 + it)
\end{equation*}
\begin{equation*}
\sum_{z_Q \in \Lambda_d} \phi_\lambda(z_Q)
\end{equation*}
Lemma 2.56
\begin{equation*}
\zeta(2s) \sum_{z_Q \in \Lambda_d} E(z_Q, s) = \left( \frac {|d|}{4}\right)^{\frac s2} \underbrace{\zeta_{\QQ(\sqrt{d})}(s)}_{=\zeta(s) L(s, \chi_d)}
\end{equation*}
Proof
\begin{equation*}
\sum_{z_Q} \zeta(2s) E(z_Q,s) =\sum_{z_Q} \sum_{(u,v)\in \ZZ^2 \smallsetminus (0,0)} \frac{y_Q^s}{|uz_Q + v|^{2s}}
\end{equation*}
where \(\Lambda_d\) is the class group of binary quadratic forms of discriminant \(d\text{.}\) Now
\begin{equation*}
|uz_Q + v|^2 = \left|u\left(\frac{-b + \sqrt{d}}{2a}\right) + v\right|^2 = \frac{cu^2 - buv + av^2}{a} = \frac{Q(v,-u)}{a}
\end{equation*}
and \(y_Q = \sqrt{|d|}/(2a)\text{,}\) so we get in the above (reindexing \((u,v) \mapsto (v,-u)\))
\begin{equation*}
= \left(\frac{|d|}{4}\right)^{s/2} \sum _{Q\in \Lambda_d} \sum_{(u,v) \ne (0,0)} \frac{1}{Q(u,v)^s}
\end{equation*}
and counting representations by the classes of forms, the inner double sum gives \(\zeta(s) L(s,\chi_d) = \zeta_{\QQ(\sqrt d)}(s)\) (up to the finitely many units).
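The unfolding above can be confirmed numerically (a throwaway sketch; \(d = -23\text{,}\) \(s = 2\) and the truncation are arbitrary, so the two sides only agree to a few decimal places):
\begin{verbatim}
# Python: check zeta(2s) sum_Q E(z_Q, s) = (|d|/4)^{s/2} sum_Q sum'_{(u,v)} Q(u,v)^{-s}.
from math import gcd, sqrt, pi

def reduced_forms(d):
    forms, a = [], 1
    while a * a <= -d / 3:
        for b in range(-a + 1, a + 1):
            if (b * b - d) % (4 * a) == 0:
                c = (b * b - d) // (4 * a)
                if c >= a and gcd(gcd(a, abs(b)), c) == 1 and not (a == c and b < 0):
                    forms.append((a, b, c))
        a += 1
    return forms

def eisenstein(z, s, M=150):   # sum over coprime (c, d) of y^s / |cz + d|^{2s}
    return sum(z.imag ** s / abs(c * z + dd) ** (2 * s)
               for c in range(-M, M + 1) for dd in range(-M, M + 1) if gcd(c, dd) == 1)

def epstein(a, b, c, s, M=150):   # sum over (u, v) != (0, 0) of Q(u, v)^{-s}
    return sum((a * u * u + b * u * v + c * v * v) ** (-s)
               for u in range(-M, M + 1) for v in range(-M, M + 1) if (u, v) != (0, 0))

d, s = -23, 2
forms = reduced_forms(d)
lhs = (pi ** 4 / 90) * sum(eisenstein(complex(-b, sqrt(-d)) / (2 * a), s) for a, b, c in forms)
rhs = (abs(d) / 4) ** (s / 2) * sum(epstein(a, b, c, s) for a, b, c in forms)
print(lhs, rhs)   # close, up to truncation error
\end{verbatim}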
This gives
-
\begin{equation*}
\sum_{z_Q} 1 \sim |d|^{1/2} L(1,\chi_d) \gg_\epsilon |d|^{1/2 - \epsilon}
\end{equation*}
by Siegel.
-
\begin{equation*}
\sum_{z_Q} E(z_Q, \frac 12 + it ) \sim \frac{|\zeta(\frac 12 + it) L( \frac 12 + it , \chi_d)|}{|\zeta(1 + 2 it)|} |d|^{\frac 14}
\end{equation*}
want
\begin{equation*}
L(\frac 12 + it, \chi_d) \ll |d|^{\frac 14 - \delta}\text{.}
\end{equation*}
We also need:
Theorem 2.57 de la Vallée Poussin
\begin{equation*}
\zeta(1 + 2i t ) \gg \text{something} \gt 0
\end{equation*}
\begin{equation*}
|\zeta(1+ 2it)| \gg \log(2 + |t| )\inv
\end{equation*}
Phragmén-Lindelöf principle gives convexity bound on the \(d\)-aspect
\begin{equation*}
L(\frac 12 + it, \chi_d ) \ll_\epsilon | d|^{\frac 14 + \epsilon}\text{.}
\end{equation*}
Subconvexity bound
Theorem 2.58 Burgess
\begin{equation*}
L(\frac 12 + it, \chi_d ) \ll_\epsilon | d|^{\frac 3{16} + \epsilon}
\end{equation*}
What about
\begin{equation*}
\sum \phi_\lambda (z_Q)\text{?}
\end{equation*}
Harder even!
Last time
\begin{equation*}
\frac{1}{|\Lambda_d|} \sum_{z_Q \in \Lambda_d} f(z_Q) \to \int_{\Gamma \backslash \HH} f(z) \diff \mu (z)
\end{equation*}
Siegel's theorem gives a lower bound on
\begin{equation*}
\# \Lambda_d = \sum_{z_Q \in \Lambda_d} 1\text{.}
\end{equation*}
The subconvexity bound on \(L(\frac 12 + it, \chi_d)\) and the non-vanishing of \(\zeta(1 + 2i t)\) handle
\begin{equation*}
\frac{1}{|\Lambda_d|} \sum_{z_Q \in \Lambda_d} E(z_Q, \frac 12 + it)\text{.}
\end{equation*}
What about
\begin{equation*}
\frac{1}{|\Lambda_d|} \sum_{z_Q \in \Lambda_d} \phi(z_Q)
\end{equation*}
for \(\phi\) a cusp form?
Waldspurger's formula
Roughly Waldspurger says
\begin{equation*}
\frac{|a_{|D|}|}{\pair ff} = \frac{(k - \frac 32)!}{\pi} \frac{|D|^{k-1}}{\pair gg} L(g\otimes \chi_D, k- \frac 12)
\end{equation*}
\(f\in S_k(\Gamma_0(4)\) half integral weight, \(f\mapsto g\) Shimura lift, \(a_n\) fourier coeffs of \(F\text{.}\)
\begin{equation*}
\sum \phi (z_Q) \leadsto a_{\theta_\phi}(|D|)
\end{equation*}
Katok-Sarnak, compare with
\begin{equation*}
P\left(\frac{\alpha}{|\alpha|}\right) \leadsto \sum r_3(P, n) e(nz) = \theta_P(z)
\end{equation*}
Then “all we need to do” is to show that
Subconvexity bound
\begin{equation*}
L(g \otimes \chi_D, k- \frac 12) \ll |D|^{\frac 12 - \delta}
\end{equation*}
this would be enough
Why? Recall
\begin{equation*}
\sum_{|\alpha|^2 = n} P(\alpha) \leadsto \theta_P(z)
\end{equation*}
\begin{equation*}
\sum_{|\alpha|^2 = n} P\left( \frac{\alpha}{|\alpha|}\right) = \frac{1}{n^{\deg P/2}} r_3(P; n)
\end{equation*}
\(\theta_P(z)\) of weight \(\deg P + \frac 32\text{.}\)
Want
\begin{equation*}
\frac {|a_{|D|}|^2}{|D|^{\deg P}} \ll |D|^{1-\delta}
\end{equation*}
\begin{equation*}
g =\text{weight } 2k-1 = 2\deg P + 2
\end{equation*}
\begin{equation*}
\sim \frac{|D|^{k-1}}{|D|^{k- \frac 32}} L(g\otimes \chi_D, k - \frac 12)
\end{equation*}
\begin{equation*}
\sim |D|^{\frac 12}L(g\otimes \chi_D, k - \frac 12)
\end{equation*}
final touch:
\begin{equation*}
\sum_{z_Q \in \Lambda_d} 1 \gg |d|^{1/2 - \epsilon}
\end{equation*}
so if we can get the subconvex bound
\begin{equation*}
\frac{|a_{|D|}|^2}{|D|^{k-3/2}} \ll |D|^{1-\delta}
\end{equation*}
it is enough.
Conjecture 2.60 Lindelöf
In the \(D\)-aspect
\begin{equation*}
L(g \otimes \chi_D, \frac 12 ) \ll |D|^\epsilon
\end{equation*}
\begin{equation*}
L(g, \frac 12 + it) \ll (1+|t|)^\epsilon
\end{equation*}
This follows from GRH.
Lemma 2.61 Convexity bound
For all \(\epsilon \gt 0\text{:}\)
\begin{equation*}
\zeta( \frac 12 + it ) = O_\epsilon(( 1+ |t|)^{1/4 + \epsilon})\text{.}
\end{equation*}
Proof (sketch)
\begin{equation*}
|\zeta(1+ \epsilon + it)| \le \zeta(1+ \epsilon ) = c_\epsilon \lt \infty\text{.}
\end{equation*}
Functional equation + Stirling gives
\begin{equation*}
|\zeta( -\epsilon - it)| = O_\epsilon( |t|^{1/2 + \epsilon})
\end{equation*}
\begin{equation*}
|\zeta(\sigma+ it)| \sim | \zeta(1- \sigma-it) \left| \frac{t}{2\pi} \right|^{\frac 12 - \sigma}\text{.}
\end{equation*}
\begin{equation*}
f(s) = \zeta(s) \zeta(1-s)
\end{equation*}
+ Phragmén-Lindelöf, bounding it on the lines \(\Re(s) = -\epsilon\) and \(\Re(s)= 1+ \epsilon\text{,}\) gives
\begin{equation*}
|f(s)| \le |t|^{1/2 +\delta}
\end{equation*}
in the strip; on \(\Re(s) = \frac 12\) we have \(f(\frac 12 + it) = |\zeta(\frac 12 + it)|^2\text{,}\) which gives the claim.