
Section 1 Automorphic forms / \(\GL_2\) (possibly \(\GL_n\))

These are notes for Ali Altuğ's course MA842 at BU Spring 2018.

The course webpage is http://math.bu.edu/people/saaltug/2018_1/2018_1_sem.html.

Course overview: This course will be focused on the two papers Eisenstein Series and the Selberg Trace Formula I by D. Zagier and Eisenstein series and the Selberg Trace Formula II by H. Jacquet and D. Zagier. Although the titles make the first paper sound like a prerequisite for the second, that is actually not the case; the main difference is the language of the papers (the first is written in classical language whereas the second is written adelically). We will spend most of our time with the second paper, which is adelic.

Subsection 1.1 Goal

Jacquet and Zagier, Eisenstein series and the Selberg Trace Formula II (1980's).

Part I is a paper of Zagier from the 1970s in purely classical language. Part II is in adelic language (and somewhat incomplete).

\begin{equation*} \left( \begin{aligned} \amp\text{Arthur-Selberg} \\ \amp\text{trace formula}\end{aligned}\right) \xleftrightarrow{\text{conjecture}} \left( \begin{aligned} \amp\text{Relative} \\ \amp\text{trace formula}\end{aligned}\right) \end{equation*}

The Arthur-Selberg side is used in Langlands functoriality and the relative trace formula is used in arithmetic applications.

Subsection 1.2 Motivation

What does this paper do?

“It rederives the Selberg trace formula for \(\GL_2\) by a regularised process.”

Note 1.1
  • Selberg trace formula only for \(\GL_2\)
  • Arthur-Selberg more general

The Selberg trace formula generalises the more classical Poisson summation formula.

Poisson summation

Notation: \(e(x) = e^{2\pi i x}\text{.}\)

To make this look more general we make the following notational choices.

\begin{equation*} G = \RR, \Gamma = \ZZ \end{equation*}
\begin{equation*} \sum_{\gamma \in \Gamma^\#} f(\gamma) = \sum_{\xi \in (G/\Gamma)^{\vee}} \hat f(\xi) \end{equation*}

where

  • \(\Gamma^\# =\) conjugacy classes of \(\Gamma\) (\(= \Gamma\) in this case since \(\Gamma\) is abelian).
  • \((G/\Gamma)^{\vee} =\) the dual of \(G/\Gamma\text{.}\)
Selberg
\begin{equation*} G = \GL_2(\RR),\,\Gamma = \GL_2(\ZZ) \end{equation*}
\begin{equation*} \sum_{\gamma \in \Gamma^\#}\cdots ``=" \sum_{\pi\in ``(G/\Gamma)^{\vee}" } \cdots \end{equation*}

relating conjugacy classes on the left to automorphic forms on the right.

Arthur and Selberg prove the trace formula by a sharp cut off, Jacquet and Zagier derive this using a regularisation.

Subsection 1.3 Motivating example

\begin{equation*} \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} \end{equation*}

converges absolutely for \(\Re(s) \gt 1\text{.}\)

Step 1: observe

\begin{align*} \frac{1}{s-1} \amp= \int_1^\infty t^{-s} \diff t \ \text{(for }\Re(s) \gt 1\text{)}\\ \amp=\sum_{n=1}^\infty \int_{n}^{n+1} t^{-s} \diff t \end{align*}

Step 2: this implies

\begin{align*} \zeta(s) \amp= \frac{1}{s-1} + \sum_{n=1}^\infty \left( n^{-s} - \int_{n}^{n+1} t^{-s} \diff t \right)\\ \amp = \frac{1}{s-1} + \sum_{n=1}^\infty \int_{n}^{n+1} \left( n^{-s} - t^{-s} \right) \diff t \end{align*}

we denote each of the terms in the right hand sum as \(\phi_n(s)\)

\begin{equation*} \phi_n(s) = \int_n^{n+1} \left( n^{-s} -t^{-s} \right) \diff t \end{equation*}

Step 3:

\begin{align*} |\phi_n(s)| \amp\le \sup_{n \le t \le n+1} | n^{-s} - t^{-s}|\\ \amp\le \sup_{n\le t\le n+1} \frac{|s|}{t^{\Re(s) + 1}}\le \frac{|s|}{n^{\Re(s) + 1}} \end{align*}

by applying the mean value theorem.

So \(\sum_{n=1}^\infty \phi_n\) converges absolutely and locally uniformly on \(\Re(s) \gt 0\text{.}\) Hence \(\phi = \sum_{n=1}^\infty \phi_n\) is holomorphic there, and \(\zeta(s) = \frac{1}{s-1} + \phi(s)\) gives the continuation of \(\zeta\) to \(\Re(s) \gt 0\text{.}\)

One can push this idea to get analytic continuation to all of \(\CC\text{,}\) one strip at a time. This is an analogue of the sharp cut off method mentioned above. It's fairly elementary but somewhat unmotivated and doesn't give any deep information (like the functional equation).
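This continuation is easy to test numerically. A minimal sketch in Python (assuming the mpmath library is available; the truncation point N and the test point s are arbitrary):

from mpmath import mp, mpf, zeta

mp.dps = 20

def phi(n, s):
    # phi_n(s) = int_n^{n+1} (n^{-s} - t^{-s}) dt, in closed form:
    # int_n^{n+1} t^{-s} dt = (n^{1-s} - (n+1)^{1-s}) / (s - 1)
    return mpf(n)**(-s) - (mpf(n)**(1 - s) - mpf(n + 1)**(1 - s)) / (s - 1)

def zeta_cont(s, N=20000):
    # zeta(s) = 1/(s-1) + sum_n phi_n(s), now valid for Re(s) > 0, s != 1
    return 1 / (s - 1) + sum(phi(n, s) for n in range(1, N + 1))

s = mpf('0.5')               # inside the new strip 0 < Re(s) < 1
print(zeta_cont(s))          # agrees with the next line to a few decimals;
print(zeta(s))               # the truncation error decays like N^{-Re(s)}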

Introduce

\begin{equation*} \theta(t) = \sum_{n \in \ZZ} e^{-\pi n^2 t },\,t\gt 0 \end{equation*}

note that \(\theta(t) = 1 + 2 \sum_{n=1}^\infty e^{-\pi n^2 t}\text{.}\)

Idea: use the Mellin transform and properties of \(\theta\) to derive properties of \(\zeta\text{.}\)

\begin{equation*} \frac{\Gamma\left(\frac{s}{2}\right)}{\pi^{s/2}} \frac{1}{n^s} = \int_0^\infty e^{-\pi n^2 t} t^{s/2} \frac{\diff t}{t} \end{equation*}

property of \(\theta\text{:}\)

\begin{equation*} \theta(t) = \frac{1}{\sqrt{t}} \theta\left(\frac1t\right) \end{equation*}

Step 1: proof of this property is the Poisson summation formula

  • \begin{equation*} f(x) = e^{-\pi x^2} \implies \hat f(\xi) = f(\xi) \end{equation*}
  • \begin{equation*} g(x) = f(\sqrt{t} x) \implies \hat g(\xi) = \frac{1}{\sqrt{t}}\hat f\left(\frac{\xi}{\sqrt{t}}\right) \end{equation*}
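One can sanity-check the functional equation of \(\theta\) numerically; a sketch in Python (mpmath assumed, the truncation N and test point t are ad hoc):

from mpmath import mp, mpf, exp, pi, sqrt

mp.dps = 30

def theta(t, N=50):
    # truncation of theta(t) = sum_{n in Z} e^{-pi n^2 t}; the tail beyond
    # N = 50 is far below working precision for t around 1
    return 1 + 2 * sum(exp(-pi * n * n * t) for n in range(1, N + 1))

t = mpf('0.37')
print(theta(t))                 # the two lines agree to ~30 digits,
print(theta(1 / t) / sqrt(t))   # as predicted by theta(t) = theta(1/t)/sqrt(t)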

Step 2: Would like to write something like

\begin{equation*} `` \int_0^\infty \theta(t) t^{s/2} \frac{\diff t}{t}" \end{equation*}

This integral makes no sense

  • As \(t \to \infty\text{,}\) \(\theta \sim 1\) thus
    \begin{equation*} \left| \int_A^\infty \theta(t) t^{s/2} \frac{\diff t}{t} \right| \lt \infty \end{equation*}
    \begin{equation*} \iff \left| \int_A^\infty t^{s/2} \frac{\diff t}{t} \right| \lt \infty \end{equation*}
    \begin{equation*} \iff \Re(s) \lt 0 \end{equation*}
  • As \(t \to 0\) consider \(\xi = \frac 1t\) so \(\xi \to \infty\) and
    \begin{equation*} \theta(t) = \frac{1}{\sqrt t } \theta\left(\frac 1t\right) = \sqrt \xi\, \theta(\xi) \sim \sqrt \xi = \frac{1}{\sqrt t} \end{equation*}
    so \(\theta(t) \sim \frac{1}{ \sqrt t}\)
    \begin{equation*} \implies \left | \int_0^A \theta(t) t^{s/2} \frac{\diff t}{t}\right| \lt \infty \end{equation*}
    \begin{equation*} \iff \left| \int_0^A t^{(s-1)/2} \frac{\diff t}{t}\right| \lt \infty \end{equation*}
    \begin{equation*} \iff \Re(s) \gt 1 \end{equation*}

so no values of \(s\) will make sense for this improper integral.

Refined idea: Consider

\begin{equation*} I(s) = \int_0^1 (\theta(t) - \frac{1}{\sqrt t}) t^{s/2} \frac{\diff t}{t} + \int_1^\infty(\theta(t) - 1 ) t^{s/2} \frac{\diff t}{t} \end{equation*}

upshot: \(I(s)\) is well-defined and holomorphic for all \(s\in \CC\text{.}\)

Final step: Compute the above to see

\begin{equation*} I(s) = \frac2s + \frac{2}{1-s} + \frac{2}{\pi^{s/2}} \Gamma\left(\frac s2\right) \zeta(s) \end{equation*}

which implies

  1. \(\zeta(s)\) has analytic continuation to \(s\in \CC\text{,}\) with only a simple pole at \(s = 1\) with residue 1.
  2. \begin{equation*} I(s) = I(1-s)\text{,} \end{equation*}
    this follows from the property of \(\theta\text{:}\) substituting \(t \mapsto \frac 1t\) in the integral over \((0,1)\) and using \(\theta(t) = \frac{1}{\sqrt t}\theta\left(\frac 1t\right)\) exchanges the two integrals. So if we let
    \begin{equation*} \Lambda(s) = \frac{\Gamma\left(\frac s2\right)}{\pi^{s/2}} \zeta(s)\text{,} \end{equation*}
    then
    \begin{equation*} \Lambda(s) = \Lambda(1-s)\text{.} \end{equation*}
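Both the evaluation of \(I(s)\) and the symmetry \(I(s) = I(1-s)\) can be confirmed numerically; a sketch in Python (mpmath assumed, the test point is arbitrary):

from mpmath import mp, mpc, exp, pi, sqrt, quad, gamma, zeta, inf

mp.dps = 15

def theta(t, N=50):
    # for small t use theta(t) = theta(1/t)/sqrt(t), so the series converges fast
    if t < 1:
        return theta(1 / t, N) / sqrt(t)
    return 1 + 2 * sum(exp(-pi * n * n * t) for n in range(1, N + 1))

def I(s):
    # I(s) as defined above, split at t = 1
    a = quad(lambda t: (theta(t) - 1 / sqrt(t)) * t**(s / 2 - 1), [0, 1])
    b = quad(lambda t: (theta(t) - 1) * t**(s / 2 - 1), [1, inf])
    return a + b

s = mpc('0.3', '0.7')
print(I(s))
print(2/s + 2/(1 - s) + 2 * pi**(-s/2) * gamma(s/2) * zeta(s))
print(I(1 - s))   # the functional equation I(s) = I(1 - s)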

Subsection 1.4 Modular forms

Functions on the upper half plane,

\begin{equation*} \HH = \{z\in \CC : \Im(z) \gt 0\}\text{.} \end{equation*}

Historically, elliptic integrals led to elliptic functions, modular forms and elliptic curves.

Note 1.4

Suppose one is interested in functions on \(\mathcal O/\Lambda\) where \(\mathcal O\) is some object and \(\Lambda\) is some discrete group. Take \(f\) a function on \(\mathcal O\) and average over \(\Lambda\) to get

\begin{equation*} \sum_{\lambda \in \Lambda} f (\lambda z)\text{.} \end{equation*}

If you're lucky this converges, which is good.

Elliptic functions

Weierstrass, take \(\Lambda = \omega_1 \ZZ + \omega_2\ZZ\) a lattice and define

\begin{equation*} \wp_\Lambda (z) = \frac{1}{z^2} + \sum_{\omega \in \Lambda \smallsetminus \{0\}} \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2}\right)\text{.} \end{equation*}

Jacobi (elliptic integrals): consider

\begin{equation*} \int_0^\phi \frac{\diff t}{\sqrt{(1-t^2)(1-\kappa t^2)}},\ \kappa \ge 0 \end{equation*}

related by:

\begin{equation*} (\wp_\Lambda'(z))^2 = 4 \wp_\Lambda(z)^3 - 60 G_2(\Lambda) \wp_\Lambda(z) - 140 G_3(\Lambda) \end{equation*}
\begin{equation*} G_k(\Lambda) = \sum_{\lambda \in \Lambda\smallsetminus\{0\}} \lambda^{-2k} \end{equation*}

or

\begin{equation*} G_k(\tau) = \sum_{(m,n) \in \ZZ^2\smallsetminus\{0\}} \frac{1}{(m\tau+n)^{2k}} \end{equation*}

the weight \(2k\) holomorphic Eisenstein series.

Subsection 1.5 Euclidean Harmonic analysis

We'll take a roundabout route to automorphic forms.

Today: Classical harmonic analysis on \(\RR^n\text{.}\) Classical harmonic analysis on \(\HH\text{.}\)

The aim (in general) is to express a certain class of functions (e.g. \(\mathcal L^2\)) in terms of building blocks (harmonics).

In classical analysis the harmonics are known (\(e(nx)\)), then the question becomes how these things fit together. In number theory the harmonics are extremely mysterious. We are looking at far more complicated geometries, quotient spaces etc. and arithmetic information comes in.

Example 1.6

\(\RR\text{,}\) \(f\colon \RR\to \CC\) periodic, i.e. \(f \in \mathcal L^2(S^1)\text{,}\) leads to a Fourier expansion

\begin{equation*} f(x) = \sum_{n\in\ZZ} a_ne(nx)\text{.} \end{equation*}

Subsubsection 1.5.1 \(\RR^2\)

We have a slightly different perspective.

\begin{equation*} G = \RR^2, \quad G \acts G \end{equation*}

via translations, i.e. \(g\cdot x = x+g\) (the right regular representation of \(G\) is then \(G\acts \mathcal L^2(G)\)).

Remark 1.7

This makes \(\RR^2\) a homogeneous space.

\(\RR^2\) with the standard metric \(\diff s^2 = \diff x^2 + \diff y^2\) is a flat space (\(\kappa = 0\)).

To the metric we have the associated Laplacian (Laplace-Beltrami operator, \(\nabla\cdot\nabla\))

\begin{equation*} \Delta = \partder[^2]{x^2} + \partder[^2]{y^2} \end{equation*}

we are interested in this as it is essentially the only invariant operator; we will define automorphic forms to be eigenfunctions of this operator.

Note 1.8

The exponential functions

\begin{equation*} \phi_{u,v}(x,y)= e(ux+vy) \end{equation*}

are eigenfunctions of \(\Delta\) with eigenvalue \(-\lambda_{u,v}\) where \(\lambda_{u,v} = 4\pi^2 (u^2 + v^2)\text{,}\) i.e.

\begin{equation*} (\Delta + \lambda_{u,v}) \phi_{u,v} = 0\text{.} \end{equation*}

These are a complete set of harmonics for \(\mathcal L^2 (\RR^2)\text{.}\) The proof is via Fourier inversion.

\begin{equation*} f(x,y) = \int\int_{\RR^2} \hat f(u,v) \phi_{u,v} (x,y) \diff u\diff v \end{equation*}

where

\begin{equation*} \hat f(u,v) = \int\int_{\RR^2} f(x,y) \bar \phi_{u,v} (x,y) \diff y\diff x\text{.} \end{equation*}
A little twist

We could have established the spectral resolution (of \(\Delta\)) by considering invariant integral operators.

Using the spectral theorem: if we can find operators commuting with \(\Delta\) that are easier to diagonalise, we can use their eigenspaces to cut down the eigenspaces of \(\Delta\text{.}\)

Recall: an integral operator is

\begin{equation*} L(f)(x)= \int K(x,y)f(y) \diff y \end{equation*}

invariant means

\begin{equation*} L(g\cdot f) = g\cdot L(f) \ \forall g\in G \end{equation*}

in our case

\begin{equation*} g\cdot f(x) = f(g+x)\text{.} \end{equation*}
Observation 1.9

If \(L\) is invariant then the kernel \(K(x,y)\) is given by

\begin{equation*} K(x,y) = K_0(x-y) \end{equation*}

for some function \(K_0\text{.}\)

\((\Leftarrow)\) obvious

\((\Rightarrow)\) Suppose \(L\) is invariant then

\begin{equation*} \int_{\RR^2} K(x,y) f(y+ \alpha) \diff y = \int_{\RR^2} K(x+ \alpha,y) f(y) \diff y \ \forall f \end{equation*}

implies

\begin{equation*} \int_{\RR^2} K(x,y - \alpha) f(y) \diff y = \int_{\RR^2} K(x+ \alpha,y) f(y) \diff y \ \forall f \end{equation*}

so

\begin{equation*} \int_{\RR^2} (K(x+\alpha,y) - K(x,y - \alpha)) f(y) \diff y = 0 \ \forall f \end{equation*}

which implies (since \(f\) is arbitrary) that

\begin{equation*} K(x+\alpha,y) = K(x,y - \alpha) \end{equation*}

so

\begin{equation*} K(x,y) = K(x-y,0)\text{.} \end{equation*}
Observation 1.10

Invariant integral operators commute with each other

\begin{equation*} L_1L_2(f)(z) = L_2L_1(f)(z) \end{equation*}
\begin{equation*} L_1L_2(f)(z) = \int_{\RR^2 \times \RR^2} f(w) K_2(u-w) K_1(z-u) \diff w \diff u = L_2L_1(f)(z) \end{equation*}

after change of variables

\begin{equation*} u\mapsto z - u+w \end{equation*}
Observation 1.11

\(L\) commutes with \(\Delta\text{.}\)

Based on the following:

\begin{equation*} K(x,y) = K_0(x-y) \end{equation*}
\begin{equation*} \partder[K]{x_i} = - \partder[K]{y_i} \end{equation*}

which implies

\begin{equation*} \Delta_z(L(f))(z) = \Delta_z \int_{\RR^2} f(w) K(z,w) \diff w \end{equation*}
\begin{equation*} = \int_{\RR^2} f(w) \Delta_z K(z,w) \diff w \end{equation*}
\begin{equation*} = \int_{\RR^2} f(w) \Delta_w K(z,w) \diff w \end{equation*}

which via integration by parts is

\begin{equation*} = \int_{\RR^2} \Delta_wf(w) K(z,w) \diff w = L(\Delta f)(z)\text{.} \end{equation*}
Observation 1.12

\(\phi_{u,v}(x,y)\) is an eigenfunction of \(L\text{,}\) \((u,v)\in\RR^2, (x,y) \in \RR^2\text{.}\)

\begin{equation*} L\phi_{u,v}(\overbrace{z}^{=x,y}) = \int_{\RR^2} \phi_{u,v}(w) K(z,w) \diff w \end{equation*}
\begin{equation*} = \int_{\RR^2} \phi_{u,v}(w) K_0(z-w) \diff w \end{equation*}
\begin{equation*} = \int_\RR \int_\RR e(uw_1 + vw_2) K_0(z_1- w_1,z_2-w_2)\diff w_1 \diff w_2 \end{equation*}
\begin{equation*} = e(uz_1 + vz_2)\int_\RR \int_\RR K_0(w_1,w_2)e(-uw_1 - vw_2)\diff w_1 \diff w_2 \end{equation*}

after the change of variable \(w_i \mapsto - w_i + z_i\)

\begin{equation*} = \phi_{u,v}(z)\hat K_0(u,v) \end{equation*}

i.e.

\begin{equation*} L\phi_{u,v} = \hat K_0(u,v) \phi_{u,v}\text{.} \end{equation*}

Side remark: these are enough to form a generating set.
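Here is a crude numerical illustration of Observation 1.12, convolving \(\phi_{u,v}\) against a Gaussian kernel on a grid (a sketch; numpy assumed, all parameters arbitrary):

import numpy as np

u, v = 0.7, -0.3
h = 0.05
w = np.arange(-8.0, 8.0, h)                 # truncate R^2 to [-8, 8]^2
W1, W2 = np.meshgrid(w, w)
K0 = np.exp(-np.pi * (W1**2 + W2**2))       # K_0(w) = e^{-pi |w|^2}
phi = lambda x, y: np.exp(2j * np.pi * (u * x + v * y))
z = (0.4, 1.1)
# (L phi)(z) = int K_0(w) phi(z - w) dw, as a Riemann sum
Lphi = np.sum(K0 * phi(z[0] - W1, z[1] - W2)) * h * h
K0hat = np.exp(-np.pi * (u**2 + v**2))      # the Gaussian is self-dual
print(Lphi)                                  # both prints agree:
print(K0hat * phi(*z))                       # L(phi) = hat K_0(u,v) phi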

Subsubsection 1.5.2 Poisson summation (yet again)

Let's consider integral operators on functions on \(\ZZ^2 \backslash \RR^2 = \mathbf T^2\text{.}\)

Observe: \(L\leadsto K(x,y) = K_0(x-y)\text{.}\)

\begin{equation*} Lf(z) = \int_{\RR^2} f(w) K(z,w) \diff w \end{equation*}
\begin{equation*} = \int\int_{\ZZ^2 \backslash \RR^2} f(w) \underbrace{\left(\sum_{n\in\ZZ^2} K(z,w+n)\right)}_{= \sum_{n\in \ZZ^2} K_0(z-w + n) = \mathbf K(z,w)} \diff w \end{equation*}

now \(\mathbf K\) is a function on \(\mathbf T^2 \times \mathbf T^2\text{.}\)

Trace of this operator

\begin{equation*} = \int_{\mathbf T^2} \mathbf K(z,z) \diff z = \int_{\mathbf T^2} \left( \sum_{n\in \ZZ^2} K_0(n)\right) \diff z = \sum_{n\in\ZZ^2} K_0(n) \end{equation*}

Computing the trace instead as a sum of eigenvalues:

\begin{equation*} \mathbf K(z,w) = \sum_{n\in \ZZ^2} K_0(z-w + n) = \sum_{\xi \in \ZZ^2} \lambda_\xi \phi_\xi (z-w) = \sum_{\xi\in \ZZ^2} \lambda_\xi \phi_\xi (z) \bar \phi_\xi (w) \end{equation*}

so the trace is

\begin{equation*} \sum_{\xi \in \ZZ^2} \lambda_\xi = \sum_{\xi\in \ZZ^2} \hat K_0(\xi) \end{equation*}

so we get to

\begin{equation*} \sum_{n\in \ZZ^2} K_0(n) = \sum_{\xi\in \ZZ^2} \hat K_0(\xi) \end{equation*}

i.e. Poisson summation.
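For a Gaussian, where both sides converge extremely fast, the identity is easy to test; a sketch in Python (numpy assumed, the dilation t arbitrary):

import numpy as np

# K_0(x) = e^{-pi t |x|^2} on R^2 has hat K_0(xi) = t^{-1} e^{-pi |xi|^2 / t}
t = 0.8
n = np.arange(-40, 41)
N1, N2 = np.meshgrid(n, n)
lhs = np.sum(np.exp(-np.pi * t * (N1**2 + N2**2)))
rhs = np.sum(np.exp(-np.pi * (N1**2 + N2**2) / t)) / t
print(lhs, rhs)   # equal to machine precision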

Why care about Poisson summation?

\begin{equation*} \hat K_0(0) = \int K_0(z) \diff z \end{equation*}

The Gauss circle problem: how many lattice points are there in a circle of radius \(R\text{?}\) We can pick a radially symmetric function that is 1 on the circle and 0 outside, or at least a smooth approximation of such an indicator function. Poisson summation packages the important information into a single term, plus some rapidly decaying ones. Then we get \(\pi R^2 +\) error; Gauss conjectured that the error is \(O(R^{1/2 + \epsilon})\text{.}\)
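A brute-force count illustrates the main term and the smallness of the error (a sketch; the radii are arbitrary):

import numpy as np

# count lattice points in the disk of radius R and compare with pi R^2;
# Gauss conjectured the error is O(R^{1/2 + epsilon}) (still open)
for R in [10, 100, 500]:
    n = np.arange(-R, R + 1)
    N1, N2 = np.meshgrid(n, n)
    count = np.sum(N1**2 + N2**2 <= R**2)
    print(R, count, count - np.pi * R**2)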

Last time we gave a conceptual proof of Poisson summation (this strategy will generalise to the trace formula eventually).

To clean up one loose end: there is a generalisation of Poisson summation called Voronoi summation, which will actually be useful later. For Poisson summation we had

\begin{equation*} \sum_{n_1,n_2\in \ZZ} K(n_1, n_2) = \sum_{\xi_1,\xi_2\in \ZZ} \hat K(\xi_1, \xi_2) \end{equation*}

suppose \(K(x,y)\colon \RR^2 \to \CC\) is radially symmetric i.e.

\begin{equation*} K(x,y) = K_0(x^2 + y^2),\,(x,y) \in \RR^2 \end{equation*}

then the fourier transform

\begin{equation*} \hat K(u,v) = \pi\int_0^\infty K_0(r) J_0(\sqrt{\lambda r})\diff r,\,\lambda = 4\pi^2 (u^2 + v^2) \end{equation*}

where

\begin{equation*} J_0(z) = \frac1\pi \int_0^\pi \cos(z\cos(\alpha)) \diff \alpha \end{equation*}

is a Bessel function of the first kind.

Prove this.
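The integral representation of \(J_0\) itself is quick to verify numerically (a sketch; mpmath assumed):

from mpmath import mp, pi, cos, quad, besselj

mp.dps = 20
z = 2.5
# J_0(z) = (1/pi) int_0^pi cos(z cos(alpha)) d alpha
print(quad(lambda a: cos(z * cos(a)), [0, pi]) / pi)
print(besselj(0, z))   # agrees to working precision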

Plug this into Poisson summation

\begin{equation*} \sum_{(n_1,n_2)\in \ZZ^2} K(n_1,n_2) = \sum_{N=0}^\infty r_2(N)K_0(N) \end{equation*}

as \(K\) only depends on \(n_1^2 + n_2^2\) we group terms based on this quantity, so

\begin{equation*} r_2(N) = \#\{(n_1,n_2) \in \ZZ^2 : n_1^2 + n_2^2 = N\}\text{.} \end{equation*}
The right hand side becomes

\begin{equation*} \sum_{\xi_1,\xi_2\in \ZZ} \pi\int_0^\infty K_0(r) J_0(2\pi \sqrt{(\xi^2_1 + \xi_2^2)r})\diff r \end{equation*}
\begin{equation*} = \sum_{M=0}^\infty r_2(M)\tilde K_0(M)\text{.} \end{equation*}

Note that \(J_0(0) =1 \text{.}\)

How is this useful? Consider the problem of counting points in a circle. Let \(K_0(x)\) be an approximation to the step function which is \(1\) for \(x \le 1\) and \(0\) for \(x \gt 1\text{,}\) with \(\int_0^\infty K_0 = 1\text{.}\) Then

\begin{equation*} \sum_{N=0}^\infty K_0\left( \frac{N}{ R^2}\right) r_2(N)\text{.} \end{equation*}

This is counting lattice points. The right hand side is then

\begin{equation*} \sum_{M=0}^\infty r_2(M) \tilde K_0(M) = \tilde K_0(0) + \sum_{M=1}^\infty r_2(M) \tilde K_0(M) \end{equation*}
\begin{equation*} = \pi + \sum_{M=1}^\infty r_2(M) \tilde K_0(M)\text{.} \end{equation*}

Finally

\begin{equation*} f(z) = K_0\left( \frac{ z}{R^2}\right) \end{equation*}

so

\begin{equation*} \tilde f(\xi ) = R^2\tilde K_0(\xi R^2)\text{.} \end{equation*}

So

\begin{equation*} \sum_{N=0}^\infty K_0\left( \frac{N}{R^2}\right) r_2(N) = R^2\left( \pi + \sum_{M=1}^\infty r_2(M) \tilde K_0(MR^2)\right) \end{equation*}

where the lead term is the area of the circle. Finally, if \(M \ne 0\) then \(\tilde K_0(MR^2)\) decays rapidly as \(R\to \infty\text{;}\) in particular the error sum is smaller than \(R^2\text{.}\) So as \(R\to \infty\) we find \(\#\{\text{lattice points in the circle}\} \sim\pi R^2\text{.}\)

Subsection 1.6 The hyperbolic plane \(\HH\)

What if we consider the same problem on the hyperbolic disk? Things are extremely different.

Generalities
Definition 1.15
\begin{equation*} \HH = \{x+iy:y \gt 0\}\text{.} \end{equation*}
\begin{equation*} \diff s^2 = \frac{1}{y^2} (\diff x^2 + \diff y^2),\,\text{Riemannian metric} \end{equation*}

this gives

\begin{equation*} \kappa= -1 \end{equation*}

i.e. this is negatively curved; this is the cause of huge differences from the euclidean theory.

There is a formula for the (hyperbolic) distance between two points

\begin{equation*} \rho(z,w) = \log \frac{| z- \bar w| + |z-w|}{| z- \bar w| - |z-w|}\text{.} \end{equation*}
Observation 1.16

As \(w\to x\in \RR\) we have \(\rho \to \infty\text{.}\) So \(\RR\) is the boundary.
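As a sanity check, the distance formula is invariant under the \(\SL_2(\RR)\)-action by linear fractional transformations (defined in Subsection 1.7 below); a numerical sketch in Python (numpy assumed, the test points arbitrary):

import numpy as np

def rho(z, w):
    # hyperbolic distance from the formula above
    a, b = abs(z - np.conj(w)), abs(z - w)
    return np.log((a + b) / (a - b))

def act(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

g = np.array([[2.0, 1.0], [3.0, 2.0]])       # an element of SL_2(R): det = 1
z, w = 0.3 + 1.2j, -0.7 + 0.4j
print(rho(z, w), rho(act(g, z), act(g, w)))  # equal: the action is by isometries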

Recall: the isoperimetric inequality

\begin{equation*} 4\pi A -\kappa A^2 \le L^2 \end{equation*}

where \(L\) is the length of the boundary of a region and \(A\) is the area. Note if \(\kappa = 0\) then \(4\pi A \le L^2\text{.}\) So \(A\) can be (and typically is) of size \(L^2\text{.}\)

For \(\kappa = -1\) we have

\begin{equation*} 4\pi A +A^2 \le L^2 \end{equation*}

so \(A\) can be at most (and most often is) of size \(\sim L\text{.}\) The upshot is that under the hyperbolic metric, the area and the perimeter can be the same size.

\begin{equation*} |\text{Boundary}| \sim |\text{Area}|\text{.} \end{equation*}

Things are a lot more subtle.

Another interesting setting is the tree of \(\PGL_2(\QQ_p)\text{;}\) for \(p =2\) this is a \(3\)-regular tree. How many points are there of distance at most \(R\) from a fixed point?

\begin{equation*} 1 + 3(1+ 2 + \cdots + 2^{R-1}) = 1 + 3(2^{R} -1 ) \sim 3 \cdot 2^{R}\text{.} \end{equation*}

But how many points of distance exactly \(R\) are there? \(3\cdot 2^{R-1}\text{,}\) roughly \(2^R\) again.

A hyperbolic disk of radius \(R\) centred at \(i\) is a euclidean disk, but not centered at \(i\text{.}\) The area is \(4\pi(\sinh (R/2))^2\) and the circumference is \(2\pi\sinh(R)\text{;}\) these are roughly the same size, as \(\sinh(x) = (e^x - e^{-x})/2\) grows exponentially. The euclidean area of this disk is far larger (roughly the square of the hyperbolic one).

Subsection 1.7 \(\HH\) as a homogeneous space

\begin{equation*} \SL_2(\RR) \acts \HH \end{equation*}

via linear fractional transformations, i.e.

\begin{equation*} g\cdot z = \frac{az+b}{cz+d} \text{ for } g= \begin{pmatrix} a\amp b \\c\amp d\end{pmatrix} \in \SL_2(\RR) \end{equation*}

this is the full group of holomorphic isometries of \(\HH\text{;}\) to get all isometries take \(z\mapsto -\bar z\) as well.

\begin{equation*} \HH = \SL_2(\RR) / \specialorthogonal(2) \end{equation*}

because \(\specialorthogonal(2) = \Stab_i(\SL_2(\RR))\text{.}\)

Subsubsection 1.7.1 Several decompositions

Cartesian: \(x+ iy\) then the invariant measure is \(\frac{\diff x\diff y}{y^2}\text{.}\)

Iwasawa: \(G = NAK\)

\begin{equation*} N = \left\{ \begin{pmatrix} 1 \amp n \\ \amp 1 \end{pmatrix}:n\in \RR\right\} \end{equation*}
\begin{equation*} A = \left\{ \begin{pmatrix} a \amp \\ \amp a^{-1} \end{pmatrix}:a\in \RR_{\gt 0}\right\} \end{equation*}
\begin{equation*} K = \left\{ \begin{pmatrix} \cos \theta \amp \sin \theta \\ -\sin \theta \amp\cos \theta \end{pmatrix}:\theta\in [0,2\pi)\right\} \end{equation*}
\begin{equation*} x+iy \leftrightarrow \underbrace{\begin{pmatrix} 1 \amp x \\ \amp 1 \end{pmatrix}}_N \underbrace{\begin{pmatrix} \sqrt{y} \amp \\ \amp \sqrt{y}^{-1} \end{pmatrix}}_A \end{equation*}

this is very general, an analogue of Gram-Schmidt.
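The \(\HH\)-picture makes the Iwasawa decomposition computable: act on \(i\text{,}\) read off \(x + iy\) as above, and the leftover factor is a rotation. A sketch in Python (numpy assumed):

import numpy as np

def iwasawa(g):
    # write g = n a k: n, a are read off from z = g.i, k is the leftover
    a, b, c, d = g.ravel()
    z = (a * 1j + b) / (c * 1j + d)
    x, y = z.real, z.imag
    n = np.array([[1, x], [0, 1]])
    aa = np.array([[np.sqrt(y), 0], [0, 1 / np.sqrt(y)]])
    k = np.linalg.inv(n @ aa) @ g   # fixes i, so it lies in SO(2)
    return n, aa, k

g = np.array([[2.0, 1.0], [3.0, 2.0]])
n, aa, k = iwasawa(g)
print(np.round(k @ k.T, 10))         # identity: k is orthogonal
print(np.round(n @ aa @ k - g, 10))  # zero: g = n a k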

Observation 1.17
\begin{equation*} \HH = \underbrace{NA}_{=AN} = \underbrace{P}_{\left\{\begin{pmatrix} * \amp * \\ \amp * \end{pmatrix}\right\}} \end{equation*}

but be warned that \(NA \ne AN\) elementwise.

Cartan: \(KAK\) (useful when dealing with rotationally invariant functions).

Prove these decompositions. Use the spectral theorem for symmetric matrices for the Cartan case.

Classification of Motions

We classify motions by the number of fixed points in \(\HH \cup \hat\RR\text{,}\) for \(\hat \RR\) the extended real line.

  • Identity, infinitely many fixed points.
  • Parabolic, 1 fixed point in \(\hat \RR\) (\(\infty\)) \(\begin{pmatrix} 1 \amp n \\ \amp 1 \end{pmatrix}\)
  • Hyperbolic, 2 fixed points in \(\hat \RR\) (e.g. \((0,\infty)\)), \(\begin{pmatrix} a \amp \\ \amp a^{-1} \end{pmatrix}\text{.}\)
  • Elliptic, 1 fixed point in \(\HH\) (its companion fixed point lies in the lower half plane), e.g. \((i,-i)\) for \(\begin{pmatrix} \cos \theta \amp \sin \theta \\ -\sin \theta \amp\cos \theta \end{pmatrix}\text{.}\)
Note 1.19 for the future

These notions are different when we consider \(\gamma \in G(\QQ)\text{:}\) something can be \(\QQ\)-elliptic but \(\RR\)-hyperbolic. This depends essentially on the Jordan decomposition; we can have such \(\gamma\) whose characteristic polynomial has no rational roots but splits over \(\RR\text{.}\)

So we have

  • Parabolic \(| \tr | = 2\)
  • (\(\RR\)-)Elliptic \(| \tr | \lt 2\)
  • (\(\RR\)-)Hyperbolic \(| \tr | \gt 2\)

Subsection 1.8 \(\Delta_\HH\)

For this section we write \(\Delta\) for \(\Delta_\HH\text{.}\)

Definition 1.20

We have the translation operators

\begin{equation*} g\in \SL_2(\RR) \end{equation*}
\begin{equation*} T_g f(z) = f(g\cdot z)\text{.} \end{equation*}
Definition 1.21 Invariant operators

A linear operator \(L\) will be called invariant if it commutes with \(T_g\) for all \(g\in \SL_2(\RR)\text{,}\) i.e.

\begin{equation*} L(T_g f) = T_g(Lf)\text{.} \end{equation*}
Remark 1.22

On any Riemannian manifold \(\Delta\) can be characterised by: A diffeomorphism is an isometry iff it commutes with \(\Delta\text{.}\)

\(\Delta\) in coordinates:

Cartesian

\begin{equation*} \Delta = y^2\left( \partder[^2]{x^2} + \partder[^2]{y^2}\right) = -(z-\bar z)^2 \partder z \partder{\bar z} \end{equation*}
\begin{equation*} \partder z= \frac 12 \left(\partder x - i \partder y\right) \end{equation*}
\begin{equation*} \partder{\bar z}= \frac 12 \left(\partder x + i \partder y\right) \end{equation*}

Show that \(\Delta\) is an invariant differential operator.

Polar:

\begin{equation*} \Delta = \partder[^2]{r^2} + \frac{1}{\tanh(r)} \partder{r} + \frac{1}{\sinh(r)^2} \partder[^2]{\phi^2} \end{equation*}

We will be interested in \(\Delta \acts\cinf(\Gamma \backslash \HH)\text{.}\)

Eigenfunctions of \(\Delta\)

This is a little subtle; let's take the definition of an eigenfunction to be

\begin{equation*} f\in C^2( \HH) \text{ s.t. } (\Delta + \lambda) f \equiv 0\text{.} \end{equation*}
Remark 1.24

\(\Delta\) is an “elliptic” operator with real analytic coefficients. This implies any eigenfunction is real analytic.

Remark 1.25

\(\lambda = 0\) means \(f\) is harmonic.

Some basic eigenfunctions: Let's try \(f(z) = f_0(y)\) independent of \(x\)

\begin{equation*} \Delta f = y^{2} \partder[^2]{y^2} f \end{equation*}

if \(f\) satisfies

\begin{equation*} (\Delta + \lambda)f = 0 \end{equation*}

this implies \(f\) is a linear combination of \(y^s\) and \(y^{1-s}\text{,}\) where \(s(1-s) = \lambda\text{,}\) if \(\lambda \ne \frac 14\text{.}\)

If \(\lambda = \frac 14\) this gives \(y^{1/2}\) and \(\log (y)y^{1/2}\text{.}\) Note the symmetry! \(s \leftrightarrow 1-s\text{.}\)

Let's look at \(f(z)\) depending periodically on \(x\) (with period \(1\)). Separation of variables: try

\begin{equation*} f(z) = e(x) F(2\pi y) \end{equation*}

where the \(2\pi\) is really in both factors. This gives

\begin{equation*} \partder[^2]{x^2} f = -4\pi^2 f \end{equation*}
\begin{equation*} \partder[^2]{y^2} f = 4\pi^2 e(x) F''(2\pi y) \end{equation*}

which gives

\begin{equation*} (\Delta + \lambda)f = 0 \iff 4\pi^2 y^2 e(x) \left( F''(2\pi y) - F(2\pi y) \right) + \lambda e(x) F(2\pi y) = 0 \end{equation*}

which implies, writing \(u = 2\pi y\text{,}\) a close relative of the Bessel differential equation:

\begin{equation*} F''(u) + \left( \frac{\lambda}{u^2} - 1\right) F(u) = 0\text{.} \end{equation*}

This has two solutions; up to constants they behave like

\begin{equation*} \left(\frac{2y}{\pi}\right)^{\frac 12} K_{s-\frac 12}(2\pi y) \sim e^{-2\pi y}\text{ as }y\to \infty \end{equation*}
\begin{equation*} \left(2y\pi\right)^{\frac 12} I_{s-\frac 12}(2\pi y) \sim e^{2\pi y}\text{ as }y\to \infty \end{equation*}

intuition: as \(y\to\infty\) we have \(F'' - F = 0\) so \(e^{u}\) or \(e^{-u}\text{.}\)

Remark 1.26

If we insist on some “moderate growth” (at most polynomial in \(y\)) for the eigenfunction, the \(I_{s-\frac 12}\) solution cannot contribute. (When we come to automorphic forms we will see that the definition is essentially eigenfunctions with moderate growth.)

So our periodic (in \(x\)) eigenfunction with moderate growth looks like

\begin{equation*} f_s(z) = \underbrace{C2y^{\frac 12} K_{s-\frac12}(2\pi y) e(x)}_{=W_s(z)}\text{.} \end{equation*}
Definition 1.27 Whittaker functions

\(W_s(z)\) is called a Whittaker function.

These exist for arbitrary Lie groups, though we may not always be able to write eigenfunctions in terms of them in general. They are a replacement for the exponential functions.
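One can verify numerically that \(W_s\) is indeed a \(\Delta\)-eigenfunction, by differentiating with mpmath (a sketch; the point and spectral parameter are arbitrary, and we take \(C = 1\)):

from mpmath import mp, mpf, mpc, besselk, exp, pi, sqrt, diff

mp.dps = 25
s = mpc('0.5', '3.0')
lam = s * (1 - s)

def W(x, y):
    # W_s(z) = 2 sqrt(y) K_{s - 1/2}(2 pi y) e(x)
    return 2 * sqrt(y) * besselk(s - mpf(1) / 2, 2 * pi * y) * exp(2j * pi * x)

x0, y0 = mpf('0.3'), mpf('1.7')
Wxx = diff(lambda x: W(x, y0), x0, 2)
Wyy = diff(lambda y: W(x0, y), y0, 2)
print(y0**2 * (Wxx + Wyy) + lam * W(x0, y0))   # ~ 0: (Delta + lambda) W = 0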

Note 1.30

We will be considering automorphic forms for groups \(\Gamma\) with

\begin{equation*} \left\langle \begin{pmatrix} 1 \amp 1 \\ \amp 1 \end{pmatrix}\right \rangle \subseteq \Gamma \subseteq \SL_2(\ZZ)\text{.} \end{equation*}

Subsection 1.9 Integral operators

Recall the Cauchy integral formula for holomorphic functions

\begin{equation*} f(z) = \frac{1}{2\pi i} \int_{B_z} \frac{f(w)}{w-z} \diff w = \int_{B_z} K(w,z) f(w) \diff w \end{equation*}

i.e. using an integral kernel, \(f\) is an eigenfunction for this operator.

Recall: \(L\) is an integral operator if

\begin{equation*} Lf(z) = \int_{\HH} K(z,w) f(w)\diff \underbrace{\mu(w)}_\frac{\diff u\diff v}{v^2} ,\,w = u+iv \end{equation*}

\(K\) will often be smooth and of compact support for us. \(L\) is invariant if it commutes with \(T_g\) for all \(g\text{.}\)

Observation 1.31

\(L\) is invariant iff

\begin{equation*} K(gz,gw) = K(z,w)\,\forall g\in \SL_2(\RR)\text{.} \end{equation*}

Show this.

Definition 1.33 Point pair invariants

A function \(K\colon \HH \times \HH \to \CC\) that satisfies \(K(gz, g w) = K(z,w)\) is called a point pair invariant. This was first introduced by Selberg.

Invariant integral operators are convolution operators.

Remark 1.34

A point pair invariant \(K(z,w)\) depends only on the distance between \(z,w\) i.e.

\begin{equation*} K(z,w) = K_0(\rho(z,w))\text{ for } K_0\colon \RR^+ \to \CC \end{equation*}

so an invariant operator is just a convolution operator.

(\(\Leftarrow\)) Suppose \(f\) is an eigenfunction of every invariant integral operator,

\begin{equation*} L_K(f)(z) = \int_\HH f(w) K(z,w) \diff \mu (w) = \Lambda_K f(z) \end{equation*}

(if \(\Lambda_K = 0\) for all \(K\) then \(f \equiv 0\)). So, picking \(K\) with \(\Lambda_K \ne 0\text{,}\)

\begin{equation*} \Delta f(z) = \Delta \frac{1}{\Lambda_K} L_K f(z) = \frac{1}{ \Lambda_K} \int_\HH f(w) \Delta_z K(z,w) \diff \mu(w)\text{.} \end{equation*}

Note \(f \mapsto \int_\HH f(w) \Delta_z K(z,w) \diff \mu(w)\) is another invariant integral operator (exercise: show this), so \(f\) is an eigenfunction of \(\Delta\) as well.

We will prove an integral representation that looks like the Cauchy integral formula

\begin{equation*} f(z) = \frac{1}{2\pi i} \int_{B_z} \frac{f(w)}{w-z} \diff w\text{.} \end{equation*}

For \(w\in \HH\) let

\begin{equation*} \Phi_w(f)(z) = \int_{G_w} f(gz) \diff\mu(g) \end{equation*}

where \(G_w\) is the stabiliser of \(w\) in \(\SL_2(\RR)\) and \(\diff\mu\) is normalized so that \(G_w\) has volume 1.

Facts:

  1. If \(f\) is an eigenfunction of \(\Delta\text{,}\) i.e. \((\Delta + \lambda)f \equiv 0\) with \(\lambda = s(1-s)\text{,}\) then there exists a unique function \(W(z,w)\) s.t.
    \begin{equation*} \Phi_w(f)(z) = W(z,w) f(w) \end{equation*}
    \begin{equation*} W(w,w) = 1 \end{equation*}
    \begin{equation*} (\Delta_z + \lambda)W(z,w) \equiv 0 \end{equation*}
    \begin{equation*} W \text{ is point pair invariant}\text{.} \end{equation*}
  2. \begin{equation*} L(\Phi_z(f))(z) = L(f)(z) \end{equation*}
    as
    \begin{equation*} L(\Phi_z(f))(z) = \int_\HH \Phi_z(f) (w) K(w,z) \diff \mu(w) \end{equation*}
    \begin{equation*} = \int_\HH \int_{G_z} f(gw) \diff \mu(g) K(w,z) \diff \mu(w) \end{equation*}
    \begin{equation*} = \int_{G_z} \int_\HH f(w) \underbrace{K(g\inv w,z)}_{=K(w,gz) = K(w,z)} \diff \mu(w)\diff \mu(g) \end{equation*}

Now returning to the proof. Let \((\Delta + \lambda)f \equiv 0\text{,}\) \(L\) invariant.

\begin{equation*} Lf(z) = L(\Phi_zf)(z) \end{equation*}
\begin{equation*} = \int_\HH \Phi_z(f) (w) K(z,w) \diff \mu(w) \end{equation*}
\begin{equation*} = \int_\HH W(w,z) f(z) K(z,w) \diff \mu(w) \end{equation*}
\begin{equation*} =\left\{\int_\HH W(w,z) K(z,w) \diff \mu(w)\right\} f(z) \end{equation*}

Claim: \(\{\cdots\}\) depends only on \(K\) and \(\lambda\text{,}\) not on \(z\text{.}\) Proof: Let \(z_1,z_2 \in \HH\) and pick \(g\in \SL_2(\RR)\) with \(gz_1 = z_2\text{.}\)

\begin{equation*} \int_\HH W(w,z_2)K(z_2, w) \diff \mu(w) \end{equation*}
\begin{equation*} = \int_\HH W(w,gz_1) K(gz_1,w) \diff \mu(w) \end{equation*}
\begin{equation*} = \int_\HH W(g^{-1} w,z_1) K(z_1,g\inv w) \diff \mu(w) \end{equation*}
\begin{equation*} = \int_\HH W( w,z_1) K(z_1, w) \diff \mu(w)\text{.} \end{equation*}

Upshot so far: Poisson summation is a duality, but it can be seen as an equality of the trace of an operator calculated in two different ways. In the non-euclidean setting we can do something similar, though the result is not so recognisable.

Digression: Ramanujan conjecture

A weight \(k\) cusp form which is an eigenfunction of the Hecke operators satisfies (Ramanujan conjecture, now a theorem of Deligne)

\begin{equation*} |\lambda_p| \le 2 p^{\frac{k-1}{2}}\text{,} \end{equation*}

The “correct” normalisation is \(|\tilde \lambda_p| \le 2\text{.}\)

This is about the components at \(p\) but there is also a component at infinity.

Selberg's eigenvalue conjecture: \(\phi\) a cuspidal automorphic (Maass) form with eigenvalue \(\lambda = s(1-s)\) implies \(s = \frac 12 + it\text{,}\) \(t\in \RR\text{,}\) i.e. \(\lambda \ge \frac 14\text{.}\)

Back to \(\HH\)

If we have \((\Delta + \lambda) f = 0\) can we say anything about \(\lambda\text{?}\)

Introduce the Petersson inner product

\begin{equation*} \pair FG = \int_\HH F(z) \overline{G(z)} \diff \mu(z)\text{.} \end{equation*}

Now

\begin{equation*} \pair{-\Delta F}{G} = \int_\HH \nabla F \cdot \overline{\nabla G} \diff x\diff y \end{equation*}
\begin{equation*} (\Delta F= \nabla\cdot \nabla F) \end{equation*}

Exercise: check this. So \(\pair {-\Delta F}{G} = \pair {F}{-\Delta G}\text{,}\) which gives \(\lambda \in \RR\text{.}\)

\begin{equation*} \pair{-\Delta F}{F} \ge 0 \implies \lambda \ge 0\text{.} \end{equation*}

For the \(\frac14\) bound one needs to work a little harder.

Let \(D\subseteq \HH\) be a (nice) domain. Consider the Dirichlet problem

\begin{equation*} (\Delta + \lambda)f \equiv 0 \text{ inside }D \end{equation*}
\begin{equation*} f \equiv 0 \text{ on }\partial D\text{.} \end{equation*}

Define

\begin{equation*} \pair FG _D= \int_D F(z) \overline{G(z)} \diff \mu(z)\text{.} \end{equation*}

Then

\begin{equation*} \pair{-\Delta F}{G}_D = \int \nabla F \cdot \overline{\nabla G} \diff x\diff y \end{equation*}

(exercise, show this).

\begin{equation} \lambda\|F\|^2 = \pair{-\Delta F}{F} = \int_D \left(\left(\partder[F]{x}\right)^2 + \left(\partder[F]{y}\right)^2\right) \diff x\diff y \ge \int_D \left(\partder[F]{y}\right)^2\diff x\diff y\tag{1.1} \end{equation}

For every fixed \(x\text{:}\)

\begin{equation*} \int F^2 \frac{\diff y}{y^2} = 2 \int F\partder[F]{y} \frac{\diff y}{y} \end{equation*}
\begin{equation*} \implies \int_D F^2 \frac{\diff x \diff y}{y^2} = 2\int F\partder[F]{y} \frac{\diff x \diff y}{y}\text{.} \end{equation*}
\begin{equation*} \implies 2\int \left| \frac Fy \partder[F]{y}\right | \diff x \diff y \le 2\left( \int \frac{F^2}{y^2} \diff x\diff y\right)^{\frac 12} \left(\int \left(\partder[F]{y}\right)^2 \diff x \diff y\right)^\frac12 \end{equation*}
\begin{equation} \implies \frac12\int_D \frac{F^2}{y^2} \diff x \diff y \le \left( \int \frac{F^2}{y^2} \diff x\diff y\right)^{\frac 12} \left(\int \left(\partder[F]{y}\right)^2 \diff x \diff y\right)^\frac12\tag{1.2} \end{equation}

The previous two imply

\begin{equation*} \frac{1}{4\lambda} \int_D \left(\partder[F]{y}\right)^2 \diff x \diff y \le \frac 14 \int_D \frac{F^2}{y^2} \diff x \diff y \le \int_D \left(\partder[F]{y}\right)^2 \diff x\diff y \end{equation*}
\begin{equation*} \frac{1}{4\lambda}\le 1 \implies \lambda \ge \frac 14\text{.} \end{equation*}
Remark 1.38

In Theorem 1.29 we restricted to the \(\frac12\)-line; this is a reincarnation of \(\lambda \ge \frac 14\text{.}\) Only certain functions contributed, similar to the way only the unitary characters \(e^{2\pi ix\xi}\) contribute to a Fourier expansion, not all characters of \(\RR\text{.}\)

The spectrum of \(\Delta\) on \(\HH\) has \(\lambda = s(1-s)\text{,}\) \(s = \frac 12 + it\text{.}\) If we consider the quotient \(\Gamma \backslash \HH\) there is a possibility for \(t = t_{\RR} + it_{\CC}\) where \(0\le t_\CC \le \frac 12\text{.}\) Selberg's conjecture is that these extra ones don't appear for cusp forms. This is very sensitive to the arithmetic, we need a congruence subgroup for this to be true.

Subsection 1.10 Automorphic forms

Subsubsection 1.10.1 Modular forms

These are functions on \(\HH\) that are very symmetric. We already saw one in the first lecture: \(\theta(t)\) in the proof of Theorem 1.3. It's not quite one, though; rather it is a half-integral weight modular form, as square roots were involved in

\begin{equation*} \theta (t) \leftrightarrow \theta\left(\frac 1t\right)\text{.} \end{equation*}
Definition 1.39 Modular functions and forms

A modular function is some

\begin{equation*} f\colon \HH \to \CC \end{equation*}

with

\begin{equation*} f\left(\frac{az+b}{cz+d}\right) = (cz+d)^k f(z) \, \forall\begin{pmatrix} a\amp b \\c\amp d\end{pmatrix} \in \SL_2(\ZZ) \end{equation*}

where \(f\) is meromorphic on \(\HH\) and at \(\infty\text{.}\) It is called a modular form if it is indeed holomorphic, including at infinity; this is equivalent to a growth condition.

Definition 1.40 Cusp forms

\(f\) is a cusp form if

\begin{equation*} \int_0^1 f(x+z) \diff x = 0\, \forall z\text{.} \end{equation*}
Remark 1.41

\(f\) has a Fourier expansion (it is invariant under \(x\mapsto x+1\)); holomorphy implies

\begin{equation*} f(z) = \sum_{n=0}^\infty a_n e(nz) \ (e(\alpha) =e^{2\pi i \alpha}) \end{equation*}

cusp form implies

\begin{equation*} f(z) = \sum_{n=1}^\infty a_n e(nz) \end{equation*}

as cuspidal implies \(f(z) = O(e^{-2\pi y})\) as \(y \to \infty\text{,}\) while \(f\) not cuspidal implies only \(f(z) = O(1)\) as \(y \to \infty\text{.}\)

Subsubsection 1.10.2 Examples

Example 1.42

Constant functions for \(k = 0\text{.}\)

Example 1.43

Eisenstein series (holomorphic).

\begin{equation*} G_k(z) = \sum_{(m,n) \in \ZZ^2 \smallsetminus (0,0)} \frac{1}{(mz+n)^{2k}}\text{.} \end{equation*}

Is this cuspidal? Answer: No! Why?

\begin{equation*} G_k(z) = 2\underbrace{\zeta(2k)}_{\ne 0} + \frac{2(2\pi i)^{2k}}{(2k-1)!}\sum_{\alpha = 1}^\infty \underbrace{\sigma_{2k-1}(\alpha)}_{=\sum_{d|\alpha} d^{2k-1}} e(\alpha z) \end{equation*}

this is of weight \(2k\text{!}\)

Prove this.

Example 1.45
\begin{equation*} \Delta(z) = (60G_2(z))^3 - 27(140G_3(z))^2 \end{equation*}

is a cusp form of weight 12.

\begin{equation*} \Delta(z) = \sum_{n=1}^\infty a_n e(nz),\,a_n = n^{11/2} \tau(n) \implies \tau(n) = O(n^\varepsilon) \forall \varepsilon \end{equation*}
\begin{equation*} |\tau(p) | \le 2 \end{equation*}

the original Ramanujan conjecture.

Show these.

Example 1.47 A non-example
\begin{equation*} j(z) = \frac{1728(60G_2(z))^3}{\Delta(z)} \end{equation*}

not holomorphic at \(\infty\text{.}\)

Example 1.48

We have also seen the \(\theta\)-function, but it does not fit into this setting; it is rather a modular form for a covering group.

Digression: bounds on Fourier coefficients of cusp forms
\begin{equation*} f(z) = \sum_{n=1}^\infty a_n e(nz) = e(z) \sum_{n=1}^\infty a_n e((n-1)z) \end{equation*}

implies

\begin{equation*} |f(z) | \le C e^{-2\pi y} \end{equation*}

now consider

\begin{equation*} \phi(z) = f(z) y^{k/2}\text{.} \end{equation*}

Then

\begin{equation*} \phi(gz) = \phi(z) \ \forall g\in\SL_2(\ZZ) \end{equation*}

(exercise). Moreover \(\phi(z) \to 0\) as \(y\to\infty\) and \(\phi\) is continuous so \(\exists M\) s.t. \(|\phi(z) | \le M\text{.}\)

Therefore \(|f(z)| \le M y^{-k/2}\text{.}\)

\begin{equation*} f(z) = \sum_{n=1}^\infty a_n e(nz) \end{equation*}
\begin{equation*} a_n e(niy) = \int_0^1 f(x+iy) e(-nx) \diff x \end{equation*}
\begin{equation*} \le \int_0^1 \frac{M}{y^{k/2}} \diff x = O\left(\frac{1}{y^{k/2}}\right) \end{equation*}

for all \(y\text{;}\) picking \(y=1/n\) gives \(a_n = O(n^{k/2})\text{.}\)

Subsubsection 1.10.3 Maass forms

Definition 1.50 Maass forms

A function \(f\colon \HH \to \CC\) s.t.

  • \begin{equation*} f(gz) = f(z) \ \forall g\in\SL_2(\ZZ) \end{equation*}
  • \(f\) is an eigenfunction of \(\Delta\text{.}\)
  • \(f\) is of moderate growth, \(f(x+iy) = O(y^N)\) for some \(N\text{.}\)

is called a Maass form. If

\begin{equation*} \int_0^1 f(x+iy)\diff x = 0 \end{equation*}

we call it a Maass cusp form.

Example 1.51

Constant functions are Maass forms: they are \(L^2\) because \(\SL_2(\ZZ)\backslash\HH\) has finite volume.

Example 1.52 Non-holomorphic Eisenstein series.
\begin{equation*} E(z; s) = \sum_{(c,d) \in \ZZ^2 \smallsetminus (0,0),(c,d) =1,c\ge 0}\frac{\Im(z)^{s+1/2}}{|cz+d|^{2s+1}}\text{,} \end{equation*}

we choose this normalisation (for now) with \(s+\frac 12\) as it generalises better to \(\GL_3\) which has more elements in its Weyl group.

Remark 1.53

In fact most things are non-holomorphic in the sense that many spaces of interest do not have a complex structure.

Properties
  • \begin{equation*} (\Delta + \lambda) E(z; s) = 0 \end{equation*}
    \begin{equation*} \lambda = \frac 14 - s^2 \end{equation*}
  • \begin{equation*} E(\gamma z; s) = E(z,s) \ \forall\gamma\in\SL_2(\ZZ) \end{equation*}
  • \begin{equation*} E(z; s) = O(y^{\max\{\Re(s)+\frac 12,\,\frac 12-\Re(s)\}}) \end{equation*}

hence \(E(z; s)\) is a Maass form. We have

\begin{equation*} \Im(\gamma z) = \frac{\Im(z)}{|cz+d|^2} \end{equation*}

so

\begin{equation*} E(z; s) = \sum_{(c,d)\in\ZZ^2 \smallsetminus (0,0), (c,d) =1,c\ge 0} \frac{y^{s+\frac 12}}{|cz+d|^{2s+1}}= \sum_{\gamma \in \pm\Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^{s+\frac12} \end{equation*}

where \(\Gamma_\infty = \left\{ \begin{pmatrix} 1 \amp n \\ \amp 1 \end{pmatrix}\in\SL_2(\ZZ)\right\}\text{.}\) Exercise: check.

\begin{equation*} \Delta y^{s+\frac 12} = y^2 (\partder[^2]{x^2}+\partder[^2]{y^2}) y^{s+\frac12} \end{equation*}
\begin{equation*} = (s+\frac12)(s-\frac12) y^{s+\frac12} \end{equation*}
\begin{equation*} = (s^2 - \frac14)y^{s+\frac12} \end{equation*}
\begin{equation*} \implies (\Delta+ (\frac14 - s^2))y^{s+\frac 12} = 0 \end{equation*}

Each \(\gamma\) is an isometry, which implies \(\Delta\gamma = \gamma\Delta\text{.}\) So \(\Im(\gamma z)^{s+\frac 12}\) is also an eigenfunction with eigenvalue \(\frac 14 - s^2\text{.}\)

(Wrong way to prove this) Fourier expansion of Eisenstein series.

\begin{equation*} E(z; s) = a_0(y)+ \sum_{n\ne 0} a_n 2y^{\frac12}K_s(2\pi |n| y) e(nx) \end{equation*}

using Theorem 1.29.

\begin{equation*} \int_0^1 E(x+iy ; s ) e(-nx) \diff x = \begin{cases} a_0 (y) \amp n=0 \\ 2a_n y^{\frac 12} K_s(2\pi|n|y) \amp n \ne0 \end{cases}\text{.} \end{equation*}

Note:

\begin{equation*} E(z; s) = \sum \frac{y^{s+\frac12}}{|cz+d|^{2s+1}} \end{equation*}
\begin{equation*} =\frac12 \frac {1}{\zeta(2s+1)} \sum_{(c,d) \in \ZZ^2\smallsetminus \{(0,0)\}} \frac{y^{s+\frac 12}}{|cz+d|^{2s+1}} \end{equation*}

We will work with

\begin{equation*} E_1(z; s) = \frac {\overbrace{\pi^{-(s+\frac 12)} \Gamma(s+\frac12)}^{\text{archimedean factor of }\zeta(2s+1)}}{2} \sum_{(c,d) \ne (0,0)} \frac {y^{s+\frac12}}{|cz+d|^{1+2s}} \end{equation*}
  1. \begin{equation*} c= 0 \implies \begin{cases} \pi^{-(s+\frac 12)} \Gamma(s+\frac12)\zeta(1+2s)\, y^{s+\frac12}\amp\text{ if }n=0\\ 0 \amp\text{ if }n\ne0 \end{cases} \end{equation*}
  2. \begin{equation*} c\ne 0 : \sum_{(c,d),\,c\ne0} \int_0^1 \frac{y^{s+\frac12}}{|cz+d|^{2s+1}} e(-nx) \diff x \end{equation*}
    \begin{equation*} =2 \sum_{c=1}^\infty \sum_{d = -\infty}^ \infty y^{s+\frac 12} \int_0^1 \frac{e(-nx)}{|\underbrace{cz+d}_{=cx+d +icy}|^{2s+1}} \diff x \end{equation*}

    The right hand side is invariant under \(x\mapsto x+1\text{,}\) so we can absorb the shift into the sum over \(d\text{;}\) in a general context this is known as unfolding.

    \begin{equation*} =2 \sum_{c=1}^\infty\sum_{\alpha \pmod c} \sum_{d \equiv \alpha \pmod c} y^{s+\frac 12} \int_0^1 \frac{e(-nx)}{|cz+d|^{2s+1}} \diff x \end{equation*}
    \begin{equation*} =2 \sum_{c=1}^\infty\sum_{\alpha \pmod c} \sum_{k \in \ZZ}y^{s+\frac 12} \int_0^1 \frac{e(-nx)}{|cx+ck + \alpha + icy|^{2s+1}} \diff x \end{equation*}
    \begin{equation*} =2 y^{s+\frac 12} \sum_{c=1}^\infty\sum_{\alpha \pmod c} \int_{-\infty}^\infty \frac{e(-nx)}{|cx+ \alpha + icy|^{2s+1}} \diff x \end{equation*}
    \begin{equation*} =2 y^{s+\frac 12} \sum_{c=1}^\infty \sum_{\alpha\pmod c} \frac{e(n\alpha/c)}{c^{2s+1}}\int_{-\infty}^\infty \frac{e(-nx)}{|x+iy|^{2s+1}} \diff x \end{equation*}

    note:

    \begin{equation*} \sum_{\alpha \pmod c} e(n\alpha/c) = \begin{cases} c\amp\text{ if }c|n \\0\amp\text{ if }c\nmid n \end{cases} \end{equation*}

    so we get

    \begin{equation*} 2y^{s+\frac12} \sum_{c \mid n} \frac{1}{c^{2s}} \int_{-\infty}^\infty \frac{e(-nx)}{|x+iy|^{2s+1}} \diff x \end{equation*}

    (every \(c\) contributing when \(n = 0\));

    two cases

    1. \(n = 0\)
      \begin{equation*} 2y^{s+\frac12} \sum_{c=1}^\infty \frac{1}{c^{2s}} \int_{-\infty}^\infty \frac{1}{|x+iy|^{2s+1}} \diff x \end{equation*}
      \(x\to yx\)
      \begin{equation*} =2\frac{y^{s+\frac32}}{y^{2s+1}} \sum_{c=1}^\infty \frac{1}{c^{2s}} \int_{-\infty}^\infty \frac{1}{(x^{2}+1)^{\frac{2s+1}{2}}} \diff x \end{equation*}
      \begin{equation*} =2y^{-s+\frac12} \zeta(2s)\int_{-\infty}^\infty \frac{1}{(x^{2}+1)^{\frac{2s+1}{2}}} \diff x \end{equation*}
    2. \(n \ne 0\)
      \begin{equation*} 2y^{s+\frac12} \sigma_{-2s}(|n|)\int_{-\infty}^\infty \frac{e(-nx)}{|x+iy|^{2s+1}} \diff x \end{equation*}
      fact:
      \begin{equation*} \pi^{-(s+\frac 12)} y^{s+\frac 12} \Gamma(s+\frac 12) \int_{-\infty}^\infty \frac{e(-nx)}{|x+iy|^{2s+1} }\diff x = \begin{cases} \pi^{-s} \Gamma (s) y^{\frac12 - s } \amp \text{ if } n = 0 \\ 2|n|^s \sqrt y K_s(2\pi |n| y) \amp\text{ if } n \ne 0\end{cases} \end{equation*}

    Combining these we have shown

    \begin{equation} E_1(z; s) = \pi^{-(s+\frac12)} \Gamma(s+\frac 12) \zeta(2s+1 ) y^{s+\frac 12}\tag{1.3} \end{equation}
    \begin{equation*} + \pi ^{-s} \Gamma(s) \zeta(2s) y^{\frac 12 -s} + 2\sum_{n \ne 0} \sigma_{-2s} (|n|) |n| ^s \sqrt{y} K_s( 2\pi |n| y) e(nx) \end{equation*}

    where we have \(K_s = K_{-s}\) and

    \begin{equation*} \sigma_{-2s}(|n|) |n|^s = \sum_{d\mid |n|} d^{-2s}|n|^s = \sum_{d \mid |n|} \frac{d^{2s}}{|n|^s} = |n|^{-s} \sigma_{2s}(|n|) \end{equation*}

    (replacing \(d\) by \(|n|/d\)), so each term above is invariant under \(s \mapsto -s\text{,}\) i.e. \(E_1(z; s) = E_1(z; -s)\text{.}\)

    So we have proved the functional equation and analytic continuation.

We can see that \(\zeta\) appears here in the constant term, so we can determine analytic information about it using what we know about Eisenstein series; this idea in generality is known as the Langlands-Shahidi method.

Remark 1.55

This has poles at \(s = \pm\frac12\text{,}\) with \(\Res_{s=\frac12} E_1(z; s) = \frac12\text{.}\) Note that this residue is constant (independent of \(z\)). We will use this in Rankin-Selberg.

Remark 1.56

If \(G\) is a reductive group and \(M \subseteq G\) a Levi subgroup (e.g. for \(\GL_n\) a Levi is diagonal blocks of size \(n_1 + n_2 + \cdots +n_k = n\)), one can, following Eisenstein, associate an Eisenstein series to a cusp form on such a subgroup. There are automorphic \(L\)-functions corresponding to these, and by the same procedure as last time these \(L\)-functions appear in the constant terms of the Eisenstein series. So we can establish analytic properties of these automorphic \(L\)-functions via those of the Eisenstein series. This is known as the Langlands-Shahidi method; it only works in some cases, but when it does it is very powerful. Shahidi pushed the idea further by looking at non-constant terms. In the example above we have

\begin{equation*} \sigma_{-2s}(n) = \sum_{d|n} d^{-2s} = \prod_{p^a \| n}\left(1+ p^{-2s} + \cdots + p^{-2as}\right) \end{equation*}

so there are \(L\)-functions even in the non-constant terms.

Subsubsection 1.10.4 Hecke Operators

The natural setting in which to view these is \(\GL_2(\QQ_p)\text{.}\) But as we haven't done this yet, we will take the path that Hecke took and just write down a formula. They act on the space of modular forms.

Definition 1.57 Slash operators

Let \(k\in \ZZ_+\) fixed, \(\gamma \in \GL_2^+(\RR)\) (positive determinant).

\begin{equation*} f|_{\gamma} (z) = \det(\gamma)^{k/2} (cz+d)^{-k} f\left(\frac{az+b}{cz+d}\right)\text{.} \end{equation*}

There is a determinant twist so that the center acts trivially.

Definition 1.58 \(T_{\gamma}\)

Let \(\gamma\in \GL_2^+(\QQ)\) write

\begin{equation*} \SL_2(\ZZ) \gamma\SL_2(\ZZ) = \bigsqcup_{i=1}^r \SL_2(\ZZ) \gamma_i \end{equation*}

then

\begin{equation*} T_\gamma(f) = \sum_{i=1}^r f|_{\gamma_i}(z)\text{.} \end{equation*}
\begin{equation*} \gamma = \begin{pmatrix}p\amp 0 \\ 0 \amp 1 \end{pmatrix} \end{equation*}
\begin{equation*} \SL_2(\ZZ)\begin{pmatrix}p\amp 0 \\ 0 \amp 1 \end{pmatrix} \SL_2(\ZZ) = \bigsqcup_{b \pmod p} \SL_2(\ZZ) \begin{pmatrix}1\amp b \\ 0 \amp p \end{pmatrix} \bigsqcup \SL_2(\ZZ) \begin{pmatrix}p\amp 0 \\ 0 \amp 1 \end{pmatrix}\text{.} \end{equation*}
More classically
\begin{equation*} T_p(f)(z) = p^{-k/2} \sum_{b\pmod p} f\left(\frac{ z+b}{p}\right) + p^{k/2} f(pz)\text{.} \end{equation*}

This differs from other references by a normalization involving \(\det\text{;}\) this won't change too much but will shift the spectrum. More generally we have

\begin{equation*} T_n(f)(z) = \sum_{ad = n,\,b\pmod {d} } n^{k/2} d^{-k} f\left(\frac{az+ b}{d}\right)\text{.} \end{equation*}

By the Fourier expansion

\begin{equation*} T_nf(z) = \sum_{ad= n,\, b\pmod d} \left(\frac ad\right)^{k/2} f \left( \frac{az+b}{d} \right) \end{equation*}
\begin{equation*} = \sum_{ad= n,\, b\pmod d} \left(\frac ad\right)^{k/2} \sum_{m=1}^\infty a_m e\left( \frac{maz}{d} \right) e\left( \frac{mb}{d} \right) \end{equation*}
\begin{equation*} = \sum_{ad= n,\, b\pmod d} \left(\frac ad\right)^{k/2} d \sum_{m=1}^\infty a_{md} e\left( maz\right)\text{.} \end{equation*}

Which implies

\begin{equation*} n^{1-k/2} \lambda(n) f(z) = \sum_{ad= n} \left( \frac ad \right)^{k/2} d \sum_{m=1}^\infty a_{md} e(maz) \end{equation*}

so

\begin{equation*} n^{1-k/2} \lambda(n) a_m = \sum_{ad= n,\, a|m} \left( \frac ad \right)^{k/2} d\, a_{\frac{md}{a}} \end{equation*}

exercise: check. Take \(m = 1\) so

\begin{equation*} n^{1-k/2} \lambda(n) a_1 = n^{-k/2 + 1} a_n \end{equation*}

hence

\begin{equation*} \lambda(n) a_1 = a_n\text{.} \end{equation*}
  1. If \(a_1 = 0\) then \(f \equiv 0\text{.}\)
  2. If \(a_1 = 1\) then \(\lambda(n) = a_n\text{.}\)
  3. Follows from 4.
  4. Note
    \begin{equation*} (p^r)^{1-k/2} \lambda(p^r) \lambda(p) = \sum_{ad = p^r,\, a|p} \lambda\left(\frac{pd}{a} \right) \left( \frac ad\right)^{k/2} d \end{equation*}
    \begin{equation*} = \lambda(p^{r+1}) (p^r)^{1-k/2} + \lambda(p^{r-1}) (p^r)^{1-k/2} p^{k-1} \end{equation*}
    so \(\lambda(p)\lambda(p^r) = \lambda(p^{r+1}) + p^{k-1}\lambda(p^{r-1})\text{,}\) which is equivalent to
    \begin{equation*} \left(\sum_{r=0}^\infty \frac{\lambda(p^r)}{p^{rs}}\right) \left(1-\frac{\lambda(p)}{p^s} + \frac{p^{k-1}}{p^{2s}}\right) = 1\text{.} \end{equation*}

This is very special to \(\GL_2\text{;}\) in general Fourier coefficients carry more information than Hecke eigenvalues.
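These relations can be tested on \(\Delta\) (weight \(12\text{,}\) \(\lambda(n) = \tau(n)\)). A sketch in Python, computing \(\tau\) from the product expansion \(\Delta = q\prod(1-q^n)^{24}\) (the truncation at \(q^{60}\) is ad hoc):

# check a_p a_m = a_{pm} + p^{k-1} a_{m/p} (second term only when p | m)
def delta_coeffs(N):
    # coefficients of q * prod_{n >= 1} (1 - q^n)^24 up to q^N
    P = [0] * (N + 1)
    P[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):                # multiply by (1 - q^n), in place
            for i in range(N, n - 1, -1):
                P[i] -= P[i - n]
    return [0] + P[:N]                     # multiplying by q shifts indices

tau = delta_coeffs(60)
p, k = 2, 12
for m in [3, 5, 6]:
    lhs = tau[p] * tau[m]
    rhs = tau[p * m] + (p**(k - 1) * tau[m // p] if m % p == 0 else 0)
    print(m, lhs == rhs)                   # True, True, True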

Remark 1.63

With the normalisation

\begin{equation*} T_n(f)(z) = \lambda(n) n^{1-k/2} f(z) \end{equation*}

the Ramanujan conjecture reads \(\lambda(n) = O(n^{(k-1)/2 + \epsilon})\text{.}\)

Remark 1.64

Having \(a_1 = 1\) is known as being Hecke normalised.

Subsection 1.11 Rankin-Selberg method

This is a prototype of the integral representation of automorphic \(L\)-functions.

Subsubsection 1.11.1 Mellin transforms of automorphic forms and automorphic \(L\)-functions

Let

\begin{equation*} \phi(z) = \sum_{n\in \ZZ} a_n(y) e(nx) \end{equation*}

then

\begin{equation*} a_n(y) = \int_0^1 \phi(z) \overline {e(nx)} \diff x\text{.} \end{equation*}

Notation:

\begin{equation*} \tilde a_n(s) = \int_0^\infty a_n(y) y^s \diff^* y,\,\diff^* y = \frac{\diff y}{y} \end{equation*}

converges for \(\Re(s) \gg 0\) if \(a_n(y) = O(y^{-N})\) for all \(N\text{.}\)

Definition 1.66 Mellin transforms

Given

\begin{equation*} f(y) \colon \RR_+ \to\CC \end{equation*}

its Mellin transform is

\begin{equation*} \hat f (s) = \int_0^\infty f(y) y^s \frac{\diff y}{y},\, f(y) = O(y^{-N})\text{.} \end{equation*}

If

\begin{equation*} f(y) = g(Q(n) y) \end{equation*}
\begin{equation*} \hat f(s) = \int_0^\infty g(Q(n)y)y^s \frac{\diff y } y = \int_0^\infty g(y) \frac{y^s}{Q(n)^s} \frac{\diff y}{y} = \frac{1}{Q(n)^s} \hat g(s)\text{.} \end{equation*}

What is

\begin{equation*} \int_{\SL_2(\ZZ) \backslash \HH} \phi(z) E_3( z; s)\diff \mu(z)\text{?} \end{equation*}

The Eisenstein series is essentially

\begin{equation*} \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^s \end{equation*}

we can see that, integrating over \(\SL_2(\ZZ) \backslash \HH\) against a sum over \(\Gamma_\infty\backslash \SL_2(\ZZ)\text{,}\) things should combine to give us an integral over \(\Gamma_\infty \backslash \HH\text{,}\) a strip! So this unfolding should simplify things.

Follow your nose!

Recall

\begin{equation*} E_3(z; s) = \frac{\pi^{-s}}{2} \Gamma(s) \zeta(2s) \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^s\text{.} \end{equation*}

Step 1: The integral converges: Writing \(E\) for \(E_3\) we have

\begin{equation*} E(z; s) = O(y^{\Re(s)} + y^{1-\Re(s)})\text{.} \end{equation*}

Step 2: Unfold

\begin{gather*} \frac{\pi^{-s}}{2} \Gamma(s) \zeta(2s) \int_{\SL_2(\ZZ)\backslash \HH} \phi(z) \left( \sum_{\Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^s \right) \diff \mu(z)\\ =\frac{\pi^{-s}}{2} \Gamma(s) \zeta(2s) \int_{\SL_2(\ZZ)\backslash \HH} \left( \sum_{\Gamma_\infty \backslash \SL_2(\ZZ)}\phi(\gamma z) \Im(\gamma z)^s \right) \diff \mu(z)\\ =\pi^{-s} \Gamma(s) \zeta(2s) \int_{\Gamma_\infty\backslash \HH} \phi(z) y^s \diff \mu(z)\\ =\pi^{-s} \Gamma(s) \zeta(2s)\int_0^1 \int_{0}^\infty \phi(z) y^s \frac{\diff x \diff y}{y^2}\\ =\pi^{-s} \Gamma(s) \zeta(2s)\int_{0}^\infty a_0(y) y^s \frac{\diff y}{y^2}\\ =\pi^{-s} \Gamma(s) \zeta(2s)\tilde a_0(s-1)\text{.} \end{gather*}

Note that if we use a cusp form for \(\phi\) we get 0 from the integral above: in \(L^2\) the cusp forms and Eisenstein series are orthogonal. Instead we will cook up something interesting from two functions.

Subsubsection 1.11.2 Rankin-Selberg \(L\)-functions

Let

\begin{equation*} f(z) = \sum_{n=0}^\infty a_ne(nz) \end{equation*}
\begin{equation*} g(z) = \sum_{n=0}^\infty b_ne(nz) \end{equation*}

be holomorphic modular forms of weight \(k\text{.}\)

Assume that at least one of \(f\) or \(g\) is cuspidal. Assume additionally that \(f,g\) are normalised Hecke eigenforms, so \(a_1 = b_1 = 1\text{.}\)

Definition 1.69
\begin{equation*} \phi(z) = f(z) \overline g(z)y^k\text{.} \end{equation*}
Note 1.70

\(\phi(\gamma z) = \phi(z)\) for any \(\gamma \in \SL_2(\ZZ)\text{.}\) \(\phi\) also satisfies the decay condition.

Note 1.71

If \(f = \sum a_n e(nz)\text{,}\) \(g = \sum b_n e(nz)\) then

\begin{equation*} f(z)\overline g(z) = \sum_{m \ne n} a_n \overline b_m e((n-m)x) e^{-2\pi(n+m)y} + \sum_{n} a_n \overline b_n e^{-4\pi n y} \end{equation*}

so if we were to integrate this from 0 to 1 \(\diff x\) the first term would disappear and we would be left with the second, i.e.

\begin{equation*} \phi_0(y) = \int_0^1 \phi(x+iy) \diff x = \sum_{n} a_n \overline b_n y^k e^{-4\pi ny}\text{.} \end{equation*}
Note 1.72
\begin{equation*} \tilde \phi_0(s) = \int_0^\infty \sum_{n=1}^\infty a_n \overline b_n e^{-4 \pi n y} y^{k+s}\frac{\diff y}{y} \end{equation*}
\begin{equation*} = \sum_{n=1}^\infty a_n\overline b_n\int_0^\infty e^{-4 \pi n y} y^{k+s}\frac{\diff y}{y} \end{equation*}
\begin{equation*} = \sum_{n=1}^\infty \frac{ a_n\overline b_n}{(4\pi n)^{k+s}}\int_0^\infty e^{-y} y^{k+s}\frac{\diff y}{y} \end{equation*}
\begin{equation*} = \frac{\Gamma(k+s)}{(4\pi)^{k+s}}\underbrace{\sum_{n=1}^\infty\frac{ a_n\overline b_n}{n^{k+s}}}_{L(s+k; f\times \overline g)} \end{equation*}

this is the Rankin-Selberg \(L\)-function. So by Corollary 1.68, \(L(s+k; f\times \overline g)\) has analytic continuation and a functional equation, with poles only at \(s= 1+k\) and \(s=k\text{.}\)
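The Gamma-integral used in the last step is a standard Mellin transform and is easy to confirm numerically (a sketch; mpmath assumed, test values arbitrary):

from mpmath import mp, mpf, quad, exp, pi, gamma, inf

mp.dps = 20
n, k, s = 3, 12, mpf('2.5')
# int_0^infty e^{-4 pi n y} y^{k + s} dy / y = Gamma(k + s) / (4 pi n)^{k + s}
print(quad(lambda y: exp(-4 * pi * n * y) * y**(k + s - 1), [0, inf]))
print(gamma(k + s) / (4 * pi * n)**(k + s))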

An application

We proved that a cusp form \(f(z) = \sum a_ne(nz)\) has \(a_n = O(n^{k/2})\text{;}\) Ramanujan predicts \(a_n = O(n^{(k-1)/2})\text{.}\) As cusp forms often appear in error terms in counting arguments, knowing such bounds gives us many results: it tells us we can count using just the Eisenstein series. The averaged version of the Ramanujan conjecture, bounding

\begin{equation*} \sum_{n\lt X} a_n\text{,} \end{equation*}

is much easier.

Recall Proposition 1.67 and moreover that

\begin{equation*} E(z; s) = \pi^{-s} \Gamma(s) \zeta(2s) \frac 12 \sum_{\gamma \in \Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^s = \frac{\pi^{-s } \Gamma(s)}{2} \sum_{(m,n) \in \ZZ^2 \smallsetminus (0,0)} \frac{y^s}{|mz+n|^{2s}}\text{,} \end{equation*}

from this and Note 1.71 we conclude.

\begin{equation*} \int_{\SL_2(\ZZ) \backslash \HH} f(z) \overline g(z) y^k E(z; s) \diff \mu (z) \end{equation*}
\begin{equation} = \frac{\pi^{-s} \zeta(2s) \Gamma(s) \Gamma(s+ k-1)}{(4\pi)^{s+k-1}} \sum_{n=1}^\infty \frac{a_n \overline b_n}{n^{s+k-1}}\label{eqn-conclusion-rankin-selberg}\tag{1.5} \end{equation}

(1.5) has analytic continuation as a function of \(s\) to all \(s\in \CC\text{.}\) It has at most simple poles, and its residue at \(s =1\) is:

\begin{equation*} \frac 12 \pair{f}{g}_{\Pet}\text{.} \end{equation*}

If we let

\begin{equation*} L(s,f\times \overline g) = \zeta(2(s-k+1)) \sum_{n=1} ^\infty \frac{a_n \overline b_n}{n^s} \end{equation*}
\begin{equation*} \Lambda(s,f\times \overline g) = (2\pi)^{-2s} \Gamma(s) \Gamma(s-k+1) L(s,f\times \overline g) \end{equation*}

so

\begin{equation*} \Lambda(s, f\times \overline g) = \Lambda(2k-1-s, f\times \overline g)\text{;} \end{equation*}

this follows from Theorem 1.54, i.e. \(E(z; s) = E(z; 1-s)\text{.}\)

\(\Lambda(s,f\times \overline g)\) has analytic continuation to \(s\in \CC\) with poles at most at \(s=k,\,s=k-1\text{.}\)

\begin{equation*} \Res_{s=k} \Lambda(s,f\times g) = \frac{1}{2\pi^{k-1}} \pair fg_{\Pet}\text{.} \end{equation*}

This is analogous to when we took the Mellin transform of the theta function. We have obtained some highly nontrivial information above:

Remark 1.73

An arbitrary series of the form

\begin{equation*} L(s) = \sum_{n=1}^\infty \frac{a_n}{n^s},\quad |a_n| = O(n^\alpha)\text{,} \end{equation*}

converges for \(\Re(s) \gt \alpha + 1\text{.}\) As these coefficients often come from point counts, they are in general polynomially bounded.

Recall the Hecke bound

\begin{equation*} f(z) = \sum_{n=1}^\infty a_n e(nz) \end{equation*}

a cuspidal (Hecke eigen)form of weight \(k\) has \(a_n = O(n^{k/2})\text{,}\) while Ramanujan (Deligne) gives \(a_n =O(n^{(k-1)/2})\text{.}\) Hecke gives \(a_n\overline b_n = O(n^k)\text{,}\) so \(\sum_{n=1}^\infty \frac{a_n\overline b_n}{n^s}\) converges for \(\Re(s) \gt k+1\text{;}\) Deligne gives \(a_n\overline b_n = O(n^{k-1})\text{,}\) so it converges for \(\Re(s) \gt k\text{.}\) But the Rankin-Selberg theory above already gives us this convergence, with no input from Deligne's theorem: highly nontrivial!
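To see these bounds in action, here is a small numerical illustration (a sketch of our own, not from the lecture) for \(f = g = \Delta\text{,}\) \(k = 12\text{,}\) whose coefficients are the Ramanujan \(\tau\) function:

\begin{verbatim}
# Compute tau(n) for n <= N from Delta = q * prod_{m>=1} (1 - q^m)^24,
# then compare |tau(n)| with the Hecke exponent n^{k/2} = n^6 and the
# Ramanujan/Deligne exponent n^{(k-1)/2} = n^{5.5}.
N = 100
coeffs = [1] + [0] * N                  # power series of prod (1 - q^m)^24
for m in range(1, N + 1):
    for _ in range(24):                 # multiply by (1 - q^m), 24 times
        for i in range(N, m - 1, -1):
            coeffs[i] -= coeffs[i - m]
tau = {n: coeffs[n - 1] for n in range(1, N + 1)}   # extra factor of q
assert tau[1] == 1 and tau[2] == -24                # known values
for n in (10, 50, 100):
    print(n, tau[n], abs(tau[n]) / n**6, abs(tau[n]) / n**5.5)
\end{verbatim}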

Remark 1.74

Take \(f,g\) to be normalized Hecke eigenforms, and let

\begin{equation*} 1-\frac{a_p}{p^s} + \frac{1}{p^{2s -k +1}} = (1-\frac{\alpha_1}{p^s})(1-\frac{\alpha_2}{p^s}) \end{equation*}
\begin{equation*} 1-\frac{b_p}{p^s} + \frac{1}{p^{2s -k +1}} = (1-\frac{\beta_1}{p^s})(1-\frac{\beta_2}{p^s}) \end{equation*}

then

\begin{equation*} L(s,f\times \overline g) = \prod_p \prod_{1\le i,j\le 2}(1- \frac{\alpha_i \overline \beta_j}{p^s})^{-1}\text{.} \end{equation*}

Prove this.
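One can at least check the identity numerically, \(p\)-locally for \(f = g = \Delta\) (\(k = 12\text{,}\) \(a_p = \tau(p)\)) at \(p = 2\text{.}\) The following sketch (our own, with an arbitrary real test value of \(s\) in the region of convergence) compares the \(p\)-factor of \(\zeta(2(s-k+1))\sum |a_n|^2 n^{-s}\) with \(\prod_{i,j}(1-\alpha_i\overline\alpha_j p^{-s})^{-1}\text{:}\)

\begin{verbatim}
import cmath

k, p, s, ap = 12, 2, 13.0, -24          # ap = tau(2); s in convergence region
disc = cmath.sqrt(ap*ap - 4 * p**(k - 1))
alphas = [(ap + disc) / 2, (ap - disc) / 2]   # Satake parameters at p

t = [1, ap]       # tau(p^m) via tau(p^{m+1}) = ap tau(p^m) - p^{k-1} tau(p^{m-1})
for _ in range(60):
    t.append(ap * t[-1] - p**(k - 1) * t[-2])

lhs = sum(abs(tm)**2 * p**(-m * s) for m, tm in enumerate(t))
lhs /= (1 - p**(2*k - 2 - 2*s))         # p-factor of zeta(2(s - k + 1))
rhs = 1.0
for a in alphas:
    for b in alphas:
        rhs /= (1 - a * b.conjugate() * p**(-s))
print(lhs, rhs.real)                    # should agree
\end{verbatim}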

Applications

If one proves the prime number theorem via the non-vanishing of the \(\zeta\) function on the line \(\Re(s) = 1\text{,}\) one uses a curious-looking identity about sines and cosines being positive. That positivity really comes from a Rankin-Selberg product.
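The identity in question is the elementary inequality

\begin{equation*} 3 + 4\cos\theta + \cos 2\theta = 2(1+\cos\theta)^2 \ge 0\text{,} \end{equation*}

which feeds into the classical bound \(\zeta(\sigma)^3 |\zeta(\sigma+it)|^4 |\zeta(\sigma+2it)| \ge 1\) for \(\sigma \gt 1\text{.}\)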

Remark 1.76

Rankin-Selberg proves positivity, for the same reason that

\begin{equation*} z\in \CC \implies |z|^2 \ge 0\text{.} \end{equation*}

Ramanujan on average:

\begin{equation*} \sum_{n\lt X } a_n = O\left(X^{(k+1)/2}\right) \end{equation*}

this is equivalent (up to \(\epsilon\text{,}\) by partial summation) to the statement that

\begin{equation*} L(s) = \sum_{n=1}^\infty \frac{a_n}{n^s} \end{equation*}

converges for \(\Re(s) \gt (k+1)/2\text{.}\) Let

\begin{equation*} f(z) = \sum_{n=1}^\infty a_n e(nz) \end{equation*}

be a cusp form of weight \(k\text{.}\)

Consider

\begin{equation*} D(s) = \sum_{n=1}^\infty \frac{|a_n|^2}{n^s}\text{,} \end{equation*}

Hecke implies this converges for \(\Re(s) \gt k+1\text{.}\)

Note 1.77
\begin{equation*} D(s) = \frac{L(s, f\times \overline f)}{\zeta(2(s-k+1))} \end{equation*}

converges for \(\Re(s) \gt k\) (by Landau's theorem: a Dirichlet series with nonnegative coefficients converges up to its first singularity, and the analytic continuation above is regular for \(\Re(s) \gt k\) apart from the pole at \(s = k\)).

Now observe that for any \(\lambda \gt 0\)

\begin{equation*} |a_n| \le \max \left\{n^\lambda, \frac{|a_n|^2}{n^\lambda}\right\}\text{.} \end{equation*}

So

\begin{equation*} \sum_{n=1}^\infty \frac{|a_n|}{n^s} \le \sum_{n=1}^\infty \max \left\{ \frac{1}{n^{s-\lambda}}, \frac{|a_n|^2}{n^{s+\lambda}}\right\} \le \sum_{n=1}^\infty \left( \frac{1}{n^{s-\lambda}} + \frac{|a_n|^2}{n^{s+\lambda}}\right) \end{equation*}

choose \(\lambda = (k-1)/2\text{,}\) so this is at most

\begin{equation*} \sum_{n=1}^\infty \frac{1}{n^{s-(k-1)/2}} + \sum_{n=1}^\infty \frac{|a_n|^2}{n^{s+(k-1)/2}} \end{equation*}

which converges for \(s\gt (k+1)/2\text{.}\)
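Indeed both pieces give the same threshold:

\begin{equation*} s - \frac{k-1}{2} \gt 1 \iff s \gt \frac{k+1}{2}, \qquad s + \frac{k-1}{2} \gt k \iff s \gt \frac{k+1}{2}\text{,} \end{equation*}

using Note 1.77 for the second series. This is exactly Ramanujan on average.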

Question 1.78

Fix \(\diff \mu(z) = \frac{\diff x \diff y}{y^2}\) on \(\HH\text{.}\) What is \(\vol(\SL_2(\ZZ) \backslash \HH)\text{?}\) (\(\pi/3\text{?}\)) What about other \(\Gamma\text{?}\)

Naive observation:

\begin{equation*} \int \frac 12 \diff \mu(z) = \frac{\vol}{2} \end{equation*}

and

\begin{equation*} \frac 12 = \Res_{s=1 } E(z; s) \end{equation*}

so

\begin{equation*} \frac{\vol}{2} = \int_{\SL_2\backslash \HH} \Res_{s=1} E(z; s) \diff \mu(z) ``=" \Res_{s=1} \int_{\SL_2(\ZZ) \backslash \HH} E(z; s) \diff \mu(z) \end{equation*}

but the right hand side does not converge (exercise, check).

Problem: the naive idea doesn't work: \(\int_{\SL_2(\ZZ)\backslash \HH} E(z; s) \diff \mu(z)\) converges only for \(0 \lt \Re(s) \lt 1\text{,}\) but then we can't unfold because the series defining \(E(z; s)\) does not converge for \(0\lt \Re(s) \lt 1\text{.}\) We must target the source of this divergence, the constant term of the Eisenstein series:

\begin{equation*} \int_1^\infty \left(* y^s + * y^{1-s}\right) \frac{\diff y}{y^2} \end{equation*}

we will truncate the Eisenstein series.

Subsubsection 1.11.3 Applications

First aim of the day: Calculate

\begin{equation*} \vol(\SL_2(\ZZ)\backslash\HH,\,\diff \mu(z) = \frac{\diff x \diff y}{y^2}) \end{equation*}

Idea: Use the pole of \(E(z; s)\) at \(s= 1\) and unfolding.

\begin{equation*} E(z; s) = \frac{\pi^{-s} \Gamma (s)\zeta(2s)}{2} \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^{s} \end{equation*}
\begin{equation*} \Res_{s=1} E(z; s) = \frac 12\text{.} \end{equation*}

Idea:

\begin{equation*} \Res_{s=1} \int_{\SL_2(\ZZ) \backslash \HH} E(z; s) \diff \mu(z) = \frac{\vol}{2}\text{.} \end{equation*}

Problem: Constant term of \(E(z; s) \sim y^s + y^{1-s}\text{.}\)

\begin{equation*} \int_{1}^\infty y^s + y^{1-s} \frac{\diff y}{y^2} \end{equation*}

converges only if \(0\lt \Re(s) \lt 1\text{.}\) This approach needs modification; we will look at two such modifications.

  1. Sharp cut-off,
  2. Smooth cut-off.
1 Sharp cut-off

For sharp cut off we will fix some \(T \gt 0\) and only consider \(y\lt T\text{.}\) Setting

\begin{equation*} y_T(z) = \begin{cases} y \amp y\lt T\\ 0 \amp y\ge T \end{cases} \end{equation*}

using this

\begin{equation*} E_T(z; s) = \frac{\pi^{-s} \Gamma (s)\zeta(2s)}{2} \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} y_T(\gamma z)^{s}\text{.} \end{equation*}

Observations:

\begin{equation*} \Im(\gamma z) = \frac{y}{|cz+ d| ^2 } = \frac{ y}{(cx+d)^2 + (cy)^2} \end{equation*}
\begin{equation*} \le \max\left\{ \frac{y}{d^2}, \frac{1}{c^2 y}\right\} \end{equation*}

so for \(z\) in a compact set (or in the fundamental domain \(\mathcal F\text{,}\) where \(y \ge \sqrt 3/2\)) and \(\gamma\) with \(c \ne 0\text{,}\)

\begin{equation*} \Im(\gamma z) \le T_0 \end{equation*}

for some \(T_0\text{.}\)

(Unfold), recall

\begin{equation*} \Gamma_\infty = \left\{ \begin{pmatrix} 1 \amp n \\ \amp 1 \end{pmatrix}\in\SL_2(\ZZ)\right\} \end{equation*}

so

\begin{equation*} \int_{\SL_2(\ZZ) \backslash \HH} E_T(z; s) \diff \mu(z) = \frac{\pi^{-s} \Gamma(s) \zeta(2s)}{2} \int_{\SL_2(\ZZ) \backslash \HH} \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} y_T(\gamma z)^{s} \diff \mu(z) \end{equation*}
\begin{equation*} = \pi^{-s} \Gamma(s) \zeta(2s) \int_{\Gamma_\infty \backslash \HH} y_T(z)^s \diff \mu(z) \end{equation*}
\begin{equation*} = \pi^{-s} \Gamma(s) \zeta(2s) \int_0^1\int_0^T y^{s} \frac{\diff x \diff y}{y^2} \end{equation*}
\begin{equation*} = \pi^{-s} \Gamma(s) \zeta(2s) \frac{T^{s-1}}{s-1} \end{equation*}

(unfolding for \(\Re(s) \gt 1\text{;}\) the \(\frac12\) is absorbed because \(\gamma\) and \(-\gamma\) give the same point of \(\HH\)).

There is a huge generalization of this lemma by Langlands that allows him to calculate a lot of volumes.

Recall

\begin{equation*} \Im(\gamma z) = \frac{ y}{((cx+d)^2 + (cy)^2)}\text{.} \end{equation*}

Case 1: \(c \ne 0 \) implies \(\Im( \gamma z) \le \frac{1}{c^2 y} \le T\) for all \(z\in \mathcal F\) (for \(T\) large enough, since \(y \ge \sqrt 3/2\) on \(\mathcal F\)).

Case 2: \(c = 0 \) implies \(\Im( \gamma z) = \frac{y}{d^2 } \)

\begin{equation*} \gamma = \begin{pmatrix} a \amp b\\ 0 \amp d\end{pmatrix} \in \SL_2(\ZZ) \end{equation*}

implies \(ad = 1\) so \(a= d= 1\) or \(a=d= -1\text{,}\) hence \(\Im(\gamma z) = y\text{.}\) Therefore, for \(z \in \mathcal F\) and \(T\) large,

\begin{equation*} E_T(z; s) = \frac{\pi^{-s} \Gamma (s)\zeta(2s)}{2} \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} y_T(\gamma z)^{s} \end{equation*}
\begin{equation*} = E(z; s) - \frac{\pi^{-s} \Gamma (s)\zeta(2s)}{2} \sum_{ d = \pm 1} \left( y^s - y_T(z)^{s}\right) \end{equation*}
\begin{equation*} = \begin{cases} E(z; s) \amp y\lt T\\ E(z; s) - \pi^{-s} \Gamma (s)\zeta(2s) y^s \amp y\ge T \end{cases} \end{equation*}
Remark 1.82

\(E(z; s) - E_T(z; s)\) for fixed \(z\) is holomorphic as a function of \(s\) at \(s =1\) (the only potentially singular factor, \(\zeta(2s)\text{,}\) has its pole at \(s = \frac12\)). So

\begin{equation*} \int_{\SL_2(\ZZ) \backslash \HH} \Res_{s=1}( E(z; s) - E_T(z; s) ) \diff \mu(z) = 0 \end{equation*}
\begin{equation*} \int_{\SL_2(\ZZ) \backslash \HH} \frac 12 -\Res_{s=1}( E_T(z; s) ) \diff \mu(z) = 0 \end{equation*}
\begin{equation*} \int_{\SL_2(\ZZ) \backslash \HH} \Res_{s=1}( E_T(z; s) ) \diff \mu(z) = \frac 12 \vol(\SL_2(\ZZ) \backslash \HH) \end{equation*}
\begin{equation*} \Res_{s=1}\int_{\SL_2(\ZZ) \backslash \HH} E_T(z; s) \diff \mu(z) = \frac 12 \vol(\SL_2(\ZZ) \backslash \HH) \end{equation*}
\begin{equation*} \Res_{s=1}\pi^{-s} \Gamma (s)\zeta(2s) \frac{T^{s-1}}{s-1} = \pi^{-1}\Gamma(1)\zeta(2) = \frac 1 \pi \frac{\pi^2}{6} = \frac \pi 6 = \frac 12 \vol(\SL_2(\ZZ) \backslash \HH) \end{equation*}

hence

\begin{equation*} \vol(\SL_2(\ZZ) \backslash \HH) = \frac {\pi}{3}\text{.} \end{equation*}
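As a sanity check, integrating directly over the standard fundamental domain \(\mathcal F = \{z \in \HH : |x| \le \frac12,\ |z| \ge 1\}\) gives the same answer:

\begin{equation*} \int_{\mathcal F} \frac{\diff x \diff y}{y^2} = \int_{-1/2}^{1/2} \int_{\sqrt{1-x^2}}^\infty \frac{\diff y}{y^2} \diff x = \int_{-1/2}^{1/2} \frac{\diff x}{\sqrt{1-x^2}} = 2\arcsin \frac12 = \frac\pi3\text{.} \end{equation*}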

Volumes of such domains are closely related to Tamagawa numbers, and many were computed via these methods by Langlands in the 60s.

2 Smooth cut-off

Let \(f \in \cinf_c(\RR_{\gt 0 })\text{,}\) e.g. some nice bump. Consider

\begin{equation*} \theta_f(z ) = \frac 12 \sum_{\gamma \in \Gamma_\infty \backslash \SL_2(\ZZ)} f(\Im(\gamma z)) \end{equation*}

Idea: Mellin inversion,

\begin{equation*} f(x) = \frac{1}{2\pi i} \int_{(\sigma)} \tilde f(s) x^{-s} \diff s \end{equation*}

for \(a\lt \sigma\lt b\) such that \(\tilde f(s) = \int_0^\infty y^s f(y) \frac{\diff y}{y}\) converges absolutely for \(a \lt \Re(s) \lt b\text{.}\) Here \((\sigma)\) denotes the vertical line \(\Re(s) = \sigma\) (in particular \(\sigma \in \RR\)); since \(f\) has compact support in \(\RR_{\gt 0}\text{,}\) \(\tilde f\) is in fact entire. So

\begin{equation*} \theta_f(z) = \frac12 \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} \left( \frac{1}{2\pi i} \int_{(\sigma)} \tilde f(s) \Im(\gamma z)^{-s} \diff s\right) \end{equation*}

taking \(\sigma \lt -1\) so that \(\sum_\gamma \Im(\gamma z)^{-s}\) converges on the contour (\(\Re(-s) \gt 1\)):

\begin{equation*} = \frac{1}{4\pi i} \int_{(\sigma)} \tilde f ( s) \left(\sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^{-s}\right) \diff s \end{equation*}
\begin{equation*} = \frac{1}{2\pi i} \int_{(\sigma)} \tilde f ( s) E_1(z; -s) \diff s \end{equation*}

where

\begin{equation*} E_1(z; s) = \frac12 \sum_{\gamma\in \Gamma_\infty \backslash \SL_2(\ZZ)} \Im(\gamma z)^{s}\text{.} \end{equation*}
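Before continuing, here is a quick numerical sanity check of the Mellin inversion formula used above (a sketch of our own; the test function \(f(y) = e^{-(\log y)^2}\) is not compactly supported, but its Mellin transform \(\tilde f(s) = \sqrt{\pi}\, e^{s^2/4}\) is entire and inversion still applies):

\begin{verbatim}
# Verify f(x) = (1/(2 pi i)) int_{(sigma)} ftilde(s) x^{-s} ds numerically
# for f(y) = exp(-(log y)^2); with u = log y,
# ftilde(s) = int e^{su - u^2} du = sqrt(pi) e^{s^2/4}.
import numpy as np

sigma, x = -2.0, 1.7                    # any sigma works: ftilde is entire
t = np.linspace(-40.0, 40.0, 80001)
s = sigma + 1j * t
integrand = np.sqrt(np.pi) * np.exp(s**2 / 4.0) * x**(-s)
inv = integrand.sum() * (t[1] - t[0]) / (2.0 * np.pi)   # ds = i dt
print(inv.real, np.exp(-np.log(x)**2))  # should agree closely
\end{verbatim}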

Now integrate

\begin{equation*} \int_{\SL_2(\ZZ)\backslash \HH} \theta_f(z) \diff \mu (z) = \int_{\SL_2(\ZZ)\backslash \HH} \frac{1}{2\pi i} \int_{(\sigma)} \tilde f (s) E_1(z; -s) \diff s\diff \mu(z) \end{equation*}
\begin{equation*} = \tilde f (-1) \left(\Res_{s=1} E_1(z; s)\right) \vol(\SL_2(\ZZ)\backslash \HH) \end{equation*}
\begin{equation*} + \underbrace{\frac{1}{2\pi i} \int_{(-\frac 12)} \tilde f(s) \int_{\SL_2(\ZZ)\backslash \HH} E_1(z; -s) \diff \mu(z) \diff s}_{=\pair{E_1(z; -s)}{1}_{\Pet}} \end{equation*}

(shifting the contour from \(\sigma \lt -1\) to \(\sigma = -\frac 12\text{,}\) picking up the pole of \(E_1(z; -s)\) at \(s = -1\)). The rightmost term is 0, as we have a decomposition

\begin{equation*} L^2(\SL_2(\ZZ)\backslash \HH)= 1\oplus \int_{(\frac 12)} E(z; s) \diff s \oplus \text{cusp form}\text{.} \end{equation*}

We'll do this by hand here, i.e.

\begin{equation*} \int_{\SL_2(\ZZ)\backslash \HH} \theta_f(z) \diff \mu (z) =\tilde f (-1) \underbrace{\left(\Res_{s=1} E_1(z; s)\right)}_{=3/\pi} \vol(\SL_2(\ZZ)\backslash \HH) \end{equation*}
\begin{equation*} + \frac{1}{2\pi i}\int_{(-\frac12)} \tilde f(s) \int_{\SL_2(\ZZ)\backslash \HH} E_1(z; -s) \diff \mu(z) \diff s\text{.} \end{equation*}

Exercise.

The lemma implies that for all \(f \in \cinf_c(\RR_{\gt 0})\) we have the following (note that unfolding directly gives \(\int_{\SL_2(\ZZ)\backslash\HH} \theta_f(z)\diff\mu(z) = \int_0^1\int_0^\infty f(y) \frac{\diff x\diff y}{y^2} = \tilde f(-1)\)):

\begin{equation} \tilde f (-1) \left(1-\frac3\pi \vol{}\right) = \frac{1}{2\pi i } \int_{(\frac12)} \tilde f (-s) \int_{\SL_2(\ZZ)\backslash\HH} E_1(z; s) \diff \mu(z) \diff s\label{eqn-volE}\tag{1.6} \end{equation}

By a change of variables we can rewrite (1.6) as

\begin{equation*} c \tilde f (-1)= \frac{1}{2\pi i}\int_{(0 )} \tilde f \left(-\frac12+s\right) I(s) \diff s \end{equation*}

for some constant \(c\text{,}\) where \(I(s)\) denotes the (shifted) inner Eisenstein integral.

  1. Take \(F(y) = y^{\frac12} f(y)\) which implies
    \begin{equation*} \tilde F(s) = \tilde f( s+ \frac 12)\text{.} \end{equation*}
    (1.6) \(\iff\)
    \begin{equation*} \frac{1}{2\pi i } \int_{(0)} \tilde F(s) I(s) \diff s = c \tilde F(-\frac32)\text{.} \end{equation*}
  2. Trick: Let \(G(y) = -y\partder{y} F + \frac32 F\) (the sign comes from integration by parts; see the computation after this list) so
    \begin{equation*} \tilde G(s) = s\tilde F(s) + \frac 32 \tilde F(s) \end{equation*}
    so
    \begin{equation*} \tilde G(-\frac 32) = 0 \end{equation*}
    implies
    \begin{equation*} \frac{1}{2\pi i} \int_{(0)} \tilde G (s) I(s) \diff s = 0 \end{equation*}
    and, since such \(\tilde G\) form a rich enough family of test functions on the line,
    \begin{equation*} I(s) \equiv 0 \end{equation*}
    whenever it converges.
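The Mellin transform computation behind the trick is integration by parts (the boundary terms vanish since \(F\) has compact support in \(\RR_{\gt 0}\)):

\begin{equation*} \int_0^\infty y^s\, y F'(y) \frac{\diff y}{y} = \Big[ y^s F(y)\Big]_0^\infty - s\int_0^\infty y^{s-1}F(y) \diff y = -s\tilde F(s)\text{,} \end{equation*}

so indeed \(\tilde G(s) = \left(s+\frac32\right)\tilde F(s)\text{,}\) which vanishes at \(s = -\frac32\text{.}\)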