Note 1.1
- Selberg trace formula only for \(\GL_2\)
- Arthur-Selberg more general
These are notes for Ali Altuğ's course MA842 at BU Spring 2018.
The course webpage is http://math.bu.edu/people/saaltug/2018_1/2018_1_sem.html.
Course overview: This course will be focused on the two papers Eisenstein Series and the Selberg Trace Formula I by D. Zagier and Eisenstein Series and the Selberg Trace Formula II by H. Jacquet and D. Zagier. Although the titles of the papers sound like one is a prerequisite of the other, this is actually not the case; the main difference is the language of the papers (the first is written in classical language whereas the second is written adelically). We will spend most of our time with the second paper, which is adelic.
Jacquet and Zagier, Eisenstein series and the Selberg Trace Formula II (1980s).
Part I is a paper of Zagier from 1970 in purely classical language. Part II is in adelic language (and somewhat incomplete).
The Arthur-Selberg trace formula is used in Langlands functoriality and the relative trace formula is used in arithmetic applications.
What does this paper do?
“It rederives the Selberg trace formula for \(\GL_2\) by a regularised process.”
The Selberg trace formula generalises the more classical Poisson summation formula.
Let \(f\) be a nice (e.g. Schwartz) function on \(\RR\text{,}\)
then Poisson summation says
\[ \sum_{n\in\ZZ} f(n) = \sum_{n\in\ZZ} \hat f(n) \]
where
\[ \hat f(\xi) = \int_\RR f(x) e(-x\xi) \diff x\text{.}\]
Notation: \(e(x) = e^{2\pi i x}\text{.}\)
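As a numerical sanity check (not from the lecture), one can test Poisson summation on the standard example \(f(x) = 1/(1+x^2)\text{,}\) whose Fourier transform under the convention \(e(x) = e^{2\pi ix}\) is \(\hat f(\xi) = \pi e^{-2\pi|\xi|}\text{;}\) the truncation points below are ad hoc.

```python
import math

# Poisson summation for f(x) = 1/(1+x^2); with e(x) = e^{2 pi i x} the
# Fourier transform is fhat(xi) = pi * e^{-2 pi |xi|}, so both sides have
# the closed form pi * coth(pi).
lhs = sum(1.0 / (1.0 + n * n) for n in range(-10000, 10001))
rhs = math.pi * sum(math.exp(-2 * math.pi * abs(n)) for n in range(-50, 51))
assert abs(rhs - math.pi / math.tanh(math.pi)) < 1e-12
assert abs(lhs - rhs) < 1e-3  # lhs is truncated; its tail is about 2/10000
```

Both sides converge to \(\pi\coth(\pi)\text{,}\) though at very different rates: the right hand side is essentially exact after three terms, which is the whole point of the formula.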
To make this look more general we make the following notational choices.
where
relating conjugacy classes on the left to automorphic forms on the right.
Arthur and Selberg prove the trace formula by a sharp cut off, Jacquet and Zagier derive this using a regularisation.
The series \(\zeta(s) = \sum_{n=1}^\infty n^{-s}\) converges absolutely for \(\Re(s) \gt 1\text{.}\)
\(\zeta(s)\) has analytic continuation to \(\Re(s) \gt 0\) with a simple pole at \(s= 1\) of residue \(1\text{,}\) i.e.
\[ \zeta(s) = \frac{1}{s-1} + \phi(s) \]
where \(\phi(s)\) is holomorphic for \(\Re(s) \gt 0\text{.}\)
Step 1: observe
\[ \frac{1}{s-1} = \int_1^\infty x^{-s} \diff x \quad (\Re(s) \gt 1)\text{.}\]
Step 2: this implies
\[ \zeta(s) - \frac{1}{s-1} = \sum_{n=1}^\infty \left( n^{-s} - \int_n^{n+1} x^{-s} \diff x \right)\text{;}\]
we denote each of the terms in the right hand sum by \(\phi_n(s)\text{.}\)
Step 3:
\[ |\phi_n(s)| = \left| \int_n^{n+1} \left( n^{-s} - x^{-s}\right) \diff x \right| \le \frac{|s|}{n^{\Re(s)+1}} \]
by applying the mean value theorem.
So \(\sum_{n=1}^\infty \phi_n\) converges absolutely and locally uniformly for \(\Re(s) \gt 0\text{.}\) Hence \(\phi = \sum_{n=1}^\infty \phi_n\) is holomorphic there.
One can push this idea to get analytic continuation to all of \(\CC\text{,}\) one strip at a time. This is an analogue of the sharp cut off method mentioned above. It's fairly elementary but somewhat unmotivated and doesn't give any deep information (like the functional equation).
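To see the continuation argument in action, here is a quick numerical check (my own; the truncation point is ad hoc) of \(\zeta(s) = \frac{1}{s-1} + \sum_n \phi_n(s)\) at \(s = 2\text{,}\) against \(\zeta(2) = \pi^2/6\text{.}\)

```python
import math

def phi_n(n, s):
    # phi_n(s) = n^{-s} - \int_n^{n+1} x^{-s} dx, with the integral in closed form
    integral = (n ** (1 - s) - (n + 1) ** (1 - s)) / (s - 1)
    return n ** (-s) - integral

s = 2.0
total = sum(phi_n(n, s) for n in range(1, 200000))
# zeta(s) = 1/(s-1) + phi(s) and zeta(2) = pi^2/6, so phi(2) = pi^2/6 - 1
assert abs(total - (math.pi ** 2 / 6 - 1.0)) < 1e-5
```

Note the terms \(\phi_n(2)\) decay like \(n^{-3}\text{,}\) much faster than the \(n^{-2}\) of the original series, which is exactly why the rearranged sum converges in the larger region.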
Introduce
\[ \theta(t) = \sum_{n\in\ZZ} e^{-\pi n^2 t} \quad (t \gt 0)\text{,}\]
note that \(\theta(t) = 1 + 2 \sum_{n=1}^\infty e^{-\pi n^2 t}\text{.}\)
Idea: use the Mellin transform and properties of \(\theta\) to derive properties of \(\zeta\text{.}\)
Property of \(\theta\text{:}\)
\[ \theta\left(\frac 1t\right) = \sqrt t\, \theta(t)\text{.}\]
Step 1: the proof of this property is the Poisson summation formula applied to \(x \mapsto e^{-\pi x^2 t}\text{.}\)
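The transformation law is easy to verify numerically; the sketch below (truncation point arbitrary) checks it at a few values of \(t\text{.}\)

```python
import math

def theta(t, N=60):
    # theta(t) = 1 + 2 * sum_{n >= 1} e^{-pi n^2 t}, truncated at n = N
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, N + 1))

# transformation law: theta(1/t) = sqrt(t) * theta(t)
for t in (0.5, 1.0, 2.3):
    assert abs(theta(1.0 / t) - math.sqrt(t) * theta(t)) < 1e-10
```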
Step 2: Would like to write something like
\[ \int_0^\infty \theta(t) t^{s/2} \frac{\diff t}{t}\text{.}\]
This integral makes no sense: \(\theta(t) \to 1\) as \(t\to\infty\) and \(\theta(t) \sim t^{-1/2}\) as \(t \to 0\text{,}\)
so no values of \(s\) will make sense for this improper integral.
Refined idea: Consider
\[ I(s) = \int_1^\infty \left(\theta(t) - 1\right) t^{s/2} \frac{\diff t}{t}\text{.}\]
Upshot: \(I(s)\) is well-defined and holomorphic for all \(s\in \CC\text{.}\)
Final step: Compute the above to see
which implies the analytic continuation and the functional equation of \(\zeta(s)\text{.}\)
Functions on the upper half plane,
Historically, elliptic integrals led to elliptic functions, modular forms and elliptic curves.
One is often interested in functions on \(\mathcal O/\Lambda\) where \(\mathcal O\) is some object and \(\Lambda\) is some discrete group. Take \(f\) a function on \(\mathcal O\) and average over \(\Lambda\) to get
\[ \sum_{\gamma \in \Lambda} f(\gamma x)\text{.}\]
If you're lucky this converges, which is good.
Weierstrass: take \(\Lambda = \omega_1 \ZZ + \omega_2\ZZ\) a lattice and define
\[ \wp(z) = \frac{1}{z^2} + \sum_{\substack{\omega \in \Lambda \\ \omega \ne 0}} \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right)\text{.}\]
Jacobi, (Elliptic integrals) consider
related by:
or
the weight \(2k\) holomorphic Eisenstein series.
Let
then
We'll take a roundabout route to automorphic forms.
Today: Classical harmonic analysis on \(\RR^n\text{.}\) Classical harmonic analysis on \(\HH\text{.}\)
The aim (in general) is to express a certain class of functions (e.g. \(\mathcal L^2\)) in terms of building blocks (harmonics).
In classical analysis the harmonics are known (\(e(nx)\)), then the question becomes how these things fit together. In number theory the harmonics are extremely mysterious. We are looking at far more complicated geometries, quotient spaces etc. and arithmetic information comes in.
\(\RR\text{,}\) \(f\colon \RR\to \CC\text{;}\) being periodic, i.e. in \(\mathcal L^2(S^1)\text{,}\) leads to a Fourier expansion
We have a slightly different perspective.
via translations (i.e. right regular representation of \(G\) will be \(G\acts \mathcal L^2(G)\)). I.e. \(g\cdot x = x+g\text{.}\)
This makes \(\RR^2\) a homogeneous space.
\(\RR^2\) with standard metric \(\diff s^2 = \diff x^2 + \diff y^2\) is a flat space \(\kappa = 0\text{.}\)
To the metric we have the associated Laplacian (Laplace-Beltrami operator, \(\nabla\cdot\nabla\))
\[ \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\text{;}\]
we are interested in this as it is essentially the only invariant operator, and we will define automorphic forms to be eigenfunctions of this operator.
The exponential functions
\[ \phi_{u,v}(x,y) = e(ux + vy) \]
are eigenfunctions of \(\Delta\) with eigenvalue \(\lambda_{u,v} = -4\pi^2 (u^2 + v^2)\text{,}\) i.e.
\[ \Delta \phi_{u,v} = \lambda_{u,v} \phi_{u,v}\text{.}\]
These are a complete set of harmonics for \(\mathcal L^2 (\RR^2)\text{.}\) The proof is via Fourier inversion,
\[ f(x,y) = \int_{\RR^2} \hat f(u,v)\, \phi_{u,v}(x,y) \diff u \diff v \]
where
\[ \hat f(u,v) = \int_{\RR^2} f(x,y)\, \overline{\phi_{u,v}(x,y)} \diff x \diff y\text{.}\]
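The eigenvalue \(\lambda_{u,v} = -4\pi^2(u^2+v^2)\) can be confirmed by a finite-difference computation; the particular \((u,v)\text{,}\) test point, and step size below are arbitrary choices.

```python
import cmath, math

def phi(u, v, x, y):
    # phi_{u,v}(x,y) = e(ux + vy)
    return cmath.exp(2j * math.pi * (u * x + v * y))

def laplacian(f, x, y, h=1e-4):
    # central-difference approximation to f_xx + f_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / (h * h)

u, v, x0, y0 = 1.5, -0.7, 0.3, 0.9
num = laplacian(lambda a, b: phi(u, v, a, b), x0, y0)
expected = -4 * math.pi ** 2 * (u ** 2 + v ** 2) * phi(u, v, x0, y0)
assert abs(num - expected) < 1e-3
```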
We could have established the spectral resolution ( of \(\Delta\)) by considering invariant integral operators.
Using the spectral theorem: if we can find easier-to-diagonalise operators that commute with \(\Delta\text{,}\) we can find their eigenspaces and use them to cut down the eigenspaces of \(\Delta\text{.}\)
Recall: an integral operator is
\[ (Lf)(x) = \int K(x,y) f(y) \diff y\text{;}\]
invariant means
\[ L \circ T_g = T_g \circ L \text{ for all } g\text{,}\]
in our case \((T_g f)(x) = f(x+g)\text{.}\)
If \(L\) is invariant then the kernel \(K(x,y)\) is given by
\[ K(x,y) = K_0(x-y) \]
for some function \(K_0\text{.}\)
\((\Leftarrow)\) obvious
\((\Rightarrow)\) Suppose \(L\) is invariant then
implies
so
which implies, with some work, that
so
Invariant integral operators commute with each other
after change of variables
\(L\) commutes with \(\Delta\text{.}\)
Based on the following:
which implies
which via integration by parts is
\(\phi_{u,v}(x,y)\) is an eigenfunction of \(L\text{,}\) \((u,v)\in\RR^2, (x,y) \in \RR^2\text{.}\)
after the change of variable \(w_i \mapsto - w_i + z_i\)
i.e.
Side remark: these are enough to form a generating set.
Let's consider integral operators on functions on \(\ZZ^2 \backslash \RR^2 = \mathbf T^2\text{.}\)
Observe: \(L\leadsto K(x,y) = K_0(x-y)\text{.}\)
now \(\mathbf K\) is a function on \(\mathbf T^2 \times \mathbf T^2\text{.}\)
Trace of this operator: integrating the kernel along the diagonal,
\[ \int_{\mathbf T^2} \mathbf K(x,x) \diff x = \sum_{n\in\ZZ^2} K_0(n)\text{.}\]
Using the sum of eigenvalues instead: the \(\phi_{u,v}\) with \((u,v)\in\ZZ^2\) are eigenfunctions with eigenvalues \(\hat K_0(u,v)\text{,}\)
so the trace is
\[ \sum_{n\in\ZZ^2} \hat K_0(n)\text{.}\]
So we get
\[ \sum_{n\in \ZZ^2} K_0(n) = \sum_{n \in \ZZ^2} \hat K_0(n)\text{,}\]
i.e. Poisson summation.
Why care about Poisson summation?
Gauss circle problem: how many lattice points are there in a circle of radius \(R\text{?}\) We can pick a radially symmetric function that is 1 on the circle and 0 outside, or at least a smooth approximation of such an indicator function. Poisson summation packages the important information into a single term, plus some rapidly decaying ones. Then we get \(\pi R^2 + \) error, and Gauss conjectured that the error is \(O(R^{1/2 + \epsilon})\text{.}\)
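A direct count (radius chosen arbitrarily) illustrates the main term. The assertion uses the rigorous elementary bound \(|N - \pi R^2| \le \pi(\sqrt 2 R + \tfrac 12)\) coming from unit squares around lattice points; the true error is conjecturally \(O(R^{1/2+\epsilon})\text{.}\)

```python
import math

def lattice_points_in_disk(R):
    # count integer points (m, n) with m^2 + n^2 <= R^2
    count = 0
    for m in range(-R, R + 1):
        w = math.isqrt(R * R - m * m)
        count += 2 * w + 1  # for fixed m, n ranges over -w, ..., w
    return count

R = 500
N_pts = lattice_points_in_disk(R)
area = math.pi * R * R
# rigorous bound |N - pi R^2| <= pi(sqrt(2) R + 1/2) < 5R here
assert abs(N_pts - area) < 5 * R
```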
Last time we gave a conceptual proof of Poisson summation (this strategy will generalise to the trace formula eventually).
To clean up one loose end: there is a generalisation of Poisson summation called Voronoi summation, which will actually be useful later. For Poisson summation we had
suppose \(K(x,y)\colon \RR^2 \to \CC\) is radially symmetric i.e.
then the Fourier transform
where
is a Bessel function of the first kind.
Prove this.
Plug this into Poisson summation
as \(K\) only depends on \(n_1^2 + n_2^2\) we group terms based on this quantity, so
where
Note that \(J_0(0) =1 \text{.}\)
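The normalisation \(J_0(0)=1\) can be checked from the standard integral representation \(J_0(x) = \frac 1\pi \int_0^\pi \cos(x \sin t) \diff t\text{;}\) the quadrature below is my own rough sketch.

```python
import math

def J0(x, M=20000):
    # trapezoidal rule for J_0(x) = (1/pi) \int_0^pi cos(x sin t) dt
    h = math.pi / M
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    s += sum(math.cos(x * math.sin(k * h)) for k in range(1, M))
    return s * h / math.pi

assert abs(J0(0.0) - 1.0) < 1e-12
assert J0(2.0) > 0 > J0(3.0)  # the first zero of J_0 lies near 2.405
```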
How is this useful? Consider the problem of counting points in a circle. Let \(K_0(x)\) be an approximation to the step function: \(1\) for \(x \le 1\) and \(0\) for \(x \gt 1\text{,}\) with \(\int K_0 = 1\text{.}\) Then
This is counting lattice points. The right hand side is then
Finally
so
So
where the lead term is the area of the circle. Finally if \(M \ne 0\) then \(f(MR^2)\) doesn't increase fast as \(R\to \infty\text{.}\) i.e. it is smaller than \(R^2\text{.}\) So as \(R\to \infty\) we find \(\#\{\text{lattice points in the circle}\} \sim\pi R^2\text{.}\)
What if we consider the same problem on the hyperbolic disk? Things are extremely different.
this gives
i.e. this is negatively curved; this is the cause of huge differences from the euclidean theory.
There is a formula for the (hyperbolic) distance between two points:
\[ \cosh \rho(z,w) = 1 + \frac{|z-w|^2}{2 \Im(z)\Im(w)}\text{.}\]
As \(w\to x\in \RR\) we have \(\rho \to \infty\text{.}\) So \(\RR\) is the boundary.
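Invariance of this distance under linear fractional transformations can be checked numerically; the matrix and points below are arbitrary, and the code uses the formula \(\cosh\rho(z,w) = 1 + |z-w|^2/(2\Im(z)\Im(w))\text{.}\)

```python
def cosh_dist(z, w):
    # cosh rho(z, w) = 1 + |z - w|^2 / (2 Im(z) Im(w))
    return 1.0 + abs(z - w) ** 2 / (2.0 * z.imag * w.imag)

def mobius(g, z):
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

g = (2.0, 1.0, 3.0, 2.0)  # determinant 2*2 - 1*3 = 1, so g is in SL_2(R)
z, w = 0.5 + 1.2j, -0.3 + 0.4j
assert abs(cosh_dist(mobius(g, z), mobius(g, w)) - cosh_dist(z, w)) < 1e-9
```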
Recall: the isoperimetric inequality
where \(L\) is the length of the boundary of a region and \(A\) is the area. Note if \(\kappa = 0\) then \(4\pi A \le L^2\text{.}\) So \(A\) can be, and often will be, as large as \(\sim L^2\text{.}\)
For \(\kappa = -1\) we have
so \(A\) can be at most (and most often will be) as large as \(\sim L\text{.}\) The upshot is that under the hyperbolic metric the area and the perimeter can be of the same size.
Things are a lot more subtle.
Another interesting setting is the tree of \(\PGL_2(\QQ_p)\text{;}\) for \(p =2\) this is a \(3\)-regular tree. How many points are there of distance less than \(R\) from a fixed point? Roughly \(2^R\text{.}\)
But how many points of distance exactly \(R\) are there? Roughly \(2^R\) again.
A hyperbolic disk of radius \(R\) centred at \(i\) would be a euclidean disk, but not centred at \(i\text{.}\) The area is \(4\pi(\sinh (R/2))^2\) and the circumference is \(2\pi\sinh(R)\text{;}\) these are roughly the same size since \(\sinh(x) = (e^x - e^{-x})/2\text{.}\) The euclidean area is far larger (roughly the square) than the hyperbolic one.
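A quick computation (radii chosen arbitrarily) shows the area and circumference really do agree up to an exponentially small relative error, since both grow like \(\pi e^R\text{.}\)

```python
import math

# area 4*pi*sinh(R/2)^2 and circumference 2*pi*sinh(R) of a hyperbolic disk
for R in (5.0, 10.0, 20.0):
    area = 4 * math.pi * math.sinh(R / 2) ** 2
    circ = 2 * math.pi * math.sinh(R)
    # area/circ = 1 - O(e^{-R})
    assert abs(area / circ - 1.0) < 3 * math.exp(-R)
ratio = area / circ  # at R = 20 the two agree to about 8 digits
```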
via linear fractional transformations, i.e.
this is the full group of holomorphic isometries of \(\HH\text{;}\) to get all isometries include \(z\mapsto -\bar z\) as well.
because \(\specialorthogonal(2) = \Stab_i(\SL_2(\RR))\text{.}\)
Cartesian: \(x+ iy\) then the invariant measure is \(\frac{\diff x\diff y}{y^2}\text{.}\)
Iwasawa: \(G = NAK\)
this is very general, an analogue of Gram-Schmidt.
but be warned that \(NA \ne AN\) elementwise.
Cartan: \(KAK\) (useful when dealing with rotationally invariant functions).
Prove these decompositions. Use the spectral theorem for symmetric matrices for the Cartan case.
We classify motions by the number of fixed points in \(\HH \cup \hat\RR\text{,}\) for \(\hat \RR\) the extended real line.
These notions are different when we consider \(\gamma \in G(\QQ)\text{:}\) something can be \(\QQ\)-elliptic but \(\RR\)-hyperbolic. This depends on the Jordan decomposition essentially; we can have such \(\gamma\) whose characteristic polynomial has no rational roots but splits over \(\RR\text{.}\)
So we have
For this section \(\Delta_\HH = \Delta\text{.}\)
We have the translation operators
A linear operator \(L\) will be called invariant if it commutes with \(T_g\) for all \(g\in \SL_2(\RR)\text{,}\) i.e.
On any Riemannian manifold \(\Delta\) can be characterised by: A diffeomorphism is an isometry iff it commutes with \(\Delta\text{.}\)
\(\Delta\) in coordinates:
Cartesian
Show that \(\Delta\) is an invariant differential operator.
Polar:
We will be interested in \(\Delta \acts\cinf(\Gamma \backslash \HH)\text{.}\)
This is a little subtle; let's take the definition of an eigenfunction to be
\[ (\Delta + \lambda) f = 0\text{.}\]
\(\Delta\) is an “elliptic” operator with real analytic coefficients. This implies any eigenfunction is real analytic.
\(\lambda = 0\) means \(f\) is harmonic.
Some basic eigenfunctions: let's try \(f(z) = f_0(y)\) independent of \(x\text{;}\)
if \(f\) satisfies
this implies \(f\) is a linear combination of \((y^s,y^{1-s})\) where \(s(1-s) = \lambda\) if \(\lambda \ne \frac 14\text{.}\)
If \(\lambda = \frac 14\) this gives \(y^{1/2}\) and \(\log (y)y^{1/2}\text{.}\) Note the symmetry! \(s \leftrightarrow 1-s\text{.}\)
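One can confirm by finite differences (step size and test point mine) that \(y^s\) is an eigenfunction of \(\Delta = y^2(\partial_x^2 + \partial_y^2)\) with \((\Delta + s(1-s))y^s = 0\text{:}\)

```python
def hyperbolic_laplacian(f, x, y, h=1e-4):
    # Delta = y^2 (d^2/dx^2 + d^2/dy^2) via central differences
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / (h * h)
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / (h * h)
    return y * y * (fxx + fyy)

s = 0.3
f = lambda x, y: y ** s
# (Delta + s(1-s)) y^s should vanish
err = hyperbolic_laplacian(f, 0.1, 2.0) + s * (1 - s) * f(0.1, 2.0)
assert abs(err) < 1e-5
```

The same check with \(s\) replaced by \(1-s\) gives zero too, reflecting the \(s \leftrightarrow 1-s\) symmetry.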
Let's look at \(f(z)\) depending periodically on \(x\) (with period \(1\)). Separation of variables: try
where the \(2\pi\) is really in both factors. This gives
which gives
which implies
this is a close relative of the Bessel differential equation.
This has two solutions
intuition: as \(y\to\infty\) we have \(F'' - F = 0\) so \(e^{u}\) or \(e^{-u}\text{.}\)
If we insist on some “moderate growth” (at most polynomial in \(y\)) for the eigenfunction, the \(I_{s-\frac 12}\) solution cannot contribute. (When we come to automorphic forms we will see that the definition is essentially eigenfunctions with moderate growth.)
So our periodic (in \(x\)) eigenfunction with moderate growth looks like
\(W_s(z)\) is called a Whittaker function.
These exist for arbitrary Lie groups, though we may not always be able to write eigenfunctions in terms of them. They are a replacement for the exponential functions.
where
analogue of the Fourier inversion formula for \(\HH\text{.}\)
If \(f\) is actually periodic in \(x\) and \((\Delta + s(1-s)) f= 0\) with growth \(O(e^{2\pi y})\)
where \(f_0(y)\) is a combination of \(y^s, y^{1-s}\text{.}\)
We will be considering automorphic forms
Recall the Cauchy integral formula for holomorphic functions
\[ f(z) = \frac{1}{2\pi i} \oint \frac{f(w)}{w - z} \diff w\text{,}\]
i.e. using an integral kernel, \(f\) is an eigenfunction for this operator.
Recall: \(L\) is an integral operator if
\[ (Lf)(z) = \int_\HH K(z,w) f(w) \diff\mu(w)\text{.}\]
\(K\) will often be smooth of compact support for us. \(L\) is invariant if it commutes with \(T_g\) for all \(g\text{.}\)
\(L\) is invariant iff
Show this.
A function \(K\colon \HH \times \HH \to \CC\) that satisfies \(K(gz, g w) = K(z,w)\) for all \(g \in \SL_2(\RR)\) is called a point pair invariant. This notion was first introduced by Selberg.
Invariant integral operators are convolution operators.
A point pair invariant \(K(z,w)\) depends only on the distance between \(z\) and \(w\text{,}\) i.e.
so an invariant operator is just a convolution operator.
If \((\Delta + \lambda)f \equiv 0\) and \(L\) is an invariant integral operator. (\(\Rightarrow\)) Then there exists
such that
(\(\Leftarrow\)) Moreover if \(f\) is an eigenfunction of all invariant operators then \(f\) is an eigenfunction of \(\Delta\text{.}\)
(\(\Leftarrow\)) Let \(L_K\) be
then
(if \(\Lambda_K = 0\) for all \(K\) then \(f \equiv 0\)). So that
Note \(f \mapsto \int_\HH \Delta_z K(z,w) f(w) \diff \mu(w)\) is another invariant integral operator (exercise: show this).
We will prove an integral representation that looks like the Cauchy integral formula
Let for \(w\in \HH\text{,}\)
where \(G_w\) is the stabiliser of \(w\) in \(\SL_2(\RR)\) and \(\diff\mu\) is normalized so that \(G_w\) has volume 1.
Facts:
Now returning to the proof. Let \((\Delta + \lambda)f \equiv 0\text{,}\) \(L\) invariant.
Claim: \(\{\cdots\}\) depends only on \(K\) and \(\lambda\text{,}\) not \(z\text{.}\) Proof: Let \(z_1,z_2 \in \HH\) and pick \(g\in \SL_2(\RR)\) with \(gz_1 = z_2\text{.}\)
Upshot so far: Poisson summation is a duality, but it can be seen as an equality of the trace of an operator calculated in two different ways. In the non-euclidean setting we can do something similar, though the result will not be so recognisable.
A weight \(k\) cusp form, eigenfunction of the Hecke operators implies
“correct” normalisation is \(|\tilde \lambda_p| \le 2\text{.}\)
This is about the components at \(p\) but there is also a component at infinity.
Selberg's eigenvalue conjecture: \(\phi\) is a cuspidal automorphic (Maass) form with eigenvalue \(\lambda = s(1-s)\) implies \(s = \frac 12 + it\text{,}\) \(t\in \RR\text{,}\) i.e. \(\lambda \ge \frac 14\text{.}\)
If we have \((\Delta + \lambda) f = 0\) can we say anything about \(\lambda\text{?}\)
Introduce the Petersson inner product
\[ \pair FG = \int_{\Gamma\backslash\HH} F(z) \overline{G(z)} \diff\mu(z)\text{.}\]
Now
exercise: check this. So \(\pair {-\Delta F}{G} = \pair {F}{-\Delta G}\) which gives \(\lambda \in \RR\text{.}\)
For the \(\frac14\) bound one needs to work a little harder.
Let \(D\subseteq \HH\) be a (nice) domain. Consider the Dirichlet problem
Define
Then
(exercise, show this).
For every fixed \(x\text{:}\)
previous two imply
In Theorem 1.29 we restricted to the \(\frac12\) line; this is a reincarnation of \(\lambda \ge \frac 14\text{.}\) Only certain functions contributed, similar to the way only the unitary characters \(e^{2\pi ix\xi}\text{,}\) \(\xi\in\RR\text{,}\) contribute to a Fourier expansion, not all characters of \(\RR\text{.}\)
The spectrum of \(\Delta\) on \(\HH\) has \(\lambda = s(1-s)\text{,}\) \(s = \frac 12 + it\text{.}\) If we consider the quotient \(\Gamma \backslash \HH\) there is a possibility for \(t = t_{\RR} + it_{\CC}\) where \(0\le t_\CC \le \frac 12\text{.}\) Selberg's conjecture is that these extra ones don't appear for cusp forms. This is very sensitive to the arithmetic, we need a congruence subgroup for this to be true.
These are functions on \(\HH\) that are very symmetric. We already saw one in the first lecture: \(\theta(t)\) in the proof of Theorem 1.3. It is not quite one, though; instead it is a half-integral weight modular form, as square roots were involved.
A modular function is some
with
where \(f\) is meromorphic on \(\HH\) and at \(\infty\text{.}\) It is called a modular form if it is in fact holomorphic, including at infinity; this is equivalent to a growth condition.
\(f\) is a cusp form if
\(f\) has a Fourier expansion (it is invariant under \(x\mapsto x+1\)); holomorphic implies
cusp form implies
as cuspidal implies \(f(z) = O(e^{-2\pi y})\) as \(y \to \infty\text{.}\) \(f\) not cuspidal implies \(f(z) = O(y^s)\) as \(y \to \infty\text{.}\)
Constant functions for \(k = 0\text{.}\)
Eisenstein series (holomorphic).
Is this cuspidal? Answer: No! Why?
this is of weight \(2k\text{!}\)
Prove this.
is a cusp form of weight 12.
the original Ramanujan conjecture.
Show these.
not holomorphic at \(\infty\text{.}\)
We have also seen the \(\theta\)-function, but it does not fit into this setting; it is rather a modular form for a covering group.
If \(f(z)\) is a cusp form of weight \(k\)
then
called the Hecke or trivial bound.
If \(\lambda_n = n^{(1-k)/2} a_n\) then this says \(\lambda_n = O(\sqrt n)\text{.}\)
implies
now consider
Then
(exercise). Moreover \(\phi(z) \to 0\) as \(y\to\infty\) and \(\phi\) is continuous so \(\exists M\) s.t. \(|\phi(z) | \le M\text{.}\)
Therefore \(|f(z)| \le M y^{-k/2}\) for all \(y\text{;}\) pick \(y=1/n\) to get \(a_n = O(n^{k/2})\text{.}\)
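For a concrete instance (my own check, not from the lecture), one can compute Ramanujan's \(\tau(n)\text{,}\) the coefficients of the weight \(12\) cusp form \(\Delta = q\prod_{m\ge 1}(1-q^m)^{24}\text{,}\) and observe the Hecke bound \(a_n = O(n^{k/2}) = O(n^6)\text{;}\) the constant \(10\) below is an arbitrary safety factor.

```python
# coefficients of Delta(z) = q * prod_{m >= 1} (1 - q^m)^24 = sum tau(n) q^n
N = 50
coeffs = [0] * (N + 1)
coeffs[0] = 1  # the constant polynomial 1 (before the leading factor q)
for m in range(1, N + 1):
    for _ in range(24):
        # multiply by (1 - q^m), truncating at degree N; go high to low
        for i in range(N, m - 1, -1):
            coeffs[i] -= coeffs[i - m]
tau = {n: coeffs[n - 1] for n in range(1, N + 1)}  # shift by the leading q
assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252 and tau[5] == 4830
# Hecke (trivial) bound: |tau(n)| = O(n^6)
assert all(abs(tau[n]) <= 10 * n ** 6 for n in tau)
```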
A function \(f\colon \HH \to \CC\) s.t.
is called a Maass form. If
we call it a Maass cusp form.
Constant functions are Maass forms: they are \(\mathcal L^2\) because \(\SL_2(\ZZ)\backslash\HH\) has finite volume.
we choose this normalisation (for now) with \(s+\frac 12\) as it generalises better to \(\GL_3\) which has more elements in its Weyl group.
In fact most things are non-holomorphic in the sense that many spaces of interest do not have a complex structure.
hence \(E(z; s)\) is a Maass form. We have
so
where \(\Gamma_\infty = \left\{ \begin{pmatrix} 1 \amp n \\ \amp 1 \end{pmatrix}\in\SL_2(\ZZ)\right\}\text{.}\) Exercise: check.
\(\gamma\) is an isometry implies \(\Delta\gamma = \gamma\Delta\text{.}\) So \(\gamma y^{s+\frac 12}\) is also an eigenfunction with eigenvalue \(\frac 14 - s^2\text{.}\)
\(E(z; s)\) has analytic continuation to \(\CC\) (in \(s\)), it satisfies \(E(z; s) = E(z; -s)\) and
(Wrong way to prove this) Fourier expansion of Eisenstein series.
using Theorem 1.29.
Note:
We will work with
the right hand side is invariant under \(x\mapsto x+1\text{,}\) so we can absorb the shift into the sum over \(d\text{;}\) in a general context this is known as unfolding.
note:
so we get
two cases
Combining these we have shown
where we have \(K_s = K_{-s}\) and
So we have proved the functional equation and analytic continuation.
We can see that \(\zeta\) appears here in the constant term; we can determine analytic information about it using what we know about Eisenstein series. This idea in generality is known as the Langlands-Shahidi method.
This has poles at \(s = \frac12\text{,}\) \(\Res_{s=\frac12} E(z; s) = \frac12\text{.}\) Note that this residue is constant. We will use this in Rankin-Selberg.
If \(G\) is a reductive group and \(M \subseteq G\) a Levi subgroup (e.g. for \(\GL_n\) a Levi is given by diagonal blocks of sizes \(n_1 + n_2 + \cdots +n_k = n\)), one can associate an Eisenstein series to a cusp form on these subgroups, following Eisenstein. There are automorphic \(L\)-functions corresponding to these, and by doing the same procedure as last time we see these \(L\)-functions appearing in the constant terms of the Eisenstein series. So we can establish analytic properties of these automorphic \(L\)-functions via those of the Eisenstein series. This is known as the Langlands-Shahidi method; it only works in some cases, but when it does it is very powerful. Shahidi pushed the idea further by looking at non-constant terms. In the example above we have
so there are \(L\)-functions even in the non-constant terms.
The natural setting in which to view these is \(\GL_2(\QQ_p)\text{.}\) But as we haven't developed this yet, we will take the path that Hecke took and just write down a formula. They act on the space of modular forms.
Let \(k\in \ZZ_+\) fixed, \(\gamma \in \GL_2^+(\RR)\) (positive determinant).
There is a determinant twist so that the center acts trivially.
Let \(\gamma\in \GL_2^+(\QQ)\) write
then
This differs by a normalization of \(\det\text{;}\) it won't change too much but will shift the spectrum. Or more generally we have
Hecke operators commute with each other (follows from the \(KAK\) decomposition).
Hecke operators are self-adjoint with respect to the Petersson inner product on \(M_k(1)\text{,}\) the modular forms of weight \(k\) and level \(1\text{.}\)
Let \(f(z)\ne 0\) be a cusp form of weight \(k\) which is an eigenfunction for all of the Hecke operators with eigenvalue \(n^{1-k/2} \lambda(n)\) i.e.
let
be its Fourier expansion and
Then
By the Fourier expansion
Which implies
so
exercise: check. Take \(m = 1\) so
hence
This is very special to \(\GL_2\text{;}\) in general Fourier coefficients carry more information than Hecke eigenvalues.
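The resulting relations can be tested on \(\tau(n)\) (weight \(k=12\)): multiplicativity \(\tau(mn)=\tau(m)\tau(n)\) for coprime \(m,n\text{,}\) and \(\tau(p)\tau(p^r) = \tau(p^{r+1}) + p^{k-1}\tau(p^{r-1})\) at a prime. This is my own check, recomputing \(\tau\) from the product formula for \(\Delta\text{.}\)

```python
# tau(n) from Delta = q * prod_{m >= 1} (1 - q^m)^24, truncated
N = 20
c = [0] * (N + 1)
c[0] = 1
for m in range(1, N + 1):
    for _ in range(24):
        for i in range(N, m - 1, -1):
            c[i] -= c[i - m]
tau = {n: c[n - 1] for n in range(1, N + 1)}
# multiplicativity for coprime arguments
assert tau[6] == tau[2] * tau[3]
assert tau[10] == tau[2] * tau[5]
# Hecke relation at a prime: tau(p)tau(p^r) = tau(p^{r+1}) + p^{11} tau(p^{r-1})
assert tau[2] * tau[2] == tau[4] + 2 ** 11 * tau[1]
assert tau[2] * tau[4] == tau[8] + 2 ** 11 * tau[2]
```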
With the normalisation
the Ramanujan conjecture reads \(\lambda(n) = O(n^{(k-1)/2 + \epsilon})\text{.}\)
Having \(a_1 = 1\) is known as being Hecke normalised.
This is a prototype of the integral representation of automorphic \(L\)-functions.
Let
then
Notation:
converges for \(\Re(s) \gg 0\) if \(a_n(y) = O(y^{-N})\) for all \(N\text{.}\)
Let \(\phi(x+iy) = O(y^{-N})\) for all \(N\gt 0\) and \(\phi(z)\) is invariant under \(z\mapsto \gamma z\) for \(\gamma\in \SL_2(\ZZ)\text{.}\) Then
Given \(f \colon \RR_{\gt 0} \to \CC\text{,}\)
its Mellin transform is
\[ \tilde f(s) = \int_0^\infty f(y) y^s \frac{\diff y}{y}\text{.}\]
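As a sketch (quadrature parameters mine), the basic example is that the Mellin transform of \(e^{-y}\) is \(\Gamma(s)\text{:}\)

```python
import math

def mellin(f, s, T=40.0, M=200000):
    # Riemann-sum approximation to \tilde f(s) = \int_0^\infty f(y) y^s dy/y,
    # truncated at y = T; starts at k = 1 to avoid the endpoint y = 0
    h = T / M
    return sum(f(k * h) * (k * h) ** (s - 1) for k in range(1, M + 1)) * h

# the Mellin transform of e^{-y} is Gamma(s)
g2 = mellin(lambda y: math.exp(-y), 2.0)
assert abs(g2 - math.gamma(2.0)) < 1e-3
assert abs(mellin(lambda y: math.exp(-y), 3.5) - math.gamma(3.5)) < 1e-2
```

This is the same mechanism as in the proof sketch for \(\zeta\text{:}\) the Mellin transform turns the exponential building blocks into Gamma factors.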
If
What is
The Eisenstein series is essentially
we can see that when integrating a sum over \(\Gamma_\infty\backslash \SL_2(\ZZ)\) over the region \(\SL_2(\ZZ) \backslash \HH\text{,}\) things should combine to give us an integral over \(\Gamma_\infty \backslash \HH\text{,}\) a rectangle! So this unfolding should simplify things.
Let \(\phi\colon \HH \to \CC\) be automorphic with respect to \(\SL_2(\ZZ)\text{,}\) with fourier expansion
If \(\phi(x+iy) = O(y^{-N})\) for all \(N \gt 0\text{.}\)
where \(\phi(z) = \sum_{n\in \ZZ} a_n(y) e(nx)\text{.}\)
Follow your nose!
Recall
Step 1: The integral converges: Writing \(E\) for \(E_3\) we have
Step 2: Unfold
we showed
(So we can find a functional equation and analytic continuation for it from the corresponding properties of the Eisenstein series.)
Note that if we use a cusp form for \(\phi\) we get 0 from the integral above; in \(L^2\) the cusp forms and Eisenstein series are orthogonal. Instead we will cook up something interesting from two functions.
Let
be holomorphic modular forms of weight \(k\text{.}\)
Assume that at least one of \(f\) or \(g\) is cuspidal. Assume additionally that \(f,g\) are normalised Hecke eigenforms so \(a(1) = b(1) = 1\text{.}\)
\(\phi(\gamma z) = \phi(z)\) for any \(\gamma \in \SL_2(\ZZ)\text{.}\) \(\phi\) also satisfies the decay condition.
If \(f = \sum a_n e(nz)\text{,}\) \(g = \sum b_n e(nz)\) then
so if we were to integrate this from 0 to 1 in \(x\text{,}\) the first term would disappear and we would be left with the second.
i.e.
this is the Rankin-Selberg \(L\)-function. So by Corollary 1.68, \(L(s+k; f\times g)\) has analytic continuation and functional equation, and poles only at \(s= 1+k\text{,}\) \(s=k\text{.}\)
We proved that a cusp form \(f(z) = \sum a_ne(nz)\) has \(a_n = O(n^{k/2})\text{;}\) Ramanujan predicts \(a_n = O(n^{(k-1)/2})\text{.}\) As cusp forms often appear in the error terms of counting arguments, knowing this gives us many results: it tells us we can just count with Eisenstein series. The averaged version of the Ramanujan conjecture is much easier.
Recall Proposition 1.67 and moreover that
from this and Note 1.71 we conclude.
(1.5) has analytic continuation as a function of \(s\) to all \(s\in \CC\text{.}\) It has at most simple poles with residue at \(s =1\text{:}\)
If we let
so
this follows from Theorem 1.54, \(E(z; s) = E(z; 1-s)\text{.}\)
\(\Lambda(s,f\times \overline g)\) has analytic continuation to \(s\in \CC\) with poles at most at \(s=k,\,s=k-1\text{.}\)
This is analogous to when we took the Mellin transform of the theta function. We have obtained some highly nontrivial information above:
Given an arbitrary series of the form
it will converge for \(\Re(s) \gt \alpha + 1\text{.}\) As these coefficients often come from point counts, they will in general have polynomial growth.
Recall the Hecke bound
a cuspidal (Hecke eigen)form of weight \(k\) has \(a_n = O(n^{k/2})\text{.}\) Ramanujan (Deligne) gives us \(a_n =O(n^{(k-1)/2})\text{.}\) Hecke implies that \(a_n\overline b_n = O(n^k)\text{,}\) so that \(\sum_{n=1}^\infty \frac{a_n\overline b_n}{n^s}\) converges for \(\Re(s) \gt k+1\text{.}\) Deligne implies that \(a_n\overline b_n = O(n^{k-1})\text{,}\) so that the series converges for \(\Re(s) \gt k\text{.}\) But the above already gives us this convergence for \(\Re(s) \gt k\text{,}\) which is highly nontrivial!
If we take \(f,g\) to be normalized Hecke eigenforms. Let
then
Prove this.
If one proves the prime number theorem using non-vanishing of the \(\zeta\) function in a certain region, one uses a curious identity involving positivity of an expression in sines and cosines. This really comes from a Rankin-Selberg product.
Rankin-Selberg proves positivity.
Ramanujan on average:
this is equivalent to
converges for \(\Re(s) \gt (k+1)/2\text{.}\) Let
be a cusp form of weight \(k\text{.}\)
Consider
Hecke implies this converges for \(\Re(s) \gt k+1\text{.}\)
converges for \(\Re(s) \gt k\text{.}\)
Now observe that for any \(\lambda \gt 0\)
So
choose \(\lambda = (k-1)/2\) so
which converges for \(s\gt (k+1)/2\text{.}\)
Fix \(\diff \mu(z) = \frac{\diff x \diff y}{y^2}\) on \(\HH\text{.}\) What is \(\vol(\SL_2(\ZZ) \backslash \HH)\text{?}\) (\(\pi/3\text{?}\)) What about other \(\Gamma\text{?}\)
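The answer \(\pi/3\) for the standard fundamental domain \(\{|x| \le \frac 12,\ |z|\ge 1\}\) can be checked directly: the inner integral \(\int_{\sqrt{1-x^2}}^\infty \diff y / y^2 = 1/\sqrt{1-x^2}\) reduces the volume to a one-dimensional integral (the quadrature below is my own rough sketch).

```python
import math

# volume of {|x| <= 1/2, |z| >= 1} with respect to dmu = dx dy / y^2
M = 200000
h = 1.0 / M
vol = 0.0
for k in range(M):
    x = -0.5 + (k + 0.5) * h  # midpoint rule in x
    # inner integral over y from sqrt(1 - x^2) to infinity is 1/sqrt(1 - x^2)
    vol += h / math.sqrt(1.0 - x * x)
assert abs(vol - math.pi / 3) < 1e-6
```

Indeed \(\int_{-1/2}^{1/2} \diff x/\sqrt{1-x^2} = 2\arcsin(1/2) = \pi/3\text{.}\)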
Naive observation:
and
so
but the right hand side does not converge (exercise, check).
Problem: the naive idea doesn't work; \(\int_{\SL_2(\ZZ)\backslash \HH} E(z; s) \diff \mu(z)\) converges only for \(0 \lt s \lt 1\text{,}\) but then we can't unfold because the series defining \(E(z; s)\) does not converge for \(0\lt s \lt 1\text{.}\) We must target the source of this divergence: the constant term of the Eisenstein series.
we will truncate the Eisenstein series.
First aim of the day: Calculate
Idea: Use the pole of \(E(z; s)\) at \(s= 1\) and unfolding.
Idea:
Problem: Constant term of \(E(z; s) \sim y^s + y^{1-s}\text{.}\)
converges only if \(0\lt s \lt 1\text{.}\) This approach needs modification; we will look at two such modifications.
For sharp cut off we will fix some \(T \gt 0\) and only consider \(y\lt T\text{.}\) Setting
using this
Observations:
For \(K \subseteq \HH\) compact, there exists \(T_K\) such that for all \(T \ge T_K\)
for some \(T_0\text{.}\)
(Unfold), recall
so
There is a huge generalization of this lemma by Langlands that allows one to calculate many volumes.
Let \(T \gt 1\) and \(x+ iy\) in the standard fundamental domain \(\mathcal F\) for \(\SL_2(\ZZ) \backslash \HH\) then
Recall
Case 1: \(c \ne 0 \) implies \(\Im( \gamma z) \le \frac{1}{c^2 y} \le T\) for all \(z\in \mathcal F\text{.}\)
Case 2: \(c = 0 \) implies \(\Im( \gamma z) = \frac{y}{d^2 } \)
implies \(ad = 1\) so \(a= d= 1\) or \(a=d= -1\text{.}\)
\(E(z; s) - E_T(z; s) \) for fixed \(z\) is holomorphic as a function of \(s\) at \(s =1\text{.}\)
\(E(z; s) - E_T(z; s) \) is holomorphic at \(s =1\text{.}\) So
hence
Volumes of such domains are known as Tamagawa numbers, and many were computed via these methods by Langlands in the 60s.
Let \(f \in \cinf_c(\RR_{\ge 0 })\) e.g. some nice bump. Consider
idea
for \(a\lt \sigma\lt b\) such that \(\tilde f = \int_0^\infty y^s f(y) \frac{\diff y}{y}\) converges absolutely for \(a \lt s \lt b\text{.}\) Observation: \(\sigma\in \RR\) for this to hold. So
for \(\Re(s) \lt -\sigma\text{:}\)
where
Now integrate
(shifting contours to \(\sigma = \frac{-1}{2} + it\)). The rightmost term is 0 as we have a decomposition
We'll do this by hand here. i.e.
Exercise.
The lemma implies \(\forall f \in \cinf_c(\RR_{\gt 0})\) we have the following
(1.6) implies that
whenever it converges.
By a change of variables we can rewrite (1.6) as
for some constant \(c\text{.}\)