Ribet's Converse to Herbrand: Part II - Cuspstruction

So around a month back I posted the first post in this 2 (or more, who knows?) part series on Ribet’s converse to Herbrand’s theorem. This is the sequel, Cuspstruction; it is basically just my personal notes from my STAGE talk of the same name. We were following Ribet’s paper and this post is all about section 3. The goal is to construct a cusp form with some very specific properties, to which we can then attach a Galois representation and use it to obtain the converse to Herbrand. In this post though we’ll be focussing on constructing the cusp form, hence, Cuspstruction.

Cuspstruction

We will make use of the following building blocks, some specific modular forms of weights 1 and 2 and type \(\epsilon\text{:}\) \begin{align*} G_{2,\epsilon} &= L(-1,\epsilon)/2 + \sum_{n=1}^\infty \sum_{d|n} d \epsilon(d) q^n\\ s_{2,\epsilon} &= \sum_{n=1}^\infty \sum_{d|n} d \epsilon(n/d) q^n \end{align*} The latter is not a cusp form (it vanishes at \(\infty\) but is not cuspidal at the other cusp of \(\Gamma_1(p)\)); we call such forms semi-cusp forms, and denote the space of them by \(S^\infty\) (not standard notation). We will also use \begin{equation*} G_{1,\epsilon} = L(0,\epsilon) + \sum_{n=1}^\infty \sum_{d|n} \epsilon(d) q^n\text{.} \end{equation*} These Eisenstein series are all Hecke eigenfunctions for the \(T_n\) with \(n\) coprime to \(p\text{.}\)
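As a quick sanity check on these definitions, here is a small Python sketch of the higher coefficients; the character \(\epsilon\) is passed in as a plain function, and the trivial stand-in below is purely for illustration (a genuine type-\(\epsilon\) form would use a character mod \(p\) with \(\epsilon(d) = 0\) when \(p \mid d\)).

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def a_G2(eps, n):
    # n-th coefficient (n >= 1) of G_{2,eps}: sum of d*eps(d) over d | n
    return sum(d * eps(d) for d in divisors(n))

def a_s2(eps, n):
    # n-th coefficient (n >= 1) of s_{2,eps}: sum of d*eps(n/d) over d | n
    return sum(d * eps(n // d) for d in divisors(n))

# for a prime l these reduce to 1 + l*eps(l) and eps(l) + l respectively,
# the two eigenvalue patterns compared at the end of this post
trivial = lambda d: 1
l = 7
assert a_G2(trivial, l) == 1 + l and a_s2(trivial, l) == l + 1
```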

Fix a prime ideal \(\mathfrak p|p\) of \(\mathbf{Q}(\mu_{p-1})\text{;}\) this lets us think of \(\mu_{p-1}\subseteq \mathbf{Q}_p^*\text{,}\) and we take \(\omega\colon (\mathbf{Z}/p\mathbf{Z})^* \xrightarrow\sim \mu_{p-1}\) to be the unique character with \(\omega(d)\equiv d \pmod{\mathfrak p}\) for all \(d\in \mathbf{Z}\) (the Teichmüller character).
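Concretely, thinking of \(\mu_{p-1}\) inside \(\mathbf{Z}_p^*\text{,}\) the value \(\omega(d)\) can be computed to any \(p\)-adic precision as \(d^{p^{k-1}} \bmod p^k\text{;}\) a minimal sketch (the function name is mine):

```python
def teichmuller(d, p, k):
    """omega(d) mod p^k: the unique (p-1)-st root of unity in Z_p
    congruent to d mod p, computed as d^(p^(k-1)) mod p^k."""
    return pow(d, p ** (k - 1), p ** k)

p = 7
for d in range(1, p):
    w = teichmuller(d, p, 2)
    # w lifts d and is a (p-1)-st root of unity mod p^2
    assert w % p == d and pow(w, p - 1, p**2) == 1
```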

We start with a key lemma (lemma 3.1 in Ribet's paper), which we will use repeatedly: for even \(k\) with \(2\le k\le p-3\) we have \begin{equation*} G_{2,\omega^{k-2}} \equiv -\frac{B_k}{2k} + \sum_{n=1}^\infty \sigma_{k-1}(n) q^n = G_k \pmod{\mathfrak p}, \qquad G_{1,\omega^{k-1}} \equiv -\frac{B_k}{k} + \sum_{n=1}^\infty \sigma_{k-1}(n) q^n \pmod{\mathfrak p}, \end{equation*} where \(\sigma_{k-1}(n) = \sum_{d|n} d^{k-1}\) and \(G_k\) is the usual weight \(k\) level 1 Eisenstein series.

By our choice of \(\omega\) we get the desired result for the non-constant terms of the \(q\)-expansions: since \(\omega(d)\equiv d \pmod{\mathfrak p}\) we have \(\sum_{d|n} d\,\omega^{k-2}(d) \equiv \sum_{d|n} \omega^{k-1}(d) \equiv \sigma_{k-1}(n) \pmod{\mathfrak p}\text{.}\)

So it remains to prove that \begin{equation*} L(-1,\omega^{k-2})\equiv -\frac{B_k}{k}\pmod{\mathfrak p} \qquad\text{and}\qquad L(0,\omega^{k-1})\equiv -\frac{B_k}{k}\pmod{\mathfrak p}\text{.} \end{equation*} We make use of the following expressions (see e.g. Washington) \begin{equation*} L(0,\epsilon) = -\frac 1p \sum_{n=1}^p \epsilon(n) \left(n- \frac p2\right), \qquad L(-1,\epsilon) = -\frac{1}{2p} \sum_{n=1}^p \epsilon(n) \left(n^2 - pn + \frac{p^2}{6}\right)\text{.} \end{equation*} Since \(\omega(n)\equiv n^p \pmod{\mathfrak p^2}\text{,}\) and the terms involving \(p\) drop out mod \(\mathfrak p^2\) (the character and power sums they contain vanish mod \(\mathfrak p\)), we get \begin{equation*} pL(0,\omega^{k-1}) = -\sum_{n=1}^p \omega^{k-1}(n)\left(n-\frac p2\right) \equiv -\sum_{n=1}^{p-1} n^{p(k-1) + 1} \pmod{\mathfrak p^2} \end{equation*} and \begin{equation*} pL(-1,\omega^{k-2}) = -\frac 12\sum_{n=1}^p \omega^{k-2}(n)\left(n^2 - pn +\frac{p^2}{6}\right) \equiv -\frac 12\sum_{n=1}^{p-1} n^{p(k-2) + 2} \pmod{\mathfrak p^2}\text{.} \end{equation*} But for all even \(t\gt 0\) with \(p-1 \nmid t\) (which holds here, as \(t\equiv k \pmod{p-1}\)) we have the congruence \begin{equation*} \sum_{n=1}^{p-1} n^t \equiv pB_t \pmod{p^2} \end{equation*} and so \begin{equation*} pL(0,\omega^{k-1}) \equiv -pB_{p(k-1)+1} \pmod{p^2}\text{.} \end{equation*} Cancelling the \(p\text{,}\) \begin{equation*} L(0,\omega^{k-1}) \equiv -B_{p(k-1)+1} \equiv -\frac{B_k}{k} \pmod{p}\text{,} \end{equation*} using Kummer's congruence \(B_t/t \equiv B_k/k \pmod p\) for \(t = p(k-1) + 1 \equiv k \pmod{p-1}\text{,}\) together with \(t \equiv 1 \pmod p\text{.}\) Similarly, with \(t = p(k-2)+2 \equiv 2 \pmod p\text{,}\) \begin{equation*} L(-1,\omega^{k-2})\equiv -\frac 12 \cdot \frac 2k B_k \equiv -\frac{B_k}{k} \pmod{p}\text{.} \end{equation*}
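All of this is easy to sanity-check numerically; for instance the power-sum congruence above can be verified with sympy's built-in Bernoulli numbers (the choice \(p = 11\text{,}\) \(t = 4\) is arbitrary):

```python
from sympy import bernoulli, Rational

p, t = 11, 4   # an odd prime and an even t > 0 with (p - 1) not dividing t
S = Rational(sum(n**t for n in range(1, p)))
diff = S - p * bernoulli(t)
print(diff)                 # 760001/30
assert diff.p % p**2 == 0   # numerator divisible by p^2
assert diff.q % p != 0      # denominator prime to p
```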

Note: the constant term of a product \(G_{1,\omega^{m-1}}G_{1,\omega^{n-1}}\) is a \(\mathfrak p\)-unit unless \(B_m\) or \(B_n\) is divisible by \(p\text{.}\) We need to remove this condition.

Proof

It suffices to find a form with constant coefficient a \(\mathfrak p\)-unit. If \(p\nmid B_k\) then we can use \(G_{2,\omega^{k-2}}\) by lemma 3.1.

If \(p|B_k\) we instead try the possible products \begin{equation*} G_{1,\omega^{m-1}}G_{1,\omega^{n-1}} \end{equation*} with \(2\le m,n\le p-3\) even and \(m+n \equiv k \pmod{p-1}\text{.}\) We claim that at least one of these must work, i.e. that there is such a pair \(m,n\) with \(p\nmid B_m\) and \(p\nmid B_n\text{.}\) If this isn't the case then, letting \begin{equation*} t = \#\{2\le n \le p-3 \text{ even}: p|B_n\}\text{,} \end{equation*} we must have \(t \ge (p-1)/4\text{;}\) we will derive a contradiction from this.

Greenberg showed that \begin{equation*} \frac{h_p}{h_{\mathbf{Q}(\mu_p)^+}} = h^*_p = 2^? p \prod_{\substack{k=2\\ \text{even}}}^{p-2} L(0,\omega^{k-1}) \end{equation*} (this is obtained by taking the quotient of the analytic class number formulas for \(\mathbf{Q}(\mu_p)\) and \(\mathbf{Q}(\mu_p)^+\)). By lemma 3.1 we know that \(\mathfrak p^t\) divides the product of \(L\)-values, and so \(p^t|h_p^*\text{.}\) We will therefore get a contradiction if we show \begin{equation*} h_p^*\lt p^{(p-1)/4}\text{.} \end{equation*}

Work of Carlitz and Olson ('55) on Maillet's determinant shows that \begin{equation*} h_p^* = \pm\frac{D}{p^{(p-3)/2}} \end{equation*} where \(D\) is the determinant of a \((p-1)/2 \times (p-1)/2\) matrix with entries in \([1,p-1]\text{.}\) So recalling Hadamard's inequality \begin{equation*} |\det(v_1\cdots v_n)| \le \prod_{i=1}^n ||v_i||\text{,} \end{equation*} or the simpler corollary \begin{equation*} |A_{ij}| \le B \implies |\det(A)| \le n^{n/2}B^{n}\text{,} \end{equation*} and applying this with \(B = p\text{,}\) \(n=(p-1)/2\text{,}\) gives \begin{equation*} |D| \le \left(\frac{p-1}{2}\right)^{(p-1)/4} p^{(p-1)/2} \lt 2^{-(p-1)/4} p^{(3p-3)/4} \end{equation*} so \begin{equation*} h_p^* \lt p^{(p+3)/4} 2^{-(p-1)/4}\text{.} \end{equation*} And we are done, as \(h_p^* = 1\) for \(p\le 19\text{,}\) and \(p\le 2^{(p-1)/4}\) for \(p\gt 19\) so that \(h_p^* \lt p^{(p-1)/4}\text{.}\)

Now we fix \(2\le k\le p-3\) even with \(p|B_k\) and let \(\epsilon = \omega^{k-2}\text{.}\) Note \(k\) must really be at least 4 (in fact at least 12, since the numerators of \(B_4,\dots,B_{10}\) have no prime factor \(p\) with \(k\le p-3\)), so \(\epsilon\) is a non-trivial even character. We will work in weight 2, type \(\epsilon\) from now on.

Let \begin{equation*} f= G_{2,\epsilon} -cg \end{equation*} where \(g\) is the form with \(\mathfrak p\)-unit constant term produced above, scaled so that its constant term is \(1\text{,}\) and \(c = L(-1,\epsilon)/2\) is the constant term of \(G_{2,\epsilon}\text{.}\) This is a semi-cusp form by construction. We get \(f\equiv G_{2,\epsilon}\pmod{\mathfrak p}\) because \begin{equation*} c \equiv -B_k/2k \equiv 0 \pmod{\mathfrak p} \end{equation*} by lemma 3.1 (again!), as we are now assuming \(p|B_k\text{.}\) Additionally, by the same lemma, \(G_k \equiv G_{2,\epsilon}\pmod{\mathfrak p}\text{.}\)

Let's take stock: we have a semi-cuspidal form which mod \(\mathfrak p\) looks like \(G_k\text{,}\) and is hence an eigenform mod \(\mathfrak p\text{,}\) but we want an actual eigenform. Bro, do you even lift?

We start with \(f\) from the proposition above; it's a mod \(\mathfrak p\) eigenform, so we can use the Deligne-Serre lifting lemma (6.11 in Formes modulaires de poids 1) to obtain a semi-cusp form \(f'\) that is an eigenform for the Hecke operators \(T_n\) with \(n\) coprime to \(p\text{,}\) with eigenvalues lifting those of \(f\) mod \(\mathfrak p\text{.}\)

To promote the semi-cusp form to a full blown cusp form we observe that the space \(S_2^\infty(\Gamma_1(p),\epsilon)\) is generated by the cusp forms together with \(s_{2,\epsilon}\text{,}\) which is also an eigenform, so we only have to check that \(f'\) isn't \(s_{2,\epsilon}\) (or a scalar multiple of it). So we compare the \(T_\ell\) eigenvalues mod \(\mathfrak p\text{:}\) if they agreed we would have \begin{equation*} \epsilon(\ell) + \ell \equiv 1 + \ell\epsilon(\ell)\pmod{\mathfrak p} \end{equation*} for all primes \(\ell\ne p\text{,}\) i.e. \((\epsilon(\ell)-1)(1-\ell)\equiv 0\text{,}\) which for \(\ell\not\equiv 1 \pmod p\) implies \(\epsilon(\ell) = 1\text{;}\) but \(\epsilon\) is non-trivial!

The final challenge is to ensure that \(f'\) is also an eigenform for \(T_{p^i}\text{.}\)

Use the theory of newforms. There are no oldforms for \(\Gamma_1(p)\) as \begin{equation*} M_2(\operatorname{SL}_2(\mathbf{Z})) = 0\text{,} \end{equation*} and a newform that is an eigenform for all Hecke operators \(T_n\) with \(n\) coprime to the level \(p\) is also an eigenform for the remaining Hecke operators.

So in conclusion: we have constructed a cusp form \(f'\) of weight 2 and type \(\epsilon\) on \(\Gamma_1(p)\) which is an eigenform for all the Hecke operators, and whose \(T_\ell\) eigenvalue is congruent to \(1 + \ell\epsilon(\ell) \equiv 1 + \ell^{k-1} \pmod{\mathfrak p}\) (that of \(G_k\)) for primes \(\ell\ne p\text{.}\)

Remark

Word on the internet is that Mazur and Wiles' proof of the Main conjecture of Iwasawa theory is modelled on this.

That's all for now; in the remainder of Ribet's paper he constructs a Galois representation from this cusp form and uses it to prove the theorem.

Ribet's Converse to Herbrand: Part I

Tomorrow I’m giving the STAGE talk on Ribet’s converse to Herbrand’s theorem; afterwards I’ll try and post more notes, but for now here’s a little intro to get us thinking about the problem.

Ribet's converse to Herbrand

We are interested in the class groups of cyclotomic fields \begin{equation*} h_p = h_{\mathbf{Q}(\mu_p)}\text{.} \end{equation*} Let's list the first few of these:

| \(p\) | 2 | 3 | 5 | 7 | 11 | 13 | 17 | 19 | 23 | 29 | 31 | 37 | 41 | 43 | 47 | 53 | 59 | 61 | 67 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| \(h_p\) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 8 | 9 | 37 | 121 | 211 | 695 | 4889 | 41241 | 76301 | 853513 |
| \(p \mid h_p\) | no | no | no | no | no | no | no | no | no | no | no | yes | no | no | no | no | yes | no | yes |
Definition Regular primes

We'll call primes for which \(p\nmid h_p\) regular primes. Otherwise irregular primes.

Why is this important from a number theory perspective?

A priori it's hard to tell when a prime is a regular prime: you'd have to compute the class group.

Definition Bernoulli numbers

The Bernoulli numbers are the sequence of rational numbers given by the exponential generating function \begin{equation*} \frac{x}{e^x - 1} + \frac x2 - 1 = \sum_{k\ge 2}^\infty B_k\frac{x^k}{k!}\text{.} \end{equation*}
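Multiplying through by \(e^x - 1\) turns this generating function into the recurrence \(\sum_{j=0}^{m} \binom{m+1}{j} B_j = 0\) (for \(m \ge 1\)), which gives a simple way to compute them exactly; a quick sketch:

```python
from fractions import Fraction
from math import comb

def bernoulli_upto(N):
    """B_0, ..., B_N via the recurrence sum_{j<m} C(m+1, j) B_j = -(m+1) B_m."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(Fraction(-sum(comb(m + 1, j) * B[j] for j in range(m)), m + 1))
    return B

print(bernoulli_upto(12)[12])  # -691/2730
```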

These have a number of cool properties, but most important for us is the relation to class numbers, a theorem of Kummer: \(p\) is regular if and only if \(p\) does not divide the numerator of any of \(B_2, B_4, \dots, B_{p-3}\text{.}\)

This is a great theorem relating class numbers to the Bernoulli numbers, but can we do better? What if I know a specific \(k\) so that \(p|B_k\text{,}\) can I say anything more specific about the class group? Yes; there is a strengthening of this theorem due in this form to Herbrand (in one direction) and Ribet (later, in the other direction).

First we need to recall the mod \(p\) cyclotomic character \(\chi\colon \operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q}) \to \mathbf{F}_p^*\) defined by \begin{equation*} \zeta_p^{\chi(\sigma)} = \sigma (\zeta_p)\text{.} \end{equation*}

The theorem states: for even \(k\) with \(2\le k\le p-3\text{,}\) \(p \mid B_k\) if and only if the \(\chi^{1-k}\)-eigenspace of the \(p\)-part of the class group of \(\mathbf{Q}(\mu_p)\) is non-zero. The \(\Leftarrow\) direction was proved by Herbrand in 1932. And the \(\Rightarrow \) direction by Ribet in 1974.

Now for completeness here is a table of factorisations of Bernoulli number numerators.

| \(k\) | numerator of \(B_k\) |
|---|---|
| 2 | \(1\) |
| 4 | \(-1\) |
| 6 | \(1\) |
| 8 | \(-1\) |
| 10 | \(5\) |
| 12 | \(-1 \cdot 691\) |
| 14 | \(7\) |
| 16 | \(-1 \cdot 3617\) |
| 18 | \(43867\) |
| 20 | \(-1 \cdot 283 \cdot 617\) |
| 22 | \(11 \cdot 131 \cdot 593\) |
| 24 | \(-1 \cdot 103 \cdot 2294797\) |
| 26 | \(13 \cdot 657931\) |
| 28 | \(-1 \cdot 7 \cdot 9349 \cdot 362903\) |
| 30 | \(5 \cdot 1721 \cdot 1001259881\) |
| 32 | \(-1 \cdot 37 \cdot 683 \cdot 305065927\) |
| 34 | \(17 \cdot 151628697551\) |
| 36 | \(-1 \cdot 26315271553053477373\) |
| 38 | \(19 \cdot 154210205991661\) |
| 40 | \(-1 \cdot 137616929 \cdot 1897170067619\) |
| 42 | \(1520097643918070802691\) |
| 44 | \(-1 \cdot 11 \cdot 59 \cdot 8089 \cdot 2947939 \cdot 1798482437\) |
| 46 | \(23 \cdot 383799511 \cdot 67568238839737\) |
| 48 | \(-1 \cdot 653 \cdot 56039 \cdot 153289748932447906241\) |
| 50 | \(5^{2} \cdot 417202699 \cdot 47464429777438199\) |
| 52 | \(-1 \cdot 13 \cdot 577 \cdot 58741 \cdot 401029177 \cdot 4534045619429\) |
| 54 | \(39409 \cdot 660183281 \cdot 1120412849144121779\) |
| 56 | \(-1 \cdot 7 \cdot 113161 \cdot 163979 \cdot 19088082706840550550313\) |
| 58 | \(29 \cdot 67 \cdot 186707 \cdot 6235242049 \cdot 3734958336910412\) |
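Combining the last two tables is easy to automate: with the bernoulli_upto sketch from above we can search for the irregular indices directly (only sensible for small \(p\text{,}\) as the recurrence is quadratic in the number of terms):

```python
def irregular_indices(p):
    """Even k with 2 <= k <= p-3 such that p divides the numerator of B_k."""
    B = bernoulli_upto(p - 3)
    return [k for k in range(2, p - 2, 2) if B[k].numerator % p == 0]

for p in [37, 59, 67]:
    print(p, irregular_indices(p))  # 37 [32], 59 [44], 67 [58]
```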

PSA: Quadratic reciprocity

The quadratic reciprocity law works for Jacobi symbols, whose entries need only be odd positive integers, not just for Legendre symbols with odd prime entries.
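This is exactly what makes the usual Jacobi symbol algorithm work without factoring anything; a sketch of the standard algorithm, using reciprocity for odd entries plus the supplementary law for 2:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via reciprocity for odd entries."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:           # (2/n) = (-1)^((n^2-1)/8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # reciprocity for odd positive entries
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # gcd(a, n) > 1 gives 0

# agrees with Euler's criterion when n is an odd prime:
assert jacobi(5, 7) == pow(5, 3, 7) - 7  # both are -1
```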

Making T-Shirts!

This is a placeholder for a post that will appear tomorrow to stop Beeminder taking my money. It’ll be worth the wait!

What's that category?

Some vague thoughts about a weird category my housemate and I got to thinking about recently; unfortunately I’m a little too sleepy to write anything more coherent right now, but beeminder demands tribute.

When doing non-abelian group cohomology (and many other things) you end up dealing with the category of pointed sets, that is, sets with a point specified, and where morphisms must map specified points to each other. This category is actually fairly nice (or at least nicer than plain $\mathrm{Set}$ is anyway) insomuch as it has a zero object (i.e. an object that is both initial and terminal), the one element set, this allows us to make sense of kernels etc. which is a nice thing to be able to talk about. So why do we get a 0-object here? Well the one element set is already terminal in $\mathrm{Set}$ so we get terminalness for free, as for why it is initial it’s revealing to describe the category of pointed sets in a different way: We can equivalently describe it as the coslice category $\{* \} \downarrow \mathrm{Set}$, where the objects are morphisms in $\mathrm{Set}$ from the one element set (giving you the specified point) and the new morphisms are commuting triangles in $\mathrm{Set}$. When we do this we of course get the object we cosliced at (or more specifically its identity morphism) as an initial object, and as we already had a unique map from any object to this object we get that its identity map to itself is terminal also.

This got me thinking about $\mathrm{CRing}$, which infamously doesn’t have kernels, so what if we follow the recipe above, or at least its dual, and consider the slice category over the initial object $\mathbf{Z}$. By the dual of the above this should now have a 0-object (the identity map on $\mathbf{Z}$) and so we could form kernels; indeed the kernel of a morphism $A \to B$ (where both $A$ and $B$ have an associated map to $\mathbf{Z}$) should be the preimage of $\mathbf{Z}$ I suppose, and this is probably a subring?

Before we get to that though we should ask what this category $\mathrm{CRing}\downarrow \mathbf{Z}$ even looks like! It’s not absolutely trivial as far as I can tell, as we get maps from polynomial rings into $\mathbf{Z}$ from evaluating at integer vectors. We can also add in nilpotents and other stuff that just ends up mapping to zero but is still fine. However, on the other hand, we immediately rule out positive characteristic rings, and anything which inverts an element of $\mathbf{Z}$.

I really have no conclusions about what this category of commutative rings with a map to $\mathbf{Z}$ actually is, but it is quite fun to play with.

Some funny representations

As part of a discussion in our Galois representations course John Bergdall challenged us to come up with a representation that is irreducible but not absolutely semi-simple. I found this a pretty fun thing to think about so I thought I’d write up my progress and what the next steps are.

First things first, a reminder of the definitions: an irreducible representation is one with no non-trivial invariant subspace, and a semisimple representation is one which can be written as a direct sum of irreducible representations. Adding the word absolutely just means that we require the property to hold after extending scalars to the algebraic closure of the base field. By allowing more general coefficients, things that are irreducible to start with can easily become reducible.

To start our search let’s work with the simplest type of potentially interesting representation I can think of, namely representations of the group $\mathbf{Z}$ on a two dimensional vector space over a field $K$,

$$\rho\colon \mathbf{Z} \to \operatorname{GL}_2(K),$$

which are entirely determined by the image of 1. Let’s try and find an appropriate matrix to make the example we want work.

In my head non-semi-simple things look something like this:

$$\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},$$

a repeated eigenvalue whose eigenspace is only one dimensional.

So in order to find an irreducible, non absolutely semisimple representation we want to find a matrix over a field which has no eigenvalues, but which over the algebraic closure has repeated eigenvalues. This is not possible over $\mathbf{C}$ for example: the trace would have to be twice the repeated eigenvalue, which would give that the eigenvalue was itself in the ground field, and therefore that the eigenvalues were defined over $K$ in the first place.

This sort of weird multiple-roots-of-irreducible-polynomials stuff happens only for non-perfect fields (by definition), of which the most quoted example is $\mathbf{F}_p((t))$. Here the polynomial $x^p - t$ is irreducible but has repeated roots over the algebraic closure, as it factors as $(x-\sqrt[p]{t})^p$. As we are dealing with 2-dimensional representations here we should look for a matrix over $\mathbf{F}_2((t))$ with characteristic polynomial $x^2 - t$; one simple example of this is the companion matrix

$$M = \begin{pmatrix} 0 & t \\ 1 & 0 \end{pmatrix}.$$

So we have a matrix that has no eigenvalues over the base field but a repeated eigenvalue $\sqrt t$ over the algebraic closure. We now need to check that it only has a one dimensional eigenspace, to be sure that we have a non absolutely semisimple representation. This eigenspace is given by the kernel of

$$M - \sqrt{t}\,I = \begin{pmatrix} \sqrt{t} & t \\ 1 & \sqrt{t} \end{pmatrix}$$

(we are in characteristic 2, so $-\sqrt t = \sqrt t$),

which is indeed one dimensional and so we are done.
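For what it’s worth, the two identities underlying this example can be checked symbolically (sympy computes in characteristic 0, but both statements below make sense in any characteristic):

```python
from sympy import Matrix, symbols, eye

t = symbols('t')
M = Matrix([[0, t], [1, 0]])
print(M.charpoly().as_expr())   # lambda**2 - t
print(M**2 == t * eye(2))       # True: any eigenvalue of M squares to t
```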

One property of this example is that if we consider the restriction of the representation to the subgroup $2\mathbf{Z}$ we get something semisimple as the square of our matrix $M$ is diagonal. The new improved challenge is therefore to find a representation for which this doesn’t happen either; more explicitly to find an irreducible not absolutely semisimple representation which remains that way on all finite index subgroups. I don’t think the example I have right now can be extended to this case and it might be necessary to look at representations of more exotic groups than simply $\mathbf{Z}$ for this.

Thoughts welcome!

Brief intermission - please stand by

Just a quick post to explain my recent absence (and get beeminder off my back).

Recently I haven’t posted that much, the reason being that I just moved to the US to start a PhD in maths at Boston University and things have been kinda busy with the move. I’m really excited about starting to do maths full time again, and I’m sure there will be a lot more to post about in the very near future. I just moved into an apartment today after a brief stay in a hotel while I attended a week long international orientation; not so much maths, but I did get the chance to give a 10 minute talk on the 4-colour theorem.

See you soon.

7ECM - Some personal thoughts

So I’m currently on my way back from the 7th European Congress of Mathematics, which took place in Berlin over the course of the last week. While everything is still fresh(ish) in my memory I wanted to record some of the bits that I (personally) found most interesting so that I’ll be able to look back when I invariably do forget what I did for a whole week. If you like similar things to me then maybe you’ll find something interesting here too (if you don’t like similar things to me then I question your choice to read this blog). Some of these topics I’d like to revisit in more detail (possibly in a future post?!) but for now these short snippets will have to do, in approximately no particular order and undoubtedly with some gaps.

Peter Scholze

This is the most obvious one for me, Scholze talked about Perfectoid spaces, which since he introduced them in his thesis have become a hot topic in number theory. Scholze himself has been awarded various prizes and honours for developing this theory (including another at the ECM).

The motivation is to transfer problems involving the $p$-adics (which can be problematic as they are mixed characteristic) over to the similar looking ring $\mathbf{F}_p((t))$. Scholze said he wants to fight for the freedom of $p$. The way he does this is via a technique called tilting, which was originally used by Fontaine and Wintenberger to prove a result relating the Galois theories of these fields. Scholze (and independently Kedlaya-Liu) takes this result and distils out the definition of a field where this works and calls it a perfectoid field. He then does some natural looking things (which I’m sure are actually very complicated to do properly) and constructs perfectoid algebras and perfectoid spaces. He shows that various things work out nicely and that one can tilt the spaces too in a way that matches what you’d hope for simple spaces like projective space.

In the last part of the talk Scholze got increasingly high level so rather than embarrass myself trying to replicate it I’ll just say that it definitely looks like he has more impressive things forthcoming.

One thing that was especially interesting for me in this talk was mention of work by Jared Weinstein who works at Boston University (where I will also be from September). I was aware that Weinstein was doing work in this area so it was really interesting to have a bit of it put into context.

Karin Vogtmann

Vogtmann first gave a nice introduction to outer automorphisms of the free group and why one might care, and got me interested with the observation that abelianising gives automorphisms of $\mathbf{Z}^d$. She also described various related “outer spaces”, which are spaces that the outer automorphisms of free groups act on. The construction is really cool and involves deforming certain metrized graphs using the interpretation of $\operatorname{Out}(F_n)$ as the automorphisms of the rose graph. (One thing she mentioned in passing here that I’d like to learn more about is Gromov’s topology on the space of all metric spaces, so meta.) The gist was that the various slightly different outer spaces were all very related to the main one. The pictures were very good and extremely helpful and for me this talk definitely wins the Oscar for best visual effects.

Don Zagier

I found Don’s style of presenting (rapidly moving through hand written slides while barely breathing) engaging and super impressive; multiple times he switched to a slide and corrected something before I’d even had time to read the first sentence. The content itself was also really fascinating: he started with a fairly random looking recursively defined sequence that Apéry used to prove that $\zeta(3)$ is irrational and gave multiple interpretations of where such a sequence comes from (though in fact there is still something very special going on). He ended up giving some motivation for the concept of motives (pun not intended) and presented us with a few results concerning various special values that were conjectured by a coauthor of his based on pure thought on the level of motives and proved by Zagier and others on the concrete level.

Unfortunately I can’t remember quite as much of what went on as I’d like due to the above-mentioned speed and Don’s recommendation at the start that we “don’t even bother trying to take notes”. I do hope a video of the lecture will appear at some point as out of all the talks I saw I think this is the one I’d most benefit from rewatching in its entirety.

Another fun coincidence for me personally was that he mentioned some work of Irene Bouw, who I met only days beforehand in Ulm.

Tommy Hofman

Tommy talked about an algorithm he and Claus Fieker developed to compute Hermite normal forms over Dedekind domains. This was another exceptional example of the law of large numbers (or whichever law it is that explains funny coincidences) as previously I worked on implementing algorithms for computing HNFs and doing this work was in some way the original cause of me coming to Kaiserslautern, which is where Tommy works! The main takeaway from this talk (other than a nice new algorithm) is that quotients of rings of integers by powers of prime ideals are always Euclidean rings.

Other nice talks

  • Fedor Pakovich found a relationship between Davenport-Zannier pairs (generalisations of pairs of polynomials $(f,g)$ such that $f^3-g^2$ is of minimal possible degree, excluding trivial things) and Dessins d’Enfants. He used this to obtain a classification of these pairs (roughly 10 families and 11 sporadic pairs) by counting the corresponding graphs (which ended up being interpretable as certain weighted trees or something like this). One of the reasons I found this so interesting was that it’s the first time I think I’ve seen Dessins used in anger; I started reading Girondo and González-Diez’s book on compact Riemann surfaces and Dessins about a year ago but got distracted before I got to the juicy bit.

  • Thomas Willwacher talked about cohomology of graph complexes which it turns out is really hard to compute but relates many other areas of maths (including automorphisms of free groups!).

  • Maryna Viazovska gave a nice talk about her (and others’) recent proof of sphere packing in dimensions 8 and 24, which got a lot of popular attention at the time so was nice to hear about.

  • Meinolf Geck spoke about his work computing with groups, remarked that finite groups of Lie type $G(\mathbf{F}_q)$ make up a large portion of the classification of finite simple groups. He wants to construct generic character tables as $q$ varies for different algebraic groups and possibly different primes - and this actually looks to work! That this is even possible totally blew my mind.

  • Uli Wagner spoke about the topological Tverberg conjecture; Günter Ziegler gave a talk that touched on this in Warwick in 2014 which I really enjoyed, so it was cool to hear about recent progress regarding it.

  • Jeremy Gray spoke about Poincaré and Weyl and it was a lot more philosophical than I was expecting, contrasting their views on the philosophy of science over time, but it was fascinating stuff.

There were certainly many many other interesting talks that I saw (and indeed many that I didn’t) but I’m tired and my memory is failing now and I have to make a post or Beeminder will take my money.

Other thoughts

There were a few political statements made during the conference, for example by Pavel Exner condemning the situation in Turkey (where academics were recalled/banned from travelling) and from Timothy Gowers regarding Brexit (short version: 48% of us voted to stay). This is something that I think is appropriate for large I/ECM like conferences, which represent the whole mathematical community.

Non-maths

Berlin is a really awesome city so even outside of the conference I did a lot of nice things, and I’m really glad I got a good excuse to visit before I finish my time in Germany. I took quite a lot of photographs (especially the Sunday before the conference when it was really nicely overcast), maybe I’ll upload some to Flickr and put a link here when I’m back in the UK and have some free time.

Weber modular functions

A quick introduction to modular functions, and the Weber modular functions in particular.

Background

A modular function for a subgroup $\Gamma$ of $\operatorname{SL}_2(\mathbf{Z})$ is a meromorphic function $f$ on the upper half plane $\mathbf{H} =\{x+iy\in \mathbf{C} : y > 0\}$, that is invariant by all matrices in $\Gamma$ when they act via

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\cdot\tau = \frac{a\tau + b}{c\tau + d}$$

(we also need $f$ to be “meromorphic at the cusps” too but let’s not worry about this right now).

So for example letting $\Gamma$ be the whole of $\operatorname{SL}_2(\mathbf{Z})$ we see that modular functions for $\operatorname{SL}_2(\mathbf{Z})$ must satisfy $f(\tau) = f(\tau+ 1)$ and $f(\tau) = f(-1/\tau)$ for all $\tau\in\mathbf{H}$, as

$$T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

are both in the modular group. In fact, as the above matrices generate the whole modular group, being invariant under the above two transformations is sufficient for a function on the upper half plane to be modular for $\operatorname{SL}_2(\mathbf{Z})$.

The most well known modular function is undoubtedly the $j$-invariant:

$$j(\tau) = \frac{1}{q} + 744 + 196884q + 21493760q^2 + \cdots$$

where $q = e^{2\pi i \tau}$. If you haven’t seen the $j$-invariant before this description as a $q$-expansion is not very helpful, but rest assured this function appears very naturally when considering sublattices of $\mathbf{C}$. In any case the properties of this function are the interesting and important part: first and foremost it is as injective as a modular function possibly can be. What I mean by this is that not only does

$$j\left(\frac{a\tau + b}{c\tau + d}\right) = j(\tau)$$

for any $\tau$ and any $\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in\operatorname{SL}_2(\mathbf{Z})$, but if $j(\tau) = j(\sigma)$ then there is some matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in\operatorname{SL}_2(\mathbf{Z})$ with $\sigma = \frac{a\tau + b}{c\tau + d}$.

This is useful as it allows you to find out if two points in the upper half plane can be transformed into each other by the above matrix action simply by calculating the value of $j$ at these points. This is especially handy when dealing with elliptic curves, which correspond to points in the upper half plane and are isomorphic only when their representative points are equivalent under the above action. This is the reason that $j$ is called the $j$-invariant.

In addition to this another nice fact about $j$ is that it generates the set of all modular functions for the full modular group when we take rational functions of it, that is

$$\{\text{modular functions for } \operatorname{SL}_2(\mathbf{Z})\} = \mathbf{C}(j).$$

It is no surprise then that $j$ is a sort of poster child for modular functions, and indeed before looking into this topic I don’t think that I knew an interesting example of a modular function other than $j$, well now I do!

Weber modular functions

In order to define some other modular functions we will use an intermediary function, $\eta$, which has $q$-expansion:

$$\eta(\tau) = q^{1/24}\prod_{n=1}^{\infty}(1 - q^n).$$

Now $\eta$ is not a modular function, rather a modular form of weight $\frac{1}{2}$, which means that instead of the condition we had for modular functions we have instead that

$$\eta\left(\frac{a\tau + b}{c\tau + d}\right) = \varepsilon\,(c\tau + d)^{1/2}\,\eta(\tau),$$

where $\varepsilon$ is a certain 24th root of unity depending on the matrix.

One consequence of this is that if we divide $\eta$ by another modular form of weight $\frac12$ we get a modular function. But we have only seen one such function so far, $\eta$, and $\eta/\eta$ is not a very interesting function. We can however apply transformations to $\eta$ similar to those above, with integral matrices of positive determinant instead (specifically ones that $\eta$ is not invariant under). This will allow us to create new functions of the same type. For instance we let:

$$\mathfrak{f}(\tau) = e^{-\pi i/24}\,\frac{\eta\left(\frac{\tau+1}{2}\right)}{\eta(\tau)},\qquad \mathfrak{f}_1(\tau) = \frac{\eta\left(\frac{\tau}{2}\right)}{\eta(\tau)},\qquad \mathfrak{f}_2(\tau) = \sqrt{2}\,\frac{\eta(2\tau)}{\eta(\tau)}.$$

These are Weber modular functions, named after Heinrich Weber who studied them in his Lehrbuch der Algebra. Indeed we could have defined these via their $q$-expansions without mention of $\eta$ at all, but that makes them seem rather arbitrary, which they certainly aren’t!

Notice that we have applied the elements

$$\begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix},\quad \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},\quad \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix},$$

all of which have determinant $2$, to $\eta$ here.

Until now the only modular function we have looked at is $j$, which was a modular function for the whole of $\operatorname{SL}_2(\mathbf{Z})$, so you will no doubt be pleased to hear that these functions are in fact modular for $\Gamma(48)$,

a proper subgroup of the full modular group.

By virtue of the normalising coefficients we used here, these functions satisfy several nice identities, for example

$$\mathfrak{f}_1^8 + \mathfrak{f}_2^8 = \mathfrak{f}^8 \qquad\text{and}\qquad \mathfrak{f}\,\mathfrak{f}_1\,\mathfrak{f}_2 = \sqrt{2}.$$
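Identities like these are easy to test numerically straight from the $q$-products; a minimal sketch with truncated products (the truncation point and test point are arbitrary):

```python
import cmath

def eta(tau, terms=200):
    # Dedekind eta via its q-product; q^(1/24) = exp(pi*i*tau/12)
    q = cmath.exp(2j * cmath.pi * tau)
    out = cmath.exp(1j * cmath.pi * tau / 12)
    for n in range(1, terms):
        out *= 1 - q**n
    return out

tau = 2j   # any point in the upper half plane will do
f  = cmath.exp(-1j * cmath.pi / 24) * eta((tau + 1) / 2) / eta(tau)
f1 = eta(tau / 2) / eta(tau)
f2 = cmath.sqrt(2) * eta(2 * tau) / eta(tau)

print(abs(f1**8 + f2**8 - f**8))         # ~0, up to floating point error
print(abs(f * f1 * f2 - cmath.sqrt(2)))  # ~0
```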

Whilst these are some very lovely looking expressions, such things aren’t quite as exciting in total isolation. So how can we use these functions? One interesting thing we can do with them is generate Hilbert class fields; for this sort of work it will help to look at Shimura reciprocity, which will be the topic of a future post!

New Blog!

I enjoy reading a lot of different blogs, so I have been thinking it would be cool to (re-)start my own mathsy/programmingy blog on a somewhat regular basis. So I have new-years-resolved to start blogging again/more regularly!

The eagle eyed among you may notice that it is in fact April (or at least that that is when this post was published) so I have already pretty much failed… However I shall not fail again! No, this time I have set up a beeminder goal pledging to blog regularly, meaning that I have to pay real money if I don’t keep this habit up, so you’re guaranteed to read something here at least. Whether it was worth reading or not will be for future generations to say (i.e. not you! Ok, ok, constructive criticism is welcomed with open (but slightly bear-like and intimidating) arms). This was probably a very-bad-idea, we shall see.

I have moved my old blog (which mostly concerned my work on FLINT) from Wordpress to this github.io site powered by Jekyll. So there is at least something here for me to build on, and if you want to follow this blog for updates you can do so with rss/atom/smoke signals.

This will cover almost any of the questions and explorations I go on and find interesting, probably mostly maths with a bit of computer science/other random things thrown in. Until next time, watch this space (or watch me go broke)!

GSoC with Flint - Finishing up

(Update: after some more work on matrix kernels I managed to improve upon what is given below; I don't think it needs another post, but see the long running thread on the flint-devel mailing list if you are interested in the current performance of the code.)

First of all apologies for not keeping everyone up to date recently. I've just been cleaning up code and deciding what is in a good enough state to include in my final evaluation for GSoC so there wasn't that much new of interest to report. This is all now done and my GitHub repository now contains only code that appears to be working well.

The main project goals of implementing quick methods to compute Hermite and Smith normal forms have been completed. Below is a table and graph comparing the timings to those obtained with Sage (I spent about half a week being very sad about my work until I realised I needed to clear the cache to get the right timings for Sage!). The class of matrices used was random square matrices, with dimension and entry bits as in the table.

Flint timings (s):

| dimension \ bits | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|
| 10 | 0.0001498 | 0.000181 | 0.0003962 | 0.0003608 | 0.0011182 | 0.0011266 |
| 20 | 0.0007708 | 0.0010738 | 0.0014056 | 0.0023238 | 0.0040728 | 0.009321 |
| 30 | 0.0021142 | 0.003127 | 0.0042294 | 0.007065 | 0.015792 | 0.0427272 |
| 40 | 0.0050182 | 0.0066494 | 0.0101572 | 0.0169392 | 0.038086 | 0.0985452 |
| 50 | 0.0105184 | 0.0138692 | 0.0183774 | 0.0319924 | 0.0812636 | 0.2245758 |
| 60 | 0.0181876 | 0.0243814 | 0.0342512 | 0.0625658 | 0.1578844 | 0.4360994 |
| 70 | 0.028565 | 0.0373348 | 0.0546572 | 0.1110402 | 0.290543 | 0.8147328 |
| 80 | 0.0417594 | 0.0595228 | 0.0882594 | 0.1842448 | 0.4881932 | 1.3889464 |
| 90 | 0.0694218 | 0.08668 | 0.1405782 | 0.2854802 | 0.7817248 | 2.2501918 |
| 100 | 0.0880376 | 0.1192832 | 0.2052142 | 0.4448414 | 1.245277 | 3.5487596 |

Flint to Sage timing ratios (< 1 is best for us):

| dimension \ bits | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|
| 10 | 0.6965 | 1.0258 | 1.9950 | 1.5602 | 3.8941 | 2.8422 |
| 20 | 0.7261 | 0.8606 | 0.9396 | 1.1124 | 1.0772 | 0.9704 |
| 30 | 0.6531 | 0.7794 | 0.8381 | 0.8015 | 0.7669 | 0.7449 |
| 40 | 0.6492 | 0.7048 | 0.7380 | 0.5891 | 0.4896 | 0.4245 |
| 50 | 0.6595 | 0.6637 | 0.5511 | 0.3997 | 0.3666 | 0.3543 |
| 60 | 0.6354 | 0.6325 | 0.4950 | 0.3586 | 0.3128 | 0.2958 |
| 70 | 0.6016 | 0.5616 | 0.3836 | 0.3126 | 0.2764 | 0.2768 |
| 80 | 0.1957 | 0.2762 | 0.3801 | 0.6697 | 1.2174 | 1.6715 |
| 90 | 0.2509 | 0.3145 | 0.4730 | 0.8187 | 1.5402 | 2.1104 |
| 100 | 0.2527 | 0.3257 | 0.5340 | 0.9906 | 1.8450 | 2.5461 |

For simplicity's sake I simply compared fmpz_mat_hnf_pernet_stein to Sage's hermite_form, so it's worth noting that Flint is faster still for small matrices if another method is used (which the fmpz_mat_hnf function should choose for you in practice).

We can see here that although Flint does well for many matrices in this range it gets worse as the matrix and bit size gets larger; indeed this trend continues and my functions are far worse for very big matrices (Denis Kryzkov's benchmark here gives a good indication of the scale of the problem). The run time of the asymptotically most efficient HNF method I have implemented (Pernet-Stein) depends heavily on the computation of nullspaces and so this is definitely an area that can be improved. Both approaches I've tried to speed up nullspace computation (multimodular and p-adic lifting) haven't worked out being any better (asymptotically) than the pre-existing code based on the row echelon form.

The remaining barrier here seems to be modular rref, which I've looked at improving over the past week; this is certainly possible and I plan to carry on working on it (I have a complete but buggy implementation of a method described in Storjohann's thesis and a working implementation of classical Gaussian elimination which is fast for small matrices at the moment). Finishing this will I hope bring the timings for Pernet-Stein down to something more like the ones in Sage. Sage uses IML for these computations, but even without using BLAS I think we could get a fair amount closer to IML's performance for large matrices.

As for Smith forms, the randomised algorithm I worked on remains problematic, so I have left that out for now and stuck with the two slightly older approaches, a general one due to Kannan and Bachem and a modular one due to Iliopoulos for full rank square matrices. Below are some timings, again with comparisons to Sage. The class of matrices used is the same as that above, which means that it is highly likely that Iliopoulos' would be appropriate (though it may not have been chosen by fmpz_mat_snf). Sage uses Pari for this, but it was easier for me to change my existing timing code than set up a direct comparison to Pari.

Flint timings (s):

| dimension \ bits | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|
| 10 | 0.0001008 | 0.0001776 | 0.0002186 | 0.0005208 | 0.0016076 | 0.0042602 |
| 20 | 0.001181 | 0.0017224 | 0.0025602 | 0.0050306 | 0.0131318 | 0.038855 |
| 30 | 0.0039768 | 0.006887 | 0.013429 | 0.031941 | 0.0904368 | 0.2860226 |
| 40 | 0.0138918 | 0.0201976 | 0.0408486 | 0.1258706 | 0.386331 | 1.1849852 |
| 50 | 0.0312358 | 0.0544678 | 0.1120322 | 0.370241 | 1.155253 | 3.6941254 |
| 60 | 0.061067 | 0.116257 | 0.2779696 | 0.8377208 | 2.5126946 | 7.9345656 |

Flint to Sage timing ratios (< 1 is best for us):

| dimension \ bits | 2 | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|
| 10 | 0.01630 | 0.02870 | 0.04939 | 0.05714 | 0.10546 | 0.22552 | 0.36044 |
| 20 | 0.05063 | 0.06777 | 0.08559 | 0.08868 | 0.10665 | 0.14573 | 0.19519 |
| 30 | 0.08086 | 0.07276 | 0.09437 | 0.10418 | 0.13330 | 0.17716 | 0.23537 |
| 40 | 0.08905 | 0.10716 | 0.09976 | 0.10617 | 0.16009 | 0.21715 | 0.27530 |
| 50 | 0.10550 | 0.11037 | 0.11270 | 0.11762 | 0.18405 | 0.24098 | 0.30266 |
| 60 | 0.12292 | 0.10537 | 0.11135 | 0.12858 | 0.17996 | 0.23113 | 0.29524 |

(I will update this table on my blog when more timings finish; I wanted to get this post out and they are taking a while.)

These timings look good for Flint but I'm not completely sure yet what the large scale behaviour is.

I still have a number of more experimental bits of code around that I will continue to work on getting into a more stable and usable state, along with some other little bits that I never managed to get to during the official GSoC period but hope to get to at some point.

Finally I want to say a huge thanks to everyone who commented on what I've been doing, and especially to Fredrik for his excellent advice over the course of the project. All the comments were very much appreciated.

GSoC with Flint - Week 11

This week I've worked on a variety of different things, unfortunately getting hung up on most of them!

First I found some bugs in my Saunders-Wan Smith normal form code, most of them ended up being fixable, however one still eludes me. It seems the bug is fairly rare (occurs for roughly 1 in 10000 test cases) and it is certainly related to the random nature of the algorithm but my current thinking is that the current behaviour should not be happening even when we get unlucky with the randomness. I've left this issue alone for now after spending a couple of days making no progress on it.

I then moved on to writing some general HNF/SNF methods that pick between available implementations based on matrix size, norm and anything else that I could work out as being relevant. While doing this I found that the Kannan-Bachem Hermite form method worked better than the modular method some of the time and so I decided to try and combine them into a modular method that works by taking minors into HNF rather than rows. I had some problems making this work correctly when the input was not of full rank and all the fixes I tried ended up having some corner case that made them fail. It just occurred to me however that when the modular method is used as part of the Pernet-Stein algorithm the matrix given is always square and so this was not perhaps a completely wasted effort.

I then moved back to working on Pernet-Stein, specifically I added capacity for non-full row rank matrices, and fully general matrices should be done very soon.

This week I'm going to be doing some profiling and comparisons between my implementations and existing ones and try and work out the reasons why certain results are as they are and how the HNF/SNF methods I now have can be made faster in the future. It should be helpful to have a note of the barriers to having faster HNF/SNF computation in Flint and how they could be overcome.
Of course I'll also be tidying up and documenting several bits as I go along to fill in any gaps in functionality I have left along the way.

GSoC with Flint - Week 10

This week I worked more on the Smith normal form algorithm I talked about last week. I implemented Iliopoulos' algorithm for computation of the Smith form modulo an arbitrary integer, this procedure is used in a couple of places as part of Saunders and Wan's "engineered" algorithm. Firstly we use a prime power modulus to find the Smith normal form locally for small primes (i.e. those less than 100), the modular approach is also used for the rough part (concerning all the other primes) when the largest non-zero invariant factor is small compared to the size of the matrix. This algorithm is now working correctly, though the question of how to test it properly given its Monte Carlo nature is one that will require some thought. Currently whenever I have encountered a matrix for which the output of Saunders and Wan doesn't match the deterministic algorithms' outputs it has turned out to be a code bug rather than a side effect of the algorithm's randomness. I suppose allowing for a reasonable number of failures beyond the expected number in test code would be one approach, but of course there will still be occasions when the number of failures exceeds this and allowing a large number of extra failures could allow real bugs to creep in.

For the next couple of days I'm going to work a little more on Smith forms, hopefully implementing Storjohann's banded method for finding Smith forms of upper triangular matrices, this could be coupled with a HNF algorithm to give another (deterministic!) algorithm for the Smith form of a general matrix. I also need to clean up the Saunders and Wan code a bit as there are still a number of inefficiencies. I have not got a valence method included in this algorithm as this would require implementing a few other things (such as minimal polynomials), but the option is certainly there and it would easily slot in to the current code.

GSoC with Flint - Week 9

So this week I've been working on implementing the methods to find the Smith normal form of an integer matrix, rather than the Hermite normal form which I've worked on up until now. I'm mostly working off of the excellent paper of Saunders and Wan which describes an "engineered" algorithm to compute the Smith form which is highly practical. By engineered they mean that the algorithm switches between different methods based on practical observations regarding which method is likely to be the most efficient for different types of matrix, however the algorithm is structured in such a way that the asymptotic complexity remains as small as possible.

The algorithm is actually randomised of Monte Carlo type, meaning there is a controllable possibility of error present in the results. It becomes harder to be sure that the exponents of primes occurring in the Smith form are correct as the primes get smaller in the algorithm of Eberly, Giesbrecht and Villard (on which Saunders and Wan algorithm is based). So the computation is split into two parts, the smooth part concerning only primes less than a given limit and a rough part concerning all others. Currently I have an Eberly, Giesbrecht and Villard based routine working and finding rough parts fairly well and so I am currently working on a modular SNF algorithm to fill in the gap.

Saunders and Wan don't stop with just this however, they also incorporate ideas described by Dumas, Saunders and Villard which use a so called valence based method which I would also like to add at some point. It seems this does require efficient computation of the minimal polynomial however, so I'm not sure how far I will get with this.

GSoC with Flint - Week 8

This week I've been working on improving the computation of the nullspace of an integer matrix by implementing a p-adic lifting based method. The algorithm is also described in Stein's book on computing with modular forms and is closely related to Dixon's p-adic method for linear system solving. This is pretty much finished now (modulo some weird memory leaks and determining the best values for parameters such as p-adic precision and range of primes used) and does appear to be asymptotically faster than the multimodular algorithm I worked on before, though experiments suggest that currently the matrix needs to be fairly large before the p-adic algorithm becomes better.
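To give the flavour of the p-adic approach, here is a self-contained toy version of Dixon-style lifting for a square nonsingular integer system; a real implementation would precompute the inverse of A mod p once and finish with rational reconstruction, both of which are omitted here (all names are mine, this is not the Flint code):

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over F_p by Gauss-Jordan (A square, invertible mod p)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)            # inverse mod a prime
        M[c] = [x * inv % p for x in M[c]]
        for i in range(n):
            if i != c and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[c])]
    return [M[i][n] for i in range(n)]

def dixon_solve(A, b, p=65537, steps=12):
    """x with A x = b mod p^steps, built one p-adic digit at a time."""
    n = len(A)
    x, r, pk = [0] * n, list(b), 1
    for _ in range(steps):
        xi = solve_mod_p(A, r, p)      # next digit: A xi = r (mod p)
        x = [u + d * pk for u, d in zip(x, xi)]
        r = [(r[i] - sum(A[i][j] * xi[j] for j in range(n))) // p
             for i in range(n)]        # exact division by p
        pk *= p
    return [v - pk if 2 * v > pk else v for v in x]  # symmetric residues

print(dixon_solve([[2, 1], [1, 3]], [2, -9]))  # [3, -4]
```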

I was originally thinking of implementing Pauderis and Storjohann's algorithm if I had time this week, but have spent much of the past week looking at efficient nullspace computation (and being plagued a little by unexpected power cuts!) so haven't got much further than a skeleton implementation. Maybe I can flesh this out at some point but for the next week I'm going to move on to algorithms for the Smith normal form, another important normal form for matrices over the integers, where both column and row operations are allowed.

Hopefully building on what I've done so far I should be able to get some fast Smith normal form algorithms implemented shortly, indeed one simple approach is to repeatedly transpose and compute Hermite normal forms until the resulting matrix is diagonal, this won't necessarily be the Smith form but it can then be computed with relative ease. Something more sophisticated than this will undoubtedly be best, but many SNF algorithms involve computing the HNF at some point and working from there so the current HNF algorithms provide a good basis for those computations.
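As a sketch of that simple approach (using sympy's HNF as a stand-in for Flint's, and assuming a square nonsingular input; the alternation terminates quickly in practice, though I make no claim of proof):

```python
from sympy import Matrix, igcd
from sympy.matrices.normalforms import hermite_normal_form

def snf_by_hnf(A):
    """Alternate HNF and transpose until diagonal, then fix divisibility."""
    while not A.is_diagonal():
        A = hermite_normal_form(A.T)
    d = [A[i, i] for i in range(A.rows)]
    for i in range(len(d)):            # gcd/lcm passes give d[0] | d[1] | ...
        for j in range(i + 1, len(d)):
            g = igcd(d[i], d[j])
            d[i], d[j] = g, d[i] * d[j] // g
    return Matrix.diag(*d)

print(snf_by_hnf(Matrix([[2, 4, 4], [-6, 6, 12], [10, -4, -16]])))
# Matrix([[2, 0, 0], [0, 6, 0], [0, 0, 12]])
```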

GSoC with Flint - Week 7

This week I've been working on improving the algorithm of Pernet-Stein as much as possible. After introducing as many of the ideas given in the original paper as I could I found the main bottleneck to actually be the computation of the nullspace of a certain submatrix of the matrix given, this is needed in order to efficiently solve a linear system which (likely) has a nasty final row. If we know a basis for a nullspace of the first n-1 rows of the system we can replace the final row with a random nice (having small entries) row and then find a solution to the original system by adding on a suitable multiple of the nullspace basis vector (the nullspace should be 1 dimensional for random input).
Flint uses the reduced row echelon form of a matrix to compute nullspaces (the nullspace of a matrix in this form can be easily read off and the transformation does not alter it) and so a natural place to improve nullspace computation is to improve the row echelon form computation. We can use a multimodular approach for this problem (this is described in Stein's book on computing with modular forms) and I've achieved some nice speed-ups with this method in the past couple of days. For example the multimodular method is around 200x faster for 125x125 matrices with 32 bit entries. While this has made Hermite form computations a lot faster (nullspace computation is over 90% of the work for large matrices) I still want to try and see if this can be improved upon further, after all, in this case we don't need the whole row echelon form just a vector in the kernel and the knowledge that the nullspace is 1-dimensional. So I plan to work on this further in the next couple of days, and depending on how I feel about this approach I will either spend the rest of this week making improvements to Pernet-Stein or possibly work on implementing the algorithm of Pauderis and Storjohann.

GSoC with Flint - Week 6

This week I've been working on implementing Pernet and Stein's algorithm to compute the HNF. The basic idea of this method (which is in turn due to Micciancio and Warinschi) is to use the given matrix to construct another auxiliary matrix with small determinant (so the HNF of the new matrix can be computed without coefficient explosion) from which the original HNF can be reconstructed. The algorithm splits into a few separate stages: the computation of two determinants, the modular HNF of the smaller matrix, the addition of a column to the HNF and then the addition of two rows. Each of these steps can be optimised quite a bit using some clever tricks, which I'm currently working on doing. The adding of rows and columns to the HNF now seems to work well, and exploiting the similarities of the submatrices whose determinants are needed, to compute them both faster, should be done tomorrow. At that point it should be interesting to take stock and see how well the Pernet-Stein approach is working compared to previous methods I've worked on. I'll likely spend a couple more days after that improving this approach even more.

This is definitely the most complex algorithm I've worked on so far, but when completed it should really blow the others away for large enough random matrices.

GSoC with Flint - Week 5

Week 5 of my GSoC project is now over. This week I've been trying some variations of the older approaches to computing the Hermite normal form that I implemented in the first few weeks; I've had some success extending the Kannan-Bachem approach to non-square matrices and implementing other practical speed-ups. However my main project for this week, working on a variant of the modular approach for use when factors of the determinant are known, hasn't worked out so well yet. I've been really struggling to make it work as intended and have decided to place it to the side for the moment and come back to it later, as it is not vital to the rest of the project. The main use case would be if the determinant was larger than word sized but a factorisation into relatively prime factors was known where each factor fits in a word; here there would probably be an advantage to using multiple matrices modulo these factors. If however one of the factors was larger than word size, or indeed the determinant itself fits in a single word, I can't see how using such a factorisation would help much (if at all). There is also the problem of obtaining the factors in the first place: certainly some methods of computing the determinant (such as the method described by Abbott, Bronstein and Mulders) can give some factors of the determinant for free when computing it, but this is certainly not guaranteed to give us useful information in light of the above. Nevertheless I would like to try and finish implementing this at some point, but I don't want to slow the whole project down just working on this.

Next week I'll begin working on the algorithm given by Pernet and Stein, which will take some time to do right and so I expect to be working on this in some way or another for the whole week (at least!). Initially I'll get a simplified version working (which will likely end up looking like the algorithm of Micciancio and Warinschi) and then I'll start working on the improvements suggested in Pernet and Stein's paper. Hopefully when this is done it will be extremely fast compared to the algorithms I have worked on so far, however its run time will depend on what I can squeeze out of the modular approach I worked on a couple of weeks ago.

GSoC with Flint - Week 4

This week I've been working more on some of the older approaches to computing the HNF. First I finished off the modulo determinant method which works well for square matrices of full rank. One of the issues with this method however is that the determinant needs to be calculated first and can be quite large, in which case there may be little gained by calculating modulo it.

I've also implemented the algorithm described by Kannan and Bachem which takes the principal minors to Hermite form one at a time, as opposed to other methods I've implemented so far which work column by column. This also provides significant improvement on the classical method, and indeed is performing better than the modular approach in many cases.

For the next week I'll be trying to optimise the algorithms implemented so far as much as possible, especially the modular approach of Domich, Kannan and Trotter, as this plays a key role in the algorithm of Pernet and Stein. One thing I would like to try is combining the two approaches I worked on this week and making a modulo determinant version of the Kannan-Bachem algorithm. There is also the possibility of factoring the determinant into some relatively prime parts and working modulo each of those, before recombining at the end with the Chinese remainder theorem; of course this involves factoring the determinant, which could make the whole procedure less worthwhile. I also need to make the necessary modifications to ensure that all algorithms work in the non-square case as efficiently as possible.

GSoC with Flint - Week 3

This week I've been able to get started working on my GSoC project full time after finishing my university exams on Tuesday (hooray!).

I finished off what I was working on at the time of the last update, a more efficient algorithm that still computes the Hermite normal form in a straightforward way but takes less time by using the extended Euclidean algorithm to reduce rows rather than simply repeatedly reducing one row using another. The coefficients can still grow very fast in this algorithm which leaves it highly non-optimal in practice however small changes can be made to reduce this problem such as changing the order in which rows are reduced (this is described by Bradley here).
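The core step here is a 2x2 unimodular transformation built from the extended Euclidean algorithm, applied to a pair of rows so that a pair of entries in some column becomes their gcd and zero; a sketch (names are mine, not Flint's):

```python
def xgcd(a, b):
    """(g, x, y) with a*x + b*y == g == gcd(a, b)."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def clear_entry(rows, i, j, c):
    """Unimodular row operation zeroing rows[j][c] against pivot rows[i][c]."""
    a, b = rows[i][c], rows[j][c]
    if b == 0:
        return
    g, x, y = xgcd(a, b)
    u, v = a // g, b // g              # det of [[x, y], [-v, u]] is (ax+by)/g = 1
    ri, rj = rows[i], rows[j]
    rows[i] = [x * s + y * t for s, t in zip(ri, rj)]  # entry c becomes g
    rows[j] = [u * t - v * s for s, t in zip(ri, rj)]  # entry c becomes 0

rows = [[4, 7, 2], [6, 1, 0]]
clear_entry(rows, 0, 1, 0)
print(rows)  # [[2, -6, -2], [0, -19, -6]]
```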

One way to mitigate the coefficient explosion problem is to perform the computations modulo some number and then use the result of this computation to find the actual HNF. If we use a multiple of the determinant of the HNF matrix we are trying to find (which we can in fact determine from the original matrix) the real HNF can be found fairly easily, and this is what I have been working on for the past couple of days. This method of finding the HNF is due to Domich, Kannan and Trotter and the paper describing it can be found here. The function I have implemented seems to be working fine on the test cases it is given, but I am getting some memory leaks that I'm really having trouble getting to the bottom of, which is my main problem currently.

Once this is done I'll start working on a multimodular algorithm which will involve computing the HNF modulo several primes and combining the resulting matrices via the Chinese remainder theorem.
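The recombination step is plain entrywise CRT; for reference, a tiny incremental version (using Python 3.8+'s modular inverse via pow):

```python
def crt(residues, moduli):
    """x mod prod(moduli) with x = r_i (mod m_i), the m_i pairwise coprime."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        # shift x by a multiple of m so the new congruence also holds
        x += m * ((r - x) * pow(m, -1, mi) % mi)
        m *= mi
    return x % m

assert crt([2, 3], [5, 7]) == 17
```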

GSoC with Flint - Week 2

I've just reached the end of the second week of Google summer of code so it's about time for a short update.

As with last week I've been working on the project in between my exams (which finish on Tuesday!) so progress has been slow so far. This week I got a few bugs fixed with the classical implementation and test code before moving on to starting work on another algorithm for finding the Hermite normal form, based on the extended Euclidean algorithm. This is still a classical algorithm but it does improve the efficiency of the most basic approach. However it still suffers from the coefficient explosion problem that causes these algorithms to all end up taking more time than they might initially appear to, which is why more intricate methods are needed for algorithms that scale well.

I'm very much looking forward to getting started working on Flint full time this week and hopefully next week I'll have some more interesting news to share here!

GSoC with Flint - Week 1

My Google summer of code project was accepted, so this summer I'm working on implementing efficient methods to find the Hermite normal form of a matrix in Flint. The official coding period began this week and so I have been starting to code.

It has been a slow first week for me as I still have several university exams on, so I haven't had as much time to work on the project this week as I will in future weeks. So there is not much to report so far but I'll do so anyway to get in to the swing of things.

I originally wrote some sample code when I applied for GSoC which was a very simple HNF algorithm following Henri Cohen's book. This used a column Hermite normal form convention that placed zero columns at the start, however this is not the convention I am using for the actual project where the row Hermite form makes more sense (additionally this convention is used by two of the papers I am trying to implement). So I have been changing the code and tests over to the new convention. This has not been quite as straightforward as I would have liked and it has thrown up a few bugs, seemingly in my test code rather than in the HNF algorithm itself however. Hopefully I can sort these testing issues out and get onto implementing some more interesting algorithms soon.

Google Summer of Code 2014 - FLINT

This year I am applying to take part in Google Summer of Code (GSoC) over the summer, working as part of the FLINT project. FLINT is a C library which contains highly optimised implementations of algorithms to perform number theoretic tasks. For example, FLINT was used as part of a project to compute the first one trillion congruent numbers (assuming BSD). One of the main techniques used to compute these congruent numbers was looking at the coefficients of the Fourier expansion of an associated modular form (a function from the upper half plane to the complex numbers satisfying some nice conditions).

Now one of the interesting things about modular forms that makes them easier to work with is the fact that they form a finite dimensional vector space over the complex numbers. The project I'm hoping to do is actually related to this in a slightly tangential way. One way to find a nice basis for the space of $q$-expansions of modular forms uses the concept of the saturation of a module in an essential way; this in turn can be computed by finding the Hermite normal form of a matrix. The Hermite normal form of a matrix is the unique matrix obtained from it by unimodular (row/column) operations satisfying some conditions on its entries. So by computing Hermite normal forms quickly we can find bases for spaces of $q$-expansions more easily, but there are also a whole raft of other applications within algebraic number theory, cryptography and yet more areas.

The project I am hoping to work on is to implement very efficient algorithms for computing first the Hermite normal form of a matrix, and later other related special forms of matrices such as the Smith normal form (used heavily in the theory of finitely generated abelian groups). There are a lot of interesting algorithms out there so this should be really fun to work on. Also I quite like the fact that I should be able to see the results of implementing efficient algorithms for the HNF first hand, as I use them both directly and indirectly (through other algorithms) fairly regularly when computing with all sorts of objects. I'm planning to start off implementing a couple of the more basic HNF algorithms to get a feel for the project and get the relevant helper functions implemented, before moving on to more complicated methods such as Pernet and Stein's algorithm or Pauderis and Storjohann's; after that I'll try and move on to the Smith and possibly Howell normal forms. If I am accepted I'll continue blogging here weekly about my progress thinking about and implementing these algorithms (possibly more often if I have more to say!).

Tomorrow's mathematicians today 2014

Last weekend I went to Tomorrow's Mathematicians Today 2014, a conference for undergraduates, this year at the University of Surrey. This was the third time the event has taken place but only the second time consecutively; hopefully it will continue to be a yearly event (I did hear a rumour it will be running next year, which I really hope is true). I had a good time once again, in fact the whole event was very similar to the year before, but more relaxing for me personally as I did not give a talk this year.
The attendance seemed very much the same as previously, which was a little surprising seeing as more people from Warwick came this year; maybe the weather affected attendance badly?

The talks were a nice variety again and I saw a lot of cool things, most notably the keynote speaker Professor Luis Alday's talk on some of the mathematics of string theory and the well deserved GCHQ prize winner M. Syafiq Johar's talk on Rational tangles and the Euclidean algorithm. There were also interesting presentations on parallel algorithms and modularity.

I would highly recommend any UK undergraduate to consider attending such a conference (put a note in your calendar to Google for it sometime next October), and moreover to consider presenting a talk or a poster. In fact I almost wish I had prepared a talk now!

Riemann hypotheses

Must... start... writing... regularly, this is just a quick update about a talk I gave recently to try and ease myself into the swing of things.

I recently prepared a talk for the Warwick maths society on some analogues of the Riemann zeta function and associated statements about them that resemble the Riemann hypothesis. The talk was called "Riemann hypotheses", and there are some slides here.
It was a lot of fun to prepare and even though I only had time to discuss the Riemann, Ihara and Dedekind zeta functions in the end I think my message of "zeta functions are cool and varied objects that are fun to play with" came across at least a little bit.