Quirky Facts: Group objects in the category of groups

Recently I was thinking some more about (abelian) groups and I was led to the question of what the group objects in the category of groups are. This is a funny question in and of itself and the answer also has a sense of humor, hence a new series “Quirky facts”. I don’t just work on amusing consequences of the axioms of algebra, honestly… But it is fun and provides excellent blog fodder.

So a group object in some category (that has a terminal object and finite products) is an object equipped with a few morphisms that give it the structure of a group. By way of some examples: groups are the group objects in the category of sets, Lie groups are the group objects in the category of smooth manifolds, and algebraic groups are the group objects in the category of algebraic varieties; the list goes on.
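
To spell out the definition (in one standard formulation, my paraphrase): a group object is an object $G$ together with morphisms $m\colon G\times G \to G$, $e\colon 1 \to G$ and $i \colon G \to G$ (with $1$ the terminal object) satisfying \begin{equation*} m\circ(m\times \operatorname{id}) = m\circ(\operatorname{id}\times m), \qquad m\circ\langle e\circ{!},\operatorname{id}\rangle = \operatorname{id}, \qquad m\circ\langle \operatorname{id}, i\rangle = e\circ{!}\text{,} \end{equation*} where ${!}\colon G\to 1$ is the unique such map: exactly the usual group axioms, written without reference to elements.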

Thinking about this too late at night one is led to the question: what is a group object in the category of groups? Is it all the groups? Some subset perhaps? Are there more somehow (like how a set can often be given two or more group structures)?

In any event I encourage the reader to try and work it out for a while before reading on, it’ll be worth it, I promise!

With that said, let’s begin. Let $G$ be a group object in the category of groups, so we have $\times\colon G \times G \to G$ and $e\colon \{1\} \to G$ and $i\colon G \to G\text{,}$ all group homomorphisms. $G$ is itself a group, so we’ll denote its own product by $\cdot$. The first thing to note is that, as the group object identity map $e$ must be a group homomorphism with $\{1\}\ni 1 \mapsto 1 \in G\text{,}$ the identity element for $\times$ must be the same as the identity element for the underlying group operation $\cdot\text{.}$

Now we have a set $G$ with essentially two group operations $\cdot, \times$ on it. The fact that $\times$ has to be a group homomorphism, where the product group structure on $G \times G$ is given by $(a_1,b_1)\cdot(a_2,b_2) = (a_1a_2,b_1b_2)\text{,}$ means that
\begin{equation*} (a_1 \cdot a_2) \times (b_1 \cdot b_2) = (a_1 \times b_1)\cdot(a_2\times b_2) \end{equation*}
for any $a_1,a_2,b_1,b_2 \in G\text{.}$ As this is symmetric in $\cdot , \times$ this also says that the group $(G, \times)$ has a group object structure given by $\cdot\text{!}$ At this point one might start to wonder, is $\cdot = \times\text{?}$ So let’s throw in some elements: what about
\begin{equation*} a \times b = (a\cdot 1)\times(1\cdot b) = (a\times 1)\cdot(1\times b) = a\cdot b\text{?} \end{equation*}
Ah hah! So the group operations were really the same!

So the answer is just totally boring then? Every group is a group object with the expected operation alone? Well, not quite: so far we’ve really just been talking about monoids, i.e. we haven’t mentioned inverses at all. In order to be a group object the “new” inversion map must be a group homomorphism for the underlying multiplication, which we now know is the group object multiplication too, so by uniqueness of inverses it must be the usual inversion map $g \mapsto g^{-1}\text{.}$ So the groups for which this all goes through are those for which inversion is a group homomorphism. This is precisely the abelian groups, no more no less!
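
As a quick sanity check, here is a small Python sketch (my own, not part of the original argument) verifying on two groups of order 6 that inversion is a homomorphism exactly in the abelian case:

```python
from itertools import permutations

def inversion_is_hom(elements, op, inv):
    # inversion is a homomorphism iff inv(a op b) == inv(a) op inv(b) for all a, b
    return all(inv[op(a, b)] == op(inv[a], inv[b]) for a in elements for b in elements)

# S3: permutations of {0, 1, 2} under composition, non-abelian
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
s3_inv = {p: next(q for q in s3 if compose(p, q) == (0, 1, 2)) for p in s3}

# Z/6: integers mod 6 under addition, abelian
z6 = list(range(6))
add = lambda a, b: (a + b) % 6
z6_inv = {a: (-a) % 6 for a in z6}

print(inversion_is_hom(s3, compose, s3_inv))  # False
print(inversion_is_hom(z6, add, z6_inv))      # True
```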

Okay, so in fact one can see the commutativity directly from the above discussion of the compatibility of the multiplications, as
\begin{equation*} a\cdot b = (1\times a)\cdot(b\times 1) = (1\cdot b)\times(a\cdot 1) = b\times a = b\cdot a\text{,} \end{equation*}
so one obtains the corresponding result even just for monoids. But thinking about abelianness and inverses being homomorphisms is what sent me down this little diversion, so it seemed rude to cut it out.

It turns out this is known as the Eckmann–Hilton argument/principle/theorem/show. It goes back a long way (1961) and one can read much more elsewhere; there is even a Catsters video or two.

To give a couple of more useful (though admittedly slightly less amusing) applications: this in fact shows that the higher homotopy groups $\pi_n(X),\,n \ge 2$ are abelian. If you (like me) think you know a different proof, you don’t! (Or maybe you do, who knows what you know.)

Finally this also shows that $\pi_1(G,e)$ is abelian for a topological group $G$! No such luck for étale fundamental groups of algebraic groups though.

P.S. anyone who wishes to have someone to blame for the outpouring of uselessness seen here (for example, my advisor) need look no further than Sachi for inspiring me to write something here again.

All rings are commutative (additively)

Normally a ring is defined to be an abelian group (written additively) with an extra multiplication operation under which everything is a monoid, the two operations being linked by distributivity. Why do we restrict to an abelian group additively though? What happens if we try and make the same definition with an arbitrary group?

Well in fact we would get exactly the same objects! Indeed the rest of the ring axioms are enough to force $a + b = b+a$ always, hence removing that axiom is harmless except that it requires you to prove this to get the theory off the ground.

We can prove this very explicitly by observing that
\begin{equation*} (1+1)(a+b) = 1(a+b) + 1(a+b) = a + b + a + b \end{equation*}
and
\begin{equation*} (1+1)(a+b) = (1+1)a + (1+1)b = a + a + b + b\text{,} \end{equation*}
hence
\begin{equation*} a + b + a + b = a + a + b + b\text{,} \end{equation*}
and cancelling the leading $a$ and trailing $b$ gives $b + a = a + b\text{.}$
While this calculation shows that the base group must be abelian in order to have a ring structure on top, I think a more conceptual explanation for why this is the case would be nice also. The best I’ve got so far in that vein is the following:

The ring structure on a group $G$ defines an injective function $L_{\bullet}\colon G \to \operatorname{End}(G)$ via the left multiplication maps, $g\mapsto (h\mapsto g\times h)$; the distributive law on one side guarantees that each such map is a group endomorphism of $(G,+)\text{.}$ Distributivity on the other side implies that $L_{g+h} = L_g + L_h$ (pointwise sum). The existence of a unit for multiplication implies that the identity map $\operatorname{id}_G\colon g\mapsto g$ is in the image of $G$ inside $\operatorname{End}(G)\text{,}$ hence the doubling map $g \mapsto g+g$ is $L_{1+1}$ and so is itself an endomorphism of $G\text{.}$ The fact that doubling is a group homomorphism then guarantees that the group is abelian. (For a different instance of this statement, consider a group in which doubling sends everything to the identity, i.e. in which every non-identity element has order 2; such a group must also be abelian.)

This still feels a little convoluted to me and the calculation is of course a lot more revealing, but trying to unpack what’s going on in the calculation and why it turns out that way is quite interesting, and who knows maybe it will lend itself to some kind of generalisation?
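
To see this concretely, here is a small brute-force Python sketch (my own, not from Lam) checking that the smallest non-abelian group, $S_3$, admits no unital distributive multiplication at all; associativity never even comes into it:

```python
from itertools import permutations, product

# The underlying "additive" group: S3, with composition playing the role of +.
G = list(permutations(range(3)))
add = lambda p, q: tuple(p[q[i]] for i in range(3))

# All endomorphisms of (S3, +), by brute force over all 6^6 maps.
endos = []
for imgs in product(G, repeat=6):
    f = dict(zip(G, imgs))
    if all(f[add(a, b)] == add(f[a], f[b]) for a in G for b in G):
        endos.append(f)
print(len(endos))  # 10

# A unital distributive multiplication is a map g -> L_g ("left multiplication
# by g") where each L_g is an endomorphism (left distributivity),
# L_{a+b} = L_a + L_b pointwise (right distributivity), and for some u we have
# L_u = id and L_g(u) = g (u is a two-sided multiplicative unit).
found = []
for u in G:
    candidates = {g: [f for f in endos if f[u] == g] for g in G}
    for choice in product(*(candidates[g] for g in G)):
        L = dict(zip(G, choice))
        if all(L[u][x] == x for x in G) and \
           all(L[add(a, b)][x] == add(L[a][x], L[b][x])
               for a in G for b in G for x in G):
            found.append((u, L))
print(found)  # [] -- no ring structure on S3, just as the theorem predicts
```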

This little fact is literally the first exercise in Lam’s A first course in noncommutative rings, but it had certainly escaped my attention until now.

A trig integral

It’s summertime so I’ve been trying out a few Project Euler problems again. In the process of doing one of them I realised something about trig integrals that I had forgotten, or maybe never knew. I thought this was cute and wanted to share it.

Consider the integral
\begin{equation*} \int_{-1}^{a} \sqrt{1-x^2}\, dx\text{.} \end{equation*}
This is one where you have to substitute a trig function to work it out, and I always found these a little bit magic, so let’s look at it in a more elementary way.

Let’s assume to start that $-1 \le a \le 0\text{.}$ The function $\sqrt{1-x^2}$ gives us the height of a radius 1 circle in the $xy$ plane, and so computing the integral above is the same as finding the area enclosed by the $x$-axis, the circle, and the line $x = a\text{.}$ However we know what the area of a wedge of a circle is: the full circle (of radius 1) has area $\pi$ and so a wedge of angle $\theta$ has area $\pi\cdot \theta/2\pi = \theta/2\text{.}$ Now the area we are looking for is the area of a wedge minus the area of the extra triangle. What’s the angle of the wedge? We’re stopping at $x=a\text{,}$ so the wedge, measured from the negative $x$-axis, has angle $\cos^{-1}(-a)\text{.}$ Hence the area of the wedge is $\cos^{-1} (-a) /2\text{.}$

The subtracted triangle has side lengths $\mid a\mid$ (i.e. $-a$) and $\sqrt{1-a^2}\text{,}$ so we get the final formula
\begin{equation*} \int_{-1}^{a} \sqrt{1-x^2}\, dx = \frac{\cos^{-1}(-a)}{2} - \frac{(-a)\sqrt{1-a^2}}{2} = \frac{\cos^{-1}(-a) + a\sqrt{1-a^2}}{2}\text{.} \end{equation*}
Using the fact that $\cos^{-1}(-x) = \pi/2 + \sin^{-1}(x)$ we get a slightly cleaner formula for the indefinite integral
\begin{equation*} \int \sqrt{1-x^2}\, dx = \frac{x\sqrt{1-x^2} + \sin^{-1}(x)}{2} + C\text{,} \end{equation*}
which before now I would have said was hard to remember, but after seeing it like this, probably ok.

Another interesting note is that the same formula is also correct if $a \ge 0\text{:}$ here adding the appropriate triangle to the wedge gives us the integral.
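
As a quick numerical sanity check (a throwaway Python sketch in keeping with the Project Euler theme; the function names are mine):

```python
from math import acos, sqrt

def area_formula(a):
    # wedge of angle acos(-a) plus the signed triangle a*sqrt(1-a^2)/2,
    # which works uniformly for -1 <= a <= 1
    return (acos(-a) + a * sqrt(1 - a * a)) / 2

def midpoint_rule(a, n=200_000):
    # brute-force approximation of the integral of sqrt(1-x^2) from -1 to a
    h = (a + 1) / n
    return h * sum(sqrt(1 - (-1 + (i + 0.5) * h) ** 2) for i in range(n))

for a in (-1, -0.5, 0, 0.5, 1):
    print(a, area_formula(a), midpoint_rule(a))  # the two columns agree
```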

Every finite group is a Galois group

The fact that every finite group is a Galois group is pretty well known (and in fact this post is basically just a transcription of the one in Lang’s Algebra) but I’ve been thinking about it recently and it’s a really cool result so I figured I’d share it. Who knows, maybe I’ll post about the extension to profinite groups next time?

The starting point here is the following theorem of Artin, telling us that we can cut out Galois extensions with any group of field automorphisms we like.

Theorem (Artin)

Let \(K\) be a field and \(G\) a finite group of field automorphisms of \(K\text{,}\) then \(K\) is a Galois extension of the fixed field \(K^G\) with Galois group \(G\text{,}\) moreover \([K:K^G] = \#G\text{.}\)

Proof

Pick any \(\alpha \in K\) and consider a maximal subset \(\{\sigma_1, \ldots, \sigma_n\}\subseteq G\) for which the \(\sigma_i \alpha\) are all distinct. Now any \(\tau \in G\) must permute the \(\sigma_i \alpha\text{,}\) as it is an automorphism and if some \(\tau\sigma_i \alpha\) were not equal to any of the \(\sigma_j\alpha\) then we could extend our set of \(\sigma\)s by adding this \(\tau\sigma_i\text{.}\)

So \(\alpha\) is a root of \begin{equation*} f_\alpha(X) = \prod_{i=1}^n (X- \sigma_i\alpha)\text{,} \end{equation*} and note that \(f_\alpha\) is fixed by every \(\tau \in G\) by the above, so all the coefficients of \(f_\alpha\) are in \(K^G\text{.}\) By construction \(f_\alpha\) is a separable polynomial, as the \(\sigma_i\alpha\) were chosen distinct, and \(f_\alpha\) splits into linear factors in \(K\text{.}\)

The above was for arbitrary \(\alpha \in K\text{,}\) so we have just shown directly that \(K\) is a separable and normal extension of \(K^G\text{,}\) which is the definition of Galois. As every element of \(K\) is a root of a polynomial over \(K^G\) of degree at most \(\#G\text{,}\) we cannot have the extension degree \([K:K^G] \gt \#G\text{.}\) But we also have a group of \(\#G\) automorphisms of \(K\) that fix \(K^G\text{,}\) so \([K : K^G] \ge \#G\) and hence \([K : K^G] = \#G\text{.}\)

So now with this in hand we just have to realise our group as a group of field automorphisms of some field.

Corollary

Every finite group is a Galois group.

Proof

Let \(k\) be an arbitrary field and \(G\) any finite group. Take \(K = k(\overline g:g\in G)\) (i.e. adjoin all elements of \(G\) to \(k\) as indeterminates, denoted by \(\overline g\)). We have a natural action of \(G\) on \(K\) defined by permuting the indeterminates via \(h\cdot \overline g= \overline {hg}\) and extending to \(k\)-algebra automorphisms. This action is faithful, so \(K\) and \(G\) satisfy the hypotheses of Artin's theorem and hence \(K/K^G\) is a Galois extension with Galois group \(G\text{.}\)
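
For a concrete (if tiny) example of the construction, not taken from Lang: let \(k = \mathbf{Q}\) and \(G = \mathbf{Z}/2\mathbf{Z} = \{e, g\}\text{,}\) so that \(K = \mathbf{Q}(\overline e, \overline g)\) with the non-trivial element of \(G\) swapping \(\overline e\) and \(\overline g\text{.}\) The fixed field is the field of symmetric rational functions \begin{equation*} K^G = \mathbf{Q}(\overline e + \overline g,\ \overline e\,\overline g) \end{equation*} and \(K\) is the degree 2 extension of it generated by a root of \begin{equation*} T^2 - (\overline e + \overline g)T + \overline e\,\overline g\text{,} \end{equation*} with Galois group \(\mathbf{Z}/2\mathbf{Z}\) as promised.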

It is interesting to note that we could have started with any field we liked, and built a Galois extension with both fields extensions of the base we picked. They won’t necessarily share a huge amount with it, but the characteristic will have to be the same, so we can do this in whatever our favourite characteristic is.

Ribet's Converse to Herbrand: Part II - Cuspstruction

So around a month back I posted the first post in this 2 (or more, who knows?) part series on Ribet’s converse to Herbrand’s theorem. This is the sequel, Cuspstruction; it is basically just my personal notes from my STAGE talk of the same name. We were following Ribet’s paper and this is all about section 3. The goal is to construct a cusp form with some very specific properties, from which we can then take the corresponding Galois representation and use that to obtain the converse to Herbrand. In this post though we’ll be focussing on constructing the cusp form, hence, Cuspstruction.

Cuspstruction

We will make use of the following building blocks, some specific modular forms of weight 2 and type \(\epsilon\) \begin{align*} G_{2,\epsilon} &= L(-1,\epsilon)/2 + \sum_{n=1}^\infty \sum_{d|n} d \epsilon(d) q^n\\ s_{2,\epsilon} &= \sum_{n=1}^\infty \sum_{d|n} d \epsilon(n/d) q^n\text{.} \end{align*} The latter is not a cusp form (it fails to be cuspidal at the other cusp of \(\Gamma_1(p)\)); we call such forms semi-cusp forms, and denote the space of them by \(S^\infty\) (not standard notation). We will also use \begin{equation*} G_{1,\epsilon} = L(0,\epsilon) + \sum_{n=1}^\infty \sum_{d|n} \epsilon(d) q^n\text{.} \end{equation*} These Eisenstein series are all Hecke eigenforms for the \(T_n\) with \(n\) coprime to \(p\text{.}\)

Fix a prime ideal \(\mathfrak p|p\) of \(\mathbf{Q}(\mu_{p-1})\text{.}\) Thinking of \(\mu_{p-1}\subseteq \mathbf{Q}_p^*\text{,}\) we take \(\omega\colon (\mathbf{Z}/p\mathbf{Z})^* \xrightarrow\sim \mu_{p-1}\) to be the unique character with \(\omega(d)\equiv d \pmod{\mathfrak p}\) for all \(d\in \mathbf{Z}\text{.}\)

We start with a key lemma, which we will use repeatedly.

By our choice of \(\omega\) we get the desired result for the non-constant terms of the \(q\)-expansion.

So it remains to prove that \begin{equation*} L(-1,\omega^{k-2})\equiv -\frac{B_k}{k}\pmod{\mathfrak p} \qquad\text{and}\qquad L(0,\omega^{k-1})\equiv -\frac{B_k}{k}\pmod{\mathfrak p}\text{.} \end{equation*} We make use of the following expressions (see probably Washington) \begin{equation*} L(0,\epsilon) = -\frac 1p \sum_{n=1}^p \epsilon(n) \left(n- \frac p2\right)\text{,} \qquad L(-1,\epsilon) = -\frac{1}{2p} \sum_{n=1}^p \epsilon(n) \left(n^2 - pn + \frac{p^2}{6}\right)\text{.} \end{equation*} As \(\omega(n)\equiv n^p \pmod{\mathfrak p^2}\) we get \begin{equation*} pL(0,\omega^{k-1}) = -\sum_{n=1}^p \omega^{k-1}(n)(n-p/2) \equiv -\sum_{n=1}^p n^{p(k-1) + 1} \pmod{\mathfrak p^2} \end{equation*} and \begin{equation*} pL(-1,\omega^{k-2}) = -\frac 12\sum_{n=1}^p \omega^{k-2}(n)\left(n^2 - pn +\frac{p^2}{6}\right) \equiv -\frac 12\sum_{n=1}^p n^{p(k-2) + 2} \pmod{\mathfrak p^2}\text{.} \end{equation*} But for all even \(t\gt 0\) we have the congruence \begin{equation*} \sum_{n=1}^{p-1} n^t \equiv pB_t \pmod{p^2} \end{equation*} and so \begin{equation*} pL(0,\omega^{k-1}) \equiv -pB_{p(k-1)+1} \pmod{p^2}\text{.} \end{equation*} Cancelling a \(p\) from both sides gives \begin{equation*} L(0,\omega^{k-1}) \equiv -B_{p(k-1)+1} \equiv -\frac{B_{p(k-1)+1}}{p(k-1)+1} \equiv -\frac{B_k}{k} \pmod{p}\text{,} \end{equation*} where the middle step uses \(p(k-1)+1 \equiv 1 \pmod p\) and the last is Kummer's congruence, which applies as \(p(k-1) + 1 \equiv k \pmod{p-1}\text{.}\) Similarly \begin{equation*} L(-1,\omega^{k-2})\equiv -\frac 12 \cdot \frac 2k B_k \equiv -\frac{B_k}{k} \pmod{p}\text{.} \end{equation*}
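
Both congruences are easy to check numerically for small cases; here is a quick Python sketch (my addition, not part of the talk notes) verifying the power-sum congruence and Kummer's congruence for \(p = 7\text{,}\) \(k = 4\):

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    # Bernoulli numbers via B_n = -1/(n+1) * sum_{j<n} C(n+1, j) B_j, B_0 = 1
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B[m]

def reduce_frac(x, q):
    # reduce a Fraction mod q (the denominator must be invertible mod q)
    return x.numerator * pow(x.denominator, -1, q) % q

p, k = 7, 4
t = p * (k - 1) + 1  # t = 22 is even and t = k (mod p - 1)

# the power sum congruence: sum_{n<p} n^t = p*B_t (mod p^2)
print(sum(n**t for n in range(1, p)) % p**2,
      reduce_frac(p * bernoulli(t), p**2))  # 42 42

# Kummer's congruence: B_t/t = B_k/k (mod p)
print(reduce_frac(bernoulli(t) / t, p),
      reduce_frac(bernoulli(k) / k, p))     # 6 6
```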

Note: the constant term is a \(\mathfrak p\)-unit unless \(B_n\) or \(B_m\) is divisible by \(p\text{.}\) We need to remove this condition.

Proof

It suffices to find a form with constant coefficient a \(\mathfrak p\)-unit. If \(p\nmid B_k\) then we can use \(G_{2,\omega^{k-2}}\) by Lemma 3.1.

If \(p|B_k\) we try the possible products \begin{equation*} G_{1,\omega^{m-1}}G_{1,\omega^{n-1}} \end{equation*} with \(m,n\) even, \(2\le m,n\le p-3\) as above, and \(m+n \equiv k \pmod{p-1}\text{.}\) We want to claim that at least one of these must work (i.e. that there is a pair \(m,n\) with \(p\nmid B_m\) and \(p\nmid B_n\)). If this isn't the case then, letting \begin{equation*} t = \#\{2\le n \le p-3 \text{ even}: p|B_n\}\text{,} \end{equation*} we must have \(t \ge (p-1)/4\text{;}\) we will derive a contradiction from this.

Greenberg showed that \begin{equation*} \frac{h_p}{h_{\mathbf{Q}(\mu_p)^+}} = h^*_p = 2^? p \prod_{\substack{k=2\\ \text{even}}}^{p-2} L(0,\omega^{k-1}) \end{equation*} (this is obtained by taking the quotient of the analytic class number formulas for \(\mathbf{Q}(\mu_p)\) and \(\mathbf{Q}(\mu_p)^+\)), but by Lemma 3.1 we know that \(\mathfrak p^t\) will divide the product of \(L\)-values. And so \(p^t|h_p^*\text{,}\) so we will get a contradiction if we show \begin{equation*} h_p^*\lt p^{(p-1)/4}\text{.} \end{equation*}

Work of Carlitz–Olson ('55) on Maillet's determinant shows that \begin{equation*} h_p^* = \pm\frac{D}{p^{(p-3)/2}} \end{equation*} where \(D\) is the determinant of a \((p-1)/2 \times (p-1)/2\) matrix with entries in \([1,p-1]\text{.}\) So recalling Hadamard's inequality \begin{equation*} |\det(v_1\cdots v_n)| \le \prod_{i=1}^n ||v_i||\text{,} \end{equation*} or the simpler corollary \begin{equation*} |A_{ij}| \le B \implies |\det(A)| \le n^{n/2}B^{n}\text{,} \end{equation*} and applying it with \(B = p, n=(p-1)/2\) gives \begin{equation*} |D| \le \left(\frac{p-1}{2}\right)^{(p-1)/4} p^{(p-1)/2} \lt 2^{-(p-1)/4} p^{(3p-3)/4} \end{equation*} so \begin{equation*} h_p^* \lt p^{(p+3)/4} 2^{-(p-1)/4}\text{.} \end{equation*} And we are done, as \(h_p^* = 1\) for \(p\le 19\) and \(p\le 2^{(p-1)/4}\) for \(p\gt 19\text{.}\)

Now we fix \(2\le k\le p-3\) even with \(p|B_k\) and let \(\epsilon = \omega^{k-2}\text{.}\) Note \(k\) must really be at least 4 (in fact at least 10), so \(\epsilon\) is a non-trivial even character. We will work in weight 2 and type \(\epsilon\) from now on.

Let \begin{equation*} f= G_{2,\epsilon} -cg \end{equation*} with \(c = L(-1,\epsilon)/2\text{.}\) This is a semi-cusp form by construction. We get \(f\equiv G_{2,\epsilon}\pmod{\mathfrak p}\) because \begin{equation*} c \equiv -B_k/2k \equiv 0 \pmod{\mathfrak p} \end{equation*} by Lemma 3.1 (again!), as we are now assuming \(p|B_k\text{.}\) Additionally, by the same lemma, \(G_k \equiv G_{2,\epsilon}\pmod{\mathfrak p}\text{.}\)

Let's take stock: we have a semi-cuspidal form which mod \(\mathfrak p\) looks like \(G_k\) and is hence an eigenform mod \(\mathfrak p\text{,}\) but we want an actual eigenform. Bro, do you even lift?

We start with the \(f\) from the proposition above; it is a mod \(\mathfrak p\) eigenform, and so we can use the Deligne–Serre lifting lemma (6.11 in Formes modulaires de poids 1) to obtain a semi-cusp form \(f'\) that is an eigenform for the Hecke operators stated.

To promote the semi-cusp form to a full blown cusp form, we observe that the space \(S_2^\infty(\Gamma_1(p),\epsilon)\) is generated by the cusp forms together with \(s_{2,\epsilon}\text{,}\) which is also an eigenform, so we only have to check that \(f'\) isn't \(s_{2,\epsilon}\) (or a scalar multiple of it). So we compare the eigenvalues mod \(\mathfrak p\text{:}\) having \begin{equation*} \epsilon(\ell) + \ell \equiv 1 + \ell\epsilon(\ell)\pmod{\mathfrak p} \end{equation*} for all primes \(\ell \ne p\) would imply \(\epsilon(\ell) = 1\text{,}\) but \(\epsilon\) is non-trivial!

The final challenge is to ensure that \(f'\) is also an eigenform for \(T_{p^i}\text{.}\)

We use the theory of newforms. There are no oldforms for \(\Gamma_1(p)\) as \begin{equation*} M_2(\operatorname{SL}_2(\mathbf{Z})) = 0\text{,} \end{equation*} and a newform that is an eigenform for all Hecke operators coprime to the level \(p\) is also an eigenform for the remaining Hecke operators.

So in conclusion:

Remark

Word on the internet is that Mazur and Mazur–Wiles' proof of the Main Conjecture of Iwasawa theory is modelled on this.

That's all for now; in the remainder of Ribet's paper he constructs a Galois representation from this and uses it to prove the theorem.

Ribet's Converse to Herbrand: Part I

Tomorrow I’m giving the STAGE talk on Ribet’s converse to Herbrand’s theorem; afterwards I’ll try and post more notes, but for now here’s a little intro to get us thinking about the problem.

Ribet's converse to Herbrand

We are interested in the class numbers of the cyclotomic fields \begin{equation*} h_p = h_{\mathbf{Q}(\mu_p)}\text{.} \end{equation*} Let's list the first few of these:

\(p\) 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67
\(h_p\) 1 1 1 1 1 1 1 1 3 8 9 37 121 211 695 4889 41241 76301 853513
\(p|h_p\) no no no no no no no no no no no yes no no no no yes no yes
Definition Regular primes

We'll call primes for which \(p\nmid h_p\) regular primes, and the others irregular primes.

Why is this important from a number theory perspective? Most famously, Kummer proved that Fermat's Last Theorem holds for all regular prime exponents.

It's hard to tell when a prime is a regular prime though; a priori you'd have to compute the class group.

Definition Bernoulli numbers

The Bernoulli numbers are the sequence of rational numbers given by the exponential generating function \begin{equation*} \frac{x}{e^x - 1} + \frac x2 - 1 = \sum_{k\ge 2} B_k\frac{x^k}{k!}\text{.} \end{equation*}

These have a number of cool properties, such as:

But most important for us is the relation to class numbers, given by Kummer's criterion: \(p\) is regular if and only if \(p\) does not divide the numerator of any of \(B_2, B_4, \ldots, B_{p-3}\text{.}\)
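
Kummer's criterion makes regularity easy to check by machine. Here is a small Python sketch (my addition, not part of the original post) reproducing the last row of the table above:

```python
from fractions import Fraction
from math import comb

def bernoullis(m):
    # B_0..B_m via the recurrence B_n = -1/(n+1) * sum_{j<n} C(n+1, j) B_j
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

for p in (23, 29, 31, 37, 41, 59, 67):
    B = bernoullis(p - 3)
    bad = [k for k in range(2, p - 2, 2) if B[k].numerator % p == 0]
    print(p, "irregular at" if bad else "regular", bad or "")
    # 37, 59 and 67 come out irregular (at k = 32, 44, 58), the rest regular
```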

This is a great theorem relating class numbers to the Bernoulli numbers, but can we do better? What if I know a specific \(k\) so that \(p|B_k\text{,}\) can I say anything more specific about the class group? Yes; there is a strengthening of this theorem due in this form to Herbrand (in one direction) and Ribet (later, in the other direction).

First we need to recall the mod \(p\) cyclotomic character \(\chi\colon \operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q}) \to \mathbf{F}_p^*\) defined by \begin{equation*} \zeta_p^{\chi(\sigma)} = \sigma (\zeta_p)\text{.} \end{equation*}

The Herbrand–Ribet theorem then says that for even \(2\le k \le p-3\) we have \begin{equation*} p \mid B_k \iff \left(\operatorname{Cl}(\mathbf{Q}(\mu_p))\otimes\mathbf{F}_p\right)^{(\chi^{1-k})} \ne 0\text{,} \end{equation*} where the superscript denotes the subspace on which \(\operatorname{Gal}(\mathbf{Q}(\mu_p)/\mathbf{Q})\) acts via \(\chi^{1-k}\text{.}\) The \(\Leftarrow\) direction was proved by Herbrand in 1932, and the \(\Rightarrow \) direction by Ribet in 1974.

Now for completeness here is a table of factorisations of Bernoulli number numerators.

\(k\): numerator of \(B_k\)
\(2\): \(1\)
\(4\): \(-1\)
\(6\): \(1\)
\(8\): \(-1\)
\(10\): \(5\)
\(12\): \(-1 \cdot 691\)
\(14\): \(7\)
\(16\): \(-1 \cdot 3617\)
\(18\): \(43867\)
\(20\): \(-1 \cdot 283 \cdot 617\)
\(22\): \(11 \cdot 131 \cdot 593\)
\(24\): \(-1 \cdot 103 \cdot 2294797\)
\(26\): \(13 \cdot 657931\)
\(28\): \(-1 \cdot 7 \cdot 9349 \cdot 362903\)
\(30\): \(5 \cdot 1721 \cdot 1001259881\)
\(32\): \(-1 \cdot 37 \cdot 683 \cdot 305065927\)
\(34\): \(17 \cdot 151628697551\)
\(36\): \(-1 \cdot 26315271553053477373\)
\(38\): \(19 \cdot 154210205991661\)
\(40\): \(-1 \cdot 137616929 \cdot 1897170067619\)
\(42\): \(1520097643918070802691\)
\(44\): \(-1 \cdot 11 \cdot 59 \cdot 8089 \cdot 2947939 \cdot 1798482437\)
\(46\): \(23 \cdot 383799511 \cdot 67568238839737\)
\(48\): \(-1 \cdot 653 \cdot 56039 \cdot 153289748932447906241\)
\(50\): \(5^{2} \cdot 417202699 \cdot 47464429777438199\)
\(52\): \(-1 \cdot 13 \cdot 577 \cdot 58741 \cdot 401029177 \cdot 4534045619429\)
\(54\): \(39409 \cdot 660183281 \cdot 1120412849144121779\)
\(56\): \(-1 \cdot 7 \cdot 113161 \cdot 163979 \cdot 19088082706840550550313\)
\(58\): \(29 \cdot 67 \cdot 186707 \cdot 6235242049 \cdot 3734958336910412\)

PSA: Quadratic reciprocity

The quadratic reciprocity law works for Jacobi symbols, i.e. with arbitrary odd positive entries, not just for primes.
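
This is exactly what makes these symbols fast to compute: you can flip and reduce as in the Euclidean algorithm, never factoring anything. A minimal Python sketch (mine, of the standard algorithm):

```python
def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via reciprocity for odd entries
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2 using (2/n)
            a //= 2
            if n % 8 in (3, 5):     # (2/n) = -1 exactly when n = 3, 5 mod 8
                result = -result
        a, n = n, a                 # reciprocity: flip the symbol...
        if a % 4 == 3 and n % 4 == 3:
            result = -result        # ...fixing the sign when both are 3 mod 4
        a %= n
    return result if n == 1 else 0  # shared factor means the symbol is 0

print(jacobi(1001, 9907))  # works whether or not the entries are prime
```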

What's that category?

Some vague thoughts about a weird category my housemate and I got to thinking about recently; unfortunately I’m a little too sleepy to write anything more coherent right now, but beeminder demands tribute.

When doing non-abelian group cohomology (and many other things) you end up dealing with the category of pointed sets, that is, sets with a specified point, where morphisms must map specified points to each other. This category is actually fairly nice (or at least nicer than plain $\mathrm{Set}$ is, anyway) inasmuch as it has a zero object (i.e. an object that is both initial and terminal), namely the one element set; this allows us to make sense of kernels etc., which is a nice thing to be able to talk about. So why do we get a zero object here? Well the one element set is already terminal in $\mathrm{Set}$, so we get terminalness for free. As for why it is initial, it’s revealing to describe the category of pointed sets in a different way: we can equivalently describe it as the coslice category $\{* \} \downarrow \mathrm{Set}$, where the objects are morphisms in $\mathrm{Set}$ from the one element set (giving you the specified point) and the new morphisms are commuting triangles in $\mathrm{Set}$. When we do this we of course get the object we cosliced at (or more specifically its identity morphism) as an initial object, and as we already had a unique map from any object to the one element set, its identity map is terminal also.

This got me thinking about $\mathrm{CRing}$, which infamously doesn’t have kernels, so what if we follow the recipe above, or rather its dual, and consider the slice category over the initial object $\mathbf{Z}$? By the dual of the above this should now have a zero object (the identity map on $\mathbf{Z}$) and so we could form kernels; indeed the kernel of a morphism $A \to B$ (where both $A$ and $B$ have an associated map to $\mathbf{Z}$) should be the preimage of $\mathbf{Z}$ I suppose, and this is probably a subring?
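
Unpacking the kernel a little (my own attempt, so take it with a grain of salt): write $\alpha\colon A \to \mathbf{Z}$ and $\beta\colon B \to \mathbf{Z}$ for the structure maps and $\iota\colon \mathbf{Z} \to B$ for the unique map from the initial object. The zero morphism $A \to B$ is then $\iota\circ\alpha$, and the kernel of $f\colon A \to B$ should be the equalizer of $f$ with it, \begin{equation*} \ker(f) = \{a\in A : f(a) = \iota(\alpha(a))\}\text{,} \end{equation*} which contains $1$, is closed under addition and multiplication, and maps to $\mathbf{Z}$ by restricting $\alpha$; so the preimage-of-$\mathbf{Z}$ guess really does give a subring.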

Before we get to that though, we should ask what this category $\mathrm{CRing}\downarrow \mathbf{Z}$ even looks like! It’s not absolutely trivial as far as I can tell, as we get maps from polynomial rings into $\mathbf{Z}$ by evaluating at integer vectors. We can also add in nilpotents and other stuff that just ends up mapping to zero but is still fine. On the other hand we immediately rule out positive characteristic rings, and anything which inverts an integer other than $\pm 1$.

I really have no conclusions about what this category of commutative rings with a map to $\mathbf{Z}$ actually is, but it is quite fun to play with.

Some funny representations

As part of a discussion in our Galois representations course John Bergdall challenged us to come up with a representation that is irreducible but not absolutely semi-simple. I found this a pretty fun thing to think about so I thought I’d write up my progress and what the next steps are.

First things first, a reminder of the definitions: an irreducible representation is one with no nonzero proper invariant subspace, and a semisimple representation is one which can be written as a direct sum of irreducible representations. Adding the word absolutely just means that we require the property to hold after extending scalars to the algebraic closure of the base field. By allowing more general coefficients, things that are irreducible to start with can easily become reducible.

To start our search let’s work with the simplest type of potentially interesting representation I can think of, namely
\begin{equation*} \rho\colon \mathbf{Z} \to \operatorname{GL}_2(K)\text{,} \end{equation*}
which is entirely determined by the image of $1\text{;}$ let’s try and find an appropriate matrix to make the example we want work.

In my head non-semisimple things look something like
\begin{equation*} \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}\text{,} \end{equation*}
an upper triangular matrix with a repeated eigenvalue that is not diagonalisable.

So in order to find an irreducible, non absolutely semisimple representation we want to find a matrix over a field $K$ which has no eigenvalues over $K\text{,}$ but which over the algebraic closure has a repeated eigenvalue. This is not possible over subfields of $\mathbf{C}$ for example, as the trace would have to be twice the repeated eigenvalue. This would then give that the eigenvalue was itself in the ground field, and therefore that the eigenvalues were defined over $K$ in the first place.

This sort of weird multiple-roots-of-irreducible-polynomials stuff happens only for non-perfect fields (by definition), of which the most quoted example is $\mathbf{F}_p((t))$. Here the polynomial $x^p - t$ is irreducible but has repeated roots over the algebraic closure, as it factors as $(x-\sqrt[p]{t})^p$. As we are dealing with 2-dimensional representations here we should look for a matrix over $\mathbf{F}_2((t))$ with characteristic polynomial $x^2 - t\text{;}$ one simple example of this is the companion matrix
\begin{equation*} M = \begin{pmatrix} 0 & t \\ 1 & 0 \end{pmatrix}\text{.} \end{equation*}
So we have a matrix that has no eigenvalues over the base field but a repeated eigenvalue $\sqrt t$ over the algebraic closure. We now need to check that it only has a one dimensional eigenspace, to be sure that we have a non absolutely semisimple representation. This eigenspace is given by the kernel of
\begin{equation*} M - \sqrt t I = \begin{pmatrix} \sqrt t & t \\ 1 & \sqrt t \end{pmatrix} \end{equation*}
(signs don't matter as we are in characteristic 2),
which is indeed one dimensional and so we are done.
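
One can even check this by machine without a computer algebra system; here is a small Python sketch (mine, with polynomials over $\mathbf{F}_2$ encoded as bitmasks) confirming that $(\sqrt t, 1)$ spans the eigenspace and that $M^2 = tI$, which we'll use just below:

```python
def clmul(a, b):
    # multiply two polynomials over F_2, coefficients stored as bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

s = 0b10          # s stands for sqrt(t), so ...
t = clmul(s, s)   # ... t = s^2 = 0b100

M = [[0, t], [1, 0]]
v = [s, 1]        # candidate eigenvector (sqrt(t), 1)

# (M - sI)v = Mv + sv over F_2 (addition is XOR, signs don't matter)
Mv = [clmul(M[i][0], v[0]) ^ clmul(M[i][1], v[1]) for i in range(2)]
print([Mv[i] ^ clmul(s, v[i]) for i in range(2)])  # [0, 0]

# M^2 is the scalar matrix tI
M2 = [[clmul(M[i][0], M[0][j]) ^ clmul(M[i][1], M[1][j]) for j in range(2)]
      for i in range(2)]
print(M2, t)  # [[t, 0], [0, t]]
```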

One property of this example is that if we consider the restriction of the representation to the subgroup $2\mathbf{Z}$ we get something semisimple, as the square of our matrix $M$ is the diagonal (in fact scalar) matrix $tI$. The new improved challenge is therefore to find a representation for which this doesn’t happen either; more explicitly, to find an irreducible, not absolutely semisimple representation which remains that way on all finite index subgroups. I don’t think the example I have right now can be extended to this case and it might be necessary to look at representations of more exotic groups than simply $\mathbf{Z}$ for this.

Thoughts welcome!

Brief intermission - please stand by

Just a quick post to explain my recent absence (and get beeminder off my back).

Recently I haven’t posted that much, the reason being that I just moved to the US to start a PhD in maths at Boston University and things have been kinda busy with the move. I’m really excited about doing maths full time again, and I’m sure there will be a lot more to post about in the very near future. I just moved into an apartment today after a brief stay in a hotel while I attended a week long international orientation; not so much maths there, but I did get the chance to give a 10 minute talk on the 4-colour theorem.

See you soon.