7ECM - Some personal thoughts

So I’m currently on my way back from the 7th European Congress of Mathematics, which took place in Berlin over the course of the last week. While everything is still fresh(ish) in my memory I wanted to record some of the bits that I (personally) found most interesting, so that I’ll be able to look back when I invariably do forget what I did for a whole week. If you like similar things to me then maybe you’ll find something interesting here too (if you don’t like similar things to me then I question your choice to read this blog). Some of these topics I’d like to revisit in more detail (possibly in a future post?!) but for now these short snippets will have to do. So, in approximately no particular order and undoubtedly with some gaps, here we go.

Peter Scholze

This is the most obvious one for me: Scholze talked about perfectoid spaces, which have become a hot topic in number theory since he introduced them in his thesis. Scholze himself has been awarded various prizes and honours for developing this theory (including another at this ECM).

The motivation is to transfer problems involving the $p$-adics (which can be problematic as they are of mixed characteristic) over to the similar looking field of Laurent series $\mathbf{F}_p((t))$. Scholze said he wants to “fight for the freedom of $p$”. The way he does this is via a technique called tilting, which was originally used by Fontaine and Wintenberger to prove a result relating the Galois theories of these fields. Scholze (and independently Kedlaya-Liu) takes this result and distils out the definition of a field where this works, calling it a perfectoid field. He then does some natural looking things (which I’m sure are actually very complicated to do properly) and constructs perfectoid algebras and perfectoid spaces. He shows that various things work out nicely and that one can tilt the spaces too, in a way that matches what you’d hope for with simple spaces like projective space.
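To give a rough flavour of tilting (reconstructed from memory rather than from the talk, so treat the details with care): for a perfectoid field $K$ one forms the tilt

$$K^\flat = \varprojlim_{x \mapsto x^p} K,$$

a perfectoid field of characteristic $p$, and the theorem of Fontaine and Wintenberger says, for example, that the completion of $\mathbf{Q}_p(p^{1/p^\infty})$ tilts to the completion of $\mathbf{F}_p((t))(t^{1/p^\infty})$, with a perfectoid field and its tilt having canonically isomorphic absolute Galois groups.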

In the last part of the talk Scholze got increasingly high level so rather than embarrass myself trying to replicate it I’ll just say that it definitely looks like he has more impressive things forthcoming.

One thing that was especially interesting for me in this talk was mention of work by Jared Weinstein who works at Boston University (where I will also be from September). I was aware that Weinstein was doing work in this area so it was really interesting to have a bit of it put into context.

Karin Vogtmann

Vogtmann first gave a nice introduction to outer automorphisms of free groups and why one might care, and got me interested with the observation that abelianising gives automorphisms of $\mathbf{Z}^d$. She also described various related “outer spaces”, which are spaces that the outer automorphism groups of free groups act on. The construction is really cool and involves deforming certain metrized graphs, using the interpretation of $\operatorname{Out}(F_n)$ as the automorphisms of the rose graph. (One thing she mentioned in passing here that I’d like to learn more about is Gromov’s topology on the space of all metric spaces, so meta.) The gist was that the various slightly different outer spaces are all very closely related to the main one. The pictures were very good and extremely helpful; for me this talk definitely wins the Oscar for best visual effects.

Don Zagier

I found Don’s style of presenting (rapidly moving through handwritten slides while barely breathing) engaging and super impressive; multiple times he switched to a slide and corrected something before I’d even had time to read the first sentence. The content itself was also really fascinating: he started with a fairly random looking recursively defined sequence that Apéry used to prove that $\zeta(3)$ is irrational, and gave multiple interpretations of where such a sequence comes from (though in fact there is still something very special going on). He ended up giving some motivation for the concept of motives (pun not intended) and presented us with a few results concerning various special values that were conjectured by a coauthor of his, based on pure thought on the level of motives, and proved by Zagier and others on the concrete level.

Unfortunately I can’t remember quite as much of what went on as I’d like due to the above-mentioned speed and Don’s recommendation at the start that we “don’t even bother trying to take notes”. I do hope a video of the lecture will appear at some point as out of all the talks I saw I think this is the one I’d most benefit from rewatching in its entirety.

Another fun coincidence for me personally was that he mentioned some work of Irene Bouw, who I met only days beforehand in Ulm.

Tommy Hofmann

Tommy talked about an algorithm he and Claus Fieker developed to compute Hermite normal forms over Dedekind domains. This was another exceptional example of the law of large numbers (or whichever law it is that explains funny coincidences), as I previously worked on implementing algorithms for computing HNFs, and doing that work was in some way the original cause of me coming to Kaiserslautern, which is where Tommy works! The main takeaway from this talk (other than a nice new algorithm) is that quotients of rings of integers by powers of prime ideals are always Euclidean rings.

Other nice talks

  • Fedor Pakovich found a relationship between Davenport-Zannier pairs (generalisations of pairs of polynomials $(f,g)$ such that $f^3-g^2$ has the minimal possible degree, excluding trivial cases) and Dessins d’Enfants. He used this to obtain a classification of these pairs (roughly 10 families and 11 sporadic pairs) by counting the corresponding graphs (which ended up being interpretable as certain weighted trees or something like this). One of the reasons I found this so interesting is that it’s the first time I think I’ve seen Dessins used in anger; I started reading Girondo and González-Diez’s book on compact Riemann surfaces and Dessins about a year ago but got distracted before I got to the juicy bit.

  • Thomas Willwacher talked about the cohomology of graph complexes, which it turns out is really hard to compute but relates to many other areas of maths (including automorphisms of free groups!).

  • Maryna Viazovska gave a nice talk about her (and others’) recent solution of the sphere packing problem in dimensions 8 and 24, which got a lot of popular attention at the time, so it was nice to hear about it from the source.

  • Meinolf Geck spoke about his work on computing with groups, and remarked that the finite groups of Lie type $G(\mathbf{F}_q)$ make up a large portion of the classification of finite simple groups. He wants to construct generic character tables, valid as $q$ varies, for different algebraic groups and possibly different primes - and this actually looks to work! That this is even possible totally blew my mind.

  • Uli Wagner spoke about the topological Tverberg conjecture; Günter Ziegler gave a talk that touched on this in Warwick in 2014, which I really enjoyed, so it was cool to hear about recent progress on it.

  • Jeremy Gray spoke about Poincaré and Weyl; it was a lot more philosophical than I was expecting, contrasting their views on the philosophy of science over time, but it was fascinating stuff.

There were certainly many many other interesting talks that I saw (and indeed many that I didn’t) but I’m tired and my memory is failing now and I have to make a post or Beeminder will take my money.

Other thoughts

There were a few political statements made during the conference, for example by Pavel Exner condemning the situation in Turkey (where academics were recalled and banned from travelling) and by Timothy Gowers regarding Brexit (short version: 48% of us voted to stay). This is something that I think is appropriate for large conferences like the ICM/ECM, which represent the whole mathematical community.

Non-maths

Berlin is a really awesome city so even outside of the conference I did a lot of nice things, and I’m really glad I got a good excuse to visit before I finish my time in Germany. I took quite a lot of photographs (especially the Sunday before the conference when it was really nicely overcast), maybe I’ll upload some to Flickr and put a link here when I’m back in the UK and have some free time.

Weber modular functions

A quick introduction to modular functions, and the Weber modular functions in particular.

Background

A modular function for a subgroup $\Gamma$ of $\operatorname{SL}_2(\mathbf{Z})$ is a meromorphic function $f$ on the upper half plane $\mathbf{H} =\{x+iy\in \mathbf{C} : y > 0\}$ that is invariant under all matrices in $\Gamma$ when they act via

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot \tau = \frac{a\tau + b}{c\tau + d}$$

(we also need $f$ to be “meromorphic at the cusps”, but let’s not worry about this right now).

So for example, letting $\Gamma$ be the whole of $\operatorname{SL}_2(\mathbf{Z})$, we see that modular functions for $\operatorname{SL}_2(\mathbf{Z})$ must satisfy $f(\tau) = f(\tau+ 1)$ and $f(\tau) = f(-1/\tau)$ for all $\tau\in\mathbf{H}$, as

$$T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

are both in the modular group. In fact, as these two matrices generate the whole modular group, being invariant under the above two transformations is sufficient for a function on the upper half plane to be modular for $\operatorname{SL}_2(\mathbf{Z})$.

The most well known modular function is undoubtedly the $j$-invariant:

$$j(\tau) = \frac{1}{q} + 744 + 196884q + 21493760q^2 + 864299970q^3 + \cdots$$

where $q = e^{2\pi i \tau}$. If you haven’t seen the $j$-invariant before this description as a $q$-expansion is not very helpful, but rest assured this function appears very naturally when considering sublattices of $\mathbf{C}$. In any case the properties of this function are the interesting and important part: first and foremost it is as injective as a modular function possibly can be. What I mean by this is that not only does

$$j\left(\frac{a\tau + b}{c\tau + d}\right) = j(\tau)$$

hold for any $\tau$ and any $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}_2(\mathbf{Z})$, but if $j(\tau) = j(\sigma)$ then there is some matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}_2(\mathbf{Z})$ such that

$$\sigma = \frac{a\tau + b}{c\tau + d}.$$
This is useful as it allows you to find out if two points in the upper half plane can be transformed into each other by the above matrix action simply by calculating the value of $j$ at these points. This is especially handy when dealing with elliptic curves, which correspond to points in the upper half plane and are isomorphic only when their representative points are equivalent under the above action. This is the reason that $j$ is called the $j$-invariant.

In addition to this, another nice fact about $j$ is that it generates the set of all modular functions for the full modular group when we take rational functions of it, that is,

$$\mathbf{C}(j) = \{\text{modular functions for } \operatorname{SL}_2(\mathbf{Z})\}.$$
It is no surprise then that $j$ is a sort of poster child for modular functions, and indeed before looking into this topic I don’t think that I knew an interesting example of a modular function other than $j$. Well, now I do!

Weber modular functions

In order to define some other modular functions we will use an intermediary function, the Dedekind eta function $\eta$, which has $q$-expansion:

$$\eta(\tau) = q^{1/24}\prod_{n=1}^{\infty}\left(1 - q^n\right).$$

Now $\eta$ is not a modular function but rather a modular form of weight $\frac{1}{2}$, which means that in place of the invariance condition we had for modular functions we have

$$\eta\left(\frac{a\tau + b}{c\tau + d}\right) = \epsilon(a,b,c,d)\,(c\tau + d)^{1/2}\,\eta(\tau)$$

for all $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \operatorname{SL}_2(\mathbf{Z})$, where $\epsilon(a,b,c,d)$ is a certain $24$th root of unity depending on the matrix.

One consequence of this is that if we divide $\eta$ by another modular form of weight $\frac12$ we get a modular function. But we have only seen one such form so far, $\eta$ itself, and $\eta/\eta$ is not a very interesting function. We can however apply transformations to $\eta$ similar to those above, using integral matrices of positive determinant (elements of $\operatorname{GL}_2^+(\mathbf{Q})$ rather than just $\operatorname{SL}_2(\mathbf{Z})$, and specifically ones that $\eta$ is not invariant under). This will allow us to create new functions of the same type. For instance we let:

$$\mathfrak{f}(\tau) = e^{-\pi i/24}\,\frac{\eta\left(\frac{\tau+1}{2}\right)}{\eta(\tau)}, \qquad \mathfrak{f}_1(\tau) = \frac{\eta\left(\frac{\tau}{2}\right)}{\eta(\tau)}, \qquad \mathfrak{f}_2(\tau) = \sqrt{2}\,\frac{\eta(2\tau)}{\eta(\tau)}.$$
These are Weber modular functions, named after Heinrich Weber who studied them in his Lehrbuch der Algebra. Indeed we could have defined these via their $q$-expansions without mention of $\eta$ at all, but that makes them seem rather arbitrary, which they certainly aren’t!
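For the record, the $q$-expansions in question (these should follow from the product expansion of $\eta$ above, though I have not been careful about roots of unity):

$$\mathfrak{f}(\tau) = q^{-1/48}\prod_{n=1}^{\infty}\left(1 + q^{n-\frac12}\right), \qquad \mathfrak{f}_1(\tau) = q^{-1/48}\prod_{n=1}^{\infty}\left(1 - q^{n-\frac12}\right), \qquad \mathfrak{f}_2(\tau) = \sqrt{2}\,q^{1/24}\prod_{n=1}^{\infty}\left(1 + q^{n}\right).$$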

Notice that we have applied the elements

$$\begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix},$$

all of which have determinant $2$, to $\eta$ here.

Until now the only modular function we had looked at was $j$, which is modular for the whole of $\operatorname{SL}_2(\mathbf{Z})$, so you will no doubt be pleased to hear that these functions are in fact modular for

$$\Gamma(48) = \left\{ \gamma \in \operatorname{SL}_2(\mathbf{Z}) : \gamma \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \pmod{48} \right\},$$

a proper subgroup of the full modular group.

By virtue of the normalising coefficients we used here, these functions satisfy several nice identities, for example

$$\mathfrak{f}(\tau)\,\mathfrak{f}_1(\tau)\,\mathfrak{f}_2(\tau) = \sqrt{2} \qquad\text{and}\qquad \mathfrak{f}(\tau)^8 = \mathfrak{f}_1(\tau)^8 + \mathfrak{f}_2(\tau)^8.$$
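There is also a classical relation (quoting from memory, so treat the exponents with some suspicion) tying these functions back to $j$:

$$j(\tau) = \frac{\left(\mathfrak{f}(\tau)^{24} - 16\right)^3}{\mathfrak{f}(\tau)^{24}} = \frac{\left(\mathfrak{f}_1(\tau)^{24} + 16\right)^3}{\mathfrak{f}_1(\tau)^{24}} = \frac{\left(\mathfrak{f}_2(\tau)^{24} + 16\right)^3}{\mathfrak{f}_2(\tau)^{24}},$$

which hints at why these functions are useful in practice: where a value of $j$ generates something, a value of a Weber function often generates the same thing while being a much smaller algebraic number.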
Whilst these are some very lovely looking expressions, such things aren’t quite as exciting in total isolation. So how can we use these functions? One interesting thing we can do with them is generate Hilbert class fields; for this sort of work it will help to look at Shimura reciprocity, which will be the topic of a future post!

New Blog!

I enjoy reading a lot of different blogs, so I have been thinking it would be cool to (re-)start my own mathsy/programmingy blog and post on a somewhat regular basis. So I have new-years-resolved to start blogging again/more regularly!

The eagle eyed among you may notice that it is in fact April (or at least that that is when this post was published), so I have already pretty much failed… However I shall not fail again! No, this time I have set up a Beeminder goal pledging to blog regularly, meaning that I have to pay real money if I don’t keep this habit up, so you’re guaranteed to read something here at least. Whether it was worth reading or not will be for future generations to say (i.e. not you! Ok, ok, constructive criticism is welcomed with open (but slightly bear-like and intimidating) arms). This was probably a very-bad-idea; we shall see.

I have moved my old blog (which mostly concerned my work on FLINT) from Wordpress to this github.io site powered by Jekyll. So there is at least something here for me to build on, and if you want to follow this blog for updates you can do so with rss/atom/smoke signals.

This blog will be about almost any of the questions and explorations I find interesting, probably mostly maths with a bit of computer science and other random things thrown in. Until next time, watch this space (or watch me go broke)!

GSoC with Flint - Finishing up

(Update: after some more work on matrix kernels I managed to improve upon what is given below. I don't think it needs another post, but see the long running thread on the flint-devel mailing list if you are interested in the current performance of the code.)

First of all, apologies for not keeping everyone up to date recently. I've been cleaning up code and deciding what is in a good enough state to include in my final evaluation for GSoC, so there wasn't much new of interest to report. This is all now done and my GitHub repository now contains only code that appears to be working well.

The main project goals of implementing quick methods to compute Hermite and Smith normal forms have been completed. Below are a table and graph comparing the timings to those obtained with Sage (I spent about half a week being very sad about my work until I realised I needed to clear Sage's cache to get the right timings!). The matrices used were random square matrices, with dimension and entry bits as in the table.

Flint timings (s):

| dim \ bits | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|
| 10 | 0.0001498 | 0.000181 | 0.0003962 | 0.0003608 | 0.0011182 | 0.0011266 |
| 20 | 0.0007708 | 0.0010738 | 0.0014056 | 0.0023238 | 0.0040728 | 0.009321 |
| 30 | 0.0021142 | 0.003127 | 0.0042294 | 0.007065 | 0.015792 | 0.0427272 |
| 40 | 0.0050182 | 0.0066494 | 0.0101572 | 0.0169392 | 0.038086 | 0.0985452 |
| 50 | 0.0105184 | 0.0138692 | 0.0183774 | 0.0319924 | 0.0812636 | 0.2245758 |
| 60 | 0.0181876 | 0.0243814 | 0.0342512 | 0.0625658 | 0.1578844 | 0.4360994 |
| 70 | 0.028565 | 0.0373348 | 0.0546572 | 0.1110402 | 0.290543 | 0.8147328 |
| 80 | 0.0417594 | 0.0595228 | 0.0882594 | 0.1842448 | 0.4881932 | 1.3889464 |
| 90 | 0.0694218 | 0.08668 | 0.1405782 | 0.2854802 | 0.7817248 | 2.2501918 |
| 100 | 0.0880376 | 0.1192832 | 0.2052142 | 0.4448414 | 1.245277 | 3.5487596 |

Flint to Sage timing ratios (< 1 is best for us):

| dim \ bits | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|
| 10 | 0.6965 | 1.0258 | 1.9950 | 1.5602 | 3.8941 | 2.8422 |
| 20 | 0.7261 | 0.8606 | 0.9396 | 1.1124 | 1.0772 | 0.9704 |
| 30 | 0.6531 | 0.7794 | 0.8381 | 0.8015 | 0.7669 | 0.7449 |
| 40 | 0.6492 | 0.7048 | 0.7380 | 0.5891 | 0.4896 | 0.4245 |
| 50 | 0.6595 | 0.6637 | 0.5511 | 0.3997 | 0.3666 | 0.3543 |
| 60 | 0.6354 | 0.6325 | 0.4950 | 0.3586 | 0.3128 | 0.2958 |
| 70 | 0.6016 | 0.5616 | 0.3836 | 0.3126 | 0.2764 | 0.2768 |
| 80 | 0.1957 | 0.2762 | 0.3801 | 0.6697 | 1.2174 | 1.6715 |
| 90 | 0.2509 | 0.3145 | 0.4730 | 0.8187 | 1.5402 | 2.1104 |
| 100 | 0.2527 | 0.3257 | 0.5340 | 0.9906 | 1.8450 | 2.5461 |

For simplicity's sake I simply compared fmpz_mat_hnf_pernet_stein to Sage's hermite_form, so it's worth noting that Flint is faster still for small matrices if another method is used (which the fmpz_mat_hnf function should choose for you in practice). We can see here that although Flint does well for many matrices in this range it gets worse as the matrix dimension and bit size get larger; indeed this trend continues, and my functions are far worse for very big matrices (Denis Kryzkov's benchmark here gives a good indication of the scale of the problem).

The run time of the asymptotically most efficient HNF method I have implemented (Pernet-Stein) depends heavily on the computation of nullspaces, so this is definitely an area that can be improved. Both approaches I've tried to speed up nullspace computation (multimodular and p-adic lifting) haven't worked out being any better (asymptotically) than the pre-existing code based on the row echelon form. The remaining barrier here seems to be modular rref, which I've looked at improving over the past week; this is certainly possible and I plan to carry on working on it (I have a complete but buggy implementation of a method described in Storjohann's thesis and a working implementation of classical Gaussian elimination which is fast for small matrices at the moment). Finishing this will, I hope, bring the timings for Pernet-Stein down to something more like the ones in Sage. Sage uses IML for these computations, but even without using BLAS I think we could get a fair amount closer to IML's performance for large matrices.
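For anyone wanting to try the new functions, here is roughly how they can be called (a minimal sketch, assuming a FLINT build with this GSoC code merged in; the exact header paths may differ on your system):

```c
#include "flint/flint.h"
#include "flint/fmpz_mat.h"

int main(void)
{
    fmpz_mat_t A, H;
    flint_rand_t state;

    flint_randinit(state);

    /* a random 50 x 50 matrix with 32-bit entries, as in the tables above */
    fmpz_mat_init(A, 50, 50);
    fmpz_mat_init(H, 50, 50);
    fmpz_mat_randtest(A, state, 32);

    /* fmpz_mat_hnf chooses between the implemented algorithms
       (Kannan-Bachem, modular, Pernet-Stein, ...) by itself */
    fmpz_mat_hnf(H, A);
    fmpz_mat_print_pretty(H);

    fmpz_mat_clear(H);
    fmpz_mat_clear(A);
    flint_randclear(state);
    return 0;
}
```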

As for Smith forms, the randomised algorithm I worked on remains problematic, so I have left that out for now and stuck with the two slightly older approaches: a general one due to Kannan and Bachem, and a modular one due to Iliopoulos for full rank square matrices. Below are some timings, again with comparisons to Sage. The class of matrices used is the same as above, which means that it is highly likely that Iliopoulos' algorithm would be appropriate (though it may not have been the one chosen by fmpz_mat_snf). Sage uses Pari for this computation, but it was easier for me to change my existing timing code than to set up a direct comparison to Pari.

Flint timings (s):

| dim \ bits | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|
| 10 | 0.0001008 | 0.0001776 | 0.0002186 | 0.0005208 | 0.0016076 | 0.0042602 |
| 20 | 0.001181 | 0.0017224 | 0.0025602 | 0.0050306 | 0.0131318 | 0.038855 |
| 30 | 0.0039768 | 0.006887 | 0.013429 | 0.031941 | 0.0904368 | 0.2860226 |
| 40 | 0.0138918 | 0.0201976 | 0.0408486 | 0.1258706 | 0.386331 | 1.1849852 |
| 50 | 0.0312358 | 0.0544678 | 0.1120322 | 0.370241 | 1.155253 | 3.6941254 |
| 60 | 0.061067 | 0.116257 | 0.2779696 | 0.8377208 | 2.5126946 | 7.9345656 |

Flint to Sage timing ratios (< 1 is best for us):

| dim \ bits | 2 | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|
| 10 | 0.01630 | 0.02870 | 0.04939 | 0.05714 | 0.10546 | 0.22552 | 0.36044 |
| 20 | 0.05063 | 0.06777 | 0.08559 | 0.08868 | 0.10665 | 0.14573 | 0.19519 |
| 30 | 0.08086 | 0.07276 | 0.09437 | 0.10418 | 0.13330 | 0.17716 | 0.23537 |
| 40 | 0.08905 | 0.10716 | 0.09976 | 0.10617 | 0.16009 | 0.21715 | 0.27530 |
| 50 | 0.10550 | 0.11037 | 0.11270 | 0.11762 | 0.18405 | 0.24098 | 0.30266 |
| 60 | 0.12292 | 0.10537 | 0.11135 | 0.12858 | 0.17996 | 0.23113 | 0.29524 |

(I will update this table on my blog when more timings finish; I wanted to get this post out and they are taking a while.)

These timings look good for Flint but I'm not completely sure yet what the large scale behaviour is.

I still have a number of more experimental bits of code around that I will continue to work on getting into a more stable and usable state, along with some other little things that I never managed to get to during the official GSoC period but hope to at some point.

Finally I want to say a huge thanks to everyone who commented on what I've been doing, and especially to Fredrik for his excellent advice over the course of the project. All the comments were very much appreciated.

GSoC with Flint - Week 11

This week I've worked on a variety of different things, unfortunately getting hung up on most of them!

First, I found some bugs in my Saunders-Wan Smith normal form code; most of them ended up being fixable, but one still eludes me. It seems the bug is fairly rare (it occurs for roughly 1 in 10000 test cases) and it is certainly related to the random nature of the algorithm, but my current thinking is that the observed behaviour should not be happening even when we get unlucky with the randomness. I've left this issue alone for now after spending a couple of days making no progress on it.

I then moved on to writing some general HNF/SNF methods that pick between the available implementations based on matrix size, norm and anything else that I could work out as being relevant. While doing this I found that the Kannan-Bachem Hermite form method worked better than the modular method some of the time, and so I decided to try to combine them into a modular method that works by taking minors into HNF rather than rows. I had some problems making this work correctly when the input was not of full rank, and all the fixes I tried ended up having some corner case that made them fail. It just occurred to me, however, that when the modular method is used as part of the Pernet-Stein algorithm the matrix given is always square, so this was perhaps not a completely wasted effort.

I then moved back to working on Pernet-Stein; specifically, I added support for matrices without full row rank, and fully general matrices should be done very soon.

This week I'm going to be doing some profiling and comparisons between my implementations and existing ones, trying to work out why certain results are as they are and how the HNF/SNF methods I now have can be made faster in the future. It should be helpful to have a note of the barriers to faster HNF/SNF computation in Flint and how they could be overcome.
Of course I'll also be tidying up and documenting several bits as I go along to fill in any gaps in functionality I have left along the way.

GSoC with Flint - Week 10

This week I worked more on the Smith normal form algorithm I talked about last week. I implemented Iliopoulos' algorithm for computing the Smith form modulo an arbitrary integer; this procedure is used in a couple of places as part of Saunders and Wan's "engineered" algorithm. Firstly, we use a prime power modulus to find the Smith normal form locally for small primes (i.e. those less than 100); the modular approach is also used for the rough part (concerning all the other primes) when the largest non-zero invariant factor is small compared to the size of the matrix. This algorithm is now working correctly, though the question of how to test it properly given its Monte Carlo nature is one that will require some thought. Currently, whenever I have encountered a matrix for which the output of Saunders and Wan's algorithm doesn't match the deterministic algorithms' outputs, it has turned out to be a code bug rather than a side effect of the algorithm's randomness.

I suppose allowing for a reasonable number of failures beyond the expected number in test code would be one approach, but of course there will still be occasions when the number of failures exceeds this, and allowing a large number of extra failures could let real bugs creep in.

For the next couple of days I'm going to work a little more on Smith forms, hopefully implementing Storjohann's banded method for finding Smith forms of upper triangular matrices; this could be coupled with an HNF algorithm to give another (deterministic!) algorithm for the Smith form of a general matrix. I also need to clean up the Saunders-Wan code a bit as there are still a number of inefficiencies. I have not included a valence method in this algorithm, as that would require implementing a few other things (such as minimal polynomials), but the option is certainly there and it would easily slot into the current code.

GSoC with Flint - Week 9

So this week I've been working on implementing methods to find the Smith normal form of an integer matrix, rather than the Hermite normal form which I've worked on up until now. I'm mostly working from the excellent paper of Saunders and Wan, which describes a highly practical "engineered" algorithm to compute the Smith form. By engineered they mean that the algorithm switches between different methods based on practical observations about which is likely to be the most efficient for different types of matrix, while being structured in such a way that the asymptotic complexity remains as small as possible.

The algorithm is actually randomised, of Monte Carlo type, meaning there is a controllable possibility of error in the results. In the algorithm of Eberly, Giesbrecht and Villard (on which Saunders and Wan's algorithm is based) it becomes harder to be sure that the exponents of the primes occurring in the Smith form are correct as the primes get smaller. So the computation is split into two parts: a smooth part concerning only primes less than a given limit, and a rough part concerning all the others. Currently I have an Eberly-Giesbrecht-Villard based routine working and finding rough parts fairly well, so I am now working on a modular SNF algorithm to fill in the gap.

Saunders and Wan don't stop with just this, however; they also incorporate ideas described by Dumas, Saunders and Villard which use a so-called valence based method, which I would also like to add at some point. It seems this requires efficient computation of the minimal polynomial, however, so I'm not sure how far I will get with it.

GSoC with Flint - Week 8

This week I've been working on improving the computation of the nullspace of an integer matrix by implementing a p-adic lifting based method. The algorithm is also described in Stein's book on computing with modular forms and is closely related to Dixon's p-adic method for linear system solving. This is pretty much finished now (modulo some weird memory leaks and determining the best values for parameters such as the p-adic precision and the range of primes used) and does appear to be asymptotically faster than the multimodular algorithm I worked on before, though experiments suggest that currently the matrix needs to be fairly large before the p-adic algorithm becomes better.
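Roughly, the lifting step at the heart of Dixon's method (for the closely related problem of solving $Ax = b$; written out here from memory, so the indexing may be slightly off) goes as follows: precompute $C \equiv A^{-1} \pmod{p}$, set $r_0 = b$, and iterate

$$x_i \equiv C r_i \pmod{p}, \qquad r_{i+1} = \frac{r_i - A x_i}{p},$$

so that $x_0 + x_1 p + \cdots + x_{k-1} p^{k-1}$ solves the system modulo $p^k$, and the exact rational solution can be recovered by rational reconstruction once $p^k$ is large enough. The appeal is that all the linear algebra happens modulo a single word-sized prime, with full-precision arithmetic only appearing in the matrix-vector products.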

I was originally thinking of implementing Pauderis and Storjohann's algorithm this week if I had time, but I have spent much of the past week looking at efficient nullspace computation (and being plagued a little by unexpected power cuts!) so haven't got much further than a skeleton implementation. Maybe I can flesh this out at some point, but for the next week I'm going to move on to algorithms for the Smith normal form, another important normal form for matrices over the integers, where both column and row operations are allowed.

Hopefully, building on what I've done so far, I should be able to get some fast Smith normal form algorithms implemented shortly. Indeed, one simple approach is to repeatedly transpose and compute Hermite normal forms until the resulting matrix is diagonal (sketched below); this won't necessarily be the Smith form, but the Smith form can then be computed from it with relative ease. Something more sophisticated than this will undoubtedly be best, but many SNF algorithms involve computing the HNF at some point and working from there, so the current HNF algorithms provide a good basis for those computations.
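As a sketch of that simple approach (illustrative only: is_diagonal is a little helper defined here, not an existing Flint function, and enforcing the divisibility chain on the diagonal afterwards is left out):

```c
#include "flint/fmpz.h"
#include "flint/fmpz_mat.h"

/* small helper: is every off-diagonal entry zero? */
static int is_diagonal(const fmpz_mat_t A)
{
    slong i, j;
    for (i = 0; i < fmpz_mat_nrows(A); i++)
        for (j = 0; j < fmpz_mat_ncols(A); j++)
            if (i != j && !fmpz_is_zero(fmpz_mat_entry(A, i, j)))
                return 0;
    return 1;
}

/* Sketch for square A: alternate HNF and transpose until the matrix is
   diagonal. The result is not necessarily the Smith form yet, as the
   divisibility chain d_1 | d_2 | ... still has to be enforced. */
void diagonal_via_hnf(fmpz_mat_t D, const fmpz_mat_t A)
{
    fmpz_mat_t T;
    fmpz_mat_init(T, fmpz_mat_nrows(A), fmpz_mat_ncols(A));

    fmpz_mat_hnf(D, A);
    while (!is_diagonal(D))
    {
        fmpz_mat_transpose(T, D);
        fmpz_mat_hnf(D, T);
    }

    fmpz_mat_clear(T);
}
```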

GSoC with Flint - Week 7

This week I've been working on improving the algorithm of Pernet-Stein as much as possible. After introducing as many of the ideas given in the original paper as I could, I found the main bottleneck to actually be the computation of the nullspace of a certain submatrix of the given matrix; this is needed in order to efficiently solve a linear system which (likely) has a nasty final row. If we know a basis for the nullspace of the first n-1 rows of the system, we can replace the final row with a random nice row (one having small entries) and then find a solution to the original system by adding on a suitable multiple of the nullspace basis vector (the nullspace should be 1-dimensional for random input).
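To spell the trick out (in my notation, so only a sketch): say we want to solve $Ax = b$ where the final row $a_n$ of $A$ is nasty. Let $v$ be the nullspace basis vector of the first $n-1$ rows and let $y$ be a solution of the system with $a_n$ replaced by the random nice row. Then $y$ already satisfies the first $n-1$ equations, and

$$x = y + \lambda v, \qquad \lambda = \frac{b_n - a_n \cdot y}{a_n \cdot v},$$

satisfies the final one too (note that $a_n \cdot v \neq 0$ whenever $A$ is nonsingular, as otherwise $v$ would be in the nullspace of the whole of $A$).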
Flint uses the reduced row echelon form of a matrix to compute nullspaces (the nullspace of a matrix in this form can be read off easily, and the transformation does not alter it), so a natural place to improve nullspace computation is to improve the row echelon form computation. We can use a multimodular approach for this problem (this is described in Stein's book on computing with modular forms) and I've achieved some nice speed-ups with this method in the past couple of days; for example, the multimodular method is around 200x faster for 125x125 matrices with 32-bit entries. While this has made Hermite form computations a lot faster (nullspace computation is over 90% of the work for large matrices) I still want to see if this can be improved upon further; after all, in this case we don't need the whole row echelon form, just a vector in the kernel and the knowledge that the nullspace is 1-dimensional. So I plan to work on this further in the next couple of days, and depending on how I feel about this approach I will either spend the rest of this week making improvements to Pernet-Stein or possibly work on implementing the algorithm of Pauderis and Storjohann.

GSoC with Flint - Week 6

This week I've been working on implementing Pernet and Stein's algorithm to compute the HNF. The basic idea of this method (which is in turn due to Micciancio and Warinschi) is to use the given matrix to construct an auxiliary matrix with small determinant (so that its HNF can be computed without coefficient explosion) from which the original HNF can be reconstructed. The algorithm splits into a few separate stages: the computation of two determinants, the modular HNF of the smaller matrix, the addition of a column to the HNF, and then the addition of two rows. Each of these steps can be optimised quite a bit using some clever tricks, which I'm currently working on doing. The adding of rows and columns to the HNF now seems to work well, and exploiting the similarities between the submatrices whose determinants are needed, in order to compute both of them faster, should be done tomorrow. At that point it should be interesting to take stock and see how well the Pernet-Stein approach works compared to the previous methods I've worked on. I'll likely spend a couple more days after that improving this approach even more.

This is definitely the most complex algorithm I've worked on so far, but when completed it should really blow the others away for large enough random matrices.