The Basel problem
Notes adapted from this YouTube video.
The following infinite series is called the harmonic series:

$$\sum_{n=1}^{\infty}\frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$$
It's fairly well known that this sum diverges, meaning it never settles on a finite value. The most famous proof groups the terms so that each group sums to at least 1/2; since infinitely many such groups can be formed, the partial sums grow without bound. This sum isn't interesting in our context, so the next infinite sum you might want to look at is this one:

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$
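The grouping argument can be sketched in a few lines of Python (a sketch: the power-of-two group boundaries are the standard choice for this proof, and the cutoff of six groups is arbitrary):

```python
from fractions import Fraction

# Group the harmonic series as (1/2), (1/3 + 1/4), (1/5 + ... + 1/8), ...
# The k-th group has 2^k terms, each at least 1/2^(k+1), so it sums to >= 1/2.
for k in range(6):
    group = sum(Fraction(1, n) for n in range(2**k + 1, 2**(k + 1) + 1))
    print(f"group {k}: {float(group):.4f}")
    assert group >= Fraction(1, 2)
```

Since every group contributes at least 1/2 and there are infinitely many of them, the partial sums exceed any bound.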
This one seems interesting because you can't group the terms in the same way, and calculating it numerically suggests it converges. The question of what it equals was posed by Pietro Mengoli in 1650.
If you run this for a while, it slowly converges on a value higher than 1.6 but less than 1.7, though it's not easy to see exactly what. Skipping ahead, the value seems to be around 1.64, but even after hundreds of terms it's still changing, so direct summation isn't a useful way to get a precise numerical value; especially if you're Euler in the 18th century without access to a computer. Instead, he devised a different method, now called the Euler–Maclaurin formula, to compute the sum to 20 decimal places.
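The slow convergence is easy to see in a few lines of Python (a sketch; the term counts chosen are arbitrary):

```python
# Partial sums of 1 + 1/4 + 1/9 + ... up to various numbers of terms.
for terms in (10, 100, 1000, 100000):
    partial = sum(1 / n**2 for n in range(1, terms + 1))
    print(f"{terms:>6} terms: {partial:.10f}")

# The tail beyond N terms is roughly 1/N, so even after 100000 terms
# the partial sum is still about 1e-5 short of the limit.
```

The error shrinking only like 1/N is exactly why Euler needed a convergence-acceleration technique rather than brute-force summation.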
Using this, he was convinced the result should be $\frac{\pi^2}{6}$, but finding a method to derive that was a little trickier. It was already known that the Taylor series for $\sin x$ is:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
Dividing through by $x$:

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots$$
Euler's idea was to factorise $\sin x$ as a product of terms and to show that the infinite sum we're looking at appears in the coefficient of $x^2$. Specifically, that

$$\frac{\sin x}{x} = \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right)$$

and therefore, since the $x^2$ term of this product should be the same as in the Taylor series, some basic rearranging gives

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$$
But how does that factorisation of $\sin x$ work?
For finite cases we can form polynomials like $x(x-1)$, which equals 0 if $x$ is either 0 or 1. These are called the roots of the polynomial. The same thing is true for something like $x(x-1)(x-2)(x-3)$, which we can multiply out to get $x^4 - 6x^3 + 11x^2 - 6x$, or $x(x-1)(x-2)(x-3)(x-4)$, which gives $x^5 - 10x^4 + 35x^3 - 50x^2 + 24x$. Note that these give coefficients of $-6$ and $24$ for the $x$ term, but in the Taylor series we had $x$ on its own. This is a problem if we want the two expressions to be equal, so we need to divide the polynomial by $n!$, or $n$ factorial. We also have the problem that the $x$ coefficient is negative when there's an odd number of brackets. This is relatively simple to solve: multiply each bracket by $-1$, which has the effect of swapping its order, turning $(x-k)$ into $(k-x)$. Thus

$$x(1-x)\left(1-\frac{x}{2}\right)\left(1-\frac{x}{3}\right)\cdots\left(1-\frac{x}{n}\right)$$

has roots at 0, 1, 2, ..., $n$, and the coefficient of $x$ is just 1, the same as the Taylor series.
Transforming these equations
Follow the next equation to see how we move the factorial inside the brackets for $n = 4$:

$$\frac{x(1-x)(2-x)(3-x)(4-x)}{4!} = x(1-x)\cdot\frac{2-x}{2}\cdot\frac{3-x}{3}\cdot\frac{4-x}{4} = x(1-x)\left(1-\frac{x}{2}\right)\left(1-\frac{x}{3}\right)\left(1-\frac{x}{4}\right)$$
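Both claims about this polynomial can be verified with exact rational arithmetic (a sketch; `poly_mul` and `poly_eval` are hypothetical helpers defined here for illustration):

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_eval(p, x):
    """Evaluate a coefficient-list polynomial at x."""
    return sum(c * x**i for i, c in enumerate(p))

# Build x(1-x)(1-x/2)(1-x/3)(1-x/4), which should have roots at 0..4
# and coefficient 1 on the x term.
poly = [Fraction(0), Fraction(1)]  # the leading factor x
for k in range(1, 5):
    poly = poly_mul(poly, [Fraction(1), Fraction(-1, k)])  # the bracket (1 - x/k)

print([str(c) for c in poly])
assert poly[1] == 1                                    # x coefficient matches the Taylor series
assert all(poly_eval(poly, r) == 0 for r in range(5))  # roots at 0, 1, 2, 3, 4
```

Because every bracket has constant term 1, multiplying them never disturbs the lone $x$ contributed by the leading factor.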
Euler's thought was: what if we use this same method for $\sin x$? This is slightly strange, because it's not necessarily valid for an infinite series in the same way it is for a polynomial. Nevertheless, it's sometimes useful in mathematics to try things and justify them rigorously later.
The values of $x$ for which $\sin x = 0$ are the multiples of π. That's 0, π, 2π, 3π, ..., but also $-\pi$, $-2\pi$, $-3\pi$, .... So the factorisation must be

$$\sin x = x\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)\cdots$$

Note that each bracket must be 1 plus or minus a value, as established above. To clean this up a bit, we pair each bracket with its opposite-signed partner, using the fact that $(1-a)(1+a) = 1-a^2$ (the difference of two squares):

$$\sin x = x\prod_{n=1}^{\infty}\left(1-\frac{x^2}{n^2\pi^2}\right)$$
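The product formula is easy to sanity-check numerically by truncating it (a sketch; the cutoff of 1000 factors is arbitrary, and the truncation error means the agreement is only approximate):

```python
import math

def sin_product(x, factors=1000):
    """Approximate sin(x) by truncating Euler's product x * prod(1 - x^2/(n pi)^2)."""
    result = x
    for n in range(1, factors + 1):
        result *= 1 - x**2 / (n * math.pi)**2
    return result

# Compare the truncated product against the built-in sine.
for x in (0.5, 1.0, 2.0, 3.0):
    print(f"x = {x}: sin = {math.sin(x):.8f}, product = {sin_product(x):.8f}")
```

The two columns agree to three or four decimal places, with the discrepancy shrinking as more factors are included.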
We want to multiply out this product (ignoring the leading $x$, i.e. working with $\frac{\sin x}{x}$) to get the coefficient of each power of $x$. The term with no $x$'s is all the 1s in the brackets multiplied together, which is just 1. Since all the $x$'s are squared, there can't be an $x^1$ term. An $x^2$ term comes from selecting the 1 from every bracket except one, where we instead take $-\frac{x^2}{n^2\pi^2}$; doing this for every $n$, the total coefficient of $x^2$ is

$$-\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}$$

We can extract $\frac{1}{\pi^2}$ from this to get

$$-\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}$$

which gives us the sum we were looking at before!
Finally, since this should be the same as the Taylor series from earlier, the coefficients should match for each term. For $x^2$:

$$-\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2} = -\frac{1}{3!}$$

which is fairly simple to rearrange to get the result we're after:

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$$
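The final identity can be confirmed numerically (a sketch; the million-term cutoff is arbitrary, and the 1/N tail estimate is the standard integral bound, not something derived above):

```python
import math

basel = math.pi**2 / 6
print(basel)  # 1.6449340668482264

# Direct summation approaches this value from below; the tail beyond N terms
# is roughly 1/N, so a million terms gets about six correct digits.
partial = sum(1 / n**2 for n in range(1, 1_000_001))
print(partial)
print(basel - partial)  # roughly 1e-6, as the tail estimate predicts
```

Matching Euler's 20 decimal places this way would need astronomically many terms, which is precisely why he reached for the Euler–Maclaurin formula instead.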