Below is a calculator for various useful number-theoretic computational tasks. For $m(z)$, you'll need to select from a dropdown, since $m(z)$ is supposed to be an irreducible polynomial (mod 2). Algorithmically checking whether a polynomial is irreducible is far from trivial, so a list of low-degree irreducible polynomials has been provided for you. Polynomials you enter for $a(z)$ and $b(z)$ will automatically get reduced mod $m(z)$.

If you pick a bad value for $n$, the page may time out while trying to factor $n$ or compute $\phi(n)$. It's hard to give an estimate of how large $n$ can be. Pollard's Rho Algorithm is used, which is better at finding small factors. The difficulty of factoring $n$ doesn't scale linearly with its size; it also depends on the size of each of $n$'s factors.


$a \equiv b \pmod{n}$ simply means that $a - b$ is divisible by $n$, or $a$ is $b$ more than a multiple of $n$ (that is, $a = b + kn$ for some integer $k$). For examples and rules of modular arithmetic, see here.

The greatest common divisor (gcd) of two numbers can be computed in logarithmic time. The idea (assume that $a \geq b$, otherwise swap them) is that we can always write $a = qb + r$, where $0 \leq r < b$. Then any divisor of $a$ and $b$ must also divide $r$ (since $r = a - qb$). Conversely, any divisor of $b$ and $r$ must divide $a$ (since $a = qb + r$). Therefore, $\gcd(a, b) = \gcd(b, r) = \gcd(b, a \bmod b)$.

Let's consider an example: we will compute gcd(87, 14).

$$87 = 6 \times 14 + 3$$
$$14 = 4 \times 3 + 2$$
$$3 = 1 \times 2 + 1$$
$$2 = 2 \times 1 + 0$$

Once you see a 0 remainder in one of the rows, you go to the remainder one line up to find the gcd. Thus, gcd(87, 14) = 1. This procedure is called the Euclidean Algorithm. The code is very simple:

function gcd(int a, int b):
    if b = 0: return a
    return gcd(b, a mod b)
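The pseudocode above translates almost line for line into runnable Python (a sketch for illustration; in practice Python's `math.gcd` does the same job):

```python
def gcd(a: int, b: int) -> int:
    """Euclidean Algorithm: gcd(a, b) = gcd(b, a mod b)."""
    if b == 0:
        return a
    return gcd(b, a % b)

print(gcd(87, 14))  # the worked example above: prints 1
```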

What does it mean to divide by $a$ mod $n$? It means there is a number $a^{-1}$ such that $a \cdot a^{-1} \equiv 1 \pmod{n}$. $a$ has an inverse mod $n$ if and only if gcd(a, n) = 1. For example, 2 has no inverse mod 8. To see this, just multiply the numbers 0 through 7 by 2 mod 8. You get $0, 2, 4, 6, 0, 2, 4, 6$. The number 1 does not occur in this list.

If gcd(a, n) = 1, how do we compute $a^{-1} \bmod n$? Easy, just go backwards through the Euclidean algorithm. Let's rework the example above that showed gcd(87, 14) = 1 to compute $14^{-1} \bmod 87$ and $87^{-1} \bmod 14$ (of course, since $87 \equiv 3 \pmod{14}$, this is the same as $3^{-1} \bmod 14$). Rewrite each line of the gcd computation to isolate its remainder, then substitute from the bottom up:

$$1 = 3 - 1 \times 2 = 3 - 1 \times (14 - 4 \times 3) = 5 \times 3 - 1 \times 14 = 5 \times (87 - 6 \times 14) - 1 \times 14 = 5 \times 87 - 31 \times 14.$$

This tells us how to write the gcd as a linear combination of the two inputs, which can always be done. That is, $$5 \times 87 - 31 \times 14 = 1.$$ Reducing mod 87 gives $-31 \times 14 \equiv 1 \pmod{87}$. Reduce $-31$ to lie in the range $0, \ldots, 86$ and see that $-31 \equiv 56 \pmod{87}$. Thus $14^{-1} \equiv 56 \pmod{87}$. Similarly, we can reduce the equation mod 14 to see $5 \times 87 \equiv 1 \pmod{14}$. Thus $87^{-1} \equiv 3^{-1} \equiv 5 \pmod{14}$.

Rewriting the equations from the gcd steps is known as the Extended Euclidean Algorithm. This can be described very simply using recursion:

function extendedEuclid(int a, int b):
    if b = 0: return (a, 1, 0)
    (d, s, t) = extendedEuclid(b, a mod b)
    return (d, t, s - floor(a/b) * t)
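The same recursion in runnable Python, together with a modular-inverse helper built on top of it (the names `extended_euclid` and `mod_inverse` are my own):

```python
def extended_euclid(a: int, b: int):
    """Return (d, s, t) with d = gcd(a, b) and s*a + t*b = d."""
    if b == 0:
        return (a, 1, 0)
    d, s, t = extended_euclid(b, a % b)
    return (d, t, s - (a // b) * t)

def mod_inverse(a: int, n: int) -> int:
    """Inverse of a mod n; requires gcd(a, n) = 1."""
    d, s, _ = extended_euclid(a, n)
    if d != 1:
        raise ValueError("a is not invertible mod n")
    return s % n

# The worked example: 5*87 - 31*14 = 1, so 14^{-1} = -31 = 56 (mod 87)
print(extended_euclid(87, 14))  # (1, 5, -31)
print(mod_inverse(14, 87))      # 56
```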

How do we compute $3^{187} \bmod 61$? One way is to compute $3^{187}$, then reduce mod 61. But this is terrible. This number won't fit in a 64-bit long. Even if you use arbitrary-precision integers, you'll be wasting a lot of space: you'll need close to 300 bits of space to store $3^{187}$ just to reduce it to a number that only requires 6 bits of space! Another way is to compute $3^2 \bmod 61$, then $3^3 \bmod 61$, and so on, reducing after each multiplication by 3. But this is really slow.

A better way is to use modular exponentiation (sometimes called repeated squaring). This works by writing 187 in binary: $$187 = 128 + 32 + 16 + 8 + 2 + 1.$$ Thus $$3^{187} = 3^{128}3^{32}3^{16}3^{8}3^{2}3^{1}.$$ So we just need to compute each factor in this expression. We do this by repeated squaring, reducing mod 61 as we go: $$3^2 \equiv 9, \quad 3^4 \equiv 81 \equiv 20, \quad 3^8 \equiv 20^2 \equiv 34, \quad 3^{16} \equiv 34^2 \equiv 58, \quad 3^{32} \equiv 58^2 \equiv 9, \quad 3^{64} \equiv 81 \equiv 20, \quad 3^{128} \equiv 20^2 \equiv 34 \pmod{61}.$$

Then $$3^{187} \equiv 3^{128} \cdot 3^{32} \cdot 3^{16} \cdot 3^{8} \cdot 3^{2} \cdot 3^{1} \equiv 34 \cdot 9 \cdot 58 \cdot 34 \cdot 9 \cdot 3 \equiv 52 \pmod{61}.$$ In this final product, you should also reduce mod 61 after each multiplication so that you're always multiplying smaller numbers.
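On a computer, the whole procedure is a short loop that scans the bits of the exponent, squaring as it goes (a sketch; Python's built-in `pow(base, exp, mod)` does exactly this):

```python
def mod_pow(base: int, exp: int, mod: int) -> int:
    """Repeated squaring: process exp bit by bit, squaring base each step."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:               # this bit of the exponent is set
            result = result * base % mod
        base = base * base % mod  # square for the next bit
        exp >>= 1
    return result

print(mod_pow(3, 187, 61))  # 52
```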

Notice above that we used the trick $58 \equiv -3 \pmod{61}$, so $58^2 \equiv (-3)^2 = 9 \pmod{61}$. This works because $58 = 61 - 3$. Replacing a residue by a small negative representative is a useful trick when doing these calculations with pen and paper (which you should definitely do some of to get the hang of things).

Notice in the example above under Modular Exponentiation that $3^{64} \equiv 3^{4} \equiv 20 \pmod{61}$. In general, multiplication tables will repeat like this. In this particular case, this implies (since 3 is invertible mod 61, because gcd(3, 61) = 1) that $3^{60} \equiv 1 \pmod{61}$. This situation occurs frequently: 61 is a prime, and $60 = 61 - 1$. In fact, this always happens: it is known as Fermat's Little Theorem.

Specifically, Fermat's Little Theorem states that if $p$ is prime and $p \nmid a$, then $a^{p-1} \equiv 1 \pmod{p}$. An obvious consequence of this is that $a^p \equiv a \pmod{p}$, by multiplying both sides by $a$ (and this form also holds when $p \mid a$).

The map $x \mapsto ax \pmod{p}$ is a bijection on $\{1, 2, \ldots, p-1\}$ (which means one-to-one for our purposes, since the domain is a finite set). This holds since $ax \equiv ay \pmod{p}$ implies $a(x - y) \equiv 0 \pmod{p}$, which implies $x \equiv y \pmod{p}$ since $p \nmid a$. Thus, multiplying the set $\{1, 2, \ldots, p-1\}$ by $a$ simply permutes the set. (In other words, if we multiply each element of $\{1, 2, \ldots, p-1\}$ by $a$ mod $p$ and throw the results into a new set, we get the exact same set: a permutation, but since sets don't care about order, it is the same set.)

Doing this multiplication gives us an extra $p - 1$ factors of $a$. In other words, $$(p-1)! \equiv (a \cdot 1)(a \cdot 2) \cdots (a \cdot (p-1)) \equiv a^{p-1}(p-1)! \pmod{p}.$$ Since $\gcd((p-1)!, p) = 1$, we can divide by $(p-1)!$, which means that $a^{p-1} \equiv 1 \pmod{p}$.

What this lets us do is simplify our calculations. For example, when computing $3^{187} \bmod 61$, since 61 is prime, we can write $3^{187} = 3^{3 \times 60 + 7} = (3^{60})^3 \cdot 3^7 \equiv 3^7 \pmod{61}$. We can then use repeated squaring to compute $3^7 \equiv 52 \pmod{61}$.

Another cute trick: this tells us $a^{p-2} \equiv a^{-1} \pmod{p}$, so we can use modular exponentiation to find an inverse (when the modulus is prime). Sometimes this is faster than the Extended Euclidean Algorithm; sometimes it is slower.
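In Python this Fermat-based inverse is a one-liner (a sketch; the name `fermat_inverse` is my own, and it assumes $p$ is prime and $p \nmid a$):

```python
def fermat_inverse(a: int, p: int) -> int:
    """Inverse of a mod prime p via Fermat: a^(p-2) = a^(-1) (mod p)."""
    return pow(a, p - 2, p)

inv = fermat_inverse(3, 61)
print(inv, (3 * inv) % 61)  # 41 1
```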

A similar result gives us Euler's Theorem, which says that if $\gcd(a, n) = 1$, then $a^{\phi(n)} \equiv 1 \pmod{n}$. Here, $$\phi(n) = \left|\{k : 1 \leq k \leq n, \text{ gcd}(k, n) = 1\}\right|.$$ This is the number of integers between $1$ and $n$ that are relatively prime to $n$. Some useful facts for computing $\phi$ are the following:

- $\phi(p) = p - 1$ for $p$ prime,
- $\phi(p^k) = p^k - p^{k-1}$ for $p$ prime, $k \geq 1$,
- $\phi(mn) = \phi(m)\phi(n)$ if $\gcd(m, n) = 1$.
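Using those three facts, $\phi(n)$ can be computed from the factorization of $n$. A trial-division sketch, fine for small $n$ (as the next paragraph notes, for large $n$ this is as hard as factoring):

```python
def phi(n: int) -> int:
    """Euler's totient via trial-division factorization of n."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:      # strip out the full power of p
                n //= p
            result -= result // p  # multiply result by (1 - 1/p)
        p += 1
    if n > 1:                      # leftover prime factor
        result -= result // n
    return result

print(phi(10), phi(61), phi(8))  # 4 60 4
```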

In general, computing $\phi(n)$ is hard. Certainly, factoring $n$ is one way to do so, but factoring is also difficult. For example, in RSA (where $n = pq$ for distinct primes $p$ and $q$), $\phi(n) = (p-1)(q-1)$, and it turns out (in this simpler case) that factoring $n$ is equivalent to computing $\phi(n)$. See the page on RSA for the details of this equivalence.

The proof of Euler's Theorem is similar to that of Fermat. Here's a simple example of Euler's Theorem in action. Let's find the last digit of $7^{100}$. Clearly computing this number directly would take an unsatisfactory amount of space (and time). An easier way is to note that $10 = 2 \times 5$, so (since $\gcd(2, 5) = 1$) $\phi(10) = \phi(2)\phi(5) = 1 \times 4 = 4$. Since $\gcd(7, 10) = 1$, Euler's Theorem holds, which means $7^4 \equiv 1 \pmod{10}$. Note that finding the last digit of a number is the same thing as reducing that number mod $10$.

So write $100 = 4 \times 25$, which means $7^{100} = (7^4)^{25} \equiv 1^{25} \equiv 1 \pmod{10}$. The important point with Euler (or Fermat) is that when we work mod $n$ (or $p$), we get to reduce the exponents mod $\phi(n)$ (or $p - 1$) and greatly simplify our lives.
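A quick machine check of this exponent-reduction idea (using $7^{100}$ mod 10 as a concrete case; any base coprime to the modulus works the same way):

```python
n, a, e = 10, 7, 100
phi_n = 4                 # phi(10) = phi(2) * phi(5) = 1 * 4
reduced = e % phi_n       # 100 mod 4 = 0, so 7^100 = (7^4)^25 = 1 (mod 10)
assert pow(a, e, n) == pow(a, reduced, n) == 1
print(pow(a, e, n))       # 1
```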

Roughly speaking, a field is a set where we can add, subtract, multiply, and divide. Examples of fields are the real numbers and the integers mod a prime $p$. But the integers mod $n$ when $n$ is not prime do not form a field. This is because we can't divide by a number $a$ when $\gcd(a, n) > 1$. Any field with a finite number of elements turns out to have cardinality a power of some prime $p$.

The integers mod $p$ are a common example seen throughout cryptography, called prime fields. Another example of fields is binary fields. These are fields with $2^r$ elements for some $r \geq 1$. To form such a field, we first find a degree-$r$ polynomial that is irreducible mod 2; call it $m(z)$ (irreducible means it cannot be factored). We then take every polynomial with coefficients mod 2 of degree strictly less than $r$ and stick it in the set. There are precisely $2^r$ such polynomials. This is our field.

We can do this with coefficients mod any prime $p$ and get a field with $p^r$ elements, but we will not discuss this here.

For example, let's take $m(z) = z^3 + z + 1$. This is irreducible mod 2. In general, checking if a polynomial is irreducible is challenging, but a cubic polynomial is either irreducible or factors as a product of three linear factors, or of a linear factor and a quadratic factor. Either way, a reducible cubic has a linear factor, which can be ruled out by making sure there are no roots. Since we are working mod 2, simply plug in 0 and 1: $m(0) = 1$ and $m(1) = 1 + 1 + 1 \equiv 1 \pmod{2}$, so $m(z)$ is indeed irreducible mod 2.

Thus our field is the set $\{0,\ 1,\ z,\ z+1,\ z^2,\ z^2+1,\ z^2+z,\ z^2+z+1\}$. Addition is straightforward: simply add the coefficients of corresponding powers mod 2. For example, $(z^2 + z) + (z + 1) = z^2 + 2z + 1 = z^2 + 1$. Since $-1 \equiv 1 \pmod{2}$, subtraction is the same as addition.

Multiplication is slightly more complicated (this is the part where we see why we needed an irreducible polynomial in the first place). How do we multiply $(z^2 + 1)(z + 1)$? First multiply the way we would multiply two polynomials in high school algebra: $(z^2 + 1)(z + 1) = z^3 + z^2 + z + 1$. But this degree is too high! The degree needs to be at most 2. Notice we can write $z^3 + z^2 + z + 1 = (z^3 + z + 1) + z^2$ (remember that coefficients are mod 2). Thus, $$z^3 + z^2 + z + 1 \equiv z^2 \pmod{z^3 + z + 1}.$$

This is exactly the same situation as the Euclidean algorithm! For any two polynomials $a(z)$ and $m(z)$, we can write $a(z) = q(z)m(z) + r(z)$ for some polynomials $q(z)$ and $r(z)$, where the degree of $r(z)$ is strictly less than the degree of $m(z)$. Thus, in our 8-element field, we have $(z^2 + 1)(z + 1) = z^2$.

In general, finding $q(z)$ and $r(z)$ can be done using polynomial long division. Another way (which is really another way of thinking about polynomial long division) is to keep adding appropriate powers of $z$ multiplied by the modulus polynomial $m(z)$; adding a multiple of $m(z)$ is free since $m(z) \equiv 0$, and mod 2 adding is the same as subtracting. For example, let's find out what polynomial $z^7$ is congruent to. To kill the $z^7$ term, add $z^4 m(z)$. This means that $$z^7 \equiv z^7 + z^4(z^3 + z + 1) \equiv (z^7 + z^7) + z^5 + z^4 \equiv z^5 + z^4 \pmod{z^3 + z + 1}.$$

We have now reduced this from degree 7 to degree 5, so we can keep on going! $$z^5 + z^4 \equiv z^5 + z^4 + (z^2 + z)(z^3 + z + 1) \equiv z^3 + z \equiv z^3 + z + (z^3 + z + 1) \equiv 1 \pmod{z^3 + z + 1}.$$ This means that $z^7 \equiv 1$ in this field. Note that the choice of irreducible polynomial actually matters here. If we had chosen $z^3 + z^2 + 1$, for example (which is also irreducible mod 2), we would get a different field (but the two fields are what is known as isomorphic).
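This reduce-as-you-go multiplication is easy to code if we store each polynomial as a bitmask, with bit $i$ holding the coefficient of $z^i$. A sketch for our 8-element field, with $m(z) = z^3 + z + 1$ stored as `0b1011`:

```python
M, R = 0b1011, 3  # m(z) = z^3 + z + 1, field of 2^3 elements

def gf_mul(a: int, b: int) -> int:
    """Multiply two field elements (degree < 3 polynomials as bitmasks)."""
    product = 0
    while b:
        if b & 1:
            product ^= a      # mod-2 addition is XOR
        b >>= 1
        a <<= 1               # multiply a by z...
        if (a >> R) & 1:
            a ^= M            # ...and reduce if the degree reached 3
    return product

print(bin(gf_mul(0b101, 0b011)))  # (z^2+1)(z+1) = z^2, i.e. 0b100

x = 1
for _ in range(7):                # confirm z^7 = 1 in this field
    x = gf_mul(x, 0b010)
print(x)                          # 1
```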

So how do we do division? The standard trick is to use the Extended Euclidean Algorithm! Use polynomial long division at each step of the computation of $\gcd(a(z), m(z)) = 1$, then rewrite the calculations to find polynomials $s(z)$ and $t(z)$ such that $s(z)a(z) + t(z)m(z) = 1$. Then reduce mod $m(z)$ to get $s(z)a(z) \equiv 1 \pmod{m(z)}$, or $a(z)^{-1} \equiv s(z) \pmod{m(z)}$.

But an easier way (at least to code up, perhaps not to do by hand) is to observe, for any nonzero polynomial $f(z)$ in the field, that $f(z)^{2^r - 1} = 1$. This is very similar to Fermat's Little Theorem. We can then combine our multiplication algorithm with repeated squaring (i.e., modular exponentiation), since this immediately implies that $f(z)^{-1} = f(z)^{2^r - 2}$. For example, to find $(z^2)^{-1}$ in our field, we can simply take $(z^2)^{2^3 - 2} = (z^2)^6 = z^{12}$ and reduce it to have degree at most 2. It turns out $(z^2)^{-1} = z^2 + z + 1$ (you should do the calculations yourself to verify this).
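Combining shift-and-reduce multiplication with repeated squaring gives inversion via $f^{2^r - 2}$. A self-contained sketch for the 8-element field, again storing polynomials as bitmasks with $m(z) = z^3 + z + 1$ as `0b1011`:

```python
M, R = 0b1011, 3  # m(z) = z^3 + z + 1, field of 2^3 = 8 elements

def gf_mul(a: int, b: int) -> int:
    """Shift-and-reduce multiplication in GF(2^3)."""
    product = 0
    while b:
        if b & 1:
            product ^= a
        b >>= 1
        a <<= 1
        if (a >> R) & 1:
            a ^= M
    return product

def gf_inv(f: int) -> int:
    """Inverse of nonzero f via f^(2^r - 2), by repeated squaring."""
    result, e = 1, 2**R - 2   # exponent 6 in our field
    while e:
        if e & 1:
            result = gf_mul(result, f)
        f = gf_mul(f, f)
        e >>= 1
    return result

inv = gf_inv(0b100)           # (z^2)^{-1}
print(bin(inv))               # 0b111, i.e. z^2 + z + 1
print(gf_mul(0b100, inv))     # 1
```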

How can we take square roots mod $p$? When can we take square roots mod $p$? What about mod $n$? It turns out precisely half of the nonzero numbers have square roots mod an odd prime $p$, and the other half don't. For example, 2 is not a square mod 5. All the squares mod 5 are $0^2 \equiv 0$, $1^2 \equiv 1$, $2^2 \equiv 4$, $3^2 \equiv 4$, $4^2 \equiv 1$. The Legendre symbol $\left(\frac{a}{p}\right)$ (read "$a$ on $p$") is very useful in determining if a number has a square root mod $p$ or not.

The Legendre symbol is defined as $$\left(\frac{a}{p}\right) = \begin{cases}0, &a \equiv 0 \pmod{p}, \\ 1, &a \text{ is a nonzero square mod } p, \\ -1, &a \text{ is not a square mod } p.\end{cases}$$

It turns out this can be computed very effectively as $\left(\frac{a}{p}\right) \equiv a^{(p-1)/2} \pmod{p}$. There are also more efficient ways to compute it (using quadratic reciprocity and Jacobi symbols), but we will not need that here. See Quadratic Reciprocity for the details.
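This criterion is a near one-liner in Python (a sketch; `pow` returns $p - 1$ rather than $-1$, so we map it back, and `legendre` is my own naming):

```python
def legendre(a: int, p: int) -> int:
    """Legendre symbol (a/p) via a^((p-1)/2) mod p, for odd prime p."""
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

print([legendre(a, 5) for a in range(5)])  # [0, 1, -1, -1, 1]
```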

One useful fact (for elliptic curve cryptography) is that any integer $a$ mod an odd prime $p$ has precisely $1 + \left(\frac{a}{p}\right)$ square roots. Why is this true? If $a \equiv 0$, there is one square root (namely $0$), and $1 + \left(\frac{a}{p}\right) = 1$. If $\left(\frac{a}{p}\right) = -1$, there are $1 + (-1) = 0$ square roots. If $\left(\frac{a}{p}\right) = 1$, there are $1 + 1 = 2$ square roots (if $x^2 \equiv a$, then $(-x)^2 \equiv a$, and there can be no more than two roots of a quadratic polynomial over a field).

But how do we actually compute a square root if it does exist? (Once we have one, the other is easy to find: simply negate it.) There is a fairly complicated algorithm called Shanks' Algorithm, which we won't cover. But it is easy in the case when $p \equiv 3 \pmod{4}$. Simply set $x = a^{(p+1)/4} \bmod p$ (note that $(p+1)/4$ is an integer exactly when $p \equiv 3 \pmod 4$). Then, when $\left(\frac{a}{p}\right) = 1$, $$x^2 \equiv a^{(p+1)/2} \equiv a^{(p-1)/2 + 1} \equiv a\left(\frac{a}{p}\right) \equiv a \pmod{p}.$$
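For $p \equiv 3 \pmod 4$ this is again a one-liner (a sketch; it assumes the caller has already checked that $a$ is actually a square, e.g. via the Legendre symbol):

```python
def sqrt_mod_p(a: int, p: int) -> int:
    """Square root of a mod p, valid when p % 4 == 3 and (a/p) = 1."""
    assert p % 4 == 3
    return pow(a, (p + 1) // 4, p)

x = sqrt_mod_p(2, 7)        # 7 = 3 (mod 4), and 2 is a square mod 7
print(x, x * x % 7)         # 4 2
```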

What about square roots mod $n$? This is difficult in general, and the Chinese Remainder Theorem is useful here. If $n$ is a product of distinct primes, $a$ will be a square mod $n$ if and only if $a$ is a square mod each of the primes in $n$'s factorization. In this case, we can find a square root mod $n$ by finding one mod each prime and applying the Chinese Remainder Theorem. If $n$ is a product of powers of distinct primes, we apply the same method, but need to find square roots mod a power of each prime. This can be done by finding a root mod $p$ and "lifting" that root to a solution mod $p^2$, etc. This idea is formalized by Hensel's Lemma (see pages 9-10 here for details).
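Gluing per-prime roots together with the Chinese Remainder Theorem might look like this (a sketch for the simplest case $n = pq$ with $p, q \equiv 3 \pmod 4$; it uses Python 3.8+'s `pow(m, -1, n)` for modular inverses):

```python
def crt(r1: int, m1: int, r2: int, m2: int) -> int:
    """Solve x = r1 (mod m1), x = r2 (mod m2) for coprime m1, m2."""
    inv = pow(m1, -1, m2)                 # m1^{-1} mod m2 (Python 3.8+)
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

def sqrt_mod_pq(a: int, p: int, q: int) -> int:
    """One square root of a mod p*q, for primes p, q = 3 (mod 4)."""
    rp = pow(a, (p + 1) // 4, p)          # a root mod p
    rq = pow(a, (q + 1) // 4, q)          # a root mod q
    return crt(rp, p, rq, q)

x = sqrt_mod_pq(4, 3, 7)
print(x, x * x % 21)  # 16 4
```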

Square roots in binary fields turn out to be quite easy. For a binary field of degree $r$ (i.e., a field with $2^r$ elements), any polynomial $f(z)$ in the field satisfies $f(z)^{2^r} = f(z)$. This is similar to Fermat's Little Theorem (it follows from Lagrange's Theorem, if you've studied abstract algebra). So simply define $g(z) = f(z)^{2^{r-1}}$. Then $$g(z)^2 = { {((f(z))}^{2^{r-1}}) }^2 = {(f(z))}^{2 \cdot 2^{r-1}} = {(f(z))}^{2^r} = f(z).$$
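In our 8-element field, $g = f^{2^{r-1}} = f^4$, i.e. just square twice. A self-contained bitmask sketch (again assuming $m(z) = z^3 + z + 1$ as `0b1011`):

```python
M, R = 0b1011, 3  # m(z) = z^3 + z + 1, field of 2^3 elements

def gf_mul(a: int, b: int) -> int:
    """Shift-and-reduce multiplication in GF(2^3)."""
    product = 0
    while b:
        if b & 1:
            product ^= a
        b >>= 1
        a <<= 1
        if (a >> R) & 1:
            a ^= M
    return product

def gf_sqrt(f: int) -> int:
    """Square root: g = f^(2^(r-1)), i.e. square f exactly r-1 times."""
    g = f
    for _ in range(R - 1):
        g = gf_mul(g, g)
    return g

for f in range(8):            # check g^2 = f for every field element
    g = gf_sqrt(f)
    assert gf_mul(g, g) == f
print("all 8 square roots check out")
```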

It is not so simple for finite fields with a number of elements that is neither prime nor a power of 2. We will not cover that here.