First, a brief reminder of where we are at the moment. We are going through the various parts of mathematical structures called PROPs (product and permutation categories). As we go through each new piece, we pause to check that both our diagrams and our matrices of natural numbers satisfy the conditions. The goal is to conclude that both diagrams and matrices organise themselves as the arrows of two PROPs. The translation from diagrams to matrices from two episodes ago will turn out to be an **isomorphism** of PROPs, which means that for all intents and purposes we can regard diagrams and matrices as different languages to talk about the same thing.

In the last episode we went through some of the structure of PROPs, but the story is not yet complete. Actually, so far we have talked only about the “PRO” in PROPs. The final P stands for **permutation**, and permutations are the main topic for today.

Let’s start by introducing another convenient syntactic sugar for diagrams. It will be useful for discussing permutations, but it will also come in handy for us later on. This one is pretty simple: whenever we write a natural number k **above** a wire, it simply means the diagram that results from stacking k copies of the identity on top of one another. Of course, when k = 1 there is no need to write any number at all: the wire with 1 above it is just a wire.

In the last episode we saw that this diagram serves as the identity on k in the world of diagrams. There’s one thing to keep in mind about this sugar: it’s important not to confuse it with our standard intuition of numbers travelling along wires. In the case where we feed numbers into a circuit, we will always be careful to write them **below** the wires.

PROPs come with some structure that allows us to permute—that is, jumble up—any collection of wires. Given natural numbers m, n, every PROP has a special arrow called

σ_{m,n} : m + n → n + m.

The symbol σ is the Greek lowercase letter sigma. Now of course, m + n = n + m, so the domain and the codomain of every such σ are actually equal. The reason why I wrote them in two different ways is that the intuition about σ_{m,n} is that it **swaps** the m and the n.

In diagrams, we already know σ_{1,1} by its nickname, the twist.

In general, we are going to need a gadget that looks like the following, for any two natural numbers m and n.

The intuition for σ_{m,n} is clear: it should be the diagram consisting of m wires crossing over n wires. Let’s draw it for a particular case; say for m = 2 and n = 3.
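If you like to see the intuition in code, here is a minimal Python sketch; the encoding (recording, for each input position i, the output position it exits at) and the helper name `sigma` are my own, not anything standard:

```python
def sigma(m, n):
    """The permutation sending the first m wires past the next n wires.

    Returned as a list p where input wire i exits at position p[i]:
    the first m wires come out last, the remaining n come out first.
    """
    return [n + i for i in range(m)] + list(range(n))

# Two wires crossing over three:
print(sigma(2, 3))   # -> [3, 4, 0, 1, 2]
```

So in the m = 2, n = 3 case, wires 0 and 1 exit at positions 3 and 4, after the block of three.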

The key observation is that we can zoom in, as illustrated below, and apply the Crema di Mascarpone procedure to obtain a formula that only uses identities and twists. For example:

We could now ask whether we could do something along these lines for **any** m and n. First, we can draw a diagram where the m wires cross over the n wires, then… it seems like it should be possible to zoom in far enough and then subdivide the diagram, obtaining an expression that only uses identities and twists.

In any PROP, the family of σs have to satisfy a few equations. For now let’s examine the following two:

σ_{k,m+n} = (σ_{k,m} ⊕ id_{n}) ; (id_{m} ⊕ σ_{k,n}) (†)

σ_{k+m,n} = (id_{k} ⊕ σ_{m,n}) ; (σ_{k,n} ⊕ id_{m}) (‡)

together with special cases σ_{k,0} = id_{k} and σ_{0,k} = id_{k} for all k. These equations tell us how to construct the family of σs using just twist and identity; in fact, they already *almost* look like a recursive definition. We will use them to define the gadget σ_{m,n} recursively as another syntactic sugar. Again, the sugar is something that will be useful to have in our pantry as we do more graphical linear algebra.

The equations may look mysterious, but when we draw them as diagrams, they become pretty straightforward. Starting with (†) (the symbol is called dagger):

Thus, to cross k wires over m+n wires is the same as first crossing k over m wires, then over n wires. The second equation (‡) (double dagger) is similarly intuitive:

Now let’s translate the equations to a bona fide recursive definition.

We start with the case where there is just one wire that crosses over n wires: that is, we will define σ_{1,n} for any n using recursion. The base case n = 0 is simple: it is just the identity, since a wire that crosses over zero wires is simply a wire. Now, if I know how to construct the diagram for a wire crossing over k wires, then I can construct a diagram for a wire crossing over k + 1 wires. Indeed, the recursive case is

which is a special case of equation (†). That gives us σ_{1,n} for any n.

Now, for σ_{m,n}, we let σ_{0,n} be the identity on n (n copies of the identity, stacked). The corner case is σ_{0,0}, which is just the empty diagram.

Since we already know how to construct σ_{1,n} for any n, the general recursive case can be built as follows:

where, in order to cross k + 1 wires over n we use σ_{1,n} and recurse. This definition is a special case of (‡). That takes care of all possible ms and ns. And because of the recursive way we constructed the family σ_{m,n}, it is not difficult to prove, using induction, that (†) and (‡) hold.
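The recursive definition translates quite directly into code. Here is a minimal Python sketch, where a diagram made only of wires is encoded as the list sending input position i to output position p[i]; this encoding and all helper names are my own:

```python
def identity(n):
    """n parallel wires: input i exits at position i."""
    return list(range(n))

TWIST = [1, 0]                      # sigma_{1,1}

def seq(p, q):
    """Sequential composition p ; q: follow p, then q."""
    return [q[i] for i in p]

def par(p, q):
    """Parallel composition p (+) q: stack q below p."""
    return p + [len(p) + i for i in q]

def sigma1(n):
    """sigma_{1,n}: one wire crossing over n, by the recursion above."""
    if n == 0:
        return identity(1)
    return seq(par(sigma1(n - 1), identity(1)),
               par(identity(n - 1), TWIST))

def sigma(m, n):
    """sigma_{m,n}: m wires crossing over n, the special case of (‡)."""
    if m == 0:
        return identity(n)
    return seq(par(identity(1), sigma(m - 1, n)),
               par(sigma1(n), identity(m - 1)))

print(sigma(2, 3))   # -> [3, 4, 0, 1, 2]: the first two wires exit last
```

Notice that the only ingredients are `identity`, `TWIST`, and the two diagram operations, exactly as promised: instances of (†) and (‡) can then be checked by running both sides.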

Thus, in the world of diagrams, we have our family of σs, as required of all PROPs. But PROPs need this family to satisfy two additional conditions, because they are supposed to be symmetric monoidal categories. The first of these is:

σ_{m,n} ; σ_{n,m} = id_{m+n} ①

We could go ahead and prove this by induction, using diagrammatic reasoning to tighten individual wires, but we will not bother, as it’s already pretty obvious just by looking at the diagram: it is not difficult to imagine tightening all the wires in the left-hand side at the same time until we get to the right-hand side. Our wires don’t tangle.
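For the skeptical, equation ① is also easy to check mechanically. A tiny Python sketch, using my own encoding of crossings as lists of output positions:

```python
def sigma(m, n):
    """m wires crossing over n: input i exits at position p[i]."""
    return [n + i for i in range(m)] + list(range(n))

def seq(p, q):
    """Sequential composition p ; q."""
    return [q[i] for i in p]

# Equation (1): crossing over and straight back untangles to the identity.
for m in range(5):
    for n in range(5):
        assert seq(sigma(m, n), sigma(n, m)) == list(range(m + n))
print("no tangles")
```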

The other requirement is called **naturality** and it says that, categorically speaking, the σs define a natural transformation. In any PROP, naturality can be summarised by stating that the following diagram **commutes** for any natural numbers m, n, m’, n’ and arrows A: m → m’, B: n → n’:

Let me explain what the word “commutes” means. If you look at the diagram above, there are two paths we can follow, starting at the top left point of the diagram. We can either go right then down, or down then right. The fact that the diagram commutes means that the two paths are equal. As an equation:

(A ⊕ B) ; σ_{m’,n’} = σ_{m,n} ; (B ⊕ A) ②

Now let’s translate this to the world of diagrams. Equation ② says that for any diagram A with m dangling wires on the left and m’ wires on the right, and any diagram B with n wires on the left and n’ on the right, the following two diagrams ought to be equal:

The above can be considered as a more general version of the rule that we call **sliding along wires**, since we can get from the left hand side to the right hand side of the diagram above by sliding A and B across the crossing of the wires on the right. But we don’t really need to make any additional assumptions about diagrammatic reasoning: it can be proved from the two principles of diagrammatic reasoning that we already identified back in Episode 6: 1) generators can slide along wires, and 2) wires do not tangle.

The proof is a little bit more complicated than the kind of inductions that we have seen so far since we have to deal not only with the fact that n, m, n’, m’ are arbitrary numbers, but also that A and B are arbitrary diagrams! The important insight is that we know how any diagram is constructed: by starting with generators, identities and twists and using the two operations ⊕ and ; of the algebra of diagrams. Given the fact that any diagram whatsoever can be constructed like this, the trick is to use a slight generalisation of induction, called **structural induction**, that is very popular in programming language theory. The base cases would be to show that it holds when A and B are actually generators, twist or identity. Then the inductive steps would consider how to construct more complicated instances of A and B with our two diagram operations ⊕ and ;. But again let’s not bother with this for now as it is a bit tedious — but do let me know in the comments if you’re interested in seeing this done in more detail!
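Jumping ahead for a moment to the matrix interpretation of diagrams (discussed further below), naturality is also easy to spot-check numerically. The sketch below assumes the convention from the translation episode: a diagram from m to m’ becomes an m’ × m matrix, ⊕ becomes direct sum, and the diagrammatic composite A ; B becomes the matrix product of B’s matrix with A’s matrix. All helper names are mine:

```python
def matmul(A, B):
    """Product A · B of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def direct_sum(A, B):
    """Block-diagonal matrix: the matrix counterpart of (+)."""
    return ([row + [0] * len(B[0]) for row in A] +
            [[0] * len(A[0]) + row for row in B])

def perm_matrix(m, n):
    """Matrix of sigma_{m,n}: column j has a single 1, in row p[j]."""
    p = [n + i for i in range(m)] + list(range(n))
    return [[1 if p[j] == i else 0 for j in range(m + n)]
            for i in range(m + n)]

# Sample arrows: A: 2 -> 1 (the matrix of add), B: 1 -> 2 (of copy).
A, B = [[1, 1]], [[1], [1]]

# Diagram composition X ; Y is the matrix product Y · X, so (2) reads:
#   sigma_{m',n'} · (A (+) B)  =  (B (+) A) · sigma_{m,n}
lhs = matmul(perm_matrix(1, 2), direct_sum(A, B))
rhs = matmul(direct_sum(B, A), perm_matrix(2, 1))
assert lhs == rhs
```

Of course one pair of sample arrows proves nothing in general, but it is a reassuring sanity check while the structural induction is left as an exercise.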

We have argued that diagrammatic reasoning can be used to show that our diagrams satisfy all the conditions expected of PROPs. In fact, diagrammatic reasoning **characterises** these conditions: any fact that can be shown by just stretching or tightening of wires or sliding diagrams along wires can be equivalently shown by using only the various equations required by the definition of PROPs.

We have already seen in the last episode how just the action of drawing the diagram can save the writing of equations. I hope that it’s becoming apparent in this episode that our diagrammatic syntax, together with diagrammatic reasoning, is a real equational bureaucracy monster slayer.

So far in this episode we have only talked about the world of diagrams, but we also need to understand what the σs are in the world of matrices. Let’s summarise: σ_{m,n} is a matrix that, in general, looks like this:

where I_{m} and I_{n} are identity matrices of size, respectively, m and n, and the 0s are shorthand for filling in all the available spaces in the grid with 0s. For example, σ_{2,3} is the following matrix:

It is not difficult to show that (†), (‡), ① and ② are true in the world of matrices, but it is a bit boring, so we will skip it. You are welcome to go ahead and check it as an exercise! By the way, there is a special name for these kinds of matrices and their products in matrix algebra: they are called permutation matrices.
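These matrices can also be generated mechanically; here is a small Python sketch (the helper name `perm_matrix` is mine) that prints σ_{2,3} with its I_{3} block in the top right and its I_{2} block in the bottom left:

```python
def perm_matrix(m, n):
    """Matrix of sigma_{m,n}: identity blocks I_n (top right) and
    I_m (bottom left), zeros filling all the remaining space."""
    p = [n + i for i in range(m)] + list(range(n))   # input j -> output p[j]
    return [[1 if p[j] == i else 0 for j in range(m + n)]
            for i in range(m + n)]

for row in perm_matrix(2, 3):
    print(row)
```

Running this prints the five rows [0,0,1,0,0], [0,0,0,1,0], [0,0,0,0,1], [1,0,0,0,0], [0,1,0,0,0].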

We have now finally ticked all the required checkboxes that were necessary to establish that both diagrams and matrices are the arrows of PROPs.

Let’s call the PROP whose arrows are diagrams constructed from the copy, discard, add and unit generators, subject to our ten equations, by the name **B**, which stands for bimonoids or bialgebras. And we will call the PROP whose arrows are matrices of natural numbers by the name **Mat**. Given our now more complete understanding of diagrams and matrices, in the next episode we will return to our translation θ from diagrams to matrices.

Permutations are a popular topic in maths, especially in combinatorics and group theory: everyone likes to think about shuffling cards. If you already know a little bit about permutations then you may find this aside useful; otherwise feel free to skip it; it does not really contribute to our story.

Permutations on a finite set of size n are the elements of a mathematical structure called the symmetric group S_{n}. The group operation is composition of permutations.

Maybe you already have an inkling that, in any PROP, the arrows from n to n include all the permutations. Indeed, any permutation can be constructed, using the algebra of PROPs, from the identity on 1 and the twist (σ_{1,1}). One could ask if the arrows satisfy all the equations that are expected of permutations.
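Here is a quick Python experiment supporting that claim for n = 3: starting from the identity and repeatedly composing with the two adjacent twists reaches every element of S_{3}. The breadth-first search and all names are my own:

```python
from itertools import permutations

def seq(p, q):
    """Composition of permutations: follow p, then q."""
    return tuple(q[i] for i in p)

n = 3
gens = [(1, 0, 2), (0, 2, 1)]        # twist on wires 1,2 and on wires 2,3
reachable = {tuple(range(n))}        # start from the identity
frontier = set(reachable)
while frontier:
    new = {seq(p, g) for p in frontier for g in gens} - reachable
    reachable |= new
    frontier = new

assert reachable == set(permutations(range(n)))   # all 3! = 6 of S_3
```

The same search reaches all n! permutations for any n, mirroring the fact that adjacent transpositions generate S_{n}.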

The answer is yes, and it can be easily checked since symmetric groups have a well-known presentation. We just need to check that the following two equations hold in any PROP:

The first equation is clearly an instance of ①. The second follows from the naturality condition ②. To see why this is the case, we can draw a box around the first twist in the left hand side and the last twist in the right hand side:

The remaining structure in each side of the equation is, by (‡), σ_{2,1}. But now I can write the two sides as a diagram that looks as follows

and this is clearly an instance of ②, so it must commute. Neat, right?
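Assuming the second equation of the presentation is the braid relation (σ_{1,1} ⊕ id) ; (id ⊕ σ_{1,1}) ; (σ_{1,1} ⊕ id) = (id ⊕ σ_{1,1}) ; (σ_{1,1} ⊕ id) ; (id ⊕ σ_{1,1}), which is what the proof sketch above suggests, it is easy to verify with the position-list encoding of wire crossings (all names below are mine):

```python
def seq(p, q):
    """Sequential composition p ; q of wire permutations."""
    return [q[i] for i in p]

def par(p, q):
    """Parallel composition p (+) q: stack q below p."""
    return p + [len(p) + i for i in q]

TWIST = [1, 0]   # sigma_{1,1}
ID1 = [0]        # identity on one wire

lhs = seq(seq(par(TWIST, ID1), par(ID1, TWIST)), par(TWIST, ID1))
rhs = seq(seq(par(ID1, TWIST), par(TWIST, ID1)), par(ID1, TWIST))
assert lhs == rhs == [2, 1, 0]   # both sides reverse the three wires
```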

Continue reading with Episode 14 – Homomorphisms of PROPs.