13. PROPs (Part 2) and Permutations

First, a brief reminder of where we are at the moment. We are going through the various parts of mathematical structures called PROPs (product and permutation categories). As we go through each new piece, we pause to check that both our diagrams as well as matrices of natural numbers satisfy the conditions. The goal is to conclude that both diagrams and matrices organise themselves as the arrows of two PROPs. The translation from diagrams to matrices from two episodes ago will turn out to be an isomorphism of PROPs, which means that for all intents and purposes we can regard diagrams and matrices as different languages to talk about the same thing.

In the last episode we went through some of the structure of PROPs, but the story is not yet complete. Actually, so far we have talked only about the “PRO” in PROPs. The final P stands for permutation, and permutations are the main topic for today.


 

 

Let’s start by introducing another convenient syntactic sugar for diagrams. It will be useful for discussing permutations, but it will also come in handy for us later on. This one is pretty simple: whenever we write a natural number k above a wire, it simply means the diagram that results from stacking k copies of the identity on top of one another. Of course, when k = 1 there is no need to write any number at all: the wire with 1 above it is just a wire.

[Diagram: a wire labelled k, standing for k stacked identity wires]

In the last episode we saw that this diagram serves as the identity on k in the world of diagrams. There’s one thing to keep in mind about this sugar: it’s important not to confuse it with our standard intuition of numbers travelling along wires. Whenever we feed numbers into a circuit, we will always be careful to write them below the wires.

PROPs come with some structure that allows us to permute—that is, jumble up—any collection of wires. Given natural numbers m, n, every PROP has a special arrow called

σm,n : m + n → n + m.

The symbol σ is the Greek lowercase letter sigma. Now of course, m + n = n + m, so the domain and the codomain of every such σ are actually equal. The reason for writing them in two different ways is the intuition behind σm,n: it swaps the block of m wires with the block of n wires.
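Before drawing any pictures, here is a minimal sketch (in Python, and not part of the original post) of what σm,n is supposed to do. A wire permutation is modelled as a list p in which p[i] is the output position that input wire i ends up at; the name sigma_direct is my own.

```python
def sigma_direct(m, n):
    """The permutation of m + n wire positions in which the first m wires
    cross over the remaining n: input wire i ends up at position p[i]."""
    return [n + i for i in range(m)] + list(range(n))

print(sigma_direct(1, 1))  # [1, 0]           -- the twist
print(sigma_direct(2, 3))  # [3, 4, 0, 1, 2]  -- two wires crossing over three
```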

In diagrams, we already know σ1,1 by its nickname, the twist.

[Diagram: the twist σ1,1]

In general, we are going to need a gadget that looks like the following, for any two natural numbers m and n.

[Diagram: σm,n, drawn as a wire labelled m crossing over a wire labelled n]


 

The intuition for σm,n is clear: it should be the diagram consisting of m wires crossing over n wires. Let’s draw it for a particular case; say for m = 2 and n = 3.

[Diagram: σ2,3, two wires crossing over three]

The key observation is that we can zoom in, as illustrated below, and apply the Crema di Mascarpone procedure to obtain a formula that only uses identities and twists. For example:

[Diagram: σ2,3 subdivided into a formula built from identities and twists]

We could now ask whether we can do something along these lines for any m and n: first draw the diagram in which the m wires cross over the n wires, then, it seems, zoom in far enough and subdivide the diagram, obtaining an expression that only uses identities and twists.

 


 

In any PROP, the family of σs has to satisfy a few equations. For now let’s examine the following two:

σk,m+n = (σk,m ⊕ idn) ; (idm ⊕ σk,n)      (†)

σk+m,n = (idk ⊕ σm,n) ; (σk,n ⊕ idm)       (‡)

together with special cases σk,0 = idk and σ0,k = idk for all k. These equations tell us how to construct the family of σs using just twist and identity; in fact, they already almost look like a recursive definition. We will use them to define the gadget σm,n recursively as another syntactic sugar. Again, the sugar is something that will be useful to have in our pantry as we do more graphical linear algebra.

The equations may look mysterious, but when we draw them as diagrams, they become pretty straightforward. Starting with (†) (the symbol is called dagger):

[Diagram: equation (†), drawn with wires labelled k, m and n]

Thus, to cross k wires over m+n wires is the same as first crossing k over m wires, then over n wires. The second equation (‡) (double dagger) is similarly intuitive:

[Diagram: equation (‡), drawn with wires labelled k, m and n]

Now let’s translate the equations to a bona fide recursive definition.

We start with the case where there is just one wire that crosses over n wires: that is, we will define σ1,n for any n using recursion. The base case n = 0 is simple: it is just the identity, since a wire that crosses over zero wires is simply a wire. Now, if I know how to construct the diagram for a wire crossing over k wires, then I can construct the diagram for a wire crossing over k + 1 wires. Indeed, the recursive case is

[Diagram: the recursive case, σ1,k+1 = (σ1,k ⊕ id1) ; (idk ⊕ σ1,1)]

which is a special case of equation (†). That gives us  σ1,n for any n.
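If it helps, the recursion can be spelled out in code. The following Python sketch (mine, not from the post) uses the wire-permutation representation from above: compose stands for sequential composition ;, tensor stands for stacking ⊕, and twist is σ1,1.

```python
def compose(p, q):               # p ; q : do p first, then q
    return [q[i] for i in p]

def tensor(p, q):                # p ⊕ q : stack q underneath p
    return p + [len(p) + i for i in q]

def identity(k):
    return list(range(k))

twist = [1, 0]                   # sigma_{1,1}

def sigma_1(n):
    """sigma_{1,n}: base case n = 0, then (sigma_{1,k} ⊕ id_1) ; (id_k ⊕ twist)."""
    if n == 0:
        return identity(1)
    k = n - 1
    return compose(tensor(sigma_1(k), identity(1)),
                   tensor(identity(k), twist))

print(sigma_1(3))                # [3, 0, 1, 2]: one wire crosses over three
```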

Now, for σm,n, we let σ0,n be the identity on n (n copies of the identity, stacked). The corner case is σ0,0 which is just the empty diagram.

Since we already know how to construct σ1,n for any n, the general recursive case can be built as follows:

[Diagram: the recursive case, σk+1,n = (id1 ⊕ σk,n) ; (σ1,n ⊕ idk)]

where, in order to cross k + 1 wires over n we use σ1,n and recurse. This definition is a special case of  (‡). That takes care of all possible ms and ns. And because of the recursive way we constructed the family σm,n, it is not difficult to prove, using induction, that (†) and (‡) hold.
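Continuing the sketch above, here is the full recursive construction of σm,n in Python (again my own illustration, not part of the post), together with a small test that it agrees with the direct description and that (†) and (‡) hold on a range of values; an inductive proof would of course cover all values.

```python
def compose(p, q):                    # p ; q : do p first, then q
    return [q[i] for i in p]

def tensor(p, q):                     # p ⊕ q : stack q underneath p
    return p + [len(p) + i for i in q]

def identity(k):
    return list(range(k))

def sigma(m, n):
    """sigma_{m,n}, built recursively from identities and the twist."""
    if m == 0:
        return identity(n)
    if n == 0:
        return identity(m)
    if m == 1:                        # (sigma_{1,n-1} ⊕ id_1) ; (id_{n-1} ⊕ twist)
        return compose(tensor(sigma(1, n - 1), identity(1)),
                       tensor(identity(n - 1), [1, 0]))
    # (id_1 ⊕ sigma_{m-1,n}) ; (sigma_{1,n} ⊕ id_{m-1})
    return compose(tensor(identity(1), sigma(m - 1, n)),
                   tensor(sigma(1, n), identity(m - 1)))

def sigma_direct(m, n):               # m wires crossing directly over n wires
    return [n + i for i in range(m)] + list(range(n))

for k in range(4):
    for m in range(4):
        for n in range(4):
            assert sigma(m, n) == sigma_direct(m, n)
            # (†)
            assert sigma(k, m + n) == compose(tensor(sigma(k, m), identity(n)),
                                              tensor(identity(m), sigma(k, n)))
            # (‡)
            assert sigma(k + m, n) == compose(tensor(identity(k), sigma(m, n)),
                                              tensor(sigma(k, n), identity(m)))
print("(†) and (‡) hold on the tested range")
```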

Thus, in the world of diagrams, we have our family of σs, as required of all PROPs. But PROPs need this family to satisfy two additional conditions, because they are supposed to be symmetric monoidal categories. The first of these is:

σm,n ; σn,m = idm+n          ①

Diagrammatically, this means:
[Diagram: equation ①, σm,n followed by σn,m equals the identity on m + n wires]

We could go ahead and prove this using induction and diagrammatic reasoning, tightening individual wires, but we will not bother, as it’s already pretty obvious just by looking at the diagram: it is not difficult to imagine tightening all the wires in the left hand side at the same time until we get to the right hand side, since our wires don’t tangle.
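For what it’s worth, equation ① can also be spot-checked with the wire-permutation sketch used earlier (Python, my own illustration, not from the post):

```python
def compose(p, q):                    # p ; q : do p first, then q
    return [q[i] for i in p]

def sigma(m, n):                      # m wires crossing over n wires
    return [n + i for i in range(m)] + list(range(n))

# Equation ①: crossing m over n and then n back over m untangles everything.
for m in range(5):
    for n in range(5):
        assert compose(sigma(m, n), sigma(n, m)) == list(range(m + n))
```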

The other requirement is called naturality and it says that, categorically speaking, the σs define a natural transformation.  In any PROP, naturality can be summarised by stating that the following diagram commutes for any natural numbers m, n, m’, n’ and arrows A: m → m’, B: n → n’:

[Commutative square: the naturality condition for σ]

Let me explain what the word “commutes” means. If you look at the diagram above, there are two paths we can follow, starting at the top left point of the diagram. We can either go right then down, or down then right. The fact that the diagram commutes means that the two paths are equal. As an equation:

(A ⊕ B) ;  σm’,n’ = σm,n ; (B ⊕ A)         ②

Now let’s translate this to the world of diagrams. Equation ② says that for any diagram A with m dangling wires on the left and m’ wires on the right, and any diagram B with n wires on the left and n’ on the right, the following two diagrams ought to be equal:

[Diagram: equation ②, with A and B sliding across the wire crossing]

The above can be considered as a more general version of the rule that we call sliding along wires, since we can get from the left hand side to the right hand side of the diagram above by sliding A and B across the crossing of the wires on the right. But we don’t really need to make any additional assumptions about diagrammatic reasoning: it can be proved from the two principles of diagrammatic reasoning that we already identified back in Episode 6: 1) generators can slide along wires, 2) wires do not tangle.

The proof is a little bit more complicated than the kind of inductions that we have seen so far, since we have to deal not only with the fact that n, m, n’, m’ are arbitrary numbers, but also that A and B are arbitrary diagrams! The important insight is that we know how any diagram is constructed: by starting with generators, identities and twists and using the two operations ⊕ and ; of the algebra of diagrams. Given that any diagram whatsoever can be constructed like this, the trick is to use a slight generalisation of induction, called structural induction, which is very popular in programming language theory. The base cases would be to show that the equation holds when A and B are generators, twists or identities. The inductive steps would then consider how more complicated instances of A and B are built with our two diagram operations ⊕ and ;. But let’s not bother with this for now, as it is a bit tedious. Do let me know in the comments if you’re interested in seeing this done in more detail!
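Arbitrary diagrams A and B are beyond a few lines of code, but the special case where A and B are themselves wire permutations (so m = m’ and n = n’) can be spot-checked with the representation used earlier. This Python sketch is my own illustration, not part of the post:

```python
import itertools

def compose(p, q):                    # p ; q : do p first, then q
    return [q[i] for i in p]

def tensor(p, q):                     # p ⊕ q : stack q underneath p
    return p + [len(p) + i for i in q]

def sigma(m, n):                      # m wires crossing over n wires
    return [n + i for i in range(m)] + list(range(n))

# Naturality ② in the special case where A and B are permutations of m and n wires.
m, n = 3, 2
for A in itertools.permutations(range(m)):
    for B in itertools.permutations(range(n)):
        a, b = list(A), list(B)
        assert compose(tensor(a, b), sigma(m, n)) == compose(sigma(m, n), tensor(b, a))
```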


 

We have argued that diagrammatic reasoning can be used to show that our diagrams satisfy all the conditions expected of PROPs. In fact, diagrammatic reasoning characterises these conditions: any fact that can be shown by just stretching or tightening of wires or sliding diagrams along wires can be equivalently shown by using only the various equations required by the definition of PROPs.

We have already seen in the last episode how just the action of drawing the diagram can save the writing of equations. I hope that it’s becoming apparent in this episode that our diagrammatic syntax, together with diagrammatic reasoning, is a real equational bureaucracy monster slayer. 


 

So far in this episode we have only talked about the world of diagrams, but we also need to understand what the σs are in the world of matrices. Let’s summarise: σm,n is a matrix that, in general, looks like this:

[Matrix: σm,n in block form, with identity blocks Im and In in the off-diagonal corners and zero blocks elsewhere]

where Im and In are identity matrices of size, respectively, m and n, and the 0s are shorthand for filling in all the available spaces in the grid with 0s. For example, σ2,3 is the following matrix:

[Matrix: σ2,3, a 5 × 5 permutation matrix]

It is not difficult to show that (†), (‡), ① and ② are true in the world of matrices, but it is a bit boring, so we will skip it. You are welcome to go ahead and check it as an exercise! By the way, there is a special name in matrix algebra for these kinds of matrices and their products: they are called permutation matrices.
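If you would like a starting point for that exercise, here is a hedged sketch in Python with NumPy. It is my own illustration, not part of the post, and the exact placement of the identity blocks depends on the diagram-to-matrix convention of the earlier episodes, so treat the convention used below (column j of a matrix has a 1 in the row of the output that input j is wired to, and A ; B becomes the product of B and A) as an assumption.

```python
import numpy as np

def perm_matrix(perm):
    """Matrix of a wire permutation: column j has a single 1, in row perm[j]."""
    M = np.zeros((len(perm), len(perm)), dtype=int)
    for j, i in enumerate(perm):
        M[i, j] = 1
    return M

def sigma_mat(m, n):                      # the permutation matrix of sigma_{m,n}
    return perm_matrix([n + i for i in range(m)] + list(range(n)))

def seq(M, N):                            # M ; N in diagram order
    return N @ M

def oplus(M, N):                          # M ⊕ N as a block-diagonal matrix
    top = np.hstack([M, np.zeros((M.shape[0], N.shape[1]), dtype=int)])
    bot = np.hstack([np.zeros((N.shape[0], M.shape[1]), dtype=int), N])
    return np.vstack([top, bot])

print(sigma_mat(2, 3))                    # the 5 x 5 matrix for sigma_{2,3}

# ①: sigma_{m,n} ; sigma_{n,m} is the identity matrix.
m, n = 2, 3
assert (seq(sigma_mat(m, n), sigma_mat(n, m)) == np.eye(m + n, dtype=int)).all()

# ②: naturality, checked on randomly chosen matrices of natural numbers.
rng = np.random.default_rng(0)
A = rng.integers(0, 5, size=(4, m))       # A : m -> 4
B = rng.integers(0, 5, size=(2, n))       # B : n -> 2
assert (seq(oplus(A, B), sigma_mat(4, 2)) == seq(sigma_mat(m, n), oplus(B, A))).all()
```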

We have now finally ticked all the required checkboxes that were necessary to establish that both diagrams and matrices are the arrows of PROPs.

Let’s give the name B, which stands for bimonoids or bialgebras, to the PROP that has as its arrows the diagrams constructed from the copy, discard, add and unit generators, subject to our ten equations. And let’s call the PROP that has as its arrows the matrices of natural numbers Mat. Given our now more complete understanding of diagrams and matrices, in the next episode we will return to our translation θ from diagrams to matrices.

 


 

Permutations are a popular topic in maths, especially in combinatorics and group theory: everyone likes to think about shuffling cards. If you already know a little bit about permutations then you may find this aside useful; otherwise feel free to skip it, since it does not really contribute to our story.

Permutations of a finite set of size n are the elements of a mathematical structure called the symmetric group Sn. The group operation is composition of permutations.

Maybe you already have an inkling that, in any PROP, the arrows from n to n include all the permutations. Indeed, any permutation can be constructed, using the algebra of PROPs, from the identity on 1 and the twist σ1,1. One could ask whether these arrows satisfy all the equations that are expected of permutations.
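As for the first part of that, the construction of an arbitrary permutation from identities and twists can be made concrete with a small Python sketch (mine, not from the post): it is essentially bubble sort, composing adjacent twists of the form id ⊕ σ1,1 ⊕ id and recording the swaps.

```python
def compose(p, q):                    # p ; q : do p first, then q
    return [q[i] for i in p]

def adjacent_twist(n, j):
    """id_j ⊕ twist ⊕ id_(n-j-2): swap wires j and j+1 out of n."""
    p = list(range(n))
    p[j], p[j + 1] = p[j + 1], p[j]
    return p

def twist_decomposition(perm):
    """Bubble sort, recording each swap: the left-to-right composite of the
    recorded adjacent twists equals perm."""
    work, steps, swapped = list(perm), [], True
    while swapped:
        swapped = False
        for j in range(len(work) - 1):
            if work[j] > work[j + 1]:
                work[j], work[j + 1] = work[j + 1], work[j]
                steps.append(j)
                swapped = True
    return steps

perm = [2, 0, 3, 1]
rebuilt = list(range(len(perm)))
for j in twist_decomposition(perm):
    rebuilt = compose(rebuilt, adjacent_twist(len(perm), j))
assert rebuilt == perm
```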

The answer is yes, and it can be easily checked since symmetric groups have a well-known presentation. We just need to check that the following two equations hold in any PROP:

[Diagram: the twist composed with itself equals two identity wires]

[Diagram: the Yang-Baxter relation for the twist]
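Concretely, and once more as my own Python illustration rather than anything from the post, the two relations say the following about wire permutations:

```python
def compose(p, q):                    # p ; q : do p first, then q
    return [q[i] for i in p]

def tensor(p, q):                     # p ⊕ q : stack q underneath p
    return p + [len(p) + i for i in q]

twist, wire = [1, 0], [0]

# First relation: the twist is self-inverse.
assert compose(twist, twist) == [0, 1]

# Second relation (Yang-Baxter):
# (twist ⊕ id) ; (id ⊕ twist) ; (twist ⊕ id) = (id ⊕ twist) ; (twist ⊕ id) ; (id ⊕ twist)
a, b = tensor(twist, wire), tensor(wire, twist)
assert compose(compose(a, b), a) == compose(compose(b, a), b)
```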

The first equation is clearly an instance of ①. The second follows from the naturality condition ②. To see why this is the case, we can draw a box around the first twist in the left hand side and the last twist in the right hand side:

[Diagram: the Yang-Baxter relation with one twist boxed on each side]

The remaining structure on each side of the equation is, by (‡), exactly σ2,1. So I can redraw the two sides as a diagram that looks as follows

[Diagram: the boxed twist sliding across σ2,1]

and this is clearly an instance of ②, so it must commute. Neat, right?
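The two steps of this argument can also be spot-checked with the wire-permutation sketch (again my own illustration in Python, not part of the post):

```python
def compose(p, q):                    # p ; q : do p first, then q
    return [q[i] for i in p]

def tensor(p, q):                     # p ⊕ q : stack q underneath p
    return p + [len(p) + i for i in q]

twist, wire = [1, 0], [0]
sigma_2_1 = [1, 2, 0]                 # two wires crossing over one

# By (‡): what remains after boxing off one twist on each side is sigma_{2,1}.
assert compose(tensor(wire, twist), tensor(twist, wire)) == sigma_2_1

# So Yang-Baxter reads (twist ⊕ id) ; sigma_{2,1} = sigma_{2,1} ; (id ⊕ twist),
# an instance of naturality ② with A the twist and B a single wire.
assert compose(tensor(twist, wire), sigma_2_1) == compose(sigma_2_1, tensor(wire, twist))
```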


 

Continue reading with Episode 14 – Homomorphisms of PROPs.

 

 
