12. Monoidal Categories and PROPs (Part 1)

So far we have not discussed categories in any depth. The reason for this is that I wanted to give you a feel for working with diagrams, for which we do not really need any deeper theory. I hope that I managed to convince you that diagrammatic reasoning is very intuitive: stretching, tightening and sliding along wires are all quite easy to visualise and think about. String diagrams are a user-friendly notation, and the operations on them are also quite intuitive, at least if you have ever played with Lego. These intuitive aspects are extremely important.

Similarly, it is perfectly usual to do sophisticated calculations using natural numbers without knowing the ins and outs of the Peano axioms, the definition of semirings, or the maths of initial algebras. In fact, maybe thinking too hard about the set-theoretic definition of the natural numbers or the Church encoding would actually make calculation more difficult!

My point is that the natural numbers are best understood, at least at first, as an intuitive concept: adding and multiplying are operations that have many applications in the world around us. Similarly, it's possible, or even preferable, to work with diagrams and use them to perform calculations without thinking too much about the underlying mathematical baggage.

I have also, however, promised not to handwave. So I feel that I owe it to you to present a proper mathematical argument that supports the claim that the diagrams built from the copy, discard, add and zero generators are really the same thing as matrices of natural numbers. And I do not know how to do this rigorously without using category theory. We have already started the proof of this claim in the last episode, by showing how to translate diagrams to matrices, but we need to understand what this translation amounts to exactly and why it makes sense.

In any case, while it is possible to do graphical linear algebra without any category theory, doing so misses out on a lot of fun. For example, knowing about PROPs, we will eventually have a nice explanation of where the ten equations that we have seen so far come from. Some techniques of graphical linear algebra rely on understanding this provenance. We will also have a tighter grip on the finer points of diagrammatic reasoning, which is useful since we will be using it more and more as we get deeper into the story. Finally, having the category theory around also makes it easier to convince people who already know about linear algebra that diagrams are relevant to their interests. Indeed, category theory gives us the tools to relate the two worlds: the world of diagrams and the world of “traditional notation”.


 

A PROP is a special kind of monoidal category. I suspect that the definition was kicking around for a while, but the first reference that I know of is in a 1965 paper by Saunders Mac Lane, one of the two founders of category theory.

We need to get through a checklist of all the requirements of PROPs, so please bear with me while we work our way through this taxonomy. Thankfully, we already have two examples that we can refer to: diagrams and matrices.

Maybe you have heard that categories consist of objects and arrows. As their objects, PROPs have the natural numbers: what this really means is that the things that we care about, diagrams and matrices, are indexed by pairs of natural numbers. As we have seen:

  1. Any diagram has a number of dangling wires on the left and a number of dangling wires on the right.
  2. Any matrix has a number of columns and a number of rows.

We need a neutral word that can describe both diagrams and matrices, so that we don’t get confused. That generic word from category theory is arrow. And we will see that both diagrams and matrices are indeed the arrows of two different (albeit extremely closely related) PROPs.

The traditional notation for arrows unfortunately looks very much like function notation, but it is important to keep in mind that arrows do not need to be anything like functions.

A : m → n

So an arrow in a PROP has two associated natural numbers, the domain and the codomain. For a diagram, the domain is the number of dangling wires on the left and the codomain is the number of dangling wires on the right. For a matrix, the number of columns is the domain and the number of rows is the codomain.
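
If you like to see things in code, here is a minimal sketch in Python of this bookkeeping for matrices. The numpy library and the Arrow wrapper are my own choices for illustration, not anything from the episodes; only the columns/rows convention comes from the text above.

    import numpy as np

    class Arrow:
        # A matrix of natural numbers viewed as an arrow dom -> cod.
        # Convention from the text: columns = domain, rows = codomain.
        def __init__(self, entries):
            self.matrix = np.asarray(entries, dtype=int)
            self.cod, self.dom = self.matrix.shape  # (rows, columns)

    # A 2x3 matrix has 3 columns and 2 rows, so it is an arrow 3 -> 2.
    A = Arrow([[1, 0, 2],
               [0, 1, 1]])
    assert (A.dom, A.cod) == (3, 2)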

 

Because PROPs are monoidal categories, there is an operation called the monoidal product. It takes two arrows as arguments and produces an arrow as a result. The rule is as follows:

if A : m₁ → n₁ and B : m₂ → n₂, then A ⊕ B : m₁ + m₂ → n₁ + n₂

As we have seen, direct sum, both of diagrams and of matrices, fits into this general form. So direct sum, in both cases, is an example of a monoidal product. The monoidal product in PROPs is typically not commutative, but it is required to be associative:

(A ⊕ B) ⊕ C = A ⊕ (B ⊕ C)    ①

This is one of the ways in which PROPs, which are strict monoidal categories, are simpler than ordinary monoidal categories, which come with much more bureaucracy: in vanilla monoidal categories, the monoidal product is only assumed to be associative “up to isomorphism”, which means that the equals symbol in ① is replaced with something weaker. This brings a whole load of extra baggage that we can simply ignore in PROPs. Maybe you have heard of Homotopy Type Theory (HoTT), which has become quite fashionable in recent years: HoTT is all about getting to grips with the general idea of “replacing equality with something weaker”.

And so far so good for diagrams and matrices: direct sum passes the test for associativity in both of our worlds. The monoidal product also has an identity: a special arrow id₀ : 0 → 0. It satisfies the property that, for any arrow A, we have

id₀ ⊕ A = A ⊕ id₀ = A

In the world of diagrams, the identity for the monoidal product is the empty diagram, which we have already seen in the right-hand side of the bone equation (B4). In the world of matrices, it is the empty matrix () : 0 → 0.
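
For matrices, all of this can be tested mechanically. Here is a sketch in the same hypothetical Python setting: direct_sum is a name I made up for the block-diagonal construction (scipy calls the same thing block_diag), and the asserts check associativity ① and the unit laws for id₀.

    import numpy as np

    def direct_sum(a, b):
        # Monoidal product of matrix arrows: the block-diagonal matrix
        # with a in the top-left corner and b in the bottom-right.
        out = np.zeros((a.shape[0] + b.shape[0],
                        a.shape[1] + b.shape[1]), dtype=int)
        out[:a.shape[0], :a.shape[1]] = a
        out[a.shape[0]:, a.shape[1]:] = b
        return out

    id0 = np.zeros((0, 0), dtype=int)   # the empty matrix () : 0 -> 0

    A = np.array([[1, 2]])      # an arrow 2 -> 1
    B = np.array([[3], [4]])    # an arrow 1 -> 2
    C = np.array([[5]])         # an arrow 1 -> 1

    # Associativity (1) holds on the nose, and id0 is the unit:
    assert np.array_equal(direct_sum(direct_sum(A, B), C),
                          direct_sum(A, direct_sum(B, C)))
    assert np.array_equal(direct_sum(id0, A), A)
    assert np.array_equal(direct_sum(A, id0), A)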

 

The second operation in PROPs is composition, the basic operation of category theory. It works as follows:

if A : m → n and B : n → k, then A ; B : m → k

Composition is required to be associative:

(C ; D) ; E = C ; (D ; E)     ②

We have seen how to compose diagrams, and that their composition is associative. In the world of matrices, matrix multiplication plays the role of composition, and we checked in the last episode that it is associative.
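
Here is the corresponding sketch for matrix composition. One caveat about my conventions: with columns as the domain, the diagrammatic composite A ; B, read “A then B”, comes out as the matrix product of B with A, in the reversed order.

    import numpy as np

    def compose(a, b):
        # Diagrammatic composition a ; b ("a then b"). With columns as
        # domain and rows as codomain, this is the matrix product b @ a.
        assert b.shape[1] == a.shape[0], "codomain of a must equal domain of b"
        return b @ a

    C = np.array([[1, 2, 0]])        # an arrow 3 -> 1
    D = np.array([[1], [0], [2]])    # an arrow 1 -> 3
    E = np.array([[1, 1, 1]])        # an arrow 3 -> 1

    # Associativity (2): (C ; D) ; E = C ; (D ; E)
    assert np.array_equal(compose(compose(C, D), E),
                          compose(C, compose(D, E)))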

In PROPs, for every natural number m, there is a special arrow idₘ : m → m. We have already seen id₀, which is the special arrow in the case m = 0. Identities serve a special role with regards to composition: composing any arrow with an identity at either end has no effect. So, for any arrow A : m → n, we have that

idₘ ; A = A = A ; idₙ     ③

Can you think of what serves as identities in the world of diagrams? We have met id₁, the single wire diagram that we have already been referring to as the identity. What about idₘ for m > 1? Well, we can construct it by stacking several identity wires on top of each other with the aid of direct sum. In fact, this follows from a requirement of PROPs that

idₖ₊ₘ = idₖ ⊕ idₘ     ④

Taking stacked wires as the identities means that equation ④ is inherent in diagrams: just by drawing diagrams we are implicitly assuming that it holds. This is because we can use the Crema di Mascarpone procedure to cut up the same diagram in different ways. For example, taking just the copy generator, we can get three different expressions for it just by slicing in the right places:

[Figure: the copy generator sliced in three ways, giving copy = id₁ ; copy and copy = copy ; (id₁ ⊕ id₁)]

So you see, one diagram, two equations. There is more of this kind of thing to come.

In the world of matrices, the identity on m is simply the m×m matrix that has 1s on the main diagonal and 0s everywhere else. Conveniently, this notion is commonly known as the identity matrix, and identity matrices satisfy ③ and ④.
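
Both ③ and ④ are then one-line checks in the numpy sketch, with compose and direct_sum being the same hypothetical helpers as before.

    import numpy as np

    def compose(a, b):      # a ; b, as in the earlier sketch
        return b @ a

    def direct_sum(a, b):   # block-diagonal, as in the earlier sketch
        out = np.zeros((a.shape[0] + b.shape[0],
                        a.shape[1] + b.shape[1]), dtype=int)
        out[:a.shape[0], :a.shape[1]] = a
        out[a.shape[0]:, a.shape[1]:] = b
        return out

    def ident(m):           # id_m : m -> m, the m x m identity matrix
        return np.eye(m, dtype=int)

    A = np.array([[1, 0, 2],
                  [0, 1, 1]])       # an arrow 3 -> 2

    # (3): composing with identities at either end has no effect
    assert np.array_equal(compose(ident(3), A), A)
    assert np.array_equal(compose(A, ident(2)), A)

    # (4): stacked identities, id_{k+m} = id_k (+) id_m
    assert np.array_equal(ident(2 + 3), direct_sum(ident(2), ident(3)))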

Now for an interesting way in which composition and the monoidal product are related. Suppose that we have diagrams A, B, C and D that connect to each other as illustrated below.

[Figure: diagrams A, B, C, D wired together, with A feeding into B on top and C feeding into D below]

There are two ways we could reasonably slice this diagram apart to get a formula. The first one is how we went about it for Crema di Mascarpone: make a vertical slice first

[Figure: the same diagram with a vertical slice, separating A ⊕ C from B ⊕ D]

and read off the formula (A ⊕ C) ; (B ⊕ D).

The other way is to make a horizontal slice first

[Figure: the same diagram with a horizontal slice, separating A ; B from C ; D]

and read off the formula (A ; B) ⊕ (C ; D). For simplicity I assumed that all four diagrams have a single dangling wire on each end, but the concept generalises to any numbers of dangling wires: the fact is that in any PROP, given A, B, C, D that can be composed in the way that I’ve illustrated, the following equation, sometimes called the middle four interchange, holds.

(A ⊕ C) ; (B ⊕ D) = (A ; B) ⊕ (C ; D)    ⑤

Note that, again, just by drawing diagrams we are assuming that this property holds, since the diagrams in each case are exactly the same; only the slicing technique changes. By the way, this is what I was alluding to last time when I mentioned that the same diagram can be cut up in different ways. In the world of matrices, showing that the middle four interchange holds is a matter of performing a simple calculation; we will skip the details.
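
For the curious, here is that simple calculation carried out in the numpy sketch, on arbitrarily chosen natural number matrices with compatible shapes; the sizes below are my own choice.

    import numpy as np

    def compose(a, b):      # a ; b
        return b @ a

    def direct_sum(a, b):   # block-diagonal
        out = np.zeros((a.shape[0] + b.shape[0],
                        a.shape[1] + b.shape[1]), dtype=int)
        out[:a.shape[0], :a.shape[1]] = a
        out[a.shape[0]:, a.shape[1]:] = b
        return out

    rng = np.random.default_rng(0)
    A = rng.integers(0, 5, (3, 2))   # an arrow 2 -> 3
    B = rng.integers(0, 5, (2, 3))   # an arrow 3 -> 2
    C = rng.integers(0, 5, (1, 4))   # an arrow 4 -> 1
    D = rng.integers(0, 5, (2, 1))   # an arrow 1 -> 2

    # Middle four interchange (5):
    assert np.array_equal(compose(direct_sum(A, C), direct_sum(B, D)),
                          direct_sum(compose(A, B), compose(C, D)))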

Equations ④ and ⑤ illustrate the point I made all the way back in Episode 2: a (large enough) diagram can tell a thousand formulas. The mere action of drawing a diagram sweeps away swathes of equational bureaucracy! This is one of the reasons why diagrams are such a great notation.

 

There is one final property of PROPs that I wanted to explain in this episode, and it concerns one instance of sliding along wires in diagrammatic reasoning. Let’s stay with diagrams A and B to illustrate it, but again the concept generalises to any numbers of dangling wires in a straightforward way. Remember that since we can slide diagrams along wires, we consider the three diagrams below to be equal:

[Figure: three equal diagrams: A above and to the left of B, A level with B, and A above and to the right of B]

This yields the equations

(A ⊕ id) ; (id ⊕ B) = A ⊕ B = (id ⊕ B) ; (A ⊕ id)

that are required to hold in any PROP and intuitively say that whenever A and B do not interconnect, they can be interleaved in arbitrary order. Actually, it turns out that the equations above are a consequence of the middle four interchange equation ⑤ and the way that identities work with composition ③. Can you see why? Think about what happens when we let some of the four diagrams in ⑤ equal the identity.
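
And here is the sliding equation checked in the numpy sketch. Note that the two occurrences of id in the equation generally have different sizes; with the shapes I picked below they come out as id₁ on one side and id₂ on the other.

    import numpy as np

    def compose(a, b):      # a ; b
        return b @ a

    def direct_sum(a, b):   # block-diagonal
        out = np.zeros((a.shape[0] + b.shape[0],
                        a.shape[1] + b.shape[1]), dtype=int)
        out[:a.shape[0], :a.shape[1]] = a
        out[a.shape[0]:, a.shape[1]:] = b
        return out

    A = np.array([[1, 2]])      # an arrow 2 -> 1
    B = np.array([[3], [4]])    # an arrow 1 -> 2
    id1 = np.eye(1, dtype=int)
    id2 = np.eye(2, dtype=int)

    # (A (+) id) ; (id (+) B) = A (+) B = (id (+) B) ; (A (+) id)
    assert np.array_equal(compose(direct_sum(A, id1), direct_sum(id1, B)),
                          direct_sum(A, B))
    assert np.array_equal(compose(direct_sum(id2, B), direct_sum(A, id2)),
                          direct_sum(A, B))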

As we have seen back in Episode 6, the equality of the three diagrams illustrated above is part of what we have described as “sliding along wires”, one of the rules of diagrammatic reasoning. Next time we will complete our discussion of PROPs, explaining the mathematics behind the remainder of diagrammatic reasoning, and in the episode after that we will finish, for the time being, talking about the translation from diagrams to matrices. This detour is taking longer than I planned, and I’m itching to tell you about more graphical linear algebra.


An aside for people who already know the basic concepts of categories: properties ④ and ⑤, which relate the monoidal product to composition, amount to saying that ⊕ is not just any old way of taking two arrows and obtaining a new arrow: it is actually a functor, a mapping from the cartesian product of the category with itself back to itself. Equation ④ says that the mapping preserves identities and ⑤ that it preserves composition.

By the way, if you are interested in a good book for learning the basic concepts of category theory, I highly recommend Categories and Computer Science by Bob Walters. It is the book that first introduced me to categories. Bob was a professor at Sydney Uni when I was an undergraduate there and I did a vacation scholarship with him at the end of my second year of uni. I kept in touch with Bob over the years and, thinking about it now, he influenced my thought process more than any other person. Sadly, he died earlier this year and I will really miss our discussions.

Continue reading with Episode 13 – PROPs (Part 2) and Permutations

6 thoughts on “12. Monoidal Categories and PROPs (Part 1)”

  1. Hi! Shouldn’t we have id_m instead of in_m in Equation (4)? And (A ⊕ id) as the last parenthesis of the last inline equation? Thanks for the posts!


  2. Really enjoying this series! Thank you!

    Just a nitpick, but I believe the final equation in the post might have a typo. It currently reads

    (A ⊕ id) ; (id ⊕ B) = A ⊕ B = (id ⊕ B) ; (id ⊕ A).

    However, shouldn’t the final (id ⊕ A) be (A ⊕ id)? Otherwise, applying ⑤ gives us

    (id ⊕ B) ; (id ⊕ A) = (id ; id) ⊕ (B ; A) = id ⊕ (B ; A) = B ; A.

    On the other hand, ⑤ gives the expected result in the following case:

    (id ⊕ B) ; (A ⊕ id) = (id ; A) ⊕ (B ; id) = A ⊕ B.

    Intuitively, this also makes sense given that the left diagram and right diagram are just left-right commutes of a vertical split down the middle (i.e. the ; operator).


    • Nice question!

      In that case the middle four interchange does not apply.

      In your example A connects to D so A must always come before D. In the examples where middle four interchange applies, there are no connections like this.

      In your example we still have the freedom, using diagrammatic reasoning, to move A and C past each other and/or B and D past each other. This is an instance of the sliding along wires property: e.g. in your diagram:

      (A ⊕ id) ; (id ⊕ C) = A ⊕ C = (id ⊕ C) ; (A ⊕ id)

