# 10. Paths and Matrices

Back in episode 5 I promised to explain how to multiply matrices in a way that you probably have not seen before. I also said that I would do that within 5 episodes, and since this is now episode 10, my hands are tied.

By the way, if you are enjoying this blog, I think that you will also enjoy Dan Ghica’s wonderful series on inventing an algebraic knot theory for kids: Dan uses a similar diagrammatic language to talk about knots, but there’s one major difference between his diagrams and ours: his wires tangle! No wonder, since wires that do not tangle are not particularly useful if you want to construct knots. If you want the “real” reason, it is that his underlying mathematical structure is a braided monoidal category, whereas ours is a symmetric monoidal category.

There is one more blog to mention: John Baez has recently been writing about PROPs and linear systems; check it out if you want a sneak preview of one of the star applications of graphical linear algebra! John and his student Jason Erbele developed the theory independently and more or less at the same time as Filippo, Fabio and I. But, apart from the fact that they have different conventions for drawing diagrams, the equational theory that they have developed can be shown to be equivalent to the one we will develop here. It’s super interesting that science is full of temporal coincidences like this; it’s not even the first time that this kind of thing has happened to me!

We start by quashing the fear that the sky may fall on our heads: that is, that somehow graphical reasoning could let us do unreasonable things, like proving that 2 = 3.

Let’s convince ourselves by doing some counting, armed with a hypothetical example. Suppose we have a diagram A, say with two dangling wires on the left and one on the right. Let’s give the first left wire a name, say p and let’s call the right wire q. Suppose also that we’ve cut up the diagram into chunks, like we did with Crema di Mascarpone, and we’ve identified one particular diagram component, called B, with one dangling wire on the left, called r, and one on the right, called s. In the diagram r and s are drawn as dangling from B, but inside of A they may connect to other components.

Now how do we count the number of paths from p to q? We could write down a formula as follows.

This idea generalises easily enough; think about what happens if B has multiple dangling wires on the left and on the right. But what does all this counting tell us?

Well, for one, to count the number of paths from p to q we do not really need to look inside B, we only need to know about the number of paths from each of its dangling points on the left to each of its points on the right. In particular, if I swap B for another diagram, say C, with the same numbers of paths, then the overall number of paths from p to q will not change. The idea of counting paths is compositional.
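To make the counting concrete, here is a small Python sketch. The encoding and names are entirely mine, not from the post: each component is summarised by a table of path counts, and gluing components multiplies counts along each middle wire and sums over the choices.

```python
# Sketch (encoding mine): a component with m left wires and n right wires
# is summarised by an m x n table, entry [i][j] = number of paths from
# left wire i to right wire j.

def compose(first, second):
    """Path counts of 'first followed by second': a path in the composite
    is a path through `first` glued, at some middle wire k, to a path
    through `second`, so counts multiply along k and sum over k."""
    m, n = len(first), len(second[0])
    mid = len(second)
    return [[sum(first[i][k] * second[k][j] for k in range(mid))
             for j in range(n)] for i in range(m)]

B = [[2]]            # a component with 2 paths from its wire r to its wire s
C = [[2]]            # a different component with the same path count
before = [[1], [1]]  # the part of A to the left of B: one path from each of p's side
after  = [[3]]       # the part of A to the right of B, ending at q

# Swapping B for C leaves the p-to-q count unchanged: counting is compositional.
assert compose(compose(before, B), after) == compose(compose(before, C), after)
```

Readers who have seen matrix multiplication may recognise the shape of `compose`; that is no accident, as we will see later in this episode.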

So let’s now think about diagrammatic reasoning. Clearly stretching or tightening wires doesn’t change the number of paths. Neither does sliding generators along wires. So the only point where things could go wrong is when we use one of our equations. And what happens when we use an equation? We identify a part of the diagram, a component, that corresponds to one side of the equation, and replace it with the other. Here’s our cheat sheet again, for reference:

The thing to notice is that, in every equation, the path numbers are the same in both left and right hand side. For example, in both sides of (B1) there is one path from every dangling point on the left to every dangling point on the right. In (B2) there are no dangling points on the right, so there are no paths to speak of. In (Unit) and (Counit) there is exactly one path… and so on.

What all of this tells us is that the number of paths from any dangling point on the left to any dangling point on the right is an invariant in diagrammatic reasoning for the theory of adding and copying that we have painstakingly identified so far. So, for instance, starting with the diagram for 3 from the last episode, every diagram equal to it according to diagrammatic reasoning will have precisely three paths from the left dangling wire to the right one. In particular, this excludes the possibility of 2 being equal to 3. Phew!

In the last episode we concentrated on diagrams with precisely one dangling wire on the left and right. Let’s now broaden our horizons and draw some more complicated diagrams.

Last time I claimed that every diagram with one dangling wire at each end is equal to one that comes from the natural number syntactic sugar. We will convince ourselves of this fact soon, but for now let’s just believe that this is indeed the case. I want to compound that claim with another: there is also a general shape that describes every diagram with two dangling wires at each end. It is the following, where a, b, c and d are arbitrary natural numbers.

So, in the diagram above, there are a paths from the first dangling point on the left to the first dangling point on the right, b paths from the second on the left to the first on the right, and so on. All these fun facts almost make you want to write down a nice 2×2 square of natural numbers.

Let’s figure out what happens when we compose two such beasts; it’s an instructive exercise. We will perform a calculation to get the composition to fit into the general pattern described by the diagram above. A word of warning: the diagrams below get a bit complicated, but don’t worry, our calculations will hardly ever be this messy!

So the first thing that we do is use (B1) twice. Next we apply the (copying) lemma from the last episode four times, to bring a, b, c and d over the copying generators to their right.

Next, we use the (adding) lemma to bring e, f, g and h over the adding generators to their left, and use the (multiplying) lemma to multiply each pair.

The next thing to notice is that there are two ways of getting from the first dangling wire on the left to the first dangling wire on the right; one that goes through ea and a second that goes through fc, as indicated by the red wires in the diagram below.

The key to simplifying the diagram further is to collect all such compatible paths and add them up. At this point though, it is useful to introduce some additional syntactic sugar that will help us to reduce the complexity of our diagrams.

The first of our new syntactic sugars looks like the copy generator, but with three dangling wires on the right.

The reason why it’s a nice syntax is that (CoAssoc) tells us that it does not actually matter in which order we perform the copy operation: we simply end up with three equal copies.

We also introduce a sugar that looks like copy, but has four outgoing wires, as follows:

Again, the syntax is evocative, because (CoAssoc) implies that it doesn’t matter in which order we perform the copying. In particular, all of the following are equal.

We can continue this procedure recursively, obtaining a family of sugars indexed by the number of outgoing wires. The recursive case is the following:

The base case, for one outgoing wire, is simply the identity. By the way, it is also useful to consider a “copy with zero outgoing wires”, and let this be equal to the discard generator.
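Here is a tiny Python model of this family (the encoding is mine): "copying" a value onto n wires simply yields n equal copies, with the recursive and base cases matching the ones above.

```python
def copy(x, n):
    """Copy sugar with n outgoing wires, modelled on numbers-moving-on-wires."""
    if n == 0:              # zero outgoing wires: the discard generator
        return []
    if n == 1:              # one outgoing wire: the identity
        return [x]
    # recursive case: one copy generator, then the (n-1)-ary sugar on a branch
    return [x] + copy(x, n - 1)

print(copy(42, 3))  # [42, 42, 42]
```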

Next we can derive two useful laws for dealing with this family of sugars. The first one generalises (CoComm) and says that if I permute two adjacent wires then the sugar can swallow up the twist.

The equation follows from the fact that I can use (CoAssoc) to transform the diagram in such a way so that the twist connects directly to a copy generator, and then use (CoComm). Now, since any permutation whatsoever can be constructed from twist and identity using the two operations of diagrams, it follows that a copy sugar with k outputs followed by any permutation on the k wires is equal to the copy sugar: the sugar can eat up any permutation thrown at it. Simple, right? There is also a rule that generalises (CoUnit): if I cap any of the k outputs with the discard generator then the resulting diagram is the sugar with k-1 outputs. We will not use this second law for now, but it’s useful to keep in mind.
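The permutation-absorption law is easy to spot-check with the numbers-on-wires intuition (a toy Python check, setup mine): all k outputs of the copy sugar carry the same value, so no permutation of the wires can be detected.

```python
import itertools

k = 4
outs = ['x'] * k  # the k equal outputs of the k-ary copy sugar
# every permutation of the output wires yields the same tuple of values
assert all(list(p) == outs for p in itertools.permutations(outs))
```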

Now, since addition is bizarro copy, we can introduce a similar family of sugars for addition, which satisfy the corresponding bizarro properties that generalise (Comm) and (Unit). To get the idea, just reflect the diagrams in the explanation above and colour the circles white!

Remember the long proof of the inductive step of (copying) from last time? Using this syntactic sugar makes it much simpler. Try it! For now, we will use our sugar to simplify the rest of our current computation.

After the sugaring step, we can easily rearrange the paths by sliding numbers along wires and getting the sugars to eat up any of the resulting twists. We can then desugar into the form that allows us to apply the (sum) lemma from the last episode.

Now, if you took undergraduate linear algebra then you probably memorised the technique for multiplying 2×2 matrices. Here’s a quick reminder:

$\left(\begin{array}{cc} e & f \\ g & h \end{array}\right)\left(\begin{array}{cc} a & b \\ c & d \end{array}\right) = \left(\begin{array}{cc} ea+ fc & eb + fd \\ ga+ hc & gb + hd\end{array}\right)$           ①

These numbers look familiar, don’t they?

In fact, the diagrams that we have been drawing all along are actually in 1-1 correspondence with matrices that have natural number entries. Moreover, just as composition of diagrams with one dangling wire at each end corresponds to multiplication of natural numbers, composing diagrams of arbitrary shape can be understood as matrix multiplication. We will go through this in more detail next time.
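Here is a quick numeric spot-check in Python (the numbers are arbitrary, the encoding mine): computing the entries of the composite by collecting compatible paths, middle wire by middle wire, reproduces the 2×2 product above.

```python
first  = [[1, 2], [3, 4]]   # path counts of the (a b; c d) diagram
second = [[5, 6], [7, 8]]   # path counts of the (e f; g h) diagram

# entry (i, j): for each middle wire k, multiply the counts of the two legs
# and sum over k -- "collect all such compatible paths and add them up"
composite = [[sum(second[i][k] * first[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]

print(composite)  # [[23, 34], [31, 46]], i.e. (ea+fc, eb+fd; ga+hc, gb+hd)
```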

For now, a quick summary: the diagram

corresponds to the matrix $\left(\begin{array}{cc} a & b \\ c & d\end{array}\right)$. In general, the number of columns of the corresponding matrix is the number of dangling wires on the left, and the number of rows is the number of dangling wires on the right. Moreover, the entry at row i and column j is the number of paths from the jth wire on the left to the ith wire on the right.

Finally, the composition

corresponds to the matrix obtained by performing the matrix multiplication in ① above.

And, as you’ve seen, it all follows from how adding and copying interact!

Continue reading with Episode 11 – From Diagrams to Matrices

# 8. When Adding met Copying…

Let’s start with a recap. Last time I claimed that linear algebra was all about what happens when adding and copying interact. So far, though, we have only focussed on properties satisfied by adding and copying in isolation. Properties such as (Comm) and (CoComm).

We noticed that adding and copying are bizarro versions of each other: not only in the way that they look, but also in the equations that they satisfy. Remember that the word bizarro, as we have been using it, means to reflect the diagram and invert the colours. Left switches places with right, and white switches places with black.

In this episode, copying meets adding for the first time. Sparks fly.

It’s time to meet one of the most interesting equations of graphical linear algebra.

Equation (B1) is by far the most complicated of the ones that we have seen so far, so let’s stop for a moment and take the time to understand and appreciate it. First, notice that both the left hand side and the right hand side have the same number of dangling wires on both sides: two on the left and two on the right. Remember that it is important that any equation that we write down satisfies this property: otherwise the compositionality property—that is, swapping diagrams for equal diagrams in some larger context—wouldn’t make any sense since we wouldn’t know what to do with the extra/missing wires!

Next, let’s try to convince ourselves that (B1) makes sense by using the intuition of numbers moving along wires. In the left hand side, there are two dangling wires on which we can input numbers x and y, and they are first added, then copied. Like so:

Now let’s do the same thing for the right hand side: first we copy the two numbers, we swap two copies in the middle, then we add. Like this:

Since we are making no assumptions about what actual numbers x and y represent, our demonstration shows that the behaviour of the left hand and the right hand sides of (B1) is the same: we can’t detect any difference between them if we cover up the internal contents of the circuits and only experiment with them by providing various inputs and observing outputs.
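With the same intuition, (B1) can be spot-checked in Python (the helper names are mine): the left hand side adds then copies, while the right hand side copies, twists the middle wires, then adds.

```python
def add_then_copy(x, y):
    s = x + y          # adding generator
    return (s, s)      # copy generator

def copy_twist_add(x, y):
    x1, x2 = x, x      # copy x
    y1, y2 = y, y      # copy y
    x2, y1 = y1, x2    # twist: swap the two middle wires
    return (x1 + x2, y1 + y2)   # two adding generators

# the two sides are indistinguishable on any inputs we try
assert all(add_then_copy(x, y) == copy_twist_add(x, y)
           for x in range(-5, 6) for y in range(-5, 6))
```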

I want to say one more thing about (B1) — its bizarro version is itself! If we reflect the left hand side and switch the colours, we get the left hand side again. Same with the right hand side. By the way, here’s a useful exercise for you to get comfortable with the algebra of diagrams: write down a formula that constructs the right hand side of (B1) using the operations ⊕ and ; on the basic generators, identity and twist. Remember how we did it for Crema di Mascarpone?

What happens when adding meets discard, copy’s sidekick? That scenario gives rise to the second key equation.

The rationale for (B2) is easy: in the left hand side, we add our two inputs and then throw away the answer. But, clearly, the observable behaviour is the same if we just went ahead and threw away the two inputs without adding: the behaviour in both cases consists of taking any two numbers and not producing any results.

Next up is an equation that concerns the situation when zero meets copying.

Remember that the behaviour of the zero generator is simply to output the number 0 on its result wire. If I copy 0, I get two 0s. That’s what (B3) is saying.
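Both (B2) and (B3) can be spot-checked the same way (the helper names and the tuple encoding are mine): discarding consumes its inputs and produces nothing, and zero just outputs 0.

```python
def discard(*inputs):   # discard generator: consume inputs, produce nothing
    return ()

def zero():             # zero generator: output the number 0
    return 0

# (B2): add then discard behaves like discarding both inputs
assert discard(3 + 4) == (discard(3) + discard(4))  # both are the empty tuple
# (B3): copying zero behaves like two zero generators
z = zero()
assert (z, z) == (zero(), zero())
```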

Take another look at (B2) and (B3). Can you see how they are related? One is the bizarro version of the other. Graphical linear algebra is full of bizarro situations like this: we already noticed that the equations for adding and copying are bizarro versions of each other. In fact, this is part of a general fact of graphical linear algebra:

If an equation holds, then its bizarro version holds too.

This will be true for the entire run of our story; even as we consider additional generators, diagrams and equations. It’s a useful thing to keep in mind because sometimes it will save us half the work, and sometimes it will allow us to see connections between seemingly quite different phenomena.

This is probably a good point to take a short break for a discussion about our methodology. Some of you, probably including most of the non-mathematicians, may be confused as to why we keep identifying certain equations and giving them names.

I hinted a few episodes ago that we want to gradually get away from thinking about “numbers moving on wires”. Instead, we will use the key equations that we have identified so far as axioms; that is, basic postulates that we simply accept to be true.

Much of our attention will be on examining what kind of things these axioms allow us to prove, and what they do not prove, using only the rules of working with diagrams together with basic logical reasoning. Just as when we proved the upside down version of (Unit). Historically, this kind of methodology—that is, identifying basic properties and seeing how far they get you—is tried and true: the tradition goes back at least to 300 BC with Euclid and his Elements.

In this episode we identify four equations; we have seen three of them so far, (B1), (B2) and (B3). Together with the three equations for adding and the three for copying, this system of ten axioms will suffice for the next few episodes. As we will discover, it is already a very interesting system, even if it may not look like much at this point! For example, we will use diagrams to do some basic arithmetic in the next episode.

The final axiom is a bit strange.

First, the left hand side is what happens when you compose zero with discard. It’s a strange kind of diagram that we have not seen before, because there are no dangling wires at all. But what is the right hand side? I didn’t forget to draw it: it is also a diagram, the diagram with nothing in it. A blank piece of paper.

We will sometimes call the left hand side of (B4) by the extremely technical name “bone”. I guess that I probably don’t need to explain the etymology. The name was invented by my coauthor Fabio—at one point the whiteboard in my office was covered with them and we needed to call them something!

So (B4) tells us that, in any graphical proof, whenever we come across a bone we can simply erase it. That’s because, if I output zero and promptly discard it, it’s like doing nothing at all.

Mathematicians know structures that satisfy equations (B1) through (B4) by different names, typically bimonoids or bialgebras. We will use the term bialgebra. It’s not a great name, but it will do.

There is one more thing to be said. Maybe you remember that, in ordinary arithmetic, products distribute over additions: so the following is true for any numbers a, b and c:

a(b+c) = ab + ac

In the equation above, when evaluating the left hand side, the addition comes first followed by a multiplication. Instead, in the right hand side, the order is reversed: we multiply first and add later. So the distributivity property reverses the order in which we perform operations.

Our new equations from this episode can also be understood as reversing the order of the operations. For example, take another look at (B1). In the left hand side, the addition comes first followed by a copy, while in the right hand side it is the copying that comes first, followed by additions.

In fact, it is really the case that equations (B1) through (B4) can be considered a distributive law in a formal mathematical sense. I will not go into the details now, because they are not strictly necessary for us to make progress in the story. For the interested, it is all explained in a wonderful paper by Steve Lack. Steve is one of the extremely brilliant category theorists that I have been lucky enough to meet and learn from: he supervised my honours project at Sydney Uni, back when I was finishing my undergraduate degree.

Continue reading with Episode 9 – Natural numbers, diagrammatically