Last time we proved that θ—the homomorphism that takes diagrams to matrices—is **full**. This means, roughly speaking, that for any matrix there **is at least one** corresponding diagram. Moreover, starting with an arbitrary matrix, we actually gave a recipe to construct a diagram that does the job. This provides us with a useful syntactic sugar, since any m×n matrix of natural numbers can now be considered as a diagram with n dangling wires on the left and m on the right. The construction extended the sugar for natural numbers from Episode 9.

There is still one final, thorny issue regarding θ that we ought to be nervous about. What if there are more diagrams than matrices? Or, more precisely, can we identify two different diagrams that go, via θ, to the same matrix? Then θ would not be perfect for the same reasons that “serendipity” and “dog” should not translate to the same word in French.

This final fear is laid to rest today as we tackle **faithfulness**. Once we know that θ is both full and faithful (or “fully faithful”) we can finally conclude that θ is a perfect translation: an isomorphism of PROPs.

The proof that θ is faithful is a little bit tricky. Let’s start by reminding ourselves of what faithfulness means: whenever two diagrams D_{1} and D_{2} are different, then the matrices θD_{1} and θD_{2} must also be different:

D_{1} ≠ D_{2} implies θD_{1} ≠ θD_{2} ①

This is logically equivalent to saying that, for any diagrams D_{1} and D_{2}, if θD_{1} happens to be equal to θD_{2} then D_{1} and D_{2} must have been equal to begin with.

θD_{1} = θD_{2} implies D_{1} = D_{2} ②

In spite of the logical equivalence of ① and ②, the formulation in ② is somewhat easier to turn into a proof. So let’s focus on what ② is saying. Given a diagram D, θD is a table of numbers that records the number of paths from each dangling point on the left to each dangling point on the right in D. Then, to prove ②, we need to find a way of showing that:

whenever two diagrams have the same numbers of paths then they are equal.

And we consider two diagrams equal exactly when we can find a **diagrammatic proof** of this fact, using a combination of diagrammatic reasoning and our ten equations, repeated below.
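The path-counting description of θ can be made concrete with a little code. Here is a minimal Python sketch; the encoding of a diagram as a directed graph, and all the names in it, are my illustrative assumptions rather than anything official from this series.

```python
# A sketch of θ as path counting. Left dangling points are labelled
# "L0", "L1", ... and right dangling points "R0", "R1", ...; `edges`
# maps each node of the diagram to the nodes its wires lead to.
from functools import lru_cache

def theta(edges, n, m):
    """Return the m×n matrix whose (i, j) entry is the number of
    directed left-to-right paths from left point j to right point i."""
    @lru_cache(maxsize=None)
    def paths(node, target):
        if node == target:
            return 1
        return sum(paths(nxt, target) for nxt in edges.get(node, ()))
    return [[paths(f"L{j}", f"R{i}") for j in range(n)] for i in range(m)]

# Copy a wire, then add the two branches back together: there are two
# paths from the single left point to the single right point.
copy_then_add = {"L0": ("c",), "c": ("a", "a"), "a": ("R0",)}
print(theta(copy_then_add, 1, 1))  # → [[2]]
```

The example at the bottom is the numeral 2 from Episode 9: copying and then adding yields two paths, and θ duly returns the 1×1 matrix with entry 2.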

We will outline two different ways to prove faithfulness. They both concern the four equations (B1)-(B4) that tell us what happens when adding meets copying. As we will see, the two different techniques each have their strengths and weaknesses, and for that reason it’s useful to have both in our toolbox.

The story naturally begins with diagrams that have a very particular shape: all of the copying comes before all of the adding. Let’s be a bit more precise: we mean those diagrams in which we can find a point to draw a vertical separator, so that the following condition is satisfied: the only generators allowed to the left of the separator are the black ones (copy and discard); and to the right of it the white ones (add and zero). Here’s a picture:

and here’s an example of a diagram in this form.

To the left of the separator we have two copy generators, and to the right we have two addition generators and a twist. The twist is allowed on either side of the separator: this is because in our discussion about PROPs we understood that the twist is **not** a generator, even if we do consider it a basic brick of our diagrammatic language. The twist, aka σ_{1,1}, is a part of the permutation structure expected in any PROP. In particular, another way of positioning the separator is the following, where the twist is now to the left of the separator.

Here’s a simple non-example for comparison: we cannot find a place to put in a separator, because the addition comes before the copying.

Moreover, because the addition generator connects directly to the copy generator, we cannot use any of our vanilla diagrammatic reasoning, such as sliding along wires. We could, however, use the (B1) equation in our system to replace this diagram with the previous one, which has the right shape.

Most of the work of proving faithfulness is showing that **every** diagram can be transformed using our equations and diagrammatic reasoning into one in which the copying and the adding can be separated. This is because such diagrams are *almost matrices*, as we will now see.

Here’s what we mean by this rather cryptic remark: using the syntactic sugar for copy and add from Episode 10 together with the syntactic sugar for natural numbers from Episode 9, we can easily transform any such diagram into **matrix form**. Diagrams in matrix form look like this:

It’s easy to see that the matrix that results from applying θ to the diagram above is none other than:

So the entries can be read directly off the diagram: we simply scan down the centre to fill out the matrix.

Let’s go through an example to see how this works: it is not difficult to generalise the procedure to any diagram where the copying comes before the adding. Consider the diagram below. It is of the right shape, because we can draw a separator that walls off the black structure from the white structure.

We start by cleaning the diagram up a little bit. We can use (Unit) and (Counit) to make some redundant zeros and discards disappear, and use diagrammatic reasoning to move the ones that remain so that they are right next to the boundary and don’t cause any further problems. In our example, this means using (Counit) to get rid of the discard at the bottom left, and pulling in the zero over on the right. We end up with the diagram below.

Next we introduce the syntactic sugar for copying and adding that gets rid of chains of copies and adds. In our example this means the following:

Next we clean up even more: we eat up any annoying permutations (like the one at the bottom right) with the sugars, and collect compatible paths—those with the same dangling points—using our natural number sugar.

We could really stop here, but being the pedantic sticklers that we are, we can slide the numbers along, and add 1s and 0s to get the diagram looking exactly like the template.

But why bother to go to all this trouble?

The reason is that once we have a diagram D in matrix form, the relationship with its matrix θD is extremely tight since we can read off each matrix entry just by scanning down the middle of the diagram. This close connection can be exploited to prove faithfulness: when D_{1} and D_{2} are in matrix form, and θD_{1} is the same matrix as θD_{2}, they clearly must be equal as diagrams!

So, in general, the question of faithfulness boils down to the following:

**Can we transform any diagram into matrix form?**

And, since we have just argued that any diagram in which copying comes before adding can be put into matrix form, the central question to consider becomes even simpler:

**Can we transform any diagram so that copying comes before adding?**

By the way, we have been putting diagrams in matrix form already without realising it. In Episode 10, we showed how to go from the diagram on the left below—which totally fails to be in matrix form since some add generators come before copy generators—to the diagram on the right, which is in matrix form. The procedure of putting a diagram in matrix form is thus closely connected with the concept of matrix multiplication.
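That connection can be checked numerically. If D_{1} goes from n wires to k wires and D_{2} from k wires to m wires, then every left-to-right path through the composite D_{1} ; D_{2} passes through exactly one dangling point on the middle boundary, so the path counts multiply as matrices: θ(D_{1} ; D_{2}) = θD_{2} · θD_{1}. Here is a small Python sketch of that check (the names are mine, chosen for illustration):

```python
def mat_mul(A, B):
    """Ordinary matrix product: (A·B)[i][j] = Σ_k A[i][k]·B[k][j]."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# θ(copy) is the 2×1 matrix [[1],[1]] and θ(add) is the 1×2 matrix
# [[1,1]]. Composing copy ; add sums the paths through the middle
# boundary, recovering the matrix of the numeral 2.
theta_copy = [[1], [1]]
theta_add = [[1, 1]]
print(mat_mul(theta_add, theta_copy))  # → [[2]]
```

Note the order: composing diagrams left to right corresponds to multiplying their matrices right to left.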

It’s time to discuss one of the ways that we can answer the question. Putting the copying before the adding means focussing on what happens when the two structures interact. That story is captured by our four equations.

When we first discussed them in Episode 8, I mentioned that these equations amount to something called a distributive law of PROPs.

We will not go through the details of what this means now, but the upshot is that any diagram D can be **factorised** (a more fancy way of saying “transformed into a form in which we can find a suitable point to put in a separator”) so that D = C ; A, where C is a diagram that uses only the copy and discard generators, and A only the add and zero generators. So the work comes down to showing that the rules (B1)-(B4) actually amount to a distributive law. Fortunately, Steve Lack considers this very example in *Composing PROPs*: it is Example 5.3 in that paper. The deeper story behind all this is actually both extremely interesting and enlightening, but it requires a little bit more category theory. We will come back to it eventually.

Overall, the strength of using this technique is that… we didn’t really have to do very much. For austere, beautiful mathematical reasons, the equations give us a distributive law and guarantee that the factorisations we need always exist. But if you give me a diagram and tell me to find an actual factorisation then I don’t have very much to go on: I know it exists, but the distributive law does not tell me how to find it. Nevertheless, it **is** enough to prove faithfulness, so we should not complain too much.

The second way of showing that every diagram can be factorised relies on a technique called rewriting. The basic idea is that, instead of considering (B1)-(B4) as equations, we orient them and consider them as rules.

Then, the strategy is to find the left hand side of one of the rules in our diagram. This is called a **match** for that rule, and if we can find one, we replace it with the rule’s right hand side. We keep repeating these two steps until there are no more matches. The rules take care of all the possible ways that white structure can come before black structure, so if we cannot find any matches then we have essentially found the factorisation.
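In code, the skeleton of this strategy is just a loop. The Python sketch below uses one-dimensional string rewriting as a stand-in for diagram rewriting (which conveniently hides all the genuinely hard parts of matching): 'a' stands for add, 'c' for copy, and the single rule "ac" → "ca" loosely plays the role of the oriented equations pushing copying before adding. Everything here is a toy of my own devising, not the actual algorithm.

```python
def rewrite(term, rules):
    """Find a match for some rule's left hand side, replace it with
    the right hand side, and repeat until no rule matches. For an
    arbitrary set of rules this loop need not terminate."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:                       # found a match
                term = term.replace(lhs, rhs, 1)  # replace one match
                changed = True
                break
    return term

# Push every copy ('c') before every add ('a'):
print(rewrite("acac", [("ac", "ca")]))  # → "ccaa"
```

The resulting string "ccaa" is “factorised”: all the black structure sits to the left of all the white structure.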

Sounds simple enough, but there are a few complications. Let’s go through an example to get a feel for what’s going on. Our starting diagram is drawn below, and already we have found a match for one of the rules; it’s the area highlighted within the red rectangle.

So we replace the match with the right hand side of the rule, obtaining the diagram below.

The red box in the second diagram is highlighting another match, which we can then replace with a right hand side of the relevant rule. Let’s keep repeating this procedure to see where we end up.

Now we seem to be almost finished, but there’s still an add connected to a discard, highlighted in the last diagram above. We do have a rule for that, but the match is complicated by the fact that there is a twist in the middle. So, if we were going to implement this in software, we’d have to make sure that our algorithm for finding matches takes these kinds of situations into account. And after this final step the diagram becomes

where there are no more bad connections. We can now simply use diagrammatic reasoning to pull all of the black structure to the left and the white structure to the right, obtaining a factorisation.

Another, more radical, way to deal with the issue of matches is to go back to the drawing board and change the way we think about diagrams. We have been thinking of diagrams as magic Lego, built up of basic tiles that connect to other basic tiles. But we could start thinking of them as graphs with white nodes and black nodes. Then, the rewriting we have been discussing becomes a kind of graph rewriting.

This is the approach taken by Aleks Kissinger at Oxford, who has developed the Quantomatic software package for working with string diagrams. Aleks and his collaborators have developed a whole load of really cool new technology, and I hope to convince him to eventually write a few articles on this blog, explaining some of the latest developments!

There is one more issue to consider when dealing with rewriting: **termination**. Let’s look at the first step of our rewriting example again. In the first diagram, there is only one match. But by applying the rewrite rule, we ended up with a diagram containing three different matches. It seems that we’ve cut off the head of the Hydra only for three heads to pop up again. If this kind of thing continues indefinitely, then no matter how much we rewrite, we will never finish.

The thing about rewriting systems in general is that:

- they may look simple and intuitive
- but it is often surprisingly tough to prove that they terminate.

Fortunately, the rewriting system above **does** terminate. We will prove it eventually on this blog, but you’re welcome to try it yourself; it’s not so trivial!
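To get a feel for what a termination argument can look like, here is a toy warm-up in Python: strings over 'a' (add) and 'c' (copy), rewritten with the single rule "ac" → "ca". Counting the pairs in which an add occurs anywhere before a copy gives a measure that strictly decreases with every rewrite step, so this toy system must terminate. The real system (B1)-(B4) is much tougher, not least because some of its rules make the diagram grow; the encoding and names below are purely illustrative.

```python
def inversions(term):
    """Count the pairs in which an 'a' (add) occurs anywhere before a
    'c' (copy). Each application of "ac" → "ca" removes exactly one
    such pair and creates none, so rewriting must eventually stop."""
    total, adds_seen = 0, 0
    for ch in term:
        if ch == 'a':
            adds_seen += 1
        elif ch == 'c':
            total += adds_seen
    return total

print(inversions("acac"))  # → 3
print(inversions("caac"))  # → 2: one rewrite step, one fewer inversion
```

A natural number that goes strictly down at every step cannot decrease forever, which is the whole trick; the hard part in general is finding such a measure.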

There are several people studying rewriting theory on diagrams like ours. For example, Samuel Mimram at the École Polytechnique has recently been doing some very exciting work on techniques that simplify the process of proving properties like termination.

Summing up: if we use the distributive law then we don’t have to work very hard but we don’t get a factorisation algorithm. If we use rewriting then we have an algorithm but we have to work quite hard to prove that it works. Such is life.

This episode concludes the work of showing that θ is an isomorphism. We have spent quite a bit of time discussing the mathematics **of** diagrams. This was all very useful because, after all the hard work, we now have a much firmer handle on how to work with them. But, as interesting as it is, the mathematics of diagrams is not really the main point of this blog, which is doing mathematics **with** diagrams. This is the subject of the next episode.

Continue reading with Episode 17 – Maths with Diagrams