23. Frobenius Snakes and Spiders


Graphical linear algebra is about the beautifully symmetric relationship between the operations of adding and copying numbers. The last episode was all about adding, the white structure of our diagrammatic language, using the new relational intuitions. Since copying, the black structure, is the yin to adding’s yang, in order to attain karmic balance we start this episode by shifting our focus to black diagrams. It turns out that the story is eerily similar: it’s the bizarro world all over again.

So, let’s take stock of what we know about copying. Here’s the generator that we already know quite well:

copy

With the relational interpretation, it represents the relation Num ⇸ Num×Num with elements

copyrelel

where x is any number. All this is saying, in the language of relations, is that the behaviour of the copying generator ensures that the three values on the dangling wires are all the same.

The relational interpretation of discard, copy’s sidekick

discard

is a relation of type Num ⇸ {}, but unlike zero from the last episode, it is not a singleton relation. Instead, it contains all elements of the form

( x, ★ )

where x is any number. This just means that discard, unlike backwards zero that “accepts” just 0, is not discriminating in the slightest.

From Episode 7, we know that copy and discard satisfy the commutative comonoid equations:

copying

And of course, these still make sense with the new, relational interpretation.


In the last episode we introduced the mirror image of the adding generator. This was backed by the relational interpretation; by thinking in terms of relations we could make sense of what it means to “add backwards”.

We can now do a similar thing with the copy and discard generators. We get a brand new generator, a backwards copy, or “copy-op”:

copyop

which, although not a function from left to right, is a relation of type Num×Num ⇸ Num. This relation is, as expected, the opposite of the one for ordinary copy:

copyoprel

The second new generator is the backwards discard

discardop

with relation { (★, x) | x ∈ Num }.
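Continuing the toy sketch from above (again, my own illustration rather than anything official), mirroring a generator relationally is simply taking the converse relation: swap the two sides of every pair.

```python
# Mirroring a generator relationally just means taking the converse relation.
Num = range(-3, 4)

copy = {(x, (x, x)) for x in Num}
discard = {(x, ()) for x in Num}

def converse(rel):
    """The opposite relation: swap the left and right side of every pair."""
    return {(b, a) for (a, b) in rel}

copy_op = converse(copy)        # Num x Num -/-> Num, elements ((x, x), x)
discard_op = converse(discard)  # {} -/-> Num, elements ((), x)

print(((2, 2), 2) in copy_op)   # True: equal values in, their common value out
print(((2, 3), 2) in copy_op)   # False: copy-op only "accepts" equal pairs
print(((), 2) in discard_op)    # True: any number may appear on the right
```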

Not surprisingly, the mirror images of copy and discard satisfy the mirror versions of the equations for copy and discard.

copyingop


Now that we’ve seen both reverse adding and reverse copying, it’s natural to ask if anything interesting happens if these two structures meet. Back in Episode 8 we went through what happens when the ordinary versions of adding and copying meet.

So imagine you are standing in front of someone who is looking in the mirror. Do you get any more information from looking at the mirror than directly at the person? Typically not: the information is the same, just reversed. This is also the case for mirror add and mirror copy: they interact in the same ways, but backwards. Here’s a rundown of the equations; they are just the mirror images of the ones we discussed in Episode 8.

addingcopyingop


As in the case of adding, the interesting stuff happens when copying meets its mirror image. If you’ve just read the previous episode, the equation below will fill you with a sense of déjà-vu.

bfrob

Yes, it’s none other than a black version of the famous Frobenius equation.

In the last episode, to see that the white version of (BFrob) made sense, we had to do a few, admittedly simple, calculations. Here, the job is even easier: the copy generator forces the same value onto all of its wires. So, tagging the wires with the numbers that flow on them, we see that the behaviour of the left hand side of (BFrob) ensures that all of the wires carry the same value.

froblvars

The story is similar for the right hand side of (BFrob).

frobrvars

In both the left and the right hand side of (BFrob), it’s thus quite simple to compute the underlying relation: its elements are exactly the pairs that look like this

frobrel

where x is any number. The relations of the left and the right hand sides agree, so (BFrob) makes sense for copying.
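For the sceptics, here is a brute-force check of (BFrob) in the same toy spirit. It is only a sketch under my own conventions: a diagram from m to n becomes a set of pairs (m-tuple, n-tuple), so that diagrams can be stacked side by side as well as composed.

```python
# Brute-force check of the black Frobenius equation on a finite toy Num.
# Convention: a diagram from m to n is a set of pairs (m-tuple, n-tuple).
Num = range(-3, 4)

copy = {((x,), (x, x)) for x in Num}      # 1 wire in, 2 wires out
copy_op = {((x, x), (x,)) for x in Num}   # 2 wires in, 1 wire out
wire = {((x,), (x,)) for x in Num}        # the identity wire

def compose(r, s):
    """Sequential composition: connect r's right wires to s's left wires."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def stack(r, s):
    """Parallel composition: draw r above s, concatenating the wire tuples."""
    return {(a1 + a2, b1 + b2) for (a1, b1) in r for (a2, b2) in s}

zig1 = compose(stack(copy, wire), stack(wire, copy_op))   # one way of bending
zig2 = compose(stack(wire, copy), stack(copy_op, wire))   # the mirror bending
zag = compose(copy_op, copy)                              # copy-op followed by copy

print(zig1 == zig2 == zag)                                # True: all shapes agree
print(zig1 == {((x, x), (x, x)) for x in Num})            # exactly the relation above
```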

So what other equations hold? Here’s another one that we’ve seen a few times by now: the special equation.

special

Again, just by looking at the thing, it’s pretty obvious that it works. On the left hand side of (BSpecial), the copy and its mirror image make sure that all the wires carry exactly the same value, meaning that the only behaviour exhibited is the one where the values on the dangling wires are the same. This is exactly the behaviour of a single wire, the right hand side of (BSpecial).

Finally, we also have the bone, in black.

bbone
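The same brute-force approach confirms (BSpecial) and the black bone; again, just a sketch using the toy conventions from the (BFrob) check above.

```python
# Checking (BSpecial) and the black bone on a finite toy Num, with the same
# convention: a diagram from m to n is a set of pairs (m-tuple, n-tuple).
Num = range(-3, 4)

copy = {((x,), (x, x)) for x in Num}
copy_op = {((x, x), (x,)) for x in Num}
discard = {((x,), ()) for x in Num}
discard_op = {((), (x,)) for x in Num}
wire = {((x,), (x,)) for x in Num}
empty = {((), ())}                     # the empty diagram: no wires at all

def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

print(compose(copy, copy_op) == wire)          # (BSpecial): copy then copy-op is a wire
print(compose(discard_op, discard) == empty)   # the black bone vanishes
```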

Summing up, we have identified three equations, just as we did in the last episode.

copyingcopyingop


Since we’ve now seen the Frobenius equation in two different contexts, it’s worthwhile to step back and see what interesting properties hold in Frobenius monoids, the algebraic structure that we mentioned briefly in the last episode. Stepping back and thinking more generally will help us get a better feeling for what’s going on under the hood. For the purposes of this discussion, we will switch to anonymous gray mode, remembering that the gray can be replaced either by white or black.

Here are the equations of Frobenius monoids again.

frobeniusmonoid

There are two interesting diagrams that can be built using the generators in any Frobenius monoid. First, we have a diagram with two dangling wires on the left and none on the right:

epsilon

and second, its mirror image, with two wires on the right and none on the left:

eta

Now, we can plug these two constructions into each other to form two different kinds of “snakes”:

snakes

These two are involved in the snake lemma: both snakes are actually equal to the identity wire. So, one intuitive way to think of these two gadgets is that each is a bent wire. When we compose such bent wires, we can stretch them back out to get an ordinary straight wire.

snake

Here’s a proof of the second equality, which uses the Frobenius equation. The proof for the other equality is similar.

snakeproof

Because of the snake lemma, it turns out that any PROP with a Frobenius monoid structure is an instance of something called a compact closed category. Compact closed categories are all about bending things; we will say more about them in future episodes.
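Here is the snake lemma checked by brute force for the black Frobenius structure, using the same toy conventions as before (my own sketch, not part of the original development).

```python
# Brute-force check of the snake lemma for the black (copying) Frobenius monoid.
Num = range(-3, 4)

copy = {((x,), (x, x)) for x in Num}
copy_op = {((x, x), (x,)) for x in Num}
discard = {((x,), ()) for x in Num}
discard_op = {((), (x,)) for x in Num}
wire = {((x,), (x,)) for x in Num}

def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def stack(r, s):
    return {(a1 + a2, b1 + b2) for (a1, b1) in r for (a2, b2) in s}

cap = compose(copy_op, discard)   # two dangling wires on the left, none on the right
cup = compose(discard_op, copy)   # none on the left, two on the right

snake1 = compose(stack(cup, wire), stack(wire, cap))
snake2 = compose(stack(wire, cup), stack(cap, wire))

print(snake1 == wire)   # True: the first snake straightens out to the identity
print(snake2 == wire)   # True: so does its mirror image
```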


Maths is full of pretentious-sounding “fundamental theorems”, although perhaps this kind of language has become somewhat passé in recent decades. There’s the fundamental theorem of calculus, fundamental theorem of algebra, and a fundamental theorem of x for several other choices of x. The adjective fundamental is not really a precise classification; there is no definitive, widely accepted test of fundamentality. Rather, it’s more of a sociological endorsement that identifies a particular result because it holds some kind of spiritual, quintessential centrality for the field in question. A feeling that the point of view expressed by the theorem is a nice representation, a microcosm, of the subject as a whole.

Similarly, some people call the Euler identity e^iπ + 1 = 0 the “most beautiful equation” because it links, in one statement, some of the most important 19th century mathematical concepts, e, i and π, with timeless favourites 1 and 0. The 1 and the 0 may seem to be a bit contrived, and -1 seems to be getting short shrift: why not e^iπ = -1? Not that I want to pick a fight with Euler.


It’s time for two particularly useful equations. In the spirit of pretentiousness and Platonic pompousness, we could call these the fundamental equations of graphical linear algebra, or the most beautiful equations, since they involve all of the generators that we have considered so far. But let’s not.

Here are the equations, which are mirror images of each other. Both feature the kind of “bent wires” that we were talking about before.

cc1

cc2

Let’s focus on the first of these equations and calculate the relations. In the left hand side, we are composing the relation

addrel

with the singleton relation

zerorel

The operation of composition can be thought of as imposing a condition on the result of x+y.  Indeed, the composed relation consists of those elements

composed

where x+y=0. But to say that x+y=0 is to say that y=-x. In other words, the relation can be described as consisting of elements

final

where x is any number. But this is clearly the relation represented by the right hand side of the equation, since the relational interpretation of the antipode is the relation with type Num ⇸ Num that consists of pairs (x,-x) for all numbers x.

The justification for the second equation is symmetric.
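And here is the same calculation done by brute force in the toy relational model (a sketch under my own conventions, with a finite stand-in for Num).

```python
# Brute-force version of the calculation above: add, then bend into the
# backwards zero; the surviving behaviours force y to be -x.
Num = range(-3, 4)

add = {((x, y), (x + y,)) for x in Num for y in Num}   # Num x Num -/-> Num
zero_op = {((0,), ())}                                  # backwards zero accepts only 0

def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

lhs = compose(add, zero_op)
antipode_like = {((x, -x), ()) for x in Num}            # pairs of the form (x, -x)

print(lhs == antipode_like)   # True: x + y = 0 exactly when y = -x
```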


We have seen the snake lemma; now it’s time for the spider theorem. It applies in special Frobenius monoids — those Frobenius monoids in which we also have the special equation. This, as we have seen, covers both the copying and the adding in graphical linear algebra. Here are the relevant equations.

specialfrobenius

In a very precise sense that we will eventually make clear, this theory, let’s call it F, is dual to the theory we have been calling B, the theory of bimonoids. Remember that B consists of those equations that describe the interactions between adding and copying. Here’s a reminder of the equations, which by now we know quite well.

bimonoid

Back in Episode 16, when we were proving that a diagram in B is really the same thing as a matrix of natural numbers, we argued that diagrams in B can be factorised so that all the comonoid structure comes first, then the monoid structure: doing this made our diagrams look a lot like matrices. In F, the theory of special Frobenius monoids, the opposite thing is true: any diagram can be factorised so that all the monoid structure comes first, followed by all the comonoid stuff.

What this means is that any connected diagram from m to n is equal to the one in the picture below, where we use our usual syntactic sugar that collapses repeated multiplications and comultiplications:

spider1

This suggests an even better syntactic sugar for connected diagrams in special Frobenius monoids: a spider!

spider2

So—long story short—any diagram in F can be seen as a collection of jumbled up spiders: this result has become known as the spider theorem. Here’s an example with three spiders.

spidersex

Notice that if we have the bone equation around, we can get rid of the unfortunate legless spider. One more thing: when two spiders connect, they fuse into one spider. This is a really nice way of thinking about diagrams in F.
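To see what fusion means concretely, here is a tiny sketch (mine, not from the post) for the black structure over the finite toy Num: the spider with m legs on the left and n on the right is the relation insisting that all m + n wires carry one and the same value, and composing two spiders along their shared wires yields one bigger spider.

```python
# Spider fusion for the black structure on a finite toy Num: the spider with
# m legs on the left and n on the right relates m copies of x to n copies of x.
Num = range(-3, 4)

def spider(m, n):
    return {((x,) * m, (x,) * n) for x in Num}

def compose(r, s):
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

# Connecting two spiders along all their shared wires fuses them into one.
print(compose(spider(2, 1), spider(1, 3)) == spider(2, 3))   # True
print(compose(spider(3, 2), spider(2, 2)) == spider(3, 2))   # True
```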

The spider theorem was proved by Steve Lack, in the paper Composing PROPs that we have already mentioned on more than one occasion. But later, it took on a life of its own, due to the usefulness of diagrammatic reasoning in quantum computing. The person probably most responsible for this is Bob Coecke; this paper with Eric Paquette is a good example of spiders in action.


It’s again been a pretty hectic few weeks for me.

Two weeks ago I was in Lyon for Fabio’s PhD defence. It’s now possible to download his thesis, which received high praise from the very impressive thesis committee. The thesis is currently the most thorough account of graphical linear algebra and its applications, so do take a look! After a well earned doctorate, Fabio has now moved to Nijmegen in the Netherlands to take up a postdoc.

Last week I was deep in paper writing mode, trying to meet the FoSSaCS deadline. If you remember Episode 2, this for me means pretty hard work. I get a bit too obsessive when writing, and the process takes up all of my time and energy: fortunately, the closer to the deadline, the more the adrenaline kicks in to keep you going. Deadlines are great at forcing you to get something done, but afterwards you end up feeling somewhat exhausted. So I tried writing for the blog this last weekend and it ended up feeling a bit too much like work, which is really not the point.

The paper itself, which has just been put up on the arXiv, is something that I’m really excited about. If you flick through it, I’m sure that you will recognise many of the equations! My coauthors are Brendan Fong, a brilliant PhD student of John Baez, whom we met in Episode 10, and Paolo Rapisarda, who is a control theorist in Southampton and an ex-PhD student of Jan Willems, whom we met in Episode 20. Academia is a small world. Anyway, it was a really fantastic collaboration; I very much enjoyed doing the research and I learned a lot from both Brendan and Paolo.

The paper is about the class of linear time-invariant dynamical systems, a foundational playground of control theory, where basic concepts that gave birth to the subject, such as controllability and observability, show up. In the paper, we give an equational characterisation—which means, for example, that questions about controllability can be reduced to looking at the shape of the diagrams that represent dynamical systems.

Next week I’m off to Cali, Colombia to talk about graphical linear algebra and its applications at the ICTAC conference. Then, hopefully, the noise will die down a little bit so that I can spend a bit more time on the blog.

Continue reading with Episode 24 – Bringing it all together.


20. Causality, Feedback and Relations

This is an important episode in the story of graphical linear algebra. We return to our intuitions, the ones we started with all the way back in Episode 3. It’s high time we replace them with something much more useful.


 

In our diagrams so far, we have been saying that numbers travel on the wires from left to right. By the way, all along, we have been rather vague about what these numbers are exactly — and as we expanded our set of generators and their associated equations, we have been getting more specific. So far, we know at least that we have a zero (which is part of the addition structure) and negatives (the antipode).

Let’s call the type of these numbers, whatever they are, Num. Then the addition generator defines a function of type

Num × Num  →  Num

add

since there are two arguments x and y (the numbers that arrive on the input wires on the left) and one result x+y (the number that exits on the output wire on the right). Similarly, we could write down the types for all of the other generators. And in our algebra of diagrams, the composition operation always connects outputs to inputs.
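In code, this functional reading is nothing more than the following kind of sketch (my own illustration, with ordinary integers standing in for Num): each generator is a function from its left wires to its right wires, and composing diagrams means plugging results into arguments.

```python
# The purely functional reading: the adding generator as a function
# Num x Num -> Num, with diagram composition plugging outputs into inputs.
def add(x, y):
    return x + y   # two arguments arrive on the left, one result exits on the right

def zero():
    return 0       # no arguments, one result

# e.g. the diagram that produces a zero and then adds it to whatever arrives:
def zero_then_add(x):
    return add(zero(), x)

print(zero_then_add(7))   # 7
```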

Every diagram thus defines a function that takes a certain number (possibly zero) of inputs to a number (possibly zero) of outputs. In other words, every diagram that we drew conforms to the following idealised—and pretty cliché—idea of a computational system. The picture below could appear in a million papers and talks in computer science and related fields.

system

We could even go so far as to say that the inputs we provide are causing the outputs. Let’s not go down that road; it causes a lot of problems. Let me explain why.

(A warning: the rest of this episode is about 90% rant; if you just want the content, feel free to skip to the last section.)


 

Causality is an enchanting, seductive idea. Imagine: we interact with the world by giving it inputs, and the world responds by returning nice, easily predictable outputs. Life would just be so easy and pleasant if everything around us behaved as functions in the mathematical sense: for every input x there would be an output f(x). And, the cherry on top, imagine if f had a closed form. A nice little theory of everything.

Unfortunately, reality is somewhat more brutal. In 1912, the year the Titanic sank, Bertrand Russell published a very nice philosophical paper entitled On the notion of cause. It is full of delightful quotes, and here is one of my favourites:

The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving like the monarchy, only because it is erroneously supposed to do no harm.

Clearly, Russell was a republican. But his paper is mainly a scathing attack on the facile idea of causality: viewing the various components of the physical world as cause-and-effect systems.

Of course, it is extremely useful to engineer systems that behave in a cause-and-effect manner: our carefully constructed civilisation relies on it. If you buy a car, you would not be very happy if nothing happened as you pressed the accelerator pedal. Similarly, a lamp should do something when you flick the switch. Clearly we are causing the car to go and the light to go on by interacting with those systems appropriately.

The problem arises when we try to use our fuzzy “common-sense” human intuitions of causality to understand the physical, non-manufactured world around us, or even the sub-components of engineered systems. Causal preconceptions affect the way we approach reality: to get the outputs we want, all we need to do is find the right inputs. When a person with this world view sees something happening, they ask why, what caused it? This question may seem quite reasonable at first.

Quoting Feynman:

Aunt Minnie is in a hospital. Why?

Because she went out, she slipped on the ice and broke her hip. That satisfies people.

If you haven’t seen it, go ahead and watch the video; it’s very entertaining. Feynman is addressing the difficulty of answering a question like “Why do magnets repel each other?” One of the messages of the video is that it is difficult to apply “common-sense” to understand the physical world. In “common-sense” the questions “why?” and “what caused it?” are closely linked. And the idea of something causing something else, away from our carefully engineered human civilisation, becomes more flimsy the more one thinks about it.

But maybe this world view is harmless, like the monarchy? They (the monarchy) do bring in loads of tourists, after all. So maybe causal thinking gets us to ask the right questions, e.g. Newton and the Apple incident. What caused the apple to fall?

Not quite. Causality encourages sloppy thinking that is far from harmless: it is killing people, while costing billions in the process. If this sounds like an outrageous hyperbole, a cautionary tale is recounted in Jonah Lehrer’s article Trials and errors: Why science is failing us. Check it out, despite the clickbait title it’s worth a read. Full disclosure: Lehrer is an interesting guy. He seems to have been transported to life from a stock character in some Shakespearean play: first a stratospheric rise from science blogger to best-selling author with a cushy staff writer gig for the New Yorker. Then an equally steep fall from grace—apparently he faked some Bob Dylan quotes for one of his books—so spectacular that it merited two chapters in Jon Ronson’s So you’ve been publicly shamed.  But let’s not throw out the baby with the bath water.

Lehrer writes that in the early 2000s, Pfizer, the multi-billion dollar pharmaceutical company, pumped a lot of money into the development of a drug called torcetrapib, which tweaked the cholesterol pathway—one of the most studied and seemingly well-understood systems in the human body—to increase the concentration of HDL (“good cholesterol”) at the expense of LDL (“bad cholesterol”). Here’s a diagram.

cholesterol

It actually did exactly that: it increased HDL and decreased LDL. But there seems to be a deeper problem concerning the diagram above: the simplification involved in considering some chemical as a “good input” and another as a “bad input”. Long story short, quoting Lehrer:

Although the compound was supposed to prevent heart disease, it was actually triggering higher rates of chest pain and heart failure and a 60 per cent increase in overall mortality. The drug appeared to be killing people. That week, Pfizer’s value plummeted by $21 billion (£14 billion)

It’s pleasant and probably quite lucrative—research grant wise—to call HDL “good” and LDL “bad”. It seems likely, however, that they are not inputs in any useful sense; they are components of a complex system, the cholesterol pathway, which is part of a larger, more complex system, the human body.

Causal thinking and complex physical systems don’t mix very well. Still, the causality meme is as strong as ever, more than 100 years after Russell’s paper. A wonder drug (input) that cures cancer (output) is just around the corner. But you need to donate now.

So what is it about complex systems that makes them so difficult for humans to understand? Why is it that HDL or LDL might not be inputs in any meaningful sense? And what does all of this have to do with graphical linear algebra?


 

One of the most famous, the most studied, and yet the most mysterious features of complex systems is feedback: the somewhat circular idea that an output of a system can be plugged back in as an input. So, suppose that we have a nice, easy causal system with two inputs and three outputs, as follows.

system2

Now, what happens if I plug the first output in as the first input? I get the following system which now seems to have only one input and two outputs.

feedback

The idea of recursion, which we have seen in many of our syntactic sugars, is related and similarly circular: we used the sugar within its definition. It maybe made your head spin at first, but as long as one is careful, recursion is extremely useful. Similarly with feedback; and nature has figured this out. Many physical systems have feedback loops of some kind. Cholesterol—surprise!—is regulated by a feedback process.

Look again at the diagram above: three out of the four wires seem to be straightforward; they are either inputs or outputs. But what is the status of the feedback wire: is it an input or an output? What about the data that passes along it, numbers, bits, cholesterol, force, charge, whatever? Is that data an input or an output? This is one place where the input/output causal analysis begins to fall apart.


 

So how did we create our causal civilisation, with cars, trains and lamps, from a non-causal physical world? Of course, this is where engineering comes in, and at the science end of engineering it is the topic of an entire field of study, control theory.

Control theory has undergone a little anti-causality revolution in the last thirty years or so, with the emergence of a subfield called the behavioural approach. Its visionary creator, Jan C. Willems, wrote a bunch of fantastic papers on the subject. A good one to start with is The Behavioral Approach to Open and Interconnected Systems. Willems is immensely quotable, and here’s one quote from that paper:

It is remarkable that the idea of viewing a system in terms of inputs and outputs, in terms of cause and effect, kept its central place in systems and control theory throughout the 20th century. Input/output thinking is not an appropriate starting point in a field that has modeling of physical systems as one of its main concerns.

I would go so far as to claim that graphical linear algebra is the behavioural approach applied to linear algebra. We will discuss what I mean by this, as well as some of the contributions of Willems and the behavioural approach, in future posts on the blog. The connections with control theory will become especially apparent when we use graphical linear algebra to explain the behaviour of signal flow graphs, a formalism used in the study of time-invariant linear dynamical systems.

I just want to address one last point before we get to the content. Why does a guy whose job it is to understand input/output systems rail against inputs, outputs and causality? Isn’t this a little bit like a car manufacturer advertising the merits of bicycles?

Not quite: and the hint is in the quote. What Willems doesn’t like is taking inputs and outputs as the starting point. In more mathematical slang: Willems doesn’t want inputs and outputs as definitions, as assumptions that we make about physical systems, because they are a dangerous fiction—the real world simply doesn’t pander to our sociological, evolutionary hang-ups. Inputs and outputs are more like theorems: they exist because we carefully craft systems so that some variables behave as we would expect inputs or outputs to behave.


 

Let’s go back to our intuitions, with a view to getting rid of our functional hang-ups.

It’s actually pretty simple: instead of thinking of our addition generator as a function of type  Num × Num  →  Num we will view it as a relation of type Num × Num  ⇸  Num.

add

If you have not seen relations before, I will briefly explain the basics in the next episode. And we will start to see what the change in perspective allows us to do. For now, just a brief taste.

By moving from functions to relations we are no longer thinking of the gadget above as taking two inputs to an output; we are thinking of it as determining a kind of contract: the only behaviours allowed by the addition generator are situations where numbers x and y appear on the left and x + y appears on the right.

This means that we loosen the causal associations: we can no longer simply say that the two numbers on the left are inputs and the number on the right is an output.

add1

In fact, for the addition generator, if I know the values on any two wires I can figure out the value on the third. For example, if the value on the first left wire is 2 and the value on the right wire is 1 then I can figure out from the contract that the value on the second left wire is -1, since 2 + -1=1. So we can quite reasonably also consider the first left wire and the right wire as inputs and the second left wire as an output.

add2
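Here is a tiny sketch of this contract reading (my own toy code, not from the post): the generator is just a predicate on the three wire values, and fixing any two of them determines the third.

```python
# The "contract" reading of the adding generator: a behaviour is any
# assignment of numbers to the three wires with left1 + left2 == right.
def admissible(left1, left2, right):
    return left1 + left2 == right

print(admissible(2, -1, 1))   # True: the behaviour from the example above

# Fix the first left wire and the right wire; the contract pins down the rest,
# so the "second input" is determined just as an output would be.
left1, right = 2, 1
left2 = right - left1
print(left2, admissible(left1, left2, right))   # -1 True
```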

But instead of tying ourselves up in knots, it is better to just stop using the words inputs and outputs for now. We will come back to them only when they become useful.


 

I gave a tutorial on graphical linear algebra at QPL ’15 two weeks ago. I just about managed to finish the slides on time! If you want a preview of what’s coming up on the blog you can take a look here. The videos of my talks will be released on the Oxford Quantum Group youtube channel; the ones that are there currently cut off after 1 hour, but this will be fixed soon.

I’m also going on holiday starting next week, so the next episode will likely arrive sometime in early September.

Continue reading with Episode 21 – Functions and relations, diagrammatically