This is an important episode in the story of graphical linear algebra. We return to our intuitions, the ones we started with all the way back in Episode 3. It’s high time we replace them with something much more useful.

In our diagrams so far, we have been saying that numbers travel on the wires from left to right. By the way, we have been rather vague all along about exactly what these numbers are — and as we expanded our set of generators and their associated equations, we have been getting more specific. So far, we know at least that we have a zero (which is part of the addition structure) and negatives (the antipode).

Let’s call the type of these numbers, whatever they are, **Num**. Then the addition generator defines a function of type

**Num** × **Num** → **Num**

since there are two arguments x and y (the numbers that arrive on the input wires on the left) and one result x+y (the number that exits on the output wire on the right). Similarly, we could write down the types for all of the other generators. And in our algebra of diagrams, the composition operation always connects outputs to inputs.
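As a rough sketch of this functional reading — taking **Num** to be ordinary integers for illustration, with function names of my own invention — the generators we have so far might look like this in code:

```python
# A sketch of the functional reading of the generators, with Num
# taken to be Python's int for illustration.

def add(x, y):
    """Addition generator: two inputs on the left, one output on the right."""
    return x + y

def zero():
    """Zero generator: no inputs, one output."""
    return 0

def antipode(x):
    """Antipode: negates the number travelling on the wire."""
    return -x

# Composition connects outputs to inputs; e.g. x + (-y):
def subtract(x, y):
    return add(x, antipode(y))
```

The point is only that, read this way, every diagram composes functions by plugging outputs into inputs.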

Every diagram thus defines a *function* that takes a certain number (possibly zero) of *inputs* to a number (possibly zero) of *outputs*. In other words, every diagram that we drew conforms to the following idealised—and pretty cliché—idea of a computational system. The picture below could appear in a million papers and talks in computer science and related fields.

We could even go so far as to say that the inputs we provide are **causing** the outputs. Let’s not go down that road; it causes a lot of problems. Let me explain why.

(A warning: the rest of this episode is about 90% rant; if you just want the content, feel free to skip to the last section.)

Causality is an enchanting, seductive idea. Imagine: we interact with the world by giving it *inputs*, and the world responds by returning nice, easily predictable *outputs*. Life would just be so easy and pleasant if everything around us behaved as functions in the mathematical sense: for every input x there would be an output f(x). And, the cherry on top, imagine if f had a closed form. A nice little theory of everything.

Unfortunately, reality is somewhat more brutal. In 1912, the year the Titanic sank, Bertrand Russell published a very nice philosophical paper entitled *On the notion of cause*. It is full of delightful quotes, and here is one of my favourites:

The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving like the monarchy, only because it is erroneously supposed to do no harm.

Clearly, Russell was a republican. But his paper is mainly a scathing attack on the facile idea of causality: viewing the various components of the physical world as cause-and-effect systems.

Of course, it is extremely useful to *engineer* systems that behave in a cause-and-effect manner: our carefully constructed civilisation relies on it. If you buy a car, you would not be very happy if nothing happened as you pressed the accelerator pedal. Similarly, a lamp should do something when you flick the switch. Clearly we are *causing* the car to go and the light to go on by interacting with those systems appropriately.

The problem arises when we try to use our fuzzy “common-sense” human intuitions of causality to understand the physical, non-manufactured world around us, or even the sub-components of engineered systems. Causal preconceptions affect the way we approach reality: to get the outputs we want, all we need to do is find the right inputs. When a person with this world view sees something happening, they ask *why*, *what caused it*? This question may seem quite reasonable at first.

Quoting Feynman:

Aunt Minnie is in a hospital. Why?

Because she went out, she slipped on the ice and broke her hip. That satisfies people.

If you haven’t seen it, go ahead and watch the video, it’s very entertaining. Feynman is addressing the difficulty of answering a question like “Why do magnets repel each other?”. One of the messages of the video is that it is difficult to apply “common-sense” to understand the physical world. In “common-sense” the questions “why?” and “what caused it?” are closely linked. And the idea of something **causing** something else, away from our carefully engineered human civilisation, becomes more flimsy the more one thinks about it.

But maybe this world view is harmless, like the monarchy? They (the monarchy) do bring in loads of tourists, after all. So maybe causal thinking gets us to ask the right questions, *e.g*. Newton and the Apple incident. What *caused* the apple to fall?

Not quite. Causality encourages sloppy thinking that is far from harmless: it is killing people, while costing billions in the process. If this sounds like an outrageous hyperbole, a cautionary tale is recounted in Jonah Lehrer’s article *Trials and errors: Why science is failing us*. Check it out; despite the clickbait title it’s worth a read. Full disclosure: Lehrer is an *interesting* guy. He seems to have been transported to life from a stock character in some Shakespearean play: first a stratospheric rise from science blogger to best-selling author with a cushy staff writer gig for the New Yorker. Then an equally steep fall from grace—apparently he faked some Bob Dylan quotes for one of his books—so spectacular that it merited two chapters in Jon Ronson’s *So you’ve been publicly shamed*. But let’s not throw out the baby with the bath water.

Lehrer writes that in the early 2000s, Pfizer, the multi-billion dollar pharmaceutical company, pumped a lot of money into the development of a drug called torcetrapib, which tweaked the cholesterol pathway—one of the most studied and seemingly well-understood systems in the human body—to increase the concentration of HDL (“good cholesterol”) at the expense of LDL (“bad cholesterol”). Here’s a diagram.

It actually did exactly that: it increased HDL and decreased LDL. But there seems to be a deeper problem concerning the diagram above: the simplification involved in considering one chemical a “good input” and another a “bad input”. Long story short, quoting Lehrer:

Although the compound was supposed to prevent heart disease, it was actually triggering higher rates of chest pain and heart failure and a 60 per cent increase in overall mortality. The drug appeared to be killing people. That week, Pfizer’s value plummeted by $21 billion (£14 billion)

It’s pleasant—and probably quite lucrative, research-grant-wise—to call HDL “good” and LDL “bad”. It seems likely, however, that they are **not inputs** in any useful sense; they are components of a complex system, the cholesterol pathway, which is part of a larger, more complex system, the human body.

Causal thinking and complex physical systems don’t mix very well. Still, the causality meme is as strong as ever, more than 100 years after Russell’s paper. A wonder drug (input) that cures cancer (output) is just around the corner. But you need to donate now.

So what is it about complex systems that makes them so difficult for humans to understand? Why is it that HDL or LDL might not be inputs in any meaningful sense? And what does all of this have to do with graphical linear algebra?

One of the most famous, the most studied, and yet the most mysterious features of complex systems is **feedback**: the somewhat circular idea that an output of a system can be plugged back in as an input. So, suppose that we have a nice, easy causal system with two inputs and three outputs, as follows.

Now, what happens if I plug the first output in as the first input? I get the following system which now seems to have only one input and two outputs.
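To get a feel for why closing the loop is subtle, here is a hypothetical sketch (the system f and all names are made up for illustration): a function with two inputs and three outputs, where demanding that the first output equal the first input means the value on the feedback wire must be *solved for* rather than supplied — here by naive fixed-point iteration.

```python
def f(a, b):
    """A made-up linear system with two inputs and three outputs."""
    return (a / 2 + 1, a + b, b - a)

def feedback(b, iterations=50):
    """Close the loop: require the first output to equal the first input.

    The value on the feedback wire is neither given nor simply read off;
    we approximate the solution of  a = a/2 + 1  by iteration.
    """
    a = 0.0                      # initial guess for the feedback wire
    for _ in range(iterations):
        a, q, r = f(a, b)        # feed the first output back in
    return q, r
```

For this particular f the loop converges (the feedback wire settles at 2, so `feedback(3.0)` approaches `(5.0, 1.0)`); for other systems the loop equation may have no solution, or many — exactly the situations where the input/output reading breaks down.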

The idea of recursion, which we have seen in many of our syntactic sugars, is related and similarly circular: we used the sugar *within* its definition. It may have made your head spin at first, but as long as one is careful, recursion is extremely useful. Similarly with feedback; and nature has figured this out. Many physical systems have feedback loops of some kind. Cholesterol—surprise!—is regulated by a feedback process.

Look again at the diagram above: three out of the four wires seem to be straightforward; they are either inputs or outputs. But what is the status of the feedback wire: is it an input or an output? What about the data that passes along it, numbers, bits, cholesterol, force, charge, whatever? Is that data an input or an output? This is one place where the input/output causal analysis begins to fall apart.

So how did we create our causal civilisation, with cars, trains and lamps, from a non-causal physical world? Of course, this is where engineering comes in, and at the science end of engineering it is the topic of an entire field of study, control theory.

Control theory has undergone a little anti-causality revolution in the last thirty years or so, with the emergence of a subfield called **the behavioural approach**. Its visionary creator, Jan C. Willems, wrote a bunch of fantastic papers on the subject. A good one to start with is *The Behavioral Approach to Open and Interconnected Systems*. Willems is immensely quotable, and here’s one quote from that paper:

It is remarkable that the idea of viewing a system in terms of inputs and outputs, in terms of cause and effect, kept its central place in systems and control theory throughout the 20th century. Input/output thinking is not an appropriate starting point in a field that has modeling of physical systems as one of its main concerns.

I would go so far as to claim that graphical linear algebra is the behavioural approach applied to linear algebra. We will discuss what I mean by this, as well as some of the contributions of Willems and the behavioural approach, in future posts on the blog. The connections with control theory will become especially apparent when we use graphical linear algebra to explain the behaviour of signal flow graphs, a formalism used in the study of time-invariant linear dynamical systems.

I just want to address one last point before we get to the content. Why does a guy whose job it is to understand input/output systems rail against inputs, outputs and causality? Isn’t this a little bit like a car manufacturer advertising the merits of bicycles?

Not quite: and the hint is in the quote. What Willems doesn’t like is taking inputs and outputs as the **starting point**. In more mathematical slang: Willems doesn’t want inputs and outputs as **definitions**, as assumptions that we make about physical systems, because they are a dangerous fiction—the real world simply doesn’t pander to our sociological, evolutionary hang-ups. Inputs and outputs are more like **theorems**, they exist because we carefully craft systems so that some variables behave as we would expect inputs or outputs to behave.

Let’s go back to our intuitions, with a view to getting rid of our functional hang-ups.

It’s actually pretty simple: instead of thinking of our addition generator as a function of type **Num** × **Num** → **Num**, we will view it as a relation of type **Num** × **Num** ⇸ **Num**.

If you have not seen relations before, I will briefly explain the basics in the next episode. And we will start to see what the change in perspective allows us to do. For now, just a brief taste.

By moving from functions to relations we are no longer thinking of the gadget above as taking two inputs to an output; we are thinking of it as determining a kind of **contract**: the only behaviours allowed by the addition generator are situations where numbers x and y appear on the left and x + y appears on the right.

This means that we loosen the causal associations: we can no longer simply say that the two numbers on the left are inputs and the number on the right is an output.

In fact, for the addition generator, if I know the values on any two wires I can figure out the value on the third. For example, if the value on the first left wire is 2 and the value on the right wire is 1 then I can figure out from the contract that the value on the second left wire is -1, since 2 + (-1) = 1. So we can quite reasonably also consider the first left wire and the right wire as inputs and the second left wire as an output.
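As a rough sketch of this relational reading — again taking **Num** to be ordinary integers, with made-up names — the contract can be expressed as a predicate on all three wires, from which any one value is deducible from the other two:

```python
def add_contract(x, y, s):
    """The addition generator as a relation: True exactly when the
    behaviour (x and y on the left, s on the right) is allowed."""
    return x + y == s

def solve_for_second_left(x, s):
    """Treat the first left wire and the right wire as known:
    the unique y with add_contract(x, y, s)."""
    return s - x

add_contract(2, -1, 1)        # the episode's example: 2 + (-1) = 1
solve_for_second_left(2, 1)   # recovers -1 from the other two wires
```

No wire here is privileged as an input or an output; the predicate only says which triples of values may occur together.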

But instead of tying ourselves up in knots, it is better to just stop using the words inputs and outputs for now. We will come back to them only when they become useful.

I gave a tutorial on graphical linear algebra at QPL ’15 two weeks ago. I just about managed to finish the slides on time! If you want a preview of what’s coming up on the blog you can take a look here. The videos of my talks will be released on the Oxford Quantum Group YouTube channel; the ones that are there currently cut off after 1 hour, but this will be fixed soon.

I’m also going on holiday starting next week, so the next episode will likely arrive sometime in early September.

Continue reading with Episode 21 – Functions and relations, diagrammatically

I’m absolutely loving these posts but craving much more! What texts could you recommend to a math student just about to begin graduate school?

I know these posts each take considerable time and effort to put together, so please know that people are reading these, enjoying them and getting a lot of value out of them as well!

Also, you might be interested in the text “Causality” by Judea Pearl.


Hi Brandon,

Thanks for the super nice feedback!

It’s difficult to recommend texts without knowing your interests — feel free to email me if you want to discuss further. But here are some recommendations anyway.

For control theory, the book by Polderman and Willems is very nice (http://wwwhome.math.utwente.nl/~poldermanjw/onderwijs/DISC/mathmod/book.pdf) but quite steeped in the language and customs of that community; it took me a while to get the hang of it.

For an intro to categories I can recommend RFC Walters’ “Categories and Computer Science”. Also, Lawvere and Rosebrugh’s “Sets for Mathematics” is nice, but probably more at an undergrad level.

You could also check out this paper by Baez and Stay for motivation: http://math.ucr.edu/home/baez/rosetta.pdf

I haven’t read it myself yet but I’ve been told that it’s really nice.

And thanks for the link to “Causality”: I don’t know this work very well but I plan to take a more serious look in the near future. I’m somewhat skeptical of this programme of “manufacturing causality” though.


Hello again! Still following this series with extreme interest, and discovering one of my new favorite “strands” of mathematics! Absolutely fascinating stuff. I really appreciate the tangent on causality – as someone who works in the Bayesian inference world, I would actually say that oftentimes mathematicians don’t engage with the philosophical/cultural aspects of math as much as they could/should. So motivating the transition away from causal input->output thinking via the linked articles I thought was wonderful. Have you considered eventually collating these posts together into a book/monograph? The blog post format is great too, I just feel like I would pay for such an easily accessible and gentle (but not overly so) introduction to these topics, and would recommend it to friends as well. Just wanted to put that out there in case you were looking for a gauge of how your readers are perceiving these posts!

I also think that Pearl’s “Causality” is extremely interesting, despite my skepticism of the causality notion and how it’s used/abused (especially in the social sciences where I work and where it is currently “in vogue”). I’ve gotten into some really interesting debates recently about statistical matching, and whether or not it can/should be interpreted as something like “a method that allows studies to get much closer to measuring ‘causality’ than more classical data analysis methods can”. A good reference for this is here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.175.5684&rep=rep1&type=pdf. Anyways thanks again for this great series, I’m eagerly looking forward to the upcoming posts!


> I would actually say that oftentimes mathematicians don’t engage with the philosophical/cultural aspects of math as much as they could/should.

I definitely agree with this. And vice-versa it seems that in modern philosophy departments it’s rare to find philosophers who have a good level of modern mathematical training. I was excited a few years ago when there was a Wittgenstein symposium in Southampton, but I was a bit disappointed. It seems that for some philosophers, mathematics and mathematical logic stopped somewhere around the time of Frege.

Having said that, there is some great work that combines modern maths with philosophy. One example is David Corfield’s *Towards a Philosophy of Real Mathematics*.

> Have you considered eventually collating these posts together into a book/monograph?

Yeah, I think so — one thing that I was thinking about eventually was compiling an undergraduate linear algebra text that only uses string diagrams. I plan to tackle some pure linear algebra topics once I finish with the current stream of articles, maybe in about 5 episodes or so.

For now, my main goal is to get enough people interested in this kind of work. There are just so many open questions and directions where research time could be invested!

> I also think that Pearl’s “Causality” is extremely interesting, despite my skepticism of the causality notion and how it’s used/abused

I had an interesting discussion about this on Facebook. I will quote my friend Silvia Crafa who is a brilliant mathematician/philosopher/computer scientist from Padova in Italy (I hope she doesn’t mind!)

> I like the shift from functions to relations very much!! As for causality, I think that by just changing the word into “dependency” many problems immediately become smoother. On the other hand, it is true that causal thinking is not appropriate to MODEL the physical world but is very useful to INTERACT with physical world, indeed causal thinking has probably been selected by evolution.

I agree with her. Looking at Pearl’s Turing award citation, it seems that most of the applications of his work have been in artificial intelligence and related fields; which really makes a lot of sense since we do indeed use a lot of causal thinking in our day to day lives. I will try to free up some time in September to take a more serious look at the book.

Thanks again for your comments!


I find it somewhat ironic that when you take as input the notion that input causes output, it causes the output of a lot of problems. Thanks for pointing out the QPL ’15 videos are up. Do continue to enjoy pleasant holidays. I look forward to reading more from you.

