25. Fractions, diagrammatically

I’ve been looking forward to this moment for some time. Now that we’ve gone through all of the equations of graphical linear algebra, it’s time to have some fun. In this episode we look at how fractions are represented in the graphical notation, derive the usual rules of fraction arithmetic, and go through a very early example of an algorithm that produces fractions: Fibonacci’s procedure for division by composite numbers.

We will concentrate on diagrams with two dangling wires: one on the left and one on the right. In previous episodes we have seen that these diagrams have a tight relationship with various kinds of numbers. For example, in the theory of bimonoids, a sub-theory of graphical linear algebra, they were in 1-1 correspondence with the natural numbers. When we added the antipode, extending the theory, we got the integers.  Since then, we’ve added the mirror image generators and figured out the equations that relate them. So what additional “numbers” do we get in the theory IH of interacting Hopf monoids that we finished assembling in the last episode?


 

We get fractions, the stars of this episode. Let’s start by proving a useful little lemma. Suppose that w ≠ 0. Then the w sugar can jump over any sugar that’s pointing in the opposite direction, like so.

comm

Here’s the proof of (†):

commproof

In the first and final steps we used the assumption w ≠ 0, since the scalar equations that we introduced in the last episode need this condition. The steps in the middle use the fact that multiplication of integers is the same as composition; we went through this in Episode 18.

We immediately get the mirror version of (†), where the nonzero w scalar is now pointing to the right:

commmirror

No proof needed, because of the mirror symmetry in the equational theory that we discussed in the last episode.


 

Any fraction can be written v/w where v and w are integers and w ≠ 0. So let’s extend the syntactic sugar for integers, and define a sugar for fractions as follows:

fractionsugar

At this point we have to be a little bit careful, because a fraction is not merely a pair of integers, one written on top of the other. Fractions represent ratios; for example 1/2 = 2/4, since the ratio of one-part-in-two is the same as two-parts-in-four. But (1,2) and (2,4) are different pairs of numbers, and so the usual, formal textbook definition says that a fraction v/w is the equivalence class [(v,w)] where w ≠ 0. The underlying equivalence relation relates (p,q) and (r,s) when ps = qr, since this condition captures exactly what it means for p/q and r/s to represent the same ratio. For (1,2) and (2,4) we have 1·4 = 2·2, and so 1/2 = 2/4. So, for our definition of fraction sugar to even make sense—for it to be well-defined—we ought to show that

welldefn

Let’s do this now. In addition to ps=qr, we can also assume q,s ≠ 0. All three of these assumptions are used in the following proof.

welldefnproof
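The defining condition is simple enough to check mechanically. Here is a small Python sketch of it (the function name is mine, not from the text):

```python
def same_ratio(p, q, r, s):
    """(p, q) and (r, s) represent the same ratio iff p·s = q·r,
    assuming the denominators q and s are nonzero."""
    assert q != 0 and s != 0, "denominators must be nonzero"
    return p * s == q * r

# 1/2 and 2/4 are the same ratio, 1/2 and 2/3 are not:
assert same_ratio(1, 2, 2, 4)
assert not same_ratio(1, 2, 2, 3)
```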

Our new sugar wouldn’t be much good if we were able to show that two non-equal fractions are the same, using diagrammatic reasoning, i.e. if the translation was not faithful. For this reason, it’s useful to show that the implication that we just proved also goes the other way:

sugarfaithful

Here’s the proof.

sugarfaithfulproof

So we have shown that any fraction can be considered, in a faithful way, as a diagram of graphical linear algebra with one dangling wire on the left and one on the right.

In fact, every diagram of this type—i.e. with one wire dangling from both ends—can be put, using diagrammatic reasoning, either in the form

cospanform

where u,v are integers or in the form

spanform

This follows from some more advanced insights about our equational system, so we will not prove it immediately. You will have to trust me for now.


 

Next up is the algebra of fractions. First a quick reminder of how we used diagrams to talk about the algebra of the natural numbers and the integers. In both cases, addition could be expressed in the graphical language as follows:

additiondef

Of course, this agrees with what we already knew about ordinary addition of integers. One advantage of defining addition in this way is that several of its properties immediately follow by diagrammatic reasoning: e.g. associativity, commutativity and the fact that 0 is the additive identity.

Similarly, multiplication can be defined as composition:

multiplicationdefn

Again, associativity is immediate, but commutativity does not follow for totally generic reasons. One can prove it for the natural numbers by induction, then use this to get commutativity of multiplication for the integers, since the antipode commutes with any natural number sugar.

Finally, for any integer v, we know that its sugar can slide past copying and adding in the following way.

rightway

We proved this for natural numbers by induction in Episode 9, and for integers it follows from the fact that the antipode also slides in this way. These facts can be used to show that multiplication distributes over addition.

Before we delve into the algebra of fractions, we need one more lemma. It says that nonzero scalars can go the “wrong way” past addition and copying.

wrongway

We only need to prove one of the two equations of (‡); the second is bizarro of the first: remember that bizarro is combining the mirror-image symmetry with the invert-colours symmetry and, as we saw in the last episode, both symmetries are present in our equational system.

So let’s assume that w ≠ 0 and prove that w can go the wrong way past copying:

wrongwayproof

In the first step of the proof we used the scalar equations, which require the assumption w ≠ 0. In the second step we used our knowledge about integers going the “right way” over copy.

So what happens when w = 0? Let’s take a look at the diagrams. First, if we have 0 going the wrong way to the left of copy we have

zeroleft

and two zeros on the right gives

zeroright

The two results are different: they cannot be made equal using diagrammatic reasoning in graphical linear algebra. We will make the reasons more precise in the next few episodes; for now, we could go back to our relational intuition and figure out that the two diagrams

twodiags

define two different relations. Zero can’t go over addition the wrong way for similar reasons.


Let’s mimic the algebra of the integers and see what happens when we define:

additionfracdefn

and

multiplicationfracdefn

First, multiplication. Using the definition of our fraction syntactic sugar, the right hand side of the above definition is the following diagram

multstart

and we can use diagrammatic reasoning to put it into the form (★):

multiplication

So r/s · p/q = rp/sq, as we all learned in school. Since we know that multiplication of integers is commutative, this also gives us the commutativity of multiplication for fractions.

Next, let’s do the same thing for addition, transforming the diagram for p/q + r/s into the form (★).

addition

Again, transforming the diagram into (★) using diagrammatic reasoning gives us the formula we all know: p/q + r/s = (sp + qr)/qs . In the fourth step of the proof, we were able to move the qs the wrong way past addition (‡) because we know that both q and s are not zero, so qs is also not zero.

So just by using diagrammatic reasoning, we “discovered” the rules for fraction arithmetic. This is different from the usual treatment: if one defines a fraction to be an equivalence class of pairs (p,q), then multiplication and addition are defined by the formulas we derived. It feels better to derive than to define.
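As a sanity check, the two derived rules can be transcribed directly into code. A minimal Python sketch, treating a fraction as a bare pair of integers (the names and the representation are mine):

```python
def mul(a, b):
    """(r/s)·(p/q) = rp/sq — the multiplication rule derived above."""
    (r, s), (p, q) = a, b
    return (r * p, s * q)

def add(a, b):
    """p/q + r/s = (sp + qr)/qs — the addition rule derived above."""
    (p, q), (r, s) = a, b
    return (s * p + q * r, q * s)

# Fractions as (numerator, denominator) pairs; no reduction is
# performed, matching the equivalence-class view of fractions.
assert mul((2, 3), (3, 4)) == (6, 12)   # 2/3 · 3/4 = 6/12, i.e. 1/2
assert add((1, 2), (1, 3)) == (5, 6)    # 1/2 + 1/3 = 5/6
```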

There are a few more things left to do: e.g. show distributivity of multiplication and the fact that we have additive inverses. We will finish the job in the next episode.


 

This weekend I finally got around to reading Liber Abaci by Leonardo of Pisa, aka Fibonacci, translated into English by Laurence Sigler. I’m six chapters in and I’ve really enjoyed it so far. I already knew roughly what was in it, but that’s a bit like saying that I knew that Romeo and Juliet contained a teen suicide before I had to read it at school. A lot of fun, of course, comes from the actual reading, from getting into Fibonacci’s head. It’s a real shame that classic works of mathematics—and you don’t get much more classic than Liber Abaci—don’t get anywhere near as much attention in our culture as classic works of literature or philosophy. Fibonacci’s raw excitement about calculating with the new Indo-Arabic positional notation does not come across anywhere near as vividly in more modern, prescriptive texts.

As I keep reading, I will pull out the bits that I find the most interesting and talk about them on the blog. We start today with fractions, which get a lot of attention in Liber Abaci. Fibonacci used a different notation to the one we all know. Actually, there are several related notations, but the most common one is the following: what Fibonacci would write

\frac{1\ 5\ \ 7}{5\ 6\ 10}3

we would recognise as 3 + 7/10 + 5/(6·10) + 1/(5·6·10). The notation is thus read from right to left. It’s a bit like a decimal expansion, in the sense that reading more gives you more information about the number. First, we know the number is larger than 3 but smaller than 4. Then, after reading the next bit of the fraction, we know that it’s a bit more than 7/10ths of the way from 3 to 4, but not quite 8/10ths. Of that final 1/10th, we know it’s a little bit more than 5/6ths. And so forth.

After a quick calculation, this is 284/75, or 3 59/75 using the evil mixed fraction notation. Fibonacci’s fractions follow the convention that the denominators never get smaller as one reads them left to right: e.g. in the example above 5 ≤ 6 ≤ 10. Also, by using 10 in the denominators, one can write decimal expansions. For example, 3.14159 can be written \frac{9\ \ 5\ \ 1\ \ 4\ \ 1}{10\ 10\ 10\ 10\ 10}3.
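Reading right to left as just described, a Fibonacci fraction is easy to evaluate mechanically. A small Python sketch (the function name is mine), which reproduces the 284/75 calculation:

```python
from fractions import Fraction

def fib_value(numerators, denominators, whole):
    """Evaluate a Fibonacci fraction, reading right to left: each
    numerator is divided by the product of its own denominator and
    every denominator to its right."""
    total, prod = Fraction(whole), 1
    for n, d in zip(reversed(numerators), reversed(denominators)):
        prod *= d
        total += Fraction(n, prod)
    return total

# \frac{1 5 7}{5 6 10}3 = 3 + 7/10 + 5/(6·10) + 1/(5·6·10) = 284/75
assert fib_value([1, 5, 7], [5, 6, 10], 3) == Fraction(284, 75)
# the decimal-expansion example: 3.14159
assert fib_value([9, 5, 1, 4, 1], [10] * 5, 3) == Fraction(314159, 100000)
```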

Fibonacci’s fraction notation translates neatly into the graphical notation. For instance \frac{1\ 5\ \ 7}{5\ 6\ 10}3, our example above, is:

fibonaccinotation

It’s easy to pass between the two notations: in the diagram, the numerators go from left to right, following successive copying. The denominators are interleaved with adding on the right.

The reason for Fibonacci’s choice of fraction notation could be that it is the output format of his algorithm—in Chapter 5—for performing division by composite (i.e. non-prime) numbers. To prepare, Fibonacci first describes how to divide by prime numbers, and we can recognise this procedure as being equivalent to ordinary long division. Of course, long division works for composite numbers as well, but Fibonacci’s division algorithm means that if the composite has small factors then we can make the procedure simpler: it is usually easier to do long division when you divide by small numbers.

Fibonacci’s writing style is very entertaining: he describes a calculation technique and then solves a number of successively larger problems in full detail; some so large that it’s difficult to think of practical applications, especially since we are talking about medieval Italy. For example, in Chapter 2, he multiplies 12345678 by 87654321 to get 1082152022374638. It’s as if he’s saying: “Ok, so the abacists can probably also do the last example, but look, here’s a much bigger one. Good luck building an abacus big enough!”.

We will go through one of Fibonacci’s small examples from Chapter 5: dividing 749 by 75, illustrating his procedure for dividing by composite numbers with the graphical notation.

749div75

First we factorise 75 into primes—actually Fibonacci is not a stickler about this point and factorises into whatever is most convenient to divide by—and write the factors in increasing order.

749step2

The algorithm works by processing each of the 3, 5, 5 in turn, building up the Fibonacci fraction in the process. We start with the 3 and divide 749 by it, obtaining 249 and 2/3.

749step3

At this point we already have a chunk of the Fibonacci fraction: the 2/3. We can pull it aside for later

part1

and we are left with:

749step4

Next we perform the following calculation:

749step5

First the 5 goes over addition backwards (‡). Then we divide 249 by 5, obtaining 49 and 4/5. Next we use associativity of addition, and pull the 5 back over. At this point we have another chunk of a Fibonacci fraction ready to go, so we can pull it out:

part2

and we are left with

749step6

Now we perform the same steps as in the previous calculation: this is the “loop” in the algorithm. Each time the body of the loop is performed, we get rid of the next number in the decomposition of the denominator. It’s time to process the final 5.

749step7

At this point we do not have any more components of the denominator to consume, so the algorithm terminates. And composing ①, ② and ③ gives us the Fibonacci fraction we were looking for:

result

which the great man himself would write \frac{2\ 4\ 4}{3\ 5\ 5}9, the result of dividing 749 by 75.
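The whole procedure, dividing by each factor in turn and keeping the remainders, fits in a few lines. Here is a Python sketch of the algorithm as described above (the names are mine):

```python
from fractions import Fraction

def fibonacci_divide(n, factors):
    """A sketch of Fibonacci's procedure: divide n by a composite
    number given as a list of factors, consuming one factor per
    loop iteration. Remainders accumulate in the same left-to-right
    order as in Fibonacci's notation."""
    numerators, denominators = [], []
    for f in factors:            # process 3, then 5, then 5
        n, rem = divmod(n, f)    # e.g. 749 = 3·249 + 2
        numerators.append(rem)
        denominators.append(f)
    return n, numerators, denominators

# 749 ÷ 75, with 75 = 3·5·5, gives \frac{2 4 4}{3 5 5}9:
whole, nums, dens = fibonacci_divide(749, [3, 5, 5])
assert (whole, nums, dens) == (9, [2, 4, 4], [3, 5, 5])

# sanity check: 9 + 4/5 + 4/(5·5) + 2/(3·5·5) really is 749/75
value, prod = Fraction(whole), 1
for num, den in zip(reversed(nums), reversed(dens)):
    prod *= den
    value += Fraction(num, prod)
assert value == Fraction(749, 75)
```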


 

Fibonacci’s division algorithm is not so useful for us today, since we have computers to do the division for us, and there’s no pressing need to keep the numbers that we are dividing by small. But it is very interesting as an early example of an algorithm tout court; here Fibonacci was probably influenced by al-Khwarizmi. The presentation, using graphical notation, also gave me an opportunity to show off the compositional nature of the graphical language; we could perform the computation in steps, chopping off parts of the diagram that we were finished with, and gluing everything back together at the end. Compositionality is the strong point of the graphical notation, as we will discover time and again.

Continue reading with Episode 26 – Keep Calm and Divide by Zero.


17. Maths with Diagrams

The discussion over the last few episodes has now led us to the point where we have two different languages to talk about the same thing. The two languages, however, are rather different. One consists of diagrams, the other of matrices populated with natural numbers. If you already knew about matrix algebra, maybe you’re asking yourself the question “What’s the point of all these diagrams then? They don’t tell me anything that I didn’t know before. Is this guy claiming that this diagrammatic language is better in some way?”

The short answer is no. I don’t want anyone to throw away their matrices: there are extremely good reasons to know and care about them. But I have two arguments for why we should also care about diagrams.


 

 

The first argument is that a different language means thinking about the conceptual landscape in a new way. We will see this over and over again in graphical linear algebra, and sometimes it will cause us to reevaluate certain dogmas: ways of doing things that have come about because of the way that the standard notation works, rather than real underlying mathematical issues. These kinds of things are not easy to spot if you only know one notation.

More generally, the idea that the languages we use influence how we think and act—linguistic relativity if you will—is well-known and appreciated by programmers. Most programming languages are Turing complete and so have equivalent power. So why not just learn one, say x86 assembly, and stick with it?

Programmers deal with real problems, which usually do not involve writing functions to compute the Fibonacci sequence or, infamously, inverting a binary tree. Real problems are multi-faceted, and for some parts it may be useful to put on a functional hat, for others an object-oriented hat, and so on. The different ways of thinking that the various programming methodologies bring to the table are appreciated because complex problems benefit from a combination of approaches.

When I was a student in the 90s there were many entertaining flame-wars about which programming language approach was the best. As with most flame-wars, they were pretty pointless: programming has since grown up and nowadays it’s very rare to hear somebody claiming that a particular way of doing things is intrinsically better than another. Good programmers are comfortable thinking in several different languages, and choose the one that is most suitable for the current (sub)problem at hand. Flexibility is key.

Because of this, in the last 10 years or so, the boundaries between different language families have been blurring. For example, most functional programming languages have clever ways to do imperative things, and conversely, more and more functional features are making their way into industrial strength imperative languages. It’s pointless to argue which is the best approach, or the best language: such a thing doesn’t and will never exist. And anyway, it’s fun to see how your favourite problems can be solved with some newfangled language!


 

It’s been really interesting to see how different communities have reacted to this blog. About a month ago somebody posted a link to this blog on Hacker News. That link made it to the front page and resulted in a massive amount of interest, which really surprised me… who knew that so many people cared about linear algebra! Last week I posted a link there myself and again the feedback has been really great. I’ve also been regularly posting links on Reddit’s /r/math, which has also resulted in a lot of really interesting and useful comments, but also quite a bit of skepticism and maybe even a touch of hostility.

Allow me to philosophise for a moment about why this is the case. I think that it’s because many mathematicians are just not used to the idea of having different languages to talk about their concepts. In many mathematical subfields the notation is very conservative and standardised. It has evolved so slowly, so imperceptibly, that mathematical culture almost brings with it an understanding that “standard notation” has been given to us by some higher power and is not to be disturbed. Anyway, the concepts are the important things, so who cares about notation. Many mathematicians simply don’t see the value in exploring different ways of expressing themselves. Programmers, it seems, are much more receptive to the idea.


 

The second argument for why we should care about diagrams is Occam’s razor — the idea that one should trim the fat: discard unnecessary assumptions to try to get at the core of problems. So let’s consider the amount of intellectual technology necessary to develop the concept of

matrices of natural numbers, and their algebra       ①

I need you to forget for one moment that you know all of these things already and that they are totally obvious to you; let’s pretend, for one moment, that we are explaining Earth mathematics to a group of interested Martians. And let’s not be too creative about the Martians. Let’s assume that they are of the Star Trek variety: three dimensional and humanoid enough to interest Dr McCoy in extra-curricular activities, but without the experience of an Earth education. So, to make them understand what ① means, we need to:

  1. Explain the concept of natural numbers, together with addition and multiplication
  2. Show that the natural numbers form what is called a commutative semiring. Let’s go through these requirements:
    1. addition is associative, commutative, and has 0 as its identity; that is, for any natural number n, we have 0 + n = n = n + 0
    2. multiplication is associative, commutative and has 1 as its identity; that is, for any natural number n, we have 1·n = n = n·1
    3. multiplication distributes over addition, that is, for all natural numbers a,b and c, we have a·(b + c) = a·b + a·c (in a non-commutative semiring we would also require (b+c)·a = b·a + c·a, but that follows here from commutativity of multiplication)
  3. Define the concept of an m×n matrix, a grid of numbers. If we are totally honest then we ought to probably also go through the corner cases where any combination of m and n can be 0.
  4. Write down the formula for matrix multiplication. This relies on already knowing about addition and multiplication of natural numbers. The semiring properties are necessary for matrix multiplication to behave as expected (e.g. for it to be associative).
  5. Write down the formula for matrix addition.

This is actually a significant amount of work. And maybe the Martians are a bit curious about where exactly the formula for multiplying matrices came from. They know from previous Earth experience that you can’t really trust those pesky humans and their crazy ideas.
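For comparison, here is roughly what steps 4 and 5 hand the Martians: the two formulas, sketched in Python (the function names are mine):

```python
def mat_mul(A, B):
    """Step 4: the usual formula, multiplying an m×n matrix A by an
    n×p matrix B; associativity of this operation depends on the
    semiring laws from step 2."""
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

def mat_add(A, B):
    """Step 5: entrywise addition of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [0, 1]]
B = [[3], [4]]
assert mat_mul(A, B) == [[11], [4]]
assert mat_add(A, A) == [[2, 4], [0, 2]]
```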

On the other hand, we have already done all of the work of showing how diagrams work in the first few episodes of this blog. If explaining to the Martians, we would need to:

  1. Explain the idea of magic Lego; the algebra of diagrams that consists of the two operations of direct sum and composition;
  2. Go through the rules of diagrammatic reasoning: stretching, tightening, moving generators along wires
  3. Introduce the four generators together with the ten equations.

cheat

Let’s compare the two ways. In the traditional approach, one first defines the natural numbers and then “bootstraps” matrices on top. With diagrams, there is no precedence given to natural numbers as such; everything gets defined at the same time. Natural numbers can be recovered as special kinds of diagrams, those with one dangling wire on each end. We don’t need to invent any extra formulas for multiplying matrices because composition is a basic operation of the algebra of diagrams. Matrix addition is not a primitive operation at all, it can be constructed from the diagrammatic primitives.

Having said all that, this blog so far has followed a rather conventional route: first we talked of natural numbers as diagrams and then of matrices as diagrams. Today, we are going to go back and think about how the maths would have been developed without knowing about natural numbers and matrices from the outset. We will rediscover diagrams by focussing on the operations and constructions that are natural in their language and see where that takes us.


 

To spice things up let’s try some alternate history. This is something that historians really hate; my sister, who is a historian, always gets annoyed when I bring it up. But who doesn’t love to annoy their siblings?

We need to turn the clock back to before matrices were common, before notation settled into what appears in textbooks today. Our story thus takes place in early 13th century Pisa. Our hero is a certain young man, Fibonchi, a contemporary and friend of Fibonacci.

While a teenager, Fibonchi sailed to the Balearic Islands with a Pisan trade fleet. In Palma, he met a wise old Berber merchant who introduced him to four curious operations that spoke of putting things together and copying them, governed by a collection of simple, self-evident rules. The old men of Palma enjoyed drawing the diagrams and even used them to perform some calculations.

On his way back home, Fibonchi realised that the diagrams are what is now known as a recursive data type: every diagram is put together from the four simple bricks, together with the identity and twist wires. Using this insight, he was able to prove a number of interesting things about them, using the principle of induction.

To help in his calculations Fibonchi wrote down the formula for diagrams that copy and add not just one wire, but an arbitrary number of wires. Coincidentally, this was the syntactic sugar we discussed a couple of episodes back.

Next, Fibonchi proved that these sugars satisfied a very pleasing property with respect to arbitrary diagrams. For example, given any diagram D with m dangling wires on the left and n dangling wires on the right, the following is true:

copy

Of course, once Fibonchi had finished this proof, he immediately wrote down his second theorem.

add

He did not even bother writing down a proof, since he was told about the principle of “bizarro” by the old man in Palma. Because all the equations in the system have bizarro (reflected, colour-swapped) versions, once you’ve proved a theorem you’ve also proved its bizarro cousin.

Fibonchi also noticed the following two related facts, the second the bizarro version of the first.

discard

zero

These properties intrigued Fibonchi, and he went ahead and defined an operation on compatible diagrams—those that agreed in their respective numbers of dangling wires on the left and right. He called the operation sum. This operation would be rediscovered in different notation a few centuries later and given the name matrix addition.

sum

He proved that sum was associative

assoc

and commutative:

comm

Moreover,  each compatible family of diagrams contained an identity for this operation:

sumunit

Fibonchi showed that D + 0_{m,n} = D = 0_{m,n} + D.

He also noticed that composition distributes over sum: we have

D ; (E+F)    =    D ; E    +    D ; F

The proof, reproduced below, was a straightforward application of his first theorem.

leftdist

He was, of course, immediately able to state the bizarro version:

(D+E) ; F   =  (D ; F) + (E ; F)
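In the matrix reading of diagrams, composition becomes matrix multiplication and Fibonchi’s sum becomes entrywise addition, so both distributivity laws can be spot-checked numerically. A small Python sketch (names mine, and the order convention for composition is glossed over):

```python
def compose(A, B):
    """Composition of diagrams, read as matrix multiplication."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def sum_diag(A, B):
    """Fibonchi's sum, read as entrywise matrix addition."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

D = [[1, 2, 0], [0, 1, 3]]
E = [[1, 0], [2, 1], [0, 4]]
F = [[0, 1], [1, 1], [2, 0]]
# D ; (E + F) = (D ; E) + (D ; F)
assert compose(D, sum_diag(E, F)) == sum_diag(compose(D, E), compose(D, F))
# (D + D') ; E = (D ; E) + (D' ; E), here with D' = D
assert compose(sum_diag(D, D), E) == sum_diag(compose(D, E), compose(D, E))
```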

Fibonchi knew that composition of diagrams was not commutative; in most examples it didn’t even make sense to compose diagrams in more than one way because the dangling wires did not match up. Non-commutativity was the norm. However, he did notice some peculiar things about diagrams with exactly one dangling wire on each end.

He noted that there was exactly one such diagram for every natural number, which gives the number of paths from the left dangling point to the right one. He understood that if two such diagrams have an equal number of paths then they are equal. His arguments were close to the rewriting argument of the last episode. Another strange thing was that the bizarro of every such diagram was equal to the diagram you started with.

Most curiously of all, composition between such diagrams was commutative, which Fibonchi confirmed with a simple induction. First the base case, where the first diagram has zero paths:

base

and the inductive step:

ind

Inspired by Fibonacci’s masterpiece Liber Abaci, Fibonchi wrote his own treatise entitled Liber Diagrammatis. Unfortunately it is now lost; the last remaining copy fell into the hands of a Livornese fishmonger sometime in the late 16th century, who used it to wrap his wares. The Livornese never much liked the Pisans or what they had to say.

Fortunately, we can continue Fibonchi’s story on this blog.

Continue reading with Episode 18 – Introducing the Antipode.

 

3. Adding (Part 1) and Mr Fibonacci

Probably one of your first experiences with maths was adding. The very first experience was probably learning to count: 1, 2, 3,  and so on. Graphical linear algebra has interesting things to say about counting, but that story will have to wait for another time.

Learning about adding coincides with learning about mathematical formulas such as

3 + 4 = 7     ①

Three plus four equals seven. If you have three apples and I give you four more apples, how many apples do you have?

Let’s talk a bit about the formula ①. You’ve seen equations like this so many times that you are totally comfortable with it and have internalised all of the concepts. You don’t even remember not understanding it, right? Even the people who hate maths learn this stuff before they learn to hate it.

But appearances deceive, and ① is actually quite subtle. Before we go any further, let’s talk about the symbols involved. First, ‘+’ is an operation. The left hand side of equation ①, the stuff to the left of the symbol ‘=’, can be understood as a procedure, a computation. It’s the processing that you did with your fingers when the teacher was talking about apples.

The symbol ‘+’ is the name of the procedure, and it takes two arguments: here the numbers 3 and 4. The first argument is 3, the second argument is 4. The right hand side of ①, 7, is the result of the computation. So we can understand the symbol ‘=’ as saying to us “the result of the computation on the left of me is the thing on the right of me”.

But then maybe the teacher wrote

 7 = 3+4     ②

and confused you: in ② the computation is on the right and the result is on the left! You probably went on to become a professor of Computer Science. You like to keep your inputs on the left and your outputs on the right. Please stay with us, because in graphical linear algebra the relationship between ① and ② is extremely important and we will spend a lot of time discussing it. Technically speaking, this is because ‘=’ defines a relation, and relations will be vital for us.

For now, let’s forget about the advanced topic of ②. We will first concentrate on trying to get our heads around ①. Actually, there are several ways to understand ①. In graphical linear algebra we will understand it in two ways. In this episode, we are going to talk about the first of those two ways.

As you already know, in this blog we try to avoid writing equations like ① and ②. Equations are so last century and now it’s high time for our first diagram. We will draw ‘+’ as a white circle. The two arguments come in from the left, the first through the top arrow, the second through the bottom. The result then comes out on the right.

blog

Because the word “arrow” is boring, generic and overused, we will call the lines “wires”. The word “wire” is quite evocative: data travels on wires all around us. For example, wires are carrying bits of information to and from your wireless router as you read this.

Go ahead and imagine numbers moving on the wires, going in the directions indicated by the arrow-heads. When a 3 arrives on the first argument wire, and a 4 arrives on the second argument wire, a 7 will exit on the result wire.

3plus4

Similarly, if 43 arrives on the first argument wire and 57 arrives on the second argument wire, 100 will exit on the result wire.

fortythreeplusfiftyseven

Easy enough, right?


 

Mathematicians don’t usually talk about the symbols they use. Part of the “fun” of being a mathematics undergraduate is learning a foreign language that no one actually bothers to teach you explicitly. You’re supposed to just assimilate it, as if it were something as natural as the concepts it describes.

To be fair, mathematicians have had hundreds of years to find efficient ways to write about the concepts they love. I mean, as much as we diss formulas in this blog, let’s admit that 43 + 57 = 100 is fairly short and concise. It wasn’t always like this. We have some nice, efficient ways of doing the actual calculation too: remember, you draw the second argument below the first, carefully so that the digits line up from right to left, then you draw a line below, etc. You know the procedure; just don’t forget to carry the 1.

addalg

It wasn’t always like this. Thank Fibonacci. When people hear this name, they usually think of the Fibonacci number sequence 0, 1, 1, 2, 3, 5, 8, …. Don’t worry if you’ve never heard of it: we will eventually explain this famous stream of numbers using graphical linear algebra. For now, I want to try to convince you that this sequence was not Fibonacci’s greatest contribution to civilisation.
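That column-by-column procedure with carrying, described above, is itself a little algorithm. A Python sketch (the function name is mine):

```python
def add_positional(x, y):
    """Grade-school column addition on digit strings: line the digits
    up from the right, add column by column, and carry the 1."""
    x, y = x.zfill(len(y)), y.zfill(len(x))  # pad to equal length
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        carry, d = divmod(int(a) + int(b) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return ''.join(reversed(digits))

assert add_positional('43', '57') == '100'   # don't forget to carry the 1
assert add_positional('999', '1') == '1000'
```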

Some context: Fibonacci came from a wealthy trading family of the 12th century medieval merchant republic of Pisa. Pisa was one of the world’s richest cities back then — if you’ve ever visited Piazza dei Miracoli in Pisa then you’ve seen the splendour first hand. Back then, much of the rest of Europe consisted of wooden shacks.

When Fibonacci was a boy he travelled with his dad to trading posts in North Africa, where he played with the kids of local merchants and learned about the Hindu-Arabic positional number system that we still use today. Europeans, back then, used Roman numerals: I, II, III, IV, V, and so on. As an exercise, let’s try to use the clever technique that we know for adding, but using Roman numerals. It doesn’t work.

roman

Romans were great at so many things, but their number system sucked. I’m sure that even Cicero, a great Roman patriot, would agree given the evidence.

Anyway, Fibonacci realised how great the Hindu-Arabic number system was and wrote a book about it, the Liber Abaci. To Fibonacci, the superiority of a positional number system was obvious: calculation was so much easier, more efficient, less prone to error. It is also obvious to us now, of course.

In Fibonacci’s day, he did manage to convince some people, who became known as the algorists. The conservatives, those who preferred to stick with the status quo, were called abacists. They used the abacus, and converted to and from the Roman number system between calculations. The conservatives won the day: it took a long, long time before Fibonacci’s convictions became more widespread. In fact, the conversion from Roman to Arabic only finished towards the end of the 16th century, 400 (!) years after the publication of Liber Abaci.


 

I’m sure that Fibonacci would have had problems with the Pathways to Impact document. Although “Pathways to Impact” sounds like a Hollywood disaster movie, it is actually a document that all UK scientists have to include with their grant applications. You have to write on two A4 pages how your research will impact society and academia, but mainly how it will impact the economy.

For fun, let’s suppose that Fibonacci had access to a time capsule and saw how his ideas would transform the Western world. Let’s suppose even that his grant application made it to the interview round, held in a beautiful palazzo on the Piazza dei Cavalieri.

“Mr Fibonacci, we asked you very clearly to provide us an Impact Statement about how your so-called ‘Arabic numerals’ will impact the Pisan economy in the 5-10 year period. All of our greatest merchants use the abacus. I don’t see how this technology will bring them any competitive advantage. In fact, all of this ‘positional number system’ nonsense sounds a bit abstract and useless to us. I mean, you expect us to wait 400 years to make a profit on this? Are you insane?”

The wise heads of the Research Council of the Republic of Pisa (RCRP) then nodded politely as Mr Fibonacci tried to explain, but after a short panel discussion they ended up investing tax-payers’ money in advanced abacus technology instead. Fair enough, they were right. The glorious Pisan republic ended in early 15th century, when Pisa fell under Florentine domination. Obviously the key stakeholders of RCRP couldn’t be expected to wait 400 years to see their investment bear fruit.

Seriously though, the Republic of Pisa actually treated Fibonacci very well. With such poor economic decisions, no wonder that they eventually lost out to Florence.


 

Our graphical notation can be a little bit more concise: the diagrams we have been drawing so far have some redundant information that we can throw away safely, without losing anything important. It is usually good to throw away redundant things. And it makes sense to do this now, because we will be drawing a lot of diagrams.

Remember, the numbers travel on wires from left to right. So the arrowheads are redundant. Also, we will not write the + symbol inside the circle from now on. Whenever you see a white circle in a diagram on this blog, it will always mean adding.

notation

Actually, by throwing away the arrowheads I’m also doing something a little bit underhanded here. I’m preparing us for the second way of understanding the addition operation, which will help us to solve the mystery of passing between  and . Throwing away arrowheads seems to be in the air at the moment; many people are doing it. We will discuss all of that in more detail in a later episode.

It’s time for our first equation. You may remember an interesting thing about adding. If I try to calculate 3 + 4 and I don’t make any mistakes, I will get the same result as when I calculate 4 + 3. In fact, if I use x and y as variables, to stand for any two numbers, then a mathematician might write:

∀ x,y.  x + y = y + x      ③

The upside down A means “for all”. So ③ simply says that for all possible choices of assigning numbers to x and y, say x = 42 and y = 24, or x = 43 and y = 57, or x = 3 and y = 4, the calculations x + y and y + x will have the same result. This property is known as commutativity of addition.
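A “for all” statement can’t be proved by testing, but a quick spot-check is a nice sanity check all the same. A minimal Python sketch:

```python
import random

# ③ claims x + y = y + x for ALL x and y. Checking random pairs can't
# prove that, but it would expose the claim if it were false.
for _ in range(1000):
    x, y = random.randint(-10**6, 10**6), random.randint(-10**6, 10**6)
    assert x + y == y + x

print("x + y == y + x held on 1000 random pairs")
```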

Here’s how we can express the same thing as  with diagrams. “Comm” is short for “commutativity”. commutativity Before I explain any further, I want you to imagine a white square of paper. Imagine the square to be exactly the right size so that when I cover either of the two diagrams in (Comm), I get two pictures that both look the same: something like an electric plug. box In particular, notice that the parts that are still uncovered are parts where the two diagrams agree — both the diagrams in (Comm) have their dangling wires in the same places. Dangling wires are wires in which one of the two ends is not connected to anything. In both of the (Comm) diagrams two wires are dangling on the left, and one wire is dangling on the right.

This brings me to an important principle of graphical linear algebra: whenever we write an equation between two diagrams, the number of dangling wires on the left and on the right will be the same. By the way, there will never be any dangling wires on the top or on the bottom. Maybe there will be no dangling wires at all, but if there are some, they must dangle from the left, from the right, or from both left and right.

So why is (Comm) correct? Let’s see what happens when we provide some arguments. We could try to feed in some numbers to convince ourselves, but maybe we’d get lucky and convince ourselves of something that is false. For example, it happens to be the case that 2 + 2 = 2 × 2, but it’s not the case that for all possible choices of x and y we have x + y = x × y. Right? Letting x = 3 and y = 5, we get 3 + 5 = 8 but 3 × 5 = 15. So let’s use variables instead. We’ve already seen that:

x+y

For the right hand side of (Comm), notice that the value of the first dangling wire on the left is actually connected as the second argument to addition. So we get:

y+x

But we know that addition is special and satisfies the commutativity property. So even if we “can’t see inside” the two diagrams any more, for example if we cover them up with a big white square so that they both look like

box

we know that their behaviour is the same. The anonymous white box, in each of the two cases, simply adds the two arguments provided on the left and spits out the result on the right.
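The 2 + 2 = 2 × 2 trap above is worth making mechanical: a single coincidence is not a law, and a single counterexample is enough to refute one. A small Python sketch:

```python
# 2 + 2 = 2 × 2 happens to hold...
assert 2 + 2 == 2 * 2

# ...but "x + y = x × y for all x, y" is false: x = 3, y = 5 refutes it.
assert 3 + 5 == 8 and 3 * 5 == 15
assert not all(x + y == x * y for x in range(1, 6) for y in range(1, 6))

print("one coincidence, one counterexample")
```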


 

Look again at ③ and look at (Comm). Which one do you prefer? If you say ③ then that’s cool. I will give you many more reasons to love (Comm). For now, the reasons are mainly aesthetic. For example, we don’t need to introduce any upside down letters in (Comm), nor learn how to use them in practice.

If you are confused, don’t worry. I still need to explain a bit more about what diagrams are exactly, and how we construct them. What kind of things we can and can’t do with them. The rules of the game. You already know one, the one about having the same number of dangling wires in equations. The rules of the game are the subject of the next episode. If you’ve ever played with Lego, it should be smooth sailing.

Continue reading with Episode 4: Dumbing Down and Magic Lego.