# 25. Fractions, diagrammatically

I’ve been looking forward to this moment for some time. Now that we’ve gone through all of the equations of graphical linear algebra, it’s time to have some fun. In this episode we look at how fractions are represented in the graphical notation, derive the usual rules of fraction arithmetic, and go through a very early example of an algorithm that produces fractions: Fibonacci’s procedure for division by composite numbers.

We will concentrate on diagrams with two dangling wires: one on the left and one on the right. In previous episodes we have seen that these diagrams have a tight relationship with various kinds of numbers. For example, in the theory of bimonoids, a sub-theory of graphical linear algebra, they were in 1-1 correspondence with the natural numbers. When we added the antipode, extending the theory, we got the integers.  Since then, we’ve added the mirror image generators and figured out the equations that relate them. So what additional “numbers” do we get in the theory IH of interacting Hopf monoids that we finished assembling in the last episode?

We get fractions, the stars of this episode. Let’s start by proving a useful little lemma. Suppose that w ≠ 0. Then the w sugar can jump over any sugar that’s pointing in the opposite direction, like so.

Here’s the proof of (†):

In the first and final steps we used the assumption w ≠ 0, since the scalar equations that we introduced in the last episode need this condition. The steps in the middle use the fact that multiplication of integers is the same as composition; we went through this in Episode 18.

We immediately get the mirror version of (†), where the nonzero w scalar is now pointing to the right:

No proof needed, because of the mirror symmetry in the equational theory that we discussed in the last episode.

Any fraction can be written v/w where v and w are integers and w ≠ 0. So let’s extend the syntactic sugar for integers, and define a sugar for fractions as follows:

At this point we have to be a little bit careful, because a fraction is not merely a pair of integers, written one on top of the other. Fractions represent ratios; for example 1/2 = 2/4, since the ratio of one-part-in-two is the same as two-parts-in-four. But (1,2) and (2,4) are different pairs of numbers, and so the usual, formal textbook definition says that a fraction v/w is the equivalence class [(v,w)] where w ≠ 0. The underlying equivalence relation relates (p,q) and (r,s) when ps = qr, since this condition captures exactly what it means for p/q and r/s to represent the same ratio. For (1,2) and (2,4) we have 1·4 = 2·2, and so 1/2 = 2/4.  So, for our definition of fraction sugar to even make sense—for it to be well-defined—we ought to show that

Let’s do this now. In addition to ps=qr, we can also assume q,s ≠ 0. All three of these assumptions are used in the following proof.

Our new sugar wouldn’t be much good if we were able to show that two non-equal fractions are the same, using diagrammatic reasoning, i.e. if the translation was not faithful. For this reason, it’s useful to show that the implication that we just proved also goes the other way:

Here’s the proof.

So we have shown that any fraction can be considered, in a faithful way, as a diagram of graphical linear algebra with one dangling wire on the left and one on the right.
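
Outside the graphical language, the cross-multiplication condition is easy to state in code. Here is a small sketch in Python; the function name `same_ratio` is my own, not anything from the diagrams, and Python’s built-in `Fraction` type quotients pairs by exactly this relation.

```python
from fractions import Fraction

def same_ratio(p, q, r, s):
    """(p, q) and (r, s) represent the same fraction iff p*s == q*r (q, s nonzero)."""
    assert q != 0 and s != 0
    return p * s == q * r

# (1, 2) and (2, 4) are different pairs of integers, but the same ratio:
print(same_ratio(1, 2, 2, 4))            # True
print(Fraction(1, 2) == Fraction(2, 4))  # True: Fraction normalises by the same relation
```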

In fact, every diagram of this type—i.e. with one wire dangling from both ends—can be put, using diagrammatic reasoning, either in the form

where u,v are integers or in the form

This follows from some more advanced insights about our equational system, so we will not prove it immediately. You will have to trust me for now.

Next up is the algebra of fractions. First a quick reminder of how we used diagrams to talk about the algebra of the natural numbers and the integers. In both cases, addition could be expressed in the graphical language as follows:

Of course, this agrees with what we already knew about ordinary addition of integers. One advantage of defining addition in this way is that several of its properties immediately follow by diagrammatic reasoning: e.g. associativity, commutativity and the fact that 0 is the additive identity.

Similarly, multiplication can be defined as composition:

Again, associativity is immediate, but commutativity does not follow for totally generic reasons. One can prove it for the natural numbers by induction, then use this to get commutativity of multiplication for the integers, since the antipode commutes with any natural number sugar.

Finally, for any integer v, we know that its sugar can slide past copying and adding in the following way.

We proved this for natural numbers by induction in Episode 9, and for integers it follows from the fact that the antipode also slides in this way. These facts can be used to show that multiplication distributes over addition.

Before we delve into the algebra of fractions, we need one more lemma. It says that nonzero scalars can go the “wrong way” past addition and copying.

We only need to prove one of the two equations of (‡); the second is the bizarro of the first: remember that bizarro combines the mirror-image symmetry with the invert-colours symmetry and, as we saw in the last episode, both symmetries are present in our equational system.

So let’s assume that w ≠ 0 and prove that w can go the wrong way past copying:

In the proof we used the scalar equations, which require the assumption w ≠ 0. In the second step we used our knowledge about integers going the “right way” over copy.

So what happens when w = 0? Let’s take a look at the diagrams. First, if we have 0 going the wrong way to the left of copy we have

and two zeros on the right gives

The two results are different: they cannot be made equal using diagrammatic reasoning in graphical linear algebra. We will make the reasons more precise in the next few episodes; for now, we could go back to our relational intuition and figure out that the two diagrams

define two different relations. Zero can’t go over addition the wrong way for similar reasons.

Let’s mimic the algebra of the integers and see what happens when we define:

and

First, multiplication. Using the definition of our fraction syntactic sugar, the right hand side of the above definition is the following diagram

and we can use diagrammatic reasoning to put it into the form (★):

So r/s · p/q = rp/sq, as we all learned in school. Since we know that multiplication of integers is commutative, this also gives us the commutativity of multiplication for fractions.

Next, let’s do the same thing for addition, transforming the diagram for p/q + r/s into the form (★).

Again, transforming the diagram into (★) using diagrammatic reasoning gives us the formula we all know: p/q + r/s = (sp + qr)/qs . In the fourth step of the proof, we were able to move the qs the wrong way past addition (‡) because we know that both q and s are not zero, so qs is also not zero.

So just by using diagrammatic reasoning, we “discovered” the rules for fraction arithmetic. This is different from the usual treatment: if one defines a fraction to be an equivalence class of pairs (p,q), then multiplication and addition are defined by the formulas we derived. It feels better to derive than to define.
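
As a sanity check independent of the diagrams, the two derived formulas agree with Python’s built-in `Fraction` arithmetic. The helper names `mult` and `add` below are mine:

```python
from fractions import Fraction

def mult(p, q, r, s):
    # derived rule: r/s · p/q = rp/sq
    return Fraction(r * p, s * q)

def add(p, q, r, s):
    # derived rule: p/q + r/s = (sp + qr)/qs
    return Fraction(s * p + q * r, q * s)

# compare against Python's Fraction on a sample of values
p, q, r, s = 1, 2, 2, 3
assert mult(p, q, r, s) == Fraction(p, q) * Fraction(r, s)   # 1/3
assert add(p, q, r, s) == Fraction(p, q) + Fraction(r, s)    # 7/6
```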

There are a few more things left to do: e.g. show distributivity of multiplication and the fact that we have additive inverses. We will finish the job in the next episode.

This weekend I finally got around to reading Liber Abaci by Leonardo of Pisa, aka Fibonacci, translated into English by Laurence Sigler. I’m six chapters in and I’ve really enjoyed it so far. I already knew roughly what was in it, but that’s a bit like saying that I knew that Romeo and Juliet contained a teen suicide before I had to read it at school. A lot of fun, of course, comes from the actual reading, from getting into Fibonacci’s head. It’s a real shame that classic works of mathematics—and you don’t get much more classic than Liber Abaci—don’t get anywhere near as much attention in our culture as classic works of literature or philosophy. Fibonacci’s raw excitement about calculating with the new Indo-Arabic positional notation does not come across anywhere near as vividly in more modern, prescriptive texts.

As I keep reading, I will pull out the bits that I find the most interesting and talk about them on the blog. We start today with fractions, which get a lot of attention in Liber Abaci. Fibonacci used a different notation to the one we all know. Actually, there are several related notations, but the most common one is the following: what Fibonacci would write

$\frac{1\ 5\ \ 7}{5\ 6\ 10}3$

we would recognise as 3 + 7/10 + 5/6·1/10 + 1/5·1/6·1/10.  The notation is thus read from right to left. It’s a bit like a decimal expansion, in the sense that reading more gives you more information about the number. First, we know the number is larger than 3 but smaller than 4. Then, after reading the next bit of the fraction, we know that it’s a bit more than 7/10ths of the way from 3 to 4, but not quite 8/10ths. Of that final 1/10th, we know it’s a little bit more than 5/6ths. And so forth.

After a quick calculation, this is 284/75, or 3 59/75 using the evil mixed fraction notation. Fibonacci’s fractions follow the convention that the denominators never get smaller as one reads the fractions left to right: e.g. in the example above 5 ≤ 6 ≤ 10. Also, by using 10 in the denominators, one can write decimal expansions. For example, 3.14159 can be written $\frac{9\ \ 5\ \ 1\ \ 4\ \ 1}{10\ 10\ 10\ 10\ 10}3$.
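
The right-to-left reading suggests a simple evaluation procedure: fold over the numerators from the left, nesting one denominator deeper at each step. Here is a sketch in Python; the function name `fib_value` is mine:

```python
from fractions import Fraction

def fib_value(whole, numerators, denominators):
    """Evaluate a Fibonacci fraction, e.g. (1 5 7)/(5 6 10) 3."""
    frac = Fraction(0)
    # fold from the left: each step nests one level deeper in the denominators
    for num, den in zip(numerators, denominators):
        frac = (frac + num) / den
    return whole + frac

print(fib_value(3, [1, 5, 7], [5, 6, 10]))      # 284/75
print(fib_value(3, [9, 5, 1, 4, 1], [10] * 5))  # 314159/100000, i.e. 3.14159
```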

Fibonacci’s fraction notation translates neatly into the graphical notation. For instance $\frac{1\ 5\ \ 7}{5\ 6\ 10}3$, our example above, is:

It’s easy to pass between the two notations: in the diagram, the numerators go from left to right, following successive copying. The denominators are interleaved with adding on the right.

The reason for Fibonacci’s choice of fraction notation could be that it is the output format of his algorithm—in Chapter 5—for performing division by composite (i.e. non-prime) numbers. To prepare, Fibonacci first describes how to divide by prime numbers, and we can recognise this procedure as being equivalent to ordinary long division. Of course, long division works for composite numbers as well, but Fibonacci’s division algorithm means that if the composite has small factors then we can make the procedure simpler: it is usually easier to do long division when you divide by small numbers.

Fibonacci’s writing style is very entertaining: he describes a calculation technique and then solves a number of successively larger problems in full detail; some so large that it’s difficult to think of practical applications, especially since we are talking about medieval Italy. For example, in Chapter 2, he multiplies 12345678 by 87654321 to get 1082152022374638. It’s as if he’s saying: “Ok, so the abacists can probably also do the last example, but look, here’s a much bigger one. Good luck building an abacus big enough!”.

We will go through one of Fibonacci’s small examples from Chapter 5: dividing 749 by 75, illustrating his procedure for dividing by composite numbers with the graphical notation.

First we factorise 75 into primes—actually Fibonacci is not a stickler about this point and factorises into whatever is most convenient to divide by—and write the factors in increasing order.

The algorithm works by processing each of the 3, 5, 5 in turn, building up the Fibonacci fraction in the process. We start with the 3 and divide 749 by it, obtaining 249 and 2/3.

At this point we already have a chunk of the Fibonacci fraction: the 2/3. We can pull it aside for later

and we are left with:

Next we perform the following calculation:

First the 5 goes over addition backwards (‡). Then we divide 249 by 5, obtaining 49 and 4/5. Next we use associativity of addition, and pull the 5 back over. At this point we have another chunk of a Fibonacci fraction ready to go, so we can pull it out:

and we are left with

Now we perform the same steps as in the previous calculation: this is the “loop” in the algorithm. Each time the body of the loop is performed, we get rid of the next number in the decomposition of the denominator. It’s time to process the final 5.

At this point we do not have any more components of the denominator to consume, so the algorithm terminates. And composing ①, ② and ③ gives us the Fibonacci fraction we were looking for:

that the great man himself would write $\frac{2\ 4\ 4}{3\ 5\ 5}9$, the result of dividing 749 by 75.
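
The whole procedure is short enough to sketch in Python (the function name `fibonacci_divide` is mine): divide repeatedly by the factors, smallest first, and the remainders become the numerators of the Fibonacci fraction, read left to right.

```python
def fibonacci_divide(n, factors):
    """Divide n by a composite number given as its factors in increasing order."""
    numerators = []
    for f in factors:
        n, r = divmod(n, f)   # one pass of the loop: consume the next factor
        numerators.append(r)  # the remainder fills in the next numerator, left to right
    return n, numerators      # integer part, plus numerators over the given factors

print(fibonacci_divide(749, [3, 5, 5]))  # (9, [2, 4, 4]): Fibonacci's (2 4 4)/(3 5 5) 9
```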

Fibonacci’s division algorithm is not so useful for us today, since we have computers to do the division for us, and there’s no pressing need to keep the numbers that we are dividing by small. But it is very interesting as an early example of an algorithm tout court; here Fibonacci was probably influenced by al-Khwarizmi. The presentation, using graphical notation, also gave me an opportunity to show off the compositional nature of the graphical language; we could perform the computation in steps, chopping off parts of the diagram that we were finished with, and gluing everything back together at the end. Compositionality is the strong point of the graphical notation, as we will discover time and again.

Continue reading with Episode 26 – Keep Calm and Divide by Zero.

## 9 thoughts on “25. Fractions, diagrammatically”

1. This blog has steadily become my mathematical guilty pleasure – I don’t think I’ll have a chance to do anything serious involving category theory, but I love the idea of graphical algebras!
I would love to see a return to applications in linear algebra though! I have especially been wondering about how this notation could be adapted to working with partitioned matrices and matrix inverses. The formulas there often exhibit a lot of structure coupled with a lack of intuition in equal measure. (I’m thinking there might be a very nice graphical result about matrix inverses – maybe from reversing all the integer sugars? That should work in the simple integer case at least, right?).
With the fractions in place, can this be extended to real numbers? Someone discussed using continued fractions which seems possible now, although there is the ever-present issue of infinity (infinite graphs in this case!).
A final question on notation: do you have a short-hand for writing the graphs on paper?


No need for any guilt by the way, I’ve learned to live with my string diagram addiction and I’m coping ok so far 😉

You are right about matrix inverses having a good story in the graphical language: there will be an episode on this pretty soon. And I think that it’s indeed quite intuitive.

About the reals: yes, continued fractions are one way of doing it. I also want to write an episode about this topic soon. There is a fantastic reference that Jacob suggested on the front page, chapter 14 of http://cs.nyu.edu/~yap/book/alge/ftpSite/. It’s given me several ideas… And you are right, to deal with irrationals we need to consider infinite string diagrams, for the same reason that continued fractions of irrationals do not terminate.

For notation, I find that using a combination of sugars (for numbers, matrices, the two kinds for copying and adding) together with compositionality (like splitting up the Fibonacci calculation in this episode) does a reasonable job of keeping diagrams small. It’s also quite nice to use software for this, since many of the algebraic rules correspond to topological manipulations that can be achieved by dragging or “rewiring” a small portion of the diagram: to go from one step of the construction to the next you can copy and paste, which saves redrawing time.

But it would be really cool to have a proper software package that understood that it is dealing with string diagrams and not some arbitrary collection of shapes and lines: I started working on something like this in the summer of 2014 but unfortunately I stopped once semester started (for obvious reasons) and I never managed to come back to it. Some software along these lines already exists, for example the Quantomatic project (http://quantomatic.github.io). Still, I’d like something a little bit more generally applicable: maybe I can convince somebody to work on this in the future.


• If you find a novel result in the graphical algebra for integer/fractional matrices, what are the caveats that keep you from immediately claiming that the result extends to real variables as well? Could a “generalization” property be established without explicitly defining the reals?
Could something perhaps be gained from using “dummy” variables to represent the irrational remainder of each number? Maybe define a single primitive new sugar which can be shifted around like a number, but does not cancel out with itself. You could then maybe reduce the graph around it, leaving a simpler graph with these irrational bits remaining, or leaving them to be cancelled out by other means. Maybe it’d be worth a look, but maybe it would be too much hassle. (either way, the principle is not much different from working with continued fractions and just truncating it at some level, and then manipulating the unknown truncated remainder separately)


• In short, yes, and this is exactly the approach we have taken in our papers. You assume that you already know about the reals (or elements of any field, for that matter) and introduce a “sugar” for each of them. You then have to make sure that the sugars play well with the copying and the adding and their algebra is compatible with the way you do arithmetic with diagrams. So long story short, any theorem you prove graphically will be valid more generally.

This is the “boring” but easy way to deal with the real numbers using the graphical syntax. But somehow it feels more satisfying to explicitly construct the reals with diagrams than to import them!


• Will says:

Have you considered constructing the algebraic complex numbers instead? It seems quite neat that you could define the elements as “boxes” such that simple finite diagram equivalences hold (i.e. a diagram building the polynomial of which the element is a root, equal to zero) along with some integer identifier of which root the element represents. It’d be fun to see what happens with your infinity/top/bottom elements there; hopefully they’d behave nicely.


2. Hey Will — that’s an interesting idea and I think that it would work. So we’d introduce each number as a box together with its characterising diagrammatic equation. Then, I guess that showing that these numbers play well with the diagrammatic structure would involve very similar steps to a proof that the algebraic numbers form a field. Once you know that, there is no problem with the infinity, top and bottom elements; they would work just as they do for the rationals.


• Will says:

The only slight quirk is that the diagrammatic equations are not quite characterising: you do require the extra integer to specify the root. This is (roughly) how the algebraic complexes are represented computationally. The reason that I’m particularly fond of the algcomps is exactly that – they’re ideally suited for computational representation and manipulation (all of the expected operations can be done efficiently), and so are interesting from a computational complexity perspective.

Thank you for presenting your research in this form; it’s very readable and I’m looking forward to your future posts!


• Right — of course, if there’s more than one root then we need to say which it is that we want. How do you usually do this, do you order them? What’s the ordering?


• Will says:

I was oversimplifying a bit – you can think of it as an integer ordered by the modulus, but you run into issues with multiplicity if you’re not careful. In practice, a rational approximation is stored of sufficient accuracy to uniquely determine the root. The usual reference is Cohen – A Course in Computational Algebraic Number Theory. Section 4.2 discusses this representation – there are also a couple of other representations there that I’m less familiar with.
