I’ve been looking back at some of the mess I produced while trying to get an initial grip on the ideas I wrote up in my last two posts on negative probability. One of my main interests in this blog is the gulf between maths as it is formally written up and the weird informal processes that produce mathematical ideas in the first place, so I thought this might make a kind of mini case study. Apologies in advance for my handwriting.
I do all my work in cheap school-style exercise books. The main thread of what I’m thinking about goes front-to-back: that’s where anything reasonably well-defined that I’m trying to do will go. Working through lecture notes, doing exercises, any calculations where I’m reasonably clear on what I’m actually calculating. But if I have no idea what I’m even trying to do, it goes in the back, instead. The back has all kinds of scribbles and disorganised crap:
Most of it is no good, but new ideas also tend to come from the back. The Wigner function decomposition was definitely a back-of-the-book kind of thing. I’ve mostly forgotten what I was thinking when I made all these scribblings, and I wouldn’t trust the remembered version even if I had one, so I’ll try to refrain from too much analysis.
The idea seems to originate here:
This has the key idea already: start with equal probability for all squares, and then add on correction terms until the bottom left corner goes negative. But the numbers are complete bullshit! Looking back, I can’t make sense of them at all. For instance, I was trying to add to things, instead of . Why? No idea! It’s not like is a number that had come up in any of my previous calculations, so I have no idea what I was thinking.
Even with the bullshit numbers, I must have had some inkling that this line of thought was worth pursuing, so I wrote it out again. This time I realised the numbers were wrong and crossed them out, writing the correct ones to the right:
The little squares above the main squares are presumably to tell me what to do: add to the filled in squares and subtract it from the blank ones.
I then did a sanity check on an example with no negative probabilities, and it worked:
At that point, I think I was convinced it worked in general, even though I’d only checked two cases. So I moved to the front of the book. After that, the rest of it looks like actual legit maths that a sane person would do, so it’s not so interesting. But I had to produce this mess to get there.
OK, so this is where I go back through everything from the last post, but this time show how all the fiddling around with boxes relates back to quantum physics, and also go into some technical details like explaining what I meant by ‘half the information’ in the discussion at the end. This is unavoidably going to need more maths than the last post, and enough quantum physics knowledge to be OK with qubits and density matrices. I’ll start by translating everything into a standard physics problem.
Qubit phase space
So, first off, instead of the ‘strange machine’ of the last post we will have a qubit state – I’ll start with a particular example state. The three questions then become measurements on it: specifically, expectation values of operators built from the three Pauli matrices.
For this example state we get the following:
This can be represented on the same sort of 2×2 grid I used in the previous post:
The state has a definite value of 0 for one of the measurements, so the probabilities in the corresponding cells of the grid must sum to 1. For a second measurement there is an equal chance of either value. The third measurement can be shown to be associated with the diagonals of the grid, in the same way as in Piponi’s example in the previous post, and again there is an equal chance of either value. Imposing all these conditions gives the probability assignment above.
The 2×2 grid is called the phase space of the qubit, and the function W that assigns probabilities to each cell is called the Wigner function. To save on drawing diagrams, I’ll represent this as a square-bracketed matrix from now on:
For much more detail on how this all works, the best option is probably to read Wootters, who developed a lot of the ideas in the first place. There’s his original paper, which has all the technical details, and a nice follow-up paper on Picturing Qubits in Phase Space which gives a bit more intuition for what’s going on.
In the previous post I gave the following formula for the Wigner function:
which simplifies to
This is a somewhat different form to the standard formula for the Wigner function, but I’ve checked that they’re equivalent. I’ve put the details on a separate notes page here, in a sort of blog post version of the really boring technical appendix you get at the back of papers.
As with the example in the last blog post, it’s possible to get qubit states where some of the values of the Wigner function are negative. The numbers don’t work out so nicely this time, but as one example we can take the qubit state . (This is the +1 eigenvector of the density matrix .)
The Wigner function for this state is
with one negative entry. I learned while writing this that the states with negative values are called magic states by quantum computing people! These are the states that provide the ‘magic’ for quantum computing, in terms of giving a speed-up over classical computing. I’d like to be able to say more about this link, but I’ll never finish the post if I have to get my head around all of that too, so instead I’ll link to this post by Earl Campbell that goes into more detail and points to some references. A quick note on the geometry, though:
The six eigenvectors of the Pauli matrices form the corners of an octahedron on the Bloch sphere, as in my dubious sketch above. We’ve already seen that our example state has no magic – all the values are nonnegative. This also holds for the other five eigenvectors, which have the following Wigner functions:
The other states on the surface of the octahedron or inside it also have no magic. The magic states are the ones outside the octahedron, and the further they are from the octahedron the more magic they are. So the most magic states are on the surface of the sphere opposite the middle of the triangular faces.
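This geometry lends itself to a quick test. The octahedron of non-magic states is the set of Bloch vectors with |x| + |y| + |z| ≤ 1 (the six Pauli eigenstates are its corners), so checking for magic is a one-liner. A minimal sketch – the 1-norm condition is my paraphrase of the geometry, not notation from the post:

```python
def is_magic(x, y, z):
    """True if the Bloch vector (x, y, z) lies outside the octahedron
    |x| + |y| + |z| <= 1, i.e. the corresponding qubit state is magic."""
    return abs(x) + abs(y) + abs(z) > 1

# The six Pauli eigenstates sit at the corners of the octahedron: no magic.
print(is_magic(0, 0, 1))   # False

# The most magic states point towards the middle of the octahedron's faces.
s = 1 / 3 ** 0.5
print(is_magic(s, s, s))   # True
```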
Half the information
Why can’t we have a probability of −½, as before? Well, I briefly mentioned the reason in the previous blog post, but I can go into more detail now. There are constraints on the values of W that forbid values this negative. First off, the values of W have to sum to 1 – this makes sense, as they are supposed to be something like probabilities.
The second constraint is more interesting. Taking the first example state again: it has a definite answer to one of the questions and no information at all about the other two. There’s redundancy in the questions, so exact answers to two of them would be enough to pin down the state precisely. So we have half of the possible information.
This turns out to be the most information you can get from any qubit state, in some sense. I say ‘in some sense’ because it’s a pretty odd definition of information.
I learned about this from a fascinating paper by van Enk, A toy model for quantum mechanics, which was actually my starting point for thinking about this whole topic. He starts with the Spekkens toy model, a very influential idea that reproduces a number of the features of quantum mechanics using a very simple model. Again, this is too big a topic to get into all the details, but the most basic system in this model maps to the six ‘non-magic’ qubit states listed above, in the corners of the octahedron. These all share the half-the-knowledge property described above: we know the answer to one question exactly and have no idea about the others.
Now van Enk’s aim is to extend this idea of ‘half the knowledge’ to more general probability distributions over the four boxes. But this requires having some kind of measure of what half the knowledge means. He stipulates that this measure should have I = 1 for the six half-the-knowledge states we already have, which seems reasonable. Also, it should have I = 2 for states where we know all the information (impossible in quantum physics), and I = 0 for the state of total ignorance about all questions. Or to put it a bit differently, I = 2 − H,
where H is an entropy measure – it decreases from 2 to 1 to 0 as we learn more information about the system. There’s a parametrised family of entropies known as the Rényi entropies, which reproduce this behaviour for the cases above, and differ for other distributions over the boxes. (I have some rough notes about these here, which may or may not be helpful.) By far the most well-known one is the Shannon entropy H1, used widely in information theory, but it turns out that this one doesn’t reproduce the states found in quantum physics. Instead, van Enk picks H2, the collision entropy. This has quite a simple form:
where the pi are the four components of W – we’re just summing the squares of them. So then our information measure is just I = 2 − H2, and the second constraint on W is that I can be at most 1 – equivalently, the squares of the components of W can sum to at most ½:
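The collision entropy itself is simple enough to check numerically: it’s minus the base-2 log of the sum of squared components, and it gives the right values for the three landmark cases (0 for full knowledge, 1 for the half-the-knowledge states, 2 for total ignorance). A quick sketch:

```python
import math

def collision_entropy(p):
    """Renyi entropy of order 2: H2(p) = -log2(sum of squared components)."""
    return -math.log2(sum(x * x for x in p))

print(collision_entropy([1, 0, 0, 0]))              # 0.0 : full knowledge
print(collision_entropy([0.5, 0.5, 0, 0]))          # 1.0 : half the knowledge
print(collision_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 : total ignorance
```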
Why this particular entropy measure? That’s something I don’t really understand. Van Enk describes it as ‘the measure of information advocated by Brukner and Zeilinger’, and links to their paper, but so far I haven’t managed to follow the argument there, either. If anyone reads this and has any insight, I’d like to know!
In some ways, I know a lot more about negative probabilities than I did when I started getting interested in this. But conceptually I’m almost as confused as I was at the start! I think the main improvement is that I have some more focussed questions to be confused about:
Is the way of decomposing the Wigner function that I described in these posts any use for making sense of the negative probabilities? I found it quite helpful for Piponi’s example, in giving some more insight into how the negative value connects to that particular answer being ‘especially inconsistent’. Is it also useful for thinking about qubits?
Any link to the idea of negative probabilities representing events ‘unhappening’? As I said at the beginning of the first post, I love this idea but have never seen it fully developed anywhere in a satisfying way.
What’s going on with this collision entropy measure anyway?
I’m not a quantum foundations researcher – I’m just an interested outsider trying to understand how all these ideas fit together. So I’m likely to be missing a lot of context that people in the field would have. If you read this and have pointers to things that I’m missing, please let me know in the comments!
I’ve been thinking about the idea of negative probabilities a lot recently, and whether it’s possible to make any sense of them. (For some very muddled and meandering background on how I got interested in this, you could wade through my ramblings here, here, here and here, but thankfully none of that is required to understand this post.)
To save impatient readers the hassle of reading this whole thing: I’m not going to come up with any brilliant way of interpreting negative probabilities in this blog post! But recently I did notice a few things that are interesting and that I haven’t seen collected together anywhere else, so I thought it would be worth writing them up.
Now, why would you even bother trying to make sense of negative probabilities? I’m not going to go into this in any depth – John Baez has a great introductory post on negative probability that motivates the idea, and links to a good chunk of the (not very large) literature. This is well worth reading if you want to know more. But there are a couple of main routes that lead people to get interested in this topic.
The first route is pretty much pure curiosity: what happens if we try extending the normal idea of probabilities to negative numbers? This is often introduced by analogy with the way we use negative numbers in applications to simplify calculations. For example, there’s a fascinating discussion of negative probability by Feynman which starts with the following simple situation:
A man starting a day with five apples who gives away ten and is given eight during the day has three left. I can calculate this in two steps: 5 – 10 = -5 and -5 + 8 = 3.
The final answer is satisfactorily positive and correct although in the intermediate steps of calculation negative numbers appear. In the real situation there must be special limitations of the time in which the various apples are received and given since he never really has a negative number, yet the use of negative numbers as an abstract calculation permits us freedom to do our mathematical calculations in any order, simplifying the analysis enormously, and permitting us to disregard inessential details.
So, although we never actually have a negative number of apples, allowing them to appear in intermediate calculations makes the maths simpler.
The second route is that negative probabilities actually crop up in exactly this way in quantum physics! This isn’t particularly obvious in the standard formulation learned in most undergrad courses, but the theory can also be written in a different way that closely resembles classical statistical mechanics. However, unlike the classical case, the resulting ‘distribution’ is not a normal probability distribution, but a quasiprobability distribution that can also take negative values.
As with Feynman’s apples, these negative values don’t map to anything we observe directly: all measurements we could make give results that occur with zero or positive probabilities, as you would expect. The negative probabilities instead come in as intermediate steps in the calculation.
This should become clearer when I work through a toy example. The particular example I’ll use (which I got from an excellent blog post by Dan Piponi) doesn’t come up in quantum physics, but it’s very close: its main advantage is that the numbers are a bit simpler, so it’s easier to concentrate on the ideas. I’ll do this in two pieces: one that requires no particular physics or maths background and just walks through the example using basic arithmetic, and one that makes connections back to the quantum mechanics literature and might drop in a Pauli matrix or two. This is the no-maths one.
Neither of these routes really gets to the point of fully making sense of negative probabilities. In the apple example, we have a tool for making calculations easier, but we also have an interpretation of ‘a negative apple’, in terms of taking away one of the apples you have already. For negative probabilities, we mostly just have the calculational tool. It’s tempting to try and follow the apple analogy and interpret negative probabilities as being to do with something like ‘events unhappening’ – many people have suggested this (see e.g. Michael Nielsen here), and I certainly share the intuition that something like this ought to be possible, but I’ve never seen anything fully worked out along those lines that I’ve found really satisfying.
In the absence of a compelling intuitive explanation, I find it helpful to work through examples and get an idea of how they work. Even if we don’t end up with a good explanation for what negative probabilities are, we can see what they do, and start to build up a better understanding of them that way.
A strange machine
OK, so let’s go through Piponi’s example (here’s the link again). He describes it very clearly and concisely in the post, so it might be a good idea to just switch to reading that first, but for completeness I’ll also reproduce it here.
Piponi asks us to consider a case where:
a machine produces boxes with (ordered) pairs of bits in them, each bit viewable through its own door.
So you could have 0 in both boxes, 0 in the first and 1 in the second, and so on. Now suppose we ask the following three questions about the boxes:
Is the first box in state 0?
Is the second box in state 0?
Are the boxes both in the same state?
I’ll work through two possible sets of answers to these questions: one consistent and unobjectionable set, and one inconsistent and stupid one.
Example 1: consistent answers
Let’s say that we find that the answer to the first question is ‘yes’, the answer to the second is ‘no’, and the answer to the third is ‘no’. This makes sense, and we can interpret it easily in terms of an underlying state of the two boxes. The first box is in state 0, the second box is in state 1, and so of course the two are in different states, which matches the ‘no’ answer to the third question.
We can represent this situation with the grid below:
The system is in state ‘first box 0, second box 1’, with probability 1, and the other states have probability 0. This is all very obvious – I’m just labouring the point so I can compare it to the case of inconsistent answers, where things get weird.
Example 2: inconsistent answers
Now suppose we find an inconsistent set of answers when we measure the box: ‘no’ to all three questions. This doesn’t make much intuitive sense: both boxes are in state 0, but also they are in different states. Still, Piponi demonstrates that you can still assign something like ‘probabilities’ to the squares on the grid, as long as you’re OK with one of them being negative:
Let’s go through how this matches up with the answers to the questions. For the first question, we have
so the answer is ‘no’ as required. Similarly, for the other two questions we have
so we get ‘no’ to all three, at the expense of having introduced this weird negative probability in one cell of the grid.
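These sums are easy to check mechanically. In fact, the three ‘no’ conditions plus normalisation force the grid values uniquely: −½ for ‘both boxes 0’ and ½ for each of the other three cells. A quick sketch:

```python
# Quasiprobabilities for the four underlying states (first bit, second bit).
W = {(0, 0): -0.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.5}

# Each question's probability of 'yes' is the sum over the cells where it holds.
first_is_0  = W[(0, 0)] + W[(0, 1)]   # is the first box in state 0?
second_is_0 = W[(0, 0)] + W[(1, 0)]   # is the second box in state 0?
same_state  = W[(0, 0)] + W[(1, 1)]   # are the boxes in the same state?

print(first_is_0, second_is_0, same_state)  # 0.0 0.0 0.0 -> 'no' to all three
print(sum(W.values()))                      # 1.0 -> still normalised
```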
It’s not obvious at all what the negative probability means, though! Piponi doesn’t explain how he came up with this solution, but I’m guessing it’s one of either ‘solve the equations and get the answer’ or ‘notice that these numbers happen to work’.
I wanted to think a bit more about interpretation, and although I haven’t fully succeeded, I did notice a more enlightening calculation method, which maybe points in a useful direction. I’ll describe it below.
A calculation method
Some motivating intuition: all four possible assignments of bits to boxes are inconsistent with the answers in Example 2, but ‘both bits are zero’ is particularly inconsistent. It’s inconsistent with the answers to all three questions, whereas the other assignments are inconsistent with only one question each (for example, ‘both bits are 1’ matches the answer to the first two questions, but is inconsistent with the two states being different).
So you can maybe think in terms of consecutively answering the three questions and penalising assignments that are inconsistent. ‘Both bits are zero’ is an especially bad answer, so it gets clobbered three times instead of just once, pushing the probability negative.
The method I’ll describe is a more formal version of this. I’ll go through it first for Example 1, with consistent answers, to show it works there.
Back to Example 1
Imagine that we start in a state of complete ignorance. We have no idea what the underlying state is, so we just assign probability ¼ to each cell of the grid, like this:
(I’ll stop drawing the axes every time from this point on.) We then ask the three questions in succession and make corrections. For the first question, ‘is the first box in state 0’, we have the answer ‘yes’, so after we learn this we know that the left two cells of the grid now have probability ½ each, and the right two have probability 0. We can think of this as adding a correction term to our previous state of ignorance:
Notice that the correction term has some negative probabilities in it! But these seem relatively benign from an interpretational point of view – they are just removing probability from some cells so that it can be reassigned to others, and the final answer is still positive. It’s kind of similar to the apples calculation earlier, where we subtract something on the way to a positive final answer.
Next, we add on two more correction terms, one for each of the remaining two questions. The correction term for the second question needs to remove probability from the bottom row and add it to the top row, and the one for the third question corrects the diagonals:
Adding ‘em all up gives
So the system is definitely in the top left state, which is what we found before. It’s good to verify that the method works on a conventional example like this, where the final probabilities are positive.
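For anyone who wants to reproduce the arithmetic, here’s a minimal sketch of the method for Example 1 (the cell labels and the `apply_answer` helper are my own naming, not anything from the post):

```python
from fractions import Fraction

Q = Fraction(1, 4)
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Start from complete ignorance: 1/4 in every cell.
W = {c: Q for c in cells}

# Each question splits the grid in two; learning the answer adds 1/4 to the
# cells consistent with it and subtracts 1/4 from the inconsistent ones.
def apply_answer(W, consistent):
    for c in W:
        W[c] += Q if consistent(c) else -Q

apply_answer(W, lambda c: c[0] == 0)     # Q1: 'yes', first box is 0
apply_answer(W, lambda c: c[1] != 0)     # Q2: 'no', second box is not 0
apply_answer(W, lambda c: c[0] != c[1])  # Q3: 'no', the boxes differ

print(W)  # (0, 1) ends up with probability 1, all other cells with 0
```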
Example 2 again
I’ll follow the same method again for Piponi’s example, starting from complete uncertainty and then adding on a correction for each question (this time the answer is ‘no’ each time). This time I’ll do it all in one go:
which adds up to
So we’ve got the same probabilities as Piponi, with the weird negative -½ probability for ‘both in state 0’. This time we get a little bit more insight into where it comes from: it’s picking up a negative correction term from all three questions.
This ‘strange machine’ looks pretty bizarre. But it’s extremely similar to a situation that actually comes up in quantum physics. I’ll go into the details in the follow-up post (‘now with added equations!’), but this example almost replicates the quasiprobability distribution for a qubit, one of the simplest systems in quantum physics. The main difference is that Piponi’s machine is slightly ‘worse’ than quantum physics, in that the -½ value is more negative than anything you get there.
The two examples I did were ones where all three questions have definite yes/no answers, but my method of starting from a state of ignorance and adding on corrections carries over in the obvious way when you have a probability distribution over ‘yes’ and ‘no’. As an example, say you have a 0.8 probability of ‘no’ for the first question. Then you add 0.8 times the correction matrix for ‘no’, with the negative probabilities on the left hand side, and 0.2 times the correction matrix for ‘yes’, with the negative probabilities on the right hand side. That’s all there is to it. Just to spell it out I’ll add the general formula: if the three questions have answer ‘no’ with probabilities q1, q2, q3 respectively, then we assign probabilities to the cells as follows:
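Here’s one way to code up the general recipe. The closed form below (start at ¼ per cell, then for each question add ¼ × (2q − 1) with a sign depending on whether the cell is consistent with ‘no’) is my own rephrasing of the correction-term bookkeeping, so treat it as a sketch rather than the formula from the post:

```python
from fractions import Fraction

def wigner(q1, q2, q3):
    """Correction-term recipe: each question i contributes
    q_i * (correction for 'no') + (1 - q_i) * (correction for 'yes'),
    on top of the uniform 1/4 starting assignment."""
    W = {}
    for a in (0, 1):
        for b in (0, 1):
            # +1 if cell (a, b) is consistent with the answer 'no', else -1.
            s1 = 1 if a != 0 else -1   # Q1 'no': first box not 0
            s2 = 1 if b != 0 else -1   # Q2 'no': second box not 0
            s3 = 1 if a != b else -1   # Q3 'no': the boxes differ
            W[(a, b)] = Fraction(1, 4) * (1 + s1 * (2 * q1 - 1)
                                            + s2 * (2 * q2 - 1)
                                            + s3 * (2 * q3 - 1))
    return W

# Certain 'no' to all three questions reproduces Piponi's grid:
print(wigner(1, 1, 1))   # -1/2 for (0, 0), +1/2 for the other cells
# Certain 'yes'/'no'/'no' reproduces Example 1:
print(wigner(0, 1, 1))   # 1 for (0, 1), 0 for the other cells
```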
(If you’re wondering where the W comes from, it’s just the usual letter used to label this thing – it stands for ‘Wigner’, and is a discrete version of his Wigner function.)
It turns out that all examples in quantum physics are of the type where you don’t have certain knowledge of the answers to all three questions. It’s possible to know the answer to one of them for certain, but then you have to be completely ignorant about the other two, and assign probability ½ to both answers. More usually, you will have partial information about all three questions, with a constraint that the total information you get about the system is at most half the total possible information, in a specific technical sense. To go into this in detail will require some more maths, which I’ll get to in the next post.
In the references to The World Beyond Your Head I found an intriguing paper by Mizuko Ito, Mobilizing Fun in the Production and Consumption of Children’s Software, following the interactions between children and adults at an after-school computer club. It’s written in a fairly heavy dialect of academicese, but the dialogue samples are fascinating. Here a kid, Jimmy, is playing SimCity 2000, with an undergrad, Holly, watching:
J: (Budget window comes up and Jimmy dismisses it.) Yeah. I’m going to bulldoze a skyrise here. (Selects bulldozer tool and destroys building.) OK. (Looks at H.) Ummm! OK, wait, OK. Should I do it right here?
H: Sure, that might work… that way. You can have it …
J: (Builds highway around city.) I wonder if you can make them turn. (Builds highway curving around one corner) Yeah, okay.
H: You remember, you want the highway to be … faster than just getting on regular streets. So maybe you should have it go through some parts.
J: (Dismisses budget pop-up window. Points to screen.) That’s cool! (Inaudible.) I can make it above?
H: Above some places, I think. I don’t know if they’d let you, maybe not.
J: (Moves cursor over large skyscraper.) That’s so cool!
H: Is that a high rise?
J: Yeah. I love them.
H: Is it constantly changing, the city? Is it like …
J: (Builds complicated highway intersection. Looks at H.)
J: So cool. (Builds more highway grids in area, creating a complex overlap of four intersections.)
H: My gosh, you’re going to have those poor drivers going around in circles.
J: I’m going to erase that all. I don’t like that, OK. (Bulldozes highway system and blows up a building in process.) Ohhh …
H: Did you just blow up something else?
J: Yeah. (Laughs.)
J: I’m going to start a new city. I don’t understand this one. I’m going to start with highways. (Quits without saving city.)
As Ito puts it, “by the end Jimmy has wasted thousands of dollars on a highway to nowhere, blown up a building, and trashed his city.” So what’s the point of playing the game in this way?
Well, for a start, it lets him make cool stuff and then blow it up. That might be all the explanation we need! But I think he’s also doing something genuinely useful for understanding the game itself.
Ito mainly seems to be interested in the social dynamics of the situation – the conflict between Jimmy finding ‘fun’, ‘spectacular’ effects in the game, and Holly trying to drag him back to more ‘educational’ behaviours. I can see that too, but I’m interested in a slightly different reading.
To my mind, Jimmy is ‘sketching’: he’s finding out what the highway tool can do as a tool, rather than immediately subsuming it to the overall logic of the game. The highway he’s building is in a pointless location and doesn’t function very well as a highway, but that doesn’t matter. He’s investigating how to make it turn, how to make it intersect with other roads, how to raise it above ground level. While focussed on this, he ignores any more abstract considerations that would pull him out of engagement with the tool. For example, he dismisses the budget popup as fast as he can, so that he can get back to bulldozing buildings.
Now he knows what the tool does, he may as well just trash the current city and start a new one where he can use his knowledge in a more productive way. His explorations are useless in the context of the current game, but will give him raw material to work with later in a different city, where he might need a fancy junction or an overhead highway.
I first wrote a version of this for the newsletter last year. Reading it back this time, I noticed something else: Jimmy’s explorations are a great example of bricolage. I first learned this term from Sherry Turkle and Seymour Papert’s Epistemological Pluralism and the Revaluation of the Concrete, which I talked about here once before. In Turkle and Papert’s sense of the word, adapted from Lévi-Strauss, bricolage is a particular style of programming computers:
Bricoleurs construct theories by arranging and rearranging, by negotiating and renegotiating with a set of well-known materials.
… They are not drawn to structured programming; their work at the computer is marked by a desire to play with the elements of the program, to move them around almost as though they were material elements — the words in a sentence, the notes on a keyboard, the elements of a collage.
… bricoleur programmers, like Levi-Strauss’s bricoleur scientists, prefer negotiation and rearrangement of their materials. The bricoleur resembles the painter who stands back between brushstrokes, looks at the canvas, and only after this contemplation, decides what to do next. Bricoleurs use a mastery of associations and interactions. For planners, mistakes are missteps; bricoleurs use a navigation of midcourse corrections. For planners, a program is an instrument for premeditated control; bricoleurs have goals but set out to realize them in the spirit of a collaborative venture with the machine. For planners, getting a program to work is like ”saying one’s piece”; for bricoleurs, it is more like a conversation than a monologue.
One example in the paper is ‘Alex, 9 years old, a classic bricoleur’, who comes up with a clever repurposing of a Lego motor:
When working with Lego materials and motors, most children make a robot walk by attaching wheels to a motor that makes them turn. They are seeing the wheels and the motor through abstract concepts of the way they work: the wheels roll, the motor turns. Alex goes a different route. He looks at the objects more concretely; that is, without the filter of abstractions. He turns the Lego wheels on their sides to make flat ”shoes” for his robot and harnesses one of the motor’s most concrete features: the fact that it vibrates. As anyone who has worked with machinery knows, when a machine vibrates it tends to ”travel,” something normally to be avoided. When Alex ran into this phenomenon, his response was ingenious. He doesn’t use the motor to make anything ”turn,” but to make his robot (greatly stabilized by its flat ”wheel shoes”) vibrate and thus ”travel.” When Alex programs, he likes to keep things similarly concrete.
This is a similar mode of investigation to Jimmy’s. He’s seeing what kinds of things the motor and wheels can do, as part of an ongoing conversation with his materials, without immediately subsuming them to the normal logic of motors and wheels. In the process, he’s discovered something he wouldn’t have done if he’d just made a normal car. Similarly, Jimmy will have more freedom with the highway tool in the future than if he followed all the rules about budgets and city planning before he understood everything that it can do.
Alternatively, maybe I’m massively overanalysing this short contextless stretch of dialogue, and Jimmy just likes making stuff explode. Maybe he just keeps making and trashing a series of similarly broken cities for the sheer fun of it. Either way, mashing these two papers together has been a fun piece of bricolage of my own.
I wrote a version of this for the newsletter last year and decided to expand it out into a post. I’ve also added in a few thoughts based on an email conversation about the book with David MacIver a while back, and a few more thoughts inspired by a more recent post.
This wasn’t a book I’d been planning to read. In fact, I’d never even heard of it. I was just working in the library one day, and the cover caught my attention. It’s been given the subtitle ‘How To Flourish in an Age of Distraction’, and it looks like the publisher has tried to sell it as a sort of book-length version of one of those hand-wringers in The Atlantic about how we all gawp at our phones too much. I’m a sucker for those. This is a bit pathetic, I know, but there are certain repetitive journalist topics that I like simply because they’re repetitive, and the repetition has given them a comfortingly familiar texture, and ‘we all gawp at our phones too much’ is one of them. So I had a flick through.
The actual contents turned out to be less comfortingly familiar, but a lot more interesting. Actually, I recognised a lot of it! Merleau-Ponty on perception… Polanyi on tacit knowledge… lots of references to embodied cognition. This looks like my part of the internet! I hadn’t seen this set of ideas collected together in a pop book before, so I thought I’d better read it.
The author, Matthew Crawford, previously wrote a book called Shop Class as Soulcraft, on the advantages of working in the skilled trades. In this one he zooms out further to take a more philosophical look at why working with your hands with real objects is so satisfying. There’s a lot of good stuff in the book, which I’ll get to in a minute. I still struggled to warm to it, though, despite it being full of topics I’m really interested in. Some of this was just a tone thing. He writes in a style I’ve seen before and don’t get on with – I’m not American and can’t place it very exactly, but I think it’s something like ‘mild social conservatism repackaged for the educated coastal elite’. According to Wikipedia he writes for something called The New Atlantis, which may be one of the places this style comes from. I don’t know. There’s also a more generic ‘get off my lawn’ thing going on, where we are treated to lots of anecdotes about how the airport is too loud and there’s too much advertising and children’s TV is terrible and he can’t change the music in the gym.
The oddest thing for me was his choice of pronouns for the various example characters he makes up throughout the book to illustrate his points. This is always a pain because every option seems to annoy someone, but using ‘he’ consistently would at least have fitted the grumpy old man image quite well. Maybe his editor told him not to do that, though, or maybe he has some kind of point to make, because what he actually decided to do was use a mix of ‘he’ and ‘she’, but only ever pick the pronoun that fits traditional expectations of what gender the character would be. Because he mostly talks about traditionally masculine occupations, this means that maybe 80% of the characters, and almost all of the sympathetic ones, are male – all the hockey players, carpenters, short-order cooks and motorcycle mechanics he’s using to demonstrate skilled interaction with the environment. The only female characters I remember are a gambling addict, a New Age self-help bore, a disapproving old lady, and one musician who actually gets to embody the positive qualities he’s interested in. It’s just weird, and I found it very distracting.
OK, that’s all my whining about tone done. I have some more substantive criticisms later, but first I want to talk about some of the things I actually liked. Underneath all the owning-the-libs surface posturing he’s making a subtle and compelling argument. Unpacking this argument is quite a delicate business, and I kind of understand why the publishers just rounded it off to the gawping at phones thing.
Violins and slot machines
Earlier, I said that Crawford is out to explain ‘why working with your hands with real objects is so satisfying’, but actually he’s going for something a little more nuanced and specific than that. Not all real objects are satisfying to work with. Here’s his discussion of one that isn’t, at least for an adult:
When my oldest daughter was a toddler, we had a Leap Frog Learning Table in the house. Each side of the square table presents some sort of electromechanical enticement. There are four bulbous piano keys; a violin-looking thing that is played by moving a slide rigidly located on a track; a transparent cylinder full of beads mounted on an axle such that any attempt, no matter how oblique, makes it rotate; and a booklike thing with two thick plastic pages in it.
… Turning off the Leap Frog Learning Table would produce rage and hysterics in my daughter… the device seemed to provide not just stimulation but the experience of agency (of a sort). By hitting buttons, the toddler can reliably make something happen.
The Leap Frog Learning Table is designed to take very complicated data from the environment – toddlers bashing the thing any old how, at any old speed or angle – and funnel this mess into a very small number of possible outcomes. The ‘violin-looking thing’ has only one translational degree of freedom, along a single track. Similarly, the cylinder can only be rotated around one axis. So the toddler’s glancing swipe at the cylinder is not dissipated into uselessness, but instead produces a satisfying rolling motion – they get to ‘make something happen’.
This is extremely satisfying for a toddler, who struggles to manipulate the more resistant objects of the adult world. But there is very little opportunity for growth or mastery there. The toddler has already mastered the toy to almost its full extent. Hitting the cylinder more accurately might make it spin for a bit longer, but it’s still pretty much the same motion.
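The funnelling described above can be sketched as a projection onto a single axis. This is a toy model of my own, not anything from the book: whatever direction the toddler swipes, only the component along the toy’s one degree of freedom survives, so almost any input ‘makes something happen’.

```python
import math

def toy_response(swipe_dx, swipe_dy, track_angle_deg=0.0):
    """Project a messy 2D swipe onto the toy's single degree of freedom.

    Hypothetical illustration: the slide can only move along its track,
    so only the swipe's component along the track axis produces motion.
    """
    angle = math.radians(track_angle_deg)
    tx, ty = math.cos(angle), math.sin(angle)   # unit vector along the track
    return swipe_dx * tx + swipe_dy * ty        # displacement along the track

# A glancing, badly aimed swipe still moves the slide a little,
# instead of being dissipated into uselessness:
print(toy_response(0.2, 3.0))
```

The point of the toy (in both senses) is that the projection never returns ‘nothing happened’: every input collapses to motion along the one permitted axis.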
At the opposite end of the spectrum would be a real violin. I play the violin, and you could describe it quite well as a machine for amplifying tiny changes in arm and hand movements into very different sounds (mostly horrible ones, which is why beginners sound so awful). There are a large number of degrees of freedom – the movements of each jointed finger in three-dimensional space, including those on the bow hand, contribute to the final sound. Also, almost all of them are continuous degrees of freedom. There are no keys or frets to accommodate small mistakes in positioning.
Crawford argues that although tools and instruments that transmit this kind of rich information about the world can be very frustrating in the short term, they also have enormous potential for satisfaction in the long term as you come to master them. Whereas objects like the Leap Frog Learning Table have comparatively little to offer if you’re not two years old:
Variations in how you hit the button on a Leap Frog Learning Table or a slot machine do not similarly produce variations in the effect you produce. There is a closed loop between your action and the effect that you perceive, but the bandwidth of variability has been collapsed… You are neither learning something about the world, as the blind man does with his cane, nor acquiring something that could properly be called a skill. Rather, you are acting within the perception-action circuits encoded in the narrow affordances of the game, learned in a few trials. This is a kind of autistic pseudo-action, based on exact repetition, and the feeling of efficacy that it offers evidently holds great appeal.
(As a warning, Crawford consistently uses ‘autistic’ in this derogatory sense throughout the book; if that sounds unpleasant, steer clear.)
Objects can also be actively deceptive, rather than just tediously simple. In the same chapter there’s some interesting material on gambling machines, and the tricks used to make them addictive. Apparently one of the big innovations here was the idea of ‘virtual reel mapping’. Old-style mechanical fruit machines would have three actual reels with images on them that you needed to match up, and just looking at the machine would give you a rough indication of the total number of images on the reel, and therefore the rough odds of matching them up and winning.
Early computerised machines followed this pattern, but soon the machine designers realised that there no longer needed to be this close coupling between the machine’s internal representation and what the gambler sees. So the newer machines would have a much larger number of virtual reel positions that are mostly mapped to losing combinations, with a large percentage of these combinations being ‘near misses’ to make the machine more addictive. The machine still looks simple, like the toddler’s toy, but the intuitive sense of odds you get from watching the machine becomes completely useless, because the internal logic of the machine is now doing something very complicated that the screen actively hides from you. A machine like this is actively ‘out to get you’, alienating you from the evidence of your own eyes.
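The decoupling works something like the following sketch. All the numbers here are invented for illustration (real machines use much larger virtual reels): the gambler sees a reel suggesting one-in-ten odds per symbol, while the machine actually draws from many more virtual stops, only one of which pays out, with a big cluster mapped to near misses.

```python
import random

# Hypothetical virtual reel: the visible reel shows 10 symbols, suggesting
# a 1-in-10 chance of landing on the jackpot symbol. Internally there are
# 64 virtual stops, only one of which maps to the jackpot; many of the
# rest land on the symbols just next to it, manufacturing 'near misses'.
VISIBLE_SYMBOLS = 10
VIRTUAL_STOPS = 64

virtual_map = {0: "JACKPOT"}
for stop in range(1, VIRTUAL_STOPS):
    virtual_map[stop] = "NEAR_MISS" if stop <= 20 else "OTHER"

def spin(rng):
    return virtual_map[rng.randrange(VIRTUAL_STOPS)]

rng = random.Random(0)
results = [spin(rng) for _ in range(10_000)]
print("apparent jackpot odds:", 1 / VISIBLE_SYMBOLS)   # what watching the reel suggests
print("actual jackpot odds:  ", results.count("JACKPOT") / len(results))
```

The screen still shows the simple ten-symbol reel, but the odds you’d infer from watching it are several times too generous, and a third of the losing spins look like they only just missed.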
Apples and sheep
Before reading the book I’d never really thought carefully about any of these properties of objects. For a while after reading it, I noticed them everywhere. Here’s one (kind of silly) example.
Shortly after reading the book I was visiting my family, and came across this wooden puzzle my aunt made:
I had a phase when I was ten or so where I was completely obsessed with this puzzle. Looking back, it’s not obvious why. It’s pretty simple and looks like the kind of thing that would briefly entertain much younger children. I was a weird kid and also didn’t have a PlayStation – maybe that’s explanation enough? But I didn’t have some kind of Victorian childhood where I was happy with an orange and a wooden toy at Christmas. I had access to plenty of plastic and electronic nineties tat that was more obviously fun.
I sat down for half an hour to play with this thing and try and remember what the appeal was. The main thing is that it turns out to be way more controllable than you might expect. The basic aim of the puzzle is just to get the ball bearings in the holes in any old order. This is the game that stops being particularly rewarding once you’re over the age of seven. But it’s actually possible to learn to isolate individual ball bearings by bashing them against the sides until one separates off, and then tilt the thing very precisely to steer one individually into a specific hole. That gives you a lot more options for variants on the basic game. For example, you can fill in the holes in a spiral pattern starting from the middle. Or construct a ‘fence’ of outer apples with a single missing ‘gate’ apple, steer two apples into the central pen (these apples are now sheep), and then close the gate with the last one.
The other interesting feature is that because this is a homemade game, the holes are not uniformly deep. The one in the top right is noticeably shallower than the others, and the ball bearing in this slot can be dislodged fairly easily while keeping the other nine in their place. This gives the potential for quite complicated dynamics of knocking down specific apples, and then steering other ones back in.
Still an odd way to have spent my time! But I can at least roughly understand why. The apple puzzle is less like the Leap Frog Learning Table than you might expect, and so the game can reward a surprisingly high level of skill. Part of this is from the continuous degrees of freedom you have in tilting the board, but the cool thing is that a lot of it comes from unintentional parts of the physical design. My aunt made the basic puzzle for small children, and the more complicated puzzles happened to be hidden within it.
The ability to dislodge the top right apple is not ‘supposed’ to be part of the game at all – an abstract version you might code up would have identical holes. But the world is going about its usual business of being incorrigibly plural, and there is just way more of it than any one abstract ruleset needs. The variation in the holes allows some of that complexity to accidentally leak in, breaking the simple game out into a much richer one.
Pebbles and birdsong
Now for the criticism part. I think there’s a real deficiency in the book that goes deeper than the tone issues I pointed out at the start. Crawford is insightful in his discussions of the kind of complexity that many handcrafted objects exhibit, that’s often standardised away in the modern world. But in his enthusiasm for people doing real things with real tools he’s forgotten the advantages of the systematised, ‘autistic’ world he dislikes. Funnelling messy reality into repeatable categories is how we get shit done at scale. It’s not just some unpleasant feature of modernity, either. Even something as simple as counting with pebbles relies on this:
To make the method work, you must choose bits-of-rock of roughly even sizes, so you can distinguish them from littler bits—stray grains of sand or dust in the bucket—that don’t count. How even? Even enough that you can make a reliable-enough judgement.
The counting procedure abstracts away the vivid specificity of the individual pebbles, and reduces them to simplistic interchangeable tokens. But there’s not much point in complaining about this. You need to do this to get the job done! And you can always break them back out into individuality later on if you want to do something else, like paint a still life of them.
I’m finding myself going back yet again to Christopher Norris’s talk on Derrida, which I discussed in my braindump here. (I’m going to repeat myself a bit in the next section. This was the most thought-provoking single thing I read last year, and I’m still working through the implications, so everything seems to lead back there at the moment.) Derrida picks apart some similar arguments by Rousseau, who was concerned with the bad side of systematisation in music:
One way of looking at Rousseau’s ideas about the melody/harmony dualism is to view them as the working-out of a tiff he was having with Rameau. Thus he says that the French music of his day is much too elaborate, ingenious, complex, ‘civilized’ in the bad (artificial) sense — it’s all clogged up with complicated contrapuntal lines, whereas the Italian music of the time is heartfelt, passionate, authentic, spontaneous, full of intense vocal gestures. It still has a singing line, it’s still intensely melodious, and it’s not yet encumbered with all those elaborate harmonies.
Crawford is advocating for something close to Rousseau’s pure romanticism. He brings along more recent and sophisticated arguments from phenomenology and embodied cognition, but he’s still very much on the side of spontaneity over structure. And I think he’s still vulnerable to the same arguments that Derrida was able to use against Rousseau. Norris explains it as follows:
… Rousseau gets into a real argumentative pickle when he says – lays it down as a matter of self-evident truth – that all music is human music. Bird-song just doesn’t count, he says, since it is merely an expression of animal need – of instinctual need entirely devoid of expressive or passional desire – and is hence not to be considered ‘musical’ in the proper sense of that term. Yet you would think that, given his preference for nature above culture, melody above harmony, authentic (spontaneous) above artificial (‘civilized’) modes of expression, and so forth, Rousseau should be compelled – by the logic of his own argument – to accord bird-song a privileged place vis-à-vis the decadent productions of human musical culture. However Rousseau just lays it down in a stipulative way that bird-song is not music and that only human beings are capable of producing music. And so it turns out, contrary to Rousseau’s express argumentative intent, that the supplement has somehow to be thought of as always already there at the origin, just as harmony is always already implicit in melody, and writing – or the possibility of writing – always already implicit in the nature of spoken language.
Derrida is pointing out that human music always has a structured component. We don’t just pour out an unmarked torrent of frequencies. We define repeatable lengths of notes, and precise intervals between pitches. (The evolution of these is a complicated engineering story in itself.) This doesn’t make music ‘inauthentic’ or ‘artificial’ in itself. It’s a necessary feature of anything we’d define as music.
I’d have been much happier with the book if it had some understanding of this interaction – ‘yes, structure is important, but I think we have too much of it, and here’s why’. But all we get is the romantic side. As with Rousseau’s romanticism, this tips over all too easily into pure reactionary nostalgia for an imagined golden age, and then we have to listen to yet another anecdote about how everything in the modern world is terrible. It’s not the eighteenth century any more, and we can do better now. And for all its genuine insight, this book mostly just doesn’t.
Last month I finally got round to reading The Eureka Factor by John Kounios and Mark Beeman, a popular book summarising research on ‘insightful’ thinking. I first mentioned it a couple of years ago after I’d read a short summary article, when I realised it was directly relevant to my recurring ‘two types of mathematician’ obsession:
The book is not focussed on maths – it’s a general interest book about problem solving and creativity in any domain. But it looks like it has a very similar way of splitting problem solvers into two groups, ‘insightfuls’ and ‘analysts’. ‘Analysts’ follow a linear, methodical approach to work through a problem step by step. Importantly, they also have cognitive access to those steps – if they’re asked what they did to solve the problem, they can reconstruct the argument.
‘Insightfuls’ have no such access to the way they solved the problem. Instead, a solution just ‘pops into their heads’.
Of course, nobody is really a pure ‘insightful’ or ‘analyst’. And most significant problems demand a mixed strategy. But it does seem like many people have a tendency towards one or the other.
I wasn’t too sure what I was getting into. The replication crisis has made me hyperaware of the dangers of uncritically accepting any results in psychology, and I’m way too ignorant of the field to have a good sense for which results still look plausible. However, the book turned out to be so extraordinarily Relevant To My Interests that I couldn’t resist writing up a review anyway.
The final chapters had a few examples along the lines of ‘[weak environmental effect] primes people to be more/less insightful’, and I know enough to stay away from those, but the earlier parts look somewhat more solid to me. I haven’t made much effort to trace back references, though, and I could easily still be being too credulous.
(I didn’t worry so much about replication with my previous post on the Cognitive Reflection Test. Getting the bat and ball question wrong is hardly the kind of weak effect that you need a sensitive statistical instrument to detect. It’s almost impossible to stop people getting it wrong! I did steer clear of any more dubious priming-style results, though, like the claim that people do better on the CRT when reading it ‘in a disfluent font’.)
Insight and intuition
First, it’s worth getting clear on exactly what Kounios and Beeman mean by ‘insight’. As they use it, insight is a specific type of creative thinking, which they define more generally as ‘the ability to reinterpret something by breaking it down into its elements and recombining these elements in a surprising way to achieve some goal.’ Insight is distinguished by its suddenness and lack of conscious control:
When this kind of creative recombination takes place in an instant, it’s an insight. But recombination can also result from the more gradual, conscious process that cognitive psychologists call “analytic” thought. This involves methodically and deliberately considering many possibilities until you find the solution. For example, when you’re playing a game of Scrabble, you must construct words from sets of letters. When you look at the set of letters “A-E-H-I-P-N-Y-P” and suddenly realize that they can form the word “EPIPHANY,” then that would be an insight. When you systematically try out different combinations of the letters until you find the word, that’s analysis.
Insights tend to have a few other features in common. Solving a problem by insight is normally very satisfying: the insight comes into consciousness along with a small jolt of positive affect. The insight itself is usually preceded by a longer period of more effortful thought about the problem. Sometimes this takes place just before the moment of insight, while at other times there is an ‘incubation’ phase, where the solution pops into your head while you’ve taken a break from deliberately thinking about it.
I’m not really going to get into this part in my review, but the related word ‘intuition’ is also used in an interestingly specific sense in the book, to describe the sense that a new idea is lurking beneath the surface, but is not consciously accessible yet. Intuitions often precede an insight, but have a different feel to the insight itself:
This puzzling phenomenon has a strange subjective quality. It feels like an idea is about to burst into your consciousness, almost as though you’re about to sneeze. Cognitive psychologists call this experience “intuition,” meaning an awareness of the presence of information in the unconscious mind — a new idea, solution, or perspective — without awareness of the information itself, at least until it pops into consciousness.
To study insight, psychologists need to come up with problems that reliably trigger an insight solution. One classic example discussed in The Eureka Factor is the Nine Dot Problem, where you are asked to connect the following set of black dots using only four straight lines, without taking your pen off the page:
If you’ve somehow avoided seeing this puzzle before, think about it for a while first. In the absence of any kind of built-in spoiler blocks for wordpress.com sites, I’ll insert a bunch of blank space here so that you hopefully have to scroll down off your current screen to see my discussion of the solution:
If you didn’t figure it out, a solution can be found in the Wikipedia article on insight problems here. It’ll probably look irritatingly obvious once you see it. The key feature of the solution is that the lines you draw have to extend outside the confines of the square of dots you start with (thus spawning a whole subgenre of annoying business literature on ‘thinking outside the box’). Nothing in the rules forbids this, but the setup focusses most people’s attention on the grid itself, and breaking out of this mindset requires a kind of reframing, a throwing away of artificially imposed constraints. This is a common characteristic of insight problems.
This characteristic also makes insight hard to test. For testing purposes, it’s useful to have a large stock of similar puzzles in hand. But a good reframing like the one in the Nine Dot Problem tends to be a bit of a one-off: once you’ve had the idea of extending the lines outside the box, it applies trivially to all similar puzzles, and not at all to other types of puzzle.
(I talked about something similar in my last post, on the Cognitive Reflection Test. The test was inspired by one good puzzle, the ‘bat and ball problem’, and adds two other questions that were apparently picked to be similar. Five thousand words and many comments later, it’s not obvious to me or most of the other commenters that these three problems form any kind of natural set at all.)
Kounios and Beeman discuss several of these eye-catching ‘one-off’ problems in the book, but their own research is focussed on a more standardisable kind of puzzle, the Remote Associates Test. This test gives you three words, such as ‘pine’, ‘crab’ and ‘sauce’,
and asks you to find the common word that links them. The authors claim that these can be solved either with or without insight, and asked participants to self-categorise their responses as either fitting in the ‘insightful’ or ‘analytic’ categories:
The analytic approach is to consciously search through the possibilities and try out potential answers. For example, start with “pine.” Imagine yourself thinking: What goes with “pine”? Perhaps “tree”? “Pine tree” works. “Crab tree”? Hmmm … maybe. “Tree sauce”? No. Have to try something else. How about “cake”? “Crab cake” works. “Cake sauce” is a bit of a reach but might be acceptable. However, “pine cake” and “cake pine” definitely don’t work. What else? How about “crabgrass”? That works. But “pine grass”? Not sure. Perhaps there is such a thing. But “sauce grass” and “grass sauce” are definitely out. What else goes with “sauce”? How about “applesauce”? That’s good. “Pineapple” and “crab apple” also work. The answer is “apple”!
This is analytical thought: a deliberate, methodical, conscious search through the possible combinations. But this isn’t the only way to come up with the solution. Perhaps you’re trying out possibilities and get stuck or even draw a blank. And then, “Aha! Apple” suddenly pops into your awareness. That’s what would happen if you solved the problem by insight. The solution just occurs to you and doesn’t seem to be a direct product of your ongoing stream of thought.
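The analytic strategy in that passage is mechanical enough to write down as a brute-force search. This is just my own sketch, with a tiny made-up compound list standing in for a real dictionary:

```python
# Toy sketch of the 'analytic' strategy for the Remote Associates Test:
# methodically test candidate words against all three cues. The compound
# list is a hypothetical stand-in for a real dictionary.
COMPOUNDS = {
    "pine tree", "pine cone", "pineapple",
    "crab cake", "crab apple", "crabgrass",
    "applesauce", "soy sauce", "sauce pan",
}

def forms_compound(candidate, cue):
    # Accept the pairing in either order, spaced or fused.
    return (f"{cue} {candidate}" in COMPOUNDS or f"{candidate} {cue}" in COMPOUNDS
            or f"{cue}{candidate}" in COMPOUNDS or f"{candidate}{cue}" in COMPOUNDS)

def solve_rat(cues, candidates):
    # Deliberate, exhaustive search: the opposite of an insight flash.
    for word in candidates:
        if all(forms_compound(word, cue) for cue in cues):
            return word
    return None

print(solve_rat(["pine", "crab", "sauce"], ["tree", "cake", "grass", "apple"]))  # apple
```

What the loop can’t capture, of course, is the insight route, where ‘apple’ arrives without any traceable search at all.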
This categorisation seems suspiciously neat, and if I rely on my own introspection for solving one of these (which is obviously dubious itself) it feels like more of a mix. I’ll often generate some verbal noise about cakes and trees that sounds vaguely like I’m doing something systematic, but the main business of solving the thing seems to be going on nonverbally elsewhere. But I do think there’s something there – the answer can be very immediate and ‘poppy’, or it can surface after a longer and more accessible process of trying plausible words. This was tested in a more objective way by seeing what people do when they don’t come up with the answer:
Insightfuls made more “errors of omission.” When waiting for an insight that hadn’t yet arrived, they had nothing to offer in its place. So when the insight didn’t arrive in time, they let the clock run out without having made a guess. In contrast, Analysts made more “errors of commission.” They rarely timed out, but instead guessed – sometimes correctly – by offering the potential solution they had been consciously thinking about when their time was almost up.
Kounios and Beeman’s research focussed on finding neural correlates of the ‘aha’ moment of insight, using a combination of an EEG test to pinpoint the time of the insight, and fMRI scanning to locate the brain region:
We found that at the moment a solution pops into someone’s awareness as an insight, a sudden burst of high-frequency EEG activity known as “gamma waves” can be picked up by electrodes just above the right ear. (Gamma waves represent cognitive processing in the brain, such as paying attention to something or linking together different pieces of information.) We were amazed at the abruptness of this burst of activity—just what one would expect from a sudden insight. Functional magnetic resonance imaging showed a corresponding increase in blood flow under these electrodes in a part of the brain’s right temporal lobe called the “anterior superior temporal gyrus” (see figure 5.2), an area that is involved in making connections between distantly related ideas, as in jokes and metaphors. This activity was absent for analytic solutions.
So we had found a neural signature of the aha moment: a burst of activity in the brain’s right hemisphere.
I’m not sure how settled this is, though. I haven’t tried to do a proper search of the literature, but certainly a review from 2010 describes the situation as very much in flux:
A recent surge of interest into the neural underpinnings of creative behavior has produced a banquet of data that is tantalizing but, considered as a whole, deeply self-contradictory.
(The book was published somewhat later, in 2015, but mostly cites research from prior to this review, such as this paper.)
As an outsider it’s going to be pretty hard for me to judge this without spending a lot more time than I really want to right now. However, regardless of how this holds up, I was really interested in the authors’ discussion of why a right-hemisphere neural correlate of insight would make sense.
Insight and context
One of the authors, Mark Beeman, had previously studied language deficits in people who had suffered brain damage to the right hemisphere. One such patient was the trial attorney D.B.:
What made D.B. “lucky” was that the stroke had damaged his right hemisphere rather than his left. Had the stroke occurred in the mirror-image left-hemisphere region, he would have experienced Wernicke’s aphasia, a profound deficit of language comprehension. In the worst cases, people with Wernicke’s aphasia may be completely unable to understand written or spoken language.
Nevertheless, D.B. didn’t feel lucky. He may have been better off than if he’d had a left-hemisphere stroke, but he felt that his language ability was far from normal. He said that he “couldn’t keep up” with conversations or stories the way he used to. He felt impaired enough that he had stopped litigating trials—he thought that it would have been a disservice to his clients to continue to represent them in court.
D.B. and the other patients were able to understand the straightforward meanings of words and the literal meanings of sentences. Even so, they complained about vague difficulties with language. They failed to grasp the gist of stories or were unable to follow multiple-character or multiple-plot stories, movies, or television shows. Many didn’t get jokes. Sarcasm and irony left them blank with incomprehension. They could sometimes muddle along without these abilities, but whenever things became subtle or implicit, they were lost.
An example of the kind of problem D.B. struggled with is the following:
Saturday, Joan went to the park by the lake. She was walking barefoot in the shallow water, not knowing that there was glass nearby. Suddenly, she grabbed her foot in pain and called for help, and the lifeguard came running.
If D.B. was given a statement about something that occurred explicitly in the text, such as ‘Joan went to the park on Saturday’, he could say whether it was true or false with no problems at all. In fact, he did better than all of the control subjects on these sorts of explicit questions. But if he was instead presented with a statement like ‘Joan cut her foot’, where some of the facts are left implicit, he was unable to answer.
This was interesting to me, because it seems so directly relevant to the discussion last year on ‘cognitive decoupling’. This is a term I’d picked up from Sarah Constantin, who herself got it from Keith Stanovich:
Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence. Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.
The patients in Beeman’s study have so much difficulty with contextualisation that they struggle with anything at all that is left implicit, even straightforward inferences like ‘Joan cut her foot’. This appears to match with other evidence from visual half-field studies, where subjects are presented with words on either the right or left half of the visual field. Those on the left half will go first to the right hemisphere, so that the right hemisphere gets a head start on interpreting the stimulus. This shows a similar difference between hemispheres:
The left hemisphere is sharp, focused, and discriminating. When a word is presented to the left hemisphere, the meaning of that word is activated along with the meanings of a few closely related words. For example, when the word “table” is presented to the left hemisphere, this might strongly energize the concepts “chair” and “kitchen,” the usual suspects, so to speak. In contrast, the right hemisphere is broad, fuzzy, and promiscuously inclusive. When “table” is presented to the right hemisphere, a larger number of remotely related words are weakly invoked. For example, “table” might activate distant associations such as “water” (for underground water table), “payment” (for paying under the table), “number” (for a table of numbers), and so forth.
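The strong/weak contrast in that passage could be sketched as a toy threshold model. All the words and weights below are made up for illustration, not taken from the book:

```python
# Toy model: each hemisphere activates associates of a word, but with a
# different cutoff. The association strengths here are entirely invented.
ASSOCIATIONS = {
    "table": {"chair": 0.9, "kitchen": 0.8,
              "water": 0.2, "payment": 0.15, "number": 0.1},
}

def activate(word, threshold):
    """Return the associates whose strength clears the given threshold."""
    return {w for w, strength in ASSOCIATIONS[word].items() if strength >= threshold}

left = activate("table", threshold=0.5)    # sharp and focused: close associates only
right = activate("table", threshold=0.05)  # fuzzy and inclusive: remote ones too
print(sorted(left))           # ['chair', 'kitchen']
print(sorted(right - left))   # ['number', 'payment', 'water']
```

On this picture the right hemisphere’s low threshold is what keeps ‘water table’ and ‘under the table’ weakly alive alongside the obvious ‘chair’.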
Why would picking up on these weak associations be relevant to insight? The story seems to be that this tangle of secondary meanings – the ‘Lovecraftian penumbra of monstrous shadow phalanges’ – works to pull your attention away from the obvious interpretation you’re stuck with, helping you to find a clever new reframing of the problem.
This makes a lot of sense to me as a rough outline. In my own experience at least, the kind of thinking that is likely to lead to an insight experience feels softer and more diffuse than the more ‘analytic’ kind, more a process of sort of rolling the ideas around gently in your head and seeing if something clicks than a really focussed investigation of the problem. ‘Thinking too hard’ tends to break the spell. This fits well with the idea that insights are triggered by activation of weak associations.
There’s a lot of other interesting material in the book about the rest of the insight process, including the incubation period leading up to an insight flash, and the phenomenon of ‘intuitions’, where you feel that an insight is on its way but you don’t know what it is yet. I’ll never get through this review if I try to cover all of that, so instead I’m going to finish up with a couple of weak associations of my own that got activated while reading the book.
I’ve been getting increasingly dissatisfied with the way dual process theories split cognition into a fast/automatic/intuitive ‘System 1’ and a slow/effortful/systematic ‘System 2’. System 1 in particular has started to look to me like an amorphous grab bag of all kinds of things that would be better separated out.
The Eureka Factor has pushed this a little further, by bringing out a distinction between two things that normally get lumped under System 1 but are actually very different. One obvious type of System 1-ish behaviour is routine action, the way you go about tasks you have done many times before, like making a sandwich or walking to work. These kinds of activities require very little explicit thought and generally ‘just happen’ in response to cues in the environment.
The kind of ‘insightful’ thinking discussed in The Eureka Factor would also normally get classed under System 1: it’s not very systematic and involves a fast, opaque process where the answer just pops into your head without much explanation. But it’s also very different to routine action. It involves deliberately choosing to think about a new situation, rather than one you have seen many times before, and a successful insight gives you a qualitatively new kind of understanding. The insight flash itself is a very noticeable, enjoyable feature of your conscious attention, rather than the effortless, unexamined state of absorbed action.
You seem to be lumping “flashes of insight” in with “effortless flow-state”. I don’t think they’re the same. For one thing, inspiration generally comes in bursts, while flow-states can persist for a while (driving on a highway, playing the piano, etc.) Definitely, “flashes of insight” aren’t the same type of thought as “effortful attention” — insight feels easy, instant, and unforced. But they might be their own, unique category of thought. Still working out my ontology here.
I’d sort of had this at the back of my head since then, but the book has really brought out the distinction clearly. I’m sure these aren’t the only types of thinking getting shoved into the System 1 category, and I get the sense that there’s a lot more splitting out that I need to do.
I also thought about how the results in the book fit in with my perennial ‘two types of mathematician’ question. (This is a weird phenomenon I’ve noticed where a lot of mathematicians have written essays about how mathematicians can be divided into two groups; I’ve assembled a list of examples here.) ‘Analytic’ versus ‘insightful’ seems to be one of the distinctions between groups, at least. It seems relevant to Poincaré’s version, for instance:
The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance.
The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.
Just at this time, I left Caen, where I was living, to go on a geologic excursion under the auspices of the School of Mines. The incidents of the travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step, the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake, I verified the result at my leisure.
If the insight/analysis split is going to be relevant here, it would require that people favour either ‘analytic’ or ‘insight’ solutions as a general cognitive style, rather than switching between them freely depending on the problem. The authors do indeed claim that this is the case:
Most people can, and to some extent do, use both of these approaches. A pure type probably doesn’t exist; each person falls somewhere on an analytic-insightful continuum. Yet many—perhaps most—people tend to gravitate toward one of these styles, finding their particular approach to be more comfortable or natural.
This is based on their own research where they recorded participants’ self-reports of whether they were using an ‘insight’ or ‘analytic’ approach to solve anagrams, and compared them with EEG recordings of their resting state. They found a number of differences, including more right-hemisphere activity in the ‘insight’ group, and lower levels of communication between the frontal lobe and other parts of the brain, indicating a more disorderly thinking style with less top-down control. This may suggest more freedom to allow weak associations between thoughts to have a crack at the problem, without being overruled by the dominant interpretation.
Again, and you’re probably got very bored of this disclaimer, I have no idea how well the details of this will hold up. That’s true for pretty much every specific detail in the book that I’ve discussed here. Still, the link between insight and weak associations makes a lot of sense to me, and the overall picture certainly triggered some useful reframings. That seems very appropriate for a book about insight.
Almost everyone we ask reports an initial tendency to answer “10 cents” because the sum $1.10 separates naturally into $1 and 10 cents, and 10 cents is about the right magnitude. Many people yield to this immediate impulse. The surprisingly high rate of errors in this easy problem illustrates how lightly System 2 monitors the output of System 1: people are not accustomed to thinking hard, and are often content to trust a plausible judgment that quickly comes to mind.
In Thinking, Fast and Slow, the bat and ball problem is used as an introduction to the major theme of the book: the distinction between fluent, spontaneous, fast ‘System 1’ mental processes, and effortful, reflective and slow ‘System 2’ ones. The explicit moral is that we are too willing to lean on System 1, and this gets us into trouble:
The bat-and-ball problem is our first encounter with an observation that will be a recurrent theme of this book: many people are overconfident, prone to place too much faith in their intuitions. They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.
This story is very compelling in the case of the bat and ball problem. I got this problem wrong myself when I first saw it, and still find the intuitive-but-wrong answer very plausible looking. I have to consciously remind myself to apply some extra effort and get the correct answer.
However, this becomes more complicated when you start considering other tests of this fast-vs-slow distinction. Frederick later combined the bat and ball problem with two other questions to create the Cognitive Reflection Test:
(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days
These are designed to also have an ‘intuitive-but-wrong’ answer (100 minutes, 24 days), and an ‘effortful-but-right’ answer (5 minutes, 47 days). But this time I seem to be immune to the wrong answers, in a way that just doesn’t happen with the bat and ball:
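For what it’s worth, the reasoning behind the ‘effortful-but-right’ answers is easy to check mechanically. A quick Python sketch of my own (not anything from the CRT literature) spelling it out:

```python
# Widgets: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget per 5 minutes; scaling machines and widgets
# together leaves the time unchanged.
def widget_time(machines, widgets):
    rate_per_machine = 5 / (5 * 5)  # widgets per machine per minute
    return widgets / (machines * rate_per_machine)

# Lilypads: the patch doubles every day, so it covered half the lake
# exactly one doubling before it covered all of it.
def half_coverage_day(full_day):
    return full_day - 1

print(widget_time(100, 100))  # 5.0 minutes, not 100
print(half_coverage_day(48))  # 47 days, not 24
```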
I always have the same reaction, and I don’t know if it’s common or I’m just the lone idiot with this problem. The ‘obvious wrong answers’ for 2. and 3. are completely unappealing to me (I had to look up 3. to check what the obvious answer was supposed to be). Obviously the machine-widget ratio hasn’t changed, and obviously exponential growth works like exponential growth.
When I see 1., however, I always think ‘oh it’s that bastard bat and ball question again, I know the correct answer but cannot see it’. And I have to stare at it for a minute or so to work it out, slowed down dramatically by the fact that Obvious Wrong Answer is jumping up and down trying to distract me.
If this test was really testing my propensity for effortful thought over spontaneous intuition, I ought to score zero. I hate effortful thought! As it is, I score two out of three, because I’ve trained my intuitions nicely for ratios and exponential growth. The ‘intuitive’, ‘System 1’ answer that pops into my head is, in fact, the correct answer, and the supposedly ‘intuitive-but-wrong’ answers feel bad on a visceral level. (Why the hell would the lily pads take the same amount of time to cover the second half of the lake as the first half, when the rate of growth is increasing?)
The bat and ball still gets me, though. My gut hasn’t internalised anything useful, and it’s super keen on shouting out the wrong answer in a distracting way. My dislike for effortful thought is definitely a problem here.
I wanted to see if others had raised the same objection, so I started doing some research into the CRT. In the process I discovered a lot of follow-up work that makes the story much more complex and interesting.
I’ve come nowhere near to doing a proper literature review. Frederick’s original paper has been cited nearly 3000 times, and dredging through that for the good bits is a lot more work than I’m willing to put in. This is just a summary of the interesting stuff I found on my limited, partial dig through the literature.
Thinking, inherently fast and inherently slow
Frederick’s original Cognitive Reflection Test paper describes the System 1/System 2 divide in the following way:
Recognizing that the face of the person entering the classroom belongs to your math teacher involves System 1 processes — it occurs instantly and effortlessly and is unaffected by intellect, alertness, motivation or the difficulty of the math problem being attempted at the time. Conversely, finding √19163 to two decimal places without a calculator involves System 2 processes — mental operations requiring effort, motivation, concentration, and the execution of learned rules.
I find it interesting that he frames mental processes as being inherently effortless or effortful, independent of the person doing the thinking. This is not quite true even for the examples he gives — faceblind people and calculating prodigies exist.
This framing is important for interpreting the CRT. If the problem inherently has a wrong ‘System 1 solution’ and a correct ‘System 2 solution’, the CRT can work as intended, as an efficient tool to split people by their propensity to use one strategy or the other. If there are ‘System 1’ ways to get the correct answer, the whole thing gets much more muddled, and it’s hard to disentangle natural propensity to reflection from prior exposure to the right mathematical concepts.
My tentative guess is that the bat and ball problem is close to being this kind of efficient tool. Although in some ways it’s the simplest of the three problems, solving it in a ‘fast’, ‘intuitive’ way relies on seeing the problem in a way that most people’s education won’t have provided. (I think this is true, anyway – I’ll go into more detail later.) I suspect that this is less true for the other two problems – ratios and exponential growth are topics that a mathematical or scientific education is more likely to build intuition for.
(Aside: I’d like to know how these other two problems were chosen. The paper just states the following:
Motivated by this result [the answers to the bat and ball question], two other problems found to yield impulsive erroneous responses were included with the “bat and ball” problem to form a simple, three-item “Cognitive Reflection Test” (CRT), shown in Figure 1.
I have a vague suspicion that Frederick trawled through something like ‘The Bumper Book of Annoying Riddles’ to find some brainteasers that don’t require too much in the way of mathematical prerequisites. The lilypads one has a family resemblance to the classic grains-of-wheat-on-a-chessboard puzzle, for instance.)
However, I haven’t found any great evidence either way for this guess. The original paper doesn’t break down participants’ scores by question – it just gives mean scores on the test as a whole. I did however find this meta-analysis of 118 CRT studies, which shows that the bat and ball question is the most difficult on average – only 32% of all participants get it right, compared with 40% for the widgets and 48% for the lilypads. It also has the biggest jump in success rate when comparing university students with non-students. That looks like better mathematical education does help on the bat and ball, but it doesn’t clear up how it helps. It could improve participants’ ability to intuitively see the answer. Or it could improve ability to come up with an ‘unintuitive’ solution, like solving the corresponding simultaneous equations by a rote method.
What I’d really like is some insight into what individual people actually do when they try to solve the problems, rather than just this aggregate statistical information. I haven’t found exactly what I wanted, but I did turn up a few interesting studies on the way.
No, seriously, the answer isn’t ten cents
My favourite thing I found was this (apparently unpublished) ‘extremely rough draft’ by Meyer, Spunt and Frederick from 2013, revisiting the bat and ball problem. The intuitive-but-wrong answer turns out to be extremely sticky, and the paper is basically a series of increasingly desperate attempts to get people to actually think about the question.
One conjecture for what people are doing when they get this question wrong is the attribute substitution hypothesis. This was suggested early on by Kahneman and Frederick, and is a fancy way of saying that they are instead solving the following simpler problem:
(1) A bat and a ball cost $1.10 in total. The bat costs $1.00.
How much does the ball cost? _____ cents
Notice that this is missing the ‘more than the ball’ clause at the end, turning the question into a much simpler arithmetic problem. This simple problem does have ‘ten cents’ as the answer, so it’s very plausible that people are getting confused by it.
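The substituted problem and the real one are easy to contrast directly. A minimal sketch (my own illustration, not from Kahneman and Frederick’s paper):

```python
# Real problem: bat + ball = 110 cents and bat = ball + 100 cents,
# so substituting gives 2 * ball + 100 = 110.
def ball_price_real(total=110, difference=100):
    return (total - difference) / 2

# Substituted problem: the 'more than the ball' clause is dropped,
# so the bat simply costs 100 cents and the ball is the remainder.
def ball_price_substituted(total=110, bat=100):
    return total - bat

print(ball_price_real())         # 5.0 cents
print(ball_price_substituted())  # 10 cents
```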
Meyer, Spunt and Frederick tested this hypothesis by getting respondents to recall the problem from memory. This showed a clear difference: 94% of ‘five cent’ respondents could recall the correct question, but only 61% of ‘ten cent’ respondents. It’s possible that there is a different common cause of both the ‘ten cent’ response and misremembering the question, but it at least gives some support for the substitution hypothesis.
However, getting people to actually answer the question correctly was a much more difficult problem. First they tried bolding the words ‘more than the ball’ to make this clause more salient. This made surprisingly little impact: 29% of respondents solved it, compared with 24% for the original problem. Printing both versions was slightly more successful, bumping up the correct response to 35%, but it was still a small effect.
After this, they ditched subtlety and resorted to pasting these huge warnings above the question:
These were still only mildly effective, with the rate of correct solutions jumping from 45% to 50%. People just really like the answer ‘ten cents’, it seems.
At this point they completely gave up and just flat out added “HINT: 10 cents is not the answer.” This worked reasonably well, though there was still a hard core of 13% who persisted in writing down ‘ten cents’.
That’s where they left it. At this point there’s not really any room to escalate beyond confiscating the respondents’ pens and prefilling in the answer ‘five cents’, and I worry that somebody would still try and scratch in ‘ten cents’ in their own blood. The wrong answer is just incredibly compelling.
So, what are people doing when they solve this problem?
Unfortunately, it’s hard to tell from the published literature (or at least what I found of it). What I’d really like is lots of transcripts of individuals talking through their problem solving process. The closest I found was this paper by Szaszi et al, who did carry out this sort of interview, but it doesn’t include any examples of individual responses. Instead, it gives an aggregated overview of types of responses, which doesn’t go into the kind of detail I’d like.
Still, the examples given for their response categories give a few clues. The categories are:
Correct answer, correct start. Example given: ‘I see. This is an equation. Thus if the ball equals to x, the bat equals to x plus 1… ‘
Correct answer, incorrect start. Example: ‘I would say 10 cents… But this cannot be true as it does not sum up to €1.10…’
Incorrect answer, reflective, i.e. some effort was made to reconsider the answer given, even if it was ultimately incorrect. Example: ‘… but I’m not sure… If together they cost €1.10, and the bat costs €1 more than the ball… the solution should be 10 cents. I’m done.’
No reflection. Example: ‘Ok. I’m done.’
These demonstrate one way to reason your way to the correct answer (solve the simultaneous equations) and one way to be wrong (just blurt out the answer). They also demonstrate one way to recover from an incorrect solution (think about the answer you blurted out and see if it actually works). Still, it’s all rather abstract and high level.
How To Solve It
However, I did manage to stumble onto another source of insight. While researching the problem I came across this article from the online magazine of the Association for Psychological Science, which discusses a variant ‘Ford and Ferrari problem’. This is quite interesting in itself, but I was most excited by the comments section. Finally some examples of how the problem is solved in the wild!
The simplest ‘analytical’, ‘System 2’ solution is to rewrite the problem as two simultaneous linear equations and plug-and-chug your way to the correct answer. For example, writing x for the bat and y for the ball, in cents, we get the two equations

x + y = 110
x − y = 100

which we could then solve in various standard ways, e.g. by adding the two equations together to get

2x = 210, so x = 105,

which then gives

y = 110 − x = 5.
There are a couple of variants of this explained in the comments. It’s a very reliable way to tackle the problem: if you already know how to do this sort of rote method, there are no surprises. This sort of method would work for any similar problem involving linear equations.
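To make the ‘works for any similar problem’ point concrete, here’s the rote method as code (a sketch of my own, using Cramer’s rule for a general 2×2 system):

```python
from fractions import Fraction

# Rote elimination for any pair of linear equations
#   a1*x + b1*y = c1
#   a2*x + b2*y = c2
# via Cramer's rule; no insight required at any step.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# Bat and ball in cents: x + y = 110 and x - y = 100.
bat, ball = solve_2x2(1, 1, 110, 1, -1, 100)
print(bat, ball)  # 105 5
```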
However, it’s pretty obvious that a lot of people won’t have access to this method. Plenty of people noped out of mathematics long before they got to simultaneous equations, so they won’t be able to solve it this way. What might be less obvious, at least if you mostly live in a high-maths-ability bubble, is that these people may also be missing the sort of tacit mathematical background that would even allow them to frame the problem in a useful form in the first place.
That sounds a bit abstract, so let’s look at some responses (I’ll paste all these straight in, so any typos are in the original). First, we have these two confused commenters:
The thing is, why does the ball have to be $.05? It could have been .04 0r.03 and the bat would still cost more than $1.
This is exactly what bothers me and resulted in me wanting to look up the question online. On the quiz the other 2 questions were definitive. This one technically could have more than one answer so this is where phycologists actually mess up when trying to give us a trick question. The ball at .4 and the bat at 1.06 doesn’t break the rule either.
These commenters don’t automatically see two equations in two variables that together are enough to constrain the problem. Instead they seem to focus mainly on the first condition (adding up to $1.10) and just use the second one as a vague check at best (‘the bat would still cost more than $1’). This means that they are unable to immediately tell that the problem has a unique solution.
In response, another commenter, Tony, suggests a correct solution which is an interesting mix of writing the problem out formally and then figuring out the answer by trial and error:
I hear your pain. I feel as though psychologists and psychiatrists get together every now and then to prove how stoopid I am. However, after more than a little head scratching I’ve gained an understanding of this puzzle. It can be expressed as two facts and a question A=100+B and A+B=110, so B=? If B=2 then the solution would be 100+2+2 and A+B would be 104. If B=6 then the solution would be 100+6+6 and A+B would be 112. But as be KNOW A+B=110 the only number for B on it’s own is 5.
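Tony’s guess-and-check approach mechanizes straightforwardly. A short transcription of my own:

```python
# Tony-style trial and error: guess a whole-cent price b for the
# ball, set a = 100 + b, and check whether a + b hits the total.
def ball_by_trial(total=110, difference=100):
    for b in range(total + 1):
        if (difference + b) + b == total:
            return b
    return None

print(ball_by_trial())  # 5
```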
This suggests enough half-remembered mathematical knowledge to find a sensible abstract framing, but not enough to solve it the standard way.
Finally, commenter Marlo Eugene provides an ingenious way of solving the problem without writing all the algebraic steps out:
Linguistics makes all the difference. The conceptual emphasis seems to lie within the word MORE.
X + Y = $1.10. If X = $1 MORE then that leaves $0.10 TO WORK WITH rather than automatically assign to Y
So you divide the remainder equally (assuming negative values are disqualified) and get 0.05.
So even this small sample of comments suggests a wide diversity of problem-solving methods leading to the two common answers. Further, these solutions don’t all split neatly into ‘System 1’ ‘intuitive’ and ‘System 2’ ‘analytic’. Marlo Eugene’s solution, for instance, mixes writing the equations down in a formal way with a clever trick for just seeing the answer rather than solving them by rote.
I’d still appreciate more detailed transcripts, including the time taken to solve the problem. My suspicion is still that very few people solve this problem with a fast intuitive response, in the way that I rapidly see the correct answer to the lilypad question. Even the more ‘intuitive’ responses, like Marlo Eugene’s, seem to rely on some initial careful reflection and a good initial framing of the problem.
If I’m correct about this lack of fast responses, my tentative guess for the reason is that it has something to do with the way most of us learn simultaneous equations in school. We generally learn arithmetic as young children in a fairly concrete way, with the formal numerical problems supplemented with lots of specific examples of adding up apples and bananas and so forth.
But then, for some reason, this goes completely out of the window once the unknown quantity isn’t sitting on its own on one side of the equals sign. This is instead hived off into its own separate subject, called ‘algebra’, and the rules are taught much later in a much more formalised style, without much attempt to build up intuition first.
(One exception is the sort of puzzle sheets that are often given to young kids, where the unknowns are just empty boxes to be filled in. Sometimes you get 2+3=□, sometimes it’s 2+□=5, but either way you go about the same process of using your wits to figure out the answer. Then, for some reason I’ll never understand, the worksheets get put away and the poor kids don’t see the subject again until years later, when the box is now called x for some reason and you have to find the answer by defined rules. Anyway, this is a separate rant.)
This lack of a rich background in puzzling out the answer to specific concrete problems means most of us lean hard on formal rules in this domain, even if we’re relatively mathematically sophisticated. Only a few build up the necessary repertoire of tricks to solve the problem quickly by insight. I’m reminded of a story in Feynman’s The Pleasure of Finding Things Out:
Around that time my cousin, who was three years older, was in high school. He was having considerable difficulty with his algebra, so a tutor would come. I was allowed to sit in a corner while the tutor would try to teach my cousin algebra. I’d hear him talking about x.
I said to my cousin, “What are you trying to do?”
“I’m trying to find out what x is, like in 2x + 7 = 15.”
I say, “You mean 4.”
“Yeah, but you did it by arithmetic. You have to do it by algebra.”
I learned algebra, fortunately, not by going to school, but by finding my aunt’s old schoolbook in the attic, and understanding that the whole idea was to find out what x is – it doesn’t make any difference how you do it.
I think this reliance on formal methods might be somewhat less true for exponential growth and ratios, the subjects underpinning the lilypad and widget questions. Certainly I seem to have better intuition there, without having to resort to rote calculation. But I’m not sure how general this is.
How To Visualise It
If you wanted to solve the bat and ball problem without having to ‘do it by algebra’, how would you go about it?
My original post on the problem was a pretty quick, throwaway job, but over time it picked up some truly excellent comments by anders and Kyzentun, which really start to dig into the structure of the problem and suggest ways to ‘just see’ the answer. The thread with anders in particular goes into lots of other examples of how we think through solving various problems, and is well worth reading in full. I’ll only summarise the bat-and-ball-related parts of the comments here.
We all used some variant of the method suggested by Marlo Eugene in the comments above. Writing out the basic problem again, we have:

x + y = 110
x − y = 100

Now, instead of immediately jumping to the standard method of eliminating one of the variables, we can just look at what these two equations are saying and solve it directly ‘by thinking’. We have a bat, x. If you add the price of the ball, y, you get 110 cents. If you instead remove the same quantity you get 100 cents. So the bat’s price must be exactly halfway between these two numbers, at 105 cents. That leaves five for the ball.
Now that I’m thinking of the problem in this way, I directly see the equations as being ‘about a bat that’s halfway between 100 and 110 cents’, and the answer is incredibly obvious.
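That ‘halfway between’ reading translates directly into a one-line calculation (my own gloss on it):

```python
# The bat is exactly halfway between the total with the ball (110)
# and the total without it (100); the ball is whatever is left over.
def bat_by_midpoint(with_ball=110, without_ball=100):
    bat = (with_ball + without_ball) / 2
    ball = with_ball - bat
    return bat, ball

print(bat_by_midpoint())  # (105.0, 5.0)
```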
Kyzentun suggests a variant on the problem that is much less counterintuitive than the original:
A centered piece of text and its margins are 110 columns wide. The text is 100 columns wide. How wide is one margin?
Same numbers, same mathematical formula to reach the solution. But less misleading because you know there are two margins, and thus know to divide by two after subtracting.
In the original problem, the 110 units and 100 units both refer to something abstract, the sum and difference of the bat and ball. In Kyzentun’s version these become much more concrete objects, the width of the text and the total width of the margins. The work of seeing the equations as relating to something concrete has mostly been done for you.
Similarly, anders works the problem by ‘getting rid of the 100 cents’, and splitting the remainder in half to get at the price of the ball:
I just had an easy time with #1 which I haven’t before. What I did was take away the difference so that all the items are the same (subtract 100), evenly divide the remainder among the items (divide 10 by 2) and then add the residuals back on to get 105 and 5.
The heuristic I seem to be using is to treat objects as made up of a value plus a residual. So when they gave me the residual my next thought was “now all the objects are the same, so whatever I do to one I do to all of them”.
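anders’ heuristic translates almost line for line into code. My own transcription (the function and variable names are mine):

```python
# anders' 'value plus residual' heuristic: take away the stated
# difference so both items are the same, divide the remainder
# evenly, then add the residual back on.
def value_plus_residual(total=110, residual=100, items=2):
    remainder = total - residual     # subtract 100, leaving 10
    share = remainder // items       # divide 10 by 2: 5 each
    return share + residual, share   # back to 105 and 5

print(value_plus_residual())  # (105, 5)
```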
I think that after reasoning my way through all these perspectives, I’m finally at the point where I have a quick, ‘intuitive’ understanding of the problem. But it’s surprising how much work it was for such a simple bit of algebra.
Rather than making any big conclusions, the main thing I wanted to demonstrate in this post is how complicated the story gets when you look at one problem in detail. I’ve written about close reading recently, and this has been something like a close reading of the bat and ball problem.
Frederick’s original paper on the Cognitive Reflection Test is in that generic social science style where you define a new metric and then see how it correlates with a bunch of other macroscale factors (either big social categories like gender or education level, or the results of other statistical tests that try to measure factors like time preference or risk preference). There’s a strange indifference to the details of the test itself – at no point does he discuss why he picked those specific three questions, and there’s no attempt to model what was making the intuitive-but-wrong answer appealing.
The later paper by Meyer, Spunt and Frederick is much more interesting to me, because it really starts to pick apart the specifics of the bat and ball problem. Is an easier question getting substituted? Can participants reproduce the correct question from memory?
I learned the most from the individual responses, though, and seeing the variety of ways people go about solving the problem. It’s very strange to me that I had an easier time digging this out from an internet comment thread than the published literature! I would love to see a lot more research into what people actually do when they do mathematics, and the bat and ball problem would be a great place to start.
I’m interested in any comments on the post, but here are a few specific things I’d like to get your answers to:
My rapid, intuitive answer for the bat and ball question is wrong (at least until I retrained it by thinking about the problem way too much). However, for the other two I ‘just see’ the correct answer. Is this common for other people, or do you have a different split?
If you’re able to rapidly ‘just see’ the answer to the bat and ball question, how do you do it?
How do people go about designing tests like these? This isn’t at all my field and I’d be interested in any good sources. I’d kind of assumed that there’d be some kind of serious-business Test Creation Methodology, but for the CRT at least it looks like people just noticed they got surprising answers for the bat and ball question and looked around for similar questions. Is that unusual compared to other psychological tests?
[I’ve cross-posted this at LessWrong, because I thought the topic fits quite nicely – comments at either place are welcome.]
(I posted this on Less Wrong back in April and forgot to cross post here. It’s just the same references I’ve posted before, but it’s worth reading over there for the comments, which are great.)
This is an expansion of a linkdump I made a while ago with examples of mathematicians splitting other mathematicians into two groups, which may be of wider interest in the context of the recent elephant/rider discussion. (Though probably not especially wide interest, so I’m posting this to my personal page.)
The two clusters vary a bit, but there’s some pattern to what goes in each – it tends to be roughly ‘algebra/problem-solving/analysis/logic/step-by-step/precision/explicit’ vs. ‘geometry/theorising/synthesis/intuition/all-at-once/hand-waving/implicit’.
(Edit to add: ‘analysis’ in the first cluster is meant to be analysis as opposed to ‘synthesis’ in the second cluster, i.e. ‘breaking down’ as opposed to ‘building up’. It’s not referring to the mathematical subject of analysis, which is hard to place!)
These seem to have a family resemblance to the S2/S1 division, but there’s a lot lumped under each one that could helpfully be split out, which is where some of the confusion in the comments to the elephant/rider post is probably coming in. (I haven’t read The Elephant in the Brain yet, but from the sound of it that is using something of a different distinction again, which is also adding to the confusion). Sarah Constantin and Owen Shen have both split out some of these distinctions in a more useful way.
I wanted to chuck these into the discussion because: a) it’s a pet topic of mine that I’ll happily shoehorn into anything; b) it shows that a similar split has been present in mathematical folk wisdom for at least a century; c) these are all really good essays by some of the most impressive mathematicians and physicists of the 20th century, and are well worth reading on their own account.
“It is impossible to study the works of the great mathematicians, or even those of the lesser, without noticing and distinguishing two opposite tendencies, or rather two entirely different kinds of minds. The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance.
The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.”
Felix Klein’s ‘Elementary Mathematics from an Advanced Standpoint’ in 1908 has ‘Plan A’ (‘the formal theory of equations’) and ‘Plan B’ (‘a fusion of the perception of number with that of space’). He also separates out ‘ordered formal calculation’ into a Plan C.
Gian-Carlo Rota made a division into ‘problem solvers and theorizers’ (in ‘Indiscrete Thoughts’, excerpt here).
Timothy Gowers makes a very similar division in his ‘Two Cultures of Mathematics’ (discussion and link to pdf here).
Vladimir Arnold’s ‘On Teaching Mathematics’ is an incredibly entertaining rant from a partisan of the geometry/intuition side – it’s over-the-top but was 100% what I needed to read when I first found it.
Michael Atiyah makes a similar division between geometry and algebra:
Broadly speaking I want to suggest that geometry is that part of mathematics in which visual thought is dominant whereas algebra is that part in which sequential thought is dominant. This dichotomy is perhaps better conveyed by the words “insight” versus “rigour” and both play an essential role in real mathematical problems.
There’s also his famous quote:
Algebra is the offer made by the devil to the mathematician. The devil says: ‘I will give you this powerful machine, it will answer any question you like. All you need to do is give me your soul: give up geometry and you will have this marvellous machine.’
Grothendieck was seriously weird, and may not fit well to either category, but I love this quote from Récoltes et semailles too much to not include it:
Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle – while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured), things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.
In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still, from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things, in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.
Freeman Dyson calls his groups ‘Birds and Frogs’ (this one’s more physics-focussed).
This may be too much partisanship from me for the geometry/implicit cluster, but I think the Mark Kac ‘magician’ quote is also connected to this:
There are two kinds of geniuses: the ‘ordinary’ and the ‘magicians’. An ordinary genius is a fellow whom you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what they’ve done, we feel certain that we, too, could have done it. It is different with the magicians… Feynman is a magician of the highest caliber.
The algebra/explicit cluster is more ‘public’ in some sense, in that its main product is a chain of step-by-step formal reasoning that can be written down and is fairly communicable between people. (This is probably also the main reason that formal education loves it.) The geometry/implicit cluster relies on lots of pieces of hard-to-transfer intuition, and these tend to stay ‘stuck in people’s heads’ even if they write a legitimising chain of reasoning down, so it can look like ‘magic’ on the outside.
Edit to add: Seo Sanghyeon contributed the following example by email, from Weinberg’s Dreams of a Final Theory:
Theoretical physicists in their most successful work tend to play one of two roles: they are either sages or magicians… It is possible to teach general relativity today by following pretty much the same line of reasoning that Einstein used when he finally wrote up his work in 1915. Then there are magician-physicists, who do not seem to be reasoning at all but who jump over all intermediate steps to a new insight about nature. The authors of physics textbooks are usually compelled to redo the work of the magicians so they seem like sages; otherwise no reader would understand the physics.
I wrote this for a monthly newsletter I’ve been experimenting with. I feel a bit awkward about publishing this as a post, because it’s very meandering and unpolished and plain weird. But I did manage to cover a lot of ground, in a way that would be really difficult and time-consuming to do in a normal post, and I quite like the result in some ways.
Also this is a blog, not some formal venue, and if I start fussing too much about quality I should probably just get over myself. Thanks to David Chapman for encouraging me to post it anyway.
I wasn’t sure I’d stick to the newsletter format, so I didn’t advertise it much, but it turns out I really like doing it. If you’re interested in getting a monthly email with more of this nonsense, please email me at bossdrucket(at)gmail(dot)com and I’ll add you to the list. Fair warning: it’s normally a pretty disjointed mix of physics and whatever this braindump is.
Rephrasing the famous words on the electron and atom, it can be said that a hypocycloid is as inexhaustible as an ideal in a polynomial ring.
… my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)
This leads to a lot of snottiness and narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But also the belief in reliable intersubjective understanding gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.
The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.
I’ve been wanting to write more about this as a blog post for a while but it never comes out right, so this time I’m going to just start writing with no preplanned structure at all, and see what comes out.
I spent a ridiculous amount of time staring at Chapman’s diagram I linked above when I first found it. I think the main thing that made it really sticky was my experience with the top row, the one with the brick wall marked ‘fake pomo blocks path’. My teenage autodidact blundering accidentally got me past the wall to the ‘Stage 4 via humanities education’ forbidden box while barely knowing that postmodernism existed.
I started out by reading books my dad had from his university course in the sixties. This included a very wide-ranging multi-year course on literature, philosophy and general history of ideas called ‘The European Mind’. Even the name is brilliantly pre-pomo! There’s one unified mind, and it’s distinctively European, and you can learn about it by studying the classic Western canon.
I got particularly fixated on the early twentieth century. I was also reading a lot of pop science for the first time and learning about the insanely productive and disorientating revolution in physics, from Planck’s constant in 1900 up to the solidifying of the new quantum theory in the late 20s, with special and general relativity along the way. (Also a little bit about the crisis in foundations of mathematics, but I never got particularly interested in that at the time.)
It was easy to switch between science and humanities stuff from that era, because the tone and writing style was quite similar, in the Anglosphere anyway. The analytic philosophers had all got maths envy and were trying to adopt the language of logic and dig to the foundations. And even the literary critics wrote in a style that was easily accessible to a STEM nerd like me. Eliot, for instance, writes up his experiences in trying to revive verse drama like it’s a retrospective on some lab experiments that went wrong:
It was only when I put my mind to thinking what sort of play I wanted to do next, that I realized that in Murder in the Cathedral I had not solved any general problem; but that from my point of view the play was a dead end. For one thing, the problem of language which that play had presented to me was a special problem…
The style was similar to the scientists, but their main method was somewhat different. The New Critics tended to work via very detailed close reading of individual passages, really picking apart what makes specific examples work. For another Eliot example, here he talks about the vivid specificity of the images in Macbeth, and compares it to the artificial, conventional images of much eighteenth century poetry. The overarching ‘Shakespeare good, Milton bad’ moral might not be worth much, but it’s fantastic for pointing out what Shakespeare does that Milton can’t.
I really like the close reading method. If I study one example in depth, I normally come away with some new ability to read the situation, in a way that grand abstract theorising can’t match. (I do also really like sloppy big-picture grand theorising, but in my case it’s normally motivated by trying to mash examples together.) Examples are inexhaustible, like Arnold’s hypocycloid in the quote at the start. In fact, ‘close reading’ has become my idiosyncratic mental label for any kind of heavily example-driven work, not just the detailed study of written texts. Toy models in maths and physics also count, for example.
Once I’d got my ear in for the New Critical style, it was pretty easy to find more of it in secondhand bookshops and never really read anything that went beyond it, so the ‘wall of fake pomo’ wasn’t a problem. (I knew what postmodernism was anyway! It was that stupid rubbish that Sokal made fun of, where people make word salad out of hermeneutics and deconstruction and logocentrism. I wasn’t going to fall for any of that!)
Now, funnily enough, I seem to be back with the New Critics for different reasons. Something like ‘well, I’m in the right place to follow that ‘genuine pomo critique’ arrow, so let’s give it a go’. This time I got there by reading the text of this fantastic talk on Derrida by Christopher Norris. Norris is a literary critic who wrote about Empson as well – I’ll get to that in a minute. But first I’ll talk about that linked Norris piece, especially the bit on Rousseau and Rameau. (I did warn you that this is going to be a rambling braindump.)
I learned a lot from the Norris talk. The first thing I picked up was that Derrida is not the vague sort of waffler I’d imagined from the Science Wars stereotype. He does have a weird writing style that I find a complete pain to read (I have started wading through Of Grammatology now, and I’m not particularly enjoying the process), but it’s not a vague style. He’s actually a close reader with a similar method to the New Critics, even if the tone is completely different, and he works by going through specific examples in detail. Which is actually my favourite way of learning!
Derrida’s main targets for the close reading treatment in Of Grammatology are Rousseau, Saussure and Lévi-Strauss. A very French list which doesn’t mean a whole lot to me, so I’m having to read backwards too. But I was able to understand the part about Rousseau’s fight with the musician Rameau. Norris explains it here:
Rousseau was himself a musician, a performer and composer, and he wrote a great deal about music history and theory, in particular about the relationship between melody and harmony… One way of looking at Rousseau’s ideas about the melody/harmony dualism is to view them as the working-out of a tiff he was having with Rameau. Thus he says that the French music of his day is much too elaborate, ingenious, complex, ‘civilized’ in the bad (artificial) sense — it’s all clogged up with complicated contrapuntal lines, whereas the Italian music of the time is heartfelt, passionate, authentic, spontaneous, full of intense vocal gestures. It still has a singing line, it’s still intensely melodious, and it’s not yet encumbered with all those elaborate harmonies.
This was really accessible to me because of my weird music listening habits. I listen to a lot of baroque music, and I specifically like the Italian baroque, pretty much for the reasons Rousseau lists above. It’s very different to Bach’s sort of baroque music, which is very harmonically and structurally complex, and it sticks much closer to its roots in dance music. The melodies are straightforwardly, immediately appealing, instead of being subsumed into some big contrapuntal structure. It’s not simple folk music, though – the complexity is instead in the very delicate, constantly changing mood and texture.
(If you want to get an idea of the kind of music I’m thinking of, the Youtube channel Ispirazione Barrocca is my best source for obscure but brilliant Italian composers. For a specific example, at the moment I keep listening to the ciaccona and rondeau here. The ciaccona is the simplest thing possible harmonically, it’s just the same chords over and again. The melody is pretty straightforward too, and there’s no fancy structure. But I love the shadings in mood, and that fantastic change in energy at 7:00 as it transitions into the rondeau.)
So I’m probably the right sort of person to be persuaded by Rousseau’s argument. But actually, as Norris/Derrida tells it, it doesn’t work at all. I’m just going to quote this big glob of Norris’s text, because I can’t possibly explain it any better myself:
What’s more, Rousseau says, this is where writing came in and exerted its deleterious effect, because if you have a complex piece of contrapuntal music, by Rameau let’s say, then you’ve got to write it down. People can’t learn it off by heart; you can quite easily learn a folk tune, or an unaccompanied aria, or perhaps a piece of plainchant, or anything that doesn’t involve harmony because it sinks straight in, it strikes a responsive chord straight away. But as soon as you have harmony then you have this bad supplement that comes along and usurps the proper place of melody, that somehow corrupts or denatures melody, so to speak, from the inside. Now the interesting thing, as Derrida points out, is that Rousseau can’t sustain that line of argument, because as soon as he starts to think harder about the nature of music, as soon as he begins to write his articles about music theory, he recognizes that in fact there is no such thing as melody without harmony. I think this is one of the remarkable things about Derrida’s reading of Rousseau, that it carries conviction as a matter of intuitive rightness as well as through sheer philosophical acuity and close attention to the detail of Rousseau’s text. His arguments seem to be very cerebral, very technical and even counter-intuitive, but in this case they can be checked out against anyone’s – or any responsive listener’s – first-hand experience of music. Thus even if you think of an unaccompanied folk song, or if you just hum a tune or pick it out in single notes on the piano, it will carry harmonic overtones or suggestions. What makes it a tune, what gives it a sense of character, shape, cadence, etc., is precisely this implicit harmonic dimension.
Derrida brings out this contradiction through his characteristic method (again I’ll just quote a glob of Norris’s talk):
Derrida gets to this point through a close reading of Rousseau’s text which shows it to concede – not so much ‘between the lines’ but in numerous details of phrasing and turns of logico-semantic implication – that there is no melody (nothing perceivable or recognizable as such) without the ‘bad supplement’ of harmony. Thus, for instance, Rousseau gets into a real argumentative pickle when he says – lays it down as a matter of self-evident truth – that all music is human music. Bird-song just doesn’t count, he says, since it is merely an expression of animal need – of instinctual need entirely devoid of expressive or passional desire – and is hence not to be considered ‘musical’ in the proper sense of that term. Yet you would think that, given his preference for nature above culture, melody above harmony, authentic (spontaneous) above artificial (‘civilized’) modes of expression, and so forth, Rousseau should be compelled – by the logic of his own argument – to accord bird-song a privileged place vis-à-vis the decadent productions of human musical culture. However Rousseau just lays it down in a stipulative way that bird-song is not music and that only human beings are capable of producing music. And so it turns out, contrary to Rousseau’s express argumentative intent, that the supplement has somehow to be thought of as always already there at the origin, just as harmony is always already implicit in melody, and writing – or the possibility of writing – always already implicit in the nature of spoken language.
Birdsong is an awkward case for Rousseau, because it really is melody without the implicit harmonic dimension. It’s mostly missing the exact, ‘engineered’ side of human music – the precise, repeatable harmonic intervals and lengths of notes and bars. But this is the same structure that makes a tune sound like a tune, rather than a loose cascade of pitches. Even Rousseau didn’t want music that was that structureless and undifferentiated, so he gets himself tied in knots trying to exclude this case that ruins his argument.
This is really just a sideline in the book. Derrida’s main interest is not the tension between harmony and melody, but that between writing and speech. The argument goes through in a similar way, though. Speech is presumed to be the fundamental one of the pair, and writing is derivative – what Saussure called ‘a signifier of a signifier’. But Derrida points out that speech also has a structural element, using repeatable components and arbitrary conventions. The possibility of writing is inherent in speech from the start, in the same way that harmonic structure is inherent in the simplest folk tune:
For Derrida, ‘writing’ should rather be defined as a sort of metonym for all those aspects of language – or of human culture generally – that set it apart from the realm of natural (that is, pre-social, hence pre-human) existence. That is to say, it encompasses not only writing in the usual, restricted (graphematic) sense but also speech in so far as spoken language likewise depends on structures, conventions, codes, systems of relationship and difference ‘without positive terms’, and so forth.
Derrida was in the right time and place to take both sides of the writing/speech opposition seriously. He’d started out in phenomenology, with a deep study of Husserl and his emphasis on the immediacy of raw experience. And then structuralism had been in the air in France at the time, and he’d picked up its insights through Saussure and Lévi-Strauss. He could see that either on its own was not enough:
What Derrida does, essentially, is juxtapose the insights of structuralism and phenomenology, the two great movements of thought that really formed the matrix of Derrida’s work, especially his early work. Phenomenology because it had gone so far – in the writings of Husserl and Merleau-Ponty after him – toward describing that creative or expressive ‘surplus’ in language (and also, for Merleau-Ponty, in the visual arts) that would always elude the most detailed and meticulous efforts of structuralist analysis. Structuralism because, on its own philosophic and methodological terms, it revealed how this claim for the intrinsic priority of expressive parole over pre-constituted langue would always run up against the kind of counter-argument that I have outlined above.
This might be a good point to briefly try and explain my other reason for being interested in this Derrida stuff, beyond ‘trying to understand the New Critics’. This goes back to my normal pet topic. There’s a very similar tension in mathematics between the structural, ‘algebraic’ element, where the individual symbols are arbitrary and only their relations matter, and the ‘geometric’ side where these symbols become grounded in our perceptual experience, and are experienced as being ‘about’ curved surfaces or nodes in a graph or whatever. I’m scare-quoting ‘algebra’ and ‘geometry’ because I’m using them in a weird way – there is of course a structural component to any geometric problem, and a perceptual component to any algebraic one. But algebra tends to be closer to the structuralist style, and geometry to the phenomenological one, so they work quite well as labels for the two sides of the opposition. This is actually pretty similar to Derrida’s weird use of ‘writing’ and ‘speech’.
I really want to understand this parallel story, and how the ‘algebra’/’geometry’ tension played out in the twentieth century. I can figure out who a lot of the main characters are: Bourbaki on the structuralist side, for a start, and Poincaré and Weyl on the phenomenology one, in very different ways. Also Brouwer and the intuitionist weirdos must fit in there somewhere. But I’m struggling to find much in the way of good secondary material. About the only thing I’ve found is Mathematics and the Roots of Postmodern Thought by Vladimir Tasić, which has a lot of good material but is kind of all over the place. It’s very different to the situation with Derrida, where there’s an overabundance of crappy secondary sources, mostly teaching literature students how to make up bullshit ‘deconstructions’ of any text that comes their way. I’m glad the maths students don’t have to suffer in the same way, but it would be nice if there was more to read.
Finally, I’m going to wrench this braindump back to the New Critics where I started. Reading Norris on Derrida made me think of Empson. Empson was particularly keen on the close reading technique, particularly early on when he wrote Seven Types of Ambiguity. Here’s a short example where he goes over a few lines of The Waste Land in minute detail. His main interest in this is trying to track down all the fleeting associations that words in a poem pull with them:
As a rule, all that you recognise as in your mind is the one final association of meanings which seems sufficiently rewarding to be the answer—‘now I have understood that’; it is only at intervals that the strangeness of the process can be observed. I remember once clearly seeing a word so as to understand it, and, at the same time, hearing myself imagine that I had read its opposite. In the same way, there is a preliminary stage in reading poetry when the grammar is still being settled, and the words have not all been given their due weight; you have a broad impression of what it is all about, but there are various incidental impressions wandering about in your mind; these may not be part of the final meaning arrived at by the judgment, but tend to be fixed in it as part of its colour.
At times he seems pretty close to Derrida, at least as Norris tells it, in the way he combines structural critique (he later wrote a book called The Structure of Complex Words, which I haven’t read) with an interest in the phenomenology of how a poem is experienced.
I did some research and discovered that Norris has written a lot about Empson as well. In fact he edited a whole book about him. So maybe that isn’t so surprising! I looked to see if he wrote anything specifically about both Empson and Derrida, and it turns out that Norris actually sent Empson a copy of one of Derrida’s essays, along with some other stuff by de Man and Barthes, to see whether he liked it. He got this very funny response (quoted in the introduction of that book):
‘I feel very bad’, Empson wrote,
not to have answered you for so long, and not to have read those horrible Frenchmen you posted to me. I did go through the first one, Jacques Nerrida [sic], and nosed about in several others, but they seem to me so very disgusting, in a simple moral or social way, that I cannot stomach them. Nerrida does express the idea that, just as people were talking grammar before grammarians arose, so there are other unnoticed regularities in human language and probably in other human systems. This is what I meant by the book title *The Structure of Complex Words*, and it was not an out-of-the-way idea, indeed I may have got it from someone else, but of course it is no use unless you try to present an actual grammar, an actual grammar of the means by which a speaker makes his choice while using the language correctly. This I attempted to supply, and I do not notice that the French ever even try … They use enormously fussy language, always pretending to be plumbing the very depths, and never putting your toe into the water. Please tell me I am wrong.
That’ll be a no, then. I don’t blame him about the writing style. But I have convinced myself that I want to read Derrida anyway.
In my last two posts I’ve been talking about my experience of thinking through some website design problems. At the same time that I was muddling along with these, I happened to be reading Donald Schön’s The Reflective Practitioner, which is about how people solve problems in professions like architecture, town planning, teaching and management. (I got the recommendation from David Chapman’s ‘Further reading’ list for his Meaningness site.)
This turned out to be surprisingly relevant. Schön is considering ‘real’ professional work, rather than my sort of amateur blundering around, but the domain of web design shares a lot of characteristics with the professions he studied. The problems in these fields are context-dependent and open-ended, resisting any sort of precise theory that applies to all cases. On the other hand, people do manage to solve problems and develop expertise anyway.
Schön argues that this expertise mostly comes through building up familiarity with many individual examples, rather than through application of an overarching theory. He builds up his own argument in the same way, letting his ideas shine through a series of case studies of successful practitioners.
In the one I find most compelling, an established architect, Quist, reviews the work of a student, Petra, who is in the early stages of figuring out a design for an elementary school site. I’m going to follow Schön’s examples-driven approach here and describe this in some detail.
Petra has already made some preliminary sketches and come up with some discoveries of her own. Her first idea for the classrooms was the diagonal line of six rooms in the top right of the picture below. Playing around, she found that ‘they were too small in scale to do much with’, so she ‘changed them to this much more significant layout’, the L shapes in the bottom left.
I’m not sure I can fully explain why the L shapes are ‘more significant’, but I do agree with her assessment. There’s more of a feeling of spaciousness than there was with the six cramped little rectangles, and the pattern is more interesting geometrically and suggests more possibilities for interacting with the geography of the site.
At this point, we already get to see a theme that Schön goes back to repeatedly, the idea of a ‘reflective conversation with the materials’. The designer finds that:
His materials are continually talking back to him, causing him to apprehend unanticipated problems and potentials.
Petra has found a simple example of this. She switched to the L shapes on more-or-less aesthetic grounds, but then she discovers that the new plan ‘relates one to two, three to four, and five to six grades, which is more what I wanted to do educationally anyway.’ The materials have talked back and given her more than she originally put in, which is a sign that she is on to something promising.
After this early success, Petra runs into difficulties. She has learnt the rule that buildings should fit to the contours of the site. Unfortunately the topography of this particular site is really incoherent and nothing she tries will fit into the slope.
Quist advises her to break this rule:
You should begin with a discipline, even if it is arbitrary, because the site is so screwy – you can always break it open later.
Together they work through the implications of the following design:
This kicks off a new round of conversation with the materials.
Quist now proceeds to play out the imposition of the two-dimensional geometry of the L-shaped classrooms upon the “screwy” three-dimensional contours of the slope… The roofs of the classroom will rise five feet above the ground at the next level up, and since five feet is “maximum height for a kid”, kids will be able to be in “nooks”…
A drawing experiment has been conducted and its outcome partially confirms Quist’s way of setting the L-shaped classrooms upon the incoherent slope. Classrooms now flow down the slope in three stages, creating protected spaces “maximum height for a kid” at each level.
In an echo of Petra’s initial experiment, Quist has got back more than he put in. He hasn’t solved the problem in the clean, definitive way you’d solve a mathematical optimisation problem. Many other designs would probably have worked just as well. But the design has ‘talked back’, and his previous experience of working through problems like this has given him the skills to understand what it is saying.
I find the ‘reflective conversation’ idea quite thought-provoking and appealing. It seems to fit well with my limited experience: prototyping my design in a visual environment was an immediate improvement over just writing code, because it enabled this sort of conversation. Instead of planning everything out in advance, I could mess around with the basic elements of the design and ‘apprehend unanticipated problems and potentials’ as they came up.
I don’t find the other examples in the book quite as convincing as this one. Quist is unusually articulate, so the transcripts tell you a lot. Also, architectural plans can be reproduced easily as figures in a book, so you can directly see his solution for yourself, rather than having to take someone’s word for it. With the other practitioners it’s often hard to get a sense of how good their solutions are. (I guess Schön was also somewhat limited by who he could persuade to be involved.)
Alongside the case studies, there is some discussion of the implications for how these professions are normally taught. Some of this is pretty dry, but there are a few interesting topics. The professions he considers often have something like ‘engineering envy’ or ‘medicine envy’: doctors and engineers can borrow from the hard sciences and get definitive answers to some of their questions, so they don’t always have to do this more nebulous ‘reflective conversation’ thing.
It’s tempting for experts in the ‘softer’ professions to try and borrow some of this prestige, leading to the introduction of a lot of theory into the curriculum, even if this theory turns out to be pretty bullshit-heavy and less useful than the kind of detailed reflection on individual cases that Quist is doing. Schön advocates for the reintroduction of practice, pointing out that this can never be fully separated from theory anyway:
If we separate thinking from doing, seeing thought only as a preparation for action and action only as an implementation of thought, then it is easy to believe that when we step into the separate domain of thought we will become lost in an infinite regress of thinking about thinking. But in actual reflection-in-action, as we have seen, doing and thinking are complementary. Doing extends thinking in the tests, moves, and probes of experimental action, and reflection feeds on doing and its results. Each feeds the other, and each sets boundaries for the other. It is the surprising result of action that triggers reflection, and it is the production of a satisfactory move that brings reflection temporarily to a close… Continuity of enquiry entails a continual interweaving of thinking and doing.
For some reason this book has become really popular with education departments, even though teaching only makes up a tiny part of the book. Googling ‘reflective practitioner’ brings up lots of education material, most of which looks cargo-culty and uninspired. Schön’s ideas seem to have mostly been routinised into exactly the kind of useless theory he was trying to go beyond, and I haven’t yet found any good follow-ups in the spirit of the original. It’s a shame, as there’s a lot more to explore here.