SimCity Bricolage

In the references to The World Beyond Your Head I found an intriguing paper by Mizuko Ito, Mobilizing Fun in the Production and Consumption of Children’s Software, following the interactions between children and adults at an after-school computer club. It’s written in a fairly heavy dialect of academicese, but the dialogue samples are fascinating. Here a kid, Jimmy, is playing SimCity 2000, with an undergrad, Holly, watching:

J: (Budget window comes up and Jimmy dismisses it.) Yeah. I’m going to bulldoze a skyrise here. (Selects bulldozer tool and destroys building.) OK. (Looks at H.) Ummm! OK, wait, OK. Should I do it right here?

H: Sure, that might work… that way. You can have it …

J: (Builds highway around city.) I wonder if you can make them turn. (Builds highway curving around one corner) Yeah, okay.

H: You remember, you want the highway to be … faster than just getting on regular streets. So maybe you should have it go through some parts.

J: (Dismisses budget pop-up window. Points to screen.) That’s cool! (Inaudible.) I can make it above?

H: Above some places, I think. I don’t know if they’d let you, maybe not.

J: (Moves cursor over large skyscraper.) That’s so cool!

H: Is that a high rise?

J: Yeah. I love them.

H: Is it constantly changing, the city? Is it like …

J: (Builds complicated highway intersection. Looks at H.)

H: (Laughs.)

J: So cool. (Builds more highway grids in area, creating a complex overlap of four intersections.)

H: My gosh, you’re going to have those poor drivers going around in circles.

J: I’m going to erase that all. I don’t like that, OK. (Bulldozes highway system and blows up a building in process.) Ohhh …

H: Did you just blow up something else?

J: Yeah. (Laughs.)

H: (Laughs.)

J: I’m going to start a new city. I don’t understand this one. I’m going to start with highways. (Quits without saving city.)

As Ito puts it, “by the end Jimmy has wasted thousands of dollars on a highway to nowhere, blown up a building, and trashed his city.” So what’s the point of playing the game in this way?

Well, for a start, it lets him make cool stuff and then blow it up. That might be all the explanation we need! But I think he’s also doing something genuinely useful for understanding the game itself.

Ito mainly seems to be interested in the social dynamics of the situation – the conflict between Jimmy finding ‘fun’, ‘spectacular’ effects in the game, and Holly trying to drag him back to more ‘educational’ behaviours. I can see that too, but I’m interested in a slightly different reading.

To my mind, Jimmy is ‘sketching’: he’s finding out what the highway tool can do as a tool, rather than immediately subsuming it to the overall logic of the game. The highway he’s building is in a pointless location and doesn’t function very well as a highway, but that doesn’t matter. He’s investigating how to make it turn, how to make it intersect with other roads, how to raise it above ground level. While focussed on this, he ignores any more abstract considerations that would pull him out of engagement with the tool. For example, he dismisses the budget popup as fast as he can, so that he can get back to bulldozing buildings.

Now he knows what the tool does, he may as well just trash the current city and start a new one where he can use his knowledge in a more productive way. His explorations are useless in the context of the current game, but will give him raw material to work with later in a different city, where he might need a fancy junction or an overhead highway.

I first wrote a version of this for the newsletter last year. Reading it back this time, I noticed something else: Jimmy’s explorations are a great example of bricolage. I first learned this term from Sherry Turkle and Seymour Papert’s Epistemological Pluralism and the Revaluation of the Concrete, which I talked about here once before. In Turkle and Papert’s sense of the word, adapted from Lévi-Strauss, bricolage is a particular style of programming computers:

Bricoleurs construct theories by arranging and rearranging, by negotiating and renegotiating with a set of well-known materials.

… They are not drawn to structured programming; their work at the computer is marked by a desire to play with the elements of the program, to move them around almost as though they were material elements — the words in a sentence, the notes on a keyboard, the elements of a collage.

… bricoleur programmers, like Levi-Strauss’s bricoleur scientists, prefer negotiation and rearrangement of their materials. The bricoleur resembles the painter who stands back between brushstrokes, looks at the canvas, and only after this contemplation, decides what to do next. Bricoleurs use a mastery of associations and interactions. For planners, mistakes are missteps; bricoleurs use a navigation of midcourse corrections. For planners, a program is an instrument for premeditated control; bricoleurs have goals but set out to realize them in the spirit of a collaborative venture with the machine. For planners, getting a program to work is like “saying one’s piece”; for bricoleurs, it is more like a conversation than a monologue.

One example in the paper is ‘Alex, 9 years old, a classic bricoleur’, who comes up with a clever repurposing of a Lego motor:

When working with Lego materials and motors, most children make a robot walk by attaching wheels to a motor that makes them turn. They are seeing the wheels and the motor through abstract concepts of the way they work: the wheels roll, the motor turns. Alex goes a different route. He looks at the objects more concretely; that is, without the filter of abstractions. He turns the Lego wheels on their sides to make flat “shoes” for his robot and harnesses one of the motor’s most concrete features: the fact that it vibrates. As anyone who has worked with machinery knows, when a machine vibrates it tends to “travel,” something normally to be avoided. When Alex ran into this phenomenon, his response was ingenious. He doesn’t use the motor to make anything “turn,” but to make his robot (greatly stabilized by its flat “wheel shoes”) vibrate and thus “travel.” When Alex programs, he likes to keep things similarly concrete.

This is a similar mode of investigation to Jimmy’s. He’s seeing what kinds of things the motor and wheels can do, as part of an ongoing conversation with his materials, without immediately subsuming them to the normal logic of motors and wheels. In the process, he’s discovered something he wouldn’t have done if he’d just made a normal car. Similarly, Jimmy will have more freedom with the highway tool in the future than if he followed all the rules about budgets and city planning before he understood everything that it can do.

Alternatively, maybe I’m massively overanalysing this short contextless stretch of dialogue, and Jimmy just likes making stuff explode. Maybe he just keeps making and trashing a series of similarly broken cities for the sheer fun of it. Either way, mashing these two papers together has been a fun piece of bricolage of my own.


Book Review: The World Beyond Your Head

I wrote a version of this for the newsletter last year and decided to expand it out into a post. I’ve also added in a few thoughts based on an email conversation about the book with David MacIver a while back, and a few more thoughts inspired by a more recent post.

This wasn’t a book I’d been planning to read. In fact, I’d never even heard of it. I was just working in the library one day, and the cover caught my attention. It’s been given the subtitle ‘How To Flourish in an Age of Distraction’, and it looks like the publisher has tried to sell it as a sort of book-length version of one of those hand-wringers in The Atlantic about how we all gawp at our phones too much. I’m a sucker for those. This is a bit pathetic, I know, but there are certain repetitive journalist topics that I like simply because they’re repetitive, and the repetition has given them a comfortingly familiar texture, and ‘we all gawp at our phones too much’ is one of them. So I had a flick through.

The actual contents turned out to be less comfortingly familiar, but a lot more interesting. Actually, I recognised a lot of it! Merleau-Ponty on perception… Polanyi on tacit knowledge… lots of references to embodied cognition. This looks like my part of the internet! I hadn’t seen this set of ideas collected together in a pop book before, so I thought I’d better read it.

The author, Matthew Crawford, previously wrote a book called Shop Class as Soulcraft, on the advantages of working in the skilled trades. In this one he zooms out further to take a more philosophical look at why working with your hands with real objects is so satisfying. There’s a lot of good stuff in the book, which I’ll get to in a minute. I still struggled to warm to it, though, despite it being full of topics I’m really interested in. Some of this was just a tone thing. He writes in a style I’ve seen before and don’t get on with – I’m not American and can’t place it very exactly, but I think it’s something like ‘mild social conservatism repackaged for the educated coastal elite’. According to Wikipedia he writes for something called The New Atlantis, which may be one of the places this style comes from. I don’t know. There’s also a more generic ‘get off my lawn’ thing going on, where we are treated to lots of anecdotes about how the airport is too loud and there’s too much advertising and children’s TV is terrible and he can’t change the music in the gym.

The oddest thing for me was his choice of pronouns for the various example characters he makes up throughout the book to illustrate his points. This is always a pain because every option seems to annoy someone, but using ‘he’ consistently would at least have fitted the grumpy old man image quite well. Maybe his editor told him not to do that, though, or maybe he has some kind of point to make, because what he actually decided to do was use a mix of ‘he’ and ‘she’, but only ever pick the pronoun that fits traditional expectations of what gender the character would be. Because he mostly talks about traditionally masculine occupations, this means that maybe 80% of the characters, and almost all of the sympathetic ones, are male – all the hockey players, carpenters, short-order cooks and motorcycle mechanics he’s using to demonstrate skilled interaction with the environment. The only female characters I remember are a gambling addict, a New Age self-help bore, a disapproving old lady, and one musician who actually gets to embody the positive qualities he’s interested in. It’s just weird, and I found it very distracting.

OK, that’s all my whining about tone done. I have some more substantive criticisms later, but first I want to talk about some of the things I actually liked. Underneath all the owning-the-libs surface posturing he’s making a subtle and compelling argument. Unpacking this argument is quite a delicate business, and I kind of understand why the publishers just rounded it off to the gawping at phones thing.

Violins and slot machines

Earlier, I said that Crawford is out to explain ‘why working with your hands with real objects is so satisfying’, but actually he’s going for something a little more nuanced and specific than that. Not all real objects are satisfying to work with. Here’s his discussion of one that isn’t, at least for an adult:

When my oldest daughter was a toddler, we had a Leap Frog Learning Table in the house. Each side of the square table presents some sort of electromechanical enticement. There are four bulbous piano keys; a violin-looking thing that is played by moving a slide rigidly located on a track; a transparent cylinder full of beads mounted on an axle such that any attempt, no matter how oblique, makes it rotate; and a booklike thing with two thick plastic pages in it.

… Turning off the Leap Frog Learning Table would produce rage and hysterics in my daughter… the device seemed to provide not just stimulation but the experience of agency (of a sort). By hitting buttons, the toddler can reliably make something happen.

The Leap Frog Learning Table is designed to take very complicated data from the environment – toddlers bashing the thing any old how, at any old speed or angle – and funnel this mess into a very small number of possible outcomes. The ‘violin-looking thing’ has only one translational degree of freedom, along a single track. Similarly, the cylinder can only be rotated around one axis. So the toddler’s glancing swipe at the cylinder is not dissipated into uselessness, but instead produces a satisfying rolling motion – they get to ‘make something happen’.
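The funnelling Crawford describes can be sketched as a simple projection: whatever direction the toddler’s swipe comes from, only its component along the toy’s one allowed axis survives, and everything else is discarded. A minimal illustration (the function name and numbers here are mine, not from the book):

```python
import math

def constrain_to_track(swipe_dx, swipe_dy, track_angle_deg=0.0):
    """Project an arbitrary 2D swipe onto a toy's single track axis.

    However messy the input, only the component along the track
    survives; the toy throws the rest of the motion away.
    """
    angle = math.radians(track_angle_deg)
    track_x, track_y = math.cos(angle), math.sin(angle)
    # Dot product: the swipe's component along the one allowed axis.
    return swipe_dx * track_x + swipe_dy * track_y

# Three very different swipes, one nearly perpendicular to the track...
for dx, dy in [(1.0, 0.0), (0.7, 0.7), (0.1, 1.0)]:
    print(round(constrain_to_track(dx, dy), 2))
# ...and each still moves the slide, just by different amounts.
```

A direct hit, a diagonal bash and a glancing swipe all produce the same kind of motion, which is exactly the point of the design.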

This is extremely satisfying for a toddler, who struggles to manipulate the more resistant objects of the adult world. But there is very little opportunity for growth or mastery there. The toddler has already mastered the toy to almost its full extent. Hitting the cylinder more accurately might make it spin for a bit longer, but it’s still pretty much the same motion.

At the opposite end of the spectrum would be a real violin. I play the violin, and you could describe it quite well as a machine for amplifying tiny changes in arm and hand movements into very different sounds (mostly horrible ones, which is why beginners sound so awful). There are a large number of degrees of freedom – the movements of each jointed finger in three-dimensional space, including those on the bow hand, contribute to the final sound. Also, almost all of them are continuous degrees of freedom. There are no keys or frets to accommodate small mistakes in positioning.

Crawford argues that although tools and instruments that transmit this kind of rich information about the world can be very frustrating in the short term, they also have enormous potential for satisfaction in the long term as you come to master them. Whereas objects like the Leap Frog Learning Table have comparatively little to offer if you’re not two years old:

Variations in how you hit the button on a Leap Frog Learning Table or a slot machine do not similarly produce variations in the effect you produce. There is a closed loop between your action and the effect that you perceive, but the bandwidth of variability has been collapsed… You are neither learning something about the world, as the blind man does with his cane, nor acquiring something that could properly be called a skill. Rather, you are acting within the perception-action circuits encoded in the narrow affordances of the game, learned in a few trials. This is a kind of autistic pseudo-action, based on exact repetition, and the feeling of efficacy that it offers evidently holds great appeal.

(As a warning, Crawford consistently uses ‘autistic’ in this derogatory sense throughout the book; if that sounds unpleasant, steer clear.)

Objects can also be actively deceptive, rather than just tediously simple. In the same chapter there’s some interesting material on gambling machines, and the tricks used to make them addictive. Apparently one of the big innovations here was the idea of ‘virtual reel mapping’. Old-style mechanical fruit machines would have three actual reels with images on them that you needed to match up, and just looking at the machine would give you a rough indication of the total number of images on the reel, and therefore the rough odds of matching them up and winning.

Early computerised machines followed this pattern, but soon the machine designers realised that there no longer needed to be this close coupling between the machine’s internal representation and what the gambler sees. So the newer machines would have a much larger number of virtual reel positions that are mostly mapped to losing combinations, with a large percentage of these combinations being ‘near misses’ to make the machine more addictive. The machine still looks simple, like the toddler’s toy, but the intuitive sense of odds you get from watching the machine becomes completely useless, because the internal logic of the machine is now doing something very complicated that the screen actively hides from you. A machine like this is actively ‘out to get you’, alienating you from the evidence of your own eyes.
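The decoupling trick can be sketched in a few lines. The reel below is entirely made up (the real machines’ parameters are proprietary), but it shows the mechanism: many hidden virtual positions, mapped down to a display that over-represents near misses relative to the true odds:

```python
import random

# A hypothetical virtual reel: 64 internal positions mapped down to
# what the gambler sees. The on-screen reel looks as simple as the
# old mechanical kind, but the mapping is free to make wins rare
# and 'near miss' displays disproportionately common.
VIRTUAL_POSITIONS = 64

def build_virtual_reel():
    reel = []
    reel += ["JACKPOT"] * 2       # true odds: 2/64
    reel += ["NEAR_MISS"] * 20    # shown just off the payline
    reel += ["LOSE"] * 42
    random.shuffle(reel)
    return reel

def spin(reel):
    """One spin: pick a uniformly random virtual position."""
    return reel[random.randrange(len(reel))]

reel = build_virtual_reel()
# Nothing you can see on screen tells you the jackpot odds are 2/64;
# the near misses actively suggest the odds are much better.
```

Watching this machine teaches you nothing about its odds, which is the sense in which it is ‘out to get you’.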

Apples and sheep

Before reading the book I’d never really thought carefully about any of these properties of objects. For a while after reading it, I noticed them everywhere. Here’s one (kind of silly) example.

Shortly after reading the book I was visiting my family, and came across this wooden puzzle my aunt made:


I had a phase when I was ten or so where I was completely obsessed with this puzzle. Looking back, it’s not obvious why. It’s pretty simple and looks like the kind of thing that would briefly entertain much younger children. I was a weird kid and also didn’t have a PlayStation – maybe that’s explanation enough? But I didn’t have some kind of Victorian childhood where I was happy with an orange and a wooden toy at Christmas. I had access to plenty of plastic and electronic nineties tat that was more obviously fun.

I sat down for half an hour to play with this thing and try and remember what the appeal was. The main thing is that it turns out to be way more controllable than you might expect. The basic aim of the puzzle is just to get the ball bearings in the holes in any old order. This is the game that stops being particularly rewarding once you’re over the age of seven. But it’s actually possible to learn to isolate individual ball bearings by bashing them against the sides until one separates off, and then tilt the thing very precisely to steer one individually into a specific hole. That gives you a lot more options for variants on the basic game. For example, you can fill in the holes in a spiral pattern starting from the middle. Or construct a ‘fence’ of outer apples with a single missing ‘gate’ apple, steer two apples into the central pen (these apples are now sheep), and then close the gate with the last one.

The other interesting feature is that because this is a homemade game, the holes are not uniformly deep. The one in the top right is noticeably shallower than the others, and the ball bearing in this slot can be dislodged fairly easily while keeping the other nine in their place. This gives the potential for quite complicated dynamics of knocking down specific apples, and then steering other ones back in.

Still an odd way to have spent my time! But I can at least roughly understand why. The apple puzzle is less like the Leap Frog Learning Table than you might expect, and so the game can reward a surprisingly high level of skill. Part of this is from the continuous degrees of freedom you have in tilting the board, but the cool thing is that a lot of it comes from unintentional parts of the physical design. My aunt made the basic puzzle for small children, and the more complicated puzzles happened to be hidden within it.

The ability to dislodge the top right apple is not ‘supposed’ to be part of the game at all – an abstract version you might code up would have identical holes. But the world is going about its usual business of being incorrigibly plural, and there is just way more of it than any one abstract ruleset needs. The variation in the holes allows some of that complexity to accidentally leak in, breaking the simple game out into a much richer one.

Pebbles and birdsong

Now for the criticism part. I think there’s a real deficiency in the book that goes deeper than the tone issues I pointed out at the start. Crawford is insightful in his discussions of the kind of complexity that many handcrafted objects exhibit, that’s often standardised away in the modern world. But in his enthusiasm for people doing real things with real tools he’s forgotten the advantages of the systematised, ‘autistic’ world he dislikes. Funnelling messy reality into repeatable categories is how we get shit done at scale. It’s not just some unpleasant feature of modernity, either. Even something as simple as counting with pebbles relies on this:

To make the method work, you must choose bits-of-rock of roughly even sizes, so you can distinguish them from littler bits—stray grains of sand or dust in the bucket—that don’t count. How even? Even enough that you can make a reliable-enough judgement.

The counting procedure abstracts away the vivid specificity of the individual pebbles, and reduces them to simplistic interchangeable tokens. But there’s not much point in complaining about this. You need to do this to get the job done! And you can always break them back out into individuality later on if you want to do something else, like paint a still life of them.

I’m finding myself going back yet again to Christopher Norris’s talk on Derrida, which I discussed in my braindump here. (I’m going to repeat myself a bit in the next section. This was the most thought-provoking single thing I read last year, and I’m still working through the implications, so everything seems to lead back there at the moment.) Derrida picks apart some similar arguments by Rousseau, who was concerned with the bad side of systematisation in music:

One way of looking at Rousseau’s ideas about the melody/harmony dualism is to view them as the working-out of a tiff he was having with Rameau. Thus he says that the French music of his day is much too elaborate, ingenious, complex, ‘civilized’ in the bad (artificial) sense — it’s all clogged up with complicated contrapuntal lines, whereas the Italian music of the time is heartfelt, passionate, authentic, spontaneous, full of intense vocal gestures. It still has a singing line, it’s still intensely melodious, and it’s not yet encumbered with all those elaborate harmonies.

Crawford is advocating for something close to Rousseau’s pure romanticism. He brings along more recent and sophisticated arguments from phenomenology and embodied cognition, but he’s still very much on the side of spontaneity over structure. And I think he’s still vulnerable to the same arguments that Derrida was able to use against Rousseau. Norris explains it as follows:

… Rousseau gets into a real argumentative pickle when he say – lays it down as a matter of self-evident truth – that all music is human music. Bird-song just doesn’t count, he says, since it is merely an expression of animal need – of instinctual need entirely devoid of expressive or passional desire – and is hence not to be considered ‘musical’ in the proper sense of that term. Yet you would think that, given his preference for nature above culture, melody above harmony, authentic (spontaneous) above artificial (‘civilized’) modes of expression, and so forth, Rousseau should be compelled – by the logic of his own argument – to accord bird-song a privileged place vis-à-vis the decadent productions of human musical culture. However Rousseau just lays it down in a stipulative way that bird-song is not music and that only human beings are capable of producing music. And so it turns out, contrary to Rousseau’s express argumentative intent, that the supplement has somehow to be thought of as always already there at the origin, just as harmony is always already implicit in melody, and writing – or the possibility of writing – always already implicit in the nature of spoken language.

Derrida is pointing out that human music always has a structured component. We don’t just pour out an unmarked torrent of frequencies. We define repeatable lengths of notes, and precise intervals between pitches. (The evolution of these is a complicated engineering story in itself.) This doesn’t make music ‘inauthentic’ or ‘artificial’ in itself. It’s a necessary feature of anything we’d define as music.
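Equal temperament is a concrete example of that structured component: the octave is divided into twelve identical steps, each a frequency ratio of 2^(1/12). The formula is standard; the function name is mine:

```python
# Twelve-tone equal temperament: each semitone multiplies the
# frequency by 2**(1/12), so twelve steps give exactly an octave.
A4 = 440.0  # concert pitch, in Hz

def pitch(semitones_from_a4):
    return A4 * 2 ** (semitones_from_a4 / 12)

print(round(pitch(12), 1))  # A5, one octave up: 880.0
print(round(pitch(7), 2))   # E5, an equal-tempered fifth
# The tempered fifth (ratio about 1.4983) deliberately deviates from
# the 'pure' 3:2 ratio: the compromise that lets every key sound
# equally in tune on a fixed-pitch instrument.
```

Even this small corner of music is the product of centuries of negotiated structure, not spontaneous outpouring.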

I’d have been much happier with the book if it had some understanding of this interaction – ‘yes, structure is important, but I think we have too much of it, and here’s why’. But all we get is the romantic side. As with Rousseau’s romanticism, this tips over all too easily into pure reactionary nostalgia for an imagined golden age, and then we have to listen to yet another anecdote about how everything in the modern world is terrible. It’s not the eighteenth century any more, and we can do better now. And for all its genuine insight, this book mostly just doesn’t.

Book Review: The Eureka Factor

Last month I finally got round to reading The Eureka Factor by John Kounios and Mark Beeman, a popular book summarising research on ‘insightful’ thinking. I first mentioned it a couple of years ago after I’d read a short summary article, when I realised it was directly relevant to my recurring ‘two types of mathematician’ obsession:

The book is not focussed on maths – it’s a general interest book about problem solving and creativity in any domain. But it looks like it has a very similar way of splitting problem solvers into two groups, ‘insightfuls’ and ‘analysts’. ‘Analysts’ follow a linear, methodical approach to work through a problem step by step. Importantly, they also have cognitive access to those steps – if they’re asked what they did to solve the problem, they can reconstruct the argument.

‘Insightfuls’ have no such access to the way they solved the problem. Instead, a solution just ‘pops into their heads’.

Of course, nobody is really a pure ‘insightful’ or ‘analyst’. And most significant problems demand a mixed strategy. But it does seem like many people have a tendency towards one or the other.

I wasn’t too sure what I was getting into. The replication crisis has made me hyperaware of the dangers of uncritically accepting any results in psychology, and I’m way too ignorant of the field to have a good sense for which results still look plausible. However, the book turned out to be so extraordinarily Relevant To My Interests that I couldn’t resist writing up a review anyway.

The final chapters had a few examples along the lines of ‘[weak environmental effect] primes people to be more/less insightful’, and I know enough to stay away from those, but the earlier parts look somewhat more solid to me. I haven’t made much effort to trace back references, though, and I could easily still be being too credulous.

(I didn’t worry so much about replication with my previous post on the Cognitive Reflection Test. Getting the bat and ball question wrong is hardly the kind of weak effect that you need a sensitive statistical instrument to detect. It’s almost impossible to stop people getting it wrong! I did steer clear of any more dubious priming-style results, though, like the claim that people do better on the CRT when reading it ‘in a disfluent font’.)

Insight and intuition

First, it’s worth getting clear on exactly what Kounios and Beeman mean by ‘insight’. As they use it, insight is a specific type of creative thinking, which they define more generally as ‘the ability to reinterpret something by breaking it down into its elements and recombining these elements in a surprising way to achieve some goal.’ Insight is distinguished by its suddenness and lack of conscious control:

When this kind of creative recombination takes place in an instant, it’s an insight. But recombination can also result from the more gradual, conscious process that cognitive psychologists call “analytic” thought. This involves methodically and deliberately considering many possibilities until you find the solution. For example, when you’re playing a game of Scrabble, you must construct words from sets of letters. When you look at the set of letters “A-E-H-I-P-N-Y-P” and suddenly realize that they can form the word “EPIPHANY,” then that would be an insight. When you systematically try out different combinations of the letters until you find the word, that’s analysis.

Insights tend to have a few other features in common. Solving a problem by insight is normally very satisfying: the insight comes into consciousness along with a small jolt of positive affect. The insight itself is usually preceded by a longer period of more effortful thought about the problem. Sometimes this takes place just before the moment of insight, while at other times there is an ‘incubation’ phase, where the solution pops into your head while you’ve taken a break from deliberately thinking about it.

I’m not really going to get into this part in my review, but the related word ‘intuition’ is also used in an interestingly specific sense in the book, to describe the sense that a new idea is lurking beneath the surface, but is not consciously accessible yet. Intuitions often precede an insight, but have a different feel to the insight itself:

This puzzling phenomenon has a strange subjective quality. It feels like an idea is about to burst into your consciousness, almost as though you’re about to sneeze. Cognitive psychologists call this experience “intuition,” meaning an awareness of the presence of information in the unconscious mind — a new idea, solution, or perspective — without awareness of the information itself, at least until it pops into consciousness.

Insight problems

To study insight, psychologists need to come up with problems that reliably trigger an insight solution. One classic example discussed in The Eureka Factor is the Nine Dot Problem, where you are asked to connect the following set of black dots using only four lines, without taking your pen off the page:

[image source]
If you’ve somehow avoided seeing this puzzle before, think about it for a while first. In the absence of any kind of built-in spoiler block, I’ll insert a bunch of blank space here so that you hopefully have to scroll down off your current screen to see my discussion of the solution:

If you didn’t figure it out, a solution can be found in the Wikipedia article on insight problems here. It’ll probably look irritatingly obvious once you see it. The key feature of the solution is that the lines you draw have to extend outside the confines of the square of dots you start with (thus spawning a whole subgenre of annoying business literature on ‘thinking outside the box’). Nothing in the rules forbids this, but the setup focusses most people’s attention on the grid itself, and breaking out of this mindset requires a kind of reframing, a throwing away of artificially imposed constraints. This is a common characteristic of insight problems.

This characteristic also makes insight hard to test. For testing purposes, it’s useful to have a large stock of similar puzzles in hand. But a good reframing like the one in the Nine Dot Problem tends to be a bit of a one-off: once you’ve had the idea of extending the lines outside the box, it applies trivially to all similar puzzles, and not at all to other types of puzzle.

(I talked about something similar in my last post, on the Cognitive Reflection Test. The test was inspired by one good puzzle, the ‘bat and ball problem’, and adds two other questions that were apparently picked to be similar. Five thousand words and many comments later, it’s not obvious to me or most of the other commenters that these three problems form any kind of natural set at all.)

Kounios and Beeman discuss several of these eye-catching ‘one-off’ problems in the book, but their own research focusses on a more standardisable kind of puzzle, the Remote Associates Test. This test gives you three words, such as

pine   crab   sauce

and asks you to find the common word that links them. The authors claim that these can be solved either with or without insight, and asked participants to self-categorise their responses as either fitting in the ‘insightful’ or ‘analytic’ categories:

The analytic approach is to consciously search through the possibilities and try out potential answers. For example, start with “pine.” Imagine yourself thinking: What goes with “pine”? Perhaps “tree”? “Pine tree” works. “Crab tree”? Hmmm … maybe. “Tree sauce”? No. Have to try something else. How about “cake”? “Crab cake” works. “Cake sauce” is a bit of a reach but might be acceptable. However, “pine cake” and “cake pine” definitely don’t work. What else? How about “crabgrass”? That works. But “pine grass”? Not sure. Perhaps there is such a thing. But “sauce grass” and “grass sauce” are definitely out. What else goes with “sauce”? How about “applesauce”? That’s good. “Pineapple” and “crab apple” also work. The answer is “apple”!

This is analytical thought: a deliberate, methodical, conscious search through the possible combinations. But this isn’t the only way to come up with the solution. Perhaps you’re trying out possibilities and get stuck or even draw a blank. And then, “Aha! Apple” suddenly pops into your awareness. That’s what would happen if you solved the problem by insight. The solution just occurs to you and doesn’t seem to be a direct product of your ongoing stream of thought.
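The ‘analytic’ strategy the authors describe is essentially a brute-force search: take each candidate word and test it against all three cues. As a minimal sketch (the `COMPOUNDS` set here is a tiny hand-made stand-in, not a real Remote Associates word bank):

```python
# Tiny stand-in list of known compounds/phrases. A real solver would use a
# large corpus-derived list; these are just the ones mentioned in the quote.
COMPOUNDS = {
    "pine tree", "crab tree", "crab cake", "crabgrass",
    "applesauce", "apple sauce", "pineapple", "crab apple",
}

def forms_compound(a, b):
    """True if the two words combine into a known compound, in either order,
    with or without a space."""
    return any(c in COMPOUNDS for c in (f"{a} {b}", f"{b} {a}", a + b, b + a))

def solve_rat(cues, candidates):
    """Return every candidate that combines with all three cue words."""
    return [c for c in candidates if all(forms_compound(c, cue) for cue in cues)]

print(solve_rat(["pine", "crab", "sauce"], ["tree", "cake", "grass", "apple"]))
```

This mirrors the deliberate search in the quote: ‘tree’ and ‘grass’ each fail on one cue, and only ‘apple’ survives all three checks.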

This categorisation seems suspiciously neat, and if I rely on my own introspection for solving one of these (which is obviously dubious itself) it feels like more of a mix. I’ll often generate some verbal noise about cakes and trees that sounds vaguely like I’m doing something systematic, but the main business of solving the thing seems to be going on nonverbally elsewhere. But I do think there’s something there – the answer can be very immediate and ‘poppy’, or it can surface after a longer and more accessible process of trying plausible words. This was tested in a more objective way by seeing what people do when they don’t come up with the answer:

Insightfuls made more “errors of omission.” When waiting for an insight that hadn’t yet arrived, they had nothing to offer in its place. So when the insight didn’t arrive in time, they let the clock run out without having made a guess. In contrast, Analysts made more “errors of commission.” They rarely timed out, but instead guessed – sometimes correctly – by offering the potential solution they had been consciously thinking about when their time was almost up.

Kounios and Beeman’s research focussed on finding neural correlates of the ‘aha’ moment of insight, using a combination of an EEG test to pinpoint the time of the insight, and fMRI scanning to locate the brain region:

We found that at the moment a solution pops into someone’s awareness as an insight, a sudden burst of high-frequency EEG activity known as “gamma waves” can be picked up by electrodes just above the right ear. (Gamma waves represent cognitive processing in the brain, such as paying attention to something or linking together different pieces of information.) We were amazed at the abruptness of this burst of activity—just what one would expect from a sudden insight. Functional magnetic resonance imaging showed a corresponding increase in blood flow under these electrodes in a part of the brain’s right temporal lobe called the “anterior superior temporal gyrus” (see figure 5.2), an area that is involved in making connections between distantly related ideas, as in jokes and metaphors. This activity was absent for analytic solutions.

So we had found a neural signature of the aha moment: a burst of activity in the brain’s right hemisphere.

I’m not sure how settled this is, though. I haven’t tried to do a proper search of the literature, but certainly a review from 2010 describes the situation as very much in flux:

A recent surge of interest into the neural underpinnings of creative behavior has produced a banquet of data that is tantalizing but, considered as a whole, deeply self-contradictory.

(The book was published somewhat later, in 2015, but mostly cites research from prior to this review, such as this paper.)

As an outsider it’s going to be pretty hard for me to judge this without spending a lot more time than I really want to right now. However, regardless of how this holds up, I was really interested in the authors’ discussion of why a right-hemisphere neural correlate of insight would make sense.

Insight and context

One of the authors, Mark Beeman, had previously studied language deficits in people who had suffered brain damage to the right hemisphere. One such patient was the trial attorney D.B.:

What made D.B. “lucky” was that the stroke had damaged his right hemisphere rather than his left. Had the stroke occurred in the mirror-image left-hemisphere region, he would have experienced Wernicke’s aphasia, a profound deficit of language comprehension. In the worst cases, people with Wernicke’s aphasia may be completely unable to understand written or spoken language.

Nevertheless, D.B. didn’t feel lucky. He may have been better off than if he’d had a left-hemisphere stroke, but he felt that his language ability was far from normal. He said that he “couldn’t keep up” with conversations or stories the way he used to. He felt impaired enough that he had stopped litigating trials—he thought that it would have been a disservice to his clients to continue to represent them in court.

D.B. and the other patients were able to understand the straightforward meanings of words and the literal meanings of sentences. Even so, they complained about vague difficulties with language. They failed to grasp the gist of stories or were unable to follow multiple-character or multiple-plot stories, movies, or television shows. Many didn’t get jokes. Sarcasm and irony left them blank with incomprehension. They could sometimes muddle along without these abilities, but whenever things became subtle or implicit, they were lost.

An example of the kind of problem D.B. struggled with is the following:

Saturday, Joan went to the park by the lake. She was walking barefoot in the shallow water, not knowing that there was glass nearby. Suddenly, she grabbed her foot in pain and called for help, and the lifeguard came running.

If D.B. was given a statement about something that occurred explicitly in the text, such as ‘Joan went to the park on Saturday’, he could say whether it was true or false with no problems at all. In fact, he did better than all of the control subjects on these sorts of explicit questions. But if he was instead presented with a statement like ‘Joan cut her foot’, where some of the facts are left implicit, he was unable to answer.

This was interesting to me, because it seems so directly relevant to the discussion last year on ‘cognitive decoupling’. This is a term I’d picked up from Sarah Constantin, who herself got it from Keith Stanovich:

Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence. Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.

The patients in Beeman’s study have so much difficulty with contextualisation that they struggle with anything at all that is left implicit, even straightforward inferences like ‘Joan cut her foot’. This appears to match with other evidence from visual half-field studies, where subjects are presented with words on either the right or left half of the visual field. Those on the left half will go first to the right hemisphere, so that the right hemisphere gets a head start on interpreting the stimulus. This shows a similar difference between hemispheres:

The left hemisphere is sharp, focused, and discriminating. When a word is presented to the left hemisphere, the meaning of that word is activated along with the meanings of a few closely related words. For example, when the word “table” is presented to the left hemisphere, this might strongly energize the concepts “chair” and “kitchen,” the usual suspects, so to speak. In contrast, the right hemisphere is broad, fuzzy, and promiscuously inclusive. When “table” is presented to the right hemisphere, a larger number of remotely related words are weakly invoked. For example, “table” might activate distant associations such as “water” (for underground water table), “payment” (for paying under the table), “number” (for a table of numbers), and so forth.

Why would picking up on these weak associations be relevant to insight? The story seems to be that this tangle of secondary meanings – the ‘Lovecraftian penumbra of monstrous shadow phalanges’ – works to pull your attention away from the obvious interpretation you’re stuck with, helping you to find a clever new reframing of the problem.

This makes a lot of sense to me as a rough outline. In my own experience at least, the kind of thinking that is likely to lead to an insight experience feels softer and more diffuse than the more ‘analytic’ kind, more a process of sort of rolling the ideas around gently in your head and seeing if something clicks than a really focussed investigation of the problem. ‘Thinking too hard’ tends to break the spell. This fits well with the idea that insights are triggered by activation of weak associations.

Final thoughts

There’s a lot of other interesting material in the book about the rest of the insight process, including the incubation period leading up to an insight flash, and the phenomenon of ‘intuitions’, where you feel that an insight is on its way but you don’t know what it is yet. I’ll never get through this review if I try to cover all of that, so instead I’m going to finish up with a couple of weak associations of my own that got activated while reading the book.

I’ve been getting increasingly dissatisfied with the way dual process theories split cognition into a fast/automatic/intuitive ‘System 1’ and a slow/effortful/systematic ‘System 2’. System 1 in particular has started to look to me like an amorphous grab bag of all kinds of things that would be better separated out.

The Eureka Factor has pushed this a little further, by bringing out a distinction between two things that normally get lumped under System 1 but are actually very different. One obvious type of System 1-ish behaviour is routine action, the way you go about tasks you have done many times before, like making a sandwich or walking to work. These kinds of activities require very little explicit thought and generally ‘just happen’ in response to cues in the environment.

The kind of ‘insightful’ thinking discussed in The Eureka Factor would also normally get classed under System 1: it’s not very systematic and involves a fast, opaque process where the answer just pops into your head without much explanation. But it’s also very different to routine action. It involves deliberately choosing to think about a new situation, rather than one you have seen many times before, and a successful insight gives you a qualitatively new kind of understanding. The insight flash itself is a very noticeable, enjoyable feature of your conscious attention, rather than the effortless, unexamined state of absorbed action.

This was pointed out to me once before by Sarah Constantin, in the comments section of her Distinctions in Types of Thought:

You seem to be lumping “flashes of insight” in with “effortless flow-state”. I don’t think they’re the same. For one thing, inspiration generally comes in bursts, while flow-states can persist for a while (driving on a highway, playing the piano, etc.) Definitely, “flashes of insight” aren’t the same type of thought as “effortful attention” — insight feels easy, instant, and unforced. But they might be their own, unique category of thought. Still working out my ontology here.

I’d sort of had this at the back of my head since then, but the book has really brought out the distinction clearly. I’m sure these aren’t the only types of thinking getting shoved into the System 1 category, and I get the sense that there’s a lot more splitting out that I need to do.

I also thought about how the results in the book fit in with my perennial ‘two types of mathematician’ question. (This is a weird phenomenon I’ve noticed where a lot of mathematicians have written essays about how mathematicians can be divided into two groups; I’ve assembled a list of examples here.) ‘Analytic’ versus ‘insightful’ seems to be one of the distinctions between groups, at least. It seems relevant to Poincaré’s version, for instance:

The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance.

The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.

In fact, Poincaré once also gave a striking description of an insight flash himself:

Just at this time, I left Caen, where I was living, to go on a geologic excursion under the auspices of the School of Mines. The incidents of the travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step, the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake, I verified the result at my leisure.

If the insight/analysis split is going to be relevant here, it would require that people favour either ‘analytic’ or ‘insight’ solutions as a general cognitive style, rather than switching between them freely depending on the problem. The authors do indeed claim that this is the case:

Most people can, and to some extent do, use both of these approaches. A pure type probably doesn’t exist; each person falls somewhere on an analytic-insightful continuum. Yet many—perhaps most—people tend to gravitate toward one of these styles, finding their particular approach to be more comfortable or natural.

This is based on their own research, in which they recorded participants’ self-reports of whether they were using an ‘insight’ or an ‘analytic’ approach to solve anagrams, and compared these with EEG recordings of their resting state. They found a number of differences, including more right-hemisphere activity in the ‘insight’ group, and lower levels of communication between the frontal lobe and other parts of the brain, indicating a more disorderly thinking style with less top-down control. This may suggest more freedom to allow weak associations between thoughts to have a crack at the problem, without being overruled by the dominant interpretation.

Again, and you’ve probably got very bored of this disclaimer by now, I have no idea how well the details of this will hold up. That’s true for pretty much every specific detail in the book that I’ve discussed here. Still, the link between insight and weak associations makes a lot of sense to me, and the overall picture certainly triggered some useful reframings. That seems very appropriate for a book about insight.

The Bat and Ball Problem Revisited

In this post, I’m going to assume you’ve come across the Cognitive Reflection Test before and know the answers. If you haven’t, it’s only three quick questions, go and do it now.

Bat & Ball train station, Sevenoaks [source]

One of the striking early examples in Kahneman’s Thinking, Fast and Slow is the following problem:

(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball.

How much does the ball cost? _____ cents

This question first turns up informally in a paper by Kahneman and Frederick, who find that most people get it wrong:

Almost everyone we ask reports an initial tendency to answer “10 cents” because the sum $1.10 separates naturally into $1 and 10 cents, and 10 cents is about the right magnitude. Many people yield to this immediate impulse. The surprisingly high rate of errors in this easy problem illustrates how lightly System 2 monitors the output of System 1: people are not accustomed to thinking hard, and are often content to trust a plausible judgment that quickly comes to mind.

In Thinking, Fast and Slow, the bat and ball problem is used as an introduction to the major theme of the book: the distinction between fluent, spontaneous, fast ‘System 1’ mental processes, and effortful, reflective and slow ‘System 2’ ones. The explicit moral is that we are too willing to lean on System 1, and this gets us into trouble:

The bat-and-ball problem is our first encounter with an observation that will be a recurrent theme of this book: many people are overconfident, prone to place too much faith in their intuitions. They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.

This story is very compelling in the case of the bat and ball problem. I got this problem wrong myself when I first saw it, and still find the intuitive-but-wrong answer very plausible looking. I have to consciously remind myself to apply some extra effort and get the correct answer.

However, this becomes more complicated when you start considering other tests of this fast-vs-slow distinction. Frederick later combined the bat and ball problem with two other questions to create the Cognitive Reflection Test:

(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes

(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days

These are designed to also have an ‘intuitive-but-wrong’ answer (100 minutes, 24 days), and an ‘effortful-but-right’ answer (5 minutes, 47 days). But this time I seem to be immune to the wrong answers, in a way that just doesn’t happen with the bat and ball:

I always have the same reaction, and I don’t know if it’s common or I’m just the lone idiot with this problem. The ‘obvious wrong answers’ for 2. and 3. are completely unappealing to me (I had to look up 3. to check what the obvious answer was supposed to be). Obviously the machine-widget ratio hasn’t changed, and obviously exponential growth works like exponential growth.

When I see 1., however, I always think ‘oh it’s that bastard bat and ball question again, I know the correct answer but cannot see it’. And I have to stare at it for a minute or so to work it out, slowed down dramatically by the fact that Obvious Wrong Answer is jumping up and down trying to distract me.

If this test was really testing my propensity for effortful thought over spontaneous intuition, I ought to score zero. I hate effortful thought! As it is, I score two out of three, because I’ve trained my intuitions nicely for ratios and exponential growth. The ‘intuitive’, ‘System 1’ answer that pops into my head is, in fact, the correct answer, and the supposedly ‘intuitive-but-wrong’ answers feel bad on a visceral level. (Why the hell would the lily pads take the same amount of time to cover the second half of the lake as the first half, when the rate of growth is increasing?)
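The ‘effortful-but-right’ answers to questions 2 and 3 are easy to sanity-check with a few lines of arithmetic (units chosen for convenience):

```python
# Widgets: 5 machines make 5 widgets in 5 minutes, so each machine makes
# one widget per 5 minutes. 100 machines therefore make 100 widgets in
# the same 5 minutes -- the machine-widget ratio hasn't changed.
widgets_per_machine_per_minute = 5 / (5 * 5)
minutes_for_100 = 100 / (100 * widgets_per_machine_per_minute)
print(minutes_for_100)   # 5.0

# Lily pads: the patch doubles daily. Pick units so the patch has size 1
# on day 0; then it has size 2**day on any later day.
def patch_size(day):
    return 2 ** day

lake = patch_size(48)                # full lake on day 48
print(patch_size(47) == lake / 2)    # True: half covered on day 47, not 24
```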

The bat and ball still gets me, though. My gut hasn’t internalised anything useful, and it’s super keen on shouting out the wrong answer in a distracting way. My dislike for effortful thought is definitely a problem here.

I wanted to see if others had raised the same objection, so I started doing some research into the CRT. In the process I discovered a lot of follow-up work that makes the story much more complex and interesting.

I’ve come nowhere near to doing a proper literature review. Frederick’s original paper has been cited nearly 3000 times, and dredging through that for the good bits is a lot more work than I’m willing to put in. This is just a summary of the interesting stuff I found on my limited, partial dig through the literature.

Thinking, inherently fast and inherently slow

Frederick’s original Cognitive Reflection Test paper describes the System 1/System 2 divide in the following way:

Recognizing that the face of the person entering the classroom belongs to your math teacher involves System 1 processes — it occurs instantly and effortlessly and is unaffected by intellect, alertness, motivation or the difficulty of the math problem being attempted at the time. Conversely, finding \sqrt{19163} to two decimal places without a calculator involves System 2 processes — mental operations requiring effort, motivation, concentration, and the execution of learned rules.

I find it interesting that he frames mental processes as being inherently effortless or effortful, independent of the person doing the thinking. This is not quite true even for the examples he gives — faceblind people and calculating prodigies exist.

This framing is important for interpreting the CRT. If the problem inherently has a wrong ‘System 1 solution’ and a correct ‘System 2 solution’, the CRT can work as intended, as an efficient tool to split people by their propensity to use one strategy or the other. If there are ‘System 1’ ways to get the correct answer, the whole thing gets much more muddled, and it’s hard to disentangle natural propensity to reflection from prior exposure to the right mathematical concepts.

My tentative guess is that the bat and ball problem is close to being this kind of efficient tool. Although in some ways it’s the simplest of the three problems, solving it in a ‘fast’, ‘intuitive’ way relies on seeing the problem in a way that most people’s education won’t have provided. (I think this is true, anyway – I’ll go into more detail later.) I suspect that this is less true of the other two problems – ratios and exponential growth are topics that a mathematical or scientific education is more likely to build intuition for.

(Aside: I’d like to know how these other two problems were chosen. The paper just states the following:

Motivated by this result [the answers to the bat and ball question], two other problems found to yield impulsive erroneous responses were included with the “bat and ball” problem to form a simple, three-item “Cognitive Reflection Test” (CRT), shown in Figure 1.

I have a vague suspicion that Frederick trawled through something like ‘The Bumper Book of Annoying Riddles’ to find some brainteasers that don’t require too much in the way of mathematical prerequisites. The lilypads one has a family resemblance to the classic grains-of-wheat-on-a-chessboard puzzle, for instance.)

However, I haven’t found any great evidence either way for this guess. The original paper doesn’t break down participants’ scores by question – it just gives mean scores on the test as a whole. I did however find this meta-analysis of 118 CRT studies, which shows that the bat and ball question is the most difficult on average – only 32% of all participants get it right, compared with 40% for the widgets and 48% for the lilypads. It also has the biggest jump in success rate when comparing university students with non-students. That looks like better mathematical education does help on the bat and ball, but it doesn’t clear up how it helps. It could improve participants’ ability to intuitively see the answer. Or it could improve ability to come up with an ‘unintuitive’ solution, like solving the corresponding simultaneous equations by a rote method.

What I’d really like is some insight into what individual people actually do when they try to solve the problems, rather than just this aggregate statistical information. I haven’t found exactly what I wanted, but I did turn up a few interesting studies on the way.

No, seriously, the answer isn’t ten cents

My favourite thing I found was this (apparently unpublished) ‘extremely rough draft’ by Meyer, Spunt and Frederick from 2013, revisiting the bat and ball problem. The intuitive-but-wrong answer turns out to be extremely sticky, and the paper is basically a series of increasingly desperate attempts to get people to actually think about the question.

One conjecture for what people are doing when they get this question wrong is the attribute substitution hypothesis. This was suggested early on by Kahneman and Frederick, and is a fancy way of saying that they are instead solving the following simpler problem:

(1) A bat and a ball cost $1.10 in total. The bat costs $1.00.

How much does the ball cost? _____ cents

Notice that this is missing the ‘more than the ball’ clause at the end, turning the question into a much simpler arithmetic problem. This simple problem does have ‘ten cents’ as the answer, so it’s very plausible that people are getting confused by it.

Meyer, Spunt and Frederick tested this hypothesis by getting respondents to recall the problem from memory. This showed a clear difference: 94% of ‘five cent’ respondents could recall the correct question, but only 61% of ‘ten cent’ respondents. It’s possible that there is a different common cause of both the ‘ten cent’ response and misremembering the question, but it at least gives some support for the substitution hypothesis.

However, getting people to actually answer the question correctly was a much more difficult problem. First they tried bolding the words ‘more than the ball’ to make this clause more salient. This made surprisingly little impact: 29% of respondents solved it, compared with 24% for the original problem. Printing both versions was slightly more successful, bumping the correct response rate up to 35%, but it was still a small effect.

After this, they ditched subtlety and resorted to pasting these huge warnings above the question:

Computation warning: ‘Be careful! Many people miss the following problem because they do not take the time to check their answer.’ Comprehension warning: ‘Be careful! Many people miss the following problem because they read it too quickly and actually answer a different question than the one that was asked.’

These were still only mildly effective, with the rate of correct solutions jumping from 45% to 50%. People just really like the answer ‘ten cents’, it seems.

At this point they completely gave up and just flat out added “HINT: 10 cents is not the answer.” This worked reasonably well, though there was still a hard core of 13% who persisted in writing down ‘ten cents’.

That’s where they left it. At this point there’s not really any room to escalate beyond confiscating the respondents’ pens and prefilling in the answer ‘five cents’, and I worry that somebody would still try and scratch in ‘ten cents’ in their own blood. The wrong answer is just incredibly compelling.

So, what are people doing when they solve this problem?

Unfortunately, it’s hard to tell from the published literature (or at least what I found of it). What I’d really like is lots of transcripts of individuals talking through their problem-solving process. The closest I found was this paper by Szaszi et al, who did carry out this sort of interview, but it doesn’t include any examples of individual responses. Instead, it gives an aggregated overview of types of responses, which doesn’t go into the kind of detail I’d like.

Still, the examples given for their response categories give a few clues. The categories are:

  • Correct answer, correct start. Example given: ‘I see. This is an equation. Thus if the ball equals to x, the bat equals to x plus 1… ‘

  • Correct answer, incorrect start. Example: ‘I would say 10 cents… But this cannot be true as it does not sum up to €1.10…’

  • Incorrect answer, reflective, i.e. some effort was made to reconsider the answer given, even if it was ultimately incorrect. Example: ‘… but I’m not sure… If together they cost €1.10, and the bat costs €1 more than the ball… the solution should be 10 cents. I’m done.’

  • No reflection. Example: ‘Ok. I’m done.’

These demonstrate one way to reason your way to the correct answer (solve the simultaneous equations) and one way to be wrong (just blurt out the answer). They also demonstrate one way to recover from an incorrect solution (think about the answer you blurted out and see if it actually works). Still, it’s all rather abstract and high level.

How To Solve It

However, I did manage to stumble onto another source of insight. While researching the problem I came across this article from the online magazine of the Association for Psychological Science, which discusses a variant ‘Ford and Ferrari problem’. This is quite interesting in itself, but I was most excited by the comments section. Finally some examples of how the problem is solved in the wild!

The simplest ‘analytical’, ‘System 2’ solution is to rewrite the problem as two simultaneous linear equations and plug-and-chug your way to the correct answer. For example, writing B for the bat and b for the ball, we get the two equations

B + b = 110,
B - b = 100,

which we could then solve in various standard ways, e.g.

2B = 210,
B = 105,

which then gives

b = 110 - B = 5.

There are a couple of variants of this explained in the comments. It’s a very reliable way to tackle the problem: if you already know how to do this sort of rote method, there are no surprises. This sort of method would work for any similar problem involving linear equations.
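The elimination steps above are mechanical enough to hand straight to a computer; a minimal sketch of the same rote method, with prices in cents:

```python
# Solve B + b = 110, B - b = 100 by elimination (prices in cents).
total, difference = 110, 100

# Adding the two equations eliminates b: 2B = total + difference.
bat = (total + difference) / 2   # B = 105
ball = total - bat               # b = 110 - B = 5

print(f"bat = {bat} cents, ball = {ball} cents")
```

This is the appeal of the rote approach: every step is forced, so there is nothing for the Obvious Wrong Answer to latch onto.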

However, it’s pretty obvious that a lot of people won’t have access to this method. Plenty of people noped out of mathematics long before they got to simultaneous equations, so they won’t be able to solve it this way. What might be less obvious, at least if you mostly live in a high-maths-ability bubble, is that these people may also be missing the sort of tacit mathematical background that would even allow them to frame the problem in a useful form in the first place.

That sounds a bit abstract, so let’s look at some responses (I’ll paste all these straight in, so any typos are in the original). First, we have these two confused commenters:

The thing is, why does the ball have to be $.05? It could have been .04 0r.03 and the bat would still cost more than $1.


This is exactly what bothers me and resulted in me wanting to look up the question online. On the quiz the other 2 questions were definitive. This one technically could have more than one answer so this is where phycologists actually mess up when trying to give us a trick question. The ball at .4 and the bat at 1.06 doesn’t break the rule either.

These commenters don’t automatically see two equations in two variables that together are enough to constrain the problem. Instead they seem to focus mainly on the first condition (adding up to $1.10) and just use the second one as a vague check at best (‘the bat would still cost more than $1’). This means that they are unable to immediately tell that the problem has a unique solution.

In response, another commenter, Tony, suggests a correct solution which is an interesting mix of writing the problem out formally and then figuring out the answer by trial and error:

I hear your pain. I feel as though psychologists and psychiatrists get together every now and then to prove how stoopid I am. However, after more than a little head scratching I’ve gained an understanding of this puzzle. It can be expressed as two facts and a question A=100+B and A+B=110, so B=? If B=2 then the solution would be 100+2+2 and A+B would be 104. If B=6 then the solution would be 100+6+6 and A+B would be 112. But as be KNOW A+B=110 the only number for B on it’s own is 5.

This suggests enough half-remembered mathematical knowledge to find a sensible abstract framing, but not enough to solve it the standard way.
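Tony’s guess-and-check strategy can be rendered as a simple loop (my reconstruction of his approach, in cents, not anything he wrote): try candidate ball prices and keep the first one for which both of his facts hold.

```python
# Tony's two facts: bat = 100 + ball, and bat + ball = 110 (all in cents).
# Trial and error over candidate ball prices, as in his B=2 and B=6 attempts.
answer = None
for ball in range(0, 11):        # try 0 to 10 cents
    bat = 100 + ball
    if bat + ball == 110:        # both facts satisfied
        answer = (ball, bat)
        break

print(answer)
```

With ball = 2 the total comes to 104, with ball = 6 it overshoots to 112, and only ball = 5 lands exactly on 110, just as in his comment.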

Finally, commenter Marlo Eugene provides an ingenious way of solving the problem without writing all the algebraic steps out:

Linguistics makes all the difference. The conceptual emphasis seems to lie within the word MORE.

X + Y = $1.10. If X = $1 MORE then that leaves $0.10 TO WORK WITH rather than automatically assign to Y

So you divide the remainder equally (assuming negative values are disqualified) and get 0.05.
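Marlo Eugene’s move, spelled out in code (my paraphrase of the comment, not their notation): set aside the $1 ‘MORE’, then split the remaining ten cents equally between bat and ball.

```python
# Marlo Eugene's shortcut, in cents: the $1 'MORE' is reserved for the bat,
# leaving 10 cents "to work with" rather than automatically assigned to Y.
total, more = 110, 100
to_work_with = total - more      # 10 cents
ball = to_work_with // 2         # split the remainder equally: 5 cents
bat = more + ball                # 105 cents

print(ball, bat)
```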

So even this small sample of comments suggests a wide diversity of problem-solving methods leading to the two common answers. Further, these solutions don’t all split neatly into ‘System 1’ ‘intuitive’ and ‘System 2’ ‘analytic’. Marlo Eugene’s solution, for instance, is a mixed solution of writing the equations down in a formal way, but then finding a clever way of just seeing the answer rather than solving them by rote.

I’d still appreciate more detailed transcripts, including the time taken to solve the problem. My suspicion is still that very few people solve this problem with a fast intuitive response, in the way that I rapidly see the correct answer to the lilypad question. Even the more ‘intuitive’ responses, like Marlo Eugene’s, seem to rely on some initial careful reflection and a good initial framing of the problem.

If I’m correct about this lack of fast responses, my tentative guess for the reason is that it has something to do with the way most of us learn simultaneous equations in school. We generally learn arithmetic as young children in a fairly concrete way, with the formal numerical problems supplemented with lots of specific examples of adding up apples and bananas and so forth.

But then, for some reason, this goes completely out of the window once the unknown quantity isn’t sitting on its own on one side of the equals sign. This is instead hived off into its own separate subject, called ‘algebra’, and the rules are taught much later in a much more formalised style, without much attempt to build up intuition first.

(One exception is the sort of puzzle sheets that are often given to young kids, where the unknowns are just empty boxes to be filled in. Sometimes you get 2+3=□, sometimes it’s 2+□=5, but either way you go about the same process of using your wits to figure out the answer. Then, for some reason I’ll never understand, the worksheets get put away and the poor kids don’t see the subject again until years later, when the box is now called x for some reason and you have to find the answer by defined rules. Anyway, this is a separate rant.)

This lack of a rich background in puzzling out the answer to specific concrete problems means most of us lean hard on formal rules in this domain, even if we’re relatively mathematically sophisticated. Only a few build up the necessary repertoire of tricks to solve the problem quickly by insight. I’m reminded of a story in Feynman’s The Pleasure of Finding Things Out:

Around that time my cousin, who was three years older, was in high school. He was having considerable difficulty with his algebra, so a tutor would come. I was allowed to sit in a corner while the tutor would try to teach my cousin algebra. I’d hear him talking about x.

I said to my cousin, “What are you trying to do?”

“I’m trying to find out what x is, like in 2x + 7 = 15.”

I say, “You mean 4.”

“Yeah, but you did it by arithmetic. You have to do it by algebra.”

I learned algebra, fortunately, not by going to school, but by finding my aunt’s old schoolbook in the attic, and understanding that the whole idea was to find out what x is – it doesn’t make any difference how you do it.

I think this reliance on formal methods might be somewhat less true for exponential growth and ratios, the subjects underpinning the lilypad and widget questions. Certainly I seem to have better intuition there, without having to resort to rote calculation. But I’m not sure how general this is.

How To Visualise It

If you wanted to solve the bat and ball problem without having to ‘do it by algebra’, how would you go about it?

My original post on the problem was a pretty quick, throwaway job, but over time it picked up some truly excellent comments by anders and Kyzentun, which really start to dig into the structure of the problem and suggest ways to ‘just see’ the answer. The thread with anders in particular goes into lots of other examples of how we think through solving various problems, and is well worth reading in full. I’ll only summarise the bat-and-ball-related parts of the comments here.

We all used some variant of the method suggested by Marlo Eugene in the comments above. Writing out the basic problem again, we have:

B + b = 110,
B - b = 100.

Now, instead of immediately jumping to the standard method of eliminating one of the variables, we can just look at what these two equations are saying and solve it directly ‘by thinking’. We have a bat, B. If you add the price of the ball, b, you get 110 cents. If you instead remove the same quantity b you get 100 cents. So the bat’s price must be exactly halfway between these two numbers, at 105 cents. That leaves five for the ball.

Now that I’m thinking of the problem in this way, I directly see the equations as being ‘about a bat that’s halfway between 100 and 110 cents’, and the answer is incredibly obvious.
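Written out formally, the ‘halfway between’ observation is exactly what you get by adding the two equations together, so the two pictures agree:

```latex
\begin{aligned}
(B + b) + (B - b) &= 110 + 100 \\
2B &= 210 \\
B &= 105, \qquad b = 110 - B = 5.
\end{aligned}
```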

Kyzentun suggests a variant on the problem that is much less counterintuitive than the original:

A centered piece of text and its margins are 110 columns wide. The text is 100 columns wide. How wide is one margin?

Same numbers, same mathematical formula to reach the solution. But less misleading because you know there are two margins, and thus know to divide by two after subtracting.

In the original problem, the 110 units and 100 units both refer to something abstract, the sum and difference of the bat and ball. In Kyzentun’s version these become much more concrete objects, the width of the text and the total width of the margins. The work of seeing the equations as relating to something concrete has mostly been done for you.

Similarly, anders works the problem by ‘getting rid of the 100 cents’, and splitting the remainder in half to get at the price of the ball:

I just had an easy time with #1 which I haven’t before. What I did was take away the difference so that all the items are the same (subtract 100), evenly divide the remainder among the items (divide 10 by 2) and then add the residuals back on to get 105 and 5.

The heuristic I seem to be using is to treat objects as made up of a value plus a residual. So when they gave me the residual my next thought was “now all the objects are the same, so whatever I do to one I do to all of them”.
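anders’s recipe also translates step for step into simple arithmetic. A minimal sketch, again working in cents (the variable names are mine):

```python
# anders's 'residual' method, transcribed step by step:
total = 110                      # bat and ball together
difference = 100                 # the bat costs this much more than the ball
remainder = total - difference   # "subtract 100": now both items are the same
each = remainder // 2            # "divide 10 by 2": split the remainder evenly
ball = each                      # the ball is just its share: 5
bat = each + difference          # "add the residuals back on": 105
print(bat, ball)  # 105 5
```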

I think that after reasoning my way through all these perspectives, I’m finally at the point where I have a quick, ‘intuitive’ understanding of the problem. But it’s surprising how much work it was for such a simple bit of algebra.

Final thoughts

Rather than making any big conclusions, the main thing I wanted to demonstrate in this post is how complicated the story gets when you look at one problem in detail. I’ve written about close reading recently, and this has been something like a close reading of the bat and ball problem.

Frederick’s original paper on the Cognitive Reflection Test is in that generic social science style where you define a new metric and then see how it correlates with a bunch of other macroscale factors (either big social categories like gender or education level, or the results of other statistical tests that try to measure factors like time preference or risk preference). There’s a strange indifference to the details of the test itself – at no point does he discuss why he picked those specific three questions, and there’s no attempt to model what was making the intuitive-but-wrong answer appealing.

The later paper by Meyer, Spunt and Frederick is much more interesting to me, because it really starts to pick apart the specifics of the bat and ball problem. Is an easier question getting substituted? Can participants reproduce the correct question from memory?

I learned the most from the individual responses, though, and seeing the variety of ways people go about solving the problem. It’s very strange to me that I had an easier time digging this out from an internet comment thread than the published literature! I would love to see a lot more research into what people actually do when they do mathematics, and the bat and ball problem would be a great place to start.


I’m interested in any comments on the post, but here are a few specific things I’d like to get your answers to:

  • My rapid, intuitive answer for the bat and ball question is wrong (at least until I retrained it by thinking about the problem way too much). However, for the other two I ‘just see’ the correct answer. Is this common for other people, or do you have a different split?

  • If you’re able to rapidly ‘just see’ the answer to the bat and ball question, how do you do it?

  • How do people go about designing tests like these? This isn’t at all my field and I’d be interested in any good sources. I’d kind of assumed that there’d be some kind of serious-business Test Creation Methodology, but for the CRT at least it looks like people just noticed they got surprising answers for the bat and ball question and looked around for similar questions. Is that unusual compared to other psychological tests?

[I’ve cross-posted this at LessWrong, because I thought the topic fits quite nicely – comments at either place are welcome.]

Two types of mathematician reference list

(I posted this on Less Wrong back in April and forgot to cross post here. It’s just the same references I’ve posted before, but it’s worth reading over there for the comments, which are great.)

This is an expansion of a linkdump I made a while ago with examples of mathematicians splitting other mathematicians into two groups, which may be of wider interest in the context of the recent elephant/rider discussion. (Though probably not especially wide interest, so I’m posting this to my personal page.)

The two clusters vary a bit, but there’s some pattern to what goes in each – it tends to be roughly ‘algebra/problem-solving/analysis/logic/step-by-step/precision/explicit’ vs. ‘geometry/theorising/synthesis/intuition/all-at-once/hand-waving/implicit’.

(Edit to add: ‘analysis’ in the first cluster is meant to be analysis as opposed to ‘synthesis’ in the second cluster, i.e. ‘breaking down’ as opposed to ‘building up’. It’s not referring to the mathematical subject of analysis, which is hard to place!)

These seem to have a family resemblance to the S2/S1 division, but there’s a lot lumped under each one that could helpfully be split out, which is where some of the confusion in the comments to the elephant/rider post is probably coming in. (I haven’t read The Elephant in the Brain yet, but from the sound of it that is using something of a different distinction again, which is also adding to the confusion). Sarah Constantin and Owen Shen have both split out some of these distinctions in a more useful way.

I wanted to chuck these into the discussion because: a) it’s a pet topic of mine that I’ll happily shoehorn into anything; b) it shows that a similar split has been present in mathematical folk wisdom for at least a century; c) these are all really good essays by some of the most impressive mathematicians and physicists of the 20th century, and are well worth reading on their own account.

  • The earliest one I know (and one of the best) is Poincaré’s ‘Intuition and Logic in Mathematics’ from 1905, which starts:

    “It is impossible to study the works of the great mathematicians, or even those of the lesser, without noticing and distinguishing two opposite tendencies, or rather two entirely different kinds of minds. The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance.

    The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.”


  • Felix Klein’s ‘Elementary Mathematics from an Advanced Standpoint’ in 1908 has ‘Plan A’ (‘the formal theory of equations’) and ‘Plan B’ (‘a fusion of the perception of number with that of space’). He also separates out ‘ordered formal calculation’ into a Plan C.


  • Gian-Carlo Rota made a division into ‘problem solvers and theorizers’ (in ‘Indiscrete Thoughts’, excerpt here).


  • Timothy Gowers makes a very similar division in his ‘Two Cultures of Mathematics’ (discussion and link to pdf here).


  • Vladimir Arnold’s ‘On Teaching Mathematics’ is an incredibly entertaining rant from a partisan of the geometry/intuition side – it’s over-the-top but was 100% what I needed to read when I first found it.


  • Michael Atiyah makes the distinction in ‘What is Geometry?’:

    Broadly speaking I want to suggest that geometry is that part of mathematics in which visual thought is dominant whereas algebra is that part in which sequential thought is dominant. This dichotomy is perhaps better conveyed by the words “insight” versus “rigour” and both play an essential role in real mathematical problems.

    There’s also his famous quote:

    Algebra is the offer made by the devil to the mathematician. The devil says: ‘I will give you this powerful machine, it will answer any question you like. All you need to do is give me your soul: give up geometry and you will have this marvellous machine.’

  • Grothendieck was seriously weird, and may not fit well to either category, but I love this quote from Récoltes et semailles too much to not include it:

    Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle – while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured), things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.

    In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still from the perspective of thirty or thirty five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things, in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.

  • Freeman Dyson calls his groups ‘Birds and Frogs’ (this one’s more physics-focussed).
  • This may be too much partisanship from me for the geometry/implicit cluster, but I think the Mark Kac ‘magician’ quote is also connected to this:

    There are two kinds of geniuses: the ‘ordinary’ and the ‘magicians.’ An ordinary genius is a fellow whom you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what they’ve done, we feel certain that we, too, could have done it. It is different with the magicians… Feynman is a magician of the highest caliber.

    The algebra/explicit cluster is more ‘public’ in some sense, in that its main product is a chain of step-by-step formal reasoning that can be written down and is fairly communicable between people. (This is probably also the main reason that formal education loves it.) The geometry/implicit cluster relies on lots of pieces of hard-to-transfer intuition, and these tend to stay ‘stuck in people’s heads’ even if they write a legitimising chain of reasoning down, so it can look like ‘magic’ on the outside.

  • Finally, I think something similar is at the heart of William Thurston’s debate with Jaffe and Quinn over the necessity of rigour in mathematics – see Thurston’s ‘On proof and progress in mathematics’.

Edit to add: Seo Sanghyeon contributed the following example by email, from Weinberg’s Dreams of a Final Theory:

Theoretical physicists in their most successful work tend to play one of two roles: they are either sages or magicians… It is possible to teach general relativity today by following pretty much the same line of reasoning that Einstein used when he finally wrote up his work in 1915. Then there are magician-physicists, who do not seem to be reasoning at all but who jump over all intermediate steps to a new insight about nature. The authors of physics textbooks are usually compelled to redo the work of the magicians so they seem like sages; otherwise no reader would understand the physics.

A braindump on Derrida and close reading

I wrote this for a monthly newsletter I’ve been experimenting with. I feel a bit awkward about publishing this as a post, because it’s very meandering and unpolished and plain weird. But I did manage to cover a lot of ground, in a way that would be really difficult and time-consuming to do in a normal post, and I quite like the result in some ways.

Also this is a blog, not some formal venue, and if I start fussing too much about quality I should probably just get over myself. Thanks to David Chapman for encouraging me to post it anyway.

I wasn’t sure I’d stick to the newsletter format, so I didn’t advertise it much, but it turns out I really like doing it. If you’re interested in getting a monthly email with more of this nonsense, please email me at bossdrucket(at)gmail(dot)com and I’ll add you to the list.  Fair warning: it’s normally a pretty disjointed mix of physics and whatever this braindump is.

Rephrasing the famous words on the electron and atom, it can be said that a hypocycloid is as inexhaustible as an ideal in a polynomial ring. 

Vladimir Arnold, On teaching mathematics

A little while ago I wrote this:

… my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)

This leads to a lot of snottiness and narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But also the belief in reliable intersubjective understanding gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.

The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.

I’ve been wanting to write more about this as a blog post for a while but it never comes out right, so this time I’m going to just start writing with no preplanned structure at all, and see what comes out.

I spent a ridiculous amount of time staring at Chapman’s diagram I linked above when I first found it. I think the main thing that made it really sticky was my experience with the top row, the one with the brick wall marked ‘fake pomo blocks path’. My teenage autodidact blundering accidentally got me past the wall to the ‘Stage 4 via humanities education’ forbidden box while barely knowing that postmodernism existed.

I started out by reading books my dad had from his university course in the sixties. This included a very wide-ranging multi-year course on literature, philosophy and general history of ideas called ‘The European Mind’. Even the name is brilliantly pre-pomo! There’s one unified mind, and it’s distinctively European, and you can learn about it by studying the classic Western canon.

I got particularly fixated on the early twentieth century. I was also reading a lot of pop science for the first time and learning about the insanely productive and disorientating revolution in physics, from Planck’s constant in 1900 up to the solidifying of the new quantum theory in the late 20s, with special and general relativity along the way. (Also a little bit about the crisis in foundations of mathematics, but I never got particularly interested in that at the time.)

It was easy to switch between science and humanities stuff from that era, because the tone and writing style was quite similar, in the Anglosphere anyway. The analytic philosophers had all got maths envy and were trying to adopt the language of logic and dig to the foundations. And even the literary critics wrote in a style that was easily accessible to a STEM nerd like me. Eliot, for instance, writes up his experiences in trying to revive verse drama like it’s a retrospective on some lab experiments that went wrong:

It was only when I put my mind to thinking what sort of play I wanted to do next, that I realized that in Murder in the Cathedral I had not solved any general problem; but that from my point of view the play was a dead end. For one thing, the problem of language which that play had presented to me was a special problem…

The style was similar to the scientists, but their main method was somewhat different. The New Critics tended to work via very detailed close reading of individual passages, really picking apart what makes specific examples work. For another Eliot example, here he talks about the vivid specificity of the images in Macbeth, and compares it to the artificial, conventional images of much eighteenth century poetry. The overarching ‘Shakespeare good, Milton bad’ moral might not be worth much, but it’s fantastic for pointing out what Shakespeare does that Milton can’t.

I really like the close reading method. If I study one example in depth, I normally come away with some new ability to read the situation, in a way that grand abstract theorising can’t match. (I do also really like sloppy big-picture grand theorising, but in my case it’s normally motivated by trying to mash examples together.) Examples are inexhaustible, like Arnold’s hypocycloid in the quote at the start. In fact, ‘close reading’ has become my idiosyncratic mental label for any kind of heavily example-driven work, not just the detailed study of written texts. Toy models in maths and physics also count, for example.

Once I’d got my ear in for the New Critical style, it was pretty easy to find more of it in secondhand bookshops and never really read anything that went beyond it, so the ‘wall of fake pomo’ wasn’t a problem. (I knew what postmodernism was anyway! It was that stupid rubbish that Sokal made fun of, where people make word salad out of hermeneutics and deconstruction and logocentrism. I wasn’t going to fall for any of that!)

Now, funnily enough, I seem to be back with the New Critics for different reasons. Something like ‘well, I’m in the right place to follow that ‘genuine pomo critique’ arrow, so let’s give it a go’. This time I got there by reading the text of this fantastic talk on Derrida by Christopher Norris. Norris is a literary critic who wrote about Empson as well – I’ll get to that in a minute. But first I’ll talk about that linked Norris piece, especially the bit on Rousseau and Rameau. (I did warn you that this is going to be a rambling braindump.)

I learned a lot from the Norris talk. The first thing I picked up was that Derrida is not the vague sort of waffler I’d imagined from the Science Wars stereotype. He does have a weird writing style that I find a complete pain to read (I have started wading through Of Grammatology now, and I’m not particularly enjoying the process), but it’s not a vague style. He’s actually a close reader with a similar method to the New Critics, even if the tone is completely different, and he works by going through specific examples in detail. Which is actually my favourite way of learning!

Derrida’s main targets for the close reading treatment in Of Grammatology are Rousseau, Saussure and Lévi-Strauss. A very French list which doesn’t mean a whole lot to me, so I’m having to read backwards too. But I was able to understand the part about Rousseau’s fight with the musician Rameau. Norris explains it here:

Rousseau was himself a musician, a performer and composer, and he wrote a great deal about music history and theory, in particular about the relationship between melody and harmony… One way of looking at Rousseau’s ideas about the melody/harmony dualism is to view them as the working-out of a tiff he was having with Rameau. Thus he says that the French music of his day is much too elaborate, ingenious, complex, ‘civilized’ in the bad (artificial) sense — it’s all clogged up with complicated contrapuntal lines, whereas the Italian music of the time is heartfelt, passionate, authentic, spontaneous, full of intense vocal gestures. It still has a singing line, it’s still intensely melodious, and it’s not yet encumbered with all those elaborate harmonies.

This was really accessible to me because of my weird music listening habits. I listen to a lot of baroque music, and I specifically like the Italian baroque, pretty much for the reasons Rousseau lists above. It’s very different to Bach’s sort of baroque music, which is very harmonically and structurally complex, and it sticks much closer to its roots in dance music. The melodies are straightforwardly, immediately appealing, instead of being subsumed into some big contrapuntal structure. It’s not simple folk music, though – the complexity is instead in the very delicate, constantly changing mood and texture.

(If you want to get an idea of the kind of music I’m thinking of, the Youtube channel Ispirazione Barrocca is my best source for obscure but brilliant Italian composers. For a specific example, at the moment I keep listening to the ciaccona and rondeau here. The ciaccona is the simplest thing possible harmonically, it’s just the same chords over and again. The melody is pretty straightforward too, and there’s no fancy structure. But I love the shadings in mood, and that fantastic change in energy at 7:00 as it transitions into the rondeau.)

So I’m probably the right sort of person to be persuaded by Rousseau’s argument. But actually, as Norris/Derrida tells it, it doesn’t work at all. I’m just going to quote this big glob of Norris’s text, because I can’t possibly explain it any better myself:

What’s more, Rousseau says, this is where writing came in and exerted its deleterious effect, because if you have a complex piece of contrapuntal music, by Rameau let’s say, then you’ve got to write it down. People can’t learn it off by heart; you can quite easily learn a folk tune, or an unaccompanied aria, or perhaps a piece of plainchant, or anything that doesn’t involve harmony because it sinks straight in, it strikes a responsive chord straight away. But as soon as you have harmony then you have this bad supplement that comes along and usurps the proper place of melody, that somehow corrupts or denatures melody, so to speak, from the inside. Now the interesting thing, as Derrida points out, is that Rousseau can’t sustain that line of argument, because as soon as he starts to think harder about the nature of music, as soon as he begins to write his articles about music theory, he recognizes that in fact there is no such thing as melody without harmony. I think this is one of the remarkable things about Derrida’s reading of Rousseau, that it carries conviction as a matter of intuitive rightness as well as through sheer philosophical acuity and close attention to the detail of Rousseau’s text. His arguments seem to be very cerebral, very technical and even counter-intuitive, but in this case they can be checked out against anyone’s – or any responsive listener’s – first-hand experience of music. Thus even if you think of an unaccompanied folk song, or if you just hum a tune or pick it out in single notes on the piano, it will carry harmonic overtones or suggestions. What makes it a tune, what gives it a sense of character, shape, cadence, etc., is precisely this implicit harmonic dimension.

Derrida brings out this contradiction through his characteristic method (again I’ll just quote a glob of Norris’s talk):

Derrida gets to this point through a close reading of Rousseau’s text which shows it to concede – not so much ‘between the lines’ but in numerous details of phrasing and turns of logico-semantic implication – that there is no melody (nothing perceivable or recognizable as such) without the ‘bad supplement’ of harmony. Thus, for instance, Rousseau gets into a real argumentative pickle when he says – lays it down as a matter of self-evident truth – that all music is human music. Bird-song just doesn’t count, he says, since it is merely an expression of animal need – of instinctual need entirely devoid of expressive or passional desire – and is hence not to be considered ‘musical’ in the proper sense of that term. Yet you would think that, given his preference for nature above culture, melody above harmony, authentic (spontaneous) above artificial (‘civilized’) modes of expression, and so forth, Rousseau should be compelled – by the logic of his own argument – to accord bird-song a privileged place vis-à-vis the decadent productions of human musical culture. However Rousseau just lays it down in a stipulative way that bird-song is not music and that only human beings are capable of producing music. And so it turns out, contrary to Rousseau’s express argumentative intent, that the supplement has somehow to be thought of as always already there at the origin, just as harmony is always already implicit in melody, and writing – or the possibility of writing – always already implicit in the nature of spoken language.

Birdsong is an awkward case for Rousseau, because it really is melody without the implicit harmonic dimension. It’s mostly missing the exact, ‘engineered’ side of human music – the precise, repeatable harmonic intervals and lengths of notes and bars. But this is the same structure that makes a tune sound like a tune, rather than a loose cascade of pitches. Even Rousseau didn’t want music that was that structureless and undifferentiated, so he gets himself tied in knots trying to exclude this case that ruins his argument.

This is really just a sideline in the book. Derrida’s main interest is not the tension between harmony and melody, but that between writing and speech. The argument goes through in a similar way, though. Speech is presumed to be the fundamental one of the pair, and writing is derivative – what Saussure called ‘a signifier of a signifier’. But Derrida points out that speech also has a structural element, using repeatable components and arbitrary conventions. The possibility of writing is inherent in speech from the start, in the same way that harmonic structure is inherent in the simplest folk tune:

For Derrida, ‘writing’ should rather be defined as a sort of metonym for all those aspects of language – or of human culture generally – that set it apart from the realm of natural (that is, pre-social, hence pre-human) existence. That is to say, it encompasses not only writing in the usual, restricted (graphematic) sense but also speech in so far as spoken language likewise depends on structures, conventions, codes, systems of relationship and difference ‘without positive terms’, and so forth.

Derrida was in the right time and place to take both sides of the writing/speech opposition seriously. He’d started out in phenomenology, with a deep study of Husserl and his emphasis on the immediacy of raw experience. And then structuralism had been in the air in France at the time, and he’d picked up its insights through Saussure and Lévi-Strauss. He could see that either on its own was not enough:

What Derrida does, essentially, is juxtapose the insights of structuralism and phenomenology, the two great movements of thought that really formed the matrix of Derrida’s work, especially his early work. Phenomenology because it had gone so far – in the writings of Husserl and Merleau-Ponty after him – toward describing that creative or expressive ‘surplus’ in language (and also, for Merleau-Ponty, in the visual arts) that would always elude the most detailed and meticulous efforts of structuralist analysis. Structuralism because, on its own philosophic and methodological terms, it revealed how this claim for the intrinsic priority of expressive parole over pre-constituted langue would always run up against the kind of counter-argument that I have outlined above.

This might be a good point to briefly try and explain my other reason for being interested in this Derrida stuff, beyond ‘trying to understand the New Critics’. This goes back to my normal pet topic. There’s a very similar tension in mathematics between the structural, ‘algebraic’ element, where the individual symbols are arbitrary and only their relations matter, and the ‘geometric’ side where these symbols become grounded in our perceptual experience, and are experienced as being ‘about’ curved surfaces or nodes in a graph or whatever. I’m scare-quoting ‘algebra’ and ‘geometry’ because I’m using them in a weird way – there is of course a structural component to any geometric problem, and a perceptual component to any algebraic one. But algebra tends to be closer to the structuralist style, and geometry to the phenomenological one, so they work quite well as labels for the two sides of the opposition. This is actually pretty similar to Derrida’s weird use of ‘writing’ and ‘speech’.

I really want to understand this parallel story, and how the ‘algebra’/’geometry’ tension played out in the twentieth century. I can figure out who a lot of the main characters are: Bourbaki on the structuralist side, for a start, and Poincaré and Weyl on the phenomenological side, in very different ways. Also Brouwer and the intuitionist weirdos must fit in there somewhere. But I’m struggling to find much in the way of good secondary material. About the only thing I’ve found is Mathematics and the Roots of Postmodern Thought by Vladimir Tasić, which has a lot of good material but is kind of all over the place. It’s very different to the situation with Derrida, where there’s an overabundance of crappy secondary sources, mostly teaching literature students how to make up bullshit ‘deconstructions’ of any text that comes their way. I’m glad the maths students don’t have to suffer in the same way, but it would be nice if there was more to read.

Finally, I’m going to wrench this braindump back to the New Critics where I started. Reading Norris on Derrida made me think of Empson. Empson was particularly keen on the close reading technique, especially early on when he wrote Seven Types of Ambiguity. Here’s a short example where he goes over a few lines of The Waste Land in minute detail. His main interest in this is trying to track down all the fleeting associations that words in a poem pull with them:

As a rule, all that you recognise as in your mind is the one final association of meanings which seems sufficiently rewarding to be the answer—‘now I have understood that’; it is only at intervals that the strangeness of the process can be observed. I remember once clearly seeing a word so as to understand it, and, at the same time, hearing myself imagine that I had read its opposite. In the same way, there is a preliminary stage in reading poetry when the grammar is still being settled, and the words have not all been given their due weight; you have a broad impression of what it is all about, but there are various incidental impressions wandering about in your mind; these may not be part of the final meaning arrived at by the judgment, but tend to be fixed in it as part of its colour.

At times he seems pretty close to Derrida, at least as Norris tells it, in the way he combines structural critique (he later wrote a book called The Structure of Complex Words, which I haven’t read) with an interest in the phenomenology of how a poem is experienced.

I did some research and discovered that Norris has written a lot about Empson as well. In fact he edited a whole book about him. So maybe that isn’t so surprising! I looked to see if he wrote anything specifically about both Empson and Derrida, and it turns out that Norris actually sent Empson a copy of one of Derrida’s essays, along with some other stuff by de Man and Barthes, to see whether he liked it. He got this very funny response (quoted in the introduction of that book):

‘I feel very bad’, Empson wrote,

not to have answered you for so long, and not to have read those horrible Frenchmen you posted to me. I did go through the first one, Jacques Nerrida [sic], and nosed about in several others, but they seem to me so very disgusting, in a simple moral or social way, that I cannot stomach them. Nerrida does express the idea that, just as people were talking grammar before grammarians arose, so there are other unnoticed regularities in human language and probably in other human systems. This is what I meant by the book title *The Structure of Complex Words*, and it was not an out-of-the-way idea, indeed I may have got it from someone else, but of course it is no use unless you try to present an actual grammar, an actual grammar of the means by which a speaker makes his choice while using the language correctly. This I attempted to supply, and I do not notice that the French ever even try … They use enormously fussy language, always pretending to be plumbing the very depths, and never putting your toe into the water. Please tell me I am wrong.

That’ll be a no, then. I don’t blame him about the writing style. But I have convinced myself that I want to read Derrida anyway.

Book Review: The Reflective Practitioner

In my last two posts I’ve been talking about my experience of thinking through some website design problems. At the same time that I was muddling along with these, I happened to be reading Donald Schön’s The Reflective Practitioner, which is about how people solve problems in professions like architecture, town planning, teaching and management. (I got the recommendation from David Chapman’s ‘Further reading’ list for his Meaningness site.)

This turned out to be surprisingly relevant. Schön is considering ‘real’ professional work, rather than my sort of amateur blundering around, but the domain of web design shares a lot of characteristics with the professions he studied. The problems in these fields are context-dependent and open-ended, resisting any sort of precise theory that applies to all cases. On the other hand, people do manage to solve problems and develop expertise anyway.

Schön argues that this expertise mostly comes through building up familiarity with many individual examples, rather than through application of an overarching theory. He builds up his own argument in the same way, letting his ideas shine through a series of case studies of successful practitioners.

In the one I find most compelling, an established architect, Quist, reviews the work of a student, Petra, who is in the early stages of figuring out a design for an elementary school site. I’m going to follow Schön’s examples-driven approach here and describe this in some detail.

Petra has already made some preliminary sketches and come up with some discoveries of her own. Her first idea for the classrooms was the diagonal line of six rooms in the top right of the picture below. Playing around, she found that ‘they were too small in scale to do much with’, so she ‘changed them to this much more significant layout’, the L shapes in the bottom left.


I’m not sure I can fully explain why the L shapes are ‘more significant’, but I do agree with her assessment. There’s more of a feeling of spaciousness than there was with the six cramped little rectangles, and the pattern is more interesting geometrically and suggests more possibilities for interacting with the geography of the site.

At this point, we already get to see a theme that Schön goes back to repeatedly, the idea of a ‘reflective conversation with the materials’. The designer finds that:

His materials are continually talking back to him, causing him to apprehend unanticipated problems and potentials.

Petra has found a simple example of this. She switched to the L shapes on more-or-less aesthetic grounds, but then she discovers that the new plan ‘relates one to two, three to four, and five to six grades, which is more what I wanted to do educationally anyway.’ The materials have talked back and given her more than she originally put in, which is a sign that she is on to something promising.

After this early success, Petra runs into difficulties. She has learnt the rule that buildings should fit to the contours of the site. Unfortunately the topography of this particular site is really incoherent and nothing she tries will fit into the slope.

Quist advises her to break this rule:

You should begin with a discipline, even if it is arbitrary, because the site is so screwy – you can always break it open later.

Together they work through the implications of the following design:


This kicks off a new round of conversation with the materials.

Quist now proceeds to play out the imposition of the two-dimensional geometry of the L-shaped classrooms upon the “screwy” three-dimensional contours of the slope… The roofs of the classroom will rise five feet above the ground at the next level up, and since five feet is “maximum height for a kid”, kids will be able to be in “nooks”…

A drawing experiment has been conducted and its outcome partially confirms Quist’s way of setting the L-shaped classrooms upon the incoherent slope. Classrooms now flow down the slope in three stages, creating protected spaces “maximum height for a kid” at each level.

In an echo of Petra’s initial experiment, Quist has got back more than he put in. He hasn’t solved the problem in the clean, definitive way you’d solve a mathematical optimisation problem. Many other designs would probably have worked just as well. But the design has ‘talked back’, and his previous experience of working through problems like this has given him the skills to understand what it is saying.

I find the ‘reflective conversation’ idea quite thought-provoking and appealing. It seems to fit well with my limited experience: prototyping my design in a visual environment was an immediate improvement over just writing code, because it enabled this sort of conversation. Instead of planning everything out in advance, I could mess around with the basic elements of the design and ‘apprehend unanticipated problems and potentials’ as they came up.

I don’t find the other examples in the book quite as convincing as this one. Quist is unusually articulate, so the transcripts tell you a lot. Also, architectural plans can be reproduced easily as figures in a book, so you can directly see his solution for yourself, rather than having to take someone’s word for it. With the other practitioners it’s often hard to get a sense of how good their solutions are. (I guess Schön was also somewhat limited by who he could persuade to be involved.)

Alongside the case studies, there is some discussion of the implications for how these professions are normally taught. Some of this is pretty dry, but there are a few interesting topics. The professions he considers often have something like ‘engineering envy’ or ‘medicine envy’: doctors and engineers can borrow from the hard sciences and get definitive answers to some of their questions, so they don’t always have to do this more nebulous ‘reflective conversation’ thing.

It’s tempting for experts in the ‘softer’ professions to try and borrow some of this prestige, leading to the introduction of a lot of theory into the curriculum, even if this theory turns out to be pretty bullshit-heavy and less useful than the kind of detailed reflection on individual cases that Quist is doing. Schön advocates for the reintroduction of practice, pointing out that this can never be fully separated from theory anyway:

If we separate thinking from doing, seeing thought only as a preparation for action and action only as an implementation of thought, then it is easy to believe that when we step into the separate domain of thought we will become lost in an infinite regress of thinking about thinking. But in actual reflection-in-action, as we have seen, doing and thinking are complementary. Doing extends thinking in the tests, moves, and probes of experimental action, and reflection feeds on doing and its results. Each feeds the other, and each sets boundaries for the other. It is the surprising result of action that triggers reflection, and it is the production of a satisfactory move that brings reflection temporarily to a close… Continuity of enquiry entails a continual interweaving of thinking and doing.

For some reason this book has become really popular with education departments, even though teaching only makes up a tiny part of the book. Googling ‘reflective practitioner’ brings up lots of education material, most of which looks cargo-culty and uninspired. Schön’s ideas seem to have mostly been routinised into exactly the kind of useless theory he was trying to go beyond, and I haven’t yet found any good follow-ups in the spirit of the original. It’s a shame, as there’s a lot more to explore here.

Precision and slop together

[This was originally part of the previous post, but it’s a separate enough idea that I spun it out into its own short post. As with the previous one, this is in no way my area of expertise and what I have to say here may be very obvious to others.]

‘Inkscape GUI plus raw XML editor’ turns out to be a really interesting combination, and I wish I had access to this general style of working more often. Normally I find I have a choice between the following two annoying options:

  • write code, no inbuilt way to visualise what it does
  • use a visual tool that autogenerates code that you have no control over

Visual tools give you lots of opportunity for solving problems ‘by doing’, moving things around the screen and playing with their properties. I find this style of problem solving intrinsically enjoyable, and finding a clever solution ‘by doing’ is extremely satisfying. (I gave a couple of examples in the previous post.)

Visual tools also tend to ‘offer up more of the world to you’, in some sense. They make it easy to enter a state of just messing around with whatever affordances the program gives you, without having to think about whatever symbolic representation is currently instantiating it. So you get a more open style of thinking without being tied to the representation.

A screen's worth of 'just mucking around'

On the other hand, of course, visual tools that don’t do what you want are the most frustrating thing. You have no idea what the underlying model is, so you just have to do things and guess, and the tool is normally too stupid to do anything sensible with your guesses. Moving an image in Microsoft Word and watching it trigger a cascade of other idiotic changes would be a particularly irritating example.

You also miss out on the good parts of symbolic tools. Precision, for example, is much easier symbolically. The UI normally has some concessions to precision, letting you snap a rotation to exactly 45 or 90 degrees. But sometimes you just want exactly 123 degrees, and not 122 or 124, and the UI is no help with that.

Most importantly, symbolic tools are much better for anything you’re going to automate or compose with other tools. (I don’t really understand how much this is inherent in the problem, and how much it’s just a defect of our current tools.) I knew that I was eventually going to move to coding up something that would dynamically generate random shapes, so I needed to understand the handles that the program would be manipulating. Pure visual mucking around was not an option.

Unusually, though, I didn’t have to choose between visual immersion and symbolic manipulation, at least at the prototyping stage. This was a rare example of a domain where I could use a mix of both. On the symbolic side, I understood the underlying model (paths and transformations for scalable vector graphics) reasonably well, and wasn’t trying to do anything outlandish that it would struggle to cope with. At the same time, it was also an inherently visual problem involving a load of shapes that could be moved and edited and transformed with a GUI tool. So I could switch between open-ended playing and structural manipulation whenever I wanted. I wish I got to do this more often!

Practical Design for the Idiot Physicist

I’ve recently started getting a personal website together for maths and physics notes, somewhere that I can dump work in progress and half-baked scraps of intuition for various concepts.

For no good reason, I decided that this should also incorporate a completely irrelevant programming idea I’d been playing around with, where I dynamically generate kaleidoscope patterns. Like this:

Kaleidoscope pattern

(There’s a click-to-change version over on the website itself.)

I’d been working on this on and off for some time. It’s obviously not the most important problem I could be thinking about, but there was something about the idea that appealed to me. I originally thought that this would be a programming project, and that the problems I would need to solve were mostly programming ones. The open questions I had before I started were things like:

  • how would I represent the data I needed to describe the small shapes?
  • how would I generate them uniformly within the big triangles?
  • how would I make sure they didn’t overlap the boundaries of the big triangles?

I would say that in the end these sorts of questions took up about 10% of my time on the project. Generally they were easy to solve, and generally it was enough to solve them in the simplest, dumbest possible way (e.g. by generating too many shapes, and then discarding the ones that did overlap the boundaries). That side of the project is not very interesting to describe!
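The ‘generate too many and discard’ trick is just rejection sampling. Here’s a minimal sketch of the idea – the function names and details are my own illustration, not the code from the actual project:

```javascript
// Rejection sampling: pick random points in the triangle's bounding
// box, and keep only the ones that land inside the triangle.

// Signed-area test used by the point-in-triangle check below.
function sign(p, a, b) {
  return (p.x - b.x) * (a.y - b.y) - (a.x - b.x) * (p.y - b.y);
}

// A point is inside the triangle abc when all three signed areas
// have the same sign.
function insideTriangle(p, a, b, c) {
  const d1 = sign(p, a, b);
  const d2 = sign(p, b, c);
  const d3 = sign(p, c, a);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos);
}

function samplePointsInTriangle(a, b, c, count) {
  const minX = Math.min(a.x, b.x, c.x), maxX = Math.max(a.x, b.x, c.x);
  const minY = Math.min(a.y, b.y, c.y), maxY = Math.max(a.y, b.y, c.y);
  const points = [];
  while (points.length < count) {
    const p = {
      x: minX + Math.random() * (maxX - minX),
      y: minY + Math.random() * (maxY - minY),
    };
    if (insideTriangle(p, a, b, c)) points.push(p); // discard the misses
  }
  return points;
}
```

Rejection sampling wastes half the candidates (a triangle fills exactly half its bounding box), but for a few dozen shapes that’s far too cheap to matter, which is what makes the dumb approach the right one here.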

Maybe another 30% was the inevitable annoying crap that comes along with web programming: fixing bugs and dodgy layouts and dealing with weird browser inconsistencies.

What surprised me, though, was the amount of time I spent on visual design questions. Compared to the programming questions, these were actually quite deep and interesting. I’d accidentally given myself a design project.

In this post I’m going to describe some of my attempts to solve the various problems that cropped up. I don’t know anything much about design, so the ‘discoveries’ I list below are pretty obvious in retrospect. I’m writing about them anyway because they were new to me, and I found them interesting.

There’s also a strong connection to some wider topics I’m exploring. I’m interested in understanding the kind of techniques people use to make progress on open-ended, vaguely defined projects, and how they tell whether they are succeeding or not. I’ve been reading Donald Schön’s The Reflective Practitioner, which is about exactly this question, and he uses design (in this case architectural design) as a major case study. I’ll put up a review of the book in a week or so, which should be a good complement to this post.

I’m going to go for a fairly lazy ‘list of N things’ style format for this one, with the headings summarising my various ‘discoveries’. I think this fits the topic quite well: I’m not trying to make any deep theoretical point here, and am more interested in just pointing at a bunch of illustrative examples.

Get the models out of your head

I actually started this same project a couple of years ago, but didn’t get very far. I’d immediately tried coding up the problem, spent a while looking at some ugly triangles on a screen, got frustrated quickly with a minor bug and gave up.

I’m not sure what made me return to the problem, but this time I had the much more sensible idea of prototyping the design in Inkscape before trying to code anything up. This turned out to make a huge difference. First I tried drawing what I thought I wanted in Inkscape, starting with the initial scene that I’d reflect and translate to get the full image:

Bad first attempt at initial scene

It looked really crap. No wonder I’d hated staring at those ugly triangles. I tried to work out what was missing compared to a real kaleidoscope, and progressively tweaked it to get the following:

Better attempt at initial scene

Suddenly it looked enormously better. In retrospect, making the shapes translucent was absolutely vital: that’s how you get all the interesting colour mixes. It was also necessary to make the shapes larger, or they just looked like they’d been scattered around the large triangle in a sad sort of way, rather than filling up the scene.

(My other change was to concentrate the shapes at the bottom of the triangle to imitate the effect of gravity. That turned out to be pretty unimportant: once I got round to coding it up I started by distributing them uniformly, and that looked fine to me, so I never bothered to do anything cleverer.)

Fast visual feedback makes all the difference

Once I knew what this central part looked like, I was ready to generate the full image. I could have done this by getting a pen and paper out and figuring out all the translations, rotations and reflections I’d need. But because I was already prototyping in Inkscape, I could ‘just play’ instead, and work it out as I went.

I quickly figured out how to generate one hexagon’s worth of pattern, by combining the following two sets of three rotated triangles:

Rotated and reflected triangles

Once I had that I was pretty much done. The rest of the pattern just consisted of translating this one hexagon a bunch of times:

Tiling hexagons

I generated that picture using the do-it-yourself method of just copying the hexagons and moving them round the page. I was still just playing around, and didn’t want to be dragged out of that mode of thinking to explicitly calculate anything.
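If I had wanted to calculate it explicitly, the whole construction boils down to a small set of SVG `transform` strings: three rotated copies of the triangle plus three rotated copies of its mirror image make a hexagon, and translated copies of that hexagon tile the plane. A rough sketch of what those transforms look like – the specific angles are right for a triangle with its apex at the origin, but the tiling offsets are illustrative and depend on the hexagon’s orientation:

```javascript
// One triangle becomes a hexagon: three copies rotated by
// 0/120/240 degrees, interleaved with three mirrored copies
// at the same rotations.
function hexagonTransforms() {
  const transforms = [];
  for (const angle of [0, 120, 240]) {
    transforms.push(`rotate(${angle})`);              // plain copy
    transforms.push(`rotate(${angle}) scale(-1, 1)`); // mirrored copy
  }
  return transforms;
}

// The rest of the pattern is just the same hexagon group translated,
// with every other row offset by half a hexagon width.
function tileTransforms(cols, rows, w, h) {
  const transforms = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      const dx = col * w + (row % 2) * (w / 2);
      transforms.push(`translate(${dx}, ${row * h})`);
    }
  }
  return transforms;
}
```

In the prototype I did all of this by hand in Inkscape, which was the point – but it’s reassuring that the explicit version is this small.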

I was really happy with the result. Having even this crude version of what I wanted in front of me immediately made me a lot more excited about the project. I could see that it actually looked pretty good, even with the low level of effort I’d put in so far. This sort of motivating visual feedback is what had been missing from my previous attempt where I’d jumped straight into programming.

You can often solve the same problem ‘by thinking’ or ‘by doing’

I gave a simple example of this in my last post. You can solve the problem of checking that the large square below has a side length of 16 by counting the small squares. Instead of putting in this vast level of mental effort, I instead added the four-by-four squares around it as a guide:

16 by 16 square, with 4 by 4 squares around it as a guide

Four is within my subitizing range, and sixteen is well outside it, so this converts a problem where I have to push through a small amount of cognitive strain into a problem where I can just think ‘four four four four, yep that’s correct’ and be done with it. (‘Four fours are sixteen’ also seems to ‘just pop into my head’ as a fairly primitive action without any strain.)

Now admittedly this is not a difficult problem, and solving it ‘by thinking’ would probably have been the right choice – it would have been quicker than drawing those extra squares and then deleting them again. I just have a chronic need to solve ‘by doing’ wherever I can instead.

Sometimes this actually works out quite well. In an earlier iteration of the design I needed to create a translucent overlay to fit over a geometric shape with a fairly complicated border (part of it is shown as the pinkish red shape in the image below). I could have done this by calculating the coordinates of the edges – this would have been completely doable but rather fiddly and error prone.

Drawing a line around the rough border of the shape

Instead, I ‘made Inkscape think about the problem’ so that I didn’t have to. I opened up the file containing the shape I wanted to make an overlay of, and drew a rough bodge job of the outline I needed by hand, with approximate coordinates (this is the black line in the image). Then I opened up Inkscape’s XML editor containing the raw markup, and put in the right numbers. This was now very easy, because I knew from previous experience playing around that the correct numbers should all be multiples of five. So if the number in the editor for my bodged version was 128.8763, say, I knew that the correct number would actually be 130. The need to think had been eliminated.

For me, at least, this was definitely quicker than calculating. (Of course, that relied on me knowing Inkscape well enough to ‘just know’ how to carry out the task without breaking immersion – if I didn’t have that and had to look up all the details of how to draw the outline and open the XML editor, it would have been quicker to calculate the numbers.)
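The ‘put in the right numbers’ step really was trivial: each bodged coordinate just needed rounding to the nearest multiple of five. As a one-liner (my own illustration, not anything from the project):

```javascript
// Snap a hand-drawn coordinate to the nearest multiple of five,
// e.g. the bodged 128.8763 becomes the intended 130.
function snapToFive(x) {
  return Math.round(x / 5) * 5;
}
```

That’s the whole correction, which is why it could be done directly in the XML editor without breaking immersion.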

Go back to the phenomena

Once I got the rough idea working, I had to start thinking about how I was actually going to use it on the page. For a while I got stuck on a bunch of ideas that were all kind of unsatisfying in the same way, but I couldn’t figure out how to do anything better. Everything I tried involved a geometric shape that was kind of marooned uselessly on a coloured background, like the example below:

Bad initial design for page

All I seemed to produce was endless iterations on this same bad idea.

What finally broke me out of it was doing an image search for photos of actual kaleidoscopes. One of the things I noticed was that the outer rings of the pattern are dimmer, as the number of reflections increases:

Actual kaleidoscopes


That gave me the idea of having a bright central hexagon, and then altering the opacity further out, and filling the whole screen with hexagons. This was an immediate step-change sort of improvement. Every design I’d tried before looked rubbish, and every design I tried afterwards looked at least reasonably good:

Improved design for page

Which leads to my final point…

There are much better ideas available – if you can get to them

I find the kind of situation above really interesting, where I get stuck in a local pocket of ‘crap design space’ and can’t see a way out. When I do find a way out of it, it’s immediately noticeable in a very direct way: I straightforwardly perceive it as ‘just being better’ without necessarily being able to articulate a good argument for why.

I still don’t have a good explicit argument for why the second design is ‘just better’, but looking at actual kaleidoscopes definitely helped. This was pretty similar to what I found back when I was playing with the individual shapes – adding the translucency of actual kaleidoscope pieces made all the difference.

I don’t have any great insight in how to escape ‘crap design space’ – the problem is pretty much equivalent to the problem of ‘how to have good ideas’, which I definitely don’t have a general solution to! But maybe going back to the original inspiration is one good strategy.

20 Fundamentals

I was inspired by John Nerst’s recent post to make a list of my own fundamental background assumptions. What I ended up producing was a bit of an odd mixed bag of disparate stuff. Some are something like factual beliefs; others are more like underlying emotional attitudes and dispositions to act in various ways.

I’m not trying to ‘hit bedrock’ in any sense, I realise that’s not a sensible goal. I’m just trying to fish out a few things that are fundamental enough to cause obvious differences in background with other people. John Nerst put it well on Twitter:

It’s not true that beliefs are derived from fundamental axioms, but nor is it true that they’re a bean bag where nothing is downstream from everything else.

I’ve mainly gone for assumptions where I tend to differ with the people I hang around with online and in person, which skews heavily towards the physics/maths/programming crowd. This means there’s a pretty strong ‘narcissism of small differences’ effect going on here, and if I actually had to spend a lot of time with normal people I’d probably run screaming back to STEM nerd land pretty fast and stop caring about these minor nitpicks.

Also I only came up with twenty, not thirty, because I am lazy.

  1. I’m really resistant to having to ‘actually think about things’, in the sense of applying any sort of mental effort that feels temporarily unpleasant. The more I introspect as I go about problem solving, the more I notice this. For example, I was mucking around in Inkscape recently and wanted to check that a square was 16 units long, and I caught myself producing the following image:


    Apparently counting to 16 was an unacceptable level of cognitive strain, so to avoid it I made the two 4 by 4 squares (small enough to immediately see their size) and then arranged them in a pattern that made the length of the big square obvious. This was slower but didn’t feel like work at any point. No thinking required!

  2. This must have a whole bunch of downstream effects, but an obvious one is a weakness for ‘intuitive’, flash-of-insight-based demonstrations, mixed with a corresponding laziness about actually doing the work to get them. (Slowly improving this.)

  3. I picked up some Bad Ideas From Dead Germans at an impressionable age (mostly from Kant). I think this was mostly a good thing, as it saved me from some Bad Ideas From Dead Positivists that physics people often succumb to.

  4. I didn’t read much phenomenology as such, but there’s some mood in the spirit of this Whitehead quote that always came naturally to me:

    For natural philosophy everything perceived is in nature. We may not pick and choose. For us the red glow of the sunset should be as much part of nature as are the molecules and electric waves by which men of science would explain the phenomenon.

    By this I mean some kind of vague understanding that we need to think about perceptual questions as well as ‘physics stuff’. Lots of hours as an undergrad on Wikipedia spent reading about human colour perception and lifeworlds and mantis shrimp eyes and so on.

  5. One weird place where this came out: in my first year of university maths I had those intro analysis classes where you prove a lot of boring facts about open sets and closed sets. I just got frustrated, because it seemed to be taught in the same ‘here are some facts about the world’ style that, say, classical mechanics was taught in, but I never managed to convince myself that the difference related to something ‘out in the world’ rather than some deficiency of our cognitive apparatus. ‘I’m sure this would make a good course in the psychology department, but why do I have to learn it?’

    This isn’t just Bad Ideas From Dead Germans, because I had it before I read Kant.

  6. Same thing for the interminable arguments in physics about whether reality is ‘really’ continuous or discrete at a fundamental level. I still don’t see the value in putting that distinction out in the physical world – surely that’s some sort of weird cognitive bug, right?

  7. I think after hashing this out for a while people have settled on ‘decoupling’ vs ‘contextualising’ as the two labels. Anyway it’s probably apparent that I have more time for the contextualising side than a lot of STEM people.

  8. Outside of dead Germans, my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)

    This leads to a lot of snottiness and a narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But the belief in reliable intersubjective understanding also gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.

    The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.

  9. I don’t understand what the appeal of competitive games is supposed to be. Like basically all of them – sports, video games, board games, whatever. Not sure exactly what effects this has on the rest of my thinking, but this seems to be a pretty fundamental normal-human thing that I’m missing, so it must have plenty.

  10. I always get interested in specific examples first, and then work outwards to theory.

  11. My most characteristic type of confusion is not understanding how the thing I’m supposed to be learning about ‘grounds out’ in any sort of experience. ‘That’s a nice chain of symbols you’ve written out there. What does it relate to in the world again?’

  12. I have never in my life expected moral philosophy to have some formal foundation and after a lot of trying I still don’t understand why this is appealing to other people. Humans are an evolved mess and I don’t see why you’d expect a clean abstract framework to ever drop out from that.

  13. Philosophy of mathematics is another subject where I mostly just think ‘um, you what?’ when I try to read it. In fact it has exactly the same subjective flavour to me as moral philosophy. Platonism feels bad the same way virtue ethics feels bad. Formalism feels bad the same way deontology feels bad. Logicism feels bad the same way consequentialism feels bad. (Is this just me?)

  14. I’ve never made any sense out of the idea of an objective flow of time and have thought in terms of a ‘block universe’ picture for as long as I’ve bothered to think about it.

  15. If I don’t much like any of the options available for a given open philosophical or scientific question, I tend to just mentally tag it with ‘none of the above, can I have something better please’. I don’t have the consistency obsession thing where you decide to bite one unappealing bullet or another from the existing options, so that at least you have an opinion.

  16. This probably comes out of my deeper conviction that I’m missing a whole lot of important and fundamental ideas on the level of calculus and evolution, simply on account of nobody having thought of them yet. My default orientation seems to be ‘we don’t know anything about anything’ rather than ‘we’re mostly there but missing a few of the pieces’. This produces a kind of cheerful crackpot optimism, as there is so much to learn.

  17. This list is noticeably lacking in any real opinions on politics and ethics and society and other people stuff. I just don’t have many opinions and don’t like thinking about people stuff very much. That probably doesn’t say anything good about me, but there we are.

  18. I’m also really weak on economics and finance. I especially don’t know how to do that economist/game theoretic thing where you think in terms of what incentives people have. (Maybe this is one place where ‘I don’t understand competitive games’ comes in.)

  19. I’m OK with vagueness. I’m happy to make a vague sloppy statement that should at least cover the target, and maybe try and sharpen it later. I prefer this to the ‘strong opinions, weakly held’ alternative where you chuck a load of precise-but-wrong statements at the target and keep missing. A lot of people will only play this second game, and dismiss the vague-sloppy-statement one as ‘just being bad at thinking’, and I get frustrated.

  20. Not happy about this one, but over time this frustration led me to seriously go off styles of writing that put a strong emphasis on rigour and precision, especially the distinctive dialects you find in pure maths and analytic philosophy. I remember when I was 18 or so and encountered both of these for the first time I was fascinated, because I’d never seen anyone write so clearly before. Later on I got sick of the way that this style tips so easily into pedantry over contextless trivialities (from my perspective anyway). It actually has a lot of good points, though, and it would be nice to be able to appreciate it again.