# 20 Fundamentals

I was inspired by John Nerst’s recent post to make a list of my own fundamental background assumptions. What I ended up producing was a bit of an odd mixed bag of disparate stuff. Some are something like factual beliefs; others are more like underlying emotional attitudes and dispositions to act in various ways.

I’m not trying to ‘hit bedrock’ in any sense, I realise that’s not a sensible goal. I’m just trying to fish out a few things that are fundamental enough to cause obvious differences in background with other people. John Nerst put it well on Twitter:

It’s not true that beliefs are derived from fundamental axioms, but nor is it true that they’re a bean bag where nothing is downstream from everything else.

I’ve mainly gone for assumptions where I tend to differ with the people I hang around with online and in person, which skews heavily towards the physics/maths/programming crowd. This means there’s a pretty strong ‘narcissism of small differences’ effect going on here, and if I actually had to spend a lot of time with normal people I’d probably run screaming back to STEM nerd land pretty fast and stop caring about these minor nitpicks.

Also I only came up with twenty, not thirty, because I am lazy.

1. I’m really resistant to having to ‘actually think about things’, in the sense of applying any sort of mental effort that feels temporarily unpleasant. The more I introspect as I go about problem solving, the more I notice this. For example, I was mucking around in Inkscape recently and wanted to check that a square was 16 units long, and I caught myself producing the following image:

Apparently counting to 16 was an unacceptable level of cognitive strain, so to avoid it I made the two 4 by 4 squares (small enough to immediately see their size) and then arranged them in a pattern that made the length of the big square obvious. This was slower but didn’t feel like work at any point. No thinking required!

2. This must have a whole bunch of downstream effects, but an obvious one is a weakness for ‘intuitive’, flash-of-insight-based demonstrations, mixed with a corresponding laziness about actually doing the work to get them. (Slowly improving this.)

3. I picked up some Bad Ideas From Dead Germans at an impressionable age (mostly from Kant). I think this was mostly a good thing, as it saved me from some Bad Ideas From Dead Positivists that physics people often succumb to.

4. I didn’t read much phenomenology as such, but there’s some mood in the spirit of this Whitehead quote that always came naturally to me:

For natural philosophy everything perceived is in nature. We may not pick and choose. For us the red glow of the sunset should be as much part of nature as are the molecules and electric waves by which men of science would explain the phenomenon.

By this I mean some kind of vague understanding that we need to think about perceptual questions as well as ‘physics stuff’. Lots of hours spent as an undergrad on Wikipedia reading about human colour perception and lifeworlds and mantis shrimp eyes and so on.

5. One weird place where this came out: in my first year of university maths I had those intro analysis classes where you prove a lot of boring facts about open sets and closed sets. I just got frustrated, because it seemed to be taught in the same ‘here are some facts about the world’ style that, say, classical mechanics was taught in, but I never managed to convince myself that the difference related to something ‘out in the world’ rather than some deficiency of our cognitive apparatus. ‘I’m sure this would make a good course in the psychology department, but why do I have to learn it?’

6. Same thing for the interminable arguments in physics about whether reality is ‘really’ continuous or discrete at a fundamental level. I still don’t see the value in putting that distinction out in the physical world – surely that’s some sort of weird cognitive bug, right?

7. I think after hashing this out for a while people have settled on ‘decoupling’ vs ‘contextualising’ as the two labels. Anyway it’s probably apparent that I have more time for the contextualising side than a lot of STEM people.

8. Outside of dead Germans, my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)

This leads to a lot of snottiness and narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But also the belief in reliable intersubjective understanding gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.

The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.

9. I don’t understand what the appeal of competitive games is supposed to be. Like basically all of them – sports, video games, board games, whatever. Not sure exactly what effects this has on the rest of my thinking, but this seems to be a pretty fundamental normal-human thing that I’m missing, so it must have plenty.

10. I always get interested in specific examples first, and then work outwards to theory.

11. My most characteristic type of confusion is not understanding how the thing I’m supposed to be learning about ‘grounds out’ in any sort of experience. ‘That’s a nice chain of symbols you’ve written out there. What does it relate to in the world again?’

12. I have never in my life expected moral philosophy to have some formal foundation and after a lot of trying I still don’t understand why this is appealing to other people. Humans are an evolved mess and I don’t see why you’d expect a clean abstract framework to ever drop out from that.

13. Philosophy of mathematics is another subject where I mostly just think ‘um, you what?’ when I try to read it. In fact it has exactly the same subjective flavour to me as moral philosophy. Platonism feels bad the same way virtue ethics feels bad. Formalism feels bad the same way deontology feels bad. Logicism feels bad the same way consequentialism feels bad. (Is this just me?)

14. I’ve never made any sense out of the idea of an objective flow of time and have thought in terms of a ‘block universe’ picture for as long as I’ve bothered to think about it.

15. If I don’t much like any of the options available for a given open philosophical or scientific question, I tend to just mentally tag it with ‘none of the above, can I have something better please’. I don’t have the consistency obsession thing where you decide to bite one unappealing bullet or another from the existing options, so that at least you have an opinion.

16. This probably comes out of my deeper conviction that I’m missing a whole lot of important and fundamental ideas on the level of calculus and evolution, simply on account of nobody having thought of them yet. My default orientation seems to be ‘we don’t know anything about anything’ rather than ‘we’re mostly there but missing a few of the pieces’. This produces a kind of cheerful crackpot optimism, as there is so much to learn.

17. This list is noticeably lacking in any real opinions on politics and ethics and society and other people stuff. I just don’t have many opinions and don’t like thinking about people stuff very much. That probably doesn’t say anything good about me, but there we are.

18. I’m also really weak on economics and finance. I especially don’t know how to do that economist/game theoretic thing where you think in terms of what incentives people have. (Maybe this is one place where ‘I don’t understand competitive games’ comes in.)

19. I’m OK with vagueness. I’m happy to make a vague sloppy statement that should at least cover the target, and maybe try and sharpen it later. I prefer this to the ‘strong opinions, weakly held’ alternative where you chuck a load of precise-but-wrong statements at the target and keep missing. A lot of people will only play this second game, and dismiss the vague-sloppy-statement one as ‘just being bad at thinking’, and I get frustrated.

20. Not happy about this one, but over time this frustration led me to seriously go off styles of writing that put a strong emphasis on rigour and precision, especially the distinctive dialects you find in pure maths and analytic philosophy. I remember when I was 18 or so and encountered both of these for the first time I was fascinated, because I’d never seen anyone write so clearly before. Later on I got sick of the way that this style tips so easily into pedantry over contextless trivialities (from my perspective anyway). It actually has a lot of good points, though, and it would be nice to be able to appreciate it again.

# The cognitive decoupling elite

[Taking something speculative, running with it, piling on some more speculative stuff]

In an interesting post summarising her exploration of the literature on rational thinking, Sarah Constantin introduces the idea of a ‘cognitive decoupling elite’:

Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence. Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.

… Speculatively, we might imagine that there is a “cognitive decoupling elite” of smart people who are good at probabilistic reasoning and score high on the cognitive reflection test and the IQ-correlated cognitive bias tests.

It’s certainly very plausible to me that something like this exists as a distinct personality cluster. It seems to be one of the features of my own favourite classification pattern, for example, as a component of the ‘algebra/systematising/step-by-step/explicit’ side (not the whole thing, though). For this post I’m just going to take it as given for now that ‘cognitive decoupling’ is a real thing that people can be more or less good at, build on that assumption and see what I get.

It’s been a good few decades for cognitive decoupling, from an employment point of view at least. Maybe a good couple of centuries, taking the long view. But in particular the rise of automation by software has created an enormous wealth of opportunities for people who can abstract out the formal symbolic exoskeleton of a process to the point where they can make a computer do it. There’s also plenty of work in the interstices between systems, defining interfaces and making sure data is clean enough to process, the kind of jobs Venkatesh Rao memorably described as ‘intestinal flora in the body of technology’.

I personally have a complicated, conflicted relationship with cognitive decoupling. Well, to be honest, sometimes a downright petty and resentful relationship. I’m not a true member of the elite myself, despite having all the right surface qualifications: undergrad maths degree, PhD in physics, working as a programmer. Maybe cognitive decoupling precariat, at a push. Despite making my living and the majority of my friends in cognitive-decoupling-heavy domains, I mostly find step-by-step, decontextualised reasoning difficult and unpleasant at a fundamental, maybe even perceptual level.

The clearest way of explaining this, for those who don’t already have a gut understanding of what I mean, might be to describe something like ‘the opposite of cognitive decoupling’ (the cognitive strong coupling regime?). I had this vague memory that Sylvia Plath’s character Esther in The Bell Jar voiced something in the area of what I wanted, in a description of a hated physics class that had stuck in my mind as somehow connected to my own experience. I reread the passage and was surprised to find that it wasn’t just vaguely what I wanted: it was exactly what I wanted, a precise and detailed account of what just feels wrong about cognitive decoupling:

Botany was fine, because I loved cutting up leaves and putting them under the microscope and drawing diagrams of bread mould and the odd, heart-shaped leaf in the sex cycle of the fern, it seemed so real to me.

The day I went in to physics class it was death.

A short dark man with a high, lisping voice, named Mr Manzi, stood in front of the class in a tight blue suit holding a little wooden ball. He put the ball on a steep grooved slide and let it run down to the bottom. Then he started talking about let a equal acceleration and let t equal time and suddenly he was scribbling letters and numbers and equals signs all over the blackboard and my mind went dead.

… I may have made a straight A in physics, but I was panic-struck. Physics made me sick the whole time I learned it. What I couldn’t stand was this shrinking everything into letters and numbers. Instead of leaf shapes and enlarged diagrams of the hole the leaves breathe through and fascinating words like carotene and xanthophyll on the blackboard, there were these hideous, cramped, scorpion-lettered formulas in Mr Manzi’s special red chalk.

I knew chemistry would be worse, because I’d seen a big chart of the ninety-odd elements hung up in the chemistry lab, and all the perfectly good words like gold and silver and cobalt and aluminium were shortened to ugly abbreviations with different decimal numbers after them. If I had to strain my brain with any more of that stuff I would go mad. I would fail outright. It was only by a horrible effort of will that I had dragged myself through the first half of the year.

This is a much, much stronger reaction than the one I have, but I absolutely recognise this emotional state. The botany classes ground out in vivid, concrete experience: ferns, leaf shapes, bread mould. There’s an associated technical vocabulary – carotene, xanthophyll – but even these words are embedded in a rich web of sound associations and tangible meanings.

In the physics and chemistry classes, by contrast, the symbols are seemingly arbitrary, chosen on pure pragmatic grounds and interchangeable for any other random symbol. (I say ‘seemingly’ arbitrary because of course if you continue in physics you do build up a rich web of associations with x and t and the rest of them. Esther doesn’t know this, though.) The important content of the lecture is instead the structural relationships between the different symbols, and the ways of transforming one to another by formal rules. Pure cognitive decoupling.

There is a tangible physical object, the ‘little wooden ball’ (better than I got in my university mechanics lectures!), but that object has been chosen for its utter lack of vivid distinguishing features, its ability to stand in as a prototype of the whole abstract class of featureless spheres rolling down featureless inclined planes.

The lecturer’s suit is a bit crap, too. Nothing at all about this situation has been designed for a fulfilling, interconnected aesthetic experience.

I think it’s fairly obvious from the passage, but it seems to be worth pointing out anyway: ‘strong cognitive coupling’ doesn’t just equate to stupidity or lack of cognitive flexibility. For one thing, Esther gets an A anyway. For another, she’s able to give very perceptive, detailed descriptions of subtle features of her experience, always hugging close to the specificity of raw experience (‘the odd, heart-shaped leaf in the sex cycle of the fern’) rather than generic concepts that can be overlaid on to many observations (‘ah ok, it’s another sphere on an inclined plane’).

Strong coupling in this sense is like being a kind of sensitive antenna for your environment, learning to read as much meaning out of it as possible, but without necessarily being able to explain what you learn in a structured, explicit logical argument. I’d expect it to be correlated with high sensitivity to nonverbal cues, implicit tone, tacit understanding, all the kind of stuff that poets are stereotypically good at and nerds are stereotypically bad at.

I don’t normally talk about my own dislike of cognitive decoupling. It’s way too easy to sound unbearably precious and snowflakey, ‘oh my tastes are far too sophisticated to bear contact with this clunky nerd stuff’. In practice I just shut up and try to get on with it as far as I can. Organised systems are what keep the world functioning, and whining about them is mostly pointless. Also, I’m nowhere near the extreme end of this spectrum anyway, and can cope most of the time.

When I was studying maths and physics I didn’t even have to worry about this for the most part. You can compensate fairly well for a lack of ability in decoupled formal reasoning by just understanding the domain. This is very manageable, particularly if you pick your field well, because the same few ideas (calculus, linear algebra, the harmonic oscillator) crop up again and again and again and have very tangible physical interpretations, so there’s always something concrete to ground out the symbols with.

(This wasn’t a conscious strategy because I had no idea what was happening at the time. I just knew since I was a kid that I was ‘good at maths’ apart from some inexplicable occasions where I was instead very bad at maths, and just tried to steer towards the ‘good at maths’ bits as much as possible. This is my attempt to finally make some sense out of it.)

It’s been more of an issue since. Most STEM-type jobs outside of academia are pretty hard going, because the main objective is to get the job done, and you often don’t have time to build up a good picture of the overall domain, so you’re more reliant on the step-by-step systematic thing. A particularly annoying example would be something like implementing the business logic for a large enterprise CRUD app where you have no particularly strong domain knowledge. Maybe there’s a tax of 7% on twelve widgets, or maybe it’s a tax of 11.5% on five hundred widgets; either way, what it means for you personally is that you’re going to chuck some decontextualised variables around according to the rules defined in some document, with no vivid sensory understanding of exactly what these widgets look like and why they’re being taxed. There is basically no way that Esther in The Bell Jar could keep her sanity in a job like that, even if she has the basic cognitive capacity to do it; absolutely everything about it is viscerally wrong wrong wrong.
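For what it’s worth, the kind of logic in question might look something like this. Only the tax rates and quantities come from the example above; the function name, prices and structure are invented purely for illustration:

```python
# A hypothetical sliver of decontextualised business logic.
# The rule comes from 'some document'; nothing here grounds out
# in any picture of what a widget is or why it's taxed.

def widget_tax(quantity: int, rate: float, unit_price: float) -> float:
    """Apply a flat tax rate to an order of widgets, rounded to pence."""
    subtotal = quantity * unit_price
    return round(subtotal * rate, 2)

# Two orders, per the rules in the document:
order_a = widget_tax(quantity=12, rate=0.07, unit_price=3.50)
order_b = widget_tax(quantity=500, rate=0.115, unit_price=3.50)
```

The variables get chucked around exactly as specified, and the code is perfectly workable without any sensory picture of the domain at all, which is precisely the point.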

My current job is rather close to this end of the spectrum, and it’s a strain to work in this way, in a way many other colleagues don’t seem to experience. This is where the ‘downright petty and resentful’ bit comes in. I’d like it if there was a bit more acknowledgment from people who find cognitive decoupling easy and natural that it is in fact a difficult mode of thought for many of us, and one that most modern jobs dump us into far more than we’d like.

From the other side, I’m sure that the decouplers would also appreciate it if we stopped chucking around words like ‘inhuman’ and ‘robotic’, and did a bit less hating on decontextualised systems that keep the world running, even if they feel bad from the inside. I think some of this stuff is coming from a similar emotional place to my own petty resentment, but it’s not at all helpful for any actual communication between the sides.

I’m seeing a few encouraging examples of the kind of communication I would like. Sarah Constantin looks to be in something like a symmetric position to me on the other side of the bridge, with her first loyalty to explicit systematic reasoning, but enough genuine appreciation to be able to write thoughtful explorations of the other side:

I think it’s much better to try to make the implicit explicit, to bring cultural dynamics into the light and understand how they work, rather than to hide from them.

David Chapman has started to write about how the context-heavy sort of learning (‘reasonableness’) works, aimed at something like the cognitive decoupling elite:

In summary, reasonableness works because it is context-dependent, purpose-laden, interactive, and tacit. The ways it uses language are effective for exactly the reason rationality considers ordinary language defective: nebulosity.

And then there’s all the wonderful work by people like Bret Victor, who are working to open up subjects like maths and programming for people like me who need to see things if we are going to have a hope of doing them.

I hope this post at least manages to convey something of the flavour of strong cognitive coupling to those who find decoupling easy. So if the thing I’m trying to point at still looks unclear, please let me know in the comments!

# Imagination in a terrible strait-jacket

I enjoyed alkjash’s recent Babble and Prune posts on Less Wrong, and it reminded me of a favourite quote of mine, Feynman’s description of science in The Character of Physical Law:

What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting.

Imagination here corresponds quite well to Babbling, and the strait-jacket is the Pruning you do afterwards to see if it actually makes any sense.

For my tastes at least, early Less Wrong was generally too focussed on building out the strait-jacket to remember to put the imagination in it. An unfair stereotype would be something like this:

‘I’ve been working on being better calibrated, and I put error bars on all my time estimates to take the planning fallacy into account, and I’ve rearranged my desk more logically, and I’ve developed a really good system to keep track of all the tasks I do and rank them in terms of priority… hang on, why haven’t I had any good ideas??’

I’m poking fun here, but I really shouldn’t, because I have the opposite problem. I tend to go wrong in this sort of way:

‘I’ve cleared out my schedule so I can Think Important Thoughts, and I’ve got that vague idea about that toy model that it would be good to flesh out some time, and I can sort of see how Topic X and Topic Y might be connected if you kind of squint the right way, and it might be worth developing that a bit further, but like I wouldn’t want to force anything, Inspiration Is Mysterious And Shouldn’t Be Rushed… hang on, why have I been reading crap on the internet for the last five days??’

I think this trap is more common among noob writers and artists than noob scientists and programmers, but I managed to fall into it anyway despite studying maths and physics. (I’ve always relied heavily on intuition in both, and that takes you in a very different direction to someone who leans more on formal reasoning.) I’m quite a late convert to systems and planning and organisation, and now that I finally get the point, I’m fascinated by them and find them extremely useful.

One particular way I tend to fail is that my over-reliance on intuition leads me to think too highly of any old random thoughts that come into my head. And I’ve now come to the (in retrospect obvious) conclusion that a lot of them are transitory and really just plain stupid, and not worth listening to.

As a simple example, I’ve trained myself to get up straight away when the alarm goes off, and every morning my brain fabricates a bullshit explanation for why today is special and actually I can stay in bed, and it’s quite compelling for half a minute or so. I’ve got things set up so I can ignore it and keep doing things, though, and pretty quickly it just goes away and I never wish that I’d listened to it.

On the other hand, I wouldn’t want to tighten things up so much that I completely stopped having the random stream of bullshit thoughts, because that’s where the good ideas bubble up from too. For now I’m going with the following rule of thumb for resolving the tension:

Thoughts can be herded and corralled by systems, and fed and dammed and diverted by them, but don’t take well to being manipulated individually by systems.

So when I get up, for example, I don’t have a system in place where I try to directly engage with the bullshit explanation du jour and come up with clever countertheories for why I actually shouldn’t go back to bed. I just follow a series of habitual getting-up steps, and then after a few minutes my thoughts are diverted to a more useful track, and then I get on with my day.

A more interesting example is the common writers’ strategy of having a set routine (there’s a whole website devoted to these). Maybe they work at the same time each day, or always work in the same place. This is a system, but it’s not a system that dictates the actual content of the writing directly. You just sit and write, and sometimes it’s good, and sometimes it’s awful, and on rare occasions it’s genuinely inspired, and if you keep plugging on, those rare occasions hopefully become more frequent. I do something similar with making time to learn physics now and it works nicely.

This post is also a small application of the rule itself! I was on an internet diet for a couple of months, and was expecting to generate a few blog post drafts in that time, and was surprised that basically nothing came out in the absence of my usual internet immersion. I thought writing had finally become a pretty freestanding habit for me, but actually it’s still more fragile and tied to a social context than I expected. So this is a deliberate attempt to get the writing flywheel spun up again with something short and straightforward.

# Seeing further

In November and December I’m going to shut up talking nonsense here, and also have a break from reading blogs, Twitter, and most of my usual other internet stuff. I like blogs and the internet, but being immersed in the same things all the time means I end up having the same thoughts all the time, and sometimes it is good to have some different ones. I’ve seen a big improvement this year from not being on tumblr, which mostly immerses you in other people’s thoughts, but stewing in the same vat of my own ones gets a bit old too. Also I have a bunch of stuff that I’ve been procrastinating on, so I should probably go do that.

I’ve pretty much accepted that I am in fact writing a real blog, and not some throwaway thing that I might just chuck tomorrow. That was psychologically useful in getting me started, but now writing just seems to appear whether I want it to or not. So I might do some reworking next year to make it look a bit more like a real blog and less like a bucket of random dross.

The topic of the blog has spread outwards a bit recently (so far as there is a topic — I haven’t made any deliberate effort to enforce one), but there does seem to be a clear thread connecting my mathematical intuition posts with my more recent ramblings.

One of the key things I seem to be interested in exploring in both is the process of being able to ‘see’ more, in being able to read new meaning into your surroundings. I’ve looked at examples in a couple of previous posts. One is the ‘prime factors’ proof of the irrationality of the square root of two, where you learn to directly ‘see’ the equation $\frac{p^2}{q^2} = 2$ as wrong (both $p^2$ and $q^2$ have an even number of each prime factor, so dividing one by the other is never going to give a 2 on its own).
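The parity argument above can be sketched in a few lines of Python. This is a minimal illustration of the same reasoning, not something from the original proof:

```python
from collections import Counter

def prime_factors(n):
    """Return a Counter mapping each prime factor of n to its exponent."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

# Every exponent in a square's factorisation is even...
for p in range(2, 50):
    assert all(e % 2 == 0 for e in prime_factors(p * p).values())

# ...while 2 * q^2 always has an odd power of 2, so p^2 = 2 * q^2
# (i.e. p^2 / q^2 = 2) can never hold for whole numbers p and q.
for q in range(1, 200):
    assert prime_factors(2 * q * q)[2] % 2 == 1
```

Once you ‘see’ squares as prime factorisations with all-even exponents, the equation is wrong on sight, no contradiction machinery required.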

Another is the process of noticing interesting optical phenomena after reading Light and Colour in the Outdoors. I see a lot of sun dogs, rainbows and 22° halos that I’d have missed before. (No circumzenithal arcs yet though! Maybe I’m not looking up enough.)

They are sort of different: the first one feels more directly perceptual — I actually see the equation differently — while the second feels like more of a disposition to scan my surroundings for certain things that I’d previously have missed. I’m currently doing too much lumping, and will want to distinguish cases more carefully later. But there seems to be some link there.

I’m interested in the theoretical side of how this process of seeing more might work, but currently I’d mostly just like to track down natural histories of what this feels like to people from the inside. This sort of thing could be distributed all over the place — fiction? the deliberate practice literature? autobiographies of people with unusual levels of expertise? — so it’s not easy to search for; if you have any leads please pass them on.

I hadn’t really thought to explicitly link this to philosophy of science, even though I’d actually read some of the relevant things, but now that David Chapman is pointing it out in his eggplant book, it’s pretty obvious that that’s somewhere I should look. There is a strong link with Kuhn’s scientific revolutions, in which scientists learn to see their subject within a new paradigm, and I should investigate more. I used to hang around with some of the philosophy of science students as an undergrad and liked the subject, so that could be fun anyway.

We ended up discussing a specific case study on Twitter (Storify link to the whole conversation here): ‘The Work of a Discovering Science Construed with Materials from the Optically Discovered Pulsar’, by Harold Garfinkel, Michael Lynch and Eric Livingston. This is an account based on transcripts from the first observations of the Crab Pulsar in the optical part of the spectrum. There’s a transition over the course of the night from talking about the newly discovered pulse in instrumental terms, as a reading on the screen…

In the previous excerpts projects make of the optically discovered pulsar a radically contingent practical object. The parties formulate matters of ‘belief’ and ‘announcement’ to be premature at this point.

…to ‘locking on’ to the pulsar as a defined object:

By contrast, the parties in the excerpts below discuss the optically discovered pulsar as something-in-hand, available for further elaboration and analysis, and essentially finished. … Instead of being an ‘object-not-yet’, it is now referenced as a perspectival object with yet to be ‘found’ and measured properties of luminosity, pulse amplitude, exact frequency, and exact location.

This is high-concept, big-question seeing further!

I’m currently more interested in the low-concept, small-question stuff, though, like my two examples above. Or maybe I want to consider even duller and more mundane situations than those — I’ve done a lot of really low-level temporary administrative jobs, data entry and sorting the post and the like, and they always give me some ability to see further in some domain, even if ‘seeing further’ tends to consist of being able to rapidly identify the right reference code on a cover letter, or something else equally not thrilling. The point is that a cover letter looks very different once you’ve learned to do the thing, because the reference code ‘jumps out’ at you. There’s some sort of family resemblance to a big fancy Kuhnian paradigm shift.

The small questions are lacking somewhat in grandeur and impressiveness, but make up for it in sheer number. Breakthroughs like the pulsar observation don’t come along very often, and full-scale scientific revolutions are even rarer, but millions of people see further in their boring office jobs every day. There’s much more opportunity to study how it works!

# Followup: messy confusions

My last post was a long enthusiastic ramble through a lot of topics that have been in my head recently. After I finished writing it, I had an interesting experience. I’d temporarily got all the enthusiasm out of my system and was sick of the sight of the post, so that what was left was all the vague confusions and nagging doubts that were sitting below it. This post is where I explore those, instead. (Though now I’m getting excited again… the cycle repeats.)

Nothing in here really invalidates the previous post, so far as I can tell. I’ve just reread it and I’m actually pretty happy with it. It’s just… more mess. Things that don’t fit neatly into the story I told last time, or things that I know I’m still missing background in.

I haven’t bothered to even try and impose any structure on this post; it’s just a list of confusions in more-or-less random order. I also haven’t made much effort to unpack my thoughts carefully, so I don’t know how comprehensible all of this is.

### I probably do have to read Heidegger or something

My background in reading philosophy is, more or less, ‘some analytic philosophers plus Kant’. I’ve been aware for a long time now that that just isn’t enough to cover the space, and that I’m missing a good sense of what the options even are.

I’m slowly coming round to the idea that I should fix that, even though it’s going to involve reading great lumps of text in various annoying writing styles I don’t understand. I now have a copy of Dreyfus’s Being-in-the-World, which isn’t exactly easy going in itself, but is still better than actually reading Heidegger.

Also, I went to the library at UWE Bristol, my nearest university, the other weekend, and someone there must be a big Merleau-Ponty fan. It looks like I can get all the phenomenology I can eat reasonably easily, if that’s what I decide I want to read.

### One thing: worse than two things

Still, reading back what I wrote last time about mess, I think that even at my current level of understanding I did manage to extricate myself before I made a complete fool of myself:

There is still some kind of principled distinction here, some way to separate the two. The territory corresponds pretty well to the bottom-up bit, and is characterised by the elements of experience that respond in unpredictable, autonomous ways when we investigate them. There’s no way to know a priori that my mess is going to consist of exercise books, a paper tetrahedron and a kitten notepad. You have to, like, go and look at it.

The map corresponds better to the top-down bit, the ordering principles we are trying to impose. These are brought into play by the specific objects we’re looking at, but have more consistency across environments – there are many other things that we would characterise as mess.

Still, we’ve come a long way from the neat picture of the Less Wrong wiki quote. The world outside the head and the model inside it are getting pretty mixed up. For one thing, describing the remaining ‘things in the head’ as a ‘model’ doesn’t fit too well. We’re not building up a detailed internal representation of the mess. For another, we directly perceive mess as mess. In some sense we’re getting the world ‘all at once’, without the top-down and bottom-up parts helpfully separated.

The main reason I was talking about not having enough philosophical background to cover the space is that I’ve no idea yet how thoroughgoing I want to be in this direction of mushing everything up together. There is this principled distinction between all the autonomous uncontrollable stuff that the outside world is throwing at you, and the stuff inside your head. Making a completely sharp distinction between them is silly, but I still want to say that it’s a lot less silly than ‘all is One’ silliness. Two things really is better than one thing.

Sarah Constantin went over similar ground recently:

The basic, boring fact, usually too obvious to state, is that most of your behavior is proximately caused by your brain (except for reflexes, which are controlled by your spinal cord.) Your behavior is mostly due to stuff inside your body; other people’s behavior is mostly due to stuff inside their bodies, not yours. You do, in fact, have much more control over your own behavior than over others’.

This is obvious, but seems worth restating to me too. People writing in the vague cluster that sometimes gets labelled postrationalist/metarationalist are often really keen on the blurring of these categories. The boundary of the self is fluid, our culture affects how we think of ourselves, concepts don’t have clear boundaries, etc etc. Maybe it’s just a difference in temperament, but so far I’ve struggled to get very excited by any of this. You can’t completely train the physicist out of me, and I’m more interested in the pattern side than the nebulosity side. I want to shake people lost in the relativist fog, and shout ‘look at all the things we can find in here!’

### How important is this stuff for doing science?

One thing that fascinates me is how well physics often does by just dodging the big philosophical questions.

Newtonian mechanics had deficiencies that were obvious to the most sophisticated thinkers of the time — instantaneous action at a distance, absolute acceleration — but worked unbelievably well for calculational purposes anyway. Leibniz had most of the good arguments, but Newton had a bucket, and the thing just worked, and pragmatism won the day for a while. (Though there were many later rounds of that controversy, and we’re not finished yet.)

This seems to be linked to the strange feature that we can come up with physical theories that describe most of the phenomena pretty well, but have small holes, and filling in the holes requires not small bolt-on corrections but a gigantic elegant new superstructure that completely subsumes the old one. So the ‘naive’ early theories work far better than you might expect having seen the later ones. David Deutsch puts it nicely in The Fabric of Reality (I’ve bolded the key sentence):

…the fact that there are complex organisms, and that there has been a succession of gradually improving inventions and scientific theories (such as Galilean mechanics, Newtonian mechanics, Einsteinian mechanics, quantum mechanics,…) tells us something more about what sort of computational universality exists in reality. It tells us that the actual laws of physics are, thus far at least, capable of being successively approximated by theories that give ever better explanations and predictions, and that the task of discovering each theory, given the previous one, has been computationally tractable, given the previously known laws and the previously available technology. The fabric of reality must be, as it were, layered, for easy self-access.

One thing that physics has had a really good run of plugging its ears to and avoiding completely is questions of our internal perceptions of the world as it appears to us. Apart from some rather basic forms of observer-dependence (only seeing the light rays that travel to our eyes, etc.), we’ve mostly been able to sustain a simple realist model of a ‘real world out there’ that ‘appears to us directly’ (the quotes are there to suggest that this works best of all if you interpret the words in a common sense sort of way and don’t think too deeply about exactly what they mean). There hasn’t been much need within physics to posit top-down style explanations, where our observations are constrained by the possible forms of our perception, or interpreted in the light of our previous understanding.

(I’m ignoring the various weirdo interpretations of quantum theory that have a special place for the conscious human observer here, because they haven’t produced much in the way of new understanding. You can come up with a clever neo-Kantian interpretation of Bohr, or an awful woo one, but so far these have always been pretty free-floating, rather than locking onto reality in a way that helps us do more.)

Eddington, who had a complex and bizarre idealist metaphysics of his own, discusses this in The Nature of the Physical World (my bolds again):

The synthetic method by which we build up from its own symbolic elements a world which will imitate the actual behaviour of the world of familiar experience is adopted almost universally in scientific theories. Any ordinary theoretical paper in the scientific journals tacitly assumes that this approach is adopted. It has proved to be the most successful procedure; and it is the actual procedure underlying the advances set forth in the scientific part of this book. But I would not claim that no other way of working is admissible. We agree that at the end of the synthesis there must be a linkage to the familiar world of consciousness, and we are not necessarily opposed to attempts to reach the physical world from that end…

…Although this book may in most respects seem diametrically opposed to Dr Whitehead’s widely read philosophy of Nature, I think it would be truer to regard him as an ally who from the opposite side of the mountain is tunneling to meet his less philosophically minded colleagues. The important thing is not to confuse the two entrances.

This is all very nice, but it doesn’t seem much of a fair division of labour. When it comes to physics, Eddington’s side’s got those big Crossrail tunnelling machines, while Whitehead’s side is bashing at rocks with a plastic bucket and spade. The naive realist world-as-given approach just works extraordinarily well, and there hasn’t been a need for much of a tradition of tunnelling on the other side of the mountain.

Where I’m trying to go with this is that I’m not too sure where philosophical sophistication actually gets us. I think the answer for physics might be ‘not very far’… at least for this type of philosophical sophistication. (Looking for deeper understanding of particular conceptual questions within current physics seems to go much better.) Talking about ‘the real world out there’ just seems to work very well.

Maybe this only holds for physics, though. If we’re talking about human cognition, it looks inescapable that we’re going to have to do some digging on the other side.

This is roughly the position I hold at the moment, but I notice I still have a few doubts. The approach of ‘find the most ridiculous and reductive possible theory and iterate from there’ had some success even in psychology. Behaviourism and operant conditioning were especially ridiculous and reductive, but found some applications anyway (horrible phone games and making pigeons go round in circles).

I don’t know much about the history, but as far as I know behaviourism grew out of the same intellectual landscape as logical positivism (but persisted longer?), which is another good example of a super reductive wrong-but-interesting theory that probably had some useful effects. Getting the theory to work was hopeless, but I do see why people like Cosma Shalizi and nostalgebraist have some affection for it.

[While I’m on this subject, is it just me or does old 2008-style Less Wrong have a similar flavour to the logical positivists? That same ‘we’ve unmasked the old questions of philosophy as meaningless non-problems, everything makes sense in the light of our simple new theory’ tone? Bayesianism is more sophisticated than logical positivism, because it can accommodate some top-down imposition of order on perception in the form of priors. But there’s still often a strange incuriosity about what priors are and how they’d work, that gives me the feeling that it hasn’t broken away completely from the unworkable positivist world of clean logical propositions about sense-impressions.

The other similarity is that both made an effort to write as clearly as possible, so that where things are wrong you have a hope of seeing that they’re wrong. I learned a lot ten years ago from arguing in my head with Language, Truth and Logic, and I’m learning a lot now from arguing in my head with Less Wrong.]

Behaviourism was superseded by more sophisticated bad models (‘the brain is a computer that stores and updates a representation of its surroundings’) that presumably still have some domain of application, and it’s plausible that a good approach is to keep hammering on each one until it stops producing anything. Maybe there’s a case for the good old iterative approach of bashing, say, neural nets together until we’ve understood all the things neural nets can and can’t do, and only then bashing something else together once that finally hits diminishing returns.

I’m not actually convincing myself here! ‘We should stop having good ideas, and just make the smallest change to the current bad ones that might get us further’ is a crummy argument, and I definitely don’t believe it. But like Deutsch I feel there’s something interesting in the fact that dumb theories work at all, that reality is helpfully layered for our convenience.

### The unreasonable effectiveness of mathematics

Just flagging this one up again. I don’t have any more to say about it, but it’s still weird.

### The ‘systematiser’s hero’s journey’

This isn’t a disagreement, just a difference in emphasis. Chapman is mainly writing for people who are temperamentally well suited to a systematic cognitive style and have trouble moving away from it. That seems to fit a lot of people well, and I’m interested in trying to understand what that path feels like. The best description I’ve found is in Phil Agre’s Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI. He gives a clear description of what immersion in the systematic style looks like…

As an AI practitioner already well immersed in the literature, I had incorporated the field’s taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial — except that it reproduced the same technical schemata as the AI literature.

… and the process of breaking out from it:

My first intellectual breakthrough came when, for reasons I do not recall, it finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms. This was very difficult because my technical training had instilled in me two polar-opposite orientations to language — as precisely formalized and as impossibly vague — and a single clear mission for all discursive work — transforming vagueness into precision through formalization (Agre 1992). The correct orientation to the language of these texts, as descriptions of the lived experience of ordinary everyday life, or in other words an account of what ordinary activity is like, is unfortunately alien to AI or any other technical field.

I find this path impressive, because it involves deliberately moving out of a culture that you’re very successful in, and taking the associated status hit. I don’t fully understand what kicks people into doing this! It does come with a nice clean ‘hero’s journey’ type narrative arc though: dissatisfaction with the status quo, descent into the nihilist underworld, and the long journey back up to usefulness, hopefully with new powers of understanding.

I had a very different experience. For me it was just a slow process of attrition, where I slowly got more and more frustrated with the differences between my weirdo thinking style and the dominant STEM one, which I was trained in but never fully took to. This reached a peak at the end of my PhD, and was one of the main factors that made me just want to get out of academia.

I’ve already ranted more than anybody could possibly want to hear about my intuition-heavy mathematical style and rubbishness at step-by-step reasoning, so I’m not going to go into that again here. But I also often have trouble doing the ‘workings of a mechanism’ thing, in a way I find hard to describe.

(I can give a simple example, though, which maybe points towards it. I’m the kind of computer user who, when the submit button freezes, angrily clicks it again twenty times as fast as I can. I know theoretically that ‘submitting the form’ isn’t a monolithic action, that behind the scenes it has complex parts, that nobody has programmed a special expediteRequestWhenUserIsAngry() method that will recognise my twenty mouse clicks, and that all I’m achieving is queuing up pointless extra requests on the server, or nothing at all. But viscerally it feels like just one thing that should be an automatic consequence of my mouse click, like pressing a physical button and feeling it lock into place, and it’s hard to break immersion in the physical world and consider an unseen mechanism behind it. The computer’s failure to respond is just wrong wrong wrong, and I hate it, and maybe if I click this button enough times it’ll understand that I’m angry. The calm analytic style is not for me.)

(… this is a classic Heideggerian thing, right? Being broken out of successful tool use? This is what I mean about having to read Heidegger.)

On the other side, I don’t have any problem with dealing with ideas that aren’t carefully defined yet. I’m happy to deal with plenty of slop and ambiguity in between ‘precisely formalized’ and ‘impossibly vague’. I’m also happy to take things I only know implicitly, and slowly try to surface more of them over time as explicit models. I’m pretty patient with this, and don’t seem to be as bothered by a lack of precision as more careful systematic thinkers.

This has its advantages too. Hopefully I’m getting to a similar sort of place by a different route.

# Metarationality: a messy introduction

In the last couple of years, David Chapman’s Meaningness site has reached the point where enough of the structure is there for a bunch of STEM nerds like me to start working out what he’s actually talking about. So there’s been a lot of excited shouting about things like ‘metarationality’ and ‘ethnomethodology’ and ‘postformal reasoning’.

Not everyone is overjoyed by this. There was a Less Wrong comment by Viliam back in January which I thought made the point clearly:

How this all feels to me:

When I look at the Sequences, as the core around which the rationalist community formed, I find many interesting ideas and mental tools. (Randomly listing stuff that comes to my mind: Bayes theorem, Kolmogorov complexity, cognitive biases, planning fallacy, anchoring, politics is the mindkiller, 0 and 1 are not probabilities, cryonics, having your bottom line written first, how an algorithm feels from inside, many-worlds interpretation of quantum physics, etc.)

When I look at “Keganism”, it seems like an affective spiral based on one idea.

I am not saying that it is a wrong or worthless idea, just that comparing “having this ‘one weird trick’ and applying it to everything” with the whole body of knowledge and attitudes is a type error. If this one idea has merit, it can become another useful tool in a large toolset. But it does not surpass the whole toolset or make it obsolete, which the “post-” prefix would suggest.

Essentially, the “post-” prefix is just a status claim; it connotationally means “smarter than”.

To compare, Eliezer never said that using Bayes theorem is “post-mathematics”, or that accepting many-worlds interpretation of quantum physics is “post-physics”. Because that would just be silly. Similarly, the idea of “interpenetration of systems” doesn’t make one “post-rational”.

In other words, what have the metarationalists ever done for us? Rationality gave us a load of genuinely exciting cognitive tools. Then I went to metarationality, and all I got were these lousy Kegan stages.

This would be a very fair comment, if the only thing there was the Kegan idea. I’m one of the people who does find something weirdly compelling in that, and I was thinking about it for months. But I have to admit it’s a total PR disaster for attracting the people who don’t, on the level of equating torture with dust specks. (At least PR disasters are one thing that rationalists and metarationalists have in common!)

‘You don’t understand this because you haven’t reached a high enough stage of cognitive development’ is an obnoxious argument. People are right to want to run away from this stuff.

Also, as Viliam points out, it’s just one idea, with a rather unimpressive evidence base at that. That wouldn’t warrant a fancy new word like ‘metarationality’ on its own. [1]

Another idea I sometimes see is that metarationality is about the fact that a particular formal system of beliefs might not work well in some contexts, so it’s useful to keep a few in mind and be able to switch between them. This is point 2 of a second comment by Viliam on a different Less Wrong thread, trying to steelman his understanding of metarationality:

Despite admitting verbally that a map is not the territory, rationalists hope that if they take one map, and keep updating it long enough, this map will asymptotically approach the territory. In other words, that in every moment, using one map is the right strategy. Meta-rationalists don’t believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory, when they encounter a new part of territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.

Furthermore, meta-rationalists don’t really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would not fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose “the current scientific knowledge” as one of your starting maps.)

I don’t personally find the map/territory distinction all that helpful and will talk about that more later. Still, I think that this is OK as far as it goes, and much closer to the central core than the Kegan stage idea. To me it’s rather a vague, general sort of insight, though, and there are plenty of other places where you could get it. I’m not surprised that people aren’t falling over themselves with excitement about it.

I think people are looking for concrete, specific interesting ideas, along the lines of Viliam’s list of concepts he learned from rationality. I very much have this orientation myself, of always needing to go from concrete to abstract, so I think I understand a bit of what’s missing for a lot of people.

(Example: My first experience of reading Meaningness a few years ago was to click around some early posts, read a lot of generalities about words like ‘nebulosity’ and ‘pattern’ and ‘eternalism’, and completely miss the point. ‘Maybe this is some kind of Alain de Botton style pop philosophy? Anyway, it’s definitely not something I care about.’ There are a lot more specifics now, so it’s much easier to follow a concrete-to-abstract path and work out what’s going on. If you also tend to learn this way, I’d advise starting with the most recent parts, or something from the metablog, and working your way back.)

I do think that these concrete, specific ideas exist. I wrote a little bit in that first Less Wrong thread about what I found interesting, but it’s pretty superficial and I’ve thought about it a lot more since. This is my first attempt at a more coherent synthesis post. I’ve written it mostly for my own benefit, to make myself think more clearly about some vague parts, and it’s very much work-in-progress thinking out loud, rather than finalised understanding. This is the kind of thing I enjoy writing, out on the ‘too early’ side of this ‘when to write’ graph. (This blog is mostly for talking about things I’m still trying to understand myself, rather than polished after-the-fact explanation.)

There’s a lot there, it’s real stuff rather than vague generalities about how systems sometimes don’t work very well, and it’s very much worth bothering with. That’s what I want to try and get across in this post!

Also, this is just my idea of what’s interesting, and I’ve stuck to one route through the ideas in this post because otherwise the length would get completely out of control. Maybe others see things completely differently. If so, I’d like to know. I’ve filled this with links to make it easy to explore elsewhere.

### Contents

This post is pretty long, so maybe a summary of what’s in it would be a good idea. Roughly, I’m going to cover:

• How we think we think, vs. how we actually think: if you look closely at even the most formal types of reasoning, what we’re actually doing doesn’t tend to look so rigorous and logical.
• Still, on its own that wouldn’t preclude the brain doing something very rigorous and logical behind the scenes, in the same way that we don’t consciously know about the image processing our brain is doing for our visual field. Some discussion of why the prospects for that don’t look great either.
• Dumb confused interlude on the unreasonable effectiveness of mathematics.
• The cognitive flip: queering the inside-the-head/outside-the-head binary. Sarah Perry’s theory of mess as a nice example of this.
• Through fog to the other side: despite all this confusion, we can navigate anyway.

### How we think we think, vs. how we actually think

I’ve got like a thousand words into this without bringing up my own weirdo obsession with mathematical intuition, which is unusually restrained, so let’s fix that now. It’s something I find interesting in itself, but it’s also this surprisingly direct rabbit hole into some rather fundamental ideas about how we think.

Take mathematical proofs, for example. In first year of a maths degree, everyone makes a massive deal out of how you’re going to be learning to write proofs now, and how this is some extra special new rigorous mode of thought that will teach you to think like a real mathematician. This is sort of true, but I found the transition incredibly frustrating and confusing, because I could never work out what was going on. What level of argument actually constitutes a proof?

I’d imagined that a formal proof would be some kind of absolutely airtight thing where you started with a few reasonable axioms and some rules of inference, and derived everything from that. I was quite excited, because it did sound like that would be a very useful thing to learn!

We did learn a little bit of formal logic stuff, basic propositional and predicate calculus. But most of the proofs we saw were not starting there, along with say the axioms of the real numbers, and working up. (If they had, they’d have been hundreds of pages long and completely unilluminating.)

Instead we proved stuff at this weird intermediate level. There were theorems like ‘a sequence of numbers that always increases, but is bounded by some upper value, will converge’, or ‘a continuous function f on the interval from a to b takes on all the values between f(a) and f(b)’. We weren’t deriving these from basic axioms. But also we weren’t saying ‘yeah that’s obviously true, just look at it’, like I’d have done before going to the class. Instead there was this special domain-specific kind of reasoning with lots of epsilons and deltas, and a bunch of steps to include.
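To make the flavour of that intermediate level concrete, here’s roughly how the first of those theorems gets proved in a first-year course. This is a standard textbook argument sketched from memory, not a quotation from any particular course, but it shows the characteristic mix: one genuinely substantive appeal (to completeness of the reals), some epsilon bookkeeping, and nothing anywhere near the level of formal logic:

```latex
\begin{theorem}[Monotone convergence]
If $(a_n)$ is increasing and bounded above, then $(a_n)$ converges.
\end{theorem}
\begin{proof}
Let $L = \sup_n a_n$, which exists by the completeness of the reals.
Given $\varepsilon > 0$, the number $L - \varepsilon$ is not an upper
bound, so there is some $N$ with $a_N > L - \varepsilon$. Since the
sequence is increasing, $L - \varepsilon < a_N \le a_n \le L$ for all
$n \ge N$, and hence $|a_n - L| < \varepsilon$. Therefore $a_n \to L$.
\end{proof}
```

Notice which steps are spelled out (the appeal to the supremum, the epsilon estimate) and which are waved through (that $a_n \le L$ follows from $L$ being an upper bound). Knowing which is which is exactly the tacit skill the course never explicitly taught.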

How do you know which steps are the important ones to include? I never really worked that out at the time. In practice, I just memorised the proof and regurgitated it in the exam, because at least that way I knew I’d get the level right. Unsurprisingly, this didn’t exactly ignite a fire of deep intellectual excitement inside me. In the end I just gave up and started taking as many applied maths and physics courses as possible, where I could mostly continue doing whatever I was doing before they tried to introduce me to this stupid proofs thing.

If I went back now I’d probably understand. That weird intermediate level probably is the right one, the one that fills in the holes where your intuition can go astray but also avoids boring you with the rote tedium of obvious deductive steps. [2] Maybe seeing more pathological examples would help, cases where your intuitive ideas really do fail and this level of rigour is actually useful. [3]

An interesting question at this point is, how do you generate these intermediate-level proofs? One answer would be that you are starting from the really formal level, thinking up very careful airtight proofs in your head, and then only writing down some extracted key steps. I think it’s fairly clear you’re not doing that, at least at the level of conscious access (more on the idea that that’s what we’re ‘really’ doing later).

The reality seems to be messier. Explicitly thinking through formal rules is useful some of the time. But it’s only one method among many.

Sometimes, for example, you notice that, say, an equation admits an algebraic simplification you’ve used many times before, and mechanistic formula-churning takes over for a while. This may have required thinking through formal rules when you first learned it, but by now your fingers basically know the answer. Sometimes the resulting expression looks messy, and some rather obsessive part of the mind is not happy until like terms are collected tidily together. Sometimes you realise that part of the complexity of the problem ‘is just book-keeping’, and can be disposed of by, for example, choosing the origin of your coordinate system sensibly. The phrase ‘without loss of generality’ becomes your friend. Sometimes a context-specific trick comes to mind (‘probably it’s another one of those thingies where we sandwich it between two easier functions and show that they converge’).
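That last ‘sandwiching’ trick is the squeeze theorem, and a minimal example of the pattern (my own illustration, not one from any source quoted here) runs like this:

```latex
For all $x \neq 0$ we have $-1 \le \sin(1/x) \le 1$, so
\[
  -x^2 \;\le\; x^2 \sin\!\left(\tfrac{1}{x}\right) \;\le\; x^2 .
\]
Since both $-x^2 \to 0$ and $x^2 \to 0$ as $x \to 0$, the squeeze
theorem gives
\[
  \lim_{x \to 0} x^2 \sin\!\left(\tfrac{1}{x}\right) = 0 ,
\]
even though $\sin(1/x)$ itself oscillates wildly and has no limit.
```

The awkward function is trapped between two easy ones, and the easy ones do all the work. Recognising ‘probably it’s another one of those thingies’ is pattern-matching, not deduction.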

There’s no real end to the list of things to try. Generating proofs is a fully general education in learning to Think Real Good. But some fundamental human faculties come up again and again. The mathematician Bill Thurston gave a nice list of these in his wonderful essay On proof and progress in mathematics. This is the sort of essay where when you start quoting it, you end up wanting to quote the whole lot (seriously just go and read it!), but I’ve tried to resist this and cut the quote down to something sensible:

(1) Human language. We have powerful special-purpose facilities for speaking and understanding human language, which also tie in to reading and writing. Our linguistic facility is an important tool for thinking, not just for communication. A crude example is the quadratic formula which people may remember as a little chant, “ex equals minus bee plus or minus the square root of bee squared minus four ay see all over two ay.” …

(2) Vision, spatial sense, kinesthetic (motion) sense. People have very powerful facilities for taking in information visually or kinesthetically, and thinking with their spatial sense. On the other hand, they do not have a very good built-in facility for inverse vision, that is, turning an internal spatial understanding back into a two-dimensional image. Consequently, mathematicians usually have fewer and poorer figures in their papers and books than in their heads …

(3) Logic and deduction. We have some built-in ways of reasoning and putting things together associated with how we make logical deductions: cause and effect (related to implication), contradiction or negation, etc. Mathematicians apparently don’t generally rely on the formal rules of deduction as they are thinking. Rather, they hold a fair bit of logical structure of a proof in their heads, breaking proofs into intermediate results so that they don’t have to hold too much logic at once …

(4) Intuition, association, metaphor. People have amazing facilities for sensing something without knowing where it comes from (intuition); for sensing that some phenomenon or situation or object is like something else (association); and for building and testing connections and comparisons, holding two things in mind at the same time (metaphor). These facilities are quite important for mathematics. Personally, I put a lot of effort into “listening” to my intuitions and associations, and building them into metaphors and connections. This involves a kind of simultaneous quieting and focusing of my mind. Words, logic, and detailed pictures rattling around can inhibit intuitions and associations.

(5) Stimulus-response. This is often emphasized in schools; for instance, if you see 3927 × 253, you write one number above the other and draw a line underneath, etc. This is also important for research mathematics: seeing a diagram of a knot, I might write down a presentation for the fundamental group of its complement by a procedure that is similar in feel to the multiplication algorithm.

(6) Process and time. We have a facility for thinking about processes or sequences of actions that can often be used to good effect in mathematical reasoning. One way to think of a function is as an action, a process, that takes the domain to the range. This is particularly valuable when composing functions. Another use of this facility is in remembering proofs: people often remember a proof as a process consisting of several steps.

Logical deduction makes an appearance on this list, but it’s not running the show. It’s one helpful friend in a group of equals. [4]

Mathematics isn’t special here: it just happened to be my rabbit hole into thinking about how we think about things. It’s a striking example, because the gap between the formal stories we like to tell and the messy reality is such a large one. But there are many other rabbit holes. User interface design (or probably any kind of design) is another good entry point. You’re looking at what people actually do when using your software, not your clean elegant theory of what you hoped they’d do. [5]

Apparently the general field that studies what people actually do when they work with systems is called ‘ethnomethodology’. Who knew? Why does nobody tell you this??

(Side note: if you poke around Bret Victor’s website, you can find this pile of pdfs, which looks like some kind of secret metarationality curriculum. You can find the Thurston paper and some of the mathematical intuition literature there, but overall there’s a strong design/programming focus, which could be a good way in for many people.)

### After virtue epistemology

On its own, this description of what we do when we think about a problem shouldn’t necessarily trouble anyone. After all, we don’t expect to have cognitive access to everything the brain does. We already expect that we’re doing a lot of image processing and pattern recognition and stuff. So maybe we’re actually all running a bunch of low-level algorithms in our head which are doing something very formal and mathematical, like Bayesian inference or something. We have no direct access to those, though, so maybe it’s perfectly reasonable that what we see at a higher level looks like a bunch of disconnected heuristics. If we just contented ourselves with a natural history of those heuristics, we might be missing out on the chance of a deeper explanatory theory.

Scott Alexander makes exactly this point in a comment on Chapman’s blog:

Imagine if someone had reminded Archimedes that human mental simulation of physics is actually really really good, and that you could eyeball where a projectile would fall much more quickly (and accurately!) than Archimedes could calculate it. Therefore, instead of trying to formalize physics, we should create a “virtue physics” where we try to train people’s minds to better use their natural physics simulating abilities.

But in fact there are useful roles both for virtue physics and mathematical physics. As mathematical physics advances, it can gradually take on more of the domains filled by virtue physics (the piloting of airplanes seems like one area where this might have actually happened, in a sense, and medicine is in the middle of the process now).

So I totally support the existence of virtue epistemology but think that figuring out how to gradually replace it with something more mathematical (without going overboard and claiming we’ve already completely learned how to do that) is a potentially useful enterprise.

Chapman’s response is that

… if what I wrote in “how to think” looked like virtue ethics, it’s probably only because it’s non-systematic. It doesn’t hold out the possibility of any tidy answer.

I would love to have a tidy system for how to think; that would be hugely valuable. But I believe strongly that there isn’t one. Pursuing the fantasy that maybe there could be one is actively harmful, because it leads away from the project of finding useful, untidy heuristics.

This is reasonable, but I still find it slightly disappointing, in that it seems to undersell the project as he describes it elsewhere. It’s true that Chapman isn’t proposing a clean formal theory that will explain all of epistemology. But my understanding is that he is trying to do something more explanatory than just cataloguing a bunch of heuristics, and that doesn’t come across here. In other parts of his site he gives some indication of the sorts of routes to better understanding of cognition he finds promising.

Hopefully he’s going to expand on the details some time soon, but it’s tempting to peek ahead and try and work out the story now. Again, I’m no expert here, at all, so for the next section assume I’m doing the typical arrogant physicist thing.

The posts I linked above gave me lots of pieces of the argument, but at first I couldn’t see how to fit them into a coherent whole. Scott Alexander’s recent predictive processing post triggered a few thoughts that filled in some gaps, so I went and pestered Chapman in his blog comments to check I had the right idea.

Scott’s post is one of many posts where he distinguishes between ‘bottom-up’ and ‘top-down’ processing.

Bottom-up processing starts with raw sensory data and repackages it into something more useful: for vision, this would involve correcting for things like the retinal blind spot and the instability of the scene as we move our gaze. To quote from the recent post:

The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.

Top-down processing is that thing where Scott writes ‘the the’ in a sentence and I never notice it, even though he always does it. It’s the top-down expectations (‘the word “the” isn’t normally repeated’) we’re imposing on our perceptions.

This division makes a lot of sense as a general ordering scheme: we know we’re doing both these sorts of things, and that we somehow have to take both into consideration at once when interpreting the scene. The problem is working out what’s relevant. There’s a gigantic amount of possibly relevant sense data, and a gigantic amount of possibly relevant existing knowledge. We need to somehow extract the parts that are useful in our situation and make decisions on the fly.

On the bottom-up side, there are some reasonable ideas for how this could work. We can already do a good job of writing computer algorithms to process raw pixel data and extract important features. And there is often a reasonably clearcut, operational definition of what ‘relevant’ could possibly mean.

Relevant objects are likely to be near you rather than miles away; and the most salient objects are likely to be the ones that recently changed, rather than ones that have just sat there for the last week. These sort of rules reduce the pressure to have to take in everything, and push a lot of the burden onto the environment, which can cue you in.

This removes a lot of the work. If the environment can just tell us what to do, there’s no need to go to the effort of building and updating a formal internal model that represents it all. Instead of storing billions of Facts About Sense Data you can have the environment hand them to you in pieces as required. This is the route Chapman discusses in his posts, and the route he took as an AI researcher, together with his collaborator Phil Agre (see e.g. this paper (pdf) on a program, Pengi, that they implemented to play an arcade game with minimal representation of the game surroundings).

In the previous section I tried to weaken the importance of formal representations from the outside, by looking at how mathematical reasoning occurs in practice as a mishmash of cognitive faculties. Situated cognition aims to weaken it from the inside instead by building up models that work anyway, without the need for too much representation.

Still, we’re pushing some way beyond ‘virtue epistemology’, by giving ideas for how this would actually work. In fact, so far there might be no disagreement with Scott at all! Scott is interested in ideas like predictive processing and perceptual control theory, which also appear to look at changes in the sense data in front of you, rather than trying to represent everything as tidy propositions.

However, we also have to think about the top-down side. Scott has the following to say about it:

The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”.

This looks like the bit where the representation sneaks in. We escaped the billions of Facts About Sense Data, but that looks very like billions of Facts About Prior Experience to me. We’d still need to sort through them and work out what’s relevant somehow. I haven’t read the Clark book, and Scott’s review is very vague about how this works:

Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can.

My response is sort of an argument from incredulity, at this point. Imagine expanding out the list of predictions, to cover all the things you know at the same level of specificity as ‘that guy in the blue uniform is probably a policeman’. That is an insane number of things! And you’re expecting people to sort through these on the fly, and compute priors for giant lists of hypotheses, and keep them reasonably consistent, and then run Bayesian updates on them? Surely this can’t be the story!

Arguments from incredulity aren’t the best kinds of arguments, so if you do think there’s a way that this could plausibly work in real time, I’d love to know. [6]
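To be fair to the single update step, it is perfectly tractable at the scale of one variable. Here’s a toy sketch (my own illustration, not anything from Clark’s book or Scott’s review): for a single Gaussian quantity, combining a top-down prediction with bottom-up evidence via Bayes’ theorem reduces to precision-weighted averaging. The incredulity above is about scaling something like this to the whole space of policeman-sized hypotheses, not about this one fusion.

```python
# Toy illustration: one "level" fusing a top-down prediction with
# bottom-up sense data, for a single Gaussian variable. In this special
# case Bayes' theorem has a closed form: the posterior mean is a
# precision-weighted average of prediction and observation.

def fuse(pred_mean, pred_var, obs_mean, obs_var):
    """Combine a top-down prediction with bottom-up evidence."""
    pred_prec = 1.0 / pred_var   # precision = inverse variance
    obs_prec = 1.0 / obs_var
    post_prec = pred_prec + obs_prec
    post_mean = (pred_prec * pred_mean + obs_prec * obs_mean) / post_prec
    return post_mean, 1.0 / post_prec

# Confident prior, noisy observation: posterior stays near the prediction.
mean, var = fuse(pred_mean=10.0, pred_var=0.1, obs_mean=14.0, obs_var=1.0)
```

For one variable this is a few arithmetic operations; the combinatorial worry is what happens when the “prediction” is everything you know about uniforms, policemen, and solid objects at once.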

Coping with the complexity would be much more plausible if we could run the same situational trick as with the bottom-up case, finding some way of avoiding having to represent all this knowledge by working out which parts are important in the current context. But this time it’s far harder to figure out operationally how that would work. There’s no obvious spatial metric on our thoughts such that we can determine ‘which ones are nearby’ or ‘which ones just changed’ as a quick proxy for relevance. And the sheer variety of types of thought is daunting – there are no obvious ordering principles like the three dimensions of the visual field.

It was when we realized we had no idea how to address this that Phil [Agre] and I gave up on AI. If you take a cognitivist approach—i.e. representing knowledge using something like language or logic—the combinatorics are utterly impossible. And we had no good alternative.

So it’s not like I can suggest anything concrete that’s better than the billions of Facts About Prior Experience. That’s definitely a major weakness of my argument here! But maybe it’s enough to see that we didn’t need this explicit formal representation so far, and that it’s going to be a combinatorial disaster zone if we bring it in now. For me, that’s enough clues that I might want to look elsewhere.

### Brief half-baked confused interlude: the mathematical gold standard

Maybe you could go out one stage further, though. OK, so we’re not consciously thinking through a series of formal steps. And maybe the brain isn’t doing the formal steps either. But it is true that the results of correct mathematical thinking are constrained by logic, that to count as correct mathematics, your sloppy intuitive hand-waving eventually has to cash out in a rigorous formal structure. It’s somehow there behind the scenes, like a gold standard backing our messy exchanges.

Mathematics really is unreasonably effective. Chains of logic do work amazingly well in certain domains.

I think this could be part of the MIRI intuition. ‘Our machines aren’t going to do any of this messy post-formal heuristic crap even if our brains are. What if it goes wrong? They’re going to actually work things out by actual formal logic.’ (Or at the very least, they’re going to verify things afterwards, with actual formal logic. Thanks to Kaj Sotala for pointing this distinction out to me a while ago.)

I don’t understand how this is possibly going to work. But I can’t pretend that I really know what’s going on here either. Maths feels like witchcraft to me a lot of the time, and most purported explanations of what it is make no sense to me. Philosophy of mathematics is bad in exactly the same way that moral philosophy is bad, and all the popular options are a mess. [7]

### The cognitive flip

I think I want to get back to slightly more solid ground now. It’s not going to be much more solid though, because I’m still trying to work this out. There’s a bit of Chapman’s ‘Ignorant, irrelevant, and inscrutable’ post that puzzled me at first:

Many recall the transition from rationalism to meta-rationalism as a sudden blinding moment of illumination. It’s like the famous blotchy figure above: once you have seen its meaning, you can never unsee it again. After you get meta-rationality, you see the world differently. You see meanings rationalism cannot—and you can never go back.

This is probably an illusion of memory. The transition occurs only after years of accumulating bits of insight into the relationship between pattern and nebulosity, language and reality, math and science, rationality and thought. At some point you have enough pieces of the puzzle that the overall shape falls into place—but even then there’s a lot of work left to fill in the holes.

Now first of all, I don’t recall any such ‘blinding moment of illumination’ myself. That possibly doesn’t mean much, as it’s not supposed to be compulsory or anything. (Or maybe I just haven’t had it yet, and everything will make sense tomorrow…)

What was more worrying is that I had no clear idea of what the purported switch was supposed to be. I’ve thought about this a bit more now and I think I’m identifying it correctly. I think that the flip is to do with removing the clean split between having a world outside the head and a representation inside it.

In the ‘clean split’ worldview, you have sensory input coming in from the outside world, and your brain’s job is to construct an explicit model making sense of it. Here’s a representative quote summarising this worldview, taken from the Less Wrong wiki article on ‘The map is not the territory’:

Our perception of the world is being generated by our brain and can be considered as a ‘map’ of reality written in neural patterns. Reality exists outside our mind but we can construct models of this ‘territory’ based on what we glimpse through our senses.

This worldview is sophisticated enough to recognise that the model may be wrong in some respects, or it may be missing important details – ‘the map is not the territory’. However, in this view there is some model ‘inside the head’, whose job is to represent the outside world.

In the preceding sections we’ve been hacking away at the plausibility of this model. This breakdown frees us to consider different models of cognition, models that depend on interactions between the brain and the environment. Let’s take an example. One recent idea I liked is Sarah Perry’s proposed theory of mess.

Perry tackles the question of what exactly constitutes a mess. Most messes are made by humans. It’s rare to find something in the natural world that looks like a mess. Why is that?

Maybe this sounds like an obscure question. But it’s exactly the kind of question you might sniff out if you were specifically interested in breaking down the inside-the-head/outside-the-head split. (In fact, maybe this is part of the reason why metarationality tends to look contentless from the outside. Without the switch everything just looks like an esoteric special topic the author’s interested in that day. You don’t care about mess, or user interface design, or theatre improv, or mathematical intuition, or whatever. You came here for important insights on reality and the nature of cognition.)

You’re much better off reading the full version, with lots of clever visual examples, and thinking through the answer yourself. But if you don’t want to do that, her key thesis is:

… in order for mess to appear, there must be in the component parts of the mess an implication of extreme order, the kind of highly regular order generally associated with human intention. Flat uniform surfaces and printed text imply, promise, or encode a particular kind of order. In mess, this promise is not kept. The implied order is subverted.

So mess is out in the world – you need a bunch of the correct sort of objects in your visual field to perceive a mess. But mess is not just out in the world – you also have to impose your own expectation of order on the scene, based on the ordered contexts that the objects are supposed to appear in. Natural scenes don’t look like a mess because no such implied order exists.

Mess confuses the neat categories of ‘in the world’ and ‘in my head’:

It is as if objects and artifacts send out invisible tendrils into space, saying, “the matter around me should be ordered in some particular way.” The stronger the claim, and the more the claims of component pieces conflict, the more there is mess. It is these invisible, tangled tendrils of incompatible orders that we are “seeing” when we see mess. They are cryptosalient: at once invisible and obvious.

In the language of the previous section, we’re getting bottom-up signals from our visual field, which are resolved into a bunch of objects. And then by some ‘magic’ (invisible tendrils? a cascade of Bayesian updates?) the objects are recognised from the top-down side as implying an incompatible pile of different ordering principles. We’re seeing a mess.

Here’s some of my mess:

I can sort of still fit this into the map/territory scheme. Presumably the table itself and the pile of disordered objects are in the territory. And then the map would be… what? Some mental mess-detecting faculty that says ‘my model of those objects is that they should be stacked neatly, looks like they aren’t though’?

There is still some kind of principled distinction here, some way to separate the two. The territory corresponds pretty well to the bottom-up bit, and is characterised by the elements of experience that respond in unpredictable, autonomous ways when we investigate them. There’s no way to know a priori that my mess is going to consist of exercise books, a paper tetrahedron and a kitten notepad. You have to, like, go and look at it.

The map corresponds better to the top-down bit, the ordering principles we are trying to impose. These are brought into play by the specific objects we’re looking at, but have more consistency across environments – there are many other things that we would characterise as mess.

Still, we’ve come a long way from the neat picture of the Less Wrong wiki quote. The world outside the head and the model inside it are getting pretty mixed up. For one thing, describing the remaining ‘things in the head’ as a ‘model’ doesn’t fit too well. We’re not building up a detailed internal representation of the mess. For another, we directly perceive mess as mess. In some sense we’re getting the world ‘all at once’, without the top-down and bottom-up parts helpfully separated.

At this point I feel I’m getting into pretty deep metaphysical waters, and if I go much further in this direction I’ll make a fool of myself. Probably a really serious exploration in this direction involves reading Heidegger or something, but I can’t face that right now so I think I’ll finish up here.

### Through fog to the other side

A couple of months ago I had an idea for a new blog post, got excited about it and wrote down this quick outline. That weekend I started work on it, and slowly discovered that every sentence in the outline was really an IOU for a thousand words of tricky exposition. What the hell had I got myself into? This has been my attempt to do the subject justice, but I’ve left out a lot. [8]

I hope I’ve at least conveyed that there is a lot there, though. I’ve mostly tried to do that through the methods of ‘yelling enthusiastically about things I think are worth investigating’ and ‘indicating via enthusiastic yelling that there might be a pile of other interesting things nearby, just waiting for us to dig them up’. Those are actually the things I’m most keen to convey, more even than the specifics in this post, but to do that I needed there to be specifics.

I care about this because I feel like I’m surrounded by a worrying cultural pessimism. A lot of highly intelligent people seem to be stuck in the mindset of ‘all the low-hanging fruit’s been plucked, everything interesting requires huge resources to investigate, you’re stuck being a cog in an incredibly complicated system you can barely understand, it’s impossible to do anything new and ambitious by yourself.’

I’ve gone through the PhD pimple factory myself, and I understand how this sort of constricting view takes hold. I also think that it is, to use a technical phrase, total bollocks.

My own mindset, which the pimple factory didn’t manage to completely destroy, is very different, and my favourite example to help explain where I’m coming from has always been the theory of evolution by natural selection. The basic idea doesn’t require any very complicated technical setup; you can explain it in words to a bright ten-year-old. It’s also deeply explanatory: nothing in biology makes sense except in the light of it. And yet Darwin’s revolution came a couple of hundred years after the invention of calculus, which requires a lot more in the way of technical prerequisites to understand.

Think of all those great mathematicians — Gauss, Lagrange, Laplace — extending and applying the calculus in incredibly sophisticated ways, and yet completely clueless about basic questions that the bright ten-year-old could answer! That’s the situation I expect we’re still in. Many other deep ways of understanding the world are probably still hidden in fog, but we can clear more of it by learning to read new meaning into the world in the right ways. I don’t see any reason for pessimism yet.

This is where the enthusiastic yelling comes from. Chapman’s Meaningness project attacks the low-hanging-fruit depressive slump both head on by explaining what’s wrong with it, and indirectly by offering up an ambitious, large-scale alternative picture full of ideas worth exploring. We could do with a lot more of this.

It may look odd that I’ve spent most of this post trying to weaken the case for formal systems, and yet I’m finishing off by being really excitable and optimistic about the prospects for new understanding. That’s because we can navigate anyway! We might not think particularly formally when we do mathematics, for example, but nothing about that stops us from actually getting the answer right. A realistic understanding of how we reason our way through messy situations and come to correct conclusions anyway is likely to help us get better at coming up with new ideas, not worse. We can clear some more of that fog and excavate new knowledge on the other side.

### Footnotes

1. I don’t really love the word ‘metarationality’, to be honest. It’s a big improvement on ‘postrationality’, though, which to me has strong connotations of giving up on careful reasoning altogether. That sounds like a terrible idea.

‘Metarationality’ sounds like a big pretentious -ism sort of word, but then a lot of the fault of that comes from the ‘rationality’ bit, which was never the greatest term to start with. I quite like Chapman’s ‘the fluid mode’, but ‘metarationality’ seems to be sticking, so I’ll go with that. (back)

2. There’s also a big social element that I didn’t get at the time. If you’re a beginner handing in homework for your first analysis course, you may need to put a lot of steps in, to convince the marker that you understand why they’re important. If you’re giving a broad overview to researchers in a seminar, you can assume they know all of that. There’s no one canonical standard of proof.

At the highest levels, in fact, the emphasis on rigour is often relaxed somewhat. Terence Tao describes this as:

The “post-rigorous” stage, in which one has grown comfortable with all the rigorous foundations of one’s chosen field, and is now ready to revisit and refine one’s pre-rigorous intuition on the subject, but this time with the intuition solidly buttressed by rigorous theory. (For instance, in this stage one would be able to quickly and accurately perform computations in vector calculus by using analogies with scalar calculus, or informal and semi-rigorous use of infinitesimals, big-O notation, and so forth, and be able to convert all such calculations into a rigorous argument whenever required.) The emphasis is now on applications, intuition, and the “big picture”. This stage usually occupies the late graduate years and beyond.

(Incidentally, this blog post is a good sanitised, non-obnoxious version of the Kegan levels idea.) (back)

3. I recently went to a fascinating introductory talk on divergent series, the subject that produces those weird Youtube videos on how 1 + 2 + 3 + … = -1/12. The whole thing was the most ridiculous tightrope walk over the chasm of total bullshit, always one careful definition away from accidentally proving that 1 = 0, and for once in my life I was appreciating the value of a bit of rigour. (back)
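(For reference, the careful version of the −1/12 result goes through the Riemann zeta function: the series defining it only converges for real part of s greater than 1, and the −1/12 belongs to the analytic continuation, not to the divergent sum itself. Equating the two without the careful definitions is exactly the tightrope-over-bullshit move.)

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re} s > 1),
\qquad \zeta(-1) = -\tfrac{1}{12} \ \text{by analytic continuation.}
```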

4. The list isn’t supposed to be comprehensive, either. I would definitely add aesthetics as an important category… sometimes an equation just looks unpleasant, like a clunky sentence or some badly indented code, and I feel a compulsion to tidy it into the ‘correct’ form. (back)

5. My current job is as a relative noob programmer supporting and extending existing software systems. I’ve been spending a lot of time being dumped in the middle of some large codebase where I have no idea what’s going on, or getting plonked in front of some tool I don’t understand, and flailing around uselessly while some more experienced colleague just knows what to do. There’s now a new guy who’s an even bigger noob on the project than me, and it’s fun to get to watch this from the other side! He’ll be staring blankly at a screen packed with information, fruitlessly trying to work out which bit could possibly be relevant, while I’ll immediately just see that the number two from the end on the right hand side has got bigger, which is bad, or whatever. (back)

7. Has anyone else noticed the following?

| Philosophy of mathematics | Moral philosophy |
| --- | --- |
| Platonism | virtue ethics |
| formalism | deontology |
| logicism | utilitarianism |

I don’t know what this means but I’m pretty sure it’s all just bad. (back)

8. The worst omission is that I’ve only glancingly mentioned the difference between epistemic uncertainty and ontological ambiguity, the subject I started to hint about in this post. This is an extremely important piece of the puzzle. I don’t think I could do a good job of talking about it, though, and David Chapman is currently writing some sort of giant introduction to this anyway, so maybe it made sense to focus elsewhere. (back)

# no better state than this

I’m writing a Long Post, but it’s a slog. In the meantime here are some more trivialities.

1. I realised that the three images in my glaucoma machine post could be condensed down to the following: “Glaucoma, the ox responded / Gaily, to the hand expert with yoke and plough.”

This is really stupid and completely impenetrable without context, and I love it.

2. I’ve been using the fog-clearing metaphor for the process of resolving ambiguity. It’s a good one, and everyone else uses it.

It’s probably not surprising that we reach for a visual metaphor, as sight is so important to us. It’s common to describe improved understanding in terms of seeing further. Galileo named his scientific society the Academy of Lynxes because the lynx was thought to have unparalleled eyesight, though unfortunately that finding seems not to have replicated. (That was the high point of naming scientific institutions, and after that we just got boring stuff like ‘The Royal Society’.)

I’m more attached to smell as a metaphor, though. We do use this one pretty often, talking about having a ‘good nose’ for a problem or ‘sniffing out’ the answer. Or even more commonly when we talk about good or bad taste, given that taste is basically smell.

I’m probably biased because I have atrocious eyesight, and a good sense of smell. I’d rather join an Academy of Trufflehogs. I do think smell fits really well, though, for several reasons:

• It’s unmapped. Visual images map into a neat three-dimensional field; smell is a mess.
• The vocabulary for smells is bad. There’s a lot more we can detect than we know how to articulate.
• It’s deeply integrated into the old brain, strongly plugged into all sorts of odd emotions.
• It’s real nonetheless. You can navigate through this mess anyway! Trufflehogs actually find truffles.

3. An even better metaphor, though, is this beautiful one I saw last week from M. John Harrison on Twitter. ‘You became a detector, but you don’t know what it detects’:

This mental sea change is one of my weird repetitive fascinations that I keep going on about, here and on the old tumblr. Seymour Papert’s ‘falling in love with the gears’, or the ‘positive affective tone’ that started attaching itself to boring geology captions on Wikipedia. The long process of becoming a sensitive antenna, and the longer process of finding out what it’s an antenna for. There is so absolutely NO BETTER STATE THAN THIS.

# Three replies

These are responses to other people’s posts. They’re all a bit short for an individual post but a bit long/tangential/self-absorbed for a reply, so I batched them together here.

1. Easy Mode/Hard Mode inversions

I spend a lot of time being kind of confused and nitpicky about the rationalist community, but there’s one thing they do well that I really really value, which is having a clear understanding of the distinction between doing the thing and doing the things you need to do to look like you’re doing the thing.

Yudkowsky was always clear on this (I’m thinking about the bit on cutting the enemy), and people in the community get it.

Having done a PhD, I appreciate this a lot. In academia a lot of people seem to have spent so long chasing after the things you need to do to look like you’re doing the thing that they’ve forgotten how to do the thing, or even sometimes that there’s a thing there to do. In places, the cargo cults have taken over completely.

Zvi Mowshowitz gives doing the thing and doing the things you need to do to look like you’re doing the thing the less unwieldy names of Hard Mode and Easy Mode (at least, I think that’s the key component of what he’s pointing at).

It got me thinking about cases where Easy Mode and Hard Mode could invert completely. In academia, Easy Mode involves keeping up with the state of the art in a rapidly moving narrow subfield, enough to get out a decent number of papers on a popular topic in highly ranked journals during your two-year postdoc. You need to make sure you’re in a good position to switch to the new trendy subfield if this one appears to run out of steam, though, because you need to make sure you get that next two-year postdoc on the other side of the world, so that …

… wait a minute. Something’s gone wrong here. That sounds really hard!

Hard Mode is pretty ill-defined right now, but I’m not convinced that it necessarily has to be any harder than Easy Mode. I have a really shitty plan and it’s still not obviously worse than the Easy Mode plan.

If there was a risk of a horrible, life-ruining failure in Hard Mode, I’d understand, but there isn’t. The floor, for a STEM PhD student with basic programming skills in a developed economy, is that you get a boring but reasonably paid middle class job and think about what you’re interested in in your spare time. I’m walking along this floor right now and it’s really not bad here. It’s also exactly the same floor you end up on if you fail out of Easy Mode, except you have a few extra years to get acquainted with it.

If there is a genuine inversion here, then probably it’s unstable to perturbations. I’m happy to join in with the kicking.

2. ~The Great Conversation~

Sarah Constantin had the following to say in a recent post:

… John’s motivation for disagreeing with my post was that he didn’t think I should be devaluing the intellectual side of the “rationality community”. My post divided projects into community-building (mostly things like socializing and mutual aid) versus outward-facing (business, research, activism, etc.); John thought I was neglecting the importance of a community of people who support and take an interest in intellectual inquiry.

I agreed with him on that point — intellectual activity is important to me — but doubted that we had any intellectual community worth preserving. I was skeptical that rationalist-led intellectual projects were making much progress, so I thought the reasonable thing to do was to start fresh.

😮

‘Doubted that we had any intellectual community worth preserving’ is strong stuff! Apparently today is Say Nice Things About The Rationalists Day for me, because I really wanted to argue with it a bit.

I may be completely missing the point on what the ‘rationality community’ is supposed to be in this argument. I’m only arguing for the public-facing, internet community here, because that’s all I really know about. I have no idea about the in-person Berkeley one. Even if I have missed the point, though, I think the following makes sense anyway.

Most subcultures and communities of practice have a bunch of questions people get really exercised about and like to debate. I often internally think of this as ~The Great Conversation~, with satiric tumblr punctuation to indicate it’s not actually always all that great.

I’ve only been in this part of the internet for a few years. Before that I lurked on science blogs (which have some overlap). On science blogs ~The Great Conversation~ includes the replication crisis, alternatives to the current academic publishing system, endless identical complaints about the postdoc system (see part 1 of this post), and ranting about pseudoscience and dodgy alternative therapies.

Sometimes ~The Great Conversation~ involves the big names in the field, but most of the time it’s basically whoever turns up. People who enjoy writing, people who enjoy the sound of their own voice, people with weird new ideas they’re excited about, people on a moral quest to fix things, grumpy postdocs with an axe to grind, bored people, depressed people, lonely people, the usual people on the internet.

If you go to the department common room instead, the academics probably aren’t talking about the things on the science blogs. They’re talking about their current research, or the weird gossip from that other research group, or what the university administration has gone and done this time, or how shit the new coffee machine is. ~The Great Conversation~ is mostly happening elsewhere.

This means that the weirdos on the internet have a surprisingly large amount of control over the big structural questions in the field. This often extends to having control over what those questions are in the first place.

The rationalist community seems to be trying to have ~The Great Conversation~ for as much of human intellectual enquiry as it can manage (or at least as much as it takes seriously). People discuss the replication crisis, but they also discuss theories of cognition, and moral philosophy, and polarisation in politics, and the future of work, and whether Bayesian methods explain absolutely everything in the world or just some things.

The results are pretty mixed, but is there any reasonably sized group out there doing noticeably better, out on the public internet where anyone can join the conversation? If there is I’d love to know about it.

This is a pretty influential position, as lots of interesting people with wide-ranging interests are likely to find it and get sucked in, even if they’re mostly there to argue at the start. Scott Aaronson is one good example. He’s been talking about these funny Singularity people for years, but over time he’s got more and more involved in the community itself.

The rationalist community is some sort of a beacon for something, and to me that ought to count for ‘an intellectual community worth preserving’.

3. The new New Criticism

I saw this on nostalgebraist’s tumblr:

More importantly, the author approaches the game like an art critic in perhaps the best possible sense of that phrase (and with M:TG, there are a lot of bad senses). He treats card design as an art form unto itself (which it clearly is!), and talks about it like a poetic form, with various approaches to creativity within constraints, a historical trajectory with several periods, later work exhibiting a self-consciousness about that history (in Time Spiral, and very differently in Magic 2010), etc.

That is, he’s taking a relatively formal, “internal,” New Criticism-like approach, rather than a historicist approach (relate the work to contemporary extra-artistic phenomena) or an esoteric/Freudian/high-Theory-like approach (take a few elements of the work, link them to some complex of big ideas, uncover an iceberg of ostensibly hidden structure). I don’t think the former approach is strictly better than the latter, but it’s always refreshing because so much existing games criticism takes the latter two approaches.

I know absolutely nothing about M:TG beyond what the acronym stands for, but reading this I realised I’m also really craving sources of this sort of criticism. I recently read Steve Yegge’s giant review of the endgame of Borderlands, a first-person shooter that I would personally hate, and whose name I immediately forgot. Despite this I was completely transfixed by the review, temporarily fascinated by tiny details of gun design, enjoying the detailed explorations of exactly what made the mechanics of the game work so well. This is exactly what I’m looking for! I’d rather have it for fiction or music than games, but I’ll take what I can get.

I kind of imprinted on the New Critics as my ideal of what criticism should be, and although I can see the limitations now (snotty obsession with a narrow Western canon, tone-deaf to wider societal influences) I still really enjoy the ‘internal’ style. But it’s much easier now to find situated criticism, that wants to relate a piece of art to, say, Marxism or the current political climate. And even easier to find lists of all the ways that that piece of art is problematic and you’re problematic for liking it.

Cynically I’d say that this is because the internal style is harder to do. Works of art are good or bad for vivid and specific internal reasons that require a lot of sensitivity to pinpoint, whereas they’re generally problematic for the same handful of reasons that everything else is problematic. But probably it’s mostly just that the internal style is out of fashion. I’d really enjoy a new New Criticism without the snotty high culture focus.

# Two cultures: tacit and explicit

[Epistemic status: no citations and mostly pulled straight out of my arse, but I think there’s something real here]

While I was away it looks like there was some kind of Two Cultures spat on rationalist-adjacent tumblr.

I find most STEM-vs-the-humanities fight club stuff sort of depressing, because the arguments from the humanities side seem to me to be too weak. (This doesn’t necessarily apply this time – I haven’t tried to catch up on everyone’s posts.) Either people argue that the humanities teach exactly the same skills in systematic thinking that the sciences do, or else you get the really dire ‘the arts teach you to be a real human being‘ arguments.

I think there’s another distinction that often gets lost. There are two types of understanding I’d like to distinguish, that I’m going to call explicit and tacit understanding in this post. I don’t know if those are the best words, so let me know if you think I should be calling them something different. Both are rigorous and reliable paths to new knowledge, and both are important in both the arts and sciences. I would argue, however, that explicit understanding is generally more important in science, and tacit understanding is more important in the arts.

(I’m interested in this because my own weirdo learning style could be described as something like ‘doing maths and physics, but navigating by tacit understanding’. I’ve been saying for years that ‘I’m trying to do maths like an arts student’, and I’m just starting to understand what I mean by that. Also I feel like it’s been a bad, well, century for tacit understanding, and I want to defend it where I can.)

Anyway, let’s explain what I mean by this. Explicit understanding is the kind you come to by following formal logical rules. Scott Alexander gives an example of ‘people who do computer analyses of Shakespeare texts to see if they contain the word “the” more often than other Shakespeare texts with enough statistical significance to conclude that maybe they were written by different people’. This is explicit understanding as applied to the humanities. It produces interesting results there, just as it does in science. Also, if this was all people did in the humanities they would be horribly impoverished, whereas science might (debatably) just about survive.
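As a toy illustration of that kind of explicit, rule-following analysis (a sketch only: the texts are made-up stand-ins rather than real Shakespeare, and a plain two-proportion z-test stands in for whatever an actual stylometrist would use):

```python
import math
import re

def the_rate(text):
    """Count occurrences of 'the' and the total word count."""
    words = re.findall(r"[a-z']+", text.lower())
    return words.count("the"), len(words)

def z_test(text_a, text_b):
    """Two-proportion z-test: do the texts use 'the' at different rates?"""
    k1, n1 = the_rate(text_a)
    k2, n2 = the_rate(text_b)
    p_pool = (k1 + k2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (k1 / n1 - k2 / n2) / se

# Toy passages standing in for two disputed texts
a = "the quick brown fox jumps over the lazy dog " * 50
b = "a quick brown fox jumps over a lazy dog " * 50
print(abs(z_test(a, b)) > 1.96)  # 'significant' at the 5% level
```

The whole exercise reduces to counting and arithmetic, which is exactly what makes it explicit: every step can be written down and checked, and no ‘nose’ for prose is involved anywhere.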

Tacit understanding is more like the kind you ‘develop a nose for’, or learn to ‘just see’. That’s vague, so here are some examples:

• Taking a piece of anonymised writing and trying to guess the date and author. This is a really rigorous and difficult thing my dad had to do in university (before pomo trashed the curriculum, [insert rant here]). It requires very wide-ranging historical reading, obviously, but also on-the-fly sensitivity to delicate tonal differences. You’re not combing through the passage saying ‘this specific sentence construction indicates that this passage is definitely from the late seventeenth century’. There might be some formal rules like this that you can extract, but it will take ages, and while you’re doing the thing you’re more relying on gestalt feelings of ‘this just looks like Dryden’. You don’t especially need to formalise it, because you can get it right anyway.

• Parody. This is basically the same thing, except this time it’s you generating the writing to fit the author. Scott is excellent at this himself! Freddie DeBoer uses this technique to teach prose style, which sounds like a great way to develop a better ear for it.

• Translation. I can’t say too much about this one, because I’ve never learned a foreign language :(. But you have the problem of matching the meaning of the source, except that every word has complex harmonic overtones of different meanings and associations, and you have to try and do justice to those as well, as best you can. Again, it’s a very skilled task that you can absolutely do a better or worse job at, but not a task that’s achieved purely through rule following.

I wish these kinds of tacit skills were appreciated more. If the only sort of understanding you value is explicit understanding, then the arts are going to look bad by comparison. This is not the fault of the arts!

# Crackpot time 2: cargo culting hard

I’m back from another weird foundations-of-physics workshop in the middle of nowhere, this one even smaller and more casual than the last. Also much more relaxed, schedule-wise, so there was plenty of time to think idly about various rubbish in my head.

Last time I was inspired to write my crackpot plan, so it feels like a good time to revisit it a bit, but mostly this is just a large braindump to get various things out of temporary memory before I lose them.