Some thoughts on the modularity of the mind
Published: June 12, 2009
Tags: evopsych pinker cogsci
I recently started reading Steven Pinker's "How the Mind Works". Pinker is a psychologist whose interests broadly overlap with mine, and he writes a lot of popular science books on these topics. I like him, even though our opinions on language acquisition are quite different: while his books are occasionally fairly biased (a particular problem for his "The Language Instinct"), they maintain a good sense of intellectual rigour whilst still being fun to read. It's encouraging to be reminded from time to time that there are smart people thinking about these things sensibly.
I'm only a little way into the book so far, but it's already been quite rewarding: it has dispelled a misconception I had about evolutionary psychology, a school of thought of which Pinker is a strong advocate. The stance of this school is essentially summarised by this excerpt:
The mind is a system of organs of computation, designed by natural selection to solve the kinds of problems our ancestors faced in their foraging way of life...The mind is organized into modules or mental organs, each with a specialized design that makes it an expert in one arena of interaction with the world. The modules' basic logic is specified by our genetic program. Their operation was shaped by natural selection to solve the problems of the hunting and gathering life led by our ancestors in most of our evolutionary history.
I have never taken issue with the essential claims of this school of thought. I have embraced the computational theory of mind for as long as I can remember, and I have no doubt that the structure of the brain - and hence the mind - has been shaped by evolutionary pressures that acted in the distant past. It's the "module" thing that has always kind of bugged me. The reason for this is that I have always interpreted the modular view of the mind espoused by evolutionary psychology as implying a mind made up of separate and autonomous parts bolted together. This is apparently not uncommon, as Pinker goes on to say:
The word "module" brings to mind detachable, snap-in components, and that is misleading...mental modules need not be tightly sealed off from one another, communicating only through a few narrow pipelines.
It's a relief to hear that evolutionary psychology is not committed to this position. Of course, it's one thing not to claim that the modules of the mind are necessarily distinct, and another to make a positive claim about the extent to which they actually are. This question really interests me. Do the modules of the mind look like this:
i.e. a collection of highly domain-specific modules with minimal overlap, most of the work being done by specialised faculties with little sharing of data or tools between modules? Or do they look like this:
i.e. a collection of highly overlapping modules with minimal domain-specific components, most of the work being done by a large, shared toolbox of general purpose algorithms?
As an aside, these charts were produced using Google's free Charts API, a pretty nifty tool.
It's easy to frame this question in terms of object oriented programming, too. If each module of the mind is a class, and each class is a subclass of a common BaseMindModule class, then is the interface of BaseMindModule just a few simple attributes and methods dealing with common stuff like I/O, with each subclass adding a lot of domain-specific behaviour? Or is BaseMindModule a large class with a rich API of general purpose methods, with each subclass being a thin wrapper around this API?
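To make the analogy concrete, here's a rough sketch of the two designs in Python. Every class and method name here (FaceRecognition, IntuitivePhysics, and so on) is invented purely for illustration - this is just the OOP metaphor spelled out, not a model of anything real.

```python
# Illustrative sketch only: invented module names, toy computations.

# Design 1: a thin base class -- the shared interface is little more than I/O.
class BaseMindModuleThin:
    def receive(self, signal):
        return signal                      # common input plumbing

    def emit(self, output):
        return output                      # common output plumbing


class FaceRecognition(BaseMindModuleThin):
    """Nearly all of the machinery lives in the domain-specific subclass."""

    def process(self, signal):
        # Stand-in for heavy, face-specific processing.
        features = [x * 2 for x in self.receive(signal)]
        return self.emit(max(features))


# Design 2: a fat base class -- a rich kit of general-purpose methods,
# with each subclass a thin wrapper that points them at one domain.
class BaseMindModuleFat:
    def normalise(self, xs):
        total = sum(xs)
        return [x / total for x in xs] if total else list(xs)

    def weighted_sum(self, xs, weights):
        return sum(x * w for x, w in zip(xs, weights))


class IntuitivePhysics(BaseMindModuleFat):
    """Thin wrapper: just wires the shared toolbox to one domain's data."""

    def predict(self, observations):
        return self.weighted_sum(observations, self.normalise(observations))


if __name__ == "__main__":
    print(FaceRecognition().process([1, 2, 3]))      # 6
    print(IntuitivePhysics().predict([1.0, 3.0]))    # 2.5
```

In the first design the base class is boring and the subclasses do all the interesting work; in the second, the subclasses are almost trivial and the shared toolbox does the heavy lifting.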
There seem to be two questions to consider here: firstly, just how large can the centre of the Venn diagram be, i.e. how much of human cognition can, in principle, be explained by general purpose tools; and secondly, even if a large common toolbox is possible in principle, is evolution likely to favour it over a disparate pile of specialist tools?
On the first question, I'm actually fairly confident that domain general tools can get a tremendous amount of work done. A background in mathematics makes this seem almost obvious. Mathematics is full of "tools" which are defined at such an abstract level that they can be applied to just about anything, while still being sufficiently meaningful that they are practical. The student who has learned basic differential and integral calculus, for example, can construct simple models of phenomena from domains as diverse as biology, chemistry, economics, epidemiology, physics, sociology and more. Markov chains are a tool with a rather different flavour that still manages to be very broadly applicable - they can play a role in models of things from all of the above fields as well. And, of course, statistics is taught to a wide range of students as little more than a toolbox of techniques and tests that work on any kind of data whatsoever - linear regression, hypothesis tests and confidence intervals are probably the closest thing that exists in academia to a universally common component of education across departments and disciplines, from engineering to psychology. If the mind contained circuits that processed data by approximating the logic of coupled linear ordinary differential equations, or of building hidden Markov models, then couldn't those circuits be the workhorses of a whole host of mental modules covering a wide range of problems? I can't think of a compelling reason why not.
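To make the "shared workhorse" idea a bit more concrete, here's a toy sketch: one generic Markov chain sampler driving two transition tables from completely unrelated domains. The states and probabilities are made up purely for illustration.

```python
import random

def sample_chain(transitions, start, steps, rng=random):
    """Walk a Markov chain defined by {state: {next_state: probability}}."""
    state, path = start, [start]
    for _ in range(steps):
        candidates = list(transitions[state])
        weights = [transitions[state][s] for s in candidates]
        state = rng.choices(candidates, weights=weights, k=1)[0]
        path.append(state)
    return path

# Domain 1: a toy weather model.
weather = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

# Domain 2: a toy model of foraging behaviour.
foraging = {
    "search": {"search": 0.5, "eat": 0.5},
    "eat":    {"search": 0.7, "eat": 0.3},
}

# The same general-purpose machinery handles both domains unchanged.
print(sample_chain(weather, "sunny", 5))
print(sample_chain(foraging, "search", 5))
```

The sampler knows nothing about weather or foraging; all the domain knowledge lives in the small transition tables, which is exactly the "thin wrapper around a general tool" picture.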
The second question feels less straightforward. The large, common toolbox approach offers a certain economy of design, which on the one hand should be preferable to reinventing essentially identical wheels again and again for each problem that is encountered. But at the same time, general tools feel like harder things to come up with. Inventing m general tools instead of n specific ones is of course less work overall if m is significantly less than n, but as so many people fail to grasp, evolution is an emphatically blind watchmaker which cannot look ahead like this and is thus extremely susceptible to getting stuck in local optima. Sticking with our counts of m and n, if the probability of a sequence of mutations leading to a domain general tool is p and the probability of one leading to a domain specific tool is q, the question becomes one of which is greater: p^m or q^n? As the ratio of n to m tends to infinity, chance favours a general toolbox, but as the ratio of q to p tends to infinity, chance favours a collection of specialist tools. We can't answer the question without sensible estimates for these ratios, but how could we even begin to make such estimates? It's not a trivial task.
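To pin down what those ratios would have to look like, here is the comparison made explicit - strictly a back-of-the-envelope sketch, under the surely-too-simple assumption that each tool evolves independently with probability p (general) or q (specific):

```latex
% Back-of-the-envelope only: assumes each tool evolves independently.
\[
  \underbrace{p^{m}}_{\text{general toolbox}} \;>\; \underbrace{q^{n}}_{\text{specialist tools}}
  \quad\iff\quad m \ln p > n \ln q
  \quad\iff\quad \frac{n}{m} > \frac{\ln p}{\ln q}
\]
% The inequality flips when dividing by ln q < 0. Since general tools are
% assumed harder to stumble upon, p < q < 1, so the threshold ln p / ln q is
% greater than one: the general toolbox is favoured only when each general
% tool replaces enough specialist tools to offset its lower odds of evolving.
```

In other words, all the hand-waving about ratios comes down to whether n/m clears the threshold ln p / ln q, and we have no obvious way of estimating either side.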