
Moderately massive modularity

 

Peter Carruthers

 

 

This paper will sketch a model of the human mind according to which the mind’s structure is massively, but by no means wholly, modular. Modularity views in general will be motivated, elucidated, and defended, before the thesis of moderately massive modularity is explained and elaborated.

 

1          Modular models of mind

Many cognitive scientists and some philosophers now accept that the human mind is modular, in some sense and to some degree. There is much disagreement, however, concerning what a mental module actually is, and concerning the extent of the mind’s modularity. Let us consider the latter controversy first.

 

1.1       How much modularity?

Here the existing literature contains a spectrum of opposed modularist positions. At one extreme is a sort of minimal peripheral-systems modularity, proposed and defended by Fodor (1983, 2000). This holds that there are a variety of modular input and output systems for cognition, including vision, audition, face-recognition, language-processing, and various motor-control systems. But on this view central cognition - the arena in which concepts are deployed, beliefs are formed, inferences drawn, and decisions made - is decidedly non-modular. Then at the other extreme is the hypothesis of massive modularity proposed and defended by evolutionary psychologists (Cosmides and Tooby, 1992, 1994; Tooby and Cosmides, 1992; Sperber, 1994, 1996; Pinker, 1997). This holds that the mind consists almost entirely of modular systems. On this view there is probably no such thing as ‘general learning’ at all, and all - or almost all - of the processes which generate beliefs, desires and decisions are modular in nature. Then in between these two poles there are a variety of positions which allow the existence of central or conceptual modules as well as peripheral input and output modules, but which claim that the mind also contains some significant non-modular systems or processes (Carey, 1985; Carey and Spelke, 1994; Spelke, 1994; Smith and Tsimpli, 1996; Carruthers, 1998, forthcoming a, b; Hauser and Carey, 1998; Hermer-Vazquez et al., 1999; Cosmides and Tooby, 2001).

The position to be sketched and defended here is a variety of this latter sort of ‘moderate modularity’. But although moderate, it is pitched towards the massively modular end of the spectrum. For the main non-modular central processing arena postulated on this view is itself held to be constructed from the resources of a number of different modules, both central and peripheral. More specifically, it will be held that it is the natural-language module which serves to integrate the outputs of the various central-conceptual modules, and which subserves conscious belief-formation and decision-making. On this conception, then, the degree of modularity exhibited by the human mind is not massive, but moderately massive.

 

1.2       What is modularity?

As for the different conceptions of what modularity is, here there are two cross-cutting spectra of opinion, to be introduced shortly. According to Fodor’s classic (1983) account, however - which was designed to apply only to modular input and output systems, it should be noted - modules are domain-specific innately-specified processing systems, with their own proprietary transducers, and delivering ‘shallow’ (non-conceptual) outputs (e.g., in the case of the visual system, delivering a 2½-D sketch; Marr, 1982). Modules are held to be mandatory in their operation, swift in their processing, isolated from and inaccessible to the rest of cognition, associated with particular neural structures, liable to specific and characteristic patterns of breakdown, and to develop according to a paced and distinctively-arranged sequence of growth.

Those who now believe in central-conceptual modules are, of course, required to modify this account somewhat. They can no longer believe that all modules have proprietary transducers, that they deliver shallow outputs, or that they are wholly isolated from the rest of cognition. This is because central modules are supposed to be capable of taking conceptual inputs (that is, to be capable of operating upon beliefs), as well as to generate conceptual outputs (issuing in beliefs or desires, for example). But it can still be maintained that modules are innately-channeled processing systems, which are mandatory, fast, liable to specific patterns of breakdown, and follow distinctive patterns of growth in development. It can also still be held that central modules operate using their own distinctive processing algorithms, which are inaccessible to, and perhaps largely unalterable by, the remainder of cognition.

Can anything remain of the encapsulation / isolation of modules, however, once we switch our attention to central-conceptual modules? For this is the feature which many regard as the very essence, or core, of any worthwhile notion of modularity (Currie and Sterelny, 1999; Fodor, 2000). One option here is to claim that central modules can be domain-specific (or partially encapsulated), having access only to those beliefs which involve their own proprietary concepts. This option can seem well-motivated since (as we shall see in section 2 below) one of the main arguments for central modularity derives from the domain-specific character of much of human child-development.

Encapsulation of modules was never really about limitations on module input, however (that was rather supposed to be handled by them having proprietary transducers, in Fodor’s 1983 account). Rather, encapsulation relates to the processing data-base of the modular system in question (Sperber, forthcoming). A radically un-encapsulated system would be one which could access any or all of the agent’s beliefs in the course of processing its input. In contrast, a fully encapsulated module would be one which had no processing data-base at all – it would be one that was unable to draw on any of the agent’s beliefs in performing its computations. Thus understood, encapsulation is fully consistent with central-process modularity. But this notion, too, admits of degrees. Just as a system can be more or less domain-specific in its input-conditions, so it might be capable of drawing on a more or less limited content-specific data-base in processing that input.
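This distinction between input-conditions and processing data-base can be made concrete in a toy sketch (Python, purely illustrative; the class, store, and topic labels are my own inventions, not anything proposed in the literature). The module below accepts conceptual content of any kind as input, but its processing consults only its own proprietary store:

```python
# Toy sketch (illustrative names only): unrestricted input, encapsulated
# processing. The module may receive any conceptual content, but while
# processing it consults nothing beyond its own proprietary data-base.

class EncapsulatedModule:
    def __init__(self, name, proprietary_db):
        self.name = name
        self.db = dict(proprietary_db)  # the ONLY store consulted in processing

    def process(self, content, topics):
        # No restriction is imposed on 'content': input is domain-general.
        # But the lookup below ranges over self.db alone, never over the
        # agent's full belief store -- that restriction is the encapsulation.
        principles = [self.db[t] for t in topics if t in self.db]
        return {"input": content, "principles applied": principles}

# A hypothetical mind-reading module with a small domain-specific store:
mindreading = EncapsulatedModule("mind-reading", {
    "lying": "an assertion that P is a lie if the speaker believes not-P",
    "memory": "people know whether they performed a salient recent action",
})

print(mindreading.process("he says he withdrew the money", ["lying", "memory"]))
```

On this rendering, degrees of domain-specificity correspond to whatever restrictions are placed on the input, while degrees of encapsulation correspond to the size and content-specificity of the proprietary data-base.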

So much for points of agreement, modulo the disagreements on which kinds of module exist. One spectrum of disagreement about the nature of modules concerns the extent of their innateness. At one extreme are those who think that modules are mostly fixed at birth or shortly thereafter, needing only the right sorts of support (e.g. increases in short-term memory capacity) in order to manifest themselves in thought and behavior (Fodor, 1983, 1992; Leslie, 1994; Spelke, 1994). At the other extreme are those who think that the extent of innate specification of modules may be fairly minimal - perhaps amounting only to an initial attention-bias - but that human cognition becomes modularized over the course of normal experience and learning (Karmiloff-Smith, 1992). (On this sort of view, there may be no real difference between a child’s knowledge of language and a grand-master’s knowledge of chess in respect of modular status.) And then in between these poles, there are those who think that conceptual modules may be more or less strongly innately channeled, following a distinctive developmental sequence which is more or less reliably buffered against environmental variation and perturbation (Baron-Cohen, 1995, 1999; Botterill and Carruthers, 1999).

            The other spectrum of disagreement about the nature of modules concerns the extent to which modules should be thought of as information processors, on the one hand, or as organized bodies of information, on the other. Classically, most modularists have adopted the former option (Fodor, 1983; Cosmides and Tooby, 1992; Sperber, 1996; Pinker, 1997) – although Chomsky, for example, often writes as if the language system were a body of beliefs about language, rather than a set of processing algorithms (1988).[1] But more recently Samuels (1998, 2000) has developed what he calls the ‘library model’ of the mind, according to which modules consist of largely-innate stored bodies of domain-specific information, whereas the processing of this information is done using general-purpose algorithms. Intermediate between these two poles, one can think of a module as consisting both of a set of domain-specific processing algorithms and of a domain-specific body of information, some of which may be learned and some of which may be innate (Fodor, 2000).

I shall not enter into these disputes concerning the nature of modules in any detail here. But since I think that the case for innate channeling of modules is powerful (Botterill and Carruthers, 1999), I shall here adopt a conception of modules which is closer to the nativist end of the spectrum. I shall also assume that modules will contain distinctive processors, at least, even if they also contain domain-specific bodies of information. This is because one of the main arguments supporting central-conceptual modularity seems to require such a conception. This is the broadly-methodological argument from the possibility of computational psychology, which goes roughly as follows. The mind is realized in processes which are computational, operating by means of algorithms defined over sentences or sentence-like structures. But computational processes need to be local - in the sense of having restricted access to background knowledge in executing their algorithms - if they are to be tractable, avoiding a ‘computational explosion’. And the only known way of realizing this is to make such processes modular in nature. So if computational psychology is to be possible at all, we should expect the mind to contain a range of modular processing systems.[2]
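A minimal arithmetic sketch brings out why locality matters here (the numbers are illustrative only). A radically un-encapsulated process which might consult any subset of the agent’s beliefs faces exponentially many candidate evidence-sets, whereas a module consulting a fixed local data-base does work bounded by the size of that data-base:

```python
# Illustrative arithmetic only: why local processing keeps computation
# tractable while unrestricted access to belief does not.

def candidate_evidence_sets(n_beliefs: int) -> int:
    # A radically un-encapsulated process may in principle consult any
    # subset of the agent's beliefs: 2**N candidate evidence-sets.
    return 2 ** n_beliefs

print(candidate_evidence_sets(20))                # 1048576 already
print(len(str(candidate_evidence_sets(10_000))))  # a 3011-digit number

def modular_lookup(local_db: list, predicate) -> list:
    # A module's work is bounded by the size of its local data-base,
    # however large the agent's total belief store grows.
    return [item for item in local_db if predicate(item)]

print(modular_lookup(["banks close on holidays", "money needs a bank"],
                     lambda s: "bank" in s))
```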

 

2          The case for massive modularity

There is a raft of different arguments, of varying strengths, supporting some form of (or some aspects of) a massively modular conception of the mind. I shall sketch just some of these here, without providing much in the way of elaboration or justification. This is by way of motivating the project of this paper, which is to defend a thesis of moderately massive modularity against the critics of massive modularity.

            One argument for massive modularity has already been mentioned, based on the assumption that the mind is computationally realized. For if the mind is so realized, then it is very hard to see how its basic structure could be other than modular. Of course, this argument is only as strong as the assumption on which it is based. But computationalism still seems to a great many cognitive scientists to be the only game in town, despite the recent rise in popularity of distributed-connectionist approaches.[3]

            A second set of arguments is broadly developmental. Young children display precocious abilities in a number of conceptual domains, especially naïve physics and naïve psychology (and also, on some views, naïve biology – Atran, 2002). They begin to display competence in these domains very early in infancy, and the development of their understanding of them is extremely fast. This provides the basis for a ‘poverty of stimulus’ argument paralleling the argument Chomsky has famously deployed in support of the innateness of linguistic knowledge. (For a comprehensive review and assessment of this argument, see Laurence and Margolis, 2001.) The problem is to explain how children manage to know so much, so fast, and on such a meager evidential basis. In response to this problem, many in the cognitive sciences have concluded that this cannot be explained without postulating a rich endowment of innate domain-specific information and/or of innate processing algorithms (Carruthers, 1992; Leslie, 1994; Spelke, 1994; Baron-Cohen, 1995).

Related arguments derive from psychopathology, either developmental, or resulting from disease and/or brain-injury in adults. As is now well known, autism is a developmental condition in which children show selective impairment in the domain of naïve psychology – they have difficulty in understanding and attributing mental states to both themselves and other people, but can be of otherwise normal intelligence (Baron-Cohen, 1995). Conversely, children with Williams’ syndrome are socially and linguistically precocious, but have severe difficulties in the domain of practical problem-solving (implicating both spatial reasoning and naïve physics; Karmiloff-Smith et al., 1995; Mervis et al., 1999). Moreover, some stroke-victims have been found to display a selective impairment in their deployment of concepts for living kinds, suggesting that some sort of localized naïve biology system has been damaged (Warrington and Shallice, 1984; Sartori and Job, 1988; Job and Surian, 1998; Atran, 2002).

            A different set of arguments for massive modularity has been put forward by proponents of evolutionary psychology. They maintain that biological systems in general evolve by ‘bolting on’ specialized components to already-existing systems in response to specific evolutionary pressures; and they argue that for this and a variety of other reasons, the anti-modular ‘general-purpose computer’ model of the mind cannot be correct, because no such computer could evolve. They have then gone on to postulate a number of different modular components which our minds should contain, reasoning from the evolutionary pressures which we can be confident our ancestors faced – including (in addition to those already mentioned above) a module for reasoning about social contracts, especially detecting cheaters and free-riders; and a module for assessing and reasoning about risks and dangers. These postulates have then been tested experimentally, with a good deal of success – showing that people perform much better on reasoning-tasks within these domains than they perform on structurally-isomorphic tasks outside of them, for example (Cosmides and Tooby, 1992; Fiddick et al., 2000).

            A word of caution about the way in which evolutionary psychology is sometimes presented, however: evolutionary psychologists often write as if the mind should contain a suite of elegantly engineered processing machines (modules). But there is no reason to think that this should be so. On the contrary, evolution of any new mechanism has to start from what is antecedently available, and often a new adaptation will arise by co-opting, linking together, and utilizing systems which already exist for other functions. (This process is often called ‘exaptation’.) On this model, we might expect a module to be ‘kludgy’ rather than elegant in its internal organization. Thus, for example, Nichols and Stich (forthcoming) argue that the mind-reading system is a highly complex combination of specially-selected components, adaptations of independently-existing domain-specific processes, and some general-purpose (non-domain-specific) functions. Given the loose way in which I understand the notion of a module, the heart of this system can still count as modular in nature; but it is decidedly not an ‘elegantly engineered machine’.

 

3          Some misunderstandings of evolutionary psychology

The recent movement known as evolutionary psychology is the successor to a more long-standing program of work in sociobiology, which came to prominence in the 1970s and 80s. Many philosophers have been critical of sociobiology for its seeming commitment to genetic determinism, flying in the face of the known flexibility and malleability of human behavior, and of the immense cultural diversity shown in human behaviors and social practices (Kitcher, 1985; O’Hear, 1997; Dupré, 2001). Sociobiologists had a reply to some of these criticisms, in fact, since it is not specific types of behavior but only behavioral tendencies which have been selected for, on their account. And which behavioral tendencies are actually expressed on any given occasion can be a highly context-sensitive – and hence flexible – matter.

            The charge of genetic determinism is even less applicable to evolutionary psychology. For what such a psychology postulates, in effect, is a set of innate belief and desire generating mechanisms. How those beliefs and desires then issue in behavior (if at all) is a matter of the agent’s practical reasoning, or practical judgment, in the circumstances. And this can be just as flexible, context-sensitive, and unpredictable-in-detail as you like. Thus, there may be a ‘mind-reading’ module charged with generating beliefs about other people’s mental states. These beliefs are then available to subserve an indefinitely wide variety of plans and projects, depending on the agent’s goals. And there may be a ‘social status’ module charged with generating desires for things which are likely to enhance the status of oneself and one’s kin in the particular cultural and social circumstances in which agents find themselves. How these desires issue in action will depend on those agents’ beliefs; whether these desires issue in action will depend upon those agents’ other desires, together with their beliefs. There is no question of genetic determinism here.

Philosophers are also apt to have an overly-narrow conception of the operations of natural selection, as being wholly or primarily survival based. Many of us have a picture of natural selection as ‘red in tooth and claw’, ‘survival of the fittest’, and so on. And of course that is part – an important part – of the story. But survival is no good to you, in evolutionary terms, if you don’t generate any offspring, or if your offspring don’t live long enough to mate, or for other reasons aren’t successful in generating offspring in turn. In the human species, just as in other species, we should expect that sexual selection will have been an important force in our evolution, shaping our natures to some significant degree (Miller, 2000). In the animal world generally, sexual selection has always been recognized by evolutionary biologists as an important factor in evolution, and perhaps as the main engine driving speciation events. There is no reason to think that the human animal should have been any different.

In fact Miller (2000) makes out a powerful case that many of the behaviors and behavioral tendencies which we think of as distinctively human – story-telling, jokes, music, dancing, sporting competition, and so on – are products of sexual selection, functioning as sexual displays of one sort or another (like the peacock’s tail). And as Miller (1998, 2000) also argues, when you have a species, such as our own, whose members are accomplished mind-readers, then sexual selection has the power to reach deep into the human mind, helping to shape its structure and functioning. Emotional dispositions such as kindness, generosity, and sympathy, for example, may be direct products of sexual selection. Consistently with this, it appears to be the case that members of both sexes, in all cultures, rate kindness very highly amongst the desired characteristics of a potential mate (Buss, 1989).

            In addition to what one might call ‘survival-selection’ and sexual selection, there is also group selection, for whose significance in human evolution a compelling case can be made out (Sober and Wilson, 1999). As selection began to operate on the group, rather than just on individuals and their kin, one might expect to see the appearance of a number of adaptations designed to enhance group cohesion and collective action. In particular, one might expect to see the emergence of an evolved mechanism for identifying, memorizing and reasoning about social norms, together with a powerful motivation to comply with such norms. And with norms and norm-based motivation added to the human phenotype, the stage would be set for much that is distinctive of human cultures.

            To see how some of this might pan out, consider an example which might seem especially problematic from the perspective of a survival-based evolved psychology. Consider Socrates committing suicide when convicted of treason (O’Hear, 1997), or a kamikaze pilot plunging his aircraft into the deck of a battle-cruiser. How can such behaviors be adaptive? Well no one actually claims that they are, of course; rather, they are the product of psychological mechanisms which are normally adaptive, operating in quite specific local conditions. An evolved mechanism charged with identifying and internalizing important social norms may be generally adaptive; as might be one whose purpose is to generate desires for things which will enhance social status. If we put these mechanisms into a social context in which there is a norm requiring sacrifices for the community good, for example, and in which the greatest status is accorded to those who make the greatest sacrifices, then it is easy to see how a suicide can sometimes be the result.

 

4          Philosophers against central-conceptual modules

The idea of central-process modularity hasn’t, as yet, enjoyed wide popularity amongst philosophers. In part this may reflect some of the misunderstandings documented in section 3 above, and in part it may result from an intuitive resistance to beliefs in innate cognitive structures or innate information, within an intellectual culture still dominated by blank-slate empiricism in the humanities and social sciences. But a number of philosophers have presented arguments against central-process modularity, generally grounded in the claim that central cognition is in one way or another holistic in nature. I shall consider two sets of arguments here, due to Fodor (1983, 2000) and Currie and Sterelny (1999).

 

4.1       Fodor

Fodor (1983, 2000) famously argues that all and only input (and output) processes will be modular in nature (although language-processing is construed as both an input and an output process for these purposes). And only such processes will be amenable to explanation from the perspective of computational psychology. This is because such processes are local, in the sense that they only need to consider a restricted range of inputs, and can only be influenced in a limited way (if at all) by background knowledge. In contrast, central processes are said to be holistic, or non-local. What you believe on one issue can depend upon what you think about some seemingly-disparate subject. (As Fodor remarked in his 1983 book, ‘in principle, our botany constrains our astronomy, if only we could think of ways to make them connect.’) Indeed, at the limit, what you believe on one issue is said to depend upon everything else that you believe. And no one has the least idea how this kind of holistic process could be modeled computationally.[4]

The holistic character of central cognition therefore forms the major premise in a two-pronged argument. On the one hand it is used to support the pessimistic view that computational psychology is unlikely to make progress in understanding central cognitive processes in the foreseeable future. And on the other hand it is used in support of the claim that central cognition is a-modular in nature.

For the most part we are just invited to believe in the holism of the mental on the strength of Fodor’s say-so, however. The closest we get to an argument for it is a set of examples from the history of science, of cases where the acceptance or rejection of a theory has turned crucially on evidence or beliefs from apparently disparate domains. But science is, in fact, a bad model for the cognitive processes of ordinary thinkers, for a number of different reasons (I shall mention two).

One point is that science is, and always has been, a social enterprise, requiring substantial external support. (Fodor actually mentions this objection in his recent book – which must have been put to him by one of his pre-publication readers – but responds in a way which completely misses the point.)[5] Scientists do not and never have worked alone, but constantly engage in discussion, co-operation and mutual criticism with peers. If there is one thing which we have learned over the last thirty years of historically-oriented studies of science, it is that the positivist–empiricist image of the lone investigator, gathering all data and constructing and testing hypotheses by him- or her-self, is a highly misleading abstraction.

Scientists such as Galileo and Newton engaged in extensive correspondence and discussion with other investigators over the years when they were developing their theories; and scientists in the 20th century, of course, have generally worked as members of research teams. Moreover, scientists cannot operate without the external prop of the written word (including written records of data, annotated diagrams and graphs, written calculations, written accounts of reasoning, and so on). In contrast, ordinary thinking takes place within the head of an individual thinker, with little external support, and within the relatively short time-frames characteristic of individual cognition.

These facts about science and scientific activity can explain how seemingly disparate ideas can be brought to bear on one another in the course of scientific enquiry, without us having to suppose that something similar can take place routinely within the cognition of a single individual. What many different thinkers working collectively over the course of a lifetime can achieve – conducting painstaking searches of different aspects of the data and bringing to bear their different theories, heuristics and reasoning strategies – is a poor model for what individuals can achieve on their own in the space of a few seconds or minutes. There is certainly nothing here to suggest that ordinary belief-formation routinely requires some sort of survey of the totality of the subject’s beliefs (which really would be computationally intractable, and certainly couldn’t be modular).

Another reason why scientific cognition is not a good model for cognition in general is that much scientific reasoning is both conscious and verbal in character, being supported by natural language representations (whether internal or external). And it may well be possible to explain how linguistically formulated thought can be partially holistic in nature without having to suppose that central cognitive processes in general are holistic. Indeed, a moderately massively modular account of natural-language-mediated cognition may be able to explain the partly-holistic character of such conscious thinking in modular terms, as we shall see in section 5 below.

In short, then, the holism of science fails to establish the holism of central cognition in general. So we need to look elsewhere if we are to find arguments against central-process modularity.

 

4.2       Currie and Sterelny

Fodor’s argument from the holism of the mental seems to require that information from many different domains is routinely and ubiquitously brought together in cognition. Yet, as we have seen, he fails to provide evidence for his major premise. For Currie and Sterelny (1999), in contrast, the mere fact that information from disparate domains can be brought together in the solution to a problem shows that central cognitive processes aren’t modular in nature. The focus of their argument is actually the alleged modularity of mind-reading or ‘theory of mind’; but it is plain that their conclusion is intended to generalize – if mind-reading can be shown to be a-modular, then it is very doubtful whether any central modules exist. For the purposes of this paper I shall accept this implication, treating mind-reading as a crucial test-case.

            Currie and Sterelny’s argument is premised on the fact that information of any kind can be relevant in the interpretation of someone’s behavior – such as underpinning a judgment that the person is lying. They write (1999):

If Max’s confederate says he drew money from their London bank today there are all sorts of reasons Max might suspect him: because it is a public holiday there; because there was a total blackout of the city; because the confederate was spotted in New York at lunch time. Just where are the bits of information to which we are systematically blind in making our social judgments? The whole genre of the detective story depends on our interest and skill in bringing improbable bits of far-away information to bear on the question of someone’s credibility. To suggest that we don’t have that skill defies belief.

This is an unfortunate example to have taken, since the skill in question is arguably not (or not largely) a mind-reading one. And insofar as it is a mind-reading skill, it is not one which requires the mind-reading system to be radically holistic and un-encapsulated. Let me elaborate.

Roughly speaking, to lie is to assert that P while believing that not-P. So evidence of lying is evidence that the person is speaking contrary to their belief – in the case of Max’s confederate it is evidence that, although he says that he drew money from the London account today, he actually believes that he didn’t. Now the folk-psychological principle, ‘If someone didn’t do something, then they believe that they didn’t do that thing’ is surely pretty robust, at least for actions which are salient and recent (like traveling to, and drawing money from, a bank on the day in question). So almost all of the onus in demonstrating that the confederate is lying will fall onto showing that he didn’t in fact do what he said he did. And this isn’t anything to do with mind-reading per se. Evidence that he was somewhere else at the time, or evidence that physical constraints of one sort or another would have prevented the action (e.g. the bank was closed), will (in the circumstances) provide sufficient evidence of duplicity. Granted, many different kinds of information can be relevant to the question of what actually happened, and what the confederate actually did or didn’t do. But this doesn’t in itself count against the modularity of mind-reading.

            All that the example really shows is that the mind-reading faculty may need to work in conjunction with other elements of cognition in providing us with a solution to a problem.[6] In fact most of the burden in detective-work is placed on physical enquiries of one sort or another – investigating foot-prints, finger-prints, closed banks, whether the suspect was somewhere else at the time, and so forth – rather than on mind-reading. The contribution of the latter to the example in question is limited to (a) assisting in the interpretation of the original utterance (Does the confederate mean what he says? Is he joking or teasing?); (b) providing us with the concept of a lie, and perhaps a disposition to be on the lookout for lies; and (c) providing us with the principle that people generally know whether they have or haven’t performed a salient action in the recent past.

Currie and Sterelny may concede that it isn’t supposed to be the work of the mind-reading faculty to figure out whether or not someone actually did something. But they would still think that examples of this sort show that mind-reading isn’t an encapsulated process, and so cannot be modular. Following Fodor (1983), they take the essence of modularity to be given by encapsulation – a modular process is one which can draw on only a limited range of information, and which cannot be influenced by our other beliefs. But recall the distinction we drew in section 1.2 above (following Sperber, forthcoming) between limitations on the input of a system, on the one hand, and limitations on its processing data-base, on the other. Encapsulation is only really about the latter. So, fully consistently with the encapsulation of the mind-reading module, it may be that such a module can take any content as input, but that it cannot, in the course of processing that input, draw on anything other than the contents of its own proprietary domain-specific memory store.

            So here is what might happen in a case like that described above. The confederate’s utterance is perceived and processed to generate a semantic interpretation. Max then wonders whether the utterance is a lie. The mind-reading system uses the input provided by the confederate’s utterance together with its grasp of the nature of lying to generate a specific principle, ‘His statement that he withdrew money from our London account today is a lie, if he believes that he didn’t withdraw the money’. The mind-reading system then deploys one of its many generalizations, ‘If someone didn’t do something salient recently, then he believes that he didn’t do that thing’ to generate a more-specific principle, namely, ‘If he didn’t withdraw the money from our London account today, then he believes that he didn’t withdraw the money today’. In making these inferences few if any background beliefs need to be accessed. The mind-reading system then generates a question for other central-cognitive systems (whether modular or a-modular) to answer, namely, ‘Did he actually withdraw money from our London account today, or not?’, and one of those systems comes up with the answer, ‘No, because the banks were closed’. The content, ‘He didn’t withdraw our money today’ is then received as input by the mind-reading system, enabling it to draw the inference, ‘My confederate is lying’.
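The division of labor just described can be rendered schematically as follows (a toy sketch; the function names, the little table of ‘facts’, and the control flow are my own illustrative inventions, not a proposed cognitive architecture). The point to notice is that the mind-reading component applies only its own proprietary principles, and hands the factual question off to other systems rather than searching the agent’s beliefs itself:

```python
# Schematic rendering of the exchange just described (names and control
# flow are illustrative only).

def other_central_systems(question):
    # Some other system -- modular or not -- settles the factual
    # question from non-mind-reading information (the closed banks).
    facts = {"Did he withdraw money from our London account today?": False}
    return facts.get(question)

def mindreading_module(utterance, ask):
    # Proprietary principle: an assertion that P is a lie if the
    # speaker believes that not-P.
    # Proprietary generalization: if someone didn't do something
    # salient recently, then he believes that he didn't do it.
    # These reduce the problem to a factual query, which the module
    # poses to other systems instead of searching beliefs itself:
    did_it = ask("Did he withdraw money from our London account today?")
    return "My confederate is lying" if did_it is False else "No lie detected"

print(mindreading_module("I withdrew money from our London bank today",
                         other_central_systems))
```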

All of the above is fully consistent with the mind-reading system being a largely-encapsulated module. That system can operate by deploying its own distinctive algorithms (not shared by other central-process modules nor by general learning), and these algorithms may be isolated from the remainder of the agent’s beliefs. And most importantly, the processing in question might only be capable of accessing a small sub-set of the agent’s beliefs, contained in a domain-specific mind-reading memory store. So the mind-reading system can be an encapsulated one, to some significant degree, and its processing can be computationally tractable as a consequence.

            Of course, such back-and-forth processing has to be orchestrated somehow. And more specifically, if the mind-reading module can take any sort of content as input, then something has to select those inputs. But this needn’t be inconsistent with the modularity of mind-reading, even if it does raise a problem for the thesis of massive modularity. One way for us to go, here, would be to postulate some sort of a-modular executive system, charged with determining appropriate questions and answers, and shuffling them around the other (modular) systems. This might conflict with massive modularity, but it doesn’t conflict with the modularity of mind-reading, or with any moderate form of modularity thesis. But we surely don’t have to go this way, in fact. We might equally appeal to various processing-principles of salience and relevance to create a kind of ‘virtual executive’ (Sperber, forthcoming). So the account just sketched above is fully consistent even with a thesis of massive modularity, I think.
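To illustrate the second option, here is one crude way a ‘virtual executive’ might fall out of salience-based processing principles (my own sketch, not Sperber’s proposal in any detail): modules post their outputs, tagged with relevance scores, to a shared buffer, and whichever content currently scores highest is broadcast as the next input to all consumer modules, so that no dedicated executive module does the routing:

```python
import heapq

# Toy 'virtual executive' (my own rendering of the salience idea):
# modules post contents with relevance scores to a shared buffer;
# whichever content scores highest becomes the next input broadcast
# to all modules. No dedicated executive does the routing.

buffer = []  # a max-heap, via negated scores

def post(content, relevance):
    heapq.heappush(buffer, (-relevance, content))

def broadcast():
    if not buffer:
        return None
    _, content = heapq.heappop(buffer)
    return content  # delivered as the next input to every consumer module

post("the banks were closed today", relevance=0.9)
post("there is a red wall on the left", relevance=0.2)
print(broadcast())  # the most relevant content wins, with no executive module
```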

Let me now make a criticism of Currie and Sterelny related to the one developed above. For they take, as the target of their attack, the thesis that social belief-fixation is cognitively impenetrable. And they take this to mean that all of the post-perceptual processes which eventuate in social belief should constitute one encapsulated system.[7] But this is absurd. No defender of the modularity of mind-reading believes this. What such people believe is that one of the systems which is engaged in the processes eventuating in social belief is an encapsulated mind-reading system (where the ‘encapsulation’ of this system is to be understood somewhat along the lines of the paragraphs above).

There is yet another way in which Currie and Sterelny set up a straw man for their attack. This is that they take the thesis of the modularity of mind-reading to be the claim that the process which serves to fix social belief is modular. And they point out, in contrast, that social appearances (which might plausibly be delivered by some sort of modular system, they concede) can always be over-ridden by our judgments made in the light of any number of different kinds of background information.

            At the outset of this line of thought, however, is a mistaken conception of what central-process modularity is, in general. They take it to be the view that there are modules which serve to fix belief. One way in which this is wrong, of course, is that many modularists believe in the existence of central-process modules which serve to generate desires or emotions. The best way of explaining what a ‘central’ process is, for these purposes, is not that it is one which eventuates in belief, but rather that it is one which operates upon conceptual contents or conceptualized states. And there may well be central modules which issue neither in beliefs nor in desires or emotions, but which rather deliver conceptualized states which can feed into the further processes which issue in full-blown propositional attitudes (e.g. beliefs and desires).

            More importantly for our purposes, Currie and Sterelny tacitly assume that belief is a unitary kind of state. A good number of writers have, in contrast, explored the idea that belief may admit of at least two distinct kinds – one of which is non-conscious and which is both formed and influences behavior automatically, the other of which is conscious and results from explicit judgment or making up of mind (Dennett, 1978, 1991; Cohen, 1993; Evans and Over, 1996; Frankish, 1998a, 1998b, forthcoming). It may then be that some conceptual modules routinely generate the automatic kind of belief, below the level of conscious awareness. Consistently with this, when we give explicit conscious consideration to the same topic, we may reach a judgment which is at odds with the deliverances of one or more conceptual modules. And it is no news at all that in our conscious thinking all different types of information, from whatever domains, can be integrated and united. (How such a thing is possible from a modularist perspective will form the topic of section 5 below.)

            There is also a weaker way to save the hypothesis that (some) conceptual modules function to fix belief, which need not require us to distinguish between different kinds of belief. For one might merely claim that these conceptual modules will issue in belief by default, in the absence of explicit consideration. So the mind-reading module will routinely generate beliefs about other people’s mental states, for example, unless we consciously ask ourselves what mental states should be attributed to them, and engage in explicit weighing of evidence and so forth. The fact that these latter processes can over-ride the output of the mind-reading module doesn’t show that central modules don’t function as fixers of belief.
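The default-and-override proposal is simple enough to render in a line or two (again purely illustrative, with invented names):

```python
# Minimal sketch of belief-fixation by default (illustrative only).

def fix_belief(module_output, explicit_judgment=None):
    # Explicit conscious consideration, when it occurs, takes precedence...
    if explicit_judgment is not None:
        return explicit_judgment
    # ...otherwise the module's deliverance is accepted by default.
    return module_output

print(fix_belief("she is angry with me"))                            # default
print(fix_belief("she is angry with me", "she is only pretending"))  # over-ridden
```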

 

4.3       A consciousness-based intuition

I conclude, then, that where philosophers have argued explicitly against central-process (or conceptual) modularity, they have rejected the idea on inadequate grounds. Either, like Fodor, they have just assumed, without argument, that central processes are a-modular and holistic. Or, like Currie and Sterelny, they have imposed inappropriate constraints on what a central-process module would have to be like. But it may be that underlying such arguments is a deeper intuition, grounded in our conscious awareness of our own thoughts. For we know that we can, in our conscious thinking, freely link together concepts and beliefs from widely disparate domains. I can be thinking about new academic course structures one moment and about horses the next, and then wonder what process of association led me to entertain such a sequence – in effect then combining both topics into a single thought. How is such a thing possible if any strong form of central-process modularity is true?

            It is very important to note that this argument isn’t really an objection to the idea of central-process modularity as such, however. For the existence of belief-generating and desire-generating conceptual modules is perfectly consistent with there being some sort of holistic central arena in which all such thoughts – irrespective of their origin – can be freely inter-mingled and assessed. So all of the less extreme, more moderate, versions of the thesis of central-process modularity are left untouched by the argument. The argument only really starts to bite against massively modular conceptions of the mind. For these accounts would seem to fly in the face of what is manifest to ordinary introspective consciousness.

            The challenge for massive modularity, then, is to show how such a domain-general ‘central arena’ can be built from the resources of a suite of domain-specific conceptual modules. This is the challenge which I shall take up in the next section, providing just enough of a sketch of such an architecture to answer the possibility-challenge. (For further elaboration, see Carruthers, forthcoming a, b.) The result will be, of course, not a full-blown massively modular conception of the mind – since the existence of domain-general conscious thought will be granted – but still a moderately massively modular conception.

 

5          The moderately massively modular mind

The thesis which I want to sketch, here, maintains that it is the natural-language module which serves as the medium of inter-modular integration and conscious thinking. Note, for these purposes, that the language-faculty is, uniquely, both an input (comprehension) and an output (speech-production) system. And note, too, that just about everybody agrees that the language faculty is a module – even Fodor (1983), who regards it as an archetypal input (and output) module.

On any version of central-process modularity, the language module looks likely to be one of the down-stream consumer systems capable of receiving inputs from each of the central modules, in such a way that it can transform the outputs of those modules into spoken (or signed) language. Since we are supposing that the various central modules deliver conceptualized thoughts as output, the initial encoding process into natural language can be exactly as classical accounts of speech-production suppose (e.g. Levelt, 1989), at least in respect of the outputs of any single central module. That is, the process will begin with a thought-to-be-expressed, provided by some central module – e.g. MAXI WILL LOOK IN THE KITCHEN[8] – and the resources of the language faculty are then harnessed to select appropriate lexical items, syntactic structures, and phonological clothing to express that thought, before issuing the appropriate commands to the motor-control systems which drive the required movements of the mouth and larynx – in this case, in such a way as to utter the words, ‘Maxi will look in the kitchen’.

            Similarly, on any version of central-process modularity, the language module will surely be one of the up-stream input systems capable of feeding into the various central modules for domain-specific or specialized processing of various sorts. So what someone tells you about Maxi’s goals, for example, can be one of the inputs to the mind-reading module. What sort of interface would be required in order for this to happen? One plausible proposal is that it is effected by means of mental models – that is, analog, quasi-perceptual, cognitive representations. So the role of the comprehension sub-system of the language faculty would be to build a mental model of the sentence being heard (or read), with that model then being fed as input to the central-process conceptual modules. For notice, first, that there is considerable evidence of the role of mental models in discourse comprehension (Harris, 2000, for a review). And note, second, that the central-process modules would in any case need to have been set up to receive perceptual input (i.e. the outputs of the perceptual input-modules), in such a way that one can, for example, draw inferences from seeing what someone is doing, or from hearing the sounds which they are making. So the use of mental models to provide the interface between the comprehension sub-system of the language faculty and the central-process modules would have been a natural way to go.
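The proposal can be put in schematic form (a toy sketch on my part; the class and function names are placeholders, and the ‘models’ here are mere stand-ins for genuinely analog representations): central modules expose a single interface which consumes quasi-perceptual mental models, and the comprehension sub-system produces representations of that same type, so that language input needs no new interface:

```python
# Toy rendering of the mental-models interface: central modules consume
# quasi-perceptual models, and the comprehension sub-system outputs the
# same type, so language input needs no new interface.

class MentalModel:
    """Stand-in for an analog, quasi-perceptual representation."""
    def __init__(self, description):
        self.description = description

def perceive(scene) -> MentalModel:
    return MentalModel(f"seen: {scene}")          # perceptual route

def comprehend(sentence) -> MentalModel:
    return MentalModel(f"described: {sentence}")  # linguistic route

def central_module(model: MentalModel) -> str:
    # One interface serves both routes into the module.
    return f"inferring from: {model.description}"

print(central_module(perceive("Maxi opening the cupboard")))
print(central_module(comprehend("Maxi will look in the kitchen")))
```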

            The language faculty is ideally placed, therefore, to link together a set of central-process conceptual modules. But if it is to play the role of integrating the outputs of those central-process modules, rather than just receiving those outputs on a piecemeal one-to-one basis, then more needs to be said, plainly. Given that the language faculty is already capable of encoding propositional representations provided by central-process modules, the problem reduces to that of explaining how different natural language sentences concerning some of the same subject-matters can be integrated into a single such sentence. It is at this point that it seems appropriate to appeal to the abstract and recursive character of natural language syntax. Suppose that the geometric system has generated the sentence, ‘The object is in the corner with the long wall to the left’, and that the object-property system has generated the sentence, ‘The object is in the corner near the red wall’.[9] And suppose, too, that the common reference of ‘the object’ is secured by some sort of indexical marking to the contents of short-term memory. Then it is easy to see how the recursive character of language can be exploited to generate a single sentence, such as, ‘The object is in the corner with the long wall to the left near the red wall’. And then we will have a single representation combining the outputs of two (or more) central-process modules.
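In toy form, the integration step might look like this (a deliberately crude sketch: string surgery stands in for the way recursive syntax lets modifier phrases stack under a single subject, and real integration would of course operate over genuine phrase-structure trees):

```python
# Deliberately crude sketch: string manipulation standing in for the
# stacking of modifier phrases which recursive syntax permits.

def integrate(sentences, shared_frame):
    """Merge sentences sharing the frame '<subject> is in the corner'
    by concatenating their modifier phrases under one occurrence of it."""
    mods = [s.replace(shared_frame, "").strip() for s in sentences]
    return shared_frame + " " + " ".join(mods)

geometric = "The object is in the corner with the long wall to the left"
landmark = "The object is in the corner near the red wall"

print(integrate([geometric, landmark], "The object is in the corner"))
# -> The object is in the corner with the long wall to the left near the red wall
```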

            That the language faculty is capable of integrating the outputs from the central modules for purposes of public communication is one thing; that it serves as the medium for non-domain-specific thinking might seem to be quite another. But this is where it matters that the language faculty is both an input and an output module. For a sentence which has been initially formulated by the output sub-system can be displayed in auditory or motor imagination – in ‘inner speech’ – and can therefore be taken as input by the comprehension sub-system, thereby being made available to the central process modules once again. And on most views of consciousness, some (at least) of these imaged natural language sentences will be conscious, either by virtue of their quasi-perceptual status (Dretske, 1995; Tye, 1995, 2000), or by virtue of their relationship to the mind-reading faculty, which is capable of higher-order thought (Armstrong, 1968; Rosenthal, 1986, 1993; Carruthers, 1996, 2000; Lycan, 1996). One can therefore imagine cycles of conscious language-involving processing, harnessing the resources of both the language faculty and the central-process domain-specific modules.
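One such cycle might be sketched as follows (illustrative only; the module set is truncated to a single consumer, and ‘rehearse’ stands in for display in auditory imagination):

```python
# One cycle of inner speech (illustrative only).

def production(thoughts):
    # Output sub-system: encode central-module outputs into a sentence.
    return " and ".join(thoughts)

def rehearse(sentence):
    # The sentence is displayed in auditory imagination, not spoken.
    return sentence

def comprehension(sentence):
    # Input sub-system: parse the imaged sentence back into contents
    # which the central modules can consume once again.
    return sentence.split(" and ")

def mindreading(content):           # one toy consumer module
    return f"mind-reading took as input: {content}"

def one_cycle(thoughts, modules):
    contents = comprehension(rehearse(production(thoughts)))
    return [m(c) for m in modules for c in contents]

for result in one_cycle(["Maxi wants the chocolate",
                         "the chocolate is in the kitchen"],
                        [mindreading]):
    print(result)
```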

            There is more to our conscious thinking than this, of course. We also have the capacity to generate new suppositions (e.g., ‘Suppose that the object has been moved’), which cannot be the output of any module or set of modules. And we then have the capacity to evaluate those suppositions, judging their plausibility in relation to our other beliefs. It is arguable, however, that these additional capacities might require only minimal adjustments to the basic modular architecture sketched above – but I shall not attempt to show this here (Carruthers, 2002, forthcoming a, b). The important point for present purposes is that we have been able to sketch the beginnings of an architecture which might enable us to build non-domain-specific conscious thinking out of modular components, by harnessing the resources of the language faculty, together with its unique position as both an input and an output system for central cognition.

 

6          Conclusion

I have explained what central-process modularity is, or might be; and I have sketched some of the arguments supporting some sort of massively modular model of the mind. The explicit arguments against such a model which have been propounded by philosophers have been shown to be unsound. And a sort of Ur-argument grounded in introspection of our conscious thoughts has been responded to by sketching how such thoughts might result from the operations of a set of central-conceptual modules together with a modular language faculty. Since very little has been firmly established, this is not a lot to have achieved in one paper, I confess; but then who ever thought that the architecture of the mind could be conquered in a day?

 

References

Armstrong, D. 1968. A Materialist Theory of the Mind. Routledge.

Atran, S. 2002. Modular and cultural factors in biological understanding. In P. Carruthers, S. Stich and M. Siegal (eds.), The Cognitive Basis of Science. Cambridge University Press.

Baron-Cohen, S. 1995. Mindblindness. MIT Press.

Baron-Cohen, S. 1999. Does the study of autism justify minimalist innate modularity? Learning and Individual Differences, 10.

Botterill, G. and Carruthers, P. 1999. The Philosophy of Psychology. Cambridge University Press.

Buss, D. 1989. Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sciences, 12.

Carey, S. 1985. Conceptual Change in Childhood. MIT Press.

Carey, S. and Spelke, E. 1994. Domain-specific knowledge and conceptual change. In L. Hirshfeld and S. Gelman (eds.), Mapping the Mind. Cambridge University Press.

Carruthers, P. 1992. Human Knowledge and Human Nature. Oxford University Press.

Carruthers, P. 1996. Language, Thought and Consciousness. Cambridge University Press.

Carruthers, P. 1998. Thinking in language?: evolution and a modularist possibility. In P. Carruthers and J. Boucher (eds.), Language and Thought. Cambridge University Press.

Carruthers, P. 2000. Phenomenal Consciousness: a naturalistic theory. Cambridge University Press.

Carruthers, P. 2002. The roots of scientific reasoning: infancy, modularity, and the art of tracking. In P. Carruthers, S. Stich and M. Siegal (eds.), The Cognitive Basis of Science. Cambridge University Press.

Carruthers, P. forthcoming a. Is the mind a system of modules shaped by natural selection? In C. Hitchcock (ed.), Great Debates in Philosophy: Philosophy of Science. Blackwell.

Carruthers, P. forthcoming b. Distinctively human thinking: modular precursors and components. In P. Carruthers, S. Laurence and S. Stich (eds.), The Structure of the Innate Mind.

Chomsky, N. 1988. Language and Problems of Knowledge. MIT Press.

Cohen, L. 1993. An Essay on Belief and Acceptance. Oxford University Press.

Cosmides, L. and Tooby, J. 1992. Cognitive adaptations for social exchange. In J. Barkow, L. Cosmides and J. Tooby (eds.), The Adapted Mind. Oxford University Press.

Cosmides, L. and Tooby, J. 1994. Origins of domain specificity: the evolution of functional organization. In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind. Cambridge University Press.

Cosmides, L. and Tooby, J. 2001. Unraveling the enigma of human intelligence. In R. Sternberg and J. Kaufman (eds.), The Evolution of Intelligence. Laurence Erlbaum.

Currie, G. and Sterelny, K. 1999. How to think about the modularity of mind-reading. Philosophical Quarterly, 49.

Dennett, D. 1978. How to change your mind. In his Brainstorms. Harvester Press.

Dennett, D. 1991. Consciousness Explained. Penguin Press.

Dretske, F. 1995. Naturalizing the Mind. MIT Press.

Dupré, J. 2001. Human Nature and the Limits of Science. Oxford University Press.

Evans, J. and Over, D. 1996. Rationality and Reasoning. Psychology Press.

Fiddick, L., Cosmides, L. and Tooby, J. 2000. No interpretation without representation: the role of domain-specific representations and inferences in the Wason selection task. Cognition, 77.

Fodor, J. 1983. The Modularity of Mind. MIT Press.

Fodor, J. 1992. A theory of the child’s theory of mind. Cognition, 44.

Fodor, J. 2000. The Mind Doesn’t Work That Way. MIT Press.

Frankish, K. 1998a. Natural language and virtual belief. In P. Carruthers and J. Boucher (eds.), Language and Thought. Cambridge University Press.

Frankish, K. 1998b. A matter of opinion. Philosophical Psychology, 11.

Frankish, K. forthcoming. Mind and Supermind. Cambridge University Press.

Harris, P. 2000. The Work of the Imagination. Blackwell.

Hauser, M. and Carey, S. 1998. Building a cognitive creature from a set of primitives. In D. Cummins and C. Allen (eds.), The Evolution of Mind. Oxford University Press.

Hermer-Vazquez, L., Spelke, E. and Katsnelson, A. 1999. Sources of flexibility in human cognition: Dual-task studies of space and language. Cognitive Psychology, 39.

Job, R. and Surian, L. 1998. A neurocognitive mechanism for folk biology? Behavioral and Brain Sciences, 21.

Karmiloff-Smith, A., Klima, E., Bellugi, U., Grant, J. and Baron-Cohen, S. 1995. Is there a social module? Language, face processing and theory of mind in individuals with Williams syndrome. Journal of Cognitive Neuroscience, 7.

Kitcher, P. 1985. Vaulting Ambition: sociobiology and the quest for human nature. MIT Press.

Laurence, S. and Margolis, E. 2001. The poverty of the stimulus argument. British Journal for the Philosophy of Science, 52.

Leslie, A. 1994. ToMM, ToBY and Agency: Core architecture and domain specificity. In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind. Cambridge University Press.

Levelt, W. 1989. Speaking. MIT Press.

Lycan, W. 1996. Consciousness and Experience. MIT Press.

Marr, D. 1982. Vision. Freeman.

Mervis, C., Morris, C., Bertrand, J. and Robinson, B. 1999. Williams syndrome: findings from an integrated program of research. In H. Tager-Flusberg (ed.), Neurodevelopmental Disorders. MIT Press.

Miller, G. 1998. Protean primates: the evolution of adaptive unpredictability in competition and courtship. In A. Whiten and R. Byrne (eds.), Machiavellian Intelligence II. Cambridge University Press.

Miller, G. 2000. The Mating Mind. Heinemann.

Nichols, S. and Stich, S. forthcoming. Mindreading. Oxford University Press.

O’Hear, A. 1997. Beyond Evolution. Oxford University Press.

Pinker, S. 1997. How the Mind Works. Penguin Press.

Rosenthal, D. 1986. Two concepts of consciousness. Philosophical Studies, 49.

Rosenthal, D. 1993. Thinking that one thinks. In M. Davies and G. Humphreys (eds.), Consciousness. Blackwell.

Samuels, R. 1998. Evolutionary psychology and the massive modularity hypothesis. British Journal for the Philosophy of Science, 49.

Samuels, R. 2000. Massively modular minds: evolutionary psychology and cognitive architecture. In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind. Cambridge University Press.

Sartori, G. and Job, R. 1988. The oyster with four legs: a neuro-psychological study on the interaction of semantic and visual information. Cognitive Neuropsychology, 5.

Smith, N. and Tsimpli, I. 1996. The Mind of a Savant. Blackwell.

Sober, E. and Wilson, D. 1999. Unto Others: the evolution and psychology of unselfish behavior. Harvard University Press.

Spelke, E. 1994. Initial knowledge: six suggestions. Cognition, 50.

Sperber, D. 1994. The modularity of thought and the epidemiology of representations. In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind. Cambridge University Press.

Sperber, D. 1996. Explaining Culture. Blackwell.

Sperber, D. forthcoming. In defense of massive modularity. Festschrift for Jacques Mehler. MIT Press.

Tooby, J. and Cosmides, L. 1992. The psychological foundations of culture. In J. Barkow, L. Cosmides and J. Tooby (eds.), The Adapted Mind. Oxford University Press.

Tye, M. 1995. Ten Problems of Consciousness. MIT Press.

Tye, M. 2000. Consciousness, Color and Content. MIT Press.

Warrington, E. and Shallice, T. 1984. Category specific impairments. Brain, 107.

 



[1] One source of ambiguity here, however, is that any processor can be said to embody information implicitly. For example, a processing system which takes you from Fa as input to Ga as output can be said to embody the belief that All Fs are Gs, implicit in its mode of operation.

[2] See Cosmides and Tooby, 1992. Of course it is possible to reverse this argument, as does Fodor 2000 – arguing that since the mind isn’t massively modular, no one really has any idea how to make progress with computational psychology; see section 4 below.

[3] As is familiar, it is possible to run a traditional ‘symbol-crunching’ program on a connectionist machine; and anyone attracted by massive modularity will think that the computational processes in the mind are massively parallel in nature. So the relevant contrast is just whether those computations operate on structured localist states (as classical AI would have it), or whether they are distributed across changes in the weights and activity levels in a whole network (as more radical forms of connectionism maintain).

[4] As an illustration of the supposed holism of belief, consider an episode from the history of science. Shortly after the publication of The Origin of Species a leading physicist, Sir William Thomson (later Lord Kelvin), pointed out that Darwin just couldn’t assume the long time-scale required for gradual evolution from small differences between individual organisms, because the rate of cooling of the sun meant that the Earth would have been too hot for life to survive at such early dates. Now we realize that the Victorian physicists had too high a value for the rate at which the sun is cooling down because they were unaware of radioactive effects. But at the time this was taken as a serious problem for Darwinian theory – and rightly so, in the scientific context of the day. (Thanks to George Botterill for this example.)

[5] Fodor (2000), pp. 52-3. Fodor wrongly assumes that ‘science is social’ is intended to mean ‘scientific cognition is individualistic cognition which has been socially constructed’; for he responds that it is implausible that the structure of human cognition might have changed radically in the past few hundred years. But the point is rather that the (fixed, innate) architecture of individual cognition needs to be externally supported and enriched in various ways in order for science to become possible – through social exchange of ideas and arguments, through the use of written records, and so on.

[6] See Nichols and Stich (forthcoming) who develop an admirably detailed account of our mind-reading capacities which involves a complex array of both domain-specific and domain-general mechanisms and processes, including the operations of a domain-general planning system and a domain-general suppositional system, or ‘possible worlds box’.

[7] Similarly, Nichols and Stich (forthcoming) argue against modularist accounts of mind-reading on the same grounds. They assert that mind-reading cannot be encapsulated, since all of a mind-reader’s background beliefs can be involved in predicting what another will do, through the process of default attribution (i.e. ‘assume that others believe what you believe, ceteris paribus’). And they assume that modularists are committed to the view that all of the post-perceptual processes which eventuate in mental-state attribution form one encapsulated system. But, firstly, what modularists are committed to here is only that the mind-reading system can only access a domain-specific data-base, not that it is encapsulated from all belief. And secondly, a modularist can very well accept that the mind-reading module needs to work in conjunction with other aspects of cognition – both modular and non-modular – in doing its work. The architecture sketched by Nichols and Stich is actually a very plausible one – involving the use of a suppositional reasoning system (a ‘possible worlds box’) with access to all background beliefs, as well as an action-planning system, a variety of inferential systems for updating beliefs, and so on. But a modularist can maintain that at the heart of the workings of this complex architecture is a modular mind-reading system, which provides the core concepts and principles required for mind-reading.

[8] I follow the usual convention of designating structured propositional thoughts, when expressed in some form of Mentalese, by means of capitalized sentences.

[9] This example is drawn from Hermer-Vazquez et al. (1999), who provide evidence that such contents cannot be integrated in rats and young children when they become spatially disoriented, and that it is actually language which enables those contents to be integrated in older children and adults. For example, the capacity to solve tasks using both forms of information is severely disrupted when adults are required to ‘shadow’ speech through a set of headphones (thus tying up the resources of the language faculty), but not when they are required to shadow a complex rhythm.