Evolving Self-Consciousness

Presenters: Peter Carruthers, Logan Fletcher, and J. Brendan Ritchie

Commentator 1: Joel Smith, University of Manchester

Commentator 2: JeeLoo Liu, California State University, Fullerton

Comments

  1. Can I write a really long comment? I am optimistic that I can and that this is a benefit of an online conference; I’m not hogging the floor, and no one needs to “listen” to me.

    I’m pretty sympathetic to Peter, Logan, and Brendan. I was—I think like Joel—not totally confident that I was clear on the kind of self-knowledge or self-consciousness at issue, but I think all that was meant was the ability to self-ascribe mental states. (So that I can not only feel incredibly hungry but also think a thought we would express using the English language sentence, “I’m hungry.”) If I got that, then I’m inclined to agree that what drove the development of the ability to self-ascribe mental states was indeed the need or benefit of being able to ascribe them to others. I was predisposed to think this in part because an animal gets so much already out of just e.g. feeling hungry; knowing that you feel hungry no doubt adds some cool new functionality but not, on the face of it, a ton more. That’s the way my intuitions point anyway (qualifying this in a moment): that the vast majority of the time I do things because I want to do them rather than because I know I want to do them, etc.

    The only potentially profoundly important role I can see for self-consciousness is one that I think JeeLoo alluded to. (Maybe this is also similar to Nichols’ multi-stage planning idea? I haven’t read the 2001 paper cited.) I imagine (but I don’t know the literature) that what an animal with purely first-order mental states does, at a given moment, is determined by a sort of competition between its various mental states, a competition whose winner is the strongest desire, or the most salient belief, or something. But all kinds of self-control—not just control over one’s own thoughts (or learning, or whatever)—might depend upon an ability to think about, to weigh the importance of, and to deliberately call to mind, our various mental states. (“I so don’t want to get up… Do I have to? Well, let me think about what I wanted to do today… Damn. I gotta get up.” Of course one could have just thought about the rest of one’s upcoming day until a competing, and even more powerful, desire became active, and all without engaging in this bit of metacognition–it just seems that, lying there, with your most powerful desire being to return to sleep, the only realistic way you manage to go through this process is by engaging in the discipline of forcing yourself to think about what else you want to do.) And we seem to be pretty good at that kind of self-control. At least, our capacity for it seems just as distinctive of human animals as does our sociality.

    So I wondered what the authors would say about this. Are we not really that good at that kind of self-control? Or do we not in fact need to exercise it as often as I am flattering myself that I do? Or does the kind of self-control I’m imagining not in fact clearly require self-consciousness? Or am I totally right and the authors now concede on all points? (Until I get a reply, I will of course assume the latter.)

    Anyway, great paper and great discussion by the commentators–thank you!

    (Btw, the PDFs were listed using people’s first names, so as you can see I have just assumed that we’re being informal here! Hope that’s all right.)

    Lizzie

  2. Since Lizzie has decreed that silence will imply consent, I suppose we had better reply. This will also constitute a reply to JeeLoo’s main suggestion, which Lizzie picks up on. (I shall hope to reply to some of Joel’s comments later in the conference.)

    First of all, it is not true that all kinds of self-control depend on self-consciousness, as Lizzie suggests. On the contrary, many forms of executive function—from the direction of attention to the resolution of conflicts between sources of information—are entirely first-order in character. What is true is that some kinds of self-control depend on self-consciousness. The question is then whether these abilities display any of the signature properties of an adaptation. We suggest that they do not. They are late to emerge in development, and there are big variations between people in the self-control strategies they employ, suggesting the presence of a set of learned skills, rather than an adaptation.

    But Lizzie says: we are nevertheless good at self-control, which is one of the signature effects of an adaptation that we identify. True, some people are. But many are not. Perhaps this doesn’t by itself argue against an adaptation for self-control, since many adaptations admit of significant variation. But it equally doesn’t argue for an adaptation either, in the absence of additional evidence. After all, some people are good at playing the piano, and some (especially Lizzie) are good at philosophy. But no one thinks that these are adaptations.

    In Fletcher and Carruthers (2012) we discuss the work of Walter Mischel and colleagues on self-control abilities in children, showing not only that there are large individual variations, but more relevantly, that children seem to employ a range of idiosyncratic strategies for self-control. Even if they were all uniformly good at it, this would argue for a learning account rather than an adaptation.

  3. The main critical point made by Joel in his commentary is that our third-person-based account of the evolution of self-consciousness does not make the prediction that people will lack native capacities for metacognitive control, and is thus not confirmed by the finding that such capacities are, indeed, absent. Hence although the first-person-based account is disconfirmed by the latter finding, no direct support is thereby provided for the third-person-based view.

    I agree that the absence of native control capacities is not a prediction of the third-person-based account alone. An additional assumption is required. But it is, I think, a plausible assumption. (And of course it is generally the case that auxiliary assumptions are needed to derive empirical predictions from scientific theories.)

    Recall that the hypothesis we defend is not just that third-person-based mindreading evolved PRIOR to the evolution of a capacity for self-consciousness, but that the capacity for self-consciousness is not an adaptation at all. It results, rather, from turning one’s mindreading capacity on oneself. The hypothesis under consideration is that this self-consciousness capacity did NOT thereafter become a target of selection. So we intended our hypothesis to exclude the idea that the mindreading faculty, although initially evolving for third-person social purposes, thereafter became, in addition, an adaptation for self-consciousness.

    Is this hypothesis consistent with the claim that there are, nevertheless, native capacities for metacognitive control? In the abstract, yes. For the self-consciousness abilities supported by self-directed mindreading might have set up a selective pressure for increasingly adaptive forms of control. Is it likely, however, that these evolving control capacities would have failed to create a reciprocal pressure on our capacity for self-consciousness? I think not. One would expect that, as increasingly successful native capacities for metacognitive control began to evolve, these would co-evolve with subsequent changes in the mindreading system itself. One would expect that the latter system would be shaped and fine-tuned for more effective self-monitoring. With this additional assumption, our prediction goes through: if self-consciousness results from self-directed mindreading and is NOT an adaptation, then we should expect that there won’t be any native capacities for metacognitive control, either.

  4. Thanks Peter, that’s very helpful. In fact, the plausibility of it suggests to me that I had been confusedly failing to make the distinction (that you mention in your response to Lizzie) between control per se and meta-cognitive control (that is, control that, by definition, involves self-consciousness). If this is right, then the positive case for your account is certainly stronger than I had thought.

    However, I’m not sure that it was my main critical point! I’m pretty egalitarian – my other worries seemed equally important to me. So, the claim that mindreading can’t be outstripped by self-consciousness doesn’t really seem to receive any positive support. And the (of course, overwhelming) evidence for mindreading as an adaptation doesn’t really offer any distinctive support to the view, as it’s consistent with the view of your opposition. But perhaps, as you indicate above, the right response is that there are some plausible auxiliary assumptions in the vicinity. I just don’t know what they are…

  5. Joel

    I downgraded your other points to “less important” because they are ones with which we agree (and with which our agreement is noted in the content of the paper itself).

    Yes, there is no positive evidence that mindreading can’t be outstripped by self-consciousness. The portion of the paper dealing with comparative data is intended to be defensive: responding to alleged evidence that self-consciousness CAN outstrip mindreading. And so far as I am aware, there is no relevant evidence from infant development, either. While the general finding in the literature is that self-knowledge and other-knowledge proceed in parallel, with similarly timed developmental milestones, these findings are from experiments using verbal tasks. The recent infancy data indicate that mindreading competence is present at much earlier ages. (Otherwise the parallelism finding would have been support for our view, since a two-adaptations account would be forced to see this as mere coincidence, whereas a mindreading-based account predicts it.)

    And yes, too, the fact that there is an adaptation for mindreading lends no direct support for the third-person-based account of self-consciousness. We briefly included a discussion of the reasons for thinking that there is such an adaptation to point up the contrast with the first-person-based view. All we wanted to claim is that the difference between the evidence of an adaptation for mindreading and the evidence for (indeed, against) an adaptation for metacognitive control is a dramatic one.

    best
    Peter

  6. [I’m posting this in the ‘Evolving Self-Consciousness’ thread, and the thread on James Dow’s paper, as it brings in issues from both]

    Peter,

    Thanks – then I suppose our primary disagreement is on the overall judgement as to whether your Mindreading account qualifies as well supported.

    But I do have a quibble with something you say above. You say that your account predicts a parallelism concerning the development of mindreading (MR) and self-consciousness (SC) competence. As I suggested in my response, I think that this is ambiguous (and, in the paper, I suspect that the emphasis is sometimes on the one, sometimes on the other disambiguation).

    First, it might mean that the conceptual resources employed in MR develop in parallel with those employed in SC. Second, it might mean that the reliability of a subject’s judgements about other minds is equal to that of judgements about one’s own.

    For reasons given in my response, I suspect that the second isn’t predicted by the account. In any case, I presume that it is the former that you are really concerned with, so let’s concentrate on that.

    Let’s assume that the conceptual resources employed by the MR system are PERSON and concepts of psychological states. At stage one, these might include WANTS, at stage two, THINKS. Let’s also assume that the conceptual resources employed in SC are I and concepts of those same psychological states.

    The claim to consider is that the mindreading account predicts a parallelism between the development of the conceptual resources employed in MR and SC. But, as I indicated in my response, I don’t think that it does. For even if we grant with Strawson that concepts of mental states are, of necessity general (here is where the link comes in to James’s paper), there is no reason to think that subjects capable of thinking “She wants food” can think, “I want food”. For the subject may yet lack the first-person concept. The generality constraint (if acceptable) doesn’t force us to accept that, if I can other-attribute then I can self-attribute. For the constraint is obviously only that I must be able to attribute the psychological concept in question to those entities for which I am in possession of an individual concept. So, it is consistent with the generality constraint to think that a subject may possess the ability to think “She wants food”, yet lack the ability to think, “I want food”. And this is so, it seems to me, even if self-consciousness is nothing more than mindreading turned to oneself. Thus, the account doesn’t, of itself, predict any such parallelism.
    How might this conclusion be resisted? Here are two options.

    First, it might be held that the concept employed in MR is not PERSON but something like CONSPECIFIC. The thought might then be that, since grasping CONSPECIFIC means grasping CREATURE LIKE ME, the first-person concept will come as part of the mindreading package. But I take it that this would be a problematic move, since it seems to suggest that the capacity for MR depends upon a prior, unexplained, capacity for SC.

    Second, it might be denied that SC requires the employment of the first-person concept. Perhaps a representation with the content “He wants food” is self-conscious when the person denoted happens to be me. But for very familiar reasons (Anscombe, Castañeda, Kaplan, Perry, Lewis) I think that this mischaracterises SC.

    So, my view is that the mindreading account only makes the prediction in question on the assumption that these conceptual issues are resolved, and resolved in a certain way – a way that does not secretly import an unexplained capacity for SC into the conceptual resources for MR. Similar remarks might be made for the case of concepts of psychological states themselves. On one view of how we apply such concepts to others in such a way as to preserve the generality constraint, they presuppose the first person (here I have Peacocke in mind). Roughly, to think “She wants food” is to think, “She is in the same state as I am in when I want food”. Of course, you will reject this picture. But I think it likely that whilst such a view is inconsistent with the mindreading account, it is consistent with much of the empirical data that you bring to bear. If that’s right – and I won’t even attempt to defend it now, I’ve gone on long enough already – then these issues need to be sorted out regardless of the specific prediction in question.

    Best, Joel

  7. I want to thank Peter, Logan and Brendan for their insightful paper, “Evolving Self-Consciousness.” I also want to thank Joel and JeeLoo for their probing comments and questions. I am grateful for the opportunity to engage in the discussion about the evolution of self-consciousness. First, I want to see if Peter, Joel, and JeeLoo agree about the outline of the argument in “Evolving Self-Consciousness.”

    My first interpretation of the paper is as a process-of-elimination argument whose conclusion is that self-consciousness is third-person based, namely that self-consciousness depends on mindreading directed toward oneself. On this account, mindreading is an adaptation and self-consciousness is an exaptation. The argument runs: 1) Either self-consciousness evolved as an adaptation for metacognitive monitoring or control (first person), or self-consciousness evolved as an exaptation from mindreading (third person). 2) The latter is empirically better supported than the former. C) Therefore, the third-person account is the preferred account.

    As Peter and Joel grant, there may be other possible accounts, which would challenge the idea that the disjunction in 1 is exhaustive. JeeLoo seems to adopt the disjunction and defend the first-person account as prior. I want to paint a picture of a possible third position in this reply. I take it that Carruthers et al will need to address an account like this in the process of elimination, although of course, it need not be mine (ours)…

    I will call it “the intersubjectivist account.” Another name might be ‘the second-person account’, to mark the distinction from the first-person and third-person accounts. I think the account may be similar to Joel’s comments about grasping the concept CONSPECIFIC, but I’m not sure what the details of Joel’s proposal are. I will also make the point later that, in many ways, Bogdan’s account hints at the possibility of the interpersonal account as well, but I will make that point in that discussion thread.

    I will first point back to the Strawsonian account that Joel gestured towards. According to the Strawsonian account I favor, self-ascription and other-ascription are done under the logical type PERSON. However, the logical type PERSON need not be as restrictive as Joel suggests; instead, it might be adjusted to reflect biological inclusiveness about the concepts of self-consciousness and other-consciousness. The view might be phrased as follows: if a creature is capable of self-ascription, and self-ascription is in its nature interpersonal, then that creature will be required to have been aware of other creatures like itself, or, to use the general term, to be aware of its conspecifics; mutatis mutandis for other-ascription. On this view, the ‘we’ is not composed of two self-standing subjectivities; intersubjectivity is not two minds alone meeting somewhere in the middle. I will call this intersubjective account ‘the Persons Theory.’ What is the import of the Persons Theory for the question of the relation between self-consciousness and mindreading?

    There are three possible views of mentalizing:
    (1): self-priority asymmetry: the ability to self-ascribe is not dependent upon the ability to other-ascribe in the sense that one need not think about or perceive others in order to self-ascribe (first person simulation theory);
    (2): other-priority asymmetry: the ability to other-ascribe is not dependent upon the ability to self-ascribe in the sense that the ability to other-ascribe is prior to the ability to self-ascribe (third person theory theory) (This is how I interpret Carruthers’ claim that mindreading is prior to self-consciousness);
    (3): no-priority symmetry: the ability to self-ascribe is mutually dependent upon the ability to other-ascribe in the sense that one cannot self-ascribe unless one can other-ascribe and one cannot other-ascribe unless one can self-ascribe.

    The Persons Theory of mentalizing is a no-priority symmetry account of the ascription of experiences. Strawson’s account of self-consciousness is an interpersonal account of self-consciousness, meaning that self-consciousness must be understood as fundamentally related to consciousness of other persons, and vice versa, and this fundamental relation stresses the full symmetry between self-ascription and other-ascription.

    It might seem that I’ve just invented a third position (and I’m engaged in Strawson-worship), but of course there are precursor philosophers and psychologists who fit this description of a third account… I take it that Shaun Gallagher’s (2001) account in “The Practice of Mind” is neither a theory-theory nor a simulation-theory account. On this account, “primary intersubjectivity” is a basic embodied skill and ability, and the theory-theory and simulation theory each fail to capture the symmetrical relation between self and other that is basic to the phenomenology of lived experience.

    Peter Hobson (2002) has taken inspiration from P. F. Strawson’s account that what is required for joint attention is “intersubjective engagement” (201): “One can only have joint attention if one has the capacity to ‘join’ another person—which means that one needs to be able to share experiences with others, registering intersubjective linkage—and at the same time remain separate” (201). In order to achieve separation, such intersubjective linkage need not be understood as based in thoughts about or imaginations of others. Instead, as I argued above, perception of others is sufficient to account for the intersubjective linkage without over-intellectualizing intersubjectivity. Thus, again, it is a genuine third option.

    Carruthers et al cite Hrdy’s work, however, in Mothers and Others she contrasts accounts of mindreading that involve “theory of mind”– whether theory-theory or simulation theory– with accounts that make intersubjectivity central: “Other psychologists prefer the related term “intersubjectivity” which emphasizes the capacity and eagerness to share in the emotional states and experiences of other individuals— and which, in humans at least, emerges at a very early stage of development, providing the foundation for more sophisticated mind reading later on” (2).

    I take it that some of the work of Colwyn Trevarthen and Vasudevi Reddy—especially in “Consciousness in Infants” also support a third account. And, Axel Seemann’s account of person perception– http://philpapers.org/rec/SEEPP also points towards a third account that he calls ‘intersubjectivist’.

    So, my questions: Is this the proper way to outline the argument in the paper? And, if so, is there a third possibility? Of course, I have not given all the precise details of the intersubjectivity account, namely what its full description is, what the evolutionary explanation would be, and what predictions it would make, but I have gone on long enough… Of course, one may not see this as a genuinely third account, in which case determining whether or not it is empirically better supported than the other two is not worth doing; but is the intersubjectivist Persons theory a genuine third account?

  8. James

    Our account is an account of the adaptive basis of self-consciousness. We assume that there is some sort of innately channeled basis for the capacity for self-consciousness, and ask what it evolved for. Your third suggestion is, rather, an account of the development of self-consciousness in infants. This is consistent with our view. Indeed, it is a suggestion I am inclined to endorse. For I think that it is the same mechanism that underlies both other-knowledge and self-knowledge (as well as inter-subjective engagement). There is no priority in terms of development. (Setting aside Joel’s worry about the first person concept, which I see and am not sure what to think about.)

    If you try to turn your view into an account of the evolution of the capacity for mental state ascription, then I can’t see how it would work. For it is implausible that mental state ascription is for mutual sharing of mental states of the sort that infants engage in with joint attention behaviors and early forms of communication (pointing, understanding of referential pointing). For we have reason to think that other primates have some understanding of mental states although they do not engage in such behaviors. Indeed, Tomasello and others think that it was the drive to share mental states, added to a pre-existing capacity for mindreading, that might underlie our evolution as a distinctively communicative species.

  9. MIND-READING, METACOGNITION AND MYSTERY

    It is probably true that the capacity to detect what information others have, and what they are likely to do (“mind-reading”) was a more important evolutionary adaptation than the capacity to monitor internal processes (“metacognition”). But it remains a mystery why either of these is done consciously. They would be equally adaptive in an insentient robot.

    Language probably trumps both (nonverbal mind-reading and metacognition) as an evolutionary adaptation — but it’s equally mysterious why language is conscious.

    I think the 1st-person/3rd-person terminology may be a bit misleading. Only sentient seers have any viewpoint at all (and that is always 1st-person). Absent an account of the causal role of sentience itself, all we really have is robot-centered (“egocentric”) vs other-centered (“allocentric”) coordinate systems (with the “self” the origin of the robot-centered one): all “3rd-person” until/unless the subjective lights go on.

  10. Stevan

    Our paper was not about phenomenal consciousness, but rather self-knowledge, and made no commitments regarding the former. But it is controversial to claim that phenomenal consciousness is genuinely mysterious. Many of us think that the appearance of mystery (including the explanatory gap, conceivability of zombies, and so forth) results from the distinctively isolated concepts that we can employ when thinking about phenomenally conscious experience. But the properties that those concepts pick out are just representational and functional properties of various familiar sorts.

  11. Hi Peter,

    As far as an evolutionary account is concerned, an explanation of an adaptation would include: 1) evidence that selection has occurred, 2) an ecological explanation of adaptive advantage, 3) evidence that the trait is heritable, 4) information about populations of the species, and 5) phylogenetic information about trait polarity (Brandon 1990). It seems to me that self-ascription and other-ascription are on a par with respect to not meeting these criteria. I wonder if a third account in terms of intersubjectivity wouldn’t fare better.

    My hypothesis is that self-ascription and other-ascription are both exaptive, while the intersubjective account in terms of cooperative sharing, joint engagement, and mutual recognition is adaptive. If the perceptual basis for joint engagement could be made clear and precise, then it seems like perceiving eyes, faces, hands, etc. would be something we might have better evidence for. While I agree that other-ascription is not for joint engagement, that does not rule out that joint engagement is prior in evolutionary development. Doesn’t Ferrari et al (2009) suggest that rhesus monkeys have reciprocal face-to-face recognition? That’s not decisive, but it paints a picture of a third account that hypothesizes that there is a basic skill, in this case the perception of persons, that is prior in evolutionary history and makes possible self-ascription and other-ascription. I agree also that primary intersubjectivity (joint engagement) is distinct from secondary intersubjectivity (communication), and the ascription of experiences underpins the latter, but does not underpin the former.

    Also, I take it that Bogdan’s account can be read as arguing that executive abilities are what are basic to both self-ascription and other-ascription, and that is where his evolutionary explanation is pitched. He seems to suggest that we ought to have a mixed developmental model (the evo-devo model) that begins with an account of infant development, then turns to the evolutionary story. I suppose the difference between his account in terms of executive abilities and the account I would sketch is that it is basic perceptual abilities that are the basis of self-ascription and other-ascription, rather than executive abilities.

    Best,
    James

  12. James

    Terms like “intersubjectivity”, “joint engagement”, and so on admit of weaker and stronger senses. In the weak sense, there is joint engagement whenever there is mutual face recognition, tracking of eye gaze, and so on. In this sense, joint engagement is widespread among primates. But in this sense joint engagement is too weak for mindreading and self-consciousness to be explicable in terms of it. In the strong sense, in contrast, there is joint engagement only when individuals share awareness of one another’s mental states – being aware that they are mutually afraid, for example, or that they both find the same event surprising. This presupposes a capacity to attribute mental states to others and to oneself, and so can’t form the basis of those abilities. In this strong sense, joint engagement appears to be uniquely human.

    None of this is to deny, of course, that joint engagement in the weak sense might be an evolutionary precursor of both mindreading and self-consciousness. Indeed, I have no doubt that this is true. But this is not a competitor for the two views contrasted in our paper, since it does not require mental states to be attributed to anybody.

    best
    Peter
