The Concept Possession Hypothesis of Self-Consciousness

Presenter: Stephane Savanah, Macquarie University, Australia

Commentator 1: Kristina Musholt, Berlin School of Mind and Brain

Commentator 2: James Dow, The Graduate Center, CUNY

Stef’s Response


13 Comments

  1. I enjoyed Stef’s interesting paper, as well as the commentary and Stef’s responses. My comments go to two issues already raised in the commentary and responses – I’m not sure I have much to add, but here goes.

    Stef’s ‘concept possession hypothesis of self-consciousness’ relies on the argument that all concepts are related to the self-concept (pp. 9ff. of the paper). Stef says a bearer of concepts must bear the concept [agent], and therefore the self-concept. Both commentators challenge this, and Stef addresses the issue again in his responses to the commentaries. I’m still not persuaded by Stef’s arguments. He discusses several concepts including [blade], [beauty] and [colour]. My difficulty with his approach is that I fear he relies too much on our concepts, and I think he needs to try also to address the sorts of concepts which non-human minds might bear. Certainly many, perhaps all, of our concepts are intertwined in a way which makes an argument from concept possession to self-concept possession plausible for possessors of the sorts of concepts we possess. But we are grappling with the possibility that there might be organisms which harbour concepts which don’t readily map to our concepts and which needn’t presuppose agency. This is difficult terrain, of course, because we are trying to analyse concepts which we don’t possess, at least not in the way that these other organisms do.

    The sort of example I have in mind which might cause a problem for Stef is an organism with some primitive concept of predator which triggers tree-climbing behaviour. This concept might mediate a whole lot of cognitive activity, in which the organism processes a wide range of stimuli correlated with the presence of predators before a tree-climbing response is or is not triggered. Perhaps this can occur without the organism possessing any concept of agency or self. Now Stef may say that in this sort of case the relevant cognitive processing is better characterised as a stimulus–response process rather than a process involving concepts or inferential reasoning, but I’m not sure that this wouldn’t beg the question. Dennett, for one, suspects there is no principled distinction between the primitive intentionality of a thermostat and our intentionality.

    This first issue is related to my second comment about the practical implications of Stef’s hypothesis for research (which, to be fair, is not the target of Stef’s paper). Stef aims to assist our understanding of research in cognitive science by enabling us to assess the presence of self-consciousness according to criteria like the capacity to conceptualise or reason inferentially. Stef says we will be able to assess these criteria by examining behaviour. The practical challenge then is to show how moving to these conceptual criteria helps – perhaps the question of whether an organism is exhibiting the capacity to conceptualise is just as imponderable as the question of whether self-consciousness is being exhibited. Kristina also raises this sort of question, and Stef responds by saying there will be fairly clear-cut cases where certain behaviour could not occur without conceptualisation, and he refers us to two papers in preparation. I look forward to these, and simply wonder in anticipation whether the interpretation of the relevant cases will also present the challenge mentioned above of analysing organisms which may be the bearers of concepts quite unlike the concepts we bear.

  2. Hi Stef,

    You are presenting a hypothesis about concepts and self-consciousness, namely that concept possession is necessary and sufficient for self-consciousness. I asked for clarification on the arguments, because if your methodology is “dialectical,” “conceptual,” etc., then you are ARGUING for such necessary and sufficient conditions. But, I still don’t know what your intended arguments are. I am still wondering whether I have properly paraphrased your arguments for the concept possession hypothesis. In your response, you discussed your concept of self-consciousness, but that didn’t make explicit what your arguments are.

    I think there are some issues with the methodology being employed in the paper and in the replies to comments. I think these are important issues to address first, so my first follow-up comment will focus on that.

    The main question of your overall project is “Do infants and non-human animals possess self-consciousness?” You translate that question into “Do infants and non-human animals possess self-consciousness in the way that normal human adults do?” But, it seems to me that that question masks a few methodological issues.

    First, it doesn’t seem like there is much consensus about the nature of the self-consciousness possessed by normal adult humans. When philosophers and psychologists debate about what self-consciousness is, they are mostly arguing about the normal adult human case. Or, do you think that we’ve pinned down what self-consciousness is in the normal adult human case? (Similar questions could be asked about what a concept is, what intelligence is, etc…)

    Second, suppose you don’t care about consensus, but instead you just want to outline what your account of self-consciousness is. It doesn’t seem to be enough to outline a coherent and consistent account of self-consciousness, which is what you seem to be supposing in the paper and in the replies. It seems to me you also have to show that your account of self-consciousness is true. Or, do you think providing a coherent and consistent account is sufficient for its being true?

    Third, the concept of self-consciousness that you present strikes me as just stipulative. You say that the type of existential self-consciousness is: “We understand the meaning of existence; we know ourselves to exist; we question the meaning of our existence.” Each of these definiens is very different, and it is not clear whether these are necessary or sufficient, etc. Also, if this is the intuitive, a priori plausible notion of self-consciousness, then I should recognize it as such, but these definiens don’t seem tautological, axiomatic, or basic really. And, understanding the meaning of existence can’t be necessary for self-consciousness, because I didn’t understand the meaning of existence until I read Quine’s “On What There Is,” but I was self-conscious long before that. It seems to me we could go back and forth about what’s necessary and sufficient for self-consciousness. For that reason, I’d like to ask some questions about what you take your methodology to be…

    Let me outline three methodological approaches to self-consciousness. I’ll attempt to pinpoint which one I think you endorse. And, I outline which one I think one should endorse. In commenting on your paper, I was really trying on hats that I would rather not wear.

    Methodology A (Purely Dialectical): The best method for research into self-consciousness is purely dialectical, which would involve presenting definitions of self-consciousness via necessary and sufficient conditions and then testing the coherence and consistency of those definitions.

    Methodology B (Mixed Method): The best method for research into self-consciousness is through ‘a mixed method,’ alternating conceptual and empirical, which involves making arguments and claims about what is necessary and sufficient, then testing those claims against empirical findings in the relevant disciplines, for instance, linguistics, psychology, anthropology, and neuroscience.

    Methodology C (Purely Empirical): The best method for research into self-consciousness is purely empirical, which would involve devising experiments to discover the nature of self-consciousness, but would not require providing arguments about the terms or concepts used in describing the results of those experiments.

    It is clear that you are not proceeding with Methodology C. However, I outlined a response from that perspective when I suggested that one might have a broad empirical theory of non-human animal minds within which self-consciousness is defined as MSR (mirror self-recognition). But, we can put that aside.

    You say, “arguing for the hypothesis on the basis of any particular empirical evidence would unjustifiably privilege that research paradigm above others.” That seems to suggest that you are concerned with Methodology A. Now, what should our mode of proceeding with Methodology A be? If it is a purely dialectical inquiry, then presumably we’re arguing about how we ought to talk about and think about self-consciousness. In that case, we’re engaging in verbal and conceptual dispute. This raises a problem for some of the responses you give. I don’t see how you are justified in claiming that the differences between conceptions of self-consciousness are verbal or conceptual disputes, since that’s what Methodology A purports to be concerned with.

    In my comments, I was engaging in Methodology A, challenging your definitions of ‘concept,’ ‘intelligence’ and ‘self-consciousness,’ but your responses merely assert that you have defined ‘concept,’ ‘intelligence,’ and ‘self-consciousness’ in such and such a way (suggesting that I just misunderstood your definitions): “I do not think this form of ‘awareness’ will satisfy my conception of self‐consciousness”; “Creatures with intelligence that possess self‐consciousness is my definition of level 3”; “are there creatures with intelligence but without self‐consciousness?”; “I must again respond in the negative as that would contradict my CP Hypothesis”; “this type of self‐consciousness is not the one I refer to in the paper”. But the assumption of Methodology A is that we’re engaging in a verbal or conceptual dispute; I was presenting challenges to your definitions of those terms and concepts.

    However, I wouldn’t proceed via Methodology A, partly because there is a tendency in Methodology A towards simply saying “But, that’s not how I define ‘self-consciousness’” or “But, that’s not my conception of self-consciousness.” I would rather proceed with Methodology B. Let me know what you think about this methodology. I’m not proposing we adopt this methodology here, I’m just wondering whether you think Methodology A is preferable to Methodology B.

    In Methodology B, we would consider the claim “Concepts are necessary and sufficient for self-consciousness” as a constitutive claim, namely a claim about how we ought to talk and think about the relationship between concepts and self-consciousness. Then, we break up that claim into the necessary and sufficient condition claims:

    (N-CSC) Concepts are necessary for self-consciousness.
    (S-CSC) Concepts are sufficient for self-consciousness.

    Then, we test each claim in turn, proceeding first with an argument for each claim, and second by testing each premise of that argument (including its conclusion) against the relevant empirical literature. I don’t see how this would run into the circularity issue that you mentioned in your paper, because it would not test the claims solely against the data about MSR, or against the data from any particular research program, but instead against the relevant empirical literature in general.

    A quick point against the use of Methodology A. You say in the paper that “the basis for the CP Hypothesis is an intuition, which I believe is shared by many, that there is a strong correlation between intelligence and self-consciousness. Below I present some of the background thinking behind this intuition in order to establish the a priori plausibility of the CP Hypothesis.” But, consider the following analogy: Suppose we consider a hypothesis about physics, for instance that matter is infinitely divisible. We say, “There is an intuition that matter is infinitely divisible. It is a belief that is shared by many. The other claims in the theory support the idea that matter is infinitely divisible as well.” It might be that we find it intuitively plausible, but we shouldn’t think that we could evaluate empirical research in physics given the a priori plausibility of an intuition. I don’t see why we should think that a purely dialectical inquiry into self-consciousness should be favored either.

  3. Hi James,

    Thanks again for your follow-up comments.
    You’ve asked if I think that we’ve pinned down what self-consciousness is in the normal adult human case or if I think providing a coherent and consistent account is sufficient for its being true.
    Well, of course I don’t think simply providing a consistent account makes it true, but I do have to articulate my position. You seem to present me with the following choice: either I define what I mean when I say ‘self-consciousness’, in which case I am open to the accusation of being stipulative, or I can avoid a definition and investigate the nature of self-consciousness based on empirical studies. In the latter approach we could say, for example, that MSR shows a certain type (or level) of self-consciousness, as distinct from (say) theory-of-mind, which shows another type (or level) of self-consciousness. But to me this approach is ultimately unsatisfying because it lacks explanatory power. This approach allows us to say things like “dolphins and chimpanzees have both achieved MSR-level self-consciousness” – but then we already know that just by looking at the experimental results. It does not allow me to say to (for example) Gordon Gallup “you claim that MSR shows chimpanzees are self-conscious, but you are wrong because…” (or, “you are right because…”). Yet there are many commentators who have long been engaged in that debate. As I, too, want to join that debate I need to take the former approach; I need to specify what I understand self-consciousness to be (in the hope and expectation that it is a close fit to most commentators’ conceptions) and explain the basis for my position on the results of MSR and other studies.
    When I said “We understand the meaning of existence; we know ourselves to exist; we question the meaning of our existence,” you said these definiens don’t seem tautological, axiomatic, or basic. But I think I can weave a thread through them. Some of us dwell upon it more than others, but who among us has never pondered the meaning of life? It is characteristically human to question the meaning of our existence. And when we do this, it is not that we question the existence of others, but of ourselves – in which case we must know ourselves to exist. And, I would be so bold to assert, we must have a conception of what it is to exist in order to question that very existence.
    Then you claimed you didn’t understand the meaning of existence until reading Quine’s “On What There Is,” but were self-conscious long before that. Perhaps until reading Quine you didn’t understand the meaning of existence in the deep philosophical sense, but I’m pretty sure you had a grasp of what the word ‘existence’ refers to. Most people when asked “does this pencil I’m holding exist?” and “do unicorns exist?” would answer in a consistent way, indicating at least a common understanding of what those questions mean in relation to existence. If you ask a person “do you exist?” they would probably be bemused and think it was a trick question by a philosopher trying to trip them up, but would otherwise understand what was being asked, and if they took it as a straightforward question (as they might if it was part of a lie detector test) would answer in the affirmative. Perhaps by talking this way I run the risk of being accused of an overreliance on ‘folk philosophy’, but really, when I look at a dolphin and think “are you aware of your own existence?” I am not wondering if it has read Sartre or Heidegger.
    You presented me with three methodologies: Methodology A (Purely Dialectical); Methodology B (Mixed Method); Methodology C (Purely Empirical). When you ask about my methodology I’m not sure if you are referring to my overall project or the CO2 paper. In the CO2 paper I said I would not rely on empirical evidence to make my case. That does not mean that empirical evidence I have come across has not played a part in shaping my thinking on this subject – for certainly it has. To a large degree the way I think about self-consciousness grew out of what I’ve studied about research in this area. But if the CP Hypothesis is to be used for evaluating research, it needs to be able to stand on its own independent of that research. So, yes, the CO2 paper is dialectical in that it avoids reliance on specific empirical data to make a case for the hypothesis. On the other hand, my overall project is to evaluate research, so it doesn’t fit into methodology A. But then I’m not sure my overall project would fit neatly into any of the three methodologies you describe.
    For example, Methodology B seems to be something like the following. Propose a theory (e.g. “concepts are sufficient for self-consciousness”), then test it by looking at data derived from research into (for example) MSR. But, what I propose is not testing a theory, but evaluating the research itself. That is, I do not want to say “MSR is one example of self-consciousness (or one type of self-consciousness or one level of self-consciousness, or whatever) so let’s see if MSR research data support the theory.” Instead I want to say something like “the CP Hypothesis provides a common framework for evaluating the research into self-consciousness; let’s use it as a yardstick for comparing the validity of different research paradigms.” Of course, this relies on the parties concerned embracing the CP Hypothesis, and my CO2 paper is an attempt to encourage this.

  4. Many thanks, Stuart, for your thoughtful comments dated Feb 25.

    Early on Stuart says “Certainly many, perhaps all, of our concepts are intertwined in a way which makes an argument from concept possession to self-concept possession plausible for possessors of the sorts of concepts we possess”, which I think I can take as at least a partial endorsement of the CP Hypothesis…
    Stuart says we are grappling with the possibility that there might be organisms which harbour concepts which don’t readily map to our concepts and which needn’t presuppose agency. There are a couple of ways to read this and perhaps Stuart will clarify which of these is meant, but in any case I think both deserve a response.
    The first way is in the sense that there may be animals that have concepts in much the same way as we humans understand ‘concepts’, but whose concepts are not possessed by any humans. Let me propose an example: perhaps there are animals that understand certain aspects of their environment which allow them to navigate great distances across the globe. But these are aspects of the environment that no humans have an understanding of – so this discounts environmental cues such as the position of the stars and the Earth’s magnetic field. The related concepts possessed by these animals allow them to infer their current position and choose the direction they must go in. These concepts do not map onto any human concepts. If this is the sense Stuart means, then my response is that these concepts should be subject to the same reasoning I used in justifying the CP Hypothesis. In humans too, though most likely there is much overlap in our clouds of concepts, there are some whose concepts do not map onto those of others. To use a similar example, consider seafarers of old who were capable of navigating by the stars. I could not perform this feat, and it is likely that these ancient mariners had concepts that many of us today do not possess. Nevertheless, my analysis did not depend on any specific concepts or clouds of concepts, but only asserted that any such cloud must include the self-concept. So long as the concepts possessed by the organisms in Stuart’s comments are concepts in the same sense as we humans think of them, then the same principles used in the CP Hypothesis analysis apply to them, too.

    Another way to understand Stuart’s comment is that some animals (say, gorillas) might possess ‘concepts’ that are not the same as concepts as we humans think of them. That is, they do not have the same properties as human concepts, in which case they might not be subject to my analysis in the CP Hypothesis. (As an aside, this is reminiscent of Mitchell’s [1994] warning that apes and other organisms may have self-understandings never dreamed of by humans.) Since these things do not have the same properties as human concepts we should not refer to them as concepts at all – for now let’s call them kongcepts. Now we need to ask how kongcepts might fit into the scheme of things. In my paper I described two mechanisms by which stimuli might induce behaviour: via a stimulus/response (SR) paradigm or via the mediation of concepts (implying some level of inferential reasoning). In the SR mechanism I included (for want of a better expression) ‘genetic hard-wiring’ as well as associative learning (which might loosely be described as a rewiring). The force of Stuart’s objection is in the suggestion that kongcepts might present another alternative mechanism, one which does not imply inferential reasoning but also does not fit into the SR mould. This is an interesting line of inquiry and I invite further comments on this idea. In the meantime, I can think of at least three lines of defence against the possibility of kongcepts. First, the existence of kongcepts is a postulate for which there is no evidence, unlike the case for SR and concept-mediated behaviour. Second, kongcepts are not the most parsimonious explanation for a mechanism explaining behaviour. Third, if kongcepts do exist in animals it is reasonable to suggest that humans would have retained a remnant of this capacity during evolution, in which case we would know about them.

    Stuart remarks that Dennett for one suspects there is no principled distinction between the primitive intentionality of a thermostat and our intentionality. I am a great admirer of Dennett’s work, but to this remark I must respond “so, what’s it like to be a thermostat?”

  5. Hi Stef. Thank you for a thoughtful and thorough presentation.

    One of the ideas you brought up that has stuck with me is the idea of a possible missing developmental level between Level 2 and Level 3, i.e. an organismic entity “where we only find Intelligence OR only Self-Consciousness awareness, but not both together”. Although your elucidation centers on organismic things (humans, animals, primitive cellular lifeforms), one might be able to consider artificial intelligence, or highly complex computer programs, as a model of sorts for what a possible Level 2.5 might look like, and some of the ramifications that might have for your convergence of theories.

    A.I. has now progressed to the point where programs can simulate learning, and in some cases arguably learn new processes. They are entities with a kind of manufactured intelligence, but they lack a self-conscious mode.

    I propose this under the condition that an entity with artificial intelligence is used solely as a prototype to compare your theoretical movements against, specifically the actions of what is defined in your talk as “Intelligent Behavior”. As A.I. is not learned intelligence, but rather chosen, assembled, and activated data acting as intelligence, approaching computational models of this sort may be tricky from a conceptual perspective. However, some instances of A.I. development may enable one to see how an entity without self-consciousness, yet with some kind of “intelligence”, might operate. The drawback is that the “intelligence” in question would be man-made, so not an ideal model to compare against true living things.

  6. Hi Brian,

    Great comment, thank you. Artificial intelligence was another topic cut from the full-length version of the paper to fit the CO2 size guidelines and I welcome this opportunity to bring it into the discussion, albeit in condensed form. If AI does represent a model for a level between my levels 2 and 3, being a case of true intelligence without self-consciousness, then it presents a threat to the CP Hypothesis. I think that’s the case even if, as you caution, being artificial diminishes its value as a model for comparison with organisms.

    We must first question whether AI truly is, or will ever be, ‘intelligence’. Even at AI’s current level of sophistication, artificial intelligence may be a misnomer – one that has endured since the earliest days of chess-playing programs. In my rough characterisation of intelligence I advocated flexibility of behaviour as a key marker. Computer programs designed using algorithmic logic would not count as flexible: they are not making choices but following rules. (Only in the loosest sense of the word could an ‘IF’ statement in computer languages be considered a choice.) In my paper I presented ‘concept possession’ as the core element in intelligent behaviour – do computers possess concepts? Certainly computer programs engage in a form of symbol processing, but they do not understand the symbols; the processing they carry out has been programmed by us humans.
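
    To make the ‘rules, not choices’ point concrete, here is a toy Python sketch (my own illustration, not anything from the paper or from any real AI system) of what an ‘IF’ statement amounts to: the branch taken is fixed in advance by the programmer, and the symbols mean nothing to the program itself.

        # Toy illustration: a "decision" that is really just rule-following.
        # The program shuffles symbols it does not understand; every branch
        # was fixed in advance by the programmer.

        def respond_to(symbol: str) -> str:
            # An IF statement is a "choice" only in the loosest sense: the
            # mapping from stimulus to response is exhausted by these rules.
            if symbol == "PREDATOR":
                return "CLIMB_TREE"
            elif symbol == "FOOD":
                return "APPROACH"
            else:
                return "IGNORE"

        print(respond_to("PREDATOR"))  # always CLIMB_TREE; no deliberation occurred

    However the output is produced, nothing in the program grasps what ‘PREDATOR’ refers to; the symbol could be renamed ‘X17’ without changing anything.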

    Some non-algorithmic computer programs, such as those based on neural networks, appear to mimic the learning capacity of some organisms. I wonder, though, if this isn’t more a model for level 2 organisms rather than level ‘2.5’. I still categorise associative learning, a capacity I attribute to level 2 organisms, as essentially stimulus-response type behaviour. I’m not sure that the most sophisticated AI learning systems have yet gone beyond that capability.
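
    A minimal sketch of what I mean, assuming a simple delta-rule learner (again my own toy example, not drawn from the paper or from any particular AI system): the system ‘rewires’ its stimulus–response association through experience, but what results is still a mapping from stimulus to response.

        # Minimal sketch of associative (delta-rule) learning: the system
        # "rewires" a stimulus-response association through experience,
        # but the end product is still SR-type behaviour.

        def train(pairs, epochs=50, lr=0.1):
            w, b = 0.0, 0.0
            for _ in range(epochs):
                for stimulus, response in pairs:
                    error = response - (w * stimulus + b)
                    w += lr * error * stimulus  # strengthen or weaken the association
                    b += lr * error
            return w, b

        # Learn to associate stimulus intensity with response strength.
        w, b = train([(0.0, 0.0), (1.0, 1.0)])
        print(round(w * 1.0 + b, 2))  # ~1.0: a learned reflex, not an inference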

    And yet, despite the foregoing comments, I do not exclude the possibility of one day achieving true machine intelligence. But if we do ever achieve this feat, then I think we will simultaneously have created artificial self-consciousness. A similar position is taken by Dennett in Consciousness Explained, at least in terms of achieving artificial consciousness. If artificial self-consciousness is possible, then the CP Hypothesis is safe from the threat posed by artificial intelligence.

  7. Stef, thanks a lot for your thoughtful reply to my commentary; I appreciate your taking the time to clarify a few things. My apologies for my tardiness in my reply here; there have been a lot of rather unexpected things happening in my life these past few days, so that I didn’t find enough time to participate as much in this conference as I would have liked.

    Anyway, a couple of things I’d like to discuss with regard to your reply:

    – The Centrality of Agency: I’m not sure whether I quite follow your argument. I can see how on your notion of agency metacognition is a necessary condition for intentional action. I’m not sure though whether it follows that “to be aware of oneself as an agent implies awareness of one’s metacognition”. Intentional action might require the possession of metacognitive abilities, but does it also require the awareness of the possession of these abilities? This seems to be a higher-level awareness (meta-metacognition, if you like; see below for a similar point). But in any case, in my original commentary I was wondering whether one could not be aware of oneself as, say, a perceiver or bearer of reactive attitudes, without being aware of oneself as an agent. In other words, is it not conceivable that there are forms of self-awareness that do not imply awareness of oneself as an agent? (This, by the way, would also seem to provide a solution to the worry that not all concepts imply possession of the concept of agency.)

    – Intelligence and self-consciousness: What you say in your reply doesn’t really seem to add anything to what you said in your original paper. I understand that by intelligence you mean flexible behavior, which you contrast with stimulus-response behavior. But in my comment I suggested that this way of describing it might not be precise enough. In particular, there seems to be quite a bit of room between pure SR behavior on the one hand and conscious (conceptual) deliberation on the other. In other words, the contrast you draw might be over-simplified. This connects to the next point, namely

    – Mental Representation: It might be possible that mental representations sometimes lead to non-intentional behavior. But that’s not the point I was trying to argue against. The question I have is what kind of justification there is for us, as theorists, to ascribe mental representations to animals. Now, at least on some philosophical theories – Bermúdez’ theory, which you take to be compatible with yours, being one of them – you are generally justified in ascribing mental representations to an organism if and only if that organism is capable of displaying intentional (i.e. not SR-based) behavior. In fact, this is Bermúdez’ reason for arguing for the notion of nonconceptual content in the first place. For according to him, certain non-linguistic beings (such as animals and infants) cannot be attributed with concept possession, but are nonetheless capable of intentional behavior. Thus, in order to be able to explain this behavior, we have to assume that they possess mental representations with nonconceptual content. But if that is the case, then there seems to be a problem for your description of level 2 organisms as possessing mental representations on the one hand, but not being able to display intentional behavior on the other hand. (This seems to be especially problematic if you want to argue that your theory is supported by Bermúdez’ arguments.)

    – What Behavior indicates concept possession: The two suggestions you make here are very interesting indeed, and I look forward to reading the actual arguments supporting them. So keep me posted on that!

    – Perception does not imply conception: Well, I guess I tend to side with James here in thinking that the notion of “existential self-consciousness” is rather unclear and certainly not something that one can intuitively grasp or agree on. In fact, quite a few philosophers do seem to think (including, again, Bermúdez) that if there is self-perception in perception, as you claim there is, then this should count as a form of (nonconceptual) self-consciousness. You will say that this is not your definition of self-consciousness, but I agree with James that you need to provide more explanation for what exactly your definition of self-consciousness is, and why we should accept it or prefer it to others. One problem I see with your notion of existential self-consciousness is that it is simply hard to see how a non-linguistic being could ponder the meaning of life. So your definition seems to exclude the possibility of these beings possessing self-consciousness from the start. Now obviously this is not your intention, which is why I think that your choice of words here is somewhat unfortunate.

    – Intentional and Non-Intentional Agency: “Kristina questions the validity of my ‘web of concepts’ discussion based on the objection that in my example of the apprehension of beauty, this action is not intentional. […] But my argument does not rely on intentional agency at this point.” Well, according to Davidson (and many other philosophers) there actually is a conceptual tie between actions and intentions – this is precisely what distinguishes them from mere behavior, or activity, or “happenings”. Maybe you even want to call all of these actions, but then you should at least distinguish between different levels of action. Otherwise you risk completely ignoring the extremely rich literature on the philosophy of action. In any case, even if you want to just speak of “doings” (however these might be defined), it is not clear to me that perceiving something, or having a reactive attitude, is something that you “do” (except in a very weak sense). (This connects to the first point about the centrality of the notion of agency – you might be aware of yourself as an agent, but you might also be aware of yourself as a perceiver, and it is far from obvious that these are the same.) So in sum, it just does not seem obvious to me that “the apprehension of beauty might not be an intentional action, but Homer still knows that it must be an agent that does the apprehending (or that has the feeling of fear or thoughts about abstract objects, possible worlds and mathematical constructs).”
    But in any case, as you say yourself in your reply, the crucial question is “does Homer know he is an intentional agent?” (So even if we could count perceptions, behaviors and “happenings” in general as forms of action, this wouldn’t do much for you, as the crucial point is intentional agency.) Your response is that if he is an intentional agent then he must know that he is, for intentional agency implies control, and this implies awareness of having control. I have two questions with regard to this: (1) If this is the line of argument you want to take, then why worry about the connection between concepts and self-consciousness at all? The point about the relation between intentional action and metacognition seems to open quite a different line of argument, so I am now confused as to where the focus of your argumentation lies. (Even if concept possession turns out to be sufficient for deliberation and thus for control and intentional action (and I am not sure deliberation is actually sufficient for control), this does not show that concept possession is also necessary for control. This relates back to a point made earlier about the ability of organisms with nonconceptual mental representations to display intentional behavior.) What you seem to be arguing here is that intentional agency implies self-consciousness (which might be related to a point Susan Hurley made in her 1998 paper “Nonconceptual self-consciousness and agency”), but if you want to make this argument, then it is not clear to me what role concepts play (or have to play) in this.
    (2) As I mentioned above in my post, there seems to be a distinction between being able to control one’s behavior and knowing that one is able to control it. So while it might be true that intentional behavior requires the ability to access and control one’s intentions, it seems to be yet another level of representation to be aware of the fact that one is able to access and control one’s intentions (meta-metacognition). (And I take it that the former (metacognition) is sufficient for nonconceptual self-consciousness in Hurley’s sense, but that the latter (meta-metacognition) might be required for your sense of self-consciousness.) But if we can make this distinction, then it simply isn’t the case that Homer “must know that he himself is an intentional agent if indeed he is one”.

    Sorry for this lengthy post, and my apologies if I didn’t manage to make myself very clear. I hope that at least some of this might be helpful.

  8. Sorry, I think that last paragraph was not clear at all. I guess what I am trying to say is that your argument about any concept necessarily entailing the concept of action (and hence of agency and hence of oneself as an agent) does not seem to go through, at least not for action as it is commonly understood. At the very best, it gives you an extremely weak notion of action. And since what you’re after is intentional action, the argument doesn’t seem to achieve what it’s supposed to achieve.

    So your response seems to be to switch to a different strategy, namely by arguing that intentional action requires the ability to control one’s action and that this in turn requires the ability to be aware that one is controlling one’s action.

    Now concepts come back in through the back-door, so to speak, as you seem to suggest that the ability to control one’s behavior requires the ability to deliberate and that deliberation in turn requires conceptual abilities. Did I get you right here, or did I completely misunderstand? (So if I got you right your notion of action in general seems to be very weak, whereas your notion of intentionality seems rather strong; you might want to address this in the paper.)

    I’m not sure whether this line of argument would go through, but I think you should perhaps say a bit more about this, if it is indeed the case you want to make.

  9. Hi Kristina,
    Thanks once again for your comments. In the following replies I will be very brief; consider these comments down payments on more considered explications of my position, to be incorporated into the next iteration of the paper.
    Can there be (‘full-fledged’) self-awareness without awareness of own agency? I say no. But given the comments from several commentators, I accept that I need to provide more explanation for what exactly my definition of self-consciousness is, and why we should accept it or prefer it to others.
    Is there room between SR and intelligence? Yes, no doubt. I’m sure it is not a sharp transition despite the way I simplified it for the sake of clarity. But the same could be said for self-consciousness. So my intuition that these things are correlated is not inconsistent with a gradual transition from SR to intelligence; there should be a corresponding gradation from non-self-conscious to self-conscious states.
    What justification is there for ascribing mental representations to animals? Well, OK, we may never know for sure (short of using a Vulcan mind meld). But, I suspect most people will agree (with me) that at least the great apes are likely to have mental representations similar to ours as humans. And if that is the case then why not the species on the next lower rung of the evolutionary ladder? We can’t guess where to draw the line, but if any non-human species possess mental representations then the point is made. It is quite possible that what I describe as ‘mental representation’ is not what others conceive when they use that term. My usual example is a visual image, which I suspect is common enough in the animal kingdom. I can see no reason why an organism that is capable of experiencing a visual image must be considered capable of intentional behaviour.
    I also accept I need to clarify how I view the different ‘levels’ of action and agency. Kristina says “it is not clear to me that perceiving something, or having a reactive attitude is something that you “do” (except in a very weak sense).” But my main point here is that even if some ‘activities’ are passive actions, there must still be a doer (that is, an agent, though not necessarily an intentional agent). In regard to control of behaviour, I have only considered up to the level of metacognition, not meta-metacognition. I do think that an agent must necessarily know that it has control of its actions in order to make choices. I need to examine my wording more closely, but I saw this in terms of metacognition, not meta-metacognition.

  10. Hi Stef and commentators. I’m a senior psychology major at Columbia College taking Hakwan’s Consciousness seminar. I found this presentation succinct, clear, and cogent. I think that your analysis of consciousness as a multi-level process helps clarify what consciousness really is. I do have some empirical evidence that supports your idea. In terms of language, some evolutionary theorists (namely Herbert Terrace) believe that language evolved only as a symptom of consciousness – that language helps convey people’s thoughts, and only when they understand themselves to be self-conscious can they convey their own thoughts. Although chimpanzees have the physiology necessary for language, they are unable to use language because they do not have a fully formed self-consciousness.

  11. In accordance with Stef’s statement that “self-consciousness is about awareness of the self as a self, an intentional agent” (p3), it seems that if people have self-consciousness and thus view themselves as agents, they then must view other human beings as possessing agency. An interesting case is autism, in which people view themselves as agents but not other people – in other words, they are mind-blind and cannot adopt the intentional stance towards other people. As Baron-Cohen writes in the introduction to a book on autism, “Imagine what your world would be like if you were aware of physical things but were blind to the existence of mental things.” Autistic people lack the ability to mind-read, so they cannot perceive other people as agents. However, they can perceive themselves as agents. If self-consciousness depends on the concept of oneself as being an agent, then must humans also have the concept of other humans as being agents as well?

    Stef’s main argument is that “concept possession is not only necessary but sufficient for self-consciousness” (p5). The autism case seems to contradict the second key property of concepts, that “the grasping of a single concept requires the grasping of an entire body of concepts” (p6), since autistic people can be self-conscious at the same time that they do not apply the concept [agent] to other human beings.

  12. Hi Stef. Thanks for responding to my comment!

    I think you’ve made an important distinction that has allowed me to think about my own comment in a different way. You are right, there is a difference in considering that “[c]omputer programs designed using algorithmic logic would not count as flexible: they are not making choices but following rules”. Following rules is clearly something to keep primarily in mind – it does discount any association with agentic action. This, indeed, would not beget a type of concept possession, because an algorithmic program is dependent on processing symbols, not interpreting what the symbols mean.

    Though perhaps this raises the question of defining the difference between “interpreting” and “processing” a symbol. While a computer program might be written by humans to read a symbol, it may have a codebase, or library if you will, with further data and method definitions that allow for an interpretation of how a symbol in the computer code should be used, when it should be used, and for which type of system process. Could this be considered a type of interpretation, maybe?
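
    Roughly what I’m picturing, as a toy Python sketch (entirely my own invention, not any real system): the program consults a table of definitions to decide how each symbol should be used, which feels like interpretation – yet the table is itself just more programmer-supplied rules.

        # Toy sketch: "interpretation" as table lookup. The program consults
        # definitions to decide how a symbol should be used and what to do
        # with it, but those definitions are just more pre-written rules.

        symbol_table = {
            "+": {"kind": "operator", "apply": lambda a, b: a + b},
            "*": {"kind": "operator", "apply": lambda a, b: a * b},
        }

        def interpret(symbol, a, b):
            entry = symbol_table[symbol]  # look up how this symbol is to be used
            return entry["apply"](a, b)   # dispatch accordingly

        print(interpret("+", 2, 3))  # 5 -- interpretation, or just more processing?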

    Perhaps then this would allow us to consider with confidence whether A.I. belongs in a Level 2 category or a possible Level 2.5…? Alas, I’m addressing this at the very end of the conference…

    Again, thanks for a thoughtful presentation and continued dialogue over the past few days!

  13. Closing Comments
    In this, my final post for the conference, I want to do three things: first, some acknowledgements of conference participants; then a couple of brief comments directed towards last-minute commentators; and finally a clarification of my notion of self-consciousness.

    Acknowledgements
    First I must heartily congratulate Richard Brown for a thoroughly well-organised and enjoyable event that has been highly stimulating. I’m already looking forward to next year’s event. Special thanks to pre-selected commentators Kristina Musholt and James Dow; their insights and challenges will help me sharpen my focus on those areas of my paper that require strengthening. Thanks also to all those who posted comments and engaged with the material. I would be very happy to stay in touch with you all. My email address is stephane.savanah@gmail.com.

    To Brian O’Hagan
    Thanks, Brian, for your follow up comments. In my response I know I will open up a can of worms and travel a path I was not intending to venture down at this point. But, I can’t resist…
    Your worry about the difference between ‘interpreting’ and ‘processing’ a symbol is pivotal to the reductive/non-reductive dichotomy in the consciousness debate. I made a distinction between algorithmic programs and neural networks, but even neural network simulations can themselves be programmed using algorithmic programming methods. In that sense, even neural networks might be said to be following rules. So ‘interpreting symbols’ (a capacity which we can think of as equivalent to concept possession) might be just a matter of following rules (just processing symbols), the rules being the programming statements used to simulate the neural networks.
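
    To illustrate (a toy sketch of my own, not taken from the paper): even a ‘neural network’ unwinds, at bottom, into ordinary algorithmic statements – loops, multiplications and a squashing rule – so its apparent interpretation of inputs can be redescribed as rule-following all the way down.

        # Toy sketch: a neural-network forward pass is, underneath, nothing
        # but ordinary algorithmic statements -- loops and fixed arithmetic.

        import math

        def forward(inputs, weights, bias):
            total = bias
            for x, w in zip(inputs, weights):
                total += x * w                      # weighted sum: plain arithmetic
            return 1.0 / (1.0 + math.exp(-total))   # sigmoid: another fixed rule

        print(forward([0.5, 0.2], [1.0, -0.4], 0.1))  # the "network" just followed rules
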
    Similarly, concept possession in a human or animal could be thought of as the following of rules, in this case the biological rules of brain physiology. (These themselves can also be reduced to the laws of chemistry and then further down to the level of the laws of physics). This reductive approach might lead one to question whether ‘intelligence’ is anything more than illusory. I’ll say no more on this for now, other than to say that I think emergence theories come to the rescue at this point…

    To Shelly Zhu
    Hi Shelly and thanks for your comments. I think Terrace is probably right that language depends on self-consciousness. However, I would not jump to the conclusion that chimpanzees do not have self-consciousness (in fact I think they do). If self-consciousness precedes language in evolution, then it is conceivable that chimpanzees have evolved the former capacity but not the latter. Even if language were indeed necessary for self-consciousness, one would have to take two more things into consideration: chimpanzees’ capacity for sign language and the possibility of a type of ‘language of thought’ (in either Fodor’s version or another) which chimpanzees and other animals might possess.
    I do not think autism poses a threat to the CP Hypothesis. Firstly, I’m not sure that a lack of theory of mind implies non-recognition of others’ agency. Just because an autistic person might not be able to mind-read or empathise with others does not necessarily mean that he considers those others as non-agentic. Secondly, my view of self-consciousness does not rely on the recognition of agency in others, but only on the recognition of agency in oneself. In my BLADE example I did suggest that if Homer was able to recognise the agency of others then he must recognise it in himself. However, that does not mean that if he could not recognise it in others he therefore could not recognise it in himself. Self-consciousness does not depend on a theory of mind, though (probably) theory of mind does depend on self-consciousness. In the BLADE example I could have argued from Homer’s concept of AGENCY directly to his perception of himself as an agent, without the need to assume that he perceives the agency of others.

    A final comment on my notion of self-consciousness
    It is clear that some commentators are still unconvinced by, and/or unclear about, my definition of self-consciousness, especially when described as ‘existential’ self-consciousness. No doubt my comment in which I waxed lyrical on those of us who ‘ponder the meaning of life’ did not help matters, for, as was pointed out to me privately, this may have given readers the wrong impression that I think all humans must do this or not be self-conscious – which is not my intention. What I really mean to say is that, as a species, humans have the capacity to understand that they exist. And not just that they exist in the same way as other physical objects in the world. Humans have the capacity to understand that they exist as psychological subjects as well as physical objects. So, for example, self-perception by organisms, where this just means perception of themselves as objects in the world, does not quite make the grade for self-consciousness in my view; it has to be perception of themselves as psychological subjects. I intend to clarify my meaning in the next version of the paper, which, as I mentioned previously, will benefit greatly from the commentary received during this conference.
