Empty Thoughts: An Explanatory Problem for Higher-Order Theories of Consciousness

Presenter: Adrienne Prettyman, University of Toronto

Commentator: Richard Brown, CUNY LaGuardia


  1. HOTs and Mental Appearance: A Reply to Prettyman
    David Rosenthal
    Consciousness Online 4, February 2012

    There are a few things I’d like to say in reply to Adrienne Prettyman’s interesting paper, “Empty Thoughts: An Explanatory Problem for Higher-Order Theories of Consciousness,” in which she discusses the objection to higher-order theories from the possibility, which those theories leave open, that a higher-order awareness represents one as being in a state that one is not actually in.

    A preliminary remark: Much of Adrienne’s useful paper focuses on whether Charles Bonnet syndrome, appealed to by Lau & Rosenthal (2011) and by Lau & Brown (forthcoming), provides empirical evidence for the occurrence of empty HOTs. I’ll say something very brief about that at the end, but my main concern has to do with two other matters. One is what the higher-order-thought (HOT) theory does and does not say; the other is whether the objection from empty HOTs has any merit at all–independent of such empirical findings as Charles Bonnet syndrome.

    1. Adrienne largely follows Block (2011) in her formulation of that objection. That’s of course fine. But she also follows Block (2011) in her statement of what higher-order theories, in particular my own HOT theory, assert. And that’s problematic, especially given the several ways in which I note in my (2011) reply to Block that he has misrepresented my position.

    2. One example is Adrienne’s claim that HOT theory endorses a biconditional that she calls (HOTs):

    (HOTs) A mental state is conscious if and only if that state is represented
    by a further mental state, caused non-inferentially (2).

    This is inaccurate in at least two respects. One concerns causation. Though the state a HOT represents one as being in probably often figures among the causal antecedents of that HOT, the theory itself makes no commitment about the aetiology of the higher-order state–“a fine point” that Block says he “will ignore” (2011, p. 421).

    The idea that the theory asserts that the HOT is “caused non-inferentially” presumably derives from the theory’s provision that it not seem subjectively that the higher-order state results from any inference or observation. The higher-order state might, on the theory, causally result from inference or observation; it just cannot seem subjectively to do so. Indeed, the higher-order state might causally result from wholly extraneous factors.

    Why does this matter in the current context? Only to remove a mistaken source of thinking that the objection from empty HOTs has some merit. The HOT functions solely to make one aware of oneself as being in the target state. It’s not, as some have urged, a matter of a HOT’s monitoring an actually occurring first-order state. If it were, such monitoring would require a causal connection between the HOT and the first-order state, and the first-order state could not then fail to occur.

    3. Adrienne’s one appeal to my (2011) reply to Block is to quote my saying that if a HOT does occur without the first-order state the HOT represents one as being in, one “will still be subjectively aware of oneself as being in whatever state the HOT describes one as being in” (432; her p. 4).

    That is indeed my view, assuming that the HOT does not subjectively seem to result from any inference or observation. As I argue in (2011) and previously, a conscious state is a state that one is aware of oneself as being in–aware in a way that’s subjectively independent of observation and inference. That’s a second respect in which Adrienne’s biconditional, (HOTs), does not accurately reflect my view. A conscious state need not, on my view, be a state that actually occurs.

    Why is that? Consciousness is a matter of mental appearance–of how our mental lives appear to us subjectively. No view of consciousness that does not respect that aspect of consciousness can be about any commonsense notion of consciousness. A conscious state is simply a state that one appears subjectively to be in. Nothing need be said about whether it is also a state one is actually in.

    4. It’s presumably beyond serious controversy that conscious states are states one seems subjectively to be in. The only issue is whether, in addition to that, one must actually be in a state for it to qualify as a conscious state.

    Why would one be tempted to think that? One reason might be that one can’t talk about a state unless it actually occurs. But that’s plainly not so; we can and do talk and think about Santa Claus and the Easter Bunny, and some even think at least for a time that they exist. Similarly, conscious states that don’t actually occur would be states that one seems subjectively to be in; they would be objects of subjective appearance. And objects of appearance plainly need not exist for us to talk about them, describe their apparent features, and develop satisfactory theories of why the relevant appearances occur as they do.

    5. Things need not exist or occur for those things to appear a particular way to one. That’s plain for the general case, and it’s undeniable that conscious states are states that one appears subjectively to be in. So one needs some reason for exportation–for the inference that because one appears to be in a particular mental state, there is an actual mental state that one appears to be in.

    I’ll return to that question in a moment. But first let me note one consequence of failing to take account of the need for a reason to sustain that inference. Consciousness is mental appearance; a conscious state is a state that one appears subjectively to be in. But Adrienne simply assumes that this implies that there is some actual state that has the property of being the conscious state. And that, it seems, leads her to claim that “[a]ccording to Rosenthal, some empty higher-order thoughts are conscious” (4). If there’s no actual first-order state, the only candidate for an actual state to have that property is the higher-order state. Here again she follows Block, writing, “[i]n the case of empty thoughts, Block points out, there is no first-order state which can be rendered conscious by the higher-order representation” (3).

    But the property of a state’s being conscious is not like the property of an object’s being round or red; it’s not properly speaking a property at all. It’s simply an aspect of the way our mental lives appear to us subjectively. If one is to hold that it’s more than that, one needs an argument or a reason.

    6. I’ve argued that, as with appearances in general, the mental appearance that occurs when one is subjectively aware of oneself as being in a mental state need not correspond to reality; there need be no state that matches the apparent state one takes oneself subjectively to be in. Still, there is a widely adopted though wholly unargued assumption that, whereas appearance and reality can diverge in general, when it comes to the mental they must coincide.

    Why would that be? Indeed, why might one even believe that?

    Dennett dismisses the possibility of divergence between mental appearance and reality by lampooning the possibility of something’s seeming to seem a particular way, as against its merely seeming that way (1991, p. 132). The idea is that since a first-order state represents how things seem, a higher-order state would reflect how that first-order seeming seems (1991, ch. 10). But describing something in terms that sound silly does not show that it cannot occur.

    Those who follow Descartes in maintaining that the mental and the conscious must coincide tend accordingly to hold that mental reality and mental appearance do so as well, since consciousness is after all mental appearance. I suspect that this outdated Cartesian assumption still often operates tacitly in the background of today’s arguments about consciousness. But not only is the assumption itself widely discredited; it is hard to imagine any non-question-begging reason for adopting it.

    Of course it seems subjectively as though appearance and reality always coincide in the case of conscious mental phenomena. But that’s not relevant. When it appears to one visually that there’s a round, red object in front of one, reality also seems to correspond to appearance; that’s what it is for something to appear a particular way to one. Appearance simply is the way reality seems to us.

    Perhaps Adrienne has some other, non-question-begging reason for the assumption of exportation–the inference from one’s appearing to be in a particular mental state to there being an actual mental state that one appears to be in. But I did not find it in the paper.

    And it’s worth stressing, as I did in (2011), that this is really all that matters for the objection from so-called empty HOTs. All that matters is whether a conscious state is simply a state that one appears to be in, independent of whether it is in addition a state one is actually in. One cannot simply assume that, nor take it as an implication of higher-order theories by saddling them with Adrienne’s biconditional, (HOTs).

    7. Without the claim that appearance and reality must coincide in the case of mental phenomena, there is no basis for inferring from its appearing subjectively that one is in a particular mental state to one’s actually being in that state. And without any such basis, there’s also nothing to object to in the possibility of HOTs that represent one as being in a state that one is not actually in.

    Higher-order theories do not imply or even suggest that such cases ever actually occur; they simply leave it open that they may. Charles Bonnet syndrome, however, does invite being interpreted as just such a case. As Lau & Rosenthal (2011) note in discussing Charles Bonnet syndrome, “[i]f conscious experience can exist in the absence of first-order representations, the qualitative character of conscious awareness might depend entirely on higher-order representations” (371). Lau & Brown (forthcoming) usefully follow up at greater length.

    Adrienne’s conclusion about Charles Bonnet syndrome, as I understand it, is that that interpretation of the syndrome simply falls prey to the original objection from empty HOTs. But without some way to sustain the inference from mental appearance to mental reality, that conclusion, along with the objection itself, is unfounded.

    Block, Ned (2011), “The Higher Order Approach to Consciousness Is Defunct”. Analysis 71, 3 (July 2011): 419-431.

    Dennett, Daniel C. (1991), Consciousness Explained, Boston: Little, Brown & Co., 1991.

    Lau, Hakwan, & David Rosenthal (2011), “Empirical Support for Higher-Order Theories of Conscious Awareness,” Trends in Cognitive Sciences, 15, 8 (August 2011): 365-373.

    Lau, Hakwan, & Richard Brown (forthcoming), “The Emperor’s New Phenomenology? The Empirical Case for Conscious Experience without First-Order Representations.”

    Prettyman, Adrienne (2012), “Empty Thoughts: An Explanatory Problem for Higher-Order Theories of Consciousness,” Consciousness Online.

    Rosenthal, David (2011), “Exaggerated Reports: Reply to Block”, Analysis 71, 3 (July 2011): 431-437.

  2. A few brief thoughts concerning Adrienne’s argument that empty thoughts undermine HOT-theory’s explanatory power (regarding the explanatory potential of HOT-theory, my thoughts will track Richard’s somewhat). Adrienne considers the possibility that ‘all that is required is that some higher-order state represents as though there were a first-order state.’ (6) Weisberg (2011) suggests something very much like this, as do Lau & Rosenthal (2011). Adrienne thinks this option is a non-starter. Why?

    Adrienne holds that this response ‘undermines the explanatory aim of higher-order theories’ (6). For assume that some empty HOTs are conscious. How does a HOT-theory explain this? Not causally. Not in terms of an actual relation between a HOT and a first-order thought. So, ‘Some empty higher-order states are conscious, but we don’t have an account of what makes them conscious’ (7).

    This phrase ‘what makes them conscious’ can be taken in different ways. Here is a reading that I find plausible: ‘what explains why they are conscious while other thoughts are not.’ But of course HOT theorists do have an account of why HOTs are conscious while other thoughts are not. I won’t try to do justice to the account in such a short comment, but the account involves the content of the HOT, which differs in important ways from the content of first-order thoughts.

    One might worry that appeals to content will not explain how a HOT is phenomenally conscious – how such a state can have the elusive property of ‘what-it’s-like-ness.’ Of course, HOT proponents characterize ‘what-it’s-like-ness’ in terms of subjective appearances – how something seems to a subject. And as David notes in his comment, it is unclear why some further characterization (perhaps the addition of a property to the state in question) is needed. As I note in a forthcoming paper (Shepherd forthcoming), however, in Block’s exchange with Rosenthal and Weisberg, Block does seem to want something more. He points out, for example, that the term ‘seem’ can indicate a subjective appearance, or ‘a thought or in any case something cognitive rather than anything phenomenal’ (2011, p. 444). And he asserts that HOT-theory – presumably since it relies on the representational content of thoughts – only explains how things seem to a subject in the latter, non-phenomenal sense. But HOT proponents do not take themselves to explain consciousness in only a cognitive or non-phenomenal sense. HOT-theory attempts to give a representational account of subjective appearances. (In this regard, it is no different from same order theories.) Why think that the representational content of a higher-order thought cannot explain subjective appearances in the relevant sense? Those pressing the empty-thought issue, to my knowledge, have not said why. So: why think that?

    Block, N. (2011). Response to Rosenthal and Weisberg, Analysis, 71(3), pp. 443-448.
    Lau, H. & Rosenthal, D. (2011). Empirical Support for Higher-Order Theories of Conscious Awareness, Trends in Cognitive Sciences, 15(8), pp. 365-373.
    Shepherd, J. (forthcoming). Why Block Can’t Stand the HOT, Journal of Consciousness Studies.
    Weisberg, J. (2011). Misrepresenting consciousness, Philosophical Studies, 154, pp. 409-433.

  3. Hi everybody, very interesting paper and replies!
    I would like to ask a couple of questions about the details of the rare Charles Bonnet syndrome (if you know the answer), since a straightforward reply that comes to my mind (one that is not considered –on purpose I guess– by Adrienne) is that the relevant first-order representations do not constitutively depend on V1 activity.

    There is evidence suggesting that motion experiences require activity in V1. You mention that Charles Bonnet patients report vivid visual hallucinations of faces, persons, objects, and complex geometric patterns. But do they report motion experiences?

    Would it be possible for Lamme, in the light of cases like Charles Bonnet syndrome, to claim that the required loops in the visual cortex do not essentially involve activity in V1?

    What would prevent a first-order defender from rejecting the view that V1 is necessary for the representations that she postulates? Would this have a cost for Block’s project of defending the claim that phenomenal consciousness does not depend on cognitive access?

    For example, I think that Hakwan’s Bayesian model is compatible with a FO theory (if the relation between dlPFC and the activity in the visual cortex were merely causal). If this were right, lack of activity in V1 would be compatible with FO insofar as the strength of the signal at the input of the Bayesian filter were enough.

    I am puzzled by something that I have understood from David’s reply.

    I have understood that:
    A subject undergoes an experience as of a red apple if and only if the subject has the thought (HOT) that she is in qualitative state M (the qualitative state that corresponds to perceptions of red apples) and it seems subjectively to her that this thought does not causally result from inference or observation.

    If this were the right understanding, then there seems to be a prima facie problem of circularity. The reason is that what seems subjectively to oneself has to do with consciousness and hence with HOTs but what seems subjectively to oneself is part of the individuation conditions of the HOT.

  5. Hello All — Very Interesting Discussion! – Thought I’d also weigh in — Apologies for the length.
    Of course, I disagree strongly with Richard that my wide intrinsicality view (WIV) is “ad hoc” in holding that no state consciousness arises for misrepresentation cases or for empty/targetless HOTs (as Richard charges in his comments at the video times 9:15-11:30). For example, I argue at length in The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts (MIT Press, 2012) that the WIV has some advantages over standard HOT theory while of course still resembling HOT theory in important ways. (Some of this “misrepresentation” issue I had previously addressed in my 2004 book chapter in the John Benjamins anthology.) I think my view better accounts for the overall problem of misrepresentation, fits in better with what we know about the brain, can better handle the “problem of the rock,” among other things. But I won’t elaborate on these.
    More importantly here, I first make two main points (but see also the book excerpts below): (1) If so-called “empty thoughts” or “targetless HOTs” are themselves unconscious (as they would typically be), then how could they possibly also be conscious? How can a lone unconscious targetless HOT itself be conscious? I remain puzzled as to why these cases are discussed as if we are mainly referring to the usual unconscious HOT which accompanies first-order conscious states. (2) On the other hand, if empty HOTs are conscious HOTs (= introspection), then of course they can be conscious and can also be in error or have no target at all. This would be more like a hallucination case. But then there does need to be a 3rd order unconscious HOT. Preserving the so-called “mental appearances” is all well and good, but there can’t even be conscious mental appearances unless we have a conscious state in the first place.
    Thus, I reject Richard’s HOROR view. I do think that Richard gives up far too much with respect to the explanatory power for HOT theory. It seems to me that the main motivation/explanatory power of HOT theory is indeed undercut if one goes with something more like HOROR. HOT theory is then no longer really an account of (intransitive) state consciousness, which I thought was the main explanatory purpose of HOT theory. So I largely agree with Prettyman that “when the higher-order representation is empty, there is no first-order representation to be rendered conscious. So, according to HOT theory, there should be no conscious mental state at all.” Thus, I also agree with Block to this extent. However, I think that something very close to HOT theory can still work but the first step in this direction is to deny that there is any state consciousness when empty (unconscious) thoughts occur. I do not (like Block and perhaps Prettyman) see the empty HOT problem as a reason to abandon HOT theory, or at least something very close to it. [With regard to Charles Bonnet Syndrome, it may actually be a case of a targetless conscious HOT somewhat like the confabulation cases discussed below, though the causal etiology would be different and I may just not know enough about this rare disorder. This is perhaps what Prettyman has in mind, e.g. a conscious HOT directed at a memory.]
    Prettyman also says: “Now we discover that there are some cases in which there is a higher-order state, with no first-order state – empty thoughts. And some empty thoughts are conscious. How could a higher-order theory explain how this mental state is conscious? It can’t be that the first-order mental state is rendered conscious by the higher-order state, since there is no first-order state. So, if any state is conscious, it must be the empty higher-order state. But now we seem to be right back at the explanatory starting point. Some empty higher-order states are conscious, but we don’t have an account of what makes them conscious. It is not that they’re represented by some further, third-order state.” As I hope I make clearer below, I largely agree with her critique of the Rosenthal-Weisberg view. If, however, there are empty conscious HOTs (accompanied by 3rd order states), then we are really talking about hallucinatory or delusional introspective states which would have more to do with my point (2) above.
    Anyway, I paste in below excerpts from my 2012 book (ch. 4), the first part of which I also posted in the Sebastian session. (NOTE: I sometime use ‘MET’ (meta-psychological thought) instead of ‘HOT’ and ‘CMS’ for ‘conscious mental state’.)
    —————————————–
    4.2 Misrepresentation: A First Pass
    4.2.1 Levine’s Case
    With regard to the problem of misrepresentation, I focus first on the way that Levine (2001) presents this objection against all HO theories. He credits Neander (1998) for an earlier version of this objection under the heading of the “division of phenomenal labor.” The idea is that when “we are dealing with a representational relation between two states, the possibility of misrepresentation looms” (Levine 2001, 108). Levine argues that standard HOT theory cannot explain what would occur when the higher-order (HO) state misrepresents the lower-order (LO) state. The main example used is based on color perception, though the objection could presumably extend to other kinds of conscious states. Levine says:
    Suppose I am looking at my red diskette case, and therefore my visual system is in state R. According to HO, this is not sufficient for my having a conscious experience of red. It’s also necessary that I occupy a higher-order state, say HR, which represents my being in state R, and thus constitutes my being aware of having the reddish visual experience. . . . Suppose because of some neural misfiring (or whatever), I go into higher-order state HG, rather than HR. HG is the state whose representation content is that I’m having a greenish experience, what I normally have when in state G. The question is, what is the nature of my conscious experience in this case? My visual system is in state R, the normal response to red, but my higher-order state is HG, the normal response to being in state G, itself the normal response to green. Is my consciousness of the reddish or greenish variety? (Levine 2001, 108).
    Levine initially points out that we should reject two possible answers:
    Option 1 : The resulting conscious experience is of a greenish sort.
    Option 2 : The resulting conscious experience is of a reddish sort.
    I agree that options one and two are arbitrary and poorly motivated. Option one would make it seem as if “the first-order state plays no genuine role in determining the qualitative character of experience” (Levine 2001, 108). The main problem is that one wonders what the point of having both a LO and HO state is if only one of them determines the conscious experience. Moreover, HOT theory is supposed to be a theory of (intransitive) state consciousness; that is, the lower-order state is supposed to be the conscious one. On the other hand, if we choose option two, then we have the same problem, except now it becomes unclear what role the HO state plays. It would then seem that HOTs are generally not needed for conscious experience, which would obviously be disastrous for any HO theorist. Either way, then, options one and two seem to undermine the relational aspect of HOT theory. Thus Levine says: “When the higher-order state misrepresents the lower-order state, which content—higher-order or lower-order—determines the actual quality of experience? What this seems to show is that one can’t divorce the quality from the awareness of the quality” (2001, 168).
    It is important to point out here that Rosenthal defends Levine’s option one. For example, with respect to “targetless” HOTs, where there is no LO state at all, Rosenthal explains that the resulting conscious state might just be subjectively indistinguishable from one in which both occur (Rosenthal 1997, 744; cf. 2005, 217). I find this view highly implausible, as I have already mentioned. It also seems to me that since the HOT is itself unconscious, there would not be a conscious state at all unless there is also the accompanying LO state. We would merely have an unconscious HOT without a target state, which by itself cannot result in a conscious state. Levine says, “Doesn’t this give the game away? . . . Then conscious experience is not in the end a matter of a relation between two (non-conscious) states” (2001, 190). On the other hand, I argue that the self-reference and complexity of conscious states in the WIV rule out this kind of misrepresentation. If we have a MET but no M at all (or vice versa), then what would be the entire conscious state does not exist and thus cannot be conscious. A CMS will exist only when its two parts exist and the proper relation holds between them. Returning to the foregoing example, both Levine (2001, 108–109) and Neander (1998, 429–430) do recognize that other options are open to the HO theorist, but they quickly dismiss them. I [will] focus on Levine’s treatment of these alternatives and argue that they are more viable than he thinks.
    Option 3: “When this sort of case occurs, there is no consciousness at all” (Levine 2001, 108).
    Option 4: “A better option is to ensure correct representation by pinning the content of the higher-order state directly to the first-order state” (Levine 2001, 108).
    …………………………………
    4.2.3 More on Targetless HOTs
    Let us further examine cases where the HOT has no target at all. Rosenthal frequently refers to confabulation and dental fear as examples of targetless or “hallucinatory” HOTs. Confabulation typically involves (falsely) thinking that one is in an intentional state or, better, making erroneous claims with regard to the causes of one’s intentional states (Nisbett and Wilson 1977). Dental fear occurs when a dental patient seems to experience pain even when nerve damage or local anesthetic makes it impossible for such a pain to occur. Perhaps the patient’s fear has been mistaken for pain, but it may also be that the patient has a HOT about being in pain when in fact no pain is present. What I find most puzzling in this discussion is the implication that we are talking about possible misrepresentation within first-order conscious states. Rather, it seems to me that these cases involve fallible introspection, and thus misrepresentation at this level is not a problem at all. Both the WIV and HOT theory can and should acknowledge that one might be mistaken when one introspects. When one flounders around for an explanation in the case of confabulation, one seems to be rationalizing about one’s own behavior or mental states. This is presumably what occurs during some instances of introspection and results in one (falsely) believing that one has a particular mental state. Confabulation involves a process whereby one is searching, as it were, for an explanation of one’s behavior. But since no plausible introspective explanation arises, one tends to make one up, that is, to literally create or cause one instead. Indeed, Rosenthal sometimes refers to “confabulatory introspective awareness” (2005, 125). In short, then, the appearance/reality distinction still applies to introspection, but not within a complex conscious state. I elaborate on this theme in section 4.5.
    Much the same applies to the dental patient. Intense and fearful introspection can cause the patient to confuse fear with a pain or represent being in pain when there is no pain. However, it seems to me that another explanation is more plausible. Owing to the fear and expectations of the dental patient, this case is better explained via what Hill (1991) calls “activation.” As we saw in the previous chapter, introspection can actually involve the creation of a lower-order conscious state. It might just be that a genuine pain is created “top-down,” so to speak, and is thus felt by the patient. I can surely, via introspection, cause myself to have a desire for lasagna if I think about it for a minute or so. In any case, we can happily acknowledge that a conscious HOT (= introspection) can either have no target (and thus be fallible) or create a target state (and thus really result in a conscious state). But in neither case does this threaten the WIV. Importantly, however, the main misrepresentation objection raised earlier to Rosenthal’s view does not apply in these cases. If we are now referring to fallible conscious HOTs (or introspection), then it makes perfect sense that subjects would still subjectively experience those states in an indistinguishable way.
    ………………….
    4.5.5 The Infallibility Objection
    Another objection to the WIV (or similar views) is the charge that it entails that knowledge of one’s conscious states is infallible, especially in light of the problem of misrepresentation discussed in section 4.2 (Thomasson 2000, 205–206; Janzen 2008, 96–99). If M and MET cannot really come apart, then doesn’t that imply some sort of objectionable infallibility? This objection once again conflates outer-directed conscious states with allegedly infallible introspective knowledge. In the WIV, it is possible to separate the higher-order (complex) conscious state from its target mental state in cases of introspection (see fig. 4.1 again). This is as it should be and does indeed allow for the possibility of error and misrepresentation. Thus, for example, I may mistakenly consciously think that I am angry when I am “really” jealous. The WIV properly accommodates the anti-Cartesian view that one can be mistaken about what mental state one is in, at least in the sense that when one introspects a mental state, one may be mistaken about what state one is really in. However, this is very different from holding that the relationship between M and MET within an outer-directed CMS is similarly fallible. There is indeed a kind of infallibility between M and MET according to the WIV, but this is not a problem. The impossibility of error in this case is merely within the complex CMS, and not some kind of certainty that holds between one’s CMS and the outer object. When I have a conscious perception of a brown tree, I am indeed certain that I am having that perception, that is, I am in that state of mind. But this is much less controversial and certainly does not imply the problematic claim that I am certain that there really is a brown tree outside of me, as standard cases of hallucination and illusion are meant to show.
If the normal causal sequence to having such a mental state is altered or disturbed, then misrepresentation and error can certainly creep in between my mind and outer reality. However, even in such cases, philosophers rarely, if ever, doubt that I am having the conscious state itself………….. when one introspects, I take it that virtually everyone agrees there is a “gap” between the introspective state and its target, which also accounts for the widely held view that there is an appearance/reality difference and fallibility at that level. But this is not a problem at all; rather, it is the way that any HOT theorist can accommodate the anti-Cartesian view that introspection is fallible. Just as one can have a hallucinatory conscious state directed at nonexistent objects in the world, one can have a hallucinatory conscious HOT directed at a nonexistent mental state. But even when one hallucinates that there are pink rats on the wall, there is an infallible appearance of pink rats on the wall. The CMS still exists. ……, as was discussed in section 4.2, confabulated states are best understood as introspective states that either bring about the existence of a conscious state (Hill’s “activation”) or mistake one state for another. Finally, when one is in a confabulatory state, we must remember that there is indeed an indisputable conscious state involved, but here it appears at the higher-order level as a conscious HOT (or MET). Thus, though that conscious MET has no object, one still experiences that state (the MET) as conscious, much as one’s hallucination of pink rats on the wall still involves a conscious, but nonveridical, state. Once again the analogy holds, and there is no problem here for the WIV. There can be targetless conscious HOTs just as there can be nonveridical hallucinatory outer-directed conscious states. It is admirable that Rosenthal so clearly wishes to make room for an appearance/reality distinction with regard to our own mental states.
I agree with the notion that our introspective states are fallible and may misrepresent our “selves” and our mental states. But this distinction applies at the introspective level, not within first-order world-directed conscious states. If there is an inner analogy to an illusory or hallucinatory first-order conscious state directed at an outer object, it must be a conscious state (= introspection) directed at a mental state. But then this is not a case of an appearance/reality difference between an unconscious HOT (or MET) and a mental state M. This is again why we should reject Rosenthal’s endorsement of Levine’s option one for misrepresentation cases. A lone unconscious HOT without its target is not a case of fallible introspection.

  6. Hi everyone, thanks for the comments! I appreciate the chance to discuss these issues with all of you at CO4.

    I’d like to begin by tackling some criticisms from Richard’s commentary and David’s reply to my paper. One of the biggest worries seems to be that the view that I attack, HOTs, is not the most charitable version of the higher-order view. My first aim is to show that the criticism I raised for HOTs also applies to HOTi:

    HOTi: A mental state is conscious if and only if one has a further mental state representing oneself as being in that state, caused non-inferentially.

    The main difference between HOTs and HOTi is that HOTi allows for misrepresentation of our first-order mental states. Certainly representing x doesn’t usually require that x exists. One of the primary benefits of representationalism is its ability to give an account of illusion and misperception. As David put it, HOTi allows for the possibility that appearance and reality do not coincide.

    The problem is that in allowing for the possibility of misrepresentation, the higher-order view sacrifices an explanation of state consciousness. Consider how HOTi would account for empty thoughts. Richard writes, “When there is an empty thought there is a state that is being represented.” Consistent with HOTi, we should interpret this as the claim that empty thoughts represent some state x, but that representing x doesn’t require that x exists. There is a higher-order state that represents oneself as being in some non-existent first-order state. Note that HOTi doesn’t tell us that the higher-order state is conscious. On the higher-order view, proponents typically hold that a higher-order thought is unconscious, unless it is represented by some further higher-order state (e.g. Rosenthal’s (2000) discussion of higher-order thoughts and metacognitive judgments). But in the case we are considering, there is only the higher-order state, since the represented first-order state doesn’t exist. So where is the conscious state in this picture? As far as I can see, HOTi gives us no better answer to this question than HOTs. We now need to explain why some higher-order states are conscious states with a first-order content (e.g. green thing), while other higher-order states are conscious states with a higher-order content (e.g. I’m seeing a green thing), and still others are not conscious at all.

    Perhaps the solution to this problem is to abandon HOTi as a theory of state consciousness in particular. I’m curious as to whether David had something like this in mind when he wrote: “…the property of a state’s being conscious is not like the property of an object’s being round or red; it’s not properly speaking a property at all. It’s simply an aspect of the way our mental lives appear to us subjectively.”

    In his commentary, Richard similarly suggested that we should separate questions concerning state consciousness from questions concerning phenomenal consciousness. He offered a higher-order principle for phenomenal consciousness:

    HOROR: Phenomenal consciousness just is a higher-order representation of a representation.

    I have a few concerns with HOROR. My biggest worry is that HOROR makes all higher-order mental representations conscious just in virtue of being higher-order. This seems psychologically implausible. Unconscious cognition accomplishes a wide variety of sophisticated tasks. Some of these tasks may require a mental state that represents some other mental state. To stipulate that all higher-order representation is phenomenally conscious seems premature. Another concern I have is that HOROR may be redundant. If I understood him correctly, Richard thinks that one could hold both HOROR and HOTi as an account of phenomenal and state consciousness, respectively. But the ambitious version of HOTi is an account of phenomenal state consciousness. It tells us when a subject will be in a phenomenally conscious state, and so, when she will be phenomenally conscious. HOROR seems to be needed only if we think that a subject can be phenomenally conscious, even though she isn’t in any phenomenally conscious state. What would be an example of this kind of phenomenal consciousness? In the absence of examples, is there another reason to hold HOROR?

    Finally, I’d like to address Richard’s question about my alternative interpretation of Charles Bonnet syndrome. I suggested that the symptoms of Charles Bonnet syndrome might be better explained by first-order belief or judgment, rather than an empty higher-order perceptual representation. Richard asked whether I was suggesting that subjects with Charles Bonnet syndrome are wrong when they claim to have visual phenomenology. I agree with Richard that this isn’t particularly plausible. While a belief that I am having a visual experience would suffice to explain subjects’ reports, taking this interpretation goes against the principle of charity that we typically extend to other human beings. When they tell us that they’re having a vivid conscious experience, we should at least lean toward taking them at their word. So I’d be happy to say that the belief (or judgment) has a phenomenal content, maybe even a content that is phenomenally quite similar to a visual phenomenal content. My point was simply that the evidence does not (yet!) force us to conclude that there are real-life cases of conscious empty thought.

  7. Adrienne assumes in her reply that a theory of state consciousness is a theory of existent conscious states. That’s the question-begging assumption I was contesting. It’s that assumption that leads her to assume that if HOTs can occur without there being actual first-order states they refer to, “some higher-order states are conscious states with a first-order content (e.g. green thing), while other higher-order states are conscious states with a higher-order content (e.g. I’m seeing a green thing).” No version of a higher-order theory of consciousness that I’m familiar with makes that claim. The higher-order content (even on intrinsicalist higher-order theories) always concerns exclusively what mental state the individual is in.

    A theory of what it is for an individual to be in conscious states is a theory of state consciousness. It’s beyond dispute that this phenomenon–the phenomenon of an individual’s being in a conscious state, which the theory must explain–concerns how the individual’s mental life appears subjectively to that individual. Higher-order theories explain that by positing suitable higher-order states. (First-order theories have no explanation.)

    My challenge to Adrienne was to explain why the subjective mental appearance that constitutes the phenomenon of state consciousness requires that the individual actually be in the state that the individual appears to be in. Her reply seems once again simply to take that for granted, as though it’s part of what it is to be a theory of state consciousness that it’s a theory not of the way an individual’s mental life appears to that individual–what mental states the individual *appears* to be in–but only of *actual* states that, as it happens, the individual appears subjectively to be in.

    That unargued assumption reflects the traditional Cartesian view that mental appearance cannot diverge from mental reality. But that view’s having been traditionally accepted without argument does not give us reason to accept it.

    Adrienne sees my reply as explaining a worry “that the view that I attack, HOTs, is not the most charitable version of the higher-order view.” Well, it’s true that that biconditional is not anything I’ve ever put forth or even accepted. But I guess I don’t see my previous remarks as primarily directed to that concern, though it is true that Adrienne’s adoption of Block’s (2011) claims about my theory neglects my own (2011) reply to Block about the accuracy of those claims.

    Rather, my main concern was the one reiterated above, about whether we can simply assume that it’s part of what it is for a theory to be a theory of state consciousness that it’s a theory of states that an individual not only subjectively appears to be in, but actually is.

  8. I have two queries for David regarding his terminology:

    (1) Suppose that one is subjectively aware of oneself as being in, say, pain (and that it’s false that this subjective awareness seems to one to arise from observation or inference). Then is the conscious state here pain? Or is the conscious state one’s being subjectively aware of oneself as being in pain etc.? (Or again am I on the wrong track completely?)

    (2) I hear “aware” as implying the reality or veridicality of whatever one is aware of. So, to my ear, “Sam is aware of himself as being in pain” entails that Sam is in pain. Is your idea, then, that the addition of “subjectively” cancels this implication, so that one can be subjectively aware of what doesn’t exist or of what isn’t the case? Alternatively, perhaps you conceive of subjective awareness as seeming to be aware?

    Andrew Melnyk

  9. Hi everyone, sorry to be late to the discussion! Let me start by addressing some of Rocco’s comments above.

    “Of course, I disagree strongly with Richard that my wide intrinsicality view (WIV) is “ad hoc” in holding that no state consciousness arises for misrepresentation cases or for empty/targetless HOT”

    Why, Rocco, is it that there is a conscious mental state in the case when the first-order state is in fact present? If the answer is ‘because I am aware of myself as being in that state’ then it is ad hoc to claim that in the case where the first-order state is missing there is no conscious mental state since you are still aware of yourself as being in the first-order state (whether you are or not that is how it seems to you). If you deny that it is because ‘i am aware of myself as being in that state’ then you deny the traditional explanation given by higher-order theories. If, for instance, it is just because there is a relation between the two states (no matter what kind) then you have not explained why that relation matters. You have just said ‘there is consciousness when there is this relation and no consciousness when there isn’t’ which may (or may not) be true, but it certainly isn’t the explanation that higher-order theorists give.

    ” I think my [WIV] view better accounts for the overall problem of misrepresentation, fits in better with what we know about the brain, can better handle the “problem of the rock,” among other things. But I won’t elaborate on these.”

    This doesn’t address the ad hoc issue that I am talking about…the question is not whether it handles objections better. Here is another way to make the ad hoc point; the transitivity principle predicts that there would be no difference between the two cases above so what explains the failure of the transitivity principle? Nothing. It is just modified to follow your intuitions, hence ad hoc.

    “(1) If so-called “empty thoughts” or “targetless HOTs” are themselves unconscious (as they would typically be), then how could they possibly also be conscious? How can a lone unconscious targetless HOT itself be conscious? I remain puzzled as to why these cases are discussed as if we are mainly referring to the usual unconscious HOT which accompanies first-order conscious states.”

    There are two answers, as I made clear in my comments. On one answer we distinguish two kinds of consciousness (state vs phenomenal) and say the higher-order state is conscious in one sense but not the other. The other strategy (favored by David) is to deny that the higher-order state is conscious in any sense and to insist that the state which is conscious is the one I represent myself as being in.

    “(2) On the other hand, if empty HOTs are conscious HOTs (= introspection), then of course they can be conscious and can also be in error or have no target at all. This would be more like a hallucination case. But then there does need to be a 3rd order unconscious HOT. Preserving the so-called “mental appearances” is all well and good, but there can’t even be conscious mental appearances unless we have a conscious state in the first place.”

    As explained above they are not introspectively conscious on either account.

    “Thus, I reject Richard’s HOROR view. I do think that Richard gives up far too much with respect to the explanatory power for HOT theory. It seems to me that the main motivation/explanatory power of HOT theory is indeed undercut if one goes with something more like HOROR. HOT theory is then no longer really an account of (intransitive) state consciousness, which I thought was the main explanatory purpose of HOT theory.”

    This is the puzzling part. What explanatory power is given up? It is still an account of state consciousness; it is just that you have to adjust your intuitions about what state consciousness consists in…the transitivity principle says that a mental state is conscious when I am, in some suitable way, conscious of myself as being in that state. That, according to higher-order theories, is what (intransitive) state consciousness consists in. Rather, it seems to me, that on the WIV you give up the explanation of what state consciousness is for an ad hoc stipulation of what it is…it may (or may not) be the case that the ad hoc version fits the data better but, nonetheless, it remains ad hoc! 🙂

    There is a lot more to say but I’ve got to run…be back later in the afternoon!

  10. Hello Richard – Here’s my take on your previous post (much of which is reprinted below):

    Rocco G: “Of course, I disagree strongly with Richard that my wide intrinsicality view (WIV) is “ad hoc” in holding that no state consciousness arises for misrepresentation cases or for empty/targetless HOT”
    Richard B.: “Why, Rocco, is it that there is a conscious mental state in the case when the first-order state is in fact present? If the answer is ‘because I am aware of myself as being in that state’ then it is ad hoc to claim that in the case where the first-order state is missing there is no conscious mental state since you are still aware of yourself as being in the first-order state (whether you are or not that is how it seems to you). If you deny that it is because ‘i am aware of myself as being in that state’ then you deny the traditional explanation given by higher-order theories. If, for instance, it is just because there is a relation between the two states (no matter what kind) then you have not explained why that relation matters. You have just said ‘there is consciousness when there is this relation and no consciousness when there isn’t’ which may (or may not) be true, but it certainly isn’t the explanation that higher-order theorists give.”
    RG reply: I don’t see how my account is ad hoc when I have gone into great detail, increasingly through the years, explaining why I think my alternative HOT theory is superior to standard HOT theory. This includes significant discussion of e.g. how feedback loops can help us to understand how my version of HOT theory is realized in the brain, answering other objections, etc. Now, perhaps my account is wrong and these are very difficult issues, but it is hardly ad hoc. (See also my response to Levine’s case, etc….; Chs 4 — 9 in my book, etc… surely not ad hoc.) I’ve certainly done much more than “just say” that ‘there is consciousness when there is this relation and no consciousness when there isn’t.’ And if one key question is supposed to be “What makes a mental state a conscious mental state?” I just don’t see how an answer that requires the presence of a mental state in the first place can be ad hoc. If there isn’t a mental state present to begin with, then there is nothing to explain in terms of what makes “it” conscious as opposed to unconscious.
    And I don’t think that I am denying at least THAT “traditional” explanation of HOT theory because HOTs are supposed to be typically UNCONSCIOUS and so cannot typically be conscious by themselves. Speaking in terms of being “aware of myself as being in that state” begs the question as to whether or not such “awareness” is itself conscious. One traditional rationale for HO theories was always, I thought, to give a reductionist account of first-order conscious states in terms of unconscious HOTs. So to claim that a first-order state is not even needed for state consciousness seems precisely to give up on that account, not to mention perhaps revisiting problems of circularity or regress.
    When there IS a first-order conscious state, is the HOT or HO “awareness” also conscious on your view? If so, what makes THAT conscious? Saying it is a “second kind” of consciousness doesn’t explain that.
    ——————————–
    RG: “I think my [WIV] view better accounts for the overall problem of misrepresentation, fits in better with what we know about the brain, can better handle the “problem of the rock,” among other things. But I won’t elaborate on these.”
    RB: “This doesn’t address the ad hoc issue that I am talking about…the question is not whether it handles objections better. Here is another way to make the ad hoc point; the transitivity principle predicts that there would be no difference between the two cases above so what explains the failure of the transitivity principle? Nothing. It is just modified to follow your intuitions, hence ad hoc.”
    RG reply: Again, it DOES address the ad hoc issue because if one version of HOT theory explains more and can respond better to other related problems as a whole, then that is a reason to prefer that theory over another. The “aware of” in TP is supposed to be typically unconscious – isn’t it? But then you say there is “another kind” of consciousness when (and only when?) there is no first-order state. Is this still a reductionist theory then? I understand the difference between, say, transitive and intransitive consciousness, but, again, these terms are very misleading since the “transitively conscious of” is typically unconscious, according to HOT theory. Much the same goes for your “state” vs. “phenomenal” consciousness distinction. If there is “something it is like” to be in a targetless HOT, then these HOTs are either no longer being viewed as unconscious or the threat of regress or circularity creeps in for the more typical cases of state consciousness (or at least it is difficult to see how we could retain a reductionist HOT theory). Again, what then makes a HO state “phenomenally conscious”? You say in your written comments that “phenomenal consciousness can occur in the absence of my being aware of that phenomenal consciousness. In this sense we can say that there is phenomenal consciousness without awareness.” This seems much further away from “traditional” HOT theory to me. After all, normal FIRST-ORDER conscious states are surely also phenomenally conscious, but then you seem to be saying that such states can be phenomenally conscious without any HO awareness or thoughts. This runs directly counter to HOT theory, traditional or not. So I disagree that the TP “predicts” what you say since it is referring to a HO state which is typically unconscious. The one area, of course, where I do clearly move away from traditional HOT theory is in holding that HOTs are not entirely distinct from their targets, hence, what might be called “intrinsic HOT theory.”
    ________________
    RG: “(1) If so-called “empty thoughts” or “targetless HOTs” are themselves unconscious (as they would typically be), then how could they possibly also be conscious? How can a lone unconscious targetless HOT itself be conscious? I remain puzzled as to why these cases are discussed as if we are mainly referring to the usual unconscious HOT which accompanies first-order conscious states.”
    RB: “There are two answers, as I made clear in my comments. On one answer we distinguish two kinds of consciousness (state vs phenomenal) and say the higher-order state is conscious in one sense but not the other. The other strategy (favored by David) is to deny that the higher-order state is conscious in any sense and to insist that the state which is conscious is the one I represent myself as being in. “
    RG Reply: Now it again seems to me that your view is ad hoc and/or not really reductionistic. See my reply above again. Moreover, you still seem now to have completely abandoned what I take to be the traditional HOT theory aim of explaining first-order conscious states. Indeed, you seem to concede this point and give up at least some explanatory power of HOT theory, e.g. in your paper: “So we have some explanation from the theory itself which would give us a reason to think that the higher-order thought is conscious and we have some reason from science to think that this is what phenomenal consciousness turns out to be. That seems like enough explanation to me. Now it’s not the kind of explanation where you start with the state and then add a property to that state in virtue of it being looked at him. So we don’t get that kind of explanation but that wasn’t the kind of explanation we were ever supposed to get.” And e.g. “perhaps in this case we could even say there is no mental state that is conscious in the sense of being state conscious.” Thus, you often seem sympathetic with giving up on HOT theory as a theory of intransitive state consciousness – that seems to give up on at least what I take to be one major goal of HOT theory.
    ____________________
    RG: “(2) On the other hand, if empty HOTs are conscious HOTs (= introspection), then of course they can be conscious and can also be in error or have no target at all. This would be more like a hallucination case. But then there does need to be a 3rd order unconscious HOT. Preserving the so-called “mental appearances” is all well and good, but there can’t even be conscious mental appearances unless we have a conscious state in the first place.”
    RB: “As explained above they are not introspectively conscious on either account. “
    RG reply: No real problem here. I agree that that shouldn’t be what you are talking about; however, my view is that the way you often DESCRIBE the HO “awareness” in question when there’s no target state SOUNDS like you are introspecting or at least equivocating between “unconscious” HO awareness and “conscious” HO awareness.
    ___________________
    RG: “Thus, I reject Richard’s HOROR view. I do think that Richard gives up far too much with respect to the explanatory power for HOT theory. It seems to me that the main motivation/explanatory power of HOT theory is indeed undercut if one goes with something more like HOROR. HOT theory is then no longer really an account of (intransitive) state consciousness, which I thought was the main explanatory purpose of HOT theory.”
    RB: “This is the puzzling part. What explanatory power is given up? It is still an account of state consciousness it is just that you have to adjust your intuitions about what state consciousness consists in…the transitivity principle says that a mental state is conscious when I am, in some suitable way, conscious of myself as being in that state. That, according to higher-order theories is what (intransitive) state consciousness consists in. Rather, it seems to me, that on the WIV you give up the explanation of what state consciousness is for an ad hoc stipulation of what it is…it may (or may not) be the case that the ad hoc version fits the data better but, none the less, it remains ad hoc!”
    RG Reply: Again, see the above/my earlier comments – Again, in your paper, you say that your view “[is] not the kind of explanation where you start with the state and then add a property to that state in virtue of it being looked at him. So we don’t get that kind of explanation but that wasn’t the kind of explanation we were ever supposed to get.” I disagree or at least always thought that state consciousness was the main aim, and what you are now CALLING “(intransitive?) state consciousness” is really something else. You also seem to concede that Rosenthal also gives up this explanatory power when you say: “They give up one kind of explanatory power; namely that of explaining consciousness in terms of a relation between a first-order state and a further higher-order state.” So my point throughout is that if we can have a version of HOT theory that gives up neither this relational view nor HOT theory as an account of state consciousness, then it is preferable to an account that gives up one (or even both) of these. My WIV, I argue, gives up neither and, in addition, can help to answer various other objections (e.g. problem of the rock) and fits in better with the brain science.

  11. In response to David’s second post, I don’t assume that “the subjective mental appearance that constitutes the phenomenon of state consciousness requires that the individual actually be in the state that the individual appears to be in.” I am instead relying on the assumption that being phenomenally conscious implies that there is a phenomenally conscious state that one is in. This assumption is not justified by the claim that mental appearance and reality cannot diverge, and so is not question-begging.

    In the case of empty thoughts, I am assuming that if it appears to a subject that she is in a phenomenally conscious state, then she is in a phenomenally conscious state. But she need not be in the state that she appears to be in. For example, I leave open the possibility that a higher-order state is conscious, not the first-order state that subjectively appears conscious (since there is no such state!). So I don’t rule out the possibility that the subjective appearance and the mental reality diverge.

  12. Hi Adrienne, I largely agree with what David has said so far but I thought I would follow up on a couple of points.

    “The problem is that in allowing for the possibility of misrepresentation, the higher-order view sacrifices an explanation of state consciousness. “

    This seems like the crux of the misinterpretation problem. On David’s way of doing things there is no sacrifice. State consciousness is, and always has been for him, my being aware of myself as being in some first-order state. So David is quite happy saying that the state which is conscious is the notional state that the higher-order state represents you as being in. That is why David says, “conscious states that don’t actually occur would be states that one seems subjectively to be in; they would be objects of subjective appearance. And objects of appearance plainly need not exist for us to talk about them, describe their apparent features, and develop satisfactory theories of why the relevant appearances occur as they do.”

    So, too, in your response to David you say that “I am instead relying on the assumption that being phenomenally conscious implies that there is a phenomenally conscious state that one is in,” but fail to notice that you are simply assuming that the phenomenally conscious state you are in must exist (which David denies). On my way of doing things I would say that the higher-order state is itself phenomenally conscious (as you seem to allow is a possibility) but that it is NOT state conscious.

    ” My biggest worry is that HOROR makes all higher-order mental representations conscious just in virtue of being higher-order. “

    This is my fault as I was not careful enough in stating the claim. I was assuming that the usual kinds of restrictions apply. That is, I was assuming that the higher-order representations are of the suitable sort (seemingly non-inferential, assertoric, etc.).

    “Another concern I have is that HOROR may be redundant. If I understood him correctly, Richard thinks that one could hold both HOROR and HOTi as an account of phenomenal and state consciousness, respectively.”

    It is true that I think it provides an account of both, but each is explained via different aspects of the higher-order state. Thus the mental state which is conscious is the state which figures in the content of the HOT but it is the HOT itself which is phenomenally conscious (in the sense of there being something that it is like for me to be in that state). As I see it (David may disagree) the differences between HOROR and the traditional ambitious HOT theory lie in the mainly verbal dispute about which state is phenomenally conscious, the first-order state or the higher-order state. This matters precisely in the case where we have empty higher-order thoughts since David will say that the non-existent state I represent myself as being in is phenomenally conscious and I will say that the higher-order state is itself phenomenally conscious.

    “So I’d be happy to say that the belief (or judgment) has a phenomenal content, maybe even a content that is phenomenally quite similar to a visual phenomenal content. My point was simply that the evidence does not (yet!) force us to conclude that there are real-life cases of conscious empty thought.”

    I am still not sure what to make of this claim…is this content supposed to be a normal part of the content of first-order beliefs?

    Part of the point of our paper was that when you take the three cases together the overall best explanation that is supported by the available evidence is that these are empirical cases of consciousness without first-order representations…you could posit a weird kind of belief content to fill in here but how would that make sense of the Rahnev results? Our interpretation gives a unified account of these different kinds of cases and so seems better off than appealing to unique and novel kinds of mental content…

  13. Hi Rocco, thanks for the response!

    I think there must be some kind of misunderstanding here…maybe it is my fault for using the term ‘ad hoc’…let me try again. Let’s take a step back and look at a historical case. When Einstein was working out the mathematics behind his general theory of relativity he realized that the equations predicted an expanding universe, and since he antecedently assumed that this could not be the case, he inserted into the equations a constant that has come to be known as the cosmological constant. The insertion of this constant was ad hoc because it was not itself the product of theory but was a stop-gap measure to make the theory (which predicted otherwise) come in line with his intuitions. In Einstein’s own lifetime he came to regret this because of the evidence that came in for an expanding universe, but now many years later physicists typically think that the existence of dark energy shows that there is, after all, a cosmological constant…so the moral of the story is that even though Einstein’s introduction of the constant was ad hoc it turned out to be more or less correct. That is why having an ad hoc element to a theory is not the same thing as that theory fitting best with the evidence. The issue is whether the theory has to be modified in some way that is not itself a product of the theory itself.

    So, now turning to the case of conscious mental states we can apply this lesson. We start with the observation that some mental states are conscious and some mental states are not conscious. We wonder what the difference is between them. We note that it is part of our common sense folk thinking about this that a state that we are in no way aware of ourselves as being in is not a conscious state. This leads us to state the transitivity principle: TP = a conscious mental state is one which I am conscious of myself as being in, in some suitable way. (I know you take TP to be a priori, but not everyone agrees, as you know, but that is largely beside the point here.) We then note that there are generally two ways that we are conscious of things. We either sense them or we have a direct assertoric thought to the effect that they are present. Sensing, while inviting at first, doesn’t seem to be the right way to go because it seems to entail that there would be a separate set of sensory qualities that represent the first-order sensory qualities, and we don’t find these higher-order sensory qualities in our mental lives, so we have no reason to posit them. That leads us to thoughts. So we have, in theorizing, been led to the conclusion that when I am conscious of myself as being in some mental state it is likely due to having some higher-order thought to the effect that I am in a first-order state. Thoughts are intentional and as such have intentional contents. So when I have a (suitable) thought to the effect that I am seeing red (or whatever) it will appear to me as though I am seeing red. This is because I have a higher-order thought with the content ‘I am seeing red’ and so I become conscious of myself as being in a state with a certain qualitative character (the red* kind). That story will be true whether or not the red* state is actually there.
This is because, whether the red* state is actually there or not, by having the suitable higher-order thought it will appear to me as though I am in that state and that is what matters according to TP. Thus to say that this doesn’t hold when there is no first-order state is to modify the theory in a way so as to bring it in line with your intuitions rather than to follow the theory out to its logical conclusions. This is what I mean when I say that the theory has an ad hoc element to it.

    Now, as I said, that doesn’t mean it isn’t right. It just means that you have inserted something into the theory so as to force it to come out a certain way in the empty thought cases. That may (or may not) capture the way things really are but it certainly doesn’t follow from TP.

    So, now let me address some of your specific points.

    You say,

    ” I just don’t see how an answer that requires the presence of a mental state in the first place can be ad hoc. If there isn’t a mental state present to begin with, then there is nothing to explain in terms of what makes “it” conscious as opposed to unconscious.”

    I hope now you can see what I was getting at. When we start to theorize about this stuff we start off with a particular mental state and ask ‘what is the difference between this state occurring consciously and unconsciously?’ and then we are led, by theorizing, to TP which, on closer inspection, entails that the difference is one that we can have even if that particular first-order state doesn’t occur. That is why it is ad hoc to say what you do here. If TP entails that a state can be conscious even if it is not actually tokened, and we think that TP is the best account of what state consciousness consists in, then we should accept this result. Of course, you can modify it if you want to, but you cannot say that the modification is anything but an attempt to modify the theory in order to bring it in line with the way you think things should turn out.

    “And I don’t think that am denying at least THAT “traditional” explanation of HOT theory because HOTs are supposed to be typically UNCONSCIOUS and so cannot typically be conscious by themselves. Speaking in terms of being “aware of myself as being in that state” begs the question as to whether or not such “awareness” is itself conscious.”

    Here again perhaps the misunderstanding is my fault. On David’s view the higher-order state is always unconscious in every sense unless there is a third-order representation that targets the higher-order thought. So on his view the higher-order thought is not conscious but, as per TP, the conscious state is the one that I am aware of myself as being in (the notional state). Now, switching from David’s way of talking to mine, I would say that the higher-order state is phenomenally conscious in the sense of being the state that there is something that it is like for me to be in, and that the target of the higher-order state is the one that is state-conscious in the sense of being the state that I am aware of myself as being in. Either way that you want to talk there is no problem about what makes the higher-order state (phenomenally) conscious.

    “Again, it DOES address the ad hoc issue because if one version of HOT theory explains more and can respond better to other related problems as a whole, then that is a reason to prefer that theory over another.”

    I hope it is clear now that the issue is not which version of the theory can respond better to objections, since the version that does *may* (or *may not*) be the one with ad hoc elements in it. But even putting that aside, it is arguable that the WIV cannot explain what state consciousness is (I haven’t yet heard what you think the explanation is supposed to be).

    “Again, what then makes a HO state “phenomenally conscious”? You say in your written comments that “phenomenal consciousness can occur than in the absence of my being aware of that phenomenal consciousness. In this sense we can say that there is phenomenal consciousness without awareness.” This seems much further away from “traditional” HOT theory to me. After all, normal FIRST-ORDER conscious states are surely also phenomenally conscious, but then you seem to be saying that such states can be phenomenally conscious without any HO awareness or thoughts. This runs directly counter to HOT theory, traditional or not.”

    No, on my version of the story first-order states are not phenomenally conscious. That is exactly the point. According to me phenomenal consciousness consists in having a suitable higher-order representation, NOT in having a suitable first-order representation. But, again, on David’s way of doing things first-order states are phenomenally conscious…though it is an interesting question (maybe one David will answer for us) whether he thinks that in the ‘normal’ case the existing first-order state is conscious or whether it is always the notional state (in the sense of the content of the higher-order state) which is conscious. Sometimes I think Josh (Weisberg) holds this second view, but I am not sure.

    “Thus, you often seem sympathetic with giving up on HOT theory as a theory of intransitive state consciousness – that seems to give up on at least what I take to be one major goal of HOT theory.”

    No, I am sympathetic to giving up on one way of interpreting TP since there are (theory-based) reasons to think it can’t be right.

    “my view is that the way you often DESCRIBE the HO “awareness” in question when there’s no target state SOUNDS like you are introspecting or at least equivocating between “unconscious” HO awareness and “conscious” HO awareness.”

    I hope now that it is clear that the ONLY sense of ‘conscious’ that I attribute to higher-order states is the sense of phenomenal consciousness, which I take to be the property of there being something that it is like to be in that state.

    “So my point throughout is that if we can have a version of HOT theory that gives up neither this relational view nor HOT theory as an account of state consciousness, then it is preferable to an account that gives up one (or even both) of these”

    But my point, which you don’t seem to have grasped, is that by doing so you reject HOT theory as an account of state consciousness, since that account entails that we give up the relational view.

  14. Thanks Adrienne, and all the participants, for a fascinating discussion!

    I would like to focus on the disagreement between Prettyman, Gennaro and Sebastian, on one side, and Rosenthal, on the other. Rosenthal, in his first comment, says that “a conscious state need not … be a state that actually occurs.” It seems that Prettyman, Gennaro and Sebastian deny this. Rosenthal takes ‘being conscious’ as a form of ‘appearing some way to the subject’, and, as he rightly stresses, non-existent states can appear some way to the subject. So Rosenthal seems to present Prettyman, Gennaro and Sebastian with a challenge, which is to provide an argument that shows that only states that actually occur can be conscious (answering this challenge might also answer Brown’s charge that Gennaro’s view is ad hoc).

    Here are my two cents. I think it might be possible to answer Rosenthal’s challenge in roughly the following way (note that I’m merely presenting an idea that has just popped into my mind, in the hope that it might contribute to the present discussion. I do not aim at presenting a seriously worked-out and detailed argument.).

    My basic thought is that non-existent states (even if they are objects of a HOT) cannot be conscious because they are not mental-like (I use the term ‘mental-like’ rather than ‘mental’ because non-existent states can be mental). Let me explain. A seen apple that is an object of my thought (an ordinary first-order thought) does not count as conscious. Why? Because the apple is not mental-like (and so the thought directed at the apple does not count as higher-order); it fails to meet a necessary condition for being mental-like. The condition in question, I suggest, is of having an (actual or potential) effect on my mental life which is DIRECT, i.e., independent of its (i.e., the apple’s) being represented by me. The seen apple (at least in ordinary cases) has an effect on my mental life ONLY VIA BEING REPRESENTED BY ME, and for this reason, I suggest, it is not mental-like. By contrast, ordinary mental states have an effect on my mental life even when they are not objects of representation. To take a worn-out example, when I am absent-mindedly driving my car, my visual and auditory experiences of the environment are not represented by some other state (such as a HOT), yet they nevertheless guide my driving behavior (cf. Armstrong, 1968). This makes these states mental-like (and, consequently, it makes thoughts directed at these states higher-order, whereas the thought directed at the apple counts as first-order).

    My suggestion, then, is that the seen apple is not (and cannot be) conscious because it does not have an (actual or potential) effect on my mental life independently of being represented (and so is not mental-like), whereas ordinary mental states have an effect on my mental life independently of being represented (which makes them mental-like). Note that I do not claim that having an effect on a subject’s mental life independently of being represented is a sufficient condition for being mental-like, only that it is a necessary condition.

    Now, a non-existent experiential state that is the object of an (empty) HOT has an “effect” on my mental life only via its being represented by the HOT. Thus, it does not have effects that are independent of its being represented. Thus, it is not mental-like; it is rather like the seen apple. This gives us reason to believe that it cannot be conscious. Conclusion: non-existent states cannot be conscious.

    Does this make sense?

    Armstrong, D. (1968). A Materialist Theory of Mind. London: Routledge & K. Paul.

  15. I appreciate Assaf’s reply to my challenge to Prettyman, Block, et al. His reply gets my challenge right. A conscious state is indisputably a state that one appears subjectively to be in; what reason is there to think that, in addition to that, it’s also a state that one is actually in? Absent such a reason, a theory of state consciousness is a theory simply of under what circumstances one appears subjectively to be in a particular mental state.

    Assaf’s answer is that a state that a higher-order awareness (HOA) represents oneself to be in might well be mental, but if the individual is not actually in the first-order state, that (notional) state fails to be “mental-like.” He explains what it is for a state to be mental-like as follows: “The condition in question, I suggest, is of having an (actual or potential) effect on my mental life which is DIRECT, i.e., independent of its (i.e., the apple’s) being represented by me. The seen apple (at least in ordinary cases) has an effect on my mental life ONLY VIA BEING REPRESENTED BY ME, and for this reason.”

    What this amounts to is the following: When a HOA represents an individual as being in a particular mental state, and thereby results in its appearing subjectively to that individual that the individual is in that state, that first-order state is mental-like only if the state itself has an effect on the individual’s mental life independent of its simply seeming subjectively to the individual that the individual is in that state.

    But thus spelled out, that condition is patently question begging. It amounts simply to ruling out cases of first-order states that an individual’s HOAs represent the individual as being in–i.e., ruling them out as being conscious states. So an argument is needed for imposing Assaf’s condition of being mental-like.

    One might have expected that any conscious state would have an effect on the relevant individual’s mental life independent of its seeming subjectively to the individual that the individual is in that state. But that seems clearly not to be so. Consider relatively peripheral but conscious visual states; they have no discernible effect on one’s mental life despite being conscious–and almost certainly no effect at all. Similarly with stray thoughts and desires; sometimes they may affect one’s mental life; but why think they always do?

  16. Hello Richard – Whew! OK — Interesting stuff – Let’s see: I’m with you for most of your first two paragraphs. I’m somewhat willing to drop the whole “ad hoc” charge back and forth (note that Levine says it is ad hoc to agree with Rosenthal on his answer to misrepresentation cases.) But I still very much disagree with you on this point, on who is closer to or further from “traditional” HOT theory, etc… I just point out again that you say on your version of the story “first-order states are not phenomenally conscious” and “that the higher-order state is phenomenally conscious in the sense of being the state that there is something that it is like for me to be in.” These seem to give up a major (“essential”?) feature of HOT theory, e.g. the attempt to reduce state consciousness, at least in mentalistic terms. I also didn’t see you address the reduction issue in your last post: How (if at all) can your view maintain HO theory as a reductionist theory? If a “modified” HOT theory isn’t reductionistic, is it really a HOT theory after all?

    So I’d say that, in some respects, you differ from “standard” HOT theory and so do I in other ways. What I want to emphasize again is that if you treat the HO state as itself conscious (and thus use phenomenology as evidence for TP), then it seems to me that you hold a view much closer to e.g. Kriegel’s, at least on this one key point. But his kind of “self-representationalism” is not meant to be reductionistic in mentalistic terms, i.e. he accepts that all such “self-awareness” is conscious, albeit “peripherally” so. But I don’t treat our shared belief in the TP as based on any phenomenological observation. [My comments on Sebastian’s paper and the subsequent discussion get into this a bit and I included some book excerpts there too. I’d be glad to do so again here in response to you (“it is arguable that the WIV cannot explain what state consciousness is (I haven’t yet heard what you think the explanation is supposed to be)”). But, at least for now, I’m trying to keep these posts at a reasonable length while also not violating copyright.]

    Now you say that I “have inserted something into the theory so as to force it to come out a certain way in the empty thought cases. That may (or may not) capture the way things really are but it certainly doesn’t follow from TP.” I disagree on “forcing” etc… – here’s another way of describing it: I’ve modified HOT theory in order to better defend it and answer various objections, one of them being the problem of misrepresentation and targetless HOTs. I did not initially “antecedently assume” or want to “force” HOT theory into any pre-conceived variation. Rather, I thought that certain problems and objections to HOT theory — misrepresentation but also many others – make my modified version of HOT theory more plausible than standard HOT theory. Of course, at that point, one begins to defend the alternative against other objections, argue for other advantages, etc…. The following is also surely a reasonable method that many philosophers and scientists properly use:
    Step 1. A theory says X; 2. X has certain problems; 3. Perhaps theory Y is better with respect to problems A, B, and C…(even if close to X in many ways); 4. Go on to defend theory Y and contrast Y to X. 5. Hey, theory Y now seems to be able to explain some things and deal with further objections better than X! (Sometimes we might even just “try out” an alternative theory to see how it fares, and then, over time, honestly believe that it fares better!) But there’s no “forcing” a theory one way or another. As a matter of fact, one could equally make the case that your way of arguing for HOROR also fits this method quite well. Did you antecedently believe HOROR and then force HOT theory into it somehow? Seems like you also did something more like 1-5 to me. At the least, I wouldn’t presume that you merely antecedently wanted to force HOT theory into HOROR in the way that you apparently do with respect to my view: “If TP entails that a state can be conscious even if it is not actually tokened, and we think that TP is the best account of what state consciousness consists in, then we should accept this result. Of course, you can modify it if you want to but you cannot say that the modification is anything but an attempt to modify the theory in order to bring it in line with the way you think things should turn out.” Indeed, this last part might even sound like an attempt to unfairly dismiss an alternative version of HOT theory, though I trust that you don’t think the same can be said about your own potential alternative. (Think also about how so many alternative philosophical views are developed.) We (and others) also may honestly disagree about what is or isn’t “central” or “traditional” for a given theory to hold. I give up that HOTs are entirely distinct from their targets but, as I’ve noted, you also seem to give up quite a bit, perhaps even more in my view. 
Much of what I have quoted in my posts from your initial commentary seems to me to be VERY different than HOT theory. In any case, there is surely a fine line between giving up a theory entirely and modifying a theory based on evidence, argument, objections, etc…. One might often be torn on this matter over a period of time.
    And, again, I don’t think that “TP entails that a state can be conscious even if it is not actually tokened” – TP, at best to me, entails e.g. that some form of HO theory (or even “self-representationalism”) is true, and then an argument by elimination is what typically follows. But if one does think that offering a reductionist theory is important for various other reasons, then TP certainly cannot be taken as entailing that the HO state is itself conscious. It is of course true that the goal of reductionism needs justification too; fair enough – I do so in Ch. 2 of my book, but this is often such a shared assumption in the HO literature that it is not often explicitly defended.
    Finally, you say: “But my point, which you don’t seem to have grasped, is that by doing so you reject HOT theory as an account of state consciousness, since that account entails that we give up the relational view.”
    A good example of you treating some aspect of HOT theory as “central” or even “essential” (i.e. giving up the “relational view”) that I do not, or that I would perhaps treat as more of a terminological issue. I think you’ve given up other aspects of HOT theory that are more central to HOT theory. I’m willing to accept that some of this is terminological – e.g. one often finds in the literature authors (including myself) wondering in print about the overlapping terminology, e.g. “same-order” “higher-order” “self-representational” etc…. You spoke of “the theory itself” as if that is so clear one way or another. Actually, I recall a discussion about this general topic at a Tucson conference – here’s a footnote from my book: “….Kriegel agrees that this is largely a terminological matter, but he still opts to restrict use of “higher-order” to theories that treat the HOT as a distinct state. Thus he often calls his view “same-order monitoring” (e.g., Kriegel 2006). However, during a session at the 2004 “Toward a Science of Consciousness” conference in Tucson, it also became clear that some hold stronger views on this matter. Andrew Brook urged me to jettison all use of “higher-order” in my theory, whereas Peter Carruthers thought that Kriegel had misnamed his theory. I agree more with Carruthers here, and Van Gulick also clearly has this preference; but, again, I take this mainly to be a terminological dispute. One problem, though, is that the converging similarities between all these positions might be lost….[also] In retrospect, perhaps I should have chosen a more catchy name for my theory, but at this point, I hesitate to add to the abundance of acronyms and theory names already in the literature. Even just “WIT” (wide intrinsicality theory) would at least have been easier to say. Other, sexier possibilities are “intrinsic HOT theory” (IHOT) and the more provocative “1½ order theory of consciousness” or “split-level theory of consciousness.””
    Here’s just one additional excerpt on the point about preferring something more like the WIV with regard to some neuroscientific evidence (much more in esp. chs. 4, 6, and 9 of my book; NOTE: I sometimes use ‘MET’ for ‘metapsychological thought’ instead of ‘HOT’):
    “I suspect that the underlying concern at the heart of Levine’s and Weisberg’s objections has more to do with the following questions: If first-order misrepresentation is really impossible in the WIV, then why even call the relationship between MET and M a representational one? What kind of representation (and thus representational theory) does not really allow for misrepresentation? The main reason to treat the MET as a representation of M is that it is still an unconscious metapsychological thought about M. Now, at the neural level, the MET is still directed at M even though there is important interaction between them, which warrants treating the complex state as a single state with two contents. As is well known, the brain has layers of representation going from “lower” to “higher” areas. We can think of this in terms of a hierarchy where the higher areas represent the lower areas. But in the case of conscious states, the relation between the MET and M is what Feinberg (2000) would call a nested one; that is, there is dynamic interaction in both directions due to feedback loops and concept application. This contrasts with, for example, the central nervous system in general, where we have a nonnested hierarchy, that is, a purely bottom-up sequence of representations. Feinberg (2000, 2001, 2009) has argued for what he calls the “nested hierarchy theory of consciousness” (NHTC). According to Feinberg, in a nonnested hierarchy, lower and higher levels are independent entities in which the top of the hierarchy is not physically composed of the bottom. A nonnested hierarchy has a pyramidal structure with a clear-cut top and bottom with the higher-levels controlling the lower levels, analogous to a military command structure. In a nested hierarchy, however, lower levels of the hierarchy are nested within higher levels to create increasingly complex wholes. This idea is also applicable to many other structures in living organisms, such as individual cells. 
Unlike an account of neural hierarchy that views the brain as a nonnested hierarchy, the NHTC (like the WIV) would treat some areas of the brain as a nested hierarchy when conscious states occur. The idea is that lower-order features combine in consciousness as part of (or nested within) higher-order features. So consciousness is not narrowly localizable, but it is also not very strongly global. And conscious states are thus neurally realized as combinations of lower-and higher-order brain features. Thus we can view a conscious mental state as a complex of two parts that are integrated in a certain way. Like the NHTC, essential reciprocity exists between specific neural structures on the WIV. The structures in question are not merely laid upon one another without neural functioning going in both directions. Thus my view is not merely what has been called a hierarchical theory whereby the farther up one goes in, say, one’s visual system, the more consciously aware of a stimulus one becomes (Pollen 1999; Lamme and Roelfsema 2000). It is much more of an interactive theory such that “once a stimulus is presented, feedforward signals travel up the visual hierarchy. . . . But this feedforward activity is not enough for consciousness. . . . High level areas must send feedback signals back to lower- level areas . . . so that neural activity returns in full circle” (Baars and Gage 2010, 173). And perhaps the most crucial point is that part of the reason for this may simply be that “higher areas need to check the signals in early areas and confirm if they are getting the right message” (173). If there is no such confirmation, including perhaps a hypothetical case of misrepresentation between M and MET, then no conscious state occurs. I will elaborate further on these themes in chapters 6 and 9.”

  17. David, thanks for your challenging comment!

    Let me clarify two points. First, in presenting my suggested condition for being mental-like, I did not mean to describe it as one of actually having a direct (i.e., independent of being represented) effect on my mental life, but as one of having an ACTUAL OR POTENTIAL direct effect on my mental life. By this I meant to capture the standard functionalist way of describing causal roles. On a functionalist view, every mental state realizes a causal role. This means that it WOULD cause certain things if certain mental conditions were to obtain. A mental state, on a functionalist view, need not actually cause anything. I don’t think that conscious peripheral visual states or stray thoughts and desires count as counterexamples to functionalism, right? So, I submit, they also do not count as counterexamples to my condition for being mental-like.

    So, to repeat, on my suggested view, a state S is mental-like only if the following condition holds: S would have a direct effect on my mental life if certain mental conditions were to obtain.

    Now for the second point: You say that my suggestion is question begging. However, in my post I have gestured at an argument for imposing my condition for being mental-like. Let me present it in a (somewhat) more detailed fashion. My thought is that a HOT theorist is working (at least in the background) with some account of the difference between mental-like states and non-mental-like states. Such an account is required to explain why an apple that is represented by me (in a certain way) is not conscious whereas an experience that is represented by me (in a certain way) is conscious. Differently put, we need an account of why a representation of a visual experience is a higher-order representation rather than a first-order one. It is a higher-order representation, presumably, because it is directed at a mental-like state.

    What is now required is an account of the relevant difference between a seen apple and a visual experience, in virtue of which the latter counts as mental-like whereas the former does not.
    What could this difference be? One suggestion is that the apple, unlike a visual experience, has no representational content. But, at least on causal-covariational accounts of representation, it (sometimes) does. It, e.g., indicates that there is an apple tree nearby (because it is caused, in normal conditions, by apple trees), which means that it represents that there is an apple tree nearby. If you disagree, you can change the example to, e.g., a thermometer I am seeing, which, on standard causal-covariational accounts, has representational content, but is not conscious. Now, a HOT theorist might deny causal-covariational theories of mental representation, but that is a price.

    What, then, is the relevant difference between the apple and a visual experience? I think it is natural to say that a visual experience influences my mental life regardless of its being an object of a different mental state, whereas an apple does not. Put differently, the apple counts as ‘external’ to me exactly because it can influence my mental life only indirectly, i.e., via being represented by me, whereas a visual experience counts as ‘internal’ to me (and so mental-like) because it can influence my mental life directly, i.e., independently of being represented by me.

    I have just sketched an argument in favor of imposing my condition for being mental-like. I think that it answers your challenge with a challenge: a HOT theorist who thinks (as you do) that non-existent states can be conscious now faces the challenge of providing a reasonable condition for mental-likeness that is different from the one I have suggested. This condition must be one that seen apples (and seen thermometers) fail to meet but that visual experiences and (certain) non-existent states do meet. Absent such a condition, we have a reason for denying that non-existent states can be conscious.

    (For other readers: I use the (perhaps confusing) term ‘mental-like’ rather than ‘mental’ because of the nagging issue that a non-existent state can be mental. My claim that a non-existent mental state is not mental-like is meant to convey that it ‘behaves’ in a way that is more similar to seen apples than to actually occurring visual experiences.)

  18. Richard wrote that I “fail to notice” that I am “simply assuming that the phenomenally conscious state you are in must exist (which David denies).” I have not failed to notice that David and I disagree on this point, but I don’t think that it makes my argument in this paper circular. Although I mention David’s view among the empirically motivated defenses of higher-order theories, my aim in this paper isn’t to argue against his view. Instead, I have the narrower aim of showing that the empirical evidence presented in Richard and Hakwan’s paper does not help the higher-order view to meet the challenge from empty thoughts. Although my assumption is not consistent with David’s view, it is consistent with Richard’s, since he holds that the higher-order state is conscious. If my starting assumptions are consistent with Richard’s version of the higher-order view, then they suffice for my purposes in this paper.

    I appreciate David’s criticism: in order to raise a compelling objection to his view, I would need to provide an argument for my assumption that phenomenal consciousness implies the existence of a phenomenally conscious state. My assumption would indeed be question-begging if used to attack David’s claim that a subject can be in a phenomenally conscious state that doesn’t exist. Arguing for my assumption, and thus against David’s view, would require a different argument than I have given in this paper.

    Here’s one argument: The existence of a mental state is a precondition for posing the very question that higher-order theories are expected to answer. If no conscious mental state exists, then there is no question of what makes that mental state conscious rather than unconscious – so there’s nothing for the higher-order theory to explain. I suspect that David will find this first argument unconvincing because of the implicit assumption that only existing conscious states could stand in need of explanation. Rather, if I understand his view correctly, he thinks that the goal of a theory of state-consciousness is to explain how a subject’s mental life subjectively appears to her. Posing this problem doesn’t require that the state subjectively presented to her actually exists, only that it appears to exist.

    My second argument begins from a commitment to naturalism. A conscious state that doesn’t exist cannot be naturalistically explained. A non-existent state cannot stand in causal relations to other mental states; it cannot cause behavior, inference, or even verbal report; and it cannot be experimentally measured – only things that exist can be measured. One consequence of this is that it makes conscious states epiphenomenal. A second consequence is that conscious states cannot be given a naturalistic explanation. Naturalistic explanation requires that we can explain consciousness using ordinary terms of, e.g., causation and representation. When subjective appearance is not reflected in mental reality, it is placed outside the domain of naturalistic investigation.

    One of the goals of a naturalistic theory of consciousness is to demystify the relation between subjective appearance and mental reality. What initially intrigued me about higher-order theories is that they aim to provide a naturalistic explanation of phenomenal state consciousness in ordinary representationalist terms. If the solution to empty thoughts leads to a view that places conscious states outside the domain of naturalistic explanation, then this compelling motivation is lost. This gives us reason to prefer a version of the higher-order view which doesn’t allow for non-existent conscious states.

  19. Nicely put Adrienne — I largely agree with you here! But I’d just also like to point out that this is very much like the “problem of the rock” that I mentioned in passing in other posts.
    For example, you say: “The existence of a mental state is a precondition for posing the very question that higher-order theories are expected to answer. If no conscious mental state exists, then there is no question of what makes that mental state conscious rather than unconscious – so there’s nothing for the higher-order theory to explain.”
    In response to Richard, I similarly said: “I just don’t see how an answer that requires the presence of a mental state in the first place can be ad hoc. If there isn’t a mental state present to begin with, then there is nothing to explain in terms of what makes “it” conscious as opposed to unconscious.”
    This is one reason that I think, apparently contra Richard, that other objections are very relevant to the discussion in this session. These problems are importantly interrelated. (Of course, I also think that my alternative HOT theory handles this issue better than standard HOT theory. Note also below that Van Gulick makes a similar point in advocating his “HOGS” position.)
    So here’s something from section 4.3 in my 2012 book (some is reprinted from my 2005 JCS article):
    “Following Stubenberg (1998), I will call the following classic objection from Alvin Goldman to all higher-order (HO) theories of consciousness “the problem of the rock”:
    “The idea here is puzzling. How could possession of a meta-state confer subjectivity or feeling on a lower-order state that did not otherwise possess it? Why would being an intentional object or referent of a meta-state confer consciousness on a first-order state? A rock does not become conscious when someone has a belief about it. Why should a first-order psychological state become conscious simply by having a belief about it? (Goldman 1993, 366)”
    [I skip quite a bit here including my discussion of David’s 1997 response]
    So where do we go from here? It is first necessary to return to Lycan’s response to the problem. Although he did not go far enough, Lycan does take the first crucial step. We must first and foremost distinguish rocks and other nonpsychological things from the psychological states that HO theories are attempting to explain. HO theories must maintain that there is not only something special about the meta-state . . . but also something special about the object of the meta-state, both of which, when combined in certain ways, result in a conscious mental state. The HO theorist must initially boldly answer the problem of the rock in this way to avoid the reductio whereby a thought about any x will result in x’s being conscious. So HOT theory does not really prove too much in this sense, and various principled restrictions can be placed on the nature of both the lower-order state and the meta-state to produce the mature theory. In this case, a rock is not a mental state, and so having a thought about a rock will not render it conscious. After all, the HOT theory is attempting to explain what makes a mental state a conscious mental state. This is not properly recognized by those who put forward the problem of the rock.

    Two further moves can also be made. First, recall from chapter 2 that it might be wise to raise the next natural question: what makes a state a mental state? As we have seen, there are differing views here. One might, for example, insist that mental states must fill an appropriate causal-functional role in an organism. Alternatively, one might even simply identify mental states with certain neural or biochemical processes in an organism (Crick 1994). Either way, however, it is clear that external objects, such as rocks, cannot meet these criteria. The LO states in question thus have certain special properties that make it the case that they become conscious when targeted by an appropriate HOT. It is also important to note that this response effectively handles other related objections to the HOT theory. Various internal states such as cancer (Dretske 1995, 97, 100) and liver states (Block 1995, 280) are also ruled out by these criteria.
    Second, in a similar vein, if we return to the idea that the meta-state is an intrinsic part of a complex conscious state, then it is also clear that rocks cannot be rendered conscious by the appropriate HOT or MET. This is because, in such a view, the MET must be more intimately connected with its object, and it is most natural to suppose that the target object must therefore be “in the head.” That is, both “parts” of the complex conscious state must clearly be internal to the organism’s mind. Van Gulick (2000, 2004), who calls this “the generality problem,” makes a similar point when he says that “having a thought . . . about a non-mental item such as the lamp on my desk does not make the lamp conscious . . . because [the lamp] cannot become a constituent of any such global [brain] state” (Van Gulick 2000, 301). Thus it is difficult to compare the inner (mental)/inner (mental) relation as described by HO theories to the inner (mental)/outer (rock) relation described in Goldman’s initial objection. Like the WIV, this move provides Van Gulick and any intrinsic HO theorist with an additional counterargument not available to standard HOT theory.”

  20. Thanks very much, Assaf, for your clarification about an actual or potential causal impact on one’s mental life. But I’m not sure how much it gets around my earlier worries.

    For one thing, ‘potential’ is notoriously difficult to cash out in precise, specific terms. So an overarching worry remains that the condition you impose, cast in terms of an actual or potential causal impact on one’s mental life, is just a way of saying, with a nod to functionalism, that the first-order state must exist. I’m myself quite convinced by functionalism (of the Lewis sort, not especially the MIT version). But appeal to the causal platitudes seldom if ever involves potential effects, though it does often involve causal results spread through a large network in a quasi-holist way. The MIT, machine-table version of functionalism, it seems to me, allows even less room for appeal to potential causal results.

    But there are more specific, less overarching concerns. You write: “On a functionalist view, every mental state realizes a causal role. This means that it WOULD cause certain things if certain mental conditions were to obtain.” What conditions would need to obtain for an existent first-order visual state that is very peripheral but nonetheless conscious to have an actual effect on one’s mental life? Presumably not that it’s less peripheral; that would simply be a state of a different type.

    On the other hand, might it happen that an individual has a higher-order awareness (HOA) of being in a particular first-order state, the individual is not actually in any such first-order state, but the occurrence of the HOA causes the individual to come to be in just such a first-order state? I take it that we simply don’t know at this time; but things at least as strange do get discovered almost weekly.

    If that did happen, the new first-order state would presumably have an effect on the individual’s mental life. It’s unclear to me why that would not be a case in which a first-order state that doesn’t exist but is merely represented as existing has the potential to have a causal impact on one’s mental life; under the right circumstances, which these might be, merely having a HOA that one is in a particular first-order state would result in that first-order state’s coming to occur and thereby having an effect on one’s mental life.

    As to my charge that your appeal to the property of a state’s being mental-like is question begging, you write that “we need an account of why a representation of a visual experience is a high-order representation rather than a first-order one. It is a high-order representation, presumably, because it is directed at a mental-like state.”

    Not on my higher-order account. A state is higher-order on my view if it represents one as being in another mental state. That’s all my term, ‘higher-order’, was ever intended to mean. So I don’t really understand what you have in mind by saying that “we need an account of why a representation of a visual experience is a high-order representation rather than a first-order one.” If a state represents one as being in a mental state, that state has higher-order content. That’s all it amounts to.

    Consider intrinsicalist theories, such as those of Gennaro and Kriegel. They hold that the higher-order content occurs as an aspect of the state that the higher-order content makes one aware of oneself as being in. (As Weisberg has convincingly shown in a couple of articles, that doesn’t imply that the state of which that higher-order content is an aspect actually is as the higher-order content represents it to be. A state could represent an individual as being in a state of seeing an apple without that state’s having any first-order visual properties that count as seeing an apple.)

    Kriegel for some years insisted that this kind of theory was not a higher-order theory. But in his 2009 book he gave up on that, and now admits that it is a higher-order theory, since a state’s being conscious is a matter of that state’s having higher-order content. Does the higher-order content’s being intrinsic mean that the state is a first-order state? No; it’s a higher-order state simply in virtue of its having higher-order content.

    So I don’t think that there is a real issue about “why a representation of a visual experience is a high-order representation rather than a first-order one.” And so I don’t see that your answer, that “[i]t is a high-order representation, presumably, because it is directed at a mental-like state,” avoids begging the question with respect to the issue at hand.

    You speak parenthetically at the end of “the nagging issue that a non-existent state can be mental.” Well, nonexistent things of course have no properties whatsoever. But nonexistent things can be represented as having properties. Santa Claus is represented as being a man who has reindeer and lives at the North Pole. (I guess.) I can have a HOA that represents me as seeing an apple even though I am not in any first-order state of seeing an apple. The objects, Santa Claus and the first-order state of my seeing an apple, are not anything at all; they don’t exist. But they are notional objects.

    My argument, against Block last summer and now against Adrienne, is that it’s clear that state consciousness is a matter of how our mental lives appear to us subjectively. It’s not obvious that when it appears subjectively to one that one is in a particular mental state, one is actually in that state. An argument for that step is needed. Your appeal to the condition that a conscious state be mental-like is an argument that, if sound, would close that gap. I’ve been arguing that it isn’t, after all, sound.

  21. Thank you David! I wanted to see where my glimpse-of-an-idea might lead to, and you have helped me, via your clear and challenging replies, to see more clearly where it leads. In what follows, I somewhat clarify some things I said in my previous post. I’m afraid, though, that these clarifications are hardly enough. I concede that much more needs to be said in order to answer your challenge.

    You are, of course, correct that all that you have ever intended by a ‘high-order’ state was a state with high-order content. When I said, in my previous post, that a mental state directed at a non-mental-like state is not high-order, I was implicitly committing myself to the view that being a high-order state amounts to more than having high-order content. Let me make this commitment more explicit.

    The difference between ordinary (that is, not empty) high-order states and first-order states is not only a difference in content; rather, it is also a difference in the role they realize. The objects of first-order states acquire influence on one’s mental life only via these states. These states function as a channel through which their objects influence one’s mental life. By contrast, the objects of ordinary high-order states have an (actual or potential) effect on one’s mental life independently of their being represented (you contest this claim, but let us grant it for the sake of argument). Thus, ordinary high-order states do not realize the role of enabling their objects to influence one’s mental life (because these objects are already capable of influencing one’s mental life independently of being represented).

    An empty HOT admittedly has high-order content, but, unlike ordinary HOTs, the object of an empty HOT has influence on one’s mental life only via this HOT. Thus, the empty HOT realizes the role of enabling its object to influence one’s mental life. Thus, an empty HOT seems to serve a role that is quite similar to that of ordinary first-order states. So, despite having high-order content, an empty HOT realizes a role that ordinary HOTs do not realize; a role that ordinary first-order states realize. Given that mental states are type-individuated via the role they realize, it would seem to follow that an empty HOT is not of the same mental type as an ordinary HOT. An empty HOT is therefore a hybrid state: it has features that ordinary HOTs have but it also has features that ordinary first-order states have.

    Now, the fundamental intuition that supports HOT theory is that a mental state is conscious if it is targeted by a HOT. Arguably, this intuition concerns ordinary, everyday, HOTs, which do not realize a first-order role. Perhaps, then, it could be argued that we have reason to doubt that this intuition applies to empty HOTs, since empty HOTs and ordinary HOTs belong to different mental types. If so, then it would seem that we are unjustified in holding that the objects of empty HOTs are (or can be) conscious (on the basis of the aforementioned intuition).

  22. Thanks very much, Assaf, for your thoughtful reply. A few very brief reactions.

    You suggest that, since an empty HOT would affect one’s mental life only by way of the HOT itself, without any help from the first-order state it represents one as being in, “an empty HOT seems to serve a role that is quite similar to that of ordinary first-order states.”

    I don’t see that. Put aside intrinsicalist higher-order views for a moment. If the HOA isn’t empty, then the HOA and the first-order state each affect one’s mental life; they presumably have causal results that are independent, though they may of course overlap and interact.

    The same holds for intrinsicalist higher-order theories; the higher-order content would have one set of causal results and the first-order aspects of the relevant state (if they exist) would have another, independent set, again, likely overlapping and interacting.

    So I see no reason to think that a HOA would, on its own and in the absence of any actual first-order state that the HOA represents one as being in, have a causal impact on one’s mental life that’s similar to that of a first-order state.

    Is there much causal result from the higher-order content–on either version of a higher-order theory, intrinsicalist or distinct higher-order state?

    I have argued elsewhere (“Consciousness and Its Function”, Neuropsychologia, 46, 3 (2008): 829–840, and more recently §4 of “Higher-Order Awareness, Misrepresentation, and Function”, Philosophical Transactions of the Royal Society B: Biological Sciences, forthcoming, both available at http://dl.dropbox.com/u/16674062/Rosenthal-Publications.htm) that higher-order states have very little causal impact on our mental lives.

    And my arguments for this are *independent* of any assumptions about whether higher-order theories of consciousness are correct.

    Anyway, these are just a couple of concerns I have.

  23. Hi everyone, sorry it has taken so long to get back to this but things have been crazy out here in Meat Space 🙂

    Thanks to Rocco and Adrienne for their responses. I am enjoying this discussion and think it is important to get these things right. Let me start by addressing Rocco’s last reply to me and then turn to Adrienne’s (and Rocco’s seconding of it).

    The Ad Hoc Charge
    This, I think, is really the most important issue so I should start with it. You admit that you have modified the theory to handle certain objections. My point is that those modifications actually nullify the explanatory power of the higher-order theory, or to put it another way, that to make those modifications is to give up on the transitivity principle as an explanation of what state consciousness consists in.

    All higher-order theories are committed to some version of the Transitivity Principle, which says that a mental state is conscious just in case I am aware of myself as being in that mental state, in some suitable way. So, when we explain the difference between a conscious mental state and an unconscious mental state we appeal to a certain kind of awareness, not a relation between two mental states. So, to repeat a bit of what I have said already, when the first-order state is there and is conscious it is so because I am aware of myself as being in that state. Do you agree with this much? If so, then why is it that when I am aware of myself as being in that same state in the case where the state does not exist there will be a difference? Will it not still seem to me, from my point of view, as though I am in the first-order state? If you say ‘yes’ then you have no reason to deny the non-relational version of the theory, but if you say ‘no’ then you have no way to explain what is going on in the ‘normal’ case.

    The crux is this: either you admit that the higher-order state fully determines the way things seem to you subjectively or you deny that. If the former, then you cannot say what you say about the empty cases without denying that the transitivity principle gives the explanation of what state consciousness consists in (it only does when the first-order state is there; but why?), but if the latter, you also give up the transitivity principle and have to deny that state consciousness consists in my being aware of myself as being in some first-order state (again, it only happens in some cases that when you are aware of yourself as being in some state there is a conscious mental state). So it is, on your way of doing things, more than merely the transitivity principle which is doing the explaining.
Now, as I said, you can do that if you think that better fits the data (as I gather you actually do, I think Hakwan does as well, at least on some days) but you can’t do it and say that the transitivity principle explains what a conscious mental state is. That is why I said you were ‘forcing’, etc. Once you accept the transitivity principle as an explanation of what state consciousness is, the rest follows.

    Mentalistic Reduction
    I am not sure why you say what you do about reduction. I think maybe your point is that on my way of doing things we don’t get to say that it is an unconscious higher-order state which does the work. But if so that is a mistake (as I have tried to say above). State consciousness consists in my being aware of myself as being in some mental state; phenomenal consciousness consists in there being something that it is like for me to be in a mental state. On my view it is a surprising discovery that these two notions can be explained (via slightly different routes) via the transitivity principle (this is, in part, why there is a modest and an ambitious version of the theory). Thus the higher-order state is not itself state conscious (introspectively conscious; i.e. the kind of consciousness that requires me to be aware of myself as being in some state) but it is phenomenally conscious (i.e. there is something that it is like for me to be in that state). So, phenomenal consciousness is reduced in the same way that it always has been on the higher-order view: to a higher-order thought-like representation.

  24. Adrienne, thanks again for your reply.

    I know that David would probably disagree, but I see my take on these things as merely (or mostly) a terminological variant on David’s version of the theory. But even if that isn’t the case, Hakwan and I appeal to both ways of talking in the paper (mostly because Hakwan leans towards, or is sympathetic to, David’s non-existence approach and I see them as mostly the same). That is why I talk about both approaches in my comments and why David’s remarks are appropriate (and I take him to be making the same points discussed in my comments). So, if you want to address my view (or Hakwan’s and my view) you have to address the claim that the empirical evidence we appeal to is consistent with both approaches and that on either approach there is no explanatory problem with empty thoughts.

    Since you recognize that the first argument won’t work I will make just one quick note on your second argument. It is not clear why you think that a “conscious state that doesn’t exist cannot be naturalistically explained.” This is especially so since you say that naturalistic explanation “requires that we can explain consciousness using ordinary terms of, e.g., causation and representation”. A non-existent mental state is simply one that you represent yourself as being in but are not in fact actually in. That explanation is done completely in terms of representation and its properties (intentionality). How is that not naturalistic in your sense?

  25. Richard, thanks for your comment – and thanks to Rocco for posting the excerpts on the “problem of the rock.” I’ll post again soon in response to Richard’s question about naturalistic explanation.

    First I’d like to follow up on one of Richard’s questions from a previous post. He asked what I would say about Rahnev’s evidence for inattentional inflation. I think that Rahnev’s study and the peripheral vision cases provide less compelling evidence for empty thoughts than Charles Bonnet syndrome, because in each case it’s not clear that the first-order representation isn’t “strong enough” to account for the subject’s conscious experience.

    In Rahnev’s experiment, I don’t think that subjects’ higher confidence and detection bias show that there is a difference in the conscious experience of the unattended vs. attended grating. Instead, these effects show that subjects have a different degree of confidence in their first-order judgments about the world depending on whether the first-order representation is presented as unattended or attended. Even if we assume that the first-order representations are similar in phenomenal “strength” or “richness,” a rich representation in the periphery signals something different about the world than a rich representation in the focus of attention. Namely, it signals that the grating in the periphery has higher actual contrast than a similar representation in the focus of attention. Whenever the richness of the attended and unattended gratings is similar, the representation of the unattended grating will indicate a higher actual contrast, and the subject will be more confident in the unattended than the attended grating. (This is assuming that we are generally more confident in our judgments about high-contrast than low-contrast stimuli.) There is no “degree of emptiness” to explain, because subjects’ performance doesn’t indicate a richer conscious experience of the unattended grating. It indicates overconfidence in their first-order judgments about the unattended grating.

    The peripheral vision cases, on the other hand, do seem to involve a discrepancy between first-order representation and conscious experience. To provide a first-order account of peripheral vision, we need to reject the assumption that first-order representations are realized exclusively in early perceptual processing areas. As suggested by Miguel Sebastian in an earlier post, another explanation of Charles Bonnet syndrome is that first-order representations are realized in regions of the brain beyond the primary visual cortex. In Richard and Hakwan’s paper, they assume that first-order representations must be realized in early perceptual processing areas, because this seems to be Block’s view (and it’s his view that they are criticizing). I agree with Miguel that we should allow for first-order perceptual representations realized elsewhere in the brain. For example, a visual experience of the periphery may depend on memory of recent visual experiences as well as activity in the primary visual cortex. Roughly, my idea is that the content of consciousness at any moment is determined by more than the representations in early perception at that moment; the content at the current moment is partly determined by representations of the recent past. Allowing for first-order visual representation realized outside the primary visual cortex eliminates the discrepancy between first-order representation and conscious experience of a colorful and detailed periphery. So once again, there is no degree of emptiness to explain.

    By the way, I should probably add something about the “problem of the rock”, since it has come up so many times. On the standard interpretation of the transitivity principle there isn’t even the possibility of a problem of this kind. State consciousness consists in my being aware of myself as being in some first-order state (in some suitable way) and I (obviously) cannot be aware of myself as being in a rock…the “problem” only arises when you think of the higher-order state as somehow transferring consciousness onto some first-order state…so it isn’t the case that Rocco’s view can deal with it better since it is better not to have the problem arise in the first place 🙂

    Hello David: “Where does Joe say ‘it is ad hoc to agree with Rosenthal on his answer to misrepresentation cases’…?”

    Apologies, David (and Joe, for that matter). I was giving my own gloss on Joe’s view of the choice between Options 1 and 2 as “arbitrary” — not quite the same as “ad hoc”, I guess, but close. Nonetheless, I pretty much agree with him (see my February 20, 2012 at 00:58 post, esp. under sec. 4.2.1) with regard to those options. It is actually option 3 that he says “seems ad hoc…” (p. 108 in Purple Haze), but of course here I disagree with him!! And I also argue that options 3 and 4 are really two sides of the same coin. But I didn’t include all that in the book excerpt I posted… Rocco

  28. Hi Rocco,

    Thanks so much for following up on that!

    I wouldn’t have followed up about ‘arbitrary’, though Joe is simply wrong about that, since I do give an argument for my take, and he says nothing against that argument except that he finds it unconvincing. (He finds it unconvincing because he sees the qualitative character and awareness of it as having to go together; but that’s just what’s at issue.)

    ‘Ad hoc’ seemed (at least a bit) stronger, and I was interested if there was something I was overlooking.

    Very many thanks–

    David

    Agreed, David – Yeah, Joe (same book/page ref.) does explain why he thinks that “whatever one answers, there is a problem” re: both options 1 and 2. As I said, I do agree to some extent that there are the problems he mentions with respect to these options, but not necessarily for the main reason he gives and obviously not with respect to his options 3 or 4. I’ll also post something re: Richard’s last post, sometime later today, hopefully! (All kinds of tornado watches overnight here in Evansville but no major damage right in town!) — Rocco

  30. Hi everyone, amazing discussion!

    In a post on February 19, 2012 at 13:51 I wondered whether the appeal to how things subjectively seem, to restrict the cases in which the HOA gives rise to a conscious state, was problematic. Given that Richard also seems to rely on this restriction I would like to ask again.

    I understood this condition as “…and it seems subjectively to her that this thought does not causally result from inference or observation.” (Richard also added this claim in one of his replies to Adrienne: “seeming non-inferential”).

    In reply David said that “The condition is not that it seems that the awareness does not rely on inference or observation; it’s that it is not the case that the awareness does seem to so rely.”

    I might be missing some details, because I do not see straightforwardly how this clarification might be helpful.

    Again, the reason why it seems to me to be problematic is that it explains what seems subjectively to oneself in terms of a HOT of the right kind but what seems subjectively (or what doesn’t) to oneself is part of the individuation conditions of a HOT of the right kind.

  31. I had written: “The condition is not that it seems that the awareness does not rely on inference or observation; it’s that it is not the case that the awareness does seem to so rely.”

    Miguel wrote: “I do not see straightforwardly how this clarification might be helpful[.] Again, the reason why it seems to me to be problematic is that it explains what seems subjectively to oneself in terms of a HOT of the right kind but what seems subjectively (or what doesn’t) to oneself is part of the individuation conditions of a HOT of the right kind.”

    I suppose what Miguel finds problematic is the appeal to subjective appearance in specifying when a HOT is of the right sort. That doesn’t seem to me to be automatically problematic. If it is, perhaps Miguel can say why it is.

    The relevant subjective appearance is the explanandum–what is to be explained by the HOT theory. It’s not as though the theory must be barred from mentioning that explanandum. The chemical explanation of water needn’t forswear mentioning water as such.

    What would be problematic would be mentioning the explanandum of subjective appearance in a way that was circular or (probably equivalently) led to a vicious regress of some sort. I don’t see that my mention of the explanandum does that.

    It says that a state is conscious if one is subjectively aware of oneself as being in that state, and one is subjectively aware of being in that state if one has a HOT that one is in it and one is not subjectively aware of oneself as having that HOT as a result of inference or observation.

    One can then unpack that, in turn and without regress or circularity, by saying that a state is conscious if one has a HOT that one is in it and one has no additional HOT to the effect that one has the first HOT as a result of inference or observation. Why is that better than providing instead that one have an additional HOT that one does not have the first HOT as a result of inference or observation?

    One reason is that that alternative condition doesn’t seem accurate; one seldom if ever does have that additional HOT. That’s easy to see: If one had the additional HOT, one would be aware of the first HOT, which arguably occurs only in cases of introspective awareness of one’s being in a state.

  32. Terrific! Thank you very much for your clear explanation David. Your reply dismisses my worries about circularity.

    Just for clarification: the only way I can have a thought T with the content ‘I am in M’ without M thereby becoming conscious is if there is another thought to the effect that T results from inference or observation. Is this right?

    However, in Richard’s HOROR theory, a worry in the vicinity seems to remain (if I have properly understood him):
    In HOROR, a state is phenomenally conscious iff it is a HOT of the suitable sort.
    And a HOT is a HOT of the suitable sort iff it seems to be non-inferential and….

    It would be weird to claim that a state is phenomenally conscious in virtue of not having another thought to the effect that one has the first one as a result of inference or observation.
    Does this make sense?

  33. I like Richard’s work and have high regard for it. But I would put things not his way but mine.

    Is its seeming subjectively that a HOT is due to inference or observation the only way one might have a HOT that one is in a mental state without that state’s being conscious? I see no reason to pronounce on that. Do you have another candidate?

  34. No, not really; maybe there are weird cases, but I guess they are manageable or not very convincing.

    Maybe something along these lines:
    Someone might be willing to postulate unconscious HOTs in some situations without any state thereby being conscious and without there being any further HOT. Imagine that someone talks to me about food and about how long I have been without eating. Imagine that I form a HOT with the content ‘I am hungry’ but no state is thereby conscious (I do not feel hungry). Your theory satisfactorily explains this.

    The conversation moves on to different topics, and five minutes later I go to the kitchen to eat something. Someone might be willing to postulate an unconscious thought to the effect that I am hungry to explain my behavior (in spite of the fact that I do not feel hungry) and insist that it would be weird to hold that there is still a further HOT to the effect that the previous one is the result of inference.

    What do you think?

  35. I don’t think marginal cases are the place to test a theory. I don’t know how to settle whether the hunger that occurred when the HOT that one was hungry occurred was itself conscious. I think that, as usual, one gets a theory that explains clear cases, branches out to somewhat clear cases, and in the end uses a theory that does all that to settle cases that aren’t all that clear. I think that any other procedure is in effect question begging.

  36. Hello Richard – Yes, I agree, an important and worthwhile exchange. This may be my final post! Hard to say at this point, but thanks again for organizing the conference. So:

    The TP/Ad Hoc Charge
    RB: “You admit that you have modified the theory to handle certain objections. My point is that those modifications actually nullify the explanatory power of the higher-order theory….”

    RG: I disagree – My main modification (eliminating the distinctness of the HOT) ENHANCES the overall explanatory power of what I still take to be a version of HOT theory. I alluded to some neuro evidence of mutual interaction between mental state and HOT in a previous post. I think that my version better explains how our concepts are actually built in to our conscious experiences, among other things. Some of these reasons do predate my writings on e.g. the misrepresentation problem, but I won’t run through all of them here. As you know, I have also often been content to respond to various other common objections to HOT theory where no explanatory power has been lost and no theory modification is even needed, e.g. the hard problem, animals/infants, etc. In my new book, I also argue that there are some very interesting connections between HOT theory, especially my version, and conceptualism….among other things.

    RB: “….or to put it another way, that to make those modifications is to give up on the transitivity principle as an explanation of what state consciousness consists in.”

    RG: Yes, but again only if you mean “as a COMPLETE explanation.” (NO – if you don’t) I still do of course think that TP is true and is a good first step. As I explained in a previous post: “TP, at best to me, entails e.g. that some form of HO theory (or even “self-representationalism”) is true, and then an argument by elimination is what typically follows.” TP is a good starting point from which different (though related) theories are deduced depending on the other views or arguments of a given author. The fact e.g. that Uriah and I can agree on TP but end up with differing theories seems to support that, not to mention Bill’s HOP theory, and others.

    RB: “All higher-order theories are committed to some version of the Transitivity Principle, which says that a mental state is conscious just in case I am aware of myself as being in that mental state, in some suitable way. So, when we explain the difference between a conscious mental state and an unconscious mental state we appeal to a certain kind of awareness, not a relation between two mental states.”

    RG: I agree with the first sentence, but the “aware of…” is ambiguous between “consciously aware of” and “unconsciously aware of.” I opt for the latter – I see no reason to treat the former as THE correct version of TP but I see plenty of reason to opt for the latter. Indeed, like David, I think that the latter interpretation is far preferable, in part, to avoid charges of regress and circularity, to allow for a reductionist theory, etc…. As is so often said by HOT theorists, our HOTs are seldom themselves conscious. (More below on this.) I remain a bit puzzled by your insistence on some privileged interpretation of TP and even HOT theory. If we think of David’s view as the “standard” view, in some sense, then we have various modified and non-standard versions of HOT theory. One is mine and perhaps one is yours, but I see little reason to think that either of us holds the “real” TP or HOT theory. Again, many of the things you say in your initial commentary seem to me to be VERY different than standard HOT Theory. To be sure, when a modification of a theory becomes so different than the “initial” theory, one might begin to wonder if we should continue to CALL them both by the same name, but that seems less important than the theory’s substance.

    RB: “So, to repeat a bit of what I have said already, when the first-order state is there and is conscious it is so because I am aware of myself as being in that state. Do you agree with this much?”
    RG: YES, but again the “aware of” is ambiguous and so needs to be interpreted, I think properly and importantly, as “unconsciously aware.”

    RB: “If so, then why is it that when I am aware of myself as being in that same state in the case where the state does not exist there will be a difference? Will it not still seem to me, from my point of view, as though I am in the first-order state?”

    RG: Again, “aware of myself” in what sense? The case where I am unconsciously aware of myself “as being in that same state where the state does not exist” is different precisely because we only then have an unconscious HOT, which is by itself – well, not conscious.
    And here is precisely why I think you are, in a subtle way, smuggling consciousness into the HO state in some way (and perhaps really have something more like introspection in mind). You can SAY otherwise and CALL the HO state ‘conscious’ in a different sense than the LO state. But you also say “from my point of view” – Isn’t this referring to the HO “awareness”? If there is something it is like to be in that HO state, then what makes IT conscious in the sense you have in mind? What I take to be “standard” HOT theory has it that HOTs are unconscious when one has a first-order world-directed conscious state. A HOT is itself conscious (in any sense) only when it is targeted by yet another (unconscious) HOT.

    RB: (continued from above…) “If you say ‘yes’ then you have no reason to deny the non-relational version of the theory but if you say ‘no’ then you have no way to explain what is going on in the ‘normal’ case.”
    RG: You seem to treat a “non-relational” version of HOT theory as if that is some essential aspect of HOT theory. I disagree; as a matter of fact, I think that understanding HOT theory as a reductionistic “relational” theory involving two unconscious states and a relation between them is precisely the best way to think of the theory.

    RB: “The crux is this: either you admit that the higher-order state fully determines the way things seem to you subjectively or you deny that.”
    RG: I DENY IT as I hope is clearer now…especially since you say “fully”. But I didn’t initially deny it BECAUSE I hold the WIV; rather I initially denied it because I take very seriously the notion that, according to HOT theory, the HO state, by itself, is typically unconscious. And preserving this aspect of HOT theory is, I think, more important than the distinctness of HOT and M. Moreover, if the HO state FULLY determines the way things seem, then the LO state becomes completely unnecessary to the theory, which seems to defeat a major aim of the theory, i.e. explaining first-order (intransitive) state consciousness. Why do we need first-order conscious states at all then?

    RB: “…if the latter you also give up the transitivity principle and have to deny that state consciousness consists in my being aware of myself as being in some first-order state…”
    RG: NO – I don’t give up the TP; I give up your interpretation of the TP. And, no, I don’t “have to deny” that, since the “awareness” in question is unconscious.
    [By the way, on targetless HOTs, immediately prior to David’s often quoted “…might well be subjectively indistinguishable…” passage (e.g. his ’97 paper – “A Theory of Consciousness” in the Block, Flanagan, and Guzeldere volume, p. 744), David actually says “Strictly speaking, having a HOT cannot of course result in a mental state’s being conscious if that mental state does not even exist.” He does retreat soon thereafter and perhaps it is unfair to hold him to that quote today anyway, but I take it that this is a somewhat natural and viable option regarding how to handle targetless HOTs for a HOT theorist, depending on what he meant by “strictly speaking.” It also hardly seems to be a denial of the TP or any other major aspect of HOT theory. Perhaps I am the one who is “speaking strictly” after all!]

    RB: “So it is, on your way of doing things, more than merely the transitivity principle which is doing the explaining.”
    RG: RIGHT!!

    RB: “Now, as I said, you can do that if you think that better fits the data (as I gather you actually do, I think Hakwan does as well, at least on some days) but you can’t do it and say that the transitivity principle explains what a conscious mental state is.”
    RG: RIGHT; again, TP does not FULLY explain state consciousness. I mean, even in Lycan’s 2001 Analysis piece, he needs several other premises for his argument, and that is supposed to be a “simple argument” for HO theory. Notice that that argument doesn’t really settle the HOP vs. HOT disagreement, and it doesn’t even really rule out self-representationalism.

    RB: “Once you accept the transitivity principle as an explanation of what state consciousness is, the rest follows.”
    RG: Again, that’s much too simple; I don’t think the rest just “follows” absent other key premises or assumptions.

    Mentalistic Reduction
    RB: “I am not sure why you say what you do about reduction. I think maybe your point is that on my way of doing things we don’t get to say that it is an unconscious higher-order state which does the work.”
    RG: RIGHT – well, at least not ALL the work. And also that on your view the “unconscious” HO state starts to sound like a conscious state. After all, you say that the HO state is phenomenally conscious….etc…

    RB: “But if so that is a mistake (as I have tried to say above). State consciousness consists in my being aware of myself as being in some mental state; phenomenal consciousness consists in there being something that it is like for me to be in a mental state. Thus the higher-order state is not itself state conscious (introspectively conscious; i.e. the kind of consciousness that requires me to be aware of myself as being in some state) but it is phenomenally conscious (i.e. there is something that it is like for me to be in that state). So, phenomenal consciousness is reduced in the same way that it always has been on the higher-order view: to a higher-order thought-like representation.”
    RG: Well, again, distinguishing two senses of ‘conscious’ is fine, but if the HO state is not unconscious in both senses, then I think you are smuggling in something more like introspection. Another way to put it: if by your expressions “something it is like for me…” and “phenomenally conscious” you do really mean what I mean by ‘unconscious’, that sounds very odd to me – in that case, then perhaps some of our disagreement is more verbal than we think. What makes me not think so is that you seem to appeal to phenomenology and the first-person point of view when explaining the “aware of” in TP. I disagree with that.

    The “problem of the rock”:
    RB: “On the standard interpretation of the transitivity principle there isn’t even the possibility of a problem of this kind. State consciousness consists in my being aware of myself as being in some first-order state (in some suitable way) and I (obviously) cannot be aware of myself as being in a rock…the “problem” only arises when you think of the higher-order state as somehow transferring consciousness onto some first-order state…so it isn’t the case that Rocco’s view can deal with it better since it is better not to have the problem arise in the first place.”
    RG: I agree that “standard” HOT theory MAY be able to handle this objection as well as the WIV. I also agree that SOME who put forth this objection do mistakenly have in mind an unfortunate “causal” or “transfer” sense of what “makes” a mental state conscious. However, you’ll notice that I didn’t include my discussion of David’s answer to the problem in a previous post (it’s in pp. 72-73 in my book). Where I disagree is that the problem of the rock can simply be dismissed as confused or as a tacit endorsement of the view that state consciousness would then be unanalyzable and essential to mental states in some Cartesian way. I do think that the WIV gives a much more powerful response, however, given the more intimate connection between M and HOT. Also, it’s not clear to me that your analogy above holds: the ‘rock’ is supposed to be the analogue of the first-order mental state, not “myself.” But, in any case, I’m willing to concede that both versions of HOT theory can offer good replies to the problem of the rock. Actually, my view is that this problem is interestingly related to the “hard problem” but I won’t get into that here. I offer a HOT-ish solution to it as well.

    Look forward to seeing you and many other OC4 participants in Tucson — Rocco

  37. Hi Rocco, thanks for the response. There is a lot I could say, but the conference is running short on time and we may have to continue this over a beer in Tucson… I had hoped to go to Seattle to comment on this paper at the APA, but sadly can’t make it… Perhaps Adrienne will be in Tucson?

    Before the conference ends I would like to try out an argument that just occurred to me. Suppose that one thought that a state was transitively conscious when being in that state made a creature conscious of something in the environment. To be conscious of something in the environment, let’s say, is to be informationally responsive to the salient aspects of the item in question. Now suppose that I am conscious of blue by having a certain kind of representation that we can call blue*. Blue* represents blue out in the world so that when I am in a blue* state I am (usually) in a relation to some physical blue (let’s call physical blue reflectance profiles just for fun). Now suppose someone said “I am interested in the property that this physical blue has when a creature is transitively conscious of it,” that would be all well and good and is probably a good way to figure out what blue* really is. Everything is fine until one day the creature tokens the blue* state when there is nothing blue in the environment (maybe in an unconscious dream or some such). Now, in this case, that is the case where the creature has the blue* state but there is nothing blue in the environment, will you say that the creature is not conscious of blue? There doesn’t seem to be any reason to say this! True, there is nothing blue out there, so in a success sense of the word the creature is not conscious of the blue, but from the creature’s point of view it is just as if there is blue out there, and the creature will behave in all the relevant ways (given the blue* state’s causal powers, etc). It would certainly be odd for someone to say that in one case the creature was conscious of blue (because there was a certain physical property out there) but in the other case they weren’t (since there wasn’t a certain physical property). But if this is true of transitive consciousness in general then it is true of transitive consciousness in the particular case of higher-order thoughts.

    Miguel, isn’t it weird that knowing which path a photon takes in the double-slit experiment affects the outcome of the experiment? Yeah, it is weird! But as Hume said, nature reserves the right to trump our intuitions!

    Finally, just quickly, thanks David for the kind words!

  38. Richard — If I understand you correctly, I address that type of concern as a potential problem for my WIV version of HOT theory in an earlier post (February 20, 2012 at 00:58; under the book excerpt 4.5.5 – I re-pasted it below). I take it that you are trying to show, by analogy, that if I hold that hallucinations about outer objects/colors still allow for consciousness of a nonexistent outer object/color, then I also ought to say the same about a case where there is a HOT without its target. But, on my view, I do not think that the proper parallel is between, say, a conscious ‘outer-directed’ hallucination case and an unconscious HOT directed at a mental state M. The proper parallel would be between two “conscious of X’s”, thus, between being conscious of a nonexistent object/color and an INTROSPECTIVE state (= conscious HOT) directed at a nonexistent M. I agree with this parallel but this is not the same as the much tighter relationship, on my view, between an unconscious HOT and its target. In other words, I agree that the “success sense” you speak of isn’t present at the conscious HOT, or introspective, level (and the same goes for the outer “conscious of” sense) but it is present between an unconscious HOT and its target, on my view at least. I don’t deny that hallucinatory states can seem the same as the veridical kind and I don’t deny that introspection is fallible etc., but these cases are different than the relationship between an unconscious HOT and its target. Well, anyway — A beer in Tucson it is!……Rocco

    4.5.5 The Infallibility Objection
    Another objection to the WIV (or similar views) is the charge that it entails that knowledge of one’s conscious states is infallible, especially in light of the problem of misrepresentation discussed in section 4.2 (Thomasson 2000, 205–206; Janzen 2008, 96–99). If M and MET cannot really come apart, then doesn’t that imply some sort of objectionable infallibility? This objection once again conflates outer-directed conscious states with allegedly infallible introspective knowledge. In the WIV, it is possible to separate the higher-order (complex) conscious state from its target mental state in cases of introspection (see fig. 4.1 again). This is as it should be and does indeed allow for the possibility of error and misrepresentation. Thus, for example, I may mistakenly consciously think that I am angry when I am “really” jealous. The WIV properly accommodates the anti-Cartesian view that one can be mistaken about what mental state one is in, at least in the sense that when one introspects a mental state, one may be mistaken about what state one is really in. However, this is very different from holding that the relationship between M and MET within an outer-directed CMS is similarly fallible. There is indeed a kind of infallibility between M and MET according to the WIV, but this is not a problem. The impossibility of error in this case is merely within the complex CMS, and not some kind of certainty that holds between one’s CMS and the outer object. When I have a conscious perception of a brown tree, I am indeed certain that I am having that perception, that is, I am in that state of mind. But this is much less controversial and certainly does not imply the problematic claim that I am certain that there really is a brown tree outside of me, as standard cases of hallucination and illusion are meant to show.
If the normal causal sequence to having such a mental state is altered or disturbed, then misrepresentation and error can certainly creep in between my mind and outer reality. However, even in such cases, philosophers rarely, if ever, doubt that I am having the conscious state itself… When one introspects, I take it that virtually everyone agrees there is a “gap” between the introspective state and its target, which also accounts for the widely held view that there is an appearance/reality difference and fallibility at that level. But this is not a problem at all; rather, it is the way that any HOT theorist can accommodate the anti-Cartesian view that introspection is fallible. Just as one can have a hallucinatory conscious state directed at nonexistent objects in the world, one can have a hallucinatory conscious HOT directed at a nonexistent mental state. But even when one hallucinates that there are pink rats on the wall, there is an infallible appearance of pink rats on the wall. The CMS still exists. … As was discussed in section 4.2, confabulated states are best understood as introspective states that either bring about the existence of a conscious state (Hill’s “activation”) or mistake one state for another. Finally, when one is in a confabulatory state, we must remember that there is indeed an indisputable conscious state involved, but here it appears at the higher-order level as a conscious HOT (or MET). Thus, though that conscious MET has no object, one still experiences that state (the MET) as conscious, much as one’s hallucination of pink rats on the wall still involves a conscious, but nonveridical, state. Once again the analogy holds, and there is no problem here for the WIV. There can be targetless conscious HOTs just as there can be nonveridical hallucinatory outer-directed conscious states. It is admirable that Rosenthal so clearly wishes to make room for an appearance/reality distinction with regard to our own mental states.
I agree with the notion that our introspective states are fallible and may misrepresent our “selves” and our mental states. But this distinction applies at the introspective level, not within first-order world-directed conscious states. If there is an inner analogy to an illusory or hallucinatory first-order conscious state directed at an outer object, it must be a conscious state (= introspection) directed at a mental state. But then this is not a case of an appearance/reality difference between an unconscious HOT (or MET) and a mental state M. This is again why we should reject Rosenthal’s endorsement of Levine’s option one for misrepresentation cases. A lone unconscious HOT without its target is not a case of fallible introspection.

  39. Rocco wrote: “It is admirable that Rosenthal so clearly wishes to make room for an appearance/reality distinction with regard to our own mental states. I agree with the notion that our introspective states are fallible and may misrepresent our “selves” and our mental states. But this distinction applies at the introspective level, not within first-order world-directed conscious states. If there is an inner analogy to an illusory or hallucinatory first-order conscious state directed at an outer object, it must be a conscious state (= introspection) directed at a mental state.”

    Is there any argument for this? Is this just “intuition”? The tradition? Is it just that you think that the only way to tell anything about the mental properties of mental states is by the way we’re subjectively aware of them?

    Intuition: Appearance always *seems* veridical; that’s why it’s appearance. Sometimes it is, sometimes not.

    The tradition: That’s been wrong so many times that we should disregard it. But even if it hadn’t been, it’s equivalent to simply appealing to one’s favorite authority.

    If it’s the third possibility–“that the only way to tell anything about the mental properties of mental states is by the way we’re subjectively aware of them”–then that’s what needs arguing. You don’t get it for free. What’s the argument for that?

  40. Thanks for a spirited discussion! Richard, thanks again for organizing a great event. I hope we’ll have a chance to continue our conversations in person in the future. As a final comment, I’ll briefly attempt to address some of the remaining problems for my view.

    My aim in this paper was to argue that the empirical evidence for empty thoughts doesn’t address Block’s worry that the higher-order account of empty thoughts is incoherent. That evidence suggests that empty thoughts are actual; but Block’s worry is that if empty thoughts are actual, then the higher-order view cannot give a coherent account of them. Addressing Block’s worry requires more than establishing the existence of empty thoughts. However, as is clear in David’s response to Block’s paper, the argument from empty thoughts relies on the assumption that a conscious state is an existing state – an assumption that David rejects. Although Richard & Hakwan sometimes speak in their paper as though they think that conscious states are existing states, they, too, could reject this assumption and escape the problem of empty thoughts. If they do reject this assumption, however, then it is puzzling that they frame their empirical argument as a response to Block, since empty thoughts wouldn’t pose a compelling challenge to their view in the first place. The empirical argument is only needed if we accept the problematic assumption that conscious states can’t be non-existent – and I’ve argued that if we make that assumption, their empirical argument fails to meet the challenge.

    The larger issue is whether the arguments in my paper and Block’s show that the higher-order view should be abandoned. As David and Richard have emphasized, the opponent of the higher-order view must provide a non-circular argument for the assumption that a conscious state must be an existing state. In a previous comment, I gave one argument: the higher-order view aims to provide a naturalistic explanation of state consciousness. You can’t provide a naturalistic explanation of something that doesn’t exist. So, you can’t provide a naturalistic explanation of non-existent conscious states. Richard responded by asking why a representational account of a non-existent conscious state isn’t a naturalistic explanation of state consciousness on my view. My worry is that a representationalist account of a non-existent conscious state doesn’t explain state consciousness – rather, it explains it away. When a thought is empty, the representationalist account may explain the subjective appearance of a conscious state. But on such an account, the conscious state cannot cause actions, inferences, or mental events. If we’re interested in finding a place for conscious states in the natural world, a view that explains mere appearance isn’t satisfying.

    In a previous comment, David pointed to the possibility that an empty thought could bring about the very conscious state it represents. I think this is a promising suggestion that could provide a response to my worry. If I understand his suggestion correctly, an existing higher-order state will play a particular causal role in virtue of representing some mental state, whether or not that mental state exists. So although conscious states would technically be epiphenomenal in the case of empty thoughts, they could still have derivative causal influence.

    I’ll take one more shot at an argument for the assumption that conscious states must be existing states. A standard way of defining ‘phenomenally conscious state’ is as follows: a state is phenomenally conscious just if there’s something it’s like for the subject. Given this definition of ‘phenomenally conscious state,’ subjective appearance is both necessary and sufficient for the existence of a phenomenally conscious state. If there’s something it’s like for a subject, then there’s a phenomenally conscious state. And if a subject is in a phenomenally conscious state, then there’s something it’s like for her. Given this definition, it is analytic that a conscious state just is whichever state is necessary and sufficient for subjective appearance. On the higher-order view, an appropriately caused HO state is necessary and sufficient for subjective appearance. So the HO state would be the conscious state, not the first-order state or no state at all. This argument aims to shift the burden onto the proponent of a higher-order theory to provide us with a reason to adopt a different definition of phenomenal state consciousness.

    To sum up: in order to show that the higher-order view is “defunct,” we still need a non-circular argument for the assumption that a conscious state cannot be non-existent. As I pointed out previously, I don’t think this is a problem for the main argument in my paper. That argument is intended to show that the empirical cases don’t address Block’s worry that the HO account of empty thoughts is incoherent. But while it isn’t a problem for the narrow aim of my paper, it is a problem for the broader project of showing that the higher order view should be abandoned.

    Thanks again to everyone who participated in the discussion! If you would like to continue discussion by email, feel free to get in touch with me at adrienne (dot) prettyman (at) utoronto (dot) ca.
