Sensory Awareness and Perceptual Certainty

Presenter: Hakwan Lau, Columbia University

Commentator 1: Ned Block, NYU

Commentator 2: David Rosenthal, The Graduate Center, CUNY

Commentator 3: David Chalmers, ANU & NYU

Originally presented and recorded at the NYU perception workshop October 31st, 2009. Filming & editing by Richard Brown. Music by the New York Consciousness Collective.

40 responses to “Sensory Awareness and Perceptual Certainty”

  1. Hi Hakwan, I have a bunch of questions, for all of the speakers, but I guess I’ll start by asking you something.

    You seem to suggest that we can empirically decide between, say, a first-order biological theory of consciousness like Ned’s and a higher-order theory like yours or David Rosenthal’s. But I am not as sure as you are that this can be done. So, you find that there is activation in prefrontal cortex when subjects are perceptually certain and not when they aren’t. You take this to mean that higher-order areas are implicated in consciousness, whereas Ned takes it to show that people can’t report their conscious experience. What experimental result could adjudicate between you two?

    It seems to me that what we need to do is first settle the philosophical issues and then look at the data… or, to put the point another way: the data only make sense when interpreted theoretically. We need to know what consciousness is first, and then look into the brain to see what does that.

  2. Hey Richard, thanks for getting to the heart of the matter so quickly by asking your question. I suppose one advantage of this kind of online discussion is that one can give detailed replies without having to worry about time limits. But this could become dangerously verbose. So, here’s the upshot –

    UPSHOT: Unfortunately, current empirical results can’t settle the matter once and for all. But hoping that we can fix the issues via philosophy alone is likely to be even more futile, as history suggests. The thing to do is to engage the theorists in empirical research, and get them to make empirical *predictions*. Unlike in chess, of course, we allow them to take back their moves and revise their theories in case their predictions are rejected. But eventually, some theories will need so much revision and patchwork that they will be abandoned, giving way to better theories with higher consistency and better predictive and explanatory power. In this specific case, to account for the new data I brought in, I think Ned (Block) has already narrowed his options for future empirical interpretations (assuming he is to be consistent throughout). Likewise, by interpreting my own data the way I suggested, I also commit to making certain assumptions. So with this gradual narrowing of our degrees of freedom to say whatever we see fit post hoc, eventually I believe our respective models will become clear enough to be distinguishable by future experiments.

    DO SUBJECTIVE JUDGMENTS REFLECT CONSCIOUS PERCEPTION, OR JUST POST-PERCEPTUAL COGNITIVE PROCESSES? This is really the crux of the matter. So Ned’s way of explaining away my Rahnev et al result (that attention can lower conscious perception) is to say that it doesn’t – what attention does is bias some “judgment process” that takes place after conscious perception is determined. Likewise, for the metacontrast results, what I call % conscious seeing is to him just some judgments or reports. These “judgment processes” can of course be reflected in prefrontal cortex, but that shouldn’t hurt his theory.

    I would say, though, that certainty judgments have a long history of being taken to reflect conscious awareness (cf Peirce & Jastrow 1884). If one is theoretically unbiased, why deny that they reflect conscious awareness? After all, one essential hallmark of blindsight is that patients think they’re guessing. In fact, on the first slide of Ned’s reply he hinted that the actual “phenomenality” may depend on a combination of objectively-measured performance and subjectively-measured certainty too.

    But OK, that’s not decisive. Perhaps these really are post-conscious-perception judgment biases after all. But now, if Ned is to write off the Rahnev et al and metacontrast results as irrelevant to consciousness per se, we can note that Ned thinks certainty / confidence judgments reflect post-perceptual cognitive stuff, so in the future he can’t interpret similar results to the benefit of his theory by counting these as reflecting consciousness, should similar results turn out the other way.

    Now, in Rahnev et al (in review, by the way), we also measured detection bias. I.e. it is not just that subjects rate certainty differently. They are also more likely to say “yes, I see the target” under lack of attention. Taking “yes, I see the target” as a measure of awareness is pretty common in visual studies. If one denies that these are relevant to consciousness per se, and says they only reflect post-perceptual cognitive biases, one would have to be consistent in interpreting lots of other studies.
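
    Just to make the logic concrete – and this is purely my illustrative sketch here, not the actual model or numbers from Rahnev et al – here is one simple signal detection story for how such a bias can arise: if inattention makes the internal response noisier while the subject keeps the same raw decision criterion, “yes, I see the target” responses on target-absent trials become more frequent even as sensitivity drops.

    ```python
    # Toy signal detection sketch (illustrative only; the fixed-criterion
    # assumption and all numbers are mine, not from Rahnev et al).
    from scipy.stats import norm

    CRITERION = 1.0    # fixed raw criterion: respond "yes" if response > 1.0
    SIGNAL_MEAN = 1.5  # mean internal response on target-present trials

    for condition, sigma in [("high attention", 1.0), ("low attention", 1.5)]:
        hit = norm.sf(CRITERION, loc=SIGNAL_MEAN, scale=sigma)  # P("yes" | target)
        fa = norm.sf(CRITERION, loc=0.0, scale=sigma)           # P("yes" | no target)
        print(f"{condition}: hit = {hit:.2f}, false alarm = {fa:.2f}, "
              f"d' = {SIGNAL_MEAN / sigma:.2f}")
    # Low attention: false alarms rise (more liberal detection) while d' falls.
    ```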

    On a side note (that one can skip): Carrasco’s results are hugely influential and extremely interesting. But the paradigms differ very much, and it will unfortunately be beyond the scope here to discuss the potential and apparent discrepancies between them and Rahnev et al (in short, when Marisa Carrasco and I talked about these, we thought there’s no real discrepancy, but I’d like to do more experiments to figure out exactly how). Essentially, the Carrasco apparent-contrast paradigm is a 2-choice judgment as to which of two targets has the higher contrast – what people these days call a forced-choice judgment. Contrast perception is interesting in its own right. But attention boosts the signal, and when the signal is high one may see the contrast more clearly. A strong signal activates both the early visual areas and the prefrontal areas more. In my case, though, I focus on matching the performance and thus, hopefully, the basic signal strength. If these are matched, and yet consciousness differs, we need an explanation of how that happens. Ned says that we don’t need an explanation regarding consciousness per se, because the difference is not in conscious perception but probably post-perceptual cognition. I disagree, and to sort out the disagreement we have to sort out whether my results reflect consciousness per se or just some judgment processes. Carrasco’s result doesn’t help us much here.

    ARE FIRST-ORDER THEORISTS COMMITTED TO A DUAL-CHANNEL ACCOUNT? Richard, this really isn’t answering your question per se, but this is another important point in Ned’s reply. (So for those who are not interested in this, you can skip this; I’m taking the following from a manuscript I’m working on, so the tone may sound out of context in some places).

    Basically, my argument in the paper is to force Ned to take a dual-channel account, and then show that the dual channel account is wrong. But my argument isn’t very tight. Ned’s Tolstoy reply is very cute, but here I try to run a more systematic argument as to why he may have to take the dual channel account after all.

    There could perhaps be ways for him or other first-order theorists to adopt a hierarchical account of how one can have performance-matched conditions where levels of conscious perception differ, but these are not plausible. For instance, one could assume that after a reduction of activity in the visual cortex, performance is maintained not because of compensation from a different channel, but because of an increase in the efficiency with which the fronto-parietal mechanism picks up the reduced visual cortex activity. This would imply a hierarchical structure where the first stage (visual cortex) determines conscious experience and the second stage determines task performance, with, importantly, no parallel processing (therefore it is not a dual-channel account). The trouble is, in the performance-matched experiment in blindsight patient GY mentioned earlier, it is hard to imagine that the subject was paying more attention to the blindfield. In fact, fMRI data showed that fronto-parietal activity was higher under normal-field stimulation but not blindfield stimulation. Similar results were found in a related fMRI study described in my talk: higher prefrontal activity was found when subjects had higher, not lower, conscious experience (Lau and Passingham). In general, it is hard to imagine that whenever performance is matched, a reduction in visual activity and conscious experience would be accompanied by an increased efficiency of attention or cognitive access.
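
    To see the structural point in miniature – a toy sketch of my own, not anything from the fMRI studies themselves – here is what performance matching costs each architecture:

    ```python
    # Toy contrast between the two architectures (my own illustrative numbers).
    def hierarchical(visual_activity, readout_gain):
        # Stage 1 (visual cortex) fixes experience; stage 2 (fronto-parietal)
        # reads stage 1 out to drive performance. No parallel channel.
        return visual_activity, readout_gain * visual_activity  # (experience, performance)

    def dual_channel(experience_signal, performance_signal):
        # Parallel channels: experience and performance dissociate freely.
        return experience_signal, performance_signal

    print(hierarchical(1.0, 1.0))  # (1.0, 1.0): normal condition
    print(hierarchical(0.5, 2.0))  # (0.5, 1.0): visual activity halved, so the
                                   # readout gain must double - exactly the
                                   # "increased efficiency" the data speak against
    print(dual_channel(0.5, 1.0))  # (0.5, 1.0): no compensation needed
    ```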

    Similarly, one may think of visibility as being supported specifically by recurrent activity from higher visual areas back to primary visual cortex (V1) (REF). This may look like a hierarchical structure: feedforward activity from V1 to higher visual areas is the first stage of processing, whereas the recurrent feedback is the second stage that determines conscious experience. However, the transcranial magnetic stimulation (TMS) studies that support the recurrent activity hypothesis in the first place suggest that disruption of feedback impairs both conscious experience and task performance (cf Vincent Walsh and colleagues). According to the hierarchical model, disruption of the late-stage processing should selectively impair conscious experience but not task performance – which is what the TMS results mentioned in the talk demonstrate. The difference is that in that study we treat prefrontal activity, not feedback activity to V1, as reflecting late-stage processing. Further, if feedforward activity from V1 to higher visual areas is supposed to reflect the early stage of processing that supports task performance, it is hard to see how blindsight patients with lesioned V1 could have above-chance task performance at all; a lesion to V1 would disrupt both processing stages.

    The discussion above does not undermine the notion that recurrent processing to V1 supports visibility, or visual awareness in the objective sense (i.e. the informational aspect of conscious perception). What I argue against is the plausibility of different stages of recurrent processing within the visual cortex supporting the different stages of processing described by the hierarchical model specified above.

    Due to these considerations, I take it to be plausible to interpret Ned as committed to the dual-channel model, with respect to explaining the performance-matched cases with differences in conscious experience.

    Of course, all this assumes that the performance-matched difference in consciousness is real, rather than just reflecting post-perceptual judgment. That goes back to the last section. But the case of blindsight seems rather uncontroversial here.

    BETTER MACHINES IN THE FUTURE? This is also tangential to the question per se. But elsewhere, it has been suggested that one reason why current empirical results can’t distinguish between the theoretical positions is that our current machines aren’t good enough; we need something as good as single-cell physiology in humans to advance the field. Although I use some of them, I’m not a big fan of big machines. They’re good, but as you can see, a lot of the work that needs to be done lies in careful experimental design and interpretation of behavioral paradigms, rather than in using fancier machines.

    SHAMELESS PLUG: Ned and I will continue this debate in ASSC in Toronto, together with Stan Dehaene and Imogen Dickie (http://www.theassc.org/conferences/assc_14). ASSC is a great conference and Toronto is a great city! Check it out. They’re still accepting abstracts.

    SENTIMENTAL MUMBLING: So, fair enough, Richard, the jury is still out at the moment. But that’s not too bad. It means we still have interesting work to do! To me personally, as a scientist who is interested in the conceptual issues, formulating the questions such that they start to look like they can be addressed by (future) experiments (e.g. distinguishing between the dual-channel and hierarchical models) is very rewarding in itself. The choice really isn’t about doing science or philosophy first. We can do both, and we will be better for it. In an optimistic mood (and after some good sake! I love being able to do this at home on my couch…), I think together we’re moving forward.

  3. Hakwan: Great talk! I know that these do not get to the heart of your work, but nonetheless, I have two worries.

    One is the association of access with general attention, especially for Dehaene. Dehaene does say that attention is necessary for working memory and thus access, but not that they are the same thing. One reason for this is that only top-down attention is associated with the prefrontal areas. Thus, those who say that access is not necessary for consciousness are going to be distinct from those who say that attention is not necessary for consciousness.

    The second is the match between your findings with Rahnev et al. and phenomenological overflow. That is, you found that subjects report more confidence than their performance reflects, but this only shows that it is possible to have inattentional filling-in, as you put it. But if there is this filling-in, it sounds like an illusory experience, and not at all what people like Ned Block want to show. As I understand it, Block wants to show that we can experience more of the world than we can report/access, not that we can have an illusory experience of more than we can report/access. I take it that the former is what is really at stake. Furthermore, it doesn’t seem as though the experiment (as presented) distinguishes between judgment and perception. That is, do the subjects really experience more, or do they just judge more (as Block suggests)? Finally, from your presentation of the experiment, it sounds like the distinction between inattention and attention is really one between low attention and high attention.

    I apologize for not meeting your theory head-on. I need a bit more time to digest the difference between d’ and perceptual certainty. Nonetheless, I would love to hear what you think about these side issues.

  4. Reply to Rosenthal

    (This answers one of Carolyn’s questions too, and also addresses one of Dave Chalmers’s comments; see the last 2 paragraphs.)

    About David Rosenthal’s comments: our disagreements are few, and I suggest that our different accounts of phenomenological overflow are actually not mutually exclusive. The difference is in whether we have first-order (unconscious) representations of those unreportable things. I think sometimes we do, sometimes we don’t.

    My account is that in the case of unreportability due to inattention (e.g. Simons’ gorilla, all the notes in a symphony, all the identities of Sperling’s letters), people only *think* they have representations of the details. This is because the higher-order perceptual certainty judgments are inflated (due to inattention, as empirically supported by Rahnev et al), so they sort of “experience” more than they have the information for. (In the case of the gorilla, I suggest that some subjects don’t really have the information / first-order representation to tell whether there is a gorilla or not – the “correct” phenomenology should have been one of blurriness in the background, or something like that – but because of certainty inflation they hold that they vividly see the background and the lack of a gorilla.)

    Your account (which upon reflection I find very plausible) is that they have the first-order representation of the gorilla or the notes or the letters, but the lack of higher-order representation means that we’re not conscious of them. Sometimes, as in the case of Sperling, we have the higher-order thought that “there are 12 letters” (conscious), but the identities of the letters are missing (we’re not conscious of what the letters are).

    I suppose both of our accounts can be true, and the relative extents to which they are true may differ from subject to subject, and from case to case. Remember that even in Simons’ gorilla experiments, only a portion of subjects don’t consciously see the gorilla. Of these, it’s likely that some of them fail to represent the gorilla at all. Some of them may have unconscious first-order representation of some moving object. Some may unconsciously represent the gorilla, but not in full details, etc.

    This is helpful, because in debating about these we often have to appeal to our own phenomenology. Sometimes this is confusing. For Sperling, I myself never feel that I was ever conscious of the identities of all 12 letters – but people like Ned claim that they do. So maybe we have different (first order) representations, and also different levels of certainty inflation for them.

    What is important is, given the plausibility of both accounts, we can give a higher-order-theory-based explanation of why people like Ned *think* they consciously see all 12 letters. He may have some representation of the 12 letters (in some unknown degree of detail), and because he couldn’t have attended to all of them, some of them give inflated phenomenology.

    This brings me back to Carolyn’s question: no, this is not what Ned wants (i.e. you’re correct). Ned wants there to be real phenomenology for the identity of all 12 letters. If this is true, it may mean trouble for higher-order theories (phenomenology would go with high-capacity first-order representations). But I’m denying him that. I say it’s a kind of an illusion that he sees so much. If he behaves like my subjects in Rahnev et al, he’s bound to have such an illusion. This illusory account is compatible with higher-order theories.

    Dave Chalmers also suggested that he believes phenomenology goes with high-capacity first-order representations. Presumably this is also due to introspection on the phenomenology (it seems rich). Likewise, I’m saying that such impressions are often inflated, because of the results of Rahnev et al.

  5. Reply to Carolyn: on using the term “attention” precisely

    You’re right, I’ve been loose in using the term “attention” (which is pretty loose in itself). In Rahnev et al we certainly mean high attention vs low attention (I still don’t know how to experimentally induce complete inattention without doing something terrible to the subject!).

    As to mapping attention onto access: yes, that certainly isn’t fair. Attention isn’t just prefrontal/parietal. Even for top-down attention alone, there is feedback to first-order representations in visual cortex too. But I’m using this shorthand way of talking to give a simple picture, as Ned does in his account too.

    The fact that attention may have feedback effects on first-order representations may actually complicate the story. I.e. access and phenomenology are not really two distinct things. It’s a bit like the fridge light problem – whenever you apply access, you change the very thing you want accessed. But this is more of a problem for Ned, so I’ll sidestep it for now.

  6. Another worry:
    I noticed that you claim that d’ and perceptual certainty can sometimes be distinguished by comparing discrimination results to detection results. However, the neural response to detection and discrimination tasks is entirely different (hence the Mexican hat phenomenon). Thus, finding a difference between detection and discrimination is not necessarily going to give you a subjective difference, as I see it.

  7. The most straightforward case of a dissociation between performance and certainty is probably blindsight. Blindsight patients can perform discrimination tasks above chance, yet they claim they’re totally guessing (zero certainty).

    It is true they also have problems in detection (i.e. yes/no) tasks too (cf Paul Azzopardi). But how does that relate to Mexican hats and your question? I don’t get it yet.

  8. Reply to Dave Chalmers (and some of Ned’s comments too)

    HOW HIGH AM I?

    Dave raised some really important issues, and helped me to think more about exactly what kind of higher-order model I’m talking about.

    Let me first say I’m not totally confident about the details of my higher-order model. The experiments that need to be done to confirm these details are not done yet (see below). But let me sketch out what I think anyway, as it may help to clarify to what extent my theory is really higher-order.

    MY HIGHER-ORDER VIEW

    First of all – I never said I am a higher-order THOUGHT theorist, because I don’t know whether the higher-order representations / processes should count as thought / thinking. Unlike David Rosenthal, I also think the higher-order stuff probably has some functions apart from just making us conscious, though I don’t know yet exactly what functions those are.

    What I really have in mind may be close to what Dave calls an introspective confidence kind of model. E.g. there is a first order representation of redness. It comes with a certain representation strength or quality. Then a higher-order representation gives an *explicit* probabilistic interpretation, and gives a content like: “THIS representation I have there is 95% likely to be for real”.

    I emphasize the “THIS” here because I don’t think the higher-order system duplicates the content of phenomenology; it refers to the first-order representation in question by some kind of pointing / labeling. It determines which first-order representations contribute to our experience and how vividly we experience them, etc.
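
    Purely as a toy illustration of this pointing / labeling idea – my own construction, not a formal model from the talk – the structure would be something like:

    ```python
    # Toy sketch (my construction, not a formal model from the talk): the
    # higher-order state carries no copy of the first-order content, just a
    # pointer plus an explicit reliability estimate.
    from dataclasses import dataclass

    @dataclass
    class FirstOrderRep:
        content: str      # e.g. "red patch, upper left"
        strength: float   # quality of the sensory signal

    @dataclass
    class HigherOrderRep:
        target: FirstOrderRep  # points at a first-order state; no duplicated content
        p_real: float          # explicit estimate: "THIS is 95% likely to be real"

    def experience(ho_states, threshold=0.5):
        # A first-order state contributes to experience only when endorsed by
        # a higher-order state; p_real sets the vividness. A near-zero p_real
        # ("THIS is completely unlikely to be real") leaves it unconscious.
        return [(ho.target.content, ho.p_real)
                for ho in ho_states if ho.p_real >= threshold]

    red = FirstOrderRep("red patch, upper left", strength=0.8)
    print(experience([HigherOrderRep(red, p_real=0.95)]))  # conscious, vivid
    print(experience([HigherOrderRep(red, p_real=0.01)]))  # empty: unconscious
    ```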

    So in a way, my view is kind of close to Uriah Kriegel’s – that the phenomenology is determined jointly by the higher-order prefrontal (and parietal) activity as well as the first-order representations in sensory areas.

    Now then, Uriah calls his theory same-order. So to what extent am I higher-order?

    I find this kind of debate rather tedious and not necessarily useful. What to call a theory is not so important to me. What is important is what differences in empirical predictions I make. I am higher-order in the sense that I’d say –

    – a change in higher-order representation (in the prefrontal and/or parietal “monitoring” areas) is sufficient for there to be a change in phenomenology
    – without the right *kind* of higher-order representation you would never be conscious, however strong / detailed your first-order representations (in the sensory areas) are. (On my view, you can have a higher-order representation that explicitly says “THIS representation is completely unlikely to be for real” and you will be unconscious of the relevant thing – so it is not that any higher-order representation would lead to consciousness.)

    I think these are claims shared by me, David Rosenthal, and Uriah Kriegel. Ned Block would probably reject them (especially the second one).

    Amongst us, David Rosenthal and I probably disagree on:
    – can we read out the content of phenomenology from higher-order representations alone? (I’d say no, he may say yes)

    But this last question hasn’t been uncontroversially answered yet. If David turns out to be right, I’d happily change my mind.

    WHY CAN’T WE DO ALL THESE WITH FIRST-ORDER REPRESENTATIONS ALONE?

    Ned also raises this question. Why can’t we say the first-order sensory representations carry certainty values?

    There are really two “why” questions here. One is why conceptually we need higher-order representation. I.e. why, in principle, we can’t do all these by first-order representations.

    To this conceptual question, I don’t exactly know. Maybe there are some a priori reasons, but probably not. But I’m concerned with an empirical account of how it actually works in the brain. Precisely, our brains.

    So I’m just saying that empirically, to explicitly represent perceptual certainty, the brain needs higher-order representations. This empirical claim turns out to be true, I think. Some may say the representational quality of the first-order stuff determines certainty. But that would mean we can’t have a dissociation of certainty and performance. Fact is, we do. And when we do find such dissociations, we find differences in activity in prefrontal cortex.

    Is this good enough? I concede that I’m not giving an a priori conceptual account of how consciousness works. I don’t know why, a priori, conceptually, a photodiode isn’t conscious either (maybe they even are!). So in this sense, I’m not addressing the hard problem. But one can only do so much at a time. I think working out an empirical account to distinguish between the conscious and the unconscious in our own brains is challenging enough, and hopefully it will eventually inform our conceptual thinking.

    PS – by the way, the distinction between higher-order and first-order representations in the brain is not just about which brain region they are in. First-order representations directly track stimulus properties; higher-order representations don’t directly do so – they track the properties of first-order representations. The two turn out to be mostly in sensory areas and fronto-parietal areas, respectively.

  9. Hi Hakwan, thanks for the detailed response.

    I agree with much of what you say here (though I disagree with the slight to the history of philosophy…I think we have actually made pretty good progress on the issue of consciousness since, say, Descartes). It is certainly a give and take and we definitely need multi-unit recording technology that is non-invasive before we can really start making significant empirical tests of theories of consciousness; but let me try to restate the point.

    David (Rosenthal) thinks that it is something like analytically true (he wouldn’t put it this way but we can slum a bit) that if one is in no way conscious of being in some mental state then that mental state is not a conscious mental state (this is basically the converse of the Transitivity Principle). He is very clear about this in his response to you. It follows from this that there cannot be conscious sensory qualities that one is not conscious of having. This is, of course, the very thing that people like Ned deny. Each offers experimental evidence for their point. Ned points to things like the Sperling results; David explains them in terms of his theory (we are conscious of all the letters, just not in respect of which letters they are, which matches the phenomenology; no overflow (though there is first-order overflow)). You point to your findings with Rahnev et al and metacontrast; Ned explains these in terms of his theory (judgments/access affected, not phenomenology; still overflow).

    Now you are right that experimental findings will constrain the possible responses and also help to determine the interpretation of other experimental results; in fact each claims that their theory does this best… but at bottom the argument against the kind of position that Ned holds has got to be non-scientific… how could there be something that it is like for the subject when the subject denies that there is anything that it is like for them?

    Another way to make the point; what we need here is not an experiment which shows that there is phenomenology that I am not conscious of (what experiment could show that?), what we need is an account of what it would mean to have phenomenology that I am not conscious of.

    Notice too that even according to your argument what is decisive against the Global Workspace theories is a philosophical objection – too zombie – not a scientific one.

  10. Richard, I got a sense that we agree on the basics – that it’s best to have both science AND philosophy – but we probably disagree on some details. I suppose that makes sense, because the general debate over “which discipline is better” usually ends up being rather silly and discipline-centric. So let’s go to the details:

    “though I disagree with the slight to the history of philosophy…I think we have actually made pretty good progress on the issue of consciousness since, say, Descartes”

    Well, we still have some very clever and influential people out there who are dualists, and it’s not like we have decisively falsified them. OK, they have become the minority, for better or worse, and their version of dualism may be more sophisticated than Descartes’s, but do you think science has meanwhile not influenced such thinking? Even Descartes consulted the science of his day.

    “we definitely need multi-unit recording technology that is non-invasive before we can really start making significant empirical tests of theories of consciousness”

    I’m really not so sure about the “definitely” here. Probably my point was lost. We arbitrate between theories by testing their predictions. Right now, we don’t really have major theories that make predictions at that level of detail. So strictly speaking, if you gave me multi-unit recording, I wouldn’t know what theory to test. One could explore – just stick the electrodes in and see what happens. But exploratory science only goes so far. You need to make hypotheses and test them. Of course, once you have the machines you may start to come up with cool hypotheses, perhaps. But it is a “perhaps” really. We don’t really know. My feeling is that right now the difficulty we face isn’t that the machines aren’t good enough. We (e.g. Ned and I) disagree on some pretty basic things, such as how to interpret behavioral data. So I don’t know how multi-unit recording would help here.

    “but at bottom the argument against the kind of position that Ned holds has got to be non-scientific”

    Maybe, maybe not. Depends on what you mean by “at bottom”. There are some conceptual points that you can attack. These are likely to take a while, if ever, to settle, but I agree that occasionally we do make progress this way.

    But we don’t have to focus on attacking the conceptual ideas alone. These ideas may also have empirical consequences. Sometimes they may be very indirect, but it doesn’t mean they’re not relevant. And Ned often backs up his arguments with empirical data. If those data turn out to be wrong, his arguments would be weakened.

    We don’t falsify a theory in one day – with science or philosophy or both. A theory often gets abandoned gradually, because people grow dissatisfied with it as more and more flaws are exposed. Seems to me that’s what usually happens in any field.

    But I hear you: it is a tricky thing to test for phenomenology when one claims that it may not correlate with report or any measurable behavioral signs. But Ned doesn’t think phenomenology is always unreportable. Usually, under normal conditions, it is reportable. In some experiments he cites, we inferred the phenomenology based on report. So if we set up experiments under similar conditions, and use the same standards to infer the subjects’ phenomenology, Ned would have to accept that those reflect phenomenology. Now if we show that such inferred phenomenology doesn’t correlate with high-capacity early sensory processing, Ned’s position would be weakened. In that case, for him to backtrack and interpret the data as not reflecting phenomenology would make him look weak, if we apply the same standard to infer phenomenology as was done in the studies he cites in his support.

    Now then, we haven’t got such results yet. Maybe we never will. But I don’t see any reason why in principle it could never happen.

    “Notice too that even according to your argument what is decisive against the Global Workspace theories is a philosophical objection – too zombie – not a scientific one.”

    Elsewhere I have given empirical arguments against GWT too. In this talk I didn’t focus on the GWT in general, but for one of its incarnations – the dual route GWT a la Dehaene – I think my argument against it is largely empirical.

    But in any case. I suppose sometimes a good conceptual argument is as good as an empirical one. Best is to have both.

    PS – see my reply to Rosenthal below – there, I think, are some convincing empirical claims that can distinguish between my view (and Uriah’s) and David’s. In that case it’s less tricky, because the three of us have more similar standards for what counts as reflecting phenomenology.

  11. My question here relates to the suggestion in one of the slides that perceptual certainty can be indicated by detection. However, as the difference between detection and discrimination can show up neurally even in the absence of a subjective difference, detection does not seem as though it could properly represent perceptual certainty. The Mexican hat is a combination of detection, which requires a wide responsiveness of the neuron to stimuli, and discrimination, which requires a narrow responsiveness.

  12. I see. When I say detection I’m referring to detection bias. I.e. given the same d’ on detection (or signal quality in your brain), you can choose to say yes liberally or conservatively. A classic way to manipulate detection bias is by changing payoffs (if I give you a reward of $100 every time you get a hit, and give you an electric shock every time you get a miss, you would say “yes” very liberally, I suppose).

    But I consider it “subjective” *if* nothing else changes and people just spontaneously say yes more frequently. In fact, in blindsight, patients say “no, there isn’t a thing” very often – and we commonly interpret that as an impairment of subjective awareness.

    When I say dissociate, I mean something more specific than comparing a detection task and a discrimination task. What I mean, and what I did in the metacontrast experiment, is this: you create 2 conditions, match them for performance (e.g. discrimination d’), and if under such matching you get a difference in detection bias / confidence ratings, you have dissociated the subjective from the objective.

    Then what you do is compare these performance-matched conditions. The difference (say, in the brain) would reflect a difference in the subjective (i.e. certainty) but not the objective aspects of perception (i.e. performance).
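
    In case a concrete example helps, here is a minimal simulation of that logic – illustrative numbers of my own, not the metacontrast data: two conditions with identical discrimination d’ can still differ in how often the observer says “yes, I see it”.

    ```python
    # Minimal sketch of performance matching (illustrative numbers only,
    # not the metacontrast data): identical objective d', different bias.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 100_000
    D_PRIME = 1.0  # same underlying signal quality in both conditions

    for condition, criterion in [("condition A", 0.2), ("condition B", 0.8)]:
        absent = rng.normal(0.0, 1.0, n)       # internal response, target absent
        present = rng.normal(D_PRIME, 1.0, n)  # internal response, target present
        hit, fa = (present > criterion).mean(), (absent > criterion).mean()
        d_measured = norm.ppf(hit) - norm.ppf(fa)  # objective: matched at ~1.0
        yes_rate = (hit + fa) / 2                  # subjective report: differs
        print(f"{condition}: d' = {d_measured:.2f}, P('yes') = {yes_rate:.2f}")
    ```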

    Does this help to clarify, or am I just repeating myself and missing your point?

  13. This has been very interesting. I’ve enjoyed reading it. Let me add a few thoughts and comments.

    Hakwan mentions my doubts about whether our higher-order awareness, whatever form it may take, has any utility in our psychological functioning. I don’t have any proof that it doesn’t. My overarching concern is that any information that’s useful will likely be carried by first-order states, so that the higher-order states won’t add anything to psychological utility. Maybe that’s wrong, but I’m certainly heartened that Hakwan doesn’t have any particular utility in mind that higher-order awareness contributes.

    Richard correctly notes that I would not want to describe as analytic the principle that no mental state of which we’re not aware counts as conscious, but he makes it seem a bit as though that’s just Quinean religion on my part. It’s not. It’s that I’m convinced that the principle is better seen as a folk-psychological principle, not as conceptual or as a matter of meaning.

    Let me stress–as I did in reactions to another discussion in Richard’s magnificent online conference–that ‘conscious’ as applied to mental states was extremely rare until the late 19th century. (OED; I gather much the same for other European languages and Japanese.) So Quinean scruples aside, it’s highly unlikely that there are robust facts about conceptual or meaning connections for such a recent word, whose use as applied to mental states still isn’t that common. But folk psychology changes and adapts more with the times and does, I think, have something useful to say here.

    Hakwan raises the question, derived from a worry of Ned’s, as to why we need higher-order states at all. “I.e. why, in principle, we can’t do all these by first-order representations.” And Hakwan suggests that this may be a request for a conceptual or a priori answer. I would appeal to the last couple of paragraphs to give instead a folk-psychological answer. Folk psychology has it, I argue, that states don’t intuitively count as conscious if we’re in no way at all aware of them. So some higher-order awareness is needed for the conscious cases.

    I think Hakwan and I are more in agreement than he suspects about first-order states in the gorilla, Sperling, and symphony cases. Of course inattention, even in ordinary life, limits the number of first-order states; as Hakwan says, “people only *think* they have representations of the details.”

    But I myself see no reason to doubt in the Sperling cases that subjects have first-order states that capture the identities of all the alphanumeric characters. Are there cases where we in effect “fill in” at the higher-order level? I see no reason to doubt that. We need to look case by case.

    Hakwan writes: “For Sperling, I myself never feel that I was ever conscious of the identities of all 12 letters – but people like Ned claim that they do. So maybe we have different (first order) representations, and also different levels of certainty inflation for them.” I bet that that could be resolved by talking more in detail about what’s involved in being conscious of the identities of all 12. (One might have a conviction: I consciously experienced them all as alphanumeric characters; how could I not have been aware of all their identities? But that’s inferential, and convictions like that have to be factored out.)

    Hakwan notes correctly that “Ned wants there to be real phenomenology for the identity of all 12 letters.” But there’s an issue here. It could be that Ned simply wants qualitative character for all the characters, and his theoretical vocabulary doesn’t countenance qualitative character that isn’t conscious, and hence that doesn’t constitute phenomenology.

    On my view, we can have qualitative character that *isn’t* in any way at all conscious. That’s presumably all we need to explain the Sperling results. We have qualitative character that represents the specific identities of all the letters, but that qualitative character is conscious in a less fine-grained way – which represents their presumed identity as being alphanumeric characters, but not their specific identities.

    Hakwan suggests that he and I “probably disagree on: – can we read out the content of phenomenology from higher-order representations alone?” Hakwan’s view, he suggests, “is kind of close to Uriah Kriegel’s – that the phenomenology is determined jointly by the higher-order prefrontal (and parietal) activity as well as the first-order representations in sensory areas.”

    Hakwan also says that it’s not a deep conviction on his part, but something he’s inclined to hold. Still, that’s a real disagreement I have with Hakwan and Uriah; so let me say a few things.

    One thing we could agree on is this. Suppose there’s a difference in first-order qualitative character. Is that likely to influence conscious phenomenology? Of course.

    But we need to distinguish two possibilities. One is that the first-order difference influences conscious phenomenology *because*–and *only* because–it influences what’s going on at a higher-order level. That’s of course fine by me.

    The other possibility is that the first-order difference influences conscious phenomenology *entirely independently* of everything that’s going on at a higher-order level. That’s not fine by me.

    Do Hakwan or Uriah think that we have some reason to think the second thing happens? Some way to preclude its always being the first?

    Perhaps Uriah does. His theory unifies the higher-order and first-order mental properties into a single state, so that changes in either result in changes in that state. But without some specific story, borne out by evidence, that changes in the state’s first-order properties result in changes in conscious phenomenology *despite* no changes in the state’s higher-order properties, it doesn’t really engage the issue.

    So I propose the following: Let’s assume that only higher-order properties have a bearing on conscious phenomenology until we have a reason to think otherwise–a reason borne out by evidence.

    And there’s reason to think that that will not be forthcoming. If there’s no conscious phenomenology without higher-order awareness, as Hakwan acknowledges, there’s reason to think that conscious phenomenology is due to that higher-order awareness. So what kind of finding might show that conscious phenomenology is due to higher-order awareness, but that the higher-order awareness doesn’t determine the character of that phenomenology?

    One final thing: Is the relevant higher-order awareness a matter of thoughts–HOTs? I care less about that issue than about those discussed above. But if it is a kind of higher-order awareness, it’s plainly not, I think, any kind of sensing or perceiving. So what’s left?

  14. Hi David,

    I agree with almost everything you say. But let me comment on the following two issues of which I’m not sure.

    IS PHENOMENOLOGY TOTALLY DETERMINED BY HIGHER ORDER STUFF ALONE?

    As I noted, this one I really ain’t sure about. My initial motivation, which I had quite a while ago, was due to the mismatch problem (as raised by Karen Neander’s Division of Phenomenal Labor paper). So I thought it would be nice if there’s no duplication of any kind. But meanwhile I also see that you seem to handle the challenge quite well with your version.

    So now my reasoning behind it is largely empirical. I just don’t know if PFC has the capacity to represent so much. If you’re doing a task (say discriminating / reporting on a stimulus), then yes, I guess you can find something relevant in PFC / parietal. But even then, I’m not totally sure whether we would be reading out just the response or the content of perception per se.

    Then there are the fMRI studies by Tse et al (PNAS) and Kouider et al (Cerebral Cortex) which showed a lack of PFC activations to visible stuff (compared to invisible stuff) when one is not required to do anything about the visible stuff.

    In Tse et al, visible stuff is a flicker of the pattern of the whole background. Hard to imagine that subjects didn’t see it, even though we never asked during the experiment.

    Now then, however, the invisible condition is not nothingness. It is just a static pattern. So instead of saying the flicker is visible in one condition, but invisible in the other, one can just say these are 2 different conscious percepts (flickering vs static patterns). One is perhaps more salient (the flickering one), but this may not necessarily mean more activity in PFC because subjects were told to ignore them both.

    Still, PFC needs to register the difference though, on most higher order accounts.

    It is possible that if we used a multivoxel pattern classifier to see whether PFC activity can distinguish between the two percepts, we would find something.
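
    Schematically, the analysis would look something like the following – a purely hypothetical sketch with simulated numbers standing in for the (so far nonexistent) PFC data:

    ```python
    # Hypothetical MVPA sketch (simulated stand-in data; no such PFC dataset
    # exists yet): can a classifier separate two percepts from voxel patterns
    # even when the overall activity level does not differ between them?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 120, 50
    labels = np.repeat([0, 1], n_trials // 2)  # 0 = static percept, 1 = flicker

    # Flicker trials carry a zero-sum spatial pattern, so the two percepts
    # differ in pattern but not in mean activity level.
    pattern = rng.normal(0.0, 1.0, n_voxels)
    pattern -= pattern.mean()
    X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
    X[labels == 1] += pattern

    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    print(f"decoding accuracy: {acc.mean():.2f}")  # well above 0.5 chance
    ```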

    But this is not done yet, and given the lack of difference in level of activity between the two, as a scientist I feel I’d be going out on a limb to make the strong prediction that you can always distinguish two conscious percepts in the PFC. We just don’t have any positive evidence yet (in cases where the different percepts are not relevant to the tasks / immediate responses).

    These are considerations that make me feel that, even if all changes in phenomenology are registered somehow in PFC, the difference is likely to be subtle. The PFC probably doesn’t code the details of all conscious percepts. It may just be a subtle change in labeling (“now THAT rather than THIS first-order representation is for real”).

    So, in other words, I’m chicken. It is still possible that you’re right and I’m wrong (which would mean people like Ned and Lamme etc very wrong). I would be happy in that case. But now I’m just hedging my bet a little.

    SHOULD WE COUNT HIGHER ORDER REPRESENTATIONS AS THOUGHTS?

    This one I really am not so sure about, and it isn’t so important for the empirical work I do.

    But if it has to be either thought or percept, I’d go with you to say it’s a thought.

    But neuro people talk about other kinds of representations. We say certain neurons represent motor commands, probability distributions, decision variables etc.

    So I want to remain neutral and just call them representations.

    Or sometimes, I get more vague and just say the higher order “system” / “process”, because I don’t even know if these are represented explicitly in PFC. I mean, some relevant higher-order info is there. But how it is coded, I don’t know yet.

    Other people have trouble accepting that it’s a thought because they think it implies that it is a conscious, intellectual kind of thing. I see it doesn’t have to be. But since calling it a thought doesn’t seem to buy me anything extra either, I try to remain neutral.

    In other words, it’s not that I think it is NOT a thought. I just don’t know what it is, and I haven’t been forced to give an answer yet. 🙂

  15. That does clear up my worry, thank you.

  16. Hakwan–

    First: Wonderful talk–and great commentaries as well. Kudos to Richard for an AMAZING job!

    I wonder about your empirical worries about HO representations, mentioned in your response to David Rosenthal, above. You wonder whether the PFC has enough representational capacity to account for all that is in conscious perception. This seems to endorse something like the “thick phenomenology” claim mentioned (and endorsed) by Dave Chalmers. Ned Block clearly endorses something like this as well. But I worry about the prospects of empirically settling this issue. If phenomenology is thin (meaning there is less represented in conscious experience than we intuitively think), then it’s more plausible that the PFC can do whatever needs to be done at any given moment in conscious experience. If it’s thick, the PFC looks less plausible.

    But how might we settle the prior question of thickness? I worry that interpreting ANY brain study will require a prior answer to the thickness question. So then it reverts to arguments from phenomenology (personal reflections on the Sperling task, for example) or conceptual arguments about the best way to “pre-theoretically” characterize consciousness (here one might appeal to folk-psychological principles, as David R. does, or to intuitions about possibility and necessity, as Dave C. does).

    In particular, I wonder how we might empirically differentiate between Block’s claim that the letters are all phenomenally conscious in the Sperling case though they are not fully accessible, and Rosenthal’s claim that we are conscious of the letters (and so our perception of them is conscious), but only in some respects and not others, i.e. as alpha-numeric shapes but not as particular letters.

    In any event, I think that a thin-phenomenology view makes things more plausible for HO views, and it’s not clear that this is a straightforward empirical question. So why not go HOT?

    Final quick question: couldn’t occipital activation just be HO awareness? Perhaps this indicates that it’s not thoughts but something less “conceptual.” Still, it might well be HO and in principle independent of the first-order representations about the environment.

    Thanks!

  17. Hey Josh,

    Thanks for the questions.

    KEY ISSUE = RICHNESS OF PHENOMENOLOGY

    You hit it spot on there. If phenomenology is richer than the capacity of PFC, it’s likely to be (at least partially) determined by posterior sensory activity. I think this is the case.

    BUT – I also think phenomenology is apparently so rich that it is unlikely to be determined by posterior sensory activity alone either. There is stuff in V1 that is detailed enough, but we ruled out its likely contribution to phenomenology years ago. So people like Lamme and Ned claim that the posterior representation of phenomenology is instantiated by recurrent loops from extrastriate cortex back to V1. My impression is that such loops don’t have very high capacities either (intuition: resolution goes down as you go feedforward, as receptive field size increases, and once resolution goes down it can’t go back up again, for the same reason you can’t un-smooth a bitmap image. So feedback to V1 is likely to play non-specific modulatory roles; cf Macknik, or Crick & Koch’s “no strong loop hypothesis”).
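
    The un-smoothing point is easy to see numerically – a toy example of my own, not from the talk: once fine detail has been averaged away, the mapping is many-to-one, so no feedback signal can recover it.

    ```python
    # Toy illustration of "you can't un-smooth a bitmap": averaging (larger
    # receptive fields) maps different fine patterns to the same coarse one.
    import numpy as np

    def smooth(fine):
        return fine.reshape(-1, 2).mean(axis=1)  # coarse cells average pairs

    fine_a = np.array([1.0, 0.0, 1.0, 0.0])  # high-resolution pattern A
    fine_b = np.array([0.5, 0.5, 0.5, 0.5])  # a different pattern B
    print(smooth(fine_a))  # [0.5 0.5]
    print(smooth(fine_b))  # [0.5 0.5] - identical: the detail is gone for good
    ```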

    Now Ned thinks he sometimes sees the identities of all 12 letters in Sperling (or at least in some places he hints at that impression). I just don’t think the recurrent loops are gonna do the job. In fact, in one recent paper by Sligte et al from Lamme’s group (in J. Neuroscience) – where they were clearly trying to find recurrent-loop-like stuff to support Sperling effects – they failed. They found V4 activity instead.

    So including PFC may help here. PFC interprets sensory signals in the back. Under lack of attention it inflates the phenomenology (the Rahnev et al stuff I presented). So phenomenology seems richer than it is. (More precisely, you get richer phenomenology than the info in posterior cortex contains.)

    On this view, phenomenology is jointly determined by both the early sensory stuff and PFC.

    RICHNESS OF PHENOMENOLOGY CAN’T BE SORTED OUT EMPIRICALLY

    This I disagree with. I agree that neuroscience can only do so much to sort this out. But we have psychophysics. The Rahnev et al stuff I presented, for instance, contributes to the debate about phenomenology. No, it doesn’t settle it once and for all. But nothing does.

    Point taken: studying phenomenology is tricky – for philosophers or scientists or whoever. But I’d trust the self reports of a group of subjects in a carefully conducted psychophysics experiment any time over a single philosopher’s musing!

    Alright, maybe I shouldn’t put it this way. I take it back. Let’s say we should carefully consider both…

    CAN HIGHER ORDER STUFF BE IN OCCIPITAL ITSELF?

    Yeah in replying to Dave (Chalmers) I briefly considered something related. I suppose in principle it could. But so far I’ve been getting PFC stuff in my studies. So I’m hopeful that this mapping of PFC = higher order, posterior sensory stuff = first order, is roughly correct.

  18. PS – just to add, there is this recent study (ref below; good old psychophysics again) that addresses quite directly the phenomenology in Sperling. Ned thinks it doesn’t work. Perhaps it isn’t decisive. But it does narrow our options for how we could interpret the phenomenology in Sperling. Taking this together with the Rahnev et al stuff I presented in the talk, I think it’s pretty clear that we have inflated phenomenology… (i.e. we think we see more than we have information for)

    Perceptual illusions in brief visual presentations.
    de Gardelle V, Sackur J, Kouider S.
    Conscious Cogn. 2009 Sep;18(3):569-77. Epub 2009 Apr 14.

  19. Hi Hakwan,

    Very many thanks for your thoughtful comments on what I’d written.

    One thought about our potential difference about whether PFC can represent enough to be responsible for all the phenomenology: That’s of course a pivotal issue. But any assessment of that must take into consideration that the phenomenology is doubtless a very great deal less detailed than it seems–there’s a lot of “bluff” about detailed phenomenology. That fits with PFC’s not being up to the task of capturing everything on the first-order level–but it goes farther. Parafoveal visual phenomenology at any particular moment is very indistinct, though it can seem, if one isn’t careful, to be nearly as detailed as foveal visual phenomenology.

    About whether the higher-order awareness is a matter of having thoughts about the mental-state targets: You agree that if it’s either sensations or thoughts, it’s got to be thoughts, but offer as alternatives to both those possibilities cases in which “neurons represent motor commands, probability distributions, decision variables etc.”

    But the reason for thinking that there’s a higher-order awareness at all is folk psychological, not neural. It’s that no mental state of which an individual is not in any way at all aware counts as a conscious state. So there must be some sort of *awareness* of any state that counts as conscious. And it’s questionable, I think, whether your neural alternatives would provide that.

    You write: “Other people have trouble accepting that it’s a thought because they think it implies that it is a conscious, intellectual kind of thing. I see it doesn’t have to be. But since calling it a thought doesn’t seem to buy me anything extra either, I try to remain neutral.”

    I think that neutrality is fine. As you note, HOTs plainly aren’t in every way the kinds of thoughts folk psychology is used to; they’re theoretical posits. But given the need for some sort of higher-order awareness, I guess I don’t yet see a tenable alternative.

  20. Hi David,

    Sorry it took forever for me to reply. These days I am only allowed to do philosophy over the weekends. 🙂

    You wrote: “But any assessment of that must take into consideration that the phenomenology is doubtless a very great deal less detailed than it seems–there’s a lot of “bluff” about detailed phenomenology.”

    Yes, in fact my Rahnev et al results partially argue for the existence of such bluffing. But I fear, even taking into account the bluffing, PFC content alone is still not detailed enough to determine the content of real, non-bluffed phenomenology. It really is just a fear though. We don’t know the answer to that yet.

    (If Richard is following this conversation, he will be happy to point out that multi-electrode neural recording would indeed be useful here. Then again, its being useful does not mean everything else is not.)

    You wrote: “But the reason for thinking that there’s a higher-order awareness at all is folk psychological, not neural.”

    That might be another of our differences here. I am intrigued by the folk-psychological arguments too, and am not against them. But my real motivation for doing higher-order stuff is empirical. Neurons are noisy, and to see what, and especially how well, they represent, we need a certain “interpretative” mechanism. And as it turns out, awareness seems to depend on such mechanisms in PFC. That’s really why I am sold. Given that, you may see why I don’t want to say exactly what such PFC mechanisms reflect yet.

    But perhaps this is reason for both of us to be happy – that we come to such similar views by such different reasoning.

    H

  21. Hi Hakwan,

    I think that there’s very little disagreement between us–just around the edges. And so far as converging on things from different directions, that delights me.

    (One minor, perhaps merely terminological quibble: I think it’s all empirical – not just neuropsychology, but psychology generally, including folk psychology. 😉)

    But let me ask about PFC and the phenomenology that doesn’t result from bluff: I would have thought we don’t yet have a very good measure either of how much PFC can do or of how much phenomenology there is independent of bluff. Is there anything beyond Rahnev et al?

    The fact that we have a sense that parafoveal phenomenology is rich, when it’s pretty weak, should give us pause.

    All the best,

    David

  22. Reply to John Campbell

    Hi John, in the video version of your reply to Carolyn Suchy-Dicey’s paper, you mentioned my explanation of blindsight in passing. Thanks for that. I’ll try to address your concerns here.

    You said that in blindsight one issue is that the subject does not “know” what is presented to the blindfield, even though the subject can guess what’s there. One difference between that and normal vision is of course that the conscious experience is lacking. But I also suggest that another way to characterize the difference is that subjective perceptual certainty is very low in blindsight, whereas in normal conscious vision it is high.

    You said that you don’t see how a mere change in certainty is going to reflect / explain the difference in knowledge. You gave an example: a schoolboy sitting in an exam may know the answers to the questions without being very sure. So certainty and knowledge are not the same.

    I know next to nothing about epistemology, so I can only speak from the common sense of a non-native English speaker. But my feeling is that knowledge does have something to do with certainty. I mean, the schoolboy can be not totally sure. But if he is totally unsure, i.e. he is just guessing (as in blindsight), then even if he gets it right, it seems to make sense to say that he doesn’t really know the answer (he just got it right by luck). So some degree of certainty seems necessary.

    And conversely, if he feels 100% certain of an answer, and he's right, it seems that there's no sense in saying that he doesn't "know" the answer.

    So it seems to me the connection between certainty and knowledge is quite strong. But before I go on, it would be useful to know if I'm just totally missing your point.

    Thanks for your comment again.

    H

  23. To David:

    Sadly we don't know exactly how much is "bluff", or what I call "fake" phenomenology (I consider it phenomenology too, only the details are not as rich as subjects think). But from Rahnev et al and also de Gardelle et al, it's quite obvious that such inflation happens when we don't attend. For attended things at the fovea, I think it's safe to say that the phenomenology is real. My worry is that even for such safe phenomenology, PFC alone can't do the job.

    I've heard of some fMRI results suggesting that multivoxel pattern classification fails to reveal perceptual content in PFC (at the fovea, under full attention), though content in working memory can be revealed using the same method. I haven't been able to see the data myself and they are not published yet. But this goes along quite well with the intuition I have from reading other physiological studies of PFC too.

    Then again, these are negative findings. PFC has an amazing ability to show effects of all sorts of things. So the jury is still out.

  24. Hi Hakwan,

    There's another variable in trying to determine whether PFC can handle even, say, undisputed attentive, foveal phenomenology. How do you test how rich the phenomenology there is? Psychological functioning won't do, because that could instead work off information registered in visual cortex. And I think subjects are notoriously not great at giving credible accounts of how rich their phenomenology is – even attentive, foveal phenomenology.

  25. To David:

    Indeed. It would be a tricky issue – a theme that has been raised by Richard too. Can we ever get there? I'm hopeful. I think more work on designing / thinking about psychophysical paradigms would be important. To my mind this is still almost in its infancy. Most studies now just take forced-choice performance as an index of phenomenology. I hope this will change some day, and we'll converge on some better methods.

    What about something like this? We ask subjects to distinguish between A & B. If they can, and they also SAY they do so by what they consciously see (rather than just guessing), we should expect PFC content to distinguish between A & B – if you're right. In fact, if you're right, we should find it even when we present the target in such a way that subjects merely *would have been able* to distinguish between A & B and say they do so by what they consciously see. I.e. we should rule out the possibility that PFC just reflects the response.
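    (To make this concrete, here is a toy sketch of the kind of decoding analysis I have in mind – Python with scikit-learn, and everything from the voxel counts to the simulated data is just an illustrative assumption, not a worked-out design. On this HO-friendly toy model, the A-vs-B signal is present in the "PFC" patterns only on consciously seen trials.)

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 200, 50

        labels = rng.integers(0, 2, n_trials)  # 0 = stimulus A, 1 = stimulus B
        seen = rng.random(n_trials) < 0.5      # trials where subjects say they consciously see

        # Simulated "PFC" voxel patterns: noise everywhere, plus an A-vs-B
        # signal added only on the consciously seen trials.
        patterns = rng.normal(scale=2.0, size=(n_trials, n_voxels))
        signal_dir = rng.normal(size=n_voxels)
        patterns[seen] += 3 * np.outer(labels[seen] - 0.5, signal_dir)

        clf = LogisticRegression(max_iter=1000)
        for name, mask in [("seen", seen), ("unseen", ~seen)]:
            acc = cross_val_score(clf, patterns[mask], labels[mask], cv=5).mean()
            print(f"decoding A vs B from PFC patterns, {name} trials: {acc:.2f}")

    If you're right, the seen-trial decoding should beat the unseen-trial decoding even with task performance matched. Of course a null result here would run into the fineness-of-grain worries about BOLD again.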

  26. Hakwan :
    To David:
    What about something like this? We ask subjects to distinguish between A & B. If they can, and they also SAY they do so by what they consciously see (rather than just guessing), we should expect PFC content to distinguish between A & B – if you're right. In fact, if you're right, we should find it even when we present the target in such a way that subjects merely *would have been able* to distinguish between A & B and say they do so by what they consciously see. I.e. we should rule out the possibility that PFC just reflects the response.

    I agree entirely, and that’s a good way to test whether PFC is implicated in mental states’ being conscious–and hence presumably whether higher-order intentional content is as well.

    But distinguishing A and B won’t even approach testing how rich conscious, foveal, attentive phenomenology is. And there’s a question about using a distinguishing task to do so, since attention could shift as one increases the number of things subjects would distinguish, and as it shifts from here to there, we won’t be able to tell whether rich phenomenology occurs only in connection with what’s attended.

    I’m not, by the way, concerned that attention is needed for mental states’ being conscious; I’m completely convinced that that’s not so. Just that attention will have an effect on how rich conscious phenomenology is.

  27. Hi guys, very rich and interesting discussion in here!

    I don't see why you think that this is a good way to test whether PFC is implicated in a mental state's being conscious. Why wouldn't Ned just say what he says in the comments: the subjects are accessing their phenomenal consciousness for reporting, and it seems reasonable to say that the difference in the states accessed will show up in the content of PFC. Likewise PFC will be absent in the second case, where you present it subliminally (if I am understanding the second part of the experiment, which I might not be).

  28. Richard Brown :
    Hi guys, very rich and interesting discussion in here!
    I don't see why you think that this is a good way to test whether PFC is implicated in a mental state's being conscious. Why wouldn't Ned just say what he says in the comments: the subjects are accessing their phenomenal consciousness for reporting, and it seems reasonable to say that the difference in the states accessed will show up in the content of PFC. Likewise PFC will be absent in the second case, where you present it subliminally (if I am understanding the second part of the experiment, which I might not be).

    Hi Richard,

    I had in mind something much weaker: If PFC doesn’t figure when we independently see that mental states are conscious, that’s bad news for higher-order intentional content as figuring in mental states’ being conscious.

    And, yes, I agree that that’s what Ned would say. But then it turns on whether one can make good sense of qualitative states’ being conscious without one’s being aware of oneself as being in those states. If not, as I have argued, then it’s not clear what we can make of Ned’s insistence that the subjectively unmediated access he and I agree PFC would be responsible for is something distinct from the qualitative states’ being conscious.

    Phenomenally, but not access conscious? I don’t know what that means if it’s a way states are sometimes conscious but is altogether independent of one’s being in any way at all aware of oneself as being in the states.

  29. Thanks for the response David.

    Re the PFC: Ah, I see. I thought that Hakwan was suggesting, and that you endorsed, this experiment as a possible way to empirically test HOT theory.

    Re unconscious phenomenality: I completely agree with you, as no doubt you know 🙂 That's exactly the point I was trying to make in my opening question to Hakwan. Experimental data doesn't seem to be the kind of stuff we need in order to make sense of phenomenality without access. This is a debate which will rest on the kind of (folk-)psychological cum philosophical arguments like the ones above, not on an experiment.

  30. Hi Richard,

    Well, what happens in PFC *is* a bit of a test for higher-order theories. If nothing happens when a state is conscious, I think that undermines any higher-order theory.

    As for making sense of phenomenality without access, let me elaborate a bit.

    I think it's easy enough to make sense of that – but not in the way I take it Ned would like. Phenomenality without access is simply nonconscious qualitative character – qualitative character in excess of what is captured by awareness.

    My worry comes only in regarding phenomenality as a case of a mental state’s being conscious–in the commonsense way in which we can contrast mental states’ being conscious or not conscious. (I don’t of course care much about how to use the word ‘conscious’.)

    So if phenomenal consciousness, so-called, doesn’t mean that, as I assume it can’t, I’m also fine with phenomenality’s overflowing access. That would then just mean that there’s more to qualitative character than consciousness ever (almost ever) captures, and I’m sure that that’s so.

    But if what it means for phenomenality to overflow access is that, then one in principle *cannot* discover it introspectively – by one's introspective sense of, e.g., what happens in Sperling cases. Introspective impressions are irrelevant to nonconscious qualitative character.

  31. Hi David, thanks for these further thoughts, with which I wholly agree.

    I would just add that my worry focuses on our notion that a phenomenally conscious state is one that there is something that it is like for the subject to be in. I don't think we can make any sense out of the idea of a state that is phenomenally conscious in this sense – so there is something that it is like for me to be in that state – but which I am not in any way conscious of myself as being in. Further, what kind of scientific evidence could convince us that there were such cases?

  32. Hi Hakwan, There have been a couple of philosophical papers recently, by Declan Smith and by Johannes Roessler, trying to explain why it is that the blindsight subject does not have perceptual knowledge of what's in the blind field whereas ordinary perception does give knowledge of what's going on. Something I like very much about your approach is that it does seem to bear on this question of why consciousness might do some work in giving us knowledge. So my comments are just directed at how far we could get along this line.
    At #22 above, talking about the nervous schoolboy who despite his uncertainty does know the answers, you say:
    "my feeling is that knowledge does have something to do with certainty. I mean, the schoolboy can be not totally sure. But if he is totally not sure, i.e. he is just guessing (as in blindsight), even if he got it right, it seems to make sense to say that he doesn't really know the answer (he just got it right by luck). So some degree of certainty seems necessary.
    And conversely, if he feels 100% certain of an answer, and he's right, it seems that there's no sense in saying that he doesn't "know" the answer.
    So it seems to me the connection between certainty and knowledge is quite strong."
    It seems to me you can be subjectively very certain and still not know. In fact a blindsight subject might, idiosyncratically, be very arrogant in his guessing, and we wouldn't say, well, that means he knows. On the other hand someone might watch his son receiving a prize and be totally incredulous that this is really happening – so really not certain this is actually going on – but still he knows perfectly well what is going on.
    I think you have everything to play for here, as these arrogances and uncertainties are relatively high-level compared to the perceptual certainty or uncertainty you are talking about. However, I do wonder whether the basic point will change when we shift down a level. Whether you have experience of the object really seems to affect whether you have knowledge of it, and I wonder whether an approach in terms of subjective perceptual certainty will explain this, though the possibility that it might is one reason your approach is so interesting.

  33. Richard Brown :
    I would just add that my worry focuses on our notion that a phenomenally conscious state is one that there is something that it is like for the subject to be in. I don't think we can make any sense out of the idea of a state that is phenomenally conscious in this sense – so there is something that it is like for me to be in that state – but which I am not in any way conscious of myself as being in. Further, what kind of scientific evidence could convince us that there were such cases?

    I agree with all that, Richard, but would add only that Ned has sometimes, I think, pressed a sense of 'what it's like' (perhaps not 'what it's like for one') in which one has no access to what it's like. (Hence not "for one.") The 'what it's like' rubric has been used in ways that make it hard, I think, to pin down, and hence hard to rely on.

  34. Hakwan, David, and John. Trying to get clear on who stands where on some things:
    1.) Hakwan notes that "a change in higher-order representation is sufficient for there to be a change in phenomenology." John writes it as "a wider range of aspects of those perceptions can be conscious" depending on "exactly which higher order thought you have" (Reference and Consciousness 133). I wonder if there is a trade-off of aspects, whether there can be many aspects conscious at once. Does focusing on any one aspect take away a sense of all of them, of the object as a whole? Are higher order thoughts not constrained by the same sorts of resources as occurrent thought (I don't know how many thoughts we can have at once or if it will increase through brain evolution etc.)? I wouldn't say that I am thinking everything that I am now seeing, now feeling. I may say "I am thinking about the painting," but I am thinking of the painting as an object and not in its entire detailed variety (ignoring here a different set of questions about being open to an aesthetic experience, to really experience art – as opposed to critiquing it, judging it). I wonder whether the fineness of grain, the richness which we are after in this explanation, will come through multiplying concepts, with extra ways of describing. In addition to the worry about how many thoughts we can have at once, there is also a concern about capturing experience in words or concepts: whether the abstractions from our experience lose the unique feel, the very richness of the moment they are claimed to cause, by the process of generalization. David remarks on the multiple drafts model in "Content, Interpretation, and Consciousness," and more on that soon…as well as on John's demand of consciousness: that our experience of a perceived object be what explains/provides knowledge of reference of a demonstrative. I think there are interesting questions about whether we need conceptual knowledge or possession conditions already to fix the reference, to point to an object; whether we find out about what the object is after we name it, find out its kind. Right now, I am just trying to question whether we need many sentences and concepts to describe any one moment of conscious experience and whether at any moment we could consciously consider many propositions (I'm going to put specious present remarks in the temporality discussion).
    2.) Hakwan claims “without the right *kind* of higher-order representation you would never be conscious.” This seems like a strong claim and I wonder if all agree. There is nothing else that could make you conscious? Representations of seeming lead to a dimension of subjectivity? Higher order representations are responsible for the phenomenal feel as like? I wonder about whether it is the representing that causes the consciousness, whether they are the same thing, whether you could have either without the other. I agree that conceptual resources can explain the richness of conscious experience, that changes in the higher order representing can affect the feeling. I still wonder, though, about whether they cause the feeling and whether only they cause it. Might there be information processes, computation, functions, etc. that go along with a feeling but not cause it? I wonder whether we can isolate the effects of the higher order representing as opposed to function: do the function and the representing collapse into the same thing? Are there higher order representings that don’t feel like anything (and vice versa)? Is it that whatever causes this feeling some other way would not be called consciousness (would it be said to cause consciousness if it did not represent)? Is the function of consciousness tied to the function of higher order representing? Hakwan mentions that “the higher-order stuff probably has some functions apart from just making us conscious,” and I wonder how we can separate the function of consciousness from the function of the higher-order stuff.
    3.) On certainty and knowledge. I wonder if anyone has opinions about the connection of certainty and content. Is the certainty part of the content, part of the belief? Or, are we certain about something, affirming a certain proposition? Can we have a firm opinion but not about anything, a belief without content (this sounds strange, but I have a related worry about whether it is possible to have an experience of nothingness, of pure consciousness)? Russell notes (in the context of discussing James and whether nitrous oxide heightens the sense of belief) that someone "may sweat with conviction, and he be all the time utterly unable to say what he is convinced of. It would seem that, in such cases, the feeling of belief exists unattached, without its usual relation to a content believed, just as the feeling of familiarity may sometimes occur without being related to any definite familiar object. The feeling of belief, when it occurs in this separated heightened form, generally leads us to look for a content to which to attach it." (in The Analysis of Mind, Lecture XII)

  35. Maxwell Bertolero

    I have a few points
    1. There has been some discussion concerning PFC and whether we can empirically conclude, from BOLD differences across conditions in the PFC while visual areas remain the same, that work has been done there to distinguish between A & B. This would be ideal data for HOT; however, we can't make the inference that the PFC is doing the work just because the BOLD levels are different across conditions. I find it peculiar that we think that it will be as simple as one part of the brain working on first order representations. This brain most likely evolved in a rather ad hoc manner, and something this simple might not be the most probable. There's probably a process more involved than this. Moreover, having different BOLD levels across conditions is not the only result that would provide evidence that the PFC is doing the work. fMRI and BOLD measures might not be able to detect a difference this small. In sum, both our conception of how the HOT might work here and the experimental methods employed to study it seem oversimplified and need to be worked on some more before we go throwing people in scanners.

    2. When we are discussing the function of consciousness, one obvious function, at least to me, is working memory. Philosophers seem to forget this aspect of consciousness. We do work in there, not just represent things. However, to do work, we have to have representations to do work on. The obvious question, then, is whether we can do working memory tasks without being conscious of the representations. Moreover, there are two aspects of working memory tasks: maintenance and manipulation. For example, when you try to remember a telephone number for 10 seconds before you dial, that's all maintenance. When you are given a list of words and asked to repeat them in alphabetical order, that requires maintenance plus manipulation. There's no such thing as only manipulation. It seems that it would be possible to do a simple working memory task, like the Sternberg item recognition task, on people with blindsight or TMS subjects, where you present 1-9 words and then probe them with a word either from the list or not from the list, and it's forced choice. One can probably do this without a conscious representation of the words. They might even get normal working memory levels without chunking (i.e., 4). However, if the task requires manipulation, it seems impossible for them to do this. Thus, we need consciousness to manipulate the things we're conscious of. Being able to do this is quite adaptive, and one can speculate about how consciousness and working memory capacity would evolve together over time. (I sketch the two trial types in code below, after point 3.)

    3. For the overflow problem, I don't see the problem. The right explanation is that we consciously perceive the whole visual field, but some things are attended to more than others. For example, imagine that you are looking downtown from the Empire State Building. There are millions of things to look at, and we consciously perceive them all but don't attend to very many of them. Moreover, attention, and therefore consciousness, is diffuse in nature; it's not an on/off switch.
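    Here is the sketch promised in point 2 of the two working-memory trial types – plain Python with a made-up word list; a real experiment would of course handle presentation, masking, timing, and responses:

        import random

        WORDS = ["apple", "brick", "cloud", "drum", "eagle",
                 "fern", "grape", "house", "ivy"]

        def sternberg_trial(set_size=5):
            """Maintenance only: hold a list, then judge whether a probe was in it."""
            memory_set = random.sample(WORDS, set_size)
            probe = random.choice(WORDS)
            return memory_set, probe, probe in memory_set

        def manipulation_trial(set_size=5):
            """Maintenance plus manipulation: report the held list in alphabetical order."""
            memory_set = random.sample(WORDS, set_size)
            return memory_set, sorted(memory_set)

        items, probe, answer = sternberg_trial()
        print(f"hold {items}; probe '{probe}' -> in list? {answer}")

        items, answer = manipulation_trial()
        print(f"hold {items}; alphabetized -> {answer}")

    My bet is that blindsight or TMS subjects could do something like the first task above chance without conscious representations of the words, but not the second.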
    That’s it. My tone might sound a little pompous. I tend to come off that way when writing in situations like this instead of discussing them, so sorry ahead of time.

  36. Maxwell Bertolero

    Also, I forgot to mention something about metacognition. There is a dissociation between certainty and accuracy. For example, in the Metcalfe lab, they had people do a Raven's matrices task, and then had them judge their accuracy. Basically, the people who scored the highest were not that sure of themselves, and the people who scored the lowest were pretty sure of themselves. This is only one task and one study, but it still poses a problem.
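    Just to be concrete about what such a dissociation looks like in the data, here is a toy computation in Python, with simulated numbers following the pattern I described (not the Metcalfe lab's actual data):

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(2)
        n_subjects = 30

        accuracy = rng.uniform(0.3, 0.95, n_subjects)  # per-subject task performance
        # Simulated confidence in the pattern described: low scorers are the sure ones.
        confidence = 0.9 - 0.5 * accuracy + rng.normal(0, 0.05, n_subjects)

        r, p = pearsonr(accuracy, confidence)
        print(f"accuracy-confidence correlation across subjects: r = {r:.2f}, p = {p:.3f}")

    A negative r across subjects is the signature of the dissociation; certainty tracking accuracy would instead show up as a strongly positive r.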

  37. David Rosenthal :

    Richard Brown :
    I would just add that my worry focuses on our notion that a phenomenally conscious state is one that there is something that it is like for the subject to be in. I don't think we can make any sense out of the idea of a state that is phenomenally conscious in this sense – so there is something that it is like for me to be in that state – but which I am not in any way conscious of myself as being in. Further, what kind of scientific evidence could convince us that there were such cases?

    I agree with all that, Richard, but would add only that Ned has sometimes, I think, pressed a sense of 'what it's like' (perhaps not 'what it's like for one') in which one has no access to what it's like. (Hence not "for one.") The 'what it's like' rubric has been used in ways that make it hard, I think, to pin down, and hence hard to rely on.

    I think at that point we have reached a purely terminological disagreement about how to use the words "qualitative", "phenomenal", etc. (I think you point out much the same thing in "Explaining Consciousness" and "How Many Concepts of Consciousness?"). But putting that aside, I think there is a fairly obvious and harmless sense of 'what it's like for one' that is useful for picking out a neutral explanandum and which is part of our folk psychology.

  38. I want to mention some objections from the literature and suggest a possible higher order thought defense.
    1.) Carruthers notes (in Phenomenal Consciousness 239): "why should an analog, but non-conscious, perceptual representation suddenly acquire the subjectivity distinctive of phenomenal consciousness merely because it causes a higher-order belief about itself?" He questions the connection between non-inferential knowledge of an experience and there being something it is like to undergo it, and asks about the explanation for differences between distinct phenomenally conscious states. The non-inferential knowledge doesn't seem key to there suddenly being something it is like; the targeting by refined concepts, though, does arguably lead to enriched experience and further conceptualizing about it (and so on). I take it that a HOT theorist doesn't need the what-it's-like to be a result of not-inferring. Whitehead (Modes of Thought VI) suggests a way it could arise from targeting-by-concepts: "The growth of consciousness is the uprise of abstractions. It is the growth of emphasis. The totality is characterized by a selection from its details. That selection claims attention, enjoyment, action, and purpose, all relative to itself. This concentration evokes an energy of self–realization." There is a trade-off to this awareness if it taxes us too much, if it drains too many cognitive resources. Yet, is there any reason to think that higher order thoughts take any extra resources above and beyond the 1st order ones? Is there any new information involved or just new ways to think about it, different aspects to highlight? If there is a function to the 2nd order thoughts, then there must be some work done, some cost. Carruthers says (220) that "it is easy to see a function for HOTs…would enable a creature to negotiate the is-seems distinction, perhaps learning not to trust its own experiences…would enable a creature to reflect on, and to alter, its own beliefs and patterns of reasoning" but notes the concern mentioned above: how to isolate a function of the targeting states (in virtue of which conscious) from a function of consciousness itself, whether the targeting functions independently of consciousness or if they are the same thing, whether this was a selected function or a by-product of another. Despite the benefits of an occasional alteration of reasoning through reflection, he thinks that (an actualist) HOT involves "a huge number of beliefs which would have to be caused by any given phenomenally conscious experience…There can be an immense amount of which we can be consciously aware at any one time…I would need to have a distinct activated higher-order belief for each distinct aspect of my experience–either that, or just a few such beliefs with immensely complex contents…think of the amount of cognitive space that these beliefs would take up" (221). It's not clear how cognitive space is affected by (what Carruthers notes is mostly) non-conscious activity; perhaps the opposite is more likely: a kind of mental clearing, a discharge, an expression. The HOT is not draining in this sense even if it is a release.
    2.) Campbell thinks that a HOT account (R and C 133) "presupposes knowledge of the reference of a demonstrative," that a HOT "must use a perceptual demonstrative referring to the object," that "having a thought to the effect that one is perceiving that very object presumes that you understand the demonstrative used." He wants the experience of an object to provide knowledge of the reference of a demonstrative, and I want to question whether this is a fair demand. I applaud the effort to bring these two (things/processes/natural kinds or events–exactly what reference and consciousness are is up for debate) into the same conversation, but worry that saddling a theory of consciousness with explaining demonstratives is going to complicate an already dusty desert. Campbell notes Wittgenstein's worry that there is only the pattern of use and not control of it by knowledge: "there is no such thing as knowledge of reference which controls the pattern of use, and to which the pattern of use is responsible…the pattern of use now seems arbitrary, since it is no longer thought of as controlled by knowledge of reference" (4). Yet, he thinks we need to refocus on the common sense picture–to not lose sight of the fact that you use a word a certain way because you know what it means; that you know what it stands for, and this guides the use. Regardless, the mere question should make us wonder whether there is (if at all) one way to know the reference of a demonstrative and, therefore, doubt the necessary connection between knowledge of reference and experience of objects or consciousness. Kripke does note that, in the case of sensed phenomena, the way the reference is fixed is important, but also notes that a blind person could fix the reference in different ways. I want to suggest that a HOT theory may actually shed light on this area but should not be judged by how well lit it leaves the issues. It accounts for how I may be conscious enough of an object to baptize it, to name it, to fix a sample to investigate. Science will lead to better knowledge of the characteristics, better than the original set used to fix the reference. Our experience of the object will also change; we will be aware of aspects and details previously ignored. HOT theory tells a similar story about concept acquisition and the enrichment of experience; so it might be fruitful to bear in mind but should not be ruled out by an attachment to an ideal of demonstrative reference. Additionally, I'm not sure that the higher order thought "must use a perceptual demonstrative" (133) and couldn't be more than "that man there" (4). Our conscious experience may be "who's that man there?" but different HOTs may be at work in virtue of which we have the experience. I'm suggesting that the fit between the HOT and the 1st order state does not have to be perfect (in the case where you have an experience of an object and gradually learn more about it). Perhaps the 1st order state is the same and the HOT gets gradually more fine-grained (but see multiple drafts/pandemonium model here). I don't however think, as I have heard objected, that there can be a 1st order state of blue and a HOT about it as red. If there is such a case, then I'm suggesting that it would not show up in (or cause) consciousness; that it is the fit between the two states, the sense of getting/expressing it, of the targeting by the right concepts, that gives it the feeling…this is consciousness?

  39. Replies to Richard Brown, David Rosenthal, John Campbell, Andy Snyder, and Max Bertolero

    I have been terrible at replying in a timely fashion. Sorry about that. Let me address a few specific points here as well as make some general comments on what this discussion has taught me.

    – Continuing the discussion @ ASSC in Toronto

    I'm sure my reply won't end the many disputes here, though it may be my last word for now because the conference is coming to a close. However, at ASSC in Toronto, there will be a symposium on these issues again (speakers = yours truly, Ned Block, Imogen Dickie, and Stanislas Dehaene, major proponent of the global workspace theory and the dual channel model). It's gonna be exciting! Today is the last day ASSC is accepting abstracts:

    http://www.theassc.org/conferences/assc_14/abstract_submission

    – Fineness of grain of BOLD (to Max, and Richard to some extent) –

    Yes you're right, current fMRI studies may not be able to distinguish subtle differences in activity within an area. So it could well be that the HO stuff is in PFC but a change in HO may not be reflected in BOLD in PFC. Too bad, but then that's science. We have to deal with the ambiguity of null results (is there nothing, or did we just fail to see something?). But a positive result is easier to interpret.

    fMRI isn’t nearly as good as invasive physiology, but it isn’t too bad either. I expect there will be more multivoxel stuff on PFC in perceptual tasks soon, which may tell us a bit more about what PFC represents.

    The picture I'm giving, i.e. PFC = HO, posterior sensory = FO, is definitely oversimplified. But then we try to theorize with what we have at the moment. There's a place for rough summaries and generalizations to get at the big picture, while acknowledging that the details aren't certain yet. I tend to think this process of theorizing and experimentation is a give and take – we can't refuse to theorize for now, wait for 10 years of science, and then theorize based on better data. Science doesn't really work like that in general. Future theories will almost certainly replace current ones, because there will be progress. But that doesn't mean current theoretical ideas are useless. They can guide us as to what experiments to do next.

    – Consciousness necessary for working memory? (to Max) –

    You suggest this may be the case. I think it looks plausible too. But are you sure that there can't be unconscious WM? How do you know? We need to be careful about this. Unconscious signals tend to be weak. So maybe it's just because of their weak strength that they fail to enter WM, not because of unconsciousness per se. I.e. if I could give you a strong but unconscious visual input, it might get into your WM or even guide WM functions.

    People used to assume cognitive control requires consciousness too. But in the last few years we've been finding evidence to the contrary. So in general I'd be cautious about saying what functions definitely require consciousness, until I see some hard data that deals with the issue of signal strength.
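    To put the signal strength point in signal detection terms, here is a toy simulation (Python/NumPy, made-up parameters): with a fixed criterion for reporting "seen", weakening the signal drags down subjective reports and the usable signal together, so a WM failure in the weak condition can't be pinned on unconsciousness per se.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        criterion = 1.0  # internal evidence required before reporting "seen"

        for label, strength in [("strong signal", 2.0), ("weak signal", 0.5)]:
            evidence = rng.normal(loc=strength, size=n)  # evidence on target-present trials
            seen = evidence > criterion    # subjective report of seeing
            usable = evidence > 0          # crude proxy for signal usable downstream, e.g. by WM
            print(f"{label}: reported seen = {seen.mean():.2f}, "
                  f"usable downstream = {usable.mean():.2f}")

    The comparison we want is one where the downstream measure is matched across conditions but the subjective reports differ – only then can a difference in WM function be attributed to consciousness itself.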

    – On denying the overflow problem (to Max, and to some extent David and Richard) –

    Yes you can deny there's an overflow problem, like David, or Michael Tye (to some extent). But from what you say, you're precisely acknowledging there IS an overflow problem, i.e. the content of consciousness is more than what you can report. Maybe you don't want to call it a "problem". But there is overflow, in your view.

    So your take is a lot like Ned's. Which is fine (not by David's lights, but by mine). My point, in the talk at least, is that we HO people can easily grant the existence of apparent overflow, and give a reply too (based on the idea of fake phenomenology, supported by psychophysics). I.e. you don't really consciously perceive all the details. You only think you do. Because when you don't pay attention (say, to the periphery), you inflate the phenomenology.

    Unlike David, I don’t want to commit to saying that Ned’s or your (i.e. Max’s) intuitive take is wrong. The details are tricky and it may depend on case by case parameters. My point is just that there’s an alternative way to look at it which is compatible with HO, and is backed by data. To me that’s good enough.

    – On dissociation of certainty and knowledge (to Max) –

    Thanks for bringing up another example of certainty dissociating from performance. That is the kind of thing that would support my version of HO. You said it's a "problem" – I suppose you mean for my opponents?

    – Empirical predictions of my version of HO theory and theoretical/intuitive misgivings (to Andy; to some extent this also relates to my earlier reply to Dave Chalmers and Ned Block)

    Yes you’re right, my HO theory makes these claims:
    1) a change in HO stuff is all that is needed to change phenomenology, i.e. you can keep FO constant and have a change in phenomenology
    2) without the right HO stuff, FO stuff will not be conscious
    3) subjective perceptual certainty is explicitly determined in HO but not in FO

    These are empirical claims. They may turn out to be false. They are listed in descending order of my level of confidence. If 3 is wrong, my version of HO will die, but some other HO may be ok. 2 is actually pretty critical, and Ned certainly disagrees. I don't think it's a done deal for us HO folks yet. So we'll have to see. As for 1, I'm fairly confident. Ned et al can try to explain away our results, i.e. if I manipulate PFC, and hence HO stuff, I get a change in reported phenomenology (already shown in my TMS study, and to some extent Rahnev et al). Ned can say I'm only changing the report. We'll have to fight this out a bit more, but I feel we HO folks are getting there.

    Apart from that, of course there are theoretical / pre-theoretic intuitive worries as to whether these claims will turn out to be true. While I acknowledge these, I think the way to go is to get some data and see how things turn out. So it's not that I want to brush aside your concerns. But I'm a scientist, and using empirical data to support theories has been the main point of the talk.

    – Connection of conscious perception to epistemology (to John, and to some extent Andy) –

    John, thanks for raising this interesting connection. I must confess I haven't thought enough about this. And thanks for hinting at how I could reply to you before I do – yes, I certainly take it that although general confidence can dissociate from knowledge a lot, subjective perceptual certainty (i.e. your immediate certainty judgment of the sensory information itself) may turn out to map onto perceptual knowledge pretty well. It's not worked out at all, and I do appreciate some of your concerns. But as a first try, it seems to be as good as anything else we have in science for now. Looking forward to fleshing out more details of this interesting topic in Toronto (ASSC), where Imogen will comment on my work and Stan Dehaene's.

    – Empirical resolution of theoretical disputes (to David and Richard, and everybody who cares about the science of consciousness at all!)

    Richard and David are both right – it's tricky business. The most central dispute, to summarize, takes this form: when I get a change in reported phenomenology that supports my theory, my opponents can say it's just a cognitive / reporting bias that doesn't really reflect a change in phenomenology.

    This kind of fuzziness of interpretation may mean that there can never be a single killer experiment. But eventually, by taking in a lot of fuzzy results, we do average out the noise. You may not trust a single subjective report, but if the same reports pop up all the time, in different conditions, under circumstances where you can't think why they would be biased, you would have to accept them at face value eventually. That's why philosophers cite empirical papers too. Because what else is there to reason from? Let's not confuse introspective reports or intuitions about reports with "conceptual analysis". You can't get a "feel" from an "is"!

    I have tried to provide this kind of data, knowing that it may be fuzzy to interpret, as is the nature of studies of consciousness. But I hope the studies I've presented are more relevant than some previous studies, which mainly focused on performance capacity. I hope my attempts haven't been entirely futile. Well, at least it's been fun.

  40. Thanks Hakwan for the lucid and cogent presentation.

    I had a question about the relationship between inattention and certainty. Studies have shown that people are more confident ("certain") in their decisions when they are forced to make these decisions without the ability to retract than when they have the ability to change their decision. It seems that in the inattention experiments people may be more certain of their decisions because they are forced to commit: they are not able to actually look at the stimulus and hence must give their best guess. Could that influence their certainty?

    Also, a minor point: I'm curious as to how exposure and feedback influence certainty. It would make sense that blindsight patients who are given positive feedback on their correct responses may both perform better and be more certain.
