Chair: Richard Brown
Presenter: David Rosenthal, The Graduate Center, CUNY
David’s paper
Originally presented and recorded at the 13th Attention and Perception Conference, Chung-Cheng University, Chia-Yi, Taiwan, June 26, 2008.
Title by Richard Brown; music by NC/DC
17 responses to “Consciousness and its Function”
Hi David, thanks for the talk.
I have asked you something like this in the past, but I don’t think your response works in the way you think it does, so I figured I would try to make the question a little clearer.
So, as you are aware, there are priming studies, done for instance by Silverman and Mack, that show that we get different priming effects depending on whether subjects are conscious or unconscious of a change. Suppose a subject is presented with two pictures, A and B, which differ in some respect (an extra tree, say). When the subject is not conscious of the difference, both A and B show priming effects (i.e., the subject will complete a degraded picture with what was unconsciously seen in A and B); but when the subject consciously notices that there is a difference between A and B, only B (i.e., not A) shows priming effects.
I suggest that this is evidence that a state’s being conscious inhibits previous ‘outdated’ representations and so serves to guide certain representations (i.e., the conscious ones) to greater causal efficacy and so to greater effect on behavior. If this were true, it seems to me that it would definitely give some evolutionary advantage to having conscious states.
Suppose, for instance, that a bear is charging at you and that there is a spear just out of reach. The bear is running straight at you and you are casting about frantically for something to defend yourself with. As you look around, wildly, you first see the spear out of reach, and then on another pass you see the spear within reach (say it was knocked towards you in the chaos of the bear stampeding towards you). Now let us assume that in one case you do not consciously see this difference and in the other case you do. In both cases you will have representations of the scene with the spear out of reach and with the spear within reach. But only in the case where you consciously see the change (that is, consciously see that the spear is in reach) will the previous representation be inhibited, so that the representation of the spear as in reach is more causally active and liable to cause you to reach for the spear and (maybe!) stave off the bear. This doesn’t seem like some minor or neutral thing. This sounds like an important function for perceptual consciousness!
Now, I take it that your response is to point out that change detection can itself occur unconsciously, as in the Fernandez-Duque et al. Visual Cognition paper, and so detecting change can’t be the function of consciousness (since it happens unconsciously).
But this seems too quick to me. The Silverman & Mack results suggest that when a change is not consciously detected, both of the images presented will show priming effects, so of course subjects perform the way they do in the Fernandez-Duque et al. study. Given that the subjects were not conscious of the change, presenting either of the images will allow subjects to perform above chance. So these kinds of studies don’t show that consciously detecting the change doesn’t have some added value.
Hi Richard,
Thanks for your challenging thoughts about change blindness and priming in connection with a possible utility for perceptual states’ being conscious.
A minor point: I think we’d need more to conclude from the important Silverman-Mack study that a perception’s being conscious has the utility you suggest in the bear-spear scenario. For one thing, it’s not obvious that we can extrapolate from priming to action, especially quick action of the sort that figures in the bear-spear story.
But let me concede the connection you presuppose between priming and action. More important in any case, there is a confound in the Silverman and Mack priming result between consciousness and attention, something that they themselves stress in the discussion section of their article. As they note, when a change is (consciously) detected in their experiment, the changed aspect of the post-change presentation commands attention, and the attended perception of the changed feature “is accompanied by an inhibition of all other irrelevant … information.” They continue: “This suggests that the unattended visual information may be volatile and that without focused attention it may be effectively inhibited by the stimulus information that has been attended to (because it is critical to the task at hand), and is, therefore, more stable” (p. 419).
So it’s arguably *attention*, not the post-change perceptual state’s being conscious, that would have utility in the bear-spear case.
Thanks for the quick response David.
I take the point about priming and action but if pressed I think we could tell a story about that so I guess I’ll wait and see if you want to press…
Thanks for pointing out that section of the article; I do think it is plausible that it is attention that is doing the work, but I think there are still problems for your argument here.
So, it seems to me that if you are willing to grant that attention has utility, then your argument that the lack of utility for conscious states is indirect evidence for the higher-order theory of consciousness won’t go through. The conclusion you get instead is that either the higher-order theory of consciousness is true and conscious mental states have no function, OR a view like Jesse’s AIR theory is right and conscious mental states do have utility. This isn’t terrible, but you then need to give an argument that attention is not sufficient for a mental state’s being conscious.
But if you are able to give an argument that it is some higher-order state and not attention that is sufficient for a mental state’s being conscious it then starts to look as though the above considerations about priming suggest that the function of a perceptual state’s being conscious is to command attention to the state. So, again it looks like we have a function for consciousness.
Hi Richard,
I’m not sure I’m following.
It’s pretty clear that mental states occur consciously without attention. That’s evident from all the Mack & Rock work. Consider the basic experiment. Subjects attend (fixate) centrally. Something is flashed a bit off to the lower right, in a way such that it would be consciously seen under other circumstances. Subjects report not seeing that thing. Prinz, following Mack & Rock, construes that as due to subjects’ attending centrally; the absence of attention where the stimulus is flashed results in subjects’ not seeing that stimulus.
There’s serious question about that interpretation of that work. As Braun (Psyche March 2001 and forthcoming) has urged, decisively to my mind, it’s subjects’ *expectations*, not their attention, that’s operative.
But let the Prinz and Mack & Rock interpretation stand. That still wouldn’t show that attention is *required* for consciousness. Why? Because the perception that pertains to the region of the display where the flash occurs is not a nonconscious state. That state is perfectly conscious; it’s just that it’s not conscious *in respect of* the flashed stimulus, but only in respect of the white screen.
More generally, and Mack & Rock aside, consider any case of consciously seeing a scene. Attention is seldom if ever diffuse over the entire conscious visual field; it’s directed at only part of it. But much of the unattended portion of one’s visual field consists of conscious visual perceptions. So attention isn’t required for mental states to be conscious.
And robust experimental work on blindsight (e.g., Schurger, Cowey, Cohen, Treisman, and Tallon-Baudry, “Distinct and independent correlates of awareness and attention in a hemianopic patient,” Neuropsychologia, 2008) shows pretty decisively that attention isn’t sufficient for a mental state to be conscious.
So consciousness is not a matter of attention. The argument that it’s a higher-order state is what I call the Transitivity Principle: that a mental state’s being conscious consists in one’s being aware of that state. In what way? For present purposes that doesn’t matter; one won’t be aware of the state in any way at all except in virtue of some higher-order content.
You say that if I have “an argument that it is some higher-order state and not attention that is sufficient for a mental state’s being conscious it then starts to look as though the above considerations about priming suggest that the function of a perceptual state’s being conscious is to command attention to the state. So, again it looks like we have a function for consciousness.”
That’s where I don’t follow. Two arrays are presented to subjects. Sometimes subjects consciously detect the change. But it’s not that subjects ever see the arrays but fail to see them consciously; they see both arrays consciously, but sometimes fail to see the respect in which the second differs from the first.
The arrays are 3 rows of 3 letters each. Might it be that subjects see all the letters but don’t see the changed letters consciously? No. As in the famous and influential Sperling (1960) paradigm on which the Silverman & Mack work relies, subjects consciously see all the individual letters but need not, on that account, consciously see what letters they are. They consciously see them as letters, but not as ‘a’ or ‘b’, etc.
So I don’t see how this work shows that consciousness of mental states has utility in directing attention.
What am I missing?
Thanks for the detailed response, David; very helpful. I’m sure you aren’t missing anything… I don’t think I did a very good job of explaining what I meant in my second post. I’ll try to make myself clearer.
But first, you say,
I take it that the first bit is exactly where someone like Prinz is going to object. In fact I think that his main response to a number of objections to his AIR theory (including the ones you mention) is to argue that attention is often diffuse over the entire conscious field. So those considerations aren’t decisive unless one has independent support for the claim that attention is only/usually directed at a small portion of one’s conscious visual field.
Now for the function of consciousness. My thought was that the Silverman and Mack studies suggest that consciousness serves to ‘call attention’ to differences in our conscious experience, which then leads to inhibition of the causal influence of outdated perceptual representations. You are right to point out that in the Silverman and Mack studies subjects are consciously seeing both arrays. I agree with you that they are conscious of seeing the letters, or shape orientations as the case may be, but not conscious of which letters or which shapes are in which orientations.
The mere fact that these perceptual states are conscious does not seem to add any utility for the creature. The Silverman and Mack study seems to support this nicely by showing that these two conscious states both show priming effects when they are conscious (and when the subject is unconscious of the difference between them). But the utility actually comes in at the next stage; that is, when the subject has a new conscious state (being conscious of the difference as the difference). It is this conscious state that arguably has the utility. Change detection is the function of consciousness (I am suggesting). This has utility because it serves to call attention to the change, thus causally privileging the more current representation. If change detection can occur only when a state is conscious, and change detection serves this very important function, then a state’s being conscious seems to have utility. It is true that we don’t detect the difference every time there is one to detect, but that doesn’t mean that it isn’t the state’s being conscious which allows us to do it when we do.
The Fernandez-Duque results cannot show that change detection takes place unconsciously because the Silverman & Mack results suggest that we would get those results even when the change is not consciously detected.
This, I think, actually fits very nicely with your overall view of how these states come to be conscious in the first place. So, we have some creature that is able to sense certain physical properties and so has qualitative mental properties. This creature’s attention happens to be drawn to perceptual errors (that’s not bent/red…), which gives it the concepts it needs for higher-order thoughts and leads to its having the higher-order thoughts (I thought that it was red). This detection has some utility for the creature (it inhibits the erroneous representation) and *also* leads to its having conscious sensory experience. Having conscious sensory experience greatly enhances (it turns out) the creature’s ability to detect changes in its representations and so enhances the likelihood that the creature’s attention is drawn to these changes, thus enhancing the creature’s chances of survival. A function for consciousness is thus born.
Very many thanks, Richard, for your thoughtful, interesting reply.
First on Prinz’s AIR theory. I don’t know any actual evidence that attention is typically diffuse over the entire conscious visual field. But in any case that’s not what’s needed. What’s needed is exactly what Mack & Rock, and Jesse following them, agree is *not* the case. What’s needed is that attention is *always* diffuse over the entire visual field. If it were, whatever the Mack-Rock results are (recall: there is compelling evidence that they’re due to the manipulation of expectation, not attention), they wouldn’t be inattentional blindness. Plainly attention is sometimes focused only on a small part of the conscious visual field.
Prinz or you might reply that, although attention is largely focused on one small area of the conscious visual field in so-called inattentional-blindness experiments, it’s still diffuse *in some measure* over the entire conscious visual field. But that defense of AIR seems little better than fiat. What’s the operational test for diffuseness? Why isn’t it just the potential to command attention, rather than attention itself?
And note: Without a good operational answer to that last question, AIR fails for the sufficiency of attention for consciousness. Nonconscious, peripheral vision also has the potential to command attention.
Now about your interesting suggestion about change blindness and the possible utility of a state’s being conscious. Your idea is that change detection can occur only when the state that represents the changed condition is a conscious state. Since change detection has utility, a state’s being conscious has utility.
But I’m not sure why you think change detection can occur only when the state that represents the changed condition is a conscious state. At best, Silverman & Mack show only that *conscious* change detection can occur only when the second state is conscious. Why think that change detection can occur, or that it has utility, only when conscious?
Your answer, as I understand it, is that “[t]he Fernandez-Duque results cannot show that change detection takes place unconsciously because the Silverman & Mack results suggest that we would get those results even when the change is not consciously detected.”
But I’m not following that. Why is that so? There’s no reason to think we couldn’t test for the occurrence of nonconscious change detection. But Fernandez-Duque and Silverman & Mack haven’t.
Another, more general word about your idea. Silverman & Mack show that with *conscious* change detection the post-change state primes but the pre-change state doesn’t. Your original bear-spear story suggests that utility for prompting action attaches only to the post-change state, perhaps at least when change detection occurs.
But I don’t see why there wouldn’t be roughly as many scenarios in which it would be advantageous for the animal if the pre-change state could also prompt action. Unless the relevant scenarios are not only relatively frequent but also stacked very heavily in the direction of advantage only for post-change states’ prompting action, both of which seem at least doubtful and in any case unsupported, I don’t see significant selective pressure of the sort you suggest for the utility of states’ being conscious.
Hi David, sorry I did not get back to you yesterday; I was busy running errands out in meat space! Anyway, I really appreciate your thought-provoking responses. This is really helping me think through the issues more carefully.
I am no expert on Jesse’s theory, nor am I especially inclined to think that it is right (though, as you probably know, I think that IF it is right it will be because it turns out to be a way of implementing the transitivity principle), but I would guess that he might respond to your challenge by backing off of the ‘attention is diffuse’ line and emphasizing instead that a connection to working memory is necessary (on his view) for conscious experience. So, he says,
It seems that on his view being accessible to working memory is enough to get us a conscious experience of something-or-other. So the story might go something like this. At the beginning of the experiment, attention is diffuse over the scene before selecting something to focus on (the basketball); this makes the entire scene accessible to working memory (though not accessed by it). This explains why the unattended sections (i.e., non-selected sections) of the scene are conscious (they are accessible), why the gorilla isn’t seen (it isn’t accessible), and why some of the scene is ‘at the center’ of our conscious experience (it is being accessed).
You might think that this has the sound of an ad hoc response but Jesse might respond as he does in another context:
But this is all guesswork on my part… perhaps someone who knows more about AIR theory will have something to say…
Moving on to the function debate. You say,
I may have over-stated my position earlier. I don’t think that I need to hold that change detection happens only when the state that represents the change is conscious (the change may be detected by chance, i.e., attention may *happen* to select the difference, which leads to our being conscious of the change). Rather, all that I need for the argument to work (*I think*) is that when the state representing the change is conscious the likelihood of change detection is greatly enhanced, and that this is the basis for a function of consciousness.
You are right that the Silverman and Mack results only show that conscious change detection occurs when one is conscious of the change. Unconscious change detection is certainly a possibility and would refute the argument I am making. I certainly agree that we need some empirical work in order to adjudicate this issue. My only point in bringing up Fernandez-Duque et al. was that it, by itself, does not show that there is unconscious change detection; I thought that this was generally taken to be what the results showed, so I wanted to head off that objection. If I misunderstood that, then I am happy to withdraw that bit and wait until there is some experimental evidence one way or the other.
Finally, about the general worry. This is an interesting point, and one that I have not thought about. I guess I have been assuming that there would be a *heavy* bias in favor of the post-change state. The common-sense reason for my assumption was just that it seems likely that the creature with the most up-to-date information about its environment is the one that gets the worm, so to speak. What kinds of scenarios do you have in mind where outdated representations would confer some advantage on the creature in question? I can think of a billion bear/spear-type scenarios but none like the ones you mention… this is probably just a failure of my imagination, but could you give an outline of what such a scenario would look like?
Very many thanks for your thoughtful follow up on the AIR theory and function. First a few thoughts about the AIR theory.
You suggest that, instead of concentrating on attention, we focus instead on the accessibility to working memory that AIR appeals to.
I don’t think we know enough to know whether a state’s being accessible to working memory is coextensive with its being conscious. For one thing, ‘accessible’ is a tricky notion; I don’t know that we have a way of spelling it out with sufficient precision to allow us to determine in an empirically robust way whether a particular state is accessible to working memory.
But bracket that and suppose that accessibility to working memory is indeed coextensive with a state’s being conscious. Why even then think that such accessibility is what a state’s being conscious consists in?
You say you agree with me that the Transitivity Principle, on which a state’s being conscious consists in one’s being aware of it in some suitable way, is crucial to understanding what it is for a state to be conscious. (Prinz, as I understand him, contests that, pressing instead a so-called first-order theory, on which such higher-order awareness is neither necessary nor sufficient for a state to be conscious.) If so, we should understand a state’s being conscious in terms of that higher-order awareness, and seek to explain whatever accessibility to working memory occurs with consciousness as a result of that higher-order awareness.
I had urged that there are cases in which attention plainly doesn’t cover the entire conscious visual field; in fact I suspect that that’s always, or almost always, the case.
It seems pretty reasonable to think the same about accessibility to working memory. Consider the periphery of the visual field that, howsoever parafoveal, still counts intuitively as conscious. Is it reasonable to think that the states in those conscious peripheral areas of the visual field are, without a shift of attention, accessible to working memory? I think not. Neither Sperling nor anything else I know of suggests otherwise.
Let’s then move back to the utility of a state’s being conscious.
You had argued, using your bear-spear scenario, that it’s useful for a creature to act on the most up-to-date perception when there is a change. Sometimes, yes. But under less dramatic circumstances, it may be useful to know where the spear had been before, if, e.g., that’s relevant to whether I can usually find a spear. If the most up-to-date perception results from the spear’s having been knocked out of its usual place, it’s the prior information that’s more useful. That will often be the case.
You’ll probably reply that you’re concerned with cases in which immediate action is called for. But recall: I said a lot earlier in this exchange that the Silverman & Mack results don’t speak to relevance to immediate, pressing action. They show a priming effect, not an effect on immediate, pressing action.
You write that it could be that “change detection happens only when the state that represents the change is conscious (the change may be detected by chance, i.e., attention may *happen* to select the difference, which leads to our being conscious of the change).” But then attention is what’s useful, and consciousness may well be a by-product irrelevant for the utility in question.
You continue by saying “that when the state representing the change is conscious the likelihood of change detection is greatly enhanced.” What you need to show is that the enhanced likelihood of change detection is due to consciousness, not attention.
Thanks David; I am enjoying this!
I have to admit that I find what you say here about the AIR theory convincing, so I’ll leave off defending it. The only point I really wanted to make was that something like this has to be said in order to make the indirect argument for the higher-order theory work. But now that I have thought about what you say above, I guess you think you don’t have to say it explicitly, because this kind of view is taken care of by your arguments against first-order views. Is that right?
In the kind of case you describe I don’t find it convincing that the utility attaches to the first (outdated) state. It seems to me that the usefulness comes from being conscious of the difference as the difference. So I agree that it is the past *information* that is useful, but I don’t agree that this should make the outdated representation more useful. Being conscious of the difference as the difference will easily allow one to ‘read off’ the information about where the spear was, so it is there if one needs it.
As for the last bit, I agree that it is attention that is likely doing the work here, but I am arguing that it is consciousness that gets attention’s attention, so to speak. The likelihood of detection is greatly enhanced, I am suggesting, because the state’s being conscious enhances the chances of attention selecting the state in question. Now, granted, the Silverman and Mack results do not show this, but they suggest it, and there are no reasons to think it is wrong (there is no experimental evidence against the view). Also, it is natural to think that there is a close connection, or affinity, between consciousness and attention, and so it wouldn’t be surprising if consciousness served to attract attention. So this claim (that attention mostly selects from conscious states) is a plausible candidate for the basis of a function for perceptual states’ being conscious. So I don’t have to show that it is consciousness and *not* attention that is doing the work. What I need to show is that it is consciousness that allows attention to do the work, and I admit that this hasn’t been shown. I think that more experimental work has to be done, but still, this is a viable hypothesis about the function of consciousness that is not at this point ruled out.
Thanks again, Richard, for your thoughtful remarks.
You say: “I have to admit that I find what you say here about the AIR theory convincing so I’ll leave off defending it. The only point I really wanted to make was that something like this has to be said in order to make the indirect argument for the higher-order theory work.”
I’m not sure I understand; what is it that you think has to be said? What’s the indirect argument for the higher-order theory?
In any case, you’re right that Prinz’s AIR theory is, as he himself notes, a first-order theory. In other words, it rejects the Transitivity Principle, that a state’s being conscious consists in one’s being aware of that state in some suitable way. But my argument against AIR is that there are counterexamples that it can’t deal with except in ad hoc, unconvincing ways. That’s what I’ve been arguing in some detail in my earlier posts.
You write that you find unconvincing my case in which utility attaches to the pre-change perception, urging instead that it attaches to the awareness of the difference between pre- and post-change conditions. And you urge that the past information isn’t more useful (than what? than the information that there’s been a change?). But all I was saying was that sometimes the past information is the only useful information. Information that there has been a change might allow one to extract information about the past situation, but that’s an extra step, which might go wrong. As long as it’s the past information that sometimes carries utility, one can’t argue that utility attaches primarily to the later information or to information that there was a change.
As for whether it’s a state’s being conscious that captures attention: You agree that Silverman & Mack doesn’t show this, but urge that the result suggests it. I don’t see that that’s so. You also say that there’s no evidence against it. That’s not so; the blindsight GY experiments, for one example, indisputably show that perception that’s entirely nonconscious captures attention. Given that blindsight perceptual functioning is impaired, that suggests a strong tie between nonconscious perceiving and the potential to capture attention.
And you say that “it is natural to think that there is a close connection, or affinity, between consciousness and attention, and so it wouldn’t be surprising if consciousness served to attract attention.” Well, if you’re talking about what’s natural pretheoretically, I agree. But what is natural from a folk, pretheoretic point of view is often not the case. For one thing, it’s tempting from a pretheoretic, folk point of view to think that all perceiving is conscious, as many do.
You conclude that it’s “a viable hypothesis about the function of consciousness that is not at this point ruled out” that it is “consciousness that allows attention to do the work.” But what I’ve been arguing is that there is ample experimental work that does rule that out.
Hi David, thanks for the substantial responses! I appreciate your taking the time to work through this with me.
In the first part of your talk (introduction and caveats) you seem to suggest that showing that consciousness doesn’t have any significant function is indirect evidence for the higher-order theory. I was arguing that if you concede that attention has some significant function, then you need to rule out theories like Prinz’s that link consciousness and attention. I didn’t hear anything about this in your talk, and I was suggesting that the kinds of things you say here about those kinds of theories are needed to make the argument go through. But then I thought that maybe you didn’t need to do it explicitly, because your arguments against first-order theories are already enough to rule out Jesse’s kind of theory.
I think I agree with this, but haven’t you switched from talking about *states* to talking about *information*? I agree that the past information is useful. What I am disputing is that this must mean that the outdated perceptual state is therefore useful. I am claiming that it is more useful for the creature to have the most up-to-date representations at the causal forefront. In cases where *information* about outdated representations is useful, it is there for the creature to get. You are right that this requires an extra step and may in some cases go wrong, but overall utility is enhanced in this way (and notice that it is attention-cum-consciousness that gives us the utility in both cases, so your objection is not an objection to my hypothesis about the utility of consciousness but really to my interpretation of the Silverman & Mack data, and your point can be explained in a way consistent with my interpretation).
This isn’t really evidence against what I am arguing. I have already admitted that attention can be captured in the absence of consciousness. That is fine with me, and seems to be a necessary part of your explanation of how these states become conscious in the first place. But this is compatible with my claim that consciousness greatly enhances the likelihood that attention will be captured and so that the information is causally active. The old adage about the thirsty blindsighter illustrates this. It is only in highly artificial circumstances that we find attention captured in blindsighters. In the usual circumstances these representations do not end up causing action.
In the second place, I don’t think that the GY cases ‘indisputably’ show anything like what you suggest. The lesson from Block’s paper “On a Confusion about a Function of Consciousness” is that it is a mistake to automatically infer that any kind of consciousness is missing in blindsight cases. A version of this lesson applies to those of us who like the higher-order strategy. So, you assume that GY’s perception is nonconscious, but it might be the case that GY has normal conscious perception (some information is getting through via a non-V1 pathway and GY is conscious of himself as being in that state) but that when GY introspects he mistakenly is conscious of himself as being conscious of nothing in the area of the scotoma. So it will seem to GY that he is not consciously seeing anything when he is in fact having conscious perception. This is relatively bizarre, but GY is relatively messed up perceptually, so who knows what is going on in there? And it is consistent with the higher-order theory. So how do we rule this explanation out? If we can’t, then you can’t infer that GY’s attention isn’t captured because of consciousness.
I was talking pre-theoretically, but I disagree that these things are not often the case. In fact it seems to me that often they are the case. In the particular case you bring up we can give an explanation for why it would seem pre-theoretically that way to us (viz., that we are only ever conscious of the conscious ones, so of course it seems that way to us pre-theoretically). And in this case I think that we don’t have that kind of explanation (at least not yet). So this platitude about the link between consciousness and attention may turn out to be false, but, at least according to this argument I am trying to develop, that hasn’t been shown yet.
So, in sum, this looks like a possible function for consciousness that is (i) supported by pre-theoretic considerations; (ii) suggested by empirical considerations: the Silverman and Mack findings are plausibly interpreted as showing that only conscious change detection has utility; (iii) not ruled out by any experimental results: the Fernandez-Duque results can’t be marshalled against it, nor is the GY kind of case decisive; and (iv) a plausible way to interpret claims like Block’s that the function of consciousness is to “grease the wheels of access”; if we are careful not to think of access as a kind of consciousness (as I agree we shouldn’t) we can see it as simply drawing attention and thus enhancing the causal promiscuity of the perceptual state. Now all of this is defeasible, so I am not trying to claim that this is true, but only that there is a good case to be made that this is the function of a perceptual state’s being conscious.
Very many thanks for your further thoughts, Richard.
You’re right that I argue in the talk that if a mental state’s being conscious has no significant utility, that’s indirect evidence for a higher-order theory, and that I acknowledge that attention does have a significant utility, as I think is obvious. And you’re of course right that if we explain consciousness in terms of attention, then consciousness would itself have utility.
Time constraints may have prevented me from covering it in the talk, but the PowerPoint for §IV on executive function does have a slide, just before my discussion of Dretske, that takes up the question of attention and consciousness. I write there:
Nor is attention always conscious (Koch et al 2007; Tsuchiya et al 2007; Kentridge, Heywood, and Weiskrantz 2004; Lamme 2003; Schurger et al, 2008). And even if the attentive deliberation characteristic of the nonroutine actions in learning complex activities is typically conscious, it may well be, as noted in §II, that its being conscious itself does nothing to enhance that learning, but is simply a byproduct of the deliberate attending.
But I agree that my argument for the Transitivity Principle should also suffice as an argument against any first-order theory, such as AIR.
I agree that information is not the same as a mental state. But I’m not sure I understand the force of your invoking of that distinction. The relevant information is captured by the representational content of a state.
You say “overall utility is enhanced” by the creature’s having to recover earlier information by an extra inferential step. I don’t see how. Do you have an argument that earlier information has utility less often than later information? How could making the creature draw an inference enhance overall utility?
You also say “that it is attention-cum-consciousness that gives us the utility in both cases.” That’s what I dispute. Even if attention only occurred with a state’s being conscious, it might be just attention, and not consciousness, that had utility. What’s the counterargument? It can’t be that consciousness is required for attention, since we independently know that not to be so.
You concede that, but insist that “consciousness greatly enhances the likelihood that attention will be captured and so that the information is causally active.” I don’t see that Silverman & Mack shows that, nor any other results I know of.
Block (1995) did urge “that it is a mistake to automatically infer that any kind of consciousness is missing in blindsight cases.” I believe he’s given that up (2007, 2008, forthcoming). But in any case, it strikes me, along with many others, as most unintuitive. If a subject tells us that what it’s like for that subject is that no stimulus is seen in a region of the visual field, then any visual states we can independently show, as in blindsight, to pertain to that region are not conscious states.
I’m not sure I understand why we should consider as a live option your hypothesis about GY’s having a conscious perception in the relevant region which he erroneously regards as not there when he introspects. The damaged brain areas in GY don’t pertain to prefrontal cortex, where it’s likely that higher-order thoughts and introspection occur.
I also don’t see that hypothesis fitting well with a higher-order-thought theory. On my theory, a state is conscious (in the ordinary, nonintrospective way) if one has a suitable HOT about that state, and it’s introspectively conscious if that HOT is itself conscious, which is seldom the case. The HOT will be conscious only if there’s a third-order thought about it. It’s of course not inconceivable that there is in GY a relevant first-order perception, a HOT about it that results in its being conscious, and a third-order thought to the effect that there is no relevant conscious perception. But although it doesn’t contravene the theory, it seems so unlikely as not to be worth considering as a possibility. What independent reason–independent of mere possibility–is there to think that this is actually the case?
I’d said that what seems natural from a folk, pretheoretic point of view often isn’t so. You said you disagreed, but argued in support of your disagreeing that we can give an explanation of why things seem pretheoretically as they do (in particular, why it seems pretheoretically that all mental states are conscious). But explaining why something seems a particular way isn’t showing that it is that way; and in this case, there’s good reason, despite any explanation, to think that it isn’t.
You suggest we don’t even have an explanation of why consciousness and attention appear, from a folk perspective, to be linked. Two things. (1) I think we do. We aren’t aware pretheoretically of attention that attaches to nonconscious states; so attention seems pretheoretically to attach only to conscious states.
But (2) having such an explanation doesn’t matter. The assumption that seems to underlie your suggestion that it does is that if there’s no explanation of why it appears that such-and-such, it must be that the explanation is that it appears that way because it actually is that way. Well, maybe. But to substantiate that, you need to show that its actually being that way does result in its seeming that way; only then can you go with the explanation that the appearance is due to its really being as it appears. Not having an explanation of why things appear a particular way doesn’t license the assumption that they are; it should impel us to get an actual explanation.
As for whether “the Silverman and Mack findings are plausibly interpreted as showing that only conscious change detection has utility,” let me say as I did earlier: Silverman & Mack show a priming result, which does not by itself establish any utility at all for action. And not only does attentional access happen independently of consciousness; Silverman & Mack doesn’t show that consciousness greases the wheels for the attention that’s needed for action.
Could your hypothesis be correct? Of course. But there’s a reasonable amount of empirical data going against it and nothing directly in support of it, and that seems enough for now.
Hi David, sorry for not getting back to you sooner. I have been busy trying to figure out how to turn the video presentations here into podcasts/vodcasts so that people can watch/listen on their iPods and iPhones.
There is a lot of interesting and useful stuff in your response but before getting to specifics it might be useful to recapitulate the main thrust of my argument.
Consider the Silverman and Mack findings. We have two perceptual states, A and B, both of which are conscious and where there is some difference in B. When the subject does not (consciously) detect the change we see that both A and B show priming effects. When the subject does (consciously) detect the change we see that only B shows priming effects. Why is this? Well, as you nicely put it in your first response to me,
So you seem to agree with me (and Silverman and Mack) that change detection commands attention, and that attention is useful in the way suggested. So then there is a Very Interesting Question that arises. Can change detection occur unconsciously? You cite Fernandez-Duque et al. as evidence that change detection does occur unconsciously, so the fact that change detection is useful is not evidence for the usefulness of consciousness. But their work doesn’t show that. In their case we have two perceptual states, C and D (both unconscious), where D is different from C. The Silverman and Mack results show that when the change is not consciously detected we get priming for both perceptual states, and the Fernandez-Duque case is one where the difference between C and D is not consciously detected. So, so far, we have no evidence against the claim that consciously detecting a change has some further beneficial effect. In order to cast doubt on my hypothesis one would need to show that even in the cases where the change is unconsciously detected attention is commanded to the change, selects the current representation, and inhibits the other. There is no evidence for this yet. I think a study could be done to test this, but it hasn’t been done yet. So, as things stand now, there is only evidence that the usefulness of change detection occurs when we consciously detect the change.
Now to the specifics of your last reply. You say,
So, the information in conscious perceptual state A is p, the information in conscious perceptual state B is q, and the information that there is a difference in B is r. The useful information here (according to me) is r. In cases like the ones you bring up (where it is useful to have p) we need not think that this means that sometimes it is better to privilege perceptual state A. This is because p is recoverable from r, and r is the more useful information.
You then ask, “Do you have an argument that earlier information has utility less often than later information?” I think that there are the makings of such an argument in work like that of Joshua New. New shows that we are better at detecting changes involving animals or people than inanimate objects (that is, two scenes that differ only in whether there is an animal present elicit very low levels of change blindness). As he argues, knowing where animals are would have been extremely advantageous to our ancestors. Knowing where the animal was a second ago might be useful in some cases, but rarely will it be useful in the life-or-death way that the animal’s current location will be. New’s work is also nice because it suggests that change detection is enhanced by semantically categorizing the animal, which in our terms means that the subject needs to be conscious of the animal as an animal, which means that the subject has to be conscious of B in terms of r, or that the difference is consciously detected.
Moving on to the blindsight stuff, I do think that there is some reason to think something like this is going on. There is, for instance, this very interesting study which suggests something like this. Whether this is reliably replicated or not is a good question, but the point is that this may be more than mere possibility.
What about your argument that priming results don’t have implications for action? I guess I don’t really understand the challenge here. Priming results show, as I thought we agreed, that perceptual state B, when the difference is consciously detected, is ‘more stable’ and influences action in a way that perceptual state A doesn’t (the action in this case is the subject’s reports, but it is still action). In the case of semantically relevant categories like New’s, having perceptual state B stable and causally active is very much to the benefit of the animal. Do you have any doubt that detecting a lion where there wasn’t one before will strongly influence behavior? And so far we have no evidence that change detection occurs unconsciously, or that attention is drawn to the change in the absence of our consciously detecting the change, so we have no evidence against change detection as the function of consciousness.
Very many thanks for your latest, Richard, which usefully stresses the pivotal points in your position.
A few brief thoughts in reply.
Change detection can command attention, but it needn’t. Attention doesn’t, e.g., seem to be implicated in many cases of the change-blindness paradigm pioneered by James Grimes (1996), which induced changes during saccades. So, even if change detection couldn’t occur without being conscious, I wouldn’t be convinced that change detection can be used to forge the connection you hypothesize between attention and consciousness. And I see little reason to think that nonconscious change detection can’t occur.
I’m not sure what the information is that you imagine is captured in the thought that r–that a change has taken place? That this specific change has taken place? That a has changed from being F to being G? It’s not easy to see how any information short of the last would serve to recover information about the pre-change condition (here, that a was F). And it’s similarly hard for me to see how that elaborate information, which explicitly states the pre- and post-change conditions, could be conscious without conscious awareness of the pre- and post-change conditions themselves.
You also suggest (if I understand you) that if the information that p is recoverable from r, then r is the more useful information. I don’t see that. If r is useless except insofar as p is recovered from it, then the organism might well be better off not having to wrestle p out of r.
But a general point. These ideas of what’s useful and what isn’t are all speculations which would be hard in the best of circumstances to sustain in a serious, empirical way.
I agree that the work that seems to show that change detection is enhanced where animals figure, as against inanimate objects, is very interesting. Attention is also attracted more readily.
But you say that this “means that the subject needs to be conscious of the animal as an animal, which means that the subject has to be conscious of B in terms of r, or that the difference is consciously detected.” I don’t see this. Something may be recognized as an animal without its being consciously recognized as an animal. Once so recognized, attention is attracted and change detection is enhanced. Unless you’re assuming, which I see no evidence for, that change detection can’t occur nonconsciously, there’s so far no reason to think that the relevant states need be conscious.
I don’t have much to say about Overgaard et al, which you cite to suggest that blindsight subjects may after all see consciously, though not very well. For one thing, the scale they use for seeing more or less well, including not at all, doesn’t seem to allow for the distinctly nonvisual “sense” that something is there reported by some (but by no means all) blindsight subjects. In addition, blindsight subjects are notoriously not a homogeneous bunch, as one would expect, given that their cortical damage is not uniform. So I’m not sure what their work with GR would show even if their results are well-founded.
Is reporting something in the way Silverman & Mack subjects do the same kind of action that would have utility in your lion case? Maybe. But that’s by no means obvious; it’s something that would have to be shown.
Hi David, thanks again for the response. I feel like we are making some progress.
Granted. But this kind of empirical work needs to be attempted in order to rule out the kind of thing I am suggesting. So far as things stand now, that hasn’t been done (on either side).
I’m not sure this is right. Consider the case where the subject doesn’t consciously detect the animal. In that case the subject (presumably) unconsciously recognizes the animal but doesn’t (consciously) detect that it is different. So it is an open question at this point whether attention is commanded in that case. So far as the evidence we actually have goes, this hasn’t been established (or ruled out). Admittedly, New’s study isn’t really designed to answer the question we are discussing here. We would need to see if we get the Silverman and Mack dissociation in these cases when the stimuli are masked. Until this kind of study is done, all either of us has is our intuitions about what seems plausible.
This is because the method that the experimenters were using calls for them to use a scale determined by the subjects (they go through training and spontaneously produce a Likert scale with those points). So this is the scale that GR came up with. GY might come up with a different scale. But the general point might still hold, which is just that the forced binary choice paradigm might be setting the bar too high and so be obscuring the data we get, and therefore our assumptions about blindsight might be off.
This point is well taken, but I was only responding to the ‘indisputable’ claim you made about blindsight (that it indisputably showed attention without consciousness). I thought it was disputable, especially on the HOT account, and there is some experimental evidence which suggests that this may be more than a mere theoretical possibility. Isn’t that all I need to dispute your claims about GY?
I don’t think the kinds of action are the same. But that the representation is just generally more available for duty…this would need to be shown, I agree, but still there is no case against it that doesn’t start from the assumption that there is no function for consciousness. That is, granted everything that you say about other proposed functions is correct, my suggestion is still in play. And given the pre-theoretic pull of the intuition that there must be some purpose for consciousness, it seems to me that you need to present some empirical evidence against my claim that specifically rules it out. But this hasn’t been done. The case (against me) is circumstantial. So the advantage goes to my claim (for now)…
Very many thanks for your follow-up thoughts, Richard. A few thoughts in return.
I said: “These ideas of what’s useful and what isn’t are all speculations which would be hard in the best of circumstances to sustain in a serious, empirical way.” You replied: “Granted. But this kind of empirical work needs to be attempted in order to rule out the kind of thing I am suggesting. So far as things stand now, that hasn’t been done (on either side).”
Let me stress: It would be difficult–if possible at all–to test these ideas about what’s useful and what’s not in any robust empirical way. You and I had been entertaining speculations about whether pre- or post-change information would on balance be more useful. I see no way to move from the just-so stories that animate and underlie evolutionary psychology to something we can rely on.
About images of animals attracting attention: There’s evidence wholly independent of anything in the Joshua New study that images of animals attract attention. I’ll try to find the reference, but because of other demands today and tomorrow, that may remain a promissory note for the Online Conference.
About the Overgaard et al study: I said that “the scale they use for seeing more or less well … doesn’t seem to allow for the distinctly nonvisual ‘sense’ that something is there reported by some (but by no means all) blindsight subjects.” You replied that Overgaard et al used a scale that their subject came up with. My point exactly; it’s hard, if possible at all, to tell what it means.
You also wrote “that the forced binary choice paradigm might be setting the bar too high.” Too high for what? The issue you raised is whether there is conscious visual sensation in blindsight. How does a graded scale help with that? It’s a binary issue.
In replying to my point that blindsight is far from uniform across different human subjects, you wrote that the Overgaard et al study of GR was meant only to contest my claim from Schurger et al that attention without awareness indisputably occurs in blindsight. You say Overgaard et al shows that “there is some experimental evidence which suggests that this may be more than a mere theoretical possibility. Isn’t that all I need to dispute your claims about GY?”
No, I don’t think so. For one thing, it’s a different subject, who hasn’t been tested with the Overgaard scale but has been subject to a remarkable amount of testing by very many different people. And there’s still the foregoing point about what the subject-generated Overgaard scale could show.
A lot of what you argue seems to rest on the idea that I haven’t ruled out every possibility. True enough. But I wasn’t arguing that we can’t imagine that a mental state’s being conscious has no utility, only that the best evidence and the best theoretical considerations point in that direction.
Thus, you write that “there is no case against [priming results’ showing utility for self-preserving action] that doesn’t start from the assumption that there is no function for consciousness. That is, granted everything that you say about other proposed functions is correct, my suggestion is still in play.” And you say that given “the pre-theoretic pull of the intuition that there must be some purpose for consciousness,” the presumption is on your side until I’ve ruled out every possibility.
I don’t see that. Nobody will ever rule out every possibility. And I argued in my talk against the reliability of the pretheoretic intuition that consciousness has some utility. But maybe I’ll address the issue of reliability again in my summary comments, after the conference closes.
In wrapping up, I’ll first say something that ties some of my remarks in the online discussion of my talk to an argumentative strategy that’s central to the talk.
Much of the online discussion revolved around whether a connection between consciousness and attention might point to some utility of consciousness, since attention plainly has great utility. My reply was mainly to point out compelling reasons to doubt that the required tie between consciousness and attention actually holds. Since attention is neither necessary nor sufficient for a state’s being conscious, attention cannot be part of what it is for a state to be conscious. That’s enough to undermine Prinz’s AIR theory, and the utility that attaches to attention doesn’t carry over to consciousness.
As I noted at one point in the online discussion, even if attention and consciousness were coextensive, that by itself would not show that any utility attaches to consciousness. If mental states are F if, and only if, they are G, there might be utility to their being F, but not at all to their being G. It might be, e.g., that whatever factors led to the state’s being F (e.g., occurring attentively) also led to its being G (e.g., occurring consciously).
This point generalizes. It may seem that all planning or reasoning of some particular sort is conscious. But even if that were so, it would not by itself show that utility attaches to consciousness, as against the intentional content of the planning or reasoning. We would have to show that the utility actually attaches to consciousness, not to the intentional content. An argument for some utility of consciousness must meet higher standards than mere suggestive coextensiveness.
Let me then close with a few remarks about why it’s so inviting to attribute utility to mental states’ being conscious. When a phenomenon is poorly understood, there is a temptation, pioneered with damaging scientific effect by Aristotle, to try to get a grip on the phenomenon by appeal to some utility it may have. The poorer our understanding, the greater the temptation to invoke utility.
We should resist this temptation and be suspicious of its apparent fruits. Even if a particular phenomenon does have utility, we can get a serious explanation of it only by seeing how that phenomenon arises and operates. Utility may stem in part from the way something operates, but it’s dependent also on the effects it has on other things, and exploring that may well be independent of anything about the phenomenon itself.
This is one reason for the search for some utility of consciousness, since many still see consciousness as poorly understood. But there is in addition another reason, special to consciousness, which stems from a tendency to rely exclusively or primarily on first-person access to learn about our mental functioning.
Being in mental states of various sorts plainly has enormous utility. Human life would be impossible without our elaborate mental functioning, and the same goes in varying degrees for other animals. And relying solely on first-person access encourages assimilating mental states to conscious mental states, thereby leading to assimilating the utility of mental states to the utility of mental states’ being conscious.
First-person access is of course crucial to understanding human mental functioning. But mental states are accessible in both first- and third-person ways. The pain I consciously feel is a state you may well know I’m in, and the thoughts and desires I introspect are states you can often independently tell are operative within me. And mental states that occur without being conscious are accessible only in the third-person way.
So we can’t infer from the utility of being in mental states to the utility of those states’ being conscious. Indeed, even if all mental states were conscious, that would hold, given the earlier argument against relying on coextensiveness.
We must instead ask what utility attaches to the property of a state’s being conscious independently of any utility that attaches to the state’s other mental properties, in particular, its representational character. I argued in my talk that whatever utility conscious thoughts and volitions do have attaches not to their being conscious, but rather to their representational properties. I’ve argued elsewhere that the same holds for conscious qualitative states–that whatever utility they have attaches to them independently of their being conscious.
In my last two posts, I spoke of images of animals attracting attention independently of issues of change detection. I now have the reference to the study I was thinking of: Fei-Fei Li et al., “Rapid natural scene categorization in the near absence of attention,” PNAS 7/9/02; thanks to Hakwan Lau. Their finding isn’t, as I’d wrongly remembered, that animals attract attention, but “that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task.” In any case, this finding is independent of change detection, and it shows that images of animals play an enhanced role in processing.