Correlates, Causes, and the Neurobiology of Consciousness

Presenter: Joseph Neisser, Grinnell College

Commentator: Jakob Hohwy, Monash University, Australia



  1. Reply to Hohwy’s Comments
    Thanks to Jakob for his thoughtful and detailed engagement with the paper. His comments will certainly be helpful to me as I move forward with this line of research.
    Before I get to the philosophy, though, I should say that although I’m basically happy with the talk, I’m uncertain why the uploaded PowerPoint presentation has lost so much. The resolution is fuzzy, the colors are off, and some text has even been lost. So, for those of you who sat through the presentation, I’m sorry about that. Hopefully people were not too put off to follow the argument. Also, I regret the delay in posting this response. I was at another conference this weekend and haven’t been able to get to this until today. Hopefully this will attract some discussion!
    Okay, on to the reply. At this point I restrict myself to a single issue: Why give up on treating subsystem N in terms of minimally sufficient content?
    Hohwy takes it that the key criticism of the received definition of the NCC concerns the distinction between content consciousness and creature consciousness. Thus, his fundamental response is to try to preserve the standard approach by saying that N – the neural subsystem that is minimally sufficient for the corresponding experience – will have to include the creature NCC as well. In much the same spirit, he also suggests that the perspectival content of experience might be handled by including earlier stages of visual processing which might be more sensitive to ‘variant’ or perspectival information. The move is to handle the problems with the NCC concept by widening the scope of N to include the missing content.
    In some ways, I am perfectly happy with this move. Thomas Metzinger, who was the editor of the book in which Chalmers’ piece originally appeared, responded to Noe & Thompson’s critique in much the same way. He salvaged the NCC concept at the price of holding that the contents of consciousness only correlate with global states of the whole brain, possibly even the entire nervous system (Metzinger, 2004, p.71). This is a heavy price to pay for holding onto the sufficiency requirement. For one thing, it is a position clearly held in principle, rather than one that is drawn from any particular finding in neurobiology. In fact this kind of response straightforwardly abandons Chalmers’ core idea that the work by Logothetis et al is paradigmatic of the search for the content NCC, as well as his claim that discovering the NCC for content consciousness is a realistic goal attainable in the foreseeable future. On the expanded NCC idea that Hohwy recommends, those experiments could not possibly be aimed at discovery of the NCC! The research isn’t just incomplete; it becomes the very paradigm of how not to find the NCC. Part of my motivation for writing this paper was to vindicate the research itself – to argue that the experimental design is valid and that it fits straightforwardly into the general movement of contemporary neuroscience. I’d rather not throw it out on the grounds that we now realize that the NCC definition is experimentally intractable.
    Although I am in perfect agreement about widening the scope of N, I think there is more to the phenomenological analysis than this. Hohwy asks whether looking for minimally sufficient neural activity that varies with perspective would meet the critique: “If yes, then the content-matching idea seems safe. If no, then what is the notion of the ‘first person perspective’ that is in play?” (p.3). This is a fair question, though I don’t think my response will satisfy Hohwy. But here it is. The analysis of the perspectival nature of conscious content is offered as a counterexample to the content-matching idea. In order to handle the counterexample, we might include whatever neural mechanisms underlie ‘variant’ information processing. But this misses the underlying point. The problem is really one of individuating conscious states in the first place. There is no actual experience the content of which is exhausted by the appearance of the preferred stimulus. Thus, the neural activity that is attributed the role of specifying the appearance of the preferred stimulus does not exhaust the content of any actual experience. The preferred stimulus is in consciousness only as part of a gestalt. The stream of consciousness is not composed of an assemblage of smaller ‘experiences’ that stand on their own. (One need not look to the phenomenological tradition to find this sort of point. For example, see Michael Tye, Consciousness and Persons, 2005). Instead, the preferred stimulus is nested within the ongoing global experience of a conscious subject. An attempt to fix ‘which’ actual, concrete experiential content is the one that is specified by these momentary activation profiles of N will always be like nailing jello to the wall.
    So the thought behind the paper is to try to finesse this problem. The problem arises from looking for a discrete neural state with a particular conscious content. As I argue, the problem is not about finding the causes of measurable changes in consciousness. Dropping the matching-content idea – i.e., dropping the sufficiency requirement that forces the subsystem N to do a job that it cannot do – reduces the explanatory burden. Subsystem N need not ‘know’ what the eventual gestalt content will be. That is, the conscious content can be underdetermined by the activity in N, even though N plays a part in mechanically producing the eventual change in experience.
    That is my motivation for dropping the language of minimal sufficiency. However, I do realize that the language of minimal sufficiency could be retained by (a) expanding the scope of N as Hohwy suggests, and (b) making the ceteris paribus conditions do even more work. But that begins to look a little ad hoc.

  2. Joe’s comments are really useful for bringing the debate forward. I’ll just make a couple of remarks. Joe is right that I am resisting the notion of experience being “intrinsically experiential”. Not so much because I think it is wrong as because I struggle to form a clear idea of what the notion means (if it is not just the idea that conscious content is not only invariant). Joe’s clarification helps with this. I agree on the wrongness of the building-block approach to consciousness, which takes conscious contents as causally independent units that can be explored in isolation from each other. The appeal to the gestalt-ness of consciousness indeed seems to undermine this idea. But I don’t think this entails a recoil wholly away from the idea of individual contents with distinct neural correlates: we can retain the spirit of the building block approach by appealing to causally interacting units whose neural signature and manner of interaction can be explored scientifically. True, the content will change in systematic ways depending on the other building blocks, but this is not an unusual causal situation in nature. This leaves an unresolved question: would this notion of causally interacting contents undermine the notion of minimal sufficiency? In other words, must Chalmers’ notion of the NCC absolutely be married to the unattractive notion of causally independent building blocks? Joe tends to think it must, I tend to think it doesn’t have to. Perhaps what is going on is that I never thought of the notion of minimal sufficiency as necessarily involving the matching-content doctrine. It is clearly a pragmatic notion on a par with standard notions of causal inquiry, and in causal inquiry one always works with a shifting set of background conditions and with contrasts between limited aspects of the cause and effect events.
For this reason I don’t find it problematic to focus on core aspects of content and setting aside other (perhaps gestalt-ish) aspects for later inquiry, and still retaining the notion of minimal sufficiency. If the notion of content being intrinsically experiential prohibits this picture then I think there is a threat of moving consciousness beyond the limits of causal inquiry altogether.
    Of course there is much more to be said about how the interaction of the neural correlates of content and creature consciousness should be explored. I think this is one of the most exciting questions in consciousness science. Joe argues that with an incorporation of creature consciousness into the NCC we can no longer view the classic binocular rivalry experiments as paradigms of NCC research. Under one conception of the NCC I agree, and have argued earlier, that this is exactly right: since the people experiencing rivalry are already conscious, the neural substrate of their rivalry can only concern selection into consciousness of contents, not what it is for those contents to be conscious. This point is however independent of the issue of intrinsic experientialness, and I don’t see anything preventing the use of the notion of minimal sufficiency for picking out the neural substrates of selection into consciousness. I also find it very probable that this substrate will be a neural representational system. My strategy, after making this observation, would be to investigate how this selection-system causally interacts with the system underlying creature consciousness.
    In the end I am not sure exactly where or how Joe and I depart. We both think a causal interpretation of the NCC is attractive – I just don’t find the appeal to intrinsic experientialness either attractive or necessary. Perhaps there is a deeper difference: perhaps Joe, with Noe and Thompson, is suspicious about the very idea of the brain maintaining representations of the environment. If it doesn’t, then it will certainly be hard to find neural representational systems which are as much as minimally sufficient for conscious experience (even on more lax readings of minimal sufficiency). This is however an entirely different line of argument, which is rather controversial. In general I agree with the kind of sceptical line Block took in his review of Noe’s book, and, more specifically, I think perception of simple things like occluded objects (the famous cat behind the fence) simply demands that the brain maintains models of the world. To speculate: those internal models deliver the contents of experience. So one question for Joe is whether and to what extent his views rely on the anti-representationalist position?

  3. Interesting discussion going on here. I am new to this aspect of studying consciousness and the brain, but I see the value of considering the difference between causes and correlates. Knowing what brain regions correlate with consciousness doesn’t tell us nearly as much as knowing what causes consciousness. However, I’m failing to see the issue at hand with the distinction between creature and content consciousness. I know the distinction, but, if the distinction is real, I don’t see why the concepts would be hard to dissociate neurally using single-cell recording or fMRI. I get that, to have content consciousness, one must have creature consciousness, so the brain systems will likely have a lot of overlap, but they will in the end be distinct. Moreover, if I am understanding correctly, the perspective content problem (in that people can view the same object and have varying experiences of it) is more of a speedbump than a road block. We can already use machine learning to predict, from fMRI data, what object a person is seeing. Granted, we must scan subjects looking at the objects, create a neural map of each object, and then test it on new subjects, but the tools exist nonetheless. What stops us from using this method to dissociate creature consciousness from content consciousness? I know these questions are probably more help for me than for you, since I am not well versed in this, but hopefully they will be a little illuminating for you!

  4. Maxwell – Thanks for your comment. It gives one final chance to try to clarify my strategy here. I’ll pick out two pieces from your post: (1) “Knowing what brain regions correlate with consciousness doesn’t tell us nearly as much as what causes consciousness” and (2) “the perspective content problem (in that people can view the same object and have varying experiences of it) is more of a speedbump than a road block.”

    (1) I strongly agree with this, insofar as the identification of causes is the basic explanatory strategy for neurobiology. I take it that Chalmers is, in effect, offering a philosopher’s ‘reconstruction’ of a research program in neuroscience. I believe that his reconstruction misrepresents the nature and goals of the actual research, and that there is a sense in which there just is no such research program, so construed. Rather, there is a search for causal mechanisms that proceeds with or without the philosopher’s metaphysical concerns about the problem of consciousness. But I’d also point out that there is a sense in which explaining by causes actually tells us LESS than an appeal to correlates would, because the standard correlate idea expresses a particular theory about the relation between conscious content and the content of neural subsystem N (the relation is held to be one of identity). The burden of the argument is that although this particular claim cannot be sustained, this is no problem as far as the explanatory canons of neuroscience are concerned. This brings me to . . .

    (2) Again I certainly agree. In fact I’d put the point more strongly: it isn’t even a speed bump, if you apply the actual standard of explanation rather than the standards imputed by Chalmers. So, the research method you describe can very well be used – and indeed it is being used. The caveat is that the neural map it identifies will at best be a map of the object, not a map of the conscious experience that the subject is having. Roughly, this is because the experience is not composed only of a stimulus object.

  5. Jakob – Hello again, and thanks for your response. You have challenged me to take a stand on a deep issue about the representational theory of mind: “So one question for Joe is whether and to what extent his views rely on the anti-representationalist position?” This cannot be fully resolved here, but I can venture a few things.

    You suggest that internal representational models “deliver the contents of experience.” I think I can safely reject this suggestion if it is intended in any precise way. If it only means that the brain processes information, or that neural representations are necessary to explain cognition, then of course I accept it. But if the claim is that local neural activation is sufficient (even minimally sufficient? here the debate gets bogged down, so I will set it aside for the moment) for some actual, phenomenologically valid experience, then I reject it for two well-known reasons: (a) there are some features of experience that are non-representational (Block’s reason), and (b) the gestalt content of actual experience can only be identified with overall states of the animal, not with momentary states of some particular neural population (Noe & Thompson’s reason).

    To step back for a moment, the paper is not intended to be “anti-representationalist” in any grand sense. It only takes issue with a very specific application of a specific version of representationalism to the neurobiology of consciousness. And it is noteworthy that an ever increasing body of work in neuroscience has little use for the version of representationalism that is assumed by many philosophers of mind. Neural mechanisms are often treated as controllers, governors, or regulators. This does not mean that they are not processing information or that there is no role for representations (pace the more extravagant claims of some theorists). But it does mean that the information in the system is (i) bound by structural relations to the environment and (ii) interpretable only in context.

    A final twist on that last bit: The neural subsystem N need not ‘know’ (represent) the rest of the context. It just needs to govern a particular trajectory through neural state space. At this level, the activity of N is explanatory even without the attribution of conscious content.

    I really enjoy this dialog, and look forward to following your work on this subject!

  6. I too am in the situation Maxwell finds himself in, where my knowledge and experience in this field is at the level of novice at best. To try to gain a greater understanding of this topic I took it upon myself to read a few of the other articles you referenced for your piece (Chalmers’s NCC piece from Metzinger’s book, the Noe and Thompson ‘Are There Neural Correlates of Consciousness?’, and one of Logothetis’s studies with Leopold from 1996). These only seemed to complicate the issue further, while at the same time allowing for reference to a greater picture. I have one issue that I currently seem unable to resolve, and then another issue that arose after reading your response to Maxwell’s comment.

    (1) When discussing the redefining of Chalmers’s NCC concept, I need a bit more clarification as to your decision to use ‘partially causal’ in your terminology. The clarification may have already been addressed, and I may be sticking to what I read from Noe and Thompson as to there being an agreement rather than a correlation; going from agreement to partially causal seems a big leap for me. I agree with them that there is more than likely an agreement between the neural and the perceptual, rather than the two sharing the same content. Much like the hallucination example, there can be neural activity which produces seemingly phenomenological responses (imagining an elephant in the corner as opposed to there actually being an elephant in the corner), but they are not the same as actually seeing the elephant in the corner. This case illustrates for me that there is an agreement between the percept and the N, but since the former is bottom-up, so to speak, and the other is top-down (to roughly use that terminology), there would never really be a causal relationship, simply an N-state relationship.

    My articulation of this matter would be much improved by further study but I hope that it sheds some light on the issue at hand and before I conflate things even further I’ll move on to my address of your response to Maxwell.

    (2) Given that current research allows us to explore the processes of the brain on various levels using fMRI, TMS, and single-cell recording such as in Logothetis’s studies, it seems that in due time there will be greater empirical evidence as to which processes correspond to certain conscious states. Whether there is ample evidence at this time I do not know, and even if there is, philosophers love to disagree. My concern lies with two things: your mention of how explaining by causes will tell us less than an appeal to correlates would (which I agree with, but more on that in a moment), and that of having a neural map of an object rather than a map of experience. As to the first issue, I agree that explaining by causes will tell us less and would open itself up more to the explanatory gap; but since that is what science is looking for, neural causal relationships, while philosophers are looking for more explanatory relationships, it seems your move (by using causal in your new definition for the NCC) is actually widening the gap. As to the map issue, it led me to think about neuroplasticity and the ability of the brain to expand certain functions to compensate for surrounding damaged areas. In this case it seems as though an object map would be a step closer to a conscious experience map.

    In retrospect that last inquiry about the brain map might not be true, but it is a good stepping stone to the question of how, if NCCs are thought to be causal, an area of the brain (say Broca’s area) could be considered the NCC of, in this case, written and spoken conscious experience, if, once a stroke happened, that area no longer existed. Would it be said that ‘Well, now this area is the NCC’? But if that were the case, then wouldn’t a causal relationship (which I am taking to be a one-to-one relationship) be ruled out?

    I’d like to thank you in advance for any feedback as well as your patience.

  7. James – Hello and thanks for posting! There is a lot in your post, but I’ll try to respond as succinctly as possible.

    1. First I should state that I am not here to speak for Noe & Thompson. No doubt they would reject many of the things I say, especially about causal mechanisms. But I do take certain phenomenological arguments seriously, as they do. You suggest that their notion of ‘agreement’ (as opposed to correlation) may be apt. I basically accept this, but I put the point slightly differently by saying that the activation in N underdetermines the content of actual experience. Activation in N is compatible with (agrees with) the appearance of multiple conscious experiences, and hence does not specify (is not minimally sufficient for) any one of them. Your example of perceiving vs hallucinating an elephant will fit the bill here. But you conclude that this shows there is no causal relation between states of N and conscious experience! I do not draw this inference. Even without carrying a matching content that specifies the experience, the neural mechanism can be an INUS condition that makes a measurable difference in the experience.
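    For readers unfamiliar with Mackie’s terminology, the logic of an INUS condition – an Insufficient but Necessary part of an Unnecessary but Sufficient condition – can be sketched in a few lines of code, using Mackie’s own stock example of a short circuit causing a house fire (nothing here is specific to the neural case; it is only the bare logical structure):

```python
# Toy illustration of Mackie's INUS condition.
# The fire occurs if either of two complex conditions is realized:
# (short circuit AND flammable material nearby) OR (lightning AND dry grass).

def fire(short_circuit: bool, flammable_material: bool,
         lightning: bool, dry_grass: bool) -> bool:
    return (short_circuit and flammable_material) or (lightning and dry_grass)

# 1. The short circuit alone is INSUFFICIENT for the fire:
assert fire(True, False, False, False) == False

# 2. The conjunction it belongs to IS sufficient:
assert fire(True, True, False, False) == True

# 3. ...but that conjunction is UNNECESSARY (the fire can occur without it):
assert fire(False, False, True, True) == True

# 4. Within its own conjunction, the short circuit is NECESSARY:
assert fire(False, True, False, False) == False
```

On this picture, calling N an INUS condition of the experience commits one only to N being a necessary part of one sufficient causal complex among possibly several, not to N matching or specifying the content of the experience.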

    2. I am not certain I fully understand this part of your post, but I think I can respond to the charge that my interpretation of the data widens the explanatory gap.

    The argument (along with the research here in view) neither widens the gap nor closes it. In the end, we are left with the same basic kind of relation between physical science and consciousness that we had before. We already believe that neural mechanisms underlie (causally explain) consciousness. The data simply add some fascinating specificity about which neural mechanisms causally explain which aspects of consciousness. So I disagree that the move I propose from correlates to causes renders the data less explanatory.

    Much depends on your theory of explanation and on what you would like to explain. As you pointed out, many philosophers are looking for something “more explanatory.” The ideal explanation would render the relation between consciousness and the brain intelligible, such that it could be derived (deduced) from mechanical laws. Jackson calls this “a priori physicalism.” It is possible that the day will come when this can be done. If so, then this kind of research will be understood in retrospect to have been a small step on that road. That is, it may turn out in the end that Logothetis et al did make some progress toward closing the gap. But even if the gap is never closed, this research is explanatory insofar as it localizes part of an underlying mechanism.

  8. Hi — I missed this paper and discussion until now for some reason. Sorry for jumping in so late. A few notes based on an overly quick reading — apologies in advance for almost certainly overlooking crucial aspects of the paper and discussion.

    (i) As Jakob notes, the content NCC is just one approach to the NCC that I spell out in my paper. I also spell out a more general notion of phenomenal-family correlation that doesn’t rely on content matching — the point of it is precisely to provide a more general way of capturing the sort of correlations that Joe is after.

    (ii) Joe says that local states of the sort invoked in content NCCs don’t really correlate with consciousness at all, because they rely on background conditions (including conditions for creature consciousness) for the correlation. I think that this is a verbal issue. Obviously there is a sense of correlation in which covariation against a necessary background counts, as there is another sense in which some sort of sufficiency is required. In the paper I distinguish the relevant notions by talking about core NCCs and total NCCs. I think it is quite clear that someone like Logothetis is after something more like a core NCC — obviously he isn’t trying to get at the neural correlates of creature consciousness, but is happy to presuppose creature consciousness in the background, but his work is no worse for that. The treatment of core NCCs that I give reflects that.

    (iii) The “experiences” that NCCs correlate with are not supposed to be abstractions — they are honest-to-goodness phenomenological experiences! But of course it is legitimate to focus on different (phenomenal) properties of experiences and to find different neural correlates for them if appropriate. And again, I think this is just what someone like Logothetis is in effect doing.

    (iv) I think that any phenomenal state is a first-person state by definition, and my discussion is cast in terms of phenomenal states, so I’m not sure how it misses first-person states. Maybe Joe’s thought is that something about the first-person character of phenomenal states precludes analyzing them in terms of content or precludes (a la Noe and Thompson) matching them with third-person states. I think that point turns on a number of complex issues about the representational character of consciousness so I’ll set it aside here (but I do note that Noe and Thompson’s critique largely depends on identifying third-person contents with ultra-simple contents such as receptive-field contents). Or maybe the thought is that focusing on specific contents may miss the background subjective character of experience. Here I note that my treatment leaves open the possibility that there are different neural correlates for background states of consciousness and for specific contents of consciousness.

    (v) “Philosophers’ metaphysical concerns” about consciousness really aren’t playing any significant role in my treatment — they’re set to one side. Talk of correlation is precisely intended to stay neutral on questions such as causation, identity, psychophysical laws, and so on. It seems to me that Joe’s talk of causation is more metaphysically committal here.

    (vi) Joe’s eventual definition is interesting although I don’t fully understand it. It appeals to “the INUS condition”, but of course where there are INUS conditions, there are often many INUS conditions. My talk of states that are minimally sufficient with respect to a certain sort of background state is in effect intended as a way of isolating something like the relevant INUS condition (the total state will be unnecessary but sufficient for the phenomenal state in question, the local state will be insufficient, and it will plausibly be necessary to the extent that there isn’t multiple realization of the role of the local state). I note that when we have a minimally sufficient correlate against a background, we will have a correlation between whole families of states, and there will be a clear sense in which variation in the local correlate “makes the difference” between the correlated phenomenal states — at least once we set aside metaphysical concerns about causation for pragmatic concerns about covariation and prediction.

  9. Hello David, and thanks for this feedback. I’ll use the ‘last word’ privilege bestowed on conference presenters to respond to a few of your observations.

    First I’ll remark that the basic intent of the paper is to respond to the phenomenological critique in a way that acknowledges the point about the holism of conscious content while simultaneously affirming the basic NCC idea and the empirical project(s) that it identifies. So I’m primarily trying to show that the phenomenology does not stand in the way, and that there is a perfectly straightforward interpretation of the NCC according to which it does not stand or fall with any assumption about matching content. The attempted revision of the NCC definition is meant to show how the NCC project can handle the phenomenological objection. Since you are unmoved by the objection, you aren’t very motivated to reply to it. So I’ve taken it upon myself! Now I turn to your specific points.

    (ii) & (iii): The experiences that NCCs correlate with, you affirm in point (iii), are not meant to be abstractions. They are meant to be honest-to-goodness experiences. I agree that they ought not to be abstractions, if the notion of the content NCC is to hold water. And yet they are abstractions. Immediately before this, in point (ii), the distinction between core & total NCCs is invoked, precisely in order to prevent the core NCC from being held to any phenomenologically valid standard. The core NCC (which is what binocular rivalry studies are after) does not correspond to any actual experience but only to an abstracted cognitive content which is said to be at the core of an experience that, strictly speaking, only corresponds to the total NCC. (And again, though I do not speak for Noe & Thompson, I’d point out that they would probably deny even this last claim, arguing that the content of experience correlates only with total states of the animal, not with total states of the nervous system.)

    (iv): You point out that the issue of matching conscious content to activation in N turns on some vexing issues about the representational nature of consciousness, including and especially the problem of giving the first-person point of view a place in a third-person physical world. To clarify my approach in this paper, I can say that there is a connection between the problem about subjectivity and the gestalt nature of conscious content. Part of what makes experience ‘mine’ is the way that its contents are constituted relationally – in connection to one another, to the background, and to the point of view. Particular intentional states are not just ‘for me’ but also for one another. Or again: where ecological psychology insists that there is no such thing as a stimulus but only a nested array, the analysis here insists that there is no such thing as an isolated conscious content, but rather a nested array of experience.

    (v): The phrase “philosophers’ metaphysical concerns” was ill-advised. In fact I had resolved to avoid that sort of talk, but there was a moment of unedited weakness in my reply to Maxwell (that whole paragraph, perhaps, ought not to have seen the light of day). But although I sort of blurted that out, there is something behind it. I agree that talk of causes is more metaphysically committal than talk of correlates. On the face of it, one of the great strengths of the way you formulated the NCC idea is that it is couched in the neutral language of correlates. But on reflection, two things present themselves about this:

    (1) Who is it that desires neutrality here? Who is afraid of commitment? Metaphysicians, that’s who, not neurobiologists. What the researchers are after are causes and explanations. So the very choice of a metaphysically non-committal definition is motivated by “metaphysical concerns” rather than by reflection on the empirical practice. In general, neuroscience is unafraid of making a metaphysical commitment to causes. In fact, those two lovebirds are already married.

    (2) When it comes to the content NCC, the way the correlate idea is actually cashed out is not as neutral as it at first appears. It relies on a fairly specific application of the representational theory of mind. The obvious counterexample here is the enactive approach favored by Thompson and company. With respect to that particular metaphysical possibility, the definition of the content NCC is not neutral, which of course is why its proponents were all bent out of shape about this in the first place.

    Now, as I said at the top, I think the basic idea of the NCC, along with the research it seeks to comprehend, is perfectly compatible with the phenomenological analysis. The idea of reformulating the definition is just a way of trying to make this absolutely clear. By dropping the talk of a contentful correlate that specifies an actual experience, and instead speaking of neural variables that predict measurable changes in experience, we can shed that much more theoretical baggage while at the same time remaining faithful to the aims and findings of the actual research. This brings me to the final point in the post.

    (vi) I recognize that the language of minimal sufficiency, plus ceteris paribus conditions, can pick out the same logical relation that Mackie’s INUS condition picks out (once the talk of content is cleared up). This is one respect in which I am unsatisfied with the revision of the NCC definition offered in the paper. I adopted the INUS condition because I wanted to make explicit the commitment to a mechanical model of explanation in neuroscience. Although there is a large body of literature on causation that uses the language of minimal sufficiency, this is primarily associated with statistical models of causality (and with the counterfactual analysis championed by Lewis). But Mackie’s approach is within a tradition that focuses on intervention and manipulation of key variables, and in turn this is based on a mechanical conception of causes that better reflects the explanatory strategy in neuroscience. A more satisfactory take on research like Logothetis’ might appeal to acyclic graphs rather than to INUS conditions. But rhetorically it is much more convenient to speak of INUS conditions, and it allowed me to meet the word limit for this conference.

    It is all very much a work in progress. My thinking will be improved through the scrutiny I have received from Jakob, you, and the others who gave me their attention. Cheers!
