If Materialism is True, The United States is Probably Conscious

Presenter: Eric Schwitzgebel, UC Riverside

Commenter: Janet Levin, USC

59 Comments

  1. Thank you, Janet, for your thoughtful and generous comments! I find almost nothing to disagree with.

    I conclude my paper by confessing that I am unsure whether to (a.) conclude that the U.S. is actually conscious, or to (b.) interpret my argument as a challenge to materialism, or instead to (c.) see the argument as revealing a problem in the very enterprise of armchair metaphysics about consciousness. You, Janet, have some remarks about this tripartite conclusion that I would like to expand on just a bit.

    On (a), you say that my candidate for group consciousness is not more plausible (or less plausible) than Block’s Chinese Nation. I’m inclined to agree. Block’s Chinese Nation has the advantage of tighter functional similarity to the human mind; the U.S. has the advantage of more history and natural environmental embedding. It’s not clear, metaphysically, which set of features is more important. I could see going either way. Regardless, I hope that my reflections still make a useful contribution by adding valuable diversity to the set of examples that the metaphysician should consider.

    On (b), you say that my challenge to materialists is not unique, since it resembles some classic and influential challenges by dualists and by Thomas Nagel. Here I fear that I expressed my point too briefly. I meant, in fact, to activate two different aspects of the word “challenge” with that remark. A “challenge” might be an objection, but it might also be an invitation to rise to an occasion. If the reader treats the non-consciousness of the U.S. as a fixed point, and she also interprets my arguments as showing that materialism would imply that the U.S. is conscious, then she can conclude the falsity of materialism. However, I also think it would be reasonable for someone committed to the non-consciousness of the U.S. to respond to my arguments not by rejecting materialism but rather by trying to develop a materialist theory that clearly implies that the U.S. is not conscious. I don’t currently see how this could be done without either denying rabbit consciousness, denying plausible cases of alien consciousness, or adopting some dubiously ad hoc restrictions on the conditions of consciousness, but I stand ready to be surprised.

    On (c), you express sympathy with the possible skeptical conclusion of the essay. It’s hard to see how either empirical investigation or armchair intuitions could settle metaphysical questions about what sorts of weird aliens, group systems, etc., could be conscious. But neither does it seem that there is any other method for settling those questions. So… are we stuck? And if so, how stuck are we? Should the skeptic about the armchair metaphysics of alien consciousness still feel comfortable in embracing materialism of some form or other, even if we can’t know exactly which types of material beings are conscious? Or is materialism itself open to doubt on similar grounds? Is there compelling empirical or armchair-metaphysical reason to reject Berkeley’s idealism or Kant’s transcendental idealism or Chalmers’s property dualism? If so, what’s the methodological difference between those cases and the case of U.S. consciousness? (I further explore this last issue in my essay in draft, “The Crazyist Metaphysics of Mind”, available here: http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/CrazyMind.htm .)

  2. Perhaps the problem here is mainly linguistic. The author suggests that there is enough similarity in functionality between entities like organic systems (brained organisms) which we call “conscious” and large scale systems such as nation states to draw the counter-intuitive conclusion that if the reason for consciousness in such organic systems is their functional organization then we must suppose that even entities like nation states might be conscious. But this conclusion is undermined by closer examination of the meanings of the terms the author relies on to identify the similarities between the two classes of entities.

    He suggests that an important similarity between the two classes is “exchange of information”, i.e., that both kinds of systems have this as part of their make up. But is that actually true? No one thinks that when neurons in a brain exchange information they are doing the same thing as individuals in a nation state when the president, say, sends a message to Congress or the electorate votes for the candidates of their choice or a law is passed or a diplomatic letter is exchanged. In the latter cases the information exchange involves thought processes, ideas, intentions, beliefs and so forth. But none of that presumably takes place in individual neurons where the exchange of information consists of chemical events that trigger electrical discharges which affect other neurons. In both cases we can call what is going on information exchange but the fact that the same terminology applies does not imply that the same processes are being named.

    Schwitzgebel seems to recognize this when he adds that it might just be the case that “the information exchange among members of the U.S. population isn’t of the right type to engender a genuine stream of phenomenally conscious experience. For consciousness, there presumably needs to be some organization of the information in the service of coordinated, goal-directed responsiveness; and maybe, too, there needs to be some sort of ability of the system to monitor itself in a sophisticated way.” But then he adds: “. . . the United States has these properties too. The United States is a goal-directed entity, flexibly self-protecting and self-presenting. The United States responds, intelligently or semi-intelligently, to opportunities and threats – not less intelligently, I think, than a small mammal.”

    But this is to make the same mistake as before, i.e., the goal directedness of the United States is a function of thinking entities, as are its “self-protecting” and “self-presenting” behaviors. Not so with brains, however, the presumed source of consciousness in entities like ourselves. That WE are goal-directed, self-protecting and self-presenting is arguably a function of our brain components acting in a certain way which, if THEY are “goal directed”, etc., they are not in the sense that those of us who have brains with such components are.

    Now it is conceivable, as Schwitzgebel reasonably suggests, that there could be entities in the universe so constructed as to consist of non-contiguous components, a kind of hive mind entity or a swarm entity that we might, under some circumstances, come to ascribe conscious states to. If functionalism is a true account of consciousness, it is arguable that entities on vastly grander scales than ourselves, even swarms, could also be conscious. But in such cases how would we know them for what they are? And, if we could not, then on what basis could we call them “conscious” at all? To borrow a thought from Wittgenstein, if a mountain could speak, we would not understand it.

  3. Thanks for the thoughtful comment, swmirsky! I agree that there are some important differences between the information exchange within the U.S. and within the brain, one obvious one being that some important parts of the U.S. information exchange (not all!) involve people’s conscious opinions and communicative intentions. But it’s not clear how relevant that is. That seems like the “anti-nesting” issue, which I address briefly in the paper (and at more length in the full-length version of the paper, both in the anti-nesting section and then again in my response to Dretske).

    I am sympathetic with the skeptical worries you raise in the concluding paragraph of your comment. Maybe such skepticism is, indeed, where this path of reasoning ultimately leads. The question, then, is whether that skepticism can be contained safely within remote alien examples or whether it starts to come home to actual cases on Earth, like the U.S. or ant colonies. I’m inclined to think it does come home.

  4. I suppose the main thrust of your paper is to address the metaphysical question that seems to be raised, i.e., whether a materialist account of things can stand if we are led to draw such counter-intuitive conclusions as found in your United States example by heading down that path. I must admit to having very little interest in arguing the metaphysical case but I am strongly convinced that a functionalist account of consciousness is the right one and THAT, it seems to me, is consistent with a de facto materialist stance. But the case for a conscious United States or comparable entity doesn’t seem to me to pose a real problem for a functionalist account.

    I think it may be readily granted that any kind of physical platform that can do roughly what brains do can produce a genuine sort of consciousness (even if not precisely the same as brains produce). Thus computers are not excluded, nor are macro scale systems or swarm-constituted entities. All that would be required is that the system in question be able to do the right sort of things in the right way. But that implies certain empirical considerations, namely what are those things and what is the right way?

    Thus a swarm creature like your Antareans would not be excluded from the class of entities that can do it, in principle, unless there were facts about the requisite kind of platform (e.g., our brains) that cannot be replicated in the swarm medium. Similarly, I would expect that consciousness, understood functionally as I’ve suggested, would be a continuum phenomenon and so would fade in or out for us the farther we got in any direction from familiar operating territory. Just as it’s harder to tell if a jellyfish is conscious than it is to tell if a mouse is or if another human is, so it would be harder at the other end as we scale up. Of course your Antareans by definition are recognizable to us so they don’t present such a problem. On the other hand, a system that’s big enough or slow enough may just be beyond our capacity to encounter. Still, if consciousness is just a locus on a certain continuum, then it may not make a heck of a lot of difference. On the other hand, it suggests that consciousness is nothing more than one more manifestation of certain physical stuff organized in a certain way.

    Your United States example though, aside from its merits as an interesting thought experiment, strikes me as empirically lacking in certain key features a brain would have and any proto brain would need. Unlike the Chinese city where each component operates like a neuron in a brain, performing relatively simple tasks, that is not the case in your United States example where none of the components are doing that but rather doing some very different kinds of things which, unless they did double duty in some way, could not be expected to be replicating brain processes.

  5. Thanks, Eric, for your comments on my comments! I probably sounded a bit too ecumenical, though, in my conclusion. In my view (contrary to swmirsky, I think) there are various methodological reasons, e.g. inference to the best explanation, for skepticism about whether certain creatures functionally like us are conscious–and perhaps (probably!) they apply to the US. But, as you say, it’s not clear that they would preclude the attribution of consciousness to spatially distributed creatures. And it’s not clear, in my view, that they would preclude the attribution of consciousness to creatures psychofunctionally like us, but with different types of physical ‘realizers’ of those psychofunctional states.

  6. Thanks for the follow-up, swmirsky! You write:

    “Your United States example though, aside from its merits as an interesting thought experiment, strikes me as empirically lacking in certain key features a brain would have and any proto brain would need. Unlike the Chinese city where each component operates like a neuron in a brain, performing relatively simple tasks, that is not the case in your United States example where none of the components are doing that but rather doing some very different kinds of things which, unless they did double duty in some way, could not be expected to be replicating brain processes.”

    So is the idea that each component needs to be doing something simple? Well, the U.S. has components each of which is doing something simple, if you slice it into fine enough pieces (e.g., down to the level of the neurons of its citizens). Is the idea that these simple components can’t be organized into groups that do something complex, except at the level of the entirety? Well, I’m not sure that brains meet that criterion; nor is it clear that collectively conscious aliens, since you seem to allow them, would meet that criterion. (One relevant thought experiment might be my “Group Minds on Ringworld” case, posted on The Splintered Mind blog.)

    Now maybe some criterion along these lines could be developed and made to work, sweeping in all the plausible cases of consciousness but leaving out the U.S. But I don’t think there’s an account of this type out there yet. So that would be a “challenge to materialism” in the second sense of “challenge” in my response (b) above to Janet Levin.

  7. Thanks for clarifying, Janet. I’m sorry if I misread the tone of your conclusion. I’d be very interested to hear you expand a bit more on your thought that methodological considerations such as inference to the best explanation probably provide reasons for skepticism about U.S. consciousness. (I assume you mean to use “skepticism” here as “probably-not” skepticism rather than “could-go-either-way” skepticism. If you only mean the latter, our views are pretty close.) I could imagine someone mounting an interesting argument along these lines, and it might be productive for me to tangle with such an argument.

    Maybe one starting thought is that you don’t need to refer to the consciousness of the U.S. to explain its behavior? If that’s the starting thought, then the first direction I would probe for a rejoinder would be whether, on your view, you need to refer to the consciousness of a rabbit to explain its behavior. If yes, then I would ask what role consciousness is playing in that explanation, and then consider an analogy to the U.S. If no, then I would ask why we treat rabbits differently from the U.S. in this regard.

  8. why the restricted antecedent? after all, many of the arguments still go through about as well even if e.g. dualism or panpsychism is true. e.g. even if there are immortal souls, why should they attach themselves just to bags of neurons and not to nations? there would have to be some principled difference here. then off the arguments go. so i think you should write a paper called “the united states is probably conscious”.

  9. Hi Eric, I had the same question as Dave. Anyone who endorses some version of the principle of organizational invariance is going to have to endorse your conclusion.

    In fact it seems that your real target was never materialism per se but rather functionalism (where that includes dualists who endorse OI). The views you are targeting take it that consciousness -just happens- to be physical but physicalism is supposed to be the claim that it is metaphysically necessary that it be physical (this is a point Ned made in his paper on Max Black’s objection)…

    So this leads us to neurochauvinism (full disclosure: I probably do suffer from the uncool un-Copernican neuro-fetishism). When arguing against this view you say,

    “From a cosmological perspective it would be strange to suppose that of all the possible beings in the universe that are capable of sophisticated, self-preserving, goal-directed environmental responsiveness, beings that could presumably be (and in a vast enough universe presumably actually are) constructed in myriad strange and diverse ways, somehow only we with our neurons have genuine conscious experience, and all else are mere automata there is nothing it is like anything to be.”

    I don’t know about anyone else but I am not moved by this appeal to strangeness. It may be strange, but so what? If it is true, then it is true strange or not. I would also object to the appeal to intuitions about possible creatures constructed in ‘myriad and diverse ways’. Intuitions are fine but they are not enough to ground an argument against physicalism! If we meet any of these creatures then us neurotypes might be in trouble but so far there is no reason to worry. If it turns out that consciousness is identical to some biological property then these ‘possible’ creatures are like XYZ to our water. Sure you can imagine a world where water isn’t H2O but that doesn’t show that water isn’t H2O actually or that it isn’t H2O necessarily. So what argument is there, that doesn’t depend on these question begging intuitions about scifi creatures or name calling, against an identity theory?

  10. “However, I also think it would be reasonable for someone committed to the non-consciousness of the U.S. to respond to my arguments not by rejecting materialism but rather by trying to develop a materialist theory that clearly implies that the U.S. is not conscious.”

    Hi Eric! This is a great paper and a very fascinating and difficult topic. It ties into so many different issues in phil mind. I’m wondering though if you were too quick to dismiss the “bankruptcy of metaphysical speculation” response.

    It seems that a materialist could be impressed with your arguments for the USA being conscious on the best theories of qualia but see this result as a compelling reason to become a metaphysical-skeptic about qualia-theorizing and adopt a strict operationalism about terms like “phenomenology” or “consciousness.” That is, we could admit up front that we are stipulating our definitions of consciousness and set up our ascription criteria according to our pretheoretical intuitions. If some theorist is pretheoretically committed to thinking mammals are conscious but insects are not, they will develop a theory and tweak the already vaguely defined parameters to get that outcome and reject theories that don’t as being “absurd”.

    If our only epistemic access to the thing-to-be-explained (qualia) is through either “self-intimation” or introspection, and everyone’s self-intimation/introspection gives them different ideas about what qualia is, then we cannot use introspective intuitions to settle theoretical disputes about which entities do or do not have qualia. And I think in line with what Dennett and Cohen are saying, empirical science is also not useful in settling these high-theoretical disputes about the necessity of accessibility because if you are clever enough and cherry-pick enough empirical data it’s always possible to interpret the data in a way favorable to your theory.

    This problem would be highlighted if we gave unlimited funding to 30 of the most respected qualia theorists to build qualia-detecting instruments. They would end up with wildly different apparatus. You would have some people looking into quantum states of the brain, others into the frontal lobes of humans, others investigating bacteria and insects, and others in fundamental physics. Compare this to giving money to astronomers to solve an ongoing astronomical mystery about how to detect a new type of unobservable star: you would expect them to have different ideas on how to build a telescope and what exactly to look for, but they would be more or less all looking in the same place for the same thing with a similar set of assumptions and methodological commitments about what would count as settling the debate. But things seem much different for the scientific study of qualia.

    And these theorists would still have to deal with all the smart people saying it’s impossible to explain qualia no matter how hard we try. All of these factors make radical eliminativism and metaphysical-skepticism appealing. You say in the paper that radical eliminativism seems at least as bizarre as believing that the United States is conscious but if reactions of strangeness are no index of reality then eliminativism/skepticism is just as much a live player as qualia metaphysics.

  11. Good point, Eric. My suggestion was that, even if one could appeal to ‘general methodological considerations’ such as that biologically similar organisms (e.g. humans and rabbits) have evolved in the same environment and share other traits as giving *some* (albeit not definitive) support to the claim that rabbits are conscious–which does not extend to the claim that the US is conscious–there are other cases (humans vs Martians or other alien species–including those that are spatially distributed)–in which such arguments seem unwarrantedly chauvinistic. Of course, as Richard points out, this doesn’t mean that neural chauvinism is not the correct view; it’s just that it’s hard to argue for. (There is some interesting work, however, by Bill Bechtel and others involved in both philosophy and neuroscience, that suggests that the identity conditions for neural state-types are more abstract than philosophers take them to be–and therefore neural chauvinists may turn out to be more inclusive than they are typically thought to be.) As for Dave: I agree; the same argument can be used by dualists, which, among other things, reinforces the view that if the consequent is false–if the US is not probably conscious–we have no more an argument against materialism than against dualism.

  12. Thanks for the continuing comments, folks!

    Dave: I’m inclined to think that the arguments probably work for some non-materialist views as well, including yours. (I’d be interested to hear your opinion about that!) But I don’t think they work for God-imbues-a-soul substance dualism or Berkeleyan idealism or transcendental idealism or various strengths of solipsism, and I’d like to see these options back in play in the philosophy of mind. Maybe the title should be “If Materialism (or One of Its Near Cousins) Is True, the United States Is Conscious”?

  13. Richard: I agree that the arguments will probably also apply to organizational-invariant dualist views.

    On neurochauvinism, I have two thoughts about the seeming oddness. First: Although I entirely agree that our sense of oddness is no rigorous index of reality and cannot be trusted, I *also* think that there is no clearly better way to settle these types of metaphysical questions (a phenomometer up against the Antareans or the U.S.? theoretical virtues like simplicity?). The whole metaphysical enterprise here is epistemically suspect, but it seems worth trying our best hand at it anyway, and our sense of what’s bizarre is an epistemic tool we unfortunately cannot afford to jettison. Second: It’s not merely folk-intuitive bizarreness here that I am citing, but what is plausibly a principle of good scientific method: the Copernican Principle that says that if a theory yields the result that we are in some specially privileged spot in the universe (e.g., the exact center), that theory is unlikely to be true. Admittedly, it’s not straightforward how to apply this principle to the present case, but my sense is that it would be a suspiciously un-Copernican self-congratulatory coincidence if (a.) there were lots of systems of diverse sorts that emitted intelligent-seeming and linguistically-interpretable behavior including what would seem to be self-reports of consciousness, and (b.) among them only the systems with neurons like ours actually had consciousness. It *could* be true, I acknowledge. But is there any *reason* to think it’s true? What weighs in the epistemic balance against the suspicious un-Copernicanness of it?

  14. Here’s a related question: were God to exist, would there be something it’s like to be God? If so, then there are interesting implications for the relation of consciousness, ‘what it’s like’ properties, and subjective perspectives.

    Just asking, as they say.

  15. Eric, my point about the difference between the Chinese nation model and the United States model, as presented in your argument, is that the first is a parallel realizer of brain operations which just happens to be writ stupendously large, i.e., each constituent individual performs the kinds of functions we suppose brain cells perform (whatever they are), rather like many of Searle’s Chinese Rooms cobbled together on a national scale and so, together, produce a massive replication of a brain at work. On the other hand your proposal takes the United States as it is, i.e., a nation state — being what such a state is and doing what such states do. But such an example is not a fair analogy because of the difference in functions despite the shared terms used to name them. The issue of organizational and operational similarity, from the point of view of functionalism, requires that equivalent functions be performed on the different platforms, not that very different functions, which we just happen to call by the same name (a mere contingent fact of our language, perhaps even an unavoidable one) are at work. There is no reason to suppose that the United States, as it is, is a conscious being, even without discounting the possibility that conscious beings may come in quite a broad variety of shapes, sizes and configurations — and that functionalism implies that.

  16. Gary, you put it very nicely. Indeed, I do think it’s a reasonable response to embrace the bankruptcy of metaphysical speculation. I just don’t think it’s the *only* reasonable response, and I’m not sure whether it’s best among the reasonable responses. It certainly is possible to stipulate an operationalization and leave it at that. As you point out, people will probably operationalize in very different ways, and then the basis of choice among those stipulations must be…, what, pragmatic? Maybe confusion would then best be forestalled by going eliminativist about the original concept. My resistance here is that I am too much a phenomenal realist. I just can’t help but think that something important is left out by hard operationalism or eliminativism. But I also admit that something needs to give, theoretically. It’s a quandary and I’m not comfortable with any of the possible solutions I see.

  17. Janet, I think I agree with you on all those points. I don’t think any of the options are good. So I complain about neurochauvinism, but since all the other options I see also seem to have problems, I can’t rule it out entirely. It’s not my *favorite* among the options, but that might be partly a matter of taste and, as my wife puts it, too much passion in defending the rights of imaginary aliens!

  18. Arnold: It would have a large locus, I assume, presumably something like its geographical region. Thus asteroids and other nations would be exterior and the Rocky Mountains would either be interior or perceived as part of the ground its various parts were standing on.

  19. Janet, on God: I could see it going either way. If there is something it’s like to be God, then I agree that “perspective” will be a tricky issue! If we are part of God’s body, (s)he might proprioperceive us. If we are external to God, (s)he might perceive us but in a non-perspectival way.

  20. Swmirsky: It sounds like you are committing to the view that in order to be conscious an entity must have “equivalent functions” to what goes on in the human brain. If “equivalent functions” is meant with a high degree of specificity, that seems chauvinistic against aliens who might exhibit all signs of phenomenal consciousness and yet operate very differently. If it is meant more liberally, then we’re back with the question of what the relevant functions are and whether the U.S. lacks them. No?

  21. Ah, it looks like I just missed Eric! 🙂

    You say, “I agree that the arguments will probably also apply to organizational-invariant dualist views.”

    Right, so it isn’t *materialism* you are after but rather functionalism (which is only materialist friendly, not materialist)…so I think Dave is still right that your title is wrong since it seems to be targeting people like me when it isn’t and it seems to not be targeting people like Dave when it is.

    You also say “…and our sense of what’s bizarre is an epistemic tool we unfortunately cannot afford to jettison.”

    I just don’t agree with this. It seems to me that what we find bizarre is a function of what theories we are already (perhaps implicitly) attracted to (or actually accept). So I don’t see how any of this intuition based stuff is supposed to get us anywhere except a really compelling autobiography.

    More seriously though, you say, “It’s not merely folk-intuitive bizarreness here that I am citing, but what is plausibly a principle of good scientific method: the Copernican Principle that says that if a theory yields the result that we are in some specially privileged spot in the universe (e.g., the exact center), that theory is unlikely to be true”

    I know you said you are not sure how to apply this here, but I don’t see how we are supposed to apply this here ;). Is it a violation of this principle that water is H2O rather than XYZ? It doesn’t seem that way to me. Is it a violation of this to say that electricity in my computer isn’t lightning? It doesn’t seem that way to me. What we are after is to identify the ultimate nature of consciousness, if it is identical to a neural state, then it is. How does that violate this principle?

    You go on to give an answer to my question when you say, “but my sense is that it would be a suspiciously un-Copernican self-congratulatory coincidence if (a.) there were lots of systems of diverse sorts that emitted intelligent-seeming and linguistically-interpretable behavior including what would seem to be self-reports of consciousness, and (b.) among them only the systems with neurons like ours actually had consciousness.”

    I agree that this would be strong evidence against an identity theory but my point was that I don’t see *any* reason at all (besides question begging intuitions) to think that (a) is true. I admit that if (a) were true I would give up on an identity theory (or at least concede that it is unlikely to be true) but we just don’t have any non-question-begging reason to think this is true. I feel the same way about water and H2O; if we discover watery stuff in Andromeda that is composed of XYZ then I will give up on the identity between water and H2O. Until then, imagining it to be so doesn’t move me (rather it seems to indicate that you already reject the identity theory).

  22. On your last point, Richard, I’m inclined to think that the current scientific evidence suggests that complex systems could arise in a wide variety of ways, and once you have complex systems with certain sorts of further conditions, selection pressures have some reasonable chance of causing the emergence of life; and once you have life plus certain sorts of further conditions, then selection pressures have some reasonable chance of causing intelligent life, and maybe also linguistically interpretable behavior. Is there reason to think this whole process needs to be done with things like Earthly nucleotides and neurons? I’m not sure I see why we should think so. The required processes don’t *seem* to require a very specific substratum. Arguably, for example, if we could create the right kind of “artificial life” system on a computer, with enough time and resources, we could see such patterns arise there. Nor do I see any reason that Earthly insect societies couldn’t evolve to achieve human-level group intelligence, if there were the right kinds of environmental pressures. Arguably, the most sophisticated insect societies already approach mammalian group-level intelligence. (Such insect societies have neurons, but at the group level they are not configured like our own.) Even if neither of these possibilities works, your position, if I understand correctly, commits you to the view that the *only* way for behavior intelligent enough to be linguistically interpretable to evolve would require a close analog to human biology. That seems a pretty strong claim. Maybe you could say more about what reason there is to think that claim is true? (I mean this as a serious question, not a rhetorical question.)

  23. It may be best to separate the question of whether ‘non-Earthly’ biological creatures (or even biologically based ‘groups’ such as insect societies) could be conscious from the question of whether, in general, ‘complex systems’ functionally like us could be conscious.

  24. Hi Eric, this is really cool – a couple of comments/questions, starting with phil bio stuff:
    1. The literature on species as individuals may give you some support in your fight against our prejudice against entities composed of unconnected entities. These people argue that each species is a temporally and spatially distributed individual (Ghiselin 1974, Hull 1978).
    2. On the other hand, the literature on the ‘major transitions of evolution’ may go against your line of argument. In some ways, that topic is up your street (the jump from prokaryotes to eukaryotes, from single cell organisms to multi-cellular organisms, etc). But the standard way of analyzing these transitions (by Maynard Smith, for example) is this: while single cells can interact as an ensemble, the ‘major transition’ happens only if they also replicate as an ensemble. As long as they replicate separately and only get together to interact with the environment, they should not be considered an organism. So one could, mirroring this argument, insist that the US is not an organism and thus not a potential bearer of consciousness because it does not replicate (as an ensemble – only its components are replicating).
    3. I wonder about the modality of your conclusion. Your title is that if materialism is true, then the US is probably conscious. But isn’t what you show that if materialism is true, then it is possible for the US to be conscious? To show that the US is in fact conscious right now would need some further argument. But I take it that your arguments in the paper aim to show that the US has a general structural organization that doesn’t rule out that it can be conscious. Could you clarify?
    4. To push this point further, it seems that we can use the arguments in your paper to conclude that the US can represent any state of affairs. Representing is a matter of neural connections. Your weird extraterrestrials can represent in spite of their ant-based brains. So the US can represent, say, that Paris is the capital of France. But then the same would go for any other state of affairs (and there are lots of states of affairs…) So this makes me think that the real claim is that it is possible for the US to represent that a is F (if the actual connections between people align that way). But then similarly the real claim about consciousness would be that it is possible for the US to be conscious (again, if the actual connections between people align that way).
    OK, I have more but first I’d like to know what you think. Thanks again for the cool paper, Bence

  25. Right, Janet. There are several different possibilities here. I don’t want to commit, for example, to the view that Swampman would be conscious. Maybe historical embedding is necessary. But is biology necessary? I’m not sure I know what it is for a system to be biological. On a sufficiently liberal sense of “biology”, I see no Copernican violation. But a liberal sense of “biology” won’t deliver neurochauvinism.

  26. Thanks for the helpful comment, Bence!

    On 1: Yes, that’s a nice connection.

    On 2: I see the “major transitions” literature as supportive in a way. What Maynard Smith and Szathmary nicely do, I think, is problematize our assumption that what we tend to think of as the organismic level is privileged in some way and different in kind from higher and lower levels. That fits nicely with the spirit of my argument. However, you’re also right that the biological emphasis on reproductive mode speaks somewhat against my view. My response to that is threefold: (a.) to emphasize that to the extent we’re interested in *consciousness* rather than *organisms*, it’s unclear why reproductive mode should matter; (b.) to suggest that there is a sense in which nations do reproduce, by fission (though there isn’t a bottleneck through a germ line; sponges might be an interesting comparison case), and (c.) to contemplate another science fiction example that is meant to exploit ideas from the literature on group selection and major transitions. That sci-fi example didn’t make the cut for the official full-length paper in draft (which is already pretty long), but I’ve posted it on my blog under the heading “Group Minds on Ringworld”.
    http://schwitzsplinters.blogspot.com/2012/10/group-minds-on-ringworld.html

    On 3: My thought is that there are several ways out of the conclusion that the U.S. is conscious. For example, one could deny rabbit consciousness, or one could embrace eliminativism, neurochauvinism, or an anti-nesting principle, or one could refuse to take any of those routes and nonetheless still optimistically conjecture that there is some presently unknown but acceptable criterion that includes a broad range of plausible animal and alien cases but excludes the U.S. However, I think it’s reasonable to be wary of all of those responses. If we don’t take for granted that the U.S. is *not* conscious, and if we then apply typical materialist claims about how consciousness arises in the world in a straightforward way to the case, it seems like that straightforward application does result in the conclusion that the U.S. is conscious. For example, the U.S. seems to represent the world and also to represent its own representations in a sophisticated way. I don’t pretend to have a theory of consciousness that delivers the U.S.-consciousness conclusion straightaway, e.g., X is sufficient for consciousness, the U.S. has X, therefore the U.S. is conscious. I am a skeptic about theories of consciousness. So the best I can offer is what I think is a reasonable array of conditional credences.

    On 4: Yes, I see no reason, on standard materialist accounts of representation, to think that the U.S. is incapable of representing any proposition that an individual human is capable of representing. In a way, that’s a weaker claim than that the U.S. is conscious. Some (at least weak) version of this claim will be accepted by many of the authors who have been working recently on “collective intentionality”. What’s distinctive about my view is not that claim, but the leap to literal group consciousness.

  27. Eric: You say: ‘On a sufficiently liberal sense of “biology”, I see no Copernican violation. But a liberal sense of “biology” won’t deliver neurochauvinism.’ True. But biochauvinism may be more intuitively compelling than neurochauvinism, and on a sufficiently liberal sense of ‘same neural state’ may deliver something pretty close.

  28. I agree, Janet, that biochauvinism, on a liberal sense of “biology”, is harder to argue against than neurochauvinism, partly because there is no Copernican violation for the former. However, I do also think that on a liberal sense of “biology”, the U.S. is probably best conceived as biological, so I think biochauvinism is consistent with my thesis in a way that neurochauvinism is not.

  29. Eric, I thoroughly enjoyed your paper! Your discussion of Tononi’s Integrated Information Theory is very interesting and I’d like to ask you something about it. As you’ve clearly pointed out, Tononi’s “exclusion postulate” raises many troubles when we have to describe the hierarchy of conscious complexes. Nevertheless, when we just have to distinguish objects that are conscious from objects that are not, for IIT things are apparently clearer. In your example you talk about the US as potentially conscious. At a certain point you also say that the US integrates a lot of information because it is made of millions of individual brains that integrate information at an individual level. I believe IIT rejects this, and I’ll explain why with an example.

    One day I was stargazing and wondered: if my brain is conscious because it integrates information, now that it is interacting with Betelgeuse through my senses, why doesn’t the system “Betelgeuse plus my brain” (let’s call it BB) constitute a conscious system on its own? In this case the answer is trivial. It is true that the system BB generates more information than the two systems in isolation (for we add the information generated by the star to that generated by the brain). However, the whole system is not conscious because it is not integrated (even if interacting). IIT states that it is possible to isolate two (mid)partitions of the system for which effective information is null (at least in one direction). This pair of (mid)partitions is easy to find, for it’s the one that puts my brain on one side and the star on the other. While it is true that the activity of the star affects my brain, the activity of my brain doesn’t affect the star; therefore the system is not integrated.

    Now, couldn’t we apply the same reasoning to every pair of objects, even to a pair of brains? Indeed, even if they are interacting they are not integrated as a whole system. Tononi states that the internet is not conscious for the same reason: there is at least one way of partitioning it with null effective information between the parts. To me, this reasoning seems applicable to a system like the US as well. Since IIT is compatible with materialism but denies that the US is conscious, would it represent a counterexample to your argument?
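    A minimal toy sketch of the bipartition test just described may make it concrete: on this crude reading, a system fails to be integrated if some bipartition has no causal influence in at least one direction. This is not Tononi’s actual phi calculation (which perturbs parts with maximum-entropy noise and measures effective information in bits); the boolean network, node names, and update rules below are hypothetical stand-ins for the “Betelgeuse plus brain” case.

    ```python
    from itertools import product, combinations

    def step(state, update_rules):
        """One synchronous update of a toy boolean network.
        state: dict node -> 0/1; update_rules: dict node -> fn(state) -> 0/1."""
        return {node: rule(state) for node, rule in update_rules.items()}

    def influences(source_part, target_part, nodes, update_rules):
        """True if some change confined to source_part can alter the next state
        of target_part -- a crude stand-in for nonzero effective information
        in the source -> target direction."""
        others = [n for n in nodes if n not in source_part]
        for background in product([0, 1], repeat=len(others)):
            base = dict(zip(others, background))
            target_states = set()
            for source_vals in product([0, 1], repeat=len(source_part)):
                state = dict(base, **dict(zip(source_part, source_vals)))
                nxt = step(state, update_rules)
                target_states.add(tuple(nxt[n] for n in target_part))
            if len(target_states) > 1:
                return True
        return False

    def is_integrated(nodes, update_rules):
        """False if some bipartition has no influence in at least one direction
        (the 'Betelgeuse plus my brain' situation described above)."""
        nodes = list(nodes)
        for k in range(1, len(nodes)):
            for part_a in combinations(nodes, k):
                part_b = tuple(n for n in nodes if n not in part_a)
                if (not influences(part_a, part_b, nodes, update_rules) or
                        not influences(part_b, part_a, nodes, update_rules)):
                    return False
        return True

    # "Brain plus Betelgeuse": the star drives the brain, but nothing feeds back.
    rules_bb = {
        "star":  lambda s: s["star"],               # the star just persists
        "brain": lambda s: s["star"] ^ s["brain"],  # the brain responds to the star
    }
    print(is_integrated(["star", "brain"], rules_bb))  # False: not integrated

    # Two mutually coupled nodes: integrated in this crude sense.
    rules_loop = {"a": lambda s: s["b"], "b": lambda s: s["a"]}
    print(is_integrated(["a", "b"], rules_loop))       # True
    ```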

  30. Thanks for the thoughtful comment, Matteo! My discussion of Tononi in my online draft was based on the draft version of his 2012 “updated account”. I have just reviewed his published version and its discussion of exclusion is somewhat different from the draft version, though I think my objection does still apply.

    I agree about stargazing. But I don’t think that those remarks apply to the internet case or the U.S. case. Where does Tononi say that there is “null effective information” between the parts of the internet? Looking at the published version of his 2012, he says “the internet is obviously integrated” (p. 58). He says that the internet, however, is not *maximally* integrated, and that interactions can be “reduced to independent components” (p. 59). He also says that there is some weak exchange of information when two people have a conversation (p. 68), but the two-person system doesn’t count as an entity or “complex” because it has parts that are *more* integrated (higher phi) than the larger system. So I think the “more” and “maximal” are doing a lot of work in this updated view — more work than in his 2004-2008 view. There is information integration, on his model, at lots of levels and spatial and temporal grains. This would produce a plethora of nested or overlapping consciousnesses without some constraint added. And the constraint he chooses is that consciousness only occurs at the level and scale with the maximum integration of information — as we see in the exclusion postulate and the new definition of a “complex”. Consequently, there are the two bizarre-seeming consequences I discuss in my paper: loss of consciousness by invasion of your brain by tiny aliens with higher-phi (adapting Block’s objection to Putnam) and loss of consciousness by integration at the polity level exceeding that at the individual level, e.g., through voting. (In neither case are we to imagine any functional or introspective report difference in the individual.)

    Now it’s hard to know exactly what phi we should expect to emerge at the polity level through an election or through Facebook or whatever. Phi is hard to calculate or predict for large, complex systems. In conversation with me, Tononi expressed confidence that no existing process at the group level has a phi in the ballpark of the main conscious complex of the human brain, but it’s hard to know what grounds Tononi’s confidence on this point.

    But even if we accept Tononi’s confidence here, it seems that at least in principle we could organize a society in a way that produces a phi greater than that of the individual person, at some spatio-temporal grain. The bizarre-seeming consequence would then be that every individual would become unconscious. The bizarreness of this result seems especially evident given the yes/no nature of “maximum” and “greater than”. At t0, I’m conscious. At t1, by adding one more bit of information integration to the system, possibly spatially remote from me, I’m suddenly non-conscious because the polity to which I belong now has higher phi; but the difference might be locally invisible to me and have only a minimal effect on my behavior. (See the toy sketch after this comment.)

    I’d be interested to hear more from you on this matter, Matteo. Am I misunderstanding Tononi? Does IIT have a good reply that I’m not seeing?
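    A toy numerical illustration of the winner-take-all point just described, using entirely hypothetical phi values: on this reading of the exclusion postulate, only the complex with maximal phi counts as conscious, so a marginal increase in integration at the polity level flips every individual to “unconscious” with no local change.

    ```python
    def conscious_complexes(phi):
        """On a crude reading of the exclusion postulate, only the complex(es)
        with maximal phi count as conscious."""
        top = max(phi.values())
        return [c for c, p in phi.items() if p == top]

    # Hypothetical phi values, chosen only to illustrate the discontinuity.
    phi_t0 = {"individual brain": 10.0, "polity": 9.9}
    phi_t1 = {"individual brain": 10.0, "polity": 10.1}  # one more bit of integration

    print(conscious_complexes(phi_t0))  # ['individual brain']
    print(conscious_complexes(phi_t1))  # ['polity'] -- the individual drops out
    ```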

  31. Hi everyone, a lot of interesting themes in here! Before getting to those developments let me address Eric’s comment #24.

    I do agree that the kinds of things Eric says seem possible but there is a question about whether they are actually possible (in the relevant sense). At this point what we have is evidence that is ‘suggestive’ but that doesn’t seem to be enough to ground an argument against any kind of identity theory (since, as before, it more likely indicates that one already rejects the identity theory). So, again, all we seem to have at this point is intuitions without any real arguments.

    You say,

    even if neither of these possibilities works, your position, if I understand correctly, commits to the view that the *only* way for behavior intelligent enough to be linguistically interpretable to evolve would require a close analog to human biology. That seems a pretty strong claim. Maybe you could say more about what reason there is to think that claim is true? (I mean this as a serious question, not a rhetorical question.)

    I didn’t mean to be saying that this is the only possible way, but rather that it might be the only *actual* way. That seems like a less strong claim but I agree that it is still pretty strong, so what arguments are there to believe it? I actually think that there are at least two arguments in the area (and here I focus on consciousness rather than life since life seems like a straight-forward functional thing).

    One is the 2D a priori argument that I (and others like Keith Frankish though I don’t think he and I are after the same result) have been developing (for the most recent version see: http://philpapers.org/rec/BROTTA-6 ). The other is the kind of argument that Ned has been developing. On his view we posit identities here because they license greater explanatory power. So, positing an identity between visual conscious experience and a certain neural activity (or whatever) allows us to explain psychological results in a way that would be impossible without the identity. As I see things either of these arguments gives us good reason to be confident that consciousness just is something neural. How confident? I’m not sure but at least as confident as we are that it could arise in functional non-brained isomorphs.

  32. Eric, I’d be interested to know your reaction to the following reason for thinking that while possibility is, of course, open, a probability claim is premature.

    We have complex neural systems that do not, as far as we have any reason to believe, result in consciousness — I have in mind systems that control perspiration, digestion, heart rate. And, of course, we have neural systems for which we have good reason to think that they do give rise to consciousness. So, it would seem to make sense to try to identify a property that’s common to all of the latter neural events, and missing from all of the former. Call that P1. Then there would be some good reason to think that entities that lack P1 lack consciousness, and entities that have P1 do have consciousness. (That would hold for materialists and dualists alike, although their choice of “causes” vs. “is” would be different.)

    The question whether the US is conscious would then be the question whether it has P1. We don’t know what P1 is (yet), so we’re in no position to evaluate the probability that the US (or ant heads) have it.

  33. Thanks for following up, Richard! I’m not sure what I think yet about the 2D argument against dualism. My general inclination is to be liberal about metaphysical possibility but to think that metaphysical possibility doesn’t really buy very much with respect to what we really ought to care about. (I recognize that that is a somewhat awkward view for me to take, given my interest in metaphysics.)

    On the explanatory power view, it seems to me that you can get all the explanatory power you should want by committing to the view that it is nomologically (or even metaphysically) necessary for vision to be neural *in humans*, if it is to be conscious. It’s the move from that to the nomological necessity of neurons for consciousness in *all* actual types of beings that I’m still inclined to regard as unjustified and un-Copernican, especially if the universe is large or maybe even infinite. I have no particular beef against species-constrained identity claims (e.g., David Lewis). Cautious scientists and neuroscientists do often constrain their claims about neural correlates of consciousness to the species or to a range of near species, rather than going entirely universal; and it doesn’t seem that their empirical explanatory projects suffer as a result. In fact, it seems exactly right for them to be cautious in that way.

  34. Hi Bill, that’s a very nice comparison point. I myself wouldn’t entirely rule out consciousness in, e.g., the clump of neurons in the gut. If we’re going to allow individual insects to be conscious, maybe there’s not that much of a principled difference once we set aside our squeamishness and morphological prejudices. However, I’m not committed to that by any means.

    I’m entirely fine with a skeptical conclusion. My own inclinations about theories of consciousness are highly skeptical (e.g., Chapter 6 of my 2011 book), and this paper grew out of a larger skeptical project about the metaphysics of mind (“The Crazyist Metaphysics of Mind”, in draft). However, that said, I think it’s also reasonable to take a low-confidence best guess on matters that we care about. And group consciousness is a matter we should care about, I think, once our minds are open enough to take the possibility seriously. So what should ground our best guess?

    Well, if we’re going to go materialist, it seems like the best guess should be grounded in what mainstream materialists say about the conditions under which consciousness arises; and mainstream materialists seem mostly to invoke conditions for consciousness that the U.S. meets. Does the clump of neurons in my gut also meet those same conditions? Well, maybe! But arguably it is less sophisticated and flexible in its environmental responsiveness than is the U.S. (e.g., it has a pretty limited repertoire of actions), and it is less linguistic than the U.S., so a materialist who leans on those sorts of criteria might be warranted in attributing consciousness to the U.S. but not the gut-brain.

  35. Dear Eric,
    If you are allowing counterintuitivity, there may be a simple resolution here. Let me suggest that your thesis is that the United States is probably conscious **in the sense that human beings are**. Assuming a ‘something it is like’ sense, what is that sense? Maybe it is that the material or functional domain is associated with an instance of phenomenality. The United States fits that too. So what are the hidden problem assumptions? I think there are two: firstly, that the instance of phenomenality belongs to the ‘whole organism’; secondly, that there is only one instance.

    The first assumption is not so widely held. Many people from Descartes to Derek Parfit, including, I believe, Andy Clark, have been happy that phenomenality (vs. mind in a wide sense) rests with a small domain in the brain. Clark has indeed talked of adequate ‘bandwidth’, which I will come back to. If the immediate material or functional substrate of human consciousness is only 0.1% of the total then, if we just go with Obama’s phenomenality, the US is not so different.

    The second assumption is more rarely questioned. Many say they know there is only one instance of phenomenality in their body. Yet we have no scientific grounds. And we need not expect multiple instances to be fed information about each other. We are aware of virtually nothing about our own brains. So how could we know? Most signals in the brain are sent to about 10,000 places so why not at least that number of phenomenalities? Cognitive scientists may throw their hands up in horror (despite lack of evidence) but there seems no excuse for rigorous philosophers to do so.

    What I think puzzles us about human experience is not actually that it is like something. Why shouldn’t a ball hit by a bat ‘feel a blip’? The puzzle of our phenomenality is its richness. As Clark says, it needs bandwidth. (I am not sure where synchrony or parallel come in.) Taking bandwidth as ‘rate of signal input’ I would guess 10-40,000 (on or off) signals that are not only co-temporal, but potentially of differing significance (far better than serial bandwidth) every 10 milliseconds would do a reasonable job.

    That is roughly the ‘bandwidth’ of a pyramidal cell input and in brains **only** cells have bandwidth in this sense. So I think the material substrates for human phenomenal experiences are the dendritic trees of each of a large number of individual neurons. It seems odd to have lots of simultaneous phenomenalities but it resolves all our difficulties. Fading qualia are dealt with, and so on. What is special about neurons (need not be uniquely biological) is then that they have ~10,000 independent inputs to single energy-bearing modes. Also, those inputs carry the fruits of the complex computations of another few billion neurons, providing a narrative about a ‘me in a world’. There are some boundary problem loose ends but much less than in most models.

    So I would suggest that your thesis is entirely reasonable, in fact it is much more than probable in the only sense we should want to mean. But that may not be the sense that has been assumed. I would be interested in your thoughts.

    Jonathan Edwards
    (All relevant sources in material at http://www.ucl.ac.uk/jonathan-edwards)

  36. Hi Eric,
    Can’t the anti-nesting principle be defended by appeal to anti-epiphenomenalism? If conscious subjects are causally efficacious, and in particular in an agentive way, then it would require a huge coincidence for the intentions of the lower level subjects not to conflict with the intentions of the overlapping higher level subject. Nesting will require at least two subjects both controlling the same physical area. The United States, if conscious and intentional and not powerless, would agentively control our bodies or our brains/neurons, in the same sense that we do. One could either say that this situation is incoherent and a reductio of nesting, or that it makes it empirically implausible, because we should be able to sometimes detect the United States competing with us for determining action, but we never do.

  37. (I am typing this on my phone as I walk to the subway so I hope that autocorrect doesn’t mess with me!)

    I am interested in what you might say about the 2D argument against dualism so if any thoughts pop up please share them! But the general point that I really want to make by invoking this is that this kind of argument is at least as persuasive as a priori arguments on the other side. So if these kinds of possibilities don’t show us what we should care about then neither do the kinds of possibilities that you point to (ant heads, ring worlds, etc).

    Re species-specific identities: I agree that this is an option; my only complaint was that this option only seems necessary if one is trying to make room for the kind of possibilities you invoke, but since those shouldn’t dictate what we ought to care about why should we think that these identities are merely species specific? Indeed, I tend to find less caution in the scientific literature than you do. For instance neural synchrony is found in all brained species on Earth and those like Wolf Singer have no problem using synchrony found in, say, cats as evidence that we ought to find the same thing in humans. It seems to me that constraining it to species precludes this kind of explanatory power. I mean do we adopt ‘environment specific identities’ for water? Do we say that water is H2O just on Earth? And that on Mars (or the Crab Nebula, or whatever) it is XYZ? No we don’t, and the reason seems to be that we just don’t have any good evidence that there may be multiple types of water (let’s leave aside so-called heavy water which raises difficulties that I don’t think are relevant here).

    But again, I really think these are empirical issues. IF we find the kinds of systems that you think are possible, then I’ll be chagrined. Until then, why should I worry about what you can imagine?

  38. Thank you very much for your reply, Eric; now I see your ideas more clearly. You are perfectly right about the internet; I expressed myself badly. For Tononi the internet is integrated; what I had in mind was that it is not integrated enough to generate (relevant levels of) consciousness. In fact, its structure is similar to the cerebellar cortex: modular and decomposable into independent parts. In a certain sense I was trying to say what Tononi said in conversation with you. Apparently, for IIT, we don’t know of any existing system in the whole Universe that has a PHI higher than the main complex of a human brain. Obviously such a system is not inconceivable, but we should wonder if it is actually possible: we should model it, calculate its PHI, and evaluate whether any physical or organizational constraint excludes its existence. It would be such a revealing analysis, I think!
    However, apart from the issue of *maximality*, I think the issue of *minimality* is interesting, and maybe even more relevant to your argument. Tononi seems to embrace a form of (quasi-)panpsychism, which states that all and only the systems generating PHI are conscious, but he also says that there is a threshold for being conscious in the sense we know. For instance, even if in deep sleep (or anesthesia, coma, etc.) our brain is still integrating information, PHI is reduced so low that for us consciousness disappears completely. I think there is a way for IIT to (partially) disprove your hypothesis, but in order to do that it would have: (1) to prove that there is a minimum level of PHI which constitutes the threshold of consciousness as we experience it, and to quantify it; (2) to prove that the US and other collective systems have a PHI lower than this threshold. I say this would be a partial disproof because obviously the US could still be considered conscious (because its PHI is not null), but not in a “relevant” sense similar to the one we apply to human beings.
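    Just to make the shape of that two-step test concrete, here is a toy sketch in Python. Everything in it is hypothetical: the threshold value, the PHI estimates, and even the assumption that a single scalar PHI could be estimated for systems like these at all; none of the numbers come from Tononi or from any real IIT calculation.

```python
# Toy sketch of the two-step "partial disproof" strategy described above.
# All numbers are invented placeholders, not real IIT results.

PHI_THRESHOLD = 10.0  # step (1): a hypothetical minimum PHI for consciousness "as we experience it"

def relevantly_conscious(phi_estimate: float) -> bool:
    """Step (2): compare a system's estimated PHI against the hypothetical threshold."""
    return phi_estimate >= PHI_THRESHOLD

# Invented example values, for illustration only.
for name, phi in [("United States", 0.3), ("awake human brain", 12.0)]:
    verdict = ("conscious in the relevant sense" if relevantly_conscious(phi)
               else "conscious at most in a weak, sub-threshold sense")
    print(f"{name}: PHI ~ {phi} -> {verdict}")
```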

  39. Jonathan: Thanks for the interesting comments and the links! I like how you fearlessly cast aside “common sense”. It’s an interesting thought that the phenomenality of the U.S. might reside in a single person’s brain, e.g., Obama’s. That thought opens up a convenient possibility for my argument. However, I’m disinclined to think that any one single person’s brain would house the phenomenality of the U.S., if there’s a tight connection between phenomenality and behavior, since the behavior of the U.S. seems to arise in a somewhat distributed way from the compromise of many people’s opinions and preferences.

    I also agree that people are probably too quick to dismiss the possibility that there is more than one conscious stream in the human body. Once we are sufficiently willing to jettison common sense as a guide, the weird possibilities seem to just keep opening up.

    But I think we will diverge in that I am skeptical about general theories of consciousness, whereas you seem to have more confident positive views, yes?

  40. Hedda: That’s an interesting argument, but I guess I’m not too worried at this point about causal exclusion arguments, at least not without further clarification. Causal exclusion arguments seem to cause trouble all over metaphysics, e.g., how could it be true that the baseball breaks the window if its leading edge breaks the window? How could mental states cause behavior if neural states also cause behavior? There are a variety of possible answers; presumably *something* works out.

    Consider also the human brain: Individual neuron firings cause things to happen; so also do person-level states; these effects are sometimes within the same area. But no suspiciously huge coincidence is necessary.

    Also, I do think the attitudes of individuals and the attitudes of the U.S. as a whole might sometimes diverge. For example, there are List & Pettit type cases in which the group might believe P & Q & R even if no individual member of the group believes all the conjuncts. Another example might be the U.S. being angry about something, as revealed by its public statements, with (at least in principle) no individual person (e.g., not the President or his spokesperson) being angry about that thing at a personal level….

  41. Richard: My concern is not with weird thought experiments per se but rather with modal strength. Mere conceptual (or “metaphysical”?) possibilities seem to me typically to reveal more about our concepts (plus logic and math, if that’s different) than about the mind-independent world; nomological possibilities are more interesting; and actuality is more interesting still. So my reaction to Chalmers’s zombies is something like: Okay, maybe they are possible, but meh; if phenomenal and physical properties co-occur in every nomologically possible world, that’s materialism enough for me. But I don’t want entirely to commit to this view. I can’t seem to quite work out the wrinkles in it, which is quite possibly a bad sign! But that’s the kind of bias or inclination that serves as part of my background reaction to certain metaphysical thought experiments.

    So it’s important to me that my weird cases be nomologically possible. And in fact, I have recently been energized by the thought that the universe might be infinite and/or there might be an infinite number of actual universes. This seems to me very much a live epistemic option right now in cosmology. And in such extremely large cosmologies, nomological possibilities will tend to be actualized. Therefore, I consider it a p > .05 epistemic possibility that creatures relevantly similar to my antheads or (in the full paper) supersquids or (on my blog) Ringworld nations are *actual*. And that’s why I think it would be a mistake to leave them out of our metaphysics. I don’t think we need to wait for actual alien contact.

    We do adopt environment-specific identities *sometimes*, e.g., (per Lewis) “the winning lottery number is ___”. Can we do it in biology? “The photosynthetic process in species X is ____”, “The female reproductive organ in species Y is _____”. Why not treat mind-brain identities similarly? To be clear, I’m not committing to that view. But if you’re drawn to an identity view, a species-specific version seems like a reasonable way to go, in keeping with our epistemic limitations, and it doesn’t produce awkward un-Copernican-seeming commitments.

  42. Dear Eric,
    Sorry, I think I overcompressed some arguments. I was not suggesting that ‘the phenomenality of the USA might reside in one brain’. I was trying to counter the objection to ‘US consciousness’ that the phenomenality we do find in the USA, that of ‘individual people’, is too localized to count. My case was that phenomenality in human beings may be just as localized, so ‘not filling the whole’ is no objection. I admit that the argument may seem forced until we deal with the second issue, multiplicity, which I see as much more interesting.

    To be clear, I am not suggesting that there is more than one narrative going on in a brain. I am happy with Bernard Baars’s idea that some selected interpreted data are fed into a ‘workspace’ in which the same narrative is broadcast to a large number of processing sites (i.e. cells). The difference is that I place a phenomenality in each cellular member of the ‘audience’, as seems to fit the metaphor, rather than where the ‘actor is spotlighted on the stage’. One narrative of me-in-a-world has lots of ‘listeners’.

    I actually disagree with Russell about common sense, despite being British. For me ‘English common sense’ is the indispensable skill of sorting out what works when things get confusing, rather than the instinct of the common man. What I like to throw out is the dumbed-down 20th-century mish-mash of naïve intuition and garbled popular science. The idea of multiple phenomenalities is weird in terms of self-image, but scientifically it is perfectly reasonable – maybe the obvious option.

    I applaud a general skepticism about theories of consciousness because most of them are, I believe, utter rubbish. However, as far as I can see there is a satisfactory alternative that both follows seamlessly from Leibniz and Whitehead and is bog-standard physics and neurobiology. (There are 300 pages on my site for anyone who has the interest.) I like William James’s description of it: ‘Speculative minds alone will take an interest in it; … That [its] career may be a successful one must be admitted as a possibility – a theory which Leibnitz, Herbart, and Lotze have taken under their protection must have some sort of a destiny.’

    I particularly like your conscious USA because it suggests that countries and persons do seem rather similar and that you can have a purposeful biological unit with phenomenality peppered around in little bundles without any chaos. As you point out, the idea that there is a problem with having no central agent is probably of little or no weight. There may be a close relation between phenomenality and behaviour but it may be all over the place in smidgins, as in an army, or Apple or Walmart.

  43. Thanks for following up, Jonathan.

    On paragraph 1: Yes, I see your point better now, and I think I agree with it.

    On paragraph 2: That’s a fun idea. I have to admit that I’m not familiar with your arguments for this, but I’ve flagged them now for future examination. My fairly confident guess is that I will remain skeptical, but I stand ready to at least tweak my credence distribution!

    On paragraph 3: Yes, I agree that “common sense” can refer to either thing, or something kind of between those two, and that the former kind of common sense has substantial epistemic merit.

    On paragraph 4: Leibniz is pretty far out! But part of my general aim — also in my companion essay, “The Crazyist Metaphysics of Mind” — is to make the case that we should be more open to bizarre-seeming possibilities about consciousness, including possibilities discussed by the philosophers of previous centuries, than most mainstream philosophers of mind seem presently to be.

  44. Leibniz far out? Huh!! Anyone who suggests that, at least without a suitable rider about the need for more broad-mindedness, must still be in metaphysical short pants! Only kidding, but as someone who has, in 30 years in the lab, found nature not vindictive but a pretty teasing sort of taskmaster, I have come to think that if only Leibniz had explained a bit more of what he meant, we would all see how he was, and still is, five tricks ahead – up there with nature. As William Seager nicely understated it, he was no dimwit. (He did make one mistake though.)

  45. Hi Eric, thanks very much for your response, it is very helpful. I think I see more clearly where our disagreement lies.

    Re possibilities: I really can’t see why you take those possibilities to be nomological. The kinds of things you appeal to seem to be epistemic in nature, and so it seems to me that you are appealing to epistemic possibility (what could be true for all we know). Whether those (apparent) epistemic possibilities point to nomological ones is exactly what is at issue! So I am not sure how you get from one to the other without begging the question. And just for the record, epistemic possibility is what I also appeal to in the 2D argument against dualism. No metaphysical thought experiments here! (I do try to show that once you get the epistemic possibility you can get to the metaphysical ones via a separate argument about the necessity of identities, *not* via a thought experiment.) And by the way, the zombie world is nomologically possible (at least if the laws in question are just the physical ones that we are familiar with now).

    Re species-specific identities: As I said, I think that this is an option and in the long run it might be right, but we need not retreat to it unless there is good reason to do so. What we have now is people appealing to intuitions about what could be true, and maybe it is true, but maybe it isn’t. Notice that in the original Twin Earth thought experiment, Twin Earth was supposed to be in our galaxy, not in some other possible world. And if we discovered such a place, it would cause us to retreat to the kind of identity we are talking about; but *that someone finds it intuitively plausible* that Twin Earth could actually exist doesn’t cause us to wonder if we are being brash in holding that all water is H2O. So I agree that *sometimes* this is the right response, but in actual practice this response is invoked when there is more evidence than merely what seems intuitively plausible about what could be the case. Usually we need more of a reason than that to posit these things, so why should this case be any different?

  46. Richard: It seems to me that our current best scientific evidence suggests that no compound with a composition different from H2O could have all the macroscopic properties of water. It also seems to me that our current best scientific evidence suggests that systems could evolve to produce highly intelligent-seeming, linguistically interpretable behavior, including very sophisticated tracking of their own internal cognitive conditions, though composed differently from us. That’s why I think we need to make room in our metaphysics for weird aliens but we needn’t worry too much about XYZ. (That said, I’m not opposed to local identities for “water”, either, if it comes to that.) I admit that it’s a conjectural extrapolation, though! Does our disagreement boil down to differing judgments about extrapolations concerning nomologically possible evolutionary paths to high complexity? Somehow that seems an odd place to end up!

  47. Eric, thanks for your response. A quick follow up:
    So take a system of, say, 80bn units that is capable of forming 100tn connections among themselves. This system, in principle, has at least the capacity to be conscious, right? (Well, the brain is just this kind of system, and that does it.)
    But it’s a much stronger claim to say that any system of 80bn units that is capable of forming 100tn connections among themselves IS IN FACT conscious (right now).
    So there is a difference between being in fact conscious and having the capacity of being conscious.
    And I wonder whether your arguments warrant only the ‘capable of being conscious’ claim about the US and not the ‘is in fact conscious right now’ claim. (Bracketing the conditional and the ‘probably’ for now.)
    Thanks again, Bence

  48. Bence: Right. I hope my arguments license at least the capacity claim. But I don’t see why we shouldn’t also draw the factuality conclusion. I can phrase this as a challenge: What’s missing?

    I don’t think it’s impossible that there’s something missing. But so far it seems to me like it’s hard to find a good answer to that question, if one is also committed to rabbit consciousness and the denial of neurochauvinism.

  49. Hi all, many of these ideas have been discussed by others in the comments, but I’d like to add my own critical review to this thread if I may.

    I have no problem with the statement that biology is not necessary for consciousness, nor do I argue with the assertion that the United States is homeostatic on theoretical grounds (clearly, the USA is decidedly not homeostatic when it comes to all sorts of foreign oil dependencies, but that’s another argument). But I do take issue with the statement that morphological prejudice is unjustified. It has nothing to do with the number of neurons: I will not argue that an ant is or is not conscious based on the number of neurons it possesses. I will not even argue about the relative consciousness level of Aplysia, with its 20,000 neurons, or of C. elegans, with its 302 neurons and approximately 5,000 synapses. But saying that 1/300th of the US population viewing online videos equates to 10^12 bits per second seems a little overzealous. Does it not matter that the information coming from all of our photoreceptors typically comes from a single information stream? From the Helmholtzian view of perception as unconscious inference to current Bayesian theories of perception, the brain’s entire function is to determine the most likely structure of the world that gave rise to the incoming sensory input. In this way, millions of people watching millions of different videos doesn’t seem to fit the bill. Further, the individual photoreceptors will pass their respective bits of information on to the bipolar, amacrine, and ganglion cells in the retina, and then the information goes on to the LGN, V1, and so on. There is no such iterative information processing present for all those people sitting in their houses watching internet videos, no matter how one looks at it. Even when one stretches to consider governmental decrees to be “feedback” from “higher-order areas”, the analogy simply falls apart. So the issue isn’t the number of responding units, or even the exponential connectivity.
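    For concreteness, that 10^12 figure presumably comes from back-of-envelope arithmetic along the following lines, where the per-viewer bitrate of roughly 10^6 bits per second is a guessed illustration rather than a number taken from the paper:

    \[
    \underbrace{\frac{3\times10^{8}}{300}}_{\approx\,10^{6}\ \text{viewers}} \times \underbrace{10^{6}\ \text{bits/s}}_{\text{per video stream}} \;\approx\; 10^{12}\ \text{bits/s}.
    \]

    My complaint is not with that multiplication but with what it is taken to show.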

    The three “ways out” are really not convincing, either. Of course we can avoid the conclusion that the United States is conscious by saying that nothing is conscious, or that consciousness is rare or requires language, or that consciousness requires neurons. These are easy arguments to dismiss. I think where this argument really breaks down is the statement that there “seems to be no principled reason to deny entityhood, or entityhood-enough, to spatially distributed but informationally integrated beings. So the United States is at least a *candidate* for the literal possession of real psychological states, including consciousness” (p. 14, short paper). This is really the crux of the problem: the US isn’t informationally integrated! Sure, there is integration, but the amount and nature of that integration in no way approaches that of even the simplest neural architectures. Where is the hierarchy, the order, the circuits capable of transducing environmental properties into bits of information to be passed around and processed?

    In reading the paper I was reminded of two works of science fiction, both of which really fall under the umbrella of information integration. The first is Olaf Stapledon’s “Star Maker”, which discusses the type of consciousness possessed by the Antarean antheads, and which does not at all seem unreasonable: flocks of birds and swarms of insects regularly achieve awareness in his work. The second is Robert Heinlein’s “The Moon Is a Harsh Mistress”, which again posits that anything integrated enough can become conscious. But the United States lacks the connectivity and integration that makes these two fictional but “weird” types of consciousness so easy to swallow. It seems easy enough to believe that information must be integrated in order to lead to conscious states, feeling, or qualia, and the United States simply lacks that integration. At least, for now!

  50. Megan P: Thanks for the very thoughtful and informed reply!

    First, on your science fiction suggestions. I’ve read both of them, and I find Stapledon in particular very interesting. I reviewed Star Maker in the course of writing the essay, but ultimately I didn’t end up finding it as good for my purposes as Vinge. I can’t seem to reconstruct why, though, right now. Maybe he wasn’t quite as explicit as Vinge about group-level phenomenology. Also, I don’t think he was working with materialism as a background commitment….

    Your key point, I think, is this: “But saying that 1/300th of the US population viewing online videos equates to 10^12 bits per second seems a little overzealous. Does it not matter that the information coming from all of our photoreceptors typically comes from a single information stream? From the Helmholtzian view of perception as unconscious inference to current Bayesian theories of perception, the brain’s entire function is to determine the most likely structure of the world that gave rise to the incoming sensory input. In this way, millions of people watching millions of different videos doesn’t seem to fit the bill.”

    I think this is among the most promising directions to push against my view. It would be interesting to see it developed more fully. To push back a bit, though:

    (a.) What qualifies as a single information stream? The information that comes into my eye from my computer screen might not be very well related to the information coming into my eye from my (physical) desktop. In what sense is that one stream? On the other hand, the information coming to the U.S. populace through the New York Times and YouTube might qualify in some sense as a single stream. So if a lot is going to hang on the individuation of information streams, this will have to be handled carefully! (Maybe you’ve thought about this already; there’s only so much one can pack into a comment, after all.)

    (b.) It’s a pretty substantial commitment to tie consciousness essentially to incoming sensory input. It’s not unheard of, of course. But it seems worth noting that if you’re going to land hard on this point, it might be hard to allow for purely introspective beings (which at least *seem* possible) or, perhaps more realistically, conscious beings with only relatively impoverished sensory input but excellent introspection and a priori reasoning.

    (c.) It seems an overstatement to say that “the brain’s entire function is to determine the most likely structure of the world that gave rise to the incoming sensory input”. Among its other functions are helping to regulate the sympathetic and parasympathetic nervous systems, regulating hormones, guiding motor output, figuring out what’s going on in non-sensorily-available parts of the world, and tracking internal cognitive and emotional conditions. No? So *some* of the brain’s processing functions to determine the most likely source of sensory input, but lots of brain processing does not have that function. Similarly *some* of the informational processing in the U.S. is directed toward determining what’s going on in sensorily or quasi-sensorily available parts of the world (e.g., via spy satellites, army scouts, newspaper reporters maybe, weather stations, astronomical observatories…), and lots of the information exchange does not have that function. Probably the ratio of processing dedicated to sensory functions is higher in the brain than in the U.S., but it’s not clear what would justify treating the ratio of sensory vs. non-sensory processing as the crucial determinant of whether consciousness is present.

    I’d be interested to hear more, though!

  51. Dear Megan and Eric,
    As a biologist I would say Eric is being modest about pushing back on Megan’s objections. I doubt they can survive, but I do think they can take the argument forward.

    The first issue is that of a single information stream. The McGurk effect, or just binocular vision, indicates multiple streams. We handle streams from many inferred things at once – coffee smell and keyboard. We handle an indeterminate number of sources, as when reading ‘children are taller now’. But, Megan may well say, these are not quite the point, and I agree. The other argument is that the USA’s complexity of integration is not up to that of C. elegans. That does not seem right, since the USA has far more than 302 units with 5,000 interaction points and, after all, its integration involves humans, who are complex integrators themselves. One can argue, but it is hard to see any qualitative (as opposed to quantitative) issues here.

    So my suggestion is that what makes the USA different from a person is that it has *no single Baarsian workspace*. A Baarsian workspace is not an input stream from sense organs but a later bottleneck that restricts input to a computational subsystem, such that all the many units in the subsystem receive signals from only a single flow. A single Baarsian workspace does not entail a single instance of phenomenality (even if this is often assumed), so the question now is not about a unique ‘something it is like’ consciousness, but about a unique Baarsian consciousness (BC).

    On this basis the USA could well be rejected as conscious for lack of a unique BC, because it has many. Every media organ from Fox to NYT, plus its consumers, would be a broadcasting BC. I agree that physical ‘contiguity’ is irrelevant as long as we have local causal interaction; radio waves will do. The only difficulty might be if we were forced to admit that a human brain has several BC systems too. Dorsal and ventral streams, maybe? I actually think the case for a single BC is strong, but I worry that focusing on this distracts from the issue of understanding consciousness. To just say ‘Eric was wrong after all, thank goodness’ would be to miss the point that Chalmersian and Baarsian consciousness are quite different concepts, and so just to say ‘that is conscious, the USA is not’ is seriously ambiguous. It would also be a bit odd to make consciousness a limiting property. If a second radio station were to set up in a dictatorship that had only one, would this mean that the reception area suddenly stopped being conscious? (Shades of IIT?)

  52. Hi Eric, it doesn’t seem such a strange place to end up to me! In fact it seems to me that all debates end up at this point. Only empirical work can settle this issue. As with most things, I can’t escape the feeling that which way you extrapolate will depend on whether or not you already lean towards ‘neurochauvinism’ (not a nice name, by the way; I prefer ‘biologism’) or politically correct functionalism, and so I have a ‘wait and see’ attitude. The only point I am trying to make here is that we haven’t, not by a long shot, ruled out this kind of view.

    On a related note, in response to Megan P: this should be obvious, but saying that a position is easily dismissed is not the same thing as giving substantive reasons for its dismissal (which you don’t do). The kind of ‘biologism’ that I am attracted to is a live epistemic option. The only argument against it is that it discriminates against fictional entities which may or may not be actual… not a very convincing argument unless you belong to the choir; everything else is just politics, bullying, and name calling, none of which instantiates any intellectual virtues that I am familiar with.

  53. Hi Eric and Jonathan,

    Thanks so much for your thoughtful replies! I’m somewhat of a novice when it comes to philosophical arguments, though, so you’ll have to bear with me as I attempt to counter.

    Eric, I think you’re right that I was a bit overzealous in saying that the brain’s *entire function* is to deal with sensory input. I suppose I’ve fallen into the trap of wearing blinders, as I study sensation and perception. The brain does indeed regulate sympathetic and parasympathetic systems, etc. But in terms of guiding motor output, I do think that’s related to sensation and making sense of it: Daniel Wolpert argues that that’s the whole reason we have a brain, after all, and that creatures that have no need for movement do away with brains altogether, like the sea squirt, which eats its own brain after finding a suitable rock to call home. I may be mis-paraphrasing him, but I think you get the point.

    Introspection and tracking internal emotions, and even reflecting on explicit (as opposed to implicit) memories, seem to fall under the umbrella of sensory processing too. In the rat hippocampal formation there are place and grid cells that refer to the rat’s place in space, so what’s to say that the human hippocampus doesn’t contain similar circuits to refer to the human’s place in space-time (and memory of previous trajectories)? These are not my ideas, just references from Tad Blair’s work.

    I would also like to say a bit about the McGurk effect and binocular rivalry, as Jonathan mentioned. You’re right that I would say that cases where cues are in conflict (whether within, between, or among modalities) aren’t really the point; actually, it seems like the brain’s way of dealing with conflicting information is to determine the most likely causal scenario giving rise to that sensory info, and then to evaluate/combine the information with reference to that determined probability (cf. Konrad Kording and Ladan Shams’s work on causal inference in multisensory and sensorimotor integration; there’s also some new work using Chinese restaurant processes in both rats (Yael Niv) and humans (Ulrik Bierholm) to model this causal inference process). So the question then becomes: how would such a process, which attempts to determine the structure of the world, deal with a billion different YouTube videos at once, none of which has any relation to the others? The process would simply break down, and not lead to any sort of useful conclusion. I think one could say that if every photoreceptor were made to fire randomly (similar to a bunch of people watching different YouTube videos), the representation of the input in V1 would end up looking an awful lot like white noise, or maybe just spontaneous activity. But I suppose this doesn’t preclude some sort of phenomenology…

    Anyway I’m getting off topic. I think your biggest issue, Eric, with my objections is my hang-up on sensory processing as a fundamental aspect of consciousness. I guess I don’t have an answer to that for now. I think Jonathan’s mention of the single Baarsian workspace is perhaps the best description of what I’ve been trying to say. One might argue that dorsal and ventral streams could indicate two in humans, but there is also evidence of lots of crosstalk between the two streams, and lots of feedback from higher order to lower order areas, as well as evidence of a multitude of connections between traditionally “independent” primary sensory cortices. This interconnectedness allows all parts of the brain to talk to all others, at least indirectly, and it is this unity or level of connection that I really think is missing from the United States as a potentially conscious entity. I’m not sure whether this is what is meant by Baarsian workspace, Jonathan, since I’m not intimately familiar with that theory, but it seems to be related, no?

    Thank you for your comments!

  54. Richard: Okay, that seems reasonable. I think maybe we’ve reached the point where we can each understand fairly well where the other is coming from but disagree about what the balance of considerations points to as the most reasonable conclusion. Usually, I feel pretty good about a philosophical discussion if that’s where we end up! Let’s push further on these issues sometime face-to-face. Too bad you won’t be at the SSPP this year!

  55. Megan and Jonathan:

    The Baarsian workspace idea is an interesting one, and I think another possibly fruitful direction to push back (or maybe approximately the same direction as Megan had in mind in the first place). A few reactions:

    (a.) My reading of Baars is that he intends his claims to be species specific rather than metaphysically general. But it’s not 100% clear, and if you have evidence otherwise, I’d be interested to hear. If I’m right, you’d be out-Baarsing Baars.

    (b.) Having things hinge on the Baarsian workspace does seem to have some odd implications, as Jonathan nicely brings out. It seems odd to suppose that establishing, say, a central monopoly news network would be the difference-maker. (Of course, on my view, *something* odd must be true — so maybe this?)

    (c.) This approach shares with other specific-architecture approaches the counterintuitive-seeming consequence that it denies consciousness to naturally evolved aliens built with a different architecture who nonetheless (*if* such aliens are possible! see my exchange with Richard Brown) emit very sophisticated, intelligent-seeming, and linguistically interpretable behavior, including seeming phenomenological self-reports of interior representational processes. Now maybe to have unified consciousness, self-report of interior representational processes needs to proceed through a single central workspace? I’m inclined to think not, but the idea might be worth pursuing.

    (d.) I’m not entirely convinced that there’s a radical difference in kind between the U.S. and the brain here. Regarding the brain, I am generally of the camp that sees cognitive processes as vastly interconnected, cross-modal, chaotically dynamic, and swapping around through different brain areas, rather than of the camp that sees several distinct streams that converge on a center. Some moderate view on these issues is probably right, but my own general reading of the evidence tilts against the one-workspace picture. If we end up with at best a kind of diffuse sense of what the “global workspace” is, then I wonder whether we might be able to think of some things — like the State of the Union address? like the debate about the “fiscal cliff” and sequester? — that get bandied about through a wide variety of media as in some sense being in the global workspace of the U.S. There is no one master neuron in the brain’s global workspace, so maybe too there need be no master media monopoly in the U.S. for the structure to be equivalent, as long as the same info is bouncing around available to the system as a whole. (I think this also connects to Megan’s YouTube video example.)
