The Biological Cost of Consciousness

Presenter: Bernard Baars, The Neuroscience Institute


  1. Thanks for a wonderful review and update on the brain mechanisms supporting consciousness.

    I have a question (one brought up by another party) about your pragmatic identification of consciousness with reportability, one which is very relevant to the issues of the minimally conscious state which you reviewed.

    If dreams (page 6 of the paper) represent genuine mental content which occurs when we are not normally conscious, and thus is seldom reportable, what additional mental contents, if any, do you think are potentially reportable, but not currently reportable?

  2. Dear drbilh,

    That one isn’t too hard. You wake people up at random moments when they show the classical signs of REM, i.e., regular, large eye movements (picked up from electrodes at the side of the head, because the eyes swing a big voltage), waking-like EEG, and skeletal muscle inhibition. Then you ask them what, if anything, they remember.

    A more interesting technique appears in a paper by Stephen LaBerge and Dement (see PubMed.gov), in which lucid dreamers were signaled with an auditory buzzer during dreams and responded with big eye movements. They were previously told to count to ten when they heard the buzzer, and then respond again with eye movements. The ten-second interval is a normal period for working memory. That works nicely with good lucid dreamers.
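    (To make the detection logic concrete, here is a minimal sketch of how such a deliberate left-right-left-right signal could be picked out of an EOG trace. All thresholds and the synthetic data are invented for illustration; this is not LaBerge's published analysis.)

```python
import numpy as np

def detect_lrlr_signal(eog_uv, fs=100, amp_uv=150.0, window_s=2.0, min_sweeps=4):
    """Flag a deliberate left-right-left-right eye signal: at least
    `min_sweeps` large, alternating EOG deflections within `window_s` seconds.
    Thresholds are invented for illustration, not LaBerge's analysis."""
    signs = np.where(eog_uv > amp_uv, 1, np.where(eog_uv < -amp_uv, -1, 0))
    # Keep one event per deflection: record a sample only when the
    # above-threshold sign differs from the previous deflection's sign,
    # so consecutive recorded events alternate by construction.
    events, last = [], 0
    for i, s in enumerate(signs):
        if s != 0 and s != last:
            events.append(i)
            last = s
    # Do any min_sweeps consecutive alternating events fit inside the window?
    for k in range(len(events) - min_sweeps + 1):
        if (events[k + min_sweeps - 1] - events[k]) / fs <= window_s:
            return events[k]  # sample index where the signal starts
    return None

# Synthetic example: noisy baseline with a 2 Hz, 300-microvolt signal at
# t = 5 s, giving four large alternating deflections in one second.
fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eog = 20 * rng.standard_normal(t.size)       # ~20 uV background noise
eog[5 * fs:6 * fs] += 300 * np.sin(2 * np.pi * 2 * t[:fs])
print(detect_lrlr_signal(eog, fs))           # -> index near 500
```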

    One odd finding is that mentation is reported even from slow-wave sleep. The content of reported SWS mentation appears to be less dramatic and visual than dreams. I need to double-check the findings, but it looks like some mental activity remains, probably during the peaks of the slow wave.

    Yes, the inference is that if you can interrupt a state like dreaming at random moments and get consistent responses, then mental events are potentially reportable in general. I believe that’s a reliable finding, and the state can be sampled often enough to make the inference statistically sound.
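    (A back-of-envelope illustration of that statistical point. The counts below are invented for illustration; they are not from any particular study.)

```python
# Repeated random awakenings give a simple binomial estimate of the
# rate at which mentation is reportable from a given sleep state.
from math import sqrt

n_awakenings = 120   # random REM awakenings, pooled across sleepers (assumed)
n_reports = 96       # awakenings that yielded a mentation report (assumed)

p = n_reports / n_awakenings
se = sqrt(p * (1 - p) / n_awakenings)    # normal-approximation standard error
print(f"report rate = {p:.2f}, 95% CI = ({p - 1.96*se:.2f}, {p + 1.96*se:.2f})")
# -> report rate = 0.80, 95% CI = (0.73, 0.87); a tight enough interval
#    to support the inference that REM mentation is generally reportable
```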

    bjb

  3. CONSCIOUSNESS: QED

    I don’t think Bernie has succeeded in explaining the biological function of consciousness — i.e., what is it for? what does it do? what could not get done without it, and why? He has simply reaffirmed that consciousness is indeed there, and correlated with a number of biological functions — inexplicably.

    The problem (a “hard” one) is always the same: why and how is some given biological function executed consciously rather than unconsciously? It is “easy” to explain why and how the function itself (seeing, attending, remembering, reporting, etc.) is biologically adaptive, but it is “hard” to explain how and why it is *conscious*, hence why and how it is biologically adaptive that it is conscious.

    In considering consciousness Bernie also makes the very widespread conflation between (1) the accessibility of information and (2) consciousness of the information. Information is just data, whether in a brain or in a radio, computer, or robot. To explain the function of the fact that information is accessible (hence reportable) is not to explain the function of the fact that the access is conscious access.

    What — besides accessibility — is the “mark” of information being conscious? The fact that it is *felt*: it feels like something to have access to some information. And it feels like nothing to have access to other information. The information to which a computer or robot has access, be it ever so useful to whatever it is that the computer or robot can or does do, is not conscious. It does not feel like anything to have access to that information. The same is true for the information to which our cerebellum has access when it keeps our balance upright, or the information to which our medulla has access when it keeps us breathing, or keeps our hearts beating, especially while we are in deep (delta) sleep. When we are awake, sometimes some of that information does become conscious, in that we feel it, and then usually some further functional flexibility is correlated with it too (including reportability). But the question remains: why and how are some states of informational access felt and some not, and what further functional benefit is conferred by the fact that the felt ones are felt? What is the causal function of the (unexplained) correlation?

    Limited resources are limited resources, and resource costs are just resource costs. The fact that our brains can have access to — and can process — only a limited amount of information and not more is not an explanation of why and how having and processing some of that information is felt. Access and processing limitations, in and of themselves, have nothing to do with consciousness — except that they are correlated with it, so far still inexplicably.

    That was the fact that was (and still is) to be explained.

    Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3. http://eprints.ecs.soton.ac.uk/22242/

    Harnad, S. (2011) Doing, Feeling, Meaning And Explaining. In: On the Human. http://eprints.ecs.soton.ac.uk/22243/

  4. Hi Stevan,

    That was not the purpose of this paper. I have written extensively about the functions of consciousness in my 1988 book, which is again available (for ten dollars) on Kindle, after being unavailable for some years.

    I’ll be happy to write another paper on the functions of consciousness. There’s plenty of evidence out there.

    Notice that as a major evolutionary phenomenon, the functions of consciousness are inherently somewhat inferential. We can make plausible inferences about the functions of the lungs, of bipedalism, and the like. The functions — i.e., the evolutionary accumulation of functions — of language are hotly disputed. The reason is of course that evolutionary functions are post hoc, and we don’t have a chance to run experiments on them.

    Nevertheless, the plausibly most basic functions of consciousness are fairly clear. Please see the 1988 book, and if there is a demand for it, I can update that.

    Best,

    B

  5. Hi Bernie, I just took my cue from your words: “Some… maintain that consciousness… has no biological function.” It seems to me that until one has explained the biological function of consciousness, one cannot explain its costs: I think you only addressed the costs of resource limitations, not consciousness.

    Chrs, S

  6. Here is Chapter 10, The Functions of Consciousness, from B.J. Baars (1988) A Cognitive Theory of Consciousness. Cambridge University Press, now republished on Amazon Kindle.

    These nine functions are basic, but there are many more specific examples, discussed throughout the book.

    My 1997 book also has a last chapter on that topic.

    I’ll be happy to publish an update when I get the time.

    I may upload the entire book as a Word document, in addition to the Kindle book.

    Bernard

    10 The functions of consciousness

    The particulars of the distribution of consciousness, so far as we know them, point to its being efficacious … It seems an organ, superadded to other organs which maintain the animal in the struggle for existence; and the presumption of course is that it helps him in some way in the struggle.

    William James, 1890/1983 (pp. 141-2; italics in original)

    Consciousness would appear to be related to the mechanism of the body simply as a [by-]product of its working, and to be completely without any power of modifying that working, as a steam whistle which accompanies the work of a locomotive is without influence upon its machinery.

    Thomas Henry Huxley (quoted in William James, 1890/1983, Vol. I, p. 130)

    10.0 Introduction
    Readers who have come this far may be a bit skeptical about T. H. Huxley’s claim that conscious experience has no function whatever in the workings of the nervous system. But the great number of useful roles played by consciousness may still come as a surprise. The eighteen or so functions presented in this chapter provide only one way of grouping and labeling these useful services – some of the labels overlap, and there may be some gaps. But it is doubtful whether any shorter list can do justice to the great and varied uses of conscious experience.
    The functions listed in Table 10.1 really belong to the entire GW system, including both conscious and unconscious components. In this architecture, conscious experience represents the jewel in the crown, enabling the whole system to function.

    10.0.1 Conscious experience as a biological adaptation

    A basic premise of this book is that, like any other biological adaptation, consciousness is functional. Many biological mechanisms serve multiple functions: The eyes pick up information in the light, but human eye contact also communicates social messages such as dominance, submission, affection, and plain curiosity. Consciousness, too, has apparently gathered multiple functions in its evolutionary history; we explore some of these functions in this chapter (see also Rozin, 1976; Baars, in press a). But perhaps the most fundamental function is the one we remarked on in Chapter 1: the ability to optimize the trade-off between organization and flexibility. Organized responses are highly efficient in well-known situations, but in the face of novel conditions, flexibility is at a premium. Of course the global workspace architecture is designed to make “canned” solutions available automatically in predictable situations, and to combine many different knowledge sources in unpredictable circumstances.
    In another way, consciousness and related mechanisms pose a great challenge to functional explanations because of the paradoxical limits of conscious capacity (1.3.4). Why can’t we experience two different “things” at one time? Why is Short Term Memory limited to half a dozen unrelated items? How could such narrow limits be adaptive? Reasoning naively, it would seem wonderful to be able to consciously read one book, write another one, talk to a friend, and appreciate a fine meal, all at the same time. Certainly the nervous system seems big enough to do all these things simultaneously. The usual answers, that the limitations are “physiological” or that we only have two hands and one mouth to work with, are quite unsatisfactory because they simply move the issue one step backwards: Why have organisms blessed with the most formidable brain in the animal kingdom not developed hands and mouths able to handle true parallel processing? And why does our ability to process information in parallel increase with automaticity, and decrease with conscious involvement?
    Whenever we encounter a biological phenomenon that seems nonfunctional there are two possible explanations. First, we may be asking the wrong question: Perhaps cultural evolution has simply outpaced biological evolution, and we are now expecting the organism to do things it was not adapted to do. It is a good bet that the human nervous system was not developed for academic study, since universal education is only a few centuries old in almost all cultures. This may be the reason that learning in school seems so hard, while learning to perceive the world, learning to move, or learning one’s native tongue seem effortless by comparison. If we then ask why children find it hard to learn arithmetic or spelling, we are asking a culturally biased question, one that may seem natural today, but which is biological nonsense.
    A second reason for apparently nonfunctional adaptations may be an invisible “design trade-off” between two different factors (e.g., Gould, 1982). When the mammalian ancestors of the whales returned to the ocean, they must have encountered trade-offs between walking and swimming, and over time lost their legs. This may seem nonfunctional to land animals like ourselves, but the loss was compensated by a great gain in swimming ability. Conscious limited capacity may involve such a trade-off. There may be powerful advantages for a global broadcasting ability that allows access from any component of the nervous system to all other components. A truly global message, if it is to be available to any part of the nervous system, must come only one at a time, because there is only one “whole system” at any moment to receive the message. Thus vertebrates perhaps evolved a nervous system with two operating modes: a parallel (unconscious) mode and a serial (conscious and limited-capacity) mode. GW theory gives one interpretation of the interaction between these dual operating modes.
    Biological adaptations tend to be accretive (Gould, 1982; Rozin, 1976). The speech system, for example, is “overlaid” on a set of organs that in ancestral primates supported breathing, eating, and simple vocalization. Likewise, it may be that the global broadcasting property of the consciousness system is overlaid on an earlier function that is primarily sensory. This may be why human consciousness has such a penchant for sensory, perceptual, and imaginal contents compared to abstract or nonqualitative events (e.g., 2.5.4).
    Table 10.1 tells the most plausible story we can posit about the uses of consciousness, based on the foregoing chapters.

    Table 10.1. The major functions of consciousness

    1 Definition and Context-setting: By relating global input to its contexts, the system underlying consciousness acts to define the input and remove ambiguities. Conscious global messages can also evoke contexts, which then constrain later conscious experiences.

    2 Adaptation and Learning: Conscious experience is useful in representing and adapting to novel and significant events.

    3 Editing, Flagging, and Debugging: Unconscious processors can monitor any conscious content, edit it, and try to change it if it is consciously “flagged” as an error.

    4 Recruiting and Control Function: Conscious goals can recruit subgoals and motor systems to organize and carry out mental and physical actions.

    5 Prioritizing and Access-control Function: Attentional mechanisms exercise conscious and unconscious control over what will become conscious. By relating some particular conscious content to deeper goals, we can raise its access priority, making it conscious more often and increasing the chances of successful adaptation to it.

    6 Decision-making or Executive Function: When automatic systems cannot routinely resolve some choice-point, making it conscious helps recruit unconscious knowledge sources to make the proper decision. In the case of indecision, we can make a goal conscious to allow widespread recruitment of conscious and unconscious “votes” for or against it.

    7 Analogy-forming Function: Unconscious systems can search for a partial match between their contents and a globally displayed (conscious) message. This is especially important in representing new information when no close models of the input are available.

    8 Metacognitive or Self-monitoring Function: Through conscious imagery and inner speech we can reflect upon and control our own conscious and unconscious functioning.

    9 Autoprogramming and Self-maintenance Function: The deeper layers of context can be considered as a “self-system” that works to maintain maximum stability in the face of changing inner and outer conditions. Conscious experience provides information for the self-system to use in its task of maintaining stability. By “replaying” desirable goals, it can recruit processors able to produce solutions and thereby reprogram the system itself.

    10.1 Definitional and Context-setting Function

    In looking through a hollow tube at an isolated corner of a room (2.1.1), in listening for the words in a rock song, or in learning to perceive an abstract painting, we engage in conscious observation leading to an experiential transformation. We may experience this transformation directly, simply by attending to the stimulus until it is transformed. But even when we try to understand an easy sentence, rapid transformations are taking place unconsciously: Many different unconscious sources of information combine to build a single interpretation of a focal, rather ambiguous event (2.3.2).
    If we were forced to choose one premier function of consciousness, it would be the ability of the consciousness system to combine a variety of knowledge sources in order to define a single, coherent experience. Another way to say this is that the system underlying consciousness has the function of relating an event to the three kinds of contexts: to a qualitative context that allows us to experience an event as an object of consciousness, to a conceptual interpretation, and to a goal context that may lead to effective action (Chapters 4, 6, and 7). A word can be experienced as a stimulus without a conceptual context, but such a context is necessary for it to have meaning; and we know that a meaningful word is usually related to some contextual goals, which are not wholly available consciously at the time they guide us. This contextual apparatus is needed to allow even very “simple” things to take place, such as the reader’s decision to read the next paragraph. Note that the Definitional Function of consciousness corresponds closely to Mandler’s and Marcel’s constructivist view of consciousness, emphasizing its capacity to create experiences that go beyond a simple combination of components (Mandler, 1983, 1984; Marcel, 1983a; see 1.3.5, 2.3.2).
    A related critical function of consciousness is context-setting, the ability to evoke relevant contexts in the first place. This is most obvious in the case of conceptual and goal contexts; for example, in the case of the tip-of-the-tongue (TOT) phenomenon, where the role of a goal context is quite clear (6.1). A TOT state may be evoked by a conscious question or an incomplete conscious sentence (6.0). Given the TOT state, we begin to search (unconsciously) for the correct word; this search process, as well as the goal context for retrieving the word, together will constrain the conscious answers that will come to mind. Context-setting may not be so clear in more complex cases, as in meeting a new person or encountering a new idea, but these conscious experiences do seem to evoke and create new contexts.

    10.2 Adaptation and Learning Function

    Whether consciousness is necessary for learning has led to years of controversy (e.g., Eriksen, 1960; Holender, 1986), but there is little doubt that the more novel the material to be learned, the more time we must typically spend pondering it consciously before learning to cope with it (5.5.3). This is the learning function of conscious experience. GW theory suggests that conscious events are broadcast globally to unconscious processors and contexts, which can then adapt to this information. If they cannot adapt immediately, they can act to bring the material to mind at some later time, sometimes many times. Several researchers have shown that personally significant information tends to come to mind again and again, until presumably it is absorbed and adapted to (Singer, 1984; Horowitz, 1975a, 1976; Klinger, 1971). Obviously we also adapt to the world by action: We can avoid a threatening predator, approach a new source of food, and explore an unusual situation. Action also requires conscious goal-images, which must, again, be more consciously available the more novel the action is (7.2.2).

    10.3 Editing, Flagging, and Debugging Function

    Several psychologists have argued that conscious experience plays a role in “debugging” faulty processes (e.g., Mandler, 1975a,b). In particular, it seems that conscious events are monitored and edited by numerous unconscious rule-systems that can compete for access to the global workspace if they detect some serious flaw, and that may be able to repair the error cooperatively. Indeed, we have argued in Chapter 7 that voluntary action is tacitly edited action (7.3.2). Editing is an automatic consequence of the GW architecture in which many rule systems can simultaneously inspect, interrupt, and help repair a single conscious event. On the other side, conscious experience can also be used to “flag” some significant event. The most spectacular example of this is biofeedback training, in which otherwise unconscious events can come under voluntary control simply by having them trigger a conscious feedback signal. In this way we can learn to control apparently any population of neurons, at least temporarily (2.5). Biofeedback training reveals an extraordinary capacity of the nervous system, one that by itself suggests the existence of global broadcasting.

    10.4 Recruiting and Control Function

    Recruiting has much to do with the Flagging Function; in fact, as soon as we can flag some novel mental event consciously, we may be able to recruit it for voluntary purposes. The ideomotor theory (7.3) suggests that conscious goal-images are necessary to recruit novel subgoals and motor systems that will achieve the goal. But of course conscious goal-images themselves are under the control of unconscious goal contexts, which serve to generate a goal-image in the first place.
    The Control Function is similar to the notion of recruiting of unconscious systems to help in achieving a goal. But consciousness is useful in setting goals in the first place, and in monitoring action feedback signaling success or failure. To set a goal that is compatible with existing goal contexts, we need to simply become conscious of the goal. Thus: “What is the name of the first president of the United States?” Just being conscious of the question allows the answer to be searched for unconsciously, and candidate answers are returned to consciousness, where they can be checked by multiple unconscious knowledge sources. Feedback checking occurs in essentially all tasks, from striking a tennis ball, to modulating the loudness of one’s voice, to word-retrieval, to mental arithmetic. In all these cases it is useful for errors to become conscious in order to recruit unconscious error-detection and correction resources.

    10.5 Prioritizing and Access-control Function

    Attention involves access control to consciousness, and assigning priorities is a core issue in access control. Incomplete conscious thoughts tend to evoke conscious completions. We can apparently use conscious functions to control the likelihood that some piece of information will become conscious more often. Presumably, in the act of voluntarily accessing some information, we also practice the skill of recalling it – that is, of making it conscious again (8.0). In vocabulary development we may want to practice certain words to ensure that they will come to mind readily when needed. Recall, as the skill of bringing material to consciousness, has been studied since Ebbinghaus, but most modern studies ignore the fact that “recall” means “bringing memories to consciousness.”
    We can change the access priority of information in several ways. One is to use associative learning techniques, like paired associate learning. If a neutral conscious event is made to signal a horrifying mental image, the neutral event will take on a higher priority (presumably it has more activation, or it is associated with a higher-level goal context), which will make it more easily available to consciousness.

    10.6 Decision-making or Executive Function

    While the global broadcasting system is not an executive mechanism, it can be used by goal systems in an attempt to control thought and action. Chapters 6-9 are devoted to different aspects of this issue. Consciousness can serve as the domain of competition between different goals, as in indecisiveness and in conscious, deliberate decisions. In a sense, one can broadcast the goal, “Should I …?” followed by “Or shouldn’t I …?” and allow a coalition of systems to build up in support of either alternative, as if they were voting one way or another. The successful coalition presumably supports a goal-image that is broadcast without effective competition, and which therefore gains ideomotor control over the action (7.0). This may be called the Decision-making Function of conscious experience.
    Goal-images do not have to be recallable as conscious in order to influence action. There is considerable reason to believe that fleeting, hard-to-recall goal-images can trigger off well-prepared automatisms (1.5.5; 7.6.4). These images then act in an Executive fashion without allowing conscious decision-making; of course, the executive goal-images are themselves generated by complex unconscious goal structures.

    10.7 Analogy-forming Function

    Human beings have a great penchant for analogy and metaphor, and we use this capacity especially to cope with novel or ill-comprehended situations. Lakoff and Johnson (1980) point out that most everyday idioms involve a metaphorical extension from a well-known concrete situation to one that is abstract or poorly understood. Thus, we find “the mind is a container,” “love is a journey,” and “consciousness is the publicity organ of the nervous system.” Metaphors are both useful and dangerous. In science we use them constantly, and we must be constantly ready to abandon them when they lead us astray. The Rutherford atom of early twentieth-century physics drew an analogy between the planets orbiting the sun and electrons surrounding a heavy nucleus. Here the similarities and differences are obvious in retrospect; but at the time, of course, one did not know how far the metaphor would work, and at which point it would have to be abandoned. But it gave one a place to start. Similarly, whenever we encounter something new, for which our existing knowledge is inadequate, we look for partial matches between the novel case and existing knowledge. Such partial matches invite metaphors. We can best manipulate those metaphors that are familiar and easy to visualize. Thus, we tend to concretize abstract entities and relationships, and thereby transfer our knowledge from one context to another.
    The GW system is useful at several points along this path. It helps in detecting partial matches. It allows many systems to attempt to match a global message and to display their partial matches globally. It supports competition between different systems to edit the mental model of the event that is to be understood. And, in its preference for imageable, qualitative experiences, it is probably responsible for the bias for concreteness and imageability that we find in human metaphor.
    Indeed even when we have accurate abstract representations of some information, we often still prefer less accurate prototypes and metaphors. We know that the average chair is not the prototypical square, brown, wooden, lacquered kitchen chair, yet we continue to use the false prototype apparently because we have easier conscious access to it than to the more realistic abstraction (Rosch, 1975).

    10.8 Metacognitive or Self-monitoring Function

    Conscious metacognition depends on the capacity of one experience to refer to other experiences. Normally when we speak of consciousness we include the ability to describe and act upon our own conscious contents. Indeed, the operational definition of conscious experience proposed in Chapter 1 is predicated upon this ability (1.2.1). But conscious metacognition itself requires the global workspace and consciousness (8.2.3). Another aspect of such a self-referring system is our ability to label our own intentions, expectations, and beliefs, all abstract representations that are not experienced directly the way qualitative percepts or images are. Nevertheless, people constantly refer to their own intentions as if they were discrete objects in the world.
    Conscious self-monitoring is perhaps the single most important aspect of metacognition. There is a great deal of evidence for the view that many adults are constantly monitoring their own performance by reference to some set of criteria that can be collectively labeled the “self-concept.” We might expect self-monitoring to play a role in the psychology of impulse control: if one has an impulse to do something questionable, and if one can mobilize internal competition against it, to hold the action in abeyance, chances for control are improved. There is direct evidence that impulsive children can be taught to use inner speech in such a self-monitoring fashion, and that this does indeed help to constrain inappropriate actions (Meichenbaum & Goodman, 1971).

    10.9 Autoprogramming and Self-maintenance Function

    We can ask the reader to pay attention to the period at the end of this sentence. We can ask someone to retrieve a memory, to solve a puzzle, or to wiggle a finger. We can learn new habits. All this implies the ability of the conscious system to engage in self-programming. In autoprogramming, goal systems make use of conscious experiences to exercise some control over both conscious and unconscious events. Autoprogramming can encounter obstacles, as in attempts to control smoking, overeating, or other undesired habits, but it is often quite effective. It presumably combines many of the functions discussed before: context-setting, decision making, self-monitoring, and the like.
    The smooth functioning of the whole system is dependent upon a stable Dominant Goal Hierarchy, the deeper levels of which apparently correspond to the “self” of commonsense psychology. These deeper levels can be violated by external circumstances, just as any other contextual constraints can be. In addition, there is much clinical experience to suggest that the self can encounter violations of internal origin. Maintaining the self-system may be critical for mental and physical survival, and one tool for doing so may be the ability of attentional systems to control access to consciousness. The classical notions of repression would seem to fit in here. The evidence for repression as an unconscious process has been questioned (e.g., Holmes, 1972, 1974), but there is no dispute over the great amount of self-serving ideation and control of access to conscious experience that people engage in. The evidentiary question centers mainly on whether this kind of control is conscious or not. GW theory suggests that this is a secondary issue, since predictable voluntary control tends to become automatic with practice. In any case, self-maintenance through the control of access of information to consciousness seems to be one central role of the consciousness system.

    10.10 Chapter summary

    Conscious processes are functional, just as unconscious ones are. Normal human psychology involves a delicate, rapid interplay between conscious and unconscious events. Our list of eighteen functions does not exhaust the possibilities: For example, we have not even touched on the uses of sleep and dreaming. They too must surely have some functional role, probably even multiple roles, which are likely to be bound up with the systems we have explored in this book. But this issue must be left for future exploration, along with so many others.
    No doubt there will be some who continue to advocate the curious doctrine of epiphenomenalism, the idea that conscious experience has no function whatsoever. All we can do is point to the evidence, and develop further demonstrations that loss of consciousness – through habituation, automaticity, distraction, masking, anesthesia, and the like – inhibits or destroys the functions listed here.
    Some epiphenomenalists seem to adopt their position to defend the special and unique status of conscious experience. They are right. Consciousness is special; but its wonderful qualities are not isolated from other realities; nor is biological uselessness a special virtue. Consciousness is the vehicle of our individuality, something that makes it of inestimable significance to each of us. But viewed from the outside, as an element in a larger system, the marvel of consciousness is one more wonder in an awesome nervous system, supported by a body that is scarcely less wonderful, evolved and maintained in a biosphere of endless complexity and subtlety, in a universe one of whose most miraculous features, as Einstein said, is our ability to know it.

  7. Stevan and Bernie,

    Couldn’t we reasonably say that the function of consciousness is to enable us to engage in behavior that we wouldn’t be able to perform if we were not conscious? For example, I think we would all agree that every person who contributes to this discussion does so while conscious, and wouldn’t be able to do so if not conscious. So I think a relevant question is what it is about consciousness that underlies functions like this.

  8. CALLING A SPADE A SPADE

    AT: “Couldn’t we reasonably say that the function of consciousness is to enable us to engage in behavior that we wouldn’t be able to perform if we were not conscious?”

    Only if we decide to beg the question completely.

    The question is how and why we can’t do everything we can do (locomote, learn, detect, categorize, talk, reply, “mind-read”) without ever feeling a thing, just doing.

    To reply that we couldn’t do all that unless we felt is not an explanation; it is just an affirmation of the fact that we feel (and that we feel as if we couldn’t do what we do unless we felt it, and felt like it).

    (Bernie’s 1988 Chapter 10 does not answer the question either. Nor is it “epiphenomenalism” [whatever that means!] to point out that the causal role that is performed by the fact that we feel in our capacity to do remains completely unexplained. It is just calling a spade [the explanatory gap] a spade.)

  9. Stevan: “The question is how and why we can’t do everything we can do (locomote, learn, detect, categorize, talk, reply, “mind-read”) without ever feeling a thing, just doing.”

    Sorry, but I think this is the wrong question. The question is: could we do everything we do (including posing the hard problem) without having an internal representation of the world we live in from our own egocentric perspective? My answer is no. We need a system of brain mechanisms that can provide a transparent representation of the world from our privileged egocentric perspective (subjectivity). I argue that this brain representation constitutes our phenomenal world/consciousness. See “Where Am I? Redux”, here:

    http://theassc.org/documents/where_am_i_redux

    Hi Arnold. I agree. The list of functions above, from Chapter 10 in the 1988 book, is precisely all those things we can only do when we are normally conscious. It also includes tasks we can do during the waking state, when we habituate (and become less conscious) to specific stimuli, when some response becomes automatic and unconscious, or when we mask a visual part of each task.

    The reasoning is very straightforward, and the claims are easy to support.

  11. WHY ARE INTERNAL REPRESENTATIONS FELT?

    BJB: “The functions of consciousness, like all the other evidence in my 1988 book, is precisely about “felt” internal representations.”

    Internal representations are felt, hence anything that is about them is about felt representations.

    But what is consistently missing — in your work, Bernie, but in everyone else’s too — is a causal explanation of (how and) why those internal representations are felt internal representations, rather than just internal representations.

    It’s one thing to show how and why functions are functional — quite another to show how and why they are felt (rather than just “functed”).

    (And that’s *felt*, not “felt”!)

  12. Since you’ve totally dropped your first line of criticism, namely that GWT makes no claims about the functions of consciousness, I take it that you now agree that GWT DOES make substantive and testable claims about the functions of consciousness.

    Second, you have still not responded to my “target paper” going into detail on the biological costs of consciousness, which are quite clear empirically (at least some of them), and which are the other side of the Darwinian coin of functionality.

    You therefore have implicitly agreed that consciousness has both clear, empirically based functions and costs. I welcome your willingness to agree to that, since so many other philosophers have hotly argued against those claims.

    Now we come to the pièce de résistance: the redness of red, the feltness of feelings, and the painfulness of pain. I will ignore the manifest tautological quality of such questions, and the way they are always phrased to boggle the minds of millions of innocent undergraduates, who learn never to think about consciousness empirically, even though they go through life having both conscious and unconscious brain events that contain in them the seeds of empirically-determined answers, as we have seen in the wave of scientific studies using contrastive analysis.

    You claim again that GWT provides no answers to the “Gotcha!” questions of Philosophy 101. But it appears that nobody has read my 1988 book long enough to get to the final chapter, which provides, not one but five necessary conditions for conscious contents to be conscious. Apparently “global workspace” is the only phrase people walk away with. Well, two words is better than none. If people walk away from Newton with the phrase “action at a distance” they are not assumed to have taken the trouble to read the rest. But a full understanding of Newton requires a real study of his works, or of other classical physics textbooks.

    I’ve just uploaded my 1988 book, lock stock and barrel, to ASSC, so that it is now available free of charge to anybody who is interested. It has been on Kindle for a few months for ten dollars. It has been available from Cambridge since 1988, and popular versions have been available from Oxford since 1997, and in journal articles galore. Nevertheless, I see that the original book is still not read. The proposed answer is in section 11.3, and it’s straightforward, based on a host of facts.

    Instead, we have philosophers going from behaviorism (like Dan Dennett in 1978) to panpsychism (like David Chalmers today) without batting an eyelash, and still talking about the same, putatively impossible problems, the “Hard Problems,” without bothering to study the vast body of empirical evidence. A search in PubMed reveals no fewer than 30,000 empirically-based articles on the topic of consciousness. There is currently no evidence that logicist philosophers have read even a small fraction of that literature, which I constantly keep up with.

    Instead, the burden of proof is constantly placed on theorists who have advanced ACTUAL TESTABLE HYPOTHESES, such as Edelman, Tononi, Crick and Koch, Llinas, and a number of others, who have a sizable body of REAL EVIDENCE at their disposal. We still see famous hypotheses about consciousness, like Roger Penrose’s, advanced without a shred of a smidgen of a telltale of a suspicion of evidence.

    It’s not that the study of consciousness is pre-scientific. Steven Laureys just published a magnificent article on detecting residual consciousness in behaviorally comatose patients in the journal BRAIN. That is a real, empirical, scientific finding on consciousness “as such.” It will be predictably ignored by the vast numbers of people who are not interested in evidence, but who prefer to repeat Mind-Body paradoxes that have been rehearsed since the Vedanta sages, the early Buddhist practitioners, the followers of the Tao and Zen, and, in the West, Plato, Aristotle, and their millions of followers.

    We are apparently back to vitalism vs. early biologists like Darwin, Gregor Mendel, and Santiago Ramón y Cajal. Darwin argued against Creationism because he thought it provided empty explanations. Cajal is now considered the founder of neuroscience, because of his demonstration of the cellular nature of neurons after years of dogged microscopic studies. Mendel founded genetics. They were all dedicated empiricists, who were content to work on solvable questions around 1900. Mendel didn’t depart from peas, because that’s where the testable questions were. Likewise Darwin and Cajal.

    Empirical and inductivist science never starts with the answers philosophers demand to know instantly, here and now, even though they rarely study evidence. (Except for Descartes, William James, and Aristotle, of course.) The rhetorical trick logicist philosophers have adopted is to flip the normal burden of proof. Scientists are supposed to know instantly the answers that philosophers themselves have become committed not to know. Modern philosophers therefore act just like the vitalists did in 1900, demanding instant answers to the “REAL NATURE” of life, long before the discovery of DNA and the modern agreement on Darwinian evolution.

    Vitalism did not help to solve any problems about the nature of life. Panpsychism will not solve any problems about consciousness. Newton’s philosophical critics solved no problems about the real, observable universe.

    What is scientifically appropriate right now is to return the burden of proof to its proper role. You can’t accuse someone of guilt in a court of law without assuming the burden of proof. In philosophy of mind, routinely, scientists who don’t know all the answers about consciousness today (which they can’t, as empirical inductivists) are essentially accused of guilt before they have a chance to establish their innocence. This is not sensible.

    Physics was unable to answer the question “what is heat?” after Galileo’s thermometer, because the evidence was just beginning to be understood. They still did not know the answer after Fahrenheit and Boyle. They found the first really solid answer with James Clerk Maxwell at the end of the 19th century, when “T” (temperature) became a defined quantity in the nomological network of thermodynamic theory. T is interpreted as molecular motion, in a thermodynamic universe where we know the zero point where motion stops: about -273 degrees Celsius, or 0 kelvin.
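    (In modern notation the identification can be written down directly; the gloss below is standard kinetic theory, not a quotation. For a monatomic ideal gas, the absolute temperature T is fixed by the mean translational kinetic energy per molecule:)

```latex
% Standard kinetic-theory reading of absolute temperature (editor's gloss):
\[
  \left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2}\, k_{B} T,
  \qquad T = 0\ \mathrm{K} \approx -273\,^{\circ}\mathrm{C}.
\]
```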

    Asking inductivist scientists to give the final answer in Galilean times, three centuries before the fact-based evidence comes in with thermodynamics, is … let’s call it “unusual.” It betrays a fundamental misunderstanding of the scientific enterprise and its long and largely successful history.

    However, I’m not going to say, “see you in 300 years,” which would be unsatisfying. Instead, I am going to quote the answer from my 1988 book, which hazarded the five best empirically testable criteria the evidence of 1988 would support. Today we have much more, of course.

    Please note that all theoretical terms in this answer are explicitly defined in the 1988 book, and in its complete Glossary, to make it easy to find. Nothing is left undefined empirically and theoretically, unlike philosophical “Gotcha!” questions, which generally define no terms whatsoever. The material below therefore contains terms of art that are clearly and explicitly and testably defined throughout my body of work, most obviously in 1988.

    I hereby quote the hypothesized answer(s) from 1988, which I still stand by. I have not flipped from behaviorism to panpsychism in twenty years, like my philosophical friends.

    Here it is! Remember it is necessarily empirical and hypothesized, because I depend on evidence, while my philosophical critics do not.

    EXCERPT FROM B.J. Baars (1988) A Cognitive Theory of Consciousness, Cambridge University Press. Now republished on Kindle for $9.99, and provided free of charge to the Association for the Scientific Study of Consciousness (theassc.org) on the web. Also available for the last 10 years on http://www.nsi.edu/users/baars and in various versions floating around the web.

    I am going to SINGLE-STAR the major, testable hypotheses, to keep things as clear and simple as possible.

    I am going to MULTIPLE STAR IN BRACKETS [] those claims that have gained in empirical support since 1988. The number of stars reflects the amount of evidence we have so far, on a scale from ZERO STARS for no evidence to THREE STARS for a ton of evidence, case closed.

    The STARRED items have been discussed in various papers that are readily available for download, notably from Stan Franklin’s CCRG team’s site at the University of Memphis, which has been building a large-scale hybrid AI simulation for a couple of decades. Other computer simulations have been offered by Stan Dehaene in Paris, and by Murray Shanahan in London. Direct empirical evidence continues to come in, and GWT will stand or fall by the evidence, just like any other theory. So far I’m tickled to see the evidence.
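    (For readers who want the flavor of what those simulations implement, here is a minimal sketch of one global-workspace broadcast cycle. It is an illustrative toy with invented names, not code from LIDA or from Dehaene’s or Shanahan’s models: specialist processors receive each broadcast, bid to respond, and the single winning coalition becomes the next conscious content, i.e., the serial, limited-capacity bottleneck GWT describes.)

```python
# Toy global-workspace cycle: specialists see every broadcast, bid to
# respond, and the strongest coalition becomes the next conscious
# content. Invented names; not code from LIDA, Dehaene, or Shanahan.
import random
from dataclasses import dataclass

@dataclass
class Coalition:
    content: str       # the message a specialist wants broadcast
    activation: float  # how strongly it competes for the workspace

class Processor:
    """An unconscious specialist: reads each broadcast, may bid to reply."""
    def __init__(self, name, triggers):
        self.name, self.triggers = name, triggers

    def receive(self, broadcast):
        if any(t in broadcast for t in self.triggers):   # relevant to us?
            return Coalition(f"{self.name}: reply to '{broadcast}'",
                             random.random())
        return None                                      # stay silent

def workspace_cycle(processors, conscious_content, steps=3):
    for _ in range(steps):
        print("BROADCAST:", conscious_content)           # global availability
        bids = [b for p in processors if (b := p.receive(conscious_content))]
        if not bids:
            break
        # Serial bottleneck: exactly one coalition wins each cycle.
        conscious_content = max(bids, key=lambda b: b.activation).content

workspace_cycle(
    [Processor("memory", ["president", "reply"]),
     Processor("speech", ["president", "reply"])],
    "who was the first president?")
```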

    BJB

    QUOTE:

    11.3 *What are the necessary conditions for conscious experience?*

    We can now summarize five necessary conditions without which conscious experience of an event is lost. They are as follows:

    1 *Conscious events involve globally broadcast information.* [ EVIDENCE: **]

    This is quite a strong claim to make, but there is considerable evidence in its favor (2.5). Further, a number of the theoretical claims made throughout this book are based on it. For example, the ideomotor control of action would not work unless conscious messages were made available to potentially all effectors and action schemata (Chapter 7). The notion of universal editing of conscious goal-images (7.3.2) would not work unless any editing criterion could compete against a globally broadcast goal-image; and so on.

    2 *Conscious events are internally consistent* [ EVIDENCE: ***]

    Again, the evidence for this idea from both perception and cognition is quite good (2.1). Chapter 2 presented the argument that other features, like limited capacity and seriality, follow from the internal consistency constraint.

    3 *Conscious events are informative*; that is, they place a demand for adaptation on other parts of the system [ EVIDENCE: **]

    Chapter 5 was devoted to pursuing this claim, and the complementary hypothesis that conscious events that become predictable fade from consciousness, though they do not disappear; in fact, faded conscious events may create the context for later conscious events. These facts imply that consciousness requires a global message to be available long enough for many local processors to adapt to it, to reduce their uncertainty relative to the conscious message. That is to say, this condition may imply that conscious events must have some minimal duration, as suggested in section 2.4.2.

    4 *Conscious events require access by a self-system* [ EVIDENCE: ***]

    The deeper layers of context may be “self-like” (Chapter 9), in that strong violations of these deeper layers are experienced as self-alien. These deeper layers may respond adaptively to conscious events, either by generating a voluntary response to do something about the event, or simply by recording that it has happened, much like conventional Long-Term Memory. Thus, access to GW contents by a self-system seems to be required for reportable conscious experiences.

    5 *Conscious experience may require perceptual or imaginal events lasting for some minimum duration* [ EVIDENCE: ***]

    Perception, imagery, bodily feelings, and inner speech seem to be involved in the conscious components of thought and action, not merely in input processes (1.2.5). Even abstract conscious concepts may involve rapid quasi-perceptual events. This suggests that perception may be closer to the mind’s lingua franca than other codes. The evidence is good that images become automatic with practice, and thus fade from consciousness, though they continue to serve as a processing code (1.2.4). Further, many sources of evidence suggest that the perceptual code must be broadcast for at least 50-250 msec (Blumenthal, 1977; 2.4.2).

    ADDED COMMENT: PLEASE REMEMBER THAT IN SCIENCE WE RARELY FIND NECESSARY AND SUFFICIENT CONDITIONS UNTIL WE HAVE A COMPLETE, EMPIRICALLY AND THEORETICALLY ROBUST THEORY. THAT OCCURRED WITH THE CONCEPT OF HEAT (T) IN THERMODYNAMICS IN THE LATE 19TH CENTURY.

    INSTEAD, WE CAN SPECIFY NECESSARY CONDITIONS WHEN WE HAVE A BODY OF SOLID EVIDENCE WITH A CLEAR BEARING ON THE TOPIC.

    I AM CURRENTLY EXPLORING OTHER NECESSARY CONDITIONS. FOR EXAMPLE, NEUROBIOLOGISTS HAVE ESTABLISHED THAT IN MAMMALS THE CORTICOTHALAMIC SYSTEM IS A NECESSARY CONDITION FOR CONSCIOUS CONTENTS TO BE SUSTAINED (SEE EDELMAN & TONONI, 2000; BAARS & GAGE, 2010 AND 2012).

    NEUROPHYSIOLOGICAL RESEARCH HAS ESTABLISHED THAT WAKING-LIKE OSCILLATORY ACTIVITY IN THE C-T CORE IS NECESSARY FOR CONSCIOUS CONTENTS TO OCCUR. SOME DEBATE REMAINS ABOUT “SLOW-WAVE SLEEP” MENTATION AND OTHER ISSUES, WHICH ARE IMPORTANT BUT RELATIVELY SMALLER TOPICS IN TERMS OF WEIGHT OF EVIDENCE.

    A NUMBER OF OTHER NEUROBIOLOGICAL CONSTRAINTS ARE KNOWN TO EXIST, WRITTEN ABOUT BY GM EDELMAN, A DAMASIO, R LLINAS, G TONONI, B BAARS, A SETH, D EDELMAN, AND A NUMBER OF OTHERS.

    THERE IS NO REASON TO BELIEVE THIS LIST IS EXHAUSTIVE. IT’S AN EMPIRICAL, INDUCTIVIST ENTERPRISE.

    WHILE THE C-T NEUROANATOMY IS CLEARLY INVOLVED IN HUMANS, THERE ARE INTERESTING CLAIMS THAT THERE ARE EVOLUTIONARILY PRIOR BRAIN STRUCTURES, SUCH AS THE ZONA INCERTA (B MERKER), AND THE SUPERIOR COLLICULUS, OR EVEN THALAMIC NUCLEI. THOSE ARE PERFECTLY POSSIBLE CLAIMS. HOWEVER IT SEEMS THAT IN HUMANS, THE GREAT NEOCORTEX MAY OVERRIDE OR OUTWEIGH THE CONTRIBUTION FROM THOSE DEEPER AND PHYLOGENETICALLY EARLIER SOURCES. THERE IS NO REASON WHATSOEVER TO BELIEVE THAT CONSCIOUSNESS HAS IDENTICAL GROSS ANATOMICAL STRUCTURES ACROSS ALL SPECIES.

    Other neurobiological conditions are discussed in Chapter 8 of Baars & Gage (2012). See Amazon for details. You can also pick up the 1988 book as a Kindle book in Amazon, and as a Word document, free of charge, at the ASSC.

    My apologies for this long answer. Einstein said “as simple as possible, but not too simple.” We try our best.

    Bernard Baars

    11.4 What is unconscious?

    If these are necessary conditions for conscious experience, it follows that anything that violates just one necessary condition is unconscious. That is, events that are globally broadcast but internally inconsistent are presumably not conscious; perhaps they last for so short a time before their competing parts destroy the global message that they fail to trigger an informative demand for adaptation. Similarly, it is conceivable that contextual information could be globally broadcast without being informative because the system at large has already adapted to contextual constraints. There are thus many ways in which information may be unconscious: habituation and automaticity, distraction by high-priority contextually incompatible events, the absence of a context needed to interpret some event, inconsistent events, and so on. It is possible that motivational mechanisms may employ such ways of making things unconscious in order to avoid conscious thoughts that might evoke intense shame, fear or guilt (8.4).

    In sum, we find again that surprising simplicity emerges from the apparent complexity. The evidence discussed throughout this book seems to converge on only five necessary conditions for conscious events: global broadcasting, internal consistency, informativeness, access by a self-system, and perceptual or quasi-perceptual coding.

    END OF QUOTE

  13. TO REPLY “JUST SO” IS TO BEG THE QUESTION

    BB-1: Definition and Context-setting. By relating global input to its contexts, the system underlying consciousness acts to define the input and remove ambiguities. Conscious global messages can also evoke contexts, which then constrain later conscious experiences.

    Why *felt* Definition and Context-Setting rather than just Definition and Context-Setting?

    BB-2: Adaptation and Learning. Conscious experience is useful in representing and adapting to novel and significant events.

    Why *felt* Adaptation and Learning rather than just Adaptation and Learning?

    BB-3: Editing, Flagging, and Debugging Function. Unconscious processors can monitor any conscious content, edit it, and try to change it if it is consciously “flagged” as an error.

    Why *felt* Editing, Flagging, and Debugging rather than just Editing, Flagging, and Debugging?

    BB-4: Recruiting and Control Function. Conscious goals can recruit subgoals and motor systems to organize and carry out mental and physical actions.

    Why *felt* Recruiting and Control rather than just Recruiting and Control?

    BB-5: Prioritizing and Access-control Function. Attentional mechanisms exercise conscious and unconscious control over what will become conscious. By relating some particular conscious content to deeper goals, we can raise its access priority, making it conscious more often and increasing the chances of successful adaptation to it.

    Why *felt* Prioritizing and Access-control rather than just Prioritizing and Access-control?

    BB-6: Decision-making or Executive Function. When automatic systems cannot routinely resolve some choice-point, making it conscious helps recruit unconscious knowledge sources to make the proper decision. In the case of indecision, we can make a goal conscious to allow widespread recruitment of conscious and unconscious “votes” for or against it.

    Why *felt* Decision-making/Executive Function rather than just Decision-making/Executive Function?

    BB-7: Analogy-forming Function. Unconscious systems can search for a partial match between their contents and a globally displayed (conscious) message. This is especially important in representing new information when no close models of the input are available.

    Why *felt* Analogy-forming rather than just Analogy-forming ?

    BB-8: Metacognitive or Self-monitoring Function. Through conscious imagery and inner speech we can reflect upon and control our own conscious and unconscious functioning.

    Why *felt* Metacognition/Self-monitoring rather than just Metacognition/Self-monitoring?

    BB-9: Autoprogramming and Self-Maintenance Function. The deeper layers of context can be considered as a “self-system” that works to maintain maximum stability in the face of changing inner and outer conditions. Conscious experience provides information for the self-system to use in its task of maintaining stability. By “replaying” desirable goals, it can recruit processors able to produce solutions and thereby reprogram the system itself.

    Why *felt* Autoprogramming and Self-Maintenance rather than just Autoprogramming and Self-Maintenance?

    To reply “Just So” is to beg the question. Every single one of these nine functions looks just as feasible if implemented feelinglessly. If there is some functional reason they could not be implemented feelinglessly, it has to be explained what that functional reason is: What causal role does the fact that the function is felt rather than unfelt play?

    (This, by the way, is what makes the problem of explaining consciousness “hard”.)

  14. Stevan and Bernie,

    Any cognitive function that requires us to devote attention in order to select some feature of the world we live in is a function that *must* be implemented consciously; i.e., with *feeling*. I claim this is the case because we do not experience the world directly; each of us has *only* an internal representation of the physical world to work with from our unique egocentric perspective. This is *subjectivity*. So if we are to explain subjectivity/consciousness/feeling, we have to explain how the brain is able to construct a global representation of the world from our privileged egocentric perspective. I have claimed that the retinoid model solves the problem of subjectivity/consciousness, and I have presented empirical evidence in support of this claim.

  15. AT: “Any cognitive function that requires us to devote attention in order to select some feature of the world we live in is a function that *must* be implemented consciously”

    If we leave out the (question-begging) “we” (and “cognitive” and “attention”) and just say: “Any function that devotes processing to select some feature of the world” then what you are describing, Arnold, is trivially implementable in any of today’s toy robots.

    Stevan: “… just say: ‘Any function that devotes processing to select some feature of the world’ then what you are describing, Arnold, is trivially implementable in any of today’s toy robots…”

    Here you are making a crucial mistake, Stevan. I am not saying what you want me to say. To my knowledge, there is no robot/artifact that contains a global perspectival representation of the volumetric space (the world) in which it exists. Put another way, there is no existing artifact that has *subjectivity* — the structural and dynamic properties of the brain’s putative retinoid space.

  17. AT: “there is no robot/artifact that contains a global perspectival representation of the volumetric space (the world) in which it exists. Put another way, there is no existing artifact that has *subjectivity*”.

    You are quite right that there is today “no robot/artifact that contains a global perspectival representation of the volumetric space (the world) in which it exists.”

    But (until further notice), *that* certainly is not *subjectivity*. It is “a global perspectival representation of the volumetric space (the world) in which it exists.”

    The rest is just wishful thinking…

  18. Stevan, you have one counter-argument: “felt.” You use that single counter-argument against literally hundreds of empirical citations about consciousness as such. This is much like Darwin’s Creationist critics, who had just one basic counter-argument against the vast body of evidence Darwin amassed in favor of his position. It is also just like Henri Bergson’s single counter-argument of “elan vital” against the massed arguments known even at that time on behalf of a molecular basis for life. It is just like George Berkeley’s single counter-argument against Newton’s calculus, or the single counter-arguments others raised against gravitational action at a distance. All you can do is repeat the same-old same-old over and over again. Ultimately, in the history of science, the overwhelming weight of evidence counts.

  19. In science, we cannot coerce the truth to emerge. We have to wait for her to reveal herself. Indeed, that which appears to be true very often turns out not to be. Everything unknown is in the womb of time.

    Philosophers demand a kind of violence against the patient, evidence-based, and slow process of finding the truth. They demand instant answers. It is not up to us to give instant answers. All we can do is follow the road patiently, rigorously, and with a sense of good fortune when we stumble upon another nugget of fact.

  20. JUST-SO STORIES ABOUT CONSCIOUSNESS

    Countless things that we can do, we do consciously (it feels like something to do them). To enumerate the underlying functions that generate those doings is not to explain how or why those functions are conscious — i.e., how or why the doings are generated consciously, rather than just generated.

    This will be as true of the Nth example as of the first. What is needed is a causal explanation of how and why functions are conscious, not more examples, dubbed as causal, just so.

  21. Just so stories, like Kipling’s, are post-hoc rationalizations. But that is not what we are talking about here. We are talking about a vast and growing body of rock-solid evidence about mind and brain, most of it experimental and predictive, not post-hoc and justificatory. This is normal science. To call that “just so” stories is merely to tar with the wrong brush.

    The great split, as Kuhn remarked, is between those who pay attention to evidence and those who don’t. The non-evidence position can last a long time. Ultimately, the evidence position wins out in healthy science. Bergson was more famous than Gregor Mendel. But Mendel’s work is now foundational to the great stream of biogenetics. Bergson is a forgotten historical figure.

  22. BB: “Philosophers demand… instant answers… In science… [a]ll we can do is follow the road patiently, rigorously…”

    1. Actually, I am not a philosopher ;>)

    2. Nor am I demanding instant answers.

    3. I keep pointing out, rather systematically, how the instant answers (to the question of the causal role of consciousness) — not just Bernie’s — are not successful (not rigorous, if you like!), always turning out to be just question-begging Just-So Stories hanging from a skyhook.

  23. THE HARDSHIPS OF COGNITIVE SCIENCE

    BB: “Just so stories, like Kipling’s, are post-hoc rationalizations.”

    Correct. And so are all attempts (to date) to explain the causal role of consciousness.

    BB: “We are talking about a vast and growing body of rock-solid evidence… most of it… experimental and predictive, not post-hoc and justificatory.”

    The evidence, without exception, is always of two kinds:

    (1) Functional explanations of how and why organisms can do what they can do. (These are normal science — reverse engineering, actually. And they are not the object of my criticism. They are the beginnings of answers to what Dave Chalmers dubbed the “easy” questions of cognitive science.)

    (2) Correlations between brain activity and conscious cognitive states. (These too are normal science, and not the object of my criticism. If they help us come up with causal models explaining how and why we can do what we can do, then they too help us answer the “easy” questions of cognitive science.)

    The Just-So Stories are the attempts to explain the causal role of consciousness (i.e., to solve what Dave dubbed the “hard” problem) on the basis of (1) and (2).

    When looked at closely (“rigorously”), it always turns out that the attempted functional explanation in question is really just addressing the “easy” problem (of how and why we can do what we can do) and not the “hard” problem of how and why it is conscious.

    So as candidate solutions to the “hard” problem, such attempts do indeed turn out (every time) to be “post-hoc and justificatory” rather than explanatory.

    Harnad, S. (2000) Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. [Commentary on Humphrey, N. “How to Solve the Mind-Body Problem”] Journal of Consciousness Studies 7(4): 54-61. http://cogprints.org/1617/

    Harnad, S. & Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89 http://eprints.ecs.soton.ac.uk/14430/

    Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3. http://eprints.ecs.soton.ac.uk/22242/

    Harnad, S. (2011) Doing, Feeling, Meaning And Explaining. In: On the Human. http://eprints.ecs.soton.ac.uk/22243/

  24. Sorry, I got Bernie’s request for closure after I posted the above. I think the above was still pertinent and informative, but hereon I agree we have each said our bit and I am happy not to post any further on this. I am also happy to leave the last word to Bernie.

  25. Stevan: “You [Arnold] are quite right that there is today “no robot/artifact that contains a global perspectival representation of the volumetric space (the world) in which it exists.”
    But (until further notice), *that* certainly is not *subjectivity*. It is “a global perspectival representation of the volumetric space (the world) in which it exists.”

    1. According to my working definition of subjectivity, any organism has subjectivity if it has *a global perspectival representation of the volumetric space (the world) in which it exists.*

    Since you are certain that this is not subjectivity, would you tell us what your definition of subjectivity is?

    2. Stevan: “The rest is just wishful thinking…”

    The rest is certainly more than just wishful thinking. The rest is a body of empirical findings that provides clear support for a particular theoretical model of consciousness (the retinoid model) as an explanation (within scientific norms) of subjectivity/consciousness.

    Would you agree that the hard problem would be solved (within scientific norms) if it were successfully demonstrated that the biophysical structure and dynamics of a particular theoretical brain model could predict novel phenomenal events (*feelings*) in addition to predicting previously inexplicable conscious content (*feelings*)?

  26. So after all of this I am still a bit puzzled by Bernie’s position here.

    Bernie, do you at least recognize that there is an air of mystery about consciousness? That is, it seems, at least at first, that explaining the functioning of the brain leaves open the question Stevan is interested in: why is that particular brain functioning conscious, in the sense of feeling like something, rather than just being done ‘in the dark’? The intuition may be wrong, but it seems we can imagine a robot doing the things you are talking about and yet not being conscious in any way at all; of course, we also seem able to imagine a robot that does what you are talking about and is conscious… so what’s the difference between these two robots? Or do you think there isn’t even an apparent air of mystery here?

    To put it another way, it sometimes sounds like Bernie is saying that in the future we will know the answer to Stevan’s question, but at other times it sounds like he is saying that this isn’t a question that has to be answered at all (or that only a philosopher would want it answered).

    I mean, even Newton himself thought that gravity was ‘spooky action at a distance’ and looked to involve ‘occult forces’… it was made reputable by Einstein almost 200 years later, but wasn’t it right for Newton to express those worries? It is natural, and intellectually virtuous, to admit when one can’t explain some phenomenon appealed to in one’s theory, no? Wouldn’t it have been wrong for Newton to say ‘I don’t need to explain that because the math works’? If so, then it seems to me that one should just acknowledge that we can’t answer Stevan’s question and then either (a) explain why we can’t answer it (e.g. identities don’t require explanations but are justified by considerations of simplicity, parsimony, and explanatory power) or (b) explain why it is probably going to be answered at a later date (e.g. by appealing to a meta-induction over the history of science).

    Secondly, I wonder what Bernie would say to a challenge from a different camp. People sometimes distinguish between what David Rosenthal calls ‘transitive consciousness’ and ‘state consciousness’. Transitive consciousness is just what we would ordinarily think of as sensation and perception. These states make the organism conscious of something (hence transitive). So, when I am in a state in virtue of which I am conscious of some thing in the world, then I am in a state that is transitively conscious. These kinds of states can occur when they are unconscious. That is to say, if I am in a state that is a sensation of blue (say) and am having the thought ‘blue there’ (or whatever), then I am conscious of the blue in my environment and conscious that there is blue in my environment. This perceptual state can occur in masked priming studies, etc., such that the subjects report being in no way conscious of seeing blue but yet clearly had been transitively conscious of blue, since they are primed (or whatever). So, once we have this distinction, then a lot of what you argue is the cost of consciousness can be seen as the cost of transitive consciousness. But what costs are there associated with those states themselves being conscious? That is, what cost is there just associated with state consciousness? Once one takes into account the vast empirical literature on what can be done unconsciously (pretty much everything that can be done consciously), it starts to look like there is no distinctive function for state consciousness (this was David’s conclusion in his talk at the first online consciousness conference; he is talking about thoughts, but the basic argument applies to sensations and perceptions as well). What do you make of this?

    Finally, I would just like to say that the ‘philosopher versus scientist’ routine is a bit stale! Stevan isn’t a philosopher and plenty of scientists share worries about the hard problem and plenty of philosophers think that the air of apparent mystery will be cleared in the long run of science (myself included). New rhetoric, please! 😉

  27. Richard: ” Once one takes into account the vast empirical literature on what can be done unconsciously (pretty much everything that can be done consciously) it starts to look like there is no distinctive function for state consciousness …”

    Not so. Bernie’s position seems reasonable to me. Please tell us how we could possibly be engaging in this internet discussion if we were unconscious.

    It would be helpful if you would give your definition of “state consciousness”.

  28. RESPONSE TO ARNOLD TREHUB:

    ARNOLD: “According to my working definition of subjectivity, any organism has subjectivity if it has ‘a global perspectival representation of the volumetric space (the world) in which it exists’.”

    This is not a definition of subjectivity. It is a theory that anything that has “a global perspectival representation of the volumetric space (the world) in which it exists” has subjectivity.

    It remains for you to explain, Arnold, how and why anything that has “a global perspectival representation [etc.]” has subjectivity.

    On the face of it, it looks like anything that has “a global perspectival representation of the volumetric space (the world) in which it exists” has “a global perspectival representation of the volumetric space (the world) in which it exists”.

    Showing whether (and if so how and why) anything that has “a global perspectival representation [etc.]” also has subjectivity is a further explanatory problem (the so-called “hard” one) that you have not addressed at all (and it cannot be solved “by definition”).

    ARNOLD: “would you tell us what your definition of subjectivity is”?

    I don’t know what “subjectivity” is but I know what a subjective (or mental or conscious) state is: It is a state that it feels like something to be in.

    And we all know (without need of definition) what it feels like to feel (anything at all).

    So the simplest way to put my critique is that you have not explained how or why it feels like something to have “a global perspectival representation of the volumetric space (the world) in which it exists”.

    I think I myself do have “a global perspectival representation [etc.]”.

    And I am certain — indeed Cartesianly certain — that I feel.

    But how or why the former causes (let alone constitutes) the latter is not at all clear (and I rather suspect it is not true either: the amphioxus, for all I know, only feels “ouch,” but it does feel, even though it lacks “a global perspectival representation [etc.]”).

    ARNOLD: “Would you agree that the hard problem would be solved (within scientific norms) if it were successfully demonstrated that the biophysical structure and dynamics of a particular theoretical brain model could predict novel phenomenal events (*feelings*) in addition to predicting previously inexplicable conscious content (*feelings*)?”

    Not at all. I would say that such a model (if it existed) could predict whether, when and which feelings are felt, but not how or why they are felt. It would be correlation and prediction (as in weather-forecasting or mind-reading), but not causal explanation.

    But I would agree that such a model (if it also solved the “easy” problem of explaining how and why we can do everything that we can do) would be the best we can ever hope to have, in cognitive science.

    It would not, however, solve (or even touch) the “hard” problem.

    For those who are interested specifically in this “hard” problem of explaining how and why we are conscious, there will be a Summer Institute focussed on this from June 30 – July 11 in Montreal this summer.
    http://www.summer12.isc.uqam.ca/page/renseignement.php?lang_id=2

    The theme is the “Evolution and Function of Consciousness,” and the speakers who will be giving it their best shot include:

    Jorge Armony
    Bernard Baars
    Mark Balaguer
    Simon Baron-Cohen
    Roy Baumeister
    Bjorn Brembs
    John Campbell
    Erik Cook
    Fernando Cervero
    Paul Cisek
    Axel Cleeremans
    Gary Comstock
    Antonio Damasio
    Dan Dennett
    Gregory Dudek
    Jeffrey Ebert
    David Edelman
    Shimon Edelman
    Barbara Finlay
    Dario Floreano
    David Freedman
    Michael Graziano
    Patrick Haggard
    Stevan Harnad
    Inman Harvey
    Eva Jablonka
    Phillip Jackson
    David Jacobs
    Hakwan Lau
    Joseph LeDoux
    Malcolm MacIver
    Stefano Mancuso
    Julio Martinez
    Jennifer Mather
    Alfred Mele
    Bjorn Merker
    Ezequiel Morsella
    Karim Nader
    Gualtiero Piccinini
    Christopher Pack
    Luiz Pessoa
    Gilles Plourde
    Alain Ptito
    Amir Raz
    David Rosenthal
    John Searle
    Michael Shadlen
    Amir Shmuel
    Wolf Singer
    Wayne Sossin
    Catherine Tallon-Baudry

  29. RESPONSE TO RICHARD BROWN:

    “Action at a distance” was a problem — but an “easy” (“functional”) problem (in the Chalmerian terminology).

    There is (and has been) only one “hard” problem: How and why do organisms *feel*, rather than just “funct” (*do*)?

  30. Arnold: ‘fraid so!

    See, for instance, this recent collection: The New Unconscious

    A mental state is conscious just in case it is a mental state which I am aware of myself as being in (in some suitable way)… (for instance, a thought that I want to eat a salad is unconscious if I am not aware of myself as thinking that)

  31. I’m sorry. I promised to bug out. However, the claim that “action at a distance,” the molecular basis of life, and many another problem in the history of science were NOT “hard problems” is a presentist projection on history. In the theological/philosophical critique of Newton “action at a distance” was a HARD problem, and indeed it still is today. We do not know the answer.

    The same goes for Zeno’s Paradox (which was not solved until the late 19th century with infinite converging series), the “elan vital,” and a hundred smaller problems.

    When we speak from relative ignorance (which is always) there ALWAYS are “Hard Problems.” For Descartes the duality of the brain WAS a Hard Problem.

    To say we are facing unique unsolvable problems today is historically wrong. It is part of the egocentricity of our time, which is perhaps the foremost feature of this age.

    b

  32. ARNOLD: “Would you agree that the hard problem would be solved (within scientific norms) if it were successfully demonstrated that the biophysical structure and dynamics of a particular theoretical brain model could predict novel phenomenal events (*feelings*) in addition to predicting previously inexplicable conscious content (*feelings*)?”

    Stevan: “Not at all. I would say that such a model (if it existed) could predict whether, when and which feelings are felt, but not how or why they are felt. It would be correlation and prediction (as in weather-forecasting or mind-reading), but not causal explanation.”

    It seems to me that here we see the crux of “the hard problem”. It hinges on the implicit assumption that a causal explanation of consciousness must be a causal explanation for the *sheer existence* of consciousness/feelings. I was careful to frame my causal explanation of consciousness as being within the framework of scientific norms. Science is a pragmatic enterprise. Its practitioners are not omniscient, and science is unable to explain the *sheer existence of anything*, including consciousness. All that we are able to do, within the bounds of science, is provide a *causal explanation of the measurable features of what consciousness/feeling is like*. So I would say that Stevan and others claim the hard problem can have no physical solution because they demand a solution *outside of the norms of science*.

  33. WHETHER-FORECASTING

    No one owns the terminology, but I, for one, never took the “easy/hard” distinction to refer only to our hunches or bets as to whether we would eventually be able to give a causal explanation of consciousness.

    I took explaining causally how and why we can do what we can do to be “easy” and explaining causally how and why we feel to be “hard” because the latter faces a principled obstacle whereas the former does not. (Just as explaining life and explaining action at a distance do not.)

    That principled obstacle is that there is no causal room for feeling as an independent further causal factor in the universe (except if psychokinetic dualism were true, which it is not, all evidence going against it). The existing repertoire of unfelt dynamics (including electromagnetic and gravitational “action at a distance”) already covers all the causal territory, fully, exhaustively, and with no remainder.

    That’s why Bernie’s and Arnold’s (and everyone else’s) causal theories always end up being just Just-So Stories.

  34. ARNOLD: “Stevan and others claim the hard problem can have no physical solution because they demand a solution *outside of the norms of science*.”

    No, I just think it is perfectly reasonable to ask about the causal role of what looks, on the face of it, like a garden-variety biological trait: the fact that organisms feel:

    How and why do organisms feel (rather than just do)?

    And that it is quite surprising that no causal explanation works.

    Neither Darwinian evolutionary explanation nor Turingian functional explanation can do the job.

    Hard problem….

  35. Question from the audience: Prof. Harnad, if you’re invoking causal closure as leaving no room for “feeling,” isn’t it more broadly the case that causal closure would leave no room for free will, where free will has to do roughly with our apparent process of choosing among consciously-prospected paths, and “consciously” in turn includes both self-awareness and “something it is like,” which you’re metonymically calling “feeling”?

    If we accept causal closure, and thus see no room for conscious free will, there may be a problem in explaining the point of biology instantiating a non-conscious version of essentially the same thing. On the one hand what Prof. Baars is outlining, biological operations supporting (among other things) conscious choice in the “theater,” might be imagined to occur without the “light of consciousness,” in the dark. On the other hand, in a world in which causal closure is true, why should biology carry out a blind pantomime of choosing, if the very claim that choice exists is taken to be false?

    If causal closure fails, does your whole argument regarding “feeling” fail? If causal closure does not fail, what is your argument for why biology should concern itself with something with no causal role, where that something is not just “feeling,” but choice itself? It’s one thing to try to explain away an “illusion” of conscious choosing. Can you explain away an “illusion” of unconscious choosing when there’s no one for it to be an illusion for?

    Prof. Baars: Would you agree that (consciously) choosing is an essential function of consciousness, and particularly of the global workspace? If so, does that entail a commitment to causal closure failing?

  36. ARNOLD: “Stevan and others claim the hard problem can have no physical solution because they demand a solution *outside of the norms of science*.”

    Stevan: “No, I just think it is a perfectly reasonable to ask what is the causal role of what looks, on the face of it, like a garden-variety biological trait: The fact that organisms feel:”

    OK, so in order to ask, within the norms of science, what the causal role of feeling is for you, you have to give some description of what it is like for you to feel. For example, what is it like for you to feel that you are reading this response? Is there a description that characterizes your *minimal state of feeling/consciousness*? For me the minimal state is like being here in a surround — like there is *something somewhere in relation to me*. If we start from this minimal description of feeling, based on our current knowledge, it is reasonable to ask what system of brain mechanisms might generate a biophysical analog of this fundamental feeling. Science can profitably take it from there without trying to explain the sheer existence of feeling/consciousness. My exploration of the problem leads me to conclude that *without the particular brain mechanisms that constitute our consciousness we would not have our phenomenal world*. No human culture, no art, no science, no philosophy. So the causal role of consciousness is to give us all these things.

  37. DOING, WILLING AND FEELING

    WHIT BLAUVELT: “…if you’re invoking causal closure as leaving no room for “feeling,” isn’t it more broadly the case that causal closure would leave no room for free will…”

    Yes, if feeling has no causal role, then doing something because you feel like doing it has no causal role.

    (But I don’t really know what “causal closure” means. I am not a metaphysician.)

    WHIT BLAUVELT: “…where [choosing] ‘consciously’… includes both self-awareness and ‘something it is like,’ which you’re… calling ‘feeling’?”

    Choosing to do something consciously means doing it because you feel like doing it, and, yes, it feels like something to feel like doing something.

    (But how did “self-awareness” get into this? An amphioxus can move away because it feels like doing it without having any ideas worth mentioning about “self.”)

    WHIT BLAUVELT: “…If we accept causal closure, and thus see no room for conscious free will, there may be a problem in explaining the point of biology instantiating a non-conscious version of essentially the same thing…”

    I can’t follow.

    There is no explanation of how and why feelings cause anything, including doing.

    And far from creating a problem for unfelt doing, it is in fact only unfelt doing that is unproblematic.

    WHIT BLAUVELT: “…in a world in which causal closure is true, why should biology carry out a blind pantomime of choosing, if the very claim that choice exists is taken to be false?”

    I still don’t understand what you are asking or supposing, because I don’t know what “causal closure” means. Causality is not a problem in physics, engineering and biology. It is just a problem when we try to explain the causal role of feeling.

    There is no problem with Insentient Nature making a functional/adaptive distinction between voluntary and involuntary behavior. The problem only arises if the voluntary behavior is *felt* (and would arise no matter what the voluntary behavior felt like: whether it felt as if one was doing what one was doing because one had chosen to do it, or it felt as if one was doing it because one was pushed. Either way it would feel like something, and explaining the causal role of that fact is the hard problem).

    WHIT BLAUVELT: “…If causal closure fails, does your whole argument regarding “feeling” fail?”

    I’m not a specialist in — nor am I invoking — anything special about the metaphysics of causation as it occurs in physical, engineering and biological phenomena and their explanation.

    I have no idea what it means for “causal closure” to fail or not fail.

    WHIT BLAUVELT: “…If causal closure does not fail, what is your argument for why biology should concern itself with something with no causal role, where that something is not just ‘feeling,’ but choice itself?”

    “Choice” is just an extra word for when and what it feels like to do something because you feel like it.

    There is no reason at all to speculate about determinism vs. indeterminism here (if that is what you are doing). The hard problem would be just as hard in a deterministic universe as in an indeterministic one. Explanation would still be causal explanation, and the causal function of feeling would remain unexplained in either case.

    WHIT BLAUVELT: “…It’s one thing to try to explain away an “illusion” of conscious choosing. Can you explain away an “illusion” of unconscious choosing when there’s no one for it to be an illusion for?”

    To repeat, it makes no difference whatsoever whether the feeling you have is that you do some things because you feel like it (voluntarily, by conscious choice) or you feel that everything you do you do because you are impelled to do it (involuntarily, not by conscious choice). The hard problem is and remains to explain the causal role of the fact that some functions are felt and some are not.

    (Perhaps in an indeterministic universe causality itself matters less? Things just happen? No cause; no explanation?)

  38. CORRELATION VS CAUSATION

    ARNOLD: “so in order to ask… what the causal role of feeling is… you have to give some description of what it is like for you to feel.”

    Not at all. It is enough *that* you feel (regardless of what it happens to feel like). With the fact of feeling (anything) we are already facing the “hard” problem.

    ARNOLD: “it is reasonable to ask what system of brain mechanisms might generate… feeling.”

    It is indeed. But correlation and prediction are not enough. You have to explain how and why brain mechanisms generate feeling. (On the face of it, it looks as if all they need to do is generate doing…)

  39. CAUSAL COUNTERFACTUALS

    BERNIE: “Stevan, may I ask UNDER WHAT CONDITION would YOU CONSIDER subjectivity TO HAVE A NATURALISTIC ANSWER?”

    If you mean “What would count as an explanation of the causal role of feeling?”: Psychokinetic dualism would do the trick: Feeling is a fundamental force, like gravity and electromagnetism.

    (This is probably what 95% of people believe is true — and it’s what 100% of us *feel* is true, whether or not we believe it. Unfortunately, psychokinetic dualism is false; all evidence — except what feeling itself feels like — contradicts it.)

    To describe any other potential causal explanation, I would have to resort to sci-fi fantasies, and I don’t think much is to be learned from that.

  40. I have a headache. Sometimes I’m distracted, so that I’m momentarily unaware of my headache, but most of the time I’m aware of it. While I am aware of it, I think to myself something that could perhaps be expressed in English, albeit very imperfectly, by my saying, “My current headache is like this”.

    I have two questions for anyone who feels able to answer them: Does my thinking this thought have a function? If so, what is it?

    These questions may not be the same as other questions that have been pressed in this thread, but I would love to have them convincingly answered.

  41. Dear Andrew Melnyk,

    Thank you for asking a specific, factual question. I think that’s productive.

    Several aspects seem relevant. First, the fading of your headache after distraction. In relevant experiments we can see that consciously perceived painful stimulation evokes widespread cortical activity, while the same stimulation under conditions of distraction typically shows only limited cortical activity (or in some cases, none). Thus subjective pain corresponds nicely to the spread of cortical stimulus-related activity. The best evidence today indicates that widespread consciousness-related cortical activity involves widespread cross-frequency phase-locked oscillations in the range from theta to high gamma.

    The second interesting aspect is your thoughts about the feeling of a headache. I don’t think one can seriously question that your thoughts are functional, or potentially so. While words are relatively imperfect tools to express the richness of sensory experiences like pain, they can express the intensity, location, and onset and offset of the pain, sharpness/dullness, as well as the degree of suffering (which can be separated from the perceptual qualities of pain). In addition, one can voluntarily draw topographical intensity maps of pain, something that’s important for disabling and chronic pain. That’s often essential for diagnostic purposes. In the case of peripheral pain, the relevant nerves can often be located by virtue of reports like that. (That’s how dentists know where to inject anesthetic).

    A third interesting aspect is the vulnerability of pain perception to verbal suggestions, as seen in the placebo/nocebo effect and in direct suggestion in about a quarter of the normal population. Those changes in the intensity of pain can be picked up by way of brain measures like the evoked potential.

    Finally, from a therapeutic point of view, attentional distraction is also useful for pain control, and was indeed a primary means to do that before the modern discovery of analgesics and anesthetics.

    BJB

  42. METACOGNITION AND SENTIENCE

    melnyka: “‘My current headache [feels] like this’… Does my thinking this thought have a function? If so, what is it?”

    See the Carruthers & Ritchie thread, “Evolving Self-Consciousness” http://consciousnessonline.com/2012/02/17/evolving-self-consciousness/#comment-1338

    Two potential functions (there may be more) are (1) metacognition (internal monitoring) and (2) mind-reading (external communication, especially via language: “My headache feels like a migraine, not a tension-headache”).

    But that doesn’t explain why (some) internal functions are felt, rather than just “functed.” Feeling the functional state, rather than just functing it, seems functionally (causally) superfluous. By that token, feeling that you are feeling the state seems doubly superfluous.

    But I do want to point out something that most philosophers ignore or even deny: It *feels like something* to think. Between every different thought, there is a felt difference, rather like a JND in psychophysics.

    So feeling a headache feels like something, and thinking “I am feeling a headache” feels like something else. You can be feeling both at the same time (as in thinking “I am getting tired” when you are feeling that you are getting tired) or you can feel only the thought (as in thinking “I am getting tired” when you are not yet feeling tired).

    Again, the 2nd order state is functional — but the fact that it is felt seems just as superfluous functionally as the fact that the 1st order state is felt.

    Go figure…

  43. PS to melnyka:

    When you are not feeling your headache, even momentarily, you are not “having” a headache. Your vasculature may still be in the pathological state that is normally felt as a headache, but the “ache” is gone.

    To think otherwise is to imagine that the ache is being “had” even when I am not feeling it: That’s overdoing it. One mind/body problem is enough (Freudian psychodynamic theories of “unconscious mentation” to the contrary notwithstanding: unconscious brain functions there are, aplenty, but unconscious mentation? not).

    The real mystery, remember, is why everything is not unconscious.

  44. Stevan, I think that may be the key to our disagreement. The evidence (and scientific consensus) regarding unconscious knowledge is simply overwhelming. Autobiographical memories are unconscious (until recalled). So are unaccessed ambiguities in language, vision, and other functions. The cerebellum is unconscious; so are basal ganglia functions. The corticothalamic system (under the proper conditions) is not. Habituated input is unconscious. Automatisms are unconscious. Implicit motivation, implicit learning, incubation, preconscious perception, long-term ego functions, and yes, demonstrated cases of suppressed thoughts are unconscious. The evidence is simply enormous. You can be a radical subjectivist on those matters, but you will be in a small and diminishing minority. And what’s worse, you lose a ton of explanatory power.

    I think this may be the key to our mutual incomprehension. (Decontextualized comprehension is also unconscious).

  45. Stevan: I don’t share that intuition that consciousness isn’t just a brain process. Hence, I don’t feel the force of your claims here. I guess I’m not part of the 100%. 🙂

    Also, for those that do feel that intuition, can’t you just be methodological naturalists about consciousness, get behind this car, and push? Stop worrying about the ontological questions and get to work.

    I see too many smart researchers become impotent when they perseverate on the “hard problem,” constantly asking “Sure, but why would that process be conscious?” That has shown itself to be such an unproductive question, in practice, that maybe we should set it aside for now and do some good research. Like Bernard has done by focusing on contrasting conscious versus unconscious processing in the brain.

    Hopefully I’m not coming off overly snarky here. I just think that in practice “hard problem” fetishism has set back consciousness research long enough. It derails otherwise productive discussions and blunts creativity. Perhaps you should engage in some science fiction fantasies about how we can explain consciousness naturalistically. It would probably be more productive, as it might suggest the direction we should be going, rather than drag us back to the same local minimum time and time again.

  46. UNFELT FEELINGS

    BERNIE: “Stevan, I think that may be the key to our disagreement. The evidence (and scientific consensus) regarding unconscious knowledge is simply overwhelming.”

    It may well be (part of) the key to our disagreement, but not at all because I question the evidence concerning unconscious “knowledge”!

    Unconscious knowledge is the unconscious possession of information (data, capacity, propensity). I have no problem at all with unconscious information, nor with any unconscious function.

    My problem (the “hard” problem) is with *conscious* function, including conscious information (data, capacity, propensity).

    If all “knowledge” were unconscious, there would be no hard problem, and we would not be discussing consciousness here (just perhaps the “easy” functional matter of voluntary versus involuntary behavior and accessible versus inaccessible internal information).

    And it is precisely for that reason that I keep harping on the fact that it is only because we allow ourselves to keep invoking weasel-words for consciousness (“awareness, subjectivity, intentionality, mentality, 1st-personality, qualia,” etc. etc.) — which are really just vague and hopeful synonyms — that we keep fooling ourselves that we are making some headway on the hard one.

    To keep ourselves honest and grounded, we should ditch all the other locutions and stand-ins for “conscious” and just resort to “felt” vs. “unfelt”: That would make the question-begging (and even the incoherence) transparent whenever we inadvertently fall into it.

    And the question-begging and incoherence here was precisely the notion of an “unconscious headache” — which, when stated transparently, without equivocation, would be an “unfelt ache,” which amounts to an “unfelt feeling”: a contradiction in terms (like an uncurved curve or a colorless color).

    Feeling (not “intentionality”) is the “mark of the mental.” What is not felt is not conscious. And the hard problem is to explain how and why *anything at all* is felt (hence mental), anywhere, ever.

    Information accessibility is not what it’s about. There would be accessible as well as inaccessible information inside an insentient (= unconscious) robot (as well as inside a hypothetical “zombie,” for those who are fond of those sci-fi fantasies of speculative metaphysicians).

    BERNIE: “Autobiographical memories are unconscious (until recalled).”

    And the problem is not with the fact that the stored information is there, nor the fact that it is used and plays a causal role in adaptive function, nor even with the fact that it can be made explicit and verbalized. The problem is with the fact that recall is conscious recall — i.e., felt recall — rather than just recall!

    BERNIE: “So are unaccessed ambiguities in language, vision, and other functions.”

    Right. And the problem is not with access, but with conscious (*felt*) access.

    BERNIE: “The cerebellum is unconscious; so are basal ganglia functions.”

    Indeed. And the problem is not with cerebellar and basal ganglion functions, but with conscious (felt) functions.

    BERNIE: “The corticothalamic system (under the proper conditions) is not.”

    Translation: Corticothalamic functions (some, sometimes) are felt rather than unfelt.

    The Problem: How and Why?

    (Otherwise, all you have is an unexplained correlation, not a causal explanation of how and why some functions are felt functions.)

    BERNIE: “Habituated input is unconscious. Automatisms are unconscious. Implicit motivation, implicit learning, incubation, preconscious perception, long-term ego functions, and yes, demonstrated cases of suppressed thoughts are unconscious.”

    All just fine. And no problem.

    And if all functions were like that (unfelt) there would be no problem at all.

    But they’re not.

    And that’s the (hard) problem.

    BERNIE: “The evidence is simply enormous. You can be a radical subjectivist on those matters, but you will be in a small and diminishing minority. And what’s worse, you lose a ton of explanatory power.”

    I have no idea what a “radical subjectivist” is!

    I am just pointing out (each time) that it is indeed a *problem* to explain how and why *all* functions are not unfelt: to explain how and why we are *not* zombies, if you like. (We certainly aren’t: how and why not? What’s the functional advantage? What’s the causal difference?)

    The absence of an answer (or the failure even to face the problem) is the *absence* of explanatory power.

    BERNIE: “I think this may be the key to our mutual incomprehension. (Decontextualized comprehension is also unconscious).”

    I agree that there is indeed misunderstanding here, but I am not sure it is mutual! I think I understand completely what you are saying, Bernie, but I am not sure you are understanding — or appreciating the implications of — what I am saying (about the failure and indeed the vacuity of all attempts at causal explanation of consciousness).

    (I have no idea what “decontextualized comprehension” means, but the problem, as usual, is *conscious* [i.e., felt] comprehension, not comprehension simpliciter, which is simply the possession of information and the capacity to act accordingly — including, if necessary, to verbalize it!)

    Harnad, S. (1992) There is only one mind body problem. International Journal of Psychology 27(3-4) p. 521 http://eprints.ecs.soton.ac.uk/6464/

    Harnad, Stevan (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1:164-167. http://cogprints.org/1601/

    Harnad, S. (2000) Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. Journal of Consciousness Studies 7(4): 54-61. http://cogprints.org/1617/

    Harnad, S. & Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89 http://eprints.ecs.soton.ac.uk/14430/

    Harnad, S. (2011) Doing, Feeling, Meaning And Explaining. In: On the Human. http://eprints.ecs.soton.ac.uk/22243/

  47. ERIC THOMSON: “I don’t share that intuition that consciousness isn’t just a brain process.”

    We’re not talking about intuition but about explanation: explaining the causal role played by consciousness (feeling).

    For the record, I have no doubt whatsoever that feeling must somehow be generated by the brain. The problem is explaining (causally, functionally) *How*, and, especially, *Why* (causally, functionally).

    ERIC THOMSON: “researchers become impotent when they perseverate on the ‘hard problem'”

    I agree. The fruitful questions for research are the “easy” ones: How and Why can we *do* what we can do?

    Harnad, S. & Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89 http://eprints.ecs.soton.ac.uk/14430/

  48. Stevan wrote:
    We’re not talking about intuition but about explanation: explaining the causal role played by consciousness (feeling).

    You seem to be assuming that consciousness is special, that giving all of the biological, physical, chemical facts about a conscious system still will leave us with consciousness left unexplained. That’s precisely the Hard Problem Maxim that I contest. I don’t buy into this division of the world into Easy and Hard problems.

    If experiences are brain states, then delineating the brain states would include delineating their role in the system (including their causal role). That’s just neuroscience.

    But it sounds like you are saying that methodologically, you aren’t making any recommendations other than to continue doing the science. So I guess time will tell whether your concerns hold up?

  49. THE ROAD TO LONDON

    ERIC THOMSON: “You seem to be assuming… that giving all of the biological, physical, chemical facts about a conscious system still will leave us with consciousness left unexplained.”

    Correct.

    But I’m not assuming it, I am asserting it.

    And the only way to rebut my assertion is to explain (at least in principle) how and why our brains (might) feel rather than just do.

    ERIC THOMSON: “If experiences are brain states, then delineating the brain states would include delineating their role in the system (including their causal role). That’s just neuroscience.”

    It would be, if feeling were a trait like all others. But it is not. Not because it is unobservable (though it is) but because it is causally superfluous (absent a causal explanation).

    That “delineating” brain states along the lines of delineating hepatic, renal, cardiac or pulmonary states will somehow reveal the causal role of feeling (rather than just doing) is, I’m afraid, hand-waving.

    ERIC THOMSON: “But it sounds like you are saying that methodologically, you aren’t making any recommendations other than to continue doing the science. So I guess time will tell whether your concerns hold up?”

    No, the hard problem of explaining how and why organisms feel rather than just do is as hard and real today as it will be at the end of the day after all the easy problems (of explaining how and why organisms can do what they can do) have been solved.

    I was just saying that there is no reason for the hard problem to delay or distract us from working on the easy problems.

    An occasional conference or Summer Institute on the hard problem doesn’t hurt, however; at the very least, it gets a lot of non-starters off the sports-field:

    “in an academic generation a little overaddicted to ‘politesse,’ it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a ‘To London’ sign on the Dover cliffs pointing south…” Hexter (1979)

    Hexter, J. H. (1979) Reappraisals in History. Chicago: University of Chicago Press.

  50. Stevan: so if someone thinks consciousness is a biological phenomenon, and is happy with how things are progressing, and thinks consciousness is already starting to be explained, you wouldn’t have an argument to show they are wrong, but just an assertion/prediction that this project is doomed?

    So we would be at an impasse, then.

  51. I am delighted with the way things are progressing in science, i.e., empirically testable hypotheses. PubMed shows something like 30 thousand abstracts when you type in “conscious AND brain”. So it’s going just fine, and the good, solid evidence is coming in like gangbusters. Steven Laureys just had a spectacular article in BRAIN, and has just been named an honorary burgher, or something, in Belgium. Progress.

    I can’t figure out why metaphysical discussions still command such vast resources in time, mental effort, and verbiage.

    On the other hand, that’s not really different from previous advances in science.

    So yes, I’m very pleased, but I can’t figure out why apparently not a single person has either (a) read my posted paper, or (b) responded to its contents.

    Oh, well.

    b

  52. Hi Richard,

    Yes, that question got indeed lost in the tide. My apologies.

    “State consciousness” vs. “transitive consciousness,” which I prefer to call by the traditional name of the “contents of consciousness,” are very clearly related. My Chapter 8 of the recent Baars & Gage (2012) “Beginner’s Guide to Cog NS” goes into great detail on the science, as far as it is known today. It turns out now that there is a very beautiful empirical/theoretical story to tell about waking corticothalamic activity and the momentary contents of consciousness. Very cool stuff. But needless to say, since we do not know the “neural code” of conscious contents with absolute certainty today, there are still many holes in the cheese.

    During waking consciousness the corticothalamic core goes into a far-from-equilibrium oscillatory state comparable to people talking to each other in a stadium, billions of point-to-point conversations among neighbors chatting with each other, such that a gross average of all the oscillations adds up to a flat line. (Which has been the reason why event-related potentials work so nicely). During that state we see all kinds of emergent wave propagation in the C-T core, as in the Izhikevich & Edelman (2008?) PNAS massive simulation. Independent evidence from Doesburg, Ward et al., and from Gaillard & Dehaene, and others, shows that a variety of potential signaling waveforms thrive in that medium. That is emphatically NOT true during the natural unconscious state of Slow Wave Sleep, in which the C-T core almost literally turns on and off at < 1 Hz. This simultaneous buzz-pause activity among 100 billion neurons in the C-T core disrupts information processing at least once a second. It is perceived to be unconscious (subjectively) upon waking. There are striking similarities between SWS and pathologically unconscious states, notably loss of consciousness due to epileptic seizure.
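
    To illustrate just the averaging logic behind the parenthetical point about event-related potentials (a toy simulation, not a model of the C-T core; all parameters are arbitrary): oscillations that are not phase-locked to the stimulus cancel in the trial average, while the time-locked component survives.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 500
        t = np.arange(0, 1.0, 1 / fs)   # 1 s at 500 Hz

        # Stimulus-locked component, identical on every "trial".
        erp = 0.2 * np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.3) ** 2) / 0.01)

        def chatter():
            # Ongoing oscillations with random frequencies and phases: the
            # "stadium conversations", not phase-locked to the stimulus.
            f = rng.uniform(4, 80, 50)   # theta .. high gamma, in Hz
            p = rng.uniform(0, 2 * np.pi, 50)
            return np.sin(2 * np.pi * f[:, None] * t + p[:, None]).mean(axis=0)

        trials = np.stack([erp + chatter() for _ in range(200)])

        # The chatter averages toward a flat line; what remains is the
        # time-locked component: the logic of the event-related potential.
        avg = trials.mean(axis=0)
        print(round(float(np.corrcoef(avg, erp)[0, 1]), 3))   # close to 1.0

    Averaging N trials shrinks the non-locked chatter by roughly a factor of the square root of N, which is why the printed correlation comes out near 1.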

    Similarly, dreaming is clearly a conscious state, viz. the EEG reflecting waking activity in the C-T core, subjective report, fMRI, and so on.

    How are momentary conscious events reflected in the waking state? The best current hypothesis is that there is momentary linking among distant, widely distributed regions of the C-T core, probably involving cross-frequency phase locking among distant labeled-link arrays, like the visuotopical regions of cortex. This can be viewed as global broadcasting, but we should always hold that lightly, as an hypothesis. Global broadcasting is only one out of five postulated necessary conditions for consciousness in my 1988 book. There are almost certainly additional necessary conditions, since empirical science only arrives at necessary and sufficient conditions after many years of study. Thermodynamics did so 300 years after Galileo. It's an inductive enterprise.

    There is also a very interesting body of evidence suggesting roughly 100 ms "microstates" that may reflect the contents of consciousness, but more likely a momentary resonant state or content-related phase change in the nonlinear dynamics of the C-T core. Evidence for that comes from Walter Freeman's work, and from other electrophysiological studies suggesting microstates. This evidence is hotly debated, but it's very intriguing. Walter Freeman has very beautiful graphics from a Hilbert analysis of the cortical EEG showing a "collapse of a wave packet" every 100 ms or so, similar to the theta wave.
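
    For readers who want to see the machinery: both the cross-frequency phase-locking hypothesis above and Hilbert analyses of this kind rest on the analytic signal. Here is a minimal sketch on synthetic data in which the coupling is built in by construction (standard numpy/scipy calls; the "regions", frequencies, and the mean-vector-length index are illustrative choices, not the specific analyses cited):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        rng = np.random.default_rng(0)
        fs = 1000
        t = np.arange(0, 10, 1 / fs)

        # Two synthetic "regions": theta in A; gamma in B whose amplitude is
        # tied, by construction, to A's theta phase (the coupled case).
        theta_phase = 2 * np.pi * 6 * t
        region_a = np.sin(theta_phase) + 0.1 * rng.standard_normal(t.size)
        region_b = (0.5 * (1 + np.cos(theta_phase)) * np.sin(2 * np.pi * 60 * t)
                    + 0.1 * rng.standard_normal(t.size))

        def bandpass(x, lo, hi):
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        phase = np.angle(hilbert(bandpass(region_a, 4, 8)))   # theta phase of A
        amp = np.abs(hilbert(bandpass(region_b, 40, 80)))     # gamma envelope of B

        # Mean-vector-length coupling index: near zero when the gamma envelope
        # ignores theta phase, larger when the two are locked together.
        mi = np.abs(np.mean(amp * np.exp(1j * phase)))
        print(round(float(mi), 3))

    Real analyses add surrogate statistics and much more careful filtering; the point here is only to show what "phase locking" cashes out as computationally.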

    The simplest explanation is a global broadcast, BUT with the other features described in the last chapter of the 1988 book. Global broadcasting is only one of half a dozen necessary conditions suggested by the evidence at that time. In general, the brain evidence has converged well with predictions made in 1988. Still, this is not Newton after the calculus, but more like Darwin in 1880. A ton of evidence, some plausible arguments, but no mechanistic basis that is rock solid as yet.

    The nice thing is that even if the theory turns out to be wrong, the convergence of relevant evidence is still real.

    b

  53. Do you think that dreaming is a normal conscious state, or is that just a matter of definition?

    I’ve always considered it to be a partially conscious state which is, more or less, transitively conscious but not state conscious. Wouldn’t a truly conscious but dreaming state be one of hypnagogic hallucinations?

  54. Hi Bernie,

    While I agree with most of what you say, there is one important point on which we might disagree. You claim that we are far from being able to propose the necessary and sufficient conditions for consciousness to exist. It seems to me that consciousness necessarily has two aspects — a subjective aspect (1pp) and an objective aspect (3pp). I have argued (a) that in the subjective aspect, consciousness exists *if and only if* an organism has an experience of *something somewhere*. And (b) in the objective aspect, consciousness exists *if and only if* an organism has a transparent brain representation of the world from a privileged egocentric perspective. I also argue that the best that science can do is to formulate theoretical models of the objective aspect (3pp) — models that specify the structure and dynamics of brain mechanisms that can generate corresponding analogs of the subjective aspect (1pp), and then test these models empirically by checking how well they predict measures of salient subjective content (1pp). This is what I’ve tried to do in my own work. The tension between us is that I do propose *necessary and sufficient conditions for consciousness to exist*, and I use these theoretical proposals as a foundation for empirical investigation. What would be your arguments against this approach? Would you have principled objections to it?

  55. Arnold I can’t speak for Bernard, but it seems there is very little tension between your theories.

    You could say that your retinoid system is his Global workspace. Bernard doesn’t give too detailed a description of the intrinsic properties of the workspace, but more a description of how it is used, its inputs and outputs (e.g., the information is broadcast globally to other “modules” in the brain). One intrinsic property he talks about is that the workspace’s representation of the world has a kind of ‘logic’ or internal consistency, though I never found that very well defined.
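
    As a toy sketch of that broadcast motif (my own illustration, not Bernard’s theory or any of its published implementations; all names invented): specialist modules bid for access, and the winning content is broadcast back to every module.

        import random

        random.seed(0)

        class Module:
            def __init__(self, name):
                self.name = name
                self.inbox = []   # what the global broadcast delivers here

            def bid(self, stimulus):
                # Each specialist proposes content with some salience
                # (random here, a stand-in for real competition).
                return (random.random(), f"{self.name} reports {stimulus}")

            def receive(self, content):
                self.inbox.append(content)

        modules = [Module(n) for n in ("vision", "audition", "memory")]
        for stimulus in ("red car", "siren"):
            # Competition for the workspace, then broadcast of the winner.
            salience, content = max(m.bid(stimulus) for m in modules)
            for m in modules:
                m.receive(content)

        print(modules[0].inbox)   # every module saw both winning contents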

    Every time I’ve thought I had a model of consciousness that is a competitor to Baars’, I realize it could just be an implementation of it.

    And note this is from memory, reading both of your books about 10 years ago, so my apologies for any inaccuracies.

  56. Bernard. This is the first time I’ve seen a discussion of the costs (versus benefits) of consciousness. A very interesting twist on the usual theme. When we are conscious of X, we are not conscious of ~X, and if ~X is important, this can be a big problem!

    Talking about this in evolutionary terms: context-dependent preferences seem nearly universal in living things, from plants to paramecia.

    Could consciousness be a natural evolutionary outgrowth of this fact? Once nervous systems emerged that were as complicated as the environments in which we lived, we might expect some of the mechanisms for behavioral preferences to be applied to the brain itself. Certain parts of the neural ecosystem are more important than others at different times, and the very most important at a given time determine the contents of consciousness.

    One problem is: this sounds like I am talking about attention, which is subtly different from consciousness (or so it seems). I have some ideas about this chicken/egg story, but they aren’t well formed yet, so I’ll leave it as a question for now: which came first, consciousness or attention (or do they always go together), and is the selectivity of one different from the selectivity of the other?

  57. Hi Eric,

    That’s my sense, too. I think Arnold’s retinoid theory looks like the egocentric visual and body context that’s associated with the parietal lobe, as in the case of attentional neglect.

    There’s nothing wrong with converging theoretical ideas, especially at a time when our evidence is still relatively imprecise. The notion of a battle of the theories occurs sometimes in the history of science, but rarely, and it’s not a necessary condition for theory to crystallize. However, at some point it may become useful to play off alternative hypotheses, if they become testable.

    What we DO need, I believe, is a lot of focused discussions between different theoretical perspectives, to get more precision and testability. If there is overlap, so be it. If there are major differences, that’s ok, too, as long as they are testable.

    By the way, the details of GWT have been worked out to a considerable extent in my 1988 book, and in various implementations, including Stan Dehaene’s, Murray Shanahan’s, and especially Stan Franklin’s. I am hoping to find the time, somehow, to write up a neurobiological version of the theory, which will add a lot of detail. In that process I can also bring in other work, e.g., on the visual system, where many details have been worked out very well. It’s a big job, and you want to make the most productive choices. For example, the notion of visuotopical maps connected by “labeled lines” (axons) in the visual brain is very well worked out, and can just slip into the appropriate parking space.

    If you would kindly remind me about your own work, Eric, that would be very helpful.

    BJB

  58. Thanks for your comment, Eric. I agree that the retinoid system is a global workspace, but consciousness, as I see it, requires more than a global workspace. (For example, a Google server center is a global workspace.) Consciousness requires subjectivity, a spatiotemporal perspectival origin — what I call the core self (I!) — within the global workspace, plus neuronal mechanisms that can *bind* recurrent input from diverse sensory modalities in proper spatiotemporal register within a coherent volumetric representational space. For example, if you see a red car traveling from left to right, the color of the car has to be represented in front of you within the shape of the car, and both color and shape have to move in concert from left to right in your egocentric phenomenal (retinoid) space. The neuronal structure and dynamics of the retinoid model provide the biological machinery to accomplish this. It will be interesting to see whether Bernie thinks these requirements are necessary for conscious content.
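
    To make the register requirement concrete, here is a toy sketch; the data structure is my own illustration, not the retinoid circuitry itself. Features count as bound only when they are indexed at the same egocentric coordinates at the same moment, so the red and the car-shape move through phenomenal space in concert:

    # Toy spatiotemporal binding: features are bound only if they occupy
    # the same egocentric cell at the same time step. Illustrative only;
    # this is not the neuronal retinoid mechanism.

    def bind(features_by_location):
        # keep only locations where several features are co-registered
        return {loc: feats for loc, feats in features_by_location.items()
                if len(feats) > 1}

    # a "red car" moving left to right: at each step the color and the
    # shape share one egocentric cell, so they stay bound as they move
    for t, x in enumerate(range(-2, 3)):
        scene = {(x, 0, 5): ["red", "car-shape"]}
        print("t =", t, bind(scene))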

  59. Bernard, I have avoided publishing on consciousness within my home field of neuroscience (I focus more on basic sensory processes in somatosensory systems of rodents and leeches).

    My first public stab at a theory was over at Philosophy of Brains:
    philosophyofbrains.com/2008/01/21/do-qualia-serve-to-tag-the-here-and-now.aspx

    It was inspired partly by Gregory and Trehub, but I also think it could consistently be treated as a description of the intrinsic features of the global workspace (though I am not committed to a global workspace by any means: I am still very much agnostic, leaning atheist, about that).

  60. My present view is slightly different from what I posted at the link above: for instance, less of a focus on sensory transducers now (though I do think that is important for most forms of perceptual awareness).

  61. Yes, absolutely, Arnold. The GW was never intended to be the only necessary condition suggested by the evidence. The 1988 book suggested FIVE or SIX necessary conditions, and since that time we have learned so much more about the brain basis. If I get a little time I can start writing about all the currently indicated necessary conditions for consciousness in the human and primate/mammalian brain. The evidence is out there, and badly needs to be summarized.

    No “sufficient” conditions, of course, because this is inductive science. We’re climbing up the mountain, and we can’t tell if there is another big outcropping up ahead, that has to be tackled. Maybe NMDA synapses? That case has been made by Hans Flohr. And almost certainly the waking-state neuromodulators. I don’t talk about those much, because they are not that interesting, but perhaps that’s a mistake. Recent evidence indicates that very short pulses (200 ms) of dopamine may co-occur with conscious contents under some conditions.

    We keep being surprised, which is a good thing, because that means reality is having a say.

    🙂

    b

  62. The question of “sufficient” conditions can get murky I suppose. But consider this: I would not have been justified in predicting the results of my SMTT experiment in which vivid conscious experiences were created without corresponding sensory inputs unless I had hypothesized that activation of the subject’s putative retinoid system was both necessary and SUFFICIENT to generate the predicted conscious content. It seems to me that a critical empirical test of a theoretical entity requires the claim of sufficiency for the entity. For example, in testing for the existence of a new subatomic particle it is assumed that if the particle exists its properties will be *sufficient* to produce bubble tracks of a predicted kind under the specified experimental conditions.

  63. Yes, that makes sense for very well-specified experimental paradigms. If you are developing a theory that is intended to work across many experimental paradigms, that goal runs into trouble. One of the striking phenomena, to me, of experimental work in perception is the constant discovery of new and unanticipated variables.

    I’m willing to stick my neck out and say that even color perception, which has been studied since Isaac Newton, has not been fully specified yet. The neurophysiology of the retina keeps turning up more neurotransmitters and, I believe, more types of synapses as well. The melanopsin-based light-dark receptor in the retina was a recent discovery. Yet the retina’s basic neurons have been studied since Cajal, and the optics of the eye have been known since Descartes.

    So count me as a bit of a skeptic about necessary conditions.

    This is also Gerald Edelman’s view, by the way, based on the principle of functional redundancy (degeneracy) in biological systems.

  64. Bernie, I think the tension arises because of a failure to distinguish between *theoretical necessity* and the notion of *proven necessity*. Science is a pragmatic endeavor. It doesn’t provide absolute PROOF; we are guided by the weight of evidence. But suppose I were to test the predictive power of a theoretical model by claiming that it MIGHT or MIGHT NOT do what it is supposed to do; that it might or might not show result X under the specified experimental conditions. In this case, there would be no standard of success for the model. So from a theoretical stance we say that if the theoretical model is correct, it will necessarily produce result X. Or so it seems to me. If a theory claims that an event will *necessarily* happen under the assumptions of the theory and the prediction fails an empirical test, it doesn’t mean that the theory is necessarily false; it is simply one piece of empirical evidence against the theory. If it passes the test, mark it up as evidence for the theory. Of course I agree with you that new and unanticipated findings can always arise. That’s why all theories are provisional.

  65. Bernard, my view was ‘The contents of consciousness are a model of events that are currently evoking activity in a subject’s sensory receptors.’

    It’s a basic Helmholtzian view, really. Of course, our brain employs the thalamocortical (TC) loops (and likely the brain stem reticular formation) to build up this model. I see this as the conscious ‘core’, much as you do. The above is more a description of what this core is doing.

    Now I expand beyond sensory perception, to include events inside the brain itself.

  66. Right, but the retina-LGN pathway is one-way, so it can’t be part of the resonant core. Two-way connections run from LGN to V1, Vn, … to visuotopical maps in the frontal lobe, down to the basal ganglia, then back to the thalamus and up to cortex again.

    One-way connections are rare in the C-T core, but they do exist; another is the subiculum. In a naturally two-way oscillatory system, one-way connections may play a special role, since they funnel activity. (That can also be done dynamically via GABAergic inhibition.) But the retina only gets visual input, and it projects only to the LGN.

    I don’t see how it can be part of a dynamic core.

    b

  67. Bernard, my claim is not that the retina is part of the NCC. It is that in awake perceptual consciousness (the canonical case of consciousness) the brain builds a model of the world, based partly on the stimuli coming in through sensory transducers. Mentioning ‘sensory transducers’ is just to say that the model is based partly on incoming stimuli (it answers the question of how the brain receives stimuli).

    I take it this is largely uncontroversial, except to an antirepresentationalist.

  68. Bernard: “But we also have non-conscious visual inputs, or in any modality, for that matter. That is a major challenge.”

    Yes, indeed. If we agree that the challenge is to make principled distinctions between conscious brain representations and non-conscious representations (both exteroceptive and interoceptive) we must acknowledge that at any moment there are vastly more non-conscious representations in all the sensory modalities than there are in the modality of consciousness. The key question is what distinguishes conscious representations from non-conscious representations. My answer is that conscious representations can only be representations in relation to a locus of perspectival origin (the core self, I!) within a global volumetric medium. In the retinoid theory of consciousness, this is retinoid space. Moreover, this (retinoid) brain space *must* be innate because organisms have no sensory transducers to detect the space they live in. I might add that contra arguments by some (I have argued this issue with Aaron Sloman), such a coherent volumetric space CANNOT be constructed via learning. Consciousness depends on the evolutionary emergence of a system of brain mechanisms that can provide an innate internal representation of the space a creature lives in, from its own privileged egocentric perspective (e.g., a retinoid system). Activation of this neuronal space alone generates a primitive phenomenal experience of an “empty” world that will subsequently be enriched by the excitatory input of sensory patterns from the non-conscious sensory modalities.

  69. Bernie, among your five necessary conditions for consciousness you wrote:

    “… consciousness requires a global message to be available long enough for many local processors to adapt to it, to reduce their uncertainty relative to the conscious message. That is to say, this condition may imply that conscious events must have some minimal duration … ”

    I take this to also be a description of an *extended present*, which enables us to understand sequential patterns of stimulation as in reading, music, speech, etc. Would you agree?

  70. Thanks for reading my last chapter, Arnold. You are one of about a dozen people who have done so since 1988. Well, no. But not many!

    This kind of adaptive widespread processing takes place over multiple time scales, ranging from 0.1 sec to several seconds. As you know, a single photo flash against a dark background may take << 0.1 sec, but its afterimage remains for many seconds. For a number of reasons the natural physiological "consciousness cycle" is likely to occur in the domain of 10 Hz, roughly theta and alpha. That is also found by the Doesburg et al. group, by Revonsuo in at least half a dozen ERP studies comparing conscious to matched unconscious input, and by Gaillard & Dehaene in a recent PLOS ONE article. Typically, for a novel stimulus there is a post-stimulus lag time of a few hundred ms, then a widespread burst associated with conscious, but not matched unconscious, input, lasting about 100 ms, in the theta to gamma frequency domain.

    This is also the time domain reported by Walter J Freeman and a European group specializing in EEG microstates.

    There is evidence suggesting that the conscious "global burst" is itself embedded in a sensorimotor loop, which is not conscious, but which controls goal-driven tasks. This loop may involve the basal ganglia, which loop back to the thalamus and then up to cortex again.

    The big question is how such fast events can then give rise to longer events, like a melodic line or cadence, or a sentence, or a tennis serve. Those are all higher-level Gestalts with internal organization, like a word or a sentence. Each embedded level adds greater predictability, so that there is a temporary hierarchy of processing levels, which start to resonate with each other to create the emerging Gestalt. Rumelhart and McClelland showed years ago that you can get the word superiority effect, a sort of Gestalt-y effect, by having two neural net levels resonate with each other, such that "the rich get richer." What goes for words goes even more for structured phrases and sentences.
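
    A toy numerical sketch of that "rich get richer" resonance may help; the weights and update rule below are made-up illustrations, not Rumelhart and McClelland's actual parameters:

    # Two mutually excitatory levels: three letter units and one word
    # unit. With top-down support, the same letter evidence grows faster
    # ("the rich get richer") than it does with no word context.
    # All rates and weights are arbitrary illustrative numbers.

    def run(top_down):
        letters = [0.2, 0.2, 0.2]   # letter-unit activations
        word = 0.1                  # word-unit activation
        for _ in range(10):
            word += 0.3 * sum(letters) * (1 - word)    # bottom-up support
            boost = 0.3 * word if top_down else 0.0    # top-down resonance
            letters = [min(a + (0.1 + boost) * (1 - a), 1.0) for a in letters]
        return round(letters[0], 3)

    print("letter within a word:  ", run(True))
    print("letter with no context:", run(False))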

    There is recent thinking that there is also a hierarchy of widespread regular oscillations, starting from the so-called Slow Oscillations (SOs), which occur 24 hours per day at < 0.5 Hz. Since each global oscillation lowers the firing threshold of billions of neurons during the UP phase of the wave, and raises their thresholds during the DOWN phase, it is thought that SOs may coordinate faster cycles. This is also a natural mechanism to integrate cognitive events that are organized over time, like a sentence or a musical phrase, or reaching out to a coffee cup and bringing it to your lips.
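
    As a cartoon of that nesting (the frequencies and the threshold below are placeholder numbers, not measured values):

    import math

    # Cartoon of nested oscillations: a slow (~0.5 Hz) wave "permits"
    # bursts of a faster ~10 Hz cycle only during its UP phase, the way
    # SOs are thought to gate faster activity. Numbers are placeholders.

    def burst_times(duration_s=4.0, dt=0.01):
        times, t = [], 0.0
        while t < duration_s:
            slow = math.sin(2 * math.pi * 0.5 * t)   # slow oscillation
            fast = math.sin(2 * math.pi * 10.0 * t)  # ~10 Hz cycle
            if slow > 0 and fast > 0.9:              # UP phase + fast peak
                times.append(round(t, 2))
            t += dt
        return times

    print("burst times (s):", burst_times())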

    The quick story is that long-term conscious events are plausibly made up of a number of roughly 300 ms cycles culminating in a 100 ms burst. Freeman has them somewhat shorter. The extremely well-established Event Related Potential has a stereotypical shape over about 1-2 seconds, and the conscious component may be associated with P3b, a bump on the third positive peak. The actual measured energy in the consciousness-associated signal is quite small, with far more oscillatory power belonging to the background cortical and thalamic activity. But that's also true of FM radio, for example.

    This is also similar to foveal fixations, which are typically brief compared to the perceived flow of a conscious visual event, like a dance. The important point is that global broadcasts RECRUIT widespread unconscious processes, which loop back (all C-T links are bidirectional) in a reentrant fashion. That is presumably where we get extended notes in a musical piece, simple rhythms and musical intervals, syllables and double-syllable utterances, and integrated actions like a "reach" for a coffee cup. We launch our hands toward the coffee cup via a conscious event, and as we know from the Milner and Goodale experiments, after that the moment-to-moment control of the hand and arm is unconscious. (They can be optionally conscious, in the same sense that pursuit eye movements can be under moment-to-moment conscious, voluntary control. But they work pretty well unconsciously, even if the coffee cup changes in some simple way.)

    The most obvious analogy is to a series of movie stills, which start to look like real "conscious" motion quite quickly, especially if the visual events make sense as a coherent event over time. The most important single point is that each conscious "moment" drives widespread adaptive unconscious processes, such as the constant "updating" of the medial temporal and parietal lobes.

    In relation to your retinoid theory, Arnold, a hugely important point is that such updating involves "updating of the ego perspective" as the body moves through space. Without such updating (in parietal and MTL regions), the occipital visual input would make no sense. The visual input stream would be decontextualized. I have to look up the articles on so-called "simultanagnosia," where smooth temporal integration of the visual world is disrupted, but I would guess it involves parietal and maybe MTL damage, from MT/V5 upward. There is a visuotopical map in DL-prefrontal, which may also be involved in extended subjectively conscious events.

    Long answer to a short question.

    It's time to write it all down, get it out there somehow, and hope that people read it.

    🙂

    b

  71. Bernie, I would like your thoughts on an important issue that I think has received insufficient attention in published work on consciousness.

    You wrote: “For a number of reasons the natural physiological “consciousness cycle” is likely to occur in the domain of 10 Hz, roughly theta and alpha … Typically for a novel stimulus there is a post-stimulus lag time of a few hundred ms, then a widespread burst associated with conscious, but not matched unconscious input, lasting about 100 ms, in the theta to gamma frequency domain…. The quick story is that long-term conscious events are plausibly made up of a number of roughly 300 ms cycles culminating in a 100 ms burst.”

    I think some clarification is needed here. Are the statements above to be taken as meaning that we are *unconscious* for successive intervals of ~100 ms during our normal waking day? If so, I would dispute this contention. It seems to me that the empirical findings suggest that there is a latency in the *perception* (conscious representation) of novel stimuli, but that this perception depends on the subject being in a conscious state. In other words, consciousness must *precede* any instance of perception (in my model, an update of the contents of retinoid space). For example, in Dehaene’s experiments, he and his colleagues are measuring the widespread brain changes (what he sometimes calls “ignition”) in response to a novel perception of a previously masked stimulus by a person who is already conscious. So I think we have to distinguish between the steady ambient state of consciousness and its changing perceptual content, which is organized by shifts in selective attention. I know you have spoken of fringe consciousness; is that what I would call the ambient conscious state — our phenomenal “ground”?

  72. Yes, of course we have to distinguish between the “state of consciousness” and the stream of conscious contents. What’s neat is that the relationship is increasingly clear. My new Chapter 8 in the latest textbook by Baars & Gage covers that in great detail. Essentially, waking-state electrophysiology makes the corticothalamic system extremely sensitive to inputs and to signals superimposed upon the background oscillatory activity.

    Is the brain unconscious between fast “snapshots” of globally broadcast contents? Well, is the brain unconscious between foveal fixations, or between movie stills that go by fast enough to give a sense of continuity and motion? I don’t think so, judging by the electrophysiology. However, Freeman’s Hilbert analysis does result in all-or-none phase differences. I don’t understand the links in that chain well enough to say anything useful about it.

    The most unconscious natural state is slow-wave sleep, which does indeed involve massive buzz-pause activity among 100 billion neurons, give or take, every half second or so. During the massive pause we are presumably unconscious. There is interesting evidence that during the massive UP state some functional processing still remains.

    The distinctions you are talking about, Arnold, are obviously essential. Right now this is the best answer I can give. I think the new Chapter 8 is about as good as we know right now.

    Best wishes,

    B

  73. Sorry, Arnold, I just re-read your comment and I think I see what you mean. Yes, that is a very interesting question. Intuitively I would agree with your judgment that there is a quasi-conscious background to the stream of percept-like conscious contents. This issue is addressed in the meditation traditions (under the heading of “pure consciousness,” defined very precisely as ‘consciousness without content’). It also appears in the Western philosophical tradition in a different guise as the problem of particulars and universals, but that’s a longer story.

    I don’t know of an empirical way to address that question. The sense I have is that it’s a sort of Ganzfeld background. Experimentally the question is what you would select as the comparison condition, if it is indeed a “ground” of figural conscious perception.

    I do have a specific experimental proposal for testing the meditation-derived hypothesis of momentary pure consciousness. I’m hoping to write something on that body of evidence soon, and maybe somebody will run the right experiments on it. Momentary pure consciousness is an easier question, because there is an onset and offset of the event, and therefore you can create a contrasting stimulus to pick up the transitions, in the same way you can pick up “moving gaps” in binaural stimulation that appear as real “things.” In music, subtle adjustments of pauses and silences are often very powerful, because they manipulate expectations.

    All interesting stuff.

  74. Thanks for your reply, Bernie. What I’m suggesting is that there must be a brain representation of a personal surround — an empty world? — as a precondition for the stream of conscious content. This would correspond to the minimal ground of consciousness, what I designate as C1 in my paper “Space, self, and the theater of consciousness”, in *Consciousness and Cognition*, 2007. It would be the tonic ambient scope of our phenomenal world that is filled by the phasic stream of perceptual events. As I see it, retinoid space fills the bill.

  75. Right. The other possibility is that the retinoid space is unconscious, and that it is constructed in the same way that language comprehension is constructed. The famous stick figure with small white bulbs at the joints, walking through a totally black “space,” is another example. I can’t remember the Swedish perception researcher who made the first examples. Now, is the filled-in stick figure conscious or not? It’s not an easy question to answer. If you ran priming tasks and closure tasks, I believe the evidence would suggest that we have detailed, spatially specific knowledge about the filled-in stick figure.

    To make things even a little trickier, it is quite possible that retinoid space, and the inferred walking stick figure with the bulbs on the joints, consist of all three: unconscious inferential processes, conscious image-like moments and events, and fringe-like fill-ins. That is, one could imagine a topographical landscape where the vertical dimension is perceptual specificity. Gabor diagrams are a little bit that way. In the case of visual imagery we know that images are rarely perceived as entirely vivid and “filled in.” Children may be able to do that, and rare talented adults. But most adults, especially male scholarly types, probably have what William James called “vague and scrappy” mental images.

    However, not in dreams. Dreams are often visually very vivid even for people who can’t generate vivid images during waking.

    All that is not an answer, but just an outline of three possible answers.

    Best,

    B

  76. Bernie, you posed this interesting question: “Experimentally the question is what you would select as the comparison condition, if it is indeed a “ground” of figural conscious perception.”

    I think that the Julesz random-dot stereogram, for example, is a comparison condition that provides empirical evidence for an egocentric volumetric “ground”. Condition 1: with one eye closed, we have a phenomenal visual experience of a field of random dots on a 2D plane in front of us. Condition 2: open both eyes, and we have a phenomenal visual experience of a 3D object *extending into a volumetric space that was not perceptually occupied in condition 1*. There are other examples that might be given, but I would be interested in your thoughts about this as a test of the claim that the conscious brain must have a perspectival representation of a volumetric surround.
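
    For anyone who wants to try this, a Julesz-style random-dot pair is easy to generate; here is a minimal sketch (the image size and disparity are arbitrary choices of mine):

    import numpy as np

    # Minimal random-dot stereogram: the two eyes' images contain the
    # same random dots, except that a central square is shifted
    # horizontally in the right eye's image. Each image alone is a flat
    # dot field; fused binocularly, the disparity makes the square
    # stand out in depth. Size and disparity are arbitrary.

    rng = np.random.default_rng(0)
    size, disparity = 128, 4
    left = rng.integers(0, 2, (size, size))
    right = left.copy()

    r0, r1 = size // 4, 3 * size // 4    # bounds of the central square
    # shift the square's dots leftward in the right eye's image
    right[r0:r1, r0 - disparity:r1 - disparity] = left[r0:r1, r0:r1]
    # refill the uncovered strip with fresh random dots
    right[r0:r1, r1 - disparity:r1] = rng.integers(0, 2, (r1 - r0, disparity))

    np.save("left_eye.npy", left)
    np.save("right_eye.npy", right)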

  77. That’s certainly a step in the right direction. Binocular rivalry will do it, and binaural localization too. Haptic exploration, all that stuff. If we then add Gibson’s opening vistas as we walk along (or are pushed in a baby carriage facing forward, or get the corresponding stimuli in a video game), we get both a vanishing point on the horizon we are heading toward and an implicit vantage point for the inferring observer, i.e., our own. Bjorn Merker has some source that claims the inferred ego center of that space resides an inch or so behind the top of the nose. It’s all perceptual inference, of course, and my guess is that it’s doubled in the brain, involving both parietal cortex for the egocentric body surroundings and MTL allocentric/egocentric spatial maps. I believe there are also body-surround maps in the outer shell of the thalamus, the reticular nucleus, which has “gatelets” that influence the opening and closing of sensory thalamic gates to the cortex.

    Since babies develop sensorimotor skills very early in life, the inferred viewpoint of the observing ego is plausibly a “body ego,” as Freud thought. Then over development, we can imagine more interpersonal, cognitively complex, self-regulating and abstract ego functions being superimposed on the sensorimotor perspective of the inferred self.

    In any case, to get back to your point, I think Kant was right to say that ALL conscious experiences involve an interaction between an observing ego system (with many levels) and sensory input. In parietal neglect, due to damage to the right parietal cortex, the left side of ALL conscious objects and scenes disappears, even though the flow of visual input to the eyes, the occipital cortex, and so on is intact. The ventral object stream is working ok, but the dorsal “spatial context” is damaged. Without a retinoid visual space, a context, there is no conscious object.

    I believe that fits your thinking. Is that right?

    b

  78. B: “Without a retinoid visual space, a context, there is no conscious object.
    I believe that fits your thinking. Is that right?”

    My claim is broader than that, Bernie. Retinoid space is not just visual space; it is the space of ALL phenomenal features, including all exteroceptive and interoceptive perceptions. Retinoid space is our personal phenomenal world. For example, suppose you accidentally bang your left thumb with a hammer. If you close your eyes (no visual input), you have a conscious experience of pain in your thumb to the LEFT of you in egocentric space. If you now reach to your right, you experience the pain to the RIGHT of you in your egocentric space, even without visual input. Another example: If you want to scratch an itch on your back, you reach BEHIND you in your egocentric space (no visual target). My contention is that a tonic level of activity in the putative neuronal structure and dynamics of retinoid space is a necessary precondition for all perceptual experiences. In other words, there has to be a sense of a surround with phenomenal directions in relation to a locus of perspectival origin, what I call the core self. This is subjectivity. This is the tonic ground of consciousness; it is what the evolutionary emergence of the retinoid system has given only to certain creatures. They are the ones that we can say are conscious creatures.
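
    A toy sketch of that remapping may help; the coordinates and body layout below are purely illustrative. The pain is tagged to the thumb, but it is *felt* at the thumb's current coordinates in egocentric space, so moving the arm moves the phenomenal direction of the pain:

    # Toy egocentric remapping: a pain is experienced at the current
    # egocentric position of the injured body part. Coordinates are
    # illustrative: x is left(-)/right(+), y is behind(-)/in front(+).

    def felt_location(shoulder, hand_offset):
        return tuple(s + h for s, h in zip(shoulder, hand_offset))

    left_shoulder = (-0.2, 0.0)
    print("arm held to the left:", felt_location(left_shoulder, (-0.4, 0.3)))
    print("arm reaching right:  ", felt_location(left_shoulder, (0.9, 0.3)))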

    I appreciate your willingness to engage on these matters. In-depth discussion of issues like this is absolutely essential as we move forward in developing a robust science of consciousness.
