Special Session on Action Consciousness

Welcome to the Special Session on Action Consciousness: Myrto Mylopoulos, The Graduate Center CUNY

Presenter 1: Élisabeth Pacherie, Institut Jean Nicod

Commentator 1: Markus Schlosser, Leiden University

Commentator 2: John Michael, Aarhus University

Presenter 2: Chris Frith, Wellcome Trust Centre for Neuroimaging

Commentator: Patrick Haggard, University College London

28 Comments

  1. Hi, fascinating session! I have, first, a question about Frith’s paper:

    Can a Bayesian account explain delusions of thought insertion?
    In his paper Frith points out that the comparator model has failed to provide an explanation of delusions of thought insertion. I agree that this is the case, but unlike others (e.g. Synofzik) I don’t think this is a problem for a comparator account of delusions of alien control. The symptoms double dissociate and there are a number of differences between them. Importantly, as Frith mentions, those suffering delusions of alien control don’t show gross motor dysfunction: it is only the experience of agency which is disrupted, and their actual control of actions is largely maintained. However, this does not seem to be true of the delusion of thought insertion. As well as lacking a sense of agency over their thoughts, these patients lack some of the normal control over their thinking.

    Now the evidence for this is a little slippery, as strictly speaking it involves verbal hallucinations rather than delusions of thought insertion. However, as many have long suspected, there is much in common between these symptoms (some, e.g. Barrett (2004), have even suggested that verbal hallucinations and delusions of thought insertion are different interpretations of the same type of anomalous experience), so I suspect that the evidence regarding verbal hallucinations will generalise to delusions of thought insertion.

    In a number of paradigms Waters, Badcock, Paulik et al (Waters et al. 2003; Waters et al. 2006; Badcock 2008; Badcock et al. 2005; Paulik et al. 2006; Paulik et al. 2007; Paulik et al. 2008) have shown that those suffering from verbal hallucinations and those who are predisposed to hallucinations have a specific difficulty in deliberately inhibiting thoughts which are not task relevant. This inability to inhibit thoughts seems to go beyond a lack of a feeling of agency to an actual disruption of the subject’s thinking.

    I think this is significant not just for the comparator model but for the more general Bayesian account which Frith ultimately advocates. So, then, this is my question: how could a Bayesian framework allow us to account for such disruption of thinking in these symptoms?

    Badcock, J. C. (2008). “The Cognitive Neuropsychology of Auditory Hallucinations: A Parallel Auditory Pathways Framework.” Schizophrenia Bulletin.
    Badcock, J. C., F. A. V. Waters, M. T. Maybery and P. T. Michie (2005). “Auditory hallucinations: Failure to inhibit irrelevant memories.” Cognitive Neuropsychiatry 10(2): 125-136.
    Barrett, R. J. (2004). Kurt Schneider in Borneo: Do First Rank Symptoms Apply to the Iban? Schizophrenia, Culture and Subjectivity: the Edge of Experience. J. H. Jenkins and R. J. Barrett, Cambridge University Press.
    Paulik, G., J. C. Badcock and M. T. Maybery (2006). “The multifactorial structure of the predisposition to hallucinate and associations with anxiety, depression and stress.” Personality and Individual Differences 41: 1067-1076.
    Paulik, G., J. C. Badcock and M. T. Maybery (2007). “Poor intentional inhibition in individuals predisposed to hallucinations.” Cognitive Neuropsychiatry 12(5): 457-470.
    Paulik, G., J. C. Badcock and M. T. Maybery (2008). “Dissociating the components of inhibitory control involved in predisposition to hallucinations.” Cognitive Neuropsychiatry 13(1): 33-46.
    Waters, F. A. V., J. C. Badcock, M. T. Maybery and P. T. Michie (2003). “Inhibition in schizophrenia: Association with auditory hallucinations.” Schizophrenia Research 62(3): 275-280.
    Waters, F. A. V., J. C. Badcock, P. T. Michie and M. T. Maybery (2006). “Auditory hallucinations in schizophrenia: Intrusive thoughts and forgotten memories.” Cognitive Neuropsychiatry 11(1): 65-83.

  2. My first question is to Élisabeth. Thanks again for an excellent paper!

    I’d like to take up a point that was touched on in Markus’ commentary as well. In your paper, you argue that agentive experiences have two core elements: a sense of agency, which is a phenomenal quality, and some specification of the action that the sense of agency is for. I’m especially sympathetic to including the latter component. But given this commitment, I’m wondering how it is that the low-level comparator mechanism can ever contribute to agentive experiences at all.

    There are two main options for how it might do so. The first is on the basis of the forward model predictions, as you suggest in your paper, during the “thelic” stage. But these predictions are very fine-grained, representing things like precise grip aperture and grip force; they do not seem to reflect the fairly coarse level of description under which we are typically aware of what we are doing, e.g., reaching for my drink. You suggest in your paper that voluntary or involuntary attention, as well as the skill level of the agent, and individual differences might sometimes result in more fine-grained representations of action. But it’s not clear that we can ever be aware of what we are doing at the level of specificity that forward models require in order to contribute to fine motor control.

    The second option is by way of the output of the comparison between the forward model prediction and sensory reafference. But this output state will only specify a match or a mismatch, and this is not itself a specification of what action is being performed.

    A slightly more general, but related point is that we seem to be typically aware of what we are doing under the descriptions given in the content of our (distal and proximal) intentions, but these contents are not reflected in the low-level comparator mechanism, which can sometimes even operate independently of any such intentions.

    So, given all this, my question is: insofar as you take agentive experiences to represent what action is being performed, what role, if any, is left for the low-level comparator mechanism?

  3. Reply to Markus
    Thanks to Markus for a wonderful set of comments. Markus raises many very interesting points. Here I will focus on what I take to be his three main worries.

    His first worry regards my claim that agentive experiences have two core components: awareness of oneself as acting (a sense of agency in the narrow sense) and an experience of what one is doing. Markus thinks that in routine actions one typically has at least a minimal awareness of oneself as acting, but not an experience of what one is doing. He suggests that it would be less problematic to say that one typically “knows” what one is doing in the sense that one has some dispositional belief about what one is doing. I am not sure I fully understand how Markus’ appeal to dispositional beliefs works. However, it seems to me there are alternative ways of construing what is going on in routine actions. First, it is important to note that the category of routine actions is a mixed bag. Actions such as playing with a lock of hair or tapping one’s foot while attending a lecture are sometimes classified as routine actions, but so is driving (at least for experienced drivers) or coffee making if you do it every morning. My impression is that routine actions of the first kind are typically involuntary and also unconscious in the sense that we lack even a minimal experience of ourselves as acting. Witness the following dialogue:
    – Could you please stop that?
    – Stop what?
    – This constant tapping with your pen on the table.
    – Oh, I’m sorry. I had no idea I was doing that.
    Note also that lacking an experience of acting is not tantamount to having an experience of passivity. An alternative to both is no experience, period.
    I assume that what Markus has in mind are routine actions of the second kind, actions that are voluntary but that one can perform without having to attend to what one is doing. The key word here is attention. There is a complex ongoing debate on the relationship between attention and consciousness, with some holding there is no consciousness without attention and others holding that although attention increases conscious awareness, there can be some conscious awareness without attention. I am not taking a position on this debate here, but what I want to do is sketch two stories about what might be going on in routine actions that preserve the two core-component view of agentive awareness.
    The first story is meant to be compatible with the view that there can be some conscious awareness without attention. This conscious awareness would be marginal, however, and lack depth. For instance, when I am in the subway listening to my iPod, I am marginally aware of the people around me, but my awareness of them is not such that I could recognize them if I were to meet them later elsewhere. Similarly, it may be that when I am performing a routine action I have some marginal awareness of myself as acting and also some marginal awareness of what I am doing, but that I consciously access only some general features of the action representations that drive my action, and therefore have an agentive experience of what I am doing that is largely underspecified.
    The second story is more radical and is meant to be compatible with the no-consciousness-without-attention view. Here’s how it goes. When one is performing a routine action without any attentional resources being engaged, one has no agentive experience for that action, not even a minimal sense of oneself as acting. However, (some of) the action representations that drive the action are “poised” for consciousness. As soon as, for whatever reason, attention becomes engaged, one starts enjoying some awareness of oneself as acting and becomes aware of at least some aspects of what one is doing. On this story, the idea that during the course of routine actions we always enjoy some sense of agency is an illusion. It is, so to speak, an instance of the fridge light illusion. In the same way that we may come to mistakenly believe that the light is always on in the fridge because it is always on when we open the fridge, we may come to mistakenly believe that we always enjoy an experience of acting for routine actions, because each time we attend to the action, we have the experience. This illusion would be reinforced by the fact that it is rarely the case that we deploy no attentional resources whatsoever while performing a routine action. Even though making coffee is something I do every morning, I still have to pay attention to how much water I pour in the coffee maker, how I position the cup, and so on. According to this second story, the question to ask is not, as Markus proposes, at how many (and at which) of the stages of action the agential experience must include an experience of what one is doing, but rather at how many (and at which) of the stages of action there must be an agential experience at all.
    What both stories suggest, though, is that when you interrupt a routine action of mine and ask me what I am doing, I may give an answer by reporting the contents of the action representations that were active just before you interrupted me and that remain active and thus can be attended to precisely because you interrupted me and thus prevented their being erased by corresponding reafferences.

    Markus’ second worry concerns some aspects of the integrative model of the sense of agency I defend in the paper. I propose to think of agency cue integration as involving a hierarchy of comparators forming part of a Bayesian hierarchical predictive model. Markus complains that I do not “show that a Bayesian hierarchy must be a comparator hierarchy—or is best interpreted as a comparator hierarchy—in order to explain the reliability of agency cues.” I confess to some perplexity with this objection. As I see it, the ideas of comparators and of a Bayesian hierarchy of predictive models go hand in hand. Bayesian predictive models are in the business of extracting the contingency relations between actions and their effects from statistical regularities and using this information to make predictions about the outcomes of our actions. They form a hierarchy in that they represent these contingency relations at many different grain sizes. Predictive models are useful to the extent that the contingency relations they represent accurately reflect the contingency relations in the actual world. It does not seem reasonable to suppose that by some kind of Leibnizian pre-established harmony we’re innately endowed with accurate representations of all relevant action-outcome contingencies. Rather, we have to learn what these contingencies are, and we learn from our errors. This need to learn is why the notion of a prediction error plays such a key role in the hierarchical Bayesian approach. But then, how is the system to know that its predictions are incorrect, and what the error is, if not by a process of comparing predictions with reafferences?
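    For readers who find it helpful to see the bare mechanism, here is a deliberately minimal sketch of the comparator idea described above: a prediction is compared with sensory reafference, and the resulting prediction error is used to revise the stored action-outcome contingency. All names, the scalar outcome, and the delta-rule learning rate are illustrative assumptions for exposition only, not a model from Pacherie’s paper.

```python
# Toy comparator: learns an action-outcome contingency from
# prediction errors (simple delta rule). Purely illustrative;
# the learning rate and scalar outcome are assumptions.

class Comparator:
    def __init__(self, learning_rate=0.1):
        self.predicted_outcome = 0.0   # current forward-model prediction
        self.learning_rate = learning_rate

    def compare(self, reafference):
        """Compare the prediction with sensory reafference; return the error."""
        return reafference - self.predicted_outcome

    def update(self, reafference):
        """Use the prediction error to revise the stored contingency."""
        error = self.compare(reafference)
        self.predicted_outcome += self.learning_rate * error
        return error

comparator = Comparator()
for _ in range(100):
    comparator.update(1.0)   # the action reliably produces outcome 1.0
# with repeated errors, the prediction converges toward the actual outcome
```

    The point of the sketch is only that error-driven learning presupposes exactly the comparison step in question: without comparing predictions to reafferences, the system has no signal from which to learn the contingencies.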

    Markus’s third objection is to my claim that motor predictions are best conceived as pushmi-pullyu states or, as I call them, thelic states, that is, as states with a mind-to-world direction of fit (thetic) and a world-to-mind direction of fit (telic). Markus points out that intuitively predictions are thetic states and that playing some role in the control of behavior is not sufficient for a state to count as telic, given that this can also be said of paradigmatically thetic states, such as beliefs. I agree on both counts and so need to say something more about what makes motor predictions special, and so special as to qualify as both thetic and telic. Here are some examples of how ordinary predictions may steer behavior. We plan a family picnic on Sunday because we believe (predict) that it will be warm and sunny. At the moment, it is rumored, some very rich and not so patriotic French people are transferring their assets to Switzerland because they predict that President Sarkozy will not be reelected in May. John bet all his money on Black Diamond at the races because he predicted no other horse could beat him (and as a result he went broke). In all three cases, the predictions made had some effect on the predictor’s behavior, but one effect they didn’t have is to make them behave so as to ensure that the prediction came true. Our predictions about the weather have no effect whatsoever on what the weather will be. John’s predictions had no effect on how Black Diamond or his competitors ran the race. Very likely, the rich people now predicting that Sarkozy will not get reelected will, if they trouble to vote, vote for him in the next presidential election and thus try to get him reelected. In contrast, the role of motor predictions in the control of the agent’s behavior is much more specific: it is to regulate action execution so as to ensure that they are fulfilled.
This, I take it, distinguishes them from ordinary predictions and gives them their telic character. Admittedly, predictive models work in tandem with inverse models, and predictions ensure that they are fulfilled by controlling what motor commands get issued, but I am not persuaded that only the other half of the tandem should be granted telic status. The role of intentions in the control of behavior is mediated as well, and yet intentions are paradigmatic telic states.

  4. A brief follow-up on Elisabeth’s last rejoinder to Markus:
    She notes that, unlike the belief that Black Diamond will win the race, the predictions in question will contribute to bringing about their contents. I think this helps, but I am not yet fully convinced….

    It depends on what we take to be the predictions’ contents:
    (i) sensory reafference q will occur, or
    (ii) if motor plan p is enacted, then sensory reafference q will occur

    Presumably they do contribute to bringing about the content of (i) but not the content of the conditional expressed by (ii)

    So their content must be (i) if they are to be telic; but (ii) appears to better capture the role they play in guiding behavior. After all, the agent (or the cognitive system) does not endorse (i) but only compares it with the goal state in order to decide whether the antecedent motor plan would lead to the goal state.

  5. This is a follow-up on John’s comment. (Hi John, great to have you here!)

    I think you raise a really interesting issue about the right way to understand the content of forward model predictions.

    I have a couple of concerns with your proposal, however. The first is that both the contents you suggest for forward model predictions are propositional. This suggests that you are viewing these predictions as intentional states, like beliefs and desires. (I assume you also see them as having the “psychological mode” or “mental attitude” of prediction.) But it’s not obvious that this is the best way to understand them. Here are some considerations against viewing them as intentional states: they don’t seem to enter into inferential relations with other intentional states; there’s no clear evidence that we are ever conscious of them in the way that we are conscious of our other intentional states; their content is supposed to be highly fine-grained; we don’t seem to be able to express them verbally like we can other intentional states.

    If we don’t view them as intentional states, there are two options. They could be qualitative states, like a visual sensation of green, for example. Given that at least one component of their content is a representation of the reafferent sensations accompanying bodily movement, this is somewhat inviting. On the other hand, they might not be mental states at all, but rather subpersonal states of the motor system. This would fit well with their not ever being conscious.

    I’m tempted toward the non-mental option myself, but I’ll leave things there for now.

    My second concern is that, even if we do view them as intentional states, there is some reason to prefer your option (i) to your option (ii). What I have in mind is that, in addition to entering into a comparison with the desired or goal state of the agent, they are also thought to enter into a comparison with the actual sensory reafferences from the bodily movement once the motor command has already been sent out. Here the conditional reading seems problematic, as the sensory reafferences do seem “endorsed,” as you put it, insofar as they are based on a motor command (plus the current state of the agent) that has in fact been executed.

    Moreover, option (i) seems like it could play both roles reserved for the forward model prediction, as it is silent on whether the motor command on which the prediction is based has been sent out or not. (Perhaps the conditional reading is, too, but it’s not clear to me given your idea of “endorsement”.)

  6. Hi All,

    Thanks Myrto for organizing this, and thanks to all the participants – this is very interesting.

    My comment concerns Elisabeth’s paper, and piggybacks on Myrto’s first comment. So, consider a 30 second stretch of time during which I deliberate about a certain chess move, decide to make it and do so, realize this move places my opponent’s Queen in jeopardy, deliberate about how best to celebrate, form the intention to do the funky chicken, tuck my hands into my armpits, and do the funky chicken. Sorry that got so elaborate, but assume that during this 30 second window I have a number of agentive experiences (of deciding, of initiating action, of controlling the funky chicken, etc.). It is difficult for me to see how predictions or error signals emanating from my motor control system fully account for all of these agentive experiences.

    First, the experience of deciding – as when I decide to make a chess move or decide to do the funky chicken – seems to elude the predictive work of any part of my motor control system. In general it is not clear to me how models drawn from motor control are meant to generalize to mental action (and I take it we have agentive experiences of mental action: imagining, trying to remember, deciding, and so on). I think I recall Elisabeth noting this in an earlier paper somewhere, but I wonder what she or anyone else thinks about this issue.

    Second, regarding overt action, Myrto’s comments give me reason to doubt that low-level predictions or the error/match signals of low-level comparator mechanisms play much direct role in producing agentive experiences. This makes me want to hear more about the nature of the less fine-grained perceptual predictions hypothesized to be important for conscious control (p. 7). In particular, how are these thought to relate to intentions?

  7. Hello everyone,
    Thanks to Myrto for organizing this!

    Here are a couple of points in response to Elisabeth:

    1. My idea was indeed that during routine action there could be stages that exhibit the fridge-light illusion. In particular, it seemed to me that there might be stages in which one does not even have a consciousness-without-attention about what one is doing. Elisabeth’s appeal to consciousness-without-attention is very helpful here. And I suppose it’s true that such routine actions might just as well be interpreted along the consciousness-without-attention line. Can anyone think of reasons that favor one interpretation over the other? (Cause I can’t…)

    2. I asked why we should have to interpret the Bayesian hierarchy as a comparator hierarchy. Initially, I thought that Elisabeth would agree that this question makes sense, because in the paper she calls the comparator hierarchy “an instance” of the Bayesian hierarchy. However, in her response she says that the relationship is stronger: the two “go hand in hand”. If that’s right (and I have to think more about this), then my objection was really beside the point – sorry!

    3. Concerning the role of predictions and beliefs. Elisabeth points out that many of the beliefs that inform our choices do not regulate the execution of the subsequent action, in the sense that they do not steer our action so as to make the believed prediction true. In this respect, the role of predictions in the feedback model is more substantial than that of beliefs. However, it seems that means-end beliefs can also play a more substantial role in the regulation of action. For instance, if we discover, during the execution of an action, that we have not chosen the best means, we may adjust or switch our strategy for obtaining the end. Or, sometimes our act-plans require further specification as we go along, and we achieve that by consulting further or more specific means-end beliefs. Despite this, we would still regard those beliefs as paradigmatic beliefs (not as pushmi-pullyu states). The reason, I suppose, is that we still see the action as being motivated by a desire or an intention to obtain the end – the beliefs, it seems, merely steer us there.

  8. Reply to John Michael

    Thanks a lot to John Michael for a wonderful set of comments.

    John’s comments focus on three main points.
    First, he asks for some clarifications on what exactly I mean when I say that agentive experiences are thelic (i.e., both thetic + telic). As he puts it very clearly, I could mean that (1) agentive experiences are thelic because they contain some component states that are thetic and some other ones that are telic; or (2) they are thelic because they contain some components that are both thetic and telic; or (3) both.
    He’s pretty sure that I endorse (2) and he’s right since I claim that predictions in the motor system are both telic and thetic. He wonders whether I also endorse (1) and gives a number of reasons why (1) is plausible. Here, I can only say that I also endorse (1) and do so for precisely the reasons John puts forward. So my position is indeed captured by (3): agentive experiences are thelic, because they contain some components that are themselves thelic, some that are telic and some that are thetic.

    John’s second concern is with whether agentive experiences can contribute to explaining the asymmetry between first- and third-person knowledge of agency (assuming there is such an asymmetry). John notes that I suggest that the asymmetry between first- and third-person knowledge of agency may be due to the two-faced structure of agentive experience. He interprets this suggestion as my claiming that we only have telic representations of our own actions. As he quickly points out, however, there are reasons to think this is false, and indeed I did myself argue in previous work that we can have telic representations of others’ actions. So, am I contradicting myself, or have I changed views? I don’t think so, but I am quite willing to admit that my quick suggestion that the two-faced structure of agentive experience could help explain the special status of first-person knowledge of agency could give this impression. Let me unpack it a bit more carefully. I didn’t mean to suggest that we only have telic representations of our own actions. Rather, what I had in mind is the very close coupling of telic and thetic elements on which agentive experiences depend, in particular the precision of their temporal binding. As several of the experiments I described in section 3 of the paper show, timing is extremely important (for instance, the sense of agency for an action decreases if feedback is delayed, primes have no effect on the sense of agency if they occur outside a given temporal window, and very small differences in timing allow us to distinguish our movements from similar movements made by others). While we can form both telic and thetic representations of what others are doing when we observe them, the timing of the representations we form when observing them, and the temporal relations among these representations, do not in general satisfy the rather strict temporal constraints that are the signature of self-agentive experiences.
    There are some exceptions, though, and this leads me to John’s third concern: what transformations does the sense of agency undergo in joint action? John suggests that while the participants in the Strother et al. (2010) study did not report a greater sense of subjective agency in those cases in which the intentional binding was greatest (i.e., when their partner pressed the button), perhaps (though this was not investigated) they experienced a sense of joint agency. The phenomenology of joint agency is, if anything, an even more complex issue to address than the phenomenology of individual agency. In a recent paper, I attempted a foray into it (for those interested, a draft of the paper is available at: http://pacherie.free.fr/papers/PhenoJointAction.pdf ). I am pretty sure I only scratched the surface and am not in a position to really answer John’s question as to how the dynamic perspective I advocate might help us understand the conditions under which joint agency will be experienced. One of the issues I was interested in in that paper, though, was under what conditions participation in a joint action might yield a sense of joint agency at the expense of a sense of individual agency and contribute to blurring the boundaries of the self. I suggested that joint actions that require agents to perform the same individual actions and to do so in a highly synchronous way might have this effect. Examples would include military drill, communal singing, or line dancing. These might constitute the exceptions I mentioned above, where the telic and thetic representations of what I am doing are highly similar to the telic and thetic representations I form of what others are doing and where the relations among those representations satisfy strong temporal constraints.

  9. Reply to Myrto

    Thanks again to Myrto for organizing this.

    Myrto asks me whether and how the low-level comparator mechanism can ever contribute to agentive experiences at all. She delineates two options and points out that they are problematic, given that both sensorimotor specifications of actions and sensorimotor reafferences are probably below the threshold of awareness.
    These are in part empirical questions and my answer will be very tentative.

    Let me throw in two distinctions. First, we can distinguish between direct and indirect contributions to agentive experiences, where by direct contributions I mean that an action representation contributes (some of) its contents to the intentional content of agentive experiences. Second, we can distinguish between contributions to the core components of agentive experiences and contributions to their non-core components.

    Now, I take Myrto to be interested in whether the low-level comparator mechanism and the information it exploits can make a direct contribution to the intentional content of an agentive experience. One possibility Myrto considers but seems to reject is that highly skilled performers might be able to attend to details of motor execution that remain inaccessible to less proficient performers. I am not sure this possibility should be rejected outright. For instance, in two studies investigating how people determine whether or not they are in control of sounds they hear, Knoblich & Repp (2007, 2009) found that their subjects used both sensorimotor and perceptual cues to infer agency. They also found, however, that music experts were much more sensitive to sensorimotor cues to temporal variability than ordinary participants. Knoblich & Repp (2009) point out that “many forms of expertise such as playing football or playing a musical instrument involve acquiring very particular sensorimotor mappings. As a consequence, experts do not only seem to be able to exert an amazing amount of control over actions in their domain of expertise but they also seem to feel more in control of these actions than novices” (2009: 250).
    One way, perhaps, to make Myrto more comfortable with the idea that we might have some awareness of what is going on at the sensorimotor level is to point out that distinguishing between three layers of action representations (distal, proximal and motor) is something of a simplification and that each of these three layers probably has many sublayers. I would agree with Myrto that very low-level sensorimotor representations are not consciously accessible, but it could still be that highly skilled performers have conscious access to and control over some higher-level aspects of their sensorimotor representations.

    With regard to the possible indirect contribution of low-level comparators to the content of agentive experiences, it seems plausible that prediction errors arising from mismatches detected by low-level comparators could contribute to attentional amplification at the next level, and thus to an increased awareness of what is going on as it is represented at that level. In such a scenario, low-level comparators would affect how rich the contents of an agentive experience are, not by contributing the contents they themselves process, but by generating prediction errors that trigger the deployment of attentional resources one level up.

    Finally, low-level comparators may also contribute to non-core components of agentive experience. Once again this is very tentative, but what I have in mind is the experience of flow one can have for actions fluently and flawlessly executed.

    To sum up, my answer to Myrto’s question would be that in many cases low-level comparator mechanisms do not directly contribute to agentive experience, but not that in principle they can contribute nothing. Whether they directly contribute something or not depends on how skilled the performer is and also on how skilled the performance has to be to count as a successful performance. But even when low-level comparators do not directly contribute to agentive experiences, they might still have some indirect role to play.

    Repp, B. H., & Knoblich, G. (2007). Toward a psychophysics of agency: Detecting gain and loss of control over auditory action effects. Journal of Experimental Psychology: Human Perception and Performance, 33, 469–482.
    Knoblich, G., & Repp, B. H. (2009). Inferring agency from sound. Cognition, 111(2), 248-262.

  10. Hi Josh,

    Thanks for your (hard) questions. Regarding the last one about the role of low-level predictions or the error/match signals of low-level comparator mechanisms in producing agentive experiences, I have tried to say something about it in my reply to Myrto.

    Regarding the experience of deciding and mental action more generally, I agree with you that they seem difficult to explain from within the framework of predictive motor control, and I made no claim that they were so explainable. In the paper, I was concerned with physical actions and my focus was on their core components. I said somewhere in section 2 that the experience of deciding was not a core component of agentive experiences insofar as quite often our agentive experience of, say, A-ing does not include among its elements an experience of deliberating whether or not to A or even of deciding to A. You could say that I was carefully avoiding this issue as well as the larger issue of mental actions.
    The question of agentive experience for mental actions is for me a million dollar question (and I promise to work harder on it if someone offers that money for a solution). Jokes aside, I believe that we can have agentive experiences for mental actions, and I further believe that attempts to account for these experiences along “motor” lines, on the model of physical actions, are highly problematic. Many, I gather, have the same misgivings. For instance, Chris Frith, who once proposed that the sense of agency for thoughts could be explained along the same lines as the sense of agency for physical actions, notes in the conclusion of his target paper that even the general Bayesian comparator theory does not provide a plausible account of thought insertion. My problem is that I have no positive solution to offer.
    I am not quite as pessimistic as Chris seems to be in his conclusion as to the prospects of a Bayesian theory contributing to a solution. It seems likely that the brain also builds models of its own workings (this is in part what metacognition is about), and perhaps when I am deliberating about what to do or what to believe, metacognitive models make predictions about what kind of mental activity will ensue, and these predictions together with the mental activity that actually ensues play a role in generating an agentive experience of deliberating. However, it remains unclear to me what form these predictions would take. A closely related issue that I also find perplexing concerns the relation between sense of agency and sense of ownership for thoughts. As Joëlle Proust asks in a wonderful paper (Proust, 2009), thinking is a mental activity, but are thoughts always mental actions? If not, what distinguishes thinking that qualifies as mental action from thinking that is simply a form of mental activity? And if not, does it make sense to speak of a sense of agency for thought? I am not sure how exactly these questions should be answered. I am not sure whether an appeal to metacognitive predictions could help us account for our sense of agency for mental actions or for our sense of ownership for thoughts more generally. Hence my timidity when asked about agentive experiences for mental actions…

    Proust, J. (2009). Is there a sense of agency for thought? In L. O’Brien & M. Soteriou (Eds.), Mental Actions (pp. 253–279). Oxford: Oxford University Press.

  11. Regarding the issue Myrto and Elisabeth discuss, of how much of our motor repertoires we are capable of experiencing: does anyone know of studies beyond the Knoblich studies Elisabeth cites that look into agentive experiences of experts? I’m familiar with studies (e.g., Aglioti et al. 2008) that show that the perceptual and motor simulation skills of experts are much better than those of novices (for actions at which they excel). But I don’t think these studies directly address agentive experiences.

    Aglioti, S.M., Cesari, P., Romani, M. & Urgesi, C. (2008). Action anticipation and motor resonance in elite basketball players, Nature Neuroscience 11(9): 1109-1116.

  12. Hi Elisabeth,

    Thanks for your thoughtful reply. I agree with you regarding Proust’s 2009 piece – that whole volume on mental action is very interesting! And I do think I read an earlier paper of yours in which you mention that models drawn from motor control do not easily translate to mental action, so I hope I didn’t seem to indicate that you thought otherwise.

    You say to Myrto that low-level comparators might make indirect contributions to the content of agentive experiences, where that content comes together at a higher level. This seems very plausible to me. Given this and the things you say in response to me regarding agentive experiences for mental action, I wonder what you (and others of course) think about the following question: regarding agentive experiences, how much can we claim comparator models explain?

    Assume for the moment that agentive experiences of mental and overt action are similar in ways that call for explanation (I’m open to arguments to the contrary). We might even throw in imagined and dream action – I at least seem to remember agentive experiences from dreams, and I’m not even a lucid dreamer. If mental and overt (and perhaps imagined) actions share agentive experiences, then plausibly some shared feature plays an important aetiological role in their generation – a role that calls for explanation. This shared feature is arguably not the outputs of motor control mechanisms (although I could see a role for predictions at some level of specificity). So accounts which draw inspiration from motor control models leave something important out. This would be true even of agentive experiences for overt action.

    As you say, perhaps metacognition of a Bayesian sort could fill the gap here. What I want to do in the above paragraph though is not speculate about a solution, but make sure I’m clear regarding how much comparator theorists take comparator models to explain.

  13. Follow-up to Elisabeth’s reply

    Hi Elisabeth,

    Many thanks for your helpful reply; you’ve given me much to think about. Some reactions below.

    My uncertainty as to whether the agentive experiences of even highly-skilled performers involve direct contributions from forward model predictions was owing largely to my further uncertainty about just how much detail goes into forward model predictions in the first place. My thinking was that without knowing this, it would be difficult to evaluate whether experts are aware of these predictions, since it would be difficult to know which content reported in their agentive experiences must, due to its level of specificity, be directly contributed by them.

    It now occurs to me, though, that knowing how detailed forward model predictions are might not be so helpful after all. This is because one might think (as I do) that we needn’t be aware of our own mental states in a way that reflects all the full-blown detail that they exhibit in their content. For example, I may be aware of a sensation of red I am having in a way that is indeterminate with respect to the exact shade of red that is represented by that sensation, as might be shown through priming effects. In the same way, perhaps we aren’t aware of our forward model predictions in all their detail, but we are nonetheless sometimes aware of them in rough detail, and even less rough detail in the case of experts. If that’s the case, then it won’t help to know the exact level of richness they involve, at least not for the purposes of determining whether we’re ever aware of them.

    But whether or not one accepts the above, the question arises as to whether the best explanation for the content of experts’ agentive experiences must appeal to forward model predictions (even higher-level aspects, which you helpfully point out would apply to low-level mechanisms as well), or whether other states might suffice. Other candidates that come to mind are proximal intention and proprioception; perhaps the richer content of experts’ agentive experiences can be explained by direct contributions from these states alone.

    When it comes to the Knoblich and Repp studies to which you referred, I think there might be promising alternative explanations for the results that do not appeal to forward model predictions, though at this point I’m just speculating. What I have in mind is that perhaps music experts are more attuned to the timing properties of their actions as well as external events in the world, as the studies suggest, but that their heightened sensitivity to these properties is grounded entirely in proprioception and audition. In other words, sensations in these modalities represent the timing of actions and their sensory effects as well, so I’m not sure what reason there is to prefer an explanation in terms of forward model predictions. (Just floating this as a worry; I realize you meant your remarks to be tentative.)

    I will think more about all this. Like Josh (hi Josh!), I’d be interested in further references on the agentive experiences of experts, if you (or anyone else) has them. Thank you for the Knoblich and Repp studies; they were new to me, and very interesting.

    As for indirect contributions, I agree with you that comparator mechanisms trigger the deployment of attention to what one is doing in the case of predictor errors. I’m not so sure, however, that this is an indirect contribution to an agentive experience, as opposed to another kind of experience. It seems that at the point at which one’s attention is directed to what one is doing after an error has been registered at the lower levels, one is also roughly simultaneously made aware that one’s action is not unfolding correctly. And at that moment, one’s agentive experience seems to be replaced by, e.g., an experience of loss of control, or of not doing what one intends to do. One might even abort one’s action entirely, depending on the severity of the error, in which case agentive experience would, of course, cease as well.

    Still, even if that’s right, I’m sympathetic to the idea that there might be other ways in which the low-level comparator mechanism contributes indirectly to the core elements of agentive experiences.

    As for your suggestion that low-level comparators might contribute to flow experiences, I’m not quite sure what to say here, as I’ve never been quite sure what to make of reports of flow experiences. I’m inclined to think that they are better understood as something other than non-core elements of agentive experiences, e.g., affective states. But that’s something else for me to think about further.

    Thanks again!

  14. Hi all

    I have one question and one comment about Patrick’s commentary on Chris’ paper.

    Patrick- in the first part of your commentary you seem to suggest that Chris advocates a view on which the predictions generated by the forward model are conscious, and you argue against this view using your experiments with Ian Waterman. Could you add a bit as to why you think so? Whilst I think there are two possible effects of the predictions on conscious experience, namely causing a sense of agency to be elicited and attenuating experience of actual sensory feedback, this doesn’t suggest that the predictions themselves are ever conscious. Nor can I think of, off the top of my head, anywhere where Chris or anyone else has suggested that the prediction itself is conscious or that it is causally responsible for experience of the position of the body.

    Secondly, just a comment on the subliminal priming study- and this is before I’ve had a close look at the paper. I think the use of arrows to prime locations is a good idea. Previous subliminal priming studies of agency have typically primed the outcome of action, e.g. the stopping location in ‘I-spy’ and ‘wheel of fortune’. This has limited the theoretical significance of these studies, which are used to argue for inferential models like Wegner et al’s. The significance of these studies is limited because the version of the comparator model on which the sense of agency is elicited by matching actual and predicted sensory feedback can explain these agency illusions as due to the formation of predictions based on the prime, or, indeed, the misidentification of the prime as a prediction. Your study, however, I think is stronger: the explanation you give (facilitation of intention) seems more plausible when arrows are used, since it is not clear how an arrow could be misidentified as a prediction, and even if it were, it would be the wrong prediction.

  15. Hi all,

    Many thanks to Chris for his interesting target contribution, and to Patrick for his thoughtful commentary. It’s a pleasure to have you both involved in the session.

    Chris, in your work you focus on forward modeling as an explanation of the sense of agency. In his commentary, Patrick raises a question about whether forward models can explain the results in the masked priming study he describes having to do with participants’ sense of control. I’d love to hear your thoughts about this.

    Along those lines, I have a follow-up to Glenn’s remarks about Patrick’s commentary. (Nice to meet you on here, Glenn.)

    Specifically, I am wondering how to understand Patrick’s suggestion that the masked arrow prime facilitates the participant’s intention in his study.

    One way to understand what is going on here is as follows: At the start of the task, the participant forms an intention with two conditionals embedded in it, the content reading something like: If I see a left arrow, perform a left keypress, and if I see a right arrow, perform a right keypress.

    Upon subliminally perceiving either a left or right arrow prime, but before being presented with the target left or right arrow, the corresponding action is thereby prepared by the participant, in accordance with her intention, though not consciously so. If the target arrow that is subsequently presented is prime compatible, the action that has already been prepared is executed. If the target is not prime compatible, the wrong action that has already been prepared is inhibited, and the correct action is prepared and executed.

    The participant might then feel more in control of the coloured dot corresponding to prime-compatible trials relative to the coloured dot corresponding to prime-incompatible trials because on the prime-incompatible trials there is some—perhaps unconscious—awareness of having made an error, that is, of having prepared the wrong action.

    On this interpretation, the relatively greater sense of control in the prime-compatible trials is not due, or at least not entirely due, to the action selection being easier or more fluent on these trials than in the prime-incompatible trials. It is due, more specifically, to the wrong action being prepared in the prime-incompatible condition, and thereby some awareness that an error has occurred in executing the intention.
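    The trial logic behind this error-based interpretation can be sketched as a toy simulation. This is purely illustrative: the function name, the representation of actions as strings, and the explicit error flag are my own constructions, not part of Patrick’s actual paradigm.

    ```python
    # Toy sketch of the masked-priming trial logic discussed above.
    # All names are illustrative; this is not the actual experimental code.

    def run_trial(prime, target):
        """Return (executed_action, error_registered) for one trial."""
        prepared = prime            # the prime pre-activates the matching response
        error_registered = False
        if target != prepared:      # prime-incompatible trial
            error_registered = True # the wrongly prepared action must be inhibited
            prepared = target       # re-prepare the correct response
        return prepared, error_registered

    # Prime-compatible trial: no error signal, hence (on the interpretation
    # above) a relatively stronger sense of control.
    assert run_trial("left", "left") == ("left", False)

    # Prime-incompatible trial: the wrong action was prepared and an error
    # is registered, diminishing the sense of control.
    assert run_trial("left", "right") == ("right", True)
    ```

    On this sketch, the difference between conditions is carried by the error flag rather than by the fluency of selection as such, which is the contrast the interpretation above turns on.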

    The reason I suggest this error-based interpretation is that it strikes me that there are some cases in which we engage in difficult, conflicted deliberation over what action to perform next, e.g., in a game of chess, and yet this does not seem to diminish our sense of control over the action itself. What typically does seem to diminish, or even override our sense of control, however, is when we perform the wrong action entirely, and must take steps to correct it.

    I’d be interested in hearing what people think of this slightly alternative interpretation of the results.

  16. Hi Josh,

    Regarding the agentive experiences of experts, there’s a line of work that was started by Jeannerod & Decety in the mid-1990s on conscious motor imagery, mental practice and rehearsal, which also suggests that motor imagery is richer and more accurate in experts than in novices. Conscious motor imagery is not the same thing as agentive experience, but it’s very plausible that there are strong relations between what can enter the contents of conscious motor imagery and what can enter the contents of agentive experiences. This line of research on conscious motor imagery is still very active nowadays in sport psychology. Just typing “conscious motor imagery sport expertise” since 2008 into Google Scholar got me over 3000 hits!

  17. Hi again Josh,

    You ask the following: regarding agentive experiences, how much can we claim comparator models explain?
    The answer, I think, largely depends on what one means by “comparator models”. On a very narrow understanding, a comparator model is a model that says that agentive experiences have their source in the comparison of sensorimotor predictions with sensorimotor reafferences. It’s pretty obvious that comparator models so conceived have very little to say about covert actions (whether they are purely mental actions or simulated bodily actions as in dreams and voluntary motor imagery). I say very little and not nothing because at least for the conscious motor simulation of bodily actions, and perhaps also for dreams, it is conceivable that the simulation exercise involves simulating not just motor commands but also sensory feedback. Of course, whether what’s conceivable is also true is an empirical matter.
    However, even early comparator models inspired by motor control theories postulated not one but two equally important control loops: a feedback loop where predictions are compared with actual feedback, and a feedforward loop where predictions are compared with desired states. This second control loop can operate in the absence of feedback and could therefore contribute to agentive experiences in the absence of overt movement. Comparator theorists could thus claim that their model can in principle account for agentive experiences for both overt and covert bodily actions.
    To my mind, the more recent versions of comparator models, involving a hierarchy of predictions and comparators, can probably do a better job of accounting for agentive experiences. As Myrto has pointed out, the contents of our agentive experiences are typically less fine-grained than the contents at the level of sensorimotor comparators. Generalized comparator models would allow us to preserve the gist of the idea that agentive experiences have their sources in the predictions and comparisons made by the motor control system, while allowing that, among those predictions and comparisons, the lower-level ones play at best a minor role in agentive experience. (Perhaps it would be better to speak of an “action control system” rather than a “motor control system”, as we tend to associate the idea of motor control with low-level processes.)
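    The two control loops of the early comparator models can be sketched in a few lines. This is a deliberately minimal illustration: the scalar states, function names, and numbers are mine, not drawn from any particular model in the literature.

    ```python
    # Minimal sketch of the two control loops in early comparator models.
    # States are represented as single numbers purely for illustration.

    def feedforward_error(predicted_state, desired_state):
        # Feedforward loop: compare the predicted outcome with the desired
        # state. This comparison needs no sensory feedback, so it can run
        # even for covert actions (motor imagery, dreamed actions).
        return abs(predicted_state - desired_state)

    def feedback_error(predicted_state, actual_feedback):
        # Feedback loop: compare the predicted outcome with the actual
        # reafference. This requires overt movement to supply the feedback.
        return abs(predicted_state - actual_feedback)

    # Overt action: both comparisons are available.
    assert feedforward_error(1.0, 1.0) == 0.0  # doing what was intended
    assert feedback_error(1.0, 1.5) == 0.5     # mismatch with reafference

    # Covert action: only the feedforward comparison can run, which is why
    # this loop could still ground agentive experience without movement.
    ```

    The point of the sketch is simply that the feedforward comparison is well-defined even when no feedback signal exists, which is what lets comparator theorists extend the model to covert action.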
    Assuming, as Josh asks us to do, that agentive experiences of mental and overt action are similar in ways that call for explanation, what could the explanation be and could comparator models provide the explanation?
    I don’t know the answer, but let me speculate a bit. According to what Andy Clark (see my previous post for references) calls the “hierarchical prediction machine” approach, hierarchical predictive processing cum prediction error minimization is a very general principle of neural function and organization that suggests an integrative framework for perception, action and thought. The comparator models inspired by motor control theory are but one application of this idea. For the “hierarchical prediction machine” approach to have this wide scope, it has to allow that the generative models in different domains incorporate different types of information (same organization principles but different contents). In particular, it is plausible that information encoded at low levels of a hierarchy would be highly domain-specific while more abstract information encoded at higher levels might be available across several domains. With respect to Josh’s question, I see two main possibilities. One is that agentive experiences of mental and overt actions have nothing more in common than being the products of the same general principles of predictive processing. The other is that agentive experiences in both cases have their source in part in predictions at higher levels of a predictive hierarchy that are common to mental and bodily actions. My guess is that the second possibility is more plausible for covert bodily actions (motor imagery, dreamed actions, etc.). As for purely mental actions (e.g., trying to remember a name or judging whether to believe someone or not), I simply don’t know.

  18. Hi again Myrto!

    You suggest that knowing how detailed forward model predictions are might not be so helpful after all, because one might think that we needn’t be aware of our own mental states in a way that reflects all the full-blown detail that they exhibit in their content. I agree with that and this is one of the things I myself suggested might be going on in agentive experiences in my reply to Markus on routine actions. This however does not make the question of how much detail goes into forward model predictions completely irrelevant, for it at least fixes some upper limit on the content of agentive awareness. A novice piano player cannot in principle have as rich an agentive experience as the piano expert because her predictive models are so to speak under construction and remain as yet very coarse.

    You also ask whether the best explanation for the content of experts’ agentive experiences must appeal to forward model predictions or whether alternative explanations may be forthcoming. You suggest that in the Repp & Knoblich experiments, the heightened sensitivity of music experts to the timing properties of their actions and their effects could be grounded entirely in proprioception and audition rather than involving more detailed predictions by forward models.
    Leaving aside agentive experiences for a minute, I guess you would grant that experts have better and more precise control over their actions than non-experts (that’s why we call them experts!) and that they have better control over their actions in a large part because they have developed more detailed predictive models.
    There are also a bunch of further studies by Knoblich and his colleagues (refs below) providing evidence (1) that people are more accurate at predicting the location and timing of forthcoming events for their own actions than for others’ actions and are able to distinguish between recordings of actions performed by themselves and recordings of actions performed by others, and (2) that experts are better than non-experts at recognizing their own past actions or predicting the forthcoming effects of their actions.

    Back to agentive experiences. If it is also granted that forward model predictions contribute to agentive experiences (leaving it open that other elements could also contribute), it would be strange if the quality of one’s predictions made a difference to how well one controls an action, recognizes one’s own past actions and predicts the forthcoming effects of an action, and yet had no effect on agentive experience.

    Myrto’s suggestion that in the Repp & Knoblich experiments, the heightened sensitivity of music experts to the timing properties of their actions and their effects could be grounded entirely in proprioception and audition sits uneasily with the existing evidence that proprioceptive feedback is attenuated through forward modeling during voluntary movements. Chris Frith in his 2005 paper even suggests that “one possible indicator that I am performing a voluntary act could be a lack of proprioceptive experience”.
    Myrto’s suggestion is also difficult to reconcile with the result of a study by Tsakiris et al. (2005) where participants had to judge whether the hand they saw moving was theirs. Great care was taken to provide participants with the same visual and proprioceptive information in both passive and active hand movement conditions, and yet self-recognition was much better in the active condition, suggesting that it was efferent signals and predictions that enabled subjects to detect the small timing and kinematic differences between their own movements and someone else’s very similar movements.

    Flach, R., Knoblich, G., & Prinz, W. (2003). Off-line authorship effects in action perception. Brain and Cognition, 53, 503–513.
    Flach, R., Knoblich, G., & Prinz, W. (2004). Recognizing one’s own clapping: The role of temporal cues in self-recognition. Psychological Research, 69, 147–156.
    Knoblich, G., & Flach, R. (2001). Predicting the effects of actions: Interactions of perception and action. Psychological Science, 12, 467–472.
    Knoblich, G., & Flach, R. (2003). Action identity: Evidence from self-recognition, prediction, and coordination. Consciousness and Cognition, 12, 620–632.
    Repp, B. H., & Knoblich, G. (2004). Perceiving action identity: How pianists recognize their own performances. Psychological Science, 15, 604–609.
    Tsakiris, M., Haggard, P., Franck, N., Mainy, N., & Sirigu, A. (2005). A specific role for efferent information in self-recognition. Cognition, 96(3), 215–231.

  19. Hi all,
    To follow up on Myrto’s last post regarding Patrick’s remarks about the role of action selection/inverse modeling in generating a sense of agency:
    Is the sense of agency that arises from fluid action selection/inverse modeling the same phenomenon as the sense of agency that arises from accurate forward modeling? Or, if they are distinct components of the same phenomenon, are they experienced in the same way?
    One possible difference is that affect could play a more central role in the former, i.e. the sense of agency that is facilitated by action selection/ inverse modeling could be experienced as a positive affect (and non-fluid action selection as a negative affect).
    This idea would fit well, for example, with Joelle Proust’s work on metacognition. She suggests that affective cues may play an important role in the evaluation of one’s own cognitive processes, such as whether one is making progress toward solving a problem, whether one has learned something adequately and will be able to recall it later – perhaps this is also the case for action selection?
    So, with respect to Patrick’s elegant experiment with the priming arrows: does the sense of control stem from a positive affective cue? If so, then might it be possible to influence the participants’ sense of control by manipulating their affective states?

    Proust, Joelle. 2006. Rationality and Metacognition in non-human animals, available here: http://jeannicod.ccsd.cnrs.fr/index.php?halsid=5c0ofudj3anjjdmfjfshcf12r5&view_this_doc=ijn_00139119&version=2

  20. Hi Elisabeth,

    Thanks very much for your thoughtful reply and all the useful references!

    About expert performers: I agree that they have more precise control over their actions than non-experts, and that, at least if the comparator model is the correct account of motor control, this is due in large part to their having developed more detailed forward model predictions. But it seems to me that such detailed predictions can’t be of much help on their own; what’s needed in addition is the development of a more detailed proprioceptive sense against which to compare them.

    My thought is that if forward model predictions were more fine-grained in experts but proprioceptive feedback remained coarse-grained, matches would be registered at the comparator even where more precise movements were in fact called for. Expert control could not thereby develop. So it seems that where forward model predictions must be more fine-grained in experts to allow them to perform the way they do, so, too, must proprioception, and plausibly the other senses involved in the experts’ skill, e.g., audition for musicians. And if that’s right, then an explanation in terms of heightened perceptual capacities might still suffice to explain the superior performance of experts in the studies performed by Knoblich and his colleagues.
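    The granularity point here can be made concrete with a toy example. Everything below is my own construction (the quantization function, the numbers, the idea of modelling feedback precision as a resolution parameter); it is only meant to show why a fine-grained prediction is of no help if the feedback it is compared against is coarse.

    ```python
    # Toy illustration of the granularity argument: a comparator can only
    # detect errors at the resolution of the feedback it receives.

    def quantize(value, resolution):
        """Represent a signal at a given resolution (larger = coarser)."""
        return round(value / resolution) * resolution

    def mismatch(prediction, feedback, resolution):
        # The comparator registers an error only if prediction and feedback
        # differ once both are represented at the feedback's resolution.
        return quantize(prediction, resolution) != quantize(feedback, resolution)

    predicted, actual = 0.50, 0.58   # the movement is off by 0.08 units

    # Coarse (novice-like) proprioception: the deviation goes unnoticed,
    # so a spurious "match" is registered.
    assert mismatch(predicted, actual, resolution=0.25) is False

    # Fine-grained (expert-like) proprioception: the deviation is detected.
    assert mismatch(predicted, actual, resolution=0.05) is True
    ```

    The same deviation is invisible to the coarse comparator and visible to the fine one, which is why, on the argument above, sharpening predictions without sharpening the senses would not by itself yield expert control.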

    You raise two further considerations, however, against thinking that heightened perception, rather than forward models, helps experts out in these studies.

    First, you point out that this sits uneasily with the existing evidence that proprioceptive feedback is attenuated through forward modeling during voluntary movements. I have two worries about applying these findings to expert action: (i) for all the attenuation studies show, experts’ proprioception could still be less attenuated than that of non-experts, and this is all that is needed to explain their superior performance, and (ii) as far as I know, these studies focus on the sensory effects of actions, e.g., tactile sensation from palm stroking, and not on proprioception from the action itself. I have in mind the self-tickling studies by Blakemore, Frith, and Wolpert (1998; 1999), as well as the study by Sato (2008) on auditory consequences of action, but perhaps there are others that would help with this second worry? I do know of one neuroimaging study using PET scans, i.e., Blakemore, Oakley, and Frith (2003), which measured brain activation in somatosensory cortex for active vs. passive movement and showed more activation in the passive than in the active condition. But it’s not clear how these results translate into conscious experience.

    As for the Tsakiris et al. (2005) study you appeal to, I worry that the overall availability of proprioceptive information was not held constant across conditions, since the movement in question was a right index finger movement caused either by the participant moving a lever with their left finger (active condition) or the experimenter moving the lever with their hand (passive condition). So in the active condition, the participant had proprioception from her left finger to go on (as well as the forward model prediction), but not in the passive condition.

    So I guess I’m still a bit skeptical that we need to appeal to a direct contribution from forward model predictions, at least to account for the core components of experts’ agentive experiences and their superior performance.

    I still need to take a closer look at the other Knoblich studies you mentioned that deal with non-expert action, though. More to come once I go through them.

    One final point for now: Even if it were true that forward model predictions don’t directly contribute to the core components of agentive experiences, it seems that the account you present in your paper stays largely intact. This is because predictions one level up in the comparator hierarchy, i.e., at the level of proximal intentions, could do the work, rather than forward model predictions. (And the same worries that apply to forward model predictions don’t seem to apply there.) Does that sound right to you?

    Thanks again for all your comments. They’ve been very helpful!

    Blakemore, S.-J., Frith, C. D., & Wolpert, D. M. (1999). Spatio-temporal prediction modulates the perception of self-produced stimuli. J Cogn Neurosci, 11(5), 551-559.

    Blakemore, S.-J., Oakley, D. A., & Frith, C. D. (2003). Delusions of alien control in the normal brain. Neuropsychologia, 41(8), 1058-1067.

    Blakemore, S.-J., Wolpert, D. M., & Frith, C. D. (1998). Central cancellation of self-produced tickle sensation. Nat Neurosci, 1(7), 635-640.

    Sato, A. (2008). Action observation modulates auditory perception of the consequence of others’ actions. Consciousness and Cognition, 17, 1219 – 1227.

  21. Hi all,

    I have some lingering questions/comments regarding telic states and 1st-person/3rd-person asymmetry of agentive knowledge that I thought I would share, for what they’re worth.

    1. In her paper, Elisabeth raises a concern regarding Anscombe’s suggestion that agentive awareness is purely telic. The concern is that this knowledge is then importantly independent of what happens, since telic states are not typically in the business of representing the way the world is.

    This does seem to be the right verdict for telic states like desires or distal intentions. But consider an intention to now do A, which Elisabeth suggests in her paper are defensible candidates for telic components of agentive experiences. It is true that this intention is not caused by what I am actually doing, and so in this sense, if it were to ground agentive knowledge, this knowledge would be independent of what happens. Still, there is another way in which my intention to now do A is importantly linked to what I actually do, and that is in virtue of being a reliable cause of what I actually do, i.e., a reliable cause of what it represents. Due to this reliable connection between an intention to now do A and my A-ing, the intention itself seems to be a source of agentive knowledge.

    And it is certainly true that an intention to now do A might fail to lead to my A-ing, and so might fail to give me knowledge of what I am doing. But no source of knowledge is infallible, so this does not seem by itself to count against intentions as a source of agentive knowledge.

    Might any of this help with the concern?

    2. Elisabeth and John discuss the suggestion that we can have telic representations of other people’s actions. This does seem right, as I can desire that someone else do something, and desires are telic states. But I am wondering how exactly the claim extends to states like intentions and inclinations in John’s watching football example, and how it is supposed to threaten 1st-person/3rd-person asymmetry.

    I will focus on intentions since I take these to be the central (purely) telic states in Elisabeth’s agentive experiences. It does seem that I can intend to do what I perceive another person doing, as in the case of shared representations of action. But this by itself would not be enough to threaten 1st-person/3rd-person asymmetry, since I still need to rely on observation to know what another person is doing in the first place, and I don’t typically need to in my own case.

    Perhaps the threat to asymmetry lies in the idea that in addition to intending to do something that I observe another agent doing, I can intend that another agent do something. If this were the case, then I could have “knowledge from the inside”, i.e., seemingly non-observational knowledge, of what someone else does by way of my intentions. But I’m having trouble making sense of the idea that I could intend for someone else to do something. Perhaps, at the very least, the agent of the action I intend is left unspecified, as Elisabeth and Marc Jeannerod suggested in their 2004 paper. Even so, unless my intention actually does reliably cause someone else’s action, which it does not, it cannot act as a source of knowledge of that other person’s agency, and so this scenario would not threaten 1st-person/3rd-person asymmetry. (So while, as John argues in his commentary, whether an intention causes what it represents might not be relevant to the intentional structure of shared representations, it does seem to be relevant in this way.)

    Am I understanding the claim that we can have telic representations of other people’s actions correctly here?

    Jeannerod, M., & Pacherie, E. (2004). Agency, simulation, and self-identification. Mind and Language, 19(2), 113-146.

  22. Thank you all for a very interesting discussion.

    I would like to follow up on Markus’s complaint that motor predictions do not seem to be pushmi-pullyu (or thelic) representations, because they seem to have a role similar to that of a means-end belief, and they interact with the inverse model, which seems to have a desire-like role.

    I agree, and I would like to strengthen the complaint by focusing on what Millikan (1995) had in mind in discussing pushmi-pullyu (or thelic) representations. Her fundamental thought in introducing the idea, as I see it, is that thelic representations are more basic than telic representations and thetic representations. She gives an example of “the food call of a hen to a brood” (p. 190), which simultaneously says: “come here now and eat” (a telic part) and also “here’s food now” (a thetic part). She writes that “the effect of the call on the chicks is not filtered through an ALL PURPOSE COGNITIVE MECHANISM that operates by first forming a purely descriptive [i.e., thetic] representation (a belief that there is food over there) then retrieving a relevant directive [i.e., telic] one (the desire to eat), then performing practical inference and, finally acting on the conclusion. Rather, the call connects directly with action. […] Where the hen finds food, there the chicks will go. The call is a PP [=pushmi-pullyu] representation.” (ibid., emphasis added)

    Thus, a purely telic representation can potentially co-operate with various purely thetic representations, and a purely thetic representation can potentially co-operate with various purely telic representations. A pushmi-pullyu representation, by contrast, is a state in which there is a thetic component and a telic component THAT CANNOT BE DETACHED. For example, the thetic component of the hen’s call (the one saying “here’s food now”) can co-operate with only one single telic component type, namely the component that says “come here now and eat”. For this reason, the call of the hen is pushmi-pullyu.

    Now, it seems that the motor control system, which operates using an inverse model and motor predictions, is similar to the all purpose cognitive mechanism Millikan speaks of. This is so because it is not the case that a given inverse model type is tied to a SINGLE motor prediction type. Rather, the former can and does operate with various motor prediction types. And conversely, a single motor prediction type can operate with various inverse model types. Thus, it would appear that motor predictions count as purely thetic representations, on Millikan’s account.

    Millikan, R.G. (1995), “Pushmi-Pullyu Representations”, Philosophical Perspectives, vol. 9, pp. 185-200.

  23. Hi Myrto,
    Thanks for bringing up these fascinating issues about 1st-person/3rd-person asymmetry.
    You point out that we have to observe others but do not have to observe ourselves, which preserves asymmetry. Maybe; I will have to think about that a bit. But regarding Elisabeth’s suggestion that 1st-person/3rd-person asymmetry may be grounded in the thelic structure of our agentive experiences: I meant to cast a bit of doubt on this by pointing out that reps of others’ intentions may also be thelic (both telic and thetic). In light of her response to me, however, I think she is right: even though in some cases we can have thelic reps of others’ intentions, they are nowhere near as integrally embedded in a network of predictions, proprioception, etc., so the asymmetry may be grounded in the overall structure that makes our agentive reps thelic.

    I think you are right to point out that intentions reliably causing actions is important if they are to yield knowledge of actions. But I still wonder whether some thelic reps of others’ intentions may reliably contribute to bringing about the intended actions (e.g. because they enable us to help them). I’ll have to think about this some more, though…

  24. Hi John,

    Thanks very much for that!

    I also agree with Elisabeth’s reply to you that the close coupling of telic and thetic components in agentive experience could help explain 1st-person/3rd-person asymmetry. I guess I had been wondering more about the motivation for the challenge you put to her in the first place; more specifically, about what I took to be the suggestion that we sometimes have telic representations of other people’s actions, and the further suggestion that this might be a threat to 1st-person/3rd-person asymmetry. But I see now from your comment that you meant to be focusing on thelic states, and not purely telic states, which were more my concern. I’ll have to think more about this. Thanks for clearing things up for me.

  25. I am very grateful to Myrto for giving me this chance to present my ideas in this interactive forum. I have found the discussions most interesting and useful.

    Glenn and Elisabeth have wondered how we can understand the symptom of thought insertion, where patients with schizophrenia report that thoughts that are not their own are being inserted into their minds. “Thoughts are put into my mind like ‘Kill God’. It’s just like my mind working, but it isn’t. They come from this chap Chris. They’re his thoughts.”

    In the case of actions we can predict where our arm is going to go and what sensations the movement will create. But there are no such physical parameters to predict for thoughts. So it wasn’t clear to me how to bring prediction into a story about thoughts. This changed when I recently read one of the excellent papers on the metacognition of action by Joëlle Proust that Elisabeth mentioned. Proust suggests that when we want to try to do something in thought, like trying to retrieve a memory, we can predict how much effort this will take and how likely it is to be successful. If the predicted effort is high and the likelihood low, then we probably won’t bother to try. If something went wrong with this prediction system, then we might well experience a loss of control over our thoughts. Faulty prediction errors might indicate that our thoughts were coming much more rapidly or much more slowly than we expected.

    But loss of control is not quite the same as having other people’s thoughts inserted. Also, the predictions about thoughts that Proust is discussing seem to be concerned with situations where we deliberately use thoughts to solve problems. As Glenn points out, the sort of thoughts that schizophrenic patients report being inserted seem more like spontaneous, irrelevant thoughts. Glenn wonders whether thought insertion is closely related to auditory hallucinations (different ways of describing the same experience?). He points to evidence that hallucinations are associated with problems in memory where people fail to distinguish relevant from irrelevant items at retrieval. These results remind me of studies of source memory in which patients have difficulty distinguishing between words produced by themselves or by others and between imagined and perceived pictures (Brebion et al., 2000, 2009).

    I think it most plausible that the same mechanism might underlie thought insertion and auditory hallucinations. So how can I link this mechanism with Bayesian-type prediction errors? One way might be to note that prediction errors have a variance as well as a mean, so that prediction errors with a low variance (or high precision) will have more impact than those with high variance. Perhaps irrelevant, unexpected thoughts are normally ignored because they are associated with low precision. In the acute psychotic state, the precision associated with prediction errors becomes abnormally high. This idea is clearly extremely sketchy at present, but I believe this would be a fruitful path to explore.
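    The precision-weighting idea can be put in simple Bayesian terms. The following toy sketch (illustrative numbers only, not any specific model from the literature) shows how the same unexpected signal is largely ignored when its prediction error carries low precision, but dominates belief updating when that precision is abnormally high:

```python
# Toy illustration of precision-weighted prediction errors.
# For Gaussian beliefs, the posterior mean is a precision-weighted
# average of the prior prediction and the observed signal.
# All numbers below are arbitrary illustrative assumptions.

def weighted_update(prior, observation, prior_precision, error_precision):
    """Update a Gaussian belief; precision = inverse variance."""
    posterior_precision = prior_precision + error_precision
    posterior_mean = (prior_precision * prior +
                      error_precision * observation) / posterior_precision
    return posterior_mean, posterior_precision

prior, obs = 0.0, 1.0  # predicted vs. actual signal (arbitrary units)

# Normal case: the irrelevant, unexpected signal carries low precision,
# so the posterior stays close to the prior (the signal is "ignored").
mean_low, _ = weighted_update(prior, obs, prior_precision=9.0,
                              error_precision=1.0)

# Acute psychotic state: abnormally high precision on the prediction
# error lets the same irrelevant signal dominate belief updating.
mean_high, _ = weighted_update(prior, obs, prior_precision=9.0,
                               error_precision=81.0)

print(mean_low, mean_high)  # 0.1 vs 0.9
```

    The contrast between the two posterior means is the whole point: with everything else held fixed, raising the precision assigned to prediction errors is enough to make an otherwise negligible signal take over.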

    I am very pleased to respond to Patrick Haggard’s comments since he always says interesting things and conducts such elegant experiments. He asks what it is we are aware of when we act and is doubtful of the suggestion that we are aware of the forward model. He cites as evidence against this idea the case of IW, who has no proprioception but can nevertheless move. The problem here is that the forward model (and the inverse model) are continually updated by experience, and so presumably IW has, at the least, a very abnormal forward model. Nevertheless, I am inclined to agree with Patrick on this point.

    Elisabeth is interested in the idea that we feel most in control when we are not aware of anything, except, perhaps, that ‘everything is fine’. This is why our experience of action is so ‘thin and evasive’, as Thomas Metzinger has called it. We become aware of our actions when things don’t go according to our predictions.

    Patrick’s second point is that our feeling of agency is determined by events before as well as after the action; he provides convincing evidence for this from his priming experiment. The phenomenon he describes seems to me very similar to the ideas of Joëlle Proust that I have already mentioned: prior to an action, we have to assess whether it is the right one to choose. This will depend on the likely value of the outcome and also on the likely effort needed for the action. If the value is too low and the effort too high, another action will be chosen. The conflicting prime causes the selection of the action to be more effortful than expected. However, this is agency as concerned with the control of our movements. It is not agency as concerned with our control of the outside world. This latter aspect of agency presumably still depends upon the outcome.
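    The value/effort trade-off described above can be sketched as a minimal decision rule (a hypothetical illustration with made-up numbers, not a model anyone in the discussion has proposed): an action is attempted only if its expected payoff outweighs its predicted effort, so anything that raises predicted effort, such as a conflicting prime, can tip the balance against an otherwise attractive action.

```python
# Toy sketch of value/effort action selection. The linear trade-off
# and all quantities are illustrative assumptions.

def worth_attempting(value, p_success, predicted_effort):
    """Attempt the action only if expected value exceeds predicted effort."""
    return value * p_success > predicted_effort

# Easy, valuable action: attempted.
baseline = worth_attempting(value=10.0, p_success=0.9, predicted_effort=2.0)

# A conflicting prime makes selecting the same action more effortful
# than expected, tipping the balance against it.
primed = worth_attempting(value=10.0, p_success=0.9, predicted_effort=9.5)

print(baseline, primed)  # True False
```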

    Proust, J. (2010). Metacognition. Philosophy Compass, 5(11), 989-998.

    Brebion, G., Amador, X., David, A., Malaspina, D., Sharif, Z., & Gorman, J. M. (2000). Positive symptomatology and source-monitoring failure in schizophrenia – an analysis of symptom-specific effects. Psychiatry Research, 95(2), 119-131.

    Brebion, G., David, A. S., Bressan, R. A., Ohlsen, R. I., & Pilowsky, L. S. (2009). Hallucinations and two types of free-recall intrusion in schizophrenia. Psychological Medicine, 39(6), 917-926.

  26. Hi everyone!

    As this is the final day of the conference, I would like to thank you all for your comments, objections and suggestions. I’m sure the final version of the paper will be much improved thanks to your input. I would also like to take this opportunity to renew my gratitude to Myrto for making those wonderful exchanges possible.

    Let me also comment on some of the points raised in recent posts.

    Expertise
    In her last post on expert performers, Myrto points out that if forward model predictions must be more fine-grained in experts to allow them to perform the way they do, then so too must proprioception be, for if feedback remained more coarse-grained, matches would be registered at the comparator where more precise movements were in fact called for. She’s right, but I would suggest that the development of a more detailed proprioceptive sense can be explained in part in terms of sensory attenuation mechanisms of the kind postulated by Chris Frith and Sarah Blakemore: the more precise the forward model predictions, the more precise the filtering out of proprioceptive feedback; and since one learns from one’s mistakes, the more precise one’s predictions will be next time. So I wouldn’t say that experts show superior performance because their proprioception is less attenuated than that of non-experts, but because it is attenuated in a more precise fashion than that of non-experts.
    Regarding the Tsakiris et al. (2005) study, Myrto worries that the overall availability of proprioceptive information was not held constant across conditions, since in the active condition the participant also had proprioception from her left finger to go on. That’s true, but I do not see how it helps. The task is to decide whether the right hand you see is yours. If, as Myrto suggests, you do that on the basis of a comparison between proprioception and vision, what you must compare is visual information about the right hand with proprioceptive information about the right hand. But what I don’t understand is how proprioceptive information (or lack thereof) about the left hand is supposed to make a difference.
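    The attenuation-in-more-precise-fashion idea can be made concrete with a toy comparator (a hypothetical illustration with made-up numbers, not the Frith & Blakemore model itself): feedback falling within the forward model's tolerance band is cancelled, so a coarse novice prediction masks a small error that an expert's fine-grained prediction lets through for correction.

```python
# Toy comparator sketch: sensory attenuation scales with the precision
# of the forward-model prediction. All numbers are illustrative
# assumptions, not parameters from any published model.

def attenuated_feedback(predicted, actual, prediction_sd):
    """Cancel feedback within the prediction's tolerance band;
    pass on only the residual error for corrective use."""
    error = actual - predicted
    return max(0.0, abs(error) - prediction_sd)

# Novice: coarse prediction (wide band) swallows a small 0.3-unit error,
# so no corrective signal survives the comparator.
novice = attenuated_feedback(predicted=1.0, actual=1.3, prediction_sd=0.5)

# Expert: fine-grained prediction (narrow band) lets most of the same
# error through, making the residual available for correction.
expert = attenuated_feedback(predicted=1.0, actual=1.3, prediction_sd=0.05)

print(novice, expert)  # novice ≈ 0.0, expert ≈ 0.25
```

    On this sketch the expert's feedback is not less attenuated overall; it is attenuated more precisely, which is exactly what leaves genuine errors detectable.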

    Pushmi-pullyu representations
    Assaf complains that motor predictions do not seem to be pushmi-pullyu representations (RPPs) in the sense Millikan intended. It’s true that I’m not completely faithful to the spirit of Millikan’s proposal (and my preferring to talk of thelic rather than pushmi-pullyu representations is a way of acknowledging this difference).
    One reason I move away from Millikan’s original conception of RPPs is that I think her dichotomy (either it is a RPP or it is something filtered through an ALL PURPOSE COGNITIVE MECHANISM that first forms a thetic representation, then retrieves a relevant telic representation, then performs practical inference and finally yields an action corresponding to its conclusion) is an oversimplification.
    While Millikan’s representations are pushmi-pullyu qua types, in the sense that in all instances of the type the same thetic component is associated with the same telic component, I would say that motor predictions are thelic qua tokens, i.e., all tokens involve both a telic component and a thetic component, although which telic component is associated with which thetic component depends on contextual variables (the current state of the system and the current state of the environment).
    One of the things that differentiate the motor control system from an all-purpose cognitive mechanism is, as its name indicates, its control function. In contrast to the all-purpose cognitive mechanisms of the kind Millikan envisages, the motor control system has only done half of its job once it has “decided on a course of action”. The other half of its job involves controlling the unfolding of the action and it is, in my view, in their role in the regulation of ongoing action that the thetic and telic components of motor predictions become inseparable.

  27. Hi all,

    Thanks so much to everyone for making this such an enjoyable and stimulating session!

    Special thanks to Elisabeth, Chris, Patrick, Markus, and John for their rich contributions. It was a pleasure to do this with you.

    And many thanks to Richard for organizing yet another excellent conference. Congratulations!

    All the best,

    Myrto
