Nonhuman Animals: Neither Saints Nor Sinners

Presenter: Cheryl Abbate, Marquette University

Commenter 1: Stephane Savannah, Macquarie University

Commenter 2: Logan Fletcher, University of Maryland

9 Comments

  1. Stephane Savannah:

    Much gratitude to you for your insightful comments and references, which I am sure will be an inspiration as I revise my project. As you rightly noted, this is an early draft in an area with which I am newly acquainted (animal minds/consciousness), so I appreciate having individuals like you, who have considerable experience in this area, provide such thought-provoking and well-researched feedback.

    Below are my responses to your comments, which I have attempted to address to the best of my ability.

    1. To begin, I am not quite convinced that self-consciousness is a necessary condition of moral patiency, but I will definitely pursue research into this question. I would be interested in hearing more about why you believe it is so critical for me to address this in my specific project.

    2. You write, “Abbate argues that animals are conscious but incapable of mindreading (i.e. do not have TOM).”

    Reply: To be clear, nowhere in my paper do I argue this. I do argue that one could have metacognition without the capacity to mindread (as is evident in the self-confidence studies I cited), yet this is not to say that nonhuman animals are incapable of mindreading. In fact, I go to great lengths to demonstrate that nonhuman animals do have the capacity to mindread, as in the empirical observations regarding deception in plover birds and baboons.

    3. You write: “Abbate presents a case that the correct theory of consciousness is higher-order thought (HOT) theory … but it is not clear why HOT should be considered the superior theory other than to set the bar deliberately high (which you stress twice in your response).”

    Reply: Yes, this is exactly the goal of my paper: to respond on behalf of nonhuman animals to the most stringent theory of consciousness. If we can respond to HOT theories, then most certainly we can respond to HOP or FOR theories in regard to animal consciousness. So why provide an argument that animals are conscious according to HOP or FOR theories (or some other less demanding theory of consciousness) when, after doing so, animal ethicists will remain challenged by HOT theorists?

    4. You write that my focus on HOT theories “leave[s] out in the cold those who would agree that some animals are capable of suffering pain despite an inability for HOT or conceptual thoughts.”

    My reply: My goal is to demonstrate that even less sophisticated animals possess HOT. I could perhaps make this clearer by employing examples that illustrate the capacity for metacognition/mindreading in the “lower” animals.

    5. You write, “the issue with regard to moral agency does not depend directly on which theory of consciousness is correct but (according to Abbate) on whether animals are capable of mindreading and moral judgements.”

    Reply: I am not quite able to make sense of this critique. My claims are:
    (1) Moral agency does depend, in part, on having HOT.
    (2) If nonhuman animals possess HOT, then this brings them dangerously close to claims of agency.
    (3) HOT theories maintain that a being is conscious only if it possesses HOT.
    (4) Thus, if I am to illustrate that nonhuman animals are conscious according to HOT theories (which is one of the goals of the paper), then I must be prepared to respond to the problem of moral agency that is lurking in the background. This is why the two issues (moral agency and HOT theories) are related.

    6. You state, “But, once again, their specific arguments are not provided/analysed and Abbate does very little by way of providing her own. To support her view that HOT does not entail mindreading she raises the example of confidence studies of metacognition (where subjects are allowed an option to ‘bail out’ of a test for a lesser but guaranteed reward, thereby apparently indicating their level of confidence in their own ability). The discussion is far from comprehensive – with just a single experiment cited – and the argument is insufficient.”

    Reply: My aim is not to develop my own HOT theory (I would have much more research to do before tackling that task!). Rather, my aim is to point to those theories that have already been developed in order to illustrate a cautious way of securing the moral considerability of nonhuman animals on a HOT account. On a side note, your point that a single self-confidence study is insufficient to illustrate my claim is noted. In fact, I think my paper would do better to cite a few empirical examples of “lower” animals who demonstrate the capacity for metacognition.

    7. You write: “Quite plausibly, a subject in these experiments could be capable of HOT and also (independently) capable of mindreading.”

    Reply: Yes, I agree, and as far as I know, I never made the claim that the monkeys in the self-confidence test were incapable of mindreading. I merely intended to illustrate that metacognition is possible in the absence of mindreading. If there is strong language in my paper that has led you to this particular critique, I would appreciate it if you could point it out!

    8. Morgan’s Canon is a helpful reference! I look forward to putting it to use in my paper.

    9. You write “Abbate should consider critically evaluating a wider range of behaviours that seem to indicate the possession of a moral code.”

    Reply: I agree as well. There are important references from Marc Bekoff’s book Wild Justice that I plan to include in the revised version of my paper. Thank you for your suggestions, which I am sure will be helpful.

    10. You write “I would caution against the use of loaded terminology. For example, Abbate says “…animals can distinguish an action that violates their group’s code and threatens cooperative behavior from one that does not…” ‘Violating the code’ here would imply to me that the subjects involved had a (moral) code if I did not already know that Abbate was arguing against this interpretation. In addition, it was surprising that Abbate later claims that “It seems unproblematic to claim that nonhuman animals have unilateral morality”. I think this is an inappropriate characterisation of what Abbate is really referring to here, which, rather than any kind of ‘morality’, is a simple associative account of actions being rewarded (and therefore desirable) or punished (and therefore undesirable).”

    Reply: This is where perhaps I need to better distinguish between having some sort of morality and being a moral agent. This consideration stems from Bekoff’s Wild Justice, in which he points out that animals demonstrate moral behaviour without themselves being moral agents: they exhibit other-regarding behaviours and capacities that nurture and foster social encounters and allow for the flexibility needed for individuals to adapt to social contexts (e.g. cooperation, empathy, justice, forgiveness, trust, reciprocity, and so forth). I have no qualms conceding that nonhuman animals may have this sort of morality (which is similar to unilateral morality), because it does not entail the stronger claim that they are moral agents.

    11. You write “In the concluding section Abbate introduces a host of new issues and veers somewhat into a polemic. One can be sympathetic to her cause for animal rights but I feel that this section is less scholarly and tends to be somewhat emotive rather than reasoned.”

    Reply: I think there is a significant point I am addressing here, and I do not, by any means, see it as lacking reason or as merely emotive. There is an important question to ask about why we value rationality so much, as is evident from the numerous attempts to demonstrate the cognitive sophistication of certain animals in order to secure their moral considerability or possession of rights (legal or moral), when in fact many of the actions these animals perform can be interpreted in less sophisticated ways.

    12. You make reference to my paper: “…consider whether comparing nonhuman animals to human beings is insulting and offensive to animals, as no animal is or could be as cruel as man” and then respond with: “That a subject is capable of feeling insulted or offended could easily be used to argue (contrary to Abbate’s thesis) that the subject has a sense of justice and therefore possesses moral concepts.”

    Reply: I don’t think a being has to feel insulted or offended in order for me to claim that we can act in an insulting and offensive way. Diamond (1991) and Anderson (2004) argue that we can act in offensive ways toward beings who are completely indifferent to the “offensive” treatment, such as carting around a disabled human being naked and feeding him cat food. Just as we wouldn’t justify such actions by claiming “he [the disabled human] doesn’t mind,” neither should we attempt to justify degrading characterizations of nonhuman animals just because “they don’t mind.” This, quite evidently, is where my paper becomes more ethical in nature, and I believe that this section is absolutely critical, given that my task is ultimately a normative one. It is not a purely analytical approach devoid of any moral references, and I worry that these references seem purely “emotive” to you.

    13. In your summary you write “Rather than focusing on HOT, Abbate should more fully examine and evaluate the alternative theories of consciousness and from this make a case that (some) nonhuman animals possess phenomenal consciousness (which, I suggest, would be true of several theories). This would then establish the first part of her thesis, which is that, being able to experience suffering, these animals are eligible for moral considerability.”

    My response: this undercuts my intended goal. Quite honestly, I find it uninteresting to argue that nonhuman animals possess phenomenal consciousness according to FOR or HOP theories. We, hopefully, have moved well beyond the point of arguing that nonhuman animals are conscious according to theories of consciousness that require little to no conceptual capacity. The real challenge, as I see it, is responding to HOT theories, which, again, is why I have targeted this theory specifically.

    Again, thank you for your comments. You have given me much to think about. Although I do not agree with many of the criticisms, you have indicated to me where I should develop my paper.

  2. Hi Cheryl,
    Thanks for your reply. I do not have much to add to my previous commentary, but I do want to emphasise a couple of points which I think you should consider in regard to improving your case. Your position is that animals are capable of HOT yet not of mindreading. Firstly, if you want to rely on metacognition experiments to demonstrate HOT in animals, you really need to go beyond just citing “a few empirical examples”; you need to analyse the results and argue your side, since there are arguments to the contrary and it is not a given that these experiments really do demonstrate HOT. I will not go into the details here, but my own position is that they do not, and that associative accounts of the results are more parsimonious. Secondly, in your reply you say “I merely intended to illustrate that metacognition is possible in the absence of mindreading.” Even if that is the case, you have not demonstrated that those experimental subjects are incapable of mindreading, which is crucial to your thesis. You need to provide evidence and arguments for that position.

    Regards,
    Stef

  3. Hi Cheryl,

    First off, I enjoyed your paper and I’m glad that this topic is getting some attention. I have two points that I’d like to make, one empirical and one theoretical.

    There’s actually pretty good evidence that chimpanzees can attribute and evaluate intentions. The experimental paradigm used to test this usually involves a chimpanzee in a test area waiting for an experimenter to give her food. In one condition, the experimenter attempts to give the food, but is unable to due to clumsiness. In the second condition, the experimenter also fails to provide the food, but as the result of an intentional action rather than an accident. The chimpanzee will typically wait longer in the testing area and exhibit less frustration in the “accidental” condition than in the “intentional” condition. There have also been similar findings with capuchin monkeys and (I believe) orangutans. This is relevant to your paper because it seems to be evidence of third-order intentionality (a thought about a mental state related to an action). It would also seem as though this is a case of Piaget’s “subjective morality.”

    The other point I want to make concerns your claim that moral concepts like rightness and wrongness are complex. I don’t really see why we should accept this. Sure, philosophical notions of rightness and wrongness can be very complex, but we don’t suppose that most humans really think in terms of the Formula of Humanity or the principle of utility, let alone animals. Moreover, Moore’s Open Question Argument would have us believe that it’s not so clear that these sorts of accounts can ever capture the sense of what we mean by a term like “good”, which, Moore argues, must be treated as an unanalyzable primitive. When we consider what kind of mental representation could plausibly serve as a moral concept in animals, maybe we should look for some kind of conceptual primitive similar to what Moore had in mind, perhaps associated with some kind of emotional valence and bearing a particular kind of causal role. All this is to say that, given the ambiguous behavioral evidence we see in animals doing things that sometimes look an awful lot like morality, it’s not so crazy to explain their behavior in terms of mental states that contain some kind of rudimentary moral concept (rather than, say, a string of instrumental reasoning about what sort of behavior is accepted by the social group).

    Putting these two points together, it looks like there might be some animals that actually fit your more stringent criteria for moral agency – or at least, it’s plausible that an animal like the chimpanzee comes pretty close.

    Here’s a reference for one of the primate intention-reading studies I gestured to:
    Phillips, W., Barnes, J. L., Mahajan, N., Yamaguchi, M. and Santos, L. R. (2009), ‘Unwilling’ versus ‘unable’: capuchin monkeys’ (Cebus apella) understanding of human intentional action. Developmental Science, 12: 938–945.

  4. Hi Cheryl, Stephane, and Logan,

    I am happy that the topic of animal consciousness and its connection to the ethical treatment of animals is getting some airtime at CO5. Thank you all for the paper and commentaries.

    I am ready to challenge what Cheryl and, more directly, Logan have said about the relationship between being a moral patient and the conscious experience of suffering. You both seem to assume that an animal’s capacity for suffering should matter in our moral deliberations because that suffering is conscious. But it is not obvious why consciousness matters here. Suffering that is unconscious seems to be just as relevant: unconscious suffering has all the relevant functional properties that conscious suffering has, regardless of an animal’s (or person’s) awareness of it. And if we should count unconscious suffering in our moral deliberations about animals, then bringing in HOTs, metacognition, etc., to make the point is a little beside the point. Or so it seems to me.

    Best,

    Michal

  5. Hi Michal, I wanted to say that I entirely agree with the point you are making here, in claiming that suffering could be morally relevant (in the sense that would qualify one as a moral patient), even if it were unconscious, i.e., in the absence of any conscious awareness of oneself as suffering. I didn’t at all take myself to be assuming that conscious experience is necessary in order to be a moral patient. In fact, I meant to be arguing *against* precisely this assumption. Hence the following statements on p5 of my reply: “the assumption that suffering must be phenomenally conscious in order to be morally relevant is one that can be called in question”; “we might understand the sort of moral patiency attributable to nonhuman animals on a utilitarian theory as rooted in first-order intentional states that are lacking in phenomenal consciousness”. So I am very much in sympathy with the idea that unconscious suffering could be of moral relevance.

  6. To all who have commented: I appreciate your comments and look forward to reading them more closely and responding early next week. I am out of town for a conference and I have a hectic schedule, so I apologize that I have not had adequate time to read and respond (especially to Logan’s thoughtful and thorough comments). I am working on a response to Logan and should have it posted early next week. Thanks for your understanding and patience!

  7. A response to Logan:
    First, I would like to say thank you for your thorough and detailed comments. They have given me much to think about in revising my paper. You obviously spent considerable time commenting, and it is much appreciated.
    With that being said, I will provide responses to your main concerns with my paper:
    (1) Your first concern is that I have created a “pseudo-problem”; that is, you believe that the possession of HOT should in no way entail, or even present us with a worry, that moral agency follows. Thus, why even tackle this “so-called” problem?
    In working on this project I have had the same worry, yet this is most likely because you and I both have a clear sense of what it is to be a moral agent, while there is in fact much ongoing discussion about this very issue (what is required for moral agency).
    It might be helpful here to mention the article that motivated my paper, which briefly notes the connection between HOT and moral agency without expanding on the thought: Robert Francescotti’s “Animal Mind and Animal Ethics: An Introduction.” Francescotti writes that “one is a moral agent only if one is capable of having thoughts about the welfare of others – which would consist, at least in part, in thoughts about the mental states of others. Of course, higher-order intentionality might not, itself, be sufficient for moral agency. What might be required is a certain special type of thought about the mentality of others. In the end, it is up to the ethicists to figure out what sort of higher-order intentionality is necessary and/or sufficient for moral agency” (246). Part of my project is to spell out what else is needed for moral agency, while illustrating that nonhuman animals do not necessarily possess this “special type of thought.”
    There are a number of other motivations behind my paper, most of them ethical. One in particular is, as you recognize, a tendency to anthropomorphize nonhuman animals, such as our tendency to call them saints or sinners (in fact, my original draft included the example of Saint Guinefort, a dog who was actually venerated as a saint for actions that were interpreted anthropomorphically). Furthermore, we might be too quick to jump to claims of moral agency when we see nonhuman animals demonstrate some sort of moral behavior that seems to entail that they have a theory of mind, such as apparent virtues like compassion (an ape rescuing a child in a zoo, a dolphin rescuing a surfer). If we see actions like these and then assume that nonhuman animals have HOT (especially mindreading), we might too readily assume some sort of agency. One of my projects is to illustrate what is required for full-blown moral agency versus having some sort of morality, yet I attempt to do this in the language of philosophy of mind, as many others, such as Bekoff, have already written on this topic extensively in more ethical language.
    So, perhaps the conclusion does seem too obvious to some. Yet there are many discussions within ethics that attribute moral agency to some animals too readily, and these are perhaps the discussions my paper is targeting. Furthermore, there are other considerations related to agency that I am able to address by targeting the possibility that nonhuman animals are moral agents.

    I do appreciate your suggestion regarding a new framing as “an investigation of the different roles played by higher-order intentionality in contributing to moral patiency and agency.” This is something that I will give thought to, although I worry that an important aspect of my paper would be lost, namely our tendency to anthropomorphize animals and why this has real negative consequences in regard to moral agency. I do think there is much confusion between having a morality and being a moral agent, which can be clarified by a discussion of higher-order mental states. Perhaps, though, I could pursue this as a separate paper topic, although much would obviously overlap with what I have written in this one.

    (2) On page 4, you challenge my claim that “unsophisticated higher-order thought can be attributed to all animals,” and rightly so. Many animal ethicists start with particular examples of nonhuman animals who demonstrate some form of higher cognition and then attribute higher cognitive states to all animals, and I definitely would like to avoid this error. In revising, I will specify that the paper is concerned with vertebrates. This is not to say that other sorts of animals do not have HOT, but space constraints allow me to investigate only the “higher” kinds of nonhuman animals. Providing an account for the whole animal kingdom would obviously require more extensive, lengthy research.

    Also, by “unsophisticated HOT,” I merely mean HOT, such as metacognition or mindreading, that does not entail a capacity for language or the possession of complex concepts. If this is confusing, I would be interested to hear why.

    (3) On page 4, you speak in defense of Carruthers, who revises his work in order to address the problem of the moral considerability of nonhuman animals by conceding that they can “suffer” without having conscious mental states. If someone could explain how a being can suffer without being phenomenally conscious, it would be greatly appreciated! This just seems like a desperate attempt to avoid criticism from animal ethicists while still maintaining a view that denies consciousness to nonhuman animals in order to preserve a human-dominated worldview. Perhaps if Carruthers were to argue that nonhuman animals are morally considerable for some reason other than the capacity to suffer (maybe merely being alive is morally relevant, as a biocentric view of moral considerability would maintain), he could save face: that is, maintain that animals are not conscious, yet claim that being a moral patient does not require consciousness, and therefore that nonhuman animals can be morally considerable without being conscious. But to me it seems absurd that he still appeals to some sort of sentience model of moral considerability while claiming both (1) that animals are not phenomenally conscious, yet (2) that they can still suffer and are thus morally considerable.

    Maybe I need to read Carruthers a little more closely, but I simply cannot wrap my head around the idea that a being could suffer without being phenomenally conscious. Sentience-based models of moral considerability (which I ultimately endorse) explicitly define the capacity for sentience in terms of subjective feels and phenomenal awareness.

    Furthermore, even if his account offers some sort of moral protection to nonhuman animals, it seems to imply that this protection is inferior to that afforded to human beings, who are the “only” conscious ones with subjective feels and phenomenal awareness. This perpetuates the idea that, yeah, maybe animals suffer, but they suffer “less,” and therefore we are entitled to continue exploiting them for human purposes (since humans, with their phenomenally conscious states, suffer “more”). From the ethical perspective I attempt to preserve, being a moral patient is a categorical notion: either you are directly morally considerable or you are not. There is no sliding scale on which some beings are “more morally considerable than others,” yet Carruthers’ account seems to entail such a sliding scale of moral considerability, with humans dominating at the top.

    On a side note, regardless of whether Carruthers dodges the problem of nonhuman animal moral considerability, we are still faced with individuals like Davidson who subscribe to HOT theories of consciousness while still maintaining that nonhuman animals are not morally considerable due to their so-called lack of HOT.

    (5) On page 8, you point to the possibility that I have in fact anthropomorphized nonhuman animals in my discussion of the self-confidence studies. You point out that perhaps nonhuman animals do not have the self-awareness that accompanies feelings of uncertainty in human beings.

    This is a very interesting critique. The problem is: can we really know or understand the minds of animals without having the experience of being in their minds ourselves? We are thus left with empirical investigations, such as these self-confidence studies, in our attempts to describe the animal mind. When we observe animals performing actions similar to those of human beings, do we attribute to them all of the corresponding attributes and states of mind that humans have when performing these same actions? If so, is this an instance of anthropomorphizing them?

    I don’t think it’s fair to claim that anytime we see and point out a similarity between human and animal minds (that the animal is similar to the human in some respect), we are anthropomorphizing them. And in fact, if we see a similarity between animals and humans and go to such an extreme to provide an inferior account of the animal mind, we seem to be doing the exact opposite of anthropomorphizing (is there a word for this? Extreme distancing of species, perhaps?), which is itself problematic (such as describing animals as merely having first-order intentional states while human beings are doing the exact same thing, yet we do not hesitate to describe their states in terms of second-order intentionality). I think there is a disanalogy between the self-confidence cases and the case of moral agency: in the latter we do not observe similarities between humans and animals (we do not observe moral agency, which I describe as acting independently of desires, in nonhuman animals), while in the self-confidence case we do in fact observe such similarities.

    Furthermore, practically speaking, my motivation is to ensure that nonhuman animals are morally considerable under a HOT account. My goal thus depends upon demonstrating some sort of HOT in nonhuman animals, and so even if this is an instance of “anthropomorphizing” them, it is instrumental in securing their moral considerability. Perhaps, then, there is good anthropomorphizing and bad anthropomorphizing? Although this obviously seems to allow one to pick and choose when to anthropomorphize nonhuman animals in order to benefit the animal, and I’m not sure this is the route we would like to go. This is definitely something to give more thought to.

    (6) Your last concern is quite interesting, and perhaps I would benefit from describing the sort of morality I have in mind. You offer the possibility that morality might not involve third-order intentionality. Namely, you point out that morality might apply to actions rather than desires, which would mean that morality can be described in terms of second-order intentionality; that is, morality consists of “the judgment that it would be wrong to act in a way that causes someone pain” (10). I am still not quite sure how this is second-order intentionality. It seems to me that you have substituted the word “judgment” for “desire,” yet this judgment would have third-order intentionality.

    So we have the awareness/HOT that a certain action of ours might cause another being pain (this is second-order intentionality), and then, as you put it, morality requires that we have the higher-order judgment that this is wrong. But doesn’t this mean that morality still consists of third-order intentionality? Correct me if I have misunderstood you.

    Also, basing morality on actions rather than on desires or other mental states (such as intentions) is not compatible with the ethical theory I endorse (a sort of deontology), which perhaps I should mention in my paper. As a non-utilitarian, I do not endorse the view that morality consists of the mere judgment of actions and their outcomes and effects, without reference to the agent’s intentions. However, arguing for the correct view of morality would again be a separate project, so I would simply have to stipulate up front what sort of morality I have in mind.

    I should point out that, coming into this paper, I endorse a sentience-based view of moral considerability and a deontological ethical theory. In order for my project to proceed, these are the views I must take for granted, since considering challenges from other ethical theories and other models of moral considerability is a paper in and of itself. However, you have been helpful in pointing out that these commitments should be noted up front, perhaps in a footnote.

  8. Hi Cheryl,

    Many thanks for your interesting paper. It’s really great that this very important topic is receiving some attention here. You’ve been having a great discussion, and I’m very sorry I couldn’t join in earlier, but my schedule has just been too hectic over the past few days. There is much in your paper and the ensuing discussion that I would like to comment on, but, alas, I will have to restrict myself to a very small point regarding your response to Logan.

    You write: “And in fact, if we see a similarity between animals and humans and go to such an extreme to provide an inferior account of the animal mind, we seem to be doing the exact opposite of anthropomorphizing (is there a word for this? Extreme distancing of species, perhaps?), which is itself problematic (such as describing animals as merely having first-order intentional states while human beings are doing the exact same thing, yet we do not hesitate to describe their states in terms of second-order intentionality).” I just thought that I should point out that this is called “anthropodenial” in the literature, and that Frans de Waal and Elliott Sober (among others) have written about this (just in case you want to look into this). See, for example:

    Sober, E. (2005). “Comparative Psychology Meets Evolutionary Biology: Morgan’s Canon and Cladistic Parsimony.” In Lorraine Daston & Gregg Mitman (eds.), Thinking with Animals: New Perspectives on Anthropomorphism, pp. 85–99. Columbia University Press.

    De Waal, F. B. M. (1999). “Anthropomorphism and Anthropodenial.” Philosophical Topics 27(1): 255–280.

    Sorry for just making such a minor point. I hope to be able to revisit this topic another time.

    Best,
    Kristina
