Presenter: Brian Talbot, University of Colorado, Boulder
- Brian’s Paper
Commentator 2: Bryce Huebner, Georgetown University
I’m going to try to return your kickoff with my responses to your responses, and so on. I’ve greatly enjoyed this exchange so far, and I should say that while I have the worries about aspects of your research that I raise in my paper, these aside, I consider it a model of well-done experimental work.
As I understand the research, high CRT subjects engage System Two, generally speaking, when they perceive a task as difficult. You suggest that the novelty of your task should make it seem difficult. The environment in which the task is completed (the lab) makes it seem somewhat unusual, but the task itself is not unusual. People intuitively and easily make mental state judgments all the time, and ascribing mental states to robots is not even that unusual (at least, not if one has been to the right movies). I’m not convinced that this would seem like a difficult task for that reason.
You also say that subjects should have previously noticed conflicts between System One and Two (if there are any), which would make high CRT subjects likely to engage System Two here. Your claim is that subjects would have noticed these conflicts because “we all have encountered many cases like this—such as when we reflect on whether insects (and other animals), or plants, or group agents, or various machines have certain mental states.” I guess I don’t expect most people to have reflected on this question all that much. Does that make me a pessimist?
This is important because I think that we need a lot of (noticed) conflict between System One and Two before we start really engaging System Two. I can only offer anecdotal evidence here, but here goes: We know that System One and Two DO conflict – Arico et al.’s research shows this (System One says that plants feel pain, System Two says they don’t). Presumably we’ve all experienced such conflicts, since we’ve all clipped flowers off plants and the like. I’m a high CRT subject. I generally speaking experience judgments about mental states as quite easy. That suggests to me that other high CRT subjects won’t experience them as difficult, either.
Now, of course it is an empirical question whether or not (enough) high CRT subjects engaged System Two when responding to your prompts, and it could turn out that they did. My point is only that I don’t think we can conclude that yet. However, I do think that the use of the CRT test is a step in the right direction; one thing I really like about your original paper was that it did not rely on any one study to make your point, but rather on converging evidence from a number of studies. Perhaps you can offer converging evidence from a number of studies that engaging System Two doesn’t make a difference (for example, you could manipulate perceived difficulty as in Alter 2007; see the cite in my responses).
My point about low vs. high effort System Two judgments was that sometimes things cannot stand out as relevant, even though they are obvious, because they are too obvious (and thus too ordinary to stand out). If high CRT subjects use mostly low effort System Two judgments in your task, it might not occur to them to think about qualia (if they exist).
Premise 2

The model I advanced in my original paper was: System One ascriptions of mental states are based on associations between external features and mental states. We can explain the difference you originally found between ascriptions of valenced and non-valenced states by different features being associated with each type of state (e.g., characteristic facial features being associated with valenced mental states). In your responses to my paper, you argue that qualia (if they exist) might get associated into this mix. I accept that this is possible, but my point was that it doesn’t make a difference. You either get associations between external features and mental states (my original model) or between external features, qualia, and mental states. Either way, perception of the right external features should activate judgments of mental states, either through direct association, or by association with qualia and then association with mental states. As long as we have different external feature sets associated with valenced and non-valenced mental states (which will be true even if you are right and qualia can be associated into the mix), we will get the results found in your study from System One judgments, whether or not there are qualia.
Bryce and Justin,
I’m also sympathetic to the point that the “folk” are not a monolithic group, and that failure to keep that in mind is problematic. I think the use of the CRT test is a good sort of move in this vein – it involves understanding that different people think differently, and that that can be important to understanding one’s results. I also find that the term “folk” feels a bit pejorative, although that may say more about my musical tastes than anything else.
I do think, however, that there are good reasons to talk about the group that “folk” refers to (laypeople/non-philosophers) as a group. In part, this is because there is an important difference between philosophers and laypeople, which is that we have studied philosophy and laypeople (largely) have not. This difference matters: studying philosophy often gives us vested interests in the truth of certain views, which studies show time and time again affects judgments (likely both System One and Two judgments); studying philosophy exposes us to theories, and exposure to claims makes them seem more plausible (all things being equal). There are other biases that philosophers are likely prone to due to being philosophers, and for that reason I think it’s worth distinguishing us from the non-philosophers.
As for the rest of Bryce’s comments, I am also still digesting them. I’ll post again when I know what I want to say.
I think that maybe it is best to back up for a minute, as I worry that the big picture might be getting lost in focusing on specific objections to some of the premises of the argument. Our main concern is with the evidential value of claims that phenomenal consciousness is obvious—that it is phenomenologically obvious, the most central and manifest aspect of our mental lives, and so on. What is the evidential value of such obviousness claims? It seems to me that if they have any, then it is because they are asserting that phenomenal consciousness is evident just in having ordinary perceptual experiences. That is, the claims would not have much if any evidential value if the “obviousness” merely expresses agreement with certain philosophical arguments that are contested (as we should then focus on the arguments instead), or if they simply reflect that the philosophers have adopted a certain philosophical theory (as we then would want to know the reasons for accepting that theory, not simply that they accept it). Taking the obviousness claims to be such that they have some real evidential value, we want to test them. And it seems, at least initially, that if phenomenal consciousness is really pretheoretically obvious or evident just in having ordinary perceptual experiences, then it should be evident to most people.
With that framework in place, what sorts of judgments should matter to assessing whether phenomenal consciousness is obvious? I would think that it is people’s intuitive judgments that are at issue—that is, more or less their System One judgments, accepting that distinction. If we focused on System Two judgments instead—and especially high effort System Two judgments—then the worry is that the judgments would reflect something besides what is evident just in having the relevant experiences. That is, I don’t think that System Two judgments are the right sort of judgments for testing the claim that we are interested in. This is essentially our objection to Premise 3. Fortunately, however, although more evidence is needed to say with much certainty what sort of judgments people are calling on in answering our probe questions, we take the CRT data to tentatively suggest that this doesn’t really make much of a difference.
So, while the back-and-forth concerning Premise 1 perhaps gives the impression that we are arguing that our probes mainly elicited System Two judgments, that is not what we are up to. Rather, we hold that System One judgments are the right sort of judgments to be testing for the question we are interested in, although it is at best unclear that System Two judgments generally differ from System One judgments for this case anyway.
Nonetheless, I think that this is a very interesting question and agree that the way to make a compelling case, one way or the other, is with multiple studies offering converging evidence. I like the idea of manipulating the perceived difficulty of the question. If I remember Alter correctly, the idea would be to give participants either some easy or some difficult problems before giving them the probe and see what effect this has on their responses (as well as their CRT scores)? I would be up for doing that—shoot me an email if you would like to collaborate on such a study.
Hey Justin and Brian,
Sorry for taking so long to get back to y’all. I had a mess of grading to do that occupied way too much of my time over the weekend. But regardless, I’m really enjoying the discussion so far and I have enjoyed reading both of your papers and comments; there’s a lot to think about here!
So, I think that I am finally starting to see just how deeply my worry about the use of the term ‘the folk’ really runs. I agree with both of you that philosophers are likely to have developed skills and strategies that allow them to engage with some thought experimental content in ways that most non-philosophers have not. But these skills and strategies come from somewhere—in fact, my guess is that many of us ended up as philosophers because we had developed many of these skills before we ever had a chance to ‘do philosophy’. In many cases, I would hypothesize, these skills and strategies are built up through a process of tweaking, modifying, and manipulating the reflexive commonsense strategies for making judgments that we all share. Of course, as Brian notes, “studying philosophy often gives us vested interests in the truth of certain views, which studies show time and time again affects judgments”. I agree completely—and I think that it’s often to our detriment (e.g., where we reify minds, qualia, beliefs, etc.). But even where this is the case, it’s not obvious to me that there is anything here that warrants drawing a distinction between philosopher and ‘folk’. By what right do we posit a real distinction between philosophers and non-philosophers unless we have explicit experimental data showing a difference in judgments on a particular task?
I take it that experimental philosophy is supposed to be in the business of working out the sorts of cognitive strategies that people tend to employ in making judgments about, and evaluating philosophically significant cases. But if this is what is supposed to be going on, then it’s an empirical question whether there is a ‘folk intuition’ and a ‘philosophical intuition’. Once things are seen in this light, it seems that a couple of studies (or even a dozen studies) looking only at WEIRDOs, college undergraduates, and random people who happen to land on a website does not a ‘folk’ make. Of course, you can run studies explicitly comparing the intuitions of philosophers and non-philosophers (Justin and Edouard have, and with interesting effects); but if you do this, then you have obviated the need for the use of the term ‘the folk’. Where you are looking at differences between philosophers and non-philosophers, you can—and should—just use those terms. They are more precise and they are licensed by your data.
One more point. There are likely to be many cases where the people that are polled in these experiments also have a vested interest in the truth of some experimental probe. Moreover, it is likely that where this happens it is not for any reason that implicates them being members of ‘the folk’. In fact, in a lot of cases, it doesn’t seem to matter what you think you are testing; participants in these simple (and often fairly transparent) experiments are going to read the probe in light of what matters to them. I’ve had people tell me that I was trying to look at judgments about abortion and euthanasia on experiments where that couldn’t be further from the truth; where I’ve done experiments looking at ascriptions of corporate mental states, I’ve had people tell me that they ‘knew’ that I was looking at economic ideologies; and finally, when I’ve looked at mental state ascriptions to robots and cyborgs, I’ve had some people who were witty enough to see it as an experiment on dehumanization (which it wasn’t).
So, my plea to avoid using the term ‘the folk’ is grounded in a desire for more precision and care with the results of these experiments. If the data reveal a systematic difference in the judgments of two groups, say which groups they are. If you want to infer to a larger group, make sure to note where your sample fails to license a claim about human minds more generally. My guess is that in these sorts of explicit judgment tasks, we are not going to recover a set of underlying strategies that are common to all of the non-philosophers. If there were such underlying structures, we wouldn’t find whopping standard deviations of between 1 and 2 points on a 7-point scale (which are quite common in judgment tasks like these). There is a lot of variance in our participants, and at best we are uncovering statistically reliable differences between a pair of target groups.
“So, my plea to avoid using the term ‘the folk’ is grounded in a desire for more precision and care with the results of these experiments. If the data reveal a systematic difference in the judgments of two groups, say which groups they are. If you want to infer to a larger group, make sure to note where your sample fails to license a claim about human minds more generally.”
I agree with much of what you say here, Bryce. But I also think that this bit in particular is easier said than done: The problem is that in general we are going to be collecting data on samples to infer something about a larger group or groups. Groups can be constructed in all sorts of ways, however, and for the sorts of tasks at issue we are also going to expect both noise and individual differences that don’t correspond with a theoretically interesting group. I think that part of the solution, though, is to aim to get wider samples (I am quite skeptical about using undergrads in intro philosophy classes), ask more demographic questions, and give further tests to get information about other factors that might be relevant (CRT, personality inventories, and so on). Having been collecting data online and getting participants through Google for a while now, I find that I am much less skeptical about how this data will generalize than for paper studies on undergrads. And part of that is that I get much more information about participants, so that I can check to see whether SES makes a difference, or education, or specifically education in philosophy or psychology, or religiosity, or political affiliation, or scores on a personality inventory, or CRT, or age, race, gender, and so on. Often these factors don’t appear to make much of a difference for the issues that I have been exploring (although philosophical education often does), but sometimes they do (especially SES and education, from what I have seen).
I wonder how much of the talk of “the folk” is simply a remnant of people in x-phi dealing with questions that the prior literature has discussed in terms of the folk theory of X, or folk intuitions, and so on? This seems especially true for work in experimental philosophy of mind, but other areas as well. I have been working on some questions about causal judgments recently and the literature tends to use “folk” a great deal (although it is sometimes unclear whether a philosopher like Lewis, say, means this to describe a group so much as something about an intuition – that it is a naïve or pretheoretical intuition). In this area, as in others, though, we find that “ordinary people” are simply not of one mind: Certain dominant patterns of response emerge in the studies (which is interesting), but not without minority dissent of different types (which is also interesting).
Regarding your earlier post stating, “So, while the back-and-forth concerning Premise 1 perhaps gives the impression that we are arguing that our probes mainly elicited System Two judgments, that is not what we are up to. Rather, we hold that System One judgments are the right sort of judgments to be testing for the question we are interested in, although it is at best unclear that System Two judgments generally differ from System One judgments for this case anyway.”
I agree that the discussion of System Two might be leading us astray. I guess where we disagree is about whether or not System One is a good source of data for your project. The point of my original paper is, in part, that System One judgments about others’ mental states are not the right sort of judgments for your project. Originally I argue that this is because qualia are not likely to play a role in those judgments, even if they exist. You argue that they can play a role if they exist, since there is a means by which they can be associated with mental states. My response to that was (and is) to say that even so they don’t make a difference in these judgments – System One is going to produce the same judgments about others’ mental states whether or not qualia exist.
It looks like we agree to a large extent. But not entirely; you say, “it’s not obvious to me that there is anything here that warrants drawing a distinction between philosopher and ‘folk’. By what right do we posit a real distinction between philosophers and non-philosophers unless we have explicit experimental data showing a difference in judgments on a particular task?” I take this as implying that we don’t have a right to draw this distinction unless we have explicit data on particular tasks. I don’t entirely agree.
It may be because we disagree on the point of experimental philosophy. You say, “I take it that experimental philosophy is supposed to be in the business of working out the sorts of cognitive strategies that people tend to employ in making judgments about, and evaluating philosophically significant cases.” I’m not sure whether you mean this is the main or only business of experimental philosophy, or just one of the things we do. I certainly don’t think it should be the main point of experimental philosophy. I’m very interested in understanding people’s cognitive strategies, partly for ameliorative reasons (knowing how people (myself included) tend to think poorly helps us figure out how to think well). But I’m also very interested in what have been called “extramentalist” projects – studying stuff outside our heads. I think intuitions can be helpful for this. I guess that’s one of the reasons I like Sytsma and Machery’s paper so much – it’s an attempt to use the intuitions of non-philosophers to do more than conceptual analysis.
Given that, I think we are justified in drawing some lines between philosophers and non-philosophers without specific data on differences in reactions to specific cases. This is because we have significant empirical support for the general claim that certain circumstances bias intuitions (unconsciously – I do not doubt our sincerity or interest in the truth), and that philosophers are quite likely to find themselves in those circumstances. This would be irrelevant if all we wanted to do was learn how the mind works. But if we want to get at stuff that goes beyond this, and I do, then it is relevant. It means that we should be at least somewhat dubious of philosophers’ intuitions, and it gives us a reason to be interested in the intuitions of non-philosophers, just because they are non-philosophers.
Now, I agree with you that many non-philosophers will have biases of various sorts. But I also agree with Justin that this often will just give us noisy data, which is not too much of a problem. It is something we should watch out for, though. One problem that comes up is that, as you suggest, non-philosophers can sometimes SYSTEMATICALLY misinterpret the prompts we give them, and we aren’t careful enough about this.
More on this later – I have to go teach (please forgive any typos, since I realize I don’t have time to proofread now).
I am still having a hard time understanding why you think that the judgments provided by participants in these experiments are System-1 judgments. Do you think that there are evolutionary pressures of some sort that support positing System-1 processes dedicated to the ascription of mental states to non-human entities? Is there some other reason to suppose that we are relying on an implicit theory of mentality that is sensitive to features of our environment to which we must rapidly adjust if we are to be successful in coping with our environment? Or is there some other reason that I’m missing for thinking that System-1 judgments are likely to be at play here?
I think that the reason I’m having such a hard time seeing this might be as follows. It just doesn’t seem plausible to me to think that there are the right sorts of pressures from ‘hostile environments’ to justify the claim that judgments about the mental lives of non-human entities are likely to be governed by a System-1 process. I take it that this is the really interesting result of the research that has been carried out by Sherry Turkle and her colleagues (or, at a more familiar level, the interesting fact about the way in which a film like Blade Runner can manipulate our judgments about the sorts of mental states that a non-human entity can be in). These judgments seem to be grounded in a massive network of feedback relations that constantly allow us to update and revise any initial judgments that we might have made about a particular entity.
When you throw a survey experiment at people, the computations that are responsible for their response move on glacial time scales! I would be shocked and amazed to find that people weren’t recalibrating their judgments at every stage of the process (reading the prompt; making an initial ‘anchor-ish’ judgment; ‘adjusting’ against previous judgments until something plausible is spit out; etc). I just don’t see why we should start from the assumption that there is a System-1 judgment that is being revealed by these cases.
I’m perfectly happy to concede that I might be missing something here. But your argument in the paper went by really fast and I have a hard time knowing why I should see these judgments as governed by System-1 mechanisms. Can you give me a fuller version of your argument for this claim? If you have an argument for this claim, I would love to hear it! (By the way, thanks for making me think about this stuff…it’s great fun!).
Your last comment brings up a number of things that I thought were really interesting in your original comments on my paper, and I’ll address those here as well to an extent.
So, why think that these judgments are System 1 judgments? First, I should make it clear that I think System 1 is domain general – it makes judgments about all sorts of domains. And a great deal of the judgments it makes are the product of learning. So it is not a process that evolved in order to make judgments about mental states; rather, it evolved to make judgments, and mental states are just one of the things it ends up being able to make judgments about. (Supporting this goes way beyond what I can do in these comments, unfortunately.)
On the view of System 1 I endorse, whenever we encounter a stimulus that is similar to things we have encountered in the past, it automatically activates some associations. This is what produces System 1 judgments. Because this is automatic, System 1 is making judgments about all sorts of things all the time. This, I should add, isn’t just my view; it’s one put forth by a number of researchers in this area.
Given that, System 1 is going to make judgments about Justin’s cases. The question is, do subjects report those judgments? Do they suppress them and report pure System 2 judgments (as Justin and I have been arguing about a bit)? Or do they take those System 1 judgments and nudge them one way or another consciously, as you suggest they do?
You’ve observed some subjects making these judgments, so you have at least some evidence that people do SOME conscious reasoning on this matter. What you say, both in your most recent comments and also in your original response to my paper, suggests that subjects start with their intuitive (System 1) judgments and adjust them somewhat after reasoning a bit. If they do this in a systematic manner (that is, if a large percentage adjust in the same way), this is very interesting. It’s also empirically study-able – we can elicit pure System 1 judgments (there are a number of well-studied methods for doing this) and compare them with those we allow to be moderated. At the current moment, however, I don’t see any particular reason to think that these are moderated in a systematic manner, since, as I’ve been arguing here, I don’t see reason to think that people have easy access to a shared method for thinking about these issues. It seems like you might be sympathetic to the claim that subjects don’t all approach these in the same way, as your earlier comment about standard deviations suggests. If these judgments aren’t systematically altered, then any alteration of them will be noise in the data and, given a big enough sample, will wash out; what is left will be what subjects share: their System 1 judgments.
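The point that unsystematic adjustments wash out in a large sample, while systematic adjustments shift the aggregate, can be illustrated with a toy simulation. Everything here is a hypothetical assumption for illustration – the shared base judgment of 4.0 on a 7-point scale, Gaussian adjustments, and the size of the systematic shift are made up, not drawn from the actual studies:

```python
import random

random.seed(0)

def mean_response(n, systematic_shift=0.0, noise_sd=1.0):
    """Simulate n participants on a 7-point scale. Each starts from a shared
    (hypothetical) System 1 judgment of 4.0 and adjusts it: either by
    unsystematic Gaussian noise alone, or by noise plus a shared shift that
    stands in for systematic System 2 moderation."""
    base = 4.0
    responses = []
    for _ in range(n):
        r = base + systematic_shift + random.gauss(0, noise_sd)
        responses.append(min(7.0, max(1.0, r)))  # clamp to the 1-7 scale
    return sum(responses) / len(responses)

# Unsystematic adjustment only: the sample mean stays near the shared base.
unsystematic = mean_response(100_000)

# Systematic adjustment: the same noise, but everyone also nudges upward,
# so the sample mean is pulled away from the shared System 1 base.
systematic = mean_response(100_000, systematic_shift=1.0)
```

With a large enough sample, the unsystematic condition recovers the shared baseline, while the systematic condition does not – which is why only systematic moderation would prevent the aggregate data from reflecting the shared System 1 judgment.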
If subjects’ responses are, on the other hand, based on systematic alterations of System 1 judgments, then the question remains: are we interested in the System 1 judgments they started with? Are we interested in the System 2 based alterations? Either way, I think we have some reasons to be concerned about using this data to tell us about qualia.
“I guess where we disagree is about whether or not System One is a good source of data for your project. The point of my original paper is, in part, that System One judgments about others’ mental states are not the right sort of judgments for your project. Originally I argue that this is because qualia are not likely to play a role in those judgments, even if they exist. You argue that they can play a role if they exist, since there is a means by which they can be associated with mental states. My response to that was (and is) to say that even so they don’t make a difference in these judgments – System One is going to produce the same judgments about others’ mental states whether or not qualia exist.”
I don’t think that this is quite right. The issue is not (directly) whether qualia exist, but whether claims that qualia are obvious provide good support for the claim that they exist. I think that if qualia are only “obvious” if you employ System Two judgments (especially “high effort System Two” judgments), then the obviousness claims do not provide much if any support: On their own, the claims would only be good support if qualia are pretheoretically obvious, or obvious just in undergoing ordinary perceptual episodes; but, that suggests that they should play a role in System One judgments (or perhaps low effort System Two judgments—I’m not fully sure what this amounts to). Assuming that the judgments that we elicited are by and large System One judgments, then we seem to have produced evidence that qualia are not obvious in the relevant way.
Maybe I can make this clearer by co-opting your argument. Let me phrase our argument in a way that makes use of your claim that “qualia are not likely to play a role in [System One] judgments, even if they exist”:
P1. If the claim that qualia are obvious provides good support for the claim that qualia exist, then qualia should play a role in System One judgments.
P2. “Qualia are not likely to play a role in [System One] judgments, even if they exist.”
C. The claim that qualia are obvious is not likely to provide good support for the claim that qualia exist.
And, in fact, our data then supports a stronger conclusion, dropping the “not likely”: The claim that qualia are obvious does not provide good support for the claim that qualia exist.
“It means that we should be at least somewhat dubious of philosophers’ intuitions, and it gives us a reason to be interested in the intuitions of non-philosophers, just because they are non-philosophers.”
I think that this is right, and one of the biases we should be worried about is training with regard to how to think about certain types of cases. To take the example of phenomenal consciousness, I think that philosophers are often trained to think about certain mental states in a way that makes qualia seem obvious to them; but, we want to know whether qualia really are obvious in a philosophically interesting way or whether their seeming obvious to some philosophers just reflects their philosophical training. Put another way, we want to know whether qualia are pretheoretically obvious as opposed to just seeming obvious given certain theoretical commitments.
I was a bit sloppy in my earlier comment that you quote. My point was that, even if qualia exist *and are obvious*, we shouldn’t expect this to make a difference to System One judgments about others’ mental states. So, even if your data is about System One judgments, it doesn’t give us insight into whether or not qualia are obvious.
However, that ultimately might not undermine the argument you now seem to be making: System One and low effort System Two judgments provide no support for the claim that qualia are obvious, not because these judgments are inconsistent with the claim that qualia are obvious, but because we get the same judgments whether or not they are obvious. So why think qualia are obvious? Perhaps because of data from high effort System Two judgments, but these are too theoretically laden – even if they seem to distinguish between qualitative and non-qualitative mental states, this does not show that qualia are obvious, but only that we have some theoretical commitments to their existence.
The more I think about this argument, the better it seems to me. But it does overlook the possibility that System One judgments about our own mental states make use of the qualitative/non-qualitative distinction. Nothing in my original paper rules out this possibility (although I do point out some challenges for studying it), and we don’t have any data directly on this point. I guess the other weakness of the argument is that it is probably only a burden shifting argument. There is at least the epistemic possibility of someone arguing that high effort System Two cognition is required to detect qualia even though qualia are obvious in a certain sense; I remain open to the possibility that some aspect of our experience can be too obvious for us to easily notice.
That diagram does seem right to me.
About System One judgments of our own mental states: the difference here between our own mental states and those of others is that, if there are qualia, we are directly aware of our own qualia but not those of others. So we should expect a strong association between qualia and our own qualitative mental states, stronger than is likely between qualia and others’ mental states. In addition, if qualia are associated with our own mental states, this association is more direct than that between qualia and others’ mental states, since it comes from experiencing them together over and over, whereas the association between qualia and others’ mental states would be there because others’ behavior causes us to think about what mental states would cause that in us, which causes us to think about qualia. This is why I don’t think qualia make a difference in judgments of others. But they might make a difference in self judgments. Unlike judgments about others, where external features trigger qualia associations which trigger the ascription, it seems that qualia themselves might trigger ascriptions of mental states to ourselves. So, the absence of qualia might prevent someone from ascribing a qualitative mental state to themselves, since System One is in a position to be aware of that absence.
Now, this is not uncontroversial; at least some people think that we ascertain our own mental states in much the same way that we ascertain those of others. But if you do think that the first person perspective is different, then these judgments might be interesting. Since they likely avoid the problem I have with System One judgments of others, if they do not track the qualitative/non-qualitative distinction, they generate more evidence for your view.
I have to admit that this entire thread is a lot to take in all at once, but this last bit in Brian’s last response piqued my interest. When you say that ‘we should expect a strong association between qualia and our own qualitative mental states, stronger than is likely between qualia and others’ mental states,’ it seems you are biasing this toward philosophers and not the general folk. The folk, as they have been painted in this discussion and in Brian’s paper, are those who are unaware, generally speaking, of qualia altogether. If we need them (and by association ourselves) to be aware of qualia in themselves/ourselves, it feels as though there need to be some HOTs (higher-order thoughts) in there. Is that what you would consider System Two to be in this case?