Chair: Richard Brown
Presenter: Derek Ball, Arché, The University of St Andrews
Derek’s paper
Commentator 1: David Papineau, King's College London
Commentator 2: James Dow, The Graduate Center, CUNY
James's paper or a larger version of his video
27 responses to “The New New Mysterianism”
Two issues:
A. If we accept the idea that a truth's appearing to us, now, as not even in principle deducible from some other truths does not entail that it really is not in principle deducible, then we get a widespread skepticism about knowing which truths are independent of which. For instance, it appears to us now that "There are x nuclear warheads in the world in 2009" is not in principle deducible from the set of truths about the spatial distribution of blue on the walls of New York. Yet we do not say that this appearance is consistent with the in-principle deducibility of the former from the latter.
B. The analogy with the halting problem is a bit problematic, in my view. What the undecidability thesis says is that a general algorithm to solve the halting problem for ALL possible program-input pairs cannot exist; it does not say that there are no program-input pairs for which whether the machine halts or not is decidable. For simple enough inputs the problem can be solved, I suppose (I'm not an expert!).
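To make the contrast vivid, here is a minimal Python sketch of the standard diagonalization, together with a sound-but-partial decider for an easy special case (all names here are illustrative, nothing from Derek's paper):

```python
import ast

def halts(f, x):
    # Hypothetical general decider: True iff f(x) eventually halts.
    # Turing's theorem is that no total, always-correct version can exist.
    raise NotImplementedError

def diagonal(f):
    # Do the opposite of whatever `halts` predicts about f run on itself.
    if halts(f, f):
        while True:
            pass  # loop forever if f(f) was predicted to halt
    # otherwise halt immediately

# diagonal(diagonal) halts iff halts(diagonal, diagonal) says it doesn't,
# so any total `halts` must be wrong somewhere. That is compatible with
# easy special cases being decidable, e.g. straight-line code must halt:

def surely_halts(src: str) -> bool:
    # Sound but partial: True only when halting is syntactically obvious
    # (no loops, no calls); False just means "no verdict".
    tree = ast.parse(src)
    risky = (ast.While, ast.For, ast.AsyncFor, ast.Call, ast.FunctionDef)
    return not any(isinstance(node, risky) for node in ast.walk(tree))

print(surely_halts("x = 1 + 2"))         # True
print(surely_halts("while True: pass"))  # False (no verdict)
```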
If the analogy between CONSCIOUSNESS and HALT were to hold, the Turing result would be paraphrased as the thesis that there is no way to deduce consciousness-truths (e.g., whether or not there is a conscious state) from all possible sets of physical truths, which would be consistent with the thesis that for some sets of physical truths we could deduce such consciousness-truths. Then, for instance, the thesis would say that we can deduce whether bacteria are conscious, but not whether any random physical system is or is not conscious. But the point of the dualist, for better or for worse, is that even in the case of bacteria one can't deduce such consciousness-related truths.
I’d like to thank David and James for their interesting and helpful commentaries. I have a few responses to each of them; I’ll start with David.
David makes two main criticisms of my paper. The first is that it’s not clear that facts that aren’t effectively computable are knowable on the basis of reasoning at all. My reply is that there are relatively well-understood notional computers that are more powerful than Turing machines: Turing’s oracle machines, analog computers, machines that can carry out infinitely many computational steps in finite time, etc. If a thinker could emulate one of these machines, she could reason to conclusions that are not effectively computable. I do not see why this sort of reasoning should not count as ‘apriori’ in the sense relevant to the dualist’s argument.
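Here is a schematic Python sketch of what emulating such a machine would amount to. The `HaltingOracle` type and function names are purely illustrative, and of course no genuine halting oracle can be implemented in ordinary code, which is the whole point:

```python
from typing import Callable

# An oracle machine, schematically: ordinary computation plus a black-box
# resource that the computation could not implement for itself.
HaltingOracle = Callable[[str, str], bool]  # (program_source, input) -> halts?

def same_halting_behaviour(p1: str, p2: str, inputs: list[str],
                           oracle: HaltingOracle) -> bool:
    # Whether two programs halt on exactly the same test inputs: not
    # effectively computable in general, but routine for a reasoner
    # handed a halting oracle.
    return all(oracle(p1, x) == oracle(p2, x) for x in inputs)
```

The surrounding reasoning is completely ordinary; all the extra power lives in the injected resource. That is the sort of thing I mean by a qualitative difference in reasoning ability.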
David’s second point is that some dualists will hold that the concepts we use to think about our own phenomenal states allow us special, direct access to those states, such that we should just be able to see the facts about consciousness in the physical facts, if in fact they are physical. This sort of argument presupposes a theory of the nature of concepts that I do not accept, but discussing it in detail would take me too far afield. Here is a more direct reply: plausibly, everyone has to admit that if you’re presented with a list of the microphysical facts, you might have to do some reasoning in order to be able to see the facts about consciousness. But if some reasoning might be required, then it seems to me that my style of response will be possible.
James makes several points. Some of them overlap with David's, but some of his worries deserve further comment. James claims that the argument depends on an analogy between the explanatory gap and the halting problem, and points to a number of ways in which the problems are disanalogous. James is probably right about the disanalogies. But I think that's okay, because I don't think I need a very strong analogy. The point is just this: what you can infer from a given set of premises depends on the resources available to your reasoning. So we might not be able to infer conclusions about consciousness from premises about microphysics because we don't have the right resources (even though someone else with greater resources could do so). The halting problem is just a vivid example of this phenomenon; the same basic point could be made using (say) the Godel results.
James also suggests that a dualist might object that undergoing experiences is enough to put one in a position to know the facts about consciousness. I am happy to grant this. What I take to be at issue in the Chalmers-style argument is whether we can deduce the phenomenal facts from the microphysical facts. It might be that we know the relevant microphysical facts and the consciousness facts, but can't see how to reason our way from the former to the latter. This would be enough to get the dualist argument going; the fact that we can come to know the facts about consciousness in other ways isn't obviously relevant.
Hi Istvan,
A. I think that everyone should accept the principle that you mention: it's just generally the case that the appearance of p does not *entail* the reality of p. This had better not lead to skepticism; if it does, I think that we will be forced into skepticism about the external world, other minds, etc.
I agree that in most cases where we have apparent failure of in principle deducibility, we have actual failure; appearances are a reasonably good guide to reality. But many of us have found ourselves in a position where we are faced with a lot of evidence for physicalism, and also the explanatory gap. In this situation, it makes sense to consider the possibility that appearances are leading us astray.
B. I want to stress again that I am not depending on an analogy bearing much substantial weight here. The point is just that different thinkers might have qualitatively different reasoning abilities. If there is even a single value of the halting function that a Turing machine can’t compute, that is enough to make the necessary point.
Derek:
I don't use "it appears to" in the sense of appearance as in "appearance versus reality", but in the sense of "it is the case". So replace it with "it is the case that", and you get my objection, which is: if you were right that we are not justified in thinking that consciousness-truths are not in principle derivable, because all we can assert is that they are not derivable by us, then, by parity of reasoning, one should be taken seriously in saying that the fact that "George Bush is American" is not derivable by us from "King Diamond is a singer" is not enough to conclude that it is not derivable simpliciter, because it might be derivable by some being with superior cognitive capacities.
More generally, we are never justified in our beliefs to the effect that some truths are independent from others, because we can always appeal to the assertion that our cognitive capacities are limited and argue that those truths might be conceptually related by some super-reasoner. So, I guess, the question would be: how do we know today that some truth is independent from another, if knowledge of such independence requires, according to you, knowledge about what a being with capacities superior to ours and inaccessible today by us can deduce?
Hi Derek,
I understood your main idea as follows: It is possible that some truths (e.g. the halting truths) are in principle a priori deducible from the relevant basic truths, but that some thinkers may not be able to do this (e.g. those who function like Turing machines). Then, you make the point that we do not know from the armchair that we are not cognitively limited like that, e.g. that we have the cognitive ability to a priori deduce the truths about consciousness from the microphysical truths. Finally, you take this to cast doubt on the anti-physicalist claim that the non-deducibility of truths about consciousness shows that physicalism is false.
Now, here are my two worries about your argument:
(1) Anti-physicalists like Chalmers seem to advance quite specific arguments against the deducibility of truths about consciousness from microphysical truths, like the zombie argument. Now, what I don’t understand is how your very general point (that there may be certain higher level truths that we cannot deduce from the basic truths because of cognitive limitations) really addresses these very consciousness-specific claims about non-deducibility. But maybe you can clear that up rather easily…
(2) Does the anti-physicalist have to show, from her armchair, that we actually have the relevant cognitive abilities in order for her to be entitled to her anti-physicalist conclusion? Why couldn’t she rather take it as the default position that we are able to perform the relevant deductions unless there is some concrete defeater to that claim? I’m not sure that the abstract possibility that you raise, with your example of the halting truths, already counts as a defeater to that default position – just as the abstract possibility of an evil demon does not, I think, already defeat our perceptual entitlement. Put shortly, my second worry is that there may be some tacit epistemological assumption in your argument that leads to unwelcome skeptical consequences.
Dear Derek,
First, thank you for your response. I'm glad you agree with Istvan and me that there are relevant problems with the analogies you employ in the argument. You seem willing to do away with the analogy. But without the analogy, or with a different analogy, your argument changes. So, let's try to discuss the new argument.
Let's start with a different analogy to what you call "the Godel results." Which ones? Do you mean that Godel's first incompleteness theorem and second incompleteness theorem are equally up to the task in your argument? How so? The first incompleteness theorem says that any formal system S that is omega-consistent, has recursively definable axioms and rules, and defines every recursive relation is incomplete. The second incompleteness theorem says that, assuming a system is consistent, there is a formula of that system that can be interpreted as asserting its consistency but is not a theorem. Feel free to show me how the explanatory gap is accounted for in terms of these Godel results or other Godel results. I don't see the analogy.
Two sticking points, in the form of questions: (w1): Why should results in formal systems have import for problems such as the explanatory gap? (Is physics a formal system?) Don't you at least need that analogy to work? (w2): What qualitative change in reasoning would enable one either to infer (x)(Fx) or ~(x)(Fx) (first theorem) or to make the interpreted consistency formula into a theorem (second theorem)?
Now, to your newly expressed point, “what you can infer from a given set of premises depends on the resources available to your reasoning. So we might not be able to infer conclusions about consciousness from premises about microphysics because we don’t have the right resources (even though someone else with greater resources could do so.)”
You might be using "resources available to your reasoning" to mean conceptual competence, or you might mean the kind or quality of reasoning. As I mentioned in my comment, as far as I can tell, the anti-physicalists diagnose the origin of the explanatory gap by appeal to a lack of conceptual resources (I will clarify this in another post…). But you are diagnosing the problem by appeal to a failure to ascend to a certain qualitative level of reasoning ability.
I'm not sure what you mean here. For the sake of discussion, let's say there are different levels of reasoning, say a priori and a posteriori, and let's say this difference of levels is a qualitative difference. Now, when you explain in your original paper how your position differs from McGinn's, you suggest that McGinn's view is that there are truths we cannot know and that our inability to bridge the gap is permanent. Putting aside conceptual competence, you suggest that time and memory might allow us to bridge the gap. Those are quantitative differences, not qualitative differences, right? I know that you mention qualitative differences when discussing hypercomputers, but then we're back to analogy issues, because you haven't said what the qualitative differences would be in our case.
So, finally, a question: What is the qualitative difference that allows us to change our reasoning abilities, from merely a priori human reasoning to something else, the meta-apriori meta-human meta-reasoning? My worry is that you seem to suggest we need to reason like God(s) in order to solve the explanatory gap. Seems like a permanent gap to me. (A related comment: in your reply to David, you suggest that the reasoning would still be a priori reasoning. But then what makes the qualitative difference?)
Also, an editorial comment: in the paper you distinguish between in principle and something like "in psychological fact," but when you say, "but not bridgeable with the tools we have, not even in principle," you are introducing a new "in principle" distinction, between bridgeable-in-principle-in-psychological-fact and bridgeable-in-psychological-fact (with the tools we have). That may shed light on my above question.
Best,
James
Hi Istvan,
Again, I agree that in general when some truths are independent of others, we are in a position to know this; and I don't hold that such knowledge requires "knowledge about what a being with capacities superior to ours and inaccessible today by us can deduce". But consciousness is a special case. Many of us are convinced that there are excellent reasons to think that physicalism is true. Then we are presented with a Chalmers-style argument of the sort that I discuss. It's reasonable in this case to look for a place where the argument might go wrong. I am pointing to such a place. (Also see my second response to Joachim, below.)
Hi Joachim,
Let me take each of your points in turn:
(1) I think that the sort of specific considerations you have in mind don’t typically bear on the question of whether the facts about consciousness are armchair deducible in principle. For example, I’m happy to grant that *we* can’t rule zombies out by armchair reasoning; I’m even happy to grant that they are “secunda facie positively conceivable” (as Chalmers might say). But I would want to insist that we have good reason to believe that they can be ruled out in principle. And I don’t see that Chalmers has given us any (non-question-begging) reason to doubt this.
(2) This concern seems closely related to Istvan's worry, but let me add something to what I wrote to him. I'm assuming that the physicalist has some good reasons to hold her position. If this is right, then there is some reason to hold that something must be wrong with an argument for dualism. So maybe we are in general entitled to the assumption that if we can't deduce p from q, no one can. But in this case, we have reason to think that someone can (again, given that we have good reason to think that physicalism is true and that physicalism requires armchair deducibility).
In short, I agree that the abstract possibility of inaccessible truths would not be enough to undermine the argument if there were no independent reason to believe in physicalism. But this does not seem to me to be the dialectical situation in which we find ourselves.
Hi James,
Thanks for the detailed response. Just to be clear, I don’t think that the disanalogies raised before impact the argument in any significant way, and I certainly didn’t intend to be advancing a new or different argument in my comment. But let me try to answer some of your questions.
It might help to recap some points from the paper. Turing intended the Turing machine to formalize a certain sort of reasoning that a human could undertake: reasoning according to a fixed set of rules. Now we know that if we were limited to such reasoning, there would be certain problems that we could not solve. It may be that we are not limited to such reasoning. But plausibly, there is still some limit to our reasoning power. If so, there will be some problems that we cannot solve.
Similarly, Godel proved that if one is limited to certain axioms, there will be some claims that one can neither prove nor refute. Thus if one were limited to the axioms of Peano arithmetic in one’s arithmetical reasoning, there would be certain claims that one could not deduce. (Although of course, any given claim (though not every claim at once) might be deducible given further axioms.)
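For reference, the version of the result I am relying on can be stated as follows (the Godel–Rosser form, which needs only consistency rather than omega-consistency):

```latex
\text{If } T \supseteq \mathsf{PA} \text{ is consistent and recursively
axiomatizable, then there is a sentence } G_T \text{ such that}
\quad T \nvdash G_T \quad \text{and} \quad T \nvdash \lnot G_T .
```

Since T + G_T is again consistent and recursively axiomatizable, the extended theory has its own undecidable sentence, which is why adding axioms settles particular claims without ever settling all of them.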
Now the explanatory gap (at least as Jackson and Chalmers present it) poses an analogous problem. We are presented with a set of premises expressing the truths of microphysics. We are invited to deduce on this basis the truths about consciousness. Chalmers claims that we couldn’t do so. Suppose he’s right. Now I am just claiming that there are other cases in which one can’t perform a certain deduction given a certain set of premises or a certain computational mechanism, even though a reasoner with some other premises or other mechanism could do it. (This is why formal systems are relevant – we can prove that certain problems are insoluble given certain resources.) So we shouldn’t be too quick in drawing ontological conclusions from our own inability.
There are a few points in your remaining comment that I’m not sure that I understand, but let me try to say a few things. I agree that “resources available to one’s reasoning” is a tricky term, and I’m not sure how to make it really precise. What I mean is something like: whatever it is that would let you solve more problems on the basis of your reasoning. (The sort of thing that a hypercomputer has more of than a Turing machine.)
You write, “you suggest that time and memory might allow us to bridge the gap.” There is a sense in which this is true: perhaps given time we can increase our reasoning powers (this is the dissimilarity with McGinn). But there is another sense in which I explicitly deny it: given our current reasoning powers, it is possible that no amount of time or increase in memory would allow us to bridge the gap.
As for your final question: if I knew the answer to that, I would be a (qualitatively!) smarter man. There are a ton of different ways in which different notional computers do things differently (and more powerfully) than Turing machines, so we might have a lot of options. Moreover, it may be that we can't in fact increase our reasoning power in the relevant way; it's an abstract possibility that I am happy to leave open.
Derek,
Thank you for the talk.
This is really a follow-up on James’ last comment. I am not sure how your proposal is a new new mysterianism. Your argument is the following:
1) Accept that facts about consciousness are in principle deducible from physical facts.
2) Accept that human reasoners cannot deduce facts about consciousness from physical facts.
3) Explain why (2) holds by drawing an analogy between the limitations of a Turing machine and the limitations of resources, whatever they might be, of human reasoners.
Sans the analogy this is McGinn’s position, which is that the gap will never be bridged. However, to (3) you add:
4) Qualitative limitations on reasoning (can?) reduce to substantial quantitative limitations.
To motivate (4) you mention notional computers like those that operate analogically as opposed to digitally or those that can perform an infinite number of operations.
My problem is that I don't understand why you think that (4) is true. Furthermore, for the sake of argument, even if (4) were true, there is no reason to think that a substantial increase in quantitative reasoning would resolve the explanatory gap.
Some notional computers cannot exist so I will not discuss those. However, those that can be constructed (like the Oracle machine) and can therefore be useful to your argument face their own versions of the halting problem. So, at most you have shown that it is possible to move the explanatory gap one level further down the dialectic. It can be a problem that, like the halting problem for a Turing machine, can be solved by a special purpose machine like an Oracle machine. However, once such a solution is given, another explanatory gap will undoubtedly arise in the same way that another halting problem arises for an Oracle machine. So, it seems to me that there is nothing new about your new new mysterianism. At best, you have shown (by analogy) that the explanatory gap as it is stated in (1) and (2) can be resolved only to generate another. This kind of mysterianism seems to me to be the permanent kind that McGinn endorses.
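The recursion-theoretic fact behind this is that the halting problem relativizes: for any oracle A, the halting problem for A-oracle machines (the Turing jump A′) is undecidable by A-oracle machines, so enlarging one's resources always generates a new halting problem one level up:

```latex
\emptyset \;<_T\; \emptyset' \;<_T\; \emptyset'' \;<_T\; \cdots,
\qquad \text{where } A' = \{\, e : \text{the $e$-th $A$-oracle machine halts on input } e \,\}.
```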
Best,
Michal
Hi Everyone, very interesting discussion going on here!
Derek, I quite like the paper and am generally sympathetic with your position. I think this nicely illustrates a kind of response to those who object to my claim that it is possible that zombies only seem conceivable when they aren't…but I don't want to hijack this discussion, and I will leave that stuff to the discussion on my paper…
But even so I have a question that is possibly related to some of James’ questions and to Michal’s question as well.
You seem to be making the argument in terms of reasoning power, but at least in the case of the Godel example this doesn't seem to be so. The point there is that given a certain set of premises we cannot deduce some other truth. But, you claim, we could if we had access to a different set of premises. This doesn't seem like our coming to have more reasoning power.
For instance, take some reasoning system that is confined to first-order predicate logic. Then there will be modal truths that are not deducible for this system. So it may know ⊢ p but be unable to deduce []p. Now suppose we simply add the relevant axiom (i.e., ⊢ p → ⊢ []p). Now the system can deduce the relevant modal truths. But have its reasoning abilities really become more powerful? It seems like the system still has the same reasoning powers; it just has access to a new 'fact' and so can deduce new facts. But if this is a fair characterization of your view then the "sheep's clothing" that clothes your type-A physicalism is really wolf's clothing! I.e., this is quite obviously just type-A physicalism.
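Here is a toy sketch of the situation I have in mind (everything illustrative: `[]` stands for the modal box, and necessitation is capped at one application just so the closure stays finite):

```python
def close_under(axioms: set, rules) -> set:
    # Forward-chain: apply every rule to every theorem until nothing new.
    derived, changed = set(axioms), True
    while changed:
        changed = False
        for rule in rules:
            for t in list(derived):
                new = rule(t)
                if new is not None and new not in derived:
                    derived.add(new)
                    changed = True
    return derived

def necessitation(t: str):
    # From a theorem p, infer []p. (Real necessitation iterates; it is
    # capped at one box here only to keep the toy closure finite.)
    return "[]" + t if not t.startswith("[]") else None

weak   = close_under({"p"}, [])               # same engine, no modal rule
strong = close_under({"p"}, [necessitation])  # same engine, one extra rule

print("[]p" in weak)    # False: the connection is invisible to this system
print("[]p" in strong)  # True: yet the engine itself is unchanged
```

The engine is identical in both runs; the only difference is access to the extra rule, which is exactly why I doubt that this amounts to a qualitative increase in reasoning power.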
I guess your answer might be that insofar as you allow that it might take more than simply adding an axiom for us to close the gap, you are leaving the door open to a serious mysterianism. Is that fair?
Hi Derek,
So what you want to achieve with your argument is to expose a possible place where Chalmers-style anti-physicalist arguments might go wrong. What you do not do is establish that this is the place where they actually go wrong; you establish only a mere possibility of error for proponents of such arguments. You take this to be important because you think that there must be something wrong with these arguments, since there are excellent reasons in favor of physicalism (by the way: what are, in your view, those excellent reasons?). On the other hand, it is also possible that there is something wrong with what you take to be the best arguments for physicalism.

But how important is it, really, that some mere possibilities of error exist? Human arguments can always go wrong somewhere, so we already knew, by induction, that for almost any human argument A there is some possibility of error E. Unless we have reason to think that A is an instance of E, this does not weaken the force of A at all. So, unless you have some positive reason to think that the anti-physicalist arguments fail because of our cognitive limitations, it seems that you haven't weakened their force at all.

What we have, then, is a strong a priori argument against physicalism on the one hand and some a posteriori (?) considerations in favor of physicalism on the other. Now we have to weigh them against each other – and why can't the anti-physicalist simply say that there must be something wrong with the physicalist arguments, given the (undiminished) force of the anti-physicalist argument? Maybe there are simply more reasons in favor of physicalism, such that they outweigh the reasons against it. But this would have nothing to do with merely possible sources of error for either side. In short, I am still not quite sure what you actually achieve against anti-physicalism with your argument, except to indicate a mere possibility of error.
Hi Michal,
I’m happy to grant your point. I didn’t intend to claim that we could become able to solve any possible problem. But I don’t see how that’s relevant to the question of the explanatory gap about consciousness. Perhaps we could become able to close that gap (even if some other gap, not relevant to consciousness, would then arise).
Hi Richard,
Suppose some being just knew by brute apriori insight the relevant Jackson-Chalmers conditional that if PTI then Q. Trivially, such a being could deduce the truths about consciousness from the microphysical truths by apriori reasoning. It would be natural to describe this sort of case as its having access to an axiom that we don’t have. (Oracle machines are another natural analogue.) In my terminology, such a being would have qualitatively greater reasoning power than we do. So ‘adding an axiom’ can increase your reasoning power.
I’m not sure that this is a problem. It’s not obvious that a being like the one I described is possible. But in any case, there will be more interesting scenarios; scenarios where we know all the relevant facts, but just can’t see the rational connections between them.
Hi Joachim,
Just to be clear about what I take the dialectical situation to be: I am not trying to offer an argument against dualism. I take the dualist to be trying to offer an argument against physicalism. That means that the burden of proof is on the dualist. I have pointed out a relatively realistic scenario in which there is an explanatory gap despite the truth of physicalism. (I might add that there is actual evidence of extraordinarily computationally complex activity occurring in the brain.) It seems to me that the dualist’s argument is unconvincing unless she can give some sort of evidence that this scenario does not obtain.
It seems to me that if you grant that the considerations I adduce show that the anti-physicalist argument is invalid pending further empirical research, that is some victory for the physicalist. Whether some sort of mysterian view is correct is just an unresolved empirical question; I don’t see why the dualist should get to assume that it will be resolved in her favor.
Hi Derek, thanks for the response and sorry it has taken a while to get back to you.
I am not sure what the difference is between having access to a new axiom and knowing all the facts but not seeing the rational connections between them. Surely a reasoner might know ⊢ p and []p and yet not be able to deduce []p from p, because it lacks the axiom that states the connection.
But leaving that aside…I guess I just didn’t quite see how you were using ‘qualitative increase in reasoning power’, but now I do. So, the type A physicalist will undergo a qualitative increase in reasoning power when they learn that the Chalmers conditional is true.
But my question was about why you think this kind of view is a kind of mysterianism. You haven’t shown that creatures with our reasoning abilities can’t make the deductions in question. What you have shown is that if we are to make the deduction we need access to a fact that we don’t yet have. But this is just type A physicalism.
Hi Richard,
I agree that the sort of view I am proposing can be seen as a sort of type A physicalism. One way of seeing the upshot of the paper is that you can be a type A physicalist (by virtue of holding that the facts about consciousness are deducible in principle), while still holding that there is a substantive explanatory gap. (In other words, your point is an objection only if type A physicalism is incompatible with mysterianism. But I think that they are compatible.)
Part of your worry might be due to the fact that my framework doesn’t clearly distinguish between (i) coming to know (on the basis of brute apriori insight) that if PTI then Q; and (ii) gaining reasoning abilities that enable one to deduce Q from PTI. (ii) will put one in a position to know the conditional mentioned in (i), and (i) amounts to a way of accomplishing (ii). If (i) were the only way of accomplishing (ii), then my proposal wouldn’t be so different from McGinn’s. But (i) isn’t the only way to accomplish (ii); in fact, it isn’t really a very interesting or (for us) practical way. One can come to be able to solve certain problems in other ways than just learning new facts – the various hypercomputers show this.
I’d count Derek’s view, as it stands, as a version of type-C physicalism, not type-A physicalism. Type-A physicalism says that any epistemic gap is easily closable, whereas type-C physicalism says that the epistemic gap is not easily closable but is closable in the ideal limit. Note that a priori physicalism is not the same as type-A physicalism — it’s consistent with both type A and type C.
I think my case against type-C physicalism via the structure-function argument also applies to Derek’s view. The argument is roughly: (1) Physical accounts explain only structure and function, (2) Explaining structure and function doesn’t suffice to explain consciousness, so (3) Physical accounts don’t explain consciousness. Derek mentions this argument briefly and says it is question-begging, but the reason he gives is just that we can’t rule out the possibility that some physical account will explain consciousness. To rebut an argument one can’t just reject the conclusion — one has to give reason to reject one of the premises, or the transition from premises to conclusions, not the conclusion. I’m sure that Derek rejects one of these things (depending in part on whether he thinks that the concept of consciousness is functionally analyzable in principle or not), but it would be good to know which.
Derek also gives a parody argument with key premise "The physical truths explain at most whether a Turing machine halts after finitely many computations", but it's far from clear why we should accept this premise. I take it that there is an ambiguity in the explanandum here, depending on whether the relevant Turing machine is abstract or concrete. In his paper (section 2) Derek talks as if concrete machines are what matter. Given that this is what's at issue, one can then ask whether the physical world is finite or infinite in time. If it's infinite, then I take it that there's very little reason to accept the premise. And if it's finite, then no concrete machine runs forever without halting — at best we have a distinction between those that halt and those whose functioning terminates for some other reason (e.g. the big crunch), and that distinction can be physically explained. If abstract machines are intended, on the other hand, then physical truths appear to be irrelevant to the explanation of the halting truths. This stands strongly in contrast to the physicalist's view of consciousness and renders the analogy inapplicable.
I think the type-C strategy is an interesting strategy that's well worth pursuing (my colleague Daniel Stoljar also attempts this in his recent book Ignorance and Imagination). Certainly it's an important spot in philosophical space. But I think that more needs to be said to flesh out the view in such a way that gives reason to reject the prima facie arguments against it.
Hi Derek, so does new new mysterianism really just boil down to complete agnosticism about our ability to discover the metaphysical status of the mind?
Hi Dave, no doubt Derek has his own reply, but the structure/function stuff is relevant to some of the comments you made to me and I can't help but respond.
It doesn’t look, to me, as though what Derek is doing is trying to show that the conclusion is false. Rather it looks like what he is doing is presenting a counter-example to the conditional which entails (2). The conditional is something like C
C: If there is a gap between concepts of a structural/functional kind and concepts of a phenomenal kind, then explaining structure and function doesn't suffice to explain consciousness.
Derek tries to show that this is false because the physicalist can admit that there is a gap but deny that explaining structure and function doesn't suffice to explain consciousness. Structure and function may suffice for explaining consciousness, and yet it might be the case that with the reasoning abilities we have now we just can't see how it suffices. This shows how premise (2) of the structure/function argument might be false, and so shows that the structure/function argument might be unsound. And what's more, if this were true we wouldn't be able to tell unless we had qualitatively different reasoning abilities, and so no a priori argument can show whether physicalism is true or not.
No, I think that plausibly we already do know the metaphysical status of the mind. (Deducing the mental facts from the microphysical facts is not the only way to come to know this.) The point is that from the armchair, there would be reason for agnosticism about our ability to deduce the truths about consciousness from the microphysical truths. (So the argument from armchair deducibility doesn’t cut much ice.)
Thanks, Dave. I basically agree with Richard's response, but let me address a couple of your points separately. You question whether we should accept the first premise of my parody argument ("The physical truths explain at most whether a Turing machine halts after finitely many computations"). I agree that we should not accept this premise; the point is that it would look acceptable from the point of view of a thinker whose reasoning powers were limited in certain ways. Similarly, I want to suggest, although the premises of your structure/function argument look acceptable to us, this is to be expected if our reasoning abilities are limited in the way I describe, even if one of those premises is false. (Which premise is false? I wish I knew. One can rebut an argument by showing that similar arguments lead one astray, without knowing exactly where it goes wrong.)
One part of your reply I don’t understand. On your view, the physicalist is committed to the apriori knowability of a conditional of the form: if PTI, then the Turing machine halting truths. This is the case no matter whether the Turing machine halting truths are construed as abstract or concrete (assuming that either way, they are physicalistically acceptable). It’s true that on the abstract understanding P won’t be relevant in the reasoning used to produce this conditional, but I don’t see how that “renders the analogy inapplicable”; if some reasoners can’t come to know that conditional on the basis of apriori reasoning, then there is (something relevantly like) the explanatory gap for them, with no ontological consequences.
Ah, I see. Thanks for clearing that up, Derek.
Derek: I don’t see why you say that premise 1 would look acceptable to thinkers whose reasoning powers are limited in the relevant way. My own reasoning powers are limited in the relevant way and I have no trouble seeing that premise 1 is false. This follows from the very simple reasoning in my previous comment, no idealized reasoning needed. Moral: One doesn’t need to be an ideal reasoner to draw conclusions about what can and can’t be explained through ideal reasoning!
Re the a priori entailment of halting truths: the arguments under discussion explicitly concern explanation rather than a priori entailment. Of course I think that a priori entailment is necessary for explanation in the relevant sense, but it isn’t sufficient (physical truths entail mathematical truths but don’t explain them). So the argument that physical truths don’t explain phenomenal truths strictly speaking leaves open the possibility that physical truths a priori entail phenomenal truths without explaining them. I think that’s a fairly unattractive view for a physicalist, but it’s still a view.
To address this view, one can put the structure/function argument directly in terms of a priori entailment. Here’s a very rough way to put it. (1) If physical truths a priori entail phenomenal truths, phenomenal truths must be structurally/functionally analyzable; (2) phenomenal truths are not structurally/functionally analyzable; so (3) physical truths don’t a priori entail phenomenal truths. Of course some abstract/mathematical truths are a priori entailed (because they are a priori) without being structurally/functionally analyzable, but this sort of loophole doesn’t apply to cases in which the truths are substantively a priori entailed by physical truths, i.e. a priori entailed in such a way that the physical truths ground the entailment (A substantively a priori entails B iff, in any instance of the relevant a priori reasoning from A to B, B is justified only if A is justified.) Here the idea is that for physical truths to substantively a priori entail other truths there needs to be some conceptual hook to undergird the a priori entailment, and the only available conceptual hooks are structural and functional.
Richard: The way I understand these notions, "explaining A suffices to explain B" is incompatible with there being a large chain of idealized reasoning from A to B. I mean "suffices" in a stronger sense: that once one has explained A one has automatically explained B. This may allow some explication of the concepts involved in B, but not more than that. So if phenomenal concepts can be explicated as structural/functional concepts, premise 2 will be false, but otherwise it will be true. Given that you say there is a gap between the relevant concepts, you'd do better to accept premise 2 (so conceived) and deny premise 1. That is, you'll have to hold that structure and function can be used to explain something more than structure and function. I think that this claim is unattractive, and it would take a lot more than we've seen to date to support it. (I think the claim that s/f explains only s/f is one of the many claims about ideal reasoning that nonideal reasoners can have good justification to accept.) Denying premise 2 is also unattractive, though — e.g. I don't think the appeal to idealization does much to make more plausible the implausible view that phenomenal concepts are functional concepts.
Suppose you are in a world that’s infinite in time with physically realized Turing machines around. And suppose that you can reason about them, but only using means available to a Turing machine. Sometimes you could explain on the basis of the physical facts whether a certain machine would halt, but you couldn’t do this in general. I just don’t see what reason you’d have for thinking that it could be done. (If you could envision analog computers or something, maybe you’d see that it could be done; but suppose that you haven’t done this.) That’s why I think that the first premise of the parody argument would look plausible to you.
One way to put the hypothesis under consideration in the paper is that the reason that it seems to us that the microphysical truths don’t explain the phenomenal truths is the same as the reason it would seem to the being considered in the last paragraph that the microphysical truths don’t explain the halting truths: i.e., our reasoning powers aren’t good enough to see the apriori entailment. (So that if we did see the entailment, we would have an explanation.) You are offering a different hypothesis as to the source of the gap: namely, that we can’t see how to link up physical concepts (which can explain only structure and function) and phenomenal concepts. But it seems to me that if my hypothesis were correct, we should expect some such appearance to arise; that is what the parody argument is designed to show. So I don’t see that the argument from structure and function casts doubt on my position.
(I agree that one doesn’t need to be an ideal reasoner to draw many conclusions about what can be explained by ideal reasoning. I only need the weaker point that in some cases non-ideal reasoners can’t draw such conclusions.)
I don't see why a Turing machine couldn't make exactly the argument I made in the first comment. If the explanandum is the halting of concrete machines, then there's no problem in physically explaining all such explananda whether the world is finite or infinite; if the explanandum is the halting of abstract machines, then the character of the physical world is irrelevant. Maybe your thought is that the TM couldn't explain the future halting behavior of all concrete machines in terms of current physical facts, but this now seems clearly disanalogous to the original case. Furthermore, I don't see why the TM should be even slightly tempted to think that this is inexplicable in principle, as opposed to incalculable given its own resources — at least given that the TM knows that the world is deterministic, so that future behavior is explicable from present conditions and laws of nature.
Just to clear up one point. I wasn't suggesting that I think there is a gap between these concepts. I was just trying to explicate a way that Derek's strategy might go. A physicalist might admit that there is this very strong prima facie gap and yet still hold that phenomenal facts are explainable by structural/functional facts in the strong sense you mean, because of a limit on our reasoning abilities.
Also, you say, "My own reasoning powers are limited in the relevant way and I have no trouble seeing that premise 1 is false," but I take it that Derek denies that your reasoning powers are limited in the relevant way. From our vantage point we can see that the halting-truth gap is bridgeable, but only because we have qualitatively different reasoning abilities, or at least this is what I thought Derek was saying.
Dear Derek,
I apologize for taking my time with this reply and am very happy the conference has been extended so I could re-engage you. Since the discussion has moved considerably since I posted I will quote you and try to put my comments in context of what Joachim, David, and Richard have said.
Derek says: I’m happy to grant your point. I didn’t intend to claim that we could become able to solve any possible problem.
I was under the impression that the point of the analogy between Turing Machines and human reasoners was to show how the problems for the former, like the halting problem, are analogous to problems for the latter, which in this context is the explanatory gap. On this analogy, TMs cannot determine the truth-value of certain propositions about their own functioning in the same way that humans cannot bridge the explanatory gap about consciousness. However, and here is your new new mysterianism, TMs are not the only type of machines around. So, if we follow the analogy, humans are not the only reasoners around. Just as we can posit notional computers that solve the halting problem, so can we posit notional humans that solve the explanatory gap. The issue isn't any problem whatsoever, but the explanatory gap in particular; I wasn't suggesting that notional humans could solve every problem.
Derek says: But I don’t see how that’s relevant to the question of the explanatory gap about consciousness.
My claim was that if the relevant notional computers have a halting problem your analogy breaks down. If you argue that the halting problem is like the explanatory gap in some relevant way and the halting problem isn’t resolved but moved one level further in computation, as I presume it is for Oracle machines, then the problem of the explanatory gap isn’t resolved either. That is why my point is relevant to the question of the explanatory gap about consciousness.
Derek says: Perhaps we could become able to close that gap (even if some other gap, not relevant to consciousness, would then arise).
Given your argument we have every reason to believe that this new gap will be relevant to consciousness. In fact, it will be the same gap. Or so I argued.
The exchange you've had with David makes me think that you have some reason to think that even un-constructable notional computers are relevant to your argument. I don't understand how this could be so. The microphysical facts that imply facts about computation are not merely possible microphysical facts. Similarly, notional humans are relevant to your argument only because they are possibly existing humans, with microphysical properties like our own. They are not purely imaginary super-reasoners. If we were like angels we might be able to transcend the explanatory gap, but so what?
Best,
Michal
Let me be a little bit more explicit about why I think a TM couldn't echo the reasoning you produced in your first response. Let's consider the case of explaining future halting behavior of concrete machines in terms of current physical facts, in a world with infinite time; I agree that there is a disanalogy here, but I think that it is essentially a technical problem (I show how to avoid it in the paper, p. 3 n. 3). You simply asserted that in such a case, there would be "very little reason" to accept that the halting facts are inexplicable. But I think that a TM might have very good reason to accept this: after all, she cannot produce the relevant reasoning, and (let's suppose) she cannot envision, or at least has not envisioned, the sort of cognitive resources it would take to produce it.
You suppose that a TM who knows that the world is deterministic would know “that future behavior is explicable from present conditions and laws of nature”. But I don’t see any reason to admit this. Such a TM would know that future behavior is *determined by* present conditions and laws of nature. But why should the TM think that this means that it is *explicable*, in the sense that requires apriori entailment? Given her cognitive limitations, the TM might very well think that there is no chain of reasoning that would lead from the present facts to the future facts.
I think that the key difference between us here is something like this: you think that the argument about structure and function shows that the microphysical facts can’t explain the phenomenal facts, while I am trying to argue that the argument only shows (or maybe that the plausibility of the argument results from) that *we can’t see how* the microphysical facts explain the phenomenal facts. (I’m tempted to echo what you wrote above: I don’t see why you should be even slightly tempted to think that this is inexplicable in principle, as opposed to inexplicable given your own resources.)
Thanks for following up. I’m not entirely sure I’m following you, but maybe this will help. The anti-physicalist argument I am considering depends on the claim that the physicalist is committed to the armchair deducibility of all of the facts from the microphysical facts. Now armchair deducibility here can’t just be armchair deducibility by me, or by some other limited thinker, because then the claim is subject to counter-examples (this is one of the things that the Turing machine example shows). So the claim has to be read as involving armchair deducibility in principle (i.e., possibly by more powerful thinkers than myself). In particular: suppose that there is some limitation on the reasoning abilities of beings with microphysical properties like our own, so that there is some ‘explanatory gap’ that is impossible for such beings to bridge. Then this gap will generate a counterexample to the claim that the dualist argument depends on, so the argument will fail.
In short, I think that my opponent may very well need to admit the relevance of purely imaginary reasoners, and I’m happy to play along. If you think that we shouldn’t be concerned with such things, then you shouldn’t be tempted by the dualist argument in the first place.