Monday, March 23, 2009

New Draft Plantinga Paper

A work in progress - posted for comments - NOT TO BE COPIED OR REPRODUCED ELSEWHERE!

3rd draft


Plantinga’s Belief-Cum-Desire Argument Refuted


In the final chapter of Warrant and Proper Function, Plantinga argues that, if both:

(N) naturalism – the view that there are no supernatural beings

(E) evolution - current evolutionary doctrine

are true, then the probability that:

(R) our cognitive faculties are reliable and produce mostly true beliefs

must be either low or inscrutable.
Plantinga argues, further, that this argument furnishes anyone who accepts N&E with an undefeatable defeater for any belief produced by those faculties, including belief in N&E itself. Hence, N&E has been shown to be self-defeating.
One part of this larger argument is what I call Plantinga’s belief-cum-desire argument. The belief-cum-desire argument is designed to show something more specific - that if the content of our beliefs does causally affect behaviour, and N&E, then the probability of R cannot be high.
Critics of Plantinga’s larger argument against N&E have usually tended to concede, whether or not merely for the sake of argument, that the conclusion of the belief-cum-desire argument is true, and have instead argued that Plantinga’s larger conclusion, that N&E is self-defeating, does not follow (or that, if it does, much the same problem plagues theism). My aim here is different – I aim simply to refute the belief-cum-desire argument.

Plantinga’s belief-cum-desire argument
Suppose some hypothetical rational creatures a lot like us evolve on a planet a lot like Earth - they “hold beliefs, change beliefs, make inferences, and so on”. Suppose:

(C) causal efficacy – the content of beliefs causally affects behaviour

is true. What is the probability of R/N&E&C (that is, the probability of R conditional on N&E&C) specified with respect to these creatures – what is the probability that their cognitive faculties are reliable?
The probability, says Plantinga, is not as high as you might initially be tempted to suppose. For it is not belief per se that is adaptive, but the behaviour it produces. And behaviour is caused by both belief and desire. But then, claims Plantinga, for any given adaptive action (action that enhances the creature’s ability to survive and reproduce),

there will be many belief-desire combinations that could produce that action; and very many of those belief-desire combinations will be such that the belief involved is false.

Plantinga illustrates like so:

So suppose Paul is a prehistoric hominid; a hungry tiger approaches. Fleeing is perhaps the most appropriate behavior: I pointed out that this behavior could be produced by a large number of different belief-desire pairs. To quote myself: “Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief. . . . . Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it. . . . or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps . . . . Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior.”

So adaptive behaviour can be produced by many belief-desire combinations and, “in many of these combinations, the beliefs are false”. We cannot, concludes Plantinga, estimate the probability of R on N&E&C as high. And of course, if we cannot estimate that probability as high for these hypothetical creatures, then we cannot estimate it as high in our own case either.
The above argument that the probability of R given N&E&C cannot be high has some superficial plausibility. Plantinga is surely correct that:

(i) it is behaviour that evolution selects for rather than beliefs per se.

He is also correct that:

(ii) for any piece of adaptive behaviour, there are many belief-desire combinations that might produce it, on many of which the belief or beliefs in question are false.

However, I will show that, appearances to the contrary, it does not follow from (i) and (ii) that we cannot reasonably estimate the probability of R on N&E&C as being high. Indeed, I shall go further, and sketch out some reasons for supposing that the probability of R given N&E&C must, in fact, be fairly high.

Refutation of Plantinga’s belief-cum-desire argument

Consider two possible scenarios:

(a) we have evolved certain false beliefs and certain desires that, in combination, result in adaptive behaviour
(b) we have evolved certain unreliable belief-producing mechanisms and certain desires that, in combination, result in adaptive behaviour.

Perhaps, on N&E&C, (a) is not so unlikely, for the reasons Plantinga cites. Suppose I have an innate belief that tigers are cuddly and that the best way to pet a tiger is to run away from it. If I am also equipped with an innate desire to pet tigers, this results in adaptive behaviour.
But what about (b)? How likely is it on N&E&C that our belief-producing mechanisms are unreliable? Consider the question: what particular set of desires would a species need to evolve in order for the beliefs generated by such an unreliable mechanism to result in generally adaptive behaviour? We will look at some examples, beginning with the cognitive faculty of reason.

Example one: fallacy of affirming the consequent

Consider the fallacy of affirming the consequent (FAC). The FAC is an unreliable form of inference. It sometimes produces true conclusions, but often false ones.
Suppose evolution hard-wires a species of hominid H to be highly prone to the FAC. Suppose a member of this species, H1, concludes using the FAC that jumping out of planes is not safe. Another member of the same species, H2, concludes using the FAC that jumping off tall buildings is safe. They might reason like so:

H1’s inference:
If jumping out of planes is not safe, jumping out of balloons is not safe
Jumping out of balloons is not safe
Jumping out of planes is not safe

H2’s inference:
If jumping out of planes is safe, then jumping out of planes wearing a parachute is safe
Jumping out of planes wearing a parachute is safe
Jumping out of planes is safe

If evolution hard-wires a desire into species H to make H2’s resulting belief that jumping out of planes is safe adaptive – e.g. a powerful desire to commit suicide - that same hard-wired desire will result in the likely death of H1.
What set of desires must evolution instil in species H to render adaptive the potentially-mal-adaptive consequences of applying the FAC? There is none. But then evolution cannot render this unreliable form of inference adaptive by instilling a particular set of desires in H.
The FAC sometimes produces false beliefs, but sometimes true ones. Is that the reason why evolution cannot render the FAC adaptive? Could a method of inference that consistently produced false conclusions from true premises be made adaptive by pairing it with an appropriate set of desires? No, as I explain below.
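The unreliability at issue here is a formal matter, and it can be checked mechanically: an argument form is valid just in case no assignment of truth values makes all its premises true and its conclusion false. A minimal sketch (the function names are mine, purely illustrative) confirms that modus ponens passes this test while the FAC fails it:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q'."""
    return (not p) or q

def is_valid(premises, conclusion):
    """Check every truth assignment to the atoms p, q for a
    counterexample (all premises true, conclusion false)."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: if p then q; p; therefore q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))   # True: no counterexample exists

# Affirming the consequent: if p then q; q; therefore p.
# (H2's inference has this form, with p = "jumping out of planes is
# safe" and q = "jumping out of planes wearing a parachute is safe".)
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))   # False: p false, q true is a counterexample
```

The single counterexample (p false, q true) is exactly the sense in which the FAC sometimes delivers true conclusions and often false ones.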

Example two: Counter-induction

Suppose that two hominids appear: hominid A and hominid B. A reasons inductively, B counter-inductively, like so:

A observes that whenever other hominids eat, they usually continue to live, and when they stop eating, they die. He concludes that if he eats, he’ll probably continue to live, and if he stops eating, he’ll die.

B observes that whenever other hominids eat, they usually continue to live, and when they stop eating, they die. He concludes that if he eats, he’ll probably die, and if he stops eating, he’ll continue to live.

When A applies his form of reasoning to true premises, he is likely to end up with a true belief. B, on the other hand, is likely to end up with a false belief. His counter-inductive method of reasoning consistently produces false beliefs.
However, if evolution equips A with a desire to live, and B with a desire to die, B’s false belief is just as adaptive as A’s true belief. So while counter-induction has delivered a false belief, it has not delivered behaviour that is mal-adaptive. In fact, it has delivered behaviour just as adaptive as that delivered by induction.
So far, it seems that Plantinga is correct: given evolution equips A and B with the right desires, the behaviour produced by the two belief-forming mechanisms is equally adaptive.
But now suppose A and B engage in further reasoning, applying their respective methods of inference like so:

A observes that other hominids that forage and hunt get food to eat, and those who don’t get none. A concludes that if he hunts and gathers, he’ll get food to eat, and if he doesn’t he’ll get none.

B observes that other hominids that forage and hunt get food to eat, and those that don’t get none. B concludes that if he doesn’t hunt and gather, he’ll get food to eat, and if he does, he’ll get none.

Now, A’s reasoning helps him survive. Given his desire to live, these two inferences together will lead him to hunt and gather. That’s adaptive behaviour.
The problem is, given the desire required to get B’s first counter-inductive inference to produce adaptive behaviour, B’s second counter-inductive inference is now likely to produce mal-adaptive behaviour. Given B’s desire to die, plus his false belief that eating will kill him, his second counter-inductively generated conclusion will no doubt lead him not to go hunting and gathering. That’s not adaptive behaviour. B will probably starve to death.
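The structural point can be put in a toy model (every label and rule below is my own illustrative assumption, not part of Plantinga's apparatus): each agent faces two choices, believes something about what each option yields, and performs an option just when its believed outcome is desired. Induction delivers true beliefs; counter-induction swaps them within each pair. A single fixed desire set then cannot make counter-induction come out adaptive on both choices:

```python
# What each action really yields, and which actions in fact keep you alive.
TRUE_OUTCOME = {
    "eat": "live", "not-eat": "die",
    "forage": "get food", "not-forage": "no food",
}
PAIRS = [("eat", "not-eat"), ("forage", "not-forage")]
SURVIVAL = {"eat", "forage"}

def beliefs(method):
    """Induction tracks the truth; counter-induction swaps each
    pair of outcomes (B's beliefs in the text)."""
    if method == "induction":
        return dict(TRUE_OUTCOME)
    swapped = {}
    for x, y in PAIRS:
        swapped[x], swapped[y] = TRUE_OUTCOME[y], TRUE_OUTCOME[x]
    return swapped

def chosen_actions(method, desires):
    """For each pair, do the first option iff its believed outcome is
    desired; otherwise do the alternative."""
    b = beliefs(method)
    return {x if b[x] in desires else y for x, y in PAIRS}

# A: induction plus the ordinary desires. Both survival actions performed.
print(chosen_actions("induction", {"live", "get food"}) == SURVIVAL)  # True

# B: counter-induction. Desiring death makes him eat (he believes eating
# kills), and wanting death by eating instrumentally requires wanting
# food. But that very desire, given his belief that NOT foraging yields
# food, sends him away from foraging. He starves.
print(chosen_actions("counter-induction", {"die", "get food"}) == SURVIVAL)  # False
```

(The toy hard-codes the instrumental step: wanting to eat requires wanting food. With that constraint in place, the desire that rescues B's first inference sabotages his second, which is just the pattern described above.)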

Plantinga is correct that for any piece of adaptive behaviour, there are many belief-desire combinations that might produce it, on many of which the belief or beliefs in question are false. But it does not follow that the probability of R given N&E&C cannot reasonably be estimated as high. The members of a species equipped with unreliable belief-forming mechanisms such as the FAC or counter-induction will end up with all sorts of combinations of false beliefs the mal-adaptive consequences of which cannot be neutralized by evolution hard-wiring that species with some particular set of desires.
In fact there are two difficulties here.
First, there is the problem of novel beliefs. An advantage of procedural reasoning is that it allows for creatures able to problem-solve and adapt, within their own lifetimes, to a changing environment and novel situations. An adaptive inferential mechanism is likely to be applied in new ways. But then evolution cannot anticipate what desires will be required to render adaptive the innumerable potentially mal-adaptive conclusions likely to be drawn. If B draws the first counter-inductive conclusion, his desire to die renders his conclusion adaptive. But if B happens to go on to draw that second conclusion using the same unreliable form of inference, that very same desire now renders the conclusion mal-adaptive.
The second problem is that not only can evolution not anticipate which desires creatures will need to render the conclusions of an unreliable mechanism adaptive – when it comes to unreliable forms of inference, there just is no set of desires that will render the mechanism adaptive. A set of desires that renders one set of conclusions adaptive will render another set of conclusions generated by the same mechanism mal-adaptive.
On the other hand, evolution can make reliable forms of inference adaptive in a straightforward way, by equipping the species in question with desires for those things that enhance its ability to survive and reproduce. In which case, the probability that reliable forms of inference will evolve, as opposed to unreliable forms of inference, looks to be high.

Other cognitive faculties

The considerations sketched out above suggest that N&E&C should lead us to estimate the probability that our cognitive faculty of procedural reasoning is reliable as fairly high. But of course, procedural reason alone furnishes us with little, if any, knowledge. Other cognitive faculties – most notably perception and memory – must also come into play.
How reasonable is it, given N&E&C, to suppose that these other faculties are reliable? If there is no good reason to suppose they are reliable, then there’s no good reason to suppose our various faculties working in conjunction constitute a reliable belief-forming system. My car may have a reliable carburettor, but if other parts are unreliable, the car as a whole remains unreliable.
So let’s now look at the cognitive faculties of memory and perception. Has Plantinga shown that, given N&E&C, the probability that these other faculties are reliable cannot be high?

Memory
Suppose hominid species H is equipped with an unreliable memory. Hominid H1 has at time t1 true beliefs A and B. But, because H1’s memory is unreliable, she later believes the falsehoods not-A and not-B. Is there a desire or set of desires with which evolution might also equip species H that will render adaptive the behaviour produced by these two resulting false beliefs? Very probably. If A is the belief that if you eat you will live and B the belief that if you don’t eat you will die, these beliefs will result in adaptive action if H1 desires to die. However, because H1 previously believed A and B, she would previously have not eaten, which is mal-adaptive behaviour. There is no set of desires that will make both the input and output beliefs of this unreliable faculty result in adaptive behaviour. But then unguided evolution cannot equip species H with a set of desires that will make the input and output beliefs of this unreliable memory faculty generally adaptive. Evolution can, on the other hand, equip a species with a set of desires that will make the input and output beliefs of a reliable faculty generally adaptive. It appears, then, that N&E&C will therefore strongly favour a reliable memory faculty over an unreliable faculty.
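The memory case has the same exhaustible structure, which a small sketch can make explicit (the time labels, and the rule that the agent picks whichever action it believes yields its preferred outcome, are my own simplifications): at t1 the agent's beliefs are true; its unreliable memory later inverts them. Running through the possible hard-wired preferences shows that none keeps the agent eating at both times:

```python
# The agent's belief, at each time, about which option leads to which
# outcome. At t1 the beliefs are true; the faulty memory later inverts them.
BELIEFS = {
    "t1": {"eat": "live", "not-eat": "die"},
    "t2": {"eat": "die", "not-eat": "live"},
}

def choice(belief_map, preferred_outcome):
    """The agent picks whichever action it believes yields the
    outcome it prefers."""
    return next(a for a, o in belief_map.items() if o == preferred_outcome)

# Survival requires eating at BOTH times. Neither possible hard-wired
# preference achieves that, so no fixed desire rescues the faulty memory.
for preferred in ["live", "die"]:
    always_eats = all(choice(BELIEFS[t], preferred) == "eat" for t in BELIEFS)
    print(preferred, always_eats)
# live False
# die False
```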

Perception
How likely is it, on N&E&C, that evolution would produce a species with a reliable perceptual-mechanism-cum-desire combination, rather than an unreliable-perceptual-mechanism-cum-desire combination?
Fairly likely, I suspect. There are two categories of unreliable perceptual mechanisms:

(1) Unreliable mechanisms producing mostly false beliefs.
(2) Unreliable mechanisms that produce a significant proportion of, but not mostly, false beliefs.

Let’s begin by considering perceptual or quasi-perceptual mechanisms of type (1). Such mechanisms fall, in turn, into two categories:

(1a) Unreliable mechanisms producing mostly false beliefs but in a systematic, predictable way.

and

(1b) Unreliable mechanisms producing mostly false beliefs in a random, unpredictable way.

An example of (1a) would be a perceptual or quasi-perceptual mechanism that, whenever the subject is presented with a tiger, produces the belief that there is a rabbit present. There is consistency to the error. An example of (1b) would be a perceptual or quasi-perceptual mechanism that, when the subject is presented with a tiger, may the first time produce the belief that there is nothing present, the next time the belief that a rabbit is present, the next time the belief that there is a chair present, and the time after that the belief that there’s a side of beef present, etc., but rarely if ever the belief that there is a tiger present. While we can predict that the subject will make an error about there being a tiger in front of them, it is not possible, even given knowledge of the erroneous beliefs previously produced when a tiger was present, to predict what erroneous belief will be produced on this occasion.
Can unguided evolution make an unreliable mechanism of type (1b) produce adaptive behaviour by combining it with an appropriate set of desires? It is hard to see how. If there is a tiger present and the mechanism makes me believe there is a rabbit present, my mistaken belief can still result in adaptive behaviour if evolution has given me a desire to run away from rabbits. But if the erroneous beliefs are being generated in a random way, there will be no particular desire or set of desires with which evolution might equip me that will make the random false beliefs generated by this mechanism adaptive.
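The contrast between (1a) and (1b) can be made vivid with a toy simulation (the stimuli, actions, and numbers are my own illustrative assumptions): when the misperception is a fixed, systematic mapping, a single hard-wired percept-to-action table can compensate perfectly; when the misperception is random, no fixed table can.

```python
import random

STIMULI = ["tiger", "rabbit", "berry"]
ADAPTIVE = {"tiger": "flee", "rabbit": "chase", "berry": "eat"}

# (1a) systematic error: each stimulus is always misperceived the same way.
SYSTEMATIC = {"tiger": "rabbit", "rabbit": "berry", "berry": "tiger"}

# Evolution can compensate with a fixed percept -> action table (a stand-in
# for a hard-wired desire set): respond to each percept with the action
# appropriate to the stimulus that systematically produces it.
compensating = {SYSTEMATIC[s]: ADAPTIVE[s] for s in STIMULI}
print(all(compensating[SYSTEMATIC[s]] == ADAPTIVE[s] for s in STIMULI))  # True

# (1b) random error: the misperception differs unpredictably each time.
rng = random.Random(0)

def random_percept(stimulus):
    """Misperceive the stimulus as a randomly chosen other stimulus."""
    return rng.choice([x for x in STIMULI if x != stimulus])

def success_rate(table, trials=10000):
    hits = 0
    for _ in range(trials):
        s = rng.choice(STIMULI)
        hits += table[random_percept(s)] == ADAPTIVE[s]
    return hits / trials

# Under random error every percept is equally compatible with either of
# the two other stimuli, so any fixed table is right only about half
# the time; the errors cannot be neutralised by any choice of desires.
print(success_rate(compensating) < 0.7)  # True
```

(Roughly: with three stimuli and uniform random error, each percept leaves the true stimulus 50/50 between the remaining two, so no fixed table can beat 50% in the long run.)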
What about a mechanism of type (1a)? Does the pattern to the errors produced by the mechanism mean that evolution can render the mechanism adaptive by combining it with an appropriate set of desires?
That suggestion might seem plausible when we consider a very simple example of adaptive behaviour, such as running away from tigers. If the mechanism systematically produces the belief that a rabbit is present whenever a tiger is present, all evolution need do is instil in these subjects a powerful desire to run away from rabbits.
But the suggestion becomes far less plausible when we consider more sophisticated patterns of adaptive behaviour.
Suppose, for example, that to reach the food you need in order to survive, you need to engage in some team activity with other members of your species – e.g. negotiating some tricky terrain that includes a narrow ledge and a poisonous snake. Someone has to distract the snake while someone else crawls carefully along the ledge and leaps over the snake at the exact moment it is distracted.
Now try to imagine a perceptual mechanism of type (1a) that produces mostly false beliefs about your surroundings, but beliefs that, when paired with certain desires with which evolution has pre-equipped your species, will result in the required adaptive behaviour from you and your team mates.
You must not believe there is a snake and a ledge and some food and some team mates with whom you must co-operate. And nor must your team mates. You, and they, must have mostly false beliefs about your environment, but beliefs that, nevertheless, when paired with desires with which evolution has collectively furnished you, lead you to act in tandem with your other team members to retrieve and eat the food.
In fact, setting aside the challenge of imagining such a mechanism, it is a difficult enough challenge to construct just a set of mostly false beliefs and hard-wired desires that would result in the complex sequence of actions required. Perhaps it is not impossible. Perhaps your (1a) type mechanism causes you to believe that instead of food at the end of the ledge, there’s a little man who will give you a tickle stick if you walk carefully along a white line, jumping in the air after 15 seconds, and then reach down and take the stick. Perhaps you believe that eating the tickle stick is the best way to get tickled. If we pair this false belief with a desire to be tickled, your resulting sequence of actions might yet be adaptive. You might successfully negotiate the narrow ledge, leap over that snake (though who is going to distract it?) and then eat the food.
However, even if we can come up with a mostly-false-perceptual-belief-cum-desire combination that would, in this situation, result in adaptive action, it is more difficult still to come up with a belief-forming mechanism of type (1a) which, paired with an appropriate set of desires, will generally result in sophisticated patterns of adaptive behaviour of the sort of which we are capable. If the next time the food lies beyond a precipice that can only be negotiated if you and your team place a tree trunk across the gap, then the false belief “There’s a little man who will give you a tickle stick if you walk carefully forward along the white line, jumping in the air after 15 seconds…” combined with that powerful desire to be tickled will send you and your team mates straight over the cliff. That is not adaptive behaviour.
It is not yet clear that there is any set of desires that, when combined with an unreliable perceptual mechanism of type (1a), will generally produce sophisticated patterns of adaptive behaviour of the kind we actually exhibit.

Let’s now turn to perceptual mechanisms of the second sort:

(2) Unreliable mechanisms that produce a significant proportion of, but not mostly, false beliefs

Such, as it were, hit-and-miss (as opposed to consistently-miss) mechanisms may also be of two kinds: those that produce false beliefs in a random way, there being no pattern to the errors, and those in which the errors are, in certain respects, systematic.
We have already seen in the case of mechanisms of type (1b) that a mechanism producing erroneous beliefs in a random way is not a mechanism that evolution might pair off with a particular set of desires such that adaptive behaviour will result.
But what of a hit and miss mechanism in which there is a pattern to the misses? An example would be a mechanism that was reliable with respect to the shape of objects but systematically unreliable with respect to position. Equipped with such a mechanism, a creature might believe, correctly, that there is a square object in its vicinity, but it will be mistaken about where that object is located. Could evolution pair off such an unreliable mechanism with a set of desires that will make the false beliefs produced result in generally adaptive behaviour?
Are there potentially many such perceptual or quasi-perceptual mechanisms that, while systematically producing many false beliefs, will still result in generally adaptive behaviour, given evolution pairs the mechanism with an appropriate set of desires? And, if so, is there a significant probability, on N&E&C, that we have evolved such an unreliable mechanism, rather than a reliable mechanism?
Here is a sketch of two reasons why we may not be able to answer yes to both these questions.
First, we have seen that it is difficult to envisage type (1a) mechanisms that, given N&E, will result not just in generally adaptive bits of behaviour, but in sophisticated sequences of team activity of the sort required to retrieve the food from that snake-inhabited narrow ledge. I cannot see that it is significantly easier to envisage type (2) mechanisms of that sort. Try, for example, to envisage a type (2) mechanism producing mostly correct beliefs about the shape of objects but systematically incorrect beliefs about their location that will result in such successful sequences of team activity – I have tried, and failed. If someone claims there are many such potential mechanisms, the onus is on them to provide a list of examples to illustrate the point. I am unable to.
Second, even if there are many such potential mechanisms, is there a significant probability, given N&E&C, that we have evolved such a mechanism rather than a reliable mechanism? Possibly not. Consider, again, a mechanism that is reliable about the shape of objects but systematically unreliable about their position. The most obvious way such a mechanism might evolve is in two stages: first evolving a mechanism that is reliable about both the shape and position of objects, and then engineering a mechanism that systematically reassigns positions to objects, but in such a way that, given the desires with which the species is also equipped, still results in adaptive behaviour. But why would that second level of engineering evolve, given the reliable first level is already producing adaptive behaviour? What would be the pay-off, for evolution, of adding a sophisticated location-reassignment mechanism and changing the desires so that adaptive behaviour still results? If there is unlikely to be such a pay-off, it is unlikely such a systematic-error-producing mechanism would evolve. Evolution will stick with the reliable mechanism.
In fact, even if N&E&C had equipped us with an unreliable perceptual faculty or faculties of type (2), it would still not follow that many of our beliefs are false. We have seen reasons to suppose that N&E&C will favour reliable as opposed to unreliable faculties of memory and procedural reasoning. If a species also possesses perceptual faculties that are partly reliable and partly, but systematically, unreliable, there arises the possibility that members of this species will be able to figure out that they are, to some extent, being systematically misled by those faculties. In which case, they may adjust their beliefs accordingly. Their beliefs would now reliably reflect reality, despite the fact that they possessed unreliable perceptual faculties. If R is the reliability of our cognitive faculties acting in tandem, the probability of R might still be high, even if it was more probable than not that we possess unreliable perceptual faculties of type (2).

Conclusion

Regarding his hypothetical creatures evolving on another planet, Plantinga claims that, for any given piece of adaptive behaviour,

there are many belief-desire combinations that will lead to the adaptive action; in many of these combinations, the beliefs are false. Without further knowledge of these creatures, therefore, we could hardly estimate the probability of R on N&E and this final possibility [C] as high.

The word “therefore” in this passage is not justified. While it is true that there are many belief-cum-desire combinations that will lead to any given piece of adaptive action, it does not follow that we cannot reasonably estimate the probability of R/N&E&C to be high. This is because, when we turn from beliefs to belief-producing cognitive mechanisms of the sort with which we are actually equipped (e.g. reason, perception, memory), it is no longer clear that there are many (indeed, any) unreliable versions of such mechanisms that, by virtue of unguided evolution pairing them with certain hard-wired desires, will nevertheless result in sophisticated patterns of adaptive behaviour of the sort we actually exhibit.
Further, we have seen that, even if there are many such unreliable but adaptive versions of our cognitive faculties, (i) it doesn’t follow, on N&E&C, that such an unreliable version is at least as likely to have evolved as a reliable version (for it may be that, for the most likely candidates, the evolutionary route to the unreliable version is likely to be via the reliable version), and (ii) even if it is probable, on N&E&C, that we have evolved an unreliable version of such a mechanism (perception, say), it still does not follow that our cognitive mechanisms, acting in tandem, are unlikely to be reliable (for the errors introduced by the most likely candidate unreliable mechanisms may still be correctable).
So Plantinga’s belief-cum-desire argument fails. Indeed, I have sketched out some reasons for thinking that while, for any given piece of adaptive action, there are many belief-cum-desire combinations that will produce it, the probability, on N&E&C, that our cognitive faculties, operating in tandem, are reliable, is pretty high (though I certainly do not claim to have established that here).

35 comments:

wombat said...

Coming along nicely.

"If the mechanism systematically produces the belief that a rabbit is present whenever a tiger is present, all evolution need do is instil in these subjects a powerful desire to run away from rabbits."

Not quite - it will depend I think on the relative cost/benefits involved and the frequency of real rabbits and tigers. Running has a cost in itself albeit a relatively small one - when you are running energy is expended and you are not busy feeding or reproducing. (OK the exercise may be a plus). If you spend too much time doing it then exhaustion will lead to death.
Consider also the benefit to be accrued by approaching a real rabbit. They are tasty nutritious morsels if you can catch them. If there are very many rabbits and few tigers the behaviour will hinder not aid survival.


Re: systematically correct shapes but wrong position.

This does seem to happen. An example is spear fishing from a boat, where the fish appears in a different position through the optical properties of water.
In any case, if the position of the person's arm is also affected by the error then the brain can compensate.
The other example that springs to mind is the "upside down glasses" experiment where the brain adapts to get things the right way up again (see e.g. here). New Scientist is also running a mini series on bodily illusions, some of which seem to show this sort of thing happening.

AIGBusted said...

Excellent Work Stephen! You'll have to let me know when this is published!

Papilio said...

Any examples that use instant death as a consequence aren't based on reason. A deer runs from a tiger but there is precious little reasoning and belief involved. In fact I'd wager that a suicidal person would still run from a tiger by instinct. Plantinga's example is actually appallingly weak.

Better examples would have to use behaviour that is clearly too complex to be instinctive, i.e. anything that no animal (humans excepted) does. The result might be more subtle but harder to argue.

What belief/reasoning could determine whether or not a pre-human hominid would choose to take psychoactive substances, for example? Would it be adaptive?

wombat said...

"What belief/reasoning could determine whether or not a pre-human hominid would choose to take psychoactive substances, for example? Would it be adaptive?"

How about - "If I smoke this weed I will have the strength to overcome the fearsome rabbit rather than having to run away from it."?

Steven Carr said...

If the probability of rational cognitive faculties is high, given evolution and naturalism, then why do so few species out of the billions that have existed learn how to count to 20?



By the way, is running away from a tiger the best thing for a human to do?

Surely it just alerts the tiger to the fact that something thinks it is tiger-food.

Stephen Law said...

actually it is not that the prob of creatures with rational faculties is high, but that creatures with "rational" faculties, like ours should have reliable as opposed to unreliable faculties. that needs clarifying in the paper...

Mr. Hamtastic said...

This must be outstanding... I understand each paragraph on its own, but not in combination. Must be a "professional philosopher" thing. I like the algebraics. Everything seems spelled correctly.

Hope that helps!

Harald Hanche-Olsen said...

Actually, I am sort of turned off by what seems to me like a pseudo-mathematical notation. It begins easy enough: I suppose N&E means “N and E”. But then R and C get thrown into the pot and we end up with R/N&E+C. What does the slash mean? And the plus sign? I suppose the plus sign does not mean “and”, since there is already the ampersand for this purpose. But it seems to make sense if it does mean “and”. And the slash? Does it indicate conditional probability? I am more familiar with using a vertical bar | for that. But this symbol only has meaning in a context: P(A|B) is the conditional probability of A given B. But A|B has no meaning whatsoever except in the given context. I don't mean to be curmudgeonly (or maybe I do?), but I think the pseudo-mathematics (if that is what it is, I am not sure) is more confusing than enlightening.

Kyle said...

I've read quite a bit of the material on this argument, and I think you have come up with a genuinely new approach. I also think what you have to say furthers the discussion.

However, I have three concerns.

1. These are only counter-examples. There seems to be a lot of room for a reply here. Plantinga does not need to show that any old mechanism can produce adaptive behaviour, only that there are plenty of unreliable mechanisms that can produce adaptive behaviour.

2. Perhaps all you have shown is that our beliefs must have an internal consistency, so that we can be sure that when confronted with the same situation the same behaviour will be produced. This does not mean we have to have mostly true beliefs though. Perhaps we form false input beliefs, process them reliably, output false response beliefs, which when combined with our desires produces adaptive behaviour.

3. Even if it is impossible to find a set of desires that render an unreliable mechanism reliable, it does not mean that those mechanisms would be deselected. The mechanisms do not have to produce adaptive behaviour generally, only adaptive behaviour in the environment in which they evolved. For example, our tastes in food are a good guide to what to eat in a hunter-gatherer setting, but in the 21st century, with processed food full of salt and sugar, they are a bad guide. If a species finds itself with a faulty belief-producing mechanism, all that is required is that it has desires that will produce adaptive behaviour in the environment it is in, not adaptive behaviour generally.

wombat said...

Kyle:

"Plantinga does not need to show that any old mechanism can produce adaptive behaviour, only that plenty of unreliable mechanisms can produce adaptive behaviour."

AND that at least some of the unreliable mechanisms are more favorable in evolutionary terms than the alternatives.

(Taking into account costs and the context of the local environment of course as you rightly stress in point (3))

Tony Lloyd said...

I think I'm with you. Are you saying that Plantinga's belief/desire pairing can come up with the right strategy but, if you look at the broader range of situations, Natural Selection can choose true belief and true desire because it more often comes up with the right one?

Given “true” and “false” desires we have four possibilities:

1. True belief and true desire
2. True belief and false desire
3. False belief and true desire
4. False belief and false desire

“2” reliably gets you killed, so that's out. “1” never gets you killed. “3” and “4” sometimes keep you alive but sometimes do not: that's when Natural Selection pounces.

That would seem to me to justify a conclusion a little stronger than “I certainly do not claim to have established that here”.
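Tony's four-way table can be given a back-of-envelope simulation. The survival chances below are invented purely for illustration: type 2 almost always dies, types 3 and 4 survive half the time, type 1 almost always survives. Differential reproduction then drives the population towards type 1:

```python
import random

random.seed(0)

# Hypothetical per-generation survival chances for the four types.
# These numbers are made up for illustration only.
SURVIVAL = {
    "true belief, true desire": 0.95,    # type 1: rarely gets you killed
    "true belief, false desire": 0.05,   # type 2: reliably gets you killed
    "false belief, true desire": 0.50,   # type 3: sometimes lucky
    "false belief, false desire": 0.50,  # type 4: sometimes lucky
}

def next_generation(pop):
    """Kill individuals by type-specific chance, then refill the
    population by sampling uniformly from the survivors."""
    survivors = [t for t in pop if random.random() < SURVIVAL[t]]
    return [random.choice(survivors) for _ in range(len(pop))]

pop = [t for t in SURVIVAL for _ in range(250)]  # 1000 individuals, 250 each
for _ in range(30):
    pop = next_generation(pop)

share_type1 = pop.count("true belief, true desire") / len(pop)
print(f"type 1 share after 30 generations: {share_type1:.2f}")
```

Drift aside, any persistent survival advantage compounds generation on generation, which is why "sometimes keeps you alive" loses out to "never gets you killed".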

Steven Carr said...

So false belief and true desire are more evolutionarily adaptive than true belief and true desire?

So if I see a tiger, which of these is more likely to get me killed?

'That tiger weighs more than 1 ounce and I do not want to be eaten.'

'That tiger can run at 80 mph and I do not want to be eaten'



Surely there are categories of belief other than 'true or false'.

'True but irrelevant'

'True enough'

'False but useful'

'False and irrelevant'

Stephen Law is correct that what should be considered is not individual true or false beliefs but the totality of the system that generates beliefs.

Can that be made to consistently produce false beliefs and still be adaptive to the ever changing problems thrown up by the world around us?

Tony Lloyd said...

Hi Steven (ie Carr, with the "V")

True belief and true desire would be most adaptive; mixed truth and falsity would be better than wanting to die and getting it spot on.

Waking up today I was thinking that option "1" may not be an option: if it doesn't arise in the genome, Natural Selection cannot select it. But Stephen's argument would establish a direction. Stephen (with a "P" this time), haven't you just established an explanation for the apparent direction in evolution?

Stephen Law said...

Tony - what's a true desire, though? Desires are not usually thought of as having truth-values.

Tony Lloyd said...

Would "positive" and "negative" work instead of "true" and "false"? The desire to live being "positive" and the desire to die "negative".

Kosh3 said...

"Consider the fallacy of affirming the consequent (FAC). The FAC is an unreliable form of inference. It sometimes produces true conclusions, but often false."

Just a quick comment on the above. Although FAC is a formal fallacy (being, as it is, deductively invalid), it is also the inferential format of inference to the best explanation/hypothetical induction, which is perhaps the most ubiquitous form of reasoning there is.

To say that it is unreliable is to say that in most cases (>50%) it leads to false conclusions. But that is surely dependent on specific arguments, and must be taken on a case-by-case basis.
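Kosh3's point can be checked mechanically: the form is invalid, but its "reliability" is a separate question from its invalidity. A minimal brute-force sweep of the truth table (Python, illustration only):

```python
from itertools import product

# Affirming the consequent: premises (P -> Q) and Q; conclusion P.
rows = []
for p_val, q_val in product([True, False], repeat=2):
    implies = (not p_val) or q_val      # material conditional P -> Q
    if implies and q_val:               # keep rows where both premises hold
        rows.append((p_val, q_val))

# Invalid: there is a row where the premises hold but the conclusion fails.
invalid = any(not p_val for p_val, _ in rows)
true_conclusions = sum(1 for p_val, _ in rows if p_val)
print(invalid, true_conclusions, len(rows))  # True 1 2
```

Over the bare truth table the conclusion comes out true in one of the two rows where the premises hold; whether real-world instances of the form do better or worse than that is, as Kosh3 says, a case-by-case matter.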

The Barefoot Bum said...

Amazingly enough, I have to side with Kyle here. You have demonstrated only that some false beliefs or belief-producing mechanisms cannot plausibly result in adaptive behavior. You have not proven that it is impossible or even unlikely that other — perhaps even many other — false beliefs or belief-producing mechanisms, coupled with the appropriate desires, can produce adaptive behavior.

There are some key terms left vague here. For example in:

(C) causal efficacy – the content of beliefs causally affects behaviour

Precisely what do you mean by the content of a belief? What's the difference between how the content of a belief affects behavior, and how the belief itself affects behavior? Is there indeed a difference between a belief and its content? This objection is not just a semantic quibble: the difference between the causal efficacy of a belief and the content of a belief is key to Plantinga's argument.

Plantinga's argument can be seen mutatis mutandis as a restatement of the underdetermination argument against empirical science. Evolution is closely analogous to empirical science: mutation maps to novel hypothesis; genome maps to theoretical context; and experiment maps to selection.

The underdetermination argument says that for a finite set of experiments, there are infinitely many theories that will account for those experiments, and only one of those theories can actually be true. (Even assuming one can be true begs the question that a true theory does indeed explain experiments, but that's a metaphysical horse of a different color.) Therefore, the probability of hitting the true theory by empirical means is infinitesimal. IIRC, this is Carnap's objection to Popper's formulation of probabilism.
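The finite-experiments point is easy to make concrete: for any finite set of data points, one can always add a term that vanishes at exactly those points, giving a rival theory no existing experiment can distinguish. A minimal sketch (Python, with made-up data):

```python
# Three 'experimental' data points (invented for illustration).
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]

def theory_a(x):
    """One theory fitting the data: y = x**2 + x + 1."""
    return x**2 + x + 1

def theory_b(x):
    """A rival theory: theory_a plus a term that vanishes at every
    observed point, so no existing experiment can tell them apart."""
    return theory_a(x) + x * (x - 1) * (x - 2)

# Both theories reproduce every observation...
for x, y in zip(xs, ys):
    assert theory_a(x) == y and theory_b(x) == y

# ...yet they disagree at the very next data point.
print(theory_a(3.0), theory_b(3.0))  # 13.0 19.0
```

Since a vanishing term of this kind can be built for any finite data set, the rival theories can be multiplied without limit, which is the underdetermination point in miniature.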

The Barefoot Bum said...

In short, showing that there are some non-adaptive beliefs and belief-formation mechanisms does not rebut Plantinga's argument, just as showing that some theories are contradicted by the evidence does not rebut the underdetermination argument.

Stephen Law said...

I am not trying to prove what you think I am BB. To refute an argument that P, you don't have to prove not-P. However, as I say at the end, I do consider the examples in the paper to raise serious difficulties for anyone who thinks unreliable versions of our faculties are more likely to have evolved than reliable versions.

The Barefoot Bum said...

To refute an argument that P, you don't have to prove not-P.

<shrugs> It's your paper.

M. Tully said...

"Consider two possible scenarios:

(a) we have evolved certain false beliefs and certain desires that, in combination, result in adaptive behaviour
(b) we evolved certain unreliable belief-producing mechanisms and certain desires that, in combination, result in adaptive behaviour."

So then, doesn't Plantinga's argument assume heritable beliefs? I don't recall heritable beliefs ever being demonstrated so can't you just dispense with "a" altogether?

M. Tully said...

Also,

Doesn't Plantinga's "erroneous belief" man have to live in a vacuum? By that I mean must he not, by necessity, never come into contact with counter evidence to his beliefs?

For instance, if the man wants to pet the tiger and thinks he can do that by running away from it, after he attempts that strategy several times and fails, the evolved error correcting mechanism would respond with negative stimulus prompting that he change his beliefs. Plantinga's argument ignores this evolved mechanism altogether.

Maybe run the draft by David Buss or Steven Pinker, but I believe the evolved error-related negativity response is fairly well documented.

M. Tully said...

Damn it!

There was something about Plantinga's argument that was gnawing at me and I couldn't put my finger on it. It seemed not to fit the standard pattern.

But it does. It's a god-of-the-gaps argument. Let me rephrase: "The human brain seems to, on the whole, generate a generally reliable picture of the world in which we live. I for the life of me cannot figure out how that could have occurred naturally, ergo god."

He attempts to surreptitiously shift the burden of proof to the naturalist when in fact he is the one making a positive claim.

Consequently, all you must do is give one PLAUSIBLE way in which a naturally evolved, generally reliable, mechanism of interpreting data developed and you have refuted the argument.

It doesn't have to be the exact mechanism, just a plausible one. In your paper it might be a good idea to explicitly point out the reasoning why a plausible explanation constitutes refutation.

Got to give Plantinga credit though, it was subtle.

M. Tully said...

Oh, and then the finisher.

Ask the apologists to demonstrate where, in the hundreds of millions of years of the evolution of the human brain, the supernatural mechanism appears. And where exactly does it interface with the material brain states that have been demonstrated to affect behavior? And why is it that something divine is also apparently so reliant on the material brain to function (cases of damage to the material brain affecting moral behavior; Phineas Gage would be a good starting example)?

The apologists can't give an evidence-based argument to explain the above. They are only left with their own personal gap.

Steven Carr said...

'There is a gap'.

Well, yes. At least Descartes was prepared to stick his neck out and have a punt on the pineal gland as being the brain/soul interface.

Plantinga knows better than to give such a hostage to fortune, however, and won't say how this sensus divinitatis might work.

I think the apologists' position can be summed up as: they can't rely on their brains, there is a gap somewhere in their heads, and it is simply a miracle that they can think straight.

But that might be construed as an unfair depiction of their position...

It is a difficult problem, how we manage to generate beliefs which are useful and relevant when faced with life.

I think we work far more on 'rules of thumb' than on deductive reasoning.

wombat said...

Re : "Where is the gap"

I had construed Plantinga's argument to be a step towards allowing supernatural intervention in the process of evolution rather than a direct interface to the brain. It sidesteps a huge number of the usual naturalist objections to things like souls etc. being active on a day-to-day basis.

Steven Carr said...

The 'process of evolution' is still happening today, so Plantinga still needs to show this supernatural intervention happening today.

If supernatural intervention is not happening today, how does a fertilised egg, lacking all cognitive faculties, develop into a unique brain with reliable cognitive faculties using only natural processes?

If natural processes can produce reliable brains from a fertilised egg in 38 weeks, why can they not do so in 3 billion years?

Unless God is implanting cognitive faculties into each developing baby?

wombat said...

Re: "supernatural intervention happening today."

Surely it needs only to happen once, like any other mutation? Since Plantinga argues that the truth or otherwise of a belief is neutral in evolutionary terms, since it is the resultant behavior that matters to survival, he can also (indeed must) claim that there is no selection pressure to remove reliable cognitive faculties.

The line "God made man, but he used a monkey to do it." seems apt.

M. Tully said...

"I had construed Plantinga's argument to be a step towards allowing supernatural intervention in the process of evolution rather than a direct interface to the brain."

It's still a gap. "I don't know how the correct line of mutations came about naturally, ergo god."

M. Tully said...

Wombat,

"Surely it needs only to happen once, like any other mutation?" Interesting point. But then wouldn't Plantinga need to explain what prevents those mutations that would cause the false belief, answer-by-accident brain from emerging? According to Plantinga's hypothesis that would be the most probable outcome.

M. Tully said...

Speaking of probabilities, I think there is another problem Plantinga has to deal with. His argument implies that all paths to the evolution of the human brain are equally probable. Unless he somehow has evidence showing equal probability (e.g. observed evolved intelligence in thousands of species), I believe his premise lacks the necessary warrant. I would have stronger ground to state that the fact that the human brain evolved to generally generate correct beliefs based on sensory input is because it is the most probable outcome (p=1.0 in all observed cases).

M. Tully said...

OK, one last thing and then I'm really done thinking about this one.

But since I'm on the topic of probabilities, doesn't Plantinga really have to answer the question, "What's the probability that a supernatural entity altered the natural process of evolution?" And then show that probability to be higher than that of a totally natural process?

wombat said...

M. Tully -


Yep, it's still a gap - but it's now hidden under a veil of mystery rather than being directly amenable either to physical science ("what happens when we do this to the brain?") or to philosophical arguments about Cartesian dualism etc.

As to the probability of the human brain evolving, Plantinga has also sidestepped that one. What he's talking about is true belief - the human brain could have arisen quite naturally. As far as I can see he is attempting to show that

(a) the holding of true rather than false beliefs carries no evolutionary benefits or disadvantages - hence all the examples with tigers - and

(b) that the number of possible survival oriented, relevant false beliefs far outnumbers the set of useful true ones.

The latter seems intuitively to be the case - "there are infinitely many more ways of being wrong than being right".

Ian said...

What struck me about Plantinga's argument - especially in his reply to Ramsey in Naturalism Defeated? - is the fact that he doesn't seem to get evolution. Adaptive behaviour doesn't mean "behaviour that won't get you killed", it means "behaviour that will allow you to out-compete your neighbours". Since evolution can work on very small differences in fitness, adaptive behaviours not only have to be possible, they have to be better than the alternative.

Broadly speaking, reliable data processing is superior to unreliable data processing. Natural selection should thus work to preserve reliable data processing where it matters. Granted, as Plantinga discusses in Naturalism Defeated?, beliefs about the subtlety of Proust's writing aren't adaptive, so the mechanisms by which they form should not be subject to selection. But the portions of the neural architecture that are important - those that help us separate our environment from hallucination, for example - will be subject to selection. And selection should work towards maintaining superior systems of data processing.

There's a useful analogy in DNA. Genes that matter most to survival are highly conserved. Stretches of DNA that are not important accumulate mutations quickly. The mere existence of so-called "junk DNA" isn't evidence against the reliability of coding genes.