Tuesday, November 30, 2010

Latest version EAAN paper - for comments

Plantinga’s Latest EAAN Refuted

In “Content and Natural Selection” (PPR forthcoming) Plantinga presents a version of his Evolutionary Argument Against Naturalism (EAAN) that he then bolsters to deal with a certain sort of objection.

The EAAN itself runs as follows. Let N be the view that there’s no such person as God or anything at all like God (or if there is, then this being plays no causal role in the world’s transactions), and E be the view that our cognitive faculties have come to be by way of the processes postulated by contemporary evolutionary theory. Then, argues Plantinga, the combination N&E is incoherent or self-defeating. This, he maintains, is because if N&E is true, then the probability of R – the claim that we have cognitive faculties that are reliable (that is to say, that produce a preponderance of true over false beliefs in nearby possible worlds) – is low. But anyone who sees that P(R/N&E) is low then has an undefeatable defeater for R. And if they have such a defeater for R, then they have such a defeater for any belief produced by their cognitive faculties, including their belief that N&E.

But why suppose P(R/N&E) is low? Plantinga supports this premise by means of a further argument. He begins by asserting that

materialism or physicalism is de rigueur for naturalism… A belief, presuming there are such things, will be a physical structure of some sort, presumably a neurological structure. (p. 2)

According to a proponent of naturalism, then, this structure will have both neurophysiological (NP) properties and semantic properties. However, it is, claims Plantinga, exceedingly difficult to see how its semantic properties could have any causal effect on behaviour. This is because the belief would have the same impact on behaviour were it to possess the same NP properties but different semantic properties. So, claims Plantinga, N&E leads us to semantic epiphenomenalism (SE). But if semantic properties such as having such and such content or being true or false cannot causally impinge on behaviour, they cannot be selected for by unguided evolution. Truth and falsehood will be, as Plantinga puts it, invisible to natural selection. In which case (on the modest assumptions that, say, 75% of the beliefs produced must be true in order for a cognitive mechanism to count as reliable, and that we have at least 100 such beliefs), P(R/N&E&SE) will be low.
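The low-probability claim here is at bottom a tail-of-the-binomial calculation. As an illustration only – the independence of the beliefs and the 0.5 truth-probability per belief are simplifying assumptions I am adding, in the spirit of Plantinga’s claim that under SE truth is invisible to selection – the following sketch computes the chance that at least 75 of 100 such beliefs come out true:

```python
from math import comb

# Under SE, take each belief to be as likely false as true, and
# (a simplifying assumption) independent of the others.
n, p = 100, 0.5

# Probability that at least 75 of the 100 beliefs are true -- the
# reliability threshold on the modest assumptions in the text.
p_reliable = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(75, n + 1))

print(p_reliable)  # a number on the order of 1e-7
```

On these assumptions P(R/N&E&SE) comes out vanishingly small, which is all Plantinga’s premise requires.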

In “Content and Natural Selection”, Plantinga attempts to deal with an objection to the above argument. The objection runs as follows: suppose content properties just are NP properties. That’s to say, suppose that reductive materialism (RM) is true. Then, because NP properties cause behaviour, and semantic properties just are NP properties, semantic properties cause behaviour. But if semantic properties cause behaviour, then they can be selected for by unguided evolution. As we have just seen, Plantinga’s argument for P(R/N&E) being low is that SE is likely given N&E, and SE in combination with N&E makes R unlikely. But if our adherent to N&E also accepts RM, then they no longer have grounds for supposing SE is likely. Indeed, SE is actually rendered unlikely by the addition of RM. In which case, whether or not P(R/N&E) is low, there’s no reason to think that P(R/N&E&RM) is low. In fact, as unguided evolution can and presumably will now favour true belief, there are grounds for supposing P(R/N&E&RM) is high.

Plantinga’s argument that P(R/N&E&RM) is low

So runs the objection. Which brings us to the heart of Plantinga’s argument in “Content and Natural Selection”. Plantinga argues that adding RM to N&E does not, in fact, make the probability of R high.

According to Plantinga, while RM does indeed allow semantic properties to have causal effects on behaviour (because they just are NP properties), the combination N&E&RM gives us no reason to suppose that the contents of the belief/neural structures resulting in adaptive behaviour are likely to be true. Suppose the belief/neural structure resulting in a piece of adaptive behaviour has the content q. While the property of having q as content does now enter into the causal chain leading to that behaviour, it doesn’t matter whether that content is true:

What matters is only that the NP property in question cause adaptive behaviour; whether the content it constitutes is also true is simply irrelevant. It can do its job of causing adaptive behaviour just as well if it is false as if it is true. It might be true, and it might be false; it doesn’t matter.
(p. 10)

But if the NP property can do its job of causing adaptive behaviour just as well whether its content be true or false, true belief cannot be favoured by natural selection. In which case P(R/N&E&RM) is low.

Plantinga goes on to consider various other philosophical theories of mind that might be supposed, in conjunction with N&E, to make R likely, namely non-reductive materialism, Dretske’s indicator semantics, functionalism and Ruth Millikan’s biosemantics. Plantinga argues that on none of these theories does N&E render the probability of R high. He intimates that there is a pattern to their respective failings such that we can see that nothing like these theories is capable of making R probable given N&E.

Refutation of Plantinga’s argument

There is, it seems to me, a fairly obvious flaw in Plantinga’s argument.

Let’s concede that what unguided evolution favours, in the first instance, is adaptive behaviour. As to what causes that behaviour, evolution doesn’t care – true beliefs, false beliefs, something else; it’s all the same to evolution. It is only the result - adaptive behaviour - that is preferred. That is why Plantinga supposes unguided evolution is unlikely to select for cognitive mechanisms favouring true beliefs.

But even if unguided evolution doesn’t care what causes adaptive behaviour, just so long as it is caused, it may not follow, given further facts about belief, that if beliefs cause behaviour, then natural selection won’t favour true belief.

I suggest there is this further fact about belief: that there exist certain conceptual constraints on what content a given belief can, or is likely to, have given its causal relationships both to behaviour and other mental states, such as desires.

Consider a human residing in an arid environment. Suppose the only accessible water lies five miles to the south of him. Our human is desperately thirsty. My suggestion is that we can know a priori, just by reflecting on the matter, that if something is a belief that, solely in combination with a strong desire for water, typically results in such a human walking five miles to the south, then it is quite likely to be the belief that there’s water five miles to the south (or the belief that there’s reachable water thataway [pointing south] or whatever). It’s highly unlikely to be the belief that there isn’t any water five miles to the south (or isn’t any reachable water thataway), or the belief that there’s water five miles to the north (or thisaway [pointing north]), or the belief that there’s a mountain of dung five miles to the south, or that inflation is high, or that Paris is the capital of Bolivia.

I don’t say this because I am wedded to some particular reductionist, materialist-friendly theory of content of the sort that Plantinga goes on to attack in “Content and Natural Selection”, such as Dretskian indicator semantics or functionalism: I’m not. Those theories might build on the thought that such conceptual constraints exist. But the suggestion that there are such constraints does not depend on any such theory being correct. True, perhaps content cannot be exhaustively captured in terms of causal role, as functionalists claim it can. That’s not to say that there are no conceptual constraints at all on what the content of a given belief is likely to be, given the causal links that belief has to behaviour and other mental states such as desires. Surely there are. That, at least, is my suggestion.

To suggest that such conceptual constraints on likely content exist is not, of course, to presuppose that beliefs are neural structures, or even that materialism is true. Let’s suppose, for the sake of argument, that substance dualism is true and that beliefs are, say, soul-stuff structures. Then my suggestion is we can know a priori that if beliefs are soul-stuff structures, and if a given soul-stuff structure in combination with a strong desire for water typically results in subjects walking five miles south, then that soul-stuff structure is quite likely to have the content that there’s water five miles south, and is highly unlikely to have the content that there’s water five miles north.

So now suppose beliefs are neural structures. What Plantinga overlooks, it seems to me, is that one cannot, as it were, just plug any old belief into any old neural structure. I am suggesting that if beliefs are neural structures, then it is at least partly by virtue of its having certain sorts of behavioural consequence that a given neural structure has the content it does.

Thus a neural structure that, in combination with a powerful desire to drink water, typically causes one to go to the tap and drink from it is hardly likely to be the belief that inflation is running high, that Paris is the capital of Bolivia, or that water does not come out of taps. Among the various candidates for being the semantic content of the neural structure in question, being the belief that water comes out of taps must rank fairly high on the list. Even if we acknowledge that some other beliefs might also typically have that behavioural consequence when combined with just that desire, the belief that water comes out of taps must at least be among the leading candidates.

But then, given the existence of such conceptual constraints, once Plantinga allows both that beliefs might be neural structures and that such beliefs/neural structures causally affect behaviour, he allows for natural selection to favour true belief.

To see why, let’s return to our thirsty human. He’ll survive only if he walks five miles south. Suppose he does so as a result of his strong desire for water. That adaptive behaviour is caused by his desire for water in combination with a belief/neural structure, and, given the kind of conceptual constraints outlined above, if that belief in combination with that desire typically results in subjects walking five miles south, then it is likely to be the belief that there’s water five miles to the south – a true belief. Were our human to head off north, on the other hand, as a result of his having a belief/neural structure that, in combination with such a desire, typically results in subjects walking five miles north, then it’s likely his belief is that there’s water five miles north. That’s a false belief. As a result of its being false, he’ll die.

True, there are other candidates for being the content of the belief that causes our human to head off in the right direction. Indeed, some may be more likely candidates. Suppose our human has no conception of miles or south. Then perhaps it is not the belief that there’s water five miles south that causes his behaviour; perhaps the belief that causes his adaptive behaviour is the belief that there’s reachable water thataway. However, notice that, either way, the content of the belief in question is still true.

But then, for the naturalist who supposes there exist such conceptual constraints on likely content, it’s reasonable to suppose that, given we are likely to have evolved powerful desires for things that help us survive and reproduce, such as water, food and a mate, we are likely to have evolved reliable cognitive mechanisms.

Non-Reductive Materialism

Notice that the point made in the preceding section stands whether our naturalist plumps for RM or for NRM. Perhaps some will doubt this.

For example, Plantinga might raise the following worry. If NRM is true, then semantic properties aren’t NP properties. They are properties of neural structures that exist over and above their NP properties. True, the semantic properties of a neural structure are determined by its NP properties (by virtue of the supervenience relation holding between them), but the fact is that if (perhaps per impossibile) the semantic properties of the neural structure were removed but the NP properties remained unaltered, the same behaviour would still result. It is not, as it were, by virtue of its having the semantic properties that it does that a neural state has the behavioural consequences that it does. In which case it seems that if NRM is true, then semantic epiphenomenalism is, after all, true: the semantic properties of a neural structure are causally irrelevant so far as its behavioural effects are concerned. But if the semantic properties of a neural structure are causally irrelevant, then those properties must be invisible to natural selection. Natural selection is only interested in getting the right kind of behaviour caused. If semantic properties are causally inert, why should natural selection prefer one sort of semantic property to another?

The answer is that, if a neural structure’s semantic properties nevertheless supervene on its NP properties, and are thus determined by those NP properties, then, given the kind of conceptual constraints I have outlined above, it remains the case that the kind of neural structure that typically causes subjects to walk five miles south given a strong desire for water will quite probably have the semantic property of having the content there’s water five miles south. Notice it really doesn’t matter whether or not that sort of neural structure typically causes that behaviour by virtue of its having that semantic property. It remains the case that, if that sort of neural structure for whatever reason has that typical behavioural consequence, then, given the suggested conceptual constraints, it quite probably has that semantic content, and very probably doesn’t have the content there’s water five miles north.

In short, perhaps it’s only the NP properties of neural structures that determine what causal effects such structures have on behaviour. My suggestion is that the causal role played by neural structures nevertheless still places constraints on their likely content.

But then, interestingly, given such conceptual constraints, natural selection is likely to favour true belief even if semantic epiphenomenalism is true. It’s actually irrelevant to my argument whether or not it is by virtue of having certain semantic properties that neural structures cause behaviour.

Conclusion

Of course, I am merely making a suggestion. Perhaps what I suggest is not true. Perhaps there exist no such a priori, conceptual constraints. Still, the view that there are such constraints is commonplace. It seems just intuitively obvious to many of us, myself included, that such constraints exist. It seems intuitively obvious to me that belief content is not entirely conceptually independent of causal role. Such intuitions would appear to be, philosophically speaking, largely pre-theoretical. To acknowledge that there are such constraints does not require that we sign up to naturalism, materialism or dualism, for example. Nor, as I have already pointed out, does it require that we sign up to any of the reductive materialist-friendly theories of content (functional-role semantics, indicator semantics, etc.) that Plantinga goes on to attack.

My conclusion, then, is this. Suppose there exist conceptual constraints on likely content of the sort I have proposed (call this supposition CC). Then, given that natural selection is likely to have equipped us with desires for things that enhance our ability to survive and reproduce, unguided natural selection will indeed favour reliable cognitive mechanisms.

In short, whatever the probability of R given N&E, it seems that the probability of R given N&E&CC is high. Indeed, it seems that that probability remains high even if semantic epiphenomenalism is true.

Many naturalists subscribe to CC. In order to show that their naturalism is self-defeating, then, Plantinga now needs to show that, given naturalism, CC is unlikely to be true. Without the addition of that further argument, Plantinga’s case for the self-defeating character of naturalism collapses.

Anticipating two replies

I now anticipate two responses Plantinga might make to the above argument.

1. Why assume we will evolve desires for things that enhance our ability to survive and reproduce?

First, Plantinga may question my assumption that unguided evolution is likely to equip us with desires for things that enhance our ability to survive and reproduce, such as water, food and a mate. Plantinga may ask, “Why should evolution favour such desires?” Why shouldn’t it select other desires that, in conjunction with false beliefs, nevertheless result in adaptive behaviour?

In support of this suggestion, Plantinga may resurrect an argument that featured in an earlier incarnation of the EAAN, an argument I call the belief-cum-desire argument. According to Plantinga, for any given adaptive action (action that enhances the creature’s ability to survive and reproduce),

there will be many belief-desire combinations that could produce that action; and very many of those belief-desire combinations will be such that the belief involved is false. (p. 4)

Plantinga illustrates like so:

So suppose Paul is a prehistoric hominid; a hungry tiger approaches. Fleeing is perhaps the most appropriate behavior: I pointed out that this behavior could be produced by a large number of different belief-desire pairs. To quote myself: ‘Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief… Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it … or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps…. Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior. p.5

So adaptive behaviour can be produced by numerous belief-desire combinations. As Plantinga points out, on many of these combinations, the belief in question is not true. But then similarly, on many of these combinations, the desire is not for something that enhances the organism’s ability to survive and reproduce (wanting to be eaten by a tiger is not such a desire, obviously). So why assume that unguided evolution will favour desires for things that enhance our ability to survive and reproduce, given that desires for things that provide no such advantage, when paired with the right sort of belief (irrespective of whether those beliefs are true or false), will also produce adaptive behaviour?

The belief-cum-desire argument is flawed. We should concede that any belief can be made to result in adaptive behaviour if paired off with the right desire, and that any desire can be made to result in adaptive behaviour if paired off with the right belief. It does not follow that unguided evolution won’t favour the development of a combination of reliable cognitive mechanisms with desires for things that enhance survival and reproductive prospects.

The reason for this (spelt out in more detail in my [reference withheld for purposes of anonymity]) is as follows.

When we begin to think through the behavioural consequences of a species possessing unreliable cognitive mechanisms, it becomes clear that in at least very many cases (i) unguided evolution cannot predict with much accuracy what false beliefs such unreliable mechanisms are likely to throw up, and (ii) worse still, there just is no set of desires with which the species might be hard-wired that will, in combination with the mechanism in question, render the behavioural consequences broadly adaptive.

To illustrate, consider a hominid species H much like us but with unreliable cognitive faculties. Let’s suppose, to begin with, that these creatures reason very badly. Rather than use reliable rules of inference, they employ rules like this:

If P then Q
Q
Therefore P

Call this the Fallacy of Affirming the Consequent (FAC) rule. What desire might evolution hardwire into this species that will render the behavioural consequences of the various beliefs generated by the FAC rule adaptive?

Notice first of all that an advantage of having belief-producing mechanisms, as opposed to hard-wired (i.e. innate) beliefs, is that such mechanisms can produce different beliefs depending on environment. Evolution will favour such mechanisms if, as the organism’s environment changes, so too do its resulting beliefs, and in such a way that adaptive behaviour still results.

But now notice that unguided evolution cannot anticipate what novel environments our hominids will encounter, and what false beliefs they will, as a result of employing the unreliable FAC rule in those environments, acquire. In which case, unguided evolution cannot pre-equip species H with some innate desire or set of desires that will make the false beliefs the FAC rule might easily throw up result in adaptive behaviour.

Even if our hypothetical species’ environment does not vary much, members may still employ the FAC rule in all sorts of ways. Suppose hominid H1 reasons like so:

If jumping out of planes is not safe, jumping out of balloons is not safe
Jumping out of balloons is not safe
Therefore jumping out of planes is not safe


Hominid H2 reasons thus:

If jumping out of planes is safe, jumping out of planes with a parachute is safe
Jumping out of planes with a parachute is safe
Therefore jumping out of planes is safe


Both hominids employ the FAC rule and both start with true premises. Will H2’s conclusion result in adaptive behaviour? Perhaps not, if H2 is hard-wired with, say, a powerful desire to commit suicide. As a consequence, perhaps H2 would now be unlikely to bother jumping out of a plane. However, that same hard-wired, species-wide desire now makes it much more likely that hominid H1 will plunge to his death. There’s no desire or set of desires with which this species might be hard-wired that will simultaneously render adaptive all the various conclusions that might easily be generated by their use of the unreliable FAC rule.
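That the FAC rule can deliver a false conclusion from true premises – unlike a valid rule such as modus ponens – can be checked mechanically. As an illustration only (the names are mine), the following sketch enumerates every truth assignment to P and Q and finds the assignment on which both premises of “if P then Q; Q; therefore P” hold while the conclusion fails:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b"
    return (not a) or b

# Look for assignments where the FAC rule's premises ("if P then Q"
# and "Q") are true but its conclusion ("P") is false -- i.e.
# counterexamples to the rule's validity.
counterexamples = [
    (p_val, q_val)
    for p_val, q_val in product([True, False], repeat=2)
    if implies(p_val, q_val) and q_val and not p_val
]

print(counterexamples)  # [(False, True)]: P false, Q true defeats the rule
```

Running the same search with the conclusion Q and premises “if P then Q” and P (modus ponens) would, by contrast, turn up no counterexamples.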

Similar problems arise when we turn to the suggestion that we have unreliable memories. An unreliable memory has as output beliefs differing significantly from those it has as input. Suppose species H is equipped with an unreliable memory, and suppose we want this unreliable faculty to produce adaptive behavioural consequences. With what desire or desires must the species then be programmed? Again, given novel environments, how can unguided evolution predict what the input beliefs and output beliefs of the faculty will be? Moreover, what set of desires will result in both the input and output beliefs of this unreliable memory producing generally adaptive behaviour? If I learn that red berries are poisonous and grain is nutritious, but my unreliable memory later tells me red berries are nutritious and grain is poisonous, a desire to poison myself and avoid nutrition might render the output beliefs adaptive. However, those same desires, in combination with the input beliefs, will probably kill me.

The moral I draw is this. It is true that any false belief can, on any occasion, be made to result in adaptive behaviour if it is paired off with the right desire, and also that any desire (even a desire for something that hinders one’s chances of surviving and reproducing) can result in adaptive behaviour if it is paired off with the right belief. However, it is not so easy to see what set of desires would make unreliable cognitive mechanisms of the sort we have been examining here result in adaptive behaviour. Indeed, it seems to me highly unlikely that a species will evolve unreliable mechanisms such as those described above, given there is no way to neutralize their otherwise likely maladaptive consequences by hard-wiring the species with certain desires. Unguided evolution is far more likely to produce reliable cognitive mechanisms in combination with desires for things that enhance our ability to survive and reproduce.

Perhaps Plantinga will suggest that I have cherry-picked my examples, and that there are still a great many candidate unreliable cognitive mechanisms that unguided evolution might easily select in combination with some appropriate set of hard-wired desires. I don’t believe that is the case. However, even if I am mistaken, the onus is surely now on Plantinga to demonstrate that unguided evolution is as likely to select unreliable cognitive mechanisms as reliable ones, given that, in the case of the FAC rule and unreliable memory, reliable mechanisms will surely be preferred.

2. Insist that beliefs cannot be, or are unlikely to be, neural structures

As we have seen, Plantinga’s EAAN, as presented in “Content and Natural Selection”, concedes the possibility that beliefs might just be neural structures, but then goes on to argue that, even if they are, the semantic properties of those neural structures cannot be selected for by unguided evolution. I have explained why I believe the latter argument fails. Given the existence of certain conceptual constraints on what belief any given neural structure might be, unguided evolution probably will select for true belief.

However, Plantinga could just drop the concession that beliefs might be neural structures. He has already indicated that it is a concession about which he has significant doubts. See for example footnote 4, where he says “It is far from obvious that a material or physical structure can have a content.”

However, this would be a significant retreat, and would change the character of the EAAN. The claim that beliefs cannot just be neural structures would now require some support. It would not be enough for Plantinga to say, “I can’t see how beliefs could just be neural structures”.

8 comments:

Anonymous said...

Philosophy is dead. It has been reduced to making the same arguments over and over again, but this time borrowing the language of other fields, whether evolutionary biology or mathematics.

Next: Philosophy through the language of Astrology! YEAH!!!!!

John Pieret said...

When I was fooling around with Plantinga's argument last year, a commenter, Søren K, made the following argument:

My thinking is that at its most basic our cognitive abilities are such as our ability to detect motion, sight, sound etc.

So one such ability is to predict where a moving object will be in the close future.

For Plantinga to be right, this belief should not be adaptive. But to me at least, it seems most reasonable to assume that correctly identifying where your legs will end up when you move them would have an adaptive advantage over the belief that the leg will leave your body, go to the moon, and then turn into a turnip.

The more basic the cognitive ability the more adaptive pressure for it to be right.


This may not be unlike the "scaffolding" reply to the ID "irreducible complexity" argument. If you look only at the end result -- a bacterial flagellum -- and its "purpose" of moving the bacterium from one point to another, it is not obvious how it could evolve ("reliably") from any precursor. But if our basic cognitive abilities (more correctly, of our ancestors) evolved to reliably link basic physical relationships with perception of those relationships, then we have evolved a scaffold on which more complex relationships can be perceived. Crudely: if I can recognize a tiger and recognize that it is "there," rather than "here," I have the beginnings of recognizing reliably that the tiger being "there," rather than "here," is an advantage to me. If I don't recognize the latter, I won't be "here" for long.

Curt said...

It seems an argument could be made that neural structures which produce true beliefs are more "adaptable" to the variations found in the real world compared to structures that produce beneficial but false beliefs. And as a result of this adaptability these structures will be selected over time as the species encounters variations in their environment. For example: sure, there may exist a false belief that compels a thirsty person to walk 5 miles south. But the water is not always 5 miles south. If the water is in a different direction then this belief fails, but the person with a different belief structure, one which more accurately reflects the truth of the situation (provides a better model of the external reality), will be able to adapt and compel different behavior (like a systematic search for water).

The challenges of survival in the constantly changing world would seem to weed out false beliefs. By definition, a false belief is wrong at some point - it will necessarily entail some inaccuracy about the real world. If a circumstance is encountered which is not modeled correctly by the false belief then the false belief would not be likely to produce beneficial behavior.

wombat said...

Do not the faculties of memory and introspection cause serious problems for the type of SE where there is no correlation between NP structure and belief? The act of introspecting to discover what one's beliefs are and then remembering what they are must surely translate semantic properties back into NP.

Mike D said...

I have one nitpick to offer, and it's really not a nitpick against you specifically so much as you are parroting the language of ID proponents for the sake of expediency.

Darwinian evolution is not, as the IDers claim, "unguided". It is guided by survival and reproduction.

Anonymous said...

Bingo to Curt. False beliefs resulting in 'correct' action may work some of the time, but will also fail some of the time. Evolution's sieve of time will eventually remove these beliefs - a single failure (being poisoned by berries) removes the belief, while only repeated successes will cause that belief to persist.

And one can then argue that repeated successes will be most likely in proportion to beliefs reflecting physical reality.

mat roberts said...

I saw on "Autumn Watch" a couple of weeks ago, a discussion about a heron and a fish pond.

The viewer had a fish pond in his garden. It was being visited by a heron that was eating his goldfish. The guy attempted a number of remedies, but in the end the one that worked was to partially submerge a plastic head of a crocodile in the pond. The heron stayed away and his fish were safe.

The interesting thing, to me at least, is that the heron has never seen a crocodile in its life. Crocodiles are not indigenous to the UK and herons do not generally migrate - or at least not very far. And yet the heron knows that crocodiles are dangerous.

The heron has a false belief. It sees the shape of a crocodile in the pond and makes the assumption that it is dangerous - and so misses out on a tasty meal of goldfish. To me it seems that a semantic property is having an effect on behaviour.

=====

RE - your paper.

I find abbreviating the arguments to a single letter R, N&E etc. makes the whole thing fairly hard to follow.

I think it would be better to set out Plantinga's argument - without commentary - before attacking it. I'm finding it hard to follow exactly what he's saying.

Maybe it's just me, but I wouldn't anticipate Plantinga's response. I personally find it intensely annoying when folks put words in my mouth.

======

Is Plantinga's argument that P(R/N&E) is low basically: "Look, there are an infinite number of beliefs which you can adopt a priori, which will lead to the correct behaviour. Therefore given that R/N&E is just one of them, its probability must be low too"?

Which sounds like "the problem of improper priors"
http://telescoper.wordpress.com/2010/12/11/deductivism-and-irrationalism/

Anonymous said...

Absolutely loving it. Guess all those religious people who evolved that belief are sensible then! Thanks evolution for religion. Rock on. Happy Christmas, Hannukkah and all that.