Tuesday, November 30, 2010

Latest version of EAAN paper - for comments

Plantinga’s Latest EAAN Refuted

In “Content and Natural Selection” (PPR forthcoming) Plantinga presents a version of his Evolutionary Argument Against Naturalism (EAAN) that he then bolsters to deal with a certain sort of objection.

The EAAN itself runs as follows. Let N be the view that there’s no such person as God or anything at all like God (or if there is, then this being plays no causal role in the world’s transactions), and E be the view that our cognitive faculties have come to be by way of the processes postulated by contemporary evolutionary theory. Then, argues Plantinga, the combination N&E is incoherent or self-defeating. This, he maintains, is because if N&E is true, then the probability that R – that we have cognitive faculties that are reliable (that is to say, produce a preponderance of true over false beliefs in nearby possible worlds) – is low. But anyone who sees that P(R/N&E) is low then has an undefeatable defeater for R. And if they have such a defeater for R, then they have such a defeater for any belief produced by their cognitive faculties, including their belief that N&E.

But why suppose P(R/N&E) is low? Plantinga supports this premise by means of a further argument. He begins by asserting that

materialism or physicalism is de rigeur for naturalism… A belief, presuming there are such things, will be a physical structure of some sort, presumably a neurological structure. (p2)

According to a proponent of naturalism, then, this structure will have both neurophysiological (NP) properties and semantic properties. However, it is, claims Plantinga, exceedingly difficult to see how its semantic properties could have any causal effect on behaviour. This is because the belief would have the same impact on behaviour were it to possess the same NP properties but different semantic properties. So, claims Plantinga, N&E leads us to semantic epiphenomenalism (SE). But if semantic properties such as having such and such content or being true or false cannot causally impinge on behaviour, they cannot be selected for by unguided evolution. Truth and falsehood will be, as Plantinga puts it, invisible to natural selection. In which case (on the modest assumptions that, say, 75% of the beliefs produced must be true in order for a cognitive mechanism to count as reliable, and that we have at least 100 such beliefs), P(R/N&E&SE) will be low.
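Plantinga’s claim that P(R/N&E&SE) is low can be given a quick numerical gloss. The 75% threshold and the 100 beliefs are his; the further assumptions that each belief, being invisible to selection, is as likely to be false as true (p = 0.5) and that beliefs are independent are illustrative simplifications of mine, not figures from his paper. On those assumptions, the probability of reliability is a binomial tail:

```python
from math import comb

# Illustrative assumptions (mine, not Plantinga's): each belief is true
# with probability 0.5, independently of the others.
p = 0.5
n = 100          # Plantinga's minimum number of beliefs
threshold = 75   # Plantinga's reliability threshold: at least 75% true

# P(at least `threshold` of `n` independent beliefs are true):
# the upper tail of a binomial distribution.
prob_reliable = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(threshold, n + 1))
print(f"P(R) on these assumptions: {prob_reliable:.2e}")
```

On these toy assumptions the tail probability comes out well below one in a million, which is the sense in which P(R/N&E&SE) is said to be low.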

In “Content and Natural Selection”, Plantinga attempts to deal with an objection to the above argument. The objection runs as follows: suppose content properties just are NP properties. That’s to say, suppose that reductive materialism (RM) is true. Then, because NP properties cause behaviour, and semantic properties just are NP properties, so semantic properties cause behaviour. But if semantic properties cause behaviour, then they can be selected for by unguided evolution. As we have just seen, Plantinga’s argument for P(R/N&E) being low is that SE is likely given N&E, and SE in combination with N&E makes R unlikely. But if our adherent to N&E also accepts RM, then they no longer have grounds for supposing SE is likely. Indeed, SE is actually rendered unlikely by the addition of RM. In which case, whether or not P(R/N&E) is low, there’s no reason to think that P(R/N&E&RM) is low. In fact, as unguided evolution can and presumably will now favour true belief, there are grounds for supposing P(R/N&E&RM) is high.

Plantinga’s argument that P(R/N&E&RM) is low

So runs the objection. Which brings us to the heart of Plantinga’s argument in “Content and Natural Selection”. Plantinga argues that adding RM to N&E does not, in fact, make the probability of R high.

According to Plantinga, while RM does indeed allow semantic properties to have causal effects on behaviour (because they just are NP properties), the combination N&E&RM gives us no reason to suppose that the content of belief/neural structures resulting in adaptive behaviour are likely to be true. Suppose the belief/neural structure resulting in a piece of adaptive behaviour has the content q. While the property of having q as content does now enter into the causal chain leading to that behaviour, it doesn’t matter whether it is true:

What matters is only that the NP property in question cause adaptive behaviour; whether the content it constitutes is also true is simply irrelevant. It can do its job of causing adaptive behaviour just as well if it is false as if it is true. It might be true, and it might be false; it doesn’t matter. (p10)

But if the NP property can do its job of causing adaptive behaviour just as well whether its content be true or false, true belief cannot be favoured by natural selection. In which case P(R/N&E&RM) is low.

Plantinga goes on to consider various other philosophical theories of mind that might be supposed, in conjunction with N&E, to make R likely, namely non-reductive materialism, Dretske’s indicator semantics, functionalism and Ruth Millikan’s biosemantics. Plantinga argues that on none of these theories does N&E render the probability of R high. He intimates that there is a pattern to their respective failings such that we can see that nothing like these theories is capable of making R probable given N&E.

Refutation of Plantinga’s argument

There is, it seems to me, a fairly obvious flaw in Plantinga’s argument.

Let’s concede that what unguided evolution favours, in the first instance, is adaptive behaviour. As to what causes that behaviour, evolution doesn’t care – true beliefs, false beliefs, something else; it’s all the same to evolution. It is only the result - adaptive behaviour - that is preferred. That is why Plantinga supposes unguided evolution is unlikely to select for cognitive mechanisms favouring true beliefs.

But even if unguided evolution doesn’t care what causes adaptive behaviour, just so long as it is caused, it may not follow, given further facts about belief, that if beliefs cause behaviour, then natural selection won’t favour true belief.

I suggest there is this further fact about belief: that there exist certain conceptual constraints on what content a given belief can, or is likely to, have given its causal relationships both to behaviour and other mental states, such as desires.

Consider a human residing in an arid environment. Suppose the only accessible water lies five miles to the south of him. Our human is desperately thirsty. My suggestion is that we can know a priori, just by reflecting on the matter, that if something is a belief that, solely in combination with a strong desire for water, typically results in such a human walking five miles to the south, then it is quite likely to be the belief that there’s water five miles to the south (or the belief that there’s reachable water thataway [pointing south] or whatever). It’s highly unlikely to be the belief that there isn’t any water five miles to the south (or isn’t any reachable water thataway), or the belief that there’s water five miles to the north (or thisaway [pointing north]), or the belief that there’s a mountain of dung five miles to the south, or that inflation is high, or that Paris is the capital of Bolivia.

I don’t say this because I am wedded to some particular reductionist, materialist-friendly theory of content of the sort that Plantinga goes on to attack in “Content and Natural Selection”, such as Dretskian indicator semantics or functionalism or whatever: I’m not. Those theories might build on the thought that such conceptual constraints exist. But the suggestion that there are such constraints does not depend on any such theory being correct. True, perhaps content cannot be exhaustively captured in terms of causal role, as functionalists claim it can. That’s not to say that there are no conceptual constraints at all on what the content of a given belief is likely to be, given the causal links that belief has to behaviour and other mental states such as desires. Surely there are. That, at least, is my suggestion.

To suggest that such conceptual constraints on likely content exist is not, of course, to presuppose that beliefs are neural structures, or even that materialism is true. Let’s suppose, for the sake of argument, that substance dualism is true and that beliefs are, say, soul-stuff structures. Then my suggestion is we can know a priori that if beliefs are soul-stuff structures, and if a given soul-stuff structure in combination with a strong desire for water typically results in subjects walking five miles south, then that soul-stuff structure is quite likely to have the content that there’s water five miles south, and is highly unlikely to have the content that there’s water five miles north.

So now suppose beliefs are neural structures. What Plantinga overlooks, it seems to me, is that one cannot, as it were, just plug any old belief into any old neural structure. I am suggesting that if beliefs are neural structures, then it is at least partly by virtue of its having certain sorts of behavioural consequence that a given neural structure has the content it does.

Thus a neural structure that, in combination with a powerful desire to drink water, typically causes one to go to the tap and drink from it is hardly likely to be the belief that inflation is running high, that Paris is the capital of Bolivia, or that water does not come out of taps. Among the various candidates for being the semantic content of the neural structure in question, being the belief that water comes out of taps must rank fairly high on the list. Even if we acknowledge that some other beliefs might also typically have that behavioural consequence when combined with just that desire, the belief that water comes out of taps must at least be among the leading candidates.

But then, given the existence of such conceptual constraints, once Plantinga allows both that beliefs might be neural structures and that such beliefs/neural structures causally affect behaviour, he allows for natural selection to favour true belief.

To see why, let’s return to our thirsty human. He’ll survive only if he walks five miles south. Suppose he does so as a result of his strong desire for water. That adaptive behaviour is caused by his desire for water in combination with a belief/neural structure, and, given the kind of conceptual constraints outlined above, if that belief in combination with that desire typically results in subjects walking five miles south, then it is likely to be the belief that there’s water five miles to the south – a true belief. Were our human to head off north, on the other hand, as a result of his having a belief/neural structure that, in combination with such a desire, typically results in subjects walking five miles north, then it’s likely his belief is that there’s water five miles north. That’s a false belief. And, as a result of its being false, he’ll die.

True, there are other candidates for being the content of the belief that causes our human to head off in the right direction. Indeed, some may be more likely candidates. Suppose our human has no conception of miles or south. Then perhaps the belief that causes his adaptive behaviour is not the belief that there’s water five miles south but the belief that there’s reachable water thataway. However, notice that, either way, the content of the belief in question is still true.

But then for the naturalist who supposes there exist such conceptual constraints on likely content, it’s reasonable to suppose that, given we are likely to have evolved powerful desires for things that help us survive and reproduce, such as water, food and a mate, we are likely to have evolved reliable cognitive mechanisms.

Non-Reductive Materialism

Notice that the point made in the preceding section stands whether our naturalist plumps for RM or for non-reductive materialism (NRM). Perhaps some will doubt this.

For example, Plantinga might raise the following worry. If NRM is true, then semantic properties aren’t NP properties. They are properties of neural structures that exist over and above their NP properties. True, the semantic properties of a neural structure are determined by its NP properties (by virtue of the supervenience relation holding between them), but the fact is that if (perhaps per impossibile) the semantic properties of the neural structure were removed but the NP properties remained unaltered, the same behaviour would still result. It is not, as it were, by virtue of its having the semantic properties that it does that a neural state has the behavioural consequences that it does. In which case it seems that if NRM is true, then semantic epiphenomenalism is, after all, true: the semantic properties of a neural structure are causally irrelevant so far as its behavioural effects are concerned. But if the semantic properties of a neural structure are causally irrelevant, then those properties must be invisible to natural selection. Natural selection is only interested in getting the right kind of behaviour caused. If semantic properties are causally inert, why should natural selection prefer one sort of semantic property to another?

The answer is that, if a neural structure’s semantic properties nevertheless supervene on its NP properties, and are thus determined by those NP properties, then, given the kind of conceptual constraints I have outlined above, it remains the case that the kind of neural structure that typically causes subjects to walk five miles south given a strong desire for water will quite probably have the semantic property of having the content there’s water five miles south. Notice it really doesn’t matter whether or not that sort of neural structure typically causes that behaviour by virtue of its having that semantic property. It remains the case that, if that sort of neural structure for whatever reason has that typical behavioural consequence, then, given the suggested conceptual constraints, it quite probably has that semantic content, and very probably doesn’t have the content there’s water five miles north.

In short, perhaps it’s only the NP properties of neural structures that determine what causal effects such structures have on behaviour. My suggestion is that the causal role played by neural structures nevertheless still places constraints on their likely content.

But then, interestingly, given such conceptual constraints, natural selection is likely to favour true belief even if semantic epiphenomenalism is true. It’s actually irrelevant to my argument whether or not it is by virtue of having certain semantic properties that neural structures cause behaviour.

Conclusion

Of course, I am merely making a suggestion. Perhaps what I suggest is not true. Perhaps there exist no such a priori conceptual constraints. Still, the view that there are such constraints is commonplace. It seems just intuitively obvious to many of us, myself included, that such constraints exist: that belief content is not entirely conceptually independent of causal role. Such intuitions would appear to be, philosophically speaking, largely pre-theoretical. To acknowledge that there are such constraints does not require that we sign up to naturalism, materialism or dualism, for example. Nor, as I have already pointed out, does it require that we sign up to any of the reductive materialist-friendly theories of content (functional-role semantics, indicator semantics, etc.) that Plantinga goes on to attack.

My conclusion, then, is this. Suppose there exist conceptual constraints on likely content of the sort I have proposed (call this supposition CC). Then, given that natural selection is likely to have equipped us with desires for things that enhance our ability to survive and reproduce, unguided natural selection will indeed favour reliable cognitive mechanisms.

In short, whatever the probability of R given N&E, it seems that the probability of R given N&E&CC is high. Indeed, it seems that that probability remains high even if semantic epiphenomenalism is true.

Many naturalists subscribe to CC. In order to show that their naturalism is self-defeating, then, Plantinga now needs to show that, given naturalism, CC is unlikely to be true. Without the addition of that further argument, Plantinga’s case for the self-defeating character of naturalism collapses.

Anticipating two replies

I anticipate two responses Plantinga might make to the above argument.

1. Why assume we will evolve desires for things that enhance our ability to survive and reproduce?

First, Plantinga may question my assumption that unguided evolution is likely to equip us with desires for things that enhance our ability to survive and reproduce, such as water, food and a mate. Plantinga may ask, “Why should evolution favour such desires?” Why shouldn’t it select other desires that, in conjunction with false beliefs, nevertheless result in adaptive behaviour?

In support of this suggestion, Plantinga may resurrect an argument that featured in an earlier incarnation of the EAAN, an argument I call the belief-cum-desire argument. According to Plantinga, for any given adaptive action (action that enhances the creature’s ability to survive and reproduce),

there will be many belief-desire combinations that could produce that action; and very many of those belief-desire combinations will be such that the belief involved is false. p. 4

Plantinga illustrates like so:

So suppose Paul is a prehistoric hominid; a hungry tiger approaches. Fleeing is perhaps the most appropriate behavior: I pointed out that this behavior could be produced by a large number of different belief-desire pairs. To quote myself: ‘Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief… Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it … or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps…. Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior. p.5

So adaptive behaviour can be produced by numerous belief-desire combinations. As Plantinga points out, on many of these combinations, the belief in question is not true. But then similarly, on many of these combinations, the desire is not for something that enhances the organism’s ability to survive and reproduce (wanting to be eaten by a tiger is not such a desire, obviously). So why assume that unguided evolution will favour desires for things that enhance our ability to survive and reproduce, given that desires for things that provide no such advantage, when paired with the right sort of belief (irrespective of whether those beliefs are true or false), will also produce adaptive behaviour?

The belief-cum-desire argument is flawed. We should concede that any belief can be made to result in adaptive behaviour if paired off with the right desire, and that any desire can be made to result in adaptive behaviour if paired off with the right belief. It does not follow that unguided evolution won’t favour the development of a combination of reliable cognitive mechanisms with desires for things that enhance survival and reproductive prospects.

The reason for this (spelt out in more detail in my [reference withheld for purposes of anonymity]) is as follows.

When we begin to think through the behavioural consequences of a species possessing unreliable cognitive mechanisms, it becomes clear that in at least very many cases (i) unguided evolution cannot predict with much accuracy what false beliefs such unreliable mechanisms are likely to throw up, and (ii) worse still, there just is no set of desires with which the species might be hard-wired that will, in combination with the mechanism in question, render the behavioural consequences broadly adaptive.

To illustrate, consider a hominid species H much like us but with unreliable cognitive faculties. Let’s suppose, to begin with, that these creatures reason very badly. Rather than use reliable rules of inference, they employ rules like this:

If P then Q
Q
Therefore P

Call this the Fallacy of Affirming the Consequent (FAC) rule. What desire might evolution hardwire into this species that will render the behavioural consequences of the various beliefs generated by the FAC rule adaptive?
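That the FAC rule is unreliable, while a rule such as modus ponens is not, can be checked mechanically by enumerating truth-values. A minimal sketch (the helper names are mine):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

def counterexamples(premises, conclusion):
    """Return all (P, Q) assignments where every premise is true but the conclusion is false.
    An empty list means the rule is truth-preserving (valid)."""
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(prem(p, q) for prem in premises) and not conclusion(p, q)]

# Modus ponens: If P then Q; P; therefore Q.
mp = counterexamples([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q)

# FAC rule: If P then Q; Q; therefore P.
fac = counterexamples([lambda p, q: implies(p, q), lambda p, q: q],
                      lambda p, q: p)

print("Modus ponens counterexamples:", mp)   # none: the rule is valid
print("FAC counterexamples:", fac)           # P false, Q true: true premises, false conclusion
```

The single FAC counterexample (P false, Q true) is exactly the pattern exploited in the hominid examples below: true premises, false conclusion.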

Notice first of all that an advantage of having belief-producing mechanisms, as opposed to hard-wired (i.e. innate) beliefs, is that such mechanisms can produce different beliefs depending on environment. Evolution will favour such mechanisms if, as the organism’s environment changes, so too do its resulting beliefs, and in such a way that adaptive behaviour still results.

But now notice that unguided evolution cannot anticipate what novel environments our hominids will encounter, and what false beliefs they will, as a result of employing the unreliable FAC rule in those environments, acquire. In which case, unguided evolution cannot pre-equip species H with some innate desire or set of desires that will make the false beliefs the FAC rule might easily throw up result in adaptive behaviour.

Even if our hypothetical species’ environment does not vary much, members may still employ the FAC rule in all sorts of ways. Suppose hominid H1 reasons like so:

If jumping out of planes is not safe, jumping out of balloons is not safe
Jumping out of balloons is not safe
Therefore jumping out of planes is not safe


Hominid H2 reasons thus:

If jumping out of planes is safe, jumping out of planes with a parachute is safe
Jumping out of planes with a parachute is safe
Therefore jumping out of planes is safe


Both hominids employ the FAC rule and both start with true premises. Will H2’s conclusion result in adaptive behaviour? Perhaps not, if H2 is hard-wired with, say, a powerful desire to commit suicide. As a consequence, perhaps H2 would now be unlikely to bother jumping out of a plane. However, that same hard-wired, species-wide desire now makes it much more likely that hominid H1 will plunge to his death. There’s no desire or set of desires with which this species might be hard-wired that will simultaneously render adaptive all the various conclusions that might easily be generated by their use of the unreliable FAC rule.

Similar problems arise when we turn to the suggestion that we have unreliable memories. An unreliable memory has as output beliefs differing significantly from those it has as input. Suppose species H is equipped with an unreliable memory, and suppose we want this unreliable faculty to produce adaptive behavioural consequences. With what desire or desires must the species then be programmed? Again, given novel environments, how can unguided evolution predict what the input beliefs and output beliefs of the faculty will be? Moreover, what set of desires will result in both the input and output beliefs of this unreliable memory producing generally adaptive behaviour? If I learn that red berries are poisonous and grain is nutritious, but my unreliable memory later tells me red berries are nutritious and grain is poisonous, a desire to poison myself and avoid nutrition might render the output beliefs adaptive. However, those same desires, in combination with the input beliefs, will probably kill me.

The moral I draw is this. It is true that any false belief can, on any occasion, be made to result in adaptive behaviour if it is paired off with the right desire, and also that any desire (even a desire for something that hinders one’s chances of surviving and reproducing) can result in adaptive behaviour if it is paired off with the right belief. However, it is not so easy to see what set of desires would make unreliable cognitive mechanisms of the sort we have been examining here result in adaptive behaviour. Indeed, it seems to me highly unlikely that a species will evolve unreliable mechanisms such as those described above, given there is no way to neutralize their otherwise likely maladaptive consequences by hard-wiring the species with certain desires. Unguided evolution is far more likely to produce reliable cognitive mechanisms in combination with desires for things that enhance our ability to survive and reproduce.

Perhaps Plantinga will suggest that I have cherry-picked my examples, and that there are still a great many candidate unreliable cognitive mechanisms that unguided evolution might easily select in combination with some appropriate set of hard-wired desires. I don’t believe that is the case. However, even if I am mistaken, the onus is surely now on Plantinga to demonstrate that unguided evolution is as likely to select unreliable cognitive mechanisms as reliable ones, given that, in the case of the FAC rule and unreliable memory, reliable mechanisms will surely be preferred.

2. Insist that beliefs cannot be, or are unlikely to be, neural structures

As we have seen, Plantinga’s EAAN, as presented in “Content and Natural Selection”, concedes the possibility that beliefs might just be neural structures, but then goes on to argue that, even if they are, the semantic properties of those neural structures cannot be selected for by unguided evolution. I have explained why I believe the latter argument fails. Given the existence of certain conceptual constraints on what belief any given neural structure might be, unguided evolution probably will select for true belief.

However, Plantinga could just drop the concession that beliefs might be neural structures. He has already indicated that it is a concession about which he has significant doubts. See for example footnote 4, where he says “It is far from obvious that a material or physical structure can have a content.”

However, this would be a significant retreat, and would change the character of the EAAN. The claim that beliefs cannot just be neural structures would now require some support. It would not be enough for Plantinga to say, “I can’t see how beliefs could just be neural structures”.

Sunday, November 28, 2010

KEITH WARD, TUESDAY EVENING - PLEASE COME!

Here's an event I have set up for this coming Tuesday in central London. Keith is an excellent speaker and this should be very interesting. Hope to see some of you there.

Centre for Inquiry UK and South Place Ethical Society present

THE GOD VIRUS?

Prof. KEITH WARD


Keith Ward is a philosopher and theologian, Regius Professor of Divinity (Emeritus), Oxford, and the author of The God Conclusion.

Following up Darrell Ray’s talk The God Virus (Oct. 23) Ward’s talk addresses Richard Dawkins’s suggestion, developed by Ray, that religion functions in a similar way to a virus.

This is a free-standing talk. No familiarity with Ray’s book or talk will be assumed. Ward is a great guy, as well as one of the world's leading religious thinkers. There will be plenty of time for discussion. Please come!

Tues. November 30th, 2010, 7.30-9.00 pm

Conway Hall, 25 Red Lion Square, Holborn, London WC1R 4RL – Main Hall.

Just £4 on the door. Students £3.

Tickets on the door. To book in advance go to www.cfiuk.org, hit button “support cfiuk” and follow instructions. Credit and debit cards welcome. Include names of those coming, phone number, return address, etc.

Thursday, November 25, 2010

Nigel Warburton at Oxford Playhouse


This will be worth going to - Nigel is a very clear and entertaining speaker.

Nigel Warburton on Everyday Philosophy

What is philosophy? Who needs it? Writer and podcaster Nigel Warburton, Senior Lecturer in Philosophy at the Open University, discusses the relevance of philosophy to life today. From questions about the limits of free speech to the nature of happiness, from what art is to the impact of new technology, philosophy offers insights into questions that matter. Warburton will explore how the thoughts of some of the great philosophers of the past shed light on our present day predicament.

Nigel Warburton is the author of many books including Philosophy: The Basics, Philosophy: The Classics, Thinking from A to Z and Free Speech: A Very Short Introduction. With David Edmonds, he makes the popular philosophy podcast Philosophy Bites, and is co-editor of a book based on the series. He also writes a monthly column Everyday Philosophy for Prospect magazine.

Fri 11th February 2011, 17:00. Tickets £5

webpage here

Tuesday, November 23, 2010

Draft paper for comments

Here's a draft paper written after my radio thing with Plantinga. Work in progress. Alvin has been kind enough to comment so it will be revised in light of that...

Plantinga’s Latest EAAN Refuted

In “Content and Natural Selection” (PPR forthcoming) Plantinga presents a version of his Evolutionary Argument Against Naturalism that he then bolsters to deal with a certain sort of objection.

The EAAN itself runs as follows. Let N be the view that there’s no such person as God or anything at all like God (or if there is, this being plays no causal role in the world’s transactions), and E be the view that our cognitive faculties have come to be by way of the processes postulated by contemporary evolutionary theory. Then, argues Plantinga, the combination N&E is incoherent of self-defeating. This, he maintains, is because if N&E is true then the probability that R – that we have cognitive faculties that are reliable (that is to say, produce a preponderance of true over false beliefs in nearby possible worlds) –is low. But anyone who sees that P(R/N&E) is low then has an undefeatable defeater for R. And if they have such a defeater for R, then they have such a defeater for any belief produced by their cognitive faculties, including the belief that N&E.

But why suppose P(P/N&E) - that is to say, the probability of R given N&E - is low? Plantinga supports this premise by means of a further argument. He begins by asserting that

materialism or physicalism is de rigeur for naturalism… A belief, presuming there are such things, will be a physical structure of some sort, presumably a neurological structure. (p2)

According to a proponent of naturalism, then, this structure will have both neurophysiological (NP) properties and semantic properties. However, it is, claims Plantinga, exceedingly difficult to see how its semantic properties could have any causal effect on behaviour. This because the belief would have the same impact on behaviour were it to possess the same NP properties but different semantic properties. So, claims Plantinga, N&E lead us to semantic epiphenomenalism (SE). But if semantic properties such as having such and such content or being true or false cannot causally impinge on behaviour, they cannot be selected for by unguided evolution. Truth and falsehood will be, as Plantinga puts it, invisible to natural selection. In which case, (on the modest assumptions that, say, 75% of beliefs produced must be true in order for a cognitive mechanism to be reliable, and that we have at least 100 such beliefs) P(R/N&E&SE) will be low.

In “Content and Natural Selection”, Plantinga attempts to deal with an objection to the above argument. The objection runs as follows: suppose content properties just are NP properties. That’s to say, suppose that reductive materialism (RM) is true. Then, because NP properties cause behaviour, and semantic properties just are NP properties, so semantic properties cause behaviour. But if semantic properties cause behaviour, then they can be can be selected for by unguided evolution. As we have just seen, Plantinga’s argument for P(R/N&E) being low is that SE is likely given N&E, and SE in combination with N&E makes R unlikely. But if our adherent to N&E also accepts RM, then they no longer have grounds for supposing SE is likely. Indeed, SE is actually rendered unlikely by the addition of RM. In which case, whether not P(R/N&E) is low, there’s no reason to think that P(R/N&E&RM) is low. In fact, as unguided evolution can and will now favour true belief, there are no grounds for supposing P(R/N&E&RM) is high.

Plantinga’s argument that P(R/N&E&RM) is low


So runs the objection. Which brings us to the heart of Plantinga’s argument in “Content and Natural Selection”. Plantinga argues that adding RM to N&E does not, in fact, make the probability of R high.

According to Plantinga, while RM does indeed allow semantic properties to have causal effects on behaviour (because they just are NP properties), the combination N&E&RM gives us no reason to suppose that the content of belief/neural structures resulting in adaptive behaviour is likely to be true. Suppose the belief/neural structure resulting in a piece of adaptive behaviour has the content q. While the property of having q as content does now enter into the causal chain leading to that behaviour, it doesn’t matter whether q is true:

What matters is only that the NP property in question cause adaptive behaviour; whether the content it constitutes is also true is simply irrelevant. It can do its job of causing adaptive behaviour just as well if it is false as if it is true. It might be true, and it might be false; it doesn’t matter. (p. 10)

But if the NP property can do its job of causing adaptive behaviour just as well whether its content be true or false, true belief cannot be favoured by natural selection. In which case P(R/N&E&RM) is low.

Plantinga goes on to consider various other philosophical theories of mind that might be supposed, in conjunction with N&E, to make R likely, namely non-reductive materialism, Dretske’s indicator semantics, functionalism and Ruth Millikan’s biosemantics. Plantinga argues that on none of these theories does N&E render the probability of R high. He intimates that there is a pattern to their respective failings, such that we can see that nothing like these theories will be capable of doing the job of making R probable given N&E.

Refutation of Plantinga’s argument

I shall focus here on Plantinga’s argument that, even given the addition of RM, N&E is still unlikely to produce cognitive faculties favouring true belief. There is, it seems to me, a fairly obvious flaw in this argument.

Let’s concede that what unguided evolution favours, in the first instance, is adaptive behaviour. As to what causes that behaviour, evolution doesn’t care – true beliefs, false beliefs, something else, it’s all the same to evolution. It’s only the result - adaptive behaviour - that is preferred. That is why Plantinga supposes unguided evolution is unlikely to select for cognitive mechanisms that favour true beliefs.

But even if unguided evolution doesn’t care what causes adaptive behaviour, just so long as it is caused, it does not follow, given further facts about belief, that if belief content causes behaviour, then the content in question is no more likely to be true than false.

The further fact about belief to which I now draw attention is this: there exist certain conceptual constraints on what content a given belief can, or is likely to, have, given its causal relationships both to behaviour and to other mental states, such as desires.

Consider a human residing in an arid environment. Suppose the only accessible water lies five miles to the south of him. Our human is desperately thirsty. My suggestion is that we can know a priori, just by reflecting on the matter, that if something is a belief that, solely in combination with a strong desire for water, typically results in such a human walking five miles to the south, then it is quite likely to be the belief that there’s water five miles to the south (or the belief that there’s reachable water thataway [pointing south] or whatever). It is highly unlikely to be the belief that there isn’t any water five miles to the south (or isn’t any reachable water thataway), or the belief that there’s water five miles to the north (or thisaway [pointing north]), or the belief that there’s a mountain of dung five miles to the south, or that inflation is high, or that Paris is the capital of Bolivia.

I don’t say this because I am wedded to some particular reductionist, materialist-friendly theory of content of the sort that Plantinga goes on to attack in “Content and Natural Selection”, such as Dretskian indicator semantics or functionalism or whatever: I’m not. Such theories may build on the thought that such conceptual constraints exist. But the suggestion that there are such constraints does not depend on any such theory being correct. True, perhaps content cannot be exhaustively captured in terms of its causal role, as functionalists claim. But that is not to say that there are no conceptual constraints at all on what the content of a given belief is likely to be, given the causal links that belief has to behaviour and other mental states such as desires.

What Plantinga overlooks is that, assuming RM is true and beliefs are neural structures, one cannot, as it were, just plug any old belief into any old neural structure willy-nilly. A neural structure that, in combination with a powerful desire to drink water, typically causes one to go to the tap and drink from it is hardly likely to be the belief that inflation is running high, that Paris is the capital of Bolivia, or that water does not come out of taps. Among the various candidates for being the semantic content of the neural structure in question, being the belief that water comes out of taps must rank fairly high on the list. Even if we acknowledge that some other beliefs might also typically have that behavioural consequence when combined with just that desire, the belief that water comes out of taps must at least be among the leading candidates.

But then we have grounds for supposing that, given we are likely to have evolved powerful desires for things that help us survive and reproduce, such as water, food and a mate, we are also likely to have evolved cognitive mechanisms that favour true beliefs.

Unguided natural selection will favour the adaptive behaviour of walking five miles south to find water if one strongly desires water, but that adaptive behaviour will be caused by that desire in combination with a belief, and the belief in question is likely to be the belief that there’s water five miles to the south – a true belief – rather than the belief that there’s no water five miles to the south – a false belief. True, there are other candidates for being the content of the belief in question. Indeed, they may be more likely candidates. Suppose our human has no conception of miles or of south. Then, instead of being the belief that there’s water five miles south that causes his behaviour, it may be the belief that there’s reachable water thataway. Either way, notice that the likely candidate for being the content of the belief in question is still content that is true.

So, my suggestion is that Plantinga overlooks significant conceptual constraints on what content can be, or is likely to be, possessed by a given belief, given its role in producing, in combination with desires for things that enhance our ability to survive and reproduce, adaptive behaviour. Once these conceptual constraints are acknowledged, it no longer looks unreasonable to expect N&E&RM to produce cognitive mechanisms favouring true belief. Indeed, given N&E&RM makes it likely that we will have strong desires for things that enhance our ability to survive and reproduce, surely R starts to look quite probable.

What is P(R/N&E&NRM)?


I mentioned above that Plantinga also argues that the probability of R given N&E and non-reductive materialism (NRM) is low. His argument is much the same as his argument for P(R/N&E&RM) being low.

Plantinga considers a version of non-reductive materialism on which beliefs are neural structures, but the semantic properties of those beliefs/neural structures strongly supervene on their NP properties. On this version of NRM, necessarily: a neurological structure exemplifies a given semantic property if and only if it exemplifies a certain NP property.

Plantinga argues that on N&E&NRM, the probability of R is still low. Suppose a belief/neural structure causes a piece of adaptive behaviour. The belief/neural structure’s semantic properties will follow from its (presumably adaptive) NP properties. But even if the NP property is adaptive, that gives us no reason to suppose that the supervening semantic content is true. If the content property involves false content, says Plantinga, “that won’t in the least compromise the adaptivity of the NP property” (p. 14). Thus, on N&E&NRM, the probability of our having cognitive mechanisms that favour true beliefs must still be low.

But again, Plantinga has overlooked the fact that there are significant conceptual constraints on what the belief content of a given neural structure can be, given its causal and/or logical relationships to behaviour and other mental states. A belief that, in conjunction with a powerful desire for water, typically causes subjects to walk five miles south is surely quite likely to have the belief content that there is water five miles to the south. It’s hardly likely to be the content that there’s water five miles to the north. A belief with the latter content will, in conjunction with a strong desire for water, typically lead subjects to walk five miles north, where, if their belief is false, they may well die of thirst. Thus, on N&E&NRM, natural selection will tend to favour neural structures/beliefs possessing NP properties that (because of the supervenience relation holding between them) in turn necessitate content properties the content of which is, in fact, true.

It seems, then, that in combination with either RM or NRM, N&E renders the probability of R high. Adherents of naturalism and evolution who subscribe to either RM or NRM, once they reflect a priori on the conditions under which a belief is likely to have such and such content, will therefore acquire no defeater for R, or for any of their other beliefs, including N&E itself.

Anticipating two replies


I now anticipate two responses Plantinga might make to the above argument.

1. Why assume we will evolve desires for things that enhance our ability to survive and reproduce?

First, Plantinga may question my assumption that unguided evolution is likely to equip us with desires for things that enhance our ability to survive and reproduce, such as water, food and a mate. Plantinga may ask, “Why should evolution favour such desires?” Why should it not select other desires that, in conjunction with false beliefs, nevertheless result in adaptive behaviour?

In support of this suggestion, Plantinga may resurrect an argument that featured in an earlier incarnation of the EAAN, an argument I call the belief-cum-desire argument. According to Plantinga, for any given adaptive action (action that enhances the creature’s ability to survive and reproduce), ‘there will be many belief-desire combinations that could produce that action; and very many of those belief-desire combinations will be such that the belief involved is false’ (p. 4).

Plantinga illustrates like so:

So suppose Paul is a prehistoric hominid; a hungry tiger approaches. Fleeing is perhaps the most appropriate behavior: I pointed out that this behavior could be produced by a large number of different belief-desire pairs. To quote myself: ‘Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief… Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it … or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps…. Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior. (p. 5)

So adaptive behaviour can be produced by numerous belief-desire combinations. As Plantinga points out, on many of these combinations, the belief in question is not true. But then similarly, on many of these combinations, the desire is not for something that enhances the organism’s ability to survive and reproduce (wanting to be eaten by a tiger is not such a desire, obviously). Why assume that unguided evolution will favour desires for things that enhance our ability to survive and reproduce, given that desires for things that provide no such advantage, when paired with the right sort of belief (irrespective of whether those beliefs are true or false), will also result in adaptive behaviour?

This belief-cum-desire argument is flawed. We should concede that any belief can be made to result in adaptive behaviour if paired off with the right desire, and that any desire can be made to result in adaptive behaviour if paired off with the right belief. It does not follow that unguided evolution won’t favour the development of a combination of reliable cognitive mechanisms with desires for things that enhance survival and reproductive prospects.

The reason for this (spelt out in more detail in my [reference withheld for purposes of anonymity]) is as follows.

When we begin to think through the behavioural consequences of a species possessing unreliable cognitive mechanisms, it becomes clear that in at least very many cases (i) unguided evolution cannot predict with much accuracy what false beliefs such unreliable mechanisms are likely to throw up, and (ii) worse still, there just is no set of desires with which the species might be hard-wired that will, in combination with the mechanism in question, make the behavioural consequences broadly adaptive.

To illustrate, consider a hominid species H much like us but with unreliable cognitive faculties. Let’s suppose, to begin with, that these creatures reason very badly. Rather than use reliable rules of inference, they employ rules like this:

If P then Q
Q
Therefore P

(call this the FAC rule). What desire might evolution hardwire into this species that will render the behavioural consequences of the various beliefs generated by the FAC rule adaptive?
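The FAC rule is, of course, the classical fallacy of affirming the consequent, and its unreliability can be verified mechanically. The following sketch (my illustration, not anything in Plantinga’s text) enumerates all truth-value assignments and confirms that modus ponens is valid while the FAC rule is not:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if P then Q" is false only when P is true and Q false.
    return (not p) or q

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values makes
    all the premises true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(premise(p, q) for premise in premises))

# Modus ponens: if P then Q; P; therefore Q  -- valid
modus_ponens = valid([implies, lambda p, q: p], lambda p, q: q)

# FAC rule: if P then Q; Q; therefore P  -- invalid
fac = valid([implies, lambda p, q: q], lambda p, q: p)

print(modus_ponens, fac)  # True False
```

The invalidating case is P false, Q true: both premises of the FAC rule hold, yet the conclusion is false, which is why a creature reasoning by FAC can move from true premises to a false, and potentially fatal, conclusion.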

Notice first of all that an advantage of having belief-producing mechanisms, as opposed to hard-wired (i.e. innate) beliefs, is that such mechanisms can generate different beliefs depending on the environment. Evolution will favour such mechanisms if, as the organism’s environment changes, so too do its resulting beliefs, and in such a way that adaptive behaviour still results.

But now notice that unguided evolution cannot anticipate what novel environments our hominids will encounter, and what false beliefs they will, as a result of employing the unreliable FAC rule in those environments, acquire. In which case, unguided evolution cannot pre-equip species H with some innate desire or set of desires that will make the false beliefs the FAC rule might easily throw up result in adaptive behaviour.

Even if the species’ environment does not vary much, the FAC rule may still be employed in all sorts of ways. Suppose hominid H1 reasons like so:

If jumping out of planes is not safe, jumping out of balloons is not safe
Jumping out of balloons is not safe
Therefore jumping out of planes is not safe

Hominid H2 reasons thus:

If jumping out of planes is safe, jumping out of planes with a parachute is safe
Jumping out of planes with a parachute is safe
Therefore jumping out of planes is safe

Both hominids employ the FAC rule and both start with true premises. Will H2’s conclusion result in adaptive behaviour? Not if H2 is hard-wired with, say, a powerful desire to commit suicide. As a consequence, H2 would now be unlikely to bother jumping out of a plane. However, that same hard-wired, species-general desire now makes it much more likely that hominid H1 will plunge to his death. There’s no desire or set of desires with which species H might be hard-wired that will simultaneously make all the various conclusions that might easily be generated by their adopting the FAC rule nevertheless result in adaptive behaviour.

Similar problems arise when we turn to the suggestion that we have unreliable memories. An unreliable memory has as output beliefs that differ significantly from those it has as input. Suppose species H is equipped with such a memory. If we want this unreliable faculty to produce adaptive behavioural consequences, with what desire or desires must the species be programmed? Again, given novel environments, how can unguided evolution predict what the input beliefs and output beliefs of the faculty will be? Moreover, what set of desires will result in both the input and output beliefs of this unreliable memory producing generally adaptive behaviour? If I learn that red berries are poisonous and grain is nutritious, but my unreliable memory later tells me red berries are nutritious and grain is poisonous, a desire to poison myself and avoid nutrition might render the output beliefs adaptive. However, those same desires, in combination with the input beliefs, will probably kill me.

The moral I draw is this. It is true that any false belief can, on any occasion, be made to result in adaptive behaviour if it is paired off with the right desire, and also that any desire (even a desire for something that hinders one’s chances of surviving and reproducing) can be made to result in adaptive behaviour if it is paired off with the right belief. However, it is not so easy to see what set of desires would make unreliable cognitive mechanisms of the sort we have been examining here result in adaptive behaviour. Indeed, it seems to me highly unlikely that a species will evolve unreliable mechanisms such as those described above, given there is no way to neutralize their otherwise likely maladaptive consequences with hard-wired desires. Unguided evolution is far more likely to produce reliable cognitive mechanisms in combination with desires for things that enhance our ability to survive and reproduce.

Perhaps Plantinga will suggest that I have cherry-picked my examples, and that there are still a great many candidate unreliable cognitive mechanisms that unguided evolution might easily select in combination with some appropriate set of hard-wired desires. I don’t believe that is the case. However, even if I am mistaken, the onus is surely now on Plantinga to demonstrate that unguided evolution is as likely to select unreliable cognitive mechanisms as reliable ones, given that, in the case of the FAC rule and unreliable memory, reliable mechanisms will surely be preferred.

2. Insist that beliefs cannot be, or are unlikely to be, neural structures
As we have seen, Plantinga’s EAAN, as presented in “Content and Natural Selection”, concedes the possibility that beliefs might just be neural structures, but then goes on to argue that, even if they are, the semantic properties of those neural structures cannot be selected for by unguided evolution. I have explained why I believe the latter argument fails. Given the existence of certain conceptual constraints on what belief any given neural structure might be, unguided evolution probably will select for true belief.

However, Plantinga could just drop the concession that beliefs might be neural structures. He has already indicated that it is a concession about which he has significant doubts. See for example footnote 4, where he says “It is far from obvious that a material or physical structure can have a content.”

However, this would be a significant retreat, and would change the character of the EAAN. The claim that beliefs cannot just be neural structures would now require some support. It would not be enough for Plantinga to say, “I can’t see how beliefs could just be neural structures”.

Conclusion

to be completed

"I Just Know!"



From forthcoming book Believing Bullshit. Warning: this excerpt is 7,400 words.

When someone’s claim is challenged, and they find themselves struggling to come up with a rational reply, they will often resort to saying, “Look, I just know!”

How reasonable a response is “I just know”? It depends. Sometimes, by “I just know”, people mean you should just take their word for it, perhaps because time is short and the evidence supporting their belief too complex to present in a convenient sound-bite.

Suppose, for example, I’m asked how I know Tom can be trusted to pay back the five dollars you just lent him. I could spend five minutes rehearsing several bits of evidence that would, together, show my claim was reasonable, but that would take time and effort. So, instead I say, “Look, I just know, okay!” To which I might add, “Take my word for it!” And, if you know me to be a pretty good judge of character, you’ll probably be justified in doing so.

Another situation in which it might be appropriate for me to say “I just know” is to flag up that, rather than coming to a belief on the basis of evidence, I can, say, just see, clearly and directly, that such-and-such is the case.

Suppose I’m looking out the window and see our good friend Frank. You’re convinced Frank is away on vacation, so you ask me if I’m sure. I might say, “Look, I just know it’s Frank”. What I’m trying to convey is that I can see, very clearly, that it really is Frank. I’m not just hazarding a guess that it’s Frank on the basis of some passing resemblance (the shape of the back of his head, say). Again, knowing me to be a reliable witness, you would probably be justified in taking my word for it.

So saying “I just know” isn’t always an inappropriate thing to say in response to requests for supporting evidence. But then suppose I am asked how I know that God exists, or whether crystals really can cure people. Why can’t it be appropriate and reasonable for me to say, “Look, I just know!” in such situations too?

Maybe, just as I might directly experience Frank walking down the path to my front door, so I might directly experience God. I might just see, as it were, very clearly, that God really does exist. And if it’s reasonable for you to take my word about Frank, then why isn’t it reasonable for you to take my word about God?

Or, if I have a wealth of evidence that crystals really do have miraculous healing properties, but it would take considerable effort to organize it into a cogent argument – effort I can’t reasonably be expected to make under the circumstances – why isn’t it appropriate for me to say, “Look, I just know crystals have these powers”? And if it’s reasonable for you to take my word for it about Tom’s trustworthiness, why not about the healing power of crystals?

We can now begin to see why saying “I just know!” offers those who believe in conspiracy theories, wacky religious claims, psychic powers, and so on a potential “get out of jail free” card. Suppose you find your belief in such things running up against a stiff challenge. Say, “Look, I just know, okay” and you may succeed in putting your critic on the back foot. Make them feel that the onus is now very much on them to demonstrate that you don’t “just know”. Then quickly make your escape, head held high, continuing to lay claim to the superior wisdom that they have failed to show you lack.

In this chapter we will be taking a closer look at this sort of appeal to “I just know” to befuddle critics and shut down debate.

When saying “I just know” won’t do

While “Look, I just know” is sometimes an appropriate thing to say in response to a challenge to your belief, often, it isn’t.

Note, first of all, while there are circumstances in which it might be unreasonable to expect someone to set out the evidence supporting their claim, there are other circumstances in which this excuse won’t wash. If someone is writing a book on a subject, a book in which they have ample time and space available to properly set out their evidence, it obviously won’t do for them to say, “Look, I just know.”

The same is true of important political debates. Politicians are rightly expected to set out their case for raising taxes or invading another country clearly and in detail. Short of their decision being based on, say, top-secret information regarding national security, they have no legitimate excuse for not doing so.

“I just know” is an expression that also crops up at the race track. Suppose Jane puts her money on a horse, and says “I just know it’s going to win.” She says this even though the evidence – the betting odds and so on – suggest it probably won’t win. Even if Jane’s horse does happen to win, we’ll usually be inclined to think that not only did Jane not “just know”, it wasn’t reasonable for her to suppose she did.

Deciding “with your gut”

We all go with our gut, intuition, or instinct on occasion. Sometimes it’s unavoidable. Suppose I don’t know whether I should employ someone. The evidence concerning their reliability is somewhat mixed. I’ve received some very positive reports, but also some negative ones. I need to make a snap decision. Under such circumstances, I may just have to go with my gut. It’s that or toss a coin.

It’s been suggested that our gut feelings can be insightful. Police officers often have to make rapid decisions about, say, who is most likely to be armed in a rapidly unfolding and dangerous situation. There’s no time to assess the evidence properly. Officers often just have to go with their instincts. But their instincts are, it’s claimed, surprisingly accurate. They make fairly reliable judgements, despite not engaging in any conscious deliberation or evidence-weighing at all.

So there’s not necessarily anything wrong with going with your gut in certain situations. However, none of this is to say that it’s sensible to go with your gut feeling when you don’t need to, because, say, there’s ample and decisive evidence available. We are also ill-advised to trust the instincts of someone whose particular gut has a poor track record, or to trust our own gut feelings in areas where we know that gut feeling has proved to be unreliable.

Bush’s gut

Notoriously, during George W. Bush’s presidency, Bush’s gut became the oracle of the State. Bush was distrustful of book learning and those with established expertise in a given area. When Bush made the decision to invade Iraq, and was subsequently confronted by a skeptical audience, Bush said that ultimately, he just knew in his gut that invading was the right thing to do. As writer Rich Procter noted prior to the invasion:

Now we're preparing to invade a country in the middle of the most volatile "powder-keg" region on earth. We're going to toss out our history of using military force only when provoked. We're going to launch a "pre-emptive" invasion that violates two hundred-plus years of American history and culture. We're on the verge of becoming a fundamentally different kind of nation - an aggressive, "go-it-alone" rogue state - based on Bush's gut…

The invasion went ahead. A few months later, Senator Joe Biden told Bush of his growing worries about the aftermath. In response, Bush again appealed to the reliability of his “instincts”, as Ron Suskind here reports:

''I was in the Oval Office a few months after we swept into Baghdad,'' [Biden] began, ''and I was telling the president of my many concerns'' - concerns about growing problems winning the peace, the explosive mix of Shiite and Sunni, the disbanding of the Iraqi Army and problems securing the oil fields. Bush, Biden recalled, just looked at him, unflappably sure that the United States was on the right course and that all was well. '''Mr. President,' I finally said, 'How can you be so sure when you know you don't know the facts?''' Biden said that Bush stood up and put his hand on the senator's shoulder. ''My instincts,'' he said. ''My instincts.'' …The Delaware senator was, in fact, hearing what Bush's top deputies - from cabinet members like Paul O'Neill, Christine Todd Whitman and Colin Powell to generals fighting in Iraq - have been told for years when they requested explanations for many of the president's decisions, policies that often seemed to collide with accepted facts. The president would say that he relied on his ''gut'' or his ''instinct'' to guide the ship of state…

How did Bush suppose his gut was able to steer the ship of state? He supposed it was functioning as a sort of God-sensing faculty. Bush believed that by means of his gut he could sense what God wanted of him. But how reasonable was it for Bush, or anyone else, to trust what his gut was telling him?

What is knowledge?

Interestingly, a theory of knowledge developed over the last half century or so would seem to have the consequence that it is at least in principle possible (notice I don’t say likely) that some psychics, religious gurus and so on might “just know” things by means of some sort of psychic or divinely-given sense. They might “just know” these things even if they don’t have any evidence to support what they believe. In which case, perhaps Bush might “just know” what God wants of him by means of his gut? Let’s make a short detour of a few pages into contemporary theory of knowledge to look more closely at these ideas.

What is knowledge? Under what circumstances can someone correctly be described as knowing that so-and-so? The classic definition of knowledge comes from the Ancient Greek philosopher Plato, who thought that, in order to know that so-and-so, three conditions must be satisfied:

First, the person in question must believe that so-and-so. In order to know that, say, the battle of Hastings was in 1066, or that there is a pen on my desk, I must believe it.

Second, the belief must be true. I can’t know what isn’t true. If there’s no pen on my desk, then I cannot know that there is (though of course I might still believe it).

Third, Plato thought that, in order to know that so-and-so, I need to be justified in believing that so-and-so. In order to know that the battle of Hastings was in 1066, or that there’s a pen on my desk, I need to be justified in believing these things.

Up until the mid-Twentieth century, this account of knowledge was widely accepted.

The third condition needs a little explanation, perhaps. Justification can take various forms. Perhaps the most obvious way in which you might be justified in believing something is if you have good evidence that what you believe is true. Incidentally, those who sign up to this definition of knowledge don’t normally mean that your justification must guarantee the truth of your belief. They typically allow that you can be justified in believing something even if you are mistaken. For example, surely you are justified in supposing that John is an expert on chemistry after he has shown you round a chemistry laboratory and you have seen various credentials hanging on his study wall, even though it still remains possible (if unlikely) that John is a con-man and you are the victim of some elaborate, Mission Impossible-type fraud.

Evidentialism

Let’s now quickly turn to a well-known claim about evidence made by the philosopher W.K. Clifford. Clifford claimed that


it is wrong, always and everywhere, to believe anything on insufficient evidence.


People who believe despite not possessing good evidence that their belief is true are being downright irresponsible, thought Clifford. This quotation is often used to condemn those who believe in such things as the Loch Ness monster, angels, fairies and even God. Such beliefs, it is suggested, are not well-supported by the evidence. So it is wrong for people to believe them.

The idea that it is, at the very least, unwise to accept claims for which we possess little or no supporting evidence is certainly widespread. Richard Dawkins, for example, writes:

Next time somebody tells you something that sounds important, think to yourself: ‘Is this the kind of thing that people probably know because of evidence? Or is it the kind of thing that people only believe because of tradition, authority or revelation?’ And next time somebody tells you that something is true, why not say to them: ‘What kind of evidence is there for that?’ And if they can’t give you a good answer, I hope you’ll think very carefully before you believe a word they say.

Let’s call the view that we ought not to accept any belief not well-supported by evidence “evidentialism”. Is evidentialism true?

Probably not. Evidentialism faces some obvious difficulties. Perhaps the most glaring is this. Suppose I believe some claim A because I suppose I have supporting evidence B. But now ought I to believe that evidence B obtains? If evidentialism is true, it seems I ought to believe B obtains only if I possess, in turn, evidence for that – C, say. But then I should believe that C obtains only if there is, in turn, evidence for that, and so on ad infinitum. Evidentialism seems to entail that, before I adopt any belief, I must first acquire evidence to support an infinite number of beliefs – which, as a finite being, I can’t do. In short, Clifford’s injunction that I ought not to believe anything on the basis of insufficient evidence appears to have the disastrous consequence that I ought not to believe anything at all!

A problem for Plato’s theory


Let’s now return for a moment to Plato’s theory that knowledge is justified true belief. It is widely supposed that Plato’s theory runs into a similar problem. The theory says that, in order to know that so-and-so, my belief must be justified. But if my justification is supplied by another belief of mine, then, presumably, I am only justified in believing the first belief if I am justified in believing the second. But then the second belief will require a third belief to justify it, and so on ad infinitum. So, in order to justify even one belief I will have to justify an infinite number. Being a finite being, I cannot justify an infinite series of beliefs. It seems, then, that I cannot justify any belief, and thus cannot know anything at all!

How do we escape from this conclusion? The theory of knowledge known as reliabilism provides one solution.

Reliabilism


Here is a simple reliabilist theory of knowledge. In order for person a to know that P,

(i) P must be true
(ii) a must believe that P
(iii) a’s belief that P must be brought about by the fact that P via a reliable mechanism

You will notice that the first two conditions are the same as for Plato’s definition of knowledge. But the third is different, and requires a little explanation.

What’s meant by a “reliable mechanism”? A reliable mechanism is a mechanism that tends to produce true beliefs. My sense of sight is a fairly reliable belief-producing mechanism. It allows my beliefs fairly reliably to track how things are in my environment.

Suppose, for example, someone puts an orange on the table in front of me. Light bounces off the orange into my eyes, which in turn causes certain cells to fire in my retina, which causes a pattern of electrical impulses to pass down my optic nerves into my brain, eventually bringing it about that I believe there’s an orange before me. Remove the orange and that will in turn cause me, by means of the same mechanism, to believe the orange has gone.

The same goes for my other senses – they are fairly reliable belief-producing mechanisms. Blindfold me and put me in a crowded street and my ears and nose will, in response to the sound of car horns and the odour of hot dogs, cause me to believe I am in a crowded street. Move me to a fragrant garden filled with singing birds and those same senses will cause me to believe I am in such a garden. My senses of sight, touch, smell, hearing and taste cause me to hold beliefs that tend accurately to reflect how things actually are around me.

I don’t say our senses are one hundred percent reliable, of course. Sometimes we get things wrong. They are occasionally prone to illusion. But they are fairly reliable.

Let’s now apply our reliabilist definition of knowledge. Suppose someone puts an orange on the table in front of me. I look at the orange, and so come to believe there’s an orange there. Do I know there’s an orange on the table?

According to our reliabilist, I do. The simple reliabilist theory says that if (i) it’s true that there’s an orange there, (ii) I believe there’s an orange there, and (iii) my belief is produced via a reliable mechanism, e.g. sight, by the presence of an orange there, then I know there’s an orange there.

Now here is an interesting twist to this theory – a twist that will prove relevant to our discussion of psychic powers and George Bush’s gut. Notice that, according to reliabilism, in order to know there’s an orange on the table, I need not infer there’s an orange there. I need not arrive at my belief on the basis of good grounds or evidence. No evidence is required. All that’s required is that I hold the belief and that it be produced in the right sort of way – by a reliable mechanism.

Also notice that if, by saying that a belief is “justified”, we mean we have good grounds for believing it, then reliabilism says that we can know without justification. In which case, the regress problem with Plato’s theory that knowledge is justified true belief is also sidestepped by reliabilism.

Reliabilism and psychic powers


Many contemporary philosophers accept some form of reliabilism (though they have developed it in various ways). You can now see why reliabilism might also appeal to, say, a psychic who believes she “just knows” things about the dead.

Suppose a psychic (notice that by “psychic” I mean someone who is supposed to have psychic powers, whether or not they actually do) – call her Mary – finds herself believing that her dead Aunt Sarah is currently in the room with her. Also suppose, for the sake of argument, that Mary really does have some sort of reliable psychic sense, that dead Aunt Sarah really is in the room with Mary, and that Mary’s psychic sense is what is causing Mary to believe Aunt Sarah is present. Then, says our reliabilist theory, Mary knows that Aunt Sarah is in the room with her.

Notice that Mary doesn’t infer that Aunt Sarah is present on the basis of evidence. Mary just finds herself stuck with the belief that Aunt Sarah is present, caused as it is by her reliable psychic sense. Yet, says our reliabilist, despite the fact that Mary doesn’t possess any evidence that Aunt Sarah is present, Mary knows Aunt Sarah is there. In fact, were Mary to claim that she “just knows” that Aunt Sarah is in the room with her right now, she’d be right!

Of course, that they do “just know” such things despite not having any publicly available evidence is a claim psychics make on a daily basis. So, while few psychics will have heard of reliabilism, reliabilism nevertheless opens up at least the possibility that these psychics are actually correct – they do know, despite not possessing any evidence.

“But hang on” you may object. “Even if reliabilism is correct and Mary does know her dead Aunt is in the room with her, that is not something she ought to believe. The fact is, Mary is being downright irresponsible in just accepting at face value this belief that happens to have popped into her head. Clifford is still correct – she shouldn’t believe it. It’s still unwise for her to believe it.”

In her own defence, Mary might appeal to a further principle. Surely, Mary may insist, if something seems very clearly and obviously to be the case, then, other things being equal, it’s reasonable to believe it’s true. It’s reasonable to take appearance at face value. For example, if it seems clear and obvious to me that there’s an orange on the table before me, then surely it’s reasonable for me to believe there’s an orange there.

This principle does seem intuitively plausible. And it entails that, if it seems just clearly and obviously true to Mary that her dead Aunt is in the room with her, then, other things being equal, it is reasonable for Mary to hold that belief. Whether or not she can provide any publicly available evidence.

Reliabilism and religious experience


Let’s now return to George Bush’s gut. Bush believes he can directly know, by means of his gut, what God wants him to do.

Many people believe that they “just know” directly, rather than on the basis of evidence, that God exists and that, say, the Bible is true. Ask them why they believe, and they may give reasons and justifications of one sort or another. But typically, even if such grounds are provided, not much weight is placed on them. Most Theists will say that they don’t believe on the basis of evidence. Rather, they “just know” God exists. They believe they directly experience God, perhaps in something like the way I just directly experience that orange on the table in front of me. To them, it seems perfectly clear and obvious that God exists.

Reliabilism seems to open up the possibility that some people might, indeed, “just know” that God exists. Suppose God has provided us with a sort of sensus divinitatis – a reliable, God-sensing faculty (in Bush’s case, that would be his gut). On the reliabilist view, it seems that a sensus divinitatis could provide such knowledge.

Moreover, a religious person might add, just as, if it seems clearly and obviously true to me that there’s an orange on the table, then it is reasonable for me to suppose there’s an orange there, so if it seems clearly and obviously true to someone that God exists, then it’s reasonable for them to believe God exists. There’s certainly nothing wrong, or irresponsible, about them taking their experience at face value.

This view about religious experience has been developed by several contemporary Christian philosophers, chief among whom is Alvin Plantinga. Plantinga’s version is detailed, but the gist is this: something like reliabilism is correct; God has indeed given every one of us a God-sensing faculty or sensus divinitatis; and consequently, some of us can know, directly and without evidence, that God exists. Indeed, that God exists is an entirely reasonable thing for such people to believe if that’s very much how things clearly and obviously seem to them even after careful reflection.

Plantinga adds that, if there is a God, he probably would want us to know of his existence directly by means of such a reliable God-sensing faculty. So, if there is a God, then some of us probably do know by such means that God exists.

You may be wondering: “But if we all have a sensus divinitatis, as Plantinga supposes, why don’t we all enjoy such clear and unambiguous God experiences?” Because, Plantinga explains, in many cases our sensus divinitatis has been damaged by sin:

Were it not for sin and its effects, God’s presence and glory would be as obvious and uncontroversial to us all as the presence of other minds, physical objects and the past. Like any cognitive process, however, the sensus divinitatis can malfunction; as a result of sin, it has been damaged.

The reason I don’t have such God experiences, then, is that my sensus divinitatis has been damaged by sin. Obviously, it doesn’t follow that, if I don’t have such experiences, then others aren’t, by means of them, able to know that God exists. To draw that conclusion would be analogous to me, having poked my eyes out and so blinded myself to the orange on the table in front of me, defiantly claiming, “I don’t see any orange on the table, so – even if there is – you certainly don’t know there’s any orange there!”

Assessing psychic and religious claims to “just know”

We have seen how the reliabilist theory of knowledge seems to open up the possibility that some people might “just know” that their dead relative is in the room with them, or “just know” that God exists. We have also seen that evidentialism has been challenged, and that, according to Plantinga and others, it can be entirely reasonable for people to take their religious experiences at face value. If it seems just clearly and obviously true to them that God exists, then it can be entirely reasonable for them to believe God exists, whether or not they possess any evidence. Psychics might say much the same thing about their psychic experiences. Let’s now begin to assess these various claims.

Let me say at the outset that I find reliabilism plausible. I suspect that some version of reliabilism may well be correct. Let me also be clear that I do not rule out in principle the possibility that some people might be equipped with reliable psychic powers, or a sensus divinitatis, or whatever.

I also agree that evidentialism is probably false, and that, generally speaking, it is indeed reasonable for us to take appearances at face value. If it seems just clearly and obviously the case that there’s an orange on the table in front of me, well then, other things being equal, it’s reasonable for me to believe there’s an orange on the table in front of me.

However, I remain entirely unconvinced that anyone who claims to “just know” that the dead walk among us, or that God exists, knows any such thing. Not only do I think the rest of us have good grounds for doubting their experience, I don’t believe it’s reasonable for them to take their own experience at face value either. I’ll explain why by means of what I call the case of the mad, fruit-fixated brain scientist.

The case of the mad, fruit-fixated brain scientist


Suppose Jane is shown what appears, quite clearly and obviously, to be an orange on the table in front of her. Surely then, it is, other things being equal, reasonable for Jane to believe there’s an orange there.

But now suppose the orange is presented to Jane in a rather unusual situation. Jane is one of several visitors to the laboratory of a mad brain scientist with a weird fruit fixation. She, like the other visitors, is wearing an electronic helmet that can influence what happens in her brain. From his central computer terminal, the mad brain scientist can, by means of these helmets, control what people are experiencing. He can create vivid and convincing hallucinations.

The scientist demonstrates by causing one of the visitors to hallucinate an apple. There’s much hilarity as the victim tries to grab for the fruit that’s not there. The visitors are then invited to wander round the lab where, the scientist tells them, they may experience several other virtual fruit. Jane then comes across what appears to be an orange on a table. Now, as a matter of fact, it is a real orange – one that fell out of someone’s packed lunch bag. Jane’s faculty of sight is functioning normally and reliably. This is no hallucination.

Now ask yourself two questions: (i) does Jane know there’s an orange on the table? And (ii) is it reasonable for Jane to suppose there’s an orange on the table?

Intuitively, it seems Jane doesn’t know there’s an orange present. After all, for all Jane knows, it could be one of the many hallucinatory fruit she knows about. But what would a reliabilist say? Well, sight is generally a reliable belief-producing mechanism, and sight is what’s producing her belief. So some reliabilists may say that, yes, Jane does know. On the other hand, very many reliabilists say that, while in a standard environment sight is reliable, it isn’t reliable in other kinds of environment, e.g. the kind of environment in which we will as often as not be deceived by visual hallucinations. But then it follows that, because she is in just such an environment, Jane doesn’t know.

Now let’s turn to question (ii), which is the pivotal question: is it reasonable for Jane to believe there’s an orange before her?

Surely not. Given Jane knows that she is in an environment (the mad brain scientist’s laboratory) in which people regularly have compelling fruit hallucinations (indistinguishable from real fruit experiences), Jane should remain rather skeptical about her own fruit experience. For all she can tell, she may well be having a mad-scientist-induced fruit hallucination.

I draw two morals for religious experience:

First of all, even if reliabilism is true, and even if some of us do have God-experiences produced by a sensus divinitatis, it remains debatable whether such people know that God exists. If human beings are highly prone to delusional religious experiences that they nevertheless find entirely convincing, then, even if, as a matter of fact, I happen to be having a wholly accurate religious experience revealing that, say, the Judeo-Christian God exists, it’s by no means clear I can be said to know the Judeo-Christian God exists, any more than Jane, coming upon a real orange in the brain scientist’s lab, can be said to know that there’s an orange on the table in front of her.

Second, and more importantly, even if it’s true, because of my religious experience, that I do know that the Judeo-Christian God exists, surely it still isn’t reasonable for me to take my experience at face value. For I find myself in a situation much like Jane’s in the brain scientist’s lab. Even though it looks to Jane clearly and obviously to be true that there’s an orange on the table in front of her, Jane should, surely, remain pretty skeptical about whether there’s actually an orange there, given that, for all she knows, she might very easily be having one of the many delusional fruit experiences currently being generated in the lab. Jane would be foolish to take appearance at face value. Similarly, if I have good evidence that many religious experiences are delusional – even the most compelling examples – then surely I should be equally skeptical about my own religious experiences, no matter how compelling those experiences might be. It would be foolish of me to take my experiences at face value.

A similar moral might be drawn about psychic experiences. If most – including even the most compelling examples – are delusional, then it’s debatable whether the psychic can be said to know. However, even if the psychic can be said to know, if they’re aware that many such experiences are delusional, then it surely isn’t reasonable for such a person to take their experience at face value. They would be foolish to do so.

The dubious nature of religious experience

The above argument presupposes that there is good evidence that most psychic and religious experiences are delusional – even the most compelling examples. Which of course there is. Let’s focus on religious experience. We know that:

(i) Religious experiences tend to be culturally specific. Christians experience the guiding hand of Jesus, while Muslims experience Allah. Just like experiences of alien abduction (reports of alien abduction pretty much stop at certain national borders), the character of religious experiences often changes at national borders. In Catholic countries, the Virgin Mary is often seen, but not over the border in a predominantly Muslim country. This strongly suggests that to a significant degree religious experiences are shaped by our cultural expectations – by the power of suggestion (see Piling Up The Anecdotes). And once we know that a large part of what is experienced is a result of the power of suggestion, we immediately have grounds for being somewhat suspicious about what remains.

(ii) Religious experiences often contradict each other. George W. Bush’s gut told him God wanted war with Iraq. However, the religious antennae of other believers – including other Christians – told them God wanted peace. Some religious people claim to know by virtue of a revelatory experience that Christ is divine and was resurrected. Muslims, relying instead on the religious revelations of the prophet Mohammad, deny this. Religious experience reveals that some gods are cruel and vengeful, some even requiring the blood of children (the Mayan and Aztec gods, for example), while others are loving and kind. The religious experiences of some Buddhists reveal there’s no personal God, whereas those of many Christians, Jews and Muslims reveal that there is but one personal God. Other religions have a pantheon of gods. Take a step back and look at the sweep of human history, and you find an extraordinary range of such experiences. Religious revelation has produced a vast hodge-podge of contradictory claims, many of which must, therefore, be false. Even those who believe they have had things directly revealed to them by God must acknowledge that a great many equally convinced people are deluded about what has supposedly been revealed to them.

There are similar reasons for supposing the bulk of psychic experiences are also delusional. What is revealed to psychics is often wrong, often contradicted by what other psychics claim, and so on.

For these reasons, then, it’s not reasonable for me to take my psychic or religious experience at face value – not even if it’s very vivid and convincing. It might be genuinely revelatory. But, under the circumstances, it would be rather foolish of me to assume that it is. Those who, like George W. Bush, place a simple trusting faith in their gut, or wherever else they think their sensus divinitatis is located, are being irresponsible and foolish.

Notice that it would be particularly foolhardy for, say, someone who believes in an all-powerful, all-knowing and all-good God, but who is confronted with the evidential problem of evil, to sweep the problem to one side, saying, “But look, I just know in my heart [or gut, or wherever] that my God exists!” While it might remain a theoretical possibility that they do “just know”, it’s certainly not reasonable for them to maintain this – not if they have been presented with both (i) good evidence that many such experiences are delusional, and (ii) powerful empirical evidence that what they believe is false. To insist one “just knows” under these circumstances is very unreasonable indeed.

The common core of religious experience – “ineffable transcendence”?


Some will say that it is unfair to lump all religious experiences together. There is a certain kind of experience - the sort enjoyed by the mystics of many different religions down through the centuries - that is essentially the same. What is this experiential common denominator? According to Karen Armstrong, it is an experience of “indescribable transcendence”. As we saw in Moving The Goalposts, Armstrong’s view is that “God” is merely a symbol for this transcendence. Once we strip away the cultural artifacts peculiar to the different mainstream religions, we find they all have this common, experiential core.

According to Armstrong, such experiences of indescribable transcendence typically don’t just happen. Usually, they emerge only after subjects have committed themselves over an extended period of time to a particular sort of lifestyle – a religious lifestyle. Religion, on Armstrong’s way of thinking, is not a body of doctrine (how could it be, if that towards which religion is orientated is ineffable?) but an activity: the kind of activity that produces experiences of this sort. Religion, says Armstrong, is

a practical discipline, and its insights are not derived from abstract speculation but from spiritual exercises and a dedicated lifestyle.

By engaging in certain religious practices and forms of life, maintains Armstrong, people can come to live “on a higher, divine or godlike plane and thus wake up their true selves.”

Some noteworthy features of religious practice

Suppose, then, that having immersed themselves in such a lifestyle, someone claims to “just know” that there is indeed such an ineffable transcendence. Is it reasonable for us, or for them, to suppose they’ve achieved awareness of Armstrong’s “sacred reality”?

I don’t believe so. As Armstrong acknowledges, religious practice takes many forms involving a variety of activities. An interesting feature of many of these activities is that we know they can induce interesting – sometimes rather beneficial – psychological states, even outside of a religious setting. Let’s look at some examples:

Meditation and prayer. Consider meditation. It has proven effects on both our psychology and physiology. It can reduce stress, lower blood pressure and induce feelings of calm and contentment. Even atheists meditate to gain these benefits. Prayer can be a form of meditation, of course. Sometimes prayer and other devotional activities are accompanied by repetitive swaying or rocking motions known to induce a sense of well-being – the so-called “jogger’s high” (though this is not, as is widely believed, a result of releasing endorphins).

Isolation. Isolation can have a powerful psychological effect on people. It can render them more easily psychologically manipulated (which is why isolation is a favourite tool of interrogators) and can produce hallucinations and other altered states of consciousness. Many religions encourage periods of isolation for spiritual purposes – several days in the wilderness say.

Fasting. Fasting, too, is known to produce some peculiar psychological states, including hallucinations, even outside of a religious setting.

Collective singing/chanting. Coming together in a large group to chant or sing can also be a very intoxicating experience, as anyone who has sat on a football terrace can testify.

Architecture. If you have ever entered a large cave by torchlight, you will know that it too can induce a powerful emotional experience. The darkness, echoing sounds, and glimpses of magnificent structures make one fearful and yet excited all at the same time – leading us to start talking in whispers. The echoing grandeur of many places of worship has a similar psychological effect.

Giving. Helping others in face-to-face situations can be an immensely powerful psychological experience – often a deeply gratifying and positive one, whether or not you happen to do it in a religious setting.

Ritual. Engaging in ritualistic activity often has a calming and beneficial effect, whether or not performed within a religious setting. For example, sportsmen and women often engage in rituals before competing (and can become very disturbed if for some reason the ritual cannot be performed because e.g. their lucky shirt has been lost).

Religious practice typically involves at least some, and usually many, of these activities – activities we know can have a powerful psychological effect even outside of any religious setting. If people collectively engage in such activities with intensity of purpose over a long period of time, this might very well have a marked psychological effect. It might well produce some interesting, and quite possibly beneficial, psychological states.

If we then mix into this heady and intoxicating brew the suggestion that what people are experiencing or becoming psychologically attuned to as a result of long-term engagement in such a regime is some sort of ineffable transcendence, then, given the power of suggestion (see Piling Up The Anecdotes, p xxx), many will probably become quite convinced that this is what’s going on.

The experiences and insights that, as a result of the regime, then coalesce under the label “God” will no doubt be complex and difficult to articulate. There probably is a sense in which someone who has never been through such a regime will not fully appreciate what the experience is actually like for the subject, “from the inside” as it were. Those who have had such an experience will no doubt struggle to communicate its character in much the same way that someone who has been through, say, a war or childbirth may struggle. They may well have to resort to poetry or music or other art forms in order to convey its unique intensity.

Armstrong says,

[i]t is clear that the meditation, yoga and rituals that work aesthetically on a congregation have, when practised assiduously over a lifetime, a marked effect on the personality – an effect that is another form of natural theology. There is no ‘born again’ conversion, but a slow, incremental and imperceptible transformation… The effect of these practices cannot give us concrete information about God; it is certainly not a scientific ‘proof’. But something indefinable happens to people who involve themselves in these disciplines with commitment and talent. The ‘something’ remains opaque, however, to those who do not undergo these disciplines…

While it may indeed be difficult for those of us that have not been through such a process to appreciate exactly what it’s like to be in the kind of psychological state it can produce, surely we have pretty good grounds for doubting that what is experienced is some sort of transcendent reality. Given what we know about human psychology, it’s likely that people put through such an intense regime over an extended period of time will think they have become attuned to such a reality anyway, whether or not any such reality exists, and whether or not they have obtained any sort of genuine insight into it.

I don’t wish to deny there is value in engaging in meditation, yoga, and so on. It may well be that those who engage in such practices gain some valuable insights into themselves and the human condition as a result. Certainly, there may be some positive psychological effects, such as a lasting sense of peace and contentment, from determinedly engaging in such activities over a long period of time, effects that will undoubtedly be magnified by the accompanying thought that what they are becoming attuned to is “God”.

But the claim that they have thereby become attuned to some sort of “sacred reality” is dubious to say the least. Surely, given our understanding of human psychology, by far the best explanation of what people experience after having engaged in religious practice with dedication over long periods of time is not that they have become attuned to some sort of ineffable transcendence, but that they have succeeded in altering their own psychology by fairly well-understood mechanisms common to both the religious and non-religious spheres, and that they have then mistakenly interpreted this alteration as their becoming attuned to such a reality.

Conclusion


As we have seen, “I just know” isn’t always an unreasonable thing to say. But sometimes it is. Indeed, sometimes it’s a foolish thing to say.

Consider these two examples:
… sometimes I see images and I just know something terrible has happened to them.
Psychic Margaret Solis, quoted in “The Scots psychic helping Hollywood stars – and hunting down murder victims”, Daily Record, 14th September 2010

How do I know when God is talking to me? I just know. Internet comment.

Suppose these individuals claiming to “just know” can’t provide any sort of publicly available evidence or rational argument to back up what they claim to “just know”. We have seen that, if reliabilism is true, then the fact that they don’t have any such evidence or argument does not rule out the possibility that they “just know”. However, given what we, and presumably they, know about the unreliability of such psychic and religious experiences generally, surely it’s not reasonable for either us, or them, to take such seemingly revelatory experiences at face value. It’s not reasonable for them to insist they “just know”.