(ADAPTED FROM MY COMPANION TO PHILOSOPHY)
This functions as an introduction to reason and argument, what it means to be 'rational', inductive and deductive reasoning, etc. It was illustrated, hence there are 'captions' included in the text, which should be fairly self-explanatory.
This is in three parts: 1. Reasoning, 2. Fallacies, 3. Thinking Tools.
TEXT BOX What is an argument? Outside of philosophy, the word “argument” is used in a variety of ways. An argument in a bar may involve little more than people hurling insults at each other. In philosophy, the word tends to be used more specifically. Usually, when philosophers talk about an argument, they are referring to a sequence of one or more premises and a conclusion. The premises are supposed rationally to support the conclusion.
Arguments can be simple. But they can also be highly complex. Often, a philosophical book or treatise consists of one big argument made up of a series of smaller ones, which may in turn involve further subsidiary arguments, and so on. In order to assess the overall argument, you need to check whether each of the component arguments works properly.
In philosophy, we often want to construct a reasoned case for believing something, or to spot where someone has made an unreasonable move. In this chapter, we are going to look more closely at the use of reason. We begin by asking: what is reason, and what makes a belief reasonable?
Perhaps the most obvious way of showing that a claim is reasonable is by producing a sound argument in its support.
Such an argument is an inference involving one or more premises and a conclusion, where the premises are supposed rationally to support the conclusion. Here is a simple example.
Tom is a human
All humans have a brain
Therefore: Tom has a brain
This is a deductive argument. In a deductive argument, the premises are supposed logically to entail the conclusion. When the premises entail the conclusion, we say the argument is valid.
The above argument is valid. Necessarily, if the two premises are true, then the conclusion is true. Someone who asserts the premises but denies the conclusion is involved in a logical contradiction.
Of course, even if a deductive argument is valid, its conclusion may not be true. Consider the following example:
Elvis Presley is alive
Anything alive resides in Brazil
Therefore: Elvis Presley resides in Brazil
This argument is valid. But its conclusion is false. In order to confirm that the conclusion is true, we need to ensure two things – we need to ensure both that the argument is valid and that its premises are true.
Deductive argument is not the only legitimate form of inference. There is also inductive argument. In an inductive argument the premises do not, and are not intended, logically to entail the conclusion. They are supposed merely to provide rational support to the conclusion. Here is a classic example:
Peach number 1 contains a stone
Peach number 2 contains a stone
Peach number 3 contains a stone
…
Peach number 1000 contains a stone
Therefore: all peaches contain stones
This argument contains one thousand premises (I have not bothered to list them all) and a conclusion. Obviously, the premises do not logically entail the conclusion. There is no logical contradiction involved in claiming that although the first one thousand peaches I observed contained stones, the next one won’t. Still, despite not being deductively valid, we suppose inductive arguments are able to provide good grounds for believing their conclusions are true. Surely, the more peaches I observe that contain stones, the more reasonable it is for me to believe they all contain stones (unless, of course, I happen to discover one without a stone – that would immediately falsify the hypothesis that all peaches contain stones).
The above argument is an example of enumerative induction – we observe a number of Xs that are Y, and then generalize to the conclusion that all Xs are Y (or that the next X will be Y).
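The logic of enumerative induction, and the way a single counterexample falsifies a universal generalization, can be sketched in a few lines of Python (the peach observations below are invented for illustration):

```python
# A sketch of enumerative induction: the universal claim "all peaches
# contain stones" survives any number of confirming instances, but a
# single counterexample falsifies it outright.

def hypothesis_holds(observations):
    """Return True while every observed peach contains a stone."""
    return all(observations)

observed = [True] * 1000            # 1,000 peaches, each with a stone
print(hypothesis_holds(observed))   # True: induction supports the claim

observed.append(False)              # one stoneless peach turns up
print(hypothesis_holds(observed))   # False: the hypothesis is falsified
```

Note the asymmetry: no number of confirming instances proves the generalization, but one disconfirming instance refutes it.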
Enumerative induction is not the only form of inductive reasoning. In an argument to the best explanation, the existence of something may be posited as the best available explanation of what has been observed, like so:
X is observed.
The existence of Y provides the best available explanation of X
Therefore: Y exists.
Suppose I am a detective investigating the scene of a murder that took place only moments ago. While studying the room, I notice a pair of shoes poking out from under a twitching curtain. Under these circumstances, it may well be reasonable for me to conclude that there is someone standing behind the curtain. There’s no logical guarantee there is anyone there, of course – perhaps the shoes are empty and the curtain is blown by the wind. Still, that there is someone hiding behind the curtain may provide the best available explanation of what I can observe. In which case, it is reasonable for me to conclude that there is someone standing there.
Inductive reasoning is particularly important to the empirical sciences. Scientists construct theories that are supposed to hold for all places and all times, including the distant future and past. But they cannot themselves directly observe all times and places. So they must rely on what they can observe in order to justify their claims. It is inductive reasoning that allows them to do this.
For example, scientists may note that every action they have observed has been accompanied by an equal and opposite reaction, and then use enumerative induction to draw the conclusion that all actions are accompanied by equal and opposite reactions. Or they may observe certain experimental results, note that the existence of a theoretical particle such as the electron provides the best available explanation of those results, and so conclude that electrons exist. That would be a scientific application of argument to the best explanation.
You can see that cogent inductive and deductive arguments have a truth-preserving quality to them. If you feed true premises into a valid deductive argument, you are guaranteed to arrive at a true conclusion. If you feed true premises into a sound inductive argument, you are likely to arrive at a true conclusion. For those of us who like to believe what is true, this is a nice feature.
Notice that if you are unwilling to accept the conclusion of an apparently cogent inductive or deductive argument, the onus is on you to do at least one of two things. You might fault the argument by showing that it is invalid (if a deductive argument) or unsound (if inductive). Or you might try to show that one or more of the premises is either false or at least inadequately justified. Or you might try to do both these things.
Other ways to be “reasonable”?
A belief supported by a cogent argument may be reasonable. But is that the only way in which beliefs can qualify as reasonable? After all, if a belief is reasonable only if it is inferred by means of a cogent argument from other reasonable beliefs, those other beliefs will in turn have to be inferred from further reasonable beliefs, and so on. You can see that a regress threatens here – in order to show that even one of our beliefs is reasonable, we would have to show that an infinite number are.
How might we avoid this regress? One possibility would be to claim that certain beliefs are non-inferentially justified. Suppose I believe that there is an orange on the table in front of me for the simple reason that I can see it there. We would ordinarily consider my belief reasonable despite the fact that I do not infer the presence of the orange – I just directly observe it is there. True, it is possible I am mistaken about there being an orange on the table (perhaps I am hallucinating or dreaming). But surely, despite that possibility, my belief is still very reasonable indeed.
So it seems some beliefs can be reasonable despite not being inferred. In particular, my belief might be reasonable because, on observing the situation, it just clearly seems to be true, and I have no reason to suspect I'm being duped or misled in some way.
Text box: Justifying reason
We believe that the use of inductive and deductive inference is reasonable. Indeed, we believe these forms of reasoning are, in the case of valid deductive arguments, guaranteed, and in the case of sound inductive arguments, at least likely, to lead us to true conclusions, given that we start with true premises.
But what, in turn, is the justification for believing that these forms of reasoning are themselves reliable roads to truth? If, in order to justify them, we need to construct a cogent argument in their support, then we will be using reason to justify itself. But that, surely, is a circular justification and so no justification at all. We can no more use reason to justify reason than we can justify trusting a second-hand car salesman by pointing out that he himself claims to be trustworthy.
But then how might reason be justified? One possible solution would be to claim that the reliability of these various forms of reasoning can be shown non-inferentially. In the very simplest cases, that our forms of reasoning are at least likely to lead us to true conclusions can just directly be seen.
Reason as a filter
One of the ways in which we can apply reason is as a filter. You might think of your mind as a basket into which all sorts of beliefs might tumble – from sensible ones such as that the Earth is round to ridiculous ones such as that Elvis lives or that the Belgians are the secret rulers of the universe. By applying your powers of reason to these various beliefs – by subjecting them to critical scrutiny – you can filter them, allowing through only those beliefs that have at least a good chance of being true.
How demanding should this filter be? Descartes famously decided to subject all his beliefs to critical scrutiny, allowing through the filter only those that could not be doubted. Of course, few if any beliefs are indubitable. A less stringent, but still very robust, requirement would be to allow through only those beliefs that have a high probability of being true.
A fallacy is an error in reasoning. Often, the error is not obvious, with the result that people are easily duped by the argument. There is a whole series of more-or-less plausible looking arguments that turn out, on closer inspection, to be fallacious. This section looks at nine classic examples.
In a fallacious argument, the premises do not rationally support the conclusion. In a fallacious deductive argument, the premises do not logically entail the conclusion. In a fallacious inductive argument, the premises do not inductively support the conclusion. Learning to spot fallacies is an important philosophical skill. In fact some of the best-known philosophical arguments involve fairly straightforward fallacies. We will shortly see some examples.
Slippery slope fallacy
We are often warned against stepping onto “slippery slopes” – dangerously greasy slides that lead down to where the really bad stuff lies. Unfortunately, these warnings often over-estimate the risk of the “slide”. Unless the proponent of a “slippery slope” argument can provide good grounds for supposing such a slide is inevitable, or even just likely, their argument is fallacious.
Here is a simple example. Suppose I ask you to lend me one pound. Your friend warns you against lending me the money on the following grounds:
If you lend Stephen one pound today, tomorrow it will be two pounds, then ten pounds. Pretty soon he will owe you thousands!
Obviously, if you lend me one pound today, you can still easily refuse to lend me two pounds tomorrow or ten next week. The slide from owing one pound to owing thousands is not inevitable. In fact it is not even likely. As it stands, this is a fallacious use of the “slippery slope”.
It is possible this argument might be salvaged. Perhaps your friend can show both that I am an inveterate borrower and that you find it hard to say “no” once you have said “yes”. In that case, their warning not to lend me even one pound begins to look more credible. But your friend does need to be able to provide these additional grounds. Without them, the warning is hollow.
But what about the following argument? Does it commit the “slippery slope” fallacy?
If we allow a couple to select the sex of their baby today, tomorrow we will have to allow selection for eye and hair colour. Pretty soon, we will have to permit “designer babies.”
Yes, it does, if no justification is provided for supposing that we cannot or will not simply stop at some point along the “slide” from selection of sex to full-blown “designer babies”.
Slippery slope arguments often crop up in connection with the legalizing of things, such as recreational drugs, euthanasia, genetic engineering, and so on.
Suggest that the recreational use of marijuana should be legalized, for example, and many will warn that this would be the first step onto a slippery slope that will quickly lead us on to legalizing heroin and crack cocaine. Perhaps such a slide is likely. But the onus is on the proponent of this argument to show that. If they cannot show it, they too have committed the slippery slope fallacy.
There are degrees of slipperiness, of course. Even where there is some tendency for a slide to occur, a slip might still easily be avoided. Slippery slope arguments often obscure the fact that there may be effective ways of halting any skid.
Other phrases that may indicate the use of a slippery slope argument are “thin end of the wedge”, “opening the floodgates” and “give them an inch and they’ll take a mile”. In each case, the result of even a small move in a particular direction is often just assumed to be a dangerous and probably unstoppable slide. Where that is the case, the argument is fallacious.
The Gambler’s Fallacy
TEXT BOX The lottery fallacy. Another common, gambling-related reasoning error is the lottery fallacy. People sometimes conclude that because a particular event would otherwise be very improbable, the fact that it did occur makes it probable that someone or something must have somehow deliberately produced it.
For example, suppose I buy one of a million lottery tickets. My ticket wins. That leads me to conclude that someone or something must have arranged for me to win. I conclude I must have a guardian angel who organized the windfall for me.
But of course, this is faulty reasoning. I have no justification at all for supposing someone fixed the lottery in my favour. Whichever ticket had won, that win would have been just as unlikely. Given there was bound to be a winner, there was bound to be an extremely unlikely event. There are no grounds for believing that the fact that the one-in-a-million winner is me is anything more than an amazing coincidence.
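The arithmetic behind the lottery fallacy can be spelled out in a short Python sketch (using the hypothetical one-in-a-million lottery from the example above):

```python
from fractions import Fraction

tickets = 1_000_000

# The chance that my particular ticket wins is tiny...
p_mine = Fraction(1, tickets)

# ...but given that a winner is always drawn, the chance that *some*
# ticket wins is certainty: a "one-in-a-million event" was guaranteed
# to happen to somebody.
p_someone = tickets * p_mine

print(p_mine)     # 1/1000000
print(p_someone)  # 1
```

So an event that is astonishingly improbable for any named individual can still be certain to happen to someone or other – which is all that actually occurred.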
Here is a simple example of the gambler’s fallacy.
Jenny: Still buying those scratch cards?
John: Yes. I’ve been playing regularly for three years and I haven’t won a thing.
Jenny: So why do you bother?
John: Well, as I haven’t won anything yet, I must be due a win soon!
In this version of the fallacy, someone takes the probability of an event A happening over a period of time, notices that, over the first part of that period, the actual incidence of A is much lower than expected, and concludes that A is therefore much more probable over the rest of the period. They predict a short-term increase in the probability of A to “even things up” over the longer term.
Here is another example: someone rolls a die 30 times and happens not to get a single six. They conclude they are now much more likely to get a six on the next roll. The truth, of course, is that the probability of their getting a six remains exactly the same – one in six.
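The point about independence can be checked empirically. Here is a small Monte Carlo sketch in Python: among simulated runs of ten sixless rolls, the frequency of a six on the very next roll still comes out at roughly one in six (ten rolls rather than thirty are used just to keep the simulation quick):

```python
import random

random.seed(42)  # fixed seed so the run is repeatable
droughts = sixes_next = 0

# Collect 10,000 cases where ten consecutive rolls contain no six,
# then see how often the eleventh roll is a six.
while droughts < 10_000:
    rolls = [random.randint(1, 6) for _ in range(10)]
    if 6 in rolls:
        continue                  # not a sixless run; discard and retry
    droughts += 1
    if random.randint(1, 6) == 6:
        sixes_next += 1

print(sixes_next / droughts)      # close to 1/6 (about 0.167)
```

If the gambler's fallacy were correct, the printed frequency would be well above one in six; because the rolls are independent, it is not.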
The fallacy can work the other way too: people sometimes assume that a higher than expected incidence of A must result in a short-term lowering of the probability of A to “even things up”, as in this case:
Ruth: Doing the lottery again this week?
John: Yes. What numbers are you going to pick?
Ruth: Well, the numbers 6, 9 and 23 have come up a lot recently, so I’ll be avoiding them - they aren’t likely to come up again for a while.
The gambler’s fallacy is common. Stand next to a lottery outlet for a little while and it won’t be long before you hear someone say they are “due” a win, that they won’t be silly enough to pick the same numbers that won last week, and so on.
The fact is it makes no difference what numbers have come up before. Each week the probability of any particular sequence of numbers winning the UK lottery is always exactly the same: about 14 million to one.
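That “about 14 million to one” figure can be checked directly: it corresponds to the classic UK format of choosing six numbers from forty-nine, where every combination is equally likely in every draw, whatever has come up before:

```python
import math

# Number of ways of choosing 6 numbers from 49: the odds against any
# particular ticket are the same in every single draw.
combinations = math.comb(49, 6)
print(combinations)   # 13983816, i.e. roughly 14 million to one
```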
Appeal to Authority
We are often justified in believing something because an authority on the subject tells us that it is true. If a car mechanic advises you to put water and not oil in your car radiator, you would be wise to follow her advice.
But sometimes such “appeals to authority” are suspect. Here are four examples:
I believe that Supawhite toothpaste cleans whiter than any other brand.
Because scientists working for the Supawhite Corporation tell me so.
I am going to find my perfect partner soon
Why are you so sure?
I consulted a fortune cookie
I believe homeopathy can cure serious diseases
Why do you believe that?
Because Dr Smedley told me
Is Dr Smedley a medical expert?
No, he’s a professor of mathematics
Camel dung face packs are an effective beauty treatment
How do you know?
Joe Sopwith, actor and pop star, advertises them on TV
In the first example, the authority in question may be untrustworthy. To what extent can scientists working for a particular company be trusted to give unbiased advice about its products?
In the second and last examples, the “authorities” in question are dubious. What grounds do we have for supposing that fortune cookies are a reliable source of information about the future? And why should a celebrity like Joe Sopwith be any better informed about the effectiveness of camel dung face packs than anyone else?
In the third example, it is true that the person consulted really is an authority. Unfortunately, they are not an authority in the relevant area. Dr Smedley’s area of expertise is maths, not medicine. There is no reason to suppose that Dr Smedley’s views about homeopathy are any better informed than are yours or mine.
The moral is that, when appealing to an “authority”, you need to check several things, including:
Is the person in question really an authority?
Are they an authority on the relevant subject?
Can we be confident this authority is not biased?
Is the view of this authority consistent with that of the majority of competent authorities in this area?
If the answer to any of these questions is “no”, you would be wise not to place your trust in the authority in question.
Caption: road sign showing fork in road. In an example of false dilemma, we are presented with just two options when there are, in truth, other alternatives.
Caption: advert products a and b: Salespeople sometimes use false dilemma: “Your choice is to either buy our product A, or inferior product B. So you just have to buy A!” There may be other alternatives, such as buying neither.
False Dilemma
It is common to argue like this:
Either A or B
Not A
Therefore: B
This is often a perfectly acceptable form of argument, as in this case:
Either Peter has a pilot’s license or else Peter is not permitted to pilot a plane.
Peter has not got a pilot’s license.
Therefore, Peter is not permitted to pilot a plane.
This argument, on the other hand, is not acceptable:
Either I am a giraffe or I am a hippo
I am not a hippo
Therefore, I am a giraffe
What is the problem with the second argument? The first premise presents us with two options both of which are false. In the fallacy of false dilemma we are similarly presented with just two options when there are more. We are told that the only alternatives are A or B. The possibility of choosing C is entirely ignored.
Politicians sometimes use false dilemma to try to force us into making a decision we do not in fact have to make. For example, they may say:
Either we invade Zenda or we allow Zenda to take over the world.
We don’t want Zenda to take over the world, do we?
So we should invade Zenda.
It may not be true that Zenda is planning to take over the world. If so, the choice with which we are presented is a false one. But notice that, even if Zenda is intent on world domination, there may be other effective ways of dealing with such rogue states. Invasion is unlikely to be the only option.
Politicians are not the only culprits when it comes to false dilemma. Customers are often maneuvered into making bad decisions by a salesperson’s use of false dilemma:
You can either buy Supawhite toothpaste for a pearly-white smile, or you can make do with yellow teeth.
You don’t want yellow teeth do you?
So you have to buy Supawhite!
No doubt there are other toothpastes that are just as effective – perhaps even more effective – than Supawhite. These alternatives have been conveniently airbrushed out by the salesperson – leaving you with a false dilemma.
The moral is that, when you seem forced to choose between two alternatives, it is often worth checking whether they really are the only available options. Are you being railroaded by false dilemma?
The Post Hoc Fallacy
Here is a classic example of the post hoc fallacy:
I had been worrying about my driving test. So John bought me a rabbit’s foot for luck. I took the foot and passed with flying colours. So you see, the rabbit’s foot worked! I am going to take it to all my other exams to help me pass them too.
In the post hoc fallacy, someone concludes that because one event happened after another, the first is likely to be the cause of the second.
Obviously, the mere fact that one thing happened after another does not normally give us much reason to suppose that the two events are causally connected. Suppose I turn on my toaster. Shortly afterwards a volcano erupts on Mars. Did my turning the toaster on cause the Martian eruption? Of course not. There is no reason at all to suppose these two events are causally connected.
Here is another example:
John’s psychic healer gave him a twig to chew on. And he got better! So you see, chewing on that twig really did make him well. I am going to start visiting the same psychic healer myself!
Again, the fact that one thing happened after another is taken to be good evidence of a causal connection.
Of course, there may be a causal connection between two consecutive events. Perhaps John’s twig-chewing really did make him better. Perhaps rabbit feet can magically help us pass exams. The point is that a single “one off” observation does not remotely justify these claims.
The moral is: don’t leap to conclusions. Noticing that one event occurs immediately after another might give us grounds for investigating whether the events are causally related. But it does not, by itself, make it rational to believe there is any such a connection.
Superstitious people tend to be particularly prone to the post hoc fallacy. But almost all of us fall for this fallacy on occasion. So beware.
Affirming the consequent: Joe’s DIY mistake
TEXT BOX. Modus Ponens. Arguments of this type:
If A then B
A
Therefore: B
are valid. Here’s an example:
If the power is off, then the light won’t come on
The power is off
Therefore the light won’t come on
This argument form is called Modus Ponens.
In a conditional of the form “If A then B”, A is called the antecedent and B the consequent. In the fallacy of affirming the consequent, the second premise of the argument affirms the consequent (rather than the antecedent, as in Modus Ponens), like so:
If A then B
B
Therefore: A
Hence the fallacy is called affirming the consequent.
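Because validity is purely a matter of form, both schemas can be checked mechanically by brute force over every truth-value assignment. Here is a minimal Python sketch, assuming the material reading of “if … then …”:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is
    # true and b is false.
    return (not a) or b

def valid(premises, conclusion):
    """A form is valid iff no assignment of truth values makes every
    premise true while making the conclusion false."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False
    return True

# Modus Ponens: If A then B; A; therefore B
print(valid([lambda a, b: implies(a, b), lambda a, b: a],
            lambda a, b: b))   # True (valid)

# Affirming the consequent: If A then B; B; therefore A
print(valid([lambda a, b: implies(a, b), lambda a, b: b],
            lambda a, b: a))   # False (A false, B true is a counterexample)
```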
Joe is busy rewiring his house. He is about to touch one of the wires when he suddenly wonders whether he remembered to turn off the power. He looks up and sees that, though the light switch is on, the light remains off. So Joe reasons like this: if the power is off, the light won’t come on; the light won’t come on; so the power is off. Joe touches the wire and gets a nasty shock. Why?
Joe has just been electrocuted by a bit of bad reasoning. He reasoned like this:
If the power is off, the light won’t come on
The light won’t come on
Therefore the power is off
Joe’s argument has the following form:
If A then B
B
Therefore: A
Arguments of this form aren’t valid. True, the light isn’t on. But perhaps that is because it has a faulty bulb. It doesn’t follow that the power is off.
Joe’s argument resembles a valid form of argument called Modus Ponens, which is why he got confused, with nasty consequences. This error is very common. It’s called the fallacy of affirming the consequent. A recent study indicates that over two thirds of people without any training in informal logic regularly commit this fallacy. In all probability, you sometimes make the same sort of mistake as Joe. To avoid this type of faulty reasoning, keep an eye out for “if … then …” claims and make sure the logic of the argument runs in the right direction. That way you won’t end up fried like Joe.
QUOTE: Oak trees come from acorns. Acorns are small and shiny. Therefore oak trees are small and shiny
CAPTION: An example of the genetic fallacy: “If the egg has a hard shell, and the chicken came from the egg, the chicken must have a hard shell too.”
The Genetic Fallacy
In the genetic fallacy, it is argued that if one thing B has its origin in another thing A, any properties possessed by A are also likely to be possessed by B. Here are two simple examples:
Eggs have hard shells
Chickens come from eggs
So chickens have hard shells too
Oak trees come from acorns
Acorns are small and shiny
Therefore oak trees are small and shiny
The philosopher Friedrich Nietzsche stands accused of committing the genetic fallacy. Nietzsche argues that modern Christian morality has its roots in the “slave morality” of ancient Rome’s slaves, a morality born of the resentment the slaves felt towards their masters (the slaves effectively reversed what their masters believed was of value, making weakness a strength, strength a weakness, and so on).
Suppose Nietzsche is right about the genesis of Christian morality – that it originated in feelings of resentment. Does that discredit Christian morality?
Nietzsche seems to assume that pointing out a defect in the origin of a thing discredits the thing itself. But that is usually fallacious reasoning. Here are two more examples:
Fred’s father was a Nazi
So Fred must be a Nazi himself
Democracy in Zenda was born of a violent and bloody struggle
So Zenda’s democracy must be a bad thing
The fallacy is also committed when it is assumed that if something had its origin in something good, that thing must itself be good. For example:
Hitler’s parents were loving and kind
So Hitler must have been loving and kind
The Klingons’ terrorist activity is the result of a legitimate grievance
Therefore the terrorist activity must itself be legitimate
Sometimes it is possible to draw such an inference, but only rarely. For example, this inference:
John is a reliable source of information
This claim came from John
Therefore this claim is reliable
is reasonable. I will leave you to figure out why.
Leibniz’s Law and the masked man fallacy
TEXT BOX In his Meditations, Descartes argues that mind and body are distinct substances capable of independent existence. In order to draw that conclusion, Descartes applies Leibniz’s law. One of Descartes’ arguments appears to be a version of the argument from doubt, discussed here. Another argument concerns spatial extension. Descartes points out that physical substances are spatially extended. His mind, on the other hand, appears not to be spatially extended. Descartes then applies Leibniz’s law something like so:
My mind is not spatially extended
My body is spatially extended
Therefore, my mind and my body are not identical
What, if anything, is wrong with this application of Leibniz’s law?
Image: Cary Grant (preferably in Hitchcock’s North by Northwest). Caption: Cary Grant is identical with Archibald Leach. They are one and the same person.
QUOTE: “Identical objects must share all the same properties.” Leibniz’s law.
MAIN TEXT Philosophers and scientists often consider identity claims. Let’s begin with a couple of scientific examples.
One important ancient astronomical discovery was that Hesperus, the evening star, is identical with Phosphorus, the morning star. What appeared to be two distinct heavenly objects turned out to be one and the same object – the planet we now call “Venus”.
Scientists also claim that certain properties are identical. For example, it is claimed that heat is molecular motion. They are one and the same property.
We also make identity claims in everyday life. You might discover, for example, that Cary Grant and Archibald Leach are one and the same person, or that Chomolungma and Mount Everest are one and the same mountain.
The philosopher Leibniz noted that if two objects are identical, then any property possessed by one object will also be possessed by the other. Take Mount Everest and Chomolungma, for example. If Mount Everest has the property of being 29,000 feet high, then Chomolungma will have that property too. The principle that identical objects must share the same properties is often referred to as Leibniz’s law.
Leibniz’s law provides us with a useful tool. Suppose an explorer discovers what he believes to be two separate mountains. But then, on returning home from his travels, he wonders whether it wasn’t just the same mountain seen from two different angles. How might the explorer establish that he is the discoverer of two mountains, and not just one?
The explorer might apply Leibniz’s law. If identical objects share all the same properties, then all the explorer has to do is find a property possessed by one mountain not possessed by the other. That would show that the number of mountains he discovered is two, not one.
Suppose, for example, that the explorer measured and recorded the height of both mountains. He discovered that while mountain A was 5,000 metres high, mountain B was much higher. Then the explorer can apply Leibniz’s law like so:
Mountain A is 5,000 metres high
Mountain B is not 5,000 metres high
Therefore mountain A is not identical with mountain B
This is a cogent argument. The mountains differ in at least one of their properties. So they cannot be identical.
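The explorer’s use of Leibniz’s law can be sketched as a simple property comparison (the mountains here are toy objects with invented properties):

```python
# Leibniz's law as a distinctness test: if x and y differ in even one
# property, they cannot be one and the same object.

mountain_a = {"height_m": 5000, "has_glacier": True}
mountain_b = {"height_m": 7200, "has_glacier": True}

def differing_properties(x, y):
    """Return the set of property names on which x and y differ."""
    return {k for k in x.keys() | y.keys() if x.get(k) != y.get(k)}

diff = differing_properties(mountain_a, mountain_b)
print(diff)              # {'height_m'}
print(bool(diff))        # True: the mountains are distinct
```

Note the direction of the test: a difference in properties shows distinctness, but, as the masked man fallacy later in this section illustrates, the check is only trustworthy for properties that do not involve anyone’s psychological attitudes.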
Philosophers also consider identity claims. And they also regularly press Leibniz’s law into service.
Take substance dualism, for example. Substance dualists deny that mind and body are identical. According to substance dualists, mind and body are distinct substances capable of independent existence.
How might a substance dualist use Leibniz’s law to make their case? If the dualist can find a property possessed by the mind that is not also possessed by the body, or vice versa, that would show that mind and body are not identical.
Is there such a property?
Here is one suggestion. I can doubt whether my body exists. I can even entertain the thought that there may not be a physical world at all. In his Meditations, Descartes famously raises the hypothesis that there might be a powerful evil demon intent on deceiving him into believing the physical world exists when it is in truth just an illusion. The demon causes his mind to have experiences which, while they seem to be of trees, houses and even his own body, are wholly deceptive. I do not say that it is at all likely such a demon exists, of course. But it does at least appear to be a possibility. I can, in a similar way, at least entertain the doubt that my body exists.
On the other hand, it seems I cannot doubt that I exist. In trying to doubt that I exist, I think; and by thinking, I immediately demonstrate that I do exist. As Descartes puts it: “I think, therefore I am”.
But then it seems we have discovered a property that my body possesses and my mind lacks. My body possesses the property of being something I doubt exists. My mind lacks this property. But then why can’t we apply Leibniz’s law like so?
My body possesses the property of being something I doubt exists
My mind does not possess the property of being something I doubt exists
Therefore: my mind is not identical with my body
This is a version of what is often called the argument from doubt. Both premises of the argument are true. And, by Leibniz’s law, the conclusion seems to follow. So have we succeeded in proving my mind is not my body?
No. The argument from doubt (or at least this version of it) commits a notorious fallacy called the masked man fallacy. Suppose I witness a robbery. I see a masked man rob a bank. Later, detectives tell me their chief suspect is my father. I am horrified. Surely my father would never do such a thing. So I attempt to prove my father’s innocence in the following way. I point out that the masked man has a property my father lacks. The masked man is someone I believe robbed the bank. My father is not someone I believe robbed the bank. But then, by Leibniz’s law, it seems the masked man cannot be my father:
The masked man is someone I believe robbed the bank
My father is not someone I believe robbed the bank
Therefore: the masked man is not identical with my father
Both premises of this argument are true. Yet the conclusion does not follow. Clearly, my father could still turn out to be the masked man. There is something wrong with this argument. But what?
The answer is that Leibniz’s law does not apply to all properties. It works for properties such as being 5,000 metres high. It does not work for properties such as being someone I believe robbed the bank. More generally, this form of argument does not work whenever the property in question involves someone’s psychological attitude towards a thing.
For example, in the masked man case, I try to show that my father and the masked man are distinct by pointing out that I have an attitude towards one that I don’t have towards the other: I believe one robbed the bank but not the other. But such attitudes are incapable of revealing whether or not the items in question really are distinct. Here are two more examples:
Cary Grant is someone Tom believes starred in “North by Northwest”
Archibald Leach is not someone Tom believes starred in “North by Northwest”
Therefore Cary Grant is not Archibald Leach
Alcohol is widely known to intoxicate
C2H6O is not widely known to intoxicate
Therefore alcohol is not C2H6O
Both these arguments have false conclusions despite having true premises. The problem, again, is that what someone may know or believe or recognise about one thing but not another is not the sort of property one can use to establish that they are not one and the same thing. Both arguments commit the masked man fallacy. So does the version of the argument from doubt outlined above.
“It’s true-for-me!” - The Relativist fallacy
CAPTION (Witchetty grubs.) Maybe some truths are relative. For example, perhaps the claim that witchetty grubs are delicious is true for some Aboriginal Australians, but false for most Westerners.
CAPTION (Polygamy.) Some believe the claim that polygamy is wrong is true relative to mainstream Western culture but false relative to other cultures. This truth, too, is alleged to be relative.
Jane: Belief in fairies is patently false. There’s no evidence to suggest that fairies exist, and plenty of evidence that they don’t. So it’s ridiculous for you to believe in them.
Joe: Well, that fairies exist may not be true for you. But it is true for me!
You may have come across this retort yourself. Perhaps you have just made a very good case for supposing it’s false that there is a community of goblins living in your friend’s biscuit barrel, but then your goblin-fixated friend hits you with “Well, it’s true for me!”
What is “It’s true for me” supposed to mean, exactly? Presumably, Joe means more than that Jane believes one thing about fairies while he believes another. After all, even Jane can agree about that.
Perhaps what Joe is suggesting is that the truth of the claim that fairies exist is relative. There’s no objective truth about fairies – the truth is simply whatever each of us believes it to be. Of course, what is believed does vary from one person to the next. But can truth vary in the same way?
One thing that can confuse here is an unacknowledged slide from what is true about what a person believes to the truth of what they believe. Yes, it may be true that I believe Paris is the capital of Germany. It doesn’t follow that my belief that Paris is the capital of Germany is true.
After all, if truth were relative in that way, I could make any claim true just by believing it. That would be convenient. Suppose I want to be able to fly. I can make it true that I can fly just by believing that I can. But of course the truth about whether or not I can fly, or whether or not fairies exist, is not relative in this way. If Joe is claiming otherwise, he is simply mistaken.
Still, perhaps the truth of some claims is relative. Take the truth of claims about whether or not things are delicious. Perhaps the claim witchetty grubs are delicious is true for some Aboriginal Australians but false for most Westerners. But it’s hardly plausible that the truth of all claims is relative in this way.
Someone commits the relativist fallacy when they say “That may be false for you, but it’s true for me” without providing any grounds for supposing that the truth in question is indeed relative.
3. THINKING TOOLS
Thinking philosophically is a skill, and, like most skills, the more you practise, the better you get. This section introduces a few of the philosopher’s tricks of the trade – tools which, once mastered, can be applied in many different areas of philosophy. There are many such tools – what follows is merely a small sample.
Most of the thinking tools listed in this section warn against a common sort of mistake or error. These include:
- Beware explanations that are really circular, generating a regress.
- Beware category mistakes – wrongly assuming that the sort of thing that can be said of one category of thing can also sensibly be said of another.
- Beware being seduced by pseudo-profundity.
Also included is an outline of a particular approach to answering a certain sort of philosophical question – an approach known as the method of counter-examples. Those new to philosophy are often confused by the method, which is why it receives its own explanation here.
Spotting a regress
CAPTION: “Everything has a cause. Therefore the universe has a cause. Therefore God must exist as the cause of the universe.” But if everything has a cause, what is the cause of God?
CAPTION. Homunculi. If the behaviour of people is explained by the actions of little people running round inside them, then do these little people have even littler people running round inside them?
CAPTION. The Hindu myth of the Earth held up by an elephant held up by a turtle. What holds up the turtle?
INTRODUCTION Vicious regresses are not uncommon in philosophy. Philosophers often seek to explain things. Unfortunately, their explanations sometimes turn out to be circular. They merely take for granted what they are really supposed to be explaining. They simply postpone the question, rather than really answer it. Where that is the case, a regress looms. Spotting where an explanation generates a regress is an important philosophical skill.
Things fall when not supported. Take the sheet of paper on which I am writing. It doesn’t fall because it is supported by a table. Why doesn’t the table fall? Because it’s supported by the Earth. So why doesn’t the Earth fall? Perhaps it was this question that led some ancient Hindu thinkers to suppose that the Earth too must be supported. They concluded that the Earth sits on the back of an enormous elephant. But of course this merely raises a further question: what holds up the elephant? These Hindu thinkers had an answer for that question too – the elephant is supported by a turtle. But then what holds up the turtle?
You can see that a regress looms here. Even if we introduce a giant squirrel to support the turtle, and a giant panda to support the squirrel, and so on, we will never really succeed in explaining why everything doesn’t fall. At each step we merely postpone that mystery.
Of course, we can avoid this regress by insisting that one particular animal is the exception to the rule that everything falls if not supported. The Hindus made the turtle the exception to the rule. It is the one thing that requires no further support.
But if we are going to introduce an exception to the rule, why go so far as the turtle? Why not just make the Earth the exception to the rule instead? But then what justification have we for introducing any of these cosmic beasts? The answer, it seems, is none at all.
Similar regress problems often crop up in philosophy. Take this simple argument for the existence of God.
Everything has a cause. But then what is the cause of the universe? It seems God must exist as the cause of the universe.
You can see straight away that a regress threatens here, too. The first premise says everything has a cause. But if everything has a cause, so does God. It seems we will need to introduce a second God as the cause of the first, a third God as the cause of the second, and so on.
Of course, just as the ancient Hindus made the turtle the exception to the rule that everything falls if unsupported, we might insist that God is the exception to the rule that everything has a cause. But then why not make the universe the exception to the rule, instead? We have not, as yet, been given any more reason to suppose God exists than we have to suppose there exists a giant turtle.
INTRODUCTION Around the globe, audiences sit at the feet of marketing experts, life-style consultants, mystics, cult-leaders and other “gurus” waiting for the next deep and profound insight. Audiences often pay a great deal of money to hear these words of wisdom. So how do these elevated individuals come by their penetrating insights? What is the secret of their profundity? Unfortunately, in some cases, the audience is duped by the dark arts of pseudo-profundity.
TEXT BOX. ORWELL AND THE ART OF CONTRADICTION. Another secret of pseudo-profundity is to pick two words that have opposite or incompatible meanings, and combine them cryptically, like so:
Sanity is just another kind of madness
Life is often a form of death
The ordinary is extraordinary
Try it for yourself. You’ll soon start sounding deep. In George Orwell’s novel Nineteen Eighty-Four, the three slogans of the Party are all examples of this sort of pseudo-profundity:
War is peace
Freedom is slavery
Ignorance is strength
A particularly useful feature of these remarks is that they make your audience do all the work for you. “Freedom is slavery”, for example, is interpretable in all sorts of ways that probably won’t even have occurred to you. Just sit back, adopt a sage-like expression, and let your audience figure out what you mean.
None of this is to say that such cryptic remarks can’t be profound, of course. But given the ease with which they are generated, it’s wise not to be too easily impressed.
MAIN TEXT Actually, the art of sounding profound is fairly easily mastered. You too can make deep- and meaningful-sounding pronouncements if you are prepared to follow a few simple rules.
First, try stating the incredibly obvious. Only do it v-e-r-y s-l-o-w-l-y, with a sort of knowing nod. This works particularly well if your remark has something to do with one of the big themes of life: love, death and money. Here are some examples:
Death comes to us all
We all want to be loved
Money is used to buy things
Try it yourself. If you state the obvious with sufficient gravitas, following up with a pregnant pause, you may soon find others start to nod in agreement, perhaps muttering “How true that is”.
Now that you have warmed up, let’s move on to a different technique – the use of jargon. A few big, not fully understood words can easily enhance the illusion of profundity. All that’s required is a little imagination.
To begin with, try making up some words that have similar meanings to certain familiar terms, but that differ from them in some subtle and never-fully-explained way. For example, don’t talk about people being happy or sad, but about people having “positive or negative attitudinal orientations”. That sounds far more impressive and scientific, doesn’t it?
Now try translating some dull truisms into your newly invented language. For example, the obvious fact that happy people tend to make other people happier can be expressed as “positive attitudinal orientations have high transferability”.
Also, whether you are a business guru, cult-leader or a mystic, it always helps to talk of “energies” and “balances”. This makes it sound as if you have discovered some deep mechanism or power that could potentially be harnessed and used by others. That will make it much easier to convince people that if they don’t buy into your advice, they will really be missing out. For example, publish an article entitled “Harnessing positive attitudinal energies within the retail environment”, and Lo! another modern business guru is born.
Finally, if someone does get up the courage to ask exactly what a “positive attitudinal energy” is, you can always give a definition using other bits of your newly-invented jargon, leaving your questioner none the wiser. If all your jargon is defined using other jargon, no one will ever be able to figure out exactly what you mean (though your devotees may think they know). And the fact that buried within your pseudo-profundities are one or two truisms will give your audience the impression that you must really be on to something, even if they don’t quite understand what it is. So they will be eager to hear more.
Unfortunately, some cult-leaders, business gurus, mystics, life-style consultants, therapists – and even some philosophers – make use of these techniques to generate the illusion that they possess deep and penetrating insights. Now that you can see how easy it is to generate pseudo-profundities of your own, I’m sure you will be rather less impressed the next time some self-styled “guru” suggests that your attitudinal energies need balancing.
Method of Counter-examples
INTRO. Philosophers often ask questions of the form “What is X?” Outside of philosophy, these questions are rarely asked. We assume we can answer them quite easily. Until we try. In fact they are notoriously difficult to answer. One of the most popular approaches to answering them is known as the method of counter-examples.
You’ll find numerous examples of such “What is X?” questions in the dialogues of Plato. In the dialogues, Plato has Socrates ask the citizens of Athens such questions as “What is justice?”, “What is beauty?” and so on. The Athenians usually think they know the answers. They offer definitions that, at first sight, look very plausible. Unfortunately, Socrates is able quickly to reveal the inadequacy of their definitions. One way in which he does this is by employing the method of counter-examples.
To explain the method, let’s begin by applying it to a more mundane example. Suppose we ask “What is a chair?” Most of us think we know perfectly well what a chair is. Isn’t this a straightforward question, easily answered?
So let’s try to answer it. Suppose we begin with:
A chair is an object built to be sat on.
This sounds plausible. Except that, with a little ingenuity, it is possible to think of counter-examples. A wooden bench is built to be sat on. But it is not, strictly speaking, a chair. Or suppose you discover a large chair-shaped boulder that turns out to be perfect for sitting on. You install it in your garden as a piece of garden furniture – a chair. The boulder is now a chair. Yet this boulder, while now a chair, was not, strictly speaking, built to be sat on.
Faced with these counter-examples to our definition, we might attempt to refine it. Perhaps we might try this:
A chair is an object used for just one person to sit on.
This definition gets round our first two counter-examples. A bench no longer qualifies as a chair, because a bench is used to seat more than one person. And by switching from “built to be sat on” to “used for sitting on”, our boulder-chair does now qualify as a chair.
Still, perhaps we can think of counter-examples to this new definition. A bicycle saddle is used for just one person to sit on. But a bicycle saddle is not a chair. To deal with this counter-example, we might refine our definition still further, like so:
A chair is an object with legs that is used for just one person to sit on.
This definition rules out bicycle saddles. For bicycle saddles don’t have legs. Unfortunately it also rules out our boulder-chair, which does not have legs. It rules out inflatable chairs, too.
In order to deal with these new counter-examples, we might try to refine our definition still more. But you can begin to see how difficult it can be to provide a watertight definition of even an object as straightforward and familiar as a chair.
Two sorts of counter-examples
In trying to answer the question “What is a chair?” we have been employing the method of counter-examples. A definition of X is offered. One or more counter-examples to the definition are produced. The definition is then refined in response to these counter-examples. But then more counter-examples are offered. And so on, until we arrive at a satisfactory definition.
Notice that counter-examples may be of one of two sorts:
- We may think up possible examples which, though they do fit the definition of X, are not examples of X. Or,
- We may think up possible examples which, though they do not fit the definition of X, are examples of X.
The first sort of counter-example shows that fitting the suggested definition is not sufficient to qualify something as an X. The second sort of counter-example shows that fitting the suggested definition is not necessary if something is to qualify as an X.
In the chair example we produced counter-examples of both sorts. The boulder counter-example showed that our first definition does not specify a necessary condition for being a chair. The bench counter-example showed that satisfying the suggested definition is not sufficient to qualify something as a chair.
When philosophers consider “What is X?” questions, they are usually interested in providing a definition that succeeds in pinning down the essence of X. Their focus is not on those features that, merely as a matter of fact, all and only the Xs happen to possess. Rather, they want to know what must be true of all and only the Xs. They want to know what will be true of all and only the Xs, not just in the actual situation, but in any possible situation.
But then we can undermine such a definition by coming up with a merely possible counter-example. Take the boulder-chair example discussed above. Perhaps no one has ever used a conveniently shaped boulder as an item of garden furniture. That is irrelevant. As the definition of a chair is supposed to say what is true of all and only the chairs in any possible situation, so a merely possible counter-example will do.
Students new to philosophy are often confused about this. Confronted with the boulder-chair counter-example, they may point out that as a matter of fact there are no boulder chairs. But whether or not any such chair exists is irrelevant to its power as a counter-example. An imaginary boulder-chair is just as effective as a real one, so long as it is possible.
We don’t know, and yet we do…
In the dialogue Laches, Socrates asks the eponymous Athenian general “What is courage?” Laches defines courage as standing firm in battle. But Socrates quickly comes up with a counter-example – someone might stand firm in battle, but simply out of foolish endurance, putting both themselves and others in danger. That would not be courage. A genuinely courageous person knows both when to stand firm and when to retreat.
After several more abortive attempts to define courage, Socrates concludes that, though there must be some essential feature common and peculiar to all acts of courage in virtue of which they are courageous, we remain ignorant about what this essential feature is. It seems that the “essence” of courage is hidden.
Yet the method Socrates employs in order to try to show this – the method of counter-examples - suggests that, at some level, we do possess this knowledge.
After all, Laches is able to recognise that someone who foolishly holds fast in battle is not truly courageous. He recognises that such a person is a counter-example to his definition. But then Laches must, at some level, already know what courage is. If Laches didn’t know what courage was, how would he be able to recognise that he has been confronted with a counter-example?
So it seems as if the knowledge we seek is, in a sense, something we already possess. It is, if you like, buried within us (in fact Socrates believes it is innate). It’s just that we are not able to bring this knowledge to the surface and make it clear and explicit. The method of counter-examples is designed to help us do this.
TEXT BOX. Necessary and sufficient conditions. In asking the question “What is X?” philosophers are typically looking for a special sort of definition. Here is an example of such a definition:
Something is a triangle if and only if it is a three-straight-sided closed figure.
Being a three-straight-sided closed figure is a necessary condition of being a triangle – necessarily, anything that is not a three-straight-sided closed figure is not a triangle. Being a three-straight-sided closed figure is also sufficient to qualify something as a triangle – necessarily, if something is a three-straight-sided closed figure, then it is a triangle.
When philosophers ask “What is beauty?”, “What is knowledge?”, “What is justice?” and so on, they are typically also looking for a definition that supplies the necessary and sufficient conditions for being an X.
Counter-examples to such a definition will show either that the definition does not specify a necessary condition, or that it does not specify a sufficient condition.END TEXT BOX.
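A necessary-and-sufficient definition of this kind can be pictured as a single biconditional test. The Python sketch below is an illustrative toy, assuming a simple dictionary representation of "figures" that is not part of the text; it makes both directions of the triangle definition explicit:

```python
# A necessary-and-sufficient definition as a biconditional predicate:
# something is a triangle if and only if it is a closed figure with
# exactly three straight sides. The dictionary representation of a
# "figure" is an illustrative assumption.

def is_triangle(figure):
    return (figure["closed"]
            and figure["sides"] == 3
            and figure["straight_sided"])

triangle = {"closed": True, "sides": 3, "straight_sided": True}
square   = {"closed": True, "sides": 4, "straight_sided": True}

print(is_triangle(triangle))  # True: satisfies every clause
print(is_triangle(square))    # False: fails the necessary condition sides == 3
```

A counter-example to such a definition would be any figure about which the predicate and our intuitive verdict disagree, in either direction: something that passes the test but is not a triangle (sufficiency fails), or a genuine triangle that fails the test (necessity fails).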
NEW IMAGE OF CHAIR (TO REPLACE BELL QUOTE): CAPTION: We all know what a chair is. Or do we? It can be remarkably difficult to pin down what’s essential so far as being a chair is concerned. One way in which we might try to do this is by applying the method of counter-examples.
CAPTION: “But what is art?” Such “What is X?” questions often crop up in dinner party conversations. The method of counter-examples may well be employed.
CAPTION (photo). Family Resemblance. The members of a family may all look similar even when there is no one feature they all share.
CAPTION (graphic). These faces strongly resemble each other. Some have the same eyes, others the same nose, and so on. Yet, despite these overlapping similarities, there is no one feature they all possess.
INTRODUCTION: In the preceding section we looked at the method of counter-examples. In Plato’s dialogues, Socrates supposes there must be one thing that, say, all and only the beautiful things possess in virtue of which they are beautiful. Socrates then demolishes various suggestions as to what this one feature might be by applying the method of counter-examples. But why assume there must be such a common feature? That there must be such a common denominator is famously questioned by Wittgenstein.
Socrates supposes there must be one thing all beautiful things have in common in virtue of which they are beautiful, one thing all examples of courage possess in virtue of which they are courageous, and so on. Similarly, the philosopher of art Clive Bell, when addressing the question “What is visual art?” assumes that there must be one quality that all works of visual art have in common in virtue of which they are works of visual art:
For either all works of visual art have some common quality, or when we speak of “works of art” we gibber. …There must be some one quality without which a work of art cannot exist… What is this quality?
Yet we can struggle to identify what this quality is. Indeed, the history of Western philosophy is in large part constituted by unsuccessful attempts to identify these elusive common denominators.
The philosopher Ludwig Wittgenstein suggests that the hunt for the common quality may, in many cases, be a wild goose chase.
Take a look at these faces. Some have the same eyes, others the same nose, and so on. Yet, despite these overlapping similarities, there is no one feature shared by all the faces. Wittgenstein calls this kind of similarity “family resemblance”. And he suggests that many of our concepts may be family resemblance concepts. Wittgenstein illustrates with the example of games:
Consider for example the proceedings that we call “games”. I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? — Don’t say: “There must be something common, or they would not be called ‘games’” — For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don’t think, but look! — Look for example at board-games, with their multifarious relationships. Now pass to card-games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball-games, much that is common is retained, but much is lost…[T]he result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of detail. I can think of no better expression to characterize these similarities than “family resemblances”; for the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way. — And I shall say: ‘games’ form a family.
Clive Bell assumes there must be one quality that all works of visual art have in common. But perhaps visual art is also a family resemblance concept. Perhaps there is only an overlapping series of resemblances among works of visual art, as there is in the case of games. If art is a family resemblance concept, then Clive Bell’s attempt to pin down the one quality possessed by all examples of visual art is indeed a wild goose chase.
The moral is: whenever you are confronted by a “What is X?” question, it is always worth considering whether X might be a family resemblance concept. Don’t just assume that there must be some one quality all examples of Xs have in common.
We can easily introduce a family resemblance concept. Let’s define “widget” as follows. Something is a widget if and only if it possesses three of the following six characteristics:
1. It is portable
2. It costs over £200
3. It can be blown through
4. It makes a noise
5. It is longer than it is wide
6. It has holes
This kazoo, clarinet and python are all widgets. This kite, chair and football are non-widgets. Note that there is no one feature that all widgets must possess. [END ILLUSTRATED BOX]
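The widget definition translates directly into a “three of six” test. Here is a Python sketch; the trait sets assigned to the example objects are assumptions for illustration, not taken from the box’s pictures:

```python
# Family resemblance "widget" test: something is a widget if and only
# if it has at least three of the six listed characteristics.
# The trait sets assigned to each object are illustrative assumptions.

WIDGET_TRAITS = {
    "portable", "costs over £200", "can be blown through",
    "makes a noise", "longer than wide", "has holes",
}

def is_widget(traits):
    """Return True if at least three of the six widget traits are present."""
    return len(traits & WIDGET_TRAITS) >= 3

kazoo = {"portable", "can be blown through", "makes a noise", "has holes"}
python_snake = {"portable", "makes a noise", "longer than wide"}
football = {"portable", "makes a noise"}

print(is_widget(kazoo))         # True  (four traits)
print(is_widget(python_snake))  # True  (three traits)
print(is_widget(football))      # False (only two traits)
```

Notice that two widgets need share no trait at all: something that is portable, noisy and holed qualifies, and so does something that costs over £200, can be blown through and is longer than it is wide. That is exactly the family resemblance point.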
Reasonableness comes in degrees
INTRO: Beliefs can be more or less reasonable. There is, if you like, a scale of reasonableness on which beliefs may be located. Unfortunately, the fact that reasonableness is a matter of degree is often overlooked. It’s sometimes assumed that if neither a belief A nor its denial B is conclusively “proved”, then the two beliefs must be more or less equally reasonable or unreasonable. As we will see, this assumption is false.
MAIN TEXT. Some beliefs are very reasonable indeed. It’s reasonable for me to believe that the orange on the table in front of me exists, because I can see it there. It’s also reasonable for me to believe that the tree outside my house still exists, because it was there when I last looked, and I have no reason to suppose anyone has removed it in the meantime. And it is reasonable for me to believe that Japan exists, despite the fact that I have never actually been there. I possess an enormous amount of evidence that Japan exists, and hardly any evidence to suggest it doesn’t.
Of course, despite being highly reasonable, these beliefs could still conceivably turn out to be false. Perhaps the orange I seem to see before me is an hallucination. Perhaps the tree in my garden has secretly been removed by pranksters. In the film The Truman Show, there is a conspiracy to dupe the main character into thinking he is living his life out in the real world when in fact everything around him is part of a carefully managed TV set. Even those he believes to be his closest relatives are, in truth, merely actors. Perhaps I am the unwitting victim of a similar complex conspiracy to make me believe Japan exists when in fact it doesn’t.
So let’s acknowledge I might be mistaken in holding these beliefs. Certainly, I cannot prove them beyond all doubt. But of course, this is not to say these beliefs aren’t eminently reasonable. They clearly are. They lie towards the top of the scale of reasonableness.
At the bottom of the scale lies the belief that fairies and goblins exist. This is a very unreasonable thing to believe because there’s no good evidence these tiny folk exist and plenty of evidence that they are fictional. Still, it does remain a remote possibility that these fairy-tale folk exist. We can’t prove beyond all doubt that they don’t.
Around the middle of the scale of reasonableness lie beliefs which are neither highly reasonable nor highly unreasonable. Take the belief that there are intelligent life forms living somewhere out there in the universe. True, we have no direct evidence of any such extra-terrestrial intelligence. On the other hand, we know that intelligent life has evolved on this planet, and we also know that there are countless other similar planets out there. So it’s not particularly improbable that there is intelligence out there somewhere.
Beliefs can change their position on this scale over time. A few decades ago, belief in electrons was fairly reasonable. Given the additional scientific evidence that’s since been discovered, it is now very reasonable. At one time belief that the world is flat was not particularly unreasonable. It’s now very unreasonable indeed.
The scale may also vary from one person to the next. It’s very reasonable for me to believe there is an orange on the table in front of me, because I can see it there. Perhaps it’s not quite so reasonable for you to believe there’s an orange there. After all, you can’t see the orange. You simply have to take my word for it.
Of course, it’s contentious where some beliefs lie. Take belief in the existence of God, for example. Some consider belief in God no more reasonable than belief in fairies. Others believe it is fairly reasonable – at least as reasonable as, say, belief in extra-terrestrial intelligence. Those who claim to have had direct experience of God, or who think miracles and so on constitute fairly good evidence that God exists, may place that belief fairly high up on the scale (even while acknowledging that their belief is not “proved”).
The “You can’t prove it” move
Having set up the scale of reasonableness, let’s now look at a common mistake people make when assessing the reasonableness of a belief.
Sometimes, when someone has been given very good grounds for supposing a belief B is false, they respond by saying “But you can’t prove B is false, can you? B might be true!” They think this shows belief B is still pretty reasonable – perhaps even as reasonable as the belief that B is false.
Here is an example. Suppose you have just provided Ted with excellent grounds for supposing his belief that there are fairies at the bottom of the garden is false. Ted responds “But you can’t prove there are no fairies down there, can you?”, as if that showed that his belief is, after all, pretty reasonable – perhaps even as reasonable as yours. Now perhaps you can’t prove beyond all doubt that there are no fairies. It’s just possible that you’re mistaken. Still, it’s hardly likely, given the evidence. On the available evidence, Ted’s belief remains downright silly.
Here’s a philosophical example. Even if we cannot conclusively prove either that God does exist or that he doesn’t, it doesn’t follow that the belief that God exists is just as reasonable or unreasonable as the belief that he doesn’t. It might still be the case that there are very good grounds for supposing God exists, and little reason to suppose he doesn’t. In which case it is far more reasonable to believe in God than it is to deny his existence. Conversely, there might be powerful evidence God doesn’t exist, and little reason to suppose he does. In which case atheism may be by far the most reasonable position to adopt. We should not allow the fact that neither belief can be conclusively proved to obscure the fact that one belief might still be far more reasonable than the other.
Unfortunately, theists sometimes respond to atheist arguments by pointing out that the atheist has not conclusively proved there is no God, as if that showed belief in God must be fairly reasonable after all. Actually, even if the atheist can’t conclusively prove there is no God, they might still succeed in showing that belief in God is very unreasonable indeed – perhaps even as unreasonable as belief in fairies.
Pointing out the absence of “proof” against a belief does not show that the belief is, after all, at least fairly reasonable
QUOTATION. REMOVE RUSSELL QUOTE AND REPLACE WITH IMAGE: IMAGE OF A RECTANGLE, LEFT THIRD RED, MIDDLE THIRD WHITE AND RIGHT THIRD GREEN. IN LEFT-HAND RED THIRD PUT “DISPROVED”, IN RIGHT-HAND GREEN THIRD PUT “PROVED”, IN MIDDLE BOX PUT “NEITHER PROVED NOR DISPROVED”. Caption: Rather than arranging beliefs on the scale of reasonableness, we might sort them instead into the three boxes “proved”, “disproved” and “neither proved nor disproved”. We may then lose sight of the fact that the beliefs in the middle box may still differ dramatically in terms of their reasonableness.
CAPTION. In the film The Truman Show, the central character, played by Jim Carrey, believes he is living out his life in an ordinary small town, when it is in fact a TV set and everyone around him is an actor.
CAPTION . Here is a series of beliefs about what exists arranged on a scale indicating roughly how reasonable they are (we might argue over exactly where they should appear on the scale). Some beliefs are very reasonable indeed (despite not being beyond all doubt). Others are highly unreasonable (though there remains the remote possibility they might be true).
QUOTE – DELETE LUTHER QUOTE AND REPLACE WITH TEXTBOX. The ambiguity of “proved” People often talk about a belief being “proved”, “not proved”, “disproved”, and so on. But what does “proved” mean here? It can mean a variety of things, including:
Proved beyond all possible doubt
Proved beyond reasonable doubt
Shown to be certain
Shown to be almost certainly true
Shown to be very probably true
Notice that people often talk of “scientific proof” despite the fact that most, perhaps all, scientific claims are open to at least some doubt.
When using the term “proved” it is important to be clear what you mean. Take, for example, the claim that we cannot “prove” God exists. It might be true that we can’t “prove” beyond all possible doubt that God exists. But perhaps we can still “prove” God exists in the sense that we can show his existence to be extremely probable, or at least beyond reasonable doubt. Conversely, even if we can’t “prove” beyond all doubt that God does not exist, it doesn’t follow that we can’t show his existence to be extremely improbable. We should not allow loose use of the word “proved” to obscure these facts.
CAPTION DELETE MOTHER THERESA. REPLACE WITH IMAGE OF GOD. Caption: Where should we place “God exists” on the scale of reasonableness? Indeed, should belief in God appear on the scale at all (but if it doesn’t appear on the scale, why not?)
CAPTION: (phrenology head) CHANGE THIS IMAGE TO SOMETHING RELATED TO “GHOST IN THE MACHINE”. Caption: Ryle believed that the Cartesian mind – an immaterial entity that exists in addition to the physical organism and its various behavioural dispositions – is a mere “ghost in the machine”.
CAPTION (oxford colleges). The tourist who says “Yes, I know where all the different colleges are, but now where is the University?” has made a category mistake.
INTRO. Someone commits a category mistake when they mistakenly assume that if A, B, C and D all belong to the same category of thing, then so must E. For example, the tourist who asks, “Yes, I know where all the different colleges are, but where is the University?” has made a category mistake. The University is not another building alongside the colleges. Rather, it is the overarching institution to which the various colleges belong.
MAIN TEXT Here is another example. Suppose you invite someone in to see your home. You show them the living room, dining room, kitchen, bathroom and bedrooms. But at the end of the tour, your guest looks mystified. “That was all very pleasant,” they say, “but can you now show me your home?” Your guest has made a category mistake. They have assumed that your home is a further thing in addition to the various rooms they have visited. The truth, of course, is that those rooms together constitute your home.
The expression “category mistake” was introduced by the philosopher Gilbert Ryle in his book The Concept of Mind. Ryle believes Descartes makes this type of mistake in supposing the mind is a further entity in addition to physical objects like tables, mountains and our physical bodies. That leads Descartes to suppose that, as the mind is not a physical object, it must be an immaterial object – a sort of “ghost in the machine”. The truth, claims Ryle, is that to possess a mind is to possess a whole series of behavioural dispositions. As these are dispositions even a purely physical organism can possess, no further immaterial “something” is required. To suppose otherwise is, according to Ryle, to commit a category mistake.