Wednesday, May 2, 2012

HowTheLightGetsIn 2012

 
The following is from Spoonfed, written by Hilary Lawson, founder of HowTheLightGetsIn.

In the closing pages of A Brief History of Time, Stephen Hawking takes a sideswipe at contemporary philosophy, arguing that it has been reduced to an analysis of language. In The Grand Design he goes a step further. “Philosophy,” he tells us, “is dead”.

As a philosopher and founder of the world’s largest philosophy and music festival, HowTheLightGetsIn, I’m not convinced by Hawking’s vision. Excluding philosophy, and any other strategy one might employ to tackle life’s big questions, seems a careless dismissal of whole forms of human enquiry.

The reality is that science has no monopoly on truth. Scientist and broadcaster Baroness Greenfield has spoken of the “smugness and complacency” of science and has called for a return to curiosity and open-mindedness. If science gave up its metaphysical pretensions and stopped supposing that it was uncovering the essential character of the world, it would be stronger not weaker. It would be in a better position to entertain new theories, which might enable more effective intervention in what we take to be reality. Just as science demonstrated the limitations of the church, so now we must come to terms with the limitations of science.

The relationship between data and knowledge is a complex one. It’s important to consider what human knowledge consists of, how it is uncovered and whether data can be trusted. This in turn raises issues about what remains unknown and where developments are likely to be found, as well as the ethical implications of new scientific research. To what extent, for example, can we trust research funded by commercial organisations? What moral issues accompany human genome mapping? These are not questions that can be answered by science alone.

In The Limits of Science, one of this year’s debates held in association with Spoonfed, controversial biologist and author of The Science Delusion Rupert Sheldrake, and physician and historian of science James Le Fanu, will debate the limits of knowledge with Think editor, commentator and philosopher Stephen Law. They’ll be asking whether science can uncover the ultimate nature of the world, or whether there are things it simply cannot fathom.

The irony is that in Hawking’s ‘science-trumps-all’ world, philosophy has never been more necessary. The old certainties that came from religion and science have been shaken, and their demise has left us confused and lost in our postmodern world. And where can we look now but to philosophy to try to make sense of the strange circumstances in which we find ourselves?

For centuries in the West we've looked forward to political, economic, and ethical progress. We've seen ourselves as on the upward curve of history. But the future looks uncertain, our values precarious. Do we need a new notion of progress, and if so what would it be? HowTheLightGetsIn is really an opportunity to get philosophy out of the academy and into people’s lives.

The Limits of Science is on Saturday 9th June 2012 at HowTheLightGetsIn

HowTheLightGetsIn runs from 31st May to 10th June 2012.


From here.

I am also co-hosting a dinner...

Event [438]
Saturday 9 June 2012
7:00pm
 
Philosophy Now Dinner
Anja Steinbauer, Stephen Law, Rick Lewis.

Philosophy Now editors Rick Lewis and Anja Steinbauer and philosopher Stephen Law consider questions of truth and ethics in philosophy and literature.


Price includes dinner

42 comments:

Tony Lloyd said...

Surely the opinions of a scientist on philosophy are about as insightful as a bird's opinions on ornithology.

The Celtic Chimp said...

Wow, that all sounded incredibly wooish to me.

the “smugness and complacency” of science and has called for a return to curiosity and open-mindedness. If science gave up its metaphysical pretensions and stopped supposing that it was uncovering the essential character of the world, it would be stronger not weaker.

That is almost the mantra of the woo merchants.

Hawking's point, and it is entirely valid in my opinion, is that philosophers don't seem to want to bother to educate themselves on what we actually know about the world, and as a consequence many of their arguments are essentially out of date. I really don't think he meant the entire discipline was useless now.

I think philosophy is a vital discipline and would eagerly encourage anyone to take it up, and it disappoints me to see this kind of anti-science message coming from philosophers.

Modern revelations from the cutting edge of physics have radically altered our view of what is real and what isn't, and of what the fundamental nature of reality is. Perfect ground for philosophy to thrive, but you won't find much philosophical discussion on these topics. It's hard and takes a lot of work to learn about.

At the risk of bruising egos, almost anyone can understand philosophy if they take the time to read it but very few people really understand physics and essentially nobody understands some of the things we have discovered about the universe. Feynman wasn't being flippant when he said that no one understands quantum mechanics. The insights that science offers should be irresistible to philosophers.

Philosophers often complain about scientists presuming too much, without having the first clue about what the scientist might actually be aware of that they aren't. When you find yourself making "other ways of knowing" arguments, it should be a warning sign that you are on shaky ground.

Anonymous said...

I’m not convinced by Hawking’s vision.
Me neither. But that’s because I’ve discovered the optician, from whence we all obtain our obligatory blur goggles.

“Philosophy,” he tells us, “is dead”. Then what does he use to explain the pre-Big-Bang state? Fluid stasis?
The reality is that science is not an exact science, and that is easily demonstrable.
The relationship between data and knowledge is a complex one. Have we done every experiment in every way that it can be done? Or have we done a few and extrapolated the result as being infallible?
Where can we look now but to philosophy. Philosophy doesn’t tell us everything. But it does constantly remind us that we know an awful lot less than we think we do.
Doesn’t Stephen also say something to the effect that there is nothing conclusive to indicate that intelligence is of benefit to survival? Well, Steve, I would suggest that proving that either way would first require us to obtain some and then try it out.

Andrew G. said...

Uh, Sheldrake?

"Controversial biologist" doesn't cover the half of it - the man is a crank, into telepathy and "morphic resonance" and who knows what other pseudoscientific crap.

Of course he thinks science doesn't have a monopoly on truth, because scientists are telling him he's wrong.

Dan P said...

In general, the language used in science relies on logic, coherence, consistency, falsifiability, precision, and reliability, among other things.

Presumably, even in advanced mathematics or physics, complicated concepts best explicated using symbols and formal notation are translatable into ordinary language, even if this translation requires an enormous number of parenthetical remarks, footnotes, and qualifiers.

Obviously, it is this language that is used in scientific textbooks to explain the terminology needed to master the advanced scientific concepts of their fields in the first place. Scientists do not literally speak a different language.

In other words, the language of science is shorthand for what is otherwise not supposed to be vague, mystical, poetic, or the tool of "other ways of knowing".

Good philosophy shares the same linguistic properties, though dealing with sometimes different topics, e.g. values, morals, aesthetics, and whether certain scientific endeavors are a good idea.

That there may be aspects of the world not explainable by science, or more plainly, not understandable by humans, is not to suggest an alternative approach, say, religion.

In the end, if it does not sound like science, then it is gibberish.

If there are astronomical phenomena, for example, that are not amenable to the tools of science, then we simply cannot speak meaningfully about them.

A Buddhist monk will not have a better shot at explaining them.

daz365 said...

I think Hawking is describing a lot of lazy philosophers who are living in the last century.

I'm not sure whose side quacks like Greenfield and Sheldrake are on, but count me on the opposite side.

Paul P. Mealing said...

Stephen, my comments keep disappearing. It's happened 2 times in a row on this post.

I'd recommend Why Beliefs Matter: Reflections on the Nature of Science by E. Brian Davies, Professor of Mathematics at King's College London and Fellow of the Royal Society.

As Xenophanes apparently said, it's important to distinguish between what we 'know' and what we 'believe'. Philosophy is using what one knows, via argument, to support what one believes. There are no right and wrong answers in philosophy like there are in science, and I think that's why scientists can sometimes be dismissive.

Regards, Paul.

March Hare said...

Philosophy was once useful as a way to uncover ideas about our universe, but ultimately those ideas can only be evaluated by testing them against reality - and that is science.

Perhaps a useful question to philosophers is: What have you done for me lately?

As science incidentally and accidentally solves philosophical puzzles, leaving philosophy a smaller and smaller field of ignorance to play in, we are left asking: what's the point of philosophy?

My view is that philosophy is very useful in teaching people how to think, how to avoid logical traps and common mistakes, but if you want to make progress in philosophy the best way to do so is science, especially neuroscience.

Dan P said...

March Hare:

It would seem that neuroscience has been unable to provide any solution to puzzles involving the so-called mind-body problem.

Despite enormous discoveries involving the physical basis and determinants of mental states, and despite the almost universal acceptance of physicalism by most philosophers of mind, including those who maintain seemingly unresolvable difficulties (Thomas Nagel), neuroscience has not contributed much to the resolution or explanation of this "problem".

The point is that although conceding that dualism is not feasible, and conceding that the brain is the organ of consciousness, how this is so remains a puzzle; a puzzle that bench science has not been able to budge.

The Celtic Chimp said...

Dan,

My personal take on the "problem" is that there isn't one. Our inability to accept, or explain in detail, that "mind" (itself a term laden with presumptive baggage) is a direct result of physical brains in no way validates the claim that there even is a problem.

This is, in my opinion, a consequence of presumption and essentially an argument from ignorance: "I don't understand how a complex network of neurons results in a mind, therefore there is a problem to be resolved." There is much to learn, but any assumption of mind-body separateness is just that, an assumption. One that seems to run contrary to everything we do know, and one that many of us are quite happy not to make.

Not all questions are sensible.
I might ask why this particular rock is here when it could be elsewhere. The fact that nobody can give me a satisfactory answer is more an indication of a failure in the question than in the answer.

wombat said...

"The point is that although conceding that dualism is not feasible, and conceding that the brain is the organ of consciousness, how this is so remains a puzzle; a puzzle that bench science has not been able to budge."

Aside from the observation that if it were not for bench science we might still be stuck with dualism, or the idea that certain mental states arose from the spleen/liver etc., we would appear to be making progress in this area, having identified parts of the brain involved, plausible mechanisms for action and new ways of investigating consciousness. For example, we have the recent case of functional MRI being used to discover the thoughts of patients who were believed to be in vegetative states. I see that there is now a proposal to use similar techniques to investigate the thoughts of animals. While being able to experience the feeling of being a bat might be out of reach, we might soon be able to determine what they think.

(see What is your dog thinking? and Vegetative patient "talks" using brain waves)

The philosophical questions arising from the increased knowledge in this area seem much more relevant (e.g. best ethical treatment of animals, disabled people etc,)

Dan P said...

The Celtic Chimp and Wombat:

Your points are well taken!

The importance of "bench science" is beyond question.
The molecular biology of neurotransmitters, neuromodulators, cell metabolism, genetic polymorphisms, and the like, obviously will continue to provide insight into human consciousness and behavior.

Additionally, any difficulties in explaining how these mechanisms give rise to consciousness/subjectivity are no more pressing as areas of enquiry than those neuroscience deals with in the mainstream.

Neuroscientists are not at some sort of scientific impasse or crisis in not satisfying philosophers of mind as to the answers to their "problems".

My modest proposal is that neuroscience is in a slightly different position than (say) genetics is in when explaining the etiology of cancer. Presumably, the etiology of cancer involves changes to the expression of genetic material, growth factors, inhibitory factors, and other mechanistic processes.

After discovering all of the molecular processes and alterations involved in cell division/death/apoptosis/metastasis, there would appear to be no further questions, and an outline of such an explanation is imaginable.

It appears that neuroscience, in explaining how consciousness/subjectivity arises from a brain, is not in such a position.

Furthermore, what makes this interesting is that those perceiving a "problem" are not claiming dualism in any Cartesian sense.

My proposal is simply that with all of the discoveries of neuroscience, there remain legitimate philosophical questions not arising simply from lack of knowledge of neuroscience on the part of those posing the questions.

wombat said...

"It appears that neuroscience, in explaining how consciousness/subjectivity arises from a brain is not in such a position."

I am tempted to say "Well we've had a cure for consciousness for a while now"!

We may be closer to solving the mind-body problem than you think, and I suspect further away from a solution in oncology. Cancer gets all the publicity, and until fairly recently neuroscientists and similar people trod very warily around the consciousness issue.

Probably this was not helped by the philosophers' stock-in-trade props of the "brain in a vat" and "zombies" in many varieties.

Have you seen any of Antonio Damasio's books on the topic?

Ross Templeman said...

I have studied philosophy in my spare time for the better part of a decade now, yet my education and training are in applied mathematics.

I don't doubt that my life would have been the poorer if I had not stumbled into philosophy by accident when I was beginning my sixth form studies.

I started off with a dismissive attitude towards the subject for sure (being a science man through and through), but I got over that as my curiosity grew and my thinking matured.

Now I can't imagine myself without philosophy books on my shelf or a philosophical problem occupying some of my thoughts.

Paul P. Mealing said...

I think Celtic Chimp’s ‘no problem’ and Wombat’s ‘consciousness is cured’ comments are over-optimistic and an oversimplification. We always think we know more than we do and consciousness is a case in point. Science tends to extrapolate beyond what it actually knows, which is how science progresses, but it doesn’t mean we know all the answers, as history keeps revealing.

I tend to agree with Colin McGinn (The Mysterious Flame) that we may never understand consciousness. As he points out, sentience (consciousness) evolved early, but its dependence on neuronal activity is not an explanation. Subconscious activity is also based on neuronal activity.

The best we can do is that consciousness occurs when activity goes from local to global, according to work done by Bernard Baars of The Neuroscience Institute in San Diego, California (New Scientist, 20 March 2010, pp.39-41). Baars calls it the ‘global workspace’ theory of consciousness.

To date, no one can explain the subjective experience we call consciousness. And I’m pretty sure that if we didn’t all experience consciousness then science would tell us that it doesn’t exist, in the same way that science tells us that free will doesn’t exist, which is part of the subjective experience.

Regards, Paul.

March Hare said...

Hi Paul,

You may be right on consciousness, though I tend to think you're completely wrong; time may tell. Regardless, what progress has philosophy made in the field of consciousness in the past 20-30 years? And, if there has been any, has it been based on progress in the scientific study of the brain? Or AI?

Which isn't to say field A shouldn't use discoveries in field B to progress (that's usually how things work), but philosophy seems particularly beholden to other fields while not currently providing a lot going the other way.

The other point is - perhaps we're too quick to assume consciousness exists. The same way some versions of free will took a dive into something more compatible with science, perhaps this may be the way consciousness goes.

For example, how can I be certain of my consciousness? And would it make a difference (to me) if I was simply a memory of an imagined conscious creature? Not to (intentionally) fall off the page into solipsism, but if there is no difference between consciousness and the illusion of consciousness then without further evidence we should lose the illusion (from science if not from common language).

wombat said...

@Paul - I didn't think ‘consciousness is cured’ was optimistic - simply the observation of the effects of suitable doses of ethanol on hominids. One spirit drives out the other.

A blow to the head is also effective.

Seriously though, I certainly would not want to underplay the amount of work left to do in this area, rather to highlight that we are making progress.

(There's a neat list of some highlights here from the University of Washington.)

To say we might never understand it seems problematic in itself. Have we got a good definition of "understanding" that avoids the state where we will always face the charge, "Ah, sure, you know what causes it, how to re-create it, etc., but you don't know x about it so you don't REALLY understand it"?

In this particular case I suppose there is also the issue of whether consciousness is necessary for understanding.

Dan P said...

My claim that I am conscious or that my right toe hurts is to make a metaphysically neutral statement. It is not a stance on the mind-body problem.
I do not say these things using "conscious" defined in a physicalist or non-physicalist manner.

I do think that I am in fact conscious and that the cashier thanking me for shopping at her store is conscious.

The anaesthetist who orders more anaesthesia can rightly justify this by stating that it appears the patient remains conscious.

To argue for the non-existence of consciousness is to return to an eliminative materialism I think untenable.

Paul P. Mealing said...

Hi March Hare,

Your last comment describes what happens when we dream, which is literally solipsism. The difference between dream and reality is that we share reality with common memories but in a dream only the dreamer has the memories.

By the way, without memory, you wouldn’t know that you’re conscious, because consciousness exists in a continuous present (as Erwin Schrödinger observed in What is Life?).

Hi wombat,

We don’t understand quantum mechanics either, even though it’s mathematically and scientifically the most successful theory ever. I’ll go out on a limb and predict that we’ll understand quantum mechanics before we understand consciousness. To answer your question, I’d say we ‘understand’ something when we don’t have a multitude of explanations.

Regards, Paul.

Paul P. Mealing said...

Sorry March Hare, I meant 'your last paragraph'.

Regards, Paul.

March Hare said...

Dan P,
I think you are using the common or garden version of conscious in your examples which simply reflect an ability to accept and respond to stimuli.

Consciousness, as discussed here is the ability to be aware of stimuli and your response to them, and potentially alter that.

I am more than willing (in fact begging) for reasons I am wrong on consciousness, not because I need to believe that I am more than a pre-programmed response-bot, but because I hate being bloody wrong.

Paul, I was not even remotely referring to what happens when WE dream. I was thinking: if we were a memory, or someone else's dream, with exceptional recall, how would we know? If others were putting thoughts into our heads that were consistent with prior experience then we'd be dolls that wouldn't notice, but think that we would.

Again, this is not some slip into solipsism, I just want ideas to pay rent, and I don't see what the non-subjective evidence is for consciousness, or what the concept brings us beyond what we have without it.

Paul P. Mealing said...

Hi March Hare,

If others were putting thoughts into our heads that were consistent with prior experience then we'd be dolls that wouldn't notice, but think that we would.

Okay, this sounds like the brain-in-a-vat thought experiment that Stephen raised in The Philosophy Gym, if my memory serves me right.

Unless everyone we interact with is also a brain-in-a-vat, and we are all interconnected, then this is solipsism, albeit imposed from outside. A brain-in-a-vat is subjectively the same as being in a dream, only there's no external interference (in a dream) as far as we know.

For a brain-in-a-vat to work it would effectively have to be a computer simulation to keep it all consistent. Some people contend that the universe could be just that, but, as Paul Davies points out in The Goldilocks Enigma, it's the same scenario as Intelligent Design only the designer is a computer programme.

Totally off-track, but that's where your thought experiment leads.

Regards, Paul.

Dan P said...

March Hare:

If you leave out the subjective evidence for consciousness, then you leave out the very phenomenon in question. That there is a subjective aspect to be "left out" is the problem.

It would be like considering the sensation I experience when my toe hurts, leaving out how it feels. A neurologist can certainly describe the neuroanatomy and physiology of peripheral neuropathy, but must acknowledge that these processes involve a subjective experience of pain.

After the neurologist described the neuron firings, neurotransmitters, ion channel opening involved, etc...One would still need to add, "And it hurts".

My garden variety concept of consciousness is more naive than yours. I use the term without any notion of stimulus and response. I use it as an illiterate, uneducated speaker of the language does who has toe pain. I use it without any reference to such sophisticated causal explanations.

It is the "awareness" issue that is at the heart of the problem.

Nobody is debating physicalism. That is assumed. The question is how physicalism accounts for subjectivity.

Paul P. Mealing said...

I pretty well agree with Dan P. It's the subjectivity of consciousness that's both inescapable and inexplicable from what we currently know.

Just to clarify my previous comment to March Hare, I'm not suggesting that you believe in your proposed thought experiment, assuming I've interpreted it correctly.

It's just that if you have a thought experiment where we are puppets, then syllogistically you need a puppet-master - it's unavoidable.

This has been explored in a number of SF movies: Matrix, Ghost in the Shell, Dark City, The Adjustment Bureau; and these are just the ones I've seen.

Regards, Paul.

The Celtic Chimp said...

Dan,

I understand your point and have some sympathy with the idea. I suppose I am suggesting that we might never be happy with the answer even if we get one. Whether we admit to it or not, we elevate consciousness to mystical heights.
If I tell you that photons of a certain frequency striking an atom of X element will cause the atom to release an electron you will likely accept that explanation quite happily. If I explain that a vast number of neural pathways all being stimulated simultaneously in a particular manner leads to the sensation of "I", you would likely think I have omitted most of the explanation.
You will want a bridge to be built that might not exist. The underlying implication being that consciousness must be more than that, that some deeper explanation would be needed when it just might not be so.

As Feynman put it "Do not keep saying to yourself, if you can possibly avoid it, "But how can it be like that?" because you will get "down the drain," into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that."

Dan P said...

The Celtic Chimp:

You characterize my position perfectly! I would accept the photon/electron explanation happily. On the other hand, the neural firing explanation would leave something out! And I do expect some sort of a bridge!

I sympathize with Thomas Nagel on this issue. However, I differ on emphasis.
Whereas he argues that the problem is that such explanations leave out, for example, what it's like to be a bat, I think the difficulty is how to explain that it's like anything at all for the bat.

In the case of electrons and other non-sentient things, what is left out of our explanations seems to be of a fundamentally different kind than what we leave out in explanations of consciousness.

The Celtic Chimp said...

Dan,

Obviously the explanation I gave was not intended to be a complete one, just an example of a type of explanation, but I am assuming that a little, or even a lot, more detail wouldn't change what you see as the fundamental disconnect between the base physical explanation and consciousness. I am assuming that any mechanistic explanation is going to leave you still feeling like something is missing.

If that assumption is correct, how do you know or why do you think this is so?

Do you entertain the idea that there may be nothing additional to explain?

and

Would you accept say a conscious machine intelligence built by people or would you assume that in some crucial way it wasn't really conscious?

Paul P. Mealing said...

Hi Celtic Chimp,

Would you accept say a conscious machine intelligence built by people or would you assume that in some crucial way it wasn't really conscious?

I think this is the nub. Most people seem to think that machine intelligent consciousness is inevitable, whereas I think it’s highly unlikely. It’s an indication of how much we don’t know that people make that assumption.

In an earlier comment I made a comparison with quantum mechanics. I think consciousness is no less weird than quantum mechanics; the major difference being that consciousness is a universal everyday experience and QM isn’t.

Regards, Paul.

March Hare said...

"I think consciousness is no less weird than quantum mechanics; the major difference being that consciousness is a universal everyday experience and QM isn’t."

Except when you use a computer, watch TV or do virtually anything to do with electronics...

Dan P said...

The Celtic Chimp:

I will address the last question. The others I have to think about.

In "Identity and Necessity" and less directly, in "Naming and Necessity", Saul Kripke argues that if mental state A=brain state B, then necessarily mental state A=brain state B.

He was addressing issues promoted by JJC Smart and others in the so-called "Identity Thesis". Their position was that although it is imaginable that I am a disembodied Cartesian-like soul, it is, in fact, not the case.

Mental states are in fact brain states. However, the identity is contingent! Mental states could have been otherwise. It just happens to be that they are brain states.

Kripke argues that the contingency is illusory. If mental states are identical to brain states, then those mental states could not have been identical to some other sort of state, similar to the identity of water and H2O, which he argues is necessary.
The idea is that if the substrate of mental states is a brain, then it could not have been otherwise.
This is a biological view.

Mental states are like tomatoes. You really could not have a silicon chip based tomato, because a tomato is a biological natural kind. Similarly, if pain sensations are states of brains, then they could not have been states of something other than brains, like computer hardware.

I would accept that a "conscious machine intelligence built by people" could be conscious, but it would have to have a brain, or something with neurons.

It is the hardware, not the software, that would make something conscious!

The Celtic Chimp said...

Paul,

I happily admit to being one of those people who thinks machine intelligence is an inevitability. I tend to put a much longer time estimate on how soon it is likely to happen than most do, but I think we will get there eventually.

By all evidence we have accrued, human consciousness is a direct result of the operation of the human brain. Alter or damage the brain and the consciousness is altered or damaged. The human brain is a machine. It is a very complicated, currently not particularly well understood machine, but a machine none the less.

Our understanding of how the brain operates is growing fast, and however long it takes I am confident we will eventually understand it well enough to make a non-organic replica.

We have already had significant success with neural network models in computers. They are a vast, vast distance from being anything like conscious or intelligent but they are a start.

I don't know what your personal view of the brain/consciousness is, but if you are willing to accept that other primate species have a less sophisticated version of this experience then I believe you are admitting to a spectrum of conscious experience produced by more or less advanced brain machines. That would suggest that consciousness is not a binary proposition but the result of sufficiently complex workings. For me this divests it of any ineffable or unreachable property and puts it firmly in the realm of the buildable. It is just a fact of the operation of a machine, like heat is a fact of running current through a wire, though I am sure the latter was just as mysterious when first encountered.

A good analogy for the brain, in my opinion, is electronic circuits.
The base components do little more than alter their output (two levels of current) based on input (two levels of current).
Yet we can take this simple operation and stack it, combine it, and lay it out just so, until this collection of switches and gates can beat Garry Kasparov at chess. It has zero intellect and no consciousness, of course, but it is an illuminating lesson in how the emergent result of the whole machine can be so far removed from, and so much more versatile and complex than, the operation of its constituent parts.
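The gate analogy above can be made concrete with a small sketch (my own illustration, not from the thread): starting from a single primitive two-level switch, the NAND gate, every other gate and then a one-bit full adder can be built by composition alone, giving the assembly a capability (binary addition) that no individual gate possesses.

```python
# A single primitive: NAND over two binary inputs (0 or 1).
def nand(a, b):
    return 0 if (a and b) else 1

# Every other gate is just NAND gates wired together.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

# A one-bit full adder: adds a, b and a carry-in bit,
# returning (sum, carry-out) - behaviour no single gate has.
def full_adder(a, b, cin):
    s = xor(xor(a, b), cin)
    cout = or_(and_(a, b), and_(cin, xor(a, b)))
    return s, cout

# 1 + 1 + carry 1 = binary 11: sum bit 1, carry bit 1.
print(full_adder(1, 1, 1))  # -> (1, 1)
```

Chaining such adders gives multi-bit arithmetic, and stacking further layers eventually gives a chess machine; the point is only that the complexity lives in the wiring, not in any one component.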

March Hare said...

DanP: "It is the hardware, not the software, that would make something conscious!"

Quite an assertion.

We manage to successfully simulate many processes, including biological ones since they all run on rules, the vast majority of which we know. Hence a virtual tomato could be created and, given an identical starting state and stimuli to a real one, end up exactly the same. At present it may take longer on the simulation and require an exceedingly accurate measure of starting conditions with a really strict environment for the real tomato to grow in, but I see no reason why it wouldn't. Stick a virtual taste bud and brain in there and it should be able to predict exactly how it would taste too!

@TCC, one thing to be aware of in simulations (although this can be simulated too) is that the hardware has properties outside of the simulation: http://www.sussex.ac.uk/Users/adrianth/ade.html that one is for electronics!

wombat said...

"It is the hardware, not the software, that would make something conscious!"

This is reminiscent of the origins of "organic chemistry", where it took a while before we could show that certain compounds associated with life could be synthesized from non-living precursors.

That said, if one asserts that life of some sort is needed for consciousness, then we still have quite a palette to work from: viruses, fungi, prions, etc.

Even if this is so, I suspect that a simulation approach might come sufficiently close that we would have to expand the definitions a bit, along the lines of recognizing both plants and animals as living but different. So we might end up with Classic Conscious (animals etc.) and LoFat Conscious (silicon monsters).

I think we are probably even closer than CC suggests, simply because we do not need anything as big as a human brain. I would be prepared to accept that, say, a mouse or similar small animal has consciousness, so a similar-sized simulation might do the trick.

It doesn't have to be smart.

Alternatively we seem to be able to lose quite a lot of our brains and still remain aware and conscious (although possibly losing some other functions). Ethically I'd prefer the small rodent option.

Dan P said...

March Hare:

...a virtual tomato could be created and ...end up exactly the same.


You are arguing for my point. You argue that the end of the "simulation" process, which sounds simply like in vitro plant growth, will result in "exactly the same thing".

Exactly the same thing is a biological tomato, not a simulation of one. The result is the hardware that makes something a tomato.

My position takes the biology of consciousness seriously.

Paul P. Mealing said...

Hi March Hare,

Except when you use a computer, watch TV or do virtually anything to do with electronics...

We’ve known about consciousness since antiquity but we only discovered quantum mechanics in the last century, so the epistemological gap is humongous. Despite the fact that the entire universe is dependent on it (not just electronic devices) it’s still very, very weird because it defies common sense. Consciousness doesn’t defy common sense, but only because we are unavoidably familiar with it.

Hi Celtic Chimp,

I admit my view is a minority one, even heretical, but I contend that the evidence supports it. As McGinn points out, sentience evolved early, yet computers are not sentient in the slightest. There is a widely held belief that if we make computers intelligent enough they will become sentient, yet sentience in the animal kingdom is not dependent on intelligence; we don't become more sentient by being more intelligent.

The other commonly held belief is that we will have to assume that a computer or AI is sentient (New Scientist editorial, 2 April 2011, "Rights for robots: we will know when it's time to recognise artificial cognition"), simply because we make the same assumption about other sentient creatures – in other words, we don't really know. So this is an assumption made from ignorance, not from knowledge. Science can't really tell consciousness from non-consciousness; therefore we will assume that an AI is conscious when we make one that mimics us.

I’m simply stating that we claim to know more than we do. To quote an aphorism attributed to Socrates: ‘The height of wisdom is to know how ignorant we are.’ I learnt very early, from studying science, that real knowledge is knowing how much we don’t know.

Regards, Paul.

wombat said...

@Paul "There is a widely held belief that if we make computers intelligent..."

As you point out, this is quite likely to be wrong, but if one takes the "intelligence" of computers to be a proxy for complexity, we are only recently getting close to that of the simplest higher animals, so the idea cannot quite be dismissed just yet.

As to sentience evolving early, it is surely significant that the goal of such evolution is the flourishing of the organism in an unstructured environment rather than the very narrow intelligence that we build our machines for. If getting fruit from trees had always involved winning chess tournaments rather than spotting the right tree, climbing and avoiding hazards then I'm pretty sure monkeys would be beating humans at chess!

Paul P. Mealing said...

Hi Wombat,

I’m sure that this is one philosophical issue that will eventually be decided one way or another. I admit I’m one of the ‘contrarians’ on this issue, and I could be proven wrong.

Regards, Paul.

The Celtic Chimp said...

Dan,

I would say invoking tomatoes is something of a category error. A tomato is a tomato strictly based on its substance.
If I have a slice of tomato, the slice is entirely tomato. If I have a slice of a brain, I don't have a slice of consciousness. Consciousness is the result of a machine in operation. It is a process, or emergent property.

Kripke's idea that mental states are essentially uncouple-able from brain states is interesting. I would like to see why he presumes this must be the case. It obviously is the case with human brains, because they are the only known consciousness-producing machines, but I currently know of nothing that would suggest that only this kind of machine should be capable of producing this result. I wonder if Kripke is not just being a tad braincentric.

Let's assume for a moment that Kripke is entirely correct. Where do you think this leaves the "problem" of consciousness? Kripke's point of view would seem to eliminate the problem: if you can explain brain states, you have already explained mental states.

It may be the case that hardware, not software, is what makes something conscious, but I don't find a compelling reason to assume this is in fact the case. If the hardware were silicon-based electronics rather than carbon-based organics, would you accept that it could be conscious?
If you would accept this, and such a system could hypothetically be constructed, do you think this would be sufficient to eliminate the "problem", in the sense that it would provide us with a provable mechanistic explanation of consciousness?

The Celtic Chimp said...

Wombat,

You are right about the rodent, in that I think we might be reasonably close to artificially generating a rodent analog with regard to behavior etc. But I think there is significant debate to be had about whether or not a rodent qualifies as conscious. There must surely be some level of complexity required to be considered conscious, or we would have to start extending the concept to increasingly basic forms of life. Viruses, for instance, are only considered alive under some strict definitions; they straddle the boundary between chemical process and living thing.
Consciousness, I would suggest, implies a level of mental activity that surpasses simple autonomic stimulus response. How much it surpasses it, and what is required to apply the label, is open to debate.

The Celtic Chimp said...

Paul,

There are several problems there. Firstly, sentience is essentially a watershed/binary proposition; intelligence isn't. I don't know if sentience without intelligence makes sense. You say that we attribute sentience to animals and that it is not reliant on intelligence. Fair enough, in that it is not reliant on the degree of intelligence, but I don't think we attribute it to anything we assume has no intelligence.
By what scientific means do we detect or attribute this sentience to animals? The truth is, we do so by observation of behavior. You are right to say that science can't distinguish conscious from non-conscious (yet – presuming that something can't or won't be known is not something I think Socrates would approve of :)), but if that is so, we have no empirical grounds for attributing it to non-human animals.

If you are right that it is something we couldn't, or won't ever be, capable of knowing (whether an AI was sentient), then we have no business applying the idea to animals either. Is an ant sentient? Is a flower? Is a mouse? Is a dog? Is a monkey?
We don't know the answers by robust scientific/empirical means. We judge, by the complexity of the actions/reactions of each and by an analysis of their cognitive machinery, that they either are sentient or aren't.

I don't agree that we are claiming knowledge we don't have. This is mostly conjecture at this point. I would say we are making educated guesses which, one way or another, are likely to become more educated and less like guesses as we progress. I would say it is just an application of Occam's razor.

Dan P said...

The celtic Chimp:

Kripke's point addresses the so-called "Identity Thesis". It has nothing to do with being "braincentric".

He argues that if the Identity Thesis is true, then like a tomato, a slice of Brain State A IS a slice of Mental State B. They are one and the same thing.

The category mistake is not realizing that this is a consequence of the Identity Thesis.

He suggests that there is a strong Cartesian intuition as to the independence of mental states and brain states, but that if in fact they are one and the same thing, then this intuition is faulty.

He does not endorse the Identity Thesis, but merely points out a counter-intuitive consequence of it.

This is my understanding based on Naming and Necessity and Identity and Necessity, published in 1970!

You state that, "If I have a slice of tomato the slice is entirely tomato. If I have a slice of a brain I don't have a slice of consciousness. Consciousness is the result of a machine in operation. It is a process or emergant property."

I believe that we are largely in agreement! That you think that after preparing histological tomato slides, not much is left to explain, whereas brain slices do not present you with slices of consciousness, is essentially what I find problematic with the mind-body problem!

Your claim that consciousness is "emergent" is the whole issue! It places consciousness in a completely different realm from everything else in the physical world!

The Celtic Chimp said...

He argues that if the Identity Thesis is true, then like a tomato, a slice of Brain State A IS a slice of Mental State B. They are one and the same thing.

I agree with Kripke in that I also think brain states are identical to mental states. This to my mind works against the idea that there is any problem of explanation here, apart from explaining the entirely physical brain state.

It seems to me that you are confusing a brain with a brain state. Brain states can change, i.e. the same piece of brain can generate different mental states. A slice of brain and a slice of brain state are different in the extreme, in much the same way that a slice of your T.V. is not comparable to a slice of an image that your T.V. was displaying. The slice of the T.V. and the slice of the image are not equatable.

"I believe that we are largely in agreement! That you think that after preparing histological tomato slides, not much is left to explain, whereas brain slices do not present you with slices of consciousness, is essentially what I find problematic with the mind-body problem!"

There is a vast difference between the two (hence my suggestion of a category error). You are comparing a slice of a rock with a slice of a T.V. (and the image it was displaying). The T.V. performs a function based on the simultaneous operation of many discrete parts. A tomato is just a tomato; it doesn't operate in any fashion, and its being fully tomato is not contingent on anything other than its substance. It just isn't a valid comparison.

"Your claim that consciousness is 'emergent' is the whole issue! It places consciousness in a completely different realm from everything else in the physical world!"

It really isn't different from innumerable other examples of emergence in the physical world.

A television actually manages to display a person, even one moving about coherently on the screen, despite the screen being constructed of nothing more than points of light that can change colour.
It certainly requires an explanation: in this case, of how all the data inputs manipulate the inner workings to produce a specific outcome. Should you or I or anyone be mystified by the magic-seeming function of this strange box, that is not the slightest hint that anything inexplicable is going on; it is infinitely more likely that we just lack an understanding of its workings. The fact that unplugging the T.V. makes it all stop, the fact that no images ever float off the screen, the fact that damaging the screen will disrupt the picture, all strongly suggest a basic mechanistic relationship between the screen and the image.
I see consciousness the same way.