Tag Archives: philosophy

Occam’s Razor For a Trumpian World

Occam’s razor (or Ockham’s razor) is a principle from philosophy popularized by the 14th-century philosopher and logician William of Ockham.

Put simply, it states that if there are two or more explanations for an occurrence, the simplest explanation is usually the best. That is, the more assumptions a hypothesis requires, the less likely it is to be correct.

While Occam’s razor has been found to apply reasonably well in the philosophy of science and more generally, it is not infallible. So, in this 21st century, it’s time to give Occam’s razor a much-needed fresh coat of paint: an update that makes it 100 percent accurate and logically watertight.

So, let me present Trump’s razor. It goes like this:

Ascertain the stupidest possible scenario that can be reconciled with the available facts and that answer is likely correct.

Unfortunately I can’t lay claim to this brilliant new tool of logic. Thanks go to the great folks over at Talking Points Memo, Josh Marshall and John Scalzi.

Read their insightful proposal here.

Image: William of Ockham, from stained glass window at a church in Surrey. Courtesy: Wikipedia. Creative Commons CC BY-SA 3.0.

Towards an Understanding of Consciousness


The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite the intricate philosophical essays of John Locke in 1690 that laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But, ideas such as these inch us towards a deeper understanding.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.

A Career As An Existentialist, Perhaps?

[tube]crIJvcWkVcs[/tube]

Philosophy isn’t what it used to be. Gone are the days of the 4-hour debate over lunch on the merits of ethics, aesthetics and metaphysics. Gone are the days of heated discussions over breakfast lattes on the philosophical traditions of existentialism versus rationalism. But we do still have a duty to think big and to ponder the great questions. So, why not become a part-time, if not professional, existentialist?

From the Guardian:

I was a teenage existentialist. I became one at 16 after spending birthday money from my granny on Jean-Paul Sartre’s Nausea. It was the cover that attracted me, with its Dalí painting of a dripping watch and sickly green rock formation, plus a blurb describing it as “a novel of the alienation of personality and the mystery of being”. I didn’t know what was mysterious about being, or what alienation meant – although I was a perfect example of it at the time. I just guessed that it would be my kind of book. Indeed it was: I bonded at once with its protagonist Antoine Roquentin, who drifts around his provincial seaside town staring at tree trunks and beach pebbles, feeling physical disgust at their sheer blobbish reality, and making scornful remarks about the bourgeoisie. The book inspired me: I played truant from school and tried drifting around my own provincial town of Reading. I even went to a park and tried to see the Being of a Tree. I didn’t quite glimpse it, but I did decide that I wanted to study philosophy, and especially this strange philosophy of Sartre’s, which I learned was “existentialism”.

I am convinced that existentialism should be seen as more than a fad, however, and that it still has something to offer us today. In a spirit of experiment, here are 10 possible reasons to be an existentialist – or at least to read their books with a fresh sense of curiosity.

1 Existentialists are philosophers of living

2 Existentialists really care about freedom

3 (Some) existentialists have interesting sex lives

4 Existentialists tackle painful realities

5 Existentialists try to be authentic

6 Existentialists think it matters what we do (and may stay up all night arguing about it)

7 Existentialists are not conformists

8 Existentialists can be fun to read

9 Existentialists also write about unconventional subjects

10 Existentialists think big

Read the entire article here.


Video: Mrs. Premise and Mrs. Conclusion, Monty Python. Courtesy of Monty Python / BBC.

Image: From left to right, top to bottom: Kierkegaard, Dostoyevsky, Nietzsche, Sartre. Courtesy: Wikipedia. Public Domain.

Fictionalism of Free Will and Morality

In a recent opinion column, William Irwin, professor of philosophy at King’s College, summarizes an approach to accepting the notion of free will rather than believing in it. While I’d eventually like to see an explanation for free will and morality in biological and chemical terms — beyond metaphysics — I will (or may, if free will does not exist) for the time being have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being — rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious, which our higher reasoning functions rationalize post factum in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.

From the NYT:

Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.

Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.

The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.

Following a fictionalist account of morality would mean that we would accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.

Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.

Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.

William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.

Read the entire article here.

Cause and Effect

One of the most fundamental tenets of our macroscopic world is the notion that an effect has a cause. Throw a pebble (cause) into a still pond and the ripples (effect) will be visible for all to see. Down at the microscopic level, however, physicists have determined through their mathematical convolutions that there is no such asymmetry — nothing precludes the laws of physics from running in reverse. Yet we never witness the ripples in a pond diminishing and ejecting a pebble, which then finds its way back to a catcher.

Of course, this quandary has kept many a philosopher’s pencil well sharpened while physicists continue to scratch their heads. So, is cause and effect merely a coincidental illusion? Or does our physics only operate in one direction, determined by a yet-to-be-discovered fundamental law?

Philosopher Mathias Frisch, author of Causal Reasoning in Physics, offers a great summary of current thinking, but no fundamental breakthrough.

From Aeon:

Do early childhood vaccinations cause autism, as the American model Jenny McCarthy maintains? Are human carbon emissions at the root of global warming? Come to that, if I flick this switch, will it make the light on the porch come on? Presumably I don’t need to persuade you that these would be incredibly useful things to know.

Since anthropogenic greenhouse gas emissions do cause climate change, cutting our emissions would make a difference to future warming. By contrast, autism cannot be prevented by leaving children unvaccinated. Now, there’s a subtlety here. For our judgments to be much use to us, we have to distinguish between causal relations and mere correlations. From 1999 to 2009, the number of people in the US who fell into a swimming pool and drowned varies with the number of films in which Nicolas Cage appeared – but it seems unlikely that we could reduce the number of pool drownings by keeping Cage off the screen, desirable as the remedy might be for other reasons.

In short, a working knowledge of the way in which causes and effects relate to one another seems indispensible to our ability to make our way in the world. Yet there is a long and venerable tradition in philosophy, dating back at least to David Hume in the 18th century, that finds the notions of causality to be dubious. And that might be putting it kindly.

Hume argued that when we seek causal relations, we can never discover the real power; the, as it were, metaphysical glue that binds events together. All we are able to see are regularities – the ‘constant conjunction’ of certain sorts of observation. He concluded from this that any talk of causal powers is illegitimate. Which is not to say that he was ignorant of the central importance of causal reasoning; indeed, he said that it was only by means of such inferences that we can ‘go beyond the evidence of our memory and senses’. Causal reasoning was somehow both indispensable and illegitimate. We appear to have a dilemma.

Hume’s remedy for such metaphysical quandaries was arguably quite sensible, as far as it went: have a good meal, play backgammon with friends, and try to put it out of your mind. But in the late 19th and 20th centuries, his causal anxieties were reinforced by another problem, arguably harder to ignore. According to this new line of thought, causal notions seemed peculiarly out of place in our most fundamental science – physics.

There were two reasons for this. First, causes seemed too vague for a mathematically precise science. If you can’t observe them, how can you measure them? If you can’t measure them, how can you put them in your equations? Second, causality has a definite direction in time: causes have to happen before their effects. Yet the basic laws of physics (as distinct from such higher-level statistical generalisations as the laws of thermodynamics) appear to be time-symmetric: if a certain process is allowed under the basic laws of physics, a video of the same process played backwards will also depict a process that is allowed by the laws.

The 20th-century English philosopher Bertrand Russell concluded from these considerations that, since cause and effect play no fundamental role in physics, they should be removed from the philosophical vocabulary altogether. ‘The law of causality,’ he said with a flourish, ‘like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm.’

Neo-Russellians in the 21st century express their rejection of causes with no less rhetorical vigour. The philosopher of science John Earman of the University of Pittsburgh maintains that the wooliness of causal notions makes them inappropriate for physics: ‘A putative fundamental law of physics must be stated as a mathematical relation without the use of escape clauses or words that require a PhD in philosophy to apply (and two other PhDs to referee the application, and a third referee to break the tie of the inevitable disagreement of the first two).’

This is all very puzzling. Is it OK to think in terms of causes or not? If so, why, given the apparent hostility to causes in the underlying laws? And if not, why does it seem to work so well?

A clearer look at the physics might help us to find our way. Even though (most of) the basic laws are symmetrical in time, there are many arguably non-thermodynamic physical phenomena that can happen only one way. Imagine a stone thrown into a still pond: after the stone breaks the surface, waves spread concentrically from the point of impact. A common enough sight.

Now, imagine a video clip of the spreading waves played backwards. What we would see are concentrically converging waves. For some reason this second process, which is the time-reverse of the first, does not seem to occur in nature. The process of waves spreading from a source looks irreversible. And yet the underlying physical law describing the behaviour of waves – the wave equation – is as time-symmetric as any law in physics. It allows for both diverging and converging waves. So, given that the physical laws equally allow phenomena of both types, why do we frequently observe organised waves diverging from a source but never coherently converging waves?

Physicists and philosophers disagree on the correct answer to this question – which might be fine if it applied only to stones in ponds. But the problem also crops up with electromagnetic waves and the emission of light or radio waves: anywhere, in fact, that we find radiating waves. What to say about it?

On the one hand, many physicists (and some philosophers) invoke a causal principle to explain the asymmetry. Consider an antenna transmitting a radio signal. Since the source causes the signal, and since causes precede their effects, the radio waves diverge from the antenna after it is switched on simply because they are the repercussions of an initial disturbance, namely the switching on of the antenna. Imagine the time-reverse process: a radio wave steadily collapses into an antenna before the latter has been turned on. On the face of it, this conflicts with the idea of causality, because the wave would be present before its cause (the antenna) had done anything. David Griffiths, Emeritus Professor of Physics at Reed College in Oregon and the author of a widely used textbook on classical electrodynamics, favours this explanation, going so far as to call a time-asymmetric principle of causality ‘the most sacred tenet in all of physics’.

On the other hand, some physicists (and many philosophers) reject appeals to causal notions and maintain that the asymmetry ought to be explained statistically. The reason why we find coherently diverging waves but never coherently converging ones, they maintain, is not that wave sources cause waves, but that a converging wave would require the co-ordinated behaviour of ‘wavelets’ coming in from multiple different directions of space – delicately co-ordinated behaviour so improbable that it would strike us as nearly miraculous.

It so happens that this wave controversy has quite a distinguished history. In 1909, a few years before Russell’s pointed criticism of the notion of cause, Albert Einstein took part in a published debate concerning the radiation asymmetry. His opponent was the Swiss physicist Walther Ritz, a name you might not recognise.

It is in fact rather tragic that Ritz did not make larger waves in his own career, because his early reputation surpassed Einstein’s. The physicist Hermann Minkowski, who taught both Ritz and Einstein in Zurich, called Einstein a ‘lazy dog’ but had high praise for Ritz.  When the University of Zurich was looking to appoint its first professor of theoretical physics in 1909, Ritz was the top candidate for the position. According to one member of the hiring committee, he possessed ‘an exceptional talent, bordering on genius’. But he suffered from tuberculosis, and so, due to his failing health, he was passed over for the position, which went to Einstein instead. Ritz died that very year at age 31.

Months before his death, however, Ritz published a joint letter with Einstein summarising their disagreement. While Einstein thought that the irreversibility of radiation processes could be explained probabilistically, Ritz proposed what amounted to a causal explanation. He maintained that the reason for the asymmetry is that an elementary source of radiation has an influence on other sources in the future and not in the past.

This joint letter is something of a classic text, widely cited in the literature. What is less well-known is that, in the very same year, Einstein demonstrated a striking reversibility of his own. In a second published letter, he appears to take a position very close to Ritz’s – the very view he had dismissed just months earlier. According to the wave theory of light, Einstein now asserted, a wave source ‘produces a spherical wave that propagates outward. The inverse process does not exist as elementary process’. The only way in which converging waves can be produced, Einstein claimed, was by combining a very large number of coherently operating sources. He appears to have changed his mind.

Given Einstein’s titanic reputation, you might think that such a momentous shift would occasion a few ripples in the history of science. But I know of only one significant reference to his later statement: a letter from the philosopher Karl Popper to the journal Nature in 1956. In this letter, Popper describes the wave asymmetry in terms very similar to Einstein’s. And he also makes one particularly interesting remark, one that might help us to unpick the riddle. Coherently converging waves, Popper insisted, ‘would demand a vast number of distant coherent generators of waves the co-ordination of which, to be explicable, would have to be shown as originating from the centre’ (my italics).

This is, in fact, a particular instance of a much broader phenomenon. Consider two events that are spatially distant yet correlated with one another. If they are not related as cause and effect, they tend to be joint effects of a common cause. If, for example, two lamps in a room go out suddenly, it is unlikely that both bulbs just happened to burn out simultaneously. So we look for a common cause – perhaps a circuit breaker that tripped.

Common-cause inferences are so pervasive that it is difficult to imagine what we could know about the world beyond our immediate surroundings without them. Hume was right: judgments about causality are absolutely essential in going ‘beyond the evidence of the senses’. In his book The Direction of Time (1956), the philosopher Hans Reichenbach formulated a principle underlying such inferences: ‘If an improbable coincidence has occurred, there must exist a common cause.’ To the extent that we are bound to apply Reichenbach’s rule, we are all like the hard-boiled detective who doesn’t believe in coincidences.

Read the entire article here.

Socks and Self-knowledge


How well do you really know yourself? Go beyond your latte preferences and your favorite movies. Knowing yourself means being familiar with your most intimate thoughts, desires and fears, your character traits and flaws, your values. For many, this quest for self-knowledge is a life-long process. And, it may begin with knowing about your socks.

From NYT:

Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense.

Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t: “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible.

This is usually news to non-philosophers. Most certainly imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge.

One common reaction to the focus on trivial self-knowledge is to ask, “Why on earth would you be interested in that?” — or, more pointedly, “Why on earth would anyone pay you to think about that?” Philosophers of self-knowledge aren’t deterred. It isn’t unusual for them to start their learned articles and books on self-knowledge by declaring that they aren’t going to be discussing substantial self-knowledge because that isn’t where the philosophical action is.

How can that be? It all depends on your starting point. For example, to know that you are wearing socks requires effort, even if it’s only the minimal effort of looking down at your feet. When you look down and see the socks on your feet you have evidence — the evidence of your senses — that you are wearing socks, and this illustrates what seems a general point about knowledge: knowledge is based on evidence, and our beliefs about the world around us can be wrong. Evidence can be misleading and conclusions from evidence unwarranted. Trivial self-knowledge seems different. On the face of it, you don’t need evidence to know that you believe you are wearing socks, and there is a strong presumption that your beliefs about your own beliefs and other states of mind aren’t mistaken. Trivial self-knowledge is direct (not based on evidence) and privileged (normally immune to error). Given these two background assumptions, it looks like there is something here that needs explaining: How is trivial self-knowledge, with all its peculiarities, possible?

From this perspective, trivial self-knowledge is philosophically interesting because it is special. “Special” in this context means special from the standpoint of epistemology or the philosophy of knowledge. Substantial self-knowledge is much less interesting from this point of view because it is like any other knowledge. You need evidence to know your own character and values, and your beliefs about your own character and values can be mistaken. For example, you think you are generous but your friends know you better. You think you are committed to racial equality but your behaviour suggests otherwise. Once you think of substantial self-knowledge as neither direct nor privileged why would you still regard it as philosophically interesting?

What is missing from this picture is any real sense of the human importance of self-knowledge. Self-knowledge matters to us as human beings, and the self-knowledge which matters to us as human beings is substantial rather than trivial self-knowledge. We assume that on the whole our lives go better with substantial self-knowledge than without it, and what is puzzling is how hard it can be to know ourselves in this sense.

The assumption that self-knowledge matters is controversial and philosophy might be expected to have something to say about the importance of self-knowledge, as well as its scope and extent. The interesting questions in this context include “Why is substantial self-knowledge hard to attain?” and “To what extent is substantial self-knowledge possible?”

Read the entire article here.

Image courtesy of DuckDuckGo Search.


Sartre: Forever Linked with Mrs Premise and Mrs Conclusion

Jean-Paul_Sartre_FP

One has to wonder how Jean-Paul Sartre would have been regarded today had he accepted the Nobel Prize in Literature in 1964, or had the characters of Monty Python not used him as a punching bag in one of their infamous, satirical philosopher sketches:

Mrs Conclusion: What was Jean-Paul like? 

Mrs Premise: Well, you know, a bit moody. Yes, he didn’t join in the fun much. Just sat there thinking. Still, Mr Rotter caught him a few times with the whoopee cushion. (she demonstrates) Le Capitalisme et La Bourgeoisie ils sont la même chose… Oooh we did laugh…

From the Guardian:

In this age in which all shall have prizes, in which every winning author knows what’s necessary in the post-award trial-by-photoshoot (Book jacket pressed to chest? Check. Wall-to-wall media? Check. Backdrop of sponsor’s logo? Check) and in which scarcely anyone has the couilles, as they say in France, to politely tell judges where they can put their prize, how lovely to recall what happened on 22 October 1964, when Jean-Paul Sartre turned down the Nobel prize for literature.

“I have always declined official honours,” he explained at the time. “A writer should not allow himself to be turned into an institution. This attitude is based on my conception of the writer’s enterprise. A writer who adopts political, social or literary positions must act only within the means that are his own – that is, the written word.”

Throughout his life, Sartre agonised about the purpose of literature. In 1947’s What is Literature?, he jettisoned a sacred notion of literature as capable of replacing outmoded religious beliefs in favour of the view that it should have a committed social function. However, the last pages of his enduringly brilliant memoir Words, published the same year as the Nobel refusal, despair over that function: “For a long time I looked on my pen as a sword; now I know how powerless we are.” Poetry, wrote Auden, makes nothing happen; politically committed literature, Sartre was saying, was no better. In rejecting the honour, Sartre worried that the Nobel was reserved for “the writers of the west or the rebels of the east”. He didn’t damn the Nobel in quite the bracing terms that led Hari Kunzru to decline the 2003 John Llewellyn Rhys prize, sponsored by the Mail on Sunday (“As the child of an immigrant, I am only too aware of the poisonous effect of the Mail’s editorial line”), but gently pointed out its Eurocentric shortcomings. Plus, one might say 50 years on, ça change. Sartre said that he might have accepted the Nobel if it had been offered to him during France’s imperial war in Algeria, which he vehemently opposed, because then the award would have helped in the struggle, rather than making Sartre into a brand, an institution, a depoliticised commodity. Truly, it’s difficult not to respect his compunctions.

But the story is odder than that. Sartre read in Figaro Littéraire that he was in the frame for the award, so he wrote to the Swedish Academy saying he didn’t want the honour. He was offered it anyway. “I was not aware at the time that the Nobel prize is awarded without consulting the opinion of the recipient,” he said. “But I now understand that when the Swedish Academy has made a decision, it cannot subsequently revoke it.”

Regrets? Sartre had a few – at least about the money. His principled stand cost him 250,000 kronor (about £21,000), prize money that, he reflected in his refusal statement, he could have donated to the “apartheid committee in London” who badly needed support at the time. All of which makes one wonder what his compatriot, Patrick Modiano, the 15th Frenchman to win the Nobel for literature earlier this month, did with his 8m kronor (about £700,000).

The Swedish Academy had selected Sartre for having “exerted a far-reaching influence on our age”. Is this still the case? Though he was lionised by student radicals in Paris in May 1968, his reputation as a philosopher was on the wane even then. His brand of existentialism had been eclipsed by structuralists (such as Lévi-Strauss and Althusser) and post-structuralists (such as Derrida and Deleuze). Indeed, Derrida would spend a great deal of effort deriding Sartrean existentialism as a misconstrual of Heidegger. Anglo-Saxon analytic philosophy, with the notable exception of Iris Murdoch and Arthur Danto, has for the most part been sniffy about Sartre’s philosophical credentials.

Sartre’s later reputation probably hasn’t benefited from being championed by Paris’s philosophical lightweight, Bernard-Henri Lévy, who subtitled his biography of his hero The Philosopher of the Twentieth Century (Really? Not Heidegger, Russell, Wittgenstein or Adorno?); still less by his appearance in Monty Python’s least funny philosophy sketch, “Mrs Premise and Mrs Conclusion visit Jean-Paul Sartre at his Paris home”. Sartre has become more risible than lisible: unremittingly depicted as laughable philosopher toad – ugly, randy, incomprehensible, forever excitably over-caffeinated at Les Deux Magots with Simone de Beauvoir, encircled with pipe smoke and mired in philosophical jargon, not so much a man as a stock pantomime figure. He deserves better.

How then should we approach Sartre’s writings in 2014? So much of his lifelong intellectual struggle and his work still seems pertinent. When we read the “Bad Faith” section of Being and Nothingness, it is hard not to be struck by the image of the waiter who is too ingratiating and mannered in his gestures, and how that image pertains to the dismal drama of inauthentic self-performance that we find in our culture today. When we watch his play Huis Clos, we might well think of how disastrous our relations with other people are, since we now require them, more than anything else, to confirm our self-images, while they, no less vexingly, chiefly need us to confirm theirs. When we read his claim that humans can, through imagination and action, change our destiny, we feel something of the burden of responsibility of choice that makes us moral beings. True, when we read such sentences as “the being by which Nothingness comes to the world must be its own Nothingness”, we might want to retreat to a dark room for a good cry, but let’s not spoil the story.

His lifelong commitments to socialism, anti-fascism and anti-imperialism still resonate. When we read, in his novel Nausea, of the protagonist Antoine Roquentin in Bouville’s art gallery, looking at pictures of self-satisfied local worthies, we can apply his fury at their subjects’ self-entitlement to today’s images of the powers that be (the suppressed photo, for example, of Cameron and his cronies in Bullingdon pomp), and share his disgust that such men know nothing of what the world is really like in all its absurd contingency.

In his short story Intimacy, we confront a character who, like all of us on occasion, is afraid of the burden of freedom and does everything possible to make others take her decisions for her. When we read his distinctions between being-in-itself (être-en-soi), being-for-itself (être-pour-soi) and being-for-others (être-pour-autrui), we are encouraged to think about the tragicomic nature of what it is to be human – a longing for full control over one’s destiny and for absolute identity, and at the same time, a realisation of the futility of that wish.

The existential plight of humanity, our absurd lot, our moral and political responsibilities that Sartre so brilliantly identified have not gone away; rather, we have chosen the easy path of ignoring them. That is not a surprise: for Sartre, such refusal to accept what it is to be human was overwhelmingly, paradoxically, what humans do.

Read the entire article here.

Image: Jean-Paul Sartre (c1950). Courtesy: Archivo del diario Clarín, Buenos Aires, Argentina


Theism Versus Spirituality

Prominent neo-atheist Sam Harris continues to reject theism, and does so thoughtfully and eloquently. In his latest book, Waking Up, he again argues the case against religion, but makes a powerful case for spirituality. Harris defines spirituality as an inner sense of a good and powerful reality, based on sound self-awareness and insightful questioning of one’s own consciousness. This type of spirituality, quite rightly, is devoid of theistic angels and demons. Harris reveals more in his interview with Gary Gutting, professor of philosophy at the University of Notre Dame.

From the NYT:

Sam Harris is a neuroscientist and prominent “new atheist,” who along with others like Richard Dawkins, Daniel Dennett and Christopher Hitchens helped put criticism of religion at the forefront of public debate in recent years. In two previous books, “The End of Faith” and “Letter to a Christian Nation,” Harris argued that theistic religion has no place in a world of science. In his latest book, “Waking Up,” his thought takes a new direction. While still rejecting theism, Harris nonetheless makes a case for the value of “spirituality,” which he bases on his experiences in meditation. I interviewed him recently about the book and some of the arguments he makes in it.

Gary Gutting: A common basis for atheism is naturalism — the view that only science can give a reliable account of what’s in the world. But in “Waking Up” you say that consciousness resists scientific description, which seems to imply that it’s a reality beyond the grasp of science. Have you moved away from an atheistic view?

Sam Harris: I don’t actually argue that consciousness is “a reality” beyond the grasp of science. I just think that it is conceptually irreducible — that is, I don’t think we can fully understand it in terms of unconscious information processing. Consciousness is “subjective”— not in the pejorative sense of being unscientific, biased or merely personal, but in the sense that it is intrinsically first-person, experiential and qualitative.

The only thing in this universe that suggests the reality of consciousness is consciousness itself. Many philosophers have made this argument in one way or another — Thomas Nagel, John Searle, David Chalmers. And while I don’t agree with everything they say about consciousness, I agree with them on this point.

The primary approach to understanding consciousness in neuroscience entails correlating changes in its contents with changes in the brain. But no matter how reliable these correlations become, they won’t allow us to drop the first-person side of the equation. The experiential character of consciousness is part of the very reality we are studying. Consequently, I think science needs to be extended to include a disciplined approach to introspection.

G.G.: But science aims at objective truth, which has to be verifiable: open to confirmation by other people. In what sense do you think first-person descriptions of subjective experience can be scientific?

S.H.: In a very strong sense. The only difference between claims about first-person experience and claims about the physical world is that the latter are easier for others to verify. That is an important distinction in practical terms — it’s easier to study rocks than to study moods — but it isn’t a difference that marks a boundary between science and non-science. Nothing, in principle, prevents a solitary genius on a desert island from doing groundbreaking science. Confirmation by others is not what puts the “truth” in a truth claim. And nothing prevents us from making objective claims about subjective experience.

Are you thinking about Margaret Thatcher right now? Well, now you are. Were you thinking about her exactly six minutes ago? Probably not. There are answers to questions of this kind, whether or not anyone is in a position to verify them.

And certain truths about the nature of our minds are well worth knowing. For instance, the anger you felt yesterday, or a year ago, isn’t here anymore, and if it arises in the next moment, based on your thinking about the past, it will quickly pass away when you are no longer thinking about it. This is a profoundly important truth about the mind — and it can be absolutely liberating to understand it deeply. If you do understand it deeply — that is, if you are able to pay clear attention to the arising and passing away of anger, rather than merely think about why you have every right to be angry — it becomes impossible to stay angry for more than a few moments at a time. Again, this is an objective claim about the character of subjective experience. And I invite our readers to test it in the laboratory of their own minds.

G. G.: Of course, we all have some access to what other people are thinking or feeling. But that access is through probable inference and so lacks the special authority of first-person descriptions. Suppose I told you that in fact I didn’t think of Margaret Thatcher when I read your comment, because I misread your text as referring to Becky Thatcher in “The Adventures of Tom Sawyer”? If that’s true, I have evidence for it that you can’t have. There are some features of consciousness that we will agree on. But when our first-person accounts differ, then there’s no way to resolve the disagreement by looking at one another’s evidence. That’s very different from the way things are in science.

S.H.: This difference doesn’t run very deep. People can be mistaken about the world and about the experiences of others — and they can even be mistaken about the character of their own experience. But these forms of confusion aren’t fundamentally different. Whatever we study, we are obliged to take subjective reports seriously, all the while knowing that they are sometimes false or incomplete.

For instance, consider an emotion like fear. We now have many physiological markers for fear that we consider quite reliable, from increased activity in the amygdala and spikes in blood cortisol to peripheral physiological changes like sweating palms. However, just imagine what would happen if people started showing up in the lab complaining of feeling intense fear without showing any of these signs — and they claimed to feel suddenly quite calm when their amygdalae lit up on fMRI, their cortisol spiked, and their skin conductance increased. We would no longer consider these objective measures of fear to be valid. So everything still depends on people telling us how they feel and our (usually) believing them.

However, it is true that people can be very poor judges of their inner experience. That is why I think disciplined training in a technique like “mindfulness,” apart from its personal benefits, can be scientifically important.

Read the entire story here.

A Godless Universe: Mind or Mathematics

In his science column for the NYT, George Johnson reviews several recent books by noted thinkers who, for different reasons, believe science needs to expand its borders. Philosopher Thomas Nagel and physicist Max Tegmark both agree that our current understanding of the universe is rather limited and that science needs to turn to new or alternate explanations. Nagel, still an atheist, suggests in his book Mind and Cosmos that the mind somehow needs to be considered a fundamental structure of the universe. Tegmark, in his book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, suggests that mathematics is the core, irreducible framework of the cosmos. Two radically different ideas — yet both are correct in one respect: we still know so very little about ourselves and our surroundings.

From the NYT:

Though he probably didn’t intend anything so jarring, Nicolaus Copernicus, in a 16th-century treatise, gave rise to the idea that human beings do not occupy a special place in the heavens. Nearly 500 years after replacing the Earth with the sun as the center of the cosmic swirl, we’ve come to see ourselves as just another species on a planet orbiting a star in the boondocks of a galaxy in the universe we call home. And this may be just one of many universes — what cosmologists, some more skeptically than others, have named the multiverse.

Despite the long string of demotions, we remain confident, out here on the edge of nowhere, that our band of primates has what it takes to figure out the cosmos — what the writer Timothy Ferris called “the whole shebang.” New particles may yet be discovered, and even new laws. But it is almost taken for granted that everything from physics to biology, including the mind, ultimately comes down to four fundamental concepts: matter and energy interacting in an arena of space and time.

There are skeptics who suspect we may be missing a crucial piece of the puzzle. Recently, I’ve been struck by two books exploring that possibility in very different ways. There is no reason why, in this particular century, Homo sapiens should have gathered all the pieces needed for a theory of everything. In displacing humanity from a privileged position, the Copernican principle applies not just to where we are in space but to when we are in time.

Since it was published in 2012, “Mind and Cosmos,” by the philosopher Thomas Nagel, is the book that has caused the most consternation. With his taunting subtitle — “Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False” — Dr. Nagel was rejecting the idea that there was nothing more to the universe than matter and physical forces. He also doubted that the laws of evolution, as currently conceived, could have produced something as remarkable as sentient life. That idea borders on anathema, and the book quickly met with a blistering counterattack. Steven Pinker, a Harvard psychologist, denounced it as “the shoddy reasoning of a once-great thinker.”

What makes “Mind and Cosmos” worth reading is that Dr. Nagel is an atheist, who rejects the creationist idea of an intelligent designer. The answers, he believes, may still be found through science, but only by expanding it further than it may be willing to go.

“Humans are addicted to the hope for a final reckoning,” he wrote, “but intellectual humility requires that we resist the temptation to assume that the tools of the kind we now have are in principle sufficient to understand the universe as a whole.”

Dr. Nagel finds it astonishing that the human brain — this biological organ that evolved on the third rock from the sun — has developed a science and a mathematics so in tune with the cosmos that it can predict and explain so many things.

Neuroscientists assume that these mental powers somehow emerge from the electrical signaling of neurons — the circuitry of the brain. But no one has come close to explaining how that occurs.

That, Dr. Nagel proposes, might require another revolution: showing that mind, along with matter and energy, is “a fundamental principle of nature” — and that we live in a universe primed “to generate beings capable of comprehending it.” Rather than being a blind series of random mutations and adaptations, evolution would have a direction, maybe even a purpose.

“Above all,” he wrote, “I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.”

Dr. Nagel is not alone in entertaining such ideas. While rejecting anything mystical, the biologist Stuart Kauffman has suggested that Darwinian theory must somehow be expanded to explain the emergence of complex, intelligent creatures. And David J. Chalmers, a philosopher, has called on scientists to seriously consider “panpsychism” — the idea that some kind of consciousness, however rudimentary, pervades the stuff of the universe.

Some of this is a matter of scientific taste. It can be just as exhilarating, as Stephen Jay Gould proposed in “Wonderful Life,” to consider the conscious mind as simply a fluke, no more inevitable than the human appendix or a starfish’s five legs. But it doesn’t seem so crazy to consider alternate explanations.

Heading off in another direction, a new book by the physicist Max Tegmark suggests that a different ingredient — mathematics — needs to be admitted into science as one of nature’s irreducible parts. In fact, he believes, it may be the most fundamental of all.

In a well-known 1960 essay, the physicist Eugene Wigner marveled at “the unreasonable effectiveness of mathematics” in explaining the world. It is “something bordering on the mysterious,” he wrote, for which “there is no rational explanation.”

The best he could offer was that mathematics is “a wonderful gift which we neither understand nor deserve.”

Dr. Tegmark, in his new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” turns the idea on its head: The reason mathematics serves as such a forceful tool is that the universe is a mathematical structure. Going beyond Pythagoras and Plato, he sets out to show how matter, energy, space and time might emerge from numbers.

Read the entire article here.

I Don’t Know, But I Like What I Like: The New Pluralism

In an insightful opinion piece, excerpted below, a millennial wonders if our fragmented, cluttered, information-rich society has damaged pluralism by turning action into indecision. Even aesthetic preferences come to be so laden with judgmental baggage that expressing a preference for one type of art, or car, or indeed cereal, seems to become an impossible conundrum for many born in the mid-1980s or later. So, a choice becomes a way to alienate those not chosen — when did selecting a cereal become such an onerous exercise in political correctness and moral relativism?

From the New York Times:

Critics of the millennial generation, of which I am a member, consistently use terms like “apathetic,” “lazy” and “narcissistic” to explain our tendency to be less civically and politically engaged. But what these critics seem to be missing is that many millennials are plagued not so much by apathy as by indecision. And it’s not surprising: Pluralism has been a large influence on our upbringing. While we applaud pluralism’s benefits, widespread enthusiasm has overwhelmed desperately needed criticism of its side effects.

By “pluralism,” I mean a cultural recognition of difference: individuals of varying race, gender, religious affiliation, politics and sexual preference, all exalted as equal. In recent decades, pluralism has come to be an ethical injunction, one that calls for people to peacefully accept and embrace, not simply tolerate, differences among individuals. Distinct from the free-for-all of relativism, pluralism encourages us (in concept) to support our own convictions while also upholding an “energetic engagement with diversity,” as Harvard’s Pluralism Project suggested in 1991. Today, paeans to pluralism continue to sound throughout the halls of American universities, private institutions, left-leaning households and influential political circles.

However, pluralism has had unforeseen consequences. The art critic Craig Owens once wrote that pluralism is not a “recognition, but a reduction of difference to absolute indifference, equivalence, interchangeability.” Some millennials who were greeted by pluralism in this battered state are still feeling its effects. Unlike those adults who encountered pluralism with their beliefs close at hand, we entered the world when truth-claims and qualitative judgments were already on trial and seemingly interchangeable. As a result, we continue to struggle when it comes to decisively avowing our most basic convictions.

Those of us born after the mid-1980s whose upbringing included a liberal arts education and the fruits of a fledgling World Wide Web have grown up (and are still growing up) with an endlessly accessible stream of texts, images and sounds from far-reaching times and places, much of which were unavailable to humans for all of history. Our most formative years include not just the birth of the Internet and the ensuing accelerated global exchange of information, but a new orthodoxy of multiculturalist ethics and “political correctness.”

These ideas were reinforced in many humanities departments in Western universities during the 1980s, where facts and claims to objectivity were eagerly jettisoned. Even “the canon” was dislodged from its historically privileged perch, and since then, many liberal-minded professors have avoided opining about “good” literature or “high art” to avoid reinstating an old hegemony. In college today, we continue to learn about the byproducts of absolute truths and intractable forms of ideology, which historically seem inextricably linked to bigotry and prejudice.

For instance, a student in one of my English classes was chastened for his preference for Shakespeare over that of the Haitian-American writer Edwidge Danticat. The professor challenged the student to apply a more “disinterested” analysis to his reading so as to avoid entangling himself in a misinformed gesture of “postcolonial oppression.” That student stopped raising his hand in class.

I am not trying to tackle the challenge as a whole or indict contemporary pedagogies, but I have to ask: How does the ethos of pluralism inside universities impinge on each student’s ability to make qualitative judgments outside of the classroom, in spaces of work, play, politics or even love?

In 2004, the French sociologist of science Bruno Latour intimated that the skeptical attitude which rebuffs claims to absolute knowledge might have had a deleterious effect on the younger generation: “Good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on.” Latour identified a condition that resonates: Our tenuous claims to truth have not simply been learned in university classrooms or in reading theoretical texts but reinforced by the decentralized authority of the Internet. While trying to form our fundamental convictions in this dizzying digital and intellectual global landscape, some of us are finding it increasingly difficult to embrace qualitative judgments.

Matters of taste in music, art and fashion, for example, can become a source of anxiety and hesitation. While clickable ways of “liking” abound on the Internet, personalized avowals of taste often seem treacherous today. Admittedly, many millennials (and nonmillennials) might feel comfortable simply saying, “I like what I like,” but some of us find ourselves reeling in the face of choice. To affirm a preference for rap over classical music, for instance, implicates the well-meaning millennial in a web of judgments far beyond his control. For the millennial generation, as a result, confident expressions of taste have become more challenging, as aesthetic preference is subjected to relentless scrutiny.

Philosophers and social theorists have long weighed in on this issue of taste. Pierre Bourdieu claimed that an “encounter with a work of art is not ‘love at first sight’ as is generally supposed.” Rather, he thought “tastes” function as “markers of ‘class.’ ” Theodor Adorno and Max Horkheimer argued that aesthetic preference could be traced along socioeconomic lines and reinforce class divisions. To dislike cauliflower is one thing. But elevating the work of one writer or artist over another has become contested territory.

This assured expression of “I like what I like,” when strained through pluralist-inspired critical inquiry, deteriorates: “I like what I like” becomes “But why do I like what I like? Should I like what I like? Do I like it because someone else wants me to like it? If so, who profits and who suffers from my liking what I like?” and finally, “I am not sure I like what I like anymore.” For a number of us millennials, commitments to even seemingly simple aesthetic judgments have become shot through with indecision.

Read the entire article here.

Which is Your God?

Is your God the fearsome one of the Old Testament? Or is yours the God who brought forth the angel Moroni? Or are your Gods those revered by Hindus or Ancient Greeks or the Norse? Theists have continuing trouble answering these fundamental questions, much to the consternation, and satisfaction, of atheists.

In a thoughtful interview with Gary Gutting, Louise Antony, a professor of philosophy at the University of Massachusetts, frames these questions in the broader context of morality and social justice.

From the NYT:

Gary Gutting: You’ve taken a strong stand as an atheist, so you obviously don’t think there are any good reasons to believe in God. But I imagine there are philosophers whose rational abilities you respect who are theists. How do you explain their disagreement with you? Are they just not thinking clearly on this topic?

Louise Antony: I’m not sure what you mean by saying that I’ve taken a “strong stand as an atheist.” I don’t consider myself an agnostic; I claim to know that God doesn’t exist, if that’s what you mean.

G.G.: That is what I mean.

L.A.: O.K. So the question is, why do I say that theism is false, rather than just unproven? Because the question has been settled to my satisfaction. I say “there is no God” with the same confidence I say “there are no ghosts” or “there is no magic.” The main issue is supernaturalism — I deny that there are beings or phenomena outside the scope of natural law.

That’s not to say that I think everything is within the scope of human knowledge. Surely there are things not dreamt of in our philosophy, not to mention in our science – but that fact is not a reason to believe in supernatural beings. I think many arguments for the existence of a God depend on the insufficiencies of human cognition. I readily grant that we have cognitive limitations. But when we bump up against them, when we find we cannot explain something — like why the fundamental physical parameters happen to have the values that they have — the right conclusion to draw is that we just can’t explain the thing. That’s the proper place for agnosticism and humility.

But getting back to your question: I’m puzzled why you are puzzled how rational people could disagree about the existence of God. Why not ask about disagreements among theists? Jews and Muslims disagree with Christians about the divinity of Jesus; Protestants disagree with Catholics about the virginity of Mary; Protestants disagree with Protestants about predestination, infant baptism and the inerrancy of the Bible. Hindus think there are many gods while Unitarians think there is at most one. Don’t all these disagreements demand explanation too? Must a Christian Scientist say that Episcopalians are just not thinking clearly? Are you going to ask a Catholic if she thinks there are no good reasons for believing in the angel Moroni?

G.G.: Yes, I do think it’s relevant to ask believers why they prefer their particular brand of theism to other brands. It seems to me that, at some point of specificity, most people don’t have reasons beyond being comfortable with one community rather than another. I think it’s at least sometimes important for believers to have a sense of what that point is. But people with many different specific beliefs share a belief in God — a supreme being who made and rules the world. You’ve taken a strong stand against that fundamental view, which is why I’m asking you about that.

L.A.: Well I’m challenging the idea that there’s one fundamental view here. Even if I could be convinced that supernatural beings exist, there’d be a whole separate issue about how many such beings there are and what those beings are like. Many theists think they’re home free with something like the argument from design: that there is empirical evidence of a purposeful design in nature. But it’s one thing to argue that the universe must be the product of some kind of intelligent agent; it’s quite something else to argue that this designer was all-knowing and omnipotent. Why is that a better hypothesis than that the designer was pretty smart but made a few mistakes? Maybe (I’m just cribbing from Hume here) there was a committee of intelligent creators, who didn’t quite agree on everything. Maybe the creator was a student god, and only got a B- on this project.

In any case though, I don’t see that claiming to know that there is no God requires me to say that no one could have good reasons to believe in God. I don’t think there’s some general answer to the question, “Why do theists believe in God?” I expect that the explanation for theists’ beliefs varies from theist to theist. So I’d have to take things on a case-by-case basis.

I have talked about this with some of my theist friends, and I’ve read some personal accounts by theists, and in those cases, I feel that I have some idea why they believe what they believe. But I can allow there are arguments for theism that I haven’t considered, or objections to my own position that I don’t know about. I don’t think that when two people take opposing stands on any issue that one of them has to be irrational or ignorant.

G.G.: No, they may both be rational. But suppose you and your theist friend are equally adept at reasoning, equally informed about relevant evidence, equally honest and fair-minded — suppose, that is, you are what philosophers call epistemic peers: equally reliable as knowers. Then shouldn’t each of you recognize that you’re no more likely to be right than your peer is, and so both retreat to an agnostic position?

L.A.: Yes, this is an interesting puzzle in the abstract: How could two epistemic peers — two equally rational, equally well-informed thinkers — fail to converge on the same opinions? But it is not a problem in the real world. In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe.

G.G.: So is your point that we always have reason to think that people who disagree are not epistemic peers?

L.A.: It’s worse than that. The whole notion of epistemic peers belongs only to the abstract study of knowledge, and has no role to play in real life. Take the notion of “equal cognitive powers”: speaking in terms of real human minds, we have no idea how to seriously compare the cognitive powers of two people.

Read the entire article here.

Chomsky

Chomsky. It’s highly likely that the mere sound of his name will polarize you. You will find yourself either for Noam Chomsky or adamantly against. You will either stand with him on the Arab-Israeli conflict or you won’t; you either support his libertarian-socialist views or you’re firmly against; you either agree with him on issues of privacy and authority or you don’t. However, regardless of your position on the Chomsky-support-scale you have to recognize that once he’s gone — he’s 84 years old — he’ll be recognized as one of the world’s great contemporary thinkers and writers. In the same mold as George Orwell, who was one of Chomsky’s early influences, Chomsky speaks truth to power. Whether the topic is political criticism, mass media, analytic philosophy, the military-industrial complex, computer science or linguistics the range of Chomsky’s discourse is astonishing, and his opinion not to be ignored.

From the Guardian:

It may have been pouring with rain, water overrunning the gutters and spreading fast and deep across London’s Euston Road, but this did not stop a queue forming, and growing until it snaked almost all the way back to Euston station. Inside Friends House, a Quaker-run meeting hall, the excitement was palpable. People searched for friends and seats with thinly disguised anxiety; all watched the stage until, about 15 minutes late, a short, slightly top-heavy old man climbed carefully on to the stage and sat down. The hall filled with cheers and clapping, with whoops and with whistles.

Noam Chomsky, said two speakers (one of them Mariam Said, whose late husband, Edward, this lecture honours) “needs no introduction”. A tired turn of phrase, but they had a point: in a bookshop down the road the politics section is divided into biography, reference, the Clintons, Obama, Thatcher, Marx, and Noam Chomsky. He topped the first Foreign Policy/Prospect Magazine list of global thinkers in 2005 (the most recent, however, perhaps reflecting a new editorship and a new rubric, lists him not at all). One study of the most frequently cited academic sources of all time found that he ranked eighth, just below Plato and Freud. The list included the Bible.

When he starts speaking, it is in a monotone that makes no particular rhetorical claim on the audience’s attention; in fact, it’s almost soporific. Last October, he tells his audience, he visited Gaza for the first time. Within five minutes many of the hallmarks of Chomsky’s political writing, and speaking, are displayed: his anger, his extraordinary range of reference and experience – journalism from inside Gaza, personal testimony, detailed knowledge of the old Egyptian government, its secret service, the new Egyptian government, the historical context of the Israeli occupation, recent news reports (of sewage used by the Egyptians to flood tunnels out of Gaza, and by Israelis to spray non-violent protesters). Fact upon fact upon fact, but also a withering, sweeping sarcasm – the atrocities are “tolerated politely by Europe as usual”. Harsh, vivid phrases – the “hideously charred corpses of murdered infants”; bodies “writhing in agony” – unspool until they become almost a form of punctuation.

You could argue that the latter is necessary, simply a description of atrocities that must be reported, but it is also a method that has diminishing returns. The facts speak for themselves; the adjectives and the sarcasm have the counterintuitive effect of cheapening them, of imposing on the world a disappointingly crude and simplistic argument. “The sentences,” wrote Larissa MacFarquhar in a brilliant New Yorker profile of Chomsky 10 years ago, “are accusations of guilt, but not from a position of innocence or hope for something better: Chomsky’s sarcasm is the scowl of a fallen world, the sneer of hell’s veteran to its appalled naifs” – and thus, in an odd way, static and ungenerative.

Chomsky first came to prominence in 1959, with the argument, detailed in a book review (but already present in his first book, published two years earlier), that contrary to the prevailing idea that children learned language by copying and by reinforcement (ie behaviourism), basic grammatical arrangements were already present at birth. The argument revolutionised the study of linguistics; it had fundamental ramifications for anyone studying the mind. It also has interesting, even troubling ramifications for his politics. If we are born with innate structures of linguistic and by extension moral thought, isn’t this a kind of determinism that denies political agency? What is the point of arguing for any change at all?

“The most libertarian positions accept the same view,” he answers. “That there are instincts, basic conditions of human nature that lead to a preferred social order. In fact, if you’re in favour of any policy – reform, revolution, stability, regression, whatever – if you’re at least minimally moral, it’s because you think it’s somehow good for people. And good for people means conforming to their fundamental nature. So whoever you are, whatever your position is, you’re making some tacit assumptions about fundamental human nature … The question is: what do we strive for in developing a social order that is conducive to fundamental human needs? Are human beings born to be servants to masters, or are they born to be free, creative individuals who work with others to inquire, create, develop their own lives? I mean, if humans were totally unstructured creatures, they would be … a tool which can properly be shaped by outside forces. That’s why if you look at the history of what’s called radical behaviourism, [where] you can be completely shaped by outside forces – when [the advocates of this] spell out what they think society ought to be, it’s totalitarian.”

Chomsky, now 84, has been politically engaged all his life; his first published article, in fact, was against fascism, and written when he was 10. Where does the anger come from? “I grew up in the Depression. My parents had jobs, but a lot of the family were unemployed working class, so they had no jobs at all. So I saw poverty and repression right away. People would come to the door trying to sell rags – that was when I was four years old. I remember riding with my mother in a trolley car and passing a textile worker’s strike where the women were striking outside and the police were beating them bloody.”

He met Carol, who would become his wife, at about the same time, when he was five years old. They married when she was 19 and he 21, and were together until she died nearly 60 years later, in 2008. He talks about her constantly, given the chance: how she was so strict about his schedule when they travelled (she often accompanied him on lecture tours) that in Latin America they called her El Comandante; the various bureaucratic scrapes they got into, all over the world. By all accounts, she also enforced balance in his life: made sure he watched an hour of TV a night, went to movies and concerts, encouraged his love of sailing (at one point, he owned a small fleet of sailboats, plus a motorboat); she water-skied until she was 75.

But she was also politically involved: she took her daughters (they had three children: two girls and a boy) to demonstrations; he tells me a story about how, when they were protesting against the Vietnam war, they were once both arrested on the same day. “And you get one phone call. So my wife called our older daughter, who was at that time 12, I guess, and told her, ‘We’re not going to come home tonight, can you take care of the two kids?’ That’s life.” At another point, when it looked like he would be jailed for a long time, she went back to school to study for a PhD, so that she could support the children alone. It makes no sense, he told an interviewer a couple of years ago, for a woman to die before her husband, “because women manage so much better, they talk and support each other. My oldest and closest friend is in the office next door to me; we haven’t once talked about Carol.” His eldest daughter often helps him now. “There’s a transition point, in some way.”

Does he think that in all these years of talking and arguing and writing, he has ever changed one specific thing? “I don’t think any individual changes anything alone. Martin Luther King was an important figure but he couldn’t have said: ‘This is what I changed.’ He came to prominence on a groundswell that was created by mostly young people acting on the ground. In the early years of the antiwar movement we were all doing organising and writing and speaking and gradually certain people could do certain things more easily and effectively, so I pretty much dropped out of organising – I thought the teaching and writing was more effective. Others, friends of mine, did the opposite. But they’re not less influential. Just not known.”

Read the entire article following the jump.

Antifragile

One of our favorite thinkers (and authors) here at theDiagonal is Nassim Taleb. His new work, Antifragile, expands on ideas that he first described in his bestseller The Black Swan.

Based on humanity’s need to find order and patterns out of chaos, and proclivity to seek causality where none exists we’ll need several more books from him before his profound and yet common-sense ideas sink in. In his latest work, Taleb shows how the improbable and unpredictable lie at the foundation of our universe.

[div class=attrib]From the Guardian:[end-div]

How much does Nassim Taleb dislike journalists? Let me count the ways. “An erudite is someone who displays less than he knows; a journalist or consultant the opposite.” “This business of journalism is about pure entertainment, not the search for the truth.” “Most so-called writers keep writing and writing with the hope, some day, to find something to say.” He disliked them before, but after he predicted the financial crash in his 2007 book, The Black Swan, a book that became a global bestseller, his antipathy reached new heights. He has dozens and dozens of quotes on the subject, and if that’s too obtuse for us non-erudites, his online home page puts it even plainer: “I beg journalists and members of the media to leave me alone.”

He’s not wildly keen on appointments either. In his new book, Antifragile, he writes that he never makes them because a date in the calendar “makes me feel like a prisoner”.

So imagine, if you will, how keenly he must be looking forward to the prospect of a pre-arranged appointment to meet me, a journalist. I approach our lunch meeting, at the Polytechnic Institute of New York University where he’s the “distinguished professor of risk engineering”, as one might approach a sleeping bear: gingerly. And with a certain degree of fear. And yet there he is, striding into the faculty lobby in a jacket and Steve Jobs turtleneck (“I want you to write down that I started wearing them before he did. I want that to be known.”), smiling and effusive.

First, though, he has to have his photo taken. He claims it’s the first time he’s allowed it in three years, and has allotted just 10 minutes for it, though in the end it’s more like five. “The last guy I had was a fucking dick. He wanted to be artsy fartsy,” he tells the photographer, Mike McGregor. “You’re OK.”

Being artsy fartsy, I will learn, is even lower down the scale of Nassim Taleb pet hates than journalists. But then, being contradictory about what one hates and despises and loves and admires is actually another key Nassim Taleb trait.

In print, the hating and despising is there for all to see: he’s forever having spats and fights. When he’s not slagging off the Nobel prize for economics (a “fraud”), bankers (“I have a physical allergy to them”) and the academic establishment (he has it in for something he calls the “Soviet-Harvard illusion”), he’s trading blows with Steven Pinker (“clueless”), and a random reviewer on Amazon, who he took to his Twitter stream to berate. And this is just in the last week.

And yet here he is, chatting away, surprisingly friendly and approachable. When I say as much as we walk to the restaurant, he asks, “What do you mean?”

“In your book, you’re quite…” and I struggle to find the right word, “grumpy”.

He shrugs. “When you write, you don’t have the social constraints of having people in front of you, so you talk about abstract matters.”

Social constraints, it turns out, have their uses. And he’s an excellent host. We go to his regular restaurant, a no-nonsense, Italian-run, canteen-like place, a few yards from his faculty in central Brooklyn, and he insists that I order a glass of wine.

“And what’ll you have?” asks the waitress.

“I’ll take a coffee,” he says.

“What?” I say. “No way! You can’t trick me into ordering a glass of wine and then have coffee.” It’s like flunking lesson #101 at interviewing school, though in the end he relents and has not one but two glasses and a plate of “pasta without pasta” (though strictly speaking you could call it “mixed vegetables and chicken”), and attacks the bread basket “because it doesn’t have any calories here in Brooklyn”.

But then, having read his latest book, I actually know an awful lot about his diet. How he doesn’t eat sugar, any fruits which “don’t have a Greek or Hebrew name” or any liquid which is less than 1,000 years old. Just as I know that he doesn’t like air-conditioning, soccer moms, sunscreen and copy editors. That he believes the “non-natural” has to prove its harmlessness. That America tranquillises its children with drugs and pathologises sadness. That he values honour above all things, banging on about it so much that at times he comes across as a medieval knight who’s got lost somewhere in the space-time continuum. And that several times a week he goes and lifts weights in a basement gym with a bunch of doormen.

He says that after the financial crisis he received “all manner of threats” and at one time was advised to “stock up on bodyguards”. Instead, “I found it more appealing to look like one”. Now, he writes, when he’s harassed by limo drivers in the arrival hall at JFK, “I calmly tell them to fuck off.”

Taleb started out as a trader, worked as a quantitative analyst and ran his own investment firm, but the more he studied statistics, the more he became convinced that the entire financial system was a keg of dynamite that was ready to blow. In The Black Swan he argued that modernity is too complex to understand, and “Black Swan” events – hitherto unknown and unpredicted shocks – will always occur.

What’s more, because of the complexity of the system, if one bank went down, they all would. The book sold 3m copies. And months later, of course, this was more or less exactly what happened. Overnight, he went from lone-voice-in-the-wilderness, spouting off-the-wall theories, to the great seer of the modern age.

Antifragile, the follow-up, is his most important work so far, he says. It takes the central idea of The Black Swan and expands it to encompass almost every other aspect of life, from the 19th century rise of the nation state to what to eat for breakfast (fresh air, as a general rule).

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Black Swan, the movie, not the book by the same name by Nassim Taleb. Courtesy of Wikipedia.[end-div]

Socialism and Capitalism Share the Same Parent

Expanding on the work of Immanuel Kant in the late 18th century, German philosopher Georg Wilhelm Friedrich Hegel laid the foundations for what would later become two opposing political systems, socialism and free market capitalism. His comprehensive framework of Absolute Idealism influenced numerous philosophers and thinkers of all shades, including Karl Marx and Ralph Waldo Emerson. While many thinkers later rounded on Hegel’s world view as nothing but a thinly veiled attempt to justify totalitarianism in his own nation, there is no disputing the profound influence of his works on later thinkers from both the left and the right of the political spectrum.

[div class=attrib]From FairObserver:[end-div]

It is common knowledge that among developed western countries the two leading socioeconomic systems are socialism and capitalism. The former is often associated more closely with European systems of governance and the latter with the American free market economy. It is also generally known that these two systems are rooted in two fundamentally different assumptions about how a healthy society progresses. What is not as well known is that they both stem from the same philosophical roots, namely the evolutionary philosophy of Georg Wilhelm Friedrich Hegel.

Georg Wilhelm Friedrich Hegel was a leading figure in the movement known as German Idealism that had its beginnings in the late 18th century. That philosophical movement was initiated by another prominent German thinker, Immanuel Kant. Kant published “The Critique of Pure Reason” in 1781, offering a radical new way to understand how we as human beings get along in the world. Hegel expanded on Kant’s theory of knowledge by adding a theory of social and historical progress. Both socialism and capitalism were inspired by different, and to some extent opposing, interpretations of Hegel’s philosophical system.

Immanuel Kant recognized that human beings create their view of reality by incorporating new information into their previous understanding of reality using the laws of reason. As this integrative process unfolds we are compelled to maintain a coherent picture of what is real in order to operate effectively in the world. The coherent picture of reality that we maintain Kant called a necessary transcendental unity. It can be understood as the overarching picture of reality, or worldview, that helps us make sense of the world and against which we interpret and judge all new experiences and information.

Hegel realized that not only must individuals maintain a cohesive picture of reality, but societies and cultures must also maintain a collectively held and unified understanding of what is real. To use a gross example, it is not enough for me to know what a dollar bill is and what it is worth. If I am to be able to buy something with my money, then other people must agree on its value. Reality is not merely an individual event; it is a collective affair of shared agreement. Hegel further saw that the collective understanding of reality that is held in common by many human beings in any given society develops over the course of history. In his book “The Philosophy of History”, Hegel outlines his theory of how this development occurs. Karl Marx started with Hegel’s philosophy and then added his own profound insights – especially in regards to how oppression and class struggle drive the course of history.

Across the Atlantic in America, there was another thinker, Ralph Waldo Emerson, who was strongly influenced by German Idealism and especially the philosophy of Hegel. In the development of the American mind one cannot overstate the role that Emerson played as the pathfinder who marked trails of thought that continue to guide the  current American worldview. His ideas became grooves in consciousness set so deeply in the American psyche that they are often simply experienced as truth.  What excited Emerson about Hegel was his description of how reality emerged from a universal mind. Emerson similarly believed that what we as human beings experience as real has emerged through time from a universal source of intelligence. This distinctly Hegelian tone in Emerson can be heard clearly in this passage from his essay entitled “History”:

“There is one mind common to all individual men. Of the works of this mind history is the record. Man is explicable by nothing less than all his history. All the facts of history pre-exist as laws. Each law in turn is made by circumstances predominant. The creation of a thousand forests is in one acorn, and Egypt, Greece, Rome, Gaul, Britain, America, lie folded already in the first man. Epoch after epoch, camp, kingdom, empire, republic, democracy, are merely the application of this manifold spirit to the manifold world.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The portrait of G.W.F. Hegel (1770-1831); Steel engraving by Lazarus Sichling after a lithograph by Julius L. Sebbers. Courtesy of Wikipedia.[end-div]

Work as Punishment (and For the Sake of Leisure)

Gary Gutting, professor of philosophy at the University of Notre Dame, reminds us that, according to the Book of Genesis, work is punishment for Adam’s sin. No doubt many who hold other faiths, as well as those who hold none, tend to agree with this basic notion.

So, what on earth is work for?

Gutting goes on to remind us that Aristotle and Bertrand Russell had it right: that work is for the sake of leisure.

[div class=attrib]From the New York Times:[end-div]

Is work good or bad?  A fatuous question, it may seem, with unemployment such a pressing national concern.  (Apart from the names of the two candidates, “jobs” was the politically relevant word most used by speakers at the Republican and Democratic conventions.) Even apart from current worries, the goodness of work is deep in our culture. We applaud people for their work ethic, judge our economy by its productivity and even honor work with a national holiday.

But there’s an underlying ambivalence: we celebrate Labor Day by not working, the Book of Genesis says work is punishment for Adam’s sin, and many of us count the days to the next vacation and see a contented retirement as the only reason for working.

We’re ambivalent about work because in our capitalist system it means work-for-pay (wage-labor), not for its own sake.  It is what philosophers call an instrumental good, something valuable not in itself but for what we can use it to achieve.  For most of us, a paying job is still utterly essential — as masses of unemployed people know all too well.  But in our economic system, most of us inevitably see our work as a means to something else: it makes a living, but it doesn’t make a life.

What, then, is work for? Aristotle has a striking answer: “we work to have leisure, on which happiness depends.” This may at first seem absurd. How can we be happy just doing nothing, however sweetly (dolce far niente)?  Doesn’t idleness lead to boredom, the life-destroying ennui portrayed in so many novels, at least since “Madame Bovary”?

Everything depends on how we understand leisure. Is it mere idleness, simply doing nothing?  Then a life of leisure is at best boring (a lesson of Voltaire’s “Candide”), and at worst terrifying (leaving us, as Pascal says, with nothing to distract from the thought of death).  No, the leisure Aristotle has in mind is productive activity enjoyed for its own sake, while work is done for something else.

We can pass by for now the question of just what activities are truly enjoyable for their own sake — perhaps eating and drinking, sports, love, adventure, art, contemplation? The point is that engaging in such activities — and sharing them with others — is what makes a good life. Leisure, not work, should be our primary goal.

Bertrand Russell, in his classic essay “In Praise of Idleness,” agrees. “A great deal of harm,” he says, “is being done in the modern world by belief in the virtuousness of work.” Instead, “the road to happiness and prosperity lies in an organized diminution of work.” Before the technological breakthroughs of the last two centuries, leisure could be only “the prerogative of small privileged classes,” supported by slave labor or a near equivalent. But this is no longer necessary: “The morality of work is the morality of slaves, and the modern world has no need of slavery.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bust of Aristotle. Marble, Roman copy after a Greek bronze original by Lysippos from 330 BC; the alabaster mantle is a modern addition. Courtesy of Wikipedia.[end-div]

Philosophy and Science Fiction

We excerpt a fascinating article from io9 on the relationship between science fiction and philosophical inquiry. It’s quite remarkable that this genre of literature can provide such a rich vein for philosophers to mine, often richer than reality itself. Then again, it is no coincidence that our greatest authors of science fiction were, and are, amateur philosophers at heart.

[div class=attrib]From io9:[end-div]

People use science fiction to illustrate philosophy all the time. From ethical quandaries to the very nature of existence, science fiction’s most famous texts are tailor-made for exploring philosophical ideas. In fact, many college campuses now offer courses in the philosophy of science fiction.

But science fiction doesn’t just illuminate philosophy — in fact, the genre grew out of philosophy, and the earliest works of science fiction were philosophical texts. Here’s why science fiction has its roots in philosophy, and why it’s the genre of thought experiments about the universe.

Philosophical Thought Experiments As Science Fiction
Science fiction is a genre that uses strange worlds and inventions to illuminate our reality — sort of the opposite of a lot of other writing, which uses the familiar to build a portrait that cumulatively shows how insane our world actually is. People, especially early twenty-first century people, live in a world where strangeness lurks just beyond our frame of vision — but we can’t see it by looking straight at it. When we try to turn and confront the weird and unthinkable that’s always in the corner of our eye, it vanishes. In a sense, science fiction is like a prosthetic sense of peripheral vision.

We’re sort of like the people chained up in on the cave wall, but never seeing the full picture.

Plato is probably the best-known user of allegories — a form of writing which has a lot in common with science fiction. A lot of allegories are really thought experiments, trying out a set of strange facts to see what principles you derive from them. As plenty of people have pointed out, Plato’s Allegory of the Cave is the template for a million “what is reality” stories, from the works of Philip K. Dick to The Matrix. But you could almost see the cave allegory in itself as a proto-science fiction story, because of the strange worldbuilding that goes into these people who have never seen the “real” world. (Plato also gave us an allegory about the Ring of Gyges, which turns its wearer invisible — sound familiar?).

Later philosophers who ponder the nature of existence also seem to stray into weird science fiction territory — like Descartes, raising the notion that he, Descartes, could have existed since the beginning of the universe (as an alternative to God as a cause for Descartes’ existence.) Sitting in his bread oven, Descartes tries to cut himself off from sensory input to see what he can deduce of the universe.

And by the same token, the philosophy of human nature often seems to depend on conjuring imaginary worlds, whether it be Hobbes’ “nasty, brutish and short” world without laws, or Rousseau’s “state of nature.” A great believer in the importance of science, Hobbes sees humans as essentially mechanistic beings who are programmed to behave in a selfish fashion — and the state is a kind of artificial human that can contain us and give us better programming, in a sense.

So not only can you use something like Star Trek’s Holodeck to point out philosophical notions of the fallibility of the senses, and the possible falseness of reality — philosophy’s own explorations of those sorts of topics are frequently kind of other-worldly. Philosophical thought experiments, like the oft-cited “state of nature,” are also close kin to science fiction world building. As Susan Schneider writes in the book Science Fiction and Philosophy, “if you read science fiction writers like Stanislaw Lem, Isaac Asimov, Arthur C. Clarke and Robert Sawyer, you are already aware that some of the best science fiction tales are in fact long versions of philosophical thought experiments.”

But meanwhile, when people come to list the earliest known works that could be considered “real” science fiction, they always wind up listing philosophical works, written by philosophers.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Front cover art for the book Nineteen Eighty-Four (1984) written by George Orwell. Courtesy of Secker and Warburg (London) / Wikipedia.[end-div]

Ignorance [is] the Root and Stem of All Evil

Hailing from Classical Greece of around 2,400 years ago, Plato has given our contemporary world many important intellectual gifts. His broad interests in justice, mathematics, virtue, epistemology, rhetoric and art laid the foundations for Western philosophy and science. Yet in his quest for deeper and broader knowledge he also had some important things to say about ignorance.

Massimo Pigliucci over at Rationally Speaking gives us his take on Platonic Ignorance. His caution is appropriate: in this age of information overload and extreme politicization it is ever more important for us to realize and acknowledge our own ignorance. Spreading falsehoods and characterizing opinion as fact to others — transferred ignorance — is rightly identified by Plato as a moral failing. In his own words (of course translated), “Ignorance [is] the Root and Stem of All Evil”.

[div class=attrib]From Rationally Speaking:[end-div]

Plato famously maintained that knowledge is “justified true belief,” meaning that to claim the status of knowledge our beliefs (say, that the earth goes around the sun, rather than the other way around) have to be both true (to the extent this can actually be ascertained) and justified (i.e., we ought to be able to explain to others why we hold such beliefs, otherwise we are simply repeating the — possibly true — beliefs of someone else).

It is the “justified” part that is humbling, since a moment’s reflection will show that a large number of things we think we know we actually cannot justify, which means that we are simply trusting someone else’s authority on the matter. (Which is okay, as long as we realize and acknowledge that to be the case.)

I was recently intrigued, however, not by Plato’s well known treatment of knowledge, but by his far less discussed views on the opposite of knowledge: ignorance. The occasion for these reflections was a talk by Katja Maria Vogt of Columbia University, delivered at CUNY’s Graduate Center, where I work. Vogt began by recalling the ancient skeptics’ attitude toward ignorance, as a “conscious positive stand,” meaning that skepticism is founded on one’s realization of his own ignorance. In this sense, of course, Socrates’ contention that he knew nothing becomes neither a self-contradiction (isn’t he saying that he knows that he knows nothing, thereby acknowledging that he knows something?), nor false modesty. Socrates was simply saying that he was aware of having no expertise while at the same time devoting his life to the quest for knowledge.

Vogt was particularly interested in Plato’s concept of “transferred ignorance,” which the ancient philosopher singled out as morally problematic. Transferred ignorance is the case when someone imparts “knowledge” that he is not aware is in fact wrong. Let us say, for instance, that I tell you that vaccines cause autism, and I do so on the basis of my (alleged) knowledge of biology and other pertinent matters, while, in fact, I am no medical researcher and have only vague notions of how vaccines actually work (i.e., imagine my name is Jenny McCarthy).

The problem, for Plato, is that in a sense I would be thinking of myself as smarter than I actually am, which of course carries a feeling of power over others. I wouldn’t simply be mistaken in my beliefs, I would be mistaken in my confidence in those beliefs. It is this willful ignorance (after all, I did not make a serious attempt to learn about biology or medical research) that carries moral implications.

So for Vogt the ancient Greeks distinguished between two types of ignorance: the self-aware, Socratic one (which is actually good) and the self-oblivious one of the overconfident person (which is bad). Need I point out that far too little of the former and too much of the latter permeate current political and social discourse? Of course, I’m sure a historian could easily come up with a plethora of examples of bad ignorance throughout human history, all the way back to the beginning of recorded time, but it does strike me that the increasingly fact-free public discourse on issues varying from economic policies to scientific research has brought Platonic transferred ignorance to never before achieved peaks (or, rather, valleys).

And I suspect that this is precisely because of the lack of appreciation of the moral dimension of transferred or willful ignorance. When politicians or commentators make up “facts” — or disregard actual facts to serve their own ideological agendas — they sometimes seem genuinely convinced that they are doing something good, at the very least for their constituents, and possibly for humanity at large. But how can it be good — in the moral sense — to make false knowledge one’s own, and even to actively spread it to others?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Socrates and Plato in a medieval picture. Courtesy of Wikipedia.[end-div]

Philip K. Dick – Future Gnostic

Simon Critchley, professor of philosophy, continues his serialized analysis of Philip K. Dick. Part I first appeared here. Part II examines the events around 2-3-74 that led to Dick’s 8,000-page Gnostic treatise “Exegesis”.

[div class=attrib]From the New York Times:[end-div]

In the previous post, we looked at the consequences and possible philosophic import of the events of February and March of 1974 (also known as 2-3-74) in the life and work of Philip K. Dick, a period in which a dose of sodium pentothal, a light-emitting fish pendant and decades of fiction writing and quasi-philosophic activity came together in revelation that led to Dick’s 8,000-page “Exegesis.”

So, what is the nature of the true reality that Dick claims to have intuited during psychedelic visions of 2-3-74? Does it unwind into mere structureless ranting and raving or does it suggest some tradition of thought or belief? I would argue the latter. This is where things admittedly get a little weirder in an already weird universe, so hold on tight.

In the very first lines of “Exegesis” Dick writes, “We see the Logos addressing the many living entities.” Logos is an important concept that litters the pages of “Exegesis.” It is a word with a wide variety of meaning in ancient Greek, one of which is indeed “word.” It can also mean speech, reason (in Latin, ratio) or giving an account of something. For Heraclitus, to whom Dick frequently refers, logos is the universal law that governs the cosmos of which most human beings are somnolently ignorant. Dick certainly has this latter meaning in mind, but — most important — logos refers to the opening of John’s Gospel, “In the beginning was the word” (logos), where the word becomes flesh in the person of Christ.

But the core of Dick’s vision is not quite Christian in the traditional sense; it is Gnostical: it is the mystical intellection, at its highest moment a fusion with a transmundane or alien God who is identified with logos and who can communicate with human beings in the form of a ray of light or, in Dick’s case, hallucinatory visions.

There is a tension throughout “Exegesis” between a monistic view of the cosmos (where there is just one substance in the universe, which can be seen in Dick’s references to Spinoza’s idea of God as nature, Whitehead’s idea of reality as process and Hegel’s dialectic where “the true is the whole”) and a dualistic or Gnostical view of the cosmos, with two cosmic forces in conflict, one malevolent and the other benevolent. The way I read Dick, the latter view wins out. This means that the visible, phenomenal world is fallen and indeed a kind of prison cell, cage or cave.

Christianity, lest it be forgotten, is a metaphysical monism where it is the obligation of every Christian to love every aspect of creation – even the foulest and smelliest – because it is the work of God. Evil is nothing substantial because if it were it would have to be caused by God, who is by definition good. Against this, Gnosticism declares a radical dualism between the false God who created this world – who is usually called the “demiurge” – and the true God who is unknown and alien to this world. But for the Gnostic, evil is substantial and its evidence is the world. There is a story of a radical Gnostic who used to wash himself in his own saliva in order to have as little contact as possible with creation. Gnosticism is the worship of an alien God by those alienated from the world.

The novelty of Dick’s Gnosticism is that the divine is alleged to communicate with us through information. This is a persistent theme in Dick, and he refers to the universe as information and even Christ as information. Such information has a kind of electrostatic life connected to the theory of what he calls orthogonal time. The latter is rich and strange idea of time that is completely at odds with the standard, linear conception, which goes back to Aristotle, as a sequence of now-points extending from the future through the present and into the past. Dick explains orthogonal time as a circle that contains everything rather than a line both of whose ends disappear in infinity. In an arresting image, Dick claims that orthogonal time contains, “Everything which was, just as grooves on an LP contain that part of the music which has already been played; they don’t disappear after the stylus tracks them.”

It is like that seemingly endless final chord in the Beatles’ “A Day in the Life” that gathers more and more momentum and musical complexity as it decays. In other words, orthogonal time permits total recall.

[div class=attrib]Read the entire article after the jump.[end-div]

Philip K. Dick – Mystic, Epileptic, Madman, Fictionalizing Philosopher

Professor of philosophy Simon Critchley has an insightful examination (serialized) of Philip K. Dick’s writings. Philip K. Dick had a tragically short but richly creative writing career. Since his death in 1982, many of his novels have profoundly influenced contemporary culture.

[div class=attrib]From the New York Times:[end-div]

Philip K. Dick is arguably the most influential writer of science fiction in the past half century. In his short and meteoric career, he wrote 121 short stories and 45 novels. His work was successful during his lifetime but has grown exponentially in influence since his death in 1982. Dick’s work will probably be best known through the dizzyingly successful Hollywood adaptations of his work, in movies like “Blade Runner” (based on “Do Androids Dream of Electric Sheep?”), “Total Recall,” “Minority Report,” “A Scanner Darkly” and, most recently, “The Adjustment Bureau.” Yet few people might consider Dick a thinker. This would be a mistake.

Dick’s life has long passed into legend, peppered with florid tales of madness and intoxication. There are some who consider such legend something of a diversion from the character of Dick’s literary brilliance. Jonathan Lethem writes — rightly in my view — “Dick wasn’t a legend and he wasn’t mad. He lived among us and was a genius.” Yet Dick’s life continues to obtrude massively into any assessment of his work.

Everything turns here on an event that “Dickheads” refer to with the shorthand “the golden fish.” On Feb. 20, 1974, Dick was hit with the force of an extraordinary revelation after a visit to the dentist for an impacted wisdom tooth for which he had received a dose of sodium pentothal. A young woman delivered a bottle of Darvon tablets to his apartment in Fullerton, Calif. She was wearing a necklace with the pendant of a golden fish, an ancient Christian symbol that had been adopted by the Jesus counterculture movement of the late 1960s.

The fish pendant, on Dick’s account, began to emit a golden ray of light, and Dick suddenly experienced what he called, with a nod to Plato, anamnesis: the recollection or total recall of the entire sum of knowledge. Dick claimed to have access to what philosophers call the faculty of “intellectual intuition”: the direct perception by the mind of a metaphysical reality behind screens of appearance. Many philosophers since Kant have insisted that such intellectual intuition is available only to human beings in the guise of fraudulent obscurantism, usually as religious or mystical experience, like Emmanuel Swedenborg’s visions of the angelic multitude. This is what Kant called, in a lovely German word, “die Schwärmerei,” a kind of swarming enthusiasm, where the self is literally en-thused with the God, o theos. Brusquely sweeping aside the careful limitations and strictures that Kant placed on the different domains of pure and practical reason, the phenomenal and the noumenal, Dick claimed direct intuition of the ultimate nature of what he called “true reality.”

Yet the golden fish episode was just the beginning. In the following days and weeks, Dick experienced and indeed enjoyed a couple of nightlong psychedelic visions with phantasmagoric visual light shows. These hypnagogic episodes continued off and on, together with hearing voices and prophetic dreams, until his death eight years later at age 53. Many very weird things happened — too many to list here — including a clay pot that Dick called “Ho On” or “Oh Ho,” which spoke to him about various deep spiritual issues in a brash and irritable voice.

Now, was this just bad acid or good sodium pentothal? Was Dick seriously bonkers? Was he psychotic? Was he schizophrenic? (He writes, “The schizophrenic is a leap ahead that failed.”) Were the visions simply the effect of a series of brain seizures that some call T.L.E. — temporal lobe epilepsy? Could we now explain and explain away Dick’s revelatory experience by some better neuroscientific story about the brain? Perhaps. But the problem is that each of these causal explanations misses the richness of the phenomena that Dick was trying to describe and also overlooks his unique means for describing them.

The fact is that after Dick experienced the events of what he came to call “2-3-74” (the events of February and March of that year), he devoted the rest of his life to trying to understand what had happened to him. For Dick, understanding meant writing. Suffering from what we might call “chronic hypergraphia,” between 2-3-74 and his death, Dick wrote more than 8,000 pages about his experience. He often wrote all night, producing 20 single-spaced, narrow-margined pages at a go, largely handwritten and littered with extraordinary diagrams and cryptic sketches.

The unfinished mountain of paper, assembled posthumously into some 91 folders, was called “Exegesis.” The fragments were assembled by Dick’s friend Paul Williams and then sat in his garage in Glen Ellen, Calif., for the next several years. A beautifully edited selection of these texts, with a golden fish on the cover, was finally published at the end of 2011, weighing in at a mighty 950 pages. But this is still just a fraction of the whole.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Philip K. Dick by R.Crumb. Courtesy of Wired.[end-div]

Death May Not Be as Bad For You as You Think

Professor of philosophy Shelly Kagan has an interesting take on death. After all, how bad can something be for you if you’re not alive to experience it?

[div class=attrib]From the Chronicle:[end-div]

We all believe that death is bad. But why is death bad?

In thinking about this question, I am simply going to assume that the death of my body is the end of my existence as a person. (If you don’t believe me, read the first nine chapters of my book.) But if death is my end, how can it be bad for me to die? After all, once I’m dead, I don’t exist. If I don’t exist, how can being dead be bad for me?

People sometimes respond that death isn’t bad for the person who is dead. Death is bad for the survivors. But I don’t think that can be central to what’s bad about death. Compare two stories.

Story 1. Your friend is about to go on the spaceship that is leaving for 100 Earth years to explore a distant solar system. By the time the spaceship comes back, you will be long dead. Worse still, 20 minutes after the ship takes off, all radio contact between the Earth and the ship will be lost until its return. You’re losing all contact with your closest friend.

Story 2. The spaceship takes off, and then 25 minutes into the flight, it explodes and everybody on board is killed instantly.

Story 2 is worse. But why? It can’t be the separation, because we had that in Story 1. What’s worse is that your friend has died. Admittedly, that is worse for you, too, since you care about your friend. But that upsets you because it is bad for her to have died. But how can it be true that death is bad for the person who dies?

In thinking about this question, it is important to be clear about what we’re asking. In particular, we are not asking whether or how the process of dying can be bad. For I take it to be quite uncontroversial—and not at all puzzling—that the process of dying can be a painful one. But it needn’t be. I might, after all, die peacefully in my sleep. Similarly, of course, the prospect of dying can be unpleasant. But that makes sense only if we consider death itself to be bad. Yet how can sheer nonexistence be bad?

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

Despite the overall plausibility of the deprivation account, though, it’s not all smooth sailing. For one thing, if something is true, it seems as though there’s got to be a time when it’s true. Yet if death is bad for me, when is it bad for me? Not now. I’m not dead now. What about when I’m dead? But then, I won’t exist. As the ancient Greek philosopher Epicurus wrote: “So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more.”

If death has no time at which it’s bad for me, then maybe it’s not bad for me. Or perhaps we should challenge the assumption that all facts are datable. Could there be some facts that aren’t?

Suppose that on Monday I shoot John. I wound him with the bullet that comes out of my gun, but he bleeds slowly, and doesn’t die until Wednesday. Meanwhile, on Tuesday, I have a heart attack and die. I killed John, but when? No answer seems satisfactory! So maybe there are undatable facts, and death’s being bad for me is one of them.

Alternatively, if all facts can be dated, we need to say when death is bad for me. So perhaps we should just insist that death is bad for me when I’m dead. But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don’t exist? Isn’t it true that something can be bad for you only if you exist? Call this idea the existence requirement.

Should we just reject the existence requirement? Admittedly, in typical cases—involving pain, blindness, losing your job, and so on—things are bad for you while you exist. But maybe sometimes you don’t even need to exist for something to be bad for you. Arguably, the comparative bads of deprivation are like that.

Unfortunately, rejecting the existence requirement has some implications that are hard to swallow. For if nonexistence can be bad for somebody even though that person doesn’t exist, then nonexistence could be bad for somebody who never exists. It can be bad for somebody who is a merely possible person, someone who could have existed but never actually gets born.

t’s hard to think about somebody like that. But let’s try, and let’s call him Larry. Now, how many of us feel sorry for Larry? Probably nobody. But if we give up on the existence requirement, we no longer have any grounds for withholding our sympathy from Larry. I’ve got it bad. I’m going to die. But Larry’s got it worse: He never gets any life at all.

Moreover, there are a lot of merely possible people. How many? Well, very roughly, given the current generation of seven billion people, there are approximately three million billion billion billion different possible offspring—almost all of whom will never exist! If you go to three generations, you end up with more possible people than there are particles in the known universe, and almost none of those people get to be born.

If we are not prepared to say that that’s a moral tragedy of unspeakable proportions, we could avoid this conclusion by going back to the existence requirement. But of course, if we do, then we’re back with Epicurus’ argument. We’ve really gotten ourselves into a philosophical pickle now, haven’t we? If I accept the existence requirement, death isn’t bad for me, which is really rather hard to believe. Alternatively, I can keep the claim that death is bad for me by giving up the existence requirement. But then I’ve got to say that it is a tragedy that Larry and the other untold billion billion billions are never born. And that seems just as unacceptable.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Still photograph from Ingmar Bergman’s “The Seventh Seal”. Courtesy of the Guardian.[end-div]

Language as a Fluid Construct

Peter Ludlow, professor of philosophy at Northwestern University, has authored a number of fascinating articles on the philosophy of language and linguistics. Here he discusses his view of language as a dynamic, living organism. Literalists take note.

[div class=attrib]From the New York Times:[end-div]

There is a standard view about language that one finds among philosophers, language departments, pundits and politicians.  It is the idea that a language like English is a semi-stable abstract object that we learn to some degree or other and then use in order to communicate or express ideas and perform certain tasks.  I call this the static picture of language, because, even though it acknowledges some language change, the pace of change is thought to be slow, and what change there is, is thought to be the hard fought product of conflict.  Thus, even the “revisionist” picture of language sketched by Gary Gutting in a recent Stone column counts as static on my view, because the change is slow and it must overcome resistance.

Recent work in philosophy, psychology and artificial intelligence has suggested an alternative picture that rejects the idea that languages are stable abstract objects that we learn and then use.  According to the alternative “dynamic” picture, human languages are one-off things that we build “on the fly” on a conversation-by-conversation basis; we can call these one-off fleeting languages microlanguages.  Importantly, this picture rejects the idea that words are relatively stable things with fixed meanings that we come to learn. Rather, word meanings themselves are dynamic — they shift from microlanguage to microlanguage.

Shifts of meaning do not merely occur between conversations; they also occur within conversations — in fact conversations are often designed to help this shifting take place.  That is, when we engage in conversation, much of what we say does not involve making claims about the world but involves instructing our communicative partners how to adjust word meanings for the purposes of our conversation.

I’d I tell my friend that I don’t care where I teach so long as the school is in a city.  My friend suggests that I apply to the University of Michigan and I reply “Ann Arbor is not a city.”  In doing this, I am not making a claim about the world so much as instructing my friend (for the purposes of our conversation) to adjust the meaning of “city” from official definitions to one in which places like Ann Arbor do not count as a cities.

Word meanings are dynamic, but they are also underdetermined.  What this means is that there is no complete answer to what does and doesn’t fall within the range of a term like “red” or “city” or “hexagonal.”  We may sharpen the meaning and we may get clearer on what falls in the range of these terms, but we never completely sharpen the meaning.

This isn’t just the case for words like “city” but, for all words, ranging from words for things, like “person” and “tree,” words for abstract ideas, like “art” and “freedom,” and words for crimes, like “rape” and “murder.” Indeed, I would argue that this is also the case with mathematical and logical terms like “parallel line” and “entailment.”  The meanings of these terms remain open to some degree or other, and are sharpened as needed when we make advances in mathematics and logic.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Leif Parsons / New York Times.[end-div]

A Philosopher On Avoiding Death

Below we excerpt a brilliant essay by Alex Byrne summarizing his argument that our personal survival is grossly overvalued. But this should not give future teleportation engineers pause. Alex Byrne is a professor of philosophy at MIT.

[div class=attrib]From the Boston Review:[end-div]

Star Trek–style teleportation may one day become a reality. You step into the transporter, which instantly scans your body and brain, vaporizing them in the process. The information is transmitted to Mars, where it is used by the receiving station to reconstitute your body and brain exactly as they were on Earth. You then step out of the receiving station, slightly dizzy, but pleased to arrive on Mars in a few minutes, as opposed to the year it takes by old-fashioned spacecraft.

But wait. Do you really step out of the receiving station on Mars? Someone just like you steps out, someone who apparently remembers stepping into the transporter on Earth a few minutes before. But perhaps this person is merely your replica—a kind of clone or copy. That would not make this person you: in Las Vegas there is a replica of the Eiffel Tower, but the Eiffel Tower is in Paris, not in Las Vegas. If the Eiffel Tower were vaporized and a replica instantly erected in Las Vegas, the Eiffel Tower would not have been transported to Las Vegas. It would have ceased to exist. And if teleportation were like that, stepping into the transporter would essentially be a covert way of committing suicide. Troubled by these thoughts, you now realize that “you” have been commuting back and forth to Mars for years . . .

So which is it? You are preoccupied with a question about your survival: Do you survive teleportation to Mars? A lot hangs on the question, and it is not obvious how to answer it. Teleportation is just science fiction, of course; does the urgent fictional question have a counterpart in reality? Indeed it does: Do you, or could you, survive death?

Teeming hordes of humanity adhere to religious doctrines that promise survival after death: perhaps bodily resurrection at the Day of Judgment, reincarnation, or immaterial immortality. For these people, death is not the end.

Some of a more secular persuasion do not disagree. The body of the baseball great Ted Williams lies in a container cooled by liquid nitrogen to -321 degrees Fahrenheit, awaiting the Great Thawing, when he will rise to sign sports memorabilia again. (Williams’s prospects are somewhat compromised because his head has apparently been preserved separately.) For the futurist Ray Kurzweil, hope lies in the possibility that he will be uploaded to new and shiny hardware—as pictures are transferred to Facebook’s servers—leaving his outmoded biological container behind.

Isn’t all this a pipe dream? Why isn’t “uploading” merely a way of producing a perfect Kurzweil-impersonator, rather than the real thing? Cryogenic storage might help if I am still alive when frozen, but what good is it after I am dead? And is the religious line any more plausible? “Earth to earth, ashes to ashes, dust to dust” hardly sounds like the dawn of a new day. Where is—as the Book of Common Prayer has it—the “sure and certain hope of the Resurrection to eternal life”? If a forest fire consumes a house and the luckless family hamster, that’s the end of them, presumably. Why are we any different?

Philosophers have had a good deal of interesting things to say about these issues, under the unexciting rubric of "personal identity." Let us begin our tour of some highlights with a more general topic: the survival, or "persistence," of objects over time.

Physical objects (including plants and animals) typically come into existence at some time, and cease to exist at a later time, or so we normally think. For example, a cottage might come into existence when enough beams and bricks are assembled, and cease to exist a century later, when it is demolished to make room for a McMansion. A mighty oak tree began life as a tiny green shoot, or perhaps an acorn, and will end its existence when it is sawn into planks.

The cottage and the oak survive a variety of vicissitudes throughout their careers. The house survived Hurricane Irene, say. That is, the house existed before Irene and also existed after Irene. We can put this in terms of “identity”: the house existed before Irene and something existed after Irene that was identical to the house.

[div class=attrib]Read the entire essay here.[end-div]

Do We Need Philosophy Outside of the Ivory Tower?

In her song "What I Am", Edie Brickell reminds us that philosophy is "the talk on a cereal box" and "a walk on the slippery rocks".

Philosopher Gary Gutting makes the case that the discipline is more important than ever, and yes, it belongs in the mainstream consciousness, and not just within the confines of academia.

[div class=attrib]From the New York Times:[end-div]

Almost every article that appears in The Stone provokes some comments from readers challenging the very idea that philosophy has anything relevant to say to non-philosophers.  There are, in particular, complaints that philosophy is an irrelevant “ivory-tower” exercise, useless to any except those interested in logic-chopping for its own sake.

There is an important conception of philosophy that falls to this criticism.  Associated especially with earlier modern philosophers, particularly René Descartes, this conception sees philosophy as the essential foundation of the beliefs that guide our everyday life.  For example, I act as though there is a material world and other people who experience it as I do.   But how do I know that any of this is true?  Couldn’t I just be dreaming of a world outside my thoughts?  And, since (at best) I see only other human bodies, what reason do I have to think that there are any minds connected to those bodies?  To answer these questions, it would seem that I need rigorous philosophical arguments for my existence and the existence of other thinking humans.

Of course, I don’t actually need any such arguments, if only because I have no practical alternative to believing that I and other people exist.  As soon as we stop thinking weird philosophical thoughts, we immediately go back to believing what skeptical arguments seem to call into question.  And rightly so, since, as David Hume pointed out, we are human beings before we are philosophers.

But what Hume and, by our day, virtually all philosophers are rejecting is only what I’m calling the foundationalist conception of philosophy. Rejecting foundationalism means accepting that we have every right to hold basic beliefs that are not legitimated by philosophical reflection.  More recently, philosophers as different as Richard Rorty and Alvin Plantinga have cogently argued that such basic beliefs include not only the “Humean” beliefs that no one can do without, but also substantive beliefs on controversial questions of ethics, politics and religion.  Rorty, for example, maintained that the basic principles of liberal democracy require no philosophical grounding (“the priority of democracy over philosophy”).

If you think that the only possible “use” of philosophy would be to provide a foundation for beliefs that need no foundation, then the conclusion that philosophy is of little importance for everyday life follows immediately.  But there are other ways that philosophy can be of practical significance.

Even though basic beliefs on ethics, politics and religion do not require prior philosophical justification, they do need what we might call “intellectual maintenance,” which itself typically involves philosophical thinking.  Religious believers, for example, are frequently troubled by the existence of horrendous evils in a world they hold was created by an all-good God.  Some of their trouble may be emotional, requiring pastoral guidance.  But religious commitment need not exclude a commitment to coherent thought. For instance, often enough believers want to know if their belief in God makes sense given the reality of evil.  The philosophy of religion is full of discussions relevant to this question.  Similarly, you may be an atheist because you think all arguments for God’s existence are obviously fallacious. But if you encounter, say, a sophisticated version of the cosmological argument, or the design argument from fine-tuning, you may well need a clever philosopher to see if there’s anything wrong with it.

[div class=attrib]Read the entire article here.[end-div]

Consciousness as Illusion?

Massimo Pigliucci over at Rationally Speaking ponders free will, moral responsibility and consciousness and, as always, presents a well reasoned and eloquent argument — we do exist!

[div class=attrib]From Rationally Speaking:[end-div]

For some time I have been noticing the emergence of a strange trinity of beliefs among my fellow skeptics and freethinkers: an increasing number of them, it seems, don’t believe that they can make decisions (the free will debate), don’t believe that they have moral responsibility (because they don’t have free will, or because morality is relative — take your pick), and they don’t even believe that they exist as conscious beings because, you know, consciousness is an illusion.

As I have argued recently, there are sensible ways to understand human volition (a much less metaphysically loaded and more sensible term than free will) within a lawful universe (Sean Carroll agrees and, interestingly, so does my sometime opponent Eliezer Yudkowsky). I also devoted an entire series on this blog to a better understanding of what morality is, how it works, and why it ain’t relative (within the domain of social beings capable of self-reflection). Let’s talk about consciousness then.

The oft-heard claim that consciousness is an illusion is an extraordinary one, as it relegates to an entirely epiphenomenal status what is arguably the most distinctive characteristic of human beings, the very thing that seems to shape and give meaning to our lives, and presumably one of the major outcomes of millions of years of evolution pushing for a larger brain equipped with powerful frontal lobes capable of carrying out reasoning and deliberation.

Still, if science tells us that consciousness is an illusion, we must bow to that pronouncement and move on (though we apparently cannot escape the illusion, partly because we have no free will). But what is the extraordinary evidence for this extraordinary claim? To begin with, there are studies of (very few) “split brain” patients which seem to indicate that the two hemispheres of the brain — once separated — display independent consciousness (under experimental circumstances), to the point that they may even try to make the left and right sides of the body act antagonistically to each other.

But there are a couple of obvious issues here that block an easy jump from observations on those patients to grand conclusions about the illusoriness of consciousness. First off, the two hemispheres are still conscious, so at best we have evidence that consciousness is divisible, not that it is an illusion (and that subdivision presumably can proceed no further than n=2). Second, these are highly pathological situations, and though they certainly tell us something interesting about the functioning of the brain, they are informative mostly about what happens when the brain does not function. As a crude analogy, imagine sawing a car in two, noticing that the front wheels now spin independently of the rear wheels, and concluding that the synchronous rotation of the wheels in the intact car is an “illusion.” Not a good inference, is it?

Let’s pursue this illusion thing a bit further. Sometimes people also argue that physics tells us that the way we perceive the world is also an illusion. After all, apparently solid objects like tables are made of quarks and the forces that bind them together, and since that’s the fundamental level of reality (well, unless you accept string theory) then clearly our senses are mistaken.

But our senses are not mistaken at all, they simply function at the (biologically) appropriate level of perception of reality. We are macroscopic objects and need to navigate the world as such. It would be highly inconvenient if we could somehow perceive quantum level phenomena directly, and in a very strong sense the solidity of a table is not an illusion at all. It is rather an emergent property of matter that our evolved senses exploit to allow us to sit down and have a nice meal at that table without worrying about the zillions of subnuclear interactions going on about it all the time.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Consciousness Art. Courtesy of Google search.[end-div]

What Exactly is a Person?

The recent “personhood” amendment on the ballot in Mississippi has caused many to scratch their heads and ponder the meaning of “person”. Philosophers through the ages have tackled this thorny question with detailed treatises and little consensus.

Boethius suggested that a person is "the individual substance of a rational nature." Descartes described a person as an agent, human or otherwise, possessing consciousness, and capable of creating and acting on a plan. John Locke extended this definition to include reason and reflection. Kant looked at a person as a being having a conceptualizing mind capable of purposeful thought. Charles Taylor takes this naturalistic view further, defining a person as an agent driven by matters of significance. Harry Frankfurt characterized a person as an entity enshrining free will driven by a hierarchy of desires. Still others provide their own definition of a person. Peter Singer offers self-awareness as a distinguishing trait; Thomas White suggests that a person has the following elements: is alive, is aware, feels sensations, has emotions, has a sense of self, controls its own behaviour, recognises other persons, and has various cognitive abilities.

Despite the variation in positions, all would seem to agree that a fertilized egg is certainly not a person.

[div class=attrib]A thoughtful take over at 13.7 Cosmos and Culture blog:[end-div]

According to Catholic doctrine, the Father, the Son and Holy Spirit are three distinct persons even though they are one essence. Only one of those persons — Jesus Christ — is also a human being whose life had a beginning and an end.

I am not an expert in Trinitarian theology. But I mention it here because, great mysteries aside, this Catholic doctrine uses the notion of person in what, from our point of view today, is the standard way.

John Locke called person a forensic concept. What he had in mind is that a person is one to whom credit and blame may be attached, one who is deemed responsible. The concept of a person is the concept of an agent.

Crucially, Locke argued, persons are not the same as human beings. Dr. Jekyll and Mr. Hyde may be one and the same human being, that is, one and the same continuously existing organic life; they share a birth event; but they are two distinct persons. And this is why we don't blame the one for the other's crimes. Multiple personality disorder might be a real-world example of this.

I don't know whether Locke believed that two distinct persons could actually inhabit the same living human body, but he certainly thought there was nothing contradictory in the possibility. Nor did he think there was anything incoherent in the thought that one person could find existence in multiple distinct animal lives, even if, as a matter of fact, this may not be possible. If you believe in reincarnation, then you think this is a genuine possibility. For Locke, this was no more incoherent than the idea of two actors playing the same role in a play.

Indeed, the word "person" derives from a Latin (and originally a Greek) word meaning "character in a drama" or "mask" (because actors wore masks). This usage survives today in the phrase "dramatis personae." To be a person, from this standpoint, is to play a role. The person is the role played, however, not the player.

From this standpoint, the idea of a non-human, non-living person certainly makes sense, even if we find it disturbing. Corporations are persons under current law, and this makes sense. They are actors, after all, and we credit and blame them for the things they do. They play an important role in our society.

[div class=attrib]Read the whole article here.[end-div]

[div class=attrib]Image: Abstract painting of a person, titled WI (In Memoriam), by Paul Klee (1879–1940). Courtesy of Wikipedia.[end-div]

Atheism: Scientific or Humanist

[div class=attrib]From The Stone forum, New York Times:[end-div]

Led by the biologist Richard Dawkins, the author of "The God Delusion," atheism has taken on a new life in popular religious debate. Dawkins's brand of atheism is scientific in that it views the "God hypothesis" as obviously inadequate to the known facts. In particular, he employs the facts of evolution to challenge the need to postulate God as the designer of the universe. For atheists like Dawkins, belief in God is an intellectual mistake, and honest thinkers need simply to recognize this and move on from the silliness and abuses associated with religion.

Most believers, however, do not come to religion through philosophical arguments. Rather, their belief arises from their personal experiences of a spiritual world of meaning and values, with God as its center.

In the last few years there has emerged another style of atheism that takes such experiences seriously. One of its best exponents is Philip Kitcher, a professor of philosophy at Columbia. (For a good introduction to his views, see Kitcher's essay in "The Joy of Secularism," perceptively discussed last month by James Wood in The New Yorker.)

Instead of focusing on the scientific inadequacy of theistic arguments, Kitcher critically examines the spiritual experiences underlying religious belief, particularly noting that they depend on specific and contingent social and cultural conditions. Your religious beliefs typically depend on the community in which you were raised or live. The spiritual experiences of people in ancient Greece, medieval Japan or 21st-century Saudi Arabia do not lead to belief in Christianity. It seems, therefore, that religious belief very likely tracks not truth but social conditioning. This "cultural relativism" argument is an old one, but Kitcher shows that it is still a serious challenge. (He is also refreshingly aware that he needs to show why a similar argument does not apply to his own position, since atheistic beliefs are themselves often a result of the community in which one lives.)

[div class=attrib]More of the article here.[end-div]

[div class=attrib]Image: Ephesians 2,12 – Greek atheos, courtesy of Wikipedia.[end-div]

Free Will: An Illusion?

Neuroscientists continue to find interesting experimental evidence that we do not have free will. Many philosophers continue to dispute this notion and cite inconclusive results and a lack of holistic understanding of decision-making on the part of brain scientists. An article by Kerri Smith over at Nature lays open this contentious and fascinating debate.

[div class=attrib]From Nature:[end-div]

The experiment helped to change John-Dylan Haynes's outlook on life. In 2007, Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience in Berlin, put people into a brain scanner in which a display screen flashed a succession of random letters. He told them to press a button with either their right or left index fingers whenever they felt the urge, and to remember the letter that was showing on the screen when they made the decision. The experiment used functional magnetic resonance imaging (fMRI) to reveal brain activity in real time as the volunteers chose to use their right or left hands. The results were quite a surprise.

"The first thought we had was 'we have to check if this is real'," says Haynes. "We came up with more sanity checks than I've ever seen in any other study before."

The conscious decision to push the button was made about a second before the actual act, but the team discovered that a pattern of brain activity seemed to predict that decision by as many as seven seconds. Long before the subjects were even aware of making a choice, it seems, their brains had already decided.
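
Results like this rest on multivariate pattern decoding: a classifier is trained on voxel activity recorded before the movement and scored on whether it predicts the upcoming left-or-right press better than chance. The sketch below is not Haynes's actual pipeline; the data are synthetic and the trial and voxel counts are placeholders. It is only a minimal illustration of the decoding idea, assuming Python with NumPy and scikit-learn.

```python
# Minimal sketch of choice decoding from multi-voxel patterns (illustrative only;
# the synthetic arrays below stand in for real pre-decision fMRI recordings).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

choices = rng.integers(0, 2, size=n_trials)   # 0 = left press, 1 = right press

# Mostly noise, plus a weak choice-related signal in a few voxels; this faint
# coupling is what an above-chance decoder would pick up before the act.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[:, :5] += choices[:, None] * 0.8

decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, patterns, choices, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

An accuracy reliably above 0.5 on held-out trials is the sense in which a pattern "predicts" the decision; it says nothing by itself about whether the conscious decision plays a causal role, which is exactly where the philosophical dispute begins.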

As humans, we like to think that our decisions are under our conscious control — that we have free will. Philosophers have debated that concept for centuries, and now Haynes and other experimental neuroscientists are raising a new challenge. They argue that consciousness of a decision may be a mere biochemical afterthought, with no influence whatsoever on a person's actions. According to this logic, they say, free will is an illusion. "We feel we choose, but we don't," says Patrick Haggard, a neuroscientist at University College London.

You may have thought you decided whether to have tea or coffee this morning, for example, but the decision may have been made long before you were aware of it. For Haynes, this is unsettling. "I'll be very honest, I find it very difficult to deal with this," he says. "How can I call a will 'mine' if I don't even know when it occurred and what it has decided to do?"

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Nature.[end-div]

The Science Behind Disgust

[div class=attrib]From Salon:[end-div]

We all have things that disgust us irrationally, whether it be cockroaches or chitterlings or cotton balls. For me, it's fruit soda. It started when I was 3; my mom offered me a can of Sunkist after inner ear surgery. Still woozy from the anesthesia, I gulped it down, and by the time we made it to the cashier, all of it managed to come back up. Although it is nearly 30 years later, just the smell of this "fun, sun and the beach" drink is enough to turn my stomach.

But what, exactly, happens when we feel disgust? As Daniel Kelly, an assistant professor of philosophy at Purdue University, explains in his new book, "Yuck!: The Nature and Moral Significance of Disgust," it's not just a physical sensation; it's a powerful emotional warning sign. Although disgust initially helped keep us away from rotting food and contagious disease, the defense mechanism changed over time to shape the distance we keep from one another. When allowed to play a role in the creation of social policy, Kelly argues, disgust might actually cause more harm than good.

Salon spoke with Kelly about the science behind disgust, why we're captivated by things we find revolting, and how it can be a very dangerous thing.

What exactly is disgust?

Simply speaking, disgust is the response we have to things we find repulsive. Some of the things that trigger disgust are innate, like the smell of sewage on a hot summer day. No one has to teach you to feel disgusted by garbage, you just are. Other things that are automatically disgusting are rotting food and visible cues of infection or illness. We have this base layer of core disgusting things, and a lot of them don't seem like they're learned.

[div class=attrib]More from theSource here.[end-div]

Scientific Evidence for Indeterminism

[div class=attrib]From Evolutionary Philosophy:[end-div]

The advantage of being a materialist is that so much of our experience seems to point to a material basis for reality. Idealists usually have to appeal to some inner knowing as the justification of their faith that mind, not matter, is the foundation of reality. Unfortunately, the appeal to inner knowing is exactly what a materialist has trouble with in the first place.

Charles Sanders Peirce was a logician and a scientist first and a philosopher second. He thought like a scientist, and as he developed his evolutionary philosophy his reasons for believing in it were very logical and scientific. One of the early insights that led him to his understanding of an evolving universe was his realization that the state of our world or its future was not necessarily predetermined.

One conclusion that materialism tends to lead to is a belief that 'nothing comes from nothing.' Everything comes from some form of matter or interaction between material things. Nothing just emerges spontaneously. Everything is part of an ongoing chain of cause and effect. The question of how the chain of cause and effect started is one that is generally felt best left to the realm of metaphysics, unsuitable for scientific investigation.

And so the image of a materially based universe tends to lead to a deterministic account of reality. You start with something and then that something unravels according to immutable laws. As an image, picture this: a large bucket filled with pink and green tennis balls, and two smaller buckets that are empty. This arrangement represents the starting point of the universe. The natural laws of this universe dictate that individual tennis balls will be removed from the large bucket and placed in one of the two smaller ones. If the ball that is removed is pink, it goes in the left-hand bucket; if it is green, it goes in the right-hand bucket. In this simple model the end state of the universe is going to be that the large bucket will be empty, the left-hand bucket will be filled with pink tennis balls, and the right-hand bucket will be filled with green tennis balls. The outcome of the process is predetermined by the initial conditions and the laws governing the subsequent activity.
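
Because the bucket model is fully mechanical, it can be simulated in a few lines. The following is just a sketch of the toy model described above (the ball counts are arbitrary and the code is ours, not the article's): however the draw order varies, the end state is always the same, fixed entirely by the initial contents and the sorting law.

```python
# Toy simulation of the two-bucket model: balls are drawn from the large bucket
# and sorted by color, so the final state is fixed by the initial contents and
# the sorting law, no matter what order the draws happen in.
import random

large_bucket = ["pink"] * 10 + ["green"] * 10
random.shuffle(large_bucket)              # the draw order can vary freely...

left_bucket, right_bucket = [], []
while large_bucket:
    ball = large_bucket.pop()
    if ball == "pink":
        left_bucket.append(ball)          # law: pink goes in the left-hand bucket
    else:
        right_bucket.append(ball)         # law: green goes in the right-hand bucket

# ...but the outcome never does: all pink on the left, all green on the right.
print(len(left_bucket), "pink on the left;", len(right_bucket), "green on the right")
```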

A belief in this kind of determinism seems to be constantly reinforced for us through our ongoing experience with the material universe.  Go ahead: pick up a rock, hold it up, and then let it go. It will fall. Every single time it will fall. It is predetermined that a rock that is held up in the air and then dropped will fall. Punch a wall. It will hurt – every single time.  Over and over again our experience of everyday reality seems to reinforce the fact that we live in a universe which is exactly governed by immutable laws.

[div class=attrib]More from theSource here.[end-div]

Susan Wolf and Meaningfulness

[div class=attrib]From PEA Soup:[end-div]

A lot of interesting work has been done recently on what makes lives meaningful. One brilliant example of this is Susan Wolf's recent wonderful book Meaning in Life and Why It Matters. It consists of two short lectures, critical commentaries by John Koethe, Robert M. Adams, Nomy Arpaly, and Jonathan Haidt, and responses by Wolf herself. What I want to do here is to quickly introduce Wolf's 'Fitting Fulfillment' View, and then I'll raise a potential objection to it.

According to Wolf, all meaningful lives have both a 'subjective' and an 'objective' element to them. These elements can make lives meaningful only together. Wolf's view of the subjective side is highly complex. The starting point is the idea that an agent's projects and activities ultimately make her life meaningful. However, this happens only when the projects and activities satisfy two conditions on the subjective side and one on the objective side.

Firstly, in order for one's projects and activities to make one's life meaningful, one must be at least somewhat successful in carrying them out. This does not mean that one must fully complete one's projects and excel in the activities but, other things being equal, the more successful one is in one's projects and activities, the more they can contribute to the meaningfulness of one's life.

Secondly, one must have a special relation to one's projects and activities. This special relation has several overlapping features, which seem to fall under two main aspects. I'll call one of them the 'loving relation'. Thus, Wolf often seems to claim that one must love the relevant projects and activities, experience subjective attraction towards them, and be gripped and excited by them. This seems to imply that one must be passionate about the relevant projects and activities. It also seems to entail that our willingness to pursue the relevant projects must be diachronically stable (and even constitute 'volitional necessities').

The second aspect could be called the 'fulfilment side'. This means that, when one is successfully engaged in one's projects and activities, one must experience some positive sensations – fulfilment, satisfaction, feeling good and happy and the like. Wolf is careful to emphasise that there need not be a single felt quality present in all cases. Rather, there is a range of positive experiences, some of which need to be present in each case.

Finally, on the objective side, one's projects and activities must be objectively worthwhile. One way to think about this is to start from the idea that one can be more or less successful in the relevant projects and activities. This seems to entail that the relevant projects and activities are difficult to complete and master in the beginning. As a result, one can become better at them through practice.

The objective element of Wolf's view requires that some objective values are promoted either during this process or as a consequence of completion. There are some basic reasons to take part in the activities and to try to succeed in the relevant projects. These reasons are neither purely prudential nor necessarily universal moral reasons. Wolf is a pluralist about which projects and activities are objectively worthwhile (she takes no substantial stand in order to avoid any criticism of elitism). She also emphasises that saying all of this is fairly neutral metaethically.

[div class=attrib]More from theSource here.[end-div]