France: return to Babel

[div class=attrib]From Eurozine:[end-div]

Each nation establishes its borders, sometimes defines itself, certainly organises itself, and always affirms itself around its language, says Marc Hatzfeld. The language is then guarded by men of letters and by strict rules that allow little variety of expression. Against this backdrop, immigrants from ever more distant shores have arrived in France, bringing with them a different style of expression and another, more fluid, concept of language.

Today more than ever, the language issue, which might at one time have segued gracefully between pleasure in sense and sensual pleasure, is being seized on and exploited for political ends. Much of this we can put down to the concept of the nation-state, that symbolic and once radical item that was assigned the task of consolidating the fragmented political power of the time. During the long centuries from the end of the Middle Ages to the close of the Ancien Régime, this triumphant political logic sought to bind together nation, language and religion. East of the Rhine, for instance, this was particularly true of the links between nation and religion; west of the Rhine, it focused more on language. From Villers-Cotterêts[1] on, language – operating almost coercively – served as an instrument of political unification. The periodic alternation between an imperial style that was both permissive and varied when it came to customary practice, and the homogeneous and monolithic style adopted on the national front, led to constant comings and goings in the relationship between language and political power.

In France, the revocation of the Edict of Nantes by Louis XIV in 1685 resolved the relationship between nation and religion and gave language a more prominent role in defining nationality. Not long after, the language itself – by now regarded as public property – became a ward of state entitled to public protection. Taking things one step further, the eighteenth-century philosophers of the Enlightenment conceived the idea of a coherent body of subject people and skilfully exploited this to clip the wings of a fabled absolute monarch in the name of another, equally mythical, form of sovereignty. All that remained was to organise the country institutionally. Henceforth, the idea that the allied forces of people, nation and language together made up the same collective history was pursued with zeal.

What we see as a result is this curious emergence of language itself as a concept. Making use of a fiction that reached down from a great height to penetrate a cultural reality that was infinitely more subtle and flexible, each nation establishes its borders, sometimes defines itself, certainly organises itself, and always affirms itself around its language. While we in Europe enjoy as many ways of speaking as there are localities and occupations, there are administrative and symbolic demands to fabricate the fantasy of a language that clerics and men of letters would appropriate to themselves. It is they who, in the wake of the politicians, help to eliminate the variety of ways people have of expressing themselves and of understanding one another. Some scholars, falling into what they fail to see is a highly politicised trap, complete this process by coming up with a scientific construct heavily dependent on the influence of mathematical theories such as those of de Saussure and, above all, of Jakobson. Paradoxically, this body of work relies on a highly malleable, mobile, elastic reality to develop the tight, highly structured concept that is “language” (Jacques Lacan). And from that point, language itself becomes a prisoner of Lacan’s own system – linguistics.
[div class=attrib]More from theSource here.[end-div]

The Great Cosmic Roller-Coaster Ride

[div class=attrib]From Scientific American:[end-div]

Could cosmic inflation be a sign that our universe is embedded in a far vaster realm?

You might not think that cosmologists could feel claustrophobic in a universe that is 46 billion light-years in radius and filled with sextillions of stars. But one of the emerging themes of 21st-century cosmology is that the known universe, the sum of all we can see, may just be a tiny region in the full extent of space. Various types of parallel universes that make up a grand “multiverse” often arise as side effects of cosmological theories. We have little hope of ever directly observing those other universes, though, because they are either too far away or somehow detached from our own universe.

Some parallel universes, however, could be separate from but still able to interact with ours, in which case we could detect their direct effects. The possibility of these worlds came to cosmologists’ attention by way of string theory, the leading candidate for the foundational laws of nature. Although the eponymous strings of string theory are extremely small, the principles governing their properties also predict new kinds of larger membranelike objects—“branes,” for short. In particular, our universe may be a three-dimensional brane in its own right, living inside a nine-dimensional space. The reshaping of higher-dimensional space and collisions between different universes may have led to some of the features that astronomers observe today.

[div class=attrib]More from theSource here.[end-div]

Windows on the Mind

[div class=attrib]From Scientific American:[end-div]

Once scorned as nervous tics, certain tiny, unconscious flicks of the eyes now turn out to underpin much of our ability to see. These movements may even reveal subliminal thoughts.

As you read this, your eyes are rapidly flicking from left to right in small hops, bringing each word sequentially into focus. When you stare at a person’s face, your eyes will similarly dart here and there, resting momentarily on one eye, the other eye, nose, mouth and other features. With a little introspection, you can detect this frequent flexing of your eye muscles as you scan a page, face or scene.

But these large voluntary eye movements, called saccades, turn out to be just a small part of the daily workout your eye muscles get. Your eyes never stop moving, even when they are apparently settled, say, on a person’s nose or a sailboat bobbing on the horizon. When the eyes fixate on something, as they do for 80 percent of your waking hours, they still jump and jiggle imperceptibly in ways that turn out to be essential for seeing. If you could somehow halt these miniature motions while fixing your gaze, a static scene would simply fade from view.

[div class=attrib]More from theSource here.[end-div]

On the mystery of human consciousness

[div class=attrib]From Eurozine:[end-div]

Philosophers and natural scientists regularly dismiss consciousness as irrelevant. However, even its critics agree that consciousness is less a problem than a mystery. One way into the mystery is through an understanding of autism.

It started with a letter from Michaela Martinková:

Our eldest son, aged almost eight, has Asperger’s Syndrome (AS). It is a diagnosis that falls into the autistic spectrum, but his IQ is very much above average. In an effort to find out how he thinks, I decided that I must find out how we think, and so I read up on the cognitive sciences and epistemology. I found what I needed there, although I have an intense feeling that precisely the way of thinking of such people as our son is missing from the mosaic of these sciences. And I think that this missing piece could rearrange the whole mosaic.

In the book Philosophy and the Cognitive Sciences, you write, among other things: “Actually the only handicap so far observed in these children (with autism and AS) is that they cannot use human psychology. They cannot postulate intentional states in their own minds and in the minds of other people.” I think that deeper knowledge of autism, and especially of Asperger’s Syndrome as its version found in people with higher IQ in the framework of autism, could be immensely enriching for the cognitive sciences. I am convinced that these people think in an entirely different way from us.

Why the present interest in autism? It is generally known that some people whose diagnosis falls within the autistic spectrum, namely people with Asperger’s Syndrome and high-functioning autism, show a remarkable combination of highly above-average intelligence and well below-average social ability. The causes of this peculiarity, although far from being sufficiently clarified, are usually explained by reduced ability in the areas of verbal communication and empathy, which form the basis of social intelligence.

And why consciousness? Many people think today that, if we are to better understand ourselves and our relationships to the world and other people, the last problem we must solve is consciousness. Many others think that if we understand the brain, its structure, and its functioning, consciousness will cease to be a problem. The more critical supporters of both views agree on one thing: consciousness is not a problem, it is more a mystery. If a problem is something about which we can formulate a question to which it is possible to seek a reasonable answer, then consciousness is a mystery, because it is still not possible to formulate a question about it that could be answered in a way verifiable or refutable by the normal methods of science. Perhaps the psychologist Daniel M. Wegner best captured the present state of knowledge with the statement: “All human experience states that we consciously control our actions, but all theories are against this.” In spite of all the unclarity and disputes about what consciousness is and how it works, the view has begun to prevail in recent years that language and consciousness are the link that makes a group of individuals into a community.

[div class=attrib]More from theSource here.[end-div]

Suprealist art, suprealist life

[div class=attrib]From Eurozine:[end-div]

Suprealism is a “movement” pioneered by Leonhard Lapin that combines suprematism and realism; it mirrors the “suprealist world”, where art is packaged for consumer culture.

In 1993, when I started the suprealist phase of my work, which was followed by the “Suprealist manifesto” and the exhibition at Vaal gallery in Tallinn, a prominent art critic proclaimed that it represented the “hara-kiri of the old avant-garde”. A decade has passed, and the “old avant-gardist” and his suprealism are still alive and kicking, while, as if following my prophecy, life and its cultural representations have become more and more suprealist.

The term “suprealism” emerged quite naturally: its first half originates from the “suprematism” of the early twentieth-century Russian avant-garde, which claimed to represent the highest form of being, abandoning Earth and conquering space. The other half relates to the familiar, dogmatically imposed “realism”, the only officially tolerated style under communist rule. Initially, I attempted to bring together in the concept the structures of high art and images from mass culture. The domain that attracted the most attention was, of course, pornography. During my 1996 exhibition at the Latvian Museum of Foreign Art in Riga, the room containing 30 of my “pornographical works” was closed. There were similar incidents in Bristol, where some of my pieces were censored, not to mention the angry reactions in Estonia. It is remarkable that it is art that highlights what is otherwise hypocritically hidden behind cellophane in news kiosks. But nobody is dismantling the kiosks – the rage is directed at an artist’s exhibition.

An important event in the history of suprealism happened in 2001, when the Estonian Art Museum held an exhibition on the anniversary of the nineteenth-century Estonian academic painter Johan Köler. The exhibition was advertised with posters representing Köler’s sugary painting “A maid at a well”, sometimes ten times the size of the original. Since, under Soviet rule, Köler was officially cast as a predecessor of socialist realism, our generation has a complex and ambiguous relationship with this master. When the 2001 exhibition repeated the old stereotypical clichés about the artist, I expressed my disappointment by relating the exhibition posters to modern commercial packaging, advertisements, and catalogues. It was the starting point of the series “Suprealist artists”, which I am still continuing, using cheap reproductions of classical and modern art and packages, puzzles, flyers, ads, and so on, belonging to the contemporary consumer world. I use them to make new visual structures for the new century.

The “rape of art” as an advertising method is becoming more and more visible: many famous twentieth-century modernists are used in some way in advertising, which brings the images of Dali, Magritte, or Picasso to the consuming masses.

[div class=attrib]More from theSource here.[end-div]

The Memory Code

[div class=attrib]From Scientific American:[end-div]

Researchers are closing in on the rules that the brain uses to lay down memories. Discovery of this memory code could lead to the design of smarter computers and robots and even to new ways to peer into the human mind.

INTRODUCTION
Anyone who has ever been in an earthquake has vivid memories of it: the ground shakes, trembles, buckles and heaves; the air fills with sounds of rumbling, cracking and shattering glass; cabinets fly open; books, dishes and knickknacks tumble from shelves. We remember such episodes–with striking clarity and for years afterward–because that is what our brains evolved to do: extract information from salient events and use that knowledge to guide our responses to similar situations in the future. This ability to learn from past experience allows all animals to adapt to a world that is complex and ever changing.

For decades, neuroscientists have attempted to unravel how the brain makes memories. Now, by combining a set of novel experiments with powerful mathematical analyses and an ability to record simultaneously the activity of more than 200 neurons in awake mice, my colleagues and I have discovered what we believe is the basic mechanism the brain uses to draw vital information from experiences and turn that information into memories. Our results add to a growing body of work indicating that a linear flow of signals from one neuron to another is not enough to explain how the brain represents perceptions and memories. Rather, the coordinated activity of large populations of neurons is needed.

Furthermore, our studies indicate that neuronal populations involved in encoding memories also extract the kind of generalized concepts that allow us to transform our daily experiences into knowledge and ideas. Our findings bring biologists closer to deciphering the universal neural code: the rules the brain follows to convert collections of electrical impulses into perception, memory, knowledge and, ultimately, behavior. Such understanding could allow investigators to develop more seamless brain-machine interfaces, design a whole new generation of smart computers and robots, and perhaps even assemble a codebook of the mind that would make it possible to decipher–by monitoring neural activity–what someone remembers and thinks.

HISTORICAL PERSPECTIVE
My group’s research into the brain code grew out of work focused on the molecular basis of learning and memory. In the fall of 1999 we generated a strain of mice engineered to have improved memory. This “smart” mouse–nicknamed Doogie after the brainy young doctor in the early-1990s TV dramedy Doogie Howser, M.D.–learns faster and remembers things longer than wild-type mice. The work generated great interest and debate and even made the cover of Time magazine. But our findings left me asking, What exactly is a memory?

Scientists knew that converting perceptual experiences into long-lasting memories requires a brain region called the hippocampus. And we even knew what molecules are critical to the process, such as the NMDA receptor, which we altered to produce Doogie. But no one knew how, exactly, the activation of nerve cells in the brain represents memory. A few years ago I began to wonder if we could find a way to describe mathematically or physiologically what memory is. Could we identify the relevant neural network dynamic and visualize the activity pattern that occurs when a memory is formed?

For the better part of a century, neuroscientists had been attempting to discover which patterns of nerve cell activity represent information in the brain and how neural circuits process, modify and store information needed to control and shape behavior. Their earliest efforts involved simply trying to correlate neural activity–the frequency at which nerve cells fire–with some sort of measurable physiological or behavioral response. For example, in the mid-1920s Edgar Adrian performed electrical recordings on frog tissue and found that the firing rate of individual stretch nerves attached to a muscle varies with the amount of weight that is put on the muscle. This study was the first to suggest that information (in this case the intensity of a stimulus) can be conveyed by changes in neural activity–work for which he later won a Nobel Prize.
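The principle Adrian uncovered, rate coding, can be sketched in a few lines: stimulus intensity is conveyed by firing rate, which rises with the load and eventually saturates. This is only an illustrative toy model; the saturating curve and its parameters (`max_rate_hz`, `half_load`) are assumptions for demonstration, not Adrian's data.

```python
# Toy sketch of rate coding in a stretch receptor: heavier loads on
# the muscle drive higher firing rates, with a saturating response.
# All numbers here are illustrative assumptions.

def firing_rate(weight_grams: float, max_rate_hz: float = 100.0,
                half_load: float = 50.0) -> float:
    """Map load on the muscle to spikes per second.

    A simple saturating curve: the rate approaches max_rate_hz as
    the load grows, reaching half of it at half_load grams.
    """
    return max_rate_hz * weight_grams / (weight_grams + half_load)

# Heavier loads produce higher firing rates, so the rate itself
# carries the stimulus intensity:
rates = [firing_rate(w) for w in (10, 50, 200)]
```

The point of the sketch is only that a single scalar (spikes per second) can carry a single stimulus dimension, which is why later work had to move beyond single-neuron firing rates to population activity.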

Since then, many researchers using a single electrode to monitor the activity of one neuron at a time have shown that, when stimulated, neurons in different areas of the brain also change their firing rates. For example, pioneering experiments by David H. Hubel and Torsten N. Wiesel demonstrated that the neurons in the primary visual cortex of cats, an area at the back of the brain, respond vigorously to the moving edges of a bar of light. Charles G. Gross of Princeton University and Robert Desimone of the Massachusetts Institute of Technology found that neurons in a different brain region of the monkey (the inferotemporal cortex) can alter their behavior in response to more complex stimuli, such as pictures of faces.

[div class=attrib]More from theSource here.[end-div]

A Simpler Origin for Life

[div class=attrib]From Scientific American:[end-div]

Extraordinary discoveries inspire extraordinary claims. Thus, James Watson reported that immediately after he and Francis Crick uncovered the structure of DNA, Crick “winged into the Eagle (pub) to tell everyone within hearing that we had discovered the secret of life.” Their structure–an elegant double helix–almost merited such enthusiasm. Its proportions permitted information storage in a language in which four chemicals, called bases, played the same role as 26 letters do in the English language.

Further, the information was stored in two long chains, each of which specified the contents of its partner. This arrangement suggested a mechanism for reproduction: The two strands of the DNA double helix parted company, and new DNA building blocks that carry the bases, called nucleotides, lined up along the separated strands and linked up. Two double helices now existed in place of one, each a replica of the original.
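The replication mechanism described above can be made concrete with a toy sketch: each old strand acts as a template whose bases specify the partner strand (A pairs with T, G with C), so splitting one helix and rebuilding a complement along each strand yields two replicas. The function names and four-letter sequences are illustrative, not from the article.

```python
# Toy sketch of semiconservative DNA replication: each strand of the
# double helix specifies its partner via base pairing (A-T, G-C).

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the partner strand specified by base pairing."""
    return "".join(PAIR[base] for base in strand)

def replicate(double_helix):
    """Part the two strands and line up new nucleotides along each.

    Returns two double helices in place of one, each pairing an old
    strand with a newly built complement - a replica of the original.
    """
    strand_a, strand_b = double_helix
    return [(strand_a, complement(strand_a)),
            (complement(strand_b), strand_b)]

original = ("ATGC", "TACG")
copies = replicate(original)
```

Because each new helix keeps one old strand, the information survives copying even though half of every molecule is freshly made.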

[div class=attrib]More from theSource here.[end-div]

The Mystery of Methane on Mars and Titan

[div class=attrib]From Scientific American:[end-div]

It might mean life, it might mean unusual geologic activity; whichever it is, the presence of methane in the atmospheres of Mars and Titan is one of the most tantalizing puzzles in our solar system.

Of all the planets in the solar system other than Earth, Mars has arguably the greatest potential for life, either extinct or extant. It resembles Earth in so many ways: its formation process, its early climate history, its reservoirs of water, its volcanoes and other geologic processes. Microorganisms would fit right in. Another planetary body, Saturn’s largest moon Titan, also routinely comes up in discussions of extraterrestrial biology. In its primordial past, Titan possessed conditions conducive to the formation of molecular precursors of life, and some scientists believe it may have been alive then and might even be alive now.

To add intrigue to these possibilities, astronomers studying both these worlds have detected a gas that is often associated with living things: methane. It exists in small but significant quantities on Mars, and Titan is literally awash with it. A biological source is at least as plausible as a geologic one, for Mars if not for Titan. Either explanation would be fascinating in its own way, revealing either that we are not alone in the universe or that both Mars and Titan harbor large underground bodies of water together with unexpected levels of geochemical activity. Understanding the origin and fate of methane on these bodies will provide crucial clues to the processes that shape the formation, evolution and habitability of terrestrial worlds in this solar system and possibly in others.

[div class=attrib]More from theSource here.[end-div]

Can we say what we want?

[div class=attrib]From Eurozine:[end-div]

The French satirical paper Charlie Hebdo has just been acquitted of publicly insulting Muslims by reprinting the notorious Danish cartoons featuring the Prophet. Influential Islamic groups had sued it for inciting hatred. Is free speech really in danger worldwide?

The understanding and practices of freedom of expression are being challenged in the twenty-first century. Some of the controversies of the past year or so that have drawn worldwide attention have included the row over Danish cartoons seen as anti-Muslim, the imprisonment of a British historian in Austria for Holocaust denial, and disputes over a French law forbidding denial of the Armenian genocide.

These debates are not new: the suppression of competing views and dissent, and of anything deemed immoral, heretical, or offensive, has dominated social, religious, and political history. They have returned to the fore in response to the stimuli of the communication revolution and of the events of 9/11. The global reach of most of our messages, including the culturally and politically specific, has rendered all expressions and their controls a prize worth fighting for, even to the death. Does this imply that stronger restrictions on freedom of expression should be established?

Freedom of expression, including the right to access to information, is a fundamental human right, central to achieving individual freedoms and real democracy. It increases the knowledge base and participation within a society and can also secure external checks on state accountability.

Yet freedom of expression is not absolute. The extent to which expression ought to be protected or censored has been the object of many impassioned debates. Few argue that freedom of expression is absolute and suffers no limits. But the line between what is permissible and what is not is always contested. Unlike many others, this right depends on its context and its definition is mostly left to the discretion of states.

Under international human rights standards, the right to freedom of expression may be restricted in order to protect the rights or reputation of others and national security, public order, or public health or morals, and provided it is necessary in a democratic society to do so and it is done by law. This formulation is found in both the International Covenant on Civil and Political Rights under article 19, and in the European Convention on Human Rights.

[div class=attrib]More from theSource here.[end-div]

The Movies in Our Eyes

[div class=attrib]From Scientific American:[end-div]

The retina processes much more information than anyone has ever imagined, sending a dozen different movies to the brain.

We take our astonishing visual capabilities so much for granted that few of us ever stop to consider how we actually see. For decades, scientists have likened our visual-processing machinery to a television camera: the eye’s lens focuses incoming light onto an array of photoreceptors in the retina. These light detectors magically convert those photons into electrical signals that are sent along the optic nerve to the brain for processing. But recent experiments by the two of us and others indicate that this analogy is inadequate. The retina actually performs a significant amount of preprocessing right inside the eye and then sends a series of partial representations to the brain for interpretation.

We came to this surprising conclusion after investigating the retinas of rabbits, which are remarkably similar to those in humans. (Our work with salamanders has led to similar results.) The retina, it appears, is a tiny crescent of brain matter that has been brought out to the periphery to gain more direct access to the world. How does the retina construct the representations it sends? What do they “look” like when they reach the brain’s visual centers? How do they convey the vast richness of the real world? Do they impart meaning, helping the brain to analyze a scene? These are just some of the compelling questions the work has begun to answer.

[div class=attrib]More from theSource here.[end-div]

The concept of God – and why we don’t need it

[div class=attrib]From Eurozine:[end-div]

In these newly religious times, it no longer seems superfluous to rearm the atheists with arguments. When push comes to shove, atheists can only trust their reason, writes Burkhard Müller.

Some years ago I wrote a book entitled Drawing a Line – A Critique of Christianity [Schlußstrich – Kritik des Christentums], which argued that Christianity was false: not only in terms of its historical record, but fundamentally, as a very concept. I undertook to uncover this falsity as a contradiction in terms. While I do not wish to retract any of what I said at the time, I would now go beyond what I argued then in two respects.

For one thing, I no longer wish to adopt the same aggressive tone. The book was written at the beginning of the 1990s, when I was still living in Würzburg (in Bavaria), a bastion of Roman Catholicism. It is a prosperous city, powerful and conscious of the fact, which made it more than capable of provoking my ire; whereas for thirteen years now I have been living in the new East of Germany, where roughly eighty per cent of the population no longer recognize Christianity even as a rumour, where it appears as the exception, not the rule, and where one has the opportunity to reflect on the truth of the claim “this is as good as it gets”.

The second point is this: it seems to me that institutionalized, dogmatic Christianity, as expressed in the words of the Holy Scriptures and – more succinctly still – in the Credo, is losing ground. This is not only at the expense of a stupid and potentially violent strain of fundamentalism, as manifested in Islam and the American religious Right, but in Europe mostly at the expense of an often rather intellectually woolly and mawkish eclecticism. I will not be dealing here with any theological system in its doctrinal sense. I want rather to sound out the religious impulse, even – and especially – in its more diffuse form, and to get to its root. That is to say, to enquire of the concept of God whether in practice it accomplishes what is expected of it.

For people do not believe in God because they have been shown the proof of his existence. All such proofs presented by philosophers and theologians through the millennia have, by their very nature, the regrettable flaw that a proof can only refer to the circumstances of existing things, whereas God, as the predecessor of all circumstances, comes before, so to speak, and outside the realm of the demonstrable. These proofs, then, all have the character of something tacked on, giving the impression of a thin veneer on a very hefty block of wood. Belief in God, where it does not merely arise out of an unquestioned tradition, demands a spontaneous act on the part of the believer which the believers themselves will tend to describe as an act of faith, their opponents as a purely arbitrary decision; one, nevertheless, that always stems from a need of some kind. People believe in God because along with this belief goes an expectation that a particular wish will be fulfilled for them, a particular problem solved. What kinds of need are these, and how can God meet them?

[div class=attrib]More from theSource here.[end-div]

A Digital Life

[div class=attrib]From Scientific American:[end-div]

New systems may allow people to record everything they see and hear–and even things they cannot sense–and to store all these data in a personal digital archive.

Human memory can be maddeningly elusive. We stumble upon its limitations every day, when we forget a friend’s telephone number, the name of a business contact or the title of a favorite book. People have developed a variety of strategies for combating forgetfulness–messages scribbled on Post-it notes, for example, or electronic address books carried in handheld devices–but important information continues to slip through the cracks. Recently, however, our team at Microsoft Research has begun a quest to digitally chronicle every aspect of a person’s life, starting with one of our own lives (Bell’s). For the past six years, we have attempted to record all of Bell’s communications with other people and machines, as well as the images he sees, the sounds he hears and the Web sites he visits–storing everything in a personal digital archive that is both searchable and secure.

Digital memories can do more than simply assist the recollection of past events, conversations and projects. Portable sensors can take readings of things that are not even perceived by humans, such as oxygen levels in the blood or the amount of carbon dioxide in the air. Computers can then scan these data to identify patterns: for instance, they might determine which environmental conditions worsen a child’s asthma. Sensors can also log the three billion or so heartbeats in a person’s lifetime, along with other physiological indicators, and warn of a possible heart attack. This information would allow doctors to spot irregularities early, providing warnings before an illness becomes serious. Your physician would have access to a detailed, ongoing health record, and you would no longer have to rack your brain to answer questions such as “When did you first feel this way?”

[div class=attrib]More from theSource here.[end-div]

The Universe’s Invisible Hand

[div class=attrib]From Scientific American:[end-div]

Dark energy does more than hurry along the expansion of the universe. It also has a stranglehold on the shape and spacing of galaxies.

What took us so long? Only in 1998 did astronomers discover we had been missing nearly three quarters of the contents of the universe, the so-called dark energy–an unknown form of energy that surrounds each of us, tugging at us ever so slightly, holding the fate of the cosmos in its grip, but to which we are almost totally blind. Some researchers, to be sure, had anticipated that such energy existed, but even they will tell you that its detection ranks among the most revolutionary discoveries in 20th-century cosmology. Not only does dark energy appear to make up the bulk of the universe, but its existence, if it stands the test of time, will probably require the development of new theories of physics.

Scientists are just starting the long process of figuring out what dark energy is and what its implications are. One realization has already sunk in: although dark energy betrayed its existence through its effect on the universe as a whole, it may also shape the evolution of the universe’s inhabitants–stars, galaxies, galaxy clusters. Astronomers may have been staring at its handiwork for decades without realizing it.

[div class=attrib]More from theSource here.[end-div]

Evolved for Cancer?

[div class=attrib]From Scientific American:[end-div]

Natural selection lacks the power to erase cancer from our species and, some scientists argue, may even have provided tools that help tumors grow.

Natural selection is not natural perfection. Living creatures have evolved some remarkably complex adaptations, but we are still very vulnerable to disease. Among the most tragic of those ills–and perhaps most enigmatic–is cancer. A cancerous tumor is exquisitely well adapted for survival in its own grotesque way. Its cells continue to divide long after ordinary cells would stop. They destroy surrounding tissues to make room for themselves, and they trick the body into supplying them with energy to grow even larger. But the tumors that afflict us are not foreign parasites that have acquired sophisticated strategies for attacking our bodies. They are made of our own cells, turned against us. Nor is cancer some bizarre rarity: a woman in the U.S. has a 39 percent chance of being diagnosed with some type of cancer in her lifetime. A man has a 45 percent chance.

These facts make cancer a grim yet fascinating puzzle for evolutionary biologists. If natural selection is powerful enough to produce complex adaptations, from the eye to the immune system, why has it been unable to wipe out cancer? The answer, these investigators argue, lies in the evolutionary process itself. Natural selection has favored certain defenses against cancer but cannot eliminate it altogether. Ironically, natural selection may even inadvertently provide some of the tools that cancer cells can use to grow.

[div class=attrib]More from theSource here.[end-div]