Tag Archives: knowledge

Practice May Make You Perfect, But Not Creative

Practice will help you improve in a field with well-defined and well-developed tasks, processes and rules. This includes areas like sports and musicianship. Keep in mind, though, that it may take some accident of genetics to be really good at one of these disciplines in the first place.

But don’t expect practice to make you better in all areas of life, particularly in creative endeavors. Creativity stems from original thought, not replicable behavior. Scott Kaufman, director of the Imagination Institute at the University of Pennsylvania, reminds us of this in a recent book review. The authors of Peak: Secrets from the New Science of Expertise, psychologist Anders Ericsson and journalist Robert Pool, review a swath of research on human learning and skill acquisition and conclude that deliberate, well-structured practice can help anyone master new skills. I think we can all agree with this conclusion.

But, like Kaufman, I believe that many creative “skills” lie in an area of human endeavor that is firmly beyond the assistance of practice. Practice will certainly help an artist hone and improve her brushstrokes, but practice alone will not bring forth her masterpiece. So, here is a brief summary of 12 key elements that Kaufman distilled from over 50 years of research into creativity:

Excerpts from Creativity Is Much More Than 10,000 Hours of Deliberate Practice by Scott Kaufman:

  1. Creativity is often blind. If only creativity was all about deliberate practice… in reality, it’s impossible for creators to know completely whether their new idea or product will be well received.
  2. Creative people often have messy processes. While expertise is characterized by consistency and reliability, creativity is characterized by many false starts and lots and lots of trial-and-error.
  3. Creators rarely receive helpful feedback. When creators put something novel out into the world, the reactions are typically either acclaim or rejection.
  4. The “10-Year Rule” is not a rule. The idea that it takes 10 years to become a world-class expert in any domain is not a rule. [This is the so-called Ericsson rule from his original paper on deliberate practice amongst musicians.]
  5. Talent is relevant to creative accomplishment. If we define talent as simply the rate at which a person acquires expertise, then talent undeniably matters for creativity.
  6. Personality is relevant. Not only does the speed of expertise acquisition matter, but so do a whole host of other traits. People differ from one another in a multitude of ways… At the very least, research has shown that creative people do tend to have a greater inclination toward nonconformity, unconventionality, independence, openness to experience, ego strength, risk taking, and even mild forms of psychopathology.
  7. Genes are relevant. [M]odern behavioral genetics has discovered that virtually every single psychological trait — including the inclination and willingness to practice — is influenced by innate genetic endowment.
  8. Environmental experiences also matter. [R]esearchers have found that many other environmental experiences substantially affect creativity, including socioeconomic origins and the sociocultural, political, and economic context in which one is raised.
  9. Creative people have broad interests. While the deliberate practice approach tends to focus on highly specialized training… creative experts tend to have broader interests and greater versatility compared to their less creative expert colleagues.
  10. Too much expertise can be detrimental to creative greatness. The deliberate practice approach assumes that performance is a linear function of practice. Some knowledge is good, but too much knowledge can impair flexibility.
  11. Outsiders often have a creative advantage. If creativity were all about deliberate practice, then outsiders who lack the requisite expertise shouldn’t be very creative. But many highly innovative individuals were outsiders to the field in which they contributed. Many marginalized people throughout history — including immigrants — came up with highly creative ideas not in spite of their experiences as an outsider, but because of their experiences as an outsider.
  12. Sometimes the creator needs to create a new path for others to deliberately practice. Creative people are not just good at solving problems, however. They are also good at finding problems.

In my view, the most salient of Kaufman’s dozen ingredients for creativity are #11 and #12, and I can personally attest to their importance: fresh ideas are more likely to come from outsiders, and creativity in one domain often stems from experiences in another, unrelated realm.

Read Kaufman’s enlightening article in full here.

Socks and Self-knowledge


How well do you really know yourself? Go beyond your latte preferences and your favorite movies. Knowing yourself means being familiar with your most intimate thoughts, desires and fears, your character traits and flaws, your values. For many, this quest for self-knowledge is a life-long process. And it may begin with knowing about your socks.

From NYT:

Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense.

Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t: “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible.

This is usually news to non-philosophers. Most certainly imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge.

One common reaction to the focus on trivial self-knowledge is to ask, “Why on earth would you be interested in that?” — or, more pointedly, “Why on earth would anyone pay you to think about that?” Philosophers of self-knowledge aren’t deterred. It isn’t unusual for them to start their learned articles and books on self-knowledge by declaring that they aren’t going to be discussing substantial self-knowledge because that isn’t where the philosophical action is.

How can that be? It all depends on your starting point. For example, to know that you are wearing socks requires effort, even if it’s only the minimal effort of looking down at your feet. When you look down and see the socks on your feet you have evidence — the evidence of your senses — that you are wearing socks, and this illustrates what seems a general point about knowledge: knowledge is based on evidence, and our beliefs about the world around us can be wrong. Evidence can be misleading and conclusions from evidence unwarranted. Trivial self-knowledge seems different. On the face of it, you don’t need evidence to know that you believe you are wearing socks, and there is a strong presumption that your beliefs about your own beliefs and other states of mind aren’t mistaken. Trivial self-knowledge is direct (not based on evidence) and privileged (normally immune to error). Given these two background assumptions, it looks like there is something here that needs explaining: How is trivial self-knowledge, with all its peculiarities, possible?

From this perspective, trivial self-knowledge is philosophically interesting because it is special. “Special” in this context means special from the standpoint of epistemology or the philosophy of knowledge. Substantial self-knowledge is much less interesting from this point of view because it is like any other knowledge. You need evidence to know your own character and values, and your beliefs about your own character and values can be mistaken. For example, you think you are generous but your friends know you better. You think you are committed to racial equality but your behaviour suggests otherwise. Once you think of substantial self-knowledge as neither direct nor privileged why would you still regard it as philosophically interesting?

What is missing from this picture is any real sense of the human importance of self-knowledge. Self-knowledge matters to us as human beings, and the self-knowledge which matters to us as human beings is substantial rather than trivial self-knowledge. We assume that on the whole our lives go better with substantial self-knowledge than without it, and what is puzzling is how hard it can be to know ourselves in this sense.

The assumption that self-knowledge matters is controversial and philosophy might be expected to have something to say about the importance of self-knowledge, as well as its scope and extent. The interesting questions in this context include “Why is substantial self-knowledge hard to attain?” and “To what extent is substantial self-knowledge possible?”

Read the entire article here.

Image courtesy of DuckDuckGo Search.

 

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers, or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, the etymology of polysyllabic words, and so it goes.

So it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer, smartphone or pair of spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does: that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter, we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; and we can find fresh research and rich reference material on every subject imaginable.

Yet all this information will not directly make us any smarter; it is neither applied knowledge nor experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly: freed from the need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

Your Tax Dollars at Work

Naysayers would say that government, and hence taxpayer dollars, should not be used to fund science initiatives. After all, academia and business seem to do a fairly good job of discovery and innovation without a helping hand pilfering from the public purse. And, without a doubt, money aside, government-funded projects do raise a number of thorny questions: On what should our hard-earned income tax be spent? Who decides on the priorities? How is progress to be measured? Do taxpayers get any benefit in return? After all, many of us cringe at the thought of an unelected bureaucrat, or a committee of them, spending millions if not billions of our dollars. Why not just spend the money on fixing our national potholes?

But despite our many human flaws and foibles, we are at heart explorers. We seek to know more about ourselves, our world and our universe. Those who seek answers to fundamental questions of consciousness, aging, and life are pioneers in this quest to expand our domain of understanding and knowledge. These answers increasingly aid our daily lives through continuous improvement in medical science and innovation in materials science. And our collective lives are enriched as we learn more about the how and the why of our own and our universe’s existence.

So, some of our dollars have gone towards big science: the Large Hadron Collider (LHC) beneath the Franco-Swiss border, looking for the constituents of matter; the wild laser experiment at the National Ignition Facility, designed to enable controlled fusion reactions; and the Curiosity rover exploring Mars. Yet more of our dollars have gone to research and development into enhanced radar, graphene for next-generation circuitry, online courseware, stress in coral reefs, sensors to aid the elderly, ultra-high-speed internet for emergency response, erosion mitigation, self-cleaning surfaces, and flexible solar panels.

Now comes word that the U.S. government wants to spend $3 billion — over 10 years — on building a comprehensive map of the human brain. The media has dubbed this the “connectome,” following similar efforts to map our human DNA, the genome. While this is the type of big science that may yield tangible results and benefits only decades from now, it ignites the passion and curiosity of our children to continue to seek and to find answers. So this is good news for science, and for the explorer who lurks within us all.

From ars technica:

Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.

But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University’s John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn’t speak for the administration’s plans, he did describe the outlines of what’s being proposed and why, and he provided a glimpse into what he sees as the project’s benefits.

What are we talking about doing?

We’ve already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. “There’s a big gap in our knowledge because we don’t know the intermediate scale,” Donaghue told Ars. The goal, he said, “is not a wiring diagram—it’s a functional map, an understanding.”

This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donaghue answered that question with one of his own: “At what point does the emergent property come out?” Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don’t really know what that level is. It’s certainly “above 10,” according to Donaghue. “I don’t think we need to study every neuron,” he said. Beyond that, part of the project will focus on what Donaghue called “the big question”—what emerges in the brain at these various scales?

While he may have called emergence “the big question,” it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don’t always understand the code well enough to understand the meaning of our recordings. When I asked Donaghue about this, he said, “This is it! One of the big goals is cracking the code.”

Donaghue was enthused about the idea that the different aspects of the project would feed into each other. “They go hand in hand,” he said. “As we gain more functional information, it’ll inform the connectional map and vice versa.” In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.

As we build on these feedbacks to understand more complex examples of the brain’s emergent behaviors, the big picture will emerge. Donaghue hoped that the work will ultimately provide “a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition.”

How will we actually do this?

Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donaghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We’ve now reached the point where, thanks to advances in nanotechnology, we’re able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donaghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell’s activity (perhaps stored in DNA itself) for later retrieval.

Right now, in Donaghue’s view, the problem is that the people developing these technologies and the neuroscience community aren’t talking enough. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.

Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we’ll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.
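To give a concrete, if toy, sense of that computational component, here is a minimal sketch in Python that simulates binned activity for a population of neurons and computes their pairwise correlation matrix. The array sizes, the noise model and the use of plain Pearson correlation are illustrative assumptions, not the project’s actual analysis pipeline.

```python
import numpy as np

# Toy sketch: look for correlated activity among populations of neurons.
# Sizes, noise levels, and the use of plain Pearson correlation are
# illustrative assumptions only.

rng = np.random.default_rng(0)
n_neurons, n_timebins = 100, 5000

# Two hidden "population signals" mixed into individual neurons, so that
# some groups of neurons are genuinely co-active.
latents = rng.normal(size=(2, n_timebins))
mixing = rng.normal(size=(n_neurons, 2))
activity = mixing @ latents + 0.5 * rng.normal(size=(n_neurons, n_timebins))

# Pairwise correlation between every pair of recorded neurons.
corr = np.corrcoef(activity)                  # shape: (n_neurons, n_neurons)

# Report the most strongly correlated distinct pair.
np.fill_diagonal(corr, 0.0)
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Most correlated pair: neurons {i} and {j}, r = {corr[i, j]:.2f}")
```

Real recordings would of course be far larger and messier, which is exactly why the data sharing and pattern-recognition infrastructure matters.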

Read the entire article following the jump.

Image: White matter fiber architecture of the human brain. Courtesy of the Human Connectome Project.

The Half-Life of Facts

There is no doubting the ever-expanding reach of science and the acceleration of scientific discovery. Yet the accumulation, and for that matter the acceleration in the accumulation, of ever more knowledge does come with a price — many historical facts that we learned as kids are no longer true. This is especially important in areas such as medical research, where new discoveries are constantly making our previous notions of disease and treatment obsolete.

In his new book, The Half-Life of Facts, author Samuel Arbesman tells us why facts have an expiration date. A review of the book is excerpted below.

From Reason:

Dinosaurs were cold-blooded. Vast increases in the money supply produce inflation. Increased K-12 spending and lower pupil/teacher ratios boost public school student outcomes. Most of the DNA in the human genome is junk. Saccharin causes cancer and a high-fiber diet prevents it. Stars cannot be bigger than 150 solar masses. And by the way, what are the ten most populous cities in the United States?

In the past half century, all of the foregoing facts have turned out to be wrong (except perhaps the one about inflation rates). We’ll revisit the ten biggest cities question below. In the modern world facts change all of the time, according to Samuel Arbesman, author of The Half-Life of Facts: Why Everything We Know Has an Expiration Date.

Arbesman, a senior scholar at the Kauffman Foundation and an expert in scientometrics, looks at how facts are made and remade in the modern world. And since fact-making is speeding up, he worries that most of us don’t keep up to date and base our decisions on facts we dimly remember from school and university classes that turn out to be wrong.

The field of scientometrics – the science of measuring and analyzing science – took off in 1947 when mathematician Derek J. de Solla Price was asked to store a complete set of the Philosophical Transactions of the Royal Society temporarily in his house. He stacked them in order and he noticed that the height of the stacks fit an exponential curve. Price started to analyze all sorts of other kinds of scientific data and concluded in 1960 that scientific knowledge had been growing steadily at a rate of 4.7 percent annually since the 17th century. The upshot was that scientific data was doubling every 15 years.

In 1965, Price exuberantly observed, “All crude measures, however arrived at, show to a first approximation that science increases exponentially, at a compound interest of about 7 percent per annum, thus doubling in size every 10–15 years, growing by a factor of 10 every half century, and by something like a factor of a million in the 300 years which separate us from the seventeenth-century invention of the scientific paper when the process began.” A 2010 study in the journal Scientometrics looked at data between 1907 and 2007 and concluded that so far the “overall growth rate for science still has been at least 4.7 percent per year.”

Since scientific knowledge is still growing by a factor of ten every 50 years, it should not be surprising that lots of facts people learned in school and universities have been overturned and are now out of date.  But at what rate do former facts disappear? Arbesman applies the concept of half-life, the time required for half the atoms of a given amount of a radioactive substance to disintegrate, to the dissolution of facts. For example, the half-life of the radioactive isotope strontium-90 is just over 29 years. Applying the concept of half-life to facts, Arbesman cites research that looked into the decay in the truth of clinical knowledge about cirrhosis and hepatitis. “The half-life of truth was 45 years,” reported the researchers.

In other words, half of what physicians thought they knew about liver diseases was wrong or obsolete 45 years later. As interesting and persuasive as this example is, Arbesman’s book would have been strengthened by more instances drawn from the scientific literature.
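For the curious, the figures quoted above hang together with nothing more than the standard exponential formulas. The short Python sketch below reproduces them; the 4.7 percent growth rate and the 45-year half-life come from the article, and everything else is textbook arithmetic.

```python
import math

# The 4.7 percent annual growth rate and the 45-year "half-life of truth"
# are taken from the article; the rest is standard exponential arithmetic.

growth_rate = 0.047
doubling_time = math.log(2) / math.log(1 + growth_rate)
fifty_year_factor = (1 + growth_rate) ** 50
print(f"Doubling time at 4.7%/yr: {doubling_time:.1f} years")   # ~15 years
print(f"Growth over 50 years: {fifty_year_factor:.1f}x")        # ~10x, as Price observed

half_life = 45.0  # years, for clinical knowledge about cirrhosis and hepatitis
for years in (10, 45, 90):
    surviving = 0.5 ** (years / half_life)
    print(f"After {years:2d} years, about {surviving:.0%} of that knowledge still holds")
```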

Facts are being manufactured all of the time, and, as Arbesman shows, many of them turn out to be wrong. Checking each by each is how the scientific process is supposed to work, i.e., experimental results need to be replicated by other researchers. How many of the findings in 845,175 articles published in 2009 and recorded in PubMed, the free online medical database, were actually replicated? Not all that many. In 2011, a disheartening study in Nature reported that a team of researchers over ten years was able to reproduce the results of only six out of 53 landmark papers in preclinical cancer research.

Read the entire article after the jump.

Improvements to Our Lives Through Science

Ask a hundred people how science can be used for the good and you’re likely to get a hundred different answers. Well, Edge Magazine did just that, posing the question “What scientific concept would improve everybody’s cognitive toolkit?” to 159 critical thinkers. Below we excerpt some of our favorites. The thoroughly engrossing, novel-length article can be found here in its entirety.

From Edge:

Ether
Richard H. Thaler. Father of behavioral economics.

I recently posted a question in this space asking people to name their favorite example of a wrong scientific belief. One of my favorite answers came from Clay Shirky. Here is an excerpt:
The existence of ether, the medium through which light (was thought to) travel. It was believed to be true by analogy — waves propagate through water, and sound waves propagate through air, so light must propagate through X, and the name of this particular X was ether.
It’s also my favorite because it illustrates how hard it is to accumulate evidence for deciding something doesn’t exist. Ether was both required by 19th century theories and undetectable by 19th century apparatus, so it accumulated a raft of negative characteristics: it was odorless, colorless, inert, and so on.

Ecology
Brian Eno. Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon.

That idea, or bundle of ideas, seems to me the most important revolution in general thinking in the last 150 years. It has given us a whole new sense of who we are, where we fit, and how things work. It has made commonplace and intuitive a type of perception that used to be the province of mystics — the sense of wholeness and interconnectedness.
Beginning with Copernicus, our picture of a semi-divine humankind perfectly located at the centre of The Universe began to falter: we discovered that we live on a small planet circling a medium sized star at the edge of an average galaxy. And then, following Darwin, we stopped being able to locate ourselves at the centre of life. Darwin gave us a matrix upon which we could locate life in all its forms: and the shocking news was that we weren’t at the centre of that either — just another species in the innumerable panoply of species, inseparably woven into the whole fabric (and not an indispensable part of it either). We have been cut down to size, but at the same time we have discovered ourselves to be part of the most unimaginably vast and beautiful drama called Life.

We Are Not Alone In The Universe
J. Craig Venter. Leading scientist of the 21st century.

I cannot imagine any single discovery that would have more impact on humanity than the discovery of life outside of our solar system. There is a human-centric, Earth-centric view of life that permeates most cultural and societal thinking. Finding that there are multiple, perhaps millions of origins of life and that life is ubiquitous throughout the universe will profoundly affect every human.

Correlation is not a cause
Susan Blackmore. Psychologist; Author, Consciousness: An Introduction.

The phrase “correlation is not a cause” (CINAC) may be familiar to every scientist but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.
One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when teaching experimental design to nurses, physiotherapists and other assorted groups. They usually understood my favourite example: imagine you are watching at a railway station. More and more people arrive until the platform is crowded, and then — hey presto — along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B).
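Blackmore’s railway example also lends itself to a tiny simulation. The sketch below (in Python, with entirely made-up numbers) lets a hidden common cause C, the timetable, drive both the crowd size A and the train’s arrival B; A and B come out strongly correlated even though neither causes the other.

```python
import numpy as np

# Toy version of the railway example: a hidden common cause (the timetable)
# drives both the crowd on the platform and the train's arrival, so the two
# are correlated even though neither causes the other. All numbers are made up.

rng = np.random.default_rng(1)
n = 500

minutes_to_next_train = rng.uniform(0, 30, size=n)          # C: the timetable

# A: the crowd grows as departure approaches (plus some noise).
people_on_platform = 100 - 3 * minutes_to_next_train + rng.normal(0, 5, size=n)

# B: a train pulls in within the next five minutes.
train_arrives_soon = (minutes_to_next_train < 5).astype(float)

r = np.corrcoef(people_on_platform, train_arrives_soon)[0, 1]
print(f"Correlation between crowd size and imminent train: r = {r:.2f}")
# r is strongly positive, yet neither A nor B causes the other: C does.
```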

A Statistically Significant Difference in Understanding the Scientific Process
Diane F. Halpern. Professor, Claremont McKenna College; Past-president, American Psychological Society.

Statistically significant difference — it is a simple phrase that is essential to science and that has become common parlance among educated adults. These three words convey a basic understanding of the scientific process, random events, and the laws of probability. The term appears almost everywhere that research is discussed — in newspaper articles, advertisements for “miracle” diets, research publications, and student laboratory reports, to name just a few of the many diverse contexts where the term is used. It is a shorthand abstraction for a sequence of events that includes an experiment (or other research design), the specification of a null and alternative hypothesis, (numerical) data collection, statistical analysis, and the probability of an unlikely outcome. That is a lot of science conveyed in a few words.
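To make that sequence concrete, here is a minimal sketch in Python of a made-up experiment: a control group, a treatment group, a null hypothesis of no difference, and a two-sample t-test that yields the probability of an outcome this extreme arising by chance. The group sizes, means and choice of test are illustrative assumptions, not anything prescribed by Halpern.

```python
import numpy as np
from scipy import stats

# Made-up experiment: a control group and a treatment group of test scores.
# Null hypothesis: the treatment makes no difference. Group sizes, means,
# and the choice of a two-sample t-test are illustrative assumptions.

rng = np.random.default_rng(42)
control = rng.normal(loc=50.0, scale=10.0, size=40)
treatment = rng.normal(loc=56.0, scale=10.0, size=40)   # a real +6 effect baked in

t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"observed mean difference: {treatment.mean() - control.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# "Statistically significant" conventionally means p < 0.05: an outcome this
# extreme would be unlikely if the null hypothesis were true.
print("statistically significant" if p_value < 0.05 else "not statistically significant")
```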

 

Confabulation
Fiery Cushman. Post-doctoral fellow, Mind/Brain/Behavior Interfaculty Initiative, Harvard University.

We are shockingly ignorant of the causes of our own behavior. The explanations that we provide are sometimes wholly fabricated, and certainly never complete. Yet, that is not how it feels. Instead it feels like we know exactly what we’re doing and why. This is confabulation: Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties. Every year psychologists use dramatic examples to entertain their undergraduate audiences. Confabulation is funny, but there is a serious side, too. Understanding it can help us act better and think better in everyday life.

We are Lost in Thought
Sam Harris. Neuroscientist; Chairman, The Reason Project; Author, Letter to a Christian Nation.

I invite you to pay attention to anything — the sight of this text, the sensation of breathing, the feeling of your body resting against your chair — for a mere sixty seconds without getting distracted by discursive thought. It sounds simple enough: Just pay attention. The truth, however, is that you will find the task impossible. If the lives of your children depended on it, you could not focus on anything — even the feeling of a knife at your throat — for more than a few seconds, before your awareness would be submerged again by the flow of thought. This forced plunge into unreality is a problem. In fact, it is the problem from which every other problem in human life appears to be made.
I am by no means denying the importance of thinking. Linguistic thought is indispensable to us. It is the basis for planning, explicit learning, moral reasoning, and many other capacities that make us human. Thinking is the substance of every social relationship and cultural institution we have. It is also the foundation of science. But our habitual identification with the flow of thought — that is, our failure to recognize thoughts as thoughts, as transient appearances in consciousness — is a primary source of human suffering and confusion.

Knowledge
Mark Pagel. Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute.

The Oracle of Delphi famously pronounced Socrates to be “the most intelligent man in the world because he knew that he knew nothing”. Over 2000 years later the physicist-turned-historian Jacob Bronowski would emphasize — in the last episode of his landmark 1970s television series “The Ascent of Man” — the danger of our all-too-human conceit of thinking we know something. What Socrates knew, and what Bronowski had come to appreciate, is that knowledge — true knowledge — is difficult, maybe even impossible, to come by; it is prone to misunderstanding and counterfactuals; and, most importantly, it can never be acquired with exact precision: there will always be some element of doubt about anything we come to “know” from our observations of the world.

More from theSource here.

Learning to learn

By George Blecher for Eurozine:

Before I learned how to learn, I was full of bullshit. I exaggerate. But like any bright student, I spent a lot of time faking it, pretending to know things about which I had only vague generalizations and a fund of catch-words. Why do bright students need to fake it? I guess because if they’re considered “bright”, they’re caught in a tautology: bright students are supposed to know, so if they risk not knowing, they must not be bright.

In any case, I faked it. I faked it so well that even my teachers were afraid to contradict me. I faked it so well that I convinced myself that I wasn’t faking it. In the darkest corners of the bright student’s mind, the borders between real and fake knowledge are blurred, and he puts so much effort into faking it that he may not even recognize when he actually knows something.

Above all, he dreads that his bluff will be called – that an honest soul will respect him enough to pick apart his faulty reasoning and superficial grasp of a subject, and expose him for the fraud he believes himself to be. So he lives in a state of constant fear: fear of being exposed, fear of not knowing, fear of appearing afraid. No wonder that Plato in The Republic cautions against teaching the “dialectic” to future Archons before the age of 30: he knew that instead of using it to pursue “Truth”, they’d wield it like a weapon to appear cleverer than their fellows.

Sometimes the worst actually happens. The bright student gets caught with his intellectual pants down. I remember taking an exam when I was 12, speeding through it with great cockiness until I realized that I’d left out a whole section. I did what the bright student usually does: I turned it back on the teacher, insisting that the question was misleading, and that I should be granted another half hour to fill in the missing part. (Probably Mr Lipkin just gave in because he knew what a pain in the ass the bright student can be!)

So then I was somewhere in my early 30s. No more teachers or parents to impress; no more exams to ace: just the day-to-day toiling in the trenches, trying to build a life.

More from theSource here.

Mind Over Mass Media

From the New York Times:

NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.

So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.

But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.

For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.

Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your Blackberry at dinner time, ask your spouse to call you to bed at a designated hour.

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”

More from theSource here.

‘Thirst For Knowledge’ May Be Opium Craving

From ScienceDaily:

Neuroscientists have proposed a simple explanation for the pleasure of grasping a new concept: The brain is getting its fix.

The “click” of comprehension triggers a biochemical cascade that rewards the brain with a shot of natural opium-like substances, said Irving Biederman of the University of Southern California. He presents his theory in an invited article in the latest issue of American Scientist.

“While you’re trying to understand a difficult theorem, it’s not fun,” said Biederman, professor of neuroscience in the USC College of Letters, Arts and Sciences.

“But once you get it, you just feel fabulous.”

The brain’s craving for a fix motivates humans to maximize the rate at which they absorb knowledge, he said.

“I think we’re exquisitely tuned to this as if we’re junkies, second by second.”

Biederman hypothesized that knowledge addiction has strong evolutionary value because mate selection correlates closely with perceived intelligence.

Only more pressing material needs, such as hunger, can suspend the quest for knowledge, he added.

The same mechanism is involved in the aesthetic experience, Biederman said, providing a neurological explanation for the pleasure we derive from art.

More from theSource here.