Tag Archives: learning

Those Were the Days

I (still) have school-age children. So, I’m in two minds as to whether I support columnist Joe Queenan’s position on the joys that come from freedom-from-kids. He is not the mythical empty nester bemoaning the loss of his kids to the vagaries of adulthood. He is not the control-freak helicopter parent suffering withdrawal pains from no longer being able to offer advice on math homework. He doesn’t miss offering a soothing critique of the latest cardboard diorama. Nor does he mourn the visits to school counsellors, the coach, and the nurse, or the ferrying of the kids to and from endless after-school and extracurricular activities. He’s joyfully free.

While I anticipate a certain pleasure to be had from this added freedom when the kids trek off to college and beyond, I think it will come as a mixed blessing. Will I miss scratching my head over 9th grade calculus? Will I miss cheering on my budding basketball star? Will I miss drawing diagrams of electron shells and maps of the Middle East? Will I miss the video book reviews or the poetry slam? I think I will.

From the WSJ:

Once their children are all grown up and have moved away for good, parents are supposed to suffer from profound melancholy and sometimes even outright depression. This is the phenomenon widely known by the horrid term “empty nest syndrome.”

“It all went by too fast.” “We didn’t really enjoy those precious little moments as much as we should have.” “The future now looks so bleak.” These are the sorts of things that rueful empty nesters—nostalgic for the glorious, halcyon days when their children were young and innocent and still nesting—say to themselves. Or so runs the popular mythology.

This has not been my experience as a parent. From the moment my children left school forever ten years ago, I felt a radiant, ineffable joy suffuse my very being. Far from being depressed or sad, I was elated. There was a simple reason for this: From that point onward, I would never again have to think about the kids and school. Never, ever, ever.

I would never have to go to the middle school office to find out why my child was doing so poorly in math. I would never have to ask the high-school principal why the French teacher didn’t seem to speak much French. I would never have to ask the grade-school principal why he rewrote my daughter’s sixth-grade graduation speech to include more references to his own prodigious sense of humor and caring disposition, and fewer jokes of her own.

I would never have to complain that the school had discontinued the WordMasters competition, the one activity at which my son truly excelled. I would never have to find out if my son was in any way responsible for a classmate damaging his wrist during recess. I would never again have to listen to my child, or anyone else’s, play the cello.

I would never have to attend a parent-teacher meeting to find out why my daughter’s history instructor was teaching the class that England’s King Edward II didn’t have a son. A son named Edward III. A son who took special pains to publicly hang the man who allegedly killed his dad—and let the body rot for a couple of days, just to show how ticked off he was about his father’s mistreatment. All of which my kids knew because their mother grew up 5 miles from the castle where Edward II was heinously butchered. Leaving behind Edward III. His son.

“The timeline gets confusing back then,” the teacher explained when we visited him. No, it doesn’t. In history, this thing happened and that thing didn’t. If you didn’t know that, your students got crummy AP scores. And then they didn’t get into the best college. My wife and I weren’t going out of our way to embarrass the teacher. It was just…well…first you’re wrong about Edward III, and then you’re wrong about Henry III, and before you know it, you’re wrong about Richard III. Who knows where it all could lead?

But now it no longer mattered. The ordeal had ended; the 18-year plague had run its course; the bitter cup had passed from my lips. I would never quaff from its putrid contents again. Good riddance.

Read the entire story here.

Past Experience is Good; Random Decision-Making is Better

We all know that making decisions from past experience is wise. We learn from the benefit of hindsight. We learn to make small improvements or radical shifts in our thinking and behaviors based on history and previous empirical evidence. Stock market gurus and investment mavens will tell you time after time that they have a proven method — based on empirical evidence and a lengthy, illustrious track record — for picking the next great stock or investing your hard-earned retirement funds.

Yet, empirical evidence shows that chimpanzees throwing darts at the WSJ stock pages are just as good at picking stocks as we humans (and the “masters of the universe”). So, it seems that random decision-making can be just as good as, if not better than, wisdom and experience.
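
To make this concrete, here is a minimal simulation sketch, assuming stock returns follow an independent random walk (so past performance carries no information about the future): a “seasoned” picker who chases the best past performer has no edge over a dart-throwing chimp who picks at random.

```python
import random

random.seed(1)

N_STOCKS, N_MONTHS, N_TRIALS = 50, 24, 2000


def simulate_market():
    # i.i.d. monthly returns: yesterday's winners say nothing about tomorrow
    return [[random.gauss(0.005, 0.05) for _ in range(N_MONTHS)]
            for _ in range(N_STOCKS)]


def forward_return(market, stock, split):
    # compound return earned after the decision point
    total = 1.0
    for r in market[stock][split:]:
        total *= 1 + r
    return total - 1


expert_results, chimp_results = [], []
for _ in range(N_TRIALS):
    market = simulate_market()
    split = N_MONTHS // 2
    # "expert": chase the stock with the best first-year track record
    past = [sum(stock[:split]) for stock in market]
    expert_pick = past.index(max(past))
    # "chimp": throw a dart
    chimp_pick = random.randrange(N_STOCKS)
    expert_results.append(forward_return(market, expert_pick, split))
    chimp_results.append(forward_return(market, chimp_pick, split))

print(f"expert mean forward return: {sum(expert_results) / N_TRIALS:+.2%}")
print(f"chimp  mean forward return: {sum(chimp_results) / N_TRIALS:+.2%}")
```

Run it a few times: the expert’s average forward return hovers around the same value as the chimp’s, because in this market the track record is pure noise.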

From the Guardian:

No matter how much time you spend reading the recent crop of books on How To Decide or How To Think Clearly, you’re unlikely to encounter glowing references to a decision-making system formerly used by the Azande of central Africa. Faced with a dilemma, tribespeople would force poison down the neck of a chicken while asking questions of the “poison oracle”; the chicken answered by surviving (“yes”) or expiring (“no”). Clearly, this was cruel to chickens. That aside, was it such a terrible way to choose among options? The anthropologist EE Evans-Pritchard, who lived with the Azande in the 1920s, didn’t think so. “I always kept a supply of poison [and] we regulated our affairs in accordance with the oracle’s decisions,” he wrote, adding drily: “I found this as satisfactory a way of running my home and affairs as any other I know of.” You could dismiss that as a joke. After all, chicken-poisoning is plainly superstition, delivering random results. But what if random results are sometimes exactly what you need?

The other day, US neuroscientists published details of experiments on rats, showing that in certain unpredictable situations, they stop trying to make decisions based on past experience. Instead, a circuit in their brains switches to “random mode”. The researchers’ hunch is that this serves a purpose: past experience is usually helpful, but when uncertainty levels are high, it can mislead, so randomness is in the rats’ best interests. When we’re faced with the unfamiliar, experience can mislead humans, too, partly because we filter it through various irrational biases. According to those books on thinking clearly, we should strive to overcome these biases, thus making more rational calculations. But there’s another way to bypass our biased brains: copy the rats, and choose randomly.

In certain walks of life, the usefulness of randomness is old news: the stock market, say, is so unpredictable that, to quote the economist Burton Malkiel, “a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do as well as one carefully selected by experts”. (This has been tried, with simulated monkeys, and they beat the market.) But, generally, as Michael Schulson put it recently in an Aeon magazine essay, “We take it for granted that the best decisions stem from empirical analysis and informed choice.” Yet consider, he suggests, the ancient Greek tradition of filling some government positions by lottery. Randomness disinfects a process that might be dirtied by corruption.

Randomness can be similarly useful in everyday life. For tiny choices, it’s a time-saver: pick randomly from a menu, and you can get back to chatting with friends. For bigger ones, it’s an acknowledgment of how little one can ever know about the complex implications of a decision. Let’s be realistic: for the biggest decisions, such as whom to marry, trusting to randomness feels absurd. But if you can up the randomness quotient for marginally less weighty choices, especially when uncertainty prevails, you may find it pays off. Though kindly refrain from poisoning any chickens.

Read the entire article here.

Paper is the Next Big Thing

Luddites and technophobes, rejoice: paper-bound books may be with us for quite some time. And there may be some genuinely scientific reasons why physical books will endure. Recent research shows that people learn more effectively when reading from paper than from its digital offspring.

From Wired:

Paper books were supposed to be dead by now. For years, information theorists, marketers, and early adopters have told us their demise was imminent. Ikea even redesigned a bookshelf to hold something other than books. Yet in a world of screen ubiquity, many people still prefer to do their serious reading on paper.

Count me among them. When I need to read deeply—when I want to lose myself in a story or an intellectual journey, when focus and comprehension are paramount—I still turn to paper. Something just feels fundamentally richer about reading on it. And researchers are starting to think there’s something to this feeling.

To those who see dead tree editions as successors to scrolls and clay tablets in history’s remainder bin, this might seem like literary Luddism. But I e-read often: when I need to copy text for research or don’t want to carry a small library with me. There’s something especially delicious about late-night sci-fi by the light of a Kindle Paperwhite.

What I’ve read on screen seems slippery, though. When I later recall it, the text is slightly translucent in my mind’s eye. It’s as if my brain better absorbs what’s presented on paper. Pixels just don’t seem to stick. And often I’ve found myself wondering, why might that be?

The usual explanation is that internet devices foster distraction, or that my late-thirty-something brain isn’t that of a true digital native, accustomed to screens since infancy. But I have the same feeling when I am reading a screen that’s not connected to the internet and Twitter or online Boggle can’t get in the way. And research finds that kids these days consistently prefer their textbooks in print rather than pixels. Whatever the answer, it’s not just about habit.

Another explanation, expressed in a recent Washington Post article on the decline of deep reading, blames a sweeping change in our lifestyles: We’re all so multitasked and attention-fragmented that our brains are losing the ability to focus on long, linear texts. I certainly feel this way, but if I don’t read deeply as often or easily as I used to, it does still happen. It just doesn’t happen on screen, and not even on devices designed specifically for that experience.

Maybe it’s time to start thinking of paper and screens another way: not as an old technology and its inevitable replacement, but as different and complementary interfaces, each stimulating particular modes of thinking. Maybe paper is a technology uniquely suited for imbibing novels and essays and complex narratives, just as screens are for browsing and scanning.

“Reading is human-technology interaction,” says literacy professor Anne Mangen of Norway’s University of Stavanger. “Perhaps the tactility and physical permanence of paper yields a different cognitive and emotional experience.” This is especially true, she says, for “reading that can’t be done in snippets, scanning here and there, but requires sustained attention.”

Mangen is among a small group of researchers who study how people read on different media. It’s a field that goes back several decades, but yields no easy conclusions. People tended to read slowly and somewhat inaccurately on early screens. The technology, particularly e-paper, has improved dramatically, to the point where speed and accuracy aren’t now problems, but deeper issues of memory and comprehension are not yet well-characterized.

Complicating the scientific story further, there are many types of reading. Most experiments involve short passages read by students in an academic setting, and for this sort of reading, some studies have found no obvious differences between screens and paper. Those don’t necessarily capture the dynamics of deep reading, though, and nobody’s yet run the sort of experiment, involving thousands of readers in real-world conditions who are tracked for years on a battery of cognitive and psychological measures, that might fully illuminate the matter.

In the meantime, other research does suggest possible differences. A 2004 study found that students more fully remembered what they’d read on paper. Those results were echoed by an experiment that looked specifically at e-books, and another by psychologist Erik Wästlund at Sweden’s Karlstad University, who found that students learned better when reading from paper.

Wästlund followed up that study with one designed to investigate screen reading dynamics in more detail. He presented students with a variety of on-screen document formats. The most influential factor, he found, was whether they could see pages in their entirety. When they had to scroll, their performance suffered.

According to Wästlund, scrolling had two impacts, the most basic being distraction. Even the slight effort required to drag a mouse or swipe a finger requires a small but significant investment of attention, one that’s higher than flipping a page. Text flowing up and down a page also disrupts a reader’s visual attention, forcing eyes to search for a new starting point and re-focus.

Read the entire electronic article here.

Image: Leicester or Hammer Codex, by Leonardo da Vinci (1452-1519). Courtesy of Wikipedia / Public domain.

Elite Mediocrity

Yet another survey of global educational attainment puts the United States firmly in yet another unenviable position. US students ranked a mere 28th in science and further down the scale in math, at 36th, out of 65 nations. So, it’s time for another well-earned attack on a system that increasingly nurtures mainstream mediocrity and dumbs education down to mush. In fact, some nameless states seem to celebrate this by re-working textbooks and curricula so that historical fact and scientific principles are distorted to promote a religious agenda. And, for those who point to the US as a guiding light in all things innovative, please don’t forget that a significant proportion of its innovators gained their educational credentials outside the US.

As Comedy Central’s faux-news anchor and satirist Stephen Colbert recently put it:

“Like all great theologies, Bill [O’Reilly]’s can be boiled down to one sentence: there must be a God, because I don’t know how things work.”

From the Huffington Post:

The 2012 Programme for International Student Assessment, or PISA, results are in, and there’s some really good news for those that worry about the U.S. becoming a nation of brainy elitists. Of the 65 countries that participated in the PISA assessment, U.S. students ranked 36th in math, and 28th in science. When it comes to elitism, the U.S. truly has nothing to worry about.

For those relative few Americans who were already elite back when the 2009 PISA assessment was conducted, there’s good news for them too: they’re even more elite than they were in 2009, when the US ranked 30th in math and 23rd in science. Educated Americans are so elite, they’re practically an endangered species.

The only nagging possible shred of bad news from these test scores comes in the form of a question: where will the next Internet come from? Which country will deliver the next great big, landscape-changing, technological innovation that will propel its economy upward? The country of bold, transformative firsts, the one that created the world’s first nuclear reactor and landed humans on the moon seems very different than the one we live in today.

Mediocrity in science education has metastasized throughout the American mindset, dumbing down everything in its path, including the choices made by our elected officials. A stinging byproduct of America’s war on excellence in science education was the loss of its leadership position in particle physics research. On March 14 of this year, CERN, the European Organization for Nuclear Research, announced that the Higgs Boson, aka the “God particle,” had been discovered at the EU’s Large Hadron Collider. CERN describes itself as “the world’s leading laboratory for particle physics” — a title previously held by America’s Fermilab. Fermilab’s Tevatron particle accelerator was the world’s largest and most powerful until eclipsed by CERN’s Large Hadron Collider. The Tevatron was shut down on September 30th, 2011.

The Tevatron’s planned replacement, Texas’ Superconducting Super Collider (SSC), would have been three times the size of the EU’s Large Hadron Collider. Over one third of the SSC’s underground tunnel had been bored at the time of its cancellation by congress in 1993. As Texas Monthly reported in “How Texas Lost the World’s Largest Super Collider,” “Nobody doubts that the 40 TeV Superconducting Super Collider (SSC) in Texas would have discovered the Higgs boson a decade before CERN.” Fighting to save the SSC in 1993, its director, Dr. Roy Schwitters, said in a New York Times interview, “The SSC is becoming a victim of the revenge of the C students.”

Ever wonder about the practical benefits of theoretical physics? Consider this: without Einstein’s theory of general relativity, GPS doesn’t work. That’s because time in those GPS satellites whizzing above us in space is slightly different than time for us terrestrials. Without compensating for the difference, our cars would end up in a ditch instead of Starbucks. GPS would also not have happened without advances in US space technology. Consider that, in 2013, there are two manned spacefaring nations on Earth – the US isn’t one of them. GPS alone is estimated to generate $122.4 billion annually in direct and related benefits according to an NDP Consulting Group report. The Superconducting Super Collider would have cost $8.4 billion.
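
The GPS claim is easy to sanity-check with a back-of-the-envelope sketch, assuming standard GPS orbital parameters (roughly 20,200 km altitude) rather than anything quoted in the article: the combined relativistic effects make the satellite clocks gain about 38 microseconds per day, which at the speed of light amounts to kilometers of accumulated range error per day if left uncorrected.

```python
# Back-of-the-envelope relativistic clock correction for a GPS satellite.
# Constants and orbit are assumed values, not taken from the article.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg
C = 2.998e8            # speed of light, m/s
R_EARTH = 6.371e6      # mean Earth radius, m
R_ORBIT = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86400

gm = G * M
v = (gm / R_ORBIT) ** 0.5                              # orbital speed, ~3.9 km/s

# General relativity: weaker gravity in orbit makes the satellite clock tick faster
gr_rate = (gm / C**2) * (1 / R_EARTH - 1 / R_ORBIT)
# Special relativity: the satellite's orbital speed makes its clock tick slower
sr_rate = -v**2 / (2 * C**2)

gain_per_day = (gr_rate + sr_rate) * SECONDS_PER_DAY   # seconds gained per day
print(f"net clock gain: {gain_per_day * 1e6:.1f} microseconds per day")       # ~38
print(f"uncorrected range error: {gain_per_day * C / 1000:.1f} km per day")   # ~11
```

The gravitational term outweighs the velocity term by roughly a factor of six, which is why it is general relativity, not just special relativity, that the article singles out.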

‘C’ students’ revenge doesn’t stop with crushing super colliders or grounding our space program. Fox News’ Bill O’Reilly famously translated his inability to explain 9th grade astronomy into justification for teaching creationism in public schools, stating that we don’t know how tides work, or where the sun or moon comes from, or why the Earth has a moon and Mars doesn’t (Mars actually has two moons).

Read the entire article here.

A Better Way to Study and Learn

Our current educational process in one sentence: assume student is empty vessel; provide student with content; reward student for remembering and regurgitating content; repeat.

Yet we have known for a while, and a growing body of research corroborates it, that this method of teaching and learning is neither very effective nor, for that matter, stimulating. It is simply an efficient mechanism for mass-producing an adequate workforce for the job market. Of course, for most people it then takes many more decades after high school or college to unlearn the rote trivia and re-learn what really matters.

Mind Hacks reviews some recent studies that highlight better approaches to studying.

From Mind Hacks:

Decades old research into how memory works should have revolutionised University teaching. It didn’t.

If you’re a student, what I’m about to tell you will let you change how you study so that it is more effective, more enjoyable and easier. If you work at a University, you – like me – should hang your head in shame that we’ve known this for decades but still teach the way we do.

There’s a dangerous idea in education that students are receptacles, and teachers are responsible for providing content that fills them up. This model encourages us to test students by the amount of content they can regurgitate, to focus overly on statements rather than skills in assessment and on syllabuses rather than values in teaching. It also encourages us to believe that we should try and learn things by trying to remember them. Sounds plausible, perhaps, but there’s a problem. Research into the psychology of memory shows that intention to remember is a very minor factor in whether you remember something or not. Far more important than whether you want to remember something is how you think about the material when you encounter it.

A classic experiment by Hyde and Jenkins (1973) illustrates this. These researchers gave participants lists of words, which they later tested recall of, as their memory items. To affect their thinking about the words, half the participants were told to rate the pleasantness of each word, and half were told to check if the word contained the letters ‘e’ or ‘g’. This manipulation was designed to affect ‘depth of processing’. The participants in the rating-pleasantness condition had to think about what the word meant, and relate it to themselves (how they felt about it) – “deep processing”. Participants in the letter-checking condition just had to look at the shape of the letters; they didn’t even have to read the word if they didn’t want to – “shallow processing”. The second, independent, manipulation concerned whether participants knew that they would be tested later on the words. Half of each group were told this – the “intentional learning” condition – and half weren’t told, so the test would come as a surprise – the “incidental learning” condition.

Read the entire article here.

Image courtesy of the Telegraph / AP.

Learning to Learn

By George Blecher for Eurozine:

Before I learned how to learn, I was full of bullshit. I exaggerate. But like any bright student, I spent a lot of time faking it, pretending to know things about which I had only vague generalizations and a fund of catch-words. Why do bright students need to fake it? I guess because if they’re considered “bright”, they’re caught in a tautology: bright students are supposed to know, so if they risk not knowing, they must not be bright.

In any case, I faked it. I faked it so well that even my teachers were afraid to contradict me. I faked it so well that I convinced myself that I wasn’t faking it. In the darkest corners of the bright student’s mind, the borders between real and fake knowledge are blurred, and he puts so much effort into faking it that he may not even recognize when he actually knows something.

Above all, he dreads that his bluff will be called – that an honest soul will respect him enough to pick apart his faulty reasoning and superficial grasp of a subject, and expose him for the fraud he believes himself to be. So he lives in a state of constant fear: fear of being exposed, fear of not knowing, fear of appearing afraid. No wonder that Plato in The Republic cautions against teaching the “dialectic” to future Archons before the age of 30: he knew that instead of using it to pursue “Truth”, they’d wield it like a weapon to appear cleverer than their fellows.

Sometimes the worst actually happens. The bright student gets caught with his intellectual pants down. I remember taking an exam when I was 12, speeding through it with great cockiness until I realized that I’d left out a whole section. I did what the bright student usually does: I turned it back on the teacher, insisting that the question was misleading, and that I should be granted another half hour to fill in the missing part. (Probably Mr Lipkin just gave in because he knew what a pain in the ass the bright student can be!)

So then I was somewhere in my early 30s. No more teachers or parents to impress; no more exams to ace: just the day-to-day toiling in the trenches, trying to build a life.

More from theSource here.