Category Archives: BigBang

Early Adopters of Inconvenient Truths

Flat_earth

Conspiracy theorists are a small but vocal and influential minority. Their views span the gamut of conspiracy theories: holocaust denial, President Kennedy’s assassination, UFOs, extraterrestrials, Flat Earth, alternate technology suppression, climate change, to name just a handful.

The United States is, after all, host to a presidential candidate who subscribes to a number of conspiracy theories. And, tellingly, there’s even a dating app — Awake Dating — for like-minded conspiracy theorists. Though the site’s COO, Jarrod Fidden, prefers to label his members “early adopter[s] of inconvenient truths” over the term “conspiracy theorist”, which, let’s face it, is often used pejoratively.

So, perhaps it serves to delve a little deeper into why some nonsensical and scientifically disproved ideas persist in 2016.

Briefly, it seems that zombie ideas thrive for a couple of key reasons: first, they may confer some level of group identity, attention and/or influence; second, they provide a degree of simplistic comfort to counter often highly complex scientific explanations. Moreover, conspiracy theories do have a generally positive cultural effect — some bring laughter to our days, but most tend to drive serious debate and further research in the quest for true (scientific) consensus.

From the Guardian:

In January 2016, the rapper BoB took to Twitter to tell his fans that the Earth is really flat. “A lot of people are turned off by the phrase ‘flat earth’,” he acknowledged, “but there’s no way u can see all the evidence and not know … grow up.” At length the astrophysicist Neil deGrasse Tyson joined in the conversation, offering friendly corrections to BoB’s zany proofs of non-globism, and finishing with a sarcastic compliment: “Being five centuries regressed in your reasoning doesn’t mean we all can’t still like your music.”

Actually, it’s a lot more than five centuries regressed. Contrary to what we often hear, people didn’t think the Earth was flat right up until Columbus sailed to the Americas. In ancient Greece, the philosophers Pythagoras and Parmenides had already recognised that the Earth was spherical. Aristotle pointed out that you could see some stars in Egypt and Cyprus that were not visible at more northerly latitudes, and also that the Earth casts a curved shadow on the moon during a lunar eclipse. The Earth, he concluded with impeccable logic, must be round.

Read the entire article here.

Image: Azimuthal equidistant projection, used by some Flat Earthers as evidence for a flat Earth. Courtesy: Trekky0623 / Wikipedia. Public Domain.

Fish Roasts Human: Don’t Read It, Share It

Common_goldfish2

Interestingly enough, though perhaps not surprisingly, people on social media share news stories rather than read them. At first glance this seems rather perplexing: after all, why would you tweet or re-tweet or like or share a news item before actually reading and understanding it?

Arnaud Legout, co-author of a recent study out of Columbia University and the French National Institute (Inria), tells us that “People form an opinion based on a summary, or summary of summaries, without making the effort to go deeper.” More disconcertingly, he adds, “Our results show that sharing content and actually reading it are poorly correlated.”

Please take 8 seconds or more to mull over this last statement again:

Our results show that sharing content and actually reading it are poorly correlated.

Without doubt our new technological platforms and social media have upended traditional journalism. But, in light of this unnerving finding I have to wonder if this means the eventual and complete collapse of deep analytical, investigative journalism and the replacement of thoughtful reflection with “NationalEnquirerThink”.

Perhaps I’m reading too much into the findings, but it does seem that it is more important for social media users to bond with and seek affirmation from their followers than it is to be personally informed.

With the average human attention span now down to 8 seconds, our literary and contemplative future seems to belong safely in the fins of our cousin, the goldfish (attention span: 9 seconds).

Learn more about Arnaud Legout’s disturbing study here.

Image: Common Goldfish. Courtesy: Wikipedia. Public Domain.

The Accelerated Acceleration

Dark_Energy

Until the mid-1990s, accepted scientific understanding held that the cosmos was expanding at a steady or slowing rate. Scientists had accepted the expansion itself since 1929, when Edwin Hubble’s celestial observations showed that distant galaxies were all apparently moving away from us.

But, in 1998 two independent groups of cosmologists made a startling finding. The universe was not only expanding, its expansion was accelerating. Recent studies show that this acceleration in the fabric of spacetime is actually faster than first theorized and observed.

And, nobody knows why. This expansion, indeed the accelerating expansion, remains one of our current great scientific mysteries.

Cosmologists, astronomers and theoreticians of all stripes have proposed no shortage of possible explanations. But, there is still scant observational evidence to support any of the leading theories. The most popular revolves around the peculiar idea of dark energy.

From Scientific American:

Our universe is flying apart, with galaxies moving away from each other faster each moment than they were the moment before. Scientists have known about this acceleration since the late 1990s, but whatever is causing it—dubbed dark energy—remains a mystery. Now the latest measurement of how fast the cosmos is growing thickens the plot further: The universe appears to be ballooning more quickly than it should be, even after accounting for the accelerating expansion caused by dark energy.

Scientists came to this conclusion after comparing their new measurement of the cosmic expansion rate, called the Hubble constant, to predictions of what the Hubble constant should be based on evidence from the early universe. The puzzling conflict—which was hinted at in earlier data and confirmed in the new calculation—means that either one or both of the measurements are flawed, or that dark energy or some other aspect of nature acts differently than we think.

“The bottom line is that the universe looks like it’s expanding about eight percent faster than you would have expected based on how it looked in its youth and how we expect it to evolve,” says study leader Adam Riess of the Space Telescope Science Institute in Baltimore, Md. “We have to take this pretty darn seriously.” He and his colleagues described their findings, based on observations from the Hubble Space Telescope, in a paper submitted last week to the Astrophysical Journal and posted on the preprint server arXiv.

One of the most exciting possibilities is that dark energy is even stranger than the leading theory suggests. Most observations support the idea that dark energy behaves like a “cosmological constant,” a term Albert Einstein inserted into his equations of general relativity and later removed. This kind of dark energy would arise from empty space, which, according to quantum mechanics, is not empty at all, but rather filled with pairs of “virtual” particles and antiparticles that constantly pop in and out of existence. These virtual particles would carry energy, which in turn might exert a kind of negative gravity that pushes everything in the universe outward.
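
The scale of the mismatch Riess describes is easy to reproduce with back-of-the-envelope arithmetic. The sketch below uses round-number values of the Hubble constant that are my own assumption (roughly 73 km/s/Mpc from local distance-ladder measurements versus roughly 68 km/s/Mpc inferred from the early universe), not figures quoted in the article:

```python
# Hypothetical round-number values of the Hubble constant (km/s/Mpc);
# assumed here for illustration, not quoted from the article.
H0_local = 73.2   # late-universe value, from the Cepheid/supernova distance ladder
H0_early = 67.7   # early-universe value, extrapolated from the cosmic microwave background

# Fractional excess of the measured expansion rate over the prediction.
discrepancy = (H0_local - H0_early) / H0_early
print(f"The universe appears to expand {discrepancy:.1%} faster than predicted")
```

With these assumed inputs the excess works out to about 8%, which is the ballpark of Riess’s “about eight percent” remark.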

Read the entire story here.

Image: The universe’s accelerated expansion. Courtesy: NASA and ESA.

Towards an Understanding of Consciousness

Robert-Fudd-Consciousness-17C

The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite the intricate philosophical essays of John Locke in 1690 that laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But, ideas such as these inch us towards a deeper understanding.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.

Are You Monotasking or Just Paying Attention?

We have indeed reached the era of peak multi-tasking. It’s time to select a different corporate meme.

Study after recent study shows that multi-tasking is an illusion — we can’t perform two or more cognitive tasks in parallel. Rather, we timeshare: switching our attention from one task to another sequentially. These studies also show that dividing our attention in this way tends to have a deleterious effect on all of the tasks. I say cognitive tasks because it’s rather obvious that we can all perform some tasks at the same time: walk and chew gum (or thumb a smartphone); drive and sing; shower and think; read and eat. But all of these combinations require that one of the tasks be mostly autonomic — that is, performed without conscious effort.

Yet more social scientists have determined that multi-tasking is a fraud — perhaps perpetuated by corporate industrial engineers convinced that they can wring more hours of work from you.

What are we to do now, having learned that our super-efficient world of juggling numerous tasks at the “same time” is nothing but a mirage?

Well, observers of the fragile human condition have not rested. This time social scientists have discovered an amazing human talent, and they’ve coined a mesmerizing new term for it: monotasking. In some circles it’s called uni-tasking or single-tasking.

When I was growing up this was called “paying attention”.

But, this being the era of self-help-life-experience-consulting gone mad and sub-minute attention spans (fueled by multi-tasking) we can now all eagerly await the rise of an entirely new industry dedicated to this wonderful monotasking breakthrough. Expect a whole host of monotasking books, buzzworthy news articles, daytime TV shows with monotasking tips and personal coaching experts at TED events armed with “look what monotasking can do for you” powerpoint decks.

Personally, I will quietly retreat, and return to old-school staying focused, and remind my kids to do the same.

From NYT:

Stop what you’re doing.

Well, keep reading. Just stop everything else that you’re doing.

Mute your music. Turn off your television. Put down your sandwich and ignore that text message. While you’re at it, put your phone away entirely. (Unless you’re reading this on your phone. In which case, don’t. But the other rules still apply.)

Just read.

You are now monotasking.

Maybe this doesn’t feel like a big deal. Doing one thing at a time isn’t a new idea.

Indeed, multitasking, that bulwark of anemic résumés everywhere, has come under fire in recent years. A 2014 study in the Journal of Experimental Psychology found that interruptions as brief as two to three seconds — which is to say, less than the amount of time it would take you to toggle from this article to your email and back again — were enough to double the number of errors participants made in an assigned task.

Earlier research out of Stanford revealed that self-identified “high media multitaskers” are actually more easily distracted than those who limit their time toggling.

So, in layman’s terms, by doing more you’re getting less done.

But monotasking, also referred to as single-tasking or unitasking, isn’t just about getting things done.

Not the same as mindfulness, which focuses on emotional awareness, monotasking is a 21st-century term for what your high school English teacher probably just called “paying attention.”

“It’s a digital literacy skill,” said Manoush Zomorodi, the host and managing editor of WNYC Studios’ “Note to Self” podcast, which recently offered a weeklong interactive series called Infomagical, addressing the effects of information overload. “Our gadgets and all the things we look at on them are designed to not let us single-task. We weren’t talking about this before because we simply weren’t as distracted.”

Ms. Zomorodi prefers the term “single-tasking”: “ ‘Monotasking’ seemed boring to me. It sounds like ‘monotonous.’ ”

Kelly McGonigal, a psychologist, lecturer at Stanford and the author of “The Willpower Instinct,” believes that monotasking is “something that needs to be practiced.” She said: “It’s an important ability and a form of self-awareness as opposed to a cognitive limitation.”

Read the entire article here.

Image courtesy of Google Search.

The Collapsing Wave Function

Schrodinger-equation

Once in a while I have to delve into the esoteric world of quantum mechanics. So, you will have to forgive me.

Since it was formalized in the mid-1920s, QM has been extremely successful at describing the behavior of systems at the atomic scale. Two giants of the field — Niels Bohr and Werner Heisenberg — laid out its now-standard interpretation in 1927. Since then it has become known as the Copenhagen Interpretation, and QM has been widely and accurately used to predict and describe the workings of elementary particles and the forces between them.

Yet recent theoretical stirrings in the field threaten to turn this widely held and accepted framework on its head. The Copenhagen Interpretation holds that particles do not have definitive locations until they are observed. Rather, their positions and movements are defined by a wave function that describes a spectrum of probabilities, but no certainties.

Rather understandably, this probabilistic description of our microscopic world tends to unnerve those who seek a more solid view of what we actually observe. Enter Bohmian mechanics or, more correctly, the de Broglie–Bohm theory of quantum mechanics. An increasing number of present-day researchers and theorists are revisiting this theory, which may yet hold some promise.

From Wired:

Of the many counterintuitive features of quantum mechanics, perhaps the most challenging to our notions of common sense is that particles do not have locations until they are observed. This is exactly what the standard view of quantum mechanics, often called the Copenhagen interpretation, asks us to believe.

But there’s another view—one that’s been around for almost a century—in which particles really do have precise positions at all times. This alternative view, known as pilot-wave theory or Bohmian mechanics, never became as popular as the Copenhagen view, in part because Bohmian mechanics implies that the world must be strange in other ways. In particular, a 1992 study claimed to crystalize certain bizarre consequences of Bohmian mechanics and in doing so deal it a fatal conceptual blow. The authors of that paper concluded that a particle following the laws of Bohmian mechanics would end up taking a trajectory that was so unphysical—even by the warped standards of quantum theory—that they described it as “surreal.”

Nearly a quarter-century later, a group of scientists has carried out an experiment in a Toronto laboratory that aims to test this idea. And if their results, first reported earlier this year, hold up to scrutiny, the Bohmian view of quantum mechanics—less fuzzy but in some ways more strange than the traditional view—may be poised for a comeback.

As with the Copenhagen view, there’s a wave function governed by the Schrödinger equation. In addition, every particle has an actual, definite location, even when it’s not being observed. Changes in the positions of the particles are given by another equation, known as the “pilot wave” equation (or “guiding equation”). The theory is fully deterministic; if you know the initial state of a system, and you’ve got the wave function, you can calculate where each particle will end up.
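
For reference, the two equations the excerpt describes can be written out; this is the standard textbook form for a single spinless particle (my notation, not the article’s). The first is the familiar time-dependent Schrödinger equation governing the wave function ψ; the second is the guiding equation that moves the particle’s actual position Q:

```latex
% Time-dependent Schrödinger equation for the wave function \psi
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi

% Pilot-wave (guiding) equation for the particle's actual position Q
\frac{d\mathbf{Q}}{dt}
  = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)
    \Bigg|_{\mathbf{x}=\mathbf{Q}(t)}
```

Given ψ at t = 0 and the initial position Q(0), both equations evolve deterministically — which is precisely the determinism the excerpt mentions.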

That may sound like a throwback to classical mechanics, but there’s a crucial difference. Classical mechanics is purely “local”—stuff can affect other stuff only if it is adjacent to it (or via the influence of some kind of field, like an electric field, which can send impulses no faster than the speed of light). Quantum mechanics, in contrast, is inherently nonlocal. The best-known example of a nonlocal effect—one that Einstein himself considered, back in the 1930s—is when a pair of particles are connected in such a way that a measurement of one particle appears to affect the state of another, distant particle. The idea was ridiculed by Einstein as “spooky action at a distance.” But hundreds of experiments, beginning in the 1980s, have confirmed that this spooky action is a very real characteristic of our universe.

Read the entire article here.

Image: Schrödinger’s time-dependent equation. Courtesy: Wikipedia.

Juno on the 4th of July

Jupiter and Ganymede

Perhaps not coincidentally, NASA’s latest foray into the great beyond reached a key milestone today. The Juno spacecraft entered orbit around the gas giant Jupiter on the 4th of July, 2016.

NASA is still awaiting the start of all the cool science (and image capture). So, in the meantime I’m posting a gorgeous picture of Jupiter taken by the Hubble Space Telescope.

Image: Jupiter and Ganymede, taken April 9, 2007. Courtesy: NASA, ESA, and E. Karkoschka (University of Arizona).

Achieving Failure

Our society values success.

Our work environments value triumphing over the competition. We look to our investments to beat the market. We support our favorite teams, but adore them when they trounce their rivals. Our schools and colleges (mostly) help educate our children, but do so in a way that rewards success — good grades, good test scores and good behavior (as in, same as everyone else). We continually reward our kids for success on a task, at school, with a team.

Yet, all of us know, in our hearts and the back of our minds, that the most important lessons and trials stem from failure — not success. From failure we learn to persevere, we learn to change and adapt, we learn to overcome. From failure we learn to avoid, or tackle obstacles head on; we learn to reassess and reevaluate. We evolve from our failures.

So this raises the question: why are so many of our processes and systems geared solely to rewarding and reinforcing success?

From NPR:

Is failure a positive opportunity to learn and grow, or is it a negative experience that hinders success? How parents answer that question has a big influence on how much children think they can improve their intelligence through hard work, a study says.

“Parents are a really critical force in child development when you think about how motivation and mindsets develop,” says Kyla Haimovitz, a professor of psychology at Stanford University. She coauthored the study, published in Psychological Science with colleague Carol Dweck, who pioneered research on mindsets. “Parents have this powerful effect really early on and throughout childhood to send messages about what is failure, how to respond to it.”

Although there’s been a lot of research on how these forces play out, relatively little looks at what parents can do to motivate their kids in school, Haimovitz says. This study begins filling that gap.

“There is a fair amount of evidence showing that when children view their abilities as more malleable and something they can change over time, then they deal with obstacles in a more constructive way,” says Gail Heyman, a professor of psychology at the University of California at San Diego who was not involved in this study.

But communicating that message to children is not simple.

“Parents need to represent this to their kids in the ways they react about their kids’ failures and setbacks,” Haimovitz says. “We need to really think about what’s visible to the other person, what message I’m sending in terms of my words and my deeds.”

In other words, if a child comes home with a D on a math test, how a parent responds will influence how the child perceives their own ability to learn math. Even a well-intentioned, comforting response of “It’s OK, you’re still a great writer” may send the message that it’s time to give up on math rather than learn from the problems they got wrong, Haimovitz explains.

Read the entire story here.

What Keeps NASA Going?

Apollo 17 Commander Gene Cernan on lunar rover

Apollo astronaut Eugene Cernan is the last human to have set foot on a world other than Earth. It’s been 44 years since he last stepped off the moon. In fact, in 1972 he drove around in the lunar rover and found time to scribble his daughter’s initials in the dusty lunar surface. So, other than forays to the International Space Station (ISS) and trips to service the Hubble Space Telescope (HST), NASA has kept humans firmly rooted to the homeland.

Of course, in the intervening decades the space agency has not rested on its laurels. NASA has sent probes and robots all over the Solar System and beyond: Voyager to the gas giants and on to interstellar space; Dawn to visit asteroids; Rosetta (in concert with the European Space Agency) to visit a comet; SOHO and its countless cousins to keep an eye on our home star; Galileo and Pioneer to Jupiter; countless spacecraft, including the Curiosity rover, to Mars; Messenger to map Mercury; Magellan to probe the clouds of Venus; Cassini to survey Saturn and its fascinating moons; and, of course, New Horizons to Pluto and beyond.

Our mechanical human proxies reach out a little farther each day to learn more about our universe and our place in it. Exploration and discovery is part of our human DNA; it’s what we do. NASA is our vehicle. So, it’s good to see what NASA is planning. The agency just funded eight advanced-technology programs that officials believe may help transform space exploration. The grants are part of the NASA Innovative Advanced Concepts (NIAC) program. The most interesting, perhaps, are a program to evaluate inducing hibernation in Mars-bound astronauts, and an assessment of directed energy propulsion for interstellar travel.

Our science and technology becomes more and more like science fiction each day.

Read more about NIAC programs here.

Image 1: Apollo 17 mission commander Eugene A. Cernan makes a short checkout of the Lunar Roving Vehicle during the early part of the first Apollo 17 extravehicular activity at the Taurus-Littrow landing site. Courtesy: NASA.

Image 2: Hubble Spies a Spiral Snowflake, galaxy NGC 6814. Courtesy: NASA/ESA Hubble Space Telescope.

Your Brain on LSD

Brain-on-LSD

For the first time, researchers have peered inside the brain to study the real-time effects of the psychedelic drug LSD (lysergic acid diethylamide). Yes, neuroscientists scanned the brains of subjects who volunteered to take a trip inside an MRI scanner, all in the name of science.

While the researchers did not seem to document the detailed subjective experiences of their volunteers, the findings suggest that they were experiencing intense dreamlike visions, effectively “seeing with their eyes shut”. Under the influence of LSD many areas of the brain that are usually compartmentalized showed far greater interconnection and intense activity.

LSD was first synthesized in 1938. Its profound psychological properties were studied from the mid-1940s to the early sixties. The substance was later banned — worldwide — after its adoption as a recreational drug.

This new study was conducted by researchers from Imperial College London and The Beckley Foundation, which researches psychoactive substances.

From Guardian:

The profound impact of LSD on the brain has been laid bare by the first modern scans of people high on the drug.

The images, taken from volunteers who agreed to take a trip in the name of science, have given researchers an unprecedented insight into the neural basis for effects produced by one of the most powerful drugs ever created.

A dose of the psychedelic substance – injected rather than dropped – unleashed a wave of changes that altered activity and connectivity across the brain. This has led scientists to new theories of visual hallucinations and the sense of oneness with the universe some users report.

The brain scans revealed that trippers experienced images through information drawn from many parts of their brains, and not just the visual cortex at the back of the head that normally processes visual information. Under the drug, regions once segregated spoke to one another.

Further images showed that other brain regions that usually form a network became more separated in a change that accompanied users’ feelings of oneness with the world, a loss of personal identity called “ego dissolution”.

David Nutt, the government’s former drugs advisor, professor of neuropsychopharmacology at Imperial College London, and senior researcher on the study, said neuroscientists had waited 50 years for this moment. “This is to neuroscience what the Higgs boson was to particle physics,” he said. “We didn’t know how these profound effects were produced. It was too difficult to do. Scientists were either scared or couldn’t be bothered to overcome the enormous hurdles to get this done.”

Read the entire story here.

Image: Different sections of the brain, either on placebo, or under the influence of LSD (lots of orange). Courtesy: Imperial College/Beckley Foundation.

Practice May Make You Perfect, But Not Creative

Practice will help you improve in a field with well-defined and well-developed tasks, processes and rules. This includes areas like sports and musicianship. Though, keep in mind that it may indeed take some accident of genetics to be really good at one of these disciplines in the first place.

But don’t expect practice to make you better in all areas of life, particularly in creative endeavors. Creativity stems from original thought, not replicable behavior. Scott Kaufman, director of the Imagination Institute at the University of Pennsylvania, reminds us of this in a recent book review. The authors of Peak: Secrets from the New Science of Expertise, psychologist Anders Ericsson and journalist Robert Pool, review a swath of research on human learning and skill acquisition and conclude that deliberate, well-structured practice can help anyone master new skills. I think we can all agree with this conclusion.

But like Kaufman I believe that many creative “skills” lie in an area of human endeavor that is firmly beyond the assistance of practice. Most certainly practice will help an artist hone and improve her brushstrokes; but practice alone will not bring forth her masterpiece. So, here is a brief summary of 12 key elements that Kaufman distilled from over 50 years of research studies into creativity:

Excerpts from Creativity Is Much More Than 10,000 Hours of Deliberate Practice by Scott Kaufman:

  1. Creativity is often blind. If only creativity was all about deliberate practice… in reality, it’s impossible for creators to know completely whether their new idea or product will be well received.
  2. Creative people often have messy processes. While expertise is characterized by consistency and reliability, creativity is characterized by many false starts and lots and lots of trial-and-error.
  3. Creators rarely receive helpful feedback. When creators put something novel out into the world, the reactions are typically either acclaim or rejection.
  4. The “10-Year Rule” is not a rule. The idea that it takes 10 years to become a world-class expert in any domain is not a rule. [This is the so-called Ericsson rule from his original paper on deliberate practice amongst musicians.]
  5. Talent is relevant to creative accomplishment. If we define talent as simply the rate at which a person acquires expertise, then talent undeniably matters for creativity.
  6. Personality is relevant. Not only does the speed of expertise acquisition matter, but so do a whole host of other traits. People differ from one another in a multitude of ways… At the very least, research has shown that creative people do tend to have a greater inclination toward nonconformity, unconventionality, independence, openness to experience, ego strength, risk taking, and even mild forms of psychopathology.
  7. Genes are relevant. [M]odern behavioral genetics has discovered that virtually every single psychological trait — including the inclination and willingness to practice — is influenced by innate genetic endowment.
  8. Environmental experiences also matter. [R]esearchers have found that many other environmental experiences substantially affect creativity– including socioeconomic origins, and the sociocultural, political, and economic context in which one is raised.
  9. Creative people have broad interests. While the deliberate practice approach tends to focus on highly specialized training… creative experts tend to have broader interests and greater versatility compared to their less creative expert colleagues.
  10. Too much expertise can be detrimental to creative greatness. The deliberate practice approach assumes that performance is a linear function of practice. Some knowledge is good, but too much knowledge can impair flexibility.
  11. Outsiders often have a creative advantage. If creativity were all about deliberate practice, then outsiders who lack the requisite expertise shouldn’t be very creative. But many highly innovative individuals were outsiders to the field in which they contributed. Many marginalized people throughout history — including immigrants — came up with highly creative ideas not in spite of their experiences as an outsider, but because of their experiences as an outsider.
  12. Sometimes the creator needs to create a new path for others to deliberately practice. Creative people are not just good at solving problems, however. They are also good at finding problems.

In my view the most salient of Kaufman’s dozen ingredients for creativity are #11 and #12, and I can personally attest to their importance: fresh ideas are more likely to come from outsiders, and creativity in one domain often stems from experiences in another, unrelated realm.

Read Kaufman’s enlightening article in full here.

Dishonesty and Intelligence

Another day, another survey. This time it’s one that links honesty and intelligence. Apparently, the more intelligent you are — as measured by a quick intelligence test — the less likely you are to lie. Fascinatingly, the survey also shows that those in the most intelligent subgroup who do lie tell smaller whoppers, while those in the less intelligent subgroup tell bigger lies, for a bigger payoff.

From Washington Post:

Last summer, a couple of researchers ran a funny experiment about honesty. They went to an Israeli shopping mall and recruited people, one-by-one, into a private booth. Alone inside the booth, each subject rolled a six-sided die. Then they stepped out and reported the number that came up.

There was an incentive to lie. The higher the number, the more money people received. If they rolled a one, they got a bonus of about $2.50. If they rolled a two, they got a bonus of $5, and so on. If they rolled a six, the bonus was about $15. (Everyone also received $5 just for participating.)

Before I reveal the results, think about what you would do in that situation. Someone comes up to you at the mall and offers you free money to roll a die. If you wanted to make a few extra bucks, you could lie about what you rolled. Nobody would know, and nobody would be harmed.

Imagine you went into that booth and rolled a one. What would you do? Would you be dishonest? Would you say you rolled a six, just to get the largest payout?
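The temptation is easy to quantify. Here is a quick simulation of the expected bonus under a few reporting strategies, assuming the roughly $2.50-per-pip payout scheme described above (the strategy names are my own, not the researchers’):

```python
import random

BONUS_PER_PIP = 2.50  # approximate payout step described in the study

def expected_bonus(strategy, trials=100_000, seed=1):
    """Average bonus under a reporting strategy mapping true roll -> report."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        roll = rng.randint(1, 6)
        total += BONUS_PER_PIP * strategy(roll)
    return total / trials

honest = expected_bonus(lambda roll: roll)                  # report the true roll
small_lie = expected_bonus(lambda roll: min(roll + 1, 6))   # modest inflation
always_six = expected_bonus(lambda roll: 6)                 # lie for the maximum

print(f"honest: ${honest:.2f}  small lie: ${small_lie:.2f}  "
      f"always six: ${always_six:.2f}")
```

An honest reporter averages about $8.75 in bonus money; claiming a six every time guarantees $15. The interesting result is that the smartest participants left most of that gap on the table.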

The researchers, Bradley Ruffle of Wilfrid Laurier University and Yossef Tobol of the Jerusalem College of Technology, wanted to know what kinds of people would lie in this situation. So they asked everyone about their backgrounds, whether they considered themselves honest, whether they thought honesty was important. They asked whether people were employed, how much money they earned, and whether they were religious. They also gave people a quick intelligence test.

Out of all those attributes, brainpower stood out. Smarter people were less likely to lie about the number they rolled.

It didn’t matter whether they claimed they were honest or not; it didn’t matter whether they were religious, whether they were male or female, or whether they lived in a city. Money didn’t seem to be a factor either. Even after controlling for incomes, the researchers found that the most honest people were the ones who scored highest on the intelligence test.

Read the entire article here.

The Case For Planet Nine

Planet_nine_artistic-impression

First, let me say that Pluto should never have been downgraded to the status of “dwarf planet”. The recent (and ongoing) discoveries by NASA’s New Horizons probe show Pluto’s full, planetary glory: kilometer high mountains, flowing glaciers, atmospheric haze, organic compounds, complex and colorful landforms. So, in my mind Pluto still remains as the ninth planet in our beautiful solar system.

However, many astronomers have moved on and are getting excited over the possibility of a new Planet Nine. The evidence for its existence is mounting and comes mostly from models that infer the presence of a massive object far beyond Pluto, one that influences the orbits of distant Kuiper Belt objects and even some of the planets.

From Scientific American:

The hunt is on to find “Planet Nine”—a large undiscovered world, perhaps 10 times as massive as Earth and four times its size—that scientists think could be lurking in the outer solar system. After Konstantin Batygin and Mike Brown, two planetary scientists from the California Institute of Technology, presented evidence for its existence this January, other teams have searched for further proof by analyzing archived images and proposing new observations to find it with the world’s largest telescopes.

Just this month, evidence from the Cassini spacecraft orbiting Saturn helped close in on the missing planet. Many experts suspect that within as little as a year someone will spot the unseen world, which would be a monumental discovery that changes the way we view our solar system and our place in the cosmos. “Evidence is mounting that something unusual is out there—there’s a story that’s hard to explain with just the standard picture,” says David Gerdes, a cosmologist at the University of Michigan who never expected to find himself working on Planet Nine. He is just one of many scientists who leapt at the chance to prove—or disprove—the team’s careful calculations.

Batygin and Brown made the case for Planet Nine’s existence based on its gravitational effect on several Kuiper Belt objects—icy bodies that circle the sun beyond Neptune’s orbit. Theoretically, though, its gravity should also tug slightly on the planets. With this in mind, Agnès Fienga at the Côte d’Azur Observatory in France and her colleagues checked whether a theoretical model (one that they have been perfecting for over a decade) with the new addition of Planet Nine could better explain slight perturbations seen in Saturn’s orbit as observed by Cassini. Without it, the other seven planets in the solar system, 200 asteroids and five of the most massive Kuiper Belt objects cannot perfectly account for it. The missing puzzle piece might just be a ninth planet.

So Fienga and her colleagues compared the updated model, which placed Planet Nine at various points in its hypothetical orbit, with the data. They found a sweet spot—with Planet Nine 600 astronomical units (about 90 billion kilometers) away toward the constellation Cetus—that can explain Saturn’s orbit quite well. Although Fienga is not yet convinced that she has found the culprit for the planet’s odd movements, most outside experts are blown away. “It’s a brilliant analysis,” says Greg Laughlin, an astronomer at Lick Observatory, who was not involved in the study. “It’s completely amazing that they were able to do that so quickly.” Gerdes agrees: “That’s a beautiful paper.”

The good news does not end there. If Planet Nine is located toward the constellation Cetus, then it could be picked up by the Dark Energy Survey, a Southern Hemisphere observation project designed to probe the acceleration of the universe. “It turns out fortuitously that the favored region from Cassini’s data is smack dab in the middle of our survey footprint,” says Gerdes, who is working on the cosmology survey. “We could not have designed our survey any better.” Although the survey was not planned to search for solar system objects, Gerdes has discovered some (including one of the icy objects that led Batygin and Brown to conclude Planet Nine exists in the first place).
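As an aside, the distances involved are easy to sanity-check. One astronomical unit (the mean Earth–Sun distance) is about 149.6 million kilometers, so the model’s 600 AU sweet spot works out as follows (Pluto’s mean distance is included for scale):

```python
AU_KM = 1.496e8              # kilometers per astronomical unit

planet_nine_au = 600         # the sweet spot from Fienga's updated model
pluto_mean_au = 39.5         # Pluto's mean distance from the Sun, for scale

planet_nine_km = planet_nine_au * AU_KM   # roughly 90 billion kilometers
print(f"Planet Nine: {planet_nine_km:.3g} km, "
      f"about {planet_nine_au / pluto_mean_au:.0f} times Pluto's mean distance")
```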

Read the entire article here.

Image: Artist’s impression of Planet Nine as an ice giant eclipsing the central Milky Way, with a star-like Sun in the distance. Neptune’s orbit is shown as a small ellipse around the Sun. Courtesy: Tomruen, nagualdesign / Wikipedia. Creative Commons.

Your Ticket to the Past: Tipler Cylinder

So, you want to travel back in time? Here’s the solution. But first, forget the tricked-out DeLorean and H.G. Wells’ Victorian time machine. What you need is a Tipler Cylinder. Let’s begin with the ingredients, should you be inclined to construct your very own cylinder.

  1. Take a mass of about 10 times that of the Sun.
  2. Compress and fashion the mass into an infinitely long, spaghetti-like cylinder.
  3. Spin the cylinder, along its longitudinal axis, at least up to several billion revolutions per minute.

Once you’ve done this, all you need is a craft able to spiral around the cylinder — without getting crushed by gravity — to make use of its frame-dragging of spacetime. Voila! Do this correctly, and you might well emerge billions of years from when you began. But you’ll be in the past, of course.

Read more about the Tipler Cylinder here.

World Happiness Ranking

national-happiness-2015

Yet again, nations covering the northern latitudes outrank all others on this year’s global happiness scale. Not surprisingly, Denmark topped the happiness list in 2015, having secured the top spot since 2012, except for 2014 when it was pipped by Switzerland. The top 5 for 2015 are: Denmark, Iceland, Norway, Finland, and Canada.

The report finds that the happiest nations tend to be those with lower income disparity and strong national health and social safety programs. Ironically, richer nations, including the United States, tend to rank lower due to rising inequalities in income, wealth and health.

That said, the United States moved to No. 13, up two places from No. 15 the previous year. This is rather perplexing considering all the anger that we’re hearing about during the relentless 2016 presidential election campaign.

At the bottom of the list of 157 nations is Burundi, recently torn by a violent political upheaval. The bottom five nations for 2015 are: Benin, Afghanistan, Togo, Syria and Burundi; all have recently suffered from war or disease or both.

The happiness score for each nation is based on multiple national surveys covering a number of criteria, which are aggregated into six key measures: GDP per capita; social support; healthy life expectancy; freedom to make life choices; generosity; and perceptions of corruption.

The World Happiness Report was prepared by the Sustainable Development Solutions Network, an international group of social scientists and public health experts under the auspices of the United Nations.

Read more on the report here.

Image: Top 30 nations ranked for happiness, screenshot. Courtesy: World Happiness Report, The Distribution of World Happiness, by John F. Helliwell, Canadian Institute for Advanced Research and Vancouver School of Economics, University of British Columbia; Haifang Huang, Department of Economics, University of Alberta; Shun Wang, KDI School of Public Policy and Management, South Korea.

Bad Behavior Goes Viral

Social psychologists often point out that human behavior is contagious. Laugh and others will join in. Yawn and all those around you will yawn as well. In a bad mood at home? Well, soon, chances are that the rest of your family will join you on a downer as well.

And the contagion doesn’t end there, especially with negative behaviors; study after study shows the viral spread of suicide, product tampering, rioting, looting, speeding and even aircraft hijacking. So, too, mass shootings. Since the United States is a leading venue for mass shootings, there is now even a term for a shooting that happens soon after another — an echo shooting.
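Epidemiologists often model this kind of behavioral contagion as a self-exciting point process, in which each event temporarily raises the rate of further events. Below is a minimal sketch using Ogata’s thinning algorithm; the baseline rate, excitation jump and 13-day decay constant are illustrative toy numbers of my own, not fitted parameters from Towers’s study:

```python
import math
import random

def simulate_hawkes(mu, alpha, decay, horizon, seed=7):
    """Simulate a self-exciting (Hawkes) process by Ogata's thinning method.

    mu      -- baseline event rate (events per day)
    alpha   -- jump in intensity contributed by each event
    decay   -- exponential decay rate of that excitation (per day)
    horizon -- length of the simulated period (days)
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    while True:
        # Intensity just after time t: it can only decay until the next
        # event, so it is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-decay * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-decay * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:
            events.append(t)   # accepted: this event excites future ones

# Toy run: baseline of one event per 20 days; each event adds excitation
# that fades on a 13-day timescale (branching ratio alpha/decay = 0.65).
events = simulate_hawkes(mu=0.05, alpha=0.05, decay=1 / 13, horizon=1000)
echoes = sum(1 for i, t in enumerate(events)
             if any(0 < t - s <= 13 for s in events[:i]))
print(f"{len(events)} events, {echoes} within 13 days of an earlier one")
```

The point of the model: clusters arise even though nothing about any individual event changes, which is why researchers can talk about a post-shooting window of elevated risk.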

From the Washington Post:

A man had just gone on a shooting rampage in Kalamazoo, Mich., allegedly killing six people while driving for Uber. Sherry Towers, an Arizona State University physicist who studies how viruses spread, worried while watching the news coverage.

Last year, Towers published a study using mathematical models to examine whether mass shootings, like viruses, are contagious. She identified a 13-day period after high-profile mass shootings when the chance of another spikes. Her findings are confirmed more frequently than she would like.

Five days after Kalamazoo, a man in Kansas shot 17 people, killing three by firing from his car. To Towers, that next shooting seemed almost inevitable.

“I absolutely dread watching this happen,” she said.

As the nation endures an ongoing stream of mass shootings, criminologists, police and even the FBI are turning to virus epidemiology and behavioral psychology to understand what sets off mass shooters and figure out whether, as with the flu, the spread can be interrupted.

“These things are clustering in time, and one is causing the next one to be more likely,” said Gary Slutkin, a physician and epidemiologist at the University of Illinois at Chicago who runs Cure Violence, a group that treats crime as a disease. “That’s definitional of a contagious disease. Flu is a risk factor for more flu. Mass shootings are a risk factor for mass shootings.”

The idea is not without skeptics. James Alan Fox, a Northeastern University professor who studies mass shootings, said: “Some bunching just happens. Yes, there is some mimicking going on, but the vast majority of mass killers don’t need someone else to give them the idea.”

Confirming, disputing or further exploring the idea scientifically is hampered by the federal funding ban on gun violence research. Towers and her colleagues did their study on their own time. And there’s not even a common database or definition of mass shootings.

The Congressional Research Service uses the term “public mass shootings” to describe the killing of four or more people in “relatively public places” by a perpetrator selecting victims “somewhat indiscriminately.”

In the 1980s, the violence occurred in post offices. In the 1990s, schools. Now it is mutating into new forms, such as the terrorist attack in San Bernardino, Calif., that initially appeared to be a workplace shooting by a disgruntled employee.

Researchers say the contagion is potentially more complicated than any virus. There is the short-term effect of a high-profile mass shooting, which can lead quickly to another incident. Towers found that such echo shootings account for up to 30 percent of all rampages.

But there appear to be longer incubation periods, too. Killers often find inspiration in past mass shootings, praising what their predecessors accomplished, innovating on their methods and seeking to surpass them in casualties and notoriety.

Read the entire article here.

The Global Peril of Narcissism

Google-search-demagogue

I suspect that prior to our gluttonous, always-on social media age, narcissists were very much a local phenomenon — probably much like European diseases remained mostly confined to the Old World prior to the advent of frequent shipping and air travel. Nowadays narcissistic traits such as self-absorption, image inflation and lack of empathy spread and amplify across the globe as impressionable tribes like, follow and emulate their narcissistic role models. As the virus of self-obsession spreads, it puts our increasingly global village at some peril, replacing empathy with indifference and altruism with self-promotion, and leading to the inevitable rise of charismatic demagogues.

Author and psychotherapist Pat MacDonald aptly describes the rise of narcissism in her recent paper Narcissism in the Modern World. Quite paradoxically, MacDonald finds that,

“Much of our distress comes from a sense of disconnection. We have a narcissistic society where self-promotion and individuality seem to be essential, yet in our hearts that’s not what we want. We want to be part of a community, we want to be supported when we’re struggling, we want a sense of belonging. Being extraordinary is not a necessary component to being loved.”

From the Guardian:

“They unconsciously deny an unstated and intolerably poor self-image through inflation. They turn themselves into glittering figures of immense grandeur surrounded by psychologically impenetrable walls. The goal of this self-deception is to be impervious to greatly feared external criticism and to their own rolling sea of doubts.” This is how Elan Golomb describes narcissistic personality disorder in her seminal book Trapped in the Mirror. She goes on to describe the central symptom of the disorder – the narcissist’s failure to achieve intimacy with anyone – as the result of them seeing other people like items in a vending machine, using them to service their own needs, never being able to acknowledge that others might have needs of their own, still less guess what they might be. “Full-bodied narcissistic personality disorder remains a fairly unusual diagnosis,” Pat MacDonald, author of the paper Narcissism in the Modern World, tells me. “Traditionally, it is very difficult to reverse narcissistic personality disorder. It would take a long time and a lot of work.”

What we talk about when we describe an explosion of modern narcissism is not the disorder but the rise in narcissistic traits. Examples are everywhere. Donald Trump epitomises the lack of empathy, the self-regard and, critically, the radical overestimation of his own talents and likability. Katie Hopkins personifies the perverse pride the narcissist takes in not caring for others. (“No,” she wrote in the Sun about the refugee crisis. “I don’t care. Show me pictures of coffins, show me bodies floating in water, play violins and show me skinny people looking sad. I still don’t care.”) Those are the loudest examples, blaring like sirens; there is a general hubbub of narcissism beneath, which is conveniently – for observation purposes, at least – broadcast on social media. Terrible tragedies, such as the attacks on Paris, are appropriated by people thousands of miles away and used as a backdrop to showcase their sensitivity. The death of David Bowie is mediated through its “relevance” to voluble strangers.

It has become routine for celebrities to broadcast banal information and fill Instagram with the “moments” that constitute their day, the tacit principle being that, once you are important enough, nothing is mundane. This delusion then spills out to the non-celebrity; recording mundane events becomes proof of your importance. The dramatic rise in cosmetic surgery is part of the same effect; the celebrity fixates on his or her appearance to meet the demands of fame. Then the vanity, being the only truly replicable trait, becomes the thing to emulate. Ordinary people start having treatments that only intense scrutiny would warrant – 2015 saw a 13% rise in procedures in the UK, with the rise in cosmetic dentistry particularly marked, because people don’t like their teeth in selfies. The solution – stop taking selfies – is apparently so 2014.

Read the entire story here.

Image courtesy of Google Search.

MondayMap: Internet Racism

map-internet-racism

Darkest blue and light blue respectively indicate much less and less racist areas than the national average. The darkest red indicates the most racist zones.

No surprise: the areas with the highest number of racists are in the South and the rural Northeastern United States. Head west of Texas and you’ll find fewer and fewer pockets of racists. Further, and perhaps not surprisingly, the greater the degree of n-word usage, the higher the rate of black mortality.

Sadly, this map is not of 18th- or 19th-century America; it’s from a recent study, published in April 2015 in Public Library of Science (PLOS) ONE.

Now keep in mind that the map highlights racism by tracking pejorative search terms such as the n-word; it doesn’t count actual people, and it’s a geographic generalization. Nonetheless, it’s a stark reminder that we seem to be two nations divided by the mighty Mississippi River and we still have a very long way to go before we are all “westerners”.

From Washington Post:

Where do America’s most racist people live? “The rural Northeast and South,” suggests a new study just published in PLOS ONE.

The paper introduces a novel but makes-tons-of-sense-when-you-think-about-it method for measuring the incidence of racist attitudes: Google search data. The methodology comes from data scientist Seth Stephens-Davidowitz. He’s used it before to measure the effect of racist attitudes on Barack Obama’s electoral prospects.

“Google data, evidence suggests, are unlikely to suffer from major social censoring,” Stephens-Davidowitz wrote in a previous paper. “Google searchers are online and likely alone, both of which make it easier to express socially taboo thoughts. Individuals, indeed, note that they are unusually forthcoming with Google.” He also notes that the Google measure correlates strongly with other standard measures social science researchers have used to study racist attitudes.

This is important, because racism is a notoriously tricky thing to measure. Traditional survey methods don’t really work — if you flat-out ask someone if they’re racist, they will simply tell you no. That’s partly because most racism in society today operates at the subconscious level, or gets vented anonymously online.

For the PLOS ONE paper, researchers looked at searches containing the N-word. People search frequently for it, roughly as often as searches for “migraine(s),” “economist,” “sweater,” “Daily Show,” and “Lakers.” (The authors attempted to control for variants of the N-word not necessarily intended as pejoratives, excluding the “a” version of the word that analysis revealed was often used “in different contexts compared to searches of the term ending in ‘-er’.”)

Read the entire article here.

Image: Association between an Internet-Based Measure of Area Racism and Black Mortality. Courtesy of Washington Post / PLOS (Public Library of Science) ONE.

Searching for Signs of Life

Gliese 581 c

Surely there is intelligent life somewhere in the universe. Cosmologists estimate that the observable universe contains around 1,000,000,000,000,000,000,000,000 planets. And, they calculate that our Milky Way galaxy alone contains around 100 billion planets that are hospitable to life (as we currently know it).

These numbers boggle the mind and raise a question: how do we find evidence for life beyond our shores? The decades-long search for extraterrestrial intelligence (SETI) pioneered the use of radio telescope observations to look for alien signals from deep space. But the process has remained rather rudimentary and narrowly focused. The good news is that astronomers and astrobiologists now have a growing toolkit of techniques that allow for much more sophisticated detection and analysis of the broader signals of life — not just potential radio transmissions from an advanced alien culture.

From Quanta:

Huddled in a coffee shop one drizzly Seattle morning six years ago, the astrobiologist Shawn Domagal-Goldman stared blankly at his laptop screen, paralyzed. He had been running a simulation of an evolving planet, when suddenly oxygen started accumulating in the virtual planet’s atmosphere. Up the concentration ticked, from 0 to 5 to 10 percent.

“Is something wrong?” his wife asked.

“Yeah.”

The rise of oxygen was bad news for the search for extraterrestrial life.

After millennia of wondering whether we’re alone in the universe — one of “mankind’s most profound and probably earliest questions beyond, ‘What are you going to have for dinner?’” as the NASA astrobiologist Lynn Rothschild put it — the hunt for life on other planets is now ramping up in a serious way. Thousands of exoplanets, or planets orbiting stars other than the sun, have been discovered in the past decade. Among them are potential super-Earths, sub-Neptunes, hot Jupiters and worlds such as Kepler-452b, a possibly rocky, watery “Earth cousin” located 1,400 light-years from here. Starting in 2018 with the expected launch of NASA’s James Webb Space Telescope, astronomers will be able to peer across the light-years and scope out the atmospheres of the most promising exoplanets. They will look for the presence of “biosignature gases,” vapors that could only be produced by alien life.

They’ll do this by observing the thin ring of starlight around an exoplanet while it is positioned in front of its parent star. Gases in the exoplanet’s atmosphere will absorb certain frequencies of the starlight, leaving telltale dips in the spectrum.
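The size of those dips is simple geometry: an opaque disk of radius R_p crossing a star of radius R_s blocks a fraction (R_p/R_s)^2 of the light, and at wavelengths where an atmospheric gas absorbs, the planet’s opaque radius grows by a few scale heights. A sketch with hypothetical numbers for a rocky “Earth cousin” (the radii and scale height below are illustrative assumptions, not measured values for any real exoplanet):

```python
# Transmission spectroscopy in miniature: during a transit the planet blocks
# a fraction (Rp/Rs)^2 of the starlight.  At wavelengths where an atmospheric
# gas absorbs, the opaque radius grows by a few scale heights, so the dip in
# the spectrum is slightly deeper there.
R_SUN = 6.957e8       # meters
R_EARTH = 6.371e6     # meters

def transit_depth(r_planet, r_star):
    """Fraction of stellar flux blocked by an opaque disk of radius r_planet."""
    return (r_planet / r_star) ** 2

r_star = 1.0 * R_SUN              # Sun-like host star
r_planet = 1.6 * R_EARTH          # hypothetical rocky "Earth cousin"
scale_height = 50e3               # meters; illustrative atmospheric scale height
n_scale_heights = 5               # extra opaque height in an absorption band

continuum = transit_depth(r_planet, r_star)
in_band = transit_depth(r_planet + n_scale_heights * scale_height, r_star)
extra_ppm = (in_band - continuum) * 1e6

print(f"continuum depth: {continuum * 1e6:.0f} ppm; "
      f"extra depth in the absorption band: {extra_ppm:.1f} ppm")
```

With these numbers the atmospheric signal is on the order of ten parts per million of the star’s light, which is why the search needs an instrument as sensitive as the James Webb Space Telescope.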

As Domagal-Goldman, then a researcher at the University of Washington’s Virtual Planetary Laboratory (VPL), well knew, the gold standard in biosignature gases is oxygen. Not only is oxygen produced in abundance by Earth’s flora — and thus, possibly, other planets’ — but 50 years of conventional wisdom held that it could not be produced at detectable levels by geology or photochemistry alone, making it a forgery-proof signature of life. Oxygen filled the sky on Domagal-Goldman’s simulated world, however, not as a result of biological activity there, but because extreme solar radiation was stripping oxygen atoms off carbon dioxide molecules in the air faster than they could recombine. This biosignature could be forged after all.

The search for biosignature gases around faraway exoplanets “is an inherently messy problem,” said Victoria Meadows, an Australian powerhouse who heads VPL. In the years since Domagal-Goldman’s discovery, Meadows has charged her team of 75 with identifying the major “oxygen false positives” that can arise on exoplanets, as well as ways to distinguish these false alarms from true oxygenic signs of biological activity. Meadows still thinks oxygen is the best biosignature gas. But, she said, “if I’m going to look for this, I want to make sure that when I see it, I know what I’m seeing.”

Meanwhile, Sara Seager, a dogged hunter of “twin Earths” at the Massachusetts Institute of Technology who is widely credited with inventing the spectral technique for analyzing exoplanet atmospheres, is pushing research on biosignature gases in a different direction. Seager acknowledges that oxygen is promising, but she urges the astrobiology community to be less terra-centric in its view of how alien life might operate — to think beyond Earth’s geochemistry and the particular air we breathe. “My view is that we do not want to leave a single stone unturned; we need to consider everything,” she said.

As future telescopes widen the survey of Earth-like worlds, it’s only a matter of time before a potential biosignature gas is detected in a faraway sky. It will look like the discovery of all time: evidence that we are not alone. But how will we know for sure?

Read the entire article here.

Image: Artist’s Impression of Gliese 581 c, the first terrestrial extrasolar planet discovered within its star’s habitable zone. Courtesy: Hervé Piraud, Latitude0116, Xhienne. Creative Commons Attribution 2.5.

The Increasing Mortality of White Males

This is the type of story that you might not normally, and certainly should not, associate with the world’s richest country. In a reversal of a long-established trend, death rates are increasing for less-educated white males. The good news is that death rates continue to fall for other demographic and racial groups, especially Hispanics and African Americans. So, what is happening to white males?

From the NYT:

It’s disturbing and puzzling news: Death rates are rising for white, less-educated Americans. The economists Anne Case and Angus Deaton reported in December that rates have been climbing since 1999 for non-Hispanic whites age 45 to 54, with the largest increase occurring among the least educated. An analysis of death certificates by The New York Times found similar trends and showed that the rise may extend to white women.

Both studies attributed the higher death rates to increases in poisonings and chronic liver disease, which mainly reflect drug overdoses and alcohol abuse, and to suicides. In contrast, death rates fell overall for blacks and Hispanics.

Why are whites overdosing or drinking themselves to death at higher rates than African-Americans and Hispanics in similar circumstances? Some observers have suggested that higher rates of chronic opioid prescriptions could be involved, along with whites’ greater pessimism about their finances.

Yet I’d like to propose a different answer: what social scientists call reference group theory. The term “reference group” was pioneered by the social psychologist Herbert H. Hyman in 1942, and the theory was developed by the Columbia sociologist Robert K. Merton in the 1950s. It tells us that to comprehend how people think and behave, it’s important to understand the standards to which they compare themselves.

How is your life going? For most of us, the answer to that question means comparing our lives to the lives our parents were able to lead. As children and adolescents, we closely observed our parents. They were our first reference group.

And here is one solution to the death-rate conundrum: It’s likely that many non-college-educated whites are comparing themselves to a generation that had more opportunities than they have, whereas many blacks and Hispanics are comparing themselves to a generation that had fewer opportunities.

Read the entire article here.

A Trip to Titan

titan

NASA is advertising its upcoming space tourism trip to Saturn’s largest moon Titan with this gorgeous retro poster.

Just imagine rowing across Titan’s lakes and oceans, and watching Saturn set below the horizon. So, dump that planned cruise down the Danube and hike to your local travel agent before all the seats are gone. But before you purchase a return ticket, keep in mind the following:

Frigid and alien, yet similar to our own planet billions of years ago, Saturn’s largest moon, Titan, has a thick atmosphere, organic-rich chemistry and a surface shaped by rivers and lakes of liquid ethane and methane. Cold winds sculpt vast regions of hydrocarbon-rich dunes. There may even be cryovolcanoes of cold liquid water. NASA’s Cassini orbiter was designed to peer through Titan’s perpetual haze and unravel the mysteries of this planet-like moon.

Image: Titan poster. Courtesy of NASA/JPL.

Deconstructing Schizophrenia

Genetic and biomedical researchers have made yet another tremendous breakthrough from analyzing the human genome. This time a group of scientists from Harvard Medical School, Boston Children’s Hospital and the Broad Institute has identified key genetic markers and biological pathways that underlie schizophrenia.

In the US alone the psychiatric disorder affects around 2 million people. Symptoms of schizophrenia usually include hallucinations, delusional thinking and paranoia. While there are a number of drugs used to treat its symptoms, and psychotherapy to address milder forms, nothing as yet has been able to address its underlying cause(s). Hence the excitement.

From NYT:

Scientists reported on Wednesday that they had taken a significant step toward understanding the cause of schizophrenia, in a landmark study that provides the first rigorously tested insight into the biology behind any common psychiatric disorder.

More than two million Americans have a diagnosis of schizophrenia, which is characterized by delusional thinking and hallucinations. The drugs available to treat it blunt some of its symptoms but do not touch the underlying cause.

The finding, published in the journal Nature, will not lead to new treatments soon, experts said, nor to widely available testing for individual risk. But the results provide researchers with their first biological handle on an ancient disorder whose cause has confounded modern science for generations. The finding also helps explain some other mysteries, including why the disorder often begins in adolescence or young adulthood.

“They did a phenomenal job,” said David B. Goldstein, a professor of genetics at Columbia University who has been critical of previous large-scale projects focused on the genetics of psychiatric disorders. “This paper gives us a foothold, something we can work on, and that’s what we’ve been looking for now, for a long, long time.”

The researchers pieced together the steps by which genes can increase a person’s risk of developing schizophrenia. That risk, they found, is tied to a natural process called synaptic pruning, in which the brain sheds weak or redundant connections between neurons as it matures. During adolescence and early adulthood, this activity takes place primarily in the section of the brain where thinking and planning skills are centered, known as the prefrontal cortex. People who carry genes that accelerate or intensify that pruning are at higher risk of developing schizophrenia than those who do not, the new study suggests.

Some researchers had suspected that the pruning must somehow go awry in people with schizophrenia, because previous studies showed that their prefrontal areas tended to have a diminished number of neural connections, compared with those of unaffected people. The new paper not only strongly supports that this is the case, but also describes how the pruning probably goes wrong and why, and identifies the genes responsible: People with schizophrenia have a gene variant that apparently facilitates aggressive “tagging” of connections for pruning, in effect accelerating the process.

The research team began by focusing on a location on the human genome, the MHC, which was most strongly associated with schizophrenia in previous genetic studies. On a bar graph — called a Manhattan plot because it looks like a cluster of skyscrapers — the MHC looms highest.

Using advanced statistical methods, the team found that the MHC locus contained four common variants of a gene called C4, and that those variants produced two kinds of proteins, C4-A and C4-B.

The team analyzed the genomes of more than 64,000 people and found that people with schizophrenia were more likely to have the overactive forms of C4-A than control subjects. “C4-A seemed to be the gene driving risk for schizophrenia,” Dr. McCarroll said, “but we had to be sure.”

Read the entire article here.
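The "Manhattan plot" described in the excerpt — genomic position along the x-axis, association strength as −log10(p) on the y-axis — is easy to sketch in a few lines of Python. The simulated p-values and the position of the "MHC-like" peak below are illustrative assumptions of mine, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated genome-wide association scan: one p-value per SNP position.
# Under the null hypothesis, p-values are uniform on (0, 1).
n_snps = 10_000
positions = np.arange(n_snps)
pvals = rng.uniform(size=n_snps)

# Plant an artificial "MHC-like" peak: a short run of strongly
# associated SNPs with very small p-values.
pvals[6_000:6_050] = rng.uniform(1e-12, 1e-8, size=50)

# A Manhattan plot charts -log10(p) against genomic position, so the
# smallest p-values become the tallest "skyscrapers".
heights = -np.log10(pvals)

# The tallest tower marks the most strongly associated locus.
peak = positions[np.argmax(heights)]
print(f"peak at position {peak}, -log10(p) = {heights.max():.1f}")
```

In a real scan the x-axis would span all chromosomes and the peak would sit at the MHC; here the spike simply shows why the strongest locus "looms highest" over the uniform background.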

Fictionalism of Free Will and Morality

In a recent opinion column William Irwin, professor of philosophy at King’s College, summarizes an approach to accepting the notion of free will rather than believing in it. While I’d eventually like to see an explanation for free will and morality in biological and chemical terms — beyond metaphysics — I will (or may, if free will does not exist) for the time being have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being — rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious mind, and rationalized post factum by our higher reasoning functions in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.

From the NYT:

Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.

Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.

The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.

Following a fictionalist account of morality, would mean that we would accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.

Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.

Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.

William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.

Read the entire article here.

Flat Earth People’s Front or People’s Front of Flat Earth?

Orlando-Ferguson-flat-earth-map

If you follow today’s internationally accepted calendar the year is 2016. But that doesn’t stop a significant few from knowing that the Earth is flat. It also doesn’t stop the internecine wars of words between various flat-Earther factions, which subscribe to different flat-Earth creation stories. Oh well.

From the Guardian:

YouTube user TigerDan925 shocked his 26,000 followers recently by conceding a shocking point: Antarctica is a continent. It’s not, as he previously thought, an ice wall that encircles the flat disc of land and water we call earth.

For most of us, that’s not news. But TigerDan925’s followers, like Galileo’s 17th century critics, are outraged by his heresy. Welcome to the contentious universe of flat-Earthers – people who believe the notion of a globe-shaped world orbiting the sun is a myth.

Through popular YouTube videos and spiffy sites, they show how easy it is to get attention by questioning scientific consensus. Unfortunately, we don’t really know how many people believe in the movement because so many people in it accuse each other of being as fake as Santa Claus (or perhaps the moon landing).

That being said, TigerDan925’s admission was not a concession that the world is shaped like the globe. He merely said flat-Earthers need a new map. But for his community, he might as well have abandoned them altogether:

“Next he says the Antarctica is not governed and protected by the Illuminati, that somehow any group deciding to buy and invest in equipment is free to roam anywhere by plane or on land,” writes a user by the name Chris Madsen. “This is absolute rubbish … 2016 is the year it becomes common knowledge the earth is flat, just like 9/11 became common knowledge, no stopping the truth now. ”

Such schisms are commonplace in flat-Earthdom, where at least three websites are vying to be the official meeting ground for the movement to save us all from the delusion that our world is a globe. Their differences range from petty (who came up with which idea first) to shocking and offensive (whether Jewish people are to blame for suppressing flat-Earth thought). And they regard each other with deep suspicion – almost as if they can’t believe that anyone else would believe what they do.

“[The multiple sites are] just the tip of the iceberg,” said flat-Earth convert Mark Sargent, who used his two decades of work in the tech and video game industries to create the site enclosedworld.com and a YouTube series called Flat Earth Clues. “There’s dissension in the ranks all over the place.”

Sargent compares the frenzy to the Monty Python film Life of Brian, in which Brian gains a following that immediately splits over whether to gather shoes, wear one shoe, or possibly follow a gourd.

“It’s almost like the beginning of a new religion. Everyone’s trying to define it. And they’re turning on each other because there’s no unified theory.” And so, like the People’s Front of Judea and the Judean People’s Front, they often spend far less time discussing what they believe than they spend attacking each other.

The Flat Earth Society revived in 2004 under the leadership of one Daniel Shenton and was opened to new members in 2009. A dissatisfied group split away in 2013 and launched its own site. A reunification proposal in 2014 has withered, and Shenton’s Twitter feed went cold after he posted a cryptic photo of the Terminator in September.

Read the entire article here.

Image: Flat Earth map, by Orlando Ferguson in 1893. Licensed under Public Domain via Commons.

Another Glorious Hubble Image

This NASA/ESA Hubble Space Telescope image shows the spiral galaxy NGC 4845, located over 65 million light-years away in the constellation of Virgo (The Virgin). The galaxy’s orientation clearly reveals its striking spiral structure: a flat and dust-mottled disc surrounding a bright galactic bulge.

NGC 4845’s glowing centre hosts a gigantic black hole, known as a supermassive black hole. The presence of a black hole in a distant galaxy like NGC 4845 can be inferred from its effect on the galaxy’s innermost stars; these stars experience a strong gravitational pull from the black hole and whizz around the galaxy’s centre much faster than they otherwise would. By investigating the motion of these central stars, astronomers can estimate the mass of the central black hole — for NGC 4845 this is estimated to be hundreds of thousands of times heavier than the Sun. The same technique was used to discover the supermassive black hole at the centre of our own Milky Way — Sagittarius A* — which weighs in at some four million times the mass of the Sun (potw1340a).

The galactic core of NGC 4845 is not just supermassive, but also super-hungry. In 2013 researchers were observing another galaxy when they noticed a violent flare at the centre of NGC 4845. The flare came from the central black hole tearing up and feeding off an object many times more massive than Jupiter: a brown dwarf or a large planet that simply strayed too close and was devoured by the hungry core of NGC 4845.

The Hubble Space Telescope captured this recent image of spiral galaxy NGC 4845. The galaxy lies around 65 million light-years from Earth, but it still presents a gorgeous sight. NGC 4845’s glowing center hosts a supermassive, and super hungry, black hole.

Thanks NASA, but I just wish you would give these galaxies more memorable names.

Image: NASA/ESA Hubble Space Telescope image shows the spiral galaxy NGC 4845, located over 65 million light-years away in the constellation of Virgo. Courtesy: ESA/Hubble & NASA and S. Smartt (Queen’s University Belfast).

Human Bloatware

Most software engineers and IT people are familiar with the term “bloatware”. The word is usually applied to a software application that takes up so much disk space and/or memory that its functional benefits are greatly diminished or rendered useless. Operating systems such as Windows and OSX are often characterized as bloatware — each new version seems to require ever more disk space (and memory) to accommodate an expanding array of new (often trivial) features of marginal added benefit.

DNA_Structure

But it seems we humans did not invent such bloat with our technology. Rather, a new genetic analysis suggests that humans (and other animals) are themselves built of biological bloatware, through a process that began when molecules of DNA first assembled the genes of the earliest living organisms.

From ars technica:

Eukaryotes like us are more complex than prokaryotes. We have cells with lots of internal structures, larger genomes with more genes, and our genes are more complex. Since there seems to be no apparent evolutionary advantage to this complexity—evolutionary advantage being defined as fitness, not as things like consciousness or sex—evolutionary biologists have spent much time and energy puzzling over how it came to be.

In 2010, Nick Lane and William Martin suggested that because they don’t have mitochondria, prokaryotes just can’t generate enough energy to maintain large genomes. Thus it was the acquisition of mitochondria and their ability to generate cellular energy that allowed eukaryotic genomes to expand. And with the expansion came the many different types of genes that render us so complex and diverse.

Michael Lynch and Georgi Marinov are now proposing a counter offer. They analyzed the bioenergetic costs of a gene and concluded that there is in fact no energetic barrier to genetic complexity. Rather, eukaryotes can afford bigger genomes simply because they have bigger cells.

First they looked at the lifetime energetic requirements of a cell, defined as the number of times that cell hydrolyzes ATP into ADP, a reaction that powers most cellular processes. This energy requirement rose linearly and smoothly with cell size from bacteria to eukaryotes with no break between them, suggesting that complexity alone, independently of cell volume, requires no more energy.

Then they calculated the cumulative cost of a gene—how much energy it takes to replicate it once per cell cycle, how much energy it takes to transcribe it into mRNA, and how much energy it takes to then translate that mRNA transcript into a functional protein. Genes may provide selective advantages, but those must be sufficient to overcome and justify these energetic costs.

At the levels of replication (copying the DNA) and transcription (making an RNA copy), eukaryotic genes are more costly than prokaryotic genes because they’re bigger and require more processing. But even though these costs are higher, they take up proportionally less of the total energy budget of the cell. That’s because bigger cells take more energy to operate in general (as we saw just above), while things like copying DNA only happens once per cell division. Bigger cells help here, too, as they divide less often.

Read the entire article here.
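The scaling result in the excerpt — lifetime ATP turnover rising linearly and smoothly with cell volume, with no break between bacteria and eukaryotes — amounts to a power-law fit with a slope of one on log-log axes. A minimal sketch of that fit, with made-up numbers standing in for the paper's measurements:

```python
import numpy as np

# Hypothetical (cell volume in cubic microns, lifetime ATP hydrolysis
# events) pairs spanning bacteria to large eukaryotic cells.
# Both the volumes and the constant are illustrative only.
volumes = np.array([0.5, 5.0, 50.0, 500.0, 5_000.0])
atp_events = 2.5e10 * volumes  # a perfectly linear relationship

# Fit a power law: log(E) = a * log(V) + b. A slope (a) near 1 means
# energy demand scales linearly with cell volume -- i.e. complexity
# alone, independent of volume, demands no extra energy.
slope, intercept = np.polyfit(np.log10(volumes), np.log10(atp_events), 1)
print(f"fitted slope: {slope:.2f}")  # ~1.00 for linear scaling
```

On real data the interesting question is whether the fitted slope stays near 1 across the prokaryote/eukaryote boundary, which is what Lynch and Marinov report.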

A Gravitational Wave Comes Ashore

ligo-gravitational-waves-detection

February 11, 2016 was a historic day for astronomers the world over: scientists announced a monumental discovery, one actually made on September 14, 2015! Thank you, LIGO: the era of gravitational wave (G-Wave) astronomy has begun.

One hundred years after Einstein’s general theory of relativity predicted them, scientists have their first direct evidence of gravitational waves. These waves are ripples in the fabric of spacetime itself, rather than movements of fields and particles such as electromagnetic radiation. The ripples arise when gravitationally immense bodies warp the structure of the space in which they sit, for example through collisions or acceleration.

ligo-hanford-aerial

As you might imagine, observing such disturbances here on Earth, across distances of tens to hundreds of millions of light-years, requires not only vastly powerful forces at one end but also immensely sensitive instruments at the other. The detector credited with the discovery is the Laser Interferometer Gravitational-Wave Observatory, or LIGO. It is so sensitive that it can detect a change in the length of its measurement apparatus — infra-red laser beams — 10,000 times smaller than the width of a proton. LIGO is operated by Caltech and MIT and supported by the U.S. National Science Foundation.
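To get a feel for that sensitivity, one can plug in rough numbers. The proton diameter (~0.85 femtometres) and the 4 km LIGO arm length are my own approximate figures here, not from the post:

```python
# Back-of-envelope estimate of the fractional length change (strain)
# LIGO must resolve. All figures are approximate.
proton_diameter_m = 0.85e-15       # ~0.85 femtometres
arm_length_m = 4_000.0             # each LIGO arm is roughly 4 km long

# The post says LIGO senses length changes 10,000x smaller than
# the width of a proton.
delta_l = proton_diameter_m / 10_000
strain = delta_l / arm_length_m
print(f"delta L ~ {delta_l:.1e} m, strain ~ {strain:.1e}")
```

The resulting strain, a few parts in 10^23, is why it took a century to build an instrument capable of catching Einstein's ripples.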

Prof Kip Thorne, one of the founders of LIGO, said that until now astronomers had looked at the universe as if on a calm sea. That has now changed. He adds:

“The colliding black holes that produced these gravitational waves created a violent storm in the fabric of space and time, a storm in which time speeded up and slowed down, and speeded up again, a storm in which the shape of space was bent in this way and that way.”

And, as Prof Stephen Hawking remarked:

“Gravitational waves provide a completely new way of looking at the universe. The ability to detect them has the potential to revolutionise astronomy. This discovery is the first detection of a black hole binary system and the first observation of black holes merging.”

Congratulations to the many hundreds of engineers, technicians, researchers and theoreticians who have collaborated on this ground-breaking experiment. Particular congratulations go to LIGO’s three principal instigators: Rainer Weiss, Kip Thorne, and Ronald Drever.

This discovery paves the way for deeper understanding of our cosmos and lays the foundation for a new and rich form of astronomy through gravitational observations.

Galileo’s first telescopes opened our eyes to the visual splendor of our solar system and its immediate neighborhood. More recently, radio-wave, x-ray and gamma-ray astronomy have allowed us to discover wonders further afield: star-forming nebulae, neutron stars, black holes, active galactic nuclei, the Cosmic Microwave Background (CMB). Now, through LIGO and its increasingly sensitive descendants we are likely to make even more breathtaking discoveries, some of which, courtesy of gravitational waves, may let us peer at the very origin of the universe itself — the Big Bang.

How brilliant is that!

Image 1: The historic detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO) is shown in this plot during a press conference in Washington, D.C. on Feb. 11, 2016. Courtesy: National Science Foundation.

Image 2: LIGO Laboratory operates two detector sites 1,800 miles apart: one near Hanford in eastern Washington, and another near Livingston, Louisiana. This photo shows the Hanford detector. Courtesy of LIGO Caltech.


Pass the Nicotinamide Adenine Dinucleotide

NAD-molecule

For those of us seeking to live another 100 years or more the news and/or hype over the last decade belonged to resveratrol. The molecule is believed to improve the functioning of specific biochemical pathways in the cell, which may improve cell repair and hinder the aging process. Resveratrol is found — in trace amounts — in grape skin (and hence wine), blueberries and raspberries. While proof remains scarce, this has not stopped the public from consuming large quantities of wine and berries.

Ironically, one would need to ingest such large amounts of resveratrol to replicate the benefits found in mice studies, that the wine alone would probably cause irreversible liver damage before any health benefits appeared. Oh well.

So, on to the next big thing, since aging cannot wait. It’s called NAD or Nicotinamide Adenine Dinucleotide. NAD performs several critical roles in the cell, one of which is energy metabolism. As we age our cells show diminishing levels of NAD and this is, possibly, linked to mitochondrial deterioration. Mitochondria are the cells’ energy factories, so keeping our mitochondria humming along is critical. Thus, hordes of researchers are now experimenting with NAD and related substances to see if they hold promise in postponing cellular demise.

From Scientific American:

Whenever I see my 10-year-old daughter brimming over with so much energy that she jumps up in the middle of supper to run around the table, I think to myself, “those young mitochondria.”

Mitochondria are our cells’ energy dynamos. Descended from bacteria that colonized other cells about 2 billion years ago, they get flaky as we age. A prominent theory of aging holds that decaying of mitochondria is a key driver of aging. While it’s not clear why our mitochondria fade as we age, evidence suggests that it leads to everything from heart failure to neurodegeneration, as well as the complete absence of zipping around the supper table.

Recent research suggests it may be possible to reverse mitochondrial decay with dietary supplements that increase cellular levels of a molecule called NAD (nicotinamide adenine dinucleotide). But caution is due: While there’s promising test-tube data and animal research regarding NAD boosters, no human clinical results on them have been published.

NAD is a linchpin of energy metabolism, among other roles, and its diminishing level with age has been implicated in mitochondrial deterioration. Supplements containing nicotinamide riboside, or NR, a precursor to NAD that’s found in trace amounts in milk, might be able to boost NAD levels. In support of that idea, half a dozen Nobel laureates and other prominent scientists are working with two small companies offering NR supplements.

The NAD story took off toward the end of 2013 with a high-profile paper by Harvard’s David Sinclair and colleagues. Sinclair, recall, achieved fame in the mid-2000s for research on yeast and mice that suggested the red wine ingredient resveratrol mimics anti-aging effects of calorie restriction. This time his lab made headlines by reporting that the mitochondria in muscles of elderly mice were restored to a youthful state after just a week of injections with NMN (nicotinamide mononucleotide), a molecule that naturally occurs in cells and, like NR, boosts levels of NAD.

It should be noted, however, that muscle strength was not improved in the NMN-treated mice — the researchers speculated that one week of treatment wasn’t enough to do that, despite signs that their age-related mitochondrial deterioration was reversed.

NMN isn’t available as a consumer product. But Sinclair’s report sparked excitement about NR, which was already on the market as a supplement called Niagen. Niagen’s maker, ChromaDex, a publicly traded Irvine, Calif., company, sells it to various retailers, which market it under their own brand names. In the wake of Sinclair’s paper, Niagen was hailed in the media as a potential blockbuster.

In early February, Elysium Health, a startup cofounded by Sinclair’s former mentor, MIT biologist Lenny Guarente, jumped into the NAD game by unveiling another supplement with NR. Dubbed Basis, it’s only offered online by the company. Elysium is taking no chances when it comes to scientific credibility. Its website lists a dream team of advising scientists, including five Nobel laureates and other big names such as the Mayo Clinic’s Jim Kirkland, a leader in geroscience, and biotech pioneer Lee Hood. I can’t remember a startup with more stars in its firmament.

A few days later, ChromaDex reasserted its first-comer status in the NAD game by announcing that it had conducted a clinical trial demonstrating that a single dose of NR resulted in statistically significant increases in NAD in humans — the first evidence that supplements could really boost NAD levels in people. Details of the study won’t be out until it’s reported in a peer-reviewed journal, the company said. (ChromaDex also brandishes Nobel credentials: Roger Kornberg, a Stanford professor who won the Chemistry prize in 2006, chairs its scientific advisory board. He’s the son of Nobel laureate Arthur Kornberg, who, ChromaDex proudly notes, was among the first scientists to study NR some 60 years ago.)

The NAD findings tie into the ongoing story about enzymes called sirtuins, which Guarente, Sinclair and other researchers have implicated as key players in conferring the longevity and health benefits of calorie restriction. Resveratrol, the wine ingredient, is thought to rev up one of the sirtuins, SIRT1, which appears to help protect mice on high doses of resveratrol from the ill effects of high-fat diets. A slew of other health benefits have been attributed to SIRT1 activation in hundreds of studies, including several small human trials.

Here’s the NAD connection: In 2000, Guarente’s lab reported that NAD fuels the activity of sirtuins, including SIRT1 — the more NAD there is in cells, the more SIRT1 does beneficial things. One of those things is to induce formation of new mitochondria. NAD can also activate another sirtuin, SIRT3, which is thought to keep mitochondria running smoothly.

Read the entire article here.

Image: Structure of nicotinamide adenine dinucleotide, oxidized (NAD+). Courtesy of Wikipedia. Public Domain.

A Painful End

This should come as no surprise — advances in our understanding of biochemical and genetic processes seem to make the news with ever-increasing regularity. Researchers appear to have found the mechanism for switching physical pain on and off in mammals. They recently succeeded in blocking and restoring pain signals in mice. And, through the same discovery, they have been able to restore the sensation of pain in a woman with an extremely rare condition that leaves her unable to feel any pain. It’s all in the Nav1.7 sodium ion channel and its regulation of opioid peptides.

Fascinating, but where will this lead us? And, more to the point, will there ever be a pill to end the interminable pain of the US political process?

From ars technica:

Physical pain is a near universal problem, whether it’s sudden pangs or chronic aches. Yet, researchers’ efforts to quash it completely have fallen short — possibly due to a moonlighting channel in nerve cells. But that may be about to change.

The sodium ion channel, called Nav1.7, helps generate the electrical signals that surge through pain-related nerve cells. It’s known to play a key role in pain, but researchers’ past attempts to power-down its charged activities did little to soothe suffering. In a bit of a shocking twist, researchers figured out why; the channel has a second, un-channel-like function—regulating painkilling molecules called opioid peptides. That revelation, published in Nature Communications, provided researchers with the know-how to reverse painlessness in a woman with a rare condition, plus make mice completely pain free.

The link between Nav1.7 and opioid painkillers is “fascinating,” Claire Gaveriaux-Ruff, a pain researcher and professor at the University of Strasbourg, told Ars. And, she added, “this discovery brings hope to the many patients suffering from pain that are not yet adequately treated with the available pain medications.”

That source of hope has been a long time coming, John N. Wood, lead author of the study and a neuroscientist at University College London, told Ars. Researchers have been interested in Nav1.7 for years, he said. Excitement peaked in 2006 when scientists reported finding a family who lacked the channel and could feel no pain at all. After that, researchers excitedly scrambled to relieve pain with Nav1.7-blocking drugs. But the drugs inexplicably failed, Wood said. “So we thought, well maybe this channel isn’t just a channel, maybe it’s got some other activities as well.”

Using genetically engineered mice, Wood and colleagues found that completely shutting off Nav1.7 not only made mice pain-free, it cranked up their amount of opioid peptides in nerve cells. These molecules are natural painkillers that help the body moderate pain responses. In these Nav1.7-lacking mice, opioid levels were extremely high, blunting all twinges and throbs. When the researchers gave the mice a drug that blocks those opioids, the animals could feel pain normally. (The opioid-blocking drug, naloxone, treats overdoses of opioid drugs, such as morphine and codeine.)

Even more promising, Wood and colleagues saw the same result in a person. The test subject, a 39-year-old woman with a rare mutation that shuts off Nav1.7, had been pain-free all her life. But, when the researchers gave her a dose of the opioid-blocking naloxone, she felt pain for the first time—the sting of a tiny laser. She was happy to go back to her normal, painless state after the drug wore off, Wood reported. But, she hopes that the drug treatment can be used in children with the pain-free condition to keep them from unknowingly injuring themselves.

Read the entire article here.