This is the Result of Pretending to be Stupid

This NYT opinion piece hits the nail on the head. Pretending to be “stupid” to appeal to the “anti-elite” common man may well have been a good electoral strategy for Republican candidates over the last 50-60 years. Just cast your mind back to Sarah Palin as potential VP in 2008 and you’ll get my drift.

But now, in 2016, we’ve entered uncharted territory: the country is on the verge of electing a shamefully ignorant man-child who also happens to be a narcissistic psychopath with a wide range of extremely dangerous character flaws — and that’s putting it mildly.

Unfortunately for the US — and the world — the Republican nominee’s contempt for truth and reason, disdain for intellectual inquiry, and complete and utter ignorance are not an act. He does, after all, claim: “I know words. I have the best words. But there is no better word than stupid.”

But perhaps all is not lost: he does seem to have a sound grasp of how to acquire top-notch military and geopolitical insights — in his own (best) words, “I watch the shows.”

Welcome to the apocalypse, my friends.

From the NYT:

It’s hard to know exactly when the Republican Party assumed the mantle of the “stupid party.”

Stupidity is not an accusation that could be hurled against such prominent early Republicans as Abraham Lincoln, Theodore Roosevelt, Elihu Root and Charles Evans Hughes. But by the 1950s, it had become an established shibboleth that the “eggheads” were for Adlai Stevenson and the “boobs” for Dwight D. Eisenhower — a view endorsed by Richard Hofstadter’s 1963 book “Anti-Intellectualism in American Life,” which contrasted Stevenson, “a politician of uncommon mind and style, whose appeal to intellectuals overshadowed anything in recent history,” with Eisenhower — “conventional in mind, relatively inarticulate.” The John F. Kennedy presidency, with its glittering court of Camelot, cemented the impression that it was the Democrats who represented the thinking men and women of America.

Rather than run away from the anti-intellectual label, Republicans embraced it for their own political purposes. In his “time for choosing” speech, Ronald Reagan said that the issue in the 1964 election was “whether we believe in our capacity for self-government or whether we abandon the American Revolution and confess that a little intellectual elite in a far-distant Capitol can plan our lives for us better than we can plan them ourselves.” Richard M. Nixon appealed to the “silent majority” and the “hard hats,” while his vice president, Spiro T. Agnew, issued slashing attacks on an “effete core of impudent snobs who characterize themselves as intellectuals.”

Many Democrats took all this at face value and congratulated themselves for being smarter than the benighted Republicans. Here’s the thing, though: The Republican embrace of anti-intellectualism was, to a large extent, a put-on. At least until now.

Eisenhower may have played the part of an amiable duffer, but he may have been the best prepared president we have ever had — a five-star general with an unparalleled knowledge of national security affairs. When he resorted to gobbledygook in public, it was in order to preserve his political room to maneuver. Reagan may have come across as a dumb thespian, but he spent decades honing his views on public policy and writing his own speeches. Nixon may have burned with resentment of “Harvard men,” but he turned over foreign policy and domestic policy to two Harvard professors, Henry A. Kissinger and Daniel Patrick Moynihan, while his own knowledge of foreign affairs was second only to Ike’s.

There is no evidence that Republican leaders have been demonstrably dumber than their Democratic counterparts. During the Reagan years, the G.O.P. briefly became known as the “party of ideas,” because it harvested so effectively the intellectual labor of conservative think tanks like the American Enterprise Institute and the Heritage Foundation and publications like The Wall Street Journal editorial page and Commentary. Scholarly policy makers like George P. Shultz, Jeane J. Kirkpatrick and Bill Bennett held prominent posts in the Reagan administration, a tradition that continued into the George W. Bush administration — amply stocked with the likes of Paul D. Wolfowitz, John J. Dilulio Jr. and Condoleezza Rice.

The trend has now culminated in the nomination of Donald J. Trump, a presidential candidate who truly is the know-nothing his Republican predecessors only pretended to be.

Mr. Trump doesn’t know the difference between the Quds Force and the Kurds. He can’t identify the nuclear triad, the American strategic nuclear arsenal’s delivery system. He had never heard of Brexit until a few weeks before the vote. He thinks the Constitution has 12 Articles rather than seven. He uses the vocabulary of a fifth grader. Most damning of all, he traffics in off-the-wall conspiracy theories by insinuating that President Obama was born in Kenya and that Ted Cruz’s father was involved in the Kennedy assassination. It is hardly surprising to read Tony Schwartz, the ghostwriter for Mr. Trump’s best seller “The Art of the Deal,” say, “I seriously doubt that Trump has ever read a book straight through in his adult life.”

Mr. Trump even appears proud of his lack of learning. He told The Washington Post that he reached decisions “with very little knowledge,” but on the strength of his “common sense” and his “business ability.” Reading long documents is a waste of time because of his rapid ability to get to the gist of an issue, he said: “I’m a very efficient guy.” What little Mr. Trump does know seems to come from television: Asked where he got military advice, he replied, “I watch the shows.”

Read the entire op-ed here.

MondayMap: Addresses Made Simple

what3words-buckingham-palace

I recently stumbled across a fascinating mapping app called What3Words. Its goal is to make location and address finding easier. It does so in quite a creative way — by assigning a unique combination of three words to every 3×3 meter square on the planet. In What3Words’ own words:

It’s far more accurate than a postal address and it’s much easier to remember, use and share than a set of coordinates.

Better addressing improves customer experience, delivers business efficiencies, drives growth and helps the social & economic development of countries.

So, in case you were wondering: the Queen’s official residence in London (Buckingham Palace) is fence.gross.bats.

How cool.
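
For the curious, here is a minimal sketch in Python of the general idea. It is emphatically not What3Words’ actual algorithm (their word list, cell numbering and shuffling scheme are proprietary); it simply illustrates how a grid-cell index can be encoded as three words. The tiny WORDS list below is made up and far too short to give unique addresses.

WORDS = ["fence", "gross", "bats", "apple", "river", "stone"]  # toy list; a real scheme needs tens of thousands of words

CELL_METERS = 3.0
METERS_PER_DEGREE = 111_320.0                      # approximate, at the equator
CELLS_PER_DEGREE = METERS_PER_DEGREE / CELL_METERS

def cell_index(lat, lon):
    """Number the ~3 m grid cells row by row (crude: ignores how cells shrink toward the poles)."""
    row = int((lat + 90.0) * CELLS_PER_DEGREE)
    col = int((lon + 180.0) * CELLS_PER_DEGREE)
    cols_per_row = int(360.0 * CELLS_PER_DEGREE)
    return row * cols_per_row + col

def three_words(lat, lon):
    """Encode the cell index as three words by treating it as a base-len(WORDS) number."""
    n = len(WORDS)
    idx = cell_index(lat, lon)
    first, idx = WORDS[idx % n], idx // n
    second, idx = WORDS[idx % n], idx // n
    third = WORDS[idx % n]
    return f"{first}.{second}.{third}"

# Roughly Buckingham Palace; with the toy word list this will NOT match the real what3words address.
print(three_words(51.5014, -0.1419))

For the encoding to be unique, the word list must be large enough that its cube exceeds the number of 3-meter cells covering the Earth (tens of trillions), which is why the real system reportedly uses a vocabulary of tens of thousands of words per language.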

Image: What3Words screenshot. Courtesy: What3Words.

Benjamin Saves Us From Hollywood

Benjamin-screenshot

Not a moment too soon. Benjamin has arrived in California to save us from ill-conceived and poorly written screenplays vying to be the next Hollywood blockbuster.

Thankfully, Benjamin is neither a 20-something creative wunderkind nor a 30-something know-it-all uber-producer; he (or she) is not even human. Benjamin is an AI (artificial intelligence)-based automatic screenwriter, and the author of Sunspring, a short science fiction film.

From ars technica:

Ars is excited to be hosting this online debut of Sunspring, a short science fiction film that’s not entirely what it seems. It’s about three people living in a weird future, possibly on a space station, probably in a love triangle. You know it’s the future because H (played with neurotic gravity by Silicon Valley‘s Thomas Middleditch) is wearing a shiny gold jacket, H2 (Elisabeth Gray) is playing with computers, and C (Humphrey Ker) announces that he has to “go to the skull” before sticking his face into a bunch of green lights. It sounds like your typical sci-fi B-movie, complete with an incoherent plot. Except Sunspring isn’t the product of Hollywood hacks—it was written entirely by an AI. To be specific, it was authored by a recurrent neural network called long short-term memory, or LSTM for short. At least, that’s what we’d call it. The AI named itself Benjamin.

Knowing that an AI wrote Sunspring makes the movie more fun to watch, especially once you know how the cast and crew put it together. Director Oscar Sharp made the movie for Sci-Fi London, an annual film festival that includes the 48-Hour Film Challenge, where contestants are given a set of prompts (mostly props and lines) that have to appear in a movie they make over the next two days. Sharp’s longtime collaborator, Ross Goodwin, is an AI researcher at New York University, and he supplied the movie’s AI writer, initially called Jetson. As the cast gathered around a tiny printer, Benjamin spat out the screenplay, complete with almost impossible stage directions like “He is standing in the stars and sitting on the floor.” Then Sharp randomly assigned roles to the actors in the room. “As soon as we had a read-through, everyone around the table was laughing their heads off with delight,” Sharp told Ars. The actors interpreted the lines as they read, adding tone and body language, and the results are what you see in the movie. Somehow, a slightly garbled series of sentences became a tale of romance and murder, set in a dark future world. It even has its own musical interlude (performed by Andrew and Tiger), with a pop song Benjamin composed after learning from a corpus of 30,000 other pop songs.
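
For readers curious what “a recurrent neural network called long short-term memory” looks like in practice, below is a minimal sketch in Python (using PyTorch) of a character-level LSTM text generator. It is not Ross Goodwin’s actual model, just the generic shape of the technique the article describes: the network learns to predict the next character of a training corpus, and new text is then sampled from it one character at a time.

import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    """Tiny character-level LSTM language model: embedding -> LSTM -> next-character logits."""
    def __init__(self, vocab_size, embed_size=64, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, ids, state=None):
        out, state = self.lstm(self.embed(ids), state)
        return self.head(out), state

def generate(model, prompt_ids, length, temperature=0.8):
    """Sample `length` characters after the prompt, feeding each choice back into the network."""
    model.eval()
    ids = list(prompt_ids)
    x = torch.tensor([ids])
    state = None
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(x, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            next_id = torch.multinomial(probs, 1).item()
            ids.append(next_id)
            x = torch.tensor([[next_id]])
    return ids

# Untrained demo over a toy 128-character vocabulary; real use would first train on a corpus of screenplays.
model = CharLSTM(vocab_size=128)
print(generate(model, prompt_ids=[72, 101], length=20))  # gibberish until trained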

Read more here.

Image: Benjamin screenshot. Courtesy of Benjamin.

Fish Roasts Human: Don’t Read It, Share It

Common_goldfish2

Interestingly enough, though perhaps not surprisingly, people on social media share news stories rather than read them. At first glance this seems rather perplexing: after all, why would you tweet or re-tweet or like or share a news item before actually reading and understanding it?

Arnaud Legout, co-author of a recent study out of Columbia University and the French National Institute (Inria), tells us that “People form an opinion based on a summary, or summary of summaries, without making the effort to go deeper.” More disconcertingly, he adds, “Our results show that sharing content and actually reading it are poorly correlated.”

Please take 8 seconds or more to mull over this last statement again:

Our results show that sharing content and actually reading it are poorly correlated.

Without doubt our new technological platforms and social media have upended traditional journalism. But, in light of this unnerving finding I have to wonder if this means the eventual and complete collapse of deep analytical, investigative journalism and the replacement of thoughtful reflection with “NationalEnquirerThink”.

Perhaps I’m reading too much into the findings, but it does seem that it is more important for social media users to bond with and seek affirmation from their followers than it is to be personally informed.

With the average human attention span now down to 8 seconds, our literary and contemplative future seems to belong safely in the fins of our cousin, the goldfish (attention span: 9 seconds).

Learn more about Arnaud Legout’s disturbing study here.

Image: Common Goldfish. Courtesy: Wikipedia. Public Domain.

Psychic Quanta From the New Age Wisdom Generator

Over the last couple of years I’ve been compiling a list of my favorite online generators. You know. Enter a key word here or click a button there and the service will return some deeply meaningful and usually darkly funny computer-generated content — sans human intervention.

Check out my recent Fave-Five list if you’re respectively weary of billionaire plutocrats, self-aggrandizing start-ups, politicians, unfathomable science and ivory tower academics:

Now, I have the profound pleasure to add another to my list:

This latest one delivers some profound transcendental literary waveforms worthy of any New Age mystic. A sample of its recent teachings:

We grow, we exist, we are reborn. Energy is the nature of inseparability, and of us. Soon there will be an unveiling of life-force the likes of which the infinite has never seen. We are in the midst of a psychic ennobling of intuition that will align us with the quantum soup itself. Our conversations with other beings have led to an unveiling of pseudo-unlimited consciousness. Humankind has nothing to lose. Sharing is the driver of consciousness. Nothing is impossible. The planet is electrified with vibrations.

Bedlam and the Mysterious Air Loom

Air Loom machine

During my college years I was fortunate enough to spend time as a volunteer in a Victorian-era psychiatric hospital in the United Kingdom. Fortunate in two ways: I was able to make some small yet positive difference in the lives of some of the patients; and I was able to live on the outside.

Despite the good and professional intentions of the many caring staff, the hospital itself — which shall remain nameless — was a dreary embodiment of many a nightmarish horror flick. The building had dark, endless corridors; small, leaky windows; creaky doors, many with locks only on the outside, and even creakier plumbing; spare, cell-like rooms for patients; treatment rooms with passive restraints on chairs and beds. Most locals still called it “____ lunatic asylum”.

All of this leads me to the fascinating and tragic story of James Tilly Matthews, a rebellious (and somewhat paranoid) peace activist who was confined to London’s infamous Bedlam asylum in 1797. He was incarcerated for believing he was being coerced and brainwashed by a mysterious governmental mind control machine known as the “Air Loom”.

Subsequent inquiries pronounced Matthews thoroughly sane, but the British government kept him institutionalized anyway because of his verbal threats against officials and the then king, George III. In effect, this made Matthews a political prisoner — precisely what he had always steadfastly maintained he was.

Ironically, George III’s well-documented, recurrent and serious mental illness had no adverse effect on his own reign as monarch from 1760-1820. Interestingly enough, Bedlam was the popular name for the Bethlem Royal Hospital, sometimes known as St Mary Bethlehem Hospital.

The word “Bedlam”, of course, later came to be a synonym for confusion and chaos.

Read the entire story of James Tilly Matthews and his nemesis, apothecary and discredited lay-psychiatrist, John Haslam, at Public Domain Review.

Image: Detail from the lower portion of James Tilly Matthews’ illustration of the Air Loom featured in John Haslam’s Illustrations of Madness (1810). Courtesy: Public Domain Review / Wellcome Library, London. Public Domain.

The Accelerated Acceleration

Dark_Energy

Until the mid-1990s, the accepted scientific picture held simply that the cosmos was expanding — a fact scientists had accepted since 1929, when Edwin Hubble’s celestial observations showed that distant galaxies are all apparently moving away from us.

But, in 1998, two independent groups of cosmologists made a startling finding: the universe was not only expanding, its expansion was accelerating. And recent measurements show that the expansion of the fabric of spacetime is actually faster than theory predicts.

And, nobody knows why. This expansion, indeed the accelerating expansion, remains one of our current great scientific mysteries.

Cosmologists, astronomers and theoreticians of all stripes have proposed no shortage of possible explanations. But, there is still scant observational evidence to support any of the leading theories. The most popular revolves around the peculiar idea of dark energy.
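
To put the measurement described below in context, two standard textbook relations are worth keeping in mind (my addition, not from the article). Hubble’s law defines the expansion rate H0 that the new observations pin down, and the Friedmann acceleration equation shows how a cosmological constant Λ, the simplest stand-in for dark energy, allows the expansion to speed up:

\[
v = H_0 \, d
\]
\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
\]

Here v is a galaxy’s recession velocity at distance d, a(t) is the cosmic scale factor, and ρ and p are the density and pressure of the universe’s contents; the expansion accelerates when the Λ term dominates. The tension described below is that the directly measured value of H0 comes out roughly eight percent higher than the value predicted from observations of the early universe.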

From Scientific American:

Our universe is flying apart, with galaxies moving away from each other faster each moment than they were the moment before. Scientists have known about this acceleration since the late 1990s, but whatever is causing it—dubbed dark energy—remains a mystery. Now the latest measurement of how fast the cosmos is growing thickens the plot further: The universe appears to be ballooning more quickly than it should be, even after accounting for the accelerating expansion caused by dark energy.

Scientists came to this conclusion after comparing their new measurement of the cosmic expansion rate, called the Hubble constant, to predictions of what the Hubble constant should be based on evidence from the early universe. The puzzling conflict—which was hinted at in earlier data and confirmed in the new calculation—means that either one or both of the measurements are flawed, or that dark energy or some other aspect of nature acts differently than we think.

“The bottom line is that the universe looks like it’s expanding about eight percent faster than you would have expected based on how it looked in its youth and how we expect it to evolve,” says study leader Adam Riess of the Space Telescope Science Institute in Baltimore, Md. “We have to take this pretty darn seriously.” He and his colleagues described their findings, based on observations from the Hubble Space Telescope, in a paper submitted last week to the Astrophysical Journal and posted on the preprint server arXiv.

One of the most exciting possibilities is that dark energy is even stranger than the leading theory suggests. Most observations support the idea that dark energy behaves like a “cosmological constant,” a term Albert Einstein inserted into his equations of general relativity and later removed. This kind of dark energy would arise from empty space, which, according to quantum mechanics, is not empty at all, but rather filled with pairs of “virtual” particles and antiparticles that constantly pop in and out of existence. These virtual particles would carry energy, which in turn might exert a kind of negative gravity that pushes everything in the universe outward.

Read the entire story here.

Image: The universe’s accelerated expansion. Courtesy: NASA and ESA.

Poor Leadership and Destruction of Meaningful Work

WomanFactory1940s

First, your boss may be a great leader, but she or he has little or no sway over how you assess the meaningfulness of the work you do. Second, while a boss cannot create a sense of meaningful work, a bad boss can destroy any likelihood of it.

That’s the recent finding, excerpted below, from researchers at the University of Sussex and the University of Greenwich in the UK.

Therein lies a valuable set of lessons for any business wishing to recruit, retain and motivate employees.

From University of Sussex:

Bosses play no role in fostering a sense of meaningfulness at work – but they do have the capacity to destroy it and should stay out of the way, new research shows.

Published in MIT Sloan Management Review, the research indicates that, rather than being similar to other work-related attitudes, such as engagement or commitment, meaningfulness at work tends to be intensely personal and individual, and is often revealed to employees as they reflect on their work.

Thus what managers can do to encourage meaningfulness is limited, though what they can do to introduce meaninglessness is unfortunately of far greater capacity.

The authors identified five qualities of meaningful work:

1. Self-Transcendent. Individuals tend to experience their work as meaningful when it matters to others more than just to themselves. In this way, meaningful work is self-transcendent.

2. Poignant. People often find their work to be full of meaning at moments associated with mixed, uncomfortable, or even painful thoughts and feelings, not just a sense of unalloyed joy and happiness.

 3. Episodic. A sense of meaningfulness arises in an episodic rather than a sustained way. It seems that no one can find their work consistently meaningful, but rather that an awareness that work is meaningful arises at peak times that are generative of strong experiences.

4. Reflective. Meaningfulness is rarely experienced in the moment, but rather in retrospect and on reflection when people are able to see their completed work and make connections between their achievements and a wider sense of life meaning.

5. Personal. Work that is meaningful is often understood by people not just in the context of their work but also in the wider context of their personal life experiences.

Read more here.

Image: Turret lathe operator machining parts for transport planes at the Consolidated Aircraft Corporation plant, Fort Worth, Texas, USA, 1942. Courtesy: United States Library of Congress’s Prints and Photographs division. Public Domain.

Towards an Understanding of Consciousness

Robert-Fudd-Consciousness-17C

The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite the intricate philosophical essays of John Locke in 1690 that laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But, ideas such as these inch us towards a deeper understanding.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.

Five Tips For Re-Learning How to Walk

Google-search-walking-with-smartphone

It seems that the aimless walk to clear one’s mind has become a rarity. So too the gentle stroll to ponder and think. Purposeless walking is a dying art. Indeed, many in the West are so pampered with transportation alternatives and (self-)limited in time that walking has become an indulgence — who can afford to walk anymore when driving or taking the bus or the train can save so much time (and energy)? Moreover, when we do walk, we’re firmly hunched over our smartphones, entranced by cyberspace and its virtual acknowledgments and affirmations, and thoroughly unaware of our surroundings.

Google-search-walking-in-nature

Yet keep in mind that many of our revered artists, photographers, authors and philosophers were great walkers. They used the walk to sense and think. In fact, studies find a link between walking and creativity.

So, without further ado I present 5 tips to help you revive an endangered pastime:

#1. Ditch the smartphone and any other mobile device.

#2. Find a treasured place to walk. Stomping to the nearest pub or 7-Eleven does not count.

#3. Pay attention to your surroundings and walk mindfully. Observe the world around you. This goes back to #1.

#4. Take off the headphones, take out the earbuds and leave your soundtrack at home. Listen to the world around you.

#5. Leave the partner, friend and dog (or other walking companion) at home. Walk alone.

From the BBC:

A number of recent books have lauded the connection between walking – just for its own sake – and thinking. But are people losing their love of the purposeless walk?

Walking is a luxury in the West. Very few people, particularly in cities, are obliged to do much of it at all. Cars, bicycles, buses, trams, and trains all beckon.

Instead, walking for any distance is usually a planned leisure activity. Or a health aid. Something to help people lose weight. Or keep their fitness. But there’s something else people get from choosing to walk. A place to think.

Wordsworth was a walker. His work is inextricably bound up with tramping in the Lake District. Drinking in the stark beauty. Getting lost in his thoughts.

Charles Dickens was a walker. He could easily rack up 20 miles, often at night. You can almost smell London’s atmosphere in his prose. Virginia Woolf walked for inspiration. She walked out from her home at Rodmell in the South Downs. She wandered through London’s parks.

Henry David Thoreau, who was both author and naturalist, walked and walked and walked. But even he couldn’t match the feat of someone like Constantin Brancusi, the sculptor who walked much of the way between his home village in Romania and Paris. Or indeed Patrick Leigh Fermor, whose walk from the Hook of Holland to Istanbul at the age of 18 inspired several volumes of travel writing. George Orwell, Thomas De Quincey, Nassim Nicholas Taleb, Friedrich Nietzsche, Bruce Chatwin, WG Sebald and Vladimir Nabokov are just some of the others who have written about it.

Read the entire article here.

Images courtesy of Google Search: Walking with smartphone. Walking in nature (my preference).

Search and the Invisible Hand of Bias

duck-duck-go

I’ve written about the online filter bubble for a while now. It’s an insidious and disturbing consequence of our online world. It refers to the phenomenon whereby our profile, personal preferences, history and connections pre-select and filter the type of content that reaches us, screening out whatever the algorithms decide we don’t need to see. The filter bubble reduces our exposure to the wider world of information and to serendipitous discovery.

If this were not bad enough the online world enables a much more dangerous threat — one of hidden bias through explicit manipulation. We’re all familiar with the pull and push exerted by the constant bombardment from overt advertising. We’re also familiar with more subtle techniques of ambient and subliminal control, which aim to sway our minds without our conscious awareness — think mood music in your grocery store (it really does work).

So, now comes another more subtle form of manipulation, but with more powerful results, and it’s tied to search engines and the central role these tools play in our daily lives.

Online search engines, such as Google, know you. They know your eye movements and your click habits; they know your proclivity to select a search result near the top of the first search engine results page (SERP). Advertisers part with a fortune each day with the goal of appearing in this sweet spot on a SERP. This is a tried and tested process — higher ranking on a SERP leads to more clicks and shifts more product.

Google and many other search engines will list a handful of sponsored results at the top of a SERP, followed by the organic results, listed in the order that best fits your search query. Your expectation is that these results are tailored to your query but otherwise unbiased. That’s the key.

New research shows that you believe these SERP results to be unbiased even when they have been manipulated behind the scenes. Moreover, such manipulated results can greatly sway your opinion. The phenomenon now comes with a name: the search engine manipulation effect, or SEME (pronounced “seem”).

In the wrong hands — government overlords or technology oligarchs — this heralds a disturbing possible (and probable) future, already underway in countries with tightly controlled media and flows of information.

Check out a detailed essay on SEME by Robert Epstein here. Epstein is an author and research psychologist at the American Institute for Behavioral Research and Technology in California.

Finally, if you’re interested in using an alternative search engine that’s less interested in taking over the world, check out DuckDuckGo.

Image courtesy of DuckDuckGo.

Pokemon Go and the Post-Apocalyptic Future is Nigh

google-search-pokemon-go

Some have lauded Pokémon Go as the greatest health and fitness enabler since the “invention” of running. After all, over the span of just a few days it has forced half of Western civilization to unplug from Netflix, get off the couch and move around, and to do so outside!

The cynic in me perceives deeper, darker motives at play: a plot by North Korea to distract the West while it prepares a preemptive nuclear strike; a corporate sponsored youth brain-washing program; an exquisitely orchestrated, self-perpetuated genocidal time-bomb wrought by shady political operatives; a Google inspired initiative to tackle the obesity epidemic.

While the true nature of this elegantly devious phenomenon unfolds over the long-term — and maintains the collective attention of tens of millions of teens and millennials in the process — I will make a dozen bold, short-term predictions:

  1. A legendary Pokémon, such as Mewtwo, will show up at the Republican National Convention in Cleveland, and it will be promptly shot by open carry fanatics.
  2. The first Pokémon Go fatality will occur by July 31, 2016 — a player will inadvertently step into traffic while trying to throw a Poké Ball.
  3. The hundredth Pokémon Go fatality will occur on August 1, 2016 — the 49th player to fall into a sewer and drown.
  4. Sales of comfortable running shoes will skyrocket over the next 3 days, as the West discovers walking.
  5. Evangelical mega-churches in the US will hack the game to ensure Pokémon characters appear during revivals to draw more potential customers.
  6. Pokémon characters will begin showing up on Fox News and the Supreme Court.
  7. Tinder will file for chapter 11 bankruptcy and emerge as a Pokémon dating site.
  8. Gyms and stadia around the country will ditch all sporting events to make way for mass Pokémon hunts; NFL’s next expansion team will be virtual and led by Pikachu as quarterback.
  9. The Pokémon Company, Nintendo and Niantic Labs will join forces to purchase Japan by year’s end.
  10. Google and Tesla will team up to deliver Poké Spot in-car navigation allowing players to automatically drive to Pokémon locations.
  11. Donald Trump will assume the office of Pokémon President of the United States on January 20, 2017; 18-35-year-olds forgot to vote.
  12. World ends, January 21, 2017.

Pokemon-Go WSJ screenshot 13Jul2016

If you’re one of the few earthlings wondering what Pokémon Go is all about, and how in the space of just a few days our neighborhoods have become overrun by zombie-like players, look no further than the WSJ. Rupert Murdoch must be a fan.

Image courtesy of Google Search.

Steps of Life

Steps-of-life-19th-century-print

Are you adolescent or middle-aged? Are you on life’s upwardly mobile journey towards the peak years (whatever these may be) or are you spiraling downwards in terminal decline?

The stages of life — from childhood to death — may be the simplistic invention of ancient scholars who sought a way to classify and explain the human condition, but over hundreds of years authors and artists have continued to be drawn to the subject. Our contemporary demographers and market researchers are just the latest in a long line of those who seek to explain, and now monetize, particular groups by age.

So, if you’re fascinated by this somewhat arbitrary chronological classification system, the Public Domain Review has a treat. They’ve assembled a fine collection of images from the last five hundred years that depict the different ages of man and woman.

A common representation shows the ages ascending a series of steps from infancy to a peak and then descending towards old age, senility and death. The image above is a particularly wonderful example of the genre, and while the ages are noted in French, the categories are not difficult to decipher:

20 years: “Jeunesse”

40 years: “Age de discretion”

50 years: “Age de Maturité”

90 years: “Age de decrépitude”

Image: “Le cours de la vie de l’homme dans ses différents âges”. Early 19th-century print showing stages of life at ten year intervals from 10-90 years as ascending and then descending steps. Courtesy: Wikipedia. Public Domain.

Hoverboard or Jet Pack With That Martini?

[tube]kwXWTsQh3F8[/tube]

Personally, I’m still waiting for the advent of Star Trek-like teleportation to get me from point A to point B.

But, in the meantime, and Hyperloop notwithstanding, I’ll go for the hoverboard. It looks like a much more technically finessed product than James Bond’s jet pack.

Check out this Wired report on a recent record-setting hoverboard adventure around and over Sausset-les-Pins, near Marseille, France.

Video: Franky Zapata set a new record for the farthest hoverboard flight. Courtesy: Guinness World Records.

As Clear As Black and White

Police-violence-screenshot-7Jul2016

The terrible tragedy that is wrought by guns in the United States continues unabated. And, it’s even more tragic when elements of our police forces fuel the unending violence, more often than not, enabled by racism. The governor of Minnesota, Mark Dayton, put it quite starkly yesterday, following the fatal shooting on July 6, 2016, of Philando Castile, a resident of Falcon Heights who was pulled over for a broken tail-light.

Just one day earlier, police officers in Baton Rouge, Louisiana shot and killed Alton Sterling.

Anti-police-violence-screenshot-8Jul2016

And, today we hear that the cycle of mistrust, hatred and deadly violence — courtesy of guns — has come full circle. A racist sniper (or snipers) apparently targeted and murdered five white police officers in Dallas, Texas, on July 7, 2016.

Images: Screenshots courtesy of Washington Post and WSJ, respectively.

Are You Monotasking or Just Paying Attention?

We have indeed reached the era of peak multi-tasking. It’s time to select a different corporate meme.

Study after recent study shows that multi-tasking is an illusion — we can’t perform two or more cognitive tasks in parallel, at the same time. Rather, we time-share: switching our attention from one task to another sequentially. These studies also show that dividing our attention in this way tends to have a deleterious effect on all of the tasks. I say cognitive tasks because it’s rather obvious that we can all perform some tasks at the same time: walk and chew gum (or thumb a smartphone); drive and sing; shower and think; read and eat. But all of these combinations require that one of the tasks is mostly autonomic. That is, we perform one task without conscious effort.

Yet more social scientists have determined that multi-tasking is a fraud — perhaps perpetuated by corporate industrial engineers convinced that they can wring more hours of work from you.

What are we to do now, having learned that our super-efficient world of juggling numerous tasks at the “same time” is nothing but a mirage?

Well, observers of the fragile human condition have not rested. This time social scientists have discovered an amazing human talent. And they’ve coined a mesmerizing new term, known as monotasking. In some circles it’s called uni-tasking or single-tasking.

When I was growing up this was called “paying attention”.

But, this being the era of self-help-life-experience-consulting gone mad and sub-minute attention spans (fueled by multi-tasking), we can now all eagerly await the rise of an entirely new industry dedicated to this wonderful monotasking breakthrough. Expect a whole host of monotasking books, buzzworthy news articles, daytime TV shows with monotasking tips and personal coaching experts at TED events armed with “look what monotasking can do for you” PowerPoint decks.

Personally, I will quietly retreat, and return to old-school staying focused, and remind my kids to do the same.

From NYT:

Stop what you’re doing.

Well, keep reading. Just stop everything else that you’re doing.

Mute your music. Turn off your television. Put down your sandwich and ignore that text message. While you’re at it, put your phone away entirely. (Unless you’re reading this on your phone. In which case, don’t. But the other rules still apply.)

Just read.

You are now monotasking.

Maybe this doesn’t feel like a big deal. Doing one thing at a time isn’t a new idea.

Indeed, multitasking, that bulwark of anemic résumés everywhere, has come under fire in recent years. A 2014 study in the Journal of Experimental Psychology found that interruptions as brief as two to three seconds — which is to say, less than the amount of time it would take you to toggle from this article to your email and back again — were enough to double the number of errors participants made in an assigned task.

Earlier research out of Stanford revealed that self-identified “high media multitaskers” are actually more easily distracted than those who limit their time toggling.

So, in layman’s terms, by doing more you’re getting less done.

But monotasking, also referred to as single-tasking or unitasking, isn’t just about getting things done.

Not the same as mindfulness, which focuses on emotional awareness, monotasking is a 21st-century term for what your high school English teacher probably just called “paying attention.”

“It’s a digital literacy skill,” said Manoush Zomorodi, the host and managing editor of WNYC Studios’ “Note to Self” podcast, which recently offered a weeklong interactive series called Infomagical, addressing the effects of information overload. “Our gadgets and all the things we look at on them are designed to not let us single-task. We weren’t talking about this before because we simply weren’t as distracted.”

Ms. Zomorodi prefers the term “single-tasking”: “ ‘Monotasking’ seemed boring to me. It sounds like ‘monotonous.’ ”

Kelly McGonigal, a psychologist, lecturer at Stanford and the author of “The Willpower Instinct,” believes that monotasking is “something that needs to be practiced.” She said: “It’s an important ability and a form of self-awareness as opposed to a cognitive limitation.”

Read the entire article here.

Image courtesy of Google Search.

The Collapsing Wave Function

Schrodinger-equation

Once in a while I have to delve into the esoteric world of quantum mechanics. So, you will have to forgive me.

Since it was formalized in the mid-1920s, QM has been extremely successful at describing the behavior of systems at the atomic scale. In 1927 two giants of the field — Niels Bohr and Werner Heisenberg — devised its standard interpretation, since known as the Copenhagen Interpretation, which has been widely and accurately used to predict and describe the workings of elementary particles and the forces between them.

Yet recent theoretical stirrings in the field threaten to turn this widely held and accepted framework on its head. The Copenhagen Interpretation holds that particles do not have definitive locations until they are observed. Rather, their positions and movements are defined by a wave function that describes a spectrum of probabilities, but no certainties.

Rather understandably, this probabilistic description of our microscopic world tends to unnerve those who seek a more solid view of what we actually observe. Enter Bohmian mechanics or, more correctly, the de Broglie–Bohm theory of quantum mechanics. An increasing number of present-day researchers and theorists are revisiting this theory, which may yet hold some promise.
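
For the mathematically inclined, the two ingredients of the de Broglie–Bohm picture can be written down compactly (standard textbook forms, added here for illustration): the usual time-dependent Schrödinger equation for the wave function ψ of an N-particle system, plus a “guiding equation” that tells each particle, which always has a definite position Q_k, how to move:

\[
i\hbar\,\frac{\partial \psi}{\partial t} = \sum_{k=1}^{N}\left(-\frac{\hbar^{2}}{2m_{k}}\nabla_{k}^{2}\psi\right) + V\psi,
\qquad
\frac{dQ_{k}}{dt} = \frac{\hbar}{m_{k}}\,\operatorname{Im}\!\left(\frac{\nabla_{k}\psi}{\psi}\right)\Bigg|_{(Q_{1},\dots,Q_{N})}
\]

Given the wave function and the particles’ initial positions, the guiding equation fixes their trajectories deterministically, which is exactly the sense in which, as the Wired excerpt below puts it, particles “really do have precise positions at all times.”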

From Wired:

Of the many counterintuitive features of quantum mechanics, perhaps the most challenging to our notions of common sense is that particles do not have locations until they are observed. This is exactly what the standard view of quantum mechanics, often called the Copenhagen interpretation, asks us to believe.

But there’s another view—one that’s been around for almost a century—in which particles really do have precise positions at all times. This alternative view, known as pilot-wave theory or Bohmian mechanics, never became as popular as the Copenhagen view, in part because Bohmian mechanics implies that the world must be strange in other ways. In particular, a 1992 study claimed to crystalize certain bizarre consequences of Bohmian mechanics and in doing so deal it a fatal conceptual blow. The authors of that paper concluded that a particle following the laws of Bohmian mechanics would end up taking a trajectory that was so unphysical—even by the warped standards of quantum theory—that they described it as “surreal.”

Nearly a quarter-century later, a group of scientists has carried out an experiment in a Toronto laboratory that aims to test this idea. And if their results, first reported earlier this year, hold up to scrutiny, the Bohmian view of quantum mechanics—less fuzzy but in some ways more strange than the traditional view—may be poised for a comeback.

As with the Copenhagen view, there’s a wave function governed by the Schrödinger equation. In addition, every particle has an actual, definite location, even when it’s not being observed. Changes in the positions of the particles are given by another equation, known as the “pilot wave” equation (or “guiding equation”). The theory is fully deterministic; if you know the initial state of a system, and you’ve got the wave function, you can calculate where each particle will end up.

That may sound like a throwback to classical mechanics, but there’s a crucial difference. Classical mechanics is purely “local”—stuff can affect other stuff only if it is adjacent to it (or via the influence of some kind of field, like an electric field, which can send impulses no faster than the speed of light). Quantum mechanics, in contrast, is inherently nonlocal. The best-known example of a nonlocal effect—one that Einstein himself considered, back in the 1930s—is when a pair of particles are connected in such a way that a measurement of one particle appears to affect the state of another, distant particle. The idea was ridiculed by Einstein as “spooky action at a distance.” But hundreds of experiments, beginning in the 1980s, have confirmed that this spooky action is a very real characteristic of our universe.

Read the entire article here.

Image: Schrödinger’s time-dependent equation. Courtesy: Wikipedia.

Juno on the 4th of July

Jupiter and Ganymede

Perhaps not coincidentally, NASA’s latest foray into the great beyond reached a key milestone today. The Juno spacecraft entered orbit around the gas giant Jupiter on the 4th of July, 2016.

NASA is still awaiting all the cool science (and image-capture) to begin. So, in the meantime I’m posting a gorgeous picture of Jupiter taken by the Hubble Space Telescope.

Image: Jupiter and Ganymede, taken April 9, 2007. Courtesy: NASA, ESA, and E. Karkoschka (University of Arizona).

Achieving Failure

Our society values success.

Our work environments value triumphing over the competition. We look to our investments to beat the market. We support our favorite teams, but adore them when they trounce their rivals. Our schools and colleges (mostly) help educate our children, but do so in a way that rewards success — good grades, good test scores and good behavior (as in, same as everyone else). We continually reward our kids for success on a task, at school, with a team.

Yet, all of us know, in our hearts and the back of our minds, that the most important lessons and trials stem from failure — not success. From failure we learn to persevere, we learn to change and adapt, we learn to overcome. From failure we learn to avoid obstacles or to tackle them head-on; we learn to reassess and reevaluate. We evolve from our failures.

So this raises the question: why are so many of our processes and systems geared solely to rewarding and reinforcing success?

From NPR:

Is failure a positive opportunity to learn and grow, or is it a negative experience that hinders success? How parents answer that question has a big influence on how much children think they can improve their intelligence through hard work, a study says.

“Parents are a really critical force in child development when you think about how motivation and mindsets develop,” says Kyla Haimovitz, a professor of psychology at Stanford University. She coauthored the study, published in Psychological Science with colleague Carol Dweck, who pioneered research on mindsets. “Parents have this powerful effect really early on and throughout childhood to send messages about what is failure, how to respond to it.”

Although there’s been a lot of research on how these forces play out, relatively little looks at what parents can do to motivate their kids in school, Haimovitz says. This study begins filling that gap.

“There is a fair amount of evidence showing that when children view their abilities as more malleable and something they can change over time, then they deal with obstacles in a more constructive way,” says Gail Heyman, a professor of psychology at the University of California at San Diego who was not involved in this study.

But communicating that message to children is not simple.

“Parents need to represent this to their kids in the ways they react about their kids’ failures and setbacks,” Haimovitz says. “We need to really think about what’s visible to the other person, what message I’m sending in terms of my words and my deeds.”

In other words, if a child comes home with a D on a math test, how a parent responds will influence how the child perceives their own ability to learn math. Even a well-intentioned, comforting response of “It’s OK, you’re still a great writer” may send the message that it’s time to give up on math rather than learn from the problems they got wrong, Haimovitz explains.

Read the entire story here.

What Keeps NASA Going?

Apollo 17 Commander Gene Cernan on lunar rover

Apollo astronaut Eugene Cernan is the last human to have set foot on a world other than Earth. It’s been 44 years since he last stepped off the moon. In fact, in 1972 he drove around using the lunar rover and found time to scribble his daughter’s initials on the dusty lunar surface. So, other than forays to the International Space Station (ISS) and trips to service the Hubble Space Telescope (HST), NASA has kept humans firmly rooted to the homeland.

Of course, in the intervening decades the space agency has not rested on its laurels. NASA has sent probes and robots all over the Solar System and beyond: Voyager to the gas giants and on to interstellar space; Dawn to visit asteroids; Rosetta (in concert with the European Space Agency) to visit a comet; SOHO and its countless cousins to keep an eye on our home star; Galileo and Pioneer to Jupiter; countless spacecraft, including the Curiosity rover, to Mars; Messenger to map Mercury; Magellan to probe the clouds of Venus; Cassini to survey Saturn and its fascinating moons; and, of course, New Horizons to Pluto and beyond.

Hubble Space Telescope image of the face-on spiral galaxy NGC 6814, a Seyfert galaxy whose highly active, luminous nucleus is thought to host a supermassive black hole of about 18 million solar masses.

Our mechanical human proxies reach out a little farther each day to learn more about our universe and our place in it. Exploration and discovery is part of our human DNA; it’s what we do. NASA is our vehicle. So, it’s good to see what NASA is planning. The agency just funded eight advanced-technology programs that officials believe may help transform space exploration. The grants are part of the NASA Innovative Advanced Concepts (NIAC) program. The most interesting, perhaps, are a program to evaluate inducing hibernation in Mars-bound astronauts, and an assessment of directed energy propulsion for interstellar travel.

Our science and technology become more and more like science fiction each day.

Read more about NIAC programs here.

Image 1: Apollo 17 mission commander Eugene A. Cernan makes a short checkout of the Lunar Roving Vehicle during the early part of the first Apollo 17 extravehicular activity at the Taurus-Littrow landing site. Courtesy: NASA.

Image 2: Hubble Spies a Spiral Snowflake, galaxy NGC 6814. Courtesy: NASA/ESA Hubble Space Telescope.

Eg er Island

Eyjafjallajokull

A couple of days after “Brexit” — Britain’s move to pull out of the European Union, an enormous self-inflicted wound perpetrated by narrow-minded xenophobes and scare-mongering political opportunists — Britain got its just deserts. Iceland kicked England out of Euro 2016 — the Europe-wide football (soccer) tournament.

How significant? Well, let’s put this in some perspective. Iceland is a country of only ~330,000 souls, the size of several small London suburbs. It had never before fielded a team in a major tournament. Its national coach is a dentist. The combined income of the entire Icelandic team is less than 5 percent of the average salary earned by just one of England’s players.

The United States offers no giant-killing parallels; however, I suspect, Iceland’s 2-1 win over England would be akin to a high school football (American football) team drubbing the NFL’s Broncos or Patriots.

So, while I was born and raised in London, today I am Iceland: “Ég er Ísland”.

Image: Eyjafjallajökull glacier, one of the smallest glaciers in Iceland. Courtesy: Andreas Tille – Own work.

First, Order a Pizza. Second, World Domination

Google-search-pizza

Tech startups that plan to envelop the globe with their never-thought-of-before-but-cannot-do-without technologies and services have to begin somewhere. Usually, the path to worldwide domination begins with pizza.

From the Washington Post:

In an ordinary conference room in this city of start-ups, a group of engineers sat down to order pizza in an entirely new way.

“Get me a pizza from Pizz’a Chicago near my office,” one of the engineers said into his smartphone. It was their first real test of Viv, the artificial-intelligence technology that the team had been quietly building for more than a year. Everyone was a little nervous. Then, a text from Viv piped up: “Would you like toppings with that?”

The engineers, eight in all, started jumping in: “Pepperoni.” “Half cheese.” “Caesar salad.” Emboldened by the result, they peppered Viv with more commands: Add more toppings. Remove toppings. Change medium size to large.

About 40 minutes later — and after a few hiccups when Viv confused the office address — a Pizz’a Chicago driver showed up with four made-to-order pizzas.

The engineers erupted in cheers as the pizzas arrived. They had ordered pizza, from start to finish, without placing a single phone call and without doing a Google search — without any typing at all, actually. Moreover, they did it without downloading an app from Domino’s or Grubhub.

Of course, a pizza is just a pizza. But for Silicon Valley, a seemingly small change in consumer behavior or design can mean a tectonic shift in the commercial order, with ripple effects across an entire economy. Engineers here have long been animated by the quest to achieve the path of least friction — to use the parlance of the tech world — to the proverbial pizza.

The stealthy, four-year-old Viv is among the furthest along in an endeavor that many in Silicon Valley believe heralds that next big shift in computing — and digital commerce itself. Over the next five years, that transition will turn smartphones — and perhaps smart homes and cars and other devices — into virtual assistants with supercharged conversational capabilities, said Julie Ask, an expert in mobile commerce at Forrester.

Powered by artificial intelligence and unprecedented volumes of data, they could become the portal through which billions of people connect to every service and business on the Internet. It’s a world in which you can order a taxi, make a restaurant reservation and buy movie tickets in one long unbroken conversation — no more typing, searching or even clicking.

Viv, which will be publicly demonstrated for the first time at a major industry conference on Monday, is one of the most highly anticipated technologies expected to come out of a start-up this year. But Viv is by no means alone in this effort. The quest to define the next generation of artificial-intelligence technology has sparked an arms race among the five major tech giants: Apple, Google, Microsoft, Facebook and Amazon.com have all announced major investments in virtual-assistant software over the past year.

Read the entire story here.

Image courtesy of Google Search.

Victorian Mesmerism

Victorian-hypnotist-at-work

Myth suggests that Victorians were highly moralistic, sober, earnest and strait-laced. Yet a cache of recently unearthed posters shows that those living from the mid-1830s to the turn of the century had other things in mind. Mesmerism was quite the rage, apparently. Oh, what would Her Majesty, Queen Victoria, have thought?

See more of these curious posters here.

Image: Poster showing a Victorian hypnotist at work on a group of subjects. Courtesy: Library of Congress.

Scary Chart. Scary Times

Chart-percent-able-to-pay-emergency-expense

A recent report by the US Federal Reserve examines the relative financial health of US households. It makes for very sober reading, highlighting the economic pain suffered by a large swathe of the population.

The report centers on one simple question put to households:

Can you come up with $400 in an emergency (say an unexpected medical bill) and pay for it either in cash or with a credit card whose bill you could pay off within a month?

The answer was jaw-dropping:

Of people earning between $40,000 and $100,000 (i.e., not the very poorest), 44 percent said they could not come up with $400 in an emergency.

Even more astonishing, 27 percent of those making more than $100,000 also could not.

The report suggests that this is not poverty. So what on earth is going on?

One thing is clear, and it’s a disturbing message that we keep seeing in many of our neighborhoods and echoed in the media — the great middle class is shrinking and income inequality continues to widen. At the low end of the economic spectrum, the number of households in or close to poverty is expanding — this, in the richest country in the history of the world. At the high end, the 1 percent, and especially the richest 0.1 percent, hold an ever greater share of income and wealth.

Image: Percent of respondents who would completely pay an emergency expense that costs $400 using cash or a credit card that they pay off at the end of the month (by race/ethnicity and household income). Courtesy: Report on the Economic Well-Being of U.S. Households in 2014, May 2015. Board of Governors of the Federal Reserve System.

The Lines

Lineas_de_Nazca-Nazca_Peru_2015

In an earlier post I wrote about Star Axis, a land art form designed by artist Charles Ross. One of its main elements is an 11-story high Solar Pyramid, which marks the daily and seasonal movements of the sun across a Shadow Field. It’s not only a naked-eye astronomical observatory, it’s a work of art — on an immense scale.

This cast my mind back to the late 1980s, when I was lucky enough to visit Peru for the first time. My trek included the Sechura Desert, also known as the Nazca Desert, about 250 miles southeast of Lima. The Nazca Desert is home to many thousands of land art forms — massive geoglyphs carved into the earth of the arid plateau.

These are the Nazca Lines.

Many of the lines form simple geometric patterns. However, around a hundred feature immense stylized images of animals and plants, including a monkey, spider, condor and hummingbird. The largest of these figures is about 600 ft across.

Archeologists believe the figures were carved into the desert by the Nazca culture, dating from 500 BCE to 500 CE. The purpose of the geoglyphs is still debated today; theories include an astronomical observatory and calendar, fertility symbols and religious rituals.

Interestingly enough, many can best be appreciated from the air — and that’s where they become works of art. This extraordinary art gallery is now preserved as a UNESCO World Heritage site.

Image: Hummingbird, Nazca Lines, Nazca, Peru. Courtesy: Diego Delso, Wikimedia Commons, License CC-BY-SA 4.0.

MondayMap: Beyond the Horizon

Map-beach-view

Andy Woodruff is a cartographer; he makes maps. His most recent construction is a series of whimsical maps that visualize what many of us may have pondered at least once in our lives.

When we are at the beach looking out to sea, casting our eyes to the distant horizon, we often wonder what lies beyond. If you could set off and walk in a straight line from your small plot of sand (or rock) across the vast ocean, where would you first make landfall? Andy Woodruff’s “Beyond the Sea” maps answer this question, and the results are surprising.

For instance, if you happen to be looking out from any beach on the US Eastern Seaboard — and your vision could bend and stretch over the horizon — you would see the southern coastline of Australia. So, drop the boring atlas and Google Maps and go follow some more of Andy Woodruff’s fascinating great circles.

From NPR:

Ever stood on the coastline, gazing out over the horizon, and wondered what’s on the other side? Pondered where you’d end up if you could fly straight ahead until you hit land?

Turns out the answer might be surprising. And even if you pulled out an atlas — or, more realistically, your smartphone — you might have trouble figuring it out. Lines of latitude won’t help, and drawing a path on most maps will lead you astray.

Cartographer Andy Woodruff, who recently embarked on a project called Beyond the Sea to illustrate this puzzle, says there are two simple reasons why it’s harder than it seems to figure out which coast lies directly on the other side of the horizon.

First, coastlines are “wacky,” he writes on his blog. And second, well, the Earth is round.

The crookedness of the world’s coastlines means moving a few miles up or down the coast will leave you facing a different direction (assuming your gaze is straight out, perpendicular to the coast around you).
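To make the geometry concrete, here is a quick back-of-the-envelope sketch of the great-circle idea (mine, not Woodruff’s or NPR’s). It assumes a perfectly spherical Earth and uses the standard destination-point formula; the starting beach and the bearing below are hypothetical, picked only to show how the path drifts away from lines of latitude as it crosses the ocean.

import math

# Illustrative great-circle walk on a spherical Earth (radius 6371 km).
# Not Woodruff's code; start point and bearing are made up for the example.
def destination(lat_deg, lon_deg, bearing_deg, distance_km, radius_km=6371.0):
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    theta = math.radians(bearing_deg)   # initial bearing, clockwise from north
    delta = distance_km / radius_km     # angular distance traveled
    lat2 = math.asin(math.sin(lat1) * math.cos(delta) +
                     math.cos(lat1) * math.sin(delta) * math.cos(theta))
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), (math.degrees(lon2) + 540) % 360 - 180

# Stand on a hypothetical east-facing beach at 41.7 N, 70.0 W, head out to sea
# on a bearing of 100 degrees, and print a waypoint every 2,000 km.
for d in range(0, 20001, 2000):
    print(d, destination(41.7, -70.0, 100.0, d))

Plot those waypoints on a globe and you can see why eyeballing a flat map, or following a line of latitude, points you at the wrong continent.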

Read the entire story here.

Map: Beach view of Australia. Courtesy Andy Woodruff.

 

Desert Earthworks and Cosmic Connections

Star Axis

Some timepieces are intimate: think Breitling, Rolex or your trusty Timex [does anyone wear a watch anymore?]. Some timepieces are monumental — prime examples might include Big Ben in London, the astronomical clock in Prague and Munich’s Rathaus-Glockenspiel.

But then, there are time-keeping instruments on an altogether different scale — ones that dominate a significant portion of the landscape. And where better to find one such example than the stark, high desert of New Mexico?

From the Guardian:

Somewhere in the deserts of New Mexico, a nail is embedded into a type of flat-topped mountain known as a mesa. The positioning of this nail, shielded from the elements by a tin can, took days of trial and error, with astronomical measurements provided by the US Naval Observatory and the help of a surveyor. Finally, the correct spot was located: exactly in alignment with the axis of the Earth from the south pole to the north.

This nail – which I braved rattlesnakes to find, on a mountaintop strewn with slabs of granite – was fundamental to the success of Star Axis, an extraordinary naked-eye observatory that is the brainchild of artist Charles Ross. Only when Ross was sure he had the orientation precisely correct could he begin to build the structure he had dreamed about – an obsession that has consumed him since 1971.

Star Axis is one of the world’s defining earthworks, otherwise known as land art. In the late 60s, a generation of young, New York-based artists, inspired by the space race but also by the turmoil of Vietnam, decided that galleries weren’t big enough to house their visions. So they struck out, choosing instead to make works on an epic scale, sculpted from the elements, in the astounding desert landscapes of the US south-west.

Read the entire story here.

Image: Star Axis. Courtesy: Star Axis / Charles Ross.

Farm in a Box

Freight-Farms

If you’ve read my blog for a while you undoubtedly know that I have a rather jaded view of tech startup culture — particularly Silicon Valley’s myopic obsession with discovering the next multi-billion-dollar mobile-consumer-facing-peer-to-peer-gig-economy-service-sharing-buzzword-laden-dating-platform-with-integrated-messaging-and-travel-meta-search app.

So, here’s something refreshing and different: a startup focused on reimagining the production and distribution of fresh food. The company is called Freight Farms, and its product is a self-contained farm straight out of a box. Actually, the farm is contained inside a box — a standard, repurposed 40 ft long shipping container. Each Leafy Green Machine, as it is called, comes fully equipped with a vertically oriented growing environment, plant-optimized LED lighting, recirculating hydroponic plumbing and fingertip climate control.

Freight Farms may not (yet) make a significant impact on the converging and accelerating global crises of population growth, climate change, ecological destruction and natural resource depletion. But the company offers a sound solution to tackling the increasing demand for locally grown and sustainably produced food, especially as the world becomes increasingly urbanized.

Please check out Freight Farms and spread the word.

Image: Freight Farms. Courtesy: Freight Farms.

Your Brain on LSD

Brain-on-LSD

For the first time, researchers have peered inside the brain to study the real-time effects of the psychedelic drug LSD (lysergic acid diethylamide). Yes, neuroscientists scanned the brains of subjects who volunteered to take a trip inside an MRI scanner, all in the name of science.

While the researchers did not seem to document the detailed subjective experiences of their volunteers, the findings suggest that they were experiencing intense dreamlike visions, effectively “seeing with their eyes shut”. Under the influence of LSD, many areas of the brain that are usually compartmentalized showed far greater interconnection and intense activity.
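For the technically curious, the “interconnection” being described is usually quantified as functional connectivity: the correlation between the activity signals of different brain regions over the course of a scan. The toy sketch below is mine, using random stand-in numbers rather than the study’s actual fMRI data, and shows only the basic calculation, not the Imperial College pipeline.

import numpy as np

# Toy illustration of functional connectivity (not the study's actual analysis).
# Each column stands in for one brain region's activity over a scan; a real
# analysis would use preprocessed fMRI time series rather than random numbers.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 10))   # 200 timepoints x 10 regions

# Connectivity matrix: pairwise correlation between the regions' signals.
connectivity = np.corrcoef(timeseries, rowvar=False)   # shape (10, 10)

# Normally segregated regions "speaking to one another" under LSD would show
# up as larger off-diagonal values here than in a placebo scan.
print(connectivity.round(2))

In a real study the same matrix would be computed for the LSD and placebo conditions and the two compared region by region.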

LSD was first synthesized in 1938. Its profound psychological properties were studied from the mid-1940s to the early 1960s. The substance was later banned — worldwide — after its adoption as a recreational drug.

This new study was conducted by researchers from Imperial College London and The Beckley Foundation, which researches psychoactive substances.

From the Guardian:

The profound impact of LSD on the brain has been laid bare by the first modern scans of people high on the drug.

The images, taken from volunteers who agreed to take a trip in the name of science, have given researchers an unprecedented insight into the neural basis for effects produced by one of the most powerful drugs ever created.

A dose of the psychedelic substance – injected rather than dropped – unleashed a wave of changes that altered activity and connectivity across the brain. This has led scientists to new theories of visual hallucinations and the sense of oneness with the universe some users report.

The brain scans revealed that trippers experienced images through information drawn from many parts of their brains, and not just the visual cortex at the back of the head that normally processes visual information. Under the drug, regions once segregated spoke to one another.

Further images showed that other brain regions that usually form a network became more separated in a change that accompanied users’ feelings of oneness with the world, a loss of personal identity called “ego dissolution”.

David Nutt, the government’s former drugs advisor, professor of neuropsychopharmacology at Imperial College London, and senior researcher on the study, said neuroscientists had waited 50 years for this moment. “This is to neuroscience what the Higgs boson was to particle physics,” he said. “We didn’t know how these profound effects were produced. It was too difficult to do. Scientists were either scared or couldn’t be bothered to overcome the enormous hurdles to get this done.”

Read the entire story here.

Image: Different sections of the brain, either on placebo, or under the influence of LSD (lots of orange). Courtesy: Imperial College/Beckley Foundation.