MondayMap: House Prices via London Tube

London-Tube-house-prices

While I no longer live in London, I grew up there and still have a special affection for the city. I’m even attached to its famed Tube map (the subway, for my US readers). So I found this rendition rather fascinating — a map of average house prices at and around each tube station. No surprise: prices at stations on and inside the Central Line (red) are the highest, with even the cheapest hovering around £500,000 (roughly $710,000). Read more about this map here.

Map courtesy of eMoov with data provided by Zoopla.

 

Deconstructing Schizophrenia

Genetic and biomedical researchers have made yet another tremendous breakthrough by analyzing the human genome. This time a group of scientists from Harvard Medical School, Boston Children’s Hospital and the Broad Institute has identified key genetic markers and biological pathways that underlie schizophrenia.

In the US alone the psychiatric disorder affects around 2 million people. Symptoms of schizophrenia usually include hallucinations, delusional thinking and paranoia. While there are a number of drugs used to treat its symptoms, and psychotherapy to address milder forms, nothing as yet has been able to address its underlying cause(s). Hence the excitement.

From NYT:

Scientists reported on Wednesday that they had taken a significant step toward understanding the cause of schizophrenia, in a landmark study that provides the first rigorously tested insight into the biology behind any common psychiatric disorder.

More than two million Americans have a diagnosis of schizophrenia, which is characterized by delusional thinking and hallucinations. The drugs available to treat it blunt some of its symptoms but do not touch the underlying cause.

The finding, published in the journal Nature, will not lead to new treatments soon, experts said, nor to widely available testing for individual risk. But the results provide researchers with their first biological handle on an ancient disorder whose cause has confounded modern science for generations. The finding also helps explain some other mysteries, including why the disorder often begins in adolescence or young adulthood.

“They did a phenomenal job,” said David B. Goldstein, a professor of genetics at Columbia University who has been critical of previous large-scale projects focused on the genetics of psychiatric disorders. “This paper gives us a foothold, something we can work on, and that’s what we’ve been looking for now, for a long, long time.”

The researchers pieced together the steps by which genes can increase a person’s risk of developing schizophrenia. That risk, they found, is tied to a natural process called synaptic pruning, in which the brain sheds weak or redundant connections between neurons as it matures. During adolescence and early adulthood, this activity takes place primarily in the section of the brain where thinking and planning skills are centered, known as the prefrontal cortex. People who carry genes that accelerate or intensify that pruning are at higher risk of developing schizophrenia than those who do not, the new study suggests.

Some researchers had suspected that the pruning must somehow go awry in people with schizophrenia, because previous studies showed that their prefrontal areas tended to have a diminished number of neural connections, compared with those of unaffected people. The new paper not only strongly supports that this is the case, but also describes how the pruning probably goes wrong and why, and identifies the genes responsible: People with schizophrenia have a gene variant that apparently facilitates aggressive “tagging” of connections for pruning, in effect accelerating the process.

The research team began by focusing on a location on the human genome, the MHC, which was most strongly associated with schizophrenia in previous genetic studies. On a bar graph — called a Manhattan plot because it looks like a cluster of skyscrapers — the MHC looms highest.

Using advanced statistical methods, the team found that the MHC locus contained four common variants of a gene called C4, and that those variants produced two kinds of proteins, C4-A and C4-B.

The team analyzed the genomes of more than 64,000 people and found that people with schizophrenia were more likely to have the overactive forms of C4-A than control subjects. “C4-A seemed to be the gene driving risk for schizophrenia,” Dr. McCarroll said, “but we had to be sure.”
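The “Manhattan plot” mentioned above is nothing more exotic than a scatter of association strength at each position along the genome. Here is a minimal sketch in Python using entirely synthetic data (none of it from the actual study), just to show why a region like the MHC would tower over everything else:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, ax = plt.subplots(figsize=(10, 3))

offset = 0
for chrom in range(1, 23):                  # 22 autosomes, 2,000 fake variants each
    pvals = rng.uniform(size=2000)
    if chrom == 6:                          # the MHC sits on chromosome 6
        pvals[1000] = 1e-30                 # one artificially strong association
    ax.scatter(offset + np.arange(2000), -np.log10(pvals), s=2,
               color="steelblue" if chrom % 2 else "grey")
    offset += 2000

ax.set_xlabel("position along the genome")
ax.set_ylabel("-log10(p-value)")
ax.set_title("Toy Manhattan plot (synthetic data)")
plt.tight_layout()
plt.savefig("manhattan.png")

The lone spike on chromosome 6 is the “skyscraper” the researchers zoomed in on.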

Read the entire article here.

Fictionalism of Free Will and Morality

In a recent opinion column, William Irwin, professor of philosophy at King’s College, summarizes an approach to accepting the notion of free will rather than believing in it. While I’d eventually like to see an explanation for free will and morality in biological and chemical terms — beyond metaphysics — I will (or may, if free will does not exist) for the time being have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being — rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious, and that our higher reasoning functions rationalize it post factum in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.

From the NYT:

Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.

Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.

The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.

Following a fictionalist account of morality would mean that we would accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.

Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.

Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.

William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.

Read the entire article here.

A Case For Less News

Google-search-cable-news

I find myself agreeing with columnist Oliver Burkeman over at the Guardian that we need to carefully manage our access to the 24/7 news cycle. Our news media has learned to thrive on hyperbole and sensationalism, which — let’s face it — tends to be mostly negative. This unending and unnerving stream of gloom and doom tends to make us believe that we are surrounded by more badness than there actually is. I have to believe that most of the 7 billion+ personal stories unfolding each day — however mundane — are neither bad nor evil. So, while it may not be wise to switch off cable or satellite news completely, we should consider a more measured, and balanced, approach to the media monster.

From the Guardian:

A few days before Christmas, feeling rather furtive about it, I went on a media diet: I quietly unsubscribed from, unfollowed or otherwise disconnected from several people and news sources whose output, I’d noticed, did nothing but bring me down. This felt like defeat. I’ve railed against the popular self-help advice that you should “give up reading the news” on the grounds that it’s depressing and distracting: if bad stuff’s happening out there, my reasoning goes, I don’t want to live in an artificial bubble of privilege and positivity; I want to face reality. But at some point during 2015’s relentless awfulness, it became unignorable: the days when I read about another mass shooting, another tale of desperate refugees or anything involving the words “Donald Trump” were the days I’d end up gloomier, tetchier, more attention-scattered. Needless to say, I channelled none of this malaise into making the planet better. I just got grumbly about the world, like a walking embodiment of that bumper-sticker: “Where are we going, and why are we in this handbasket?”

One problem is that merely knowing that the news focuses disproportionately on negative and scary stories doesn’t mean you’ll adjust your emotions accordingly. People like me scorn Trump and the Daily Mail for sowing unwarranted fears. We know that the risk of dying in traffic is vastly greater than from terrorism. We may even know that US gun crime is in dramatic decline, that global economic inequality is decreasing, or that there’s not much evidence that police brutality is on the rise. (We just see more of it, thanks to smartphones.) But, apparently, the part of our minds that knows these facts isn’t the same part that decides whether to feel upbeat or despairing. It’s entirely possible to know things are pretty good, yet feel as if they’re terrible.

This phenomenon has curious parallels with the “busyness epidemic”. Data on leisure time suggests we’re not much busier than we were, yet we feel busier, partly because – for “knowledge workers”, anyway – there’s no limit to the number of emails we can get, the demands that can be made of us, or the hours of the day we can be in touch with the office. Work feels infinite, but our capacities are finite, therefore overwhelm is inevitable. Similarly, technology connects us to more and more of the world’s suffering, of which there’s an essentially infinite amount, until feeling steamrollered by it becomes structurally inevitable – not a sign that life’s getting worse. And the consequences go beyond glumness. They include “compassion fade”, the well-studied effect whereby our urge to help the unfortunate declines as their numbers increase.

Read the whole column here.

Image courtesy of Google Search.

Google AI Versus the Human Race

Korean_Go_Game_ca_1910-1920

It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But, I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.
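To make that self-play idea concrete, here is a deliberately tiny sketch in Python. It is my illustration, not DeepMind’s code: two copies of the same policy play a throwaway number-picking game, and each side’s moves have their value estimates nudged toward the result.

import random
from collections import defaultdict

values = defaultdict(float)                  # learned value of each move

def pick_move(epsilon=0.1):
    if random.random() < epsilon:            # explore occasionally
        return random.randrange(10)
    return max(range(10), key=lambda m: values[m])   # otherwise exploit

for game in range(20000):
    a_moves = [pick_move() for _ in range(3)]
    b_moves = [pick_move() for _ in range(3)]
    reward_a = 1.0 if sum(a_moves) > sum(b_moves) else -1.0
    for move in a_moves:                     # push A's moves toward A's result
        values[move] += 0.01 * (reward_a - values[move])
    for move in b_moves:                     # and B's toward the opposite result
        values[move] += 0.01 * (-reward_a - values[move])

print(max(values, key=values.get))           # the policy learns to favour big numbers

The real thing swaps that lookup table for a deep neural network and the toy game for Go, but the learn-from-your-own-games loop is the same shape.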

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
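And here, again in Python and again only a sketch of the general technique rather than AlphaGo’s search, is the Monte Carlo idea in miniature: score each legal move by playing a pile of random games to the end and keep whichever move wins most often. The toy game is Nim, where players alternately take one to three sticks and whoever takes the last stick wins.

import random

def random_playout(sticks, my_turn):
    """Play random moves to the end; return True if 'I' end up winning."""
    while sticks > 0:
        sticks -= random.randint(1, min(3, sticks))
        if sticks == 0:
            return my_turn               # whoever just moved took the last stick
        my_turn = not my_turn
    return not my_turn                   # no sticks left at the start: previous mover won

def best_move(sticks, playouts=5000):
    """Score each legal move by how often random continuations end in a win."""
    scores = {}
    for move in range(1, min(3, sticks) + 1):
        wins = sum(random_playout(sticks - move, my_turn=False)
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

print(best_move(10))   # almost always 2, which leaves the opponent a losing position

AlphaGo’s playouts are guided by its neural networks rather than by pure chance, which is what finally made the approach strong enough to beat a professional.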

Read the entire story here.

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.

MondayMap: 80 Years After Prohibition

Alcohol_control_in_United_States

Prohibition (of alcohol sales, production and transportation) ended in the United States in 1933. But you might be surprised to learn that, more than 80 years later, many regions across the nation still have restrictions and outright bans.

The map shows areas where alcohol is restricted: red indicates that the sale of alcohol is banned (dry); blue shows that it is allowed (wet); and yellow denotes that the county is “partially dry” or “moist”.

Interestingly, Kansas, Tennessee and Mississippi are dry states by default and require individual counties to opt in to sell alcohol. Texas is a confusing patchwork: of Texas’s 254 counties, 11 are completely dry, 194 are partially dry, and 49 are entirely wet. And, to add to the confusion, Texas prohibits off-premises sale of liquor — but not beer and wine — all day on Sunday and select holidays.

Read more here.

Image: Map shows dry (red), wet (blue), and mixed (yellow) counties in the United States as of March 2012. Courtesy of Wikipedia.

The Curious Psychology of Returns

In a recent post I wrote about the world of reverse logistics, which underlies the multi-billion dollar business of product returns. But while the process of consumer returns runs like a well-oiled, global machine, the psychology of returns is confusingly counter-intuitive.

For instance, a lenient return policy leads to more returned products — no surprise there. But it also drives increased consumer spending, and that increased spending outweighs the cost to the business of processing the extra returns. Also, and rather more curiously, a more lenient return time limit correlates with a reduction in returns, not an increase.

From the Washington Post:

January is prime time for returns in the retail industry, the month where shoppers show up in droves to trade in an ill-fitting sweater from grandma or to unload the second and third “Frozen” dolls that showed up under the Christmas tree.

This post-Christmas ritual has always been costly for retailers, comprising a large share of the $284 billion in goods that were returned in 2014.  But now it is arguably becoming more urgent for the industry to think carefully about return policies, as analysts say the rise of online shopping is bringing with it a surge in returns. The return rate for the industry overall is about 8 percent, but analysts say that it is likely significantly higher than that online, since shoppers are purchasing goods without seeing them in person or trying them on.

Against that backdrop, researchers at University of Texas-Dallas sought to get a better handle on how return policies affect shopper behavior and, in turn, whether lenient policies such as offering a lengthy period for returns actually helps or hurts a retailer’s business.

Overall, a lenient return policy did indeed correlate with more returns. But, crucially, it was even more strongly correlated with an increase in purchases. In other words, retailers are generally getting a clear sales benefit from giving customers the assurance of a return.

One surprising finding: More leniency on time limits is associated with a reduction — not an increase — in returns.

This may seem counterintuitive, but researchers say it could have varying explanations. Ryan Freling, who conducted the research alongside Narayan Janakiraman and Holly Syrdal, said that this is perhaps a result of what’s known as “endowment effect.”

“That would say that the longer a customer has a product in their hands, the more attached they feel to it,” Freling said.

Plus, the long time frame creates less urgency around the decision over whether or not to take it back.

Read the entire article here.

Flat Earth People’s Front or People’s Front of Flat Earth?

Orlando-Ferguson-flat-earth-map

If you follow today’s internationally accepted calendar the year is 2016. But that doesn’t stop a significant few from knowing that the Earth is flat. It also doesn’t stop the internecine wars of words between various flat-Earther factions, which subscribe to different flat-Earth creation stories. Oh well.

From the Guardian:

YouTube user TigerDan925 shocked his 26,000 followers recently by conceding a shocking point: Antarctica is a continent. It’s not, as he previously thought, an ice wall that encircles the flat disc of land and water we call earth.

For most of us, that’s not news. But TigerDan925’s followers, like Galileo’s 17th century critics, are outraged by his heresy. Welcome to the contentious universe of flat-Earthers – people who believe the notion of a globe-shaped world orbiting the sun is a myth.

Through popular YouTube videos and spiffy sites, they show how easy it is to get attention by questioning scientific consensus. Unfortunately, we don’t really know how many people believe in the movement because so many people in it accuse each other of being as fake as Santa Claus (or perhaps the moon landing).

That being said, TigerDan925’s admission was not a concession that the world is shaped like the globe. He merely said flat-Earthers need a new map. But for his community, he might as well have abandoned them altogether:

“Next he says the Antarctica is not governed and protected by the Illuminati, that somehow any group deciding to buy and invest in equipment is free to roam anywhere by plane or on land,” writes a user by the name Chris Madsen. “This is absolute rubbish … 2016 is the year it becomes common knowledge the earth is flat, just like 9/11 became common knowledge, no stopping the truth now. ”

Such schisms are commonplace in flat-Earthdom, where at least three websites are vying to be the official meeting ground for the movement to save us all from the delusion that our world is a globe. Their differences range from petty (who came up with which idea first) to shocking and offensive (whether Jewish people are to blame for suppressing flat-Earth thought). And they regard each other with deep suspicion – almost as if they can’t believe that anyone else would believe what they do.

“[The multiple sites are] just the tip of the iceberg,” said flat-Earth convert Mark Sargent, who used his two decades of work in the tech and video game industries to create the site enclosedworld.com and a YouTube series called Flat Earth Clues. “There’s dissension in the ranks all over the place.”

Sargent compares the frenzy to the Monty Python film Life of Brian, in which Brian gains a following that immediately splits over whether to gather shoes, wear one shoe, or possibly follow a gourd.

“It’s almost like the beginning of a new religion. Everyone’s trying to define it. And they’re turning on each other because there’s no unified theory.” And so, like the People’s Front of Judea and the Judean People’s Front, they often spend far less time discussing what they believe than they spend attacking each other.

The Flat Earth Society revived in 2004 under the leadership of one Daniel Shenton and was opened to new members in 2009. A dissatisfied group split away in 2013 and launched its own site. A reunification proposal in 2014 has withered, and Shenton’s Twitter feed went cold after he posted a cryptic photo of the Terminator in September.

Read the entire article here.

Image: Flat Earth map, by Orlando Ferguson in 1893. Licensed under Public Domain via Commons.

SciDeny and Rain Follows the Plow Doctrine

Ploughmen

“SciDeny” is a growing genre of American fiction.

SciDeny is authored by writers who propose an alternate “reality” to rational scientific thought. But, don’t be fooled into believing that SciDeny is anything like SciFi.

There are three key differences between SciDeny and SciFi. First, SciDeny is authored by politicians, lawyers or lay-persons with political agendas, not professional novelists. Second, SciDeny purports to be non-fictional, and indeed many believe it to be so. Third, where SciFi often promotes a visionary future underpinned by scientific and technological progress, SciDeny is aimed squarely at countering the scientific method and turning back the clock on hundreds of years of scientific discourse and discovery.

SciDeny is most pervasive in our schools (and the current US Congress), where the SciDeniers promote the practice under the guise of academic freedom. The key target for the SciDeny movement is, of course, evolution. But why stop there? I would encourage SciDeniers to band together and push schools to teach the following as well: flat-earth, four humors, luminiferous aether, alchemy, geo-centric theory of the universe, miasmatic theory of disease, phlogiston, spontaneous generation, expanding earth, world ice doctrine, species transmutation, hollow earth theory, phrenology, and rain follows the plow (or plough).

We’re off to a great start already in 2016, as various States vie to be the first to pass SciDeny-friendly legislation. Oklahoma is this year’s winner.

From ars technica:

The first state bills of the year that would interfere with science education have appeared in Oklahoma. There, both the House and Senate have seen bills that would prevent school officials and administrators from disciplining any teachers who introduce spurious information to science classes.

These bills have a long history, dating back to around the time when teaching intelligent design was determined to be an unconstitutional imposition of religion. A recent study showed that you could take the text of the bills and build an evolutionary tree that traces their modifications over the last decade. The latest two fit the patterns nicely.

The Senate version of the bill is by State Senator Josh Brecheen, a Republican. It is the fifth year in a row he’s introduced a science education bill after announcing he wanted “every publicly funded Oklahoma school to teach the debate of creation vs. evolution.” This year’s version omits any mention of specific areas of science that could be controversial. Instead, it simply prohibits any educational official from blocking a teacher who wanted to discuss the “strengths and weaknesses” of scientific theories.

The one introduced in the Oklahoma House is more traditional. Billed as a “Scientific Education and Academic Freedom Act” (because freedom!), it spells out a whole host of areas of science its author doesn’t like:

The Legislature further finds that the teaching of some scientific concepts including but not limited to premises in the areas of biology, chemistry, meteorology, bioethics, and physics can cause controversy, and that some teachers may be unsure of the expectations concerning how they should present information on some subjects such as, but not limited to, biological evolution, the chemical origins of life, global warming, and human cloning.

Read more here.

Image: Ploughing with oxen. A miniature from an early-sixteenth-century manuscript of the Middle English poem God Spede þe Plough, held at the British Museum. By Paul Lacroix. Public Domain.

Election 2016 QVC Infomercial

The 2016 US presidential election cycle just entered the realm of total absurdity.

Not content with puerile vulgarity, hate-speech, 4th-grade “best words” and policy-less demagoguery, the current frontrunner for the Republican nomination was hawking his fake steaks, bottled water, vodka and wine at his March 8, 2016 press conference…

trump-infomercial-8Mar2016

Image courtesy of Jared Wyand / Independent News.

Going, Going, Gone

Our planet continues to warm as climate change relentlessly marches on. These two images of Lake Poopó, in the high Bolivian Andes, taken three years apart, show the stark reality. The first image was captured in 2013, the second in 2016.

The images, acquired by the Operational Land Imager (OLI) on the Landsat 8 satellite, are courtesy of the NASA Earth Observatory.

Most climatologists suggest that the lake will not return.

Lake Poopó, 2013

lakepoopo_oli_2013102

Lake Poopó, 2016

lakepoopo_oli_2016015

Read more about the causes and environmental and human consequences here.

Images: NASA Earth Observatory images by Jesse Allen, using Landsat data from the U.S. Geological Survey. Caption by Kathryn Hansen.

PhotoMash: A Great Leader vs Something Else Entirely

Today’s PhotoMash comes courtesy of the Guardian (UK Edition), on January 21, 2016. A kindly editor over there was thoughtful enough to put President Eisenhower alongside two elements of the 2016 US presidential election clown circus.

Photomash-Palin-vs-Eisenhower

Many political scholars, commentators and members of the public — of all political stripes — who remember Eisenhower during his two terms in office (1953-1961) agree that he was one of the greatest US Presidents. As for the pretenders to the throne in the other half of this PhotoMash, well, ugh. Enough said.

Image courtesy of the Guardian.

DeepDrumpf the 4th-Grader

DeepDrumpf is a Twitter bot out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). It uses artificial intelligence (AI) to learn from the jaw-dropping rants of the current Republican frontrunner for the Presidential nomination and then tweets its own remarkably Trump-like musings.

A handful of DeepDrumpf’s recent deep-thoughts here:

DeepDrumpf-Twitter-bot

 

The bot’s designer, CSAIL postdoc Bradley Hayes, says DeepDrumpf uses “techniques from ‘deep-learning,’ a field of artificial intelligence that uses systems called neural networks to teach computers to find patterns on their own.”

I would suggest that the deep-learning algorithms, in the case of Trump’s speech patterns, did not have to be too deep. After all, linguists who have studied his words agree that his speech is mostly at a 4th-grade level — coherent language is not required.
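For contrast, here is roughly what the shallow end of text generation looks like: a toy word-level Markov chain in Python. To be clear, this is my own illustration, not how DeepDrumpf works (the bot uses a neural network), and the stand-in corpus below is made up.

import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Walk the chain from a random starting key to spit out new text."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("we are going to win so much we are going to build a wall "
          "and it is going to be the best wall believe me")
print(generate(build_chain(corpus)))

Feed it a large enough pile of tweets and the output starts to sound eerily plausible, which says as much about the source material as it does about the model.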

Patterns aside, I think I prefer the bot over the real thing — it’s likely to do far less damage to our country and the globe.

 

Old Fame and Insta-Fame

[tube]RfeaNKcffMk[/tube]

First, let me begin by introducing a quote for our times from David Bowie, dated 2003, published in Performing Songwriter.

“Fame itself, of course, doesn’t really afford you anything more than a good seat in a restaurant. That must be pretty well known by now. I’m just amazed how fame is being posited as the be all and end all, and how many of these young kids who are being foisted on the public have been talked into this idea that anything necessary to be famous is all right. It’s a sad state of affairs. However arrogant and ambitious I think we were in my generation, I think the idea was that if you do something really good, you’ll become famous. The emphasis on fame itself is something new. Now it’s, to be famous you should do what it takes, which is not the same thing at all. And it will leave many of them with this empty feeling.”

Thirteen years on, and just a few days following Bowie’s tragic death, his words on fame remain startlingly appropriate. We now live in a world where fame can be pursued, manufactured and curated without needing any particular talent — social media has seen to that.

This new type of fame — let’s call it insta-fame — is a very different kind of condition from our typical notion of old fame, which may be enabled by a gorgeous voice, or acting prowess, or a way with the written word, or skill with a tennis racket, or at the wheel of a race car, or on a precipitous ski slope, or from walking on the surface of the Moon, or from winning the Spelling Bee, or from devising a cure for polio.

It’s easy to confuse insta-fame with old fame: both offer a huge following of adoring strangers and both, potentially, lead to inordinate monetary reward. But that’s where the similarities end. Old fame came from visible public recognition and required an achievement or a specific talent, usually honed after many years or decades. Insta-fame on the other hand doesn’t seem to demand any specific skill and is often pursued as an end in itself. With insta-fame the public recognition has become decoupled from the achievement — to such an extent, in fact, that it no longer requires any achievement or skill, other than the gathering of more public recognition. This is a gloriously self-sustaining circle that advertisers have grown to adore.

My diatribe leads to a fascinating article on the second type of fame, insta-fame, and some of its protagonists and victims. David Bowie’s words continue to ring true.

From the Independent:

Charlie Barker is in her pyjamas, sitting in the shared kitchen of her halls of residence, with an Asda shopping trolley next to her – storage overflow from her tiny room. A Flybe plane takes off from City Airport, just across the dank water from the University of East London, where Barker studies art in surroundings that could not be greyer. The only way out is the DLR, the driverless trains that link Docklands to the brighter parts of town.

 “I always wanted to move to London and when everyone was signing up for uni, I was like, I don’t want to go to uni – I just want to go to London,” says Barker, who calls David Bowie her “spirit animal” and is obsessed with Hello Kitty. But going to London is hard if you’re 18 and from Nottingham and don’t have a plan or money. “So then I was like, OK, I’ll go to uni in London.” So she ended up in Beckton, which is closer to Essex than the city centre.

It’s lunchtime and one of Barker’s housemates walks in to stick something in the microwave, which he quickly takes back to his room. They exchange hellos. “I don’t really talk to people here, I just go to central to meet my friends,” she says. “But the DLR is so long and tragic, especially when you’re not in the brightest of moods.” I ask her if she often goes to the student canteen. I noticed it on the way here; it’s called “Munch”. She’s in her second year and says she didn’t know it existed.

These are unlikely surroundings, in some ways. Because while Barker is a nice, normal student doing normal student things, she’s also famous. I take out my phone and we look through her pictures on Instagram, where her following is greater than the combined circulations of Hello! and OK! magazines. Now @charliexbarker is in the room and things become more colourful. Pink, mainly. And blue, and glitter, and selfies, and skin.

And Hello Kitty. “I wanted to get a tattoo on the palm of my hand and because it was painful I was like, ‘what do I believe in enough to get tattooed on my hand for the rest of my life?’, and I was like – Hello Kitty. My Mum was like, ‘you freak!'” The drawing of the Japanese cartoon cat features in a couple of Barker’s 700-plus photos. In a portrait of her hand, she holds a pink and blue lollipop, and her fingernails are painted pink and blue. The caption: “Pink n blu pink n blu.”

Before that, Barker, now 19, wanted a tattoo saying “Drink water, eat pussy”, but decided against it. The slogan appears in another photo, scrawled on the pavement in pink chalk as she sits wearing a Betty Boop jacket in pink and black, with pink hair and fishnets. “I was bumming around with my friend Daniel, who’s a photographer, and I wanted to see if I could do all the styling and everything,” she says. “We’d already done four of five looks and we were like, oh my God, so we just wet my hair and went with it.”

“Poco esplicita,” suggests one of her Italian followers beside the photo. Barker rarely replies to comments these days, most of which are from fans (“I love uuuuu… Your style just killing me… IM SCREAMING”) and doesn’t say much in her captions (“I do wat I want” in this case). Yet her followers – 622,000 of them at the time of writing – love her pictures, many of which receive more than 50,000 likes. She’s not on reality TV, can’t sing and has no famous relatives. She’s not rich and has no access to private jets or tigers as pets. Yet with a photographic glimpse – or at least suggestion – of a life of colour and attitude, a student in Beckton has earned the sort of fame that only exists on Instagram.

“That sounds so weird, saying that, stop it!” she says when I ask if she feels famous. “No, I’m not famous. I’m just doing my own thing, getting recognition doing it. And I think everyone’s famous now, aren’t they? Everyone has an Instagram and everyone’s famous.”

The photo app, bought by Facebook in 2012, boomed last year, overtaking Twitter in September with 400 million active monthly users. But there are degrees of Instafame. And if one measure, beyond an audience, is a change to one’s life, then Barker has it. So too do Brian Whittaker (@brianhwhittaker) and Olivia Knight-Butler (@livrosekb), whose followings also defy celebrity norms. Whittaker, an insanely grown-up 16-year-old from Solihull, also rejects the idea that he’s famous at all, despite having a quarter of a million followers. “I don’t see followers as a real thing, it’s just being popular on a page,” he says from his mum’s house.

Yet in the next sentence he talks about the best indicator of fame in any age. “I get stopped in the street quite a bit now. In the summer I was in Singapore with my parents and people were taking pictures of me. One person stopped me and then when I got back to the hotel room I saw pictures of me on Instagram shopping. People had tagged me and were asking, ‘is this really you, are you in Singapore?'”

“I get so so flattered when people ask me for a picture in the street,” Barker says. Most of her fans are younger teenage girls. Many have set up dedicated Charlie Barker fan accounts, re-posting her images adorned with love hearts. They idolise her. “I feel like I have to give them eternal love for it, I’m like, oh my God, that is so sweet.”

Read the entire article here.

Video: Fame, David Bowie. Courtesy mudroll / Youtube.

Anti-Gifting and Reverse Logistics

Google-search-gifts-returns

Call it what you may, but ’tis the season following the gift-giving season, which means only one thing: it’s returns season. Did you receive a gorgeous pair of shoes in the wrong size? Return. Did you get yet another hideous tie or shirt in the wrong color? Return. Yet more lotion that makes you break out in an orange rash? Return. Video game in the wrong format or book that you already digested last year? Return. Toaster that doesn’t match your kitchen decor? Return.

And the number of returns is quite staggering. According to Optoro — a research firm that helps major retailers process and resell returns — consumers return nearly $70 billion worth of purchases during the holiday season. That’s roughly the entire GDP of a country like Luxembourg or Sri Lanka.

So, with returns being such a huge industry, how does the process work? Importantly, a returned gift is highly unlikely to end up back on the shelf from which it was purchased. Rather, the gift often travels an inverse supply chain — known as reverse logistics — from the consumer back to the retailer, sometimes to a wholesaler, and then on to a liquidator. Notably, up to 40 percent of returns don’t even make it to a liquidator, since it’s sometimes more economical for the retailer to simply discard the item.

From Wired:

For most retailers, the weeks leading up to Christmas are a frenzied crescendo of activity. But for Michael Ringelsten, the excitement starts after the holidays.

Ringelsten runs Shorewood Liquidators, which collects all those post-holiday returns—from unwanted gadgets and exercise equipment to office furniture and popcorn machines—and finds them a new home. Wait, what? A new home? Yep. Rejected gifts and returned goods don’t go back on the shelves from which they came. They follow an entirely different logistical path, a weird mirror image of the supply chain that brings the goods we actually want to our doors.

This parallel process exists because the cost of restocking and reselling returned items often exceeds the value of those items. To cut their losses, online retailers often turn to folks like Ringelsten.

I discovered Shorewood Liquidators through a rather low rent-looking online ad touting returned items from The Home Depot, Amazon, Sears, Wal-Mart, and other big retailers. I was surprised to find the items weren’t bad. Some were an out-and-out deal, like this comfy Arcadia recliner (perfect for my next Shark Tank marathon). Bidding starts at 99 cents for knickknacks or $5 for nicer stuff. The descriptions state whether there are scuffs, scratches, or missing parts.

“This recliner? It will definitely sell,” Ringelsten says. Shorewood employs 91 people who work out of a 100,000-square-foot warehouse in Illinois—a space that, after the holidays, is a Through the Looking Glass version of Amazon, selling unwanted gifts at rock-bottom prices. And as Americans buy more and more holiday gifts online, they’re also returning more, creating new opportunities for businesses prepared to handle what others don’t want. Call it “re-commerce.”

The Hidden World of Returns

UPS says last week it saw the highest volume of returns it expects to see all year, with people sending back more than 5 million gifts and impulse purchases. On the busiest day of that week, the shipper said, people sent back twice as many packages—1 million in all—than the same day a year ago.

But those returns often don’t return from whence they came. Instead, they’re shipped to returns facilities—some operated by retailers, others that serve as hubs for many sellers. Once there, the goods are collected, processed, and often resold by third-party contractors, including wholesalers and liquidators like Shorewood. These contractors often use software that determines the most profitable path, be it selling them to consumers online, selling them in lots to wholesale buyers, or simply recycling them. If none of these options is profitable, the item may well end up in a landfill, making the business of returns an environmental issue, as well.
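As a rough sketch of the kind of decision that software is making, here is a hypothetical disposition rule in Python. The channels, recovery rates and handling costs are all made up for illustration; real systems weigh far more factors.

# Hypothetical numbers only: route each returned item to whichever channel
# recovers the most value after handling costs.
CHANNELS = {
    "resell_online": {"recovery_rate": 0.60, "handling_cost": 12.0},
    "wholesale_lot": {"recovery_rate": 0.25, "handling_cost": 3.0},
    "recycle":       {"recovery_rate": 0.05, "handling_cost": 1.0},
    "landfill":      {"recovery_rate": 0.00, "handling_cost": 0.5},
}

def best_channel(retail_price):
    def net(channel):
        c = CHANNELS[channel]
        return retail_price * c["recovery_rate"] - c["handling_cost"]
    return max(CHANNELS, key=net)

print(best_channel(200.0))   # a $200 recliner is worth reselling
print(best_channel(8.0))     # an $8 knickknack heads for the landfill

A $200 recliner clears the resale bar easily; an $8 knickknack does not, which is the economics behind so many returns never reaching a liquidator at all.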

Read the entire story here.

Image courtesy of Google Search.

A (Word) Cloud From the (Tweet) Storm of a Demagogue

trump-wordcloud-26Feb2016

It’s impossible to ignore the thoroughly shameful behavior of the current crop of politicians and non-politicians running in this year’s U.S. presidential election clown car race. The vicious tripe that flows from the mouths of these people is certainly attention-grabbing. But while it may have been titillating at first, the discourse — in very loose terms — has now taken a deeply disgusting and dangerous turn.

Just take the foul-mouthed tweets of current front runner for the Republican nomination, Donald Trump.

Since he entered the race, his penchant for bullying and demagoguery has taken center stage; no mention of any policy proposals, rational or otherwise; just a filthy mouth spouting hatred, bigotry, fear, shame and intimidation in a constant 140-character storm of drivel.

So I couldn’t resist taking all his recent tweets and creating a wordcloud from his stream of anger and nonsense. His favorite “policy” statements to date: wall, dumb, failing, dopey, dope, worst, dishonest, failed, bad, sad, boring. I must say it is truly astonishing to see this person attack another for being: hater, liar, dishonest, racist, sexist, dumb, total hypocrite!
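If you would rather script this than use a web tool, a minimal sketch with the open-source Python wordcloud package looks like the following. This is not what I used (that was Wordclouds.com, credited below), and the input file of scraped tweets is hypothetical.

from wordcloud import WordCloud

# Hypothetical input: one scraped tweet per line.
with open("trump_tweets.txt") as f:
    text = f.read()

cloud = WordCloud(width=1200, height=800, background_color="white",
                  collocations=False).generate(text)
cloud.to_file("trump_wordcloud.png")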

Wordcloud generated using Wordclouds.com.

MondayMap: National Business Emotional Intelligence

A recent article in the Harvard Business Review (HBR) gives would-be business negotiators some general tips on how best to deal with counterparts from other regions of the world. After all, getting to yes and reaching a mutually beneficial agreement across all parties does require a good degree of cultural sensitivity and emotional intelligence.

map-Emotional-Counterpart

While there is no substitute for understanding other nations through travel and cultural immersion, the HBR article describes some interesting nuances to help those lacking in geographic awareness, international business experience, and cross-cultural wisdom. The first step in this exotic journey is, rather appropriately, a map.

No surprise, the Japanese and Filipinos shirk business confrontation, whereas the Russians and French savor it. Northern Europeans are less emotional, while Southern Europeans and Latin Americans are much more emotionally expressive.

From Frank Jacobs over at Strange Maps:

Negotiating with Filipinos? Be warm and personal, but stay polite. Cutting das Deal with Germans? Stay cool as ice, and be tough as nails. So what happens if you’re a German doing business in the Philippines?

That’s not the question this map was designed to answer. This map — actually, a diagram — shows differences in attitudes to business negotiations in a number of countries. Familiarise yourself with them, then burn the drawing. From now on, you’ll be a master international dealmaker.

Vertically, the map distinguishes between countries where it is highly haram to show emotions during business proceedings (Japan being the prime example) and countries where emotions are an accepted part of il commercio (yes, Italians are emotional extroverts — also in business).

The horizontal axis differentiates countries with a very confrontational negotiating style — think heated arguments and slammed doors — from places where decorum is the alpha and omega of commercial dealings. For an extreme example of the former, try trading with an Israeli company. For the latter, I refer you to those personable but (apparently also) persnickety Filipinos.

Read the entire article here.

Map by Erin Meyer, professor and program director for Managing Global Virtual Teams at INSEAD. Courtesy of HBR / Strange Maps.

Colonizing the Milky Way 101

ESO-The_Milky_Way_panorama

The human race is likely to spend many future generations grappling with the aftermath of its colonial sojourns across the globe. Almost every race and creed in our documented history has actively encroached upon and displaced others. By our very nature we are territorial animals, and very good ones at that.

Yet despite the untold volumes of suffering, pain and death wrought on those we colonize, our small blue planet is not enough for our fantasies and follies. We send our space probes throughout the solar system to test for habitability. We dream of human outposts on the Moon and on Mars. But even our solar system is too minuscule for our expansive, acquisitive ambitions. Why not colonize our entire galaxy? Now we’re talking!

Kim Stanley Robinson, author extraordinaire of numerous speculative and science fiction novels, gives us an idea of what it may take to spread our wings across the Milky Way in a recent article for Scientific American, excerpted here.

It will be many centuries before humans move beyond our solar system. But before we do so, I’d propose that we get our own house in order. That will be our biggest challenge, not the invention of yet-to-be-imagined technologies.

From Scientific American:

The idea that humans will eventually travel to and inhabit other parts of our galaxy was well expressed by the early Russian rocket scientist Konstantin Tsiolkovsky, who wrote, “Earth is humanity’s cradle, but you’re not meant to stay in your cradle forever.” Since then the idea has been a staple of science fiction, and thus become part of a consensus image of humanity’s future. Going to the stars is often regarded as humanity’s destiny, even a measure of its success as a species. But in the century since this vision was proposed, things we have learned about the universe and ourselves combine to suggest that moving out into the galaxy may not be humanity’s destiny after all.

The problem that tends to underlie all the other problems with the idea is the sheer size of the universe, which was not known when people first imagined we would go to the stars. Tau Ceti, one of the closest stars to us at around 12 light-years away, is 100 billion times farther from Earth than our moon. A quantitative difference that large turns into a qualitative difference; we can’t simply send people over such immense distances in a spaceship, because a spaceship is too impoverished an environment to support humans for the time it would take, which is on the order of centuries. Instead of a spaceship, we would have to create some kind of space-traveling ark, big enough to support a community of humans and other plants and animals in a fully recycling ecological system.

On the other hand it would have to be small enough to accelerate to a fairly high speed, to shorten the voyagers’ time of exposure to cosmic radiation, and to breakdowns in the ark. Regarded from some angles bigger is better, but the bigger the ark is, the proportionally more fuel it would have to carry along to slow itself down on reaching its destination; this is a vicious circle that can’t be squared. For that reason and others, smaller is better, but smallness creates problems for resource metabolic flow and ecologic balance. Island biogeography suggests the kinds of problems that would result from this miniaturization, but a space ark’s isolation would be far more complete than that of any island on Earth. The design imperatives for bigness and smallness may cross each other, leaving any viable craft in a non-existent middle.

The biological problems that could result from the radical miniaturization, simplification and isolation of an ark, no matter what size it is, now must include possible impacts on our microbiomes. We are not autonomous units; about eighty percent of the DNA in our bodies is not human DNA, but the DNA of a vast array of smaller creatures. That array of living beings has to function in a dynamic balance for us to be healthy, and the entire complex system co-evolved on this planet’s surface in a particular set of physical influences, including Earth’s gravity, magnetic field, chemical make-up, atmosphere, insolation, and bacterial load. Traveling to the stars means leaving all these influences, and trying to replace them artificially. What the viable parameters are on the replacements would be impossible to be sure of in advance, as the situation is too complex to model. Any starfaring ark would therefore be an experiment, its inhabitants lab animals. The first generation of the humans aboard might have volunteered to be experimental subjects, but their descendants would not have. These generations of descendants would be born into a set of rooms a trillion times smaller than Earth, with no chance of escape.

In this radically diminished environment, rules would have to be enforced to keep all aspects of the experiment functioning. Reproduction would not be a matter of free choice, as the population in the ark would have to maintain minimum and maximum numbers. Many jobs would be mandatory to keep the ark functioning, so work too would not be a matter of choices freely made. In the end, sharp constraints would force the social structure in the ark to enforce various norms and behaviors. The situation itself would require the establishment of something like a totalitarian state.

Read the entire article here.

Image: The Milky Way panorama. Courtesy: ESO/S. Brunier – Licensed under Creative Commons.

Another Glorious Hubble Image

This NASA/ESA Hubble Space Telescope image shows the spiral galaxy NGC 4845, located over 65 million light-years away in the constellation of Virgo (The Virgin). The galaxy’s orientation clearly reveals the galaxy’s striking spiral structure: a flat and dust-mottled disc surrounding a bright galactic bulge. NGC 4845’s glowing centre hosts a gigantic version of a black hole, known as a supermassive black hole.

The presence of a black hole in a distant galaxy like NGC 4845 can be inferred from its effect on the galaxy’s innermost stars; these stars experience a strong gravitational pull from the black hole and whizz around the galaxy’s centre much faster than otherwise. From investigating the motion of these central stars, astronomers can estimate the mass of the central black hole — for NGC 4845 this is estimated to be hundreds of thousands of times heavier than the Sun. This same technique was also used to discover the supermassive black hole at the centre of our own Milky Way — Sagittarius A* — which weighs in at some four million times the mass of the Sun.

The galactic core of NGC 4845 is not just supermassive, but also super-hungry. In 2013 researchers were observing another galaxy when they noticed a violent flare at the centre of NGC 4845. The flare came from the central black hole tearing up and feeding off an object many times more massive than Jupiter. A brown dwarf or a large planet simply strayed too close and was devoured by the hungry core of NGC 4845.

The Hubble Space Telescope captured this recent image of spiral galaxy NGC 4845. The galaxy lies around 65 million light-years from Earth, but it still presents a gorgeous sight. NGC 4845’s glowing center hosts a supermassive, and super hungry, black hole.
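For the curious, the mass estimate mentioned in the caption boils down to simple orbital mechanics: a star on a roughly circular orbit of radius r at speed v needs an enclosed mass of about v²r/G to hold it there. Here is a back-of-the-envelope version in Python; the speed and radius are illustrative guesses, not measured values for NGC 4845.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres

def enclosed_mass(v, r):
    """Mass (kg) needed to keep a star on a circular orbit of radius r (m) at speed v (m/s)."""
    return v ** 2 * r / G

# Illustrative guesses only, not measurements:
m = enclosed_mass(v=150e3, r=0.05 * PARSEC)
print(f"roughly {m / M_SUN:.0e} solar masses")

With those made-up inputs the answer comes out to a few hundred thousand solar masses, the same ballpark as the caption’s estimate.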

Thanks NASA, but I just wish you would give these galaxies more memorable names.

Image: NASA/ESA Hubble Space Telescope image shows the spiral galaxy NGC 4845, located over 65 million light-years away in the constellation of Virgo. Courtesy: ESA/Hubble & NASA and S. Smartt (Queen’s University Belfast).

SPLAAT! Holy Onomatopoeia, Batman!

Batman fans, rejoice. Here it is: a compendium of every ZWAPP! KAPOW! BLOOP! and THUNK! from every fight scene in the original 1960s series.

[tube]rT8RkXmyJ9w[/tube]

I think we can all agree that the campy caped crusaders, dastardly villains and limp fight scenes, accompanied by bright onomatopoeiac graphics, guaranteed the show would become an enduring cult classic. Check out the full list here, compiled by the forces for good over at Fast Company.

My favorites:

FLRBBBBB! GLURPP! KAPOW! KER-SPLOOSH! KLONK! OOOOFF! POWIE! QUNCKKK! URKK! ZLONK!

 

Video: Batman (1966):Fight Scenes-Season 1 (Pt.1). Courtesy of corijei v2 / Youtube.

Another Corporate Empire Bites the Dust

Motorola-DynaTAC

Businesses and brands come and they go. Seemingly unassailable corporations, often valued in the tens of billions of dollars (and sometimes more), fall to the incessant march of technological change and, increasingly, to the ever fickle desires of the consumer.

And these monoliths of business last but a blink of an eye when compared with our vast social empires, such as the Roman, Han, Ottoman, Venetian, Sudanese and Portuguese, which persisted for many hundreds — sometimes thousands — of years.

Yet even a few years ago, who would have predicted the demise of the Motorola empire, the company mostly responsible for the advent of the handheld mobile phone? Motorola had been on a downward spiral in recent years, failing in part to capitalize on the shift to smartphones, mobile operating systems and apps. Now its brand is dust. RIP, brick!

From the Guardian:

Motorola, the brand which invented the mobile phone, brought us the iconic “Motorola brick”, and gave us both the first flip-phone and the iconic Razr, is to cease to exist.

Bought from Google by the Chinese smartphone and laptop powerhouse Lenovo in January 2014, Motorola had found success over the past two years. It launched the Moto G in early 2014, which propelled the brand, which had all but disappeared after the Razr, from a near-0% market share to 6% of sales in the UK.

The Moto G kickstarted the reinvigoration of the brand, which saw Motorola ship more than 10m smartphones in the third quarter of 2014, up 118% year-on-year.

But now Lenovo has announced that it will kill off the US mobile phone pioneer’s name. It will keep Moto, the part of Motorola’s product naming that has gained traction in recent years, but Moto smartphones will be branded under Lenovo.

Motorola chief operating officer Rick Osterloh told Cnet that “we’ll slowly phase out Motorola and focus on Moto”.

The Moto line will be joined by Lenovo’s Vibe line in the low end, leaving the fate of the Moto E and G uncertain. The Motorola Mobility division of Lenovo will take over responsibility for the Chinese manufacturer’s entire smartphone range.

Read the entire story here.

Image: Motorola DynaTAC 8000X commercial portable cellular phone, 1983. Courtesy of Motorola.

Meet the Broadband Preacher

Welcome-to-mississippi_I20

This fascinating article follows Roberto Gallardo, an extension professor at Mississippi State University, as he works to bring digital literacy, the internet and other services of our 21st-century electronic age to rural communities across the South. It’s an uphill struggle.

From Wired:

For a guy born and raised in Mexico, Roberto Gallardo has an exquisite knack for Southern manners. That’s one of the first things I notice about him when we meet up one recent morning at a deli in Starkville, Mississippi. Mostly it’s the way he punctuates his answers to my questions with a decorous “Yes sir” or “No sir”—a verbal tic I associate with my own Mississippi upbringing in the 1960s.

Gallardo is 36 years old, with a salt-and-pepper beard, oval glasses, and the faint remnant of a Latino accent. He came to Mississippi from Mexico a little more than a decade ago for a doctorate in public policy. Then he never left.

I’m here in Starkville, sitting in this booth, to learn about the work that has kept Gallardo in Mississippi all these years—work that seems increasingly vital to the future of my home state. I’m also here because Gallardo reminds me of my father.

Gallardo is affiliated with something called the Extension Service, an institution that dates back to the days when America was a nation of farmers. Its original purpose was to disseminate the latest agricultural know-how to all the homesteads scattered across the interior. Using land grant universities as bases of operations, each state’s extension service would deploy a network of experts and “county agents” to set up 4-H Clubs or instruct farmers in cultivation science or demonstrate how to can and freeze vegetables without poisoning yourself in your own kitchen.

State extension services still do all this, but Gallardo’s mission is a bit of an update. Rather than teach modern techniques of crop rotation, his job—as an extension professor at Mississippi State University—is to drive around the state in his silver 2013 Nissan Sentra and teach rural Mississippians the value of the Internet.

In sleepy public libraries, at Rotary breakfasts, and in town halls, he gives PowerPoint presentations that seem calculated to fill rural audiences with healthy awe for the technological sublime. Rather than go easy, he starts with a rapid-fire primer on heady concepts like the Internet of Things, the mobile revolution, cloud computing, digital disruption, and the perpetual increase of processing power. (“It’s exponential, folks. It’s just growing and growing.”) The upshot: If you don’t at least try to think digitally, the digital economy will disrupt you. It will drain your town of young people and leave your business in the dust.

Then he switches gears and tries to stiffen their spines with confidence. Start a website, he’ll say. Get on social media. See if the place where you live can finally get a high-speed broadband connection—a baseline point of entry into modern economic and civic life.

Even when he’s talking to me, Gallardo delivers this message with the straitlaced intensity of a traveling preacher. “Broadband is as essential to this country’s infrastructure as electricity was 110 years ago or the Interstate Highway System 50 years ago,” he says from his side of our booth at the deli, his voice rising high enough above the lunch-hour din that a man at a nearby table starts paying attention. “If you don’t have access to the technology, or if you don’t know how to use it, it’s similar to not being able to read and write.”

These issues of digital literacy, access, and isolation are especially pronounced here in the Magnolia State. Mississippi today ranks around the bottom of nearly every national tally of health and economic well-being. It has the lowest median household income and the highest rate of child mortality. It also ranks last in high-speed household Internet access. In human terms, that means more than a million Mississippians—over a third of the state’s population—lack access to fast wired broadband at home.

Gallardo doesn’t talk much about race or history, but that’s the broader context for his work in a state whose population has the largest percentage of African-Americans (38 percent) of any in the union. The most Gallardo will say on the subject is that he sees the Internet as a natural way to level out some of the persistent inequalities—between black and white, urban and rural—that threaten to turn parts of Mississippi into places of exile, left further and further behind the rest of the country.

And yet I can’t help but wonder how Gallardo’s work figures into the sweep of Mississippi’s history, which includes—looking back over just the past century—decades of lynchings, huge outward migrations, the fierce, sustained defense of Jim Crow, and now a period of unprecedented mass incarceration. My curiosity on this point is not merely journalistic. During the lead-up to the civil rights era, my father worked with the Extension Service in southern Mississippi as well. Because the service was segregated at the time, his title was “negro county agent.” As a very young child, I would travel from farm to farm with him. Now I’m here to travel around Mississippi with Gallardo, much as I did with my father. I want to see whether the deliberate isolation of the Jim Crow era—when Mississippi actively fought to keep itself apart from the main currents of American life—has any echoes in today’s digital divide.

Read the entire article here.

Image: Welcome to Mississippi. Courtesy of WebTV3.

Neck Tingling and ASMR

Google-search-asmr

Ever had that curious tingling sensation at the back and base of your neck? Of course you have. Perhaps you’ve felt it during a particular piece of music, or while watching a key scene in a movie, or when taking in a panorama from the top of a mountain, or on smelling a childhood aroma again. In fact, most people report having felt this sensation, albeit rather infrequently.

But, despite its commonality, very little research exists to help us understand how and why it happens. Psychologists tend to agree that the highly personal and often private nature of the neck-tingling experience makes it difficult to study and hence generalize about. This means, of course, that the internet is rife with hypotheses and pseudo-science. Just try searching for ASMR videos and be (not) surprised by the 2 million+ results.

From the Guardian:

Autonomous Sensory Meridian Response, or ASMR, is a curious phenomenon. Those who experience it often characterise it as a tingling sensation in the back of the head or neck, or another part of the body, in response to some sort of sensory stimulus. That stimulus could be anything, but over the past few years, a subculture has developed around YouTube videos, and their growing popularity was the focus of a video posted on the Guardian this last week. It’s well worth a watch, but I couldn’t help but feel it would have been a bit more interesting if there had been some scientific background in it. The trouble is, there isn’t actually much research on ASMR out there.

To date, only one research paper has been published on the phenomenon. In March last year, Emma Barratt, a graduate student at Swansea University, and Dr Nick Davis, then a lecturer at the same institution, published the results of a survey of some 500 ASMR enthusiasts. “ASMR is interesting to me as a psychologist because it’s a bit ‘weird’,” says Davis, now at Manchester Metropolitan University. “The sensations people describe are quite hard to describe, and that’s odd because people are usually quite good at describing bodily sensation. So we wanted to know if everybody’s ASMR experience is the same, and if people tend to be triggered by the same sorts of things.”

Read the entire story here.

Image courtesy of Google Search.

Please Laugh While You Can

Rationality requires us to laugh at the current state of the U.S. political “conversation”, as Jonathan Jones so rightly reminds us. I put “conversation” in quotes because it’s no longer a dialog, not even a heated debate or argument. Politicians have replaced rational dialog and disagreement over policy with hate speech, fear-mongering, bullying, venom, bigotry and character assassination. And it’s all to the detriment of our democracy.

Those of us who crave a well-reasoned discussion about substantive issues and direction for our country have to gasp with utter incredulity — and then we must laugh.

From Jonathan Jones over at the Guardian:

When a man hoping to be president of the United States can sum up his own country with a photograph of a monogrammed gun and the single-word caption “America”, it may be time for the rest of the world to worry.

Instead they are laughing. Since the Republican nomination hopeful (although not very hopeful) Jeb Bush tweeted a picture of his handgun he has been mocked around the world with images that comically replace that violent symbol with the gentler images that sum up less trigger-happy places – a cup of tea for the UK, a bike for the Netherlands, a curry for Bradford.

The joke’s a bit thin, because what is currently happening in US politics is only funny if you are an alien watching from a spaceship and the fate of the entire planet is just one big laugh to you. For what is Bush trying to achieve with this picture? He’s trying to appeal to the rage and irrationality that have made Donald Trump’s bombastical assault on the White House look increasingly plausible while Bush languishes, a conventional politician swamped by unconventional times.

The centre cannot hold, WB Yeats wrote nearly a century ago, and this photograph shows exactly how off centre things are getting. When Jeb Bush – brother of one warmongering president, son of another, and a governor who sanctioned 21 executions during his tenure in Florida – embodies the centre ground, you know things have got strange. Compared with the strongman politics, explicit bigotry and perversion that a Trump presidency threatens, mere conservatism would be sweet sanity.

But this photograph reveals that that is not on offer. America, says Bush’s Twitter account, is a gun with your name on it. The candidate has his name inscribed on his weapon – Gov Jeb Bush, it says on the barrel. This man is a gun. He’s primed and loaded. You think Trump talks tough? Well, talk is cheap. “Speak softly, and carry a big stick,” said Theodore Roosevelt. Bush has got this gun, see, and he knows how to use it.

Read the entire article here.

Human Bloatware

Most software engineers and IT people are familiar with the term “bloatware”. The word is usually applied to a software application that takes up so much disk space and/or memory that its functional benefits are greatly diminished or rendered useless. Operating systems such as Windows and OS X are often characterized as bloatware — each new version seems to demand ever more disk space (and memory) to accommodate an expanding array of new (often trivial) features with marginal added benefit.

DNA_Structure

But it seems that we humans did not invent such bloat with our technology. Rather, a new genetic analysis shows that humans (and other animals) actually carry their own biological bloatware, through a process which began when molecules of DNA first assembled the genes of the earliest living organisms.

From ars technica:

Eukaryotes like us are more complex than prokaryotes. We have cells with lots of internal structures, larger genomes with more genes, and our genes are more complex. Since there seems to be no apparent evolutionary advantage to this complexity—evolutionary advantage being defined as fitness, not as things like consciousness or sex—evolutionary biologists have spent much time and energy puzzling over how it came to be.

In 2010, Nick Lane and William Martin suggested that because they don’t have mitochondria, prokaryotes just can’t generate enough energy to maintain large genomes. Thus it was the acquisition of mitochondria and their ability to generate cellular energy that allowed eukaryotic genomes to expand. And with the expansion came the many different types of genes that render us so complex and diverse.

Michael Lynch and Georgi Marinov are now proposing a counter offer. They analyzed the bioenergetic costs of a gene and concluded that there is in fact no energetic barrier to genetic complexity. Rather, eukaryotes can afford bigger genomes simply because they have bigger cells.

First they looked at the lifetime energetic requirements of a cell, defined as the number of times that cell hydrolyzes ATP into ADP, a reaction that powers most cellular processes. This energy requirement rose linearly and smoothly with cell size from bacteria to eukaryotes with no break between them, suggesting that complexity alone, independently of cell volume, requires no more energy.

Then they calculated the cumulative cost of a gene—how much energy it takes to replicate it once per cell cycle, how much energy it takes to transcribe it into mRNA, and how much energy it takes to then translate that mRNA transcript into a functional protein. Genes may provide selective advantages, but those must be sufficient to overcome and justify these energetic costs.

At the levels of replication (copying the DNA) and transcription (making an RNA copy), eukaryotic genes are more costly than prokaryotic genes because they’re bigger and require more processing. But even though these costs are higher, they take up proportionally less of the total energy budget of the cell. That’s because bigger cells take more energy to operate in general (as we saw just above), while things like copying DNA only happens once per cell division. Bigger cells help here, too, as they divide less often.
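To make that proportional-cost argument concrete, here is a toy calculation of my own (illustrative numbers, not Lynch and Marinov’s measurements): if a cell’s lifetime energy budget scales roughly linearly with its volume, then even a gene that costs ten times more to copy and express can claim a far smaller slice of a much larger cell’s budget. The function name and all figures below are hypothetical.

```python
# Toy illustration of the relative-cost argument (hypothetical numbers, not the
# paper's data): a bigger cell has a bigger lifetime ATP budget, so a costlier
# gene can still represent a smaller *fraction* of that budget.

def gene_cost_fraction(gene_cost_atp: float, cell_volume_um3: float,
                       budget_per_um3_atp: float) -> float:
    """Fraction of a cell's lifetime ATP budget spent on a single gene."""
    total_budget = cell_volume_um3 * budget_per_um3_atp  # budget ~ linear in volume
    return gene_cost_atp / total_budget

# Hypothetical figures chosen only to show the scaling.
prokaryote = gene_cost_fraction(gene_cost_atp=1e7, cell_volume_um3=1.0,
                                budget_per_um3_atp=1e10)
eukaryote = gene_cost_fraction(gene_cost_atp=1e8, cell_volume_um3=1_000.0,
                               budget_per_um3_atp=1e10)

print(f"prokaryote: {prokaryote:.0e} of the budget per gene")  # 1e-03
print(f"eukaryote:  {eukaryote:.0e} of the budget per gene")   # 1e-05
```

In this sketch the eukaryotic gene is ten times costlier in absolute terms, yet consumes a hundredth of the budget share, which is the gist of the quoted argument.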

Read the entire article here.

Documenting the Self

Samuel_Pepys

Is Nicolas Felton the Samuel Pepys of our digital age?

They both chronicled their observations over a period of 10 years, but separated by 345 years. However, that’s where the similarity between the two men ends.

Samuel Pepys was a 17th-century member of the British Parliament and naval bureaucrat, famous for his decade-long private diary. Pepys kept detailed personal notes from 1660 to 1669. The diary was subsequently published in the 19th century and is now regarded as one of the principal sources of information on the Restoration period (the return of the monarchy under Charles II). Many a British school kid [myself included] has been exposed to Pepys’ observations of momentous events, including his tales of the plague and the Great Fire of London.

Nicolas Felton, a graphic designer and ex-Facebook employee, cataloged his life from 2005 to 2015. Based in New York, Felton began obsessively recording the minutiae of his life in 2005. He first tracked his locations and the time spent in each, followed by his music-listening habits. Then he began counting his emails, correspondence, calendar entries and photos. Felton eventually compiled his detailed digital tracks into a visually fascinating annual Feltron Report.
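The basic mechanics of such a report are simple tallying. Below is a tiny, purely illustrative sketch (my own toy example, not Felton’s actual pipeline) of rolling timestamped personal records up into annual and per-category counts.

```python
# Toy, Feltron-style roll-up: count logged events by year and by category
# from a list of timestamped records. Categories and dates are made up.

from collections import Counter
from datetime import date

log = [
    (date(2014, 3, 2), "email"),
    (date(2014, 3, 2), "photo"),
    (date(2014, 7, 9), "calendar"),
    (date(2015, 1, 15), "email"),
    (date(2015, 1, 16), "location:Brooklyn"),
]

by_year = Counter(d.year for d, _ in log)                     # events per year
by_category = Counter(kind.split(":")[0] for _, kind in log)  # events per category

print(by_year)      # Counter({2014: 3, 2015: 2})
print(by_category)  # Counter({'email': 2, 'photo': 1, 'calendar': 1, 'location': 1})
```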

So, Felton is certainly no Pepys, but his data trove remains interesting nonetheless — for different reasons. Pepys recorded history during a tumultuous time in England; his very rare, detailed first-person account across an entire decade has no parallel. His diary is now an invaluable literary chronicle for scholars and history buffs.

Our world is rather different today. Our technologies now enable institutions and individuals to record and relate their observations ad nauseam. Thus Felton’s data is not unique per se, though his decade-long obsession certainly provides a quantitative trove: one valuable not so much for historical reasons as to those who study our tracks and needs, and market to us.

Read Samuel Pepys’ diary here. Read more about Nicolas Felton here.

Image: Samuel Pepys by John Hayls, oil on canvas, 1666. National Portrait Gallery. Public Domain.

MondayMap: Search by State

This treasure of a map shows the most popular Google search terms by state in 2015.

Google-search-by-state-2015

The vastly different searches show how the United States really is a collection of very diverse and loosely federated communities. The US may be a great melting pot, but down at the state level its residents seem to care about very different things.

For instance, while Floridians’ favorite search was “concealed weapons permit”, residents of Mississippi went rather dubiously for “Ashley Madison”, and Oklahoma’s top search was “Caitlyn Jenner”. Kudos to my home state: residents there put aside politics, reality TV, guns and other inanities by searching most for “water on mars”. Similarly, citizens of New Mexico looked far beyond their borders by searching most for “Pluto”.

And I have to scratch my head over why New York State cares most about “Charlie Sheen HIV” and Kentucky prefers “Dusty Rhodes”, while Washington State went for “Leonard Nimoy”.

The map was put together by the kind people at Estately. You can read more fascinating state-by-state search rankings here.

The Internet of Flow

Time-based structures of information and flowing data — on a global scale — will increasingly dominate the Web. Eventually, this flow is likely to transform how we organize, consume and disseminate our digital knowledge. While we see evidence of this in effect today, in blogs, Facebook’s wall and timeline and, most basically, via Twitter, the long-term implications of this fundamentally new organizing principle have yet to be fully understood — especially in business.

For a brief snapshot of a possible, and likely, future of the Internet I turn to David Gelernter. He is Professor of Computer Science at Yale University, an important thinker and author who has helped shape the fields of parallel computing, artificial intelligence (AI) and networking. Many of Gelernter’s works, some written over 20 years ago, offer a remarkably prescient view, most notably: Mirror Worlds (1991), The Muse In The Machine (1994) and The Second Coming – A Manifesto (1999).

From WSJ:

People ask where the Web is going; it’s going nowhere. The Web was a brilliant first shot at making the Internet usable, but it backed the wrong horse. It chose space over time. The conventional website is “space-organized,” like a patterned beach towel—pineapples upper left, mermaids lower right. Instead it might have been “time-organized,” like a parade—first this band, three minutes later this float, 40 seconds later that band.

We go to the Internet for many reasons, but most often to discover what’s new. We have had libraries for millennia, but never before have we had a crystal ball that can tell us what is happening everywhere right now. Nor have we ever had screens, from room-sized to wrist-sized, that can show us high-resolution, constantly flowing streams of information.

Today, time-based structures, flowing data—in streams, feeds, blogs—increasingly dominate the Web. Flow has become the basic organizing principle of the cybersphere. The trend is widely understood, but its implications aren’t.

Working together at Yale in the mid-1990s, we forecast the coming dominance of time-based structures and invented software called the “lifestream.” We had been losing track of our digital stuff, which was scattered everywhere, across proliferating laptops and desktops. Lifestream unified our digital life: Each new document, email, bookmark or video became a bead threaded onto a single wire in the Cloud, in order of arrival.

To find a bead, you search, as on the Web. Or you can watch the wire and see each new bead as it arrives. Whenever you add a bead to the lifestream, you specify who may see it: everyone, my friends, me. Each post is as private as you make it.

Where do these ideas lead? Your future home page—the screen you go to first on your phone, laptop or TV—is a bouquet of your favorite streams from all over. News streams are blended with shopping streams, blogs, your friends’ streams, each running at its own speed.

This home stream includes your personal stream as part of the blend—emails, documents and so on. Your home stream is just one tiny part of the world stream. You can see your home stream in 3-D on your laptop or desktop, in constant motion on your phone or as a crawl on your big TV.

By watching one stream, you watch the whole world—all the public and private events you care about. To keep from being overwhelmed, you adjust each stream’s flow rate when you add it to your collection. The system slows a stream down by replacing many entries with one that lists short summaries—10, 100 or more.

An all-inclusive home stream creates new possibilities. You could build a smartwatch to display the stream as it flows past. It could tap you on the wrist when there’s something really important onstream. You can set something aside or rewind if necessary. Just speak up to respond to messages or add comments. True in-car computing becomes easy. Because your home stream gathers everything into one line, your car can read it to you as you drive.
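The quoted description maps naturally onto a very simple data model. Below is a minimal sketch, in Python, of a lifestream as a single time-ordered sequence of “beads” with per-item visibility and a crude flow-rate control; the class names, methods and visibility labels are my own illustrative assumptions, not Gelernter’s actual software.

```python
# Minimal "lifestream" sketch: beads arrive on one chronological wire; views,
# searches and summaries are derived from that wire rather than stored separately.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Bead:
    created: datetime
    content: str
    visibility: str = "me"   # "everyone", "friends", or "me"

@dataclass
class Lifestream:
    beads: List[Bead] = field(default_factory=list)

    def add(self, content: str, visibility: str = "me") -> Bead:
        bead = Bead(datetime.now(), content, visibility)
        self.beads.append(bead)          # beads are threaded on in order of arrival
        return bead

    def search(self, term: str) -> List[Bead]:
        return [b for b in self.beads if term.lower() in b.content.lower()]

    def view(self, viewer: str = "stranger", max_items: int = 10) -> List[str]:
        """Newest-first view; overflow is collapsed into one summary entry."""
        allowed = {
            "stranger": {"everyone"},
            "friend": {"everyone", "friends"},
            "owner": {"everyone", "friends", "me"},
        }[viewer]
        visible = [b for b in reversed(self.beads) if b.visibility in allowed]
        head = [b.content for b in visible[:max_items]]
        if len(visible) > max_items:
            head.append(f"... plus {len(visible) - max_items} older items, summarized")
        return head

stream = Lifestream()
stream.add("Bookmark: WSJ essay on lifestreams", visibility="everyone")
stream.add("Draft email to the editor")   # private ("me") by default
print(stream.view(viewer="friend"))       # shows only the public bookmark
```

The essential design choice, as the essay suggests, is that everything is appended to one chronological stream; home pages, summaries and slower-flowing views are all just filters over it.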

Read the entire article here.

 

A Gravitational Wave Comes Ashore

ligo-gravitational-waves-detection

On February 11, 2016, a historic day for astronomers the world over, scientists announced a monumental discovery, one actually made on September 14, 2015. Thank you, LIGO: the era of gravitational wave (G-Wave) astronomy has begun.

One hundred years after a prediction from Einstein’s theory of general relativity, scientists have their first direct evidence of gravitational waves. These waves are ripples in the fabric of spacetime itself, rather than the movement of fields and particles, such as electromagnetic radiation. The ripples show up when gravitationally immense bodies warp the structure of space in which they sit, such as through collisions or acceleration.

ligo-hanford-aerial

As you might imagine, observing such disturbances here on Earth, across distances of hundreds of millions, even billions, of light-years, requires not only vastly powerful forces at one end but immensely sensitive instruments at the other. The detector credited with the discovery in this case is the Laser Interferometer Gravitational-Wave Observatory, or LIGO. It is so sensitive that it can detect a change in the length of its measurement arms, tracked with infrared laser beams, 10,000 times smaller than the width of a proton. LIGO is operated by Caltech and MIT and supported by the U.S. National Science Foundation.
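As a rough sanity check on that figure (my own arithmetic with round numbers, not LIGO’s published specifications): with interferometer arms of length $L \approx 4\,\mathrm{km}$ and a strain sensitivity on the order of $h \sim 10^{-22}$ to $10^{-23}$, the detectable change in arm length is

\[
\Delta L \;=\; h\,L \;\approx\; \left(10^{-22}\ \text{to}\ 10^{-23}\right) \times 4\times10^{3}\,\mathrm{m} \;\approx\; 4\times10^{-19}\ \text{to}\ 4\times10^{-20}\,\mathrm{m},
\]

a few thousand to a few tens of thousands of times smaller than a proton’s diameter of roughly $1.7\times10^{-15}$ m, which is consistent with the one-ten-thousandth-of-a-proton figure.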

Prof Kip Thorne, one of the founders of LIGO, said that until now astronomers had looked at the universe as if on a calm sea. That has now changed. He adds:

“The colliding black holes that produced these gravitational waves created a violent storm in the fabric of space and time, a storm in which time speeded up and slowed down, and speeded up again, a storm in which the shape of space was bent in this way and that way.”

And, as Prof Stephen Hawking remarked:

“Gravitational waves provide a completely new way of looking at the universe. The ability to detect them has the potential to revolutionise astronomy. This discovery is the first detection of a black hole binary system and the first observation of black holes merging.”

Congratulations to the many hundreds of engineers, technicians, researchers and theoreticians who have collaborated on this ground-breaking experiment. Particular congratulations go to LIGO’s three principal instigators: Rainer Weiss, Kip Thorne and Ronald Drever.

This discovery paves the way for deeper understanding of our cosmos and lays the foundation for a new and rich form of astronomy through gravitational observations.

Galileo’s first telescopes opened our eyes to the visual splendor of our solar system and its immediate neighborhood. More recently, radio-wave, x-ray and gamma-ray astronomy have allowed us to discover wonders further afield: star-forming nebulae, neutron stars, black holes, active galactic nuclei, the Cosmic Microwave Background (CMB). Now, through LIGO and its increasingly sensitive descendants we are likely to make even more breathtaking discoveries, some of which, courtesy of gravitational waves, may let us peer at the very origin of the universe itself — the Big Bang.

How brilliant is that!

Image 1: The historic detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO) is shown in this plot during a press conference in Washington, D.C. on Feb. 11, 2016. Courtesy: National Science Foundation.

Image 2: LIGO Laboratory operates two detector sites 1,800 miles apart: one near Hanford in eastern Washington, and another near Livingston, Louisiana. This photo shows the Hanford detector. Courtesy of LIGO Caltech.

 

Pass the Nicotinamide Adenine Dinucleotide

NAD-molecule

For those of us seeking to live another 100 years or more, the news and/or hype over the last decade belonged to resveratrol. The molecule is believed to improve the functioning of specific biochemical pathways in the cell, which may improve cell repair and hinder the aging process. Resveratrol is found — in trace amounts — in grape skin (and hence wine), blueberries and raspberries. While proof remains scarce, this has not stopped the public from consuming large quantities of wine and berries.

Ironically, one would need to ingest such large amounts of resveratrol to replicate the benefits found in mouse studies that the wine alone would probably cause irreversible liver damage before any health benefits appeared. Oh well.

So, on to the next big thing, since aging cannot wait. It’s called NAD, or nicotinamide adenine dinucleotide. NAD performs several critical roles in the cell, one of which is energy metabolism. As we age, our cells show diminishing levels of NAD, and this is possibly linked to mitochondrial deterioration. Mitochondria are the cells’ energy factories, so keeping our mitochondria humming along is critical. Thus, hordes of researchers are now experimenting with NAD and related substances to see if they hold promise in postponing cellular demise.

From Scientific American:

Whenever I see my 10-year-old daughter brimming over with so much energy that she jumps up in the middle of supper to run around the table, I think to myself, “those young mitochondria.”

Mitochondria are our cells’ energy dynamos. Descended from bacteria that colonized other cells about 2 billion years ago, they get flaky as we age. A prominent theory of aging holds that decaying of mitochondria is a key driver of aging. While it’s not clear why our mitochondria fade as we age, evidence suggests that it leads to everything from heart failure to neurodegeneration, as well as the complete absence of zipping around the supper table.

Recent research suggests it may be possible to reverse mitochondrial decay with dietary supplements that increase cellular levels of a molecule called NAD (nicotinamide adenine dinucleotide). But caution is due: While there’s promising test-tube data and animal research regarding NAD boosters, no human clinical results on them have been published.

NAD is a linchpin of energy metabolism, among other roles, and its diminishing level with age has been implicated in mitochondrial deterioration. Supplements containing nicotinamide riboside, or NR, a precursor to NAD that’s found in trace amounts in milk, might be able to boost NAD levels. In support of that idea, half a dozen Nobel laureates and other prominent scientists are working with two small companies offering NR supplements.

The NAD story took off toward the end of 2013 with a high-profile paper by Harvard’s David Sinclair and colleagues. Sinclair, recall, achieved fame in the mid-2000s for research on yeast and mice that suggested the red wine ingredient resveratrol mimics anti-aging effects of calorie restriction. This time his lab made headlines by reporting that the mitochondria in muscles of elderly mice were restored to a youthful state after just a week of injections with NMN (nicotinamide mononucleotide), a molecule that naturally occurs in cells and, like NR, boosts levels of NAD.

It should be noted, however, that muscle strength was not improved in the NMN-treated mice; the researchers speculated that one week of treatment wasn’t enough to do that, despite signs that their age-related mitochondrial deterioration was reversed.

NMN isn’t available as a consumer product. But Sinclair’s report sparked excitement about NR, which was already on the market as a supplement called Niagen. Niagen’s maker, ChromaDex, a publicly traded Irvine, Calif., company, sells it to various retailers, which market it under their own brand names. In the wake of Sinclair’s paper, Niagen was hailed in the media as a potential blockbuster.

In early February, Elysium Health, a startup cofounded by Sinclair’s former mentor, MIT biologist Lenny Guarente, jumped into the NAD game by unveiling another supplement with NR. Dubbed Basis, it’s only offered online by the company. Elysium is taking no chances when it comes to scientific credibility. Its website lists a dream team of advising scientists, including five Nobel laureates and other big names such as the Mayo Clinic’s Jim Kirkland, a leader in geroscience, and biotech pioneer Lee Hood. I can’t remember a startup with more stars in its firmament.

A few days later, ChromaDex reasserted its first-comer status in the NAD game by announcing that it had conducted a clinical trial demonstrating that a single dose of NR resulted in statistically significant increases in NAD in humans, the first evidence that supplements could really boost NAD levels in people. Details of the study won’t be out until it’s reported in a peer-reviewed journal, the company said. (ChromaDex also brandishes Nobel credentials: Roger Kornberg, a Stanford professor who won the Chemistry prize in 2006, chairs its scientific advisory board. He’s the son of Nobel laureate Arthur Kornberg, who, ChromaDex proudly notes, was among the first scientists to study NR some 60 years ago.)

The NAD findings tie into the ongoing story about enzymes called sirtuins, which Guarente, Sinclair and other researchers have implicated as key players in conferring the longevity and health benefits of calorie restriction. Resveratrol, the wine ingredient, is thought to rev up one of the sirtuins, SIRT1, which appears to help protect mice on high doses of resveratrol from the ill effects of high-fat diets. A slew of other health benefits have been attributed to SIRT1 activation in hundreds of studies, including several small human trials.

Here’s the NAD connection: In 2000, Guarente’s lab reported that NAD fuels the activity of sirtuins, including SIRT1; the more NAD there is in cells, the more SIRT1 does beneficial things. One of those things is to induce formation of new mitochondria. NAD can also activate another sirtuin, SIRT3, which is thought to keep mitochondria running smoothly.

Read the entire article here.

Image: Structure of nicotinamide adenine dinucleotide, oxidized (NAD+). Courtesy of Wikipedia. Public Domain.