The Académie Française (French Academy) is the country’s foremost national watchdog of the French language. It has been working to protect and preserve the language for over 380 years — mostly, I suspect, against the unceasing onslaught of English; think of words like “le week-end”.
Interestingly enough, members of the Académie Française proposed and accepted around 2,400 changes to the language — mostly spelling revisions — back in 1990. Now, with mainstream French newspapers and TV networks having taken up the story, social media is abuzz with commentary; the French public is weighing in on the proposals, and many traditionalists and language purists don’t like what they see.
Exhibit A: the proposed loss of the circumflex accent (ˆ) that hovers above certain vowels. So, maîtresse becomes maitresse (mistress or female teacher); coût becomes cout (cost).
Exhibit B: oignon is to become ognon (onion).
Sacré bleu, I don’t like it either!
From the Guardian:
French linguistic purists have voiced online anger at the loss of one of their favourite accents – the pointy little circumflex hat (ˆ) that sits on top of certain vowels.
Changes to around 2,400 French words to simplify them for schoolchildren, such as allowing the word for onion to be spelled ognon as well as the traditional oignon, have brought accusations the country’s Socialist government is dumbing down the language.
Nothing provokes a Gallic row more than changes to the language of Molière, but the storm took officials by surprise as the spelling revisions had been suggested by the Académie Française, watchdogs of the French language, and unanimously accepted by its members as long ago as 1990.
The aim was to standardise and simplify certain quirks in the written language, making it easier to learn (among them changing chariot to charriot to harmonise with charrette, both words for a type of cart), and the regrouping of compound nouns like porte-monnaie/portemonnaie (purse), extra-terrestres/extraterrestres and week-end/weekend, to do away with the hyphen.
While the “revised spelling list” was not obligatory, dictionaries were advised to carry both old and new spellings, and schools were instructed to use the new versions but accept both as correct.
The reforms provoked a #JeSuisCirconflexe campaign (derived from the #JeSuisCharlie hashtag) on Twitter. As the row spread across the internet and social networks, some wondered why the reforms, decided 26 years ago, had suddenly become such an issue.
In 2008, advice from the education ministry suggested the new spelling rules were “the reference” to be used, but it appears few people took notice. Last November, the changes were mentioned again in another ministry document about “texts following the spelling changes … approved by the Académie Française and published in the French Republic Official Journal on 6 December 1990”. Again, the news went unremarked.
It was only when a report by television channel TF1 appeared on Wednesday this week that the ognon went pear-shaped.
Ready? This one may come as a shock to some. Yet another body of research shows that children raised in religious families are less likely to be selfless and generous towards others. Yes, that’s right, morality and altruism do not automatically spring forth from religiosity. Increasingly, it looks like altruism is a much deeper human (and animal) trait, and indeed studies show that altruistic behaviors are common in primates and other animals.
From Scientific American:
Organized religion is a cornerstone of spiritual community and culture around the world. Religion, especially religious education, also attracts secular support because many believe that religion fosters morality. A majority of the United States believes that faith in a deity is necessary to being a moral person.
In principle, religion’s emphasis on morality can smooth wrinkles out of the social fabric. Along those lines, believers are often instructed to act selflessly towards others. Islam places an emphasis on charity and alms-giving, Christianity on loving your neighbor as yourself. Taoist ethics, derived from the qualities of water, include the principle of selflessness.
However, new research conducted in six countries around the world suggests that a religious upbringing may actually yield children who are less altruistic. Over 1,000 children ages five to twelve took part in the study, from the United States, Canada, Jordan, Turkey, South Africa, and China. By finding that children raised religiously are less altruistic in the laboratory, the study alerts us to the possibility that religion might not have the wholesome effects we expect on the development of morality. The social practice of religion can complicate the precepts of a religious text. But in order to interpret these findings, we have to first look at how to test morality.
In an experiment snappily named the dictator game, a child designated “dictator” is tested for altruistic tendencies. This dictator child is conferred with great power to decide whether to share stickers with others. Researchers present the child with thirty stickers and instruct her to take ten favorite stickers. The researchers carefully mention that there isn’t time to play this game with everyone, setting up the main part of the experiment: to share or not to share. The child is given two envelopes and asked whether she will share stickers with other children at the school who cannot play the game. While the researcher faces the wall, the child can slip some stickers into the donation envelope and some into the other envelope to keep.
As the researchers expected, younger children were less likely to share stickers than older children. Also consistent with previous studies, children from a wealthier socioeconomic status shared more. More surprising was the tendency of children from religious households to share less than those from nonreligious backgrounds. When separated and analyzed by specific religion, the finding remained: children from both Christian and Muslim families on average shared less than nonreligious children. (Other religious designations were not represented in large enough numbers for separate statistical comparison.) Older kids from all backgrounds shared more than younger ones, but the tendency for religious children to share less than similar-aged children became more pronounced with age. The authors think this could be due to cumulative effects of time spent growing up in a religious household. While the large number of subjects strengthens the finding of a real difference between the groups of children, the actual disparity in typical sharing was about one sticker. We need to know if the gap in sticker sharing is meaningful in the real world.
Image: Religious symbols from the top nine organized faiths of the world. From left to right: 1st Row: Christian Cross, Jewish Star of David, Hindu Aumkar 2nd Row: Islamic Star and crescent, Buddhist Wheel of Dharma, Shinto Torii 3rd Row: Sikh Khanda, Bahá’í star, Jain Ahimsa Symbol. Courtesy: Rursus / Wikipedia. Public Domain.
Google became a monstrously successful technology company by inventing a solution to index and search content scattered across the Web, and then monetizing the search results through contextual ads. Since its inception the company has relied on increasingly sophisticated algorithms for indexing mountains of information and then serving up increasingly relevant results. These algorithms are based on a secret sauce that ranks the relevance of a webpage by evaluating its content, structure and relationships with other pages. They are defined and continuously improved by technologists and encoded into software by teams of engineers.
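The “secret sauce” of ranking pages by their relationships with other pages is rooted in the PageRank idea. Below is a minimal power-iteration sketch; the three-page link graph, damping factor, and iteration count are illustrative assumptions, not Google’s production algorithm, which layers many additional signals on top.

```python
# Minimal PageRank by power iteration (illustrative sketch only).

def pagerank(links, damping=0.85, iters=50):
    # links: page -> list of pages it links to
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share  # each page passes rank to its targets
        rank = new
    return rank

# Hypothetical three-page web: "c" is linked to by both "a" and "b".
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Here “c” ends up with the highest rank because two pages link to it: a page’s importance flows from the importance of the pages pointing at it.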
But as is the case in many areas of human endeavor, the underlying search engine technology and its teams of human designers and caregivers are being replaced by newer, better technology. In this case the better technology is based on artificial intelligence (AI), and it doesn’t rely on humans. It is based on machine or deep learning and neural networks — a combination of hardware and software that increasingly mimics the human brain in its ability to aggregate and filter information, decipher patterns and infer meaning.
[I’m sure it will not be long before yours truly is replaced by a bot.]
Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.
Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.
This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.
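As a deliberately tiny illustration of what “learning from data” means, the sketch below trains a single artificial neuron, a perceptron, to reproduce the logical AND function from four examples. This toy is an assumption for demonstration, not RankBrain: deep neural networks stack many layers of such units and train them with gradient descent rather than the simple perceptron rule.

```python
# A single artificial neuron (perceptron) learning logical AND from examples.
# Toy illustration only; real deep nets use many layers and gradient descent.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

def predict(x):
    # Fire (output 1) when the weighted sum of inputs exceeds the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(25):  # a few passes over the examples suffice to converge
    for x, target in data:
        err = target - predict(x)  # perceptron update rule
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
```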
Read the following sentence and you’ll conclude that this person is stark raving mad.
Writer Jenna Woginrich jettisoned her smartphone and lived 18 months without mobile calls and without texting, status updates and alerts.
Now read her complete story, excerpted below, and you’ll realize that after 18 months without a smartphone she is perfectly sane, more balanced, less stressed and generally more human.
From Jenna Woginrich via the Guardian:
The phone rings: it’s my friend checking to see if I can pick her up on the way to a dinner party. I ask her where she is and as she explains, I reach as far as I can across the countertop for a pen. I scribble the address in my trusty notebook I keep in my back pocket. I tell her I’ll be at her place in about 20 minutes, give or take a few. Then I hang up. Literally.
I physically take the handset receiver away from my ear and hang it on the weight-triggered click switch that cuts off my landline’s dial tone.
I take my laptop, Google the address, add better directions to my notes and head outside to my 1989 pick-up truck (whose most recent technological feature is a cassette player) and drive over. If I get lost on the way, I’ll need to ask someone for directions. If she changes her plans, she won’t be able to tell me or cancel at a moment’s notice. If I crash on the way, I won’t be calling 911.
I’m fine with all of this. As you guessed by now, I haven’t had a cellphone for more than 18 months.
I didn’t just cancel cellular service and keep the smartphone for Wi-Fi fun, nor did I downgrade to a flip phone to “simplify”; I opted out entirely. There is no mobile phone in my life, in any form, at all.
Arguably, there should be. I’m a freelance writer and graphic designer with many reasons to have a little computer in my holster, but I don’t miss it. There are a dozen ways to contact me between email and social media. When I check in, it’s on my terms. No one can interrupt my bad singing of Hooked on a Feeling with a text message. It’s as freeing as the first night of a vacation.
“My phone” has become “the phone”. It’s no longer my personal assistant; it has reverted back to being a piece of furniture – like “the fridge” or “the couch”, two other items you also wouldn’t carry around on your butt.
I didn’t get rid of it for some hipster-inspired luddite ideal or because I couldn’t afford it. I cut myself off because my life is better without a cellphone. I’m less distracted and less accessible, two things I didn’t realize were far more important than instantly knowing how many movies Kevin Kline’s been in since 2010 at a moment’s notice. I can’t be bothered unless I choose to be. It makes a woman feel rich.
Whether or not you subscribe to the idea that the death penalty is just [I do not], you will surely find these final utterances moving — time for some reflection.
From the Independent:
Psychologists have analysed the last words of inmates who were condemned to death in Texas.
In a new paper, published in Frontiers in Psychology, researchers Dr. Sarah Hirschmüller and Dr. Boris Egloff used a database of last statements of inmates on death row and found the majority of the statements to be positive.
The researchers theorise that the inmates, whose average age in the current dataset is just over 39, expressed positive sentiments because their minds were working in overdrive to keep them from fearing their current situation.
This is called ‘Terror-Management Theory’ (TMT). The concept is that people search for meaning when confronted with terror in a bid to maintain self-esteem and that “individuals employ a wide range of cognitive and behavioural efforts to regulate the anxiety that mortality salience evokes.”
Apparently there is some depth to ex-governor of Alaska Sarah Palin’s unintelligible vocalizations. According to Anna North, editor of the cultural blog at the NYT, Palin’s speech patterns are actually quite complex, reminiscent of the Latin oratory of ancient Rome. [Do I detect some tongue-in-cheekiness?] Oh, ignotum per ignotius!
Please make up your own mind. From the NYT:
Sarah Palin has been mocked a lot for the way she talks, especially in her strange and rambling endorsement speech for Donald Trump. But her speeches on the campaign trail aren’t simple; they are actually incredibly complicated.
Her unusual style was on display at a Trump rally on Monday afternoon in Cedar Rapids, Iowa. “When both parties, the machines involved, when both of them hate you,” she said at one point, “then you know America loves you and we do love he who will be the next president of the United States of America, Donald J. Trump!”
Let’s break that last part down: “We” love not just Donald Trump, or even just Donald J. Trump, but “he who will be the next president of the United States of America.”
Mrs. Palin relies heavily on this particular kind of dependent clause. “He is one who would know to negotiate,” she said of Donald Trump in her speech endorsing him on Jan. 19. Later in that speech, she spoke of “our own G.O.P. machine, the establishment, they who would assemble the political landscape.”
Maybe Mrs. Palin or her speechwriters think the convoluted sentence structure makes her sound smart. Maybe they think it makes her sound heroic, like the orators of the past. Or maybe all those extra clauses are just a really good way to load up a sentence with praise — or insults. Here’s Mrs. Palin using both a dependent clause and a participial phrase to attack President Obama on Jan. 19:
And he, who would negotiate deals, kind of with the skills of a community organizer maybe organizing a neighborhood tea, well, he deciding that, “No, America would apologize as part of the deal,” as the enemy sends a message to the rest of the world that they capture and we kowtow, and we apologize, and then, we bend over and say, “Thank you, enemy.”
I honestly am not sure what’s going on in this sentence. What I do know is that Sarah Palin has this in common with Roman orators: She loves to talk trash.
While I no longer live in London, I grew up there and still have a special affection for the city. I’m even attached to its famed Tube map (subway for my US readers). So I found this rendition rather fascinating — a map of average house prices at and around each tube station. No surprise: house prices on and inside the Central Line (red) are the highest, with the lowest hovering around £500,000 (roughly $710,000). Read more about this map here.
Map courtesy of eMoov with data provided by Zoopla.
In the US alone schizophrenia affects around 2 million people. Symptoms of the psychiatric disorder usually include hallucinations, delusional thinking and paranoia. While there are a number of drugs used to treat its symptoms, and psychotherapy to address milder forms, nothing as yet has been able to address its underlying cause(s). Hence the excitement.
Scientists reported on Wednesday that they had taken a significant step toward understanding the cause of schizophrenia, in a landmark study that provides the first rigorously tested insight into the biology behind any common psychiatric disorder.
More than two million Americans have a diagnosis of schizophrenia, which is characterized by delusional thinking and hallucinations. The drugs available to treat it blunt some of its symptoms but do not touch the underlying cause.
The finding, published in the journal Nature, will not lead to new treatments soon, experts said, nor to widely available testing for individual risk. But the results provide researchers with their first biological handle on an ancient disorder whose cause has confounded modern science for generations. The finding also helps explain some other mysteries, including why the disorder often begins in adolescence or young adulthood.
“They did a phenomenal job,” said David B. Goldstein, a professor of genetics at Columbia University who has been critical of previous large-scale projects focused on the genetics of psychiatric disorders. “This paper gives us a foothold, something we can work on, and that’s what we’ve been looking for now, for a long, long time.”
The researchers pieced together the steps by which genes can increase a person’s risk of developing schizophrenia. That risk, they found, is tied to a natural process called synaptic pruning, in which the brain sheds weak or redundant connections between neurons as it matures. During adolescence and early adulthood, this activity takes place primarily in the section of the brain where thinking and planning skills are centered, known as the prefrontal cortex. People who carry genes that accelerate or intensify that pruning are at higher risk of developing schizophrenia than those who do not, the new study suggests.
Some researchers had suspected that the pruning must somehow go awry in people with schizophrenia, because previous studies showed that their prefrontal areas tended to have a diminished number of neural connections, compared with those of unaffected people. The new paper not only strongly supports that this is the case, but also describes how the pruning probably goes wrong and why, and identifies the genes responsible: People with schizophrenia have a gene variant that apparently facilitates aggressive “tagging” of connections for pruning, in effect accelerating the process.
The research team began by focusing on a location on the human genome, the MHC, which was most strongly associated with schizophrenia in previous genetic studies. On a bar graph — called a Manhattan plot because it looks like a cluster of skyscrapers — the MHC looms highest.
Using advanced statistical methods, the team found that the MHC locus contained four common variants of a gene called C4, and that those variants produced two kinds of proteins, C4-A and C4-B.
The team analyzed the genomes of more than 64,000 people and found that people with schizophrenia were more likely to have the overactive forms of C4-A than control subjects. “C4-A seemed to be the gene driving risk for schizophrenia,” Dr. McCarroll said, “but we had to be sure.”
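As a hedged illustration of the kind of case-control comparison behind a claim like “people with schizophrenia were more likely to have the overactive forms of C4-A”: one can tabulate variant carriers among cases and controls, then compute an odds ratio with a confidence interval. The counts below are invented for demonstration and bear no relation to the study’s actual data.

```python
from math import exp, log, sqrt

# Hypothetical 2x2 case-control table (made-up counts, NOT the study's data):
#                carriers   non-carriers
#   cases          1200         800
#   controls        900        1100
carriers_cases, noncarriers_cases = 1200, 800
carriers_controls, noncarriers_controls = 900, 1100

# Odds ratio: odds of carrying the variant among cases vs among controls.
odds_ratio = (carriers_cases * noncarriers_controls) / (
    noncarriers_cases * carriers_controls)

# 95% confidence interval via the log-odds standard error (Woolf's method).
se = sqrt(1 / carriers_cases + 1 / noncarriers_cases
          + 1 / carriers_controls + 1 / noncarriers_controls)
ci_low = exp(log(odds_ratio) - 1.96 * se)
ci_high = exp(log(odds_ratio) + 1.96 * se)
# An interval that excludes 1.0 indicates a statistically reliable association.
```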
In a recent opinion column, William Irwin, professor of philosophy at King’s College, summarizes an approach to accepting the notion of free will rather than believing in it. While I’d eventually like to see an explanation for free will and morality in biological and chemical terms — beyond metaphysics — I will (or may, if free will does not exist) for the time being have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being — rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious, and our higher reasoning functions rationalize it post factum in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.
From the NYT:
Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.
Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.
The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher, Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.
Following a fictionalist account of morality would mean that we would accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.
Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.
Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.
William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.
I find myself agreeing with columnist Oliver Burkeman over at the Guardian that we need to carefully manage our access to the 24/7 news cycle. Our news media has learned to thrive on hyperbole and sensationalism, which — let’s face it — tends to be mostly negative. This unending and unnerving stream of gloom and doom tends to make us believe that we are surrounded by more badness than there actually is. I have to believe that most of the 7 billion+ personal stories each day that we could be hearing about — however mundane — are likely to not be bad or evil. So, while it may not be wise to switch off cable or satellite news completely, we should consider a more measured, and balanced, approach to the media monster.
From the Guardian:
A few days before Christmas, feeling rather furtive about it, I went on a media diet: I quietly unsubscribed from, unfollowed or otherwise disconnected from several people and news sources whose output, I’d noticed, did nothing but bring me down. This felt like defeat. I’ve railed against the popular self-help advice that you should “give up reading the news” on the grounds that it’s depressing and distracting: if bad stuff’s happening out there, my reasoning goes, I don’t want to live in an artificial bubble of privilege and positivity; I want to face reality. But at some point during 2015’s relentless awfulness, it became unignorable: the days when I read about another mass shooting, another tale of desperate refugees or anything involving the words “Donald Trump” were the days I’d end up gloomier, tetchier, more attention-scattered. Needless to say, I channelled none of this malaise into making the planet better. I just got grumbly about the world, like a walking embodiment of that bumper-sticker: “Where are we going, and why are we in this handbasket?”
One problem is that merely knowing that the news focuses disproportionately on negative and scary stories doesn’t mean you’ll adjust your emotions accordingly. People like me scorn Trump and the Daily Mail for sowing unwarranted fears. We know that the risk of dying in traffic is vastly greater than from terrorism. We may even know that US gun crime is in dramatic decline, that global economic inequality is decreasing, or that there’s not much evidence that police brutality is on the rise. (We just see more of it, thanks to smartphones.) But, apparently, the part of our minds that knows these facts isn’t the same part that decides whether to feel upbeat or despairing. It’s entirely possible to know things are pretty good, yet feel as if they’re terrible.
This phenomenon has curious parallels with the “busyness epidemic”. Data on leisure time suggests we’re not much busier than we were, yet we feel busier, partly because – for “knowledge workers”, anyway – there’s no limit to the number of emails we can get, the demands that can be made of us, or the hours of the day we can be in touch with the office. Work feels infinite, but our capacities are finite, therefore overwhelm is inevitable. Similarly, technology connects us to more and more of the world’s suffering, of which there’s an essentially infinite amount, until feeling steamrollered by it becomes structurally inevitable – not a sign that life’s getting worse. And the consequences go beyond glumness. They include “compassion fade”, the well-studied effect whereby our urge to help the unfortunate declines as their numbers increase.
It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China — it’s been around for several millennia. Go is commonly held to be substantially more difficult than chess to master, to which I can personally attest.
So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.
An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.
Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.
“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”
But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.
Building a Brain
Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.
Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.
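The self-play loop described above can be sketched in miniature with a lookup table instead of a neural network. Everything below is an illustrative assumption, not AlphaGo’s design: the game is a toy Nim variant (take one to three stones from a pile; whoever takes the last stone wins), and one shared value table improves by having an epsilon-greedy policy play thousands of games against itself, rewarding the moves on the winning side.

```python
import random

random.seed(0)
PILE, ALPHA, EPS = 10, 0.5, 0.2
Q = {}  # (pile, move) -> estimated value for the player about to move

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile):
    # Epsilon-greedy: explore occasionally, otherwise take the best-known move.
    if random.random() < EPS:
        return random.choice(moves(pile))
    return max(moves(pile), key=lambda m: Q.get((pile, m), 0.0))

for _ in range(20000):  # self-play: the same policy controls both players
    pile, history = PILE, []
    while pile > 0:
        m = choose(pile)
        history.append((pile, m))
        pile -= m
    # The player who took the last stone wins; walk the game backwards,
    # alternating the reward's sign because the players alternate turns.
    reward = 1.0
    for p, m in reversed(history):
        old = Q.get((p, m), 0.0)
        Q[(p, m)] = old + ALPHA * (reward - old)
        reward = -reward
```

After training, taking all three stones from a pile of three (an immediate win) is valued near 1.0, while moves that hand the opponent a win score poorly: the table has learned which moves yield the highest reward.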
AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
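The phrase “plays out a huge number of scenarios to their eventual conclusions” can be illustrated in its flattest form: rate each legal move by the win rate of purely random playouts from the resulting position. Real Monte Carlo tree search additionally grows a search tree and balances exploration against exploitation; the toy game here (take one to three stones from a pile; last stone wins) is an assumption for demonstration, not Go.

```python
import random

random.seed(1)

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def random_playout(pile):
    # Play a position out with uniformly random moves; return 0 if the
    # player about to move takes the last stone, 1 if the other player does.
    player = 0
    while True:
        pile -= random.choice(moves(pile))
        if pile == 0:
            return player
        player = 1 - player

def best_move(pile, n_playouts=2000):
    # Flat Monte Carlo: score each move by its win rate over random playouts.
    scores = {}
    for m in moves(pile):
        wins = 0
        for _ in range(n_playouts):
            rest = pile - m
            if rest == 0:
                wins += 1  # taking the last stone wins outright
            elif random_playout(rest) == 1:
                wins += 1  # the playout came back around in our favor
        scores[m] = wins / n_playouts
    return max(scores, key=scores.get), scores
```

From a pile of three, taking all three stones wins immediately, so the playouts give that move a perfect score and it is chosen.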
The map shows areas where alcohol is restricted: red indicates that the sale of alcohol is banned (dry); blue shows that it is allowed (wet); and yellow denotes that the county is “partially dry” or “moist”.
Interestingly, Kansas, Tennessee and Mississippi are dry states by default and require individual counties to opt in to sell alcohol. Texas is a confusing patchwork: of Texas’s 254 counties, 11 are completely dry, 194 are partially dry, and 49 are entirely wet. And, to add to the confusion, Texas prohibits off-premises sale of liquor — but not beer and wine — all day on Sunday and select holidays.
In a recent post I wrote about the world of reverse logistics, which underlies the multi-billion dollar business of product returns. But while the process of consumer returns runs like a well-oiled, global machine, the psychology of returns is confusingly counter-intuitive.
For instance, a lenient return policy leads to more returned products — no surprise there. But it also causes increased consumer spending, and the increased spending outweighs the cost to the business of processing the additional returns. Also, and rather more curiously, a more lenient return time limit correlates with a reduction in returns, not an increase.
From the Washington Post:
January is prime time for returns in the retail industry, the month when shoppers show up in droves to trade in an ill-fitting sweater from grandma or to unload the second and third “Frozen” dolls that showed up under the Christmas tree.
This post-Christmas ritual has always been costly for retailers, comprising a large share of the $284 billion in goods that were returned in 2014. But now it is arguably becoming more urgent for the industry to think carefully about return policies, as analysts say the rise of online shopping is bringing with it a surge in returns. The return rate for the industry overall is about 8 percent, but analysts say that it is likely significantly higher than that online, since shoppers are purchasing goods without seeing them in person or trying them on.
Against that backdrop, researchers at the University of Texas at Dallas sought to get a better handle on how return policies affect shopper behavior and, in turn, whether lenient policies such as offering a lengthy period for returns actually help or hurt a retailer’s business.
Overall, a lenient return policy did indeed correlate with more returns. But, crucially, it was even more strongly correlated with an increase in purchases. In other words, retailers are generally getting a clear sales benefit from giving customers the assurance of a return.
One surprising finding: More leniency on time limits is associated with a reduction — not an increase — in returns.
This may seem counterintuitive, but researchers say it could have varying explanations. Ryan Freling, who conducted the research alongside Narayan Janakiraman and Holly Syrdal, said that this is perhaps a result of what’s known as the “endowment effect.”
“That would say that the longer a customer has a product in their hands, the more attached they feel to it,” Freling said.
Plus, the long time frame creates less urgency around the decision over whether or not to take it back.
If you follow today’s internationally accepted calendar the year is 2016. But that doesn’t stop a significant few from knowing that the Earth is flat. It also doesn’t stop the internecine wars of words between various flat-Earther factions, which subscribe to different flat-Earth creation stories. Oh well.
From the Guardian:
YouTube user TigerDan925 shocked his 26,000 followers recently by conceding a shocking point: Antarctica is a continent. It’s not, as he previously thought, an ice wall that encircles the flat disc of land and water we call earth.
For most of us, that’s not news. But TigerDan925’s followers, like Galileo’s 17th century critics, are outraged by his heresy. Welcome to the contentious universe of flat-Earthers – people who believe the notion of a globe-shaped world orbiting the sun is a myth.
Through popular YouTube videos and spiffy sites, they show how easy it is to get attention by questioning scientific consensus. Unfortunately, we don’t really know how many people believe in the movement because so many people in it accuse each other of being as fake as Santa Claus (or perhaps the moon landing).
That being said, TigerDan925’s admission was not a concession that the world is shaped like the globe. He merely said flat-Earthers need a new map. But for his community, he might as well have abandoned them altogether:
“Next he says the Antarctica is not governed and protected by the Illuminati, that somehow any group deciding to buy and invest in equipment is free to roam anywhere by plane or on land,” writes a user by the name Chris Madsen. “This is absolute rubbish … 2016 is the year it becomes common knowledge the earth is flat, just like 9/11 became common knowledge, no stopping the truth now. ”
Such schisms are commonplace in flat-Earthdom, where at least three websites are vying to be the official meeting ground for the movement to save us all from the delusion that our world is a globe. Their differences range from petty (who came up with which idea first) to shocking and offensive (whether Jewish people are to blame for suppressing flat-Earth thought). And they regard each other with deep suspicion – almost as if they can’t believe that anyone else would believe what they do.
“[The multiple sites are] just the tip of the iceberg,” said flat-Earth convert Mark Sargent, who used his two decades of work in the tech and video game industries to create the site enclosedworld.com and a YouTube series called Flat Earth Clues. “There’s dissension in the ranks all over the place.”
“It’s almost like the beginning of a new religion. Everyone’s trying to define it. And they’re turning on each other because there’s no unified theory.” And so, like the People’s Front of Judea and the Judean People’s Front, they often spend far less time discussing what they believe than they spend attacking each other.
The Flat Earth Society revived in 2004 under the leadership of one Daniel Shenton and was opened to new members in 2009. A dissatisfied group split away in 2013 and launched its own site. A reunification proposal in 2014 has withered, and Shenton’s Twitter feed went cold after he posted a cryptic photo of the Terminator in September.
SciDeny is authored by writers who propose an alternate “reality” to rational scientific thought. But don’t be fooled into believing that SciDeny is anything like SciFi.
There are three key differences between SciDeny and SciFi. First, SciDeny is authored by politicians, lawyers or laypersons with political agendas, not professional novelists. Second, SciDeny purports to be non-fictional, and indeed many believe it to be so. Third, where SciFi often promotes a visionary future underpinned by scientific and technological progress, SciDeny is aimed squarely at countering the scientific method and turning back the clock on hundreds of years of scientific discourse and discovery.
We’re off to a great start already in 2016, as various states vie to be the first to pass SciDeny-friendly legislation. Oklahoma is this year’s winner.
From ars technica:
The first state bills of the year that would interfere with science education have appeared in Oklahoma. There, both the House and Senate have seen bills that would prevent school officials and administrators from disciplining any teachers who introduce spurious information to science classes.
These bills have a long history, dating back to around the time when teaching intelligent design was determined to be an unconstitutional imposition of religion. A recent study showed that you could take the text of the bills and build an evolutionary tree that traces their modifications over the last decade. The latest two fit the patterns nicely.
The Senate version of the bill is by State Senator Josh Brecheen, a Republican. It is the fifth year in a row he’s introduced a science education bill after announcing he wanted “every publicly funded Oklahoma school to teach the debate of creation vs. evolution.” This year’s version omits any mention of specific areas of science that could be controversial. Instead, it simply prohibits any educational official from blocking a teacher who wanted to discuss the “strengths and weaknesses” of scientific theories.
The one introduced in the Oklahoma House is more traditional. Billed as a “Scientific Education and Academic Freedom Act” (because freedom!), it spells out a whole host of areas of science its author doesn’t like:
The Legislature further finds that the teaching of some scientific concepts including but not limited to premises in the areas of biology, chemistry, meteorology, bioethics, and physics can cause controversy, and that some teachers may be unsure of the expectations concerning how they should present information on some subjects such as, but not limited to, biological evolution, the chemical origins of life, global warming, and human cloning.
Our planet continues to warm as climate change relentlessly marches on. These two images of Lake Poopó in the high Bolivian Andes show the stark reality over a period of just three years. The first image was taken in 2013, the second in 2016.
The images, courtesy of the NASA Earth Observatory, were acquired by the Operational Land Imager (OLI) on the Landsat 8 satellite.
Most climatologists suggest that the lake will not return.
Lake Poopó, 2013
Lake Poopó, 2016
Read more about the causes and environmental and human consequences here.
Images: NASA Earth Observatory images by Jesse Allen, using Landsat data from the U.S. Geological Survey. Caption by Kathryn Hansen.
Many political scholars, commentators and members of the public — of all political stripes — who remember Eisenhower during his two terms in office (1953-1961) agree that he was one of the greatest US Presidents. As for the pretenders to the throne in the other half of this PhotoMash, well, ugh. Enough said.
DeepDrumpf is a Twitter bot out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). It uses artificial intelligence (AI) to learn from the jaw-dropping rants of the current Republican frontrunner for the Presidential nomination and then tweets its own remarkably Trump-like musings.
The bot’s designer, CSAIL postdoc Bradley Hayes, says DeepDrumpf uses “techniques from ‘deep-learning,’ a field of artificial intelligence that uses systems called neural networks to teach computers to find patterns on their own.”
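The real bot relies on neural networks, but the basic idea of learning a text's patterns and then generating new text from them can be illustrated with a much simpler (and, to be clear, entirely hypothetical) Markov-chain sketch in Python:

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    model = defaultdict(list)
    for word, nxt in zip(words, words[1:]):
        model[word].append(nxt)
    return model

def generate(model, start, length=10):
    """Walk the chain: repeatedly pick a random observed successor."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Feed `build_model` a pile of tweets and `generate` will wander from word to word, always choosing a continuation the source text actually used. A neural network generalizes far beyond this, but the flavor of "learn the patterns, then riff on them" is similar.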
I would suggest that the deep-learning algorithms, in the case of Trump’s speech patterns, did not have to be too deep. After all, linguists who have studied his words agree that it’s mostly at a 4th-grade level — coherent language is not required.
Patterns aside, I think I prefer the bot over the real thing — it’s likely to do far less damage to our country and the globe.
First, let me begin by introducing a quote for our times from David Bowie, dated 2003, published in Performing Songwriter.
“Fame itself, of course, doesn’t really afford you anything more than a good seat in a restaurant. That must be pretty well known by now. I’m just amazed how fame is being posited as the be all and end all, and how many of these young kids who are being foisted on the public have been talked into this idea that anything necessary to be famous is all right. It’s a sad state of affairs. However arrogant and ambitious I think we were in my generation, I think the idea was that if you do something really good, you’ll become famous. The emphasis on fame itself is something new. Now it’s, to be famous you should do what it takes, which is not the same thing at all. And it will leave many of them with this empty feeling.”
Thirteen years on, and just a few days following Bowie’s tragic death, his words on fame remain startlingly appropriate. We now live in a world where fame can be pursued, manufactured and curated without needing any particular talent — social media has seen to that.
This new type of fame — let’s call it insta-fame — is a very different kind of condition from our typical notion of old fame, which may be enabled by a gorgeous voice, or acting prowess, or a way with the written word, or skill with a tennis racket, or at the wheel of a race car, or on a precipitous ski slope, or from walking on the surface of the Moon, or from winning the Spelling Bee, or from devising a cure for polio.
It’s easy to confuse insta-fame with old fame: both offer a huge following of adoring strangers and both, potentially, lead to inordinate monetary reward. But that’s where the similarities end. Old fame came from visible public recognition and required an achievement or a specific talent, usually honed after many years or decades. Insta-fame on the other hand doesn’t seem to demand any specific skill and is often pursued as an end in itself. With insta-fame the public recognition has become decoupled from the achievement — to such an extent, in fact, that it no longer requires any achievement or skill, other than the gathering of more public recognition. This is a gloriously self-sustaining circle that advertisers have grown to adore.
My diatribe leads to a fascinating article on the second type of fame, insta-fame, and some of its protagonists and victims. David Bowie’s words continue to ring true.
From the Independent:
Charlie Barker is in her pyjamas, sitting in the shared kitchen of her halls of residence, with an Asda shopping trolley next to her – storage overflow from her tiny room. A Flybe plane takes off from City Airport, just across the dank water from the University of East London, where Barker studies art in surroundings that could not be greyer. The only way out is the DLR, the driverless trains that link Docklands to the brighter parts of town.
“I always wanted to move to London and when everyone was signing up for uni, I was like, I don’t want to go to uni – I just want to go to London,” says Barker, who calls David Bowie her “spirit animal” and is obsessed with Hello Kitty. But going to London is hard if you’re 18 and from Nottingham and don’t have a plan or money. “So then I was like, OK, I’ll go to uni in London.” So she ended up in Beckton, which is closer to Essex than the city centre.
It’s lunchtime and one of Barker’s housemates walks in to stick something in the microwave, which he quickly takes back to his room. They exchange hellos. “I don’t really talk to people here, I just go to central to meet my friends,” she says. “But the DLR is so long and tragic, especially when you’re not in the brightest of moods.” I ask her if she often goes to the student canteen. I noticed it on the way here; it’s called “Munch”. She’s in her second year and says she didn’t know it existed.
These are unlikely surroundings, in some ways. Because while Barker is a nice, normal student doing normal student things, she’s also famous. I take out my phone and we look through her pictures on Instagram, where her following is greater than the combined circulations of Hello! and OK! magazines. Now @charliexbarker is in the room and things become more colourful. Pink, mainly. And blue, and glitter, and selfies, and skin.
And Hello Kitty. “I wanted to get a tattoo on the palm of my hand and because it was painful I was like, ‘what do I believe in enough to get tattooed on my hand for the rest of my life?’, and I was like – Hello Kitty. My Mum was like, ‘you freak!'” The drawing of the Japanese cartoon cat features in a couple of Barker’s 700-plus photos. In a portrait of her hand, she holds a pink and blue lollipop, and her fingernails are painted pink and blue. The caption: “Pink n blu pink n blu.”
Before that, Barker, now 19, wanted a tattoo saying “Drink water, eat pussy”, but decided against it. The slogan appears in another photo, scrawled on the pavement in pink chalk as she sits wearing a Betty Boop jacket in pink and black, with pink hair and fishnets. “I was bumming around with my friend Daniel, who’s a photographer, and I wanted to see if I could do all the styling and everything,” she says. “We’d already done four of five looks and we were like, oh my God, so we just wet my hair and went with it.”
“Poco esplicita,” suggests one of her Italian followers beside the photo. Barker rarely replies to comments these days, most of which are from fans (“I love uuuuu… Your style just killing me… IM SCREAMING”) and doesn’t say much in her captions (“I do wat I want” in this case). Yet her followers – 622,000 of them at the time of writing – love her pictures, many of which receive more than 50,000 likes. She’s not on reality TV, can’t sing and has no famous relatives. She’s not rich and has no access to private jets or tigers as pets. Yet with a photographic glimpse – or at least suggestion – of a life of colour and attitude, a student in Beckton has earned the sort of fame that only exists on Instagram.
“That sounds so weird, saying that, stop it!” she says when I ask if she feels famous. “No, I’m not famous. I’m just doing my own thing, getting recognition doing it. And I think everyone’s famous now, aren’t they? Everyone has an Instagram and everyone’s famous.”
The photo app, bought by Facebook in 2012, boomed last year, overtaking Twitter in September with 400 million active monthly users. But there are degrees of Instafame. And if one measure, beyond an audience, is a change to one’s life, then Barker has it. So too do Brian Whittaker (@brianhwhittaker) and Olivia Knight-Butler (@livrosekb), whose followings also defy celebrity norms. Whittaker, an insanely grown-up 16-year-old from Solihull, also rejects the idea that he’s famous at all, despite having a quarter of a million followers. “I don’t see followers as a real thing, it’s just being popular on a page,” he says from his mum’s house.
Yet in the next sentence he talks about the best indicator of fame in any age. “I get stopped in the street quite a bit now. In the summer I was in Singapore with my parents and people were taking pictures of me. One person stopped me and then when I got back to the hotel room I saw pictures of me on Instagram shopping. People had tagged me and were asking, ‘is this really you, are you in Singapore?'”
“I get so so flattered when people ask me for a picture in the street,” Barker says. Most of her fans are younger teenage girls. Many have set up dedicated Charlie Barker fan accounts, re-posting her images adorned with love hearts. They idolise her. “I feel like I have to give them eternal love for it, I’m like, oh my God, that is so sweet.”
Call it what you may, but ’tis the season following the gift-giving season, which means only one thing: it’s returns season. Did you receive a gorgeous pair of shoes in the wrong size? Return. Did you get yet another hideous tie or shirt in the wrong color? Return. Yet more lotion that makes you break out in an orange rash? Return. Video game in the wrong format or book that you already digested last year? Return. Toaster that doesn’t match your kitchen decor? Return.
And the number of returns is quite staggering. According to Optoro — a research firm that helps major retailers process and resell returns — consumers return nearly $70 billion worth of purchases during the holiday season. That’s more than the entire GDP of countries like Luxembourg or Sri Lanka.
So, with returns being such a huge industry, how does the process work? Importantly, a returned gift is highly unlikely to end up back on the original shelf from which it was purchased. Rather, the gift is often transported by an inverse supply chain — known as reverse logistics — from the consumer back to the retailer, sometimes back to a wholesaler, and then back to a liquidator. Remarkably, up to 40 percent of returns don’t even make it back to a liquidator, since it’s sometimes more economical for the retailer to simply discard the item.
For most retailers, the weeks leading up to Christmas are a frenzied crescendo of activity. But for Michael Ringelsten, the excitement starts after the holidays.
Ringelsten runs Shorewood Liquidators, which collects all those post-holiday returns—from unwanted gadgets and exercise equipment to office furniture and popcorn machines—and finds them a new home. Wait, what? A new home? Yep. Rejected gifts and returned goods don’t go back on the shelves from which they came. They follow an entirely different logistical path, a weird mirror image of the supply chain that brings the goods we actually want to our doors.
This parallel process exists because the cost of restocking and reselling returned items often exceeds the value of those items. To cut their losses, online retailers often turn to folks like Ringelsten.
I discovered Shorewood Liquidators through a rather low-rent-looking online ad touting returned items from The Home Depot, Amazon, Sears, Wal-Mart, and other big retailers. I was surprised to find the items weren’t bad. Some were an out-and-out deal, like this comfy Arcadia recliner (perfect for my next Shark Tank marathon). Bidding starts at 99 cents for knickknacks or $5 for nicer stuff. The descriptions state whether there are scuffs, scratches, or missing parts.
“This recliner? It will definitely sell,” Ringelsten says. Shorewood employs 91 people who work out of a 100,000-square-foot warehouse in Illinois—a space that, after the holidays, is a Through the Looking Glass version of Amazon, selling unwanted gifts at rock-bottom prices. And as Americans buy more and more holiday gifts online, they’re also returning more, creating new opportunities for businesses prepared to handle what others don’t want. Call it “re-commerce.”
The Hidden World of Returns
UPS says last week it saw the highest volume of returns it expects to see all year, with people sending back more than 5 million gifts and impulse purchases. On the busiest day of that week, the shipper said, people sent back twice as many packages—1 million in all—as on the same day a year ago.
But those returns often don’t return from whence they came. Instead, they’re shipped to returns facilities—some operated by retailers, others that serve as hubs for many sellers. Once there, the goods are collected, processed, and often resold by third-party contractors, including wholesalers and liquidators like Shorewood. These contractors often use software that determines the most profitable path, be it selling them to consumers online, selling them in lots to wholesale buyers, or simply recycling them. If none of these options is profitable, the item may well end up in a landfill, making the business of returns an environmental issue, as well.
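The "most profitable path" calculation those contractors run can be boiled down to a few lines. The channel names and numbers below are invented for illustration, but the logic, netting each channel's recovery against its handling cost and discarding when nothing comes out positive, is the gist:

```python
def best_disposition(resale_value, channels):
    """Pick the channel with the highest net recovery for an item.

    `channels` maps a channel name to (recovery_rate, handling_cost):
    the fraction of resale value it recovers and what it costs to use.
    Returns the best channel, or 'discard' if every option loses money.
    """
    best, best_net = "discard", 0.0
    for name, (rate, cost) in channels.items():
        net = resale_value * rate - cost
        if net > best_net:
            best, best_net = name, net
    return best

# Hypothetical numbers for illustration only:
channels = {
    "resell_online": (0.60, 25.0),   # good recovery, high processing cost
    "wholesale_lot": (0.30, 5.0),    # low recovery, cheap to move in bulk
    "recycle":       (0.05, 8.0),    # recovers almost nothing
}
```

For a $100 item these numbers favor reselling online; drop the resale value to $10 and every channel loses money, so the function returns `discard`, which is exactly how some returns end up in a landfill.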
It’s impossible to ignore the thoroughly shameful behavior of the current crop of politicians and non-politicians running in this year’s U.S. presidential election, a veritable clown-car race. The vicious tripe that flows from the mouths of these people is certainly attention-grabbing. But while it may have been titillating at first, the discourse — in very loose terms — has now taken a deeply disgusting and dangerous turn.
Just take the foul-mouthed tweets of current front runner for the Republican nomination, Donald Trump.
Since he entered the race his penchant for bullying and demagoguery has taken center stage; no mention of any policy proposals, rational or otherwise; just a filthy mouth spouting hatred, bigotry, fear, shame and intimidation in a constant 140-character storm of drivel.
So I couldn’t resist taking all his recent tweets and creating a wordcloud from his stream of anger and nonsense. His favorite “policy” statements to date: wall, dumb, failing, dopey, dope, worst, dishonest, failed, bad, sad, boring. I must say it is truly astonishing to see this person attack others as: hater, liar, dishonest, racist, sexist, dumb, total hypocrite!
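For the curious, the mechanics behind a wordcloud reduce to counting word frequencies; the renderer then simply scales each word by its count. A minimal Python sketch, using a couple of invented stand-in tweets rather than the real stream:

```python
import re
from collections import Counter

# Common filler words to exclude from the cloud.
STOPWORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of", "in", "so"}

def word_frequencies(tweets):
    """Count how often each non-trivial word appears across the tweets."""
    counts = Counter()
    for tweet in tweets:
        for word in re.findall(r"[a-z']+", tweet.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

# Hypothetical stand-ins for the scraped tweets:
tweets = [
    "Dopey pundits are so dishonest. Sad!",
    "The failing press is dishonest. So sad!",
]
```

Passing `word_frequencies(tweets)` to any wordcloud renderer then produces the familiar picture: the words that appear most often loom largest.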