Women Are From Venus, Men Can’t Remember

Yet another body of research underscores how different women are from men. This time, we are told, the sexes generally encode and recall memories differently. So, the next time you take issue with a spouse (of a different gender) about a — typically trivial — past event, keep in mind that your own actions, mood and gender will affect your recall. If you’re female, your memories may be much more vivid than your male counterpart’s, but not necessarily more correct. If you (male) won last night’s argument, your spouse (female) will — unfortunately for you — remember it more accurately than you do, which of course will lead to another argument.

From WSJ:

Carrie Aulenbacher remembers the conversation clearly: Her husband told her he wanted to buy an arcade machine he found on eBay. He said he’d been saving up for it as a birthday present to himself. The spouses sat at the kitchen table and discussed where it would go in the den.

Two weeks later, Ms. Aulenbacher came home from work and found two arcade machines in the garage—and her husband beaming with pride.

“What are these?” she demanded.

“I told you I was picking them up today,” he replied.

She asked him why he’d bought two. He said he’d told her he was getting “a package deal.” She reminded him they’d measured the den for just one. He stood his ground.

“I believe I told her there was a chance I was going to get two,” says Joe Aulenbacher, who is 37 and lives in Erie, Pa.

“It still gets me going to think about it a year later,” says Ms. Aulenbacher, 36. “My home is now overrun with two machines I never agreed upon.” The couple compromised by putting one game in the den and the other in Mr. Aulenbacher’s weight room.

It is striking how many arguments in a relationship start with two different versions of an event: “Your tone of voice was rude.” “No it wasn’t.” “You didn’t say you’d be working late.” “Yes I did.” “I told you we were having dinner with my mother tonight.” “No, honey. You didn’t.”

How can two people have different memories of the same event? It starts with the way each person perceives the event in the first place—and how they encoded that memory. “You may recall something differently at least in part because you understood it differently at the time,” says Dr. Michael Ross, professor emeritus in the psychology department at the University of Waterloo in Ontario, Canada, who has studied memory for many years.

Researchers know that spouses sometimes can’t even agree on concrete events that happened in the past 24 hours—such as whether they had an argument or whether one received a gift from the other. A study in the early 1980s, published in the journal “Behavioral Assessment,” found that couples couldn’t perfectly agree on whether they had sex the previous night.

Women tend to remember more about relationship issues than men do. When husbands and wives are asked to recall concrete relationship events, such as their first date, an argument or a recent vacation, women’s memories are more vivid and detailed.

But not necessarily more accurate. When given a standard memory test where they are shown names or pictures and then asked to recall them, women do just about the same as men.

Researchers have found that women report having more emotions during relationship events than men do. They may remember events better because they pay more attention to the relationship and reminisce more about it.

People also remember their own actions better. So they can recall what they did, just not what their spouse did. Researchers call this an egocentric bias, and study it by asking people to recall their contributions to events, as well as their spouse’s. Who cleans the kitchen more? Who started the argument? Whether the event is positive or negative, people tend to believe that they had more responsibility.

Your mood—both when an event happens and when you recall it later—plays a big part in memory, experts say. If you are in a positive mood or feeling positive about the other person, you will more likely recall a positive experience or give a positive interpretation to a negative experience. Similarly, negative moods tend to reap negative memories.

Negative moods may also cause stronger memories. A person who lost an argument remembers it more clearly than the person who won it, says Dr. Ross. Men tend to win more arguments, he says, which may help to explain why women remember the spat more. But men who lost an argument remember it as well as women who lost.

Read the entire article here.


Heads in the Rising Tide


Officials from the state of Florida seem to have their heads in the sand (and other places); sand that is likely to be swept from their very own Florida shores as sea levels rise. However, surely climate change could be an eventual positive for Florida: think warmer climate and huge urban swathes underwater — a great new Floridian theme park! But, remember, don’t talk about it. I suppose officials will soon be looking for a contemporary version of King Canute to help them out of this watery pickle.

From Wired:

The oceans are slowly overtaking Florida. Ancient reefs of mollusk and coral off the present-day coasts are dying. Annual extremes in hot and cold, wet and dry, are becoming more pronounced. Women and men of science have investigated, and a great majority agree upon a culprit. In the outside world, this culprit has a name, but within the borders of Florida, it does not. According to a  Miami Herald investigation, the state Department of Environmental Protection has since 2010 had an unwritten policy prohibiting the use of some well-understood phrases for the meteorological phenomena slowly drowning America’s weirdest-shaped state. It’s … that thing where burning too much fossil fuel puts certain molecules into a certain atmosphere, disrupting a certain planetary ecosystem. You know what we’re talking about. We know you know. They know we know you know. But are we allowed to talk about … you know? No. Not in Florida. It must not be spoken of. Ever.

Unless … you could, maybe, type around it? It’s worth a shot.

The cyclone slowdown

It has been nine years since Florida was hit by a proper hurricane. Could that be a coincidence? Sure. Or it could be because of … something. A nameless, voiceless something. A feeling, like a pricking-of-thumbs, this confluence-of-chemistry-and-atmospheric-energy-over-time. If so, this anonymous dreadfulness would, scientists say, lead to a drier middle layer of atmosphere over the ocean. Because water vapor stores energy, this dry air will suffocate all but the most energetic baby storms. “So the general thinking is that as [redacted] levels increase, it ultimately won’t have an effect on the number of storms,” says Jim Kossin, a scientist who studies, oh, how about “things-that-happen-in-the-atmosphere-over-long-time-periods” at the National Centers for Environmental Information. “However, there is a lot of evidence that if a storm does form, it has a chance of getting very strong.”

Storms darken the sky

Hurricanes are powered by energy in the sea. And as cold and warm currents thread around the globe, storms go through natural, decades-long cycles of high-to-low intensity. “There is a natural 40-to-60-year oscillation in what sea surface temperatures are doing, and this is driven by ocean-wide currents that move on very slow time scales,” says Kossin, who has authored reports for the Intergovernmental Panel on, well, let’s just call it Chemical-and-Thermodynamic-Alterations-to-Long-Term-Atmospheric-Conditions. But in recent years, storms have become stronger than that natural cycle would otherwise predict. Kossin says that many in his field agree that while the natural churning of the ocean is behind this increasing intensity, other forces are at work. Darker, more sinister forces, like thermodynamics. Possibly even chemistry. No one knows for sure. Anyway, storms are getting less frequent, but stronger. It’s an eldritch tale of unspeakable horror, maybe.

Read the entire article here.

Image: King Knut (or Cnut or Canute) the Great, illustrated in a medieval manuscript. Courtesy of Der Spiegel Geschichte.


The Big Crunch


It may just be possible that prophetic doomsayers have been right all along. The end is coming… well, in a few tens of billions of years. A group of physicists proposes that the cosmos will soon begin collapsing in on itself. Keep in mind that soon, in cosmological terms, runs into the billions of years. So it does appear that we still have time to crunch through a few more bowls of breakfast cereal before the ultimate universal apocalypse. Clearly this may not please those who seek the end of days within their lifetimes, and for rather different — scientific — reasons, cosmologists seem to be unhappy too.

From Phys.org:

Physicists have proposed a mechanism for “cosmological collapse” that predicts that the universe will soon stop expanding and collapse in on itself, obliterating all matter as we know it. Their calculations suggest that the collapse is “imminent”—on the order of a few tens of billions of years or so—which may not keep most people up at night, but for the physicists it’s still much too soon.

In a paper published in Physical Review Letters, physicists Nemanja Kaloper at the University of California, Davis; and Antonio Padilla at the University of Nottingham have proposed the cosmological collapse mechanism and analyzed its implications, which include an explanation of dark energy.

“The fact that we are seeing dark energy now could be taken as an indication of impending doom, and we are trying to look at the data to put some figures on the end date,” Padilla told Phys.org. “Early indications suggest the collapse will kick in in a few tens of billions of years, but we have yet to properly verify this.”

The main point of the paper is not so much when exactly the universe will end, but that the mechanism may help resolve some of the unanswered questions in physics. In particular, why is the universe expanding at an accelerating rate, and what is the dark energy causing this acceleration? These questions are related to the cosmological constant problem, which is that the predicted vacuum energy density of the universe causing the expansion is much larger than what is observed.

“I think we have opened up a brand new approach to what some have described as ‘the mother of all physics problems,’ namely the cosmological constant problem,” Padilla said. “It’s way too early to say if it will stand the test of time, but so far it has stood up to scrutiny, and it does seem to address the issue of vacuum energy contributions from the standard model, and how they gravitate.”

The collapse mechanism builds on the physicists’ previous research on vacuum energy sequestering, which they proposed to address the cosmological constant problem. The dynamics of vacuum energy sequestering predict that the universe will collapse, but don’t provide a specific mechanism for how collapse will occur.

According to the new mechanism, the universe originated under a set of specific initial conditions so that it naturally evolved to its present state of acceleration and will continue on a path toward collapse. In this scenario, once the collapse trigger begins to dominate, it does so in a period of “slow roll” that brings about the accelerated expansion we see today. Eventually the universe will stop expanding and reach a turnaround point at which it begins to shrink, culminating in a “big crunch.”

Read the entire article here.

Image: Image of the Cosmic Microwave Background (CMB) from nine years of WMAP data. The image reveals 13.77 billion year old temperature fluctuations (shown as color differences) that correspond to the seeds that grew to become the galaxies. Courtesy of NASA.


PowerPoint Karaoke Olympics

It may not be beyond the realm of fantasy to imagine a day in the not-too-distant future when PowerPoint Karaoke features as an Olympic sport. Ugh!

Without a doubt, karaoke has set human culture back at least a thousand years (thanks, Japan). And PowerPoint has singlehandedly dealt killer blows to creativity, deep thought and literary progress (thanks, Microsoft). Surely, combining these two banes of modern society into a competitive event is the stuff of true horror. But this hasn’t stopped the activity from becoming a burgeoning improv phenomenon for corporate hacks — validating the trend in which humans continue making fools of themselves. After all, it must be big — and there’s probably money in it — if the WSJ is reporting on it.

Nonetheless,

  • Count
  • me
  • out!

From the WSJ:

On a sunny Friday afternoon earlier this month, about 100 employees of Adobe Systems Inc. filed expectantly into an auditorium to watch PowerPoint presentations.

“I am really thrilled to be here today,” began Kimberley Chambers, a 37-year-old communications manager for the software company, as she nervously clutched a microphone. “I want to talk you through…my experience with whales, in both my personal and professional life.”

Co-workers giggled. Ms. Chambers glanced behind her, where a PowerPoint slide displayed four ink sketches of bare-chested male torsos, each with a distinct pattern of chest hair. The giggles became guffaws. “What you might not know,” she continued, “is that whales can be uniquely identified by a number of different characteristics, not the least of which is body hair.”

Ms. Chambers, sporting a black blazer and her employee ID badge, hadn’t seen this slide in advance, nor the five others that popped up as she clicked her remote control. To accompany the slides, she gave a nine-minute impromptu talk about whales, a topic she was handed 30 seconds earlier.

Forums like this at Adobe, called “PowerPoint karaoke” or “battle decks,” are cropping up as a way for office workers of the world to mock an oppressor, the ubiquitous PowerPoint presentation. The mix of improvised comedy and corporate-culture takedown is based on a simple notion: Many PowerPoint presentations are unintentional parody already, so why not go all the way?

Library associations in Texas and California held PowerPoint karaoke sessions at their annual conferences. At a Wal-Mart Stores Inc. event last year, workers gave fake talks based on real slides from a meatpacking supplier. Twitter Inc. Chief Executive Dick Costolo, armed with his training from comedy troupe Second City, has faced off with employees at “battle decks” contests during company meetings.

One veteran corporate satirist gives these events a thumbs up. “Riffing off of PowerPoints without knowing what your next slide is going to be? The humorist in me says it’s kinda brilliant,” said “Dilbert” cartoonist Scott Adams, who has spent 26 years training his jaundiced eye on office work. “I assume this game requires drinking?” he asked. (Drinking is technically not required, but it is common.)

Mr. Adams, who worked for years at a bank and at a telephone company, said PowerPoint is popular because it offers a rare dose of autonomy in cubicle culture. But it often bores, because creators lose sight of their mission. “If you just look at a page and drag things around and play with fonts, you think you’re a genius and you’re in full control of your world,” he said.

At a February PowerPoint karaoke show in San Francisco, contestants were given pairings of topics and slides ranging from a self-help seminar for people who abuse Amazon Prime, with slides including a dog balancing a stack of pancakes on its nose, to a sermon on “Fifty Shades of Grey,” with slides including a pyramid dotted with blocks of numbers. Another had to explain the dating app Tinder to aliens invading the Earth, accompanied by a slide of old floppy disk drives, among other things.

Read and sing along to the entire article here.


Circadian Misalignment and Your Smartphone


You take your portable electronics everywhere, all the time. You watch TV with or on your smartphone. You eat with a fork in one hand and your smartphone in the other. In fact, you probably wish you had two pairs of arms so you could eat, drink and use your smartphone and laptop at the same time. You use your smartphone in your car — hopefully, or at least sensibly, not while driving. You read texts on your smartphone while in the restroom. You use it at the movie theater and at the theater (much to the dismay of stage actors). It’s with you at the restaurant, on the bus or metro, in the aircraft, and in the bath (despite the risk of electric shock). You check your smartphone first thing in the morning and last thing before going to sleep. And if your home or work life demands it, you will check it periodically throughout the night.

Let’s leave aside for now the growing body of anecdotal and formal evidence that smartphones are damaging your physical wellbeing: finger, hand and wrist problems (from texting), and neck and posture problems (from constantly bending over a small screen). Now there is evidence that constant use, especially at night, is damaging your mental wellbeing and increasing the likelihood of additional, chronic physical ailments. It appears that the light from our constant electronic companions is not healthy, particularly as it disrupts our regular rhythm of sleep.

From Wired:

For more than 3 billion years, life on Earth was governed by the cyclical light of sun, moon and stars. Then along came electric light, turning night into day at the flick of a switch. Our bodies and brains may not have been ready.

A fast-growing body of research has linked artificial light exposure to disruptions in circadian rhythms, the light-triggered releases of hormones that regulate bodily function. Circadian disruption has in turn been linked to a host of health problems, from cancer to diabetes, obesity and depression. “Everything changed with electricity. Now we can have bright light in the middle of night. And that changes our circadian physiology almost immediately,” says Richard Stevens, a cancer epidemiologist at the University of Connecticut. “What we don’t know, and what so many people are interested in, are the effects of having that light chronically.”

Stevens, one of the field’s most prominent researchers, reviews the literature on light exposure and human health in the latest Philosophical Transactions of the Royal Society B. The new article comes nearly two decades after Stevens first sounded the alarm about light exposure possibly causing harm; writing in 1996, he said the evidence was “sparse but provocative.” Since then, nighttime light has become even more ubiquitous: an estimated 95 percent of Americans regularly use screens shortly before going to sleep, and incandescent bulbs have been mostly replaced by LED and compact fluorescent lights that emit light in potentially more problematic wavelengths. Meanwhile, the scientific evidence is still provocative, but no longer sparse.

As Stevens says in the new article, researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn’t causation, of course, and it’s easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Oil refineries and aluminum smelters, to be hyperbolic, also blaze with light at night.

Yet biology at least supports some of the correlations. The circadian system synchronizes physiological function—from digestion to body temperature, cell repair and immune system activity—with a 24-hour cycle of light and dark. Even photosynthetic bacteria thought to resemble Earth’s earliest life forms have circadian rhythms. Despite its ubiquity, though, scientists discovered only in the last decade what triggers circadian activity in mammals: specialized cells in the retina, the light-sensing part of the eye, rather than conveying visual detail from eye to brain, simply signal the presence or absence of light. Activity in these cells sets off a reaction that calibrates clocks in every cell and tissue in a body. Now, these cells are especially sensitive to blue wavelengths—like those in a daytime sky.

But artificial lights, particularly LCDs, some LEDs, and fluorescent bulbs, also favor the blue side of the spectrum. So even a brief exposure to dim artificial light can trick a night-subdued circadian system into behaving as though day has arrived. Circadian disruption in turn produces a wealth of downstream effects, including dysregulation of key hormones. “Circadian rhythm is being tied to so many important functions,” says Joseph Takahashi, a neurobiologist at the University of Texas Southwestern. “We’re just beginning to discover all the molecular pathways that this gene network regulates. It’s not just the sleep-wake cycle. There are system-wide, drastic changes.” His lab has found that tweaking a key circadian clock gene in mice gives them diabetes. And a tour-de-force 2009 study put human volunteers on a 28-hour day-night cycle, then measured what happened to their endocrine, metabolic and cardiovascular systems.

Crucially, that experiment investigated circadian disruption induced by sleep alteration rather than light exposure, which is also the case with the many studies linking clock-scrambling shift work to health problems. Whether artificial light is as problematic as disturbed sleep patterns remains unknown, but Stevens thinks that some and perhaps much of what’s now assumed to result from sleep issues is actually a function of light. “You can wake up in the middle of the night and your melatonin levels don’t change,” he says. “But if you turn on a light, melatonin starts falling immediately. We need darkness.” According to Stevens, most people live in a sort of “circadian fog.”

Read the entire article here.

Image courtesy of Google Search.


3D Printing Magic

If you’ve visited this blog before, you know I’m a great fan of 3D printing, though some uses, such as printing 3D selfies, seem dubious at best. So, when Carbon3D unveiled its fundamentally different, and better, approach to 3D printing, I was intrigued. The company uses an approach called continuous liquid interface production (CLIP), which seems to construct objects from a magical ooze. Check out the video — you’ll be enthralled. The future is here.

Learn more about Carbon3D here.

From Wired:

Even if you have little interest in 3-D printing, you’re likely to find Carbon3D’s Continuous Liquid Interface Production (CLIP) technology fascinating. Rather than the time-intensive printing of a 3-D object layer by layer like most printers, Carbon3D’s technique works 25 to 100 times faster than what you may have seen before, and looks a bit like Terminator 2‘s liquid metal T-1000 in the process.

CLIP creations grow out of a pool of UV-sensitive resin in a process that’s similar to the way laser 3-D printers work, but at a much faster pace. Instead of the laser used in conventional 3-D printers, CLIP uses an ultraviolet projector on the underside of a resin tray to project an image for how each layer should form. Light shines through an oxygen-permeable window onto the resin, which hardens it. Areas of resin that are exposed to oxygen don’t harden, while those that are cut off form the 3-D printed shape.

In practice, all that physics translates to unprecedented 3-D printing speed. At this week’s TED Conference in Vancouver, Carbon3D CEO and co-founder Dr. Joseph DeSimone demonstrated the printer onstage with a bit of theatrical underselling, wagering that his creation could produce in 10 minutes a geometric ball shape that would take a regular 3-D printer up to 10 hours. The CLIP process churned out the design in a little under 7 minutes.
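
For the curious, the speedup claim is easy to sanity-check against the numbers in the excerpt. A minimal sketch, assuming the conventional print really would take the full 10 hours and the CLIP demo about 7 minutes:

```python
# Rough sanity check of the speedup quoted above (numbers taken from the excerpt).
conventional_minutes = 10 * 60   # up to 10 hours on a regular 3-D printer
clip_minutes = 7                 # the CLIP demo finished in a little under 7 minutes

print(round(conventional_minutes / clip_minutes))   # ~86, within the claimed 25-100x range
```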

Read the entire story here.

Video courtesy of Carbon3D.


We Are All Always Right, All of the Time

You already know this: you believe that your opinion is correct all the time, about everything. And, interestingly enough, your friends and neighbors believe that they are always right too. Oh, and the colleague at the office with whom you argue all the time — she’s right all the time too.

How can this be, when in an increasingly science-driven, objective universe facts trump opinion? Well, not so fast. It seems that we humans have an internal mechanism that colors our views based on a need for acceptance within a broader group. That is, we generally tend to spin our rational views in favor of group consensus, versus supporting the views of a subject matter expert, which might polarize the group. This is both good and bad. Good because it reinforces the broader benefits of being within a group; bad because we are more likely to reject opinion, evidence and fact from experts outside of our group — think climate change.

From the Washington Post:

It’s both the coolest — and also in some ways the most depressing — psychology study ever.

Indeed, it’s so cool (and so depressing) that the name of its chief finding — the Dunning-Kruger effect — has at least halfway filtered into public consciousness. In the classic 1999 paper, Cornell researchers David Dunning and Justin Kruger found that the less competent people were in three domains — humor, logic, and grammar — the less likely they were to be able to recognize that. Or as the researchers put it:

We propose that those with limited knowledge in a domain suffer from a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it.

Dunning and Kruger didn’t directly apply this insight to our debates about science. But I would argue that the effect named after them certainly helps to explain phenomena like vaccine denial, in which medical authorities have voiced a very strong opinion, but some parents just keep on thinking that, somehow, they’re in a position to challenge or ignore this view.

So why do I bring this classic study up now?

The reason is that an important successor to the Dunning-Kruger paper has just come out — and it, too, is pretty depressing (at least for those of us who believe that domain expertise is a thing to be respected and, indeed, treasured). This time around, psychologists have not uncovered an endless spiral of incompetence and the inability to perceive it. Rather, they’ve shown that people have an “equality bias” when it comes to competence or expertise, such that even when it’s very clear that one person in a group is more skilled, expert, or competent (and the other less), they are nonetheless inclined to seek out a middle ground in determining how correct different viewpoints are.

Yes, that’s right — we’re all right, nobody’s wrong, and nobody gets hurt feelings.

The new study, just published in the Proceedings of the National Academy of Sciences, is by Ali Mahmoodi of the University of Tehran and a long list of colleagues from universities in the UK, Germany, China, Denmark, and the United States. And no wonder: The research was transnational, and the same experiment — with the same basic results — was carried out across cultures in China, Denmark, and Iran.

Read the entire story here.


Hyper-Parenting and Couch Potato Kids


Parents who are overly engaged in micro-managing the academic, athletic and social lives of their kids may be responsible for ensuring their offspring lead less active lives. A new research study finds children of so-called hyper-parents are significantly less active than peers with less-involved parents. Hyper-parenting seems to come in four flavors: helicopter parents who hover over their child’s every move; tiger moms who constantly push for superior academic attainment; little-emperor parents who shower their kids with material things; and concerted-cultivation parents who over-schedule their kids with never-ending after-school activities. If you recognize yourself in one of these parenting styles, take a deep breath, think back on when, as a 7-12 year-old, you had the most fun, and let your kids play outside — preferably in the rain and mud!

From the WSJ / Preventive Medicine:

Hyper-parenting may increase the risk of physical inactivity in children, a study in the April issue of Preventive Medicine suggests.

Children with parents who tended to be overly involved in their academic, athletic and social lives—a child-rearing style known as hyper-parenting—spent less time outdoors, played fewer after-school sports and were less likely to bike or walk to school, friends’ homes, parks and playgrounds than children with less-involved parents.

Hyperparenting, although it’s intended to benefit children by giving them extra time and attention, could have adverse consequences for their health, the researchers said.

The study, at Queen’s University in Ontario, surveyed 724 parents of children, ages 7 to 12 years old, born in the U.S. and Canada from 2002 to 2007. (The survey was based on parents’ interaction with the oldest child.)

Questionnaires assessed four hyper-parenting styles: helicopter or overprotective parents; little-emperor parents who shower children with material goods; so-called tiger moms who push for exceptional achievement; and parents who schedule excessive extracurricular activities, termed concerted cultivation. Hyperparenting was ranked in five categories from low to high based on average scores in the four styles.

Children’s preferred play location was their yard at home, and 64% of the children played there at least three times a week. Only 12% played on streets and cul-de-sacs away from home. Just over a quarter walked or cycled to school or friends’ homes, and slightly fewer to parks and playgrounds. Organized sports participation was 26%.

Of parents, about 40% had high hyper-parenting scores and 6% had low scores. The most active children had parents with low to below-average scores in all four hyper-parenting styles, while the least active had parents with average-to-high hyper-parenting scores. The difference between children in the low and high hyper-parenting groups was equivalent to about 20 physical-activity sessions a week, the researchers said.

Read the entire story here.

Image courtesy of Google Search.


Humor Versus Horror

Faced with unspeakable horror, many of us usually turn away. Some courageous souls turn to humor to counter the vileness of others. So, it is heartwarming to see comedians and satirists taking up rhetorical arms in the backyards of murderers and terrorists. Fighting violence and terror with more of the same may show progress in the short term, but ridiculing our enemies with humor and thoughtful dialogue is the only long-term way to fight evil in its many human forms. A profound thank you to these four brave Syrian refugees who, in the face of much personal danger, are able to laugh at their foes.

From the Guardian:

They don’t have much to laugh about. But four young Syrian refugees from Aleppo believe humour may be the only antidote to the horrors taking place back home.

Settled in a makeshift studio in the Turkish city of Gaziantep 40 miles from the Syrian border, the film-makers decided ridicule was an effective way of responding to Islamic State and its grisly record of extreme violence.

“The entire world seems to be terrified of Isis, so we want to laugh at them, expose their hypocrisy and show that their interpretation of Islam does not represent the overwhelming majority of Muslims,” says Maen Watfe, 27. “The media, especially the western media, obsessively reproduce Isis propaganda portraying them as strong and intimidating. We want to show their weaknesses.”

The films and videos on Watfe and his three friends’ website mock the Islamist extremists and depict them as naive simpletons, hypocritical zealots and brutal thugs. It’s a high-risk undertaking. They have had to move house and keep their addresses secret from even their best friends after receiving death threats.

But the video activists – Watfe, Youssef Helali, Mohammed Damlakhy and Aya Brown – will not be deterred.

Their film The Prince shows Isis leader and self-appointed caliph Abu Bakr al-Baghdadi drinking wine, listening to pop music and exchanging selfies with girls on his smartphone. A Moroccan jihadi arrives saying he came to Syria to “liberate Jerusalem”. The leader swaps the wine for milk and switches the music to Islamic chants praising martyrdom. Then he hands the Moroccan a suicide belt and sends him off against a unit of Free Syrian Army fighters. The grenades detonate, and Baghdadi reaches for his glass of wine and turns the pop music back on.

It is pieces like this that have brought hate mail and threats via social media.

“One of them said that they would finish us off like they finished off Charlie [Hebdo],” Brown, 26, recalls. She declined to give her real name out of fear for her family, who still live in Aleppo. “In the end we decided to move from our old apartment.”

The Turkish landlord told them Arabic-speaking men had repeatedly asked for their whereabouts after they left, and kept the studio under surveillance.

Follow the story here.

Video: Happy Valentine. Courtesy of Dayaaltaseh Productions.


Household Chores for Kids Are Good


Apparently household chores are becoming so yesterday. Several recent surveys — no doubt commissioned by my children — show that shared duties in the home are a dying phenomenon. No, I hear you cry. Not only do chores provide a necessary respite from the otherwise 24/7 videogame-and-texting addiction, they help establish a sense of responsibility and reinforce our increasingly imperiled altruistic tendencies. So, parents, get out the duster, vacuum, fresh sheets and laundry basket, and put those (little) people to work before it’s too late. But first of all, let’s rename “chores” as “responsibilities”.

From WSJ:

Today’s demands for measurable childhood success—from the Common Core to college placement—have chased household chores from the to-do lists of many young people. In a survey of 1,001 U.S. adults released last fall by Braun Research, 82% reported having regular chores growing up, but only 28% said that they require their own children to do them. With students under pressure to learn Mandarin, run the chess club or get a varsity letter, chores have fallen victim to the imperatives of resume-building—though it is hardly clear that such activities are a better use of their time.

“Parents today want their kids spending time on things that can bring them success, but ironically, we’ve stopped doing one thing that’s actually been a proven predictor of success—and that’s household chores,” says Richard Rende, a developmental psychologist in Paradise Valley, Ariz., and co-author of the forthcoming book “Raising Can-Do Kids.” Decades of studies show the benefits of chores—academically, emotionally and even professionally.

Giving children household chores at an early age helps to build a lasting sense of mastery, responsibility and self-reliance, according to research by Marty Rossmann, professor emeritus at the University of Minnesota. In 2002, Dr. Rossmann analyzed data from a longitudinal study that followed 84 children across four periods in their lives—in preschool, around ages 10 and 15, and in their mid-20s. She found that young adults who began chores at ages 3 and 4 were more likely to have good relationships with family and friends, to achieve academic and early career success and to be self-sufficient, as compared with those who didn’t have chores or who started them as teens.

Chores also teach children how to be empathetic and responsive to others’ needs, notes psychologist Richard Weissbourd of the Harvard Graduate School of Education. In research published last year, he and his team surveyed 10,000 middle- and high-school students and asked them to rank what they valued more: achievement, happiness or caring for others.

Almost 80% chose either achievement or happiness over caring for others. As he points out, however, research suggests that personal happiness comes most reliably not from high achievement but from strong relationships. “We’re out of balance,” says Dr. Weissbourd. A good way to start readjusting priorities, he suggests, is by learning to be kind and helpful at home.

Read the entire story here.

Image courtesy of Google Search.


The Damned Embuggerance

Google-search-terry-pratchett-books

Sadly, genre-busting author Sir Terry Pratchett succumbed to DEATH on March 12, 2015. Luckily, for those of us still fending off the clutches of Reaper Man, we have seventy-plus works of his to keep us company in the darkness.

So now that our world contains a little less magic it’s important to remind ourselves of a few choice words of his:

A man is not truly dead while his name is still spoken.

Stories of imagination tend to upset those without one.

It’s not worth doing something unless someone, somewhere, would much rather you weren’t doing it.

The truth may be out there, but the lies are inside your head.

Goodness is about what you do. Not who you pray to.

From the Guardian:

Neil Gaiman led tributes from the literary, entertainment and fantasy worlds to Terry Pratchett after the author’s death on Thursday, aged 66.

The author of the Discworld novels, which sold in the tens of millions worldwide, had been afflicted with a rare form of early-onset Alzheimer’s disease.

Gaiman, who collaborated with Pratchett on the huge hit Good Omens, tweeted: “I will miss you, Terry, so much,” pointing to “the last thing I wrote about you”, on the Guardian.

“Terry Pratchett is not a jolly old elf at all,” wrote Gaiman last September. “Not even close. He’s so much more than that. As Terry walks into the darkness much too soon, I find myself raging too: at the injustice that deprives us of – what? Another 20 or 30 books? Another shelf-full of ideas and glorious phrases and old friends and new, of stories in which people do what they really do best, which is use their heads to get themselves out of the trouble they got into by not thinking? … I rage at the imminent loss of my friend. And I think, ‘What would Terry do with this anger?’ Then I pick up my pen, and I start to write.”

Appealing to readers to donate to Alzheimer’s research, Gaiman added on his blog: “Thirty years and a month ago, a beginning author met a young journalist in a Chinese Restaurant, and the two men became friends, and they wrote a book, and they managed to stay friends despite everything. Last night, the author died.

“There was nobody like him. I was fortunate to have written a book with him, when we were younger, which taught me so much.

“I knew his death was coming and it made it no easier.”

Read the entire article here.

Image courtesy of Google Search.


The Internet 0f Th1ngs


Technologist Marc Goodman describes a not-too-distant future in which all our appliances, tools, products… anything and everything is plugged into the so-called Internet of Things (IoT). The IoT describes a world where all things are connected to everything else, making for a global mesh of intelligent devices, from your connected car and your WiFi-enabled sneakers to your smartwatch and home thermostat. You may well believe it advantageous to have your refrigerator ping the local grocery store when it runs out of fresh eggs and milk, or to have your toilet auto-call a local plumber when it gets stopped up.

But, as our current Internet shows us — let’s call it the Internet of People — not all is rosy in this hyper-connected, 24/7, always-on digital ocean. What are you to do when hackers attack all your home appliances in a “denial of home service attack (DohS)”, or when your every move inside your home is scrutinized, collected, analyzed and sold to the nearest advertiser, or when your cooktop starts taking and sharing selfies with the neighbors?

Goodman’s new book on this important subject, excerpted here, is titled Future Crimes.

From the Guardian:

If we think of today’s internet metaphorically as about the size of a golf ball, tomorrow’s will be the size of the sun. Within the coming years, not only will every computer, phone and tablet be online, but so too will every car, house, dog, bridge, tunnel, cup, clock, watch, pacemaker, cow, streetlight, pipeline, toy and soda can. Though in 2013 there were only 13bn online devices, Cisco Systems has estimated that by 2020 there will be 50bn things connected to the internet, with room for exponential growth thereafter. As all of these devices come online and begin sharing data, they will bring with them massive improvements in logistics, employee efficiency, energy consumption, customer service and personal productivity.

This is the promise of the internet of things (IoT), a rapidly emerging new paradigm of computing that, when it takes off, may very well change the world we live in forever.

The Pew Research Center defines the internet of things as “a global, immersive, invisible, ambient networked computing environment built through the continued proliferation of smart sensors, cameras, software, databases, and massive data centres in a world-spanning information fabric”. Back in 1999, when the term was first coined by MIT researcher Kevin Ashton, the technology did not exist to make the IoT a reality outside very controlled environments, such as factory warehouses. Today we have low-powered, ultra-cheap computer chips, some as small as the head of a pin, that can be embedded in an infinite number of devices, some for mere pennies. These miniature computing devices only need milliwatts of electricity and can run for years on a minuscule battery or small solar cell. As a result, it is now possible to make a web server that fits on a fingertip for $1.
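
To give a feel for how little software such a “web server on a fingertip” actually needs, here is a minimal sketch in ordinary desktop Python. A real $1 device would run equivalent logic in firmware or a MicroPython-class runtime rather than CPython, so treat this as an illustration of scale, not of the hardware described.

```python
# Minimal HTTP responder: the whole "server" fits in a handful of lines.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a short plain-text greeting.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a very small web server\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()
```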

The microchips will receive data from a near-infinite range of sensors, minute devices capable of monitoring anything that can possibly be measured and recorded, including temperature, power, location, hydro-flow, radiation, atmospheric pressure, acceleration, altitude, sound and video. They will activate miniature switches, valves, servos, turbines and engines – and speak to the world using high-speed wireless data networks. They will communicate not only with the broader internet but with each other, generating unfathomable amounts of data. The result will be an always-on “global, immersive, invisible, ambient networked computing environment”, a mere prelude to the tidal wave of change coming next.

In the future all objects may be smart

The broad thrust sounds rosy. Because chips and sensors will be embedded in everyday objects, we will have much better information and convenience in our lives. Because your alarm clock is connected to the internet, it will be able to access and read your calendar. It will know where and when your first appointment of the day is and be able to cross-reference that information against the latest traffic conditions. Light traffic, you get to sleep an extra 10 minutes; heavy traffic, and you might find yourself waking up earlier than you had hoped.
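
The smart alarm clock described above is really just a small piece of scheduling logic. A minimal sketch, assuming hypothetical calendar and traffic inputs (none of these names come from the article):

```python
from datetime import datetime, timedelta

def choose_wake_time(default_wake: datetime,
                     commute_minutes_today: int,
                     commute_minutes_usual: int = 30) -> datetime:
    """Shift the alarm by today's traffic delta: heavy traffic wakes you earlier,
    light traffic lets you sleep a little longer."""
    delta_minutes = commute_minutes_today - commute_minutes_usual
    return default_wake - timedelta(minutes=delta_minutes)

# Heavy traffic (45 min vs the usual 30) moves a 7:00 alarm to 6:45;
# light traffic (20 min) would push it to 7:10.
print(choose_wake_time(datetime(2015, 3, 23, 7, 0), commute_minutes_today=45))
```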

When your alarm does go off, it will gently raise the lights in the house, perhaps turn up the heat or run your bath. The electronic pet door will open to let Fido into the backyard for his morning visit, and the coffeemaker will begin brewing your coffee. You won’t have to ask your kids if they’ve brushed their teeth; the chip in their toothbrush will send a message to your smartphone letting you know the task is done. As you walk out the door, you won’t have to worry about finding your keys; the beacon sensor on the key chain makes them locatable to within two inches. It will be as if the Jetsons era has finally arrived.

While the hype-o-meter on the IoT has been blinking red for some time, everything described above is already technically feasible. To be certain, there will be obstacles, in particular in relation to a lack of common technical standards, but a wide variety of companies, consortia and government agencies are hard at work to make the IoT a reality. The result will be our transition from connectivity to hyper-connectivity, and like all things Moore’s law related, it will be here sooner than we realise.

The IoT means that all physical objects in the future will be assigned an IP address and be transformed into information technologies. As a result, your lamp, cat or pot plant will be part of an IT network. Things that were previously silent will now have a voice, and every object will be able to tell its own story and history. The refrigerator will know exactly when it was manufactured, the names of the people who built it, what factory it came from, and the day it left the assembly line, arrived at the retailer, and joined your home network. It will keep track of every time its door has been opened and which one of your kids forgot to close it. When the refrigerator’s motor begins to fail, it can signal for help, and when it finally dies, it will tell us how to disassemble its parts and best recycle them. Buildings will know every person who has ever worked there, and streetlights every car that has ever driven by.

All of these objects will communicate with each other and have access to the massive processing and storage power of the cloud, further enhanced by additional mobile and social networks. In the future all objects may become smart, in fact much smarter than they are today, and as these devices become networked, they will develop their own limited form of sentience, resulting in a world in which people, data and things come together. As a consequence of the power of embedded computing, we will see billions of smart, connected things joining a global neural network in the cloud.

In this world, the unknowable suddenly becomes knowable. For example, groceries will be tracked from field to table, and restaurants will keep tabs on every plate, what’s on it, who ate from it, and how quickly the waiters are moving it from kitchen to customer. As a result, when the next E coli outbreak occurs, we won’t have to close 500 eateries and wonder if it was the chicken or beef that caused the problem. We will know exactly which restaurant, supplier and diner to contact to quickly resolve the problem. The IoT and its billions of sensors will create an ambient intelligence network that thinks, senses and feels and contributes profoundly to the knowable universe.

Things that used to make sense suddenly won’t, such as smoke detectors. Why do most smoke detectors do nothing more than make loud beeps if your life is in mortal danger because of fire? In the future, they will flash your bedroom lights to wake you, turn on your home stereo, play an MP3 audio file that loudly warns, “Fire, fire, fire.” They will also contact the fire department, call your neighbours (in case you are unconscious and in need of help), and automatically shut off flow to the gas appliances in the house.

The byproduct of the IoT will be a living, breathing, global information grid, and technology will come alive in ways we’ve never seen before, except in science fiction movies. As we venture down the path toward ubiquitous computing, the results and implications of the phenomenon are likely to be mind-blowing. Just as the introduction of electricity was astonishing in its day, it eventually faded into the background, becoming an imperceptible, omnipresent medium in constant interaction with the physical world. Before we let this happen, and for all the promise of the IoT, we must ask critically important questions about this brave new world. For just as electricity can shock and kill, so too can billions of connected things networked online.

One of the central premises of the IoT is that everyday objects will have the capacity to speak to us and to each other. This relies on a series of competing communications technologies and protocols, many of which are eminently hackable. Take radio-frequency identification (RFID) technology, considered by many the gateway to the IoT. Even if you are unfamiliar with the name, chances are you have already encountered it in your life, whether it’s the security ID card you use to swipe your way into your office, your “wave and pay” credit card, the key to your hotel room, your Oyster card.

Even if you don’t use an RFID card for work, there’s a good chance you either have it or will soon have it embedded in the credit card sitting in your wallet. Hackers have been able to break into these as well, using cheap RFID readers available on eBay for just $50, tools that allow an attacker to wirelessly capture a target’s credit card number, expiration date and security code. Welcome to pocket picking 2.0.

More productive and more prison-like

A much rarer breed of hacker targets the physical elements that make up a computer system, including the microchips, electronics, controllers, memory, circuits, components, transistors and sensors – core elements of the internet of things. These hackers attack a device’s firmware, the set of computer instructions present on every electronic device we encounter, including TVs, mobile phones, game consoles, digital cameras, network routers, alarm systems, CCTVs, USB drives, traffic lights, gas station pumps and smart home management systems. Before we add billions of hackable things and communicate with hackable data transmission protocols, important questions must be asked about the risks for the future of security, crime, terrorism, warfare and privacy.

In the same way our every move online can be tracked, recorded, sold and monetised today, so too will that be possible in the near future in the physical world. Real space will become just like cyberspace. With the widespread adoption of more networked devices, what people do in their homes, cars, workplaces, schools and communities will be subjected to increased monitoring and analysis by the corporations making these devices. Of course these data will be resold to advertisers, data brokers and governments, providing an unprecedented view into our daily lives. Unfortunately, just like our social, mobile, locational and financial information, our IoT data will leak, providing further profound capabilities to stalkers and other miscreants interested in persistently tracking us. While it would certainly be possible to establish regulations and build privacy protocols to protect consumers from such activities, the greater likelihood is that every IoT-enabled device, whether an iron, vacuum, refrigerator, thermostat or lightbulb, will come with terms of service that grant manufacturers access to all your data. More troublingly, while it may be theoretically possible to log off in cyberspace, in your well-connected smart home there will be no “opt-out” provision.

We may find ourselves interacting with thousands of little objects around us on a daily basis, each collecting seemingly innocuous bits of data 24/7, information these things will report to the cloud, where it will be processed, correlated, and reviewed. Your smart watch will reveal your lack of exercise to your health insurance company, your car will tell your insurer of your frequent speeding, and your dustbin will tell your local council that you are not following local recycling regulations. This is the “internet of stool pigeons”, and though it may sound far-fetched, it’s already happening. Progressive, one of the largest US auto insurance companies, offers discounted personalised rates based on your driving habits. “The better you drive, the more you can save,” according to its advertising. All drivers need to do to receive the lower pricing is agree to the installation of Progressive’s Snapshot black-box technology in their cars and to having their braking, acceleration and mileage persistently tracked.
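
Usage-based pricing of this kind boils down to turning a telematics feed into a score. The sketch below is a toy illustration only: the inputs and weights are invented for the example and have nothing to do with Progressive’s actual Snapshot formula.

```python
def usage_based_discount(hard_brakes_per_100_miles: float,
                         night_miles_share: float,
                         annual_miles: int) -> float:
    """Toy telematics score: returns a discount fraction between 0 and 0.3.

    All weights are made up for illustration; a real insurer's model is proprietary.
    """
    score = 1.0
    score -= 0.05 * hard_brakes_per_100_miles             # frequent hard braking erodes the discount
    score -= 0.50 * night_miles_share                     # late-night miles priced as riskier
    score -= max(0.0, (annual_miles - 12_000) / 100_000)  # more miles, more exposure
    return round(max(0.0, min(0.3, 0.3 * score)), 3)

print(usage_based_discount(hard_brakes_per_100_miles=1.0,
                           night_miles_share=0.1,
                           annual_miles=10_000))   # 0.27, i.e. a 27% discount
```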

The IoT will also provide vast new options for advertisers to reach out and touch you on every one of your new smart connected devices. Every time you go to your refrigerator to get ice, you will be presented with ads for products based on the food your refrigerator knows you’re most likely to buy. Screens too will be ubiquitous, and marketers are already planning for the bounty of advertising opportunities. In late 2013, Google sent a letter to the Securities and Exchange Commission noting, “we and other companies could [soon] be serving ads and other content on refrigerators, car dashboards, thermostats, glasses and watches, to name just a few possibilities.”

Knowing that Google can already read your Gmail, record your every web search, and track your physical location on your Android mobile phone, what new powerful insights into your personal life will the company develop when its entertainment system is in your car, its thermostat regulates the temperature in your home, and its smart watch monitors your physical activity?

Not only will RFID and other IoT communications technologies track inanimate objects, they will be used for tracking living things as well. The British government has considered implanting RFID chips directly under the skin of prisoners, as is common practice with dogs. School officials across the US have begun embedding RFID chips in student identity cards, which pupils are required to wear at all times. In Contra Costa County, California, preschoolers are now required to wear basketball-style jerseys with electronic tracking devices built in that allow teachers and administrators to know exactly where each student is. According to school district officials, the RFID system saves “3,000 labour hours a year in tracking and processing students”.

Meanwhile, the ability to track employees, how much time they take for lunch, the length of their toilet breaks and the number of widgets they produce will become easy. Moreover, even things such as words typed per minute, eye movements, total calls answered, respiration, time away from desk and attention to detail will be recorded. The result will be a modern workplace that is simultaneously more productive and more prison-like.

At the scene of a suspected crime, police will be able to interrogate the refrigerator and ask the equivalent of, “Hey, buddy, did you see anything?” Child social workers will know there haven’t been any milk or nappies in the home, and the only thing stored in the fridge has been beer for the past week. The IoT also opens up the world for “perfect enforcement”. When sensors are everywhere and all data is tracked and recorded, it becomes more likely that you will receive a moving violation for going 26 miles per hour in a 25-mile-per-hour zone and get a parking ticket for being 17 seconds over on your meter.

The former CIA director David Petraeus has noted that the IoT will be “transformational for clandestine tradecraft”. While the old model of corporate and government espionage might have involved hiding a bug under the table, tomorrow the very same information might be obtained by intercepting in real time the data sent from your Wi-Fi lightbulb to the lighting app on your smart phone. Thus the devices you thought were working for you may in fact be on somebody else’s payroll, particularly that of Crime, Inc.

A network of unintended consequences

For all the untold benefits of the IoT, its potential downsides are colossal. Adding 50bn new objects to the global information grid by 2020 means that each of these devices, for good or ill, will be able to potentially interact with the other 50bn connected objects on earth. The result will be 2.5 sextillion potential networked object-to-object interactions – a network so vast and complex it can scarcely be understood or modelled. The IoT will be a global network of unintended consequences and black swan events, ones that will do things nobody ever planned. In this world, it is impossible to know the consequences of connecting your home’s networked blender to the same information grid as an ambulance in Tokyo, a bridge in Sydney, or a Detroit auto manufacturer’s production line.
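
For the arithmetically inclined, the 2.5 sextillion figure is simply 50bn times 50bn: every device paired with every other, counting each direction separately (distinct unordered pairs would be roughly half that). A quick check:

```python
devices = 50_000_000_000                 # Cisco's 2020 estimate: 50bn connected things
ordered_pairs = devices * devices        # every device potentially interacting with every other
print(f"{ordered_pairs:.1e}")            # 2.5e+21, i.e. 2.5 sextillion potential interactions
```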

The vast levels of cyber crime we currently face make it abundantly clear we cannot even adequately protect the standard desktops and laptops we presently have online, let alone the hundreds of millions of mobile phones and tablets we are adding annually. In what vision of the future, then, is it conceivable that we will be able to protect the next 50bn things, from pets to pacemakers to self-driving cars? The obvious reality is that we cannot.

Our technological threat surface area is growing exponentially and we have no idea how to defend it effectively. The internet of things will become nothing more than the Internet of things to be hacked.

Read the entire article here.

Image courtesy of Google Search.

Luck

Four-leaf_clover

Some think they have it constantly at their side, like a well-trained puppy. Others crave and seek it. And yet others believe they have been shunned by it. Some put their love lives down to it, and many believe it has had a hand in guiding their careers, friendships, and finances. Of course, many know that it — luck — plays a crucial part in their fortunes at the poker table, roulette wheel or at the races. So what really is luck? Does it stem from within or does it envelop us like a (mostly) benevolent aether? And more importantly, how can more of us find some and tune it to our purposes?

Carlin Flora over at Aeon presents an insightful analysis, with some rather simple answers. Oh, and you may wish to give away that rabbit’s foot.

From aeon:

In 1992, Archie Karas, then a waiter, headed out to Las Vegas. By 1995, he had turned $50 into $40 million, in what has become known as the biggest winning streak in gambling history. Most of us would call it an instance of great luck, or we might say of Archie himself: ‘What a lucky guy!’ The cold-hearted statistician would laugh at our superstitious notions, and instead describe a series of chance processes that happened to work out for Karas. In the larger landscape where randomness reigns, anything can happen at any given casino. Calling its beneficiaries lucky is simply sticking a label on it after the fact.

To investigate luck is to take on one of the grandest of all questions: how can we explain what happens to us, and whether we will be winners, losers or somewhere in the middle at love, work, sports, gambling and life overall? As it turns out, new findings suggest that luck is not a phenomenon that appears exclusively in hindsight, like a hail storm on your wedding day. Nor is it an expression of our desire to see patterns where none exist, like a conviction that your yellow sweater is lucky. The concept of luck is not a myth.

Instead, the studies show, luck can be powered by past good or bad luck, personality and, in a meta-twist, even our own ideas and beliefs about luck itself. Lucky streaks are real, but they are the product of more than just blind fate. Our ideas about luck influence the way we behave in risky situations. We really can make our own luck, though we don’t like to think of ourselves as lucky – a descriptor that undermines other qualities, like talent and skill. Luck can be a force, but it’s one we interact with, shape and cultivate. Luck helps determine our fate here on Earth, even if you think its ultimate cause divine.

Luck is perspective and point of view: if a secular man happened to survive because he took a meeting outside his office at the World Trade Center on the morning of 11 September 2001, he might simply acknowledge random chance in life without assigning a deeper meaning. A Hindu might conclude he had good karma. A Christian might say God was watching out for him so that he could fulfil a special destiny in His service. The mystic could insist he was born under lucky stars, as others are born with green eyes.

Traditionally, the Chinese think luck is an inner trait, like intelligence or upbeat mood, notes Maia Young, a management expert at the University of California, Los Angeles. ‘My mom always used to tell me, “You have a lucky nose”, because its particular shape was a lucky one, according to Chinese lore.’ Growing up in the American Midwest, it dawned on Young that the fleeting luck that Americans often talked about – a luck that seemed to visit the same person at certain times (‘I got lucky on that test!’) but not others (‘I got caught in traffic before my interview!’) – was not equivalent to the unchanging, stable luck her mother saw in her daughter, her nose being an advertisement of its existence within.

‘It’s something that I have that’s a possession of mine, that can be more relied upon than just dumb luck,’ says Young. The distinction stuck with her. You might think someone with a lucky nose wouldn’t roll up their sleeves to work hard – why bother? – but here’s another cultural difference in perceptions of luck. ‘In Chinese culture,’ she says, ‘hard work can go hand-in-hand with being lucky. The belief system accommodates both.’

On the other hand, because Westerners see effort and good fortune as taking up opposite corners of the ring, they are ambivalent about luck. They might pray for it and sincerely wish others they care about ‘Good luck!’ but sometimes they just don’t want to think of themselves as lucky. They’d rather be deserving. The fact that they live in a society that is neither random nor wholly meritocratic makes for an even messier slamdance between ‘hard work’ and ‘luck’. Case in point: when a friend gets into a top law or medical school, we might say: ‘Congratulations! You’ve persevered. You deserve it.’ Were she not to get in, we would say: ‘Acceptance is arbitrary. Everyone’s qualified these days – it’s the luck of the draw.’

Read the entire article here.

Image: Four-leaf clover. Some consider it a sign of good luck. Courtesy of Phyzome.

Nuisance Flooding = Sea-Level Rise

hurricane_andrew

Government officials in Florida are barred from using the terms “climate change”, “global warming”, “sustainable” and other related terms. Apparently, they’ll have to use the euphemism “nuisance flooding” in place of “sea-level rise”. One wonders what literary trick they’ll conjure up next time the state gets hit by a hurricane — “Oh, that? Just a ‘mischievous little breeze’, I’m not a scientist you know.”

From the Guardian:

Officials with the Florida Department of Environmental Protection (DEP), the agency in charge of setting conservation policy and enforcing environmental laws in the state, issued directives in 2011 barring thousands of employees from using the phrases “climate change” and “global warming”, according to a bombshell report by the Florida Center for Investigative Reporting (FCIR).

The report ties the alleged policy, which is described as “unwritten”, to the election of Republican governor Rick Scott and his appointment of a new department director that year. Scott, who was re-elected last November, has declined to say whether he believes in climate change caused by human activity.

“I’m not a scientist,” he said in one appearance last May.

Scott’s office did not return a call Sunday from the Guardian, seeking comment. A spokesperson for the governor told the FCIR team: “There’s no policy on this.”

The FCIR report was based on statements by multiple named former employees who worked in different DEP offices around Florida. The instruction not to refer to “climate change” came from agency supervisors as well as lawyers, according to the report.

“We were told not to use the terms ‘climate change’, ‘global warming’ or ‘sustainability’,” the report quotes Christopher Byrd, who was an attorney with the DEP’s Office of General Counsel in Tallahassee from 2008 to 2013, as saying. “That message was communicated to me and my colleagues by our superiors in the Office of General Counsel.”

“We were instructed by our regional administrator that we were no longer allowed to use the terms ‘global warming’ or ‘climate change’ or even ‘sea-level rise’,” said a second former DEP employee, Kristina Trotta. “Sea-level rise was to be referred to as ‘nuisance flooding’.”

According to the employees’ accounts, the ban left damaging holes in everything from educational material published by the agency to training programs to annual reports on the environment that could be used to set energy and business policy.

The 2014 national climate assessment for the US found an “imminent threat of increased inland flooding” in Florida due to climate change and called the state “uniquely vulnerable to sea level rise”.

Read the entire story here.

Image: Hurricane Floyd 1999, a “mischievous little breeze”. Courtesy of NASA.

The Power of Mediocrity

Over-achievers may well frown upon the slacking mediocre souls who strive to do less. But, mediocrity has a way of pervading the lives of the constantly striving, 18-hour-a-day multi-taskers as well. The figure of speech “jack of all trades, master of none” sums up the inevitability of mediocrity for those who strive to do everything, but do nothing well. In fact, pursuit of the mediocre may well be an immutable universal law — both for under-achievers and over-achievers, and for that vast, second-rate, mediocre middle-ground of averageness.

From the Guardian:

In the early years of the last century, Spanish philosopher José Ortega y Gasset proposed a solution to society’s ills that still strikes me as ingenious, in a deranged way. He argued that all public sector workers from the top down (though, come to think of it, why not everyone else, too?) should be demoted to the level beneath their current job. His reasoning foreshadowed the Peter Principle: in hierarchies, people “rise to their level of incompetence”. Do your job well, and you’re rewarded with promotion, until you reach a job you’re less good at, where you remain.

In a recent book, The Hard Thing About Hard Things, the tech investor Ben Horowitz adds a twist: “The Law of Crappy People”. As soon as someone on a given rung at a company gets as good as the worst person the next rung up, he or she may expect a promotion. Yet, if it’s granted, the firm’s talent levels will gradually slide downhill. No one person need be peculiarly crappy for this to occur; bureaucracies just tend to be crappier than the sum of their parts.

Yet it’s wrong to think of these pitfalls as restricted to organisations. There’s a case to be made that the gravitational pull of the mediocre affects all life – as John Stuart Mill put it, that “the general tendency of things throughout the world is to render mediocrity the ascendant power among mankind”. True, it’s most obvious in the workplace (hence the observation that “a meeting moves at the pace of the slowest mind in the room”), but the broader point is that in any domain – work, love, friendship, health – crappy solutions crowd out good ones time after time, so long as they’re not so bad as to destroy the system. People and organisations hit plateaux not because they couldn’t do better, but because a plateau is a tolerable, even comfortable place. Even evolution – life itself! – is all about mediocrity. “Survival of the fittest” isn’t a progression towards greatness; it just means the survival of the sufficiently non-terrible.

And mediocrity is cunning: it can disguise itself as achievement. The cliche of a “mediocre” worker is a Dilbert-esque manager with little to do. But as Greg McKeown notes, in his book Essentialism: The Disciplined Pursuit Of Less, the busyness of the go-getter can lead to mediocrity, too. Throw yourself at every opportunity and you’ll end up doing unimportant stuff – and badly. You can’t fight this with motivational tricks or cheesy mission statements: you need a discipline, a rule you apply daily, to counter the pull of the sub-par. For a company, that might mean stricter, more objective promotion policies. For the over-busy person, there’s McKeown’s “90% Rule” – when considering an option, ask: does it score at least 9/10 on some relevant criterion? If not, say no. (Ideally, that criterion is: “Is this fulfilling?”, but the rule still works if it’s “Does this pay the bills?”).

Read the entire story here.

The Demise of the Language of Landscape

IMG_2006

In his new book entitled Landmarks, author Robert Macfarlane ponders the relationship of words to our natural landscape. Reviewers describe the book as a “field guide to the literature of nature”. Sadly, Macfarlane’s detailed research for the book chronicles a disturbing trend: the culling of many words from our everyday lexicon that describe our natural world to make way for the buzzwords of progress. This substitution comes in the form of newer memes that describe our narrow, urbanized and increasingly virtual world circumscribed by technology. Macfarlane cites the Oxford Junior Dictionary (OJD) as a vivid example of the evisceration of our language of landscape. The OJD has removed words such as acorn, beech, conker, dandelion, heather, heron, kingfisher, pasture and willow. In their place we now find words like attachment, blog, broadband, bullet-point, celebrity, chatroom, cut-and-paste, MP3 player and voice-mail. Get the idea?

I’m no fundamentalist luddite — I’m writing a blog after all — but surely some aspects of our heritage warrant protection. We are an intrinsic part of the natural environment despite our increasing urbanization. Don’t we all crave the escape to a place where we can lounge under a drooping willow, surrounded by nothing more than the buzzing of insects and the babbling of a stream? I’d rather that than deal with the next attachment or voice-mail.

What a loss it would be for our children, and a double-edged loss at that. We, the preceding generation, continue to preside over the systematic destruction of our natural landscape. And, in doing so, we remove the words as well — the words that once described what we still crave.

From the Guardian:

Eight years ago, in the coastal township of Shawbost on the Outer Hebridean island of Lewis, I was given an extraordinary document. It was entitled “Some Lewis Moorland Terms: A Peat Glossary”, and it listed Gaelic words and phrases for aspects of the tawny moorland that fills Lewis’s interior. Reading the glossary, I was amazed by the compressive elegance of its lexis, and its capacity for fine discrimination: a caochan, for instance, is “a slender moor-stream obscured by vegetation such that it is virtually hidden from sight”, while a feadan is “a small stream running from a moorland loch”, and a fèith is “a fine vein-like watercourse running through peat, often dry in the summer”. Other terms were striking for their visual poetry: rionnach maoim means “the shadows cast on the moorland by clouds moving across the sky on a bright and windy day”; èit refers to “the practice of placing quartz stones in streams so that they sparkle in moonlight and thereby attract salmon to them in the late summer and autumn”, and teine biorach is “the flame or will-o’-the-wisp that runs on top of heather when the moor burns during the summer”.

The “Peat Glossary” set my head a-whirr with wonder-words. It ran to several pages and more than 120 terms – and as that modest “Some” in its title acknowledged, it was incomplete. “There’s so much language to be added to it,” one of its compilers, Anne Campbell, told me. “It represents only three villages’ worth of words. I have a friend from South Uist who said her grandmother would add dozens to it. Every village in the upper islands would have its different phrases to contribute.” I thought of Norman MacCaig’s great Hebridean poem “By the Graveyard, Luskentyre”, where he imagines creating a dictionary out of the language of Donnie, a lobster fisherman from the Isle of Harris. It would be an impossible book, MacCaig concluded:

A volume thick as the height of the Clisham,

A volume big as the whole of Harris,

A volume beyond the wit of scholars.

The same summer I was on Lewis, a new edition of the Oxford Junior Dictionary was published. A sharp-eyed reader noticed that there had been a culling of words concerning nature. Under pressure, Oxford University Press revealed a list of the entries it no longer felt to be relevant to a modern-day childhood. The deletions included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture and willow. The words taking their places in the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player and voice-mail. As I had been entranced by the language preserved in the prose-poem of the “Peat Glossary”, so I was dismayed by the language that had fallen (been pushed) from the dictionary. For blackberry, read Blackberry.

I have long been fascinated by the relations of language and landscape – by the power of strong style and single words to shape our senses of place. And it has become a habit, while travelling in Britain and Ireland, to note down place words as I encounter them: terms for particular aspects of terrain, elements, light and creaturely life, or resonant place names. I’ve scribbled these words in the backs of notebooks, or jotted them down on scraps of paper. Usually, I’ve gleaned them singly from conversations, maps or books. Now and then I’ve hit buried treasure in the form of vernacular word-lists or remarkable people – troves that have held gleaming handfuls of coinages, like the Lewisian “Peat Glossary”.

Not long after returning from Lewis, and spurred on by the Oxford deletions, I resolved to put my word-collecting on a more active footing, and to build up my own glossaries of place words. It seemed to me then that although we have fabulous compendia of flora, fauna and insects (Richard Mabey’s Flora Britannica and Mark Cocker’s Birds Britannica chief among them), we lack a Terra Britannica, as it were: a gathering of terms for the land and its weathers – terms used by crofters, fishermen, farmers, sailors, scientists, miners, climbers, soldiers, shepherds, poets, walkers and unrecorded others for whom particularised ways of describing place have been vital to everyday practice and perception. It seemed, too, that it might be worth assembling some of this terrifically fine-grained vocabulary – and releasing it back into imaginative circulation, as a way to rewild our language. I wanted to answer Norman MacCaig’s entreaty in his Luskentyre poem: “Scholars, I plead with you, / Where are your dictionaries of the wind … ?”

Read the entire article here and then buy the book, which is published in March 2015.

Image: Sunset over the Front Range. Courtesy of the author.

News Anchor as Cult Hero

Google-search-news-anchor

Why and when did the news anchor, or newsreader as he or she is known in non-US parts of the world, acquire the status of cult hero? And, why is this a peculiarly US phenomenon? Let’s face it, TV newsreaders in the UK, on the BBC or ITV, certainly do not have a following along the lines of their US celebrity counterparts like Brian Williams, Megyn Kelly or Anderson Cooper. Why?

From the Guardian:

A game! Spot the odd one out in the following story. This year has been a terrible one so far for those who care about American journalism: the much-loved New York Times journalist David Carr died suddenly on 12 February; CBS correspondent Bob Simon was killed in a car crash the day before; Jon Stewart, famously the “leading news source for young Americans”, announced that he is quitting the Daily Show; his colleague Stephen Colbert is moving over from news satire to the softer arena of a nightly talk show; NBC anchor Brian Williams, as famous in America as Jeremy Paxman is in Britain, has been suspended after it was revealed he had “misremembered” events involving himself while covering the war in Iraq; Bill O’Reilly, an anchor on Fox News, the most watched cable news channel in the US, has been accused of being on similarly vague terms with the truth.

News of the Fox News anchor probably sounds like “dog bites man” to most Britons, who remember that this network recently described Birmingham as a no-go area for non-Muslims. But this latest scandal involving O’Reilly reveals something quite telling about journalism in America.

Whereas in Britain journalists are generally viewed as occupying a place on the food chain somewhere between bottom-feeders and cockroaches, in America there remains, still, a certain idealisation of journalists, protected by a gilded halo hammered out by sentimental memories of Edward R Murrow and Walter Cronkite.

Even while Americans’ trust in mass media continues to plummet, journalists enjoy a kind of heroic fame that would baffle their British counterparts. Television anchors and commentators, from Rachel Maddow on the left to Sean Hannity on the right, are lionised in a way that, say, Huw Edwards, is, quite frankly, not. A whole genre of film exists in the US celebrating the heroism of journalists, from All the President’s Men to Good Night, and Good Luck. In Britain, probably the most popular depiction of journalists came from Spitting Image, where they were snuffling pigs in pork-pie hats.

So whenever a journalist in the US has been caught lying, the ensuing soul-searching and garment-rending has been about as prolonged and painful as a PhD on proctology. The New York Times and the New Republic both imploded when it was revealed that their journalists, respectively Jayson Blair and Stephen Glass, had fabricated their stories. Their tales have become part of American popular culture – The Wire referenced Blair in its fifth season and a film was made about the New Republic’s scandal – like national myths that must never be forgotten.

By contrast, when it was revealed that The Independent’s Johann Hari had committed plagiarism and slandered his colleagues on Wikipedia, various journalists wrote bewildering defences of him and the then Independent editor said initially that Hari would return to the paper. Whereas Hari’s return to the public sphere three years after his resignation has been largely welcomed by the British media, Glass and Blair remain shunned figures in the US, more than a decade after their scandals.

Which brings us back to the O’Reilly scandal, now unfolding in the US. Once it was revealed that NBC’s liberal Brian Williams had exaggerated personal anecdotes – claiming to have been in a helicopter that was shot at when he was in the one behind, for starters – the hunt was inevitably on for an equally big conservative news scalp. Enter stage left: Bill O’Reilly.

So sure, O’Reilly claimed that in his career he has been in “active war zones” and “in the Falklands” when he in fact covered a protest in Buenos Aires during the Falklands war. And sure, O’Reilly’s characteristically bullish defence that he “never said” he was “on the Falkland Islands” (original quote: “I was in a situation one time, in a war zone in Argentina, in the Falklands …”) and that being at a protest thousands of miles from combat constitutes “a war zone” verges on the officially bonkers (as the Washington Post put it, “that would mean that any reporter who covered an anti-war protest in Washington during the Iraq War was doing combat reporting”). But does any of this bother either O’Reilly or Fox News? It does not.

Unlike Williams, who slunk away in shame, O’Reilly has been bullishly combative, threatening journalists who dare to cover the story and saying that they deserve to be “in the kill zone”. Fox News too has been predictably untroubled by allegations of lies: “Fox News chairman and CEO Roger Ailes and all senior management are in full support of Bill O’Reilly,” it said in a statement.

Read the entire story here.

Image courtesy of Google Search.

The US Senator From Oklahoma and the Snowball

By their own admission, Republicans in the US Congress are not scientists, and clearly most, if not all, have no grasp of science, the scientific method, or the meaning of scientific theory or broad scientific consensus. The Senator from Oklahoma, James Inhofe, is the perfect embodiment of this extraordinary condition — perhaps a psychosis even — whereby a human living in the 21st century has no clue. Senator Inhofe recently gave us his infantile analysis of climate change on the Senate floor, accompanied by a snowball. This will make you laugh, then cry.

From Scientific American:

“In case we have forgotten, because we keep hearing that 2014 has been the warmest year on record, I ask the chair, you know what this is? It’s a snowball. And that’s just from outside here. So it’s very, very cold out.”

Oklahoma Senator James Inhofe, the biggest and loudest climate change denier in Congress, last week on the floor of the senate. But his facile argument, that it’s cold enough for snow to exist in Washington, D.C., therefore climate change is a hoax, was rebutted in the same venue by Rhode Island Senator Sheldon Whitehouse:

“You can believe NASA and you can believe what their satellites measure on the planet, or you can believe the Senator with the snowball. The United States Navy takes this very seriously, to the point where Admiral Locklear, who is the head of the Pacific Command, has said that climate change is the biggest threat that we face in the Pacific…you can either believe the United States Navy or you can believe the Senator with the snowball…every major American scientific society has put itself on record, many of them a decade ago, that climate change is deadly real. They measure it, they see it, they know why it happens. The predictions correlate with what we see as they increasingly come true. And the fundamental principles, that it is derived from carbon pollution, which comes from burning fossil fuels, are beyond legitimate dispute…so you can believe every single major American scientific society, or you can believe the Senator with the snowball.”

Read the entire story here.

Video: Senator Inhofe with Snowball. Courtesy of C-Span.

Time For a New Body, Literally

Brainthatwouldntdie_film_poster

Let me be clear. I’m not referring to a hair transplant, but a head transplant.

A disturbing story has been making the media rounds recently. Dr. Sergio Canavero, from the Turin Advanced Neuromodulation Group in Italy, suggests that the time is right to attempt the transplantation of a human head onto a different body. Canavero believes that advances in surgical techniques and immunotherapy are such that a transplantation could be attempted by 2017. Interestingly enough, he has already had several people volunteer for a new body.

Ethics aside, it certainly doesn’t stretch the imagination to believe Hollywood’s elite would clamor for this treatment. Now, I wonder if some people, liking their own body, would want a new head?

From New Scientist:

It’s heady stuff. The world’s first attempt to transplant a human head will be launched this year at a surgical conference in the US. The move is a call to arms to get interested parties together to work towards the surgery.

The idea was first proposed in 2013 by Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy. He wants to use the surgery to extend the lives of people whose muscles and nerves have degenerated or whose organs are riddled with cancer. Now he claims the major hurdles, such as fusing the spinal cord and preventing the body’s immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017.

Canavero plans to announce the project at the annual conference of the American Academy of Neurological and Orthopaedic Surgeons (AANOS) in Annapolis, Maryland, in June. Is society ready for such momentous surgery? And does the science even stand up?

The first attempt at a head transplant was carried out on a dog by Soviet surgeon Vladimir Demikhov in 1954. A puppy’s head and forelegs were transplanted onto the back of a larger dog. Demikhov conducted several further attempts but the dogs only survived between two and six days.

The first successful head transplant, in which one head was replaced by another, was carried out in 1970. A team led by Robert White at Case Western Reserve University School of Medicine in Cleveland, Ohio, transplanted the head of one monkey onto the body of another. They didn’t attempt to join the spinal cords, though, so the monkey couldn’t move its body, but it was able to breathe with artificial assistance. The monkey lived for nine days until its immune system rejected the head. Although few head transplants have been carried out since, many of the surgical procedures involved have progressed. “I think we are now at a point when the technical aspects are all feasible,” says Canavero.

This month, he published a summary of the technique he believes will allow doctors to transplant a head onto a new body (Surgical Neurology International, doi.org/2c7). It involves cooling the recipient’s head and the donor body to extend the time their cells can survive without oxygen. The tissue around the neck is dissected and the major blood vessels are linked using tiny tubes, before the spinal cords of each person are cut. Cleanly severing the cords is key, says Canavero.

The recipient’s head is then moved onto the donor body and the two ends of the spinal cord – which resemble two densely packed bundles of spaghetti – are fused together. To achieve this, Canavero intends to flush the area with a chemical called polyethylene glycol, and follow up with several hours of injections of the same stuff. Just like hot water makes dry spaghetti stick together, polyethylene glycol encourages the fat in cell membranes to mesh.

Next, the muscles and blood supply would be sutured and the recipient kept in a coma for three or four weeks to prevent movement. Implanted electrodes would provide regular electrical stimulation to the spinal cord, because research suggests this can strengthen new nerve connections.

When the recipient wakes up, Canavero predicts they would be able to move and feel their face and would speak with the same voice. He says that physiotherapy would enable the person to walk within a year. Several people have already volunteered to get a new body, he says.

The trickiest part will be getting the spinal cords to fuse. Polyethylene glycol has been shown to prompt the growth of spinal cord nerves in animals, and Canavero intends to use brain-dead organ donors to test the technique. However, others are sceptical that this would be enough. “There is no evidence that the connectivity of cord and brain would lead to useful sentient or motor function following head transplantation,” says Richard Borgens, director of the Center for Paralysis Research at Purdue University in West Lafayette, Indiana.

Read the entire article here.

Image: Theatrical poster for the movie The Brain That Wouldn’t Die (1962). Courtesy of Wikipedia.

Jon Ronson Versus His Spambot Infomorph Imposter

While this may sound like a 1980s monster flick, it’s rather more serious.

Author, journalist, filmmaker Jon Ronson weaves a fun but sinister tale of the theft of his own identity. The protagonists: a researcher in technology and cyberculture, a so-called “creative technologist” and a university lecturer in English and American literature. Not your typical collection of “identity thieves”, trolls, revenge pornographers, and online shamers. But an unnerving, predatory trio nevertheless.

From the Guardian:

In early January 2012, I noticed that another Jon Ronson had started posting on Twitter. His photograph was a photograph of my face. His Twitter name was @jon_ronson. His most recent tweet read: “Going home. Gotta get the recipe for a huge plate of guarana and mussel in a bap with mayonnaise :D #yummy.”

“Who are you?” I tweeted him.

“Watching #Seinfeld. I would love a big plate of celeriac, grouper and sour cream kebab with lemongrass #foodie,” he tweeted. I didn’t know what to do.

The next morning, I checked @jon_ronson’s timeline before I checked my own. In the night he had tweeted, “I’m dreaming something about #time and #cock.” He had 20 followers.

I did some digging. A young academic from Warwick University called Luke Robert Mason had a few weeks earlier posted a comment on the Guardian site. It was in response to a short video I had made about spambots. “We’ve built Jon his very own infomorph,” he wrote. “You can follow him on Twitter here: @jon_ronson.”

I tweeted him: “Hi!! Will you take down your spambot please?”

Ten minutes passed. Then he replied, “We prefer the term infomorph.”

“But it’s taken my identity,” I wrote.

“The infomorph isn’t taking your identity,” he wrote back. “It is repurposing social media data into an infomorphic aesthetic.”

I felt a tightness in my chest.

“#woohoo damn, I’m in the mood for a tidy plate of onion grill with crusty bread. #foodie,” @jon_ronson tweeted.

I was at war with a robot version of myself.

A month passed. @jon_ronson was tweeting 20 times a day about its whirlwind of social engagements, its “soirées” and wide circle of friends. The spambot left me feeling powerless and sullied.

I tweeted Luke Robert Mason. If he was adamant that he wouldn’t take down his spambot, perhaps we could at least meet? I could film the encounter and put it on YouTube. He agreed.

I rented a room in central London. He arrived with two other men – the team behind the spambot. All three were academics. Luke was the youngest, handsome, in his 20s, a “researcher in technology and cyberculture and director of the Virtual Futures conference”. David Bausola was a “creative technologist” and the CEO of the digital agency Philter Phactory. Dan O’Hara had a shaved head and a clenched jaw. He was in his late 30s, a lecturer in English and American literature at the University of Cologne.

I spelled out my grievances. “Academics,” I began, “don’t swoop into a person’s life uninvited and use him for some kind of academic exercise, and when I ask you to take it down you’re, ‘Oh, it’s not a spambot, it’s an infomorph.’”

Dan nodded. He leaned forward. “There must be lots of Jon Ronsons out there?” he began. “People with your name? Yes?”

I looked suspiciously at him. “I’m sure there are people with my name,” I replied, carefully.

“I’ve got the same problem,” Dan said with a smile. “There’s another academic out there with my name.”

“You don’t have exactly the same problem as me,” I said, “because my exact problem is that three strangers have stolen my identity and have created a robot version of me and are refusing to take it down.”

Dan let out a long-suffering sigh. “You’re saying, ‘There is only one Jon Ronson’,” he said. “You’re proposing yourself as the real McCoy, as it were, and you want to maintain that integrity and authenticity. Yes?”

I stared at him.

“We’re not quite persuaded by that,” he continued. “We think there’s already a layer of artifice and it’s your online personality – the brand Jon Ronson – you’re trying to protect. Yeah?”

“No, it’s just me tweeting,” I yelled.

“The internet is not the real world,” said Dan.

“I write my tweets,” I replied. “And I press send. So it’s me on Twitter.” We glared at each other. “That’s not academic,” I said. “That’s not postmodern. That’s the fact of it. It’s a misrepresentation of me.”

“You’d like it to be more like you?” Dan said.

“I’d like it to not exist,” I said.

“I find that quite aggressive,” he said. “You’d like to kill these algorithms? You must feel threatened in some way.” He gave me a concerned look. “We don’t go around generally trying to kill things we find annoying.”

“You’re a troll!” I yelled.

I dreaded uploading the footage to YouTube, because I’d been so screechy. I steeled myself for mocking comments and posted it. I left it 10 minutes. Then, with apprehension, I had a look.

“This is identity theft,” read the first comment I saw. “They should respect Jon’s personal liberty.”

Read the entire story here.

Video: JON VS JON Part 2 | Escape and Control. Courtesy of Jon Ronson.

Another London Bridge

nep-bridge-008

I don’t live in London. But having been born and raised there I still have a particular affinity for this great city. So, when the London Borough of Wandsworth recently published submissions for a new bridge over the River Thames I had to survey the designs. Over 70 teams have submitted ideas since the process was opened to competition in December 2014. The bridge will eventually span the river between Nine Elms and Pimlico.

Please check out the official designs here. Some are quite extraordinary.

Image: Scheme 008. Courtesy of Nine Elms to Pimlico (NEP) Bridge Competition, London Borough of Wandsworth.

 

A Physics Based Theory of Life

Carnot_heat_engine

Those who subscribe to the non-creationist theory of the origins of life tend to gravitate towards the idea of assembly of self-replicating, organic molecules in our primeval oceans — the so-called primordial soup theory. Recently, however, Professor Jeremy England of MIT has proposed a thermodynamic explanation, which posits that inorganic matter tends to organize — under the right conditions — in a way that enables it to dissipate increasing amounts of energy. This is one of the fundamental attributes of living organisms.

Could we be the product of the Second Law of Thermodynamics, nothing more than the expression of increasing entropy?

Read more of this fascinating new hypothesis below or check out England’s paper on the Statistical Physics of Self-replication.

From Quanta:

Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.
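
As an aside, the entropy bookkeeping described above can be written down in one line. This is the standard statement of the second law for a system plus its surroundings, not England’s own formula (which the article does not reproduce):

```latex
% Standard statement of the second law for a system exchanging energy
% with its surroundings: total entropy never decreases.
\[
  \Delta S_{\mathrm{universe}}
    = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}}
    \ge 0
\]
% A living system can keep its own entropy low
% (\Delta S_{\mathrm{system}} < 0) only by exporting enough entropy,
% for example by absorbing concentrated sunlight and radiating diffuse
% infrared heat, so that \Delta S_{\mathrm{surroundings}} exceeds
% |\Delta S_{\mathrm{system}}|.
```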

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

Read the entire story here.

Image: Carnot engine diagram, where an amount of heat QH flows from a high temperature TH furnace through the fluid of the “working body” (working substance) and the remaining heat QC flows into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions. Courtesy of Wikipedia.

 

Net Neutrality Lives!

The US Federal Communications Commission (FCC) took a giant step in the right direction, on February 26, 2015, when it voted to regulate internet broadband much like a public utility. This is a great victory for net neutrality advocates and consumers who had long sought to protect equal access for all to online services and information. Tim Berners Lee, inventor of the World Wide Web, offered his support and praise for the ruling, saying:

“It’s about consumer rights, it’s about free speech, it’s about democracy.”

From the Guardian:

Internet activists scored a landmark victory on Thursday as the top US telecommunications regulator approved a plan to govern broadband internet like a public utility.

Following one of the most intense – and bizarre – lobbying battles in the history of modern Washington politics, the Federal Communications Commission (FCC) passed strict new rules that give the body its greatest power over the cable industry since the internet went mainstream.

FCC chairman Tom Wheeler – a former telecom lobbyist turned surprise hero of net-neutrality supporters – thanked the 4m people who had submitted comments on the new rules. “Your participation has made this the most open process in FCC history,” he said. “We listened and we learned.”

Wheeler said that while other countries were trying to control the internet, the sweeping new US protections on net neutrality – the concept that all information and services should have equal access online – represented “a red-letter day for internet freedom”.

“The internet is simply too important to be left without rules and without a referee on the field,” said Wheeler. “Today’s order is more powerful and more expansive than any previously suggested.”

Broadband providers will be banned from creating so-called “fast lanes” blocking or slowing traffic online, and will oversee mobile broadband as well as cable. The FCC would also have the authority to challenge unforeseen barriers broadband providers might create as the internet develops.

Activists and tech companies argue the new rules are vital to protect net neutrality – the concept that all information and services should have equal access to the internet. The FCC’s two Republican commissioners, Ajit Pai and Michael O’Rielly, voted against the plan but were overruled at a much anticipated meeting by three Democratic members on the panel.

Republicans have long fought the FCC’s net neutrality protections, arguing the rules will create an unnecessary burden on business. They have accused Barack Obama of bullying the regulator into the move in order to score political points, with conservative lawmakers and potential 2016 presidential candidates expected to keep the fight going well into that election campaign.

Pai said the FCC was flip-flopping for “one reason and one reason only: president Obama told us to do so”.

Wheeler dismissed accusations of a “secret” plan as “nonsense”. “This is no more a plan to regulate the internet than the first amendment is a plan to regulate free speech,” Wheeler said.

“This is the FCC using all the tools in our toolbox to protect innovators and consumers.”

Obama offered his support to the rules late last year, following an online activism campaign that pitched internet organisers and companies from Netflix and Reddit to the online craft market Etsy and I Can Has Cheezburger? – weblog home of the Lolcats meme – against Republican leaders and the cable and telecom lobbies.

Broadband will now be regulated under Title II of the Communications Act – the strongest legal authority the FCC has in its authority. Obama called on the independent regulator to implement Title II last year, leading to charges that he unduly influenced Wheeler’s decision that are now being investigated in Congress.

A small band of protesters gathered in the snow outside the FCC’s Washington headquarters before the meeting on Thursday, in celebration of their success in lobbying for a dramatic U-turn in regulation. Wheeler and his Democratic colleagues, Mignon Clyburn and Jessica Rosenworcel, were cheered as they sat down for the meeting.

Joining the activists outside was Apple co-founder Steve Wozniak, who said the FCC also needed more power to prevent future attacks on the open internet.

“We have won on net neutrality,” Wozniak told the Guardian. “This is important because they don’t want the FCC to have oversight over other bad stuff.”

Tim Berners Lee, inventor of the world wide web, addressed the meeting via video, saying he applauded the FCC’s decision to protect net neutrality: “More than anything else, the action you take today will preserve the reality of a permission-less innovation that is the heart of the internet.”

“It’s about consumer rights, it’s about free speech, it’s about democracy,” Berners Lee said.

Clyburn compared the new rules to the Bill of Rights. “We are here to ensure that there is only one internet,” she said. “We want to ensure that those with deep pockets have the same opportunity as those with empty pockets to succeed.”

Read the entire story here.

The Killer Joke and the Killer Idea

Some jokes can make you laugh until you cry. Some jokes can kill. And, research shows that thoughts alone can have equally devastating consequences.

From BBC:

Beware the scaremongers. Like a witch doctor’s spell, their words might be spreading modern plagues.

We have long known that expectations of a malady can be as dangerous as a virus. In the same way that voodoo shamans could harm their victims through the power of suggestion, priming someone to think they are ill can often produce the actual symptoms of a disease. Vomiting, dizziness, headaches, and even death, could be triggered through belief alone. It’s called the “nocebo effect”.

But it is now becoming clear just how easily those dangerous beliefs can spread through gossip and hearsay – with potent effect. It may be the reason why certain houses seem cursed with illness, and why people living near wind turbines report puzzling outbreaks of dizziness, insomnia and vomiting. If you have ever felt “fluey” after a vaccination, believed your cell phone was giving you a headache, or suffered an inexplicable food allergy, you may have also fallen victim to a nocebo jinx. “The nocebo effect shows the brain’s power,” says Dimos Mitsikostas, from Athens Naval Hospital in Greece. “And we cannot fully explain it.”

A killer joke

Doctors have long known that beliefs can be deadly – as demonstrated by a rather nasty student prank that went horribly wrong. The 18th Century Viennese medic, Erich Menninger von Lerchenthal, describes how students at his medical school picked on a much-disliked assistant. Planning to teach him a lesson, they sprung upon him before announcing that he was about to be decapitated. Blindfolding him, they bowed his head onto the chopping block, before dropping a wet cloth on his neck. Convinced it was the kiss of a steel blade, the poor man “died on the spot”.

While anecdotes like this abound, modern researchers had mostly focused on the mind’s ability to heal, not harm – the “placebo effect”, from the Latin for “I will please”. Every clinical trial now randomly assigns patients to either a real drug, or a placebo in the form of an inert pill. The patient doesn’t know which they are taking, and even those taking the inert drug tend to show some improvement – thanks to their faith in the treatment.

Yet alongside the benefits, people taking placebos often report puzzling side effects – nausea, headaches, or pain – that are unlikely to come from an inert tablet. The problem is that people in a clinical trial are given exactly the same health warnings whether they are taking the real drug or the placebo – and somehow, the expectation of the symptoms can produce physical manifestations in some placebo takers. “It’s a consistent phenomenon, but medicine has never really dealt with it,” says Ted Kaptchuk at Harvard Medical School.

Over the last 10 years, doctors have shown that this nocebo effect – Latin for “I will harm” – is very common. Reviewing the literature, Mitsikostas has so far documented strong nocebo effects in many treatments for headache, multiple sclerosis, and depression. In trials for Parkinson’s disease, as many as 65% report adverse events as a result of their placebo. “And around one out of 10 treated will drop out of a trial because of nocebo, which is pretty high,” he says.

Although many of the side-effects are somewhat subjective – like nausea or pain – nocebo responses do occasionally show up as rashes and skin complaints, and they are sometimes detectable on physiological tests too. “It’s unbelievable – they are taking sugar pills and when you measure liver enzymes, they are elevated,” says Mitsikostas.

And for those who think these side effects are somehow “deliberately” willed or imagined, measures of nerve activity following nocebo treatment have shown that the spinal cord begins responding to heightened pain before conscious deliberation would even be possible.

Consider the near fatal case of “Mr A”, reported by doctor Roy Reeves in 2007. Mr A was suffering from depression when he consumed a whole bottle of pills. Regretting his decision, Mr A rushed to ER, and promptly collapsed at reception. It looked serious; his blood pressure had plummeted, and he was hyperventilating; he was immediately given intravenous fluids. Yet blood tests could find no trace of the drug in his system. Four hours later, another doctor arrived to inform Reeves that the man had been in the placebo arm of a drugs trial; he had “overdosed” on sugar tablets. Upon hearing the news, the relieved Mr A soon recovered.

We can never know whether the nocebo effect would have actually killed Mr A, though Fabrizio Benedetti at the University of Turin Medical School thinks it is certainly possible. He has scanned subjects’ brains as they undergo nocebo suggestions, which seems to set off a chain of activation in the hypothalamus, and the pituitary and adrenal glands – areas that deal with extreme threats to our body. If your fear and belief were strong enough, the resulting cocktail of hormones could be deadly, he says.

Read the entire story here.

Why Are We Obsessed With Zombies?

Google-search-zombie

Previous generations worried about Frankenstein, evil robots, even more evil aliens, hungry dinosaurs and, more recently, vampires. Nowadays our culture seems to be singularly obsessed with zombies. Why?

From the Conversation:

The zombie invasion is here. Our bookshops, cinemas and TVs are dripping with the pustulating debris of their relentless shuffle to cultural domination.

A search for “zombie fiction” on Amazon currently provides you with more than 25,000 options. Barely a week goes by without another onslaught from the living dead on our screens. We’ve just seen the return of one of the most successful of these, The Walking Dead, starring Andrew Lincoln as small-town sheriff, Rick Grimes. The show follows the adventures of Rick and fellow survivors as they kill lots of zombies and increasingly, other survivors, as they desperately seek safety.

Generational monsters

Since at least the late 19th century each generation has created fictional enemies that reflect a broader unease with cultural or scientific developments. The “Yellow Peril” villains such as Fu Manchu were a response to the massive increase in Chinese migration to the US and Europe from the 1870s, for example.

As the industrial revolution steamed ahead, speculative fiction of authors such as H G Wells began to consider where scientific innovation would take mankind. This trend reached its height in the Cold War during the 1950s and 1960s. Radiation-mutated monsters and invasions from space seen through the paranoid lens of communism all postulated the imminent demise of mankind.

By the 1970s, in films such as The Parallax View and Three Days of the Condor, the enemy evolved into government institutions and powerful corporations. This reflected public disenchantment following years of increasing social conflict, Vietnam and the Watergate scandal.

In the 1980s and 1990s it was the threat of AIDS that was embodied in the monsters of the era, such as “bunny boiling” stalker Alex in Fatal Attraction. Alex’s obsessive pursuit of the man with whom she shared a one night stand, Susanne Leonard argues, represented “the new cultural alignment between risk and sexual contact”, a theme continued with Anne Rice’s vampire Lestat in her series The Vampire Chronicles.

Risk and anxiety

Zombies, the flesh eating undead, have been mentioned in stories for more than 4,000 years. But the genre really developed with the work of H G Wells, Poe and particularly H P Lovecraft in the early 20th century. Yet these ponderous adversaries, descendants of Mary Shelley’s Frankenstein, have little in common with the vast hordes that threaten mankind’s existence in the modern versions.

M Keith Booker argued that in the 1950s, “the golden age of nuclear fear”, radiation and its fictional consequences were the flip side to a growing faith that science would solve the world’s problems. In many respects we are now living with the collapse of this faith. Today we live in societies dominated by an overarching anxiety reflecting the risk associated with each unpredictable scientific development.

Now we know that we are part of the problem, not necessarily the solution. The “breakthroughs” that were welcomed in the last century now represent some of our most pressing concerns. People have lost faith in assumptions of social and scientific “progress”.

Globalisation

Central to this is globalisation. While generating enormous benefits, globalisation is also tearing communities apart. The political landscape is rapidly changing as established political institutions seem unable to meet the challenges presented by the social and economic dislocation.

However, although destructive, globalisation is also forging new links between people, through what Anthony Giddens calls the “emptying of time and space”. Modern digital media has built new transnational alliances, and, particularly in the West, confronted people with stark moral questions about the consequences of their own lifestyles.

As the faith in inexorable scientific “progress” recedes, politics is transformed. The groups emerging from outside the political mainstream engage in much older battles of faith and identity. Whether right-wing nationalists or Islamic fundamentalists, they seek to build “imagined communities” through race, religion or culture and “fear” is their currency.

Evolving zombies

Modern zombies are the product of this globalised, risk conscious world. No longer the work of a single “mad” scientist re-animating the dead, they now appear as the result of secret government programmes creating untreatable viruses. The zombies indiscriminately overwhelm states irrespective of wealth, technology and military strength, turning all order to chaos.

Meanwhile, the zombies themselves are evolving into much more tenacious adversaries. In Danny Boyle’s 28 Days Later it takes only 20 days for society to be devastated. Charlie Higson’s Enemy series of novels has the zombies developing leadership and using tools. In the film of Max Brooks’ novel, World War Z, the seemingly superhuman athleticism of the zombies reflects the devastating springboard that vast urban populations would provide for such a disease. The film, starring Brad Pitt, had a reported budget of US$190m, demonstrating what a big business zombies have become.

Read the entire article here.

Image courtesy of Google Search.

The Missing Sock Law

If you share a household with children, or adults who continually misplace things, you’ll be intimately familiar with the Missing Sock Law (MSL). No matter how hard you try to keep clothing, and people, organized, and no matter how diligent you are during the laundry process, you will always lose socks. After your weekly laundry you will always end up with an odd number of socks; they will always be mismatched, and you will never find the missing ones again. This is the MSL, and science has yet to come up with a solution.

However, an increasing number of enterprising youngsters, non-OCD parents, and even some teens, are adopting a solution that’s been staring them in the face since socks were invented. Apparently, it is now a monumentally cool fashion statement (at the time of writing) to wear mismatched socks — there are strict rules of course, and parents, this is certainly not for you.

From WSJ:

Susana Yourcheck keeps a basket of mismatched socks in her laundry room, hoping that the missing match will eventually reappear. The pile is getting smaller these days, but not because the solitary socks are magically being reunited with their mates.

The credit for the smaller stash goes to her two teenage daughters, who no longer fuss to find socks that match. That’s because fashionable tweens and teens favor a jamboree of solids, colors and patterns on their feet.

“All my friends do it. Everyone in school wears them this way,” says 15-year-old Amelia Yourcheck.

For laundry-folding parents, the best match is sometimes a mismatch.

Generations of adults have cringed at their children’s fashion choices, suffering through bell bottoms, crop tops, piercings and tattoos. Socks have gone through various iterations of coolness: knee-high, no-see, wild patterns, socks worn with sandals, and no socks at all.

But the current trend has advantages for parents like Ms. Yourcheck. She has long been flummoxed by the mystery of socks that “disappear to the land of nowhere.”

“I’m not going to lie—[the mismatched look] bothers me. But I’m also kind of happy because at least we get some use out of them,” says Ms. Yourcheck, who is 40 years old and lives in Holly Springs, N.C.

“It definitely makes laundry way easier because they just go in a pile and you don’t have to throw the odd ones away,” agrees Washington, D.C., resident Jennifer Swanson Prince, whose 15-year-old daughter, Eleni, rocks the unmatched look. “And if we are lucky, the pile will go in a drawer.”

Some parents say they first noticed the trend a few years ago. Some saw girls whip off their shoes at a bat mitzvah celebration and go through a basket of mismatched socks that were supplied by the hosts for more comfortable dancing.

 For some teenage fashionistas, however, the style dictates that certain rules be followed. Among the most important: The socks must always be more or less the same length—no mixing a knee high with a short one. And while patterns can be combined, clashing seasons—as with snowflakes and flowers—are frowned upon.

The trend is so popular that retailers sell socks that go together, but don’t really go together.

“Matching is mundane, but mixing patterns and colors is monumentally cool,” states the website of LittleMissMatched, which has stores in New York, Florida and California. The company sells socks in sets of three that often sport the same pattern—stars, animal prints, argyles, but in different colors.

Read the entire article here.

Image courtesy of Google Search.

The Religion of String Theory

Read anything about string theory and you’ll soon learn that it resembles a religion more than a scientific theory. String theory researchers and their supporters will be the first to tell you that this elegant, but extremely complex, integration of gravity and quantum field theory cannot be confirmed through experiment. And neither can it be refuted through experiment.

So, while the promise of string theory — to bring us one unified understanding of the entire universe — is deliciously tantalizing, it nonetheless forces us to take a giant leap of faith. I suppose that would put string theory originators, physicists Michael Green and John Schwarz, somewhere in the same pantheon as Moses and Joseph Smith.

From Quanta:

Thirty years have passed since a pair of physicists, working together on a stormy summer night in Aspen, Colo., realized that string theory might have what it takes to be the “theory of everything.”

“We must be getting pretty close,” Michael Green recalls telling John Schwarz as the thunder raged and they hammered away at a proof of the theory’s internal consistency, “because the gods are trying to prevent us from completing this calculation.”

Their mathematics that night suggested that all phenomena in nature, including the seemingly irreconcilable forces of gravity and quantum mechanics, could arise from the harmonics of tiny, vibrating loops of energy, or “strings.” The work touched off a string theory revolution and spawned a generation of specialists who believed they were banging down the door of the ultimate theory of nature. But today, there’s still no answer. Because the strings that are said to quiver at the core of elementary particles are too small to detect — probably ever — the theory cannot be experimentally confirmed. Nor can it be disproven: Almost any observed feature of the universe jibes with the strings’ endless repertoire of tunes.

The publication of Green and Schwarz’s paper “was 30 years ago this month,” the string theorist and popular-science author Brian Greene wrote in Smithsonian Magazine in January, “making the moment ripe for taking stock: Is string theory revealing reality’s deep laws? Or, as some detractors have claimed, is it a mathematical mirage that has sidetracked a generation of physicists?” Greene had no answer, expressing doubt that string theory will “confront data” in his lifetime.

Recently, however, some string theorists have started developing a new tactic that gives them hope of someday answering these questions. Lacking traditional tests, they are seeking validation of string theory by a different route. Using a strange mathematical dictionary that translates between laws of gravity and those of quantum mechanics, the researchers have identified properties called “consistency conditions” that they say any theory combining quantum mechanics and gravity must meet. And in certain highly simplified imaginary worlds, they claim to have found evidence that the only consistent theories of “quantum gravity” involve strings.

According to many researchers, the work provides weak but concrete support for the decades-old suspicion that string theory may be the only mathematically consistent theory of quantum gravity capable of reproducing gravity’s known form on the scale of galaxies, stars and planets, as captured by Albert Einstein’s theory of general relativity. And if string theory is the only possible approach, then its proponents say it must be true — with or without physical evidence. String theory, by this account, is “the only game in town.”

“Proving that a big class of stringlike models are the only things consistent with general relativity and quantum mechanics would be a way, to some extent, of confirming it,” said Tom Hartman, a theoretical physicist at Cornell University who has been following the recent work.

If they are successful, the researchers acknowledge that such a proof will be seen as controversial evidence that string theory is correct. “‘Correct’ is a loaded word,” said Mukund Rangamani, a professor at Durham University in the United Kingdom and the co-author of a paper posted recently to the physics preprint site arXiv.org that finds evidence of “string universality” in a class of imaginary universes.

So far, the theorists have shown that string theory is the only “game” meeting certain conditions in “towns” wildly different from our universe, but they are optimistic that their techniques will generalize to somewhat more realistic physical worlds. “We will continue to accumulate evidence for the ‘string universality’ conjecture in different settings and for different classes of theories,” said Alex Maloney, a professor of physics at McGill University in Montreal and co-author of another recent paper touting evidence for the conjecture, “and eventually a larger picture will become clear.”

Meanwhile, outside experts caution against jumping to conclusions based on the findings to date. “It’s clear that these papers are an interesting attempt,” said Matt Strassler, a visiting professor at Harvard University who has worked on string theory and particle physics. “But these aren’t really proofs; these are arguments. They are calculations, but there are weasel words in certain places.”

Proponents of string theory’s rival, an underdog approach called “loop quantum gravity,” believe that the work has little to teach us about the real world. “They should try to solve the problems of their theory, which are many,” said Carlo Rovelli, a loop quantum gravity researcher at the Center for Theoretical Physics in Marseille, France, “instead of trying to score points by preaching around that they are ‘the only game in town.’”

Mystery Theory

Over the past century, physicists have traced three of the four forces of nature — strong, weak and electromagnetic — to their origins in the form of elementary particles. Only gravity remains at large. Albert Einstein, in his theory of general relativity, cast gravity as smooth curves in space and time: An apple falls toward the Earth because the space-time fabric warps under the planet’s weight. This picture perfectly captures gravity on macroscopic scales.

But in small enough increments, space and time lose meaning, and the laws of quantum mechanics — in which particles have no definite properties like “location,” only probabilities — take over. Physicists use a mathematical framework called quantum field theory to describe the probabilistic interactions between particles. A quantum theory of gravity would describe gravity’s origin in particles called “gravitons” and reveal how their behavior scales up to produce the space-time curves of general relativity. But unifying the laws of nature in this way has proven immensely difficult.
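To put a number on where “small enough” kicks in, the scale usually quoted is the Planck length, built purely from the constants governing quantum mechanics, gravity and relativity (standard textbook material, not something claimed in the Quanta piece):

\[
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\,\text{m},
\qquad
E_P = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.2 \times 10^{19}\,\text{GeV},
\]

which is part of why no conceivable accelerator can probe quantum gravity directly.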

String theory first arose in the 1960s as a possible explanation for why elementary particles called quarks never exist in isolation but instead bind together to form protons, neutrons and other composite “hadrons.” The theory held that quarks are unable to pull apart because they form the ends of strings rather than being free-floating points. But the argument had a flaw: While some hadrons do consist of pairs of quarks and anti-quarks and plausibly resemble strings, protons and neutrons contain three quarks apiece, invoking the ugly and uncertain picture of a string with three ends. Soon, a different theory of quarks emerged. But ideas die hard, and some researchers, including Green, then at the University of London, and Schwarz, at the California Institute of Technology, continued to develop string theory.

Problems quickly stacked up. For the strings’ vibrations to make physical sense, the theory calls for many more spatial dimensions than the length, width and depth of everyday experience, forcing string theorists to postulate that six extra dimensions must be knotted up at every point in the fabric of reality, like the pile of a carpet. And because each of the innumerable ways of knotting up the extra dimensions corresponds to a different macroscopic pattern, almost any discovery made about our universe can seem compatible with string theory, crippling its predictive power. Moreover, as things stood in 1984, all known versions of string theory included a nonsensical mathematical term known as an “anomaly.”
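For the curious, the dimension-counting behind “six extra dimensions” goes like this (standard superstring bookkeeping, not spelled out in the article): superstring theory is only consistent in ten space-time dimensions, so

\[
10 \;=\; \underbrace{(3+1)}_{\text{observed space-time}} \;+\; \underbrace{6}_{\text{compactified}},
\qquad
M_{10} \;\cong\; \mathbb{R}^{1,3} \times X_6,
\]

where $X_6$ is the tiny curled-up space (the “pile of the carpet” in the analogy above) whose shape determines the macroscopic physics.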

On the plus side, researchers realized that a certain vibration mode of the string fit the profile of a graviton, the coveted quantum purveyor of gravity. And on that stormy night in Aspen in 1984, Green and Schwarz discovered that the graviton contributed a term to the equations that, for a particular version of string theory, exactly canceled out the problematic anomaly. The finding raised the possibility that this version was the one, true, mathematically consistent theory of quantum gravity, and it helped usher in a surge of activity known as the “first superstring revolution.”
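The “particular version” here is the type I string with gauge group SO(32), and the widely cited piece of arithmetic behind the cancellation is simple to state (again, textbook material rather than anything asserted in the article): a necessary condition is that the gauge group have exactly 496 generators,

\[
\dim SO(32) \;=\; \frac{32 \times 31}{2} \;=\; 496,
\qquad
\dim\bigl(E_8 \times E_8\bigr) \;=\; 248 + 248 \;=\; 496,
\]

the second option underlying one of the additional anomaly-free versions found shortly afterwards.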

 But only a year passed before another version of string theory was also certified anomaly-free. In all, five consistent string theories were discovered by the end of the decade. Some conceived of particles as closed strings, others described them as open strings with dangling ends, and still others generalized the concept of a string to higher-dimensional objects known as “D-branes,” which resemble quivering membranes in any number of dimensions. Five string theories seemed an embarrassment of riches.

Read the entire story here.

Image: (1 + 1)-dimensional anti-de Sitter space embedded in flat (1 + 2)-dimensional space. The embedded surface contains closed timelike curves circling the x1 axis. Courtesy of Wikipedia.

Why Are Most Satirists Liberal?

Oliver Morrison over at The Atlantic has a tremendous article that ponders the comedic divide that spans our political landscape. Why, he asks, do most political satirists identify with left-of-center thought? And why are the majority of radio talk show hosts right-wing? Why is there no right-wing Stephen Colbert, and why no leftie Rush? These are very interesting questions.

You’ll find some surprising answers, which go beyond the Liberal stereotype of the humorless Republican with no grasp of satire or irony.

From the Atlantic:

Soon after Jon Stewart arrived at The Daily Show in 1999, the world around him began to change. First, George W. Bush moved into the White House. Then came 9/11, and YouTube, and the advent of viral videos. Over the years, Stewart and his cohort mastered the very difficult task of sorting through all the news quickly and turning it around into biting, relevant satire that worked both for television and the Internet.

Now, as Stewart prepares to leave the show, the brand of comedy he helped invent is stronger than ever. Stephen Colbert is getting ready to bring his deadpan smirk to The Late Show. Bill Maher is continuing to provoke pundits and politicians with his blunt punch lines. John Oliver’s Last Week Tonight is about to celebrate the end of a wildly popular first year. Stewart has yet to announce his post-Daily Show plans, but even if he retires, the genre seems more than capable of carrying on without him.

Stewart, Colbert, Maher, Oliver and co. belong to a type of late-night satire that’s typically characterized as liberal, skewering Republicans (and, less frequently, Democrats) for absurd statements or pompousness or flagrant hypocrisy. “The Daily Show, The Colbert Report, Funny Or Die, and The Onion, while not partisan organs, all clearly have a left-of-center orientation,” wrote Jonathan Chait in The New Republic in 2011. This categorization, though, begs the question of why the form has no equal on the other side of the ideological spectrum. Some self-identified conservative comics argue that the biased liberal media hasn’t given them a chance to thrive. Others point out that Obama is a more difficult target than his Republican predecessor: He was the first African-American president, which meant comedians have had to tip-toe around anything with racial connotations, and his restrained personality has made him difficult to parody.

But six years in, Obama’s party has been thoroughly trounced in the midterms and publicly excoriated by right-wing politicians, yet there’s a dearth of conservative satirists taking aim, even though the niche-targeted structure of cable media today should make it relatively easy for them to find an audience. After all, it would have been difficult for Stewart or Colbert to find an audience during the era when three broadcast stations competed for the entire country and couldn’t afford to alienate too many viewers. But cable TV news programs need only find a niche viewership. Why then, hasn’t a conservative Daily Show found its own place on Fox?

Liberal satirists are certainly having no trouble making light of liberal institutions and societies. Portlandia is about to enter its fifth season skewering the kinds of liberals who don’t understand that eco-terrorism and militant feminism may not be as politically effective as they think. Jon Stewart has had success poking fun at Obama’s policies. And Alison Dagnes, a professor of political science at Shippensburg University, has found that the liberal Clinton was the butt of more jokes on late-night shows of the 1990s than either George W. Bush or Obama would later be.

So if liberals are such vulnerable targets for humor, why do relatively few conservative comedians seem to be taking aim at them?

One explanation is simply that proportionately fewer people with broadly conservative sensibilities choose to become comedians. Just as liberals dominate academia, journalism, and other writing professions, there are nearly three times as many liberal- as conservative-minded people in the creative arts, according to a recent study. Alison Dagnes, a professor of political science at Shippensburg University, argues that the same personality traits that shape political preferences also guide the choice of professions. These tendencies just get more pronounced in the case of comedy, which usually requires years of irregular income, late hours, and travel, as well as a certain tolerance for crudeness and heckling.

There are, of course, high-profile conservative comedians in America, such as the members of the Blue Collar Comedy Tour. But these performers, who include Jeff Foxworthy and Larry the Cable Guy, tend carefully to avoid politicized topics, mocking so-called “rednecks” in the same spirit as Borscht Belt acts mocked Jewish culture.

When it comes to actual political satire, one of the most well-known figures nationally is Dennis Miller, a former Saturday Night Live cast member who now has a weekly segment on Fox News’ O’Reilly Factor. On a recent show, O’Reilly brought up the Democrats’ election losses, and Miller took the bait. “I think liberalism is like a nude beach,” Miller said. “It’s better off in your mind than actually going there.” His jokes are sometimes amusing, but they tend to be grounded in vague ideologies, not the attentive criticism of the news of the day that has given liberal satires plenty of fodder five days a week. The real problem, Frank Rich wrote about Miller, “is that his tone has become preachy. He too often seems a pundit first and a comic second.”

The Flipside, a more recent attempt at conservative satire, was launched this year by Kfir Alfia, who got his start in political performance a decade ago when he joined the Protest Warriors, a conservative group that counter-demonstrated at anti-war protests. The Flipside started airing this fall in more than 200 stations across the country, but its growth is hampered by its small budget, according to The Flipside’s producer, Rodney Lee Connover, who said he has to work 10 times as hard because his show has 10 times fewer resources than the liberal shows supported by cable networks.

Connover was a writer along with Miller on The 1/2 Hour News Hour, the first major attempt to create a conservative counterpart to The Daily Show in 2007. It was cancelled after just 13 episodes and has remained the worst-rated show of all time on Metacritic. It was widely panned by critics who complained that it was trying to be political first and funny second, so the jokes were unsurprising and flat.

The host of The Flipside, Michael Loftus, says he’s doing the same thing as Jon Stewart, just with some conservative window-dressing. Wearing jeans, Loftus stands and delivers his jokes on a set that looks like the set of Tool Time, the fictional home-improvement show Tim Allen hosts on the sitcom Home Improvement: The walls are decorated with a dartboard, a “Men at Work” sign, and various other items the producers might expect to find in a typical American garage. In a recent episode, after Republicans won the Senate, Loftus sang the song, “Looks like we made it …” to celebrate the victory.

But rather than talking about the news, as Colbert and Stewart do, or deconstructing a big political issue, as Oliver does, Loftus frequently makes dated references without offering new context to freshen them up. “What’s the deal with Harry Reid?” he asked in a recent episode. “You either hate him or you hate him, am I right? The man is in the business of telling people how greedy they are, and how they don’t pay their fair share, and he lives in the Ritz Carlton … This guy is literally Mr. Burns from The Simpsons.” Much of his material seems designed to resonate with only the most ardent Fox News viewers. Loftus obviously can’t yet attract the kinds of celebrity guests his network competitors can. But instead of playing games with the guests he can get, he asks softball questions that simply allow them to spout off.

Greg Gutfeld, the host of Fox’s Red Eye, can also be funny, but his willing-to-be-controversial style often comes across as more hackneyed than insightful. “You know you’re getting close to the truth when someone is calling you a racist,” he once said. Gutfeld has also railed against “greenie” leftists who shop at Whole Foods, tolerance, and football players who are openly gay. Gutfeld’s shtick works okay during its 3 a.m. timeslot, but a recent controversy over sexist jokes about a female fighter pilot highlighted just how far his humor is from working in prime time.

So if conservatives have yet to produce their own Jon Stewart, it could be the relatively small number of working conservative comedians, or their lack of power in the entertainment industry. Or it could be that shows like The Flipside are failing, at least in part, because they’re just not that funny. But what is it about political satire that makes it so hard for conservatives to get it right?

Read the entire article here.

Image: Stephen Colbert at the 2014 MontClair Film Festival. Courtesy of the 2014 MontClair Film Festival.

Bit Rot is In Your Future

If you are over the age of 55 or 60 you may well have some 8-track cassettes still stashed in the trunk (or boot if you’re a Brit) of your car. If you’re over 50 it’s possible that you may have some old floppy disks or regular music cassettes stored in a bottom drawer. If you’re over 40 you’re likely to have boxes of old VHS tapes and crate-loads of CDs (or even laser disks) under your bed. So, if you fall into one of these categories, most of the content stored on any of these media types is now very likely to be beyond your reach — your car (hopefully) does not have an 8-track player; you dumped your Sony Walkman for an iPod; and your CDs have been rendered obsolete by music that descends to your ears from the “cloud”.

[Of course, 45s and 33s still seem to have a peculiar and lasting appeal -- and thanks to the analog characteristics of vinyl the music encoded in the spiral grooves is still relatively easily accessible. But this will be the subject of another post].

So our technological progress, paradoxically, comes at a cost. As our technologies become simpler to use and content becomes easier to construct and disseminate, that content turns into “bit rot” for future generations. That is, our digital present will be lost to the future because more advanced technologies will no longer be able to read it. One solution would be to hold on to your 8-track player. But Vint Cerf, currently a VP at Google and one of the founding fathers of the internet, has other ideas.

From the Guardian:

Piles of digitised material – from blogs, tweets, pictures and videos, to official documents such as court rulings and emails – may be lost forever because the programs needed to view them will become defunct, Google’s vice-president has warned.

Humanity’s first steps into the digital world could be lost to future historians, Vint Cerf told the American Association for the Advancement of Science’s annual meeting in San Jose, California, warning that we faced a “forgotten generation, or even a forgotten century” through what he called “bit rot”, where old computer files become useless junk.

Cerf called for the development of “digital vellum” to preserve old software and hardware so that out-of-date files could be recovered no matter how old they are.

“When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history,” he said.

“We don’t want our digital lives to fade away. If we want to preserve them, we need to make sure that the digital objects we create today can still be rendered far into the future,” he added.

The warning highlights an irony at the heart of modern technology, where music, photos, letters and other documents are digitised in the hope of ensuring their long-term survival. But while researchers are making progress in storing digital files for centuries, the programs and hardware needed to make sense of the files are continually falling out of use.

“We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised,” Cerf told the Guardian. “If there are photos you really care about, print them out.”

Ancient civilisations suffered no such problems, because histories written in cuneiform on baked clay tablets, or rolled papyrus scrolls, needed only eyes to read them. To study today’s culture, future scholars would be faced with PDFs, Word documents, and hundreds of other file types that can only be interpreted with dedicated software and sometimes hardware too.

The problem is already here. In the 1980s, it was routine to save documents on floppy disks, upload Jet Set Willy from cassette to the ZX Spectrum, slaughter aliens with a Quickfire II joystick, and have Atari games cartridges in the attic. Even if the disks and cassettes are in good condition, the equipment needed to run them is mostly found only in museums.

The rise of gaming has its own place in the story of digital culture, but Cerf warns that important political and historical documents will also be lost to bit rot. In 2005, American historian Doris Kearns Goodwin wrote Team of Rivals: the Political Genius of Abraham Lincoln, describing how Lincoln hired those who ran against him for presidency. She went to libraries around the US, found the physical letters of the people involved, and reconstructed their conversations. “In today’s world those letters would be emails and the chances of finding them will be vanishingly small 100 years from now,” said Cerf.

He concedes that historians will take steps to preserve material considered important by today’s standards, but argues that the significance of documents and correspondence is often not fully appreciated until hundreds of years later. Historians have learned how the greatest mathematician of antiquity considered the concept of infinity and anticipated calculus in the third century BC after the Archimedes palimpsest was found hidden under the words of a Byzantine prayer book from the 13th century. “We’ve been surprised by what we’ve learned from objects that have been preserved purely by happenstance that give us insights into an earlier civilisation,” he said.

Researchers at Carnegie Mellon University in Pittsburgh have made headway towards a solution to bit rot, or at least a partial one. There, Mahadev Satyanarayanan takes digital snapshots of computer hard drives while they run different software programs. These can then be uploaded to a computer that mimics the one the software ran on. The result is a computer that can read otherwise defunct files. Under a project called Olive, the researchers have archived Mystery House, the original 1982 graphic adventure game for the Apple II, an early version of WordPerfect, and Doom, the original 1993 first person shooter game.
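To make the “digital vellum” idea a little more concrete, here is a minimal sketch of the emulation approach, assuming QEMU is installed. It is emphatically not the Olive project’s actual tooling; the archive layout, manifest fields, and machine settings are hypothetical placeholders.

```python
# A minimal sketch of the emulation idea behind "digital vellum" projects
# like Olive -- NOT the Olive project's actual tooling. It assumes QEMU is
# installed; the archive layout, manifest fields, and machine settings are
# hypothetical placeholders.

import json
import subprocess
from pathlib import Path

ARCHIVE = Path("archive/mystery_house")  # hypothetical archive directory


def load_manifest(archive_dir: Path) -> dict:
    """Read the metadata describing the preserved software environment."""
    return json.loads((archive_dir / "manifest.json").read_text())


def boot_archived_image(archive_dir: Path) -> None:
    """Boot the preserved disk image in an emulator matching its manifest."""
    manifest = load_manifest(archive_dir)
    cmd = [
        "qemu-system-i386",  # emulator standing in for the long-gone hardware
        "-hda", str(archive_dir / manifest["disk_image"]),
        "-m", str(manifest.get("memory_mb", 64)),
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    boot_archived_image(ARCHIVE)
```

The essential move is that the archive stores the raw bits together with a description of the machine needed to interpret them, so a future reader runs an emulator instead of hunting down the original hardware.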

Inventing new technology is only half the battle, though. More difficult still could be navigating the legal permissions to copy and store software before it dies. When IT companies go out of business, or stop supporting their products, they may sell the rights on, making it a nightmarish task to get approval.

Read the entire article here.

Image: 1978 AMC Matador sedan red NC detail of factory AM-FM-stereo-8-track unit. Courtesy of CZmarlin / Wikipedia.

Yawn. Selfies Are So, Like, Yesterday!

If you know a dedicated and impassioned narcissist it’s time to convince him or her to ditch the selfie. Oh, and please ensure she or he discards the selfie-stick while they’re at it. You see, the selfie — that ubiquitous expression of the me-me-generation — is now rather passé.

So, where does a self-absorbed individual turn next? Enter the 3D printed version of yourself, courtesy of a German company called DOOB 3D, with its Doob-licator scanner and high-res 3D printer. Connoisseurs of self can now — for a mere $395 — replicate themselves with a 10-inch facsimile. If you’re a cheapskate, you can get a Playmobil-sized replica for $95, while a 14-inch selfie-doll will fetch you $695. Love it!

To learn more about DOOB 3D visit their website.

From Wired:

We love looking at images of ourselves. First there were Olan Mills portraits. Nowadays there are selfies and selfie-stick selfies and drone selfies.

If you’re wondering what comes next, Dusseldorf-based DOOB 3D thinks it has the answer—and contrary to what the company’s name suggests, it doesn’t involve getting high and watching Avatar.

DOOB 3D can produce a detailed, four-inch figurine of your body—yes, a 3-D selfie. Making one of these figurines requires a massive pile of hardware and software: 54 DSLRs, 54 lenses, a complex 3-D modeling pipeline, and an $80,000 full-color 3-D printer, not to mention a room-size scanning booth.

Factor that all in and the $95 asking price for a replica of yourself that’s roughly the size of most classic Star Wars action figures doesn’t seem so bad. A Barbie-esque 10-inch model goes for $395, while a 14-inch figure that’s more along the lines of an old-school G.I. Joe doll costs $695.

The company has eight 3-D scanning booths (called “Doob-licators”) scattered in strategic locations throughout the world. There’s one in Dusseldorf, one in Tokyo, one at Santa Monica Place in Los Angeles, and one in New York City’s Chelsea Market. The company also says they’re set to add more U.S. locations soon, although details aren’t public yet.

In New York, the pop-up DOOB shop in Chelsea Market was a pretty big hit. According to Michael Anderson, CEO of DOOB 3D USA, the Doob-licator saw about 500 customers over the winter holiday season. About 10 percent of the booth’s customers got their pets Doob-licated.

“At first, (people got DOOBs made) mostly on a whim,” says Anderson of the holiday-season spike. Most people just walk up and stand in line, but you can also book an appointment in advance.

“Now that awareness has been built,” Anderson says, “there has been a shift where at least two thirds of our customers have planned ahead to get a DOOB.”

Each Doob-licator is outfitted with 54 Canon EOS Rebel T5i DSLRs, arranged in nine columns of six cameras each. You can make an appointment or just wait in line: A customer steps in, strikes a pose, and the Doob-licator operator fires all the cameras at once. That creates a full-body scan in a fraction of a second. The next step involves feeding all those 18-megapixel images through the company’s proprietary software, which creates a 3-D model of the subject.
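As a toy illustration of the “fire all the cameras at once” step (and nothing more: the Camera class below is a made-up stand-in, not DOOB’s software or a real camera SDK), a threading barrier is a common way to release many triggers as close to simultaneously as possible:

```python
# A toy illustration of the "fire all 54 cameras at once" step -- not
# DOOB's software and not a real camera SDK. Camera is a made-up stand-in;
# a real rig would send shutter commands over USB or a wired trigger line.

import threading
from datetime import datetime

NUM_CAMERAS = 54  # nine columns of six cameras, as described above


class Camera:
    """Hypothetical stand-in for one tethered DSLR."""

    def __init__(self, index: int) -> None:
        self.index = index

    def trigger(self) -> str:
        # A real implementation would fire the shutter and return the frame.
        return f"cam{self.index:02d}_{datetime.now():%H%M%S%f}.jpg"


def fire_all(cameras: list) -> list:
    """Trigger every camera at (nearly) the same instant; return filenames."""
    barrier = threading.Barrier(len(cameras))
    shots = [""] * len(cameras)

    def worker(i: int, cam: Camera) -> None:
        barrier.wait()            # all threads pause here...
        shots[i] = cam.trigger()  # ...then every shutter fires together

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shots


if __name__ == "__main__":
    frames = fire_all([Camera(i) for i in range(NUM_CAMERAS)])
    print(f"Captured {len(frames)} frames for the photogrammetry pipeline")
```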

The printing process requires more patience. The company operates three high-end 3-D printing centers to support its scanning operations: One in Germany, one in Tokyo, and one in Brooklyn. They all use 3D Systems’ ProJet 660Pro, a high-resolution (600 x 540 dpi) 3-D printer that creates full-color objects on the fly. The printer uses a resin polymer material, and the full range of CMYK color is added to each powder layer as it’s printed.

With a top printing speed of 1.1 inches per hour and a process that sometimes involves thousands of layers of powder, the process takes a few hours for the smallest-size DOOB and half a day or more for the larger ones. And depending on how many DOOBs are lined up in the queue, your mini statue takes between two and eight weeks to arrive in the mail.
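Those build times are easy to sanity-check from the article’s own figures; a quick back-of-the-envelope, assuming the job is dominated by the 1.1-inch-per-hour layer speed and using the rough figurine heights of 4, 10, and 14 inches quoted above:

```python
# Back-of-the-envelope check of the build times quoted above, using only
# the article's own figures: a top printing speed of 1.1 inches per hour
# and figurine heights of roughly 4, 10, and 14 inches. Real jobs also
# involve queueing, depowdering, and finishing, so these are lower bounds.

PRINT_SPEED_IN_PER_HR = 1.1  # ProJet 660Pro top speed, per the article


def min_print_hours(height_inches: float) -> float:
    """Minimum layer-by-layer build time for a figure of this height."""
    return height_inches / PRINT_SPEED_IN_PER_HR


for height, label in [(4, "four-inch ($95)"),
                      (10, "ten-inch ($395)"),
                      (14, "fourteen-inch ($695)")]:
    print(f"{label} figure: at least {min_print_hours(height):.1f} hours")

# Prints roughly 3.6, 9.1, and 12.7 hours -- consistent with "a few hours
# for the smallest-size DOOB and half a day or more for the larger ones".
```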

Once you step inside that Doob-licator, it’s like international waters: You are largely unbound by laws and restrictions. Do you want to get naked? Go right ahead. Along with your nude statue, the company will also send you a 3-D PDF and keep your data in its database in case you want additional copies made (you can request that your data be deleted if that sounds too creepy).

Read the entire article here.

Image courtesy of DOOB 3D.
