Zombie Technologies

Next time Halloween festivities roll around, consider dressing up as a fax machine — one of several technologies that seem unwilling to die.

From Wired:

One of the things we love about technology is how fast it moves. New products and new services are solving our problems all the time, improving our connectivity and user experience on a nigh-daily basis.

But underneath sit the technologies that just keep hanging on. Every flesh wound, every injury, every rupture of their carcass levied by a new device or new method of doing things doesn’t merit even so much as a flinch from them. They keep moving, slowly but surely, eating away at our livelihoods. They are the undead of the technology world, and they’re coming for your brains.

Below, you’ll find some of technology’s more persistent walkers—every time we seem to kill them off, more hordes still clinging to their past relevancy lumber up to distract you. It’s about time we lodged an axe in their skulls.

Oddly specific yet totally unhelpful error codes

It’s common when you’re troubleshooting hardware and software—something, somewhere throws an error code that pairs an incredibly specific alphanumerical code (“0x000000F4”) with a completely generic and unhelpful message like “an unknown error occurred” or “a problem has been detected.”

Back in computing’s early days, the desire to use these codes instead of providing detailed troubleshooting guides made sense—storage space was at a premium, Internet connectivity could not be assumed, and it was a safe bet that the software in question came with some tome-like manual to assist people in the event of problems. Now, with connectivity virtually omnipresent and storage space a non-issue, it’s not clear why codes like these don’t link to more helpful information in some way.

All too often, you’re left to take the law into your own hands. Armed with your error code, you head over to your search engine of choice and punch it in. At this point, one of two things can happen, and I’m not sure which is more infuriating: you either find an expanded, totally helpful explanation of the code and how to fix it on the official support website (could you really not have built that into the software itself?), or, alternatively, you find a bunch of desperate, inconclusive forum posts that offer no additional insight into the problem (though they do offer insight into the absurdity of the human condition). There has to be a better way.
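
There is, at least in sketch form. The snippet below illustrates the kind of lookup the author is asking for: pair the precise code with a plain-language explanation and a pointer to further help rather than "an unknown error occurred". Everything in it (the catalog entries, the support URLs, the function name) is hypothetical and not drawn from any real operating system or vendor.

```python
# Hypothetical error catalog: map an exact code to an explanation and a help link.
ERROR_CATALOG = {
    "0x000000F4": {
        "summary": "A critical system process terminated unexpectedly.",
        "likely_causes": ["failing storage device", "corrupted system files"],
        "help_url": "https://support.example.com/errors/0x000000F4",  # illustrative URL
    },
}

def explain(code: str) -> str:
    """Return a useful message for a known code, with a graceful fallback for unknown ones."""
    entry = ERROR_CATALOG.get(code)
    if entry is None:
        return f"Error {code}: no local explanation; try https://support.example.com/search?q={code}"
    causes = ", ".join(entry["likely_causes"])
    return f"Error {code}: {entry['summary']} Likely causes: {causes}. More help: {entry['help_url']}"

if __name__ == "__main__":
    print(explain("0x000000F4"))   # known code: specific explanation plus a link
    print(explain("0xDEADBEEF"))   # unknown code: falls back to a search link
```

Even a table this small would spare users the search-engine lottery described above.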

Copper landlines

I’ve been through the Northeast blackout, the 9-11 attacks, and Hurricane Sandy, all of which took out cell service at the same time family and friends were most anxious to get in touch. So I’m a prime candidate for maintaining a landline, which carries enough power to run phones, often provided by a facility with a backup generator. And, in fact, I’ve tried to retain one. But corporate indifference has turned copper wiring into the technology of the living dead.

Verizon really wants you to have two things: cellular service and FiOS. Except it doesn’t actually want to give you FiOS—the company has stopped expanding its fiber footprint, and it’s moving with the speed of a glacier to hook up neighborhoods that are FiOS accessible. That has left Verizon in a position where the company will offer you cell service, but, if you don’t want that, it will stick you with a technology it no longer wants to support: service over copper wires.

This was made explicit in the wake of Sandy when a shore community that had seen its wires washed out was offered cellular service as a replacement. When the community demanded wires, Verizon backed down and gave it FiOS. But the issue shows up in countless other ways. One of our editors recently decided to have DSL service over copper wire activated in his apartment; Verizon took two weeks to actually get the job done.

I stuck with Verizon DSL in the hope that I would be able to transfer directly to FiOS when it finally got activated. But Verizon’s indifference to wired service led to a six-month nightmare. I’d experience erratic DSL, call Verizon for help, and have it fixed through a process that cut off the phone service. Getting the phone service restored would degrade the DSL. On it went until I gave up and switched to cable—which was a good thing, because it took Verizon about two years to finally put fiber in place.

At the moment, AT&T still considers copper wiring central to its services, but it’s not clear how long that position will remain tenable. If AT&T’s position changes, then it’s likely that the company will also treat the copper just as Verizon has: like a technology that’s dead even as it continues to shamble around causing trouble.

The scary text mode insanity lying in wait beneath it all

PRESS DEL TO ENTER SETUP. Oh, BIOS, how I hate thee. Often the very first thing you have to deal with when dragging a new computer out of the box is the text mode BIOS setup screen, where you have to figure out how to turn on support for legacy USB devices, or change the boot order, or disable PXE booting, or force onboard video to work, or any number of other crazy things. It’s like being sucked into a time warp back into 1992.

Though slowly being replaced across the board by UEFI, BIOS setup screens are definitely still a thing even on new hardware—the small dual-Ethernet server I purchased just a month ago to serve as my new firewall required me to spend minutes figuring out which of its onboard USB ports were legacy-enabled and then which key summoned the setup screen (F2? Delete? F10? F1? IT’S NEVER THE SAME ONE!). Once in, I had to figure out how to enable USB device booting so that I could get Smoothwall installed, but the computer inexplicably wouldn’t boot from my carefully prepared USB stick, even though the stick worked great on the other servers in the closet. I ended up having to install from a USB CD-ROM drive instead.

Many motherboard OEMs now provide a way to adjust BIOS options from inside of Windows, which is great, but that won’t necessarily help you on a fresh Windows install (or on a computer you’ve pieced together yourself and on which you haven’t installed the OEM’s universally hideous BIOS tweaking application). UEFI as a replacement has been steadily gaining ground for almost three years now, but we’ve likely got many more years of occasionally having to reboot and hold DEL to adjust some esoteric settings. Ugh.

Fax machines, and the general concept of faxing

Faxing has a longer and more venerable history than I would have guessed, based on how abhorrent it is in the modern day. The first commercial telefaxing service was established in France in 1865 via wire transmission, and we started sending faxes over phone lines circa 1964. For a long time, faxing was actually the best and fastest way to get a photographic clone of one piece of paper to an entirely different geographical location.

Then came e-mail. And digital cameras. And electronic signatures. And smartphones with digital cameras. And Passbook. And cloud storage. Yet people continue to ask me to fax them things.

When it comes to signing contracts or verifying or simply passing along information, digital copies, properly backed up with redundant files everywhere, are easier to deal with at literally every step in the process. On the very rare occasion that a physical piece of paper is absolutely necessary, here: e-mail it; I will sign it electronically and e-mail it back to you, and you print it out. You already sent me that piece of paper? I will sign it, take a picture with my phone, e-mail that picture to you, and you print it out. Everyone comes out ahead, no one has to deal with a fax machine.

That a business, let alone multiple businesses, has actually cropped up around the concept of allowing people to e-mail documents to a fax number is ludicrous. Get an e-mail address. They are free. Get a printer. It is cheaper than a fax machine. Don’t get a printer that is also a fax machine, because then you are just encouraging this technological concept to live on, when, in fact, it needs to die.

Read the entire article here.

Image courtesy of Mobiledia.

Chromosomal Chronometer

Researchers have found possible evidence of a DNA mechanism that keeps track of age. It is too early to tell whether changes over time in specific elements of our chromosomes cause aging or are a consequence of it. Yet this is a tantalizing discovery that bodes well for a better understanding of the genetic and biological systems that underlie the aging process.

From the Guardian:

A US scientist has discovered an internal body clock based on DNA that measures the biological age of our tissues and organs.

The clock shows that while many healthy tissues age at the same rate as the body as a whole, some of them age much faster or slower. The age of diseased organs varied hugely, with some many tens of years “older” than healthy tissue in the same person, according to the clock.

Researchers say that unravelling the mechanisms behind the clock will help them understand the ageing process and hopefully lead to drugs and other interventions that slow it down.

Therapies that counteract natural ageing are attracting huge interest from scientists because they target the single most important risk factor for scores of incurable diseases that strike in old age.

“Ultimately, it would be very exciting to develop therapy interventions to reset the clock and hopefully keep us young,” said Steve Horvath, professor of genetics and biostatistics at the University of California in Los Angeles.

Horvath looked at the DNA of nearly 8,000 samples of 51 different healthy and cancerous cells and tissues. Specifically, he looked at how methylation, a natural process that chemically modifies DNA, varied with age.

Horvath found that the methylation of 353 DNA markers varied consistently with age and could be used as a biological clock. The clock ticked fastest in the years up to around age 20, then slowed down to a steadier rate. Whether the DNA changes cause ageing or are caused by ageing is an unknown that scientists are now keen to work out.
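
To make that concrete, here is a minimal sketch of how a methylation-based clock of this kind can be built in principle: fit a penalized regression from methylation levels at a set of CpG markers to chronological age, then read the model's prediction for a new sample as its "biological age". The synthetic data, marker count and model settings below are illustrative assumptions only; they are not Horvath's actual data or pipeline, which used 353 specific CpG sites and a nonlinear transform of age for young samples.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic stand-in data: 500 samples x 353 CpG methylation levels in [0, 1],
# where a subset of markers drifts gradually with age plus measurement noise.
rng = np.random.default_rng(0)
n_samples, n_markers = 500, 353
ages = rng.uniform(0, 90, n_samples)
weights = np.zeros(n_markers)
weights[:40] = rng.normal(0, 0.004, 40)            # only some markers track age
methylation = 0.5 + np.outer(ages, weights) + rng.normal(0, 0.02, (n_samples, n_markers))
methylation = np.clip(methylation, 0.0, 1.0)

# Penalized linear regression from the methylation profile to chronological age.
clock = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=10_000)
clock.fit(methylation, ages)

# "Biological age" of a sample is simply the model's prediction; tissues whose
# methylation looks older or younger than the donor's true age would stand out.
print(np.round(clock.predict(methylation[:5]), 1))   # predicted ages
print(np.round(ages[:5], 1))                         # actual ages
```

The scientific question raised above, whether the markers drive ageing or merely record it, is of course untouched by a regression like this.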

“Does this relate to something that keeps track of age, or is a consequence of age? I really don’t know,” Horvath told the Guardian. “The development of grey hair is a marker of ageing, but nobody would say it causes ageing,” he said.

The clock has already revealed some intriguing results. Tests on healthy heart tissue showed that its biological age – how worn out it appears to be – was around nine years younger than expected. Female breast tissue aged faster than the rest of the body, on average appearing two years older.

Diseased tissues also aged at different rates, with cancers speeding up the clock by an average of 36 years. Some brain cancer tissues taken from children had a biological age of more than 80 years.

“Female breast tissue, even healthy tissue, seems to be older than other tissues of the human body. That’s interesting in the light that breast cancer is the most common cancer in women. Also, age is one of the primary risk factors of cancer, so these types of results could explain why cancer of the breast is so common,” Horvath said.

Healthy tissue surrounding a breast tumour was on average 12 years older than the rest of the woman’s body, the scientist’s tests revealed.

Writing in the journal Genome Biology, Horvath showed that the biological clock was reset to zero when cells plucked from an adult were reprogrammed back to a stem-cell-like state. The process for converting adult cells into stem cells, which can grow into any tissue in the body, won the Nobel prize in 2012 for Sir John Gurdon at Cambridge University and Shinya Yamanaka at Kyoto University.

“It provides a proof of concept that one can reset the clock,” said Horvath. The scientist now wants to run tests to see how neurodegenerative and infectious diseases affect, or are affected by, the biological clock.

Read the entire article here.

Image: Artist’s rendition of a DNA fragment. Courtesy of Zoonar GmbH/Alamy.

Rushing to be Late

You’re either a cat person or a dog person. You’re either an early bird or a night owl, and, similarly, you’re either usually early or habitually late.

From the Washington Post:

I’m a late person.

I don’t think of myself as late, though. Every single time that it happens (and it invariably happens) I think of it as an exceptional fluke that will not occur again. Me, chronically late? No! Unforeseen things just happen on my way to getting places. If I were honest I would admit that these miscalculations never result in my being early, but I am not honest. If we were completely honest, who could even get out of bed in the morning?

Here is a translation guide, if you know someone like me:

I am coming downstairs: I will respond to an email, eight minutes will pass, then I will come downstairs.
I am a block away: I am two blocks away.
I am five minutes away: I am ten minutes away.
I am seventeen minutes away: I am giving you an oddly specific number to disguise the fact that I am probably something like half an hour away.
Twenty minutes away!: I am lost somewhere miles away, but optimistic.
I’m en route!: I am still in my apartment.
See you at [Time we originally agreed upon]: I’m about to go take a shower, then get dressed, and then I will leave at the time we agreed to meet.

And if you say “I’m running five minutes late” this, to me, translates to “Hey, you now have time to watch a 90-minute film before you get dressed!”

I haven’t always been a late person. I didn’t think of myself as a late person until last week, when it finally happened.

“Dinner is at 7:00,” a friend told me. I showed up at 7:15, after a slight miscalculation or two while getting dressed that I had totally not foreseen, and then we waited for fifteen more minutes. Dinner was at 7:30. I had been assigned my own time zone. I was That Late Person.

The curse of the habitually late person is to be surrounded by early people. Early people do not think of themselves as Early People. They think of themselves as Right. “You have to be early in order to be on time,” they point out. Being on time is important to them. The forty minutes between when they arrive ten minutes early in order to “scout the place out” and “get in line” and when you show up mumbling excuses is the time it takes them to perfect the reproachful but resigned expression they are wearing when you get there. It is an expression that would not look out of place on a medieval saint. It is luminous with a kind of righteous indignation, eyes lifted skyward to someone who appreciates the value of time, a sad, small smile curving the lips to show that they forgive you, because they always forgive you, because you know not what you do.

“Well,” you say, “there was traffic.” This is never a lie. There is always traffic somewhere. But it is seldom actually why you are late. You might as well say, “I hear in Los Angeles today there was a bear running around and the police had to subdue it” for the relevance this story has to your arrival time. You hit every green light. The traffic parted for you, effortlessly, as though you were Moses. You were still half an hour late.

Still, it is best to say something. The next best thing to not being late, you have always felt, is to have an amusing excuse for why. “I am sorry I’m late,” you say. “I ran into Constance Moondragon, that crazy lady from the bus!” This is, technically, true — you saw her on the sidewalk, but did not actually speak to her — and it buys you time.

Sometimes this compounds. When you realize you are late, the thought sometimes occurs to you that “Well, since I’m going to be late, I should bring a gift to atone.” Then you are two hours late because all the liquor stores were closed, instead of forty-five minutes late, as planned.

Being late is a kind of optimism. Every time I leave to go somewhere I always think, on some level, “Maybe this is the day that leaving exactly when the event starts will get me there on time.” I am not sure how this will work, but hope springs eternal.

Besides, isn’t there a kind of graciousness to being late, as some writers of etiquette books will tell you? If you show up precisely on time, you run the risk of catching your hosts in the inevitable last-minute scramble to make the place look decent, pour the wine, and hide their collections of werewolf erotica under the settee. To arrive 15 minutes after the scheduled time shows not disrespect for your hosts’ time, but a respect for their effort to make hosting seem like an effortless flow of magic.

The hosts never quite see things that way, of course.

By this point, you have probably lost all sympathy for me. The first comment on this piece will, I assume, be someone saying, “You sound like you are deeply self-centered and don’t care at all about the feelings of others, and I feel sorry for you.” And the thing is, all the evidence points to your being right, except for my feeble assertion that in my heart of hearts, I really do value your time, I never consciously intend to be late in a cruel way, and I am not the terrible person I appear. And that doesn’t go very far.

And all this being said, the life of a late person is great. I don’t do it on purpose, but it has much to recommend it. “People who show up late for things are always so much more cheerful than the people who have to wait for them,” E. V. Lucas said. This is true. One time I showed up early for something by mistake, and it was awful! I had to wait around for half an hour! Being late, you get all the fun of being there, with none of the pain of having to wait for other people to get there. You show up, and the party has already started. You get to do That Fun Thing That You Were Doing Right Before You Left and then join in That Fun Thing Everyone Is Doing When You Arrive. It’s the best of all possible worlds. You never have to stand alone in the rain anywhere waiting for anyone to assemble. Your host is never in the shower when you show up. You miss a couple of trailers, but you never have to see those long-form infomercials or answer movie theater trivia. You never have to be the first one at a party, making awkward small talk to the host and volunteering to help saute the onions. Do you really look like someone who would be good at sauteing onions? Of course not. What are you doing here? Why didn’t you wait half an hour like everyone else? You could be watching a video of a cat and a horse being friends!

Read the entire article here.

Left Brain, Right Brain or Top Brain, Bottom Brain?

Are you analytical and logical? If so, you are likely to be labeled “left-brained”. On the other hand, if you are emotional and creative, you are more likely to be labeled “right-brained”. And so the popular narrative of brain function continues. But this generalized distinction is a myth. Our brains’ hemispheres do specialize, but not in such an overarching way. Recent research points to another distinction: top brain and bottom brain.

From WSJ:

Who hasn’t heard that people are either left-brained or right-brained—either analytical and logical or artistic and intuitive, based on the relative “strengths” of the brain’s two hemispheres? How often do we hear someone remark about thinking with one side or the other?

A flourishing industry of books, videos and self-help programs has been built on this dichotomy. You can purportedly “diagnose” your brain, “motivate” one or both sides, indulge in “essence therapy” to “restore balance” and much more. Everyone from babies to elders supposedly can benefit. The left brain/right brain difference seems to be a natural law.

Except that it isn’t. The popular left/right story has no solid basis in science. The brain doesn’t work one part at a time, but rather as a single interactive system, with all parts contributing in concert, as neuroscientists have long known. The left brain/right brain story may be the mother of all urban legends: It sounds good and seems to make sense—but just isn’t true.

The origins of this myth lie in experimental surgery on some very sick epileptics a half-century ago, conducted under the direction of Roger Sperry, a renowned neuroscientist at the California Institute of Technology. Seeking relief for their intractable epilepsy, and encouraged by Sperry’s experimental work with animals, 16 patients allowed the Caltech team to cut the corpus callosum, the massive bundle of nerve fibers that connects the two sides of the brain. The patients’ suffering was alleviated, and Sperry’s postoperative studies of these volunteers confirmed that the two halves do, indeed, have distinct cognitive capabilities.

But these capabilities are not the stuff of popular narrative: They reflect very specific differences in function—such as attending to overall shape versus details during perception—not sweeping distinctions such as being “logical” versus “intuitive.” This important fine print got buried in the vast mainstream publicity that Sperry’s research generated.

There is a better way to understand the functioning of the brain, based on another, ordinarily overlooked anatomical division—between its top and bottom parts. We call this approach “the theory of cognitive modes.” Built on decades of unimpeachable research that has largely remained inside scientific circles, it offers a new way of viewing thought and behavior that may help us understand the actions of people as diverse as Oprah Winfrey, the Dalai Lama, Tiger Woods and Elizabeth Taylor.

Our theory has emerged from the field of neuropsychology, the study of higher cognitive functioning—thoughts, wishes, hopes, desires and all other aspects of mental life. Higher cognitive functioning is seated in the cerebral cortex, the rind-like outer layer of the brain that consists of four lobes. Illustrations of this wrinkled outer brain regularly show a top-down view of the two hemispheres, which are connected by thick bundles of neuronal tissue, notably the corpus callosum, an impressive structure consisting of some 250 million nerve fibers.

If you move the view to the side, however, you can see the top and bottom parts of the brain, demarcated largely by the Sylvian fissure, the crease-like structure named for the 17th-century Dutch physician who first described it. The top brain comprises the entire parietal lobe and the top (and larger) portion of the frontal lobe. The bottom comprises the smaller remainder of the frontal lobe and all of the occipital and temporal lobes.

Our theory’s roots lie in a landmark report published in 1982 by Mortimer Mishkin and Leslie G. Ungerleider of the National Institute of Mental Health. Their trailblazing research examined rhesus monkeys, which have brains that process visual information in much the same way as the human brain. Hundreds of subsequent studies in several fields have helped to shape our theory, by researchers such as Gregoire Borst of Paris Descartes University, Martha Farah of the University of Pennsylvania, Patricia Goldman-Rakic of Yale University, Melvin Goodale of the University of Western Ontario and Maria Kozhevnikov of the National University of Singapore.

This research reveals that the top-brain system uses information about the surrounding environment (in combination with other sorts of information, such as emotional reactions and the need for food or drink) to figure out which goals to try to achieve. It actively formulates plans, generates expectations about what should happen when a plan is executed and then, as the plan is being carried out, compares what is happening with what was expected, adjusting the plan accordingly.

The bottom-brain system organizes signals from the senses, simultaneously comparing what is being perceived with all the information previously stored in memory. It then uses the results of such comparisons to classify and interpret the object or event, allowing us to confer meaning on the world.

The top- and bottom-brain systems always work together, just as the hemispheres always do. Our brains are not engaged in some sort of constant cerebral tug of war, with one part seeking dominance over another. (What a poor evolutionary strategy that would have been!) Rather, they can be likened roughly to the parts of a bicycle: the frame, seat, wheels, handlebars, pedals, gears, brakes and chain that work together to provide transportation.

But here’s the key to our theory: Although the top and bottom parts of the brain are always used during all of our waking lives, people do not rely on them to an equal degree. To extend the bicycle analogy, not everyone rides a bike the same way. Some may meander, others may race.

Read the entire article here.

Image: Left-brain, right-brain cartoon. Courtesy of HuffingtonPost.

A Little Give and Take

[Or, we could have titled this post, “The Original Old World to New World and Vice Versa Infectious Disease Vector”.]

Syphilis is not some god’s vengeful jest on humanity for certain “deviant” behaviors, whatever various televangelists may claim. But it may well be revenge for the bubonic plague. And Columbus and his fellow explorers could well be to blame for bringing syphilis back to Europe in exchange for introducing the Americas to smallpox, measles and the plague. What a legacy!

From the Guardian:

Last month, Katherine Wright was awarded the Wellcome Trust science writing prize at a ceremony at the Observer’s offices at Kings Place, London. Wright, who is studying for a DPhil in structural biology at Oxford University, was judged the winner of category A “for professional scientists of postgraduate level and above” from more than 600 entries by a panel including BBC journalist Maggie Philbin, scientist and broadcaster Helen Czerski and the Observer’s Carole Cadwalladr. “I am absolutely thrilled to have won the science writing prize,” says Wright. “This experience has inspired me to continue science writing in the future.”

In the 1490s, a gruesome new disease exploded across Europe. It moved with terrifying speed. Within five years of the first reported cases, among the mercenary army hired by Charles VIII of France to conquer Naples, it was all over the continent and reaching into north Africa. The first symptom was a lesion, or chancre, in the genital region. After that, the disease slowly progressed to the increasingly excruciating later stages. The infected watched their bodies disintegrate, with rashes and disfigurements, while they gradually descended into madness. Eventually, deformed and demented, they died.

Some called it the French disease. To the French, it was the Neapolitan disease. The Russians blamed the Polish. In 1530, an Italian physician penned an epic poem about a young shepherd named Syphilis, who so angered Apollo that the god struck him down with a disfiguring malady to destroy his good looks. It was this fictional shepherd (rather than national rivalries) who donated the name that eventually stuck: the disease, which first ravaged the 16th-century world and continues to affect untold millions today, is now known as syphilis.

As its many names attest, contemporaries of the first spread of syphilis did not know where this disease had come from. Was it indeed the fault of the French? Was it God’s punishment on earthly sinners?

Another school of thought, less xenophobic and less religious, soon gained traction. Columbus’s historic voyage to the New World was in 1492. The Italian soldiers were noticing angry chancres on their genitals by 1494. What if Columbus had brought the disease back to Europe with him as an unwelcome stowaway aboard the Pinta or the Niña?

Since the 1500s, we have discovered a lot more about syphilis. We know it is caused by a spiral-shaped bacterium called Treponema pallidum, and we know that we can destroy this bacterium and cure the disease using antibiotics. (Thankfully we no longer “treat” syphilis with poisonous, potentially deadly mercury, which was used well into the 19th century.)

However, scientists, anthropologists, and historians still disagree about the origin of syphilis. Did Columbus and his sailors really transport the bacterium back from the New World? Or was it just coincidental timing, that the first cases were recorded soon after the adventurers’ triumphant return to the Old World? Perhaps syphilis was already present in the population, but doctors had only just begun to distinguish between syphilis and other disfiguring illnesses such as leprosy; or perhaps the disease suddenly increased in virulence at the end of the 15th century. The “Columbian” hypothesis insists that Columbus is responsible, and the “pre-Columbian” hypothesis that he had nothing to do with it.

Much of the evidence to distinguish between these two hypotheses comes from the skeletal record. Late-stage syphilis causes significant and identifiable changes in the structure of bone, including abnormal growths. To prove that syphilis was already lurking in Europe before Columbus returned, anthropologists would need to identify European skeletons with the characteristic syphilitic lesions, and date those skeletons accurately to a time before 1493.

This has proved a tricky exercise in practice. Identifying past syphilis sufferers in the New World is straightforward: ancient graveyards are overflowing with clearly syphilitic corpses, dating back centuries before Columbus was even born. However, in the Old World, a mere scattering of pre-Columbian syphilis candidates have been unearthed.

Are these 50-odd skeletons the sought-after evidence of pre-Columbian syphilitics? With such a small sample size, it is difficult to definitively diagnose these skeletons with syphilis. There are only so many ways bone can be damaged, and several diseases produce a bone pattern similar to syphilis. Furthermore, the dating methods used can be inexact, thrown off by hundreds of years because of a fish-rich diet, for example.

A study published in 2011 has systematically compared these European skeletons, using rigorous criteria for bone diagnosis and dating. None of the candidate skeletons passed both tests. In all cases, ambiguity in the bone record or the dating made it impossible to say for certain that the skeleton was both syphilitic and pre-Columbian. In other words, there is very little evidence to support the pre-Columbian hypothesis. It seems increasingly likely that Columbus and his crew were responsible for transporting syphilis from the New World to the Old.

Of course, Treponema pallidum was not the only microbial passenger to hitch a ride across the Atlantic with Columbus. But most of the traffic was going the other way: smallpox, measles, and bubonic plague were only some of the Old World diseases which infiltrated the New World, swiftly decimating thousands of Native Americans. Syphilis was not the French disease, or the Polish disease. It was the disease – and the revenge – of the Americas.

Read the entire article here.

Image: Christopher Columbus by Sebastiano del Piombo (1485–1547). Courtesy of Wikipedia / Metropolitan Museum of Art.

Britain’s Genomics NHS

The United Kingdom is plotting a visionary strategy that will put its treasured National Health Service (NHS) at the heart of the new revolution in genomics-based medical care.

From Technology Review:

By sequencing the genomes of 100,000 patients and integrating the resulting data into medical care, the U.K. could become the first country to introduce genome sequencing into its mainstream health system. The U.K. government hopes that the investment will improve patient outcomes while also building a genomic medicine industry. But the project will test the practical challenges of integrating and safeguarding genomic data within an expansive health service.

Officials breathed life into the ambitious sequencing project in June when they announced the formation of Genomics England, a company set up to execute the £100 million project. The goal is to “transform how the NHS uses genomic medicine,” says the company’s chief scientist, Mark Caulfield.

Those changes will take many shapes. First, by providing whole-genome sequencing and analysis for National Health Service patients with rare diseases, Genomics England could help families understand the origin of these conditions and help doctors better treat them. Second, the company will sequence the genomes of cancer patients and their tumors, which could help doctors identify the best drugs to treat the disease. Finally, say leaders of the 100,000 genomes project, the efforts could uncover the basis for bacterial and viral resistance to medicines.

“We hope that the legacy at the end of 2017, when we conclude the 100,000 whole-genome sequences, will be a transformed capacity and capability in the NHS to use this data,” says Caulfield.

In the last few years, the cost and time required to sequence DNA have plummeted (see “Bases to Bytes”), making the technology more feasible to use as part of clinical care. Governments around the world are investing in large-scale projects to identify the best way to harness genome technology in a medical setting. For example, the Faroe Islands, a sovereign state within the Kingdom of Denmark, is offering sequencing to all of its citizens to understand the basis of genetic diseases prevalent in the isolated population. The U.S. has funded several large grants to study how to best use medical genomic data, and in 2011 it announced an effort to sequence thousands of veterans’ genomes. In 1999, the Chinese government helped establish the Beijing Genomics Institute, which would later become the world’s most prolific genome institute, providing sequences for projects based in China and abroad (see “Inside China’s Genome Factory”).

But the U.K. project stands out for the large number of genomes planned and the integration of the data into a national health-care system that serves more than 60 million people. The initial program will focus on rare inherited diseases, cancer, and infectious pathogens. Initially, the greatest potential will be in giving families long-sought-after answers as to why a rare disorder afflicts them or their children, and “in 10 or 20 years, there may be treatments sprung from it,” says Caulfield.

In addition to exploring how to best handle and use genomic data, the projects taking place in 2014 will give Genomics England time to explore different sequencing technologies offered by commercial providers. The San Diego-based sequencing company Illumina will provide sequencing at existing facilities in England, but Caulfield emphasizes that the project will want to use the sequencing services of multiple commercial providers. “We are keen to encourage competitiveness in this marketplace as a route to bring down the price for everybody.”

To help control costs for the lofty project, and to foster investment in genomic medicine in the U.K., Genomics England will ask commercial providers to set up sequencing centers in England. “Part of this program is to generate wealth, and that means U.K. jobs,” he says. “We want the sequencing providers to invest in the U.K.” The sequencing centers will be ready by 2015, when the project kicks off in earnest. “Then we will be sequencing 30,000 whole-genome sequences a year,” says Caulfield.
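
As a quick sanity check, the quoted throughput and deadline roughly hang together; the 2014 pilot volume below is my own assumption, not a figure from the article.

```python
# Back-of-the-envelope check of the 100,000 genomes target, given the stated plan
# of 30,000 whole genomes per year once the sequencing centres open in 2015.
per_year = 30_000
full_years = 3                 # 2015, 2016 and 2017 ("conclude ... at the end of 2017")
main_phase = per_year * full_years
pilot_assumed = 10_000         # assumed allowance for the 2014 pilot projects (not from the article)

print(main_phase)                  # 90000
print(main_phase + pilot_assumed)  # 100000, in line with the stated goal
```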

Read the entire article here.

Image: Argonne’s Midwest Center for Structural Genomics deposits 1,000th protein structure. Courtesy of Wikipedia.

Graffiti Gets Good

Modern graffiti has come a long way since the days of “Kilroy Was Here” during the Second World War. Nowadays it’s a fully fledged alternative art form, having been assimilated into pop culture and, for a lucky few, into the contemporary art establishment. And, like Banksy, some graffiti artists are making a name for themselves as well as producing innovative and engaging street art.

See more graffiti here.

Image: Woman’s face in Collingwood, Melbourne by Rone. Courtesy of Guardian.

The Outliner as Outlier

Outlining tools for the composition of text are intimately linked with the evolution of the personal computer industry. Yet while outliners were some of the earliest “apps” to appear, their true power, as mechanisms for thinking new thoughts, has yet to be fully realized.

From Technology Review:

In 1984, the personal-computer industry was still small enough to be captured, with reasonable fidelity, in a one-volume publication, the Whole Earth Software Catalog. It told the curious what was up: “On an unlovely flat artifact called a disk may be hidden the concentrated intelligence of thousands of hours of design.” And filed under “Organizing” was one review of particular note, describing a program called ThinkTank, created by a man named Dave Winer.

ThinkTank was outlining software that ran on a personal computer. There had been outline programs before (most famously, Doug Engelbart’s NLS or oNLine System, demonstrated in 1968 in “The Mother of All Demos,” which also included the first practical implementation of hypertext). But Winer’s software was outlining for the masses, on personal computers. The reviewers in the Whole Earth Software Catalog were enthusiastic: “I have subordinate ideas neatly indented under other ideas,” wrote one. Another enumerated the possibilities: “Starting to write. Writer’s block. Refining expositions or presentations. Keeping notes that you can use later. Brainstorming.” ThinkTank wasn’t just a tool for making outlines. It promised to change the way you thought.

It’s an elitist view of software, and maybe self-defeating. Perhaps most users, who just want to compose two-page documents and quick e-mails, don’t need the structure that Fargo imposes.

But I sympathize with Winer. I’m an outliner person. I’ve used many outliners over the decades. Right now, my favorite is the open-source Org-mode in the Emacs text editor. Learning an outliner’s commands is a pleasure, because the payoff—the ability to distill a bubbling cauldron of thought into a list, and then to expand that bulleted list into an essay, a report, anything—is worth it. An outliner treats a text as a set of Lego bricks to be pulled apart and reassembled until the most pleasing structure is found.
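
That “Lego bricks” description maps directly onto a simple tree data structure. Below is a minimal sketch of the idea; the class and function names are my own invention, not Fargo’s, Concord’s or Org-mode’s actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One headline in an outline; children are the indented lines beneath it."""
    text: str
    children: List["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

def move(parent_from: Node, index: int, parent_to: Node) -> None:
    """Pull a 'brick' out of one place in the hierarchy and reattach it elsewhere."""
    parent_to.children.append(parent_from.children.pop(index))

def render(node: Node, depth: int = 0) -> str:
    """Print the tree as an indented outline."""
    lines = ["  " * depth + "- " + node.text]
    for child in node.children:
        lines.append(render(child, depth + 1))
    return "\n".join(lines)

# Build a tiny outline, then restructure it without retyping anything.
essay = Node("Essay")
intro = essay.add(Node("Introduction"))
body = essay.add(Node("Body"))
body.add(Node("Brainstormed point that really belongs in the introduction"))
move(body, 0, intro)
print(render(essay))
```

The whole trick of an outliner is that restructuring is a pointer operation on the tree, so no text ever has to be retyped.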

Fargo is an excellent outline editor, and it’s innovative because it’s a true Web application, running all its code inside the browser and storing versions of files in Dropbox. (Winer also recently released Concord, the outlining engine inside Fargo, under a free software license so that any developer can insert an outline into any Web application.) As you move words and ideas around, Fargo feels jaunty. Click on one of those lines in your outline and drag it, and arrows show you where else in the hierarchy that line might fit. They’re good arrows: fat, clear, obvious, informative.

For a while, bloggers using Fargo could publish posts with a free hosted service operated by Winer. But this fall the service broke, and Winer said he didn’t see how to fix it. Perhaps that’s just as well: an outline creates a certain unresolved tension with the dominant model for blogging. For Winer, a blog is a big outline of one’s days and intellectual development. But most blog publishing systems treat each post in isolation: a title, some text, maybe an image or video. Are bloggers ready to see a blog as one continuous document, a set of branches hanging off a common trunk? That’s the thing about outlines: they can become anything.

Read the entire article here.

Good Job, Mr. Snowden

Far from being a communist sympathizer and U.S. traitor, Edward Snowden has done the United States and the world a great service. Single-handedly he is responsible for some of the most important revelations concerning the inner machinations of the U.S. government, particularly its vast surveillance apparatus headed by the National Security Agency (NSA). Once held in high esteem by much of the world for its openness and transparency, the United States is now painted by the continuing revelations as little more than a paranoid security state akin to the former Soviet Union.

Mr. Snowden, your life for the foreseeable future is likely to be hellish, but may you sleep soundly in the knowledge that you have helped open our eyes to the egregious actions of a country many no longer trust.

From the Guardian:

The National Security Agency monitored the phone conversations of 35 world leaders after being given the numbers by an official in another US government department, according to a classified document provided by whistleblower Edward Snowden.

The confidential memo reveals that the NSA encourages senior officials in its “customer” departments, such as the White House, State and the Pentagon, to share their “Rolodexes” so the agency can add the phone numbers of leading foreign politicians to their surveillance systems.

The document notes that one unnamed US official handed over 200 numbers, including those of the 35 world leaders, none of whom is named. These were immediately “tasked” for monitoring by the NSA.

The revelation is set to add to mounting diplomatic tensions between the US and its allies, after the German chancellor Angela Merkel on Wednesday accused the US of tapping her mobile phone.

After Merkel’s allegations became public, White House press secretary Jay Carney issued a statement that said the US “is not monitoring and will not monitor” the German chancellor’s communications. But that failed to quell the row, as officials in Berlin quickly pointed out that the US did not deny monitoring the phone in the past.

The NSA memo obtained by the Guardian suggests that such surveillance was not isolated, as the agency routinely monitors the phone numbers of world leaders – and even asks for the assistance of other US officials to do so.

The memo, dated October 2006 and which was issued to staff in the agency’s Signals Intelligence Directorate (SID), was titled “Customers Can Help SID Obtain Targetable Phone Numbers”.

It begins by setting out an example of how US officials who mixed with world leaders and politicians could help agency surveillance.

“In one recent case,” the memo notes, “a US official provided NSA with 200 phone numbers to 35 world leaders … Despite the fact that the majority is probably available via open source, the PCs [intelligence production centers] have noted 43 previously unknown phone numbers. These numbers plus several others have been tasked.”

The document continues by saying the new phone numbers had helped the agency discover still more new contact details to add to their monitoring: “These numbers have provided lead information to other numbers that have subsequently been tasked.”

But the memo acknowledges that eavesdropping on the numbers had produced “little reportable intelligence”. In the wake of the Merkel row, the US is facing growing international criticism that any intelligence benefit from spying on friendly governments is far outweighed by the potential diplomatic damage.

The memo then asks analysts to think about any customers they currently serve who might similarly be happy to turn over details of their contacts.

“This success leads S2 [signals intelligence] to wonder if there are NSA liaisons whose supported customers may be willing to share their ‘Rolodexes’ or phone lists with NSA as potential sources of intelligence,” it states. “S2 welcomes such information!”

The document suggests that sometimes these offers come unsolicited, with US “customers” spontaneously offering the agency access to their overseas networks.

“From time to time, SID is offered access to the personal contact databases of US officials,” it states. “Such ‘Rolodexes’ may contain contact information for foreign political or military leaders, to include direct line, fax, residence and cellular numbers.”

The Guardian approached the Obama administration for comment on the latest document. Officials declined to respond directly to the new material, instead referring to comments delivered by Carney at Thursday’s daily briefing.

Carney told reporters: “The [NSA] revelations have clearly caused tension in our relationships with some countries, and we are dealing with that through diplomatic channels.

“These are very important relations both economically and for our security, and we will work to maintain the closest possible ties.”

The public accusation of spying on Merkel adds to mounting political tensions in Europe about the scope of US surveillance on the governments of its allies, after a cascade of backlashes and apologetic phone calls with leaders across the continent over the course of the week.

Asked on Wednesday evening if the NSA had in the past tracked the German chancellor’s communications, Caitlin Hayden, the White House’s National Security Council spokeswoman, said: “The United States is not monitoring and will not monitor the communications of Chancellor Merkel. Beyond that, I’m not in a position to comment publicly on every specific alleged intelligence activity.”

At the daily briefing on Thursday, Carney again refused to answer repeated questions about whether the US had spied on Merkel’s calls in the past.

The NSA memo seen by the Guardian was written halfway through George W Bush’s second term, when Condoleezza Rice was secretary of state and Donald Rumsfeld was in his final months as defence secretary.

Merkel, who, according to Reuters, suspected the surveillance after finding her mobile phone number written on a US document, is said to have called for US surveillance to be placed on a new legal footing during a phone call to President Obama.

“The [German] federal government, as a close ally and partner of the US, expects in the future a clear contractual basis for the activity of the services and their co-operation,” she told the president.

Read the entire article here.

Hotels of the Future

Fantastic — in the original sense of the word — designs for some futuristic hotels, some of which have arrived in the present.

See more designs here.

Image: The Heart hotel, designed by Arina Agieieva and Dmitry Zhuikov, is a proposed design for a New York hotel. The project aims to draw local residents and hotel visitors closer together by embedding the hotel into city life; bedrooms are found in the converted offices that flank the core of the structure – its heart – and leisure facilities are available for the use of everyone. Courtesy of Telegraph.

Wrong Decisions, Bad Statistics

Each of us makes countless decisions daily. A not insignificant number of these — each day — are probably wrong. And, in most cases, we continue, recover, readjust, move on, and sometimes even correct ourselves and learn. In the majority of instances, these wrong decisions lead to inconsequential results.

However, sometimes the results are far more tragic, leading to accidents, injury and death. When these incorrect decisions are made by healthcare professionals, the consequences are especially stark. By some estimates, around 50,000 hospital deaths each year in Canada and the U.S. could be prevented by eliminating misdiagnosis.

From the New York Times:

Six years ago I was struck down with a mystery illness. My weight dropped by 30 pounds in three months. I experienced searing stomach pain, felt utterly exhausted and no matter how much I ate, I couldn’t gain an ounce.

I went from slim to thin to emaciated. The pain got worse, a white heat in my belly that made me double up unexpectedly in public and in private. Delivering on my academic and professional commitments became increasingly challenging.

It was terrifying. I did not know whether I had an illness that would kill me or stay with me for the rest of my life or whether what was wrong with me was something that could be cured if I could just find out what on earth it was.

Trying to find the answer, I saw doctors in London, New York, Minnesota and Chicago.

I was offered a vast range of potential diagnoses. Cancer was quickly and thankfully ruled out. But many other possibilities remained on the table, from autoimmune diseases to rare viruses to spinal conditions to debilitating neural illnesses.

Treatments suggested ranged from a five-hour, high-risk surgery to remove a portion of my stomach, to lumbar spine injections to numb nerve paths, to a prescription of antidepressants.

Faced with all these confusing and conflicting opinions, I had to work out which expert to trust, whom to believe and whose advice to follow. As an economist specializing in the global economy, international trade and debt, I have spent most of my career helping others make big decisions — prime ministers, presidents and chief executives — and so I’m all too aware of the risks and dangers of poor choices in the public as well as the private sphere. But up until then I hadn’t thought much about the process of decision making. So in between M.R.I.’s, CT scans and spinal taps, I dove into the academic literature on decision making. Not just in my field but also in neuroscience, psychology, sociology, information science, political science and history.

What did I learn?

Physicians do get things wrong, remarkably often. Studies have shown that up to one in five patients are misdiagnosed. In the United States and Canada it is estimated that 50,000 hospital deaths each year could have been prevented if the real cause of illness had been correctly identified.

Yet people are loath to challenge experts. In a 2009 experiment carried out at Emory University, a group of adults was asked to make a decision while contemplating an expert’s claims, in this case, a financial expert. A functional M.R.I. scanner gauged their brain activity as they did so. The results were extraordinary: when confronted with the expert, it was as if the independent decision-making parts of many subjects’ brains pretty much switched off. They simply ceded their power to decide to the expert.

If we are to control our own destinies, we have to switch our brains back on and come to our medical consultations with plenty of research done, able to use the relevant jargon. If we can’t do this ourselves we need to identify someone in our social or family network who can do so on our behalf.

Anxiety, stress and fear — emotions that are part and parcel of serious illness — can distort our choices. Stress makes us prone to tunnel vision, less likely to take in the information we need. Anxiety makes us more risk-averse than we would be regularly and more deferential.

We need to know how we are feeling. Mindfully acknowledging our feelings serves as an “emotional thermostat” that recalibrates our decision making. It’s not that we can’t be anxious, it’s that we need to acknowledge to ourselves that we are.

It is also crucial to ask probing questions not only of the experts but of ourselves. This is because we bring into our decision-making process flaws and errors of our own. All of us show bias when it comes to what information we take in. We typically focus on anything that agrees with the outcome we want.

Read the entire article here.

MondayMap: Slavery 2013

A modern-day map of a detestable blight that humans refuse to eradicate. We have notions that slavery was a distant problem caused by the ancient Egyptians or the Roman colonizers or, more recently, 18th-century plantation owners. But, unfortunately, there are around 30 million slaves today — a thoroughly shameful statistic.

From the Washington Post:

We think of slavery as a practice of the past, an image from Roman colonies or 18th-century American plantations, but the practice of enslaving human beings as property still exists. There are 29.8 million people living as slaves right now, according to a comprehensive new report issued by the Australia-based Walk Free Foundation.

This is not some softened, by-modern-standards definition of slavery. These 30 million people are living as forced laborers, forced prostitutes, child soldiers, child brides in forced marriages and, in all ways that matter, as pieces of property, chattel in the servitude of absolute ownership. Walk Free investigated 162 countries and found slaves in every single one. But the practice is far worse in some countries than others.

The country where you are most likely to be enslaved is Mauritania. Although this vast West African nation has tried three times to outlaw slavery within its borders, it remains so common that it is nearly normal. The report estimates that four percent of Mauritania is enslaved – one out of every 25 people. (The aid group SOS Slavery, using a broader definition of slavery, estimated several years ago that as many as 20 percent of Mauritanians might be enslaved.)

The map at the top of this page shows almost every country in the world colored according to the share of its population that is enslaved. The rate of slavery is also alarmingly high in Haiti, in Pakistan and in India, the world’s second-most populous country. In all three, more than 1 percent of the population is estimated to live in slavery.

A few trends are immediately clear from the map up top. First, rich, developed countries tend to have by far the lowest rates of slavery. The report says that effective government policies, rule of law, political stability and development levels all make slavery less likely. The vulnerable are less vulnerable, those who would exploit them face higher penalties and greater risk of getting caught. A war, natural disaster or state collapse is less likely to force helpless children or adults into bondage. Another crucial factor in preventing slavery is discrimination. When society treats women, ethnic groups or religious minorities as less valuable or less worthy of protection, they are more likely to become slaves.

Then there are the worst-affected regions. Sub-Saharan Africa is a swath of red, with many countries having roughly 0.7 percent of the population enslaved — or one in every 140 people. The legacies of the transatlantic slave trade and European colonialism are still playing out in the region; ethnic divisions and systems of economic exploitation engineered there during the colonial era are still, to some extent, in place. Slavery is also driven by extreme poverty, high levels of corruption and toleration of child “marriages” of young girls to adult men who pay their parents a “dowry.”

Two other bright red regions are Southeast Asia and Eastern Europe. Both are blighted particularly by sex trafficking, a practice that bears little resemblance to popular Western conceptions of prostitution. Women and men are coerced into participating, often starting at a very young age, and are completely reliant on their traffickers for not just their daily survival but basic life choices; they have no say in where they go or what they do and are physically prevented from leaving. International sex traffickers have long targeted these two regions, whose women and men are prized for their skin tones and appearance by Western patrons.

Yes, this map can be a little misleading. The United States, per capita, has a very low rate of slavery: just 0.02 percent, or one in every 5,000 people. But that adds up to a lot: an estimated 60,000 slaves, right here in America.

If your goal is a world with as few slaves as possible — Walk Free says it is working to eradicate the practice within a generation — then this map is very important, because it shows you which countries have the most slaves and thus which governments can do the most to reduce the global number of slaves. In that sense, the United States could stand to do a lot.

You don’t have to go far to see slavery in America. Here in Washington, D.C., you can sometimes spot them on certain streets, late at night. Not all sex workers or “prostitutes” are slaves, of course; plenty have chosen the work voluntarily and can leave it freely. But, as the 2007 documentary “Very Young Girls” demonstrated, many are coerced into participating at a young age and gradually shifted into a life that very much resembles slavery.

A less visible but still prevalent form of slavery in America involves illegal migrant laborers who are lured with the promise of work and then manipulated into forced servitude, living without wages or freedom of movement, under constant threat of being turned over to the police should they let up in their work. Walk Free cites “a highly developed criminal economy that preys on economic migrants, trafficking and enslaving them.” That economy stretches from the migrants’ home countries right to the United States.

The country that is most marked by slavery, though, is clearly India. There are an estimated 14 million slaves in India – it would be as if the entire population of Pennsylvania were forced into slavery. The country suffers deeply from all major forms of slavery, according to the report. Forced labor is common, due in part to a system of hereditary debt bondage; many Indian children are born “owing” sums they could never possibly pay to masters who control them as chattel their entire lives. Others fall into forced labor when they move to a different region looking for work, and turn to an unlicensed “broker” who promises work but delivers them into servitude. The country’s caste system and widespread discrimination abet social norms that make it easier to turn a blind eye to the problem. Women and girls from underprivileged classes are particularly vulnerable to sexual slavery, whether under the guise of “child marriages” or not, although men and boys often fall victim as well.

Read the entire article here.

Image courtesy of Washington Post.

Propaganda Art From Pyongyang

While the North Korean regime is clearly bonkers (“crazy” for our U.S. readers), it does still turn out some fascinating art.

From the Guardian:

A jovial group of Red Guards bask in the golden glow of cornfields, waving their flags at the magnificent harvest, while a rustic farming couple look on, carrying an overflowing basket of perfectly plump red apples. It could be one of the many thousands of posters issued by the Chinese Communist Party’s Propaganda Department in the 1950s, of rosy-cheeked comrades brimming with vim and vigour. But something’s not quite right.

In the centre of this vision of optimism, where once might have beamed the cheerful face of Mao, stands the twisted loop of the China Central Television (CCTV) headquarters, radiating a lilac sheen. Framed by the vapour trail of a trio of jet-planes performing a victory flypast into the sunset, the building stands like a triumphal gateway to some promised land of Socialism with Chinese characteristics.

The image could well be the mischievous work of its own architects, the Rotterdam-based practice OMA, which has made its own collage of the building alongside Kim Jong-il, George W Bush, Saddam Hussein and Jesus for a book cover, as well as an image of it bursting into flames behind a spread-legged porn-star. But it is in fact the product of artists from a North Korean painting unit – the very same that used to produce such propaganda images for the Kim regime, but now find themselves designing food packaging in Pyongyang.

The Beautiful Future, which comprises six such paintings to date, is the brainchild of British ex-pat duo Nick Bonner and Dominic Johnson-Hill, who both arrived in the Chinese capital 20 years ago and caught the Beijing bug. Bonner runs Koryo Tours, a travel company specialising in trips to the DPRK, while Johnson-Hill presides over a street-wear empire, Plastered, producing T-shirts emblazoned with Maoist kitsch. The paintings, on show earlier this month as part of Beijing Design Week, are the inevitable result of their mutual obsessions.

“North Korean artists are the best people at delivering a message without slogans,” says Bonner, who collects North Korean art and has produced documentaries exploring life in the DPRK – as well as what he describes as “North Korea’s first feature-length rom-com” last year, Comrade Kim Goes Flying. “We wanted to show contemporary China as it could have been, if it had continued with Maoist ideology.”

One painting shows a line of excited comrades, obediently dressed in Mao suits, filing towards Herzog and de Meuron’s Bird’s Nest stadium. The skyline is proudly choked with glassy skyscrapers on one side and a thicket of cooling towers on the other, belching smoke productively into the pink skies. An elderly tourist and his granddaughter look on in awe at the spirited scene.

See more propaganda art here.

Image: “KTV Gives Us a Voice,” from The Beautiful Future. Courtesy of the Guardian.

Innovation in Education?

While many aspects of our lives have changed, mostly for the better, over the last two to three hundred years, one area remains relatively untouched: education. Since the industrial revolution began in Western Europe and swept the world, not much has changed in the way we educate our children. It is still very much an industrial, factory-oriented process.

You could argue that technology has altered how we learn, and you would be partly correct. You could also argue that our children are much more informed and learned than their peers in Victorian England, and, again, you would be partly correct. But the most critical element remains the same — the process. The regimented approach, the rote teaching, the focus on tests and testing, and the systematic squelching of creativity all remain solidly in place.

William Torrey Harris, one of the founders of the U.S. public school system in the late-1800s, once said,

“Ninety-nine [students] out of a hundred are automata, careful to walk in prescribed paths, careful to follow the prescribed custom. This is not an accident but the result of substantial education, which, scientifically defined, is the subsumption of the individual.”

And, in testament to his enduring legacy, much of this can still be seen in action today in most Western public schools.

Yet, in some pockets of the world there is hope. Sir Ken Robinson’s vision may yet come to fruition.

From Wired:

José Urbina López Primary School sits next to a dump just across the US border in Mexico. The school serves residents of Matamoros, a dusty, sunbaked city of 489,000 that is a flash point in the war on drugs. There are regular shoot-outs, and it’s not uncommon for locals to find bodies scattered in the street in the morning. To get to the school, students walk along a white dirt road that parallels a fetid canal. On a recent morning there was a 1940s-era tractor, a decaying boat in a ditch, and a herd of goats nibbling gray strands of grass. A cinder-block barrier separates the school from a wasteland—the far end of which is a mound of trash that grew so big, it was finally closed down. On most days, a rotten smell drifts through the cement-walled classrooms. Some people here call the school un lugar de castigo—”a place of punishment.”

For 12-year-old Paloma Noyola Bueno, it was a bright spot. More than 25 years ago, her family moved to the border from central Mexico in search of a better life. Instead, they got stuck living beside the dump. Her father spent all day scavenging for scrap, digging for pieces of aluminum, glass, and plastic in the muck. Recently, he had developed nosebleeds, but he didn’t want Paloma to worry. She was his little angel—the youngest of eight children.

After school, Paloma would come home and sit with her father in the main room of their cement-and-wood home. Her father was a weather-beaten, gaunt man who always wore a cowboy hat. Paloma would recite the day’s lessons for him in her crisp uniform—gray polo, blue-and-white skirt—and try to cheer him up. She had long black hair, a high forehead, and a thoughtful, measured way of talking. School had never been challenging for her. She sat in rows with the other students while teachers told the kids what they needed to know. It wasn’t hard to repeat it back, and she got good grades without thinking too much. As she headed into fifth grade, she assumed she was in for more of the same—lectures, memorization, and busy work.

Sergio Juárez Correa was used to teaching that kind of class. For five years, he had stood in front of students and worked his way through the government-mandated curriculum. It was mind-numbingly boring for him and the students, and he’d come to the conclusion that it was a waste of time. Test scores were poor, and even the students who did well weren’t truly engaged. Something had to change.

He too had grown up beside a garbage dump in Matamoros, and he had become a teacher to help kids learn enough to make something more of their lives. So in 2011—when Paloma entered his class—Juárez Correa decided to start experimenting. He began reading books and searching for ideas online. Soon he stumbled on a video describing the work of Sugata Mitra, a professor of educational technology at Newcastle University in the UK. In the late 1990s and throughout the 2000s, Mitra conducted experiments in which he gave children in India access to computers. Without any instruction, they were able to teach themselves a surprising variety of things, from DNA replication to English.

Juárez Correa didn’t know it yet, but he had happened on an emerging educational philosophy, one that applies the logic of the digital age to the classroom. That logic is inexorable: Access to a world of infinite information has changed how we communicate, process information, and think. Decentralized systems have proven to be more productive and agile than rigid, top-down ones. Innovation, creativity, and independent thinking are increasingly crucial to the global economy.

And yet the dominant model of public education is still fundamentally rooted in the industrial revolution that spawned it, when workplaces valued punctuality, regularity, attention, and silence above all else. (In 1899, William T. Harris, the US commissioner of education, celebrated the fact that US schools had developed the “appearance of a machine,” one that teaches the student “to behave in an orderly manner, to stay in his own place, and not get in the way of others.”) We don’t openly profess those values nowadays, but our educational system—which routinely tests kids on their ability to recall information and demonstrate mastery of a narrow set of skills—doubles down on the view that students are material to be processed, programmed, and quality-tested. School administrators prepare curriculum standards and “pacing guides” that tell teachers what to teach each day. Legions of managers supervise everything that happens in the classroom; in 2010 only 50 percent of public school staff members in the US were teachers.

The results speak for themselves: Hundreds of thousands of kids drop out of public high school every year. Of those who do graduate from high school, almost a third are “not prepared academically for first-year college courses,” according to a 2013 report from the testing service ACT. The World Economic Forum ranks the US just 49th out of 148 developed and developing nations in quality of math and science instruction. “The fundamental basis of the system is fatally flawed,” says Linda Darling-Hammond, a professor of education at Stanford and founding director of the National Commission on Teaching and America’s Future. “In 1970 the top three skills required by the Fortune 500 were the three Rs: reading, writing, and arithmetic. In 1999 the top three skills in demand were teamwork, problem-solving, and interpersonal skills. We need schools that are developing these skills.”

That’s why a new breed of educators, inspired by everything from the Internet to evolutionary psychology, neuroscience, and AI, is inventing radical new ways for children to learn, grow, and thrive. To them, knowledge isn’t a commodity that’s delivered from teacher to student but something that emerges from the students’ own curiosity-fueled exploration. Teachers provide prompts, not answers, and then they step aside so students can teach themselves and one another. They are creating ways for children to discover their passion—and uncovering a generation of geniuses in the process.

At home in Matamoros, Juárez Correa found himself utterly absorbed by these ideas. And the more he learned, the more excited he became. On August 21, 2011—the start of the school year — he walked into his classroom and pulled the battered wooden desks into small groups. When Paloma and the other students filed in, they looked confused. Juárez Correa invited them to take a seat and then sat down with them.

He started by telling them that there were kids in other parts of the world who could memorize pi to hundreds of decimal points. They could write symphonies and build robots and airplanes. Most people wouldn’t think that the students at José Urbina López could do those kinds of things. Kids just across the border in Brownsville, Texas, had laptops, high-speed Internet, and tutoring, while in Matamoros the students had intermittent electricity, few computers, limited Internet, and sometimes not enough to eat.

“But you do have one thing that makes you the equal of any kid in the world,” Juárez Correa said. “Potential.”

He looked around the room. “And from now on,” he told them, “we’re going to use that potential to make you the best students in the world.”

Paloma was silent, waiting to be told what to do. She didn’t realize that over the next nine months, her experience of school would be rewritten, tapping into an array of educational innovations from around the world and vaulting her and some of her classmates to the top of the math and language rankings in Mexico.

“So,” Juárez Correa said, “what do you want to learn?”

In 1999, Sugata Mitra was chief scientist at a company in New Delhi that trains software developers. His office was on the edge of a slum, and on a hunch one day, he decided to put a computer into a nook in a wall separating his building from the slum. He was curious to see what the kids would do, particularly if he said nothing. He simply powered the computer on and watched from a distance. To his surprise, the children quickly figured out how to use the machine.

Over the years, Mitra got more ambitious. For a study published in 2010, he loaded a computer with molecular biology materials and set it up in Kalikuppam, a village in southern India. He selected a small group of 10- to 14-year-olds and told them there was some interesting stuff on the computer, and might they take a look? Then he applied his new pedagogical method: He said no more and left.

Over the next 75 days, the children worked out how to use the computer and began to learn. When Mitra returned, he administered a written test on molecular biology. The kids answered about one of four questions correctly. After another 75 days, with the encouragement of a friendly local, they were getting every other question right. “If you put a computer in front of children and remove all other adult restrictions, they will self-organize around it,” Mitra says, “like bees around a flower.”

A charismatic and convincing proselytizer, Mitra has become a darling in the tech world. In early 2013 he won a $1 million grant from TED, the global ideas conference, to pursue his work. He’s now in the process of establishing seven “schools in the cloud,” five in India and two in the UK. In India, most of his schools are single-room buildings. There will be no teachers, curriculum, or separation into age groups—just six or so computers and a woman to look after the kids’ safety. His defining principle: “The children are completely in charge.”

Read the entire article here.

Image: William Torrey Harris (September 10, 1835 – November 5, 1909), American educator, philosopher, and lexicographer. Courtesy of Wikipedia.

Why Sleep?

There are more theories on why we sleep than there are cable channels in the U.S. But that hasn’t prevented researchers from proposing yet another one — it’s all about flushing waste.

From the Guardian:

Scientists in the US claim to have a new explanation for why we sleep: in the hours spent slumbering, a rubbish disposal service swings into action that cleans up waste in the brain.

Through a series of experiments on mice, the researchers showed that during sleep, cerebrospinal fluid is pumped around the brain, and flushes out waste products like a biological dishwasher.

The process helps to remove the molecular detritus that brain cells churn out as part of their natural activity, along with toxic proteins that can lead to dementia when they build up in the brain, the researchers say.

Maiken Nedergaard, who led the study at the University of Rochester, said the discovery might explain why sleep is crucial for all living organisms. “I think we have discovered why we sleep,” Nedergaard said. “We sleep to clean our brains.”

Writing in the journal Science, Nedergaard describes how brain cells in mice shrank when they slept, making the space between them on average 60% greater. This made the cerebrospinal fluid in the animals’ brains flow ten times faster than when the mice were awake.

The scientists then checked how well mice cleared toxins from their brains by injecting traces of proteins that are implicated in Alzheimer’s disease. These amyloid beta proteins were removed faster from the brains of sleeping mice, they found.

Nedergaard believes the clean-up process is more active during sleep because it takes too much energy to pump fluid around the brain when awake. “You can think of it like having a house party. You can either entertain the guests or clean up the house, but you can’t really do both at the same time,” she said in a statement.

According to the scientist, the cerebrospinal fluid flushes the brain’s waste products into what she calls the “glymphatic system,” which carries them down through the body and ultimately to the liver, where they are broken down.

Other researchers were sceptical of the study, saying it was too early to know whether the same process operates in humans, or how to gauge the importance of the mechanism. “It’s very attractive, but I don’t think it’s the main function of sleep,” said Raphaelle Winsky-Sommerer, a specialist on sleep and circadian rhythms at Surrey University. “Sleep is related to everything: your metabolism, your physiology, your digestion, everything.” She said she would like to see other experiments that show a build-up of waste in the brains of sleep-deprived people, and a reduction of that waste when they catch up on sleep.

Vladyslav Vyazovskiy, another sleep expert at Surrey University, was also sceptical. “I’m not fully convinced. Some of the effects are so striking they are hard to believe. I would like to see this work replicated independently before it can be taken seriously,” he said.

Jim Horne, professor emeritus and director of the sleep research centre at Loughborough University, cautioned that what happened in the fairly simple mouse brain might be very different to what happened in the more complex human brain. “Sleep in humans has evolved far more sophisticated functions for our cortex than that for the mouse, even though the present findings may well be true for us,” he said.

But Nedergaard believes she will find the same waste disposal system at work in humans. The work, she claims, could pave the way for medicines that slow the onset of dementias caused by the build-up of waste in the brain, and even help those who go without enough sleep. “It may be that we can reduce the need at least, because it’s so annoying to waste so much time sleeping,” she said.

Read the entire article here.

Image courtesy of Telegraph.

It’s Pretty Ugly Online

This is a compelling and sad story of people with ugly minds who have nothing better to do than demean others. The others in this story are those who pervade social media in search of attention and a modicum of self-esteem. Which group is most in need of help? Well, you decide.

From Wired:

Live artist Louise Orwin has created a show—Pretty Ugly—based on her research into the phenomenon of teenage girls discussing body issues on social media.

“OK, guys, this is a serious matter… I want to know whether I’m pretty or not,” says a teenage girl with a high-pitched voice and heavily made-up eyes going by the name of girlsite101.

She goes on to explain with pageant participant peppiness that her classmates say she is pretty and she “wins homecoming queen every year,” but that she’s not convinced. The only way to settle the situation is to ask the impartial commenters of YouTube.

The video has notched up more than 110,000 views and the comments are, frankly, brutal: “Bitch” and “You have an ugly personality and you’re making this shit up. You’re ugly” rank the highest. But there are many, many more: “You look like a bug!”; “You’re ugly as fuck […] You might want to cover up that third eye you twig. And you’re ears are fucking tiny. Like seriously, stick them up your ass. And stop telling lies”; “stupid slut”; “attention seeker”; “a pretty face destroyed by an ugly personality.” There are 5,500 of these comments—the vast majority of them are negative.

Girlsite101’s video is not a one-off. There are almost 600,000 results when you search for “am I pretty or ugly” on YouTube. It’s this phenomenon that live artist Louise Orwin has set out to explore in a performance called Pretty Ugly.

Orwin’s journey started when she came across the “Thinspiration” community on Tumblr, where pictures of slim women—ranging from the naturally slim to the emaciated—are shared as a source of inspiration for those trying to lose weight. “I got obsessed with the way these teenage girls were using Tumblr,” she told Wired.co.uk. “I felt like Alice tumbling down the rabbit hole.”

At the same time, Orwin was exploring how teenage girls use social media compared to the outlets she had as a teenager. “When I was a teenager I was writing in a diary; today teenagers are posting onto Tumblr.”

“I was horrified by it”

During the course of her research, she chanced upon one of the aforementioned “am I pretty or ugly?” videos. “I saw a really young girl pouting and posing in front of the camera. Her language was something that struck me. It was really teenage language; she was talking about how boys at school were picking on her but there was one guy who fancied her and she didn’t know why boys didn’t like her,” Orwin explains. The girl on camera then asked whether her audience thought she was pretty or ugly. “I was horrified by it,” said Orwin. “Then you look at the comments below; they were horrific.”

Orwin then spotted the many related videos alongside it. “The thing that struck me is that it seemed like a really brave thing to do. I couldn’t imagine myself posting a video like that because I would have thought that she was opening herself up to a huge amount of criticism.”

After trying to contact some of the girls who made the videos, Orwin decided to post some of her own. She came up with a number of teenage alter-egos: an emo girl called Becky, a nerdy girl called Amanda, and another character called Baby.

“I got torrents of abuse. People were telling me to fuck off and die,” Orwin explained. The emo girl Becky was targeted particularly aggressively. Three weeks after the video was posted, there was a spike of interest and Orwin received 200 comment notifications. One of the comments said: “Your so fucking dumb, yes you are ugly, just because you made this shitty video I think your the ugliest cunt out, take off that eye shadow no girl ever can pull off that much especially not you, and if you really think being ugly is such a surprise to you, life is going to fucking suck for you.”

“I woke up and read all of this abuse and I really felt it in my stomach. I had to remind myself that it’s not me, it’s the character.”

Orwin makes a point of noting that the characters in her videos are 15 years old (she’s actually 26), but that didn’t stop her from receiving hundreds of private messages, the vast majority from men, many asking her to send more videos. One man said “I think ur pretty. Don’t let anyone tell u any different OK. Can u do a dance vid so I can see more of sexy u?xx.”

When Orwin sat down to analyze the comments and messages she had received on her videos, she found that 70 percent of the feedback was from men, “and most of them were definitely over 18.” Most of the women who commented were under 18.

One commenter who stood out for Orwin was a user called RookhKshatriya, who wrote under Becky’s video, “You’re a 4 and without glasses you are a 5.” The commenter is actually a London-based academic who works in education and calls himself an “anti-feminist,” believing that the Anglo-American brand of feminism that emerged in the ’60s has an ulterior misandrist agenda. You can check out his blog, Anglobitch, here. “He takes himself very seriously, but he’s going on YouTube and rating 15-year-old girls,” muses Orwin.

One of the things that intrigues Orwin about these videos is that they explore the idea of anonymity as well as performance. “Part of the reason that a lot of them post the videos is yes, they want to know whether they are pretty. But they also see the trend going round and it’s just another subject to make a video on. Which is strange.”

Orwin’s show, Pretty Ugly, follows the trail of her research, looking at the relationships Becky, Amanda, and Baby have with their commenters and the people who messaged them. “Conversations with trolls, friendships… it also covers all the creepy side of it,” she explains.

The show starts with Orwin asking the audience the central question: do they think she is pretty or ugly? “I need to show how irrelevant that question should be. Would you go up to a person on the street and ask them that? I am trying to make this anonymous world into a live face-to-face world.”

Orwin is particularly struck by the way digital media is changing the way we perceive ourselves and each other. “And what does it mean for feminism today?”

When she compares her own teenage years to those being lived out today, she says she remembers getting to a certain age when people were starting to talk about the pressures of the media, which was selling unattainable images of perfection and beauty. “But it was about the media. Now if you look on Tumblr, YouTube, Twitter, it’s not the media, but the teenage girls themselves perpetuating this myth. They are resharing these images, reblogging. There’s always going to be peer pressure but I think [social media] makes these issues worse.”

Read the entire article here.

Big Bad Data; Growing Discrimination

You may be an anonymous data point online, but it does not follow that you won’t still be a victim of personal discrimination. As the technology to gather and track your every move online steadily improves, so do the opportunities to misuse that information. Many of us are already unwitting participants in the growing internet filter bubble — a phenomenon that amplifies our personal tastes, opinions and shopping habits by pre-screening and delivering only more of the same, based on our online footprints. Many argue that this is benign and even beneficial — after all, isn’t it wonderful when Google’s ad network pops up product recommendations for you on “random” websites based on your previous searches, and isn’t it that much more effective when news organizations deliver only stories based on your previous browsing history, interests, affiliations or demographic profile?

Not so. We are in ever-increasing danger of allowing others to control what we see and hear online. So kiss discovery and serendipity goodbye. More troubling still, beyond delivering personalized experiences online, corporations that gather more and more data from and about you can decide whether you are of value. Even if your data is aggregated and anonymized, the results can still help a business decide whether to target you, whether or not you are ever explicitly identified by name.

So, perhaps your previous online shopping history divulged a proclivity for certain medications; well, kiss goodbye to that pre-existing health condition waiver. Or, perhaps the online groups that you belong to are rather left-of-center or way out in left-field; well, say hello to a smaller annual bonus from your conservative employer. Perhaps, the news or social groups that you subscribe to don’t align very well with the values of your landlord or prospective employer. Or, perhaps, Amazon will not allow you to shop online any more because the company knows your annual take-home pay and that you are a potential credit risk. You get the idea.

Without adequate safeguards and controls, those who gather the data about you will be in the driver’s seat. Put simply, it should be the other way around — you should own the data that describes who you are and what you do, and you should determine who gets to see it and how it’s used. Welcome to the age of Big (Bad) Data and the new age of data-driven discrimination.

From Technology Review:

Data analytics are being used to implement a subtle form of discrimination, while anonymous data sets can be mined to reveal health data and other private information, a Microsoft researcher warned this morning at MIT Technology Review’s EmTech conference.

Kate Crawford, principal researcher at Microsoft Research, argued that these problems could be addressed with new legal approaches to the use of personal data.

In a new paper, she and a colleague propose a system of “due process” that would give people more legal rights to understand how data analytics are used in determinations made against them, such as denial of health insurance or a job. “It’s the very start of a conversation about how to do this better,” Crawford, who is also a visiting professor at the MIT Center for Civic Media, said in an interview before the event. “People think ‘big data’ avoids the problem of discrimination, because you are dealing with big data sets, but in fact big data is being used for more and more precise forms of discrimination—a form of data redlining.”

During her talk this morning, Crawford added that with big data, “you will never know what those discriminations are, and I think that’s where the concern begins.”

Health data is particularly vulnerable, the researcher says. Search terms for disease symptoms, online purchases of medical supplies, and even the RFID tags on drug packaging can provide websites and retailers with information about a person’s health.

As Crawford and Jason Schultz, a professor at New York University Law School, wrote in their paper: “When these data sets are cross-referenced with traditional health information, as big data is designed to do, it is possible to generate a detailed picture about a person’s health, including information a person may never have disclosed to a health provider.”

And a recent Cambridge University study, which Crawford alluded to during her talk, found that “highly sensitive personal attributes”— including sexual orientation, personality traits, use of addictive substances, and even parental separation—are highly predictable by analyzing what people click on to indicate they “like” on Facebook. The study analyzed the “likes” of 58,000 Facebook users.
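
For readers curious how ostensibly innocuous “likes” can give away sensitive traits, here is a minimal sketch of the general approach — compressing a large binary user-by-page matrix into a few latent dimensions and fitting a simple classifier on top. This is an illustration of the technique, not the Cambridge team’s actual pipeline; the data, page counts and like rates below are invented purely for demonstration.

```python
# Minimal sketch: predicting a binary attribute from Facebook-style "likes".
# Synthetic, hypothetical data -- not the Cambridge study's data or code.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 2000

# Hypothetical sensitive attribute held by roughly 30% of users.
attribute = (rng.random(n_users) < 0.3).astype(int)

# Binary likes matrix: a baseline 2% like rate everywhere, plus a boost on
# 20 "signal" pages that users with the attribute are more likely to like.
base = rng.random((n_users, n_pages)) < 0.02
signal_pages = rng.choice(n_pages, size=20, replace=False)
boost = np.zeros((n_users, n_pages), dtype=bool)
boost[:, signal_pages] = rng.random((n_users, 20)) < (0.15 * attribute[:, None])
likes = (base | boost).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    likes, attribute, test_size=0.3, random_state=0)

# Compress thousands of like-columns into 50 latent components, then fit a
# plain logistic regression on those components.
svd = TruncatedSVD(n_components=50, random_state=0)
Z_train = svd.fit_transform(X_train)
Z_test = svd.transform(X_test)

clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
score = roc_auc_score(y_test, clf.predict_proba(Z_test)[:, 1])
print("AUC on held-out users:", round(score, 3))
```

Even a crude model like this can push prediction of the hidden attribute noticeably above chance from likes alone — which is the heart of the privacy concern.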

Similarly, purchasing histories, tweets, and demographic, location, and other information gathered about individual Web users, when combined with data from other sources, can result in new kinds of profiles that an employer or landlord might use to deny someone a job or an apartment.

In response to such risks, the paper’s authors propose a legal framework they call “big data due process.” Under this concept, a person who has been subject to some determination—whether denial of health insurance, rejection of a job or housing application, or an arrest—would have the right to learn how big data analytics were used.

This would entail the sorts of disclosure and cross-examination rights that are already enshrined in the legal systems of the United States and many other nations. “Before there can be greater social acceptance of big data’s role in decision-making, especially within government, it must also appear fair, and have an acceptable degree of predictability, transparency, and rationality,” the authors write.

Data analytics can also get things deeply wrong, Crawford notes. Even the formerly successful use of Google search terms to identify flu outbreaks failed last year, when actual cases fell far short of predictions. Increased flu-related media coverage and chatter about the flu in social media were mistaken for signs of people complaining they were sick, leading to the overestimates.  “This is where social media data can get complicated,” Crawford said.

Read the entire article here.

Mid-21st Century Climate

Call it what you may, but regardless of labels most climate scientists agree that our future weather is likely to be more extreme: more prolonged and more violent.

From ars technica:

If there was one overarching point that the fifth Intergovernmental Panel on Climate Change report took pains to stress, it was that the degree of change in the global climate system since the mid-1950s is unusual in scope. Depending on what exactly you measure, the planet hasn’t seen conditions like these for decades to millennia. But that conclusion leaves us with a question: when exactly can we expect the climate to look radically new, with features that have no historical precedent?

The answer, according to a modeling study published in this week’s issue of Nature, is “very soon”—as soon as 2047 under a “business-as-usual” emission scenario and only 22 years later under a reduced emissions scenario. Tropical countries will likely be the first to enter this new age of climatic erraticness and could experience extreme temperatures monthly after 2050. This, the authors argue, underscores the need for robust efforts targeted not only at protecting those vulnerable countries but also the rich biodiversity that they harbor.

Developing an index, one model at a time

Before attempting to peer into the future, the authors, led by the University of Hawaii’s Camilo Mora, first had to ensure that they could accurately replicate the recent past. To do so, they pooled together the predictive capabilities of 39 different models, using near-surface air temperature as their indicator of choice.

For each model, they established the bounds of natural climate variability as the minimum and maximum values attained between 1860 and 2005. Simultaneously crunching the outputs from all of these models proved to be the right decision, as Mora and his colleagues consistently found that a multi-model average best fit the real data.

Next, they turned to two widely used emission scenarios, or Representative Concentration Pathways (RCP) as they’re known in modeling vernacular, to predict the arrival of different climates over a period extending from 2006 to 2100. The first scenario, RCP45, assumes a concerted mitigation initiative and anticipates CO2 concentrations of up to 538 ppm by 2100 (up from the current 393 ppm). The second, RCP85, is the trusty “business-as-usual” scenario that anticipates concentrations of up to 936 ppm by the same year.

Timing the new normals

While testing the sensitivity of their index, Mora and his colleagues concluded that the length of the reference period—the number of years between 1860 and 2005 used as a basis for establishing the limits of historical climate variability—had no effect on the ultimate outcome. A longer period would include more instances of temperature extremes, both low and high, so you would expect that it would yield a broader range of limits. That would mean that any projections of extreme future events might not seem so extreme by comparison.

In practice, it didn’t matter whether the authors used 20 years or 140 years as the length of their reference period. What did matter, they found, was the number of consecutive years where the climate was out of historical bounds. This makes intuitive sense: if you consider fewer consecutive years, the departure from “normal” will come sooner.

Rather than pick one arbitrary number of consecutive years versus another, the authors simply used all of the possible values from each of the 39 models. That accounts for the relatively large standard deviations in the estimated starting dates of exceptional climates—18 years for the RCP45 scenario and 14 years for the RCP85 scenario. That means that the first clear climate shift could occur as early as 2033 or as late as 2087.
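
To make the idea of a “climate departure” concrete, here is a simplified sketch of the calculation described above: each model’s historical run (1860–2005) sets the bounds of natural variability, and the departure year is the first year after which every projected year stays outside those bounds. The 39 synthetic temperature series, the warming trend and the noise level below are invented stand-ins for real model output — this is not the study’s data or code.

```python
# Simplified sketch of a "climate departure" calculation on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
hist_years = np.arange(1860, 2006)   # reference period for natural variability
proj_years = np.arange(2006, 2101)   # projection window

def departure_year(hist, proj, years):
    """First projected year from which every later value exceeds the historical max."""
    upper = hist.max()
    exceed = proj > upper
    for i in range(len(proj)):
        if exceed[i:].all():
            return years[i]
    return None  # never departs within the projection window

departures = []
for _ in range(39):                  # one synthetic series per "model"
    hist = 14.0 + rng.normal(0, 0.3, hist_years.size)     # flat climate plus noise
    trend = 0.035 * (proj_years - 2006)                    # ~3.3 C of warming by 2100
    proj = 14.0 + trend + rng.normal(0, 0.3, proj_years.size)
    year = departure_year(hist, proj, proj_years)
    if year is not None:
        departures.append(year)

print("multi-model mean departure year:", int(round(np.mean(departures))))
print("spread across models (std dev, years):", round(float(np.std(departures)), 1))
```

Averaging the per-model departure years and reporting their standard deviation mirrors, in miniature, how a multi-model estimate and its uncertainty band are built up.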

Though temperature served as the main proxy for climate in their study, the authors also analyzed four other variables for the atmosphere and two for the ocean. These included evaporation, transpiration, sensible heat flux (the conductive transfer of heat from the planet’s surface to the atmosphere) and precipitation, as well as sea surface temperature and surface pH in the ocean.

Replacing temperature with, or considering it alongside, any of the other four variables for atmosphere did not change the timing of climate departures. This is because temperature is the most sensitive variable and therefore also the earliest to exceed the normal bounds of historical variability.

When examining the ocean through the prism of sea surface temperature, the researchers determined that it would reach its tipping point by 2051 or 2072 under the RCP85 and RCP45 scenarios, respectively. However, when they considered both sea surface temperature and surface pH together, the estimated tipping point was moved all the way up to this decade.

Seawater pH has an extremely narrow range of historical variability, and it moved out of this range 5 years ago, which caused the year of the climate departure to jump forward several decades. This may be an extreme case, but it serves as a stark reminder that the ocean is already on the edge of uncharted territory.

Read the entire article here.

Image courtesy of Salon.

Goals and Passion Are For Losers

Forget career advice from your boss or the business suit sitting in airline seat 7A. Forget start-up mentors and the advisory board; forget angel investors and analysts with their binders of business suggestions. Forget using your family or local business leaders as a sounding board for your existing (or next) enterprise. Forget the biography of the corporate titan or the entrepreneurial whiz with the obligatory garage.

The best career advice comes from one source, Scott Adams: it’s all about failure.

From WSJ:

If you’re already as successful as you want to be, both personally and professionally, congratulations! Here’s the not-so-good news: All you are likely to get from this article is a semientertaining tale about a guy who failed his way to success. But you might also notice some familiar patterns in my story that will give you confirmation (or confirmation bias) that your own success wasn’t entirely luck.

If you’re just starting your journey toward success—however you define it—or you’re wondering what you’ve been doing wrong until now, you might find some novel ideas here. Maybe the combination of what you know plus what I think I know will be enough to keep you out of the wood chipper.

Let me start with some tips on what not to do. Beware of advice about successful people and their methods. For starters, no two situations are alike. Your dreams of creating a dry-cleaning empire won’t be helped by knowing that Thomas Edison liked to take naps. Secondly, biographers never have access to the internal thoughts of successful people. If a biographer says Henry Ford invented the assembly line to impress women, that’s probably a guess.

But the most dangerous case of all is when successful people directly give advice. For example, you often hear them say that you should “follow your passion.” That sounds perfectly reasonable the first time you hear it. Passion will presumably give you high energy, high resistance to rejection and high determination. Passionate people are more persuasive, too. Those are all good things, right?

Here’s the counterargument: When I was a commercial loan officer for a large bank, my boss taught us that you should never make a loan to someone who is following his passion. For example, you don’t want to give money to a sports enthusiast who is starting a sports store to pursue his passion for all things sporty. That guy is a bad bet, passion and all. He’s in business for the wrong reason.

My boss, who had been a commercial lender for over 30 years, said that the best loan customer is someone who has no passion whatsoever, just a desire to work hard at something that looks good on a spreadsheet. Maybe the loan customer wants to start a dry-cleaning store or invest in a fast-food franchise—boring stuff. That’s the person you bet on. You want the grinder, not the guy who loves his job.

For most people, it’s easy to be passionate about things that are working out, and that distorts our impression of the importance of passion. I’ve been involved in several dozen business ventures over the course of my life, and each one made me excited at the start. You might even call it passion.

The ones that didn’t work out—and that would be most of them—slowly drained my passion as they failed. The few that worked became more exciting as they succeeded. For example, when I invested in a restaurant with an operating partner, my passion was sky high. And on day one, when there was a line of customers down the block, I was even more passionate. In later years, as the business got pummeled, my passion evolved into frustration and annoyance.

On the other hand, Dilbert started out as just one of many get-rich schemes I was willing to try. When it started to look as if it might be a success, my passion for cartooning increased because I realized it could be my golden ticket. In hindsight, it looks as if the projects that I was most passionate about were also the ones that worked. But objectively, my passion level moved with my success. Success caused passion more than passion caused success.

So forget about passion. And while you’re at it, forget about goals, too.

Just after college, I took my first airplane trip, destination California, in search of a job. I was seated next to a businessman who was probably in his early 60s. I suppose I looked like an odd duck with my serious demeanor, bad haircut and cheap suit, clearly out of my element. I asked what he did for a living, and he told me he was the CEO of a company that made screws. He offered me some career advice. He said that every time he got a new job, he immediately started looking for a better one. For him, job seeking was not something one did when necessary. It was a continuing process.

This makes perfect sense if you do the math. Chances are that the best job for you won’t become available at precisely the time you declare yourself ready. Your best bet, he explained, was to always be looking for a better deal. The better deal has its own schedule. I believe the way he explained it is that your job is not your job; your job is to find a better job.
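
The math he was gesturing at can be illustrated with a rough back-of-the-envelope simulation: if clearly better openings appear at random on their own schedule, someone who is always looking will catch far more of them than someone who hunts only in occasional bursts. The arrival rate and search windows below are invented purely for illustration.

```python
# Toy simulation of "always be looking" versus occasional job hunts.
# All numbers (arrival rate, window lengths) are made up for illustration.
import random

random.seed(1)
YEARS = 10
WEEKS = YEARS * 52
P_GREAT_OPENING = 0.01       # chance a clearly better job appears in any given week

def openings_caught(searching_weeks):
    """Count openings that appear during weeks when you happen to be looking."""
    caught = 0
    for week in range(WEEKS):
        if random.random() < P_GREAT_OPENING and week in searching_weeks:
            caught += 1
    return caught

always = set(range(WEEKS))                         # continuous, low-level search
bursts = set()                                     # an 8-week hunt every two years
for start in range(0, WEEKS, 104):
    bursts.update(range(start, start + 8))

trials = 2000
print("always looking:   ", sum(openings_caught(always) for _ in range(trials)) / trials)
print("occasional bursts:", sum(openings_caught(bursts) for _ in range(trials)) / trials)
```

Over a decade the continuous searcher sees every opening that comes along, while the burst searcher sees only the small fraction that happens to land inside a hunt — which is the CEO’s point about the better deal having its own schedule.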

This was my first exposure to the idea that one should have a system instead of a goal. The system was to continually look for better options.

Throughout my career I’ve had my antennae up, looking for examples of people who use systems as opposed to goals. In most cases, as far as I can tell, the people who use systems do better. The systems-driven people have found a way to look at the familiar in new and more useful ways.

To put it bluntly, goals are for losers. That’s literally true most of the time. For example, if your goal is to lose 10 pounds, you will spend every moment until you reach the goal—if you reach it at all—feeling as if you were short of your goal. In other words, goal-oriented people exist in a state of nearly continuous failure that they hope will be temporary.

If you achieve your goal, you celebrate and feel terrific, but only until you realize that you just lost the thing that gave you purpose and direction. Your options are to feel empty and useless, perhaps enjoying the spoils of your success until they bore you, or to set new goals and re-enter the cycle of permanent presuccess failure.

I have a friend who is a gifted salesman. He could have sold anything, from houses to toasters. The field he chose (which I won’t reveal because he wouldn’t appreciate the sudden flood of competition) allows him to sell a service that almost always auto-renews. In other words, he can sell his service once and enjoy ongoing commissions until the customer dies or goes out of business. His biggest problem in life is that he keeps trading his boat for a larger one, and that’s a lot of work.

Observers call him lucky. What I see is a man who accurately identified his skill set and chose a system that vastly increased his odds of getting “lucky.” In fact, his system is so solid that it could withstand quite a bit of bad luck without buckling. How much passion does this fellow have for his chosen field? Answer: zero. What he has is a spectacular system, and that beats passion every time.

As for my own system, when I graduated from college, I outlined my entrepreneurial plan. The idea was to create something that had value and—this next part is the key—I wanted the product to be something that was easy to reproduce in unlimited quantities. I didn’t want to sell my time, at least not directly, because that model has an upward limit. And I didn’t want to build my own automobile factory, for example, because cars are not easy to reproduce. I wanted to create, invent, write, or otherwise concoct something widely desired that would be easy to reproduce.

My system of creating something the public wants and reproducing it in large quantities nearly guaranteed a string of failures. By design, all of my efforts were long shots. Had I been goal-oriented instead of system-oriented, I imagine I would have given up after the first several failures. It would have felt like banging my head against a brick wall.

But being systems-oriented, I felt myself growing more capable every day, no matter the fate of the project that I happened to be working on. And every day during those years I woke up with the same thought, literally, as I rubbed the sleep from my eyes and slapped the alarm clock off.

Today’s the day.

If you drill down on any success story, you always discover that luck was a huge part of it. You can’t control luck, but you can move from a game with bad odds to one with better odds. You can make it easier for luck to find you. The most useful thing you can do is stay in the game. If your current get-rich project fails, take what you learned and try something else. Keep repeating until something lucky happens. The universe has plenty of luck to go around; you just need to keep your hand raised until it’s your turn. It helps to see failure as a road and not a wall.

I’m an optimist by nature, or perhaps by upbringing—it’s hard to know where one leaves off and the other begins—but whatever the cause, I’ve long seen failure as a tool, not an outcome. I believe that viewing the world in that way can be useful for you too.

Nietzsche famously said, “What doesn’t kill us makes us stronger.” It sounds clever, but it’s a loser philosophy. I don’t want my failures to simply make me stronger, which I interpret as making me better able to survive future challenges. (To be fair to Nietzsche, he probably meant the word “stronger” to include anything that makes you more capable. I’d ask him to clarify, but ironically he ran out of things that didn’t kill him.)

Becoming stronger is obviously a good thing, but it’s only barely optimistic. I do want my failures to make me stronger, of course, but I also want to become smarter, more talented, better networked, healthier and more energized. If I find a cow turd on my front steps, I’m not satisfied knowing that I’ll be mentally prepared to find some future cow turd. I want to shovel that turd onto my garden and hope the cow returns every week so I never have to buy fertilizer again. Failure is a resource that can be managed.

Before launching Dilbert, and after, I failed at a long series of day jobs and entrepreneurial adventures. Here are just a few of the worst ones. I include them because successful people generally gloss over their most aromatic failures, and it leaves the impression that they have some magic you don’t.

When you’re done reading this list, you won’t have that delusion about me, and that’s the point. Success is entirely accessible, even if you happen to be a huge screw-up 95% of the time.

Read the entire article here.

Image courtesy of Google Search.

Great 21st Century Battles: The Overhead Bin

For flyers, finding adequate space in the overhead bin or under the seat in front, for that increasingly small carry-on item, is one of the most stressful stages of any flight. For airlines, it’s become the next competitive battleground, one that yields significant revenues, at little cost.

So, next time you fly with a dead catfish, kitchen sink or archery set, keep in mind that airlines may want you sitting comfortably, but they want your money much more.

From NYT:

A frosted cake. A 10-gallon hat. A car muffler.

People have crammed all sorts of things — including a kitchen sink — into airplane overhead compartments.

But now the battle of the bins, that preflight scrum over precious carry-on space, has turned into something else for airlines: the business of the bins.

After starting to charge fees for checking baggage, airlines are finding new ways to make money from carry-ons. Overhead compartments, it turns out, are valuable real estate — and these days, they go to the highest bidders.

Using an airline credit card? Come on down. Flying first class? Right this way. Paying an extra fee when you book? It’s your turn. Priority is increasingly given to those who pay.

“There are multiple ways you can improve upon your boarding zone,” said Andy Jacobs, the president of a candy company who travels about twice a month. “As a diamond member on Delta, I never have a problem securing space.”

Many travelers may not realize it, but a seat ticket does not automatically entitle them to overhead space. Once space runs out, passengers must check their luggage at the gate, without paying a fee — and then wait for it at the baggage claim at their destination. Airlines are capitalizing on the fact that many fliers are willing to pay for carry-on convenience.

When Mr. Jacobs is not flying Delta, he relies on credit cards to jump to the front of the boarding line. “Those cards are worth it,” he said. “I actually carry three different airline affinity credit cards.”

He has good reason to get to the front. “As a chocolate salesperson, I need to bring my bags on the plane so the chocolate won’t melt,” he said. “When you’re flying to a major customer and you pick up your bags at baggage claim and your samples are melted, that becomes a pretty big problem.”

As Mr. Jacobs has learned, there is only so much room.

And there is little chance that the free-for-all at check-in will ease soon. Airlines rely increasingly on fees, experts say. In 2012, domestic carriers collectively earned roughly $3.5 billion in checked-bag fees, up from less than half a billion dollars five years earlier, according to the Department of Transportation. Although exact figures are not yet available for early boarding revenue, analysts say it is increasing.

“There is growth there,” said Jay Sorenson, president of the airline consulting firm IdeaWorksCompany. “Airlines will implement more of these fees.”

Brian Easley, a career flight attendant, has watched the competition for overhead space become increasingly acrimonious.

“When someone gets to their row and looks up and sees something’s there, they kind of freak out about it,” he said. “They will throw a fit and they will start screaming at whoever put their stuff in their spot. We’ve had to throw people off the plane just because they refused to walk up a few feet and stick it in another overhead bin.”

People will carry on anything and everything, Mr. Easley said. On one flight to the Dominican Republic, a passenger brought a kitchen sink wrapped in a trash bag. “Luckily, at the time we were flying this particular Airbus that just has a ton of overhead bin space.”

Another time on the same route, a passenger carried on a car muffler. “I think in that case we did put it in a closet, next to a hula hoop someone had brought,” he said.

Robert W. Mann, an airline industry consultant, said that although new planes were designed to accommodate more carry-on bags, “there’s an infinite demand for overhead bin space,” especially as airlines squeeze more people than ever onto planes.

Airlines say they are providing an option travelers want.

“It’s something our customers desire,” said an American Airlines spokesman, Matt Miller. Charlie Hobart, a United Airlines spokesman, said, “We’re always looking for ways to make travel more convenient for our customers.” United let customers pay for priority boarding before its merger with Continental and reintroduced it this year. “Customers did enjoy it when we had it” before the merger, Mr. Hobart said.

Read the entire article here.

Image courtesy of USA Today.

A Home for Art or A Home for Artists

Most art is made in a location far removed from the one in which it is displayed or purchased. These days, it is highly unlikely that a new or emerging professional artist will make and sell art in the same place. This is particularly evident in a place like New York City, where starving artists and wealthy patrons co-exist side by side.

From the New York Times:

Last week The Guardian published an essay by the singer-songwriter David Byrne, which received a fair amount of attention online, arriving under the headline “If the 1% Stifles New York’s Creative Talent, I’m Out of Here.”

What followed was considerably more nuanced than the kind of diatribe, now familiar, often delivered by artists and others who came of age in the city during the 1970s and yearn for the seductions of a vanished danger. In this view, the start of the last quarter of the 20th century left New York populated entirely by addicts and hustlers, painters and drug pushers, and the city was a better, more enlivening place for the anxieties it bred.

“I don’t romanticize the bad old days,” Mr. Byrne said in his piece. “I have no illusions that there was a connection between that city on its knees and a flourishing of creativity.” What he laments instead is that our cultural capital now languishes completely in the hands of a brash upper class.

On one level it seems difficult to argue with him. Current market realities make it inconceivable that anyone could arrive today in New York at 23 with a knapsack and a handful of Luna bars and become David Byrne.

We also famously live in an era of diminishing support for the arts. According to a report released last month, government arts financing reached a record low in 2011 at the same time the proportion of American households giving money to the arts dwindled to 8.6 percent. But perhaps the problem is one of paradox, not exclusion, which is to say that while New York has become an increasingly inhospitable place to incubate a career as an artist, it has become an ever easier place to experience and consume the arts. The evolution of Downtown Brooklyn’s cultural district is emblematic of this new democracy. Last week saw the official opening of BRIC House, a 66,000-square-foot building with a gallery space and another space for film screenings, readings, lectures and so on, all with no admission charges.

BRIC House, which is under the direction of Leslie Greisbach Schultz and occupies an old vaudeville theater into which the city has poured $41 million, also contains a flexible performance space where it will be possible to see dance and music from emerging and established artists largely for under $20. The ticket price of plays, offered as works in progress, is $10.

The upper floors are host to something called Urban Glass, a monument to the art of glass blowing. “There are people in this city who get as excited about glass blowing as I get about Junior’s,” the Brooklyn borough president, Marty Markowitz, marveled to me.

Both BRIC, which offers classes in digital photography and video production for nothing or next to nothing, and the nearby Mark Morris Dance Center involve residents of Brooklyn public housing in free dance instruction. At Mark Morris it costs less to enroll a 3-year-old in a dance class with a teacher who is studying for a doctorate in philosophy than it does to enroll a child in Super Soccer Stars.

Further challenging claims about the end of culture in the city is that the number of public art exhibits grew under Mayor Michael R. Bloomberg’s tenure. Additionally, through his private philanthropic efforts, Mr. Bloomberg has donated more than $230 million since 2002 to arts and social service organizations across the city. Over the summer, his foundation announced an additional contribution of $15 million to a handful of cultural institutions to help them enhance visitors’ experiences through mobile technology.

At BRIC — “the epicenter of the center of the artistic universe,” Mr. Markowitz calls it — as with other Brooklyn cultural institutions, a good deal of the progress has come about with the help of a quiet philanthropic community that exists far from the world of hedge-fund vanity. A handful of wealthy residents support the borough’s institutions, their names not the kind to appear in Women’s Wear Daily.

Read the entire article here.

Six Rules to Super-Charge Your Creativity

Creative minds are, by their very nature, all different. Yet on closer examination some key elements and common routines underlie many of the great, innovative thinkers. First and foremost, of course: be an early bird.

From the Guardian:

One morning this summer, I got up at first light – I’d left the blinds open the night before – then drank a strong cup of coffee, sat near-naked by an open window for an hour, worked all morning, then had a martini with lunch. I took a long afternoon walk, and for the rest of the week experimented with never working for more than three hours at a stretch.

This was all in an effort to adopt the rituals of some great artists and thinkers: the rising-at-dawn bit came from Ernest Hemingway, who was up at around 5.30am, even if he’d been drinking the night before; the strong coffee was borrowed from Beethoven, who personally counted out the 60 beans his morning cup required. Benjamin Franklin swore by “air baths”, which was his term for sitting around naked in the morning, whatever the weather. And the midday cocktail was a favourite of VS Pritchett (among many others). I couldn’t try every trick I discovered in a new book, Daily Rituals: How Great Minds Make Time, Find Inspiration And Get To Work; oddly, my girlfriend was unwilling to play the role of Freud’s wife, who put toothpaste on his toothbrush each day to save him time. Still, I learned a lot. For example: did you know that lunchtime martinis aren’t conducive to productivity?

As a writer working from home, of course, I have an unusual degree of control over my schedule – not everyone could run such an experiment. But for anyone who thinks of their work as creative, or who pursues creative projects in their spare time, reading about the habits of the successful can be addictive. Partly, that’s because it’s comforting to learn that even Franz Kafka struggled with the demands of his day job, or that Franklin was chronically disorganised. But it’s also because of a covert thought that sounds delusionally arrogant if expressed out loud: just maybe, if I took very hot baths like Flaubert, or amphetamines like Auden, I might inch closer to their genius.

Several weeks later, I’m no longer taking “air baths”, while the lunchtime martini didn’t last more than a day (I mean, come on). But I’m still rising early and, when time allows, taking long walks. Two big insights have emerged. One is how ill-suited the nine-to-five routine is to most desk-based jobs involving mental focus; it turns out I get far more done when I start earlier, end a little later, and don’t even pretend to do brain work for several hours in the middle. The other is the importance of momentum. When I get straight down to something really important early in the morning, before checking email, before interruptions from others, it beneficially alters the feel of the whole day: once interruptions do arise, they’re never quite so problematic. Another technique I couldn’t manage without comes from the writer and consultant Tony Schwartz: use a timer to work in 90-minute “sprints”, interspersed with significant breaks. (Thanks to this, I’m far better than I used to be at separating work from faffing around, rather than spending half the day flailing around in a mixture of the two.)

The one true lesson of the book, says its author, Mason Currey, is that “there’s no one way to get things done”. For every Joyce Carol Oates, industriously plugging away from 8am to 1pm and again from 4pm to 7pm, or Anthony Trollope, timing himself typing 250 words per quarter-hour, there’s a Sylvia Plath, unable to stick to a schedule. (Or a Friedrich Schiller, who could only write in the presence of the smell of rotting apples.) Still, some patterns do emerge. Here, then, are six lessons from history’s most creative minds.

1. Be a morning person

It’s not that there aren’t successful night owls: Marcel Proust, for one, rose sometime between 3pm and 6pm, immediately smoked opium powders to relieve his asthma, then rang for his coffee and croissant. But very early risers form a clear majority, including everyone from Mozart to Georgia O’Keeffe to Frank Lloyd Wright. (The 18th-century theologian Jonathan Edwards, Currey tells us, went so far as to argue that Jesus had endorsed early rising “by his rising from the grave very early”.) For some, waking at 5am or 6am is a necessity, the only way to combine their writing or painting with the demands of a job, raising children, or both. For others, it’s a way to avoid interruption: at that hour, as Hemingway wrote, “There is no one to disturb you and it is cool or cold and you come to your work and warm as you write.” There’s another, surprising argument in favour of rising early, which might persuade sceptics: that early-morning drowsiness might actually be helpful. At one point in his career, the novelist Nicholson Baker took to getting up at 4.30am, and he liked what it did to his brain: “The mind is newly cleansed, but it’s also befuddled… I found that I wrote differently then.”

Psychologists categorise people by what they call, rather charmingly, “morningness” and “eveningness”, but it’s not clear that either is objectively superior. There is evidence that morning people are happier and more conscientious, but also that night owls might be more intelligent. If you’re determined to join the ranks of the early risers, the crucial trick is to start getting up at the same time daily, but to go to bed only when you’re truly tired. You might sacrifice a day or two to exhaustion, but you’ll adjust to your new schedule more rapidly.

2. Don’t give up the day job

“Time is short, my strength is limited, the office is a horror, the apartment is noisy,” Franz Kafka complained to his fiancee, “and if a pleasant, straightforward life is not possible, then one must try to wriggle through by subtle manoeuvres.” He crammed in his writing between 10.30pm and the small hours of the morning. But in truth, a “pleasant, straightforward life” might not have been preferable, artistically speaking: Kafka, who worked in an insurance office, was one of many artists who have thrived on fitting creative activities around the edges of a busy life. William Faulkner wrote As I Lay Dying in the afternoons, before commencing his night shift at a power plant; TS Eliot’s day job at Lloyds bank gave him crucial financial security; William Carlos Williams, a paediatrician, scribbled poetry on the backs of his prescription pads. Limited time focuses the mind, and the self-discipline required to show up for a job seeps back into the processes of art. “I find that having a job is one of the best things in the world that could happen to me,” wrote Wallace Stevens, an insurance executive and poet. “It introduces discipline and regularity into one’s life.” Indeed, one obvious explanation for the alcoholism that pervades the lives of full-time authors is that it’s impossible to focus on writing for more than a few hours a day, and, well, you’ve got to make those other hours pass somehow.

3. Take lots of walks

There’s no shortage of evidence to suggest that walking – especially walking in natural settings, or just lingering amid greenery, even if you don’t actually walk much – is associated with increased productivity and proficiency at creative tasks. But Currey was surprised, in researching his book, by the sheer ubiquity of walking, especially in the daily routines of composers, including Beethoven, Mahler, Erik Satie and Tchaikovsky, “who believed he had to take a walk of exactly two hours a day and that if he returned even a few minutes early, great misfortunes would befall him”. It’s long been observed that doing almost anything other than sitting at a desk can be the best route to novel insights. These days, there’s surely an additional factor at play: when you’re on a walk, you’re physically removed from many of the sources of distraction – televisions, computer screens – that might otherwise interfere with deep thought.

Read the entire article here.

Image: Frank Lloyd Wright, architect, c. March 1, 1926. Courtesy of U.S. Library of Congress.

The Golden Age of Travel

Travel to far-flung destinations was once a luxurious — some would say elitist — affair. Now that much of the process, and to some extent the end result, has been commoditized, we are left to dream of an age that once seemed glamorous and out of reach for most. And what better way to market these dreams than through colorful, engaging travel posters? A collection of wonderful marketing posters from that “golden age” is up for auction.

Many of these beautiful works of art were published as commercial pieces, so the artists often worked under the cover of their advertising or design agencies. While a few, such as Willy Burger and Maurice Logan, went on to be recognized by the art establishment, most worked in anonymity. Yet the travel poster art they produced, beginning at the turn of the previous century, formed a key part of the Art Nouveau and later Art Deco movements. It continues to influence art and design, and it still makes us dream of the romance of travel and exotic destinations.

See a sample of the collection here.

Image: Roger Broders, Sports D’Hiver, c. 1929. Courtesy: Swann Auction Galleries

Me, Myself and I

It’s common sense: the frequency with which you use the personal pronoun “I” says a lot about you. Now there’s some great research that backs this up, though not in the way you might have expected.

From WSJ:

You probably don’t think about how often you say the word “I.”

You should. Researchers say that your usage of the pronoun says more about you than you may realize.

Surprising new research from the University of Texas suggests that people who often say “I” are less powerful and less sure of themselves than those who limit their use of the word. Frequent “I” users subconsciously believe they are subordinate to the person to whom they are talking.

Pronouns, in general, tell us a lot about what people are paying attention to, says James W. Pennebaker, chair of the psychology department at the University of Texas at Austin and an author on the study. Pronouns signal where someone’s internal focus is pointing, says Dr. Pennebaker, who has pioneered this line of research. Often, people using “I” are being self-reflective. But they may also be self-conscious or insecure, in physical or emotional pain, or simply trying to please.

Dr. Pennebaker and colleagues conducted five studies of the way relative rank is revealed by the use of pronouns. The research was published last month in the Journal of Language and Social Psychology. In each experiment, people deemed to have higher status used “I” less.

The findings go against the common belief that people who say “I” a lot are full of themselves, maybe even narcissists.

“I” is more powerful than you may realize. It drives perceptions in a conversation so much so that marriage therapists have long held that people should use “I” instead of “you” during a confrontation with a partner or when discussing something emotional. (“I feel unheard.” Not: “You never listen.”) The word “I” is considered less accusatory.

“There is a misconception that people who are confident, have power, have high-status tend to use ‘I’ more than people who are low status,” says Dr. Pennebaker, author of “The Secret Life of Pronouns.” “That is completely wrong. The high-status person is looking out at the world and the low-status person is looking at himself.”

So, how often should you use “I”? More—to sound humble (and not critical when speaking to your spouse)? Or less—to come across as more assured and authoritative?

The answer is “mostly more,” says Dr. Pennebaker. (Although he does say you should try and say it at the same rate as your spouse or partner, to keep the power balance in the relationship.)

In the first language-analysis study Dr. Pennebaker led, business-school students were divided into 41 four-person, mixed-sex groups and asked to work as a team to improve customer service for a fictitious company. One person in each group was randomly assigned to be the leader. The result: The leaders used “I” in 4.5% of their words. Non-leaders used the word 5.6%. (The leaders also used “we” more than followers did.)

In the second study, 112 psychology students were assigned to same-sex groups of two. The pairs worked to solve a series of complex problems. All interaction took place online. No one was assigned to a leadership role, but participants were asked at the end of the experiment who they thought had power and status. Researchers found that the higher the person’s perceived power, the less he or she used “I.”

In study three, 50 pairs of people chatted informally face-to-face, asking questions to get to know one another, as if at a cocktail party. When asked which person had more status or power, they tended to agree—and that person had used “I” less.

Study four looked at emails. Nine people turned over their incoming and outgoing emails with about 15 other people. They rated how much status they had in relation to each correspondent. In each exchange, the person with the higher status used “I” less.

The fifth study was the most unusual. Researchers looked at email communication that the U.S. government had collected (and translated) from the Iraqi military, made public for a period of time as the Iraqi Perspectives Project. They randomly selected 40 correspondences. In each case, the person with higher military rank used “I” less.
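
The percentage figures in these studies are simple word-share statistics: occurrences of “I” divided by the total words spoken or written. As a rough illustration only (nothing like the LIWC software Dr. Pennebaker’s group actually uses), a few lines of Python can compute such an “I”-rate; the sample sentences below are invented.

```python
import re

# Toy illustration of the statistic the studies report: the share of a
# speaker's words that are the first-person pronoun "I" (counting common
# contractions such as "I'd" or "I'm"). This is not Pennebaker's LIWC tool,
# just a minimal sketch of the idea.
def i_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == "i" or w.startswith("i'"))
    return 100.0 * hits / len(words)

# Invented examples: a "leader-like" message and a "follower-like" reply.
leader = "We should split the task. Review the draft and send your edits by Friday."
follower = "I think I can do that. I'll send it once I'm sure I understood the draft."

print(f"leader   I-rate: {i_rate(leader):.1f}%")    # lower, matching the pattern above
print(f"follower I-rate: {i_rate(follower):.1f}%")  # higher
```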

People curb their use of “I” subconsciously, Dr. Pennebaker says. “If I am the high-status person, I am thinking of what you need to do. If I am the low-status person, I am more humble and am thinking, ‘I should be doing this.’ “

Dr. Pennebaker has found heavy “I” users across many groups: women (who are typically more reflective than men), people who are more at ease with personal topics, younger people, and caring as well as anxious and depressed people. (Surprisingly, he says, narcissists do not use “I” more than others, according to a meta-analysis of a large number of studies.)

And who avoids using “I,” other than the high-powered? People who are hiding the truth. Avoiding the first-person pronoun is distancing.

Read the entire article here.

Bots That Build Themselves

[tube]6aZbJS6LZbs[/tube]

Wouldn’t it be a glorious breakthrough if your next furniture purchase could assemble itself? No more sifting through stepwise Scandinavian manuals describing your next “Fjell” or “Bestå” pieces from IKEA; no more looking for a magnifying glass to decipher strange text from Asia; no more searches for an Allen wrench that fits those odd hexagonal bolts. Now, to set your expectations, recent innovations at the macro-mechanical level are not yet quite in the same league as planet-sized self-assembling spaceships (from the mind of Iain Banks). But researchers and engineers are making progress.

From ars technica:

At a certain level of complexity and obligation, sets of blocks can easily go from fun to tiresome to assemble. Legos? K’Nex? Great. Ikea furniture? Bridges? Construction scaffolding? Not so much. To make things easier, three scientists at MIT recently exhibited a system of self-assembling cubic robots that could in theory automate the process of putting complex systems together.

The blocks, dubbed M-Blocks, use a combination of magnets and an internal flywheel to move around and stick together. The flywheels, running off an internal battery, generate angular momentum that allows the blocks to flick themselves at each other, spinning them through the air. Magnets on the surfaces of the blocks allow them to click into position.

Each flywheel inside the blocks can spin at up to 20,000 rotations per minute. Motion happens when the flywheel spins and then is suddenly braked by a servo motor that tightens a belt encircling the flywheel, imparting its angular momentum to the body of the blocks. That momentum sends the block flying at a certain velocity toward its fellow blocks (if there is a lot of it) or else rolling across the ground (if there’s less of it). Watching a video of the blocks self-assembling, the effect is similar to watching Sid’s toys rally in Toy Story—a little off-putting to see so many parts moving into a whole at once, unpredictably moving together like balletic dying fish.
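
As a back-of-the-envelope sanity check of that braking trick, the sketch below conserves the angular momentum of a suddenly stopped flywheel into a cube pivoting about one edge and compares the resulting rotational energy with the energy needed to tip the cube over. Only the 20,000 rpm figure comes from the article; the flywheel and cube masses and dimensions are assumed, and slip and friction are ignored.

```python
import math

# Back-of-the-envelope check of the flywheel-braking trick described above:
# when the flywheel is braked, its angular momentum is dumped into the cube,
# which then pivots about one edge. Only the 20,000 rpm figure comes from the
# article; every other number is an assumption, and slip/friction are ignored.
rpm = 20_000                                   # flywheel speed (from the article)
omega_fly = rpm * 2 * math.pi / 60             # rad/s
I_fly = 0.030 * 0.015 ** 2                     # assumed ring-like flywheel: 30 g at 15 mm radius
L = I_fly * omega_fly                          # angular momentum transferred on braking

m_cube, a = 0.14, 0.05                         # assumed cube mass (kg) and edge length (m)
I_edge = (2 / 3) * m_cube * a ** 2             # uniform cube about one edge (parallel-axis theorem)

ke_after_brake = L ** 2 / (2 * I_edge)         # rotational energy if all of L ends up as edge rotation
tip_energy = m_cube * 9.81 * a * (1 / math.sqrt(2) - 0.5)  # lift the centre of mass over the edge

print(f"energy imparted : {ke_after_brake * 1000:.0f} mJ")
print(f"energy to tip   : {tip_energy * 1000:.0f} mJ")
print("enough to roll, with plenty to spare" if ke_after_brake > tip_energy else "not enough to roll")
```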

Each of the blocks is controlled by a 32-bit ARM microprocessor and three 3.7 volt batteries that afford each one between 20 and 100 moves before the battery life is depleted. Rolling is the least complicated motion, though the blocks can also use their flywheels to turn corners, climb over each other, or even complete a leap from ground level to three blocks high, sticking the landing on top of a column 51 percent of the time.

The blocks use 6-axis inertial measurement units, like those found on planes, ships, or spacecraft, to figure out how they are oriented in space. Each cube has an IR LED and a photodiode that the cubes use to communicate with each other.

The authors note that the cubes’ motion is not very precise yet; one cube is considered to have moved successfully if it hits its goal position within three tries. The researchers found the RPMs needed to generate momentum for different movements through trial and error.

If the individual cube movements weren’t enough, groups of the cubes can also move together in either a cluster or as a row of cubes rolling in lockstep. A set of four cubes arranged in a square attempting to roll together in a block approaches the limits of the cubes’ hardware, the authors write. The cubes can even work together to get around an obstacle, rolling over each other and stacking together World War Z-zombie style until the bump in the road has been crossed.

Read the entire article here.

Video: M-Blocks. Courtesy of ars technica.

Personalized Care Courtesy of Big Data

The era of truly personalized medicine and treatment plans may still be a fair way off, but thanks to big data initiatives, predictive and preventive health is making significant progress. This bodes well for over-stretched healthcare systems, medical professionals, and those who need care and/or pay for it.

That said, it is useful to keep in mind that similar data in other domains, such as shopping, travel, and media, has been delivering personalized content and services for quite some time. Healthcare information technology certainly lags where it should be leading, and no single explanation for that gap is likely to win agreement. However, it is encouraging to see the healthcare and medical information industries catching up.

From Technology Review:

On the ground floor of the Mount Sinai Medical Center’s new behemoth of a research and hospital building in Manhattan, rows of empty black metal racks sit waiting for computer processors and hard disk drives. They’ll house the center’s new computing cluster, adding to an existing $3 million supercomputer that hums in the basement of a nearby building.

The person leading the design of the new computer is Jeff Hammerbacher, a 30-year-old known for being Facebook’s first data scientist. Now Hammerbacher is applying the same data-crunching techniques used to target online advertisements, but this time for a powerful engine that will suck in medical information and spit out predictions that could cut the cost of health care.

With $3 trillion spent annually on health care in the U.S., it could easily be the biggest job for “big data” yet. “We’re going out on a limb—we’re saying this can deliver value to the hospital,” says Hammerbacher.

Mount Sinai has 1,406 beds plus a medical school and treats half a million patients per year. Increasingly, it’s run like an information business: it’s assembled a biobank with 26,735 patient DNA and plasma samples, it finished installing a $120 million electronic medical records system this year, and it has been spending heavily to recruit computing experts like Hammerbacher.

It’s all part of a “monstrously large bet that [data] is going to matter,” says Eric Schadt, the computational biologist who runs Mount Sinai’s Icahn Institute for Genomics and Multiscale Biology, where Hammerbacher is based, and who was himself recruited from the gene sequencing company Pacific Biosciences two years ago.

Mount Sinai hopes data will let it succeed in a health-care system that’s shifting dramatically. Perversely, because hospitals bill by the procedure, they tend to earn more the sicker their patients become. But health-care reform in Washington is pushing hospitals toward a new model, called “accountable care,” in which they will instead be paid to keep people healthy.

Mount Sinai is already part of an experiment that the federal agency overseeing Medicare has organized to test these economic ideas. Last year it joined 250 U.S. doctors’ practices, clinics, and other hospitals in agreeing to track patients more closely. If the medical organizations can cut costs with better results, they’ll share in the savings. If costs go up, they can face penalties.

The new economic incentives, says Schadt, help explain the hospital’s sudden hunger for data, and its heavy spending to hire 150 people during the last year just in the institute he runs. “It’s become ‘Hey, use all your resources and data to better assess the population you are treating,’” he says.

One way Mount Sinai is doing that already is with a computer model where factors like disease, past hospital visits, even race, are used to predict which patients stand the highest chance of returning to the hospital. That model, built using hospital claims data, tells caregivers which chronically ill people need to be showered with follow-up calls and extra help. In a pilot study, the program cut readmissions by half; now the risk score is being used throughout the hospital.
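
The article gives no details of the model itself, so the following is only a hedged sketch of the general approach: a logistic regression fitted to a few invented claims-style features (chronic conditions, prior visits, age) and used to score a hypothetical patient. Mount Sinai’s actual features and data are not described here, so everything in the sketch is an assumption.

```python
# A minimal sketch of the kind of readmission-risk score described above:
# logistic regression over a few claims-style features. The features, the
# synthetic data, and the model choice are all illustrative assumptions;
# the article does not say what Mount Sinai's model actually looks like.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(2, n),        # number of chronic conditions (invented)
    rng.poisson(1, n),        # hospital visits in the past year (invented)
    rng.normal(65, 12, n),    # age (invented)
])
# Synthetic label: readmission made more likely by more conditions and visits.
logit = 0.6 * X[:, 0] + 0.8 * X[:, 1] + 0.02 * (X[:, 2] - 65) - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Score a hypothetical new patient: 4 chronic conditions, 3 recent visits, age 72.
risk = model.predict_proba([[4, 3, 72]])[0, 1]
print(f"predicted readmission risk: {risk:.0%}")
```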

Hammerbacher’s new computing facility is designed to supercharge the discovery of such insights. It will run a version of Hadoop, software that spreads data across many computers and is popular in industries, like e-commerce, that generate large amounts of quick-changing information.

Patient data are slim by comparison, and not very dynamic. Records get added to infrequently—not at all if a patient visits another hospital. That’s a limitation, Hammerbacher says. Yet he hopes big-data technology will be used to search for connections between, say, hospital infections and the DNA of microbes present in an ICU, or to track data streaming in from patients who use at-home monitors.

One person he’ll be working with is Joel Dudley, director of biomedical informatics at Mount Sinai’s medical school. Dudley has been running information gathered on diabetes patients (like blood sugar levels, height, weight, and age) through an algorithm that clusters them into a weblike network of nodes. In “hot spots” where diabetic patients appear similar, he’s then trying to find out if they share genetic attributes. That way DNA information might add to predictions about patients, too.
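
Dudley’s exact algorithm isn’t described beyond the weblike network of nodes, so the sketch below substitutes ordinary k-means clustering on the same four kinds of measurements, using invented patients; it shows the shape of the analysis (group similar patients, then inspect each group) rather than his actual method.

```python
# The paragraph above describes clustering diabetes patients on a handful of
# clinical measurements and then asking whether the resulting "hot spots"
# share genetic traits. Dudley's actual method builds a weblike network of
# nodes; as a simpler stand-in, this sketch uses ordinary k-means on invented
# patients and prints each cluster's average profile.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
patients = np.column_stack([
    rng.normal(140, 40, n),   # blood sugar, mg/dL (invented)
    rng.normal(170, 10, n),   # height, cm (invented)
    rng.normal(85, 20, n),    # weight, kg (invented)
    rng.normal(55, 15, n),    # age, years (invented)
])

X = StandardScaler().fit_transform(patients)   # put the four features on comparable scales
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for k in range(4):
    centre = patients[labels == k].mean(axis=0)
    print(f"cluster {k}: glucose {centre[0]:.0f}, height {centre[1]:.0f}, "
          f"weight {centre[2]:.0f}, age {centre[3]:.0f} (n={np.sum(labels == k)})")
```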

A goal of this work, which is still unpublished, is to replace the general guidelines doctors often use in deciding how to treat diabetics. Instead, new risk models—powered by genomics, lab tests, billing records, and demographics—could make up-to-date predictions about the individual patient a doctor is seeing, not unlike how a Web ad is tailored according to who you are and sites you’ve visited recently.

That is where the big data comes in. In the future, every patient will be represented by what Dudley calls a “large dossier of data.” And before they are treated, or even diagnosed, the goal will be to “compare that to every patient that’s ever walked in the door at Mount Sinai,” he says. “[Then] you can say quantitatively what’s the risk for this person based on all the other patients we’ve seen.”

Read the entire article here.

Painting the Night

Photographer Noel Kerns turns abandoned roadside attractions into luminous nightscapes using a combination of moonlight and artificial lighting. His book of stunning and eerie images of quintessential, nocturnal Americana — motels, truck stops, classic cars and drive-ins — is titled Nightwatch.

See more of Kerns’s images here.

Image: Chevys in Bowie, Texas. April 2009. Courtesy of Noel Kerns.

Mr. Higgs

A fascinating profile of Peter Higgs, the theoretical physicist whose name has become associated with the most significant scientific finding of recent times.

From the Guardian:

For scientists of a certain calibre, these early days of October can bring on a bad case of the jitters. The nominations are in. The reports compiled. All that remains is for the Nobel committees to cast their final votes. There are no sure bets on who will win the most prestigious prize in science this year, but there are expectations aplenty. Speak to particle physicists, for example, and one name comes up more than any other. Top of their wishlist of winners – the awards are announced next Tuesday – is the self-deprecating British octogenarian, Peter Higgs.

Higgs, 84, is no household name, but he is closer to being one than any Nobel physics laureate since Richard Feynman, the Manhattan Project scientist, who accepted the award reluctantly in 1965. But while Feynman was a showman who adored attention, Higgs is happy when eclipsed by the particle that bears his name, the elusive boson that scientists at Cern’s Large Hadron Collider triumphantly discovered last year.

“He’s modest and actually almost to a fault,” said Alan Walker, a fellow physicist at Edinburgh University, who sat next to Higgs at Cern when scientists revealed they had found the particle.

“You meet many physicists who will tell you how good they are. Peter doesn’t do that.”

Higgs, now professor emeritus at Edinburgh, made his breakthrough in 1964, the year before Feynman won his Nobel. It was an era when the tools of the trade were pencil and paper. He outlined what came to be known as the Higgs mechanism, an explanation for how elementary particles, which make up all that is around us, gained their masses in the earliest moments after the big bang. Before 1964, the question of why the simplest particles weighed anything at all was met with an embarrassed but honest shrug.

Higgs plays down his role in developing the idea, but there is no dismissing the importance of the theory itself. “He didn’t produce a great deal, but what he did produce is actually quite profound and is one of the keystones of what we now understand as the fundamental building blocks of nature,” Walker said.

Higgs was born in Newcastle in 1929. His father, a BBC sound engineer, brought the family south to Birmingham and then onwards to Bristol. There, Higgs enrolled at what is now Cotham School. He got off to a bad start. One of the first things he did was tumble into a crater left by a second world war bomb in the playground and fracture his left arm. But he was a brilliant student. He won prizes in a haul of subjects – although not, as it happens, in physics.

To the teenage Higgs, physics lacked excitement. The best teachers were off at war, and that no doubt contributed to his attitude. It changed through a chance encounter. While standing around at the back of morning assembly Higgs noticed a name that appeared more than once on the school’s honours board. Higgs wondered who PAM Dirac was and read up on the former pupil. He learned that Paul Dirac was a founding father of quantum theory, and the closest Britain had to an Einstein. Through Dirac, Higgs came to relish the arcane world of theoretical physics.

Higgs found that he was not cut out for experiments, a fact driven home by a series of sometimes dramatic mishaps, but at university he proved himself a formidable theorist. He was the first to sit a six-hour theory exam at King’s College London, and for want of a better idea, his tutors posed him a question that had recently been solved in a leading physics journal.

“Peter sailed ahead, took it seriously, thought about it, and in that six-hour time scale had managed to solve it, had written it up and presented it,” said Michael Fisher, a friend from King’s.

But getting the right answer was only the start. “In the long run it turned out, when it was actually graded, that Peter had done a better paper than the original they took from the literature.”

Higgs’s great discovery came at Edinburgh University, where he was considered an outsider for plugging away at ideas that many physicists had abandoned. But his doggedness paid off.

At the time an argument was raging in the field over a way that particles might gain their masses. The theory in question was clearly wrong, but Higgs saw why and how to fix it. He published a short note in September 1964 and swiftly wrote a more expansive follow-up paper.

To his dismay the article was rejected, ironically by an editor at Cern. Indignant at the decision, Higgs added two paragraphs to the paper and published it in a rival US journal instead. In the penultimate sentence was the first mention of what became known as the Higgs boson.

At first, there was plenty of resistance to Higgs’s theory. Before giving a talk at Harvard in 1966, a senior physicist, the late Sidney Coleman, told his class some idiot was coming to see them. “And you’re going to tear him to shreds.” Higgs stuck to his guns. Eventually he won them over.

Ken Peach, an Oxford physics professor who worked with Higgs in Edinburgh, said the determination was classic Peter: “There is an inner toughness, some steely resolve, which is not quite immediately apparent,” he said.

It was on display again when Stephen Hawking suggested the Higgs boson would never be found. Higgs hit back, saying that Hawking’s celebrity status meant he got away with pronouncements that others would not.

Higgs was at one time deeply involved in the Campaign for Nuclear Disarmament, but left when the organisation extended its protests to nuclear power. He felt CND had confused controlled and uncontrolled release of nuclear energy. He also joined Greenpeace but quit that organisation, too, when he felt its ideologies had started to trump its science.

“The one thing you get from Peter is that he is his own person,” said Walker.

Higgs was not the only scientist to come up with the theory of particle masses in 1964. François Englert and Robert Brout at the Free University in Brussels beat him into print by two weeks, but failed to mention the crucial new particle that scientists would need to prove the theory right. Three others, Gerry Guralnik, Dick Hagen and Tom Kibble, had worked out the theory too, and published a month later.

Higgs is not comfortable taking all the credit for the work, and goes to great pains to list all the others whose work he built on. But in the community he is revered. When Higgs walked into the Cern auditorium last year to hear scientists tell the world about the discovery, he was welcomed with a standing ovation. He nodded off during the talks, but was awake at the end, when the crowd erupted as the significance of the achievement became clear. At that moment, he was caught on camera reaching for a handkerchief and dabbing his eyes. “He was tearful,” said Walker. “He was really deeply moved. I think he was absolutely surprised by the atmosphere of the room.”

Read the entire article here.

Image: Ken Currie, Portrait of Peter Higgs, 2008. Courtesy of Wikipedia.