High Altitude Snake Oil


I can attest to the fact that living at high altitude, say above 6,000 ft, has its benefits. The air is usually crisper and cleaner, and the views go on forever. But, one of the drawbacks is that the air is also thinner; there’s less oxygen floating around. Some people are more susceptible to the oxygen deficit than others. The milder symptoms usually manifest themselves in the form of headache, dizziness and disorientation. In more serious cases, acute mountain sickness (AMS) can lead to severe nausea, cognitive impairment and even death. When AMS strikes, the best advice is to seek a lower elevation immediately and rest.

But, of course, wherever there is a human ailment, there will be a snake oil salesman ready to peddle a miraculous new cure; and AMS is no different. So, if you’re visiting the high country soon beware of the altitude revival massage, oxygen-rich structured water and the high altitude lotions. Caveat emptor.
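For the quantitatively minded, the “thinner air” has a simple approximation: the isothermal barometric formula. Here is a minimal sketch (it assumes a uniform atmosphere at about 15°C, so treat the numbers as rough estimates, not medical guidance):

```python
import math

def pressure_fraction(altitude_m, temp_k=288.15):
    """Approximate fraction of sea-level air pressure remaining at altitude,
    via the simplified isothermal barometric formula."""
    M = 0.0289644   # molar mass of dry air, kg/mol
    g = 9.80665     # standard gravity, m/s^2
    R = 8.31446     # universal gas constant, J/(mol*K)
    return math.exp(-M * g * altitude_m / (R * temp_k))

# Oxygen stays at ~21% of the air mix, so available oxygen scales with pressure.
for feet in (0, 6000, 10000, 14000):
    meters = feet * 0.3048
    print(f"{feet:>6} ft: ~{pressure_fraction(meters):.0%} of sea-level oxygen")
```

At 10,000 feet this works out to roughly 70 percent of the oxygen available at sea level; no massage, lotion or structured water changes that arithmetic.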

From the NYT:

When the pop band Panic! at the Disco played in Colorado at the Red Rocks amphitheater more than a mile above sea level, the frontman, Brendon Urie, joked that his “drug of choice” was oxygen.

Mr. Urie tripled his elevation to 6,400 feet when he traveled from a gig in Las Vegas to the stage outside Denver in October, so he kept an oxygen tank nearby for quick hits when he felt what he called “lightheaded” during the performance.

“It acted as a kind of security blanket,” he said in an email.

And there are a lot of security blankets being sold to Rocky Mountain visitors: oxygen therapies, oils, pills and wristbands, to name a few. They come with claims of preventing or reducing altitude sickness, promises that in most cases aren’t backed by research.

Still, many skiers are willing to spend freely on these treatments, and perhaps it’s not surprising. People can be desperate to salvage their vacations when the thin air causes headaches, nausea, fatigue, dizziness and worse. But acute mountain sickness (AMS) can be a serious condition, so it behooves travelers to understand that it can often be prevented, and that if it strikes, not all remedies are equal.

Over the last two decades, 32 people have died in Colorado from the effects of high altitude, according to data provided by Mark Salley, spokesman for the Colorado Department of Public Health and Environment. In addition, there were 1,350 trips to the state’s emergency rooms for altitude sickness last year, with 85 percent of those patients coming from out of state, he said.

Not everyone is affected by altitude, but among visitors to Colorado’s Summit County — location of the ski resorts Breckenridge, Copper Mountain, Arapahoe Basin, Loveland and Keystone — 22 percent of those staying at 7,000 to 9,000 feet experienced AMS, while at 10,000 feet, it rose to 42 percent, according to medical studies cited in an article by Dr. Peter Hackett and Robert Roach published in The New England Journal of Medicine in July 2001.

It’s impossible to predict who will be affected, though research has found that those who are obese tend to be more susceptible. Meanwhile, those over age 60 have a slightly lower risk. But whether a person is a child or adult, male or female, fit or out of shape doesn’t seem to make a significant difference, said Mr. Roach, now director of the Altitude Research Center at the University of Colorado Anschutz Medical Campus in Aurora, Colo.

Acute mountain sickness is caused by the lack of oxygen in the lower air pressure that exists at higher altitudes. It usually doesn’t affect people below 8,000 feet, although it can, according to the National Institutes of Health.

“It’s horrible,” said Laura Lane, 32, who, despite living at 5,000 feet in Fort Collins, Colo., is one of those routinely affected by higher elevations. Early on, it makes her nauseated and gives her a “crushing” feeling, while simultaneously making her feel as if her head “is being split in two,” she said.

Read the entire article here.

Image: View from South Arapaho Peak, Indian Peaks Wilderness, Colorado. Altitude 13,397 ft. Courtesy of the author.

Congressional Climate Science Twilight Zone


By its own actions (or lack thereof) and admissions, the US Congress is a place where nothing gets done, and that nothing is done by non-experts who know nothing — other than politics, of course. So, when climate science skeptics in the US Senate held their most recent “scientific” hearing, titled “Data or Dogma: Promoting Open Inquiry in the Debate over the Magnitude of Human Impact on Earth’s Climate”, you can imagine what ensued.

Without an ironic nod to the name of their own hearing, Senators proceeded to hear only from scientists who support the idea that human-caused climate change is a myth. Our Senators decried the climate change lobby for persecuting this minority, suggesting that their science should carry as much weight as that from the other camp. Yet this sham of an “open inquiry” fails to recognize that 99 percent of peer-reviewed climate science is unequivocal in pointing the finger at humans. Our so-called leaders, yet again, continue to do us all — the whole planet — a thorough disservice.

By the way, the Senate Subcommittee on Science, Technology and Competitiveness is chaired by Senator Ted Cruz. He believes that his posse of climate change deniers are latter-day Galileo Galileis — a persecuted minority. But he fails to recognize that they are in the minority because the real science shows the minority to be wrong; Galileo was in the minority, but he was backed by science, not dogmatic opinion. I think Senator Cruz would make a great president, in the time of Galileo Galilei, since that is where his understanding of “science” and the scientific method still seems to reside.

From Wired:

You are entering the world of another dimension—a dimension of sight (look at the people who don’t like scientists), of sound (people talking a lot), and of mind (well, maybe not so much). There’s the signpost for the Dirksen Senate Office Building up ahead. Your next stop: Senator Ted Cruz’s hearing on climate change earlier this week, which felt very much like something from the Twilight Zone.

Cruz himself is an intense guy in a dark suit—but that’s where the evident similarities between the senator and Twilight Zone creator Rod Serling end. Serling was an abject, romantic humanist. Cruz’s hearing was more like one of the side-shifted worlds Twilight Zone stories always seemed to happen in, at the crossroads of science and superstition, fear and knowledge.

Stranger than the choreography and theatrics (police tossed a protester, Cruz spent plenty of time denouncing a witness who either didn’t show up or wasn’t invited, and a Canadian blogger barely contained his anger during a back-and-forth with Democratic Senator Ed Markey) was the topsy-turvy line of questioning pursued by Cruz, a Texas Republican and chairman of the Senate Subcommittee on Science, Technology and Competitiveness.

He opened the hearing—“Data or Dogma: Promoting Open Inquiry in the Debate over the Magnitude of Human Impact on Earth’s Climate”—with a tale of a 2013 expedition by New Zealand scientists. They were investigating Antarctic sea ice—“ice that the climate-industrial complex had assured us was vanishing,” Cruz said. “It was there to document how the ice was vanishing in the Antarctic, but the ship became stuck. It had run into an inconvenient truth, as Al Gore might put it. Facts matter, science matters, data matters.”

So OK. To bolster that us-versus-them narrative, Cruz invited scientists who believe they are being persecuted (or denied government funding)—just like Galileo was by the Catholic Church, they kept saying.

The other side of the aisle responded that these scientists aren’t being funded because their research and ideas don’t measure up to peer-review standards—or are just plain wrong.

Read the entire article here.

Image: The Demon Town cemetery in Majuro has lost many graves during a decade of constant inundations. The local people have moved their relatives’ remains and graves further inland, 2008. Courtesy of The Marshall Island Journal / Guardian Newspapers.

The Random Darknet Shopper

Good art pushes our boundaries; it causes us to question our accepted views and perceptions. Good art makes us think.

So, here’s a great example — the Random Darknet Shopper.

Briefly, the Random Darknet Shopper is an automated shopping robot; actually an automated process running on a laptop. It makes random purchases online, and then has its booty delivered to an art gallery in London where it is displayed. Once a week the shopping bot will spend up to $100 on Alpha Bay, one of the Darknet’s largest marketplaces — a trade zone for many dubious and often illegal goods and services.
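The underlying logic is disarmingly simple. Here is a minimal sketch of how such a bot might pick its weekly purchase; the catalog, item names and prices below are entirely hypothetical, and I have not seen the artists’ actual code:

```python
import random

WEEKLY_BUDGET_USD = 100  # the bot's self-imposed spending cap

# Hypothetical listings, standing in for whatever a marketplace page offers
catalog = [
    {"item": "replica jeans", "price_usd": 65},
    {"item": "stash can", "price_usd": 20},
    {"item": "baseball cap with spy camera", "price_usd": 80},
    {"item": "fake trainers", "price_usd": 90},
]

def weekly_purchase(listings, budget=WEEKLY_BUDGET_USD):
    """Pick one random affordable item; return None if nothing fits the budget."""
    affordable = [entry for entry in listings if entry["price_usd"] <= budget]
    return random.choice(affordable) if affordable else None

print(weekly_purchase(catalog))
```

The artistic (and legal) frisson comes precisely from that random choice: nobody, including the artists, knows what arrives next.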


During its first run from October 2014 to January 2015, the Random Darknet Shopper bought a dozen items from the deepweb market Agora, including: replica Diesel jeans, Hungarian passport scan, Sprite stash can, baseball cap with integrated spy camera, ecstasy pills, fake Nike trainers, platinum Visa credit card.

This may not be altogether visually appealing, but it’s thought-provoking nonetheless, with an added twist — the artists and art gallery may end up in legal hot water should the robot make some dubious purchases.

Read more about the artists and the project.

From the Independent:

On balance, it’s unlikely that police will swoop on a south London art gallery this week and apprehend a laptop that will be busy making random purchases from a secretive part of the web known as the Darknet.

Then again, it depends what the automated shopping ’bot known as Random Darknet Shopper chooses to buy online and have delivered to the gallery. Fake trainers or a counterfeit designer T-shirt are unlikely to attract the interest of the authorities, but Class A drugs or a gun would be a different matter.

“We just don’t know what’s going to turn up [at the gallery] which is what makes it difficult legally,” said Susan Singleton, the solicitor who has provided legal advice to the Swiss artists who designed the Shopper. “The major caveat here is that the artists are not telling it to buy drugs, so they wouldn’t be responsible. But once the goods come into their possession you move to an entirely separate set of offences.”

Artists Domagoj Smoljo and Carmen Weisskopf are well aware that their creation may land them in hot water when it begins an eight-week shopping spree at the Horatio Junior gallery in Rotherhithe on Friday. Every Wednesday, the ’bot will spend up to $100 (£66) in Bitcoins on an item selected at random from Alpha Bay, one of the largest marketplaces on the Darknet. Each item will be delivered to the gallery, where the artists will add them to a display they describe as a “Darknet landscape”.

“It is both exciting and nerve wracking,” said Smoljo, 36, who created the Shopper with Weisskopf, 39, last year as a means of exploring and understanding a secret part of the web. “I sleep badly the night before it goes shopping  … it is something that is out of our control. We feel vulnerable, but at the same time we like it.”

When Darknet Shopper was exhibited in Switzerland last year its random purchases included a pair of fake Nike trainers, counterfeit designer jeans from China and 10 packets of cigarettes from Ukraine. Swiss police took an interest when it added a bag of 10 ecstasy tablets to its haul and the pills were put on display.

Read the entire story here.

Image courtesy of Google Search.

 

Fight or Flight (or Record?)


Psychologists, social scientists and researchers of the human brain have long maintained that we have three typical responses to an existential, usually physical, threat. First, we may stand our ground to tackle and fight the threat. Second, we may turn and run from danger. Third, we may simply freeze with indecision and inaction. These responses have been studied, documented and confirmed over the decades. Further, they tend to mirror those of other animals when faced with a life-threatening situation.

But, now that humans have entered the smartphone age, it appears that there is a fourth response — to film or record the threat. This may seem hard to believe, and foolhardy, but quite disturbingly it is a growing trend, especially among younger people.

From the Telegraph:

If you witnessed a violent attack on an innocent victim, would you:

a) help
b) run
c) freeze

Until now, that was the hypothetical question we all asked ourselves when reading about horrific events such as terror attacks.

What survival instinct would come most naturally? Fight or flight?

No longer. Over the last couple of years it’s become very obvious that there’s a fourth option:

d) record it all on your smartphone.

This reaction of filming traumatic events has become more prolific in recent weeks. Last month’s terror attacks in Paris saw mobile phone footage of people being shot, photos of bodies lying in the street, and perhaps most memorably, a pregnant woman clinging onto a window ledge.

Saturday [December 5, 2015] night saw another example when a terror suspect started attacking passengers on the Tube at Leytonstone Station. Most of the horrific incident was captured on video, as people stood filming him.

One brave man, 33-year-old engineer David Pethers, tried to fight the attacker. He ended up with a cut to the neck as he tried to protect passing children. But while he was intervening, others just held up their phones.

“There were so many opportunities where someone could have grabbed him,” he told the Daily Mail. “One guy came up to me afterwards and said ‘well done, I want to shake your hand, you are the only one who did anything, I got the whole thing on film.’

“I was so angry, I nearly turned on him but I walked away. I thought, ‘Are you crazy? You are standing there filming and did nothing.’ I was really angry afterwards.”

It’s hard to disagree. Most of us know heroism is rare and admirable. We can easily understand people trying to escape and save themselves, or even freezing in the face of terror.

But deliberately doing nothing and choosing to film the whole thing? That’s a lot harder to sympathise with.

Psychotherapist Richard Reid agrees – “the sensible option would be to think about your own safety and get out, or think about helping people” – but he says it’s important we understand this new reaction.

“Because events like terror attacks are so outside our experience, people don’t fully connect with it,” he explains.

“It’s like they’re watching a film. It doesn’t occur to them they could be in danger or they could be helping. The reality only sinks in after the event. It’s a natural phenomenon. It’s not necessarily the most useful response, but we have to accept it.”

Read the entire story here.

Image courtesy of Google Search.

A Googol Years From Now

If humanity makes it through the next few years and decades without destroying itself and the planet, we can ponder the broader fate of our universal home. Assuming humanity escapes the death of our beautiful local star (in 4-5 billion years or so) and the merging of our very own Milky Way and the Andromeda galaxy (around 7-10 billion years), we’ll be toast in a googol years. Actually, we and everything else in the cosmos will be more like a cold, dark particle soup. By the way, a googol is a rather large number — 10^100. That gives us plenty of time to fix ourselves.
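To get a feel for how far off a googol years is, it helps to put the milestones from the article below on a logarithmic scale. A quick back-of-the-envelope sketch, using only the timescales quoted in the excerpt:

```python
import math

# Milestones (in years) quoted in the article below
milestones = {
    "universe today": 1.4e10,
    "last star is born": 2e12,
    "end of the stelliferous era": 1e14,
    "neutron stars break apart": 1e34,
    "possible proton decay": 1e40,
    "nothing macroscopic left (a googol)": 1e100,
}

for label, years in milestones.items():
    print(f"10^{math.log10(years):5.1f} years: {label}")
```

Even on a log scale, the first five milestones huddle in the bottom half; the march from the Degenerate Era to the googol-year particle soup takes up the rest.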

From Space:

Yes, the universe is dying. Get over it.

 Well, let’s back up. The universe, as defined as “everything there is, in total summation,” isn’t going anywhere anytime soon. Or ever. If the universe changes into something else far into the future, well then, that’s just more universe, isn’t it?

But all the stuff in the universe? That’s a different story. When we’re talking all that stuff, then yes, everything in the universe is dying, one miserable day at a time.

You may not realize it by looking at the night sky, but the ultimate darkness is already settling in. Stars first appeared on the cosmic stage rather early — more than 13 billion years ago; just a few hundred million years into this Great Play. But there’s only so much stuff in the universe, and only so many opportunities to make balls of it dense enough to ignite nuclear fusion, creating the stars that fight against the relentless night.

The expansion of the universe dilutes everything in it, meaning there are fewer and fewer chances to make the nuclear magic happen. And around 10 billion years ago, the expansion reached a tipping point. The matter in the cosmos was spread too thin. The engines of creation shut off. The curtain was called: the epoch of peak star formation has already passed, and we are currently living in the wind-down stage. Stars are still born all the time, but the birth rate is dropping.

At the same time, that dastardly dark energy is causing the expansion of the universe to accelerate, ripping galaxies away from each other faster than the speed of light (go ahead, say that this violates some law of physics, I dare you), drawing them out of the range of any possible contact — and eventually, visibility — with their neighbors. With the exception of the Andromeda Galaxy and a few pathetic hangers-on, no other galaxies will be visible. We’ll become very lonely in our observable patch of the universe.

The infant universe was a creature of heat and light, but the cosmos of the ancient future will be a dim, cold animal.

The only consolation is the time scale involved. You thought 14 billion years was a long time? The numbers I’m going to present are ridiculous, even with exponential notation. You can’t wrap your head around it. They’re just … big.

For starters, we have at least 2 trillion years until the last sun is born, but the smallest stars will continue to burn slow and steady for another 100 trillion years in a cosmic Children of Men. Our own sun will be long gone by then, heaving off its atmosphere within the next 5 billion years and charcoaling the Earth. Around the same time, the Milky Way and Andromeda galaxies will collide, making a sorry mess of the local system.

At the end of this 100-trillion-year “stelliferous” era, the universe will only be left with the … well, leftovers: white dwarves (some cooled to black dwarves), neutron stars and black holes. Lots of black holes.

Welcome to the Degenerate Era, a state that is as sad as it sounds. But even that isn’t the end game. Oh no, it gets worse. After countless gravitational interactions, planets will get ejected from their decaying systems and galaxies themselves will dissolve. Losing cohesion, our local patch of the universe will be a disheveled wreck of a place, with dim, dead stars scattered about randomly and black holes haunting the depths.

The early universe was a very strange place, and the late universe will be equally bizarre. Given enough time, things that seem impossible become commonplace, and objects that appear immutable … uh, mutate. Through a process called quantum tunneling, any solid object will slowly “leak” atoms, dissolving. Because of this, gone will be the white dwarves, the planets, the asteroids, the solid.

Even fundamental particles are not immune: given 10^34 years, the neutrons in neutron stars will break apart into their constituent particles. We don’t yet know if the proton is stable, but if it isn’t, it’s only got 10^40 years before it meets its end.

With enough time (and trust me, we’ve got plenty of time), the universe will consist of nothing but light particles (electrons, neutrinos and their ilk), photons and black holes. The black holes themselves will probably dissolve via Hawking Radiation, briefly illuminating the impenetrable darkness as they decay.

After 10^100 years (but who’s keeping track at this point?), nothing macroscopic remains. Just a weak soup of particles and photons, spread so thin that they hardly ever interact.

Read the entire article here.

In case you’ve forgotten, a googol is 10^100 (10 to the power of 100), or a 1 followed by 100 zeros. And, yes, that’s how the company Google derived its name.

Now We Can All Be Michael Scott And Number 6

Or, if you are from the UK — you can be David Brent. That is, we can all aspire to be a terrible boss. And, it’s all courtesy of the techno-enabled Uberified gig-economy.

Those of us who have a boss will identify with the mostly excruciating ritual that is the annual performance review; your work, your attitude, your personality are dissected, sliced and diced, scored, rated and ranked. However, as traumatic as this may be for you, remember that at least your boss actually interacts with you (usually), and may have come to know you (somewhat) over a period of some years.

[tube]nW-bFGzNMXw[/tube]

But, how would it feel to be evaluated in this way — scored and rated — by complete strangers during a fleeting interaction that may only have lasted minutes? Online social media tools make this scoring wonderfully easy and convenient — just check a box or select 1-5 stars or a thumbs up/down. Add to this the sharing / gig economy, and we now have millions of people ready (and eager) to score millions of others for waiting tables, chauffeuring a car, delivering pizza, writing an app, cleaning a house, walking your dog, mowing your lawn. And, the list grows each day. Thus, you may be an employee to any number of managers throughout each day — it’s just that each manager is actually one of your customers, and each customer is armed with your score.

Where will this lead us? Should we rank our partners and spouses each day, indeed, several times each day? Will we score our kids for table etiquette, manners, talk-back? Should we score the check-out employee, the bank clerk, the bus driver, barista, nurse practitioner, car mechanic, surgeon? Ugh.

But you can certainly see why corporate executives are falling over themselves to have customers anonymously score their customer-facing employees. For the process devolves power to the customer, and relieves management of having to make the once tough personnel decisions. So, why not have hordes of anonymous reviews and aggregated scores from customers determine the fate of low-level service employees? This would seem to be the ultimate customer service.

Yet, by replacing the human connection between employer/customer and employee/service worker with scores and algorithms we are further commoditizing ourselves. We erode our humanity by allowing ourselves to be quantified and enumerated, and by doing the same to others, known and unknown. Having the power to score and rate another person at the press of a finger — anonymously — may make for savvy 21st century management, but it makes for a colder, crueler world, which increasingly reads like a dystopian novel.
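To see just how mechanical this “management” is, it helps to spell out what the platform’s side reduces to. A minimal sketch, assuming a 1-5 star scale; the deactivation threshold and rolling window below are my own inventions, since the platforms guard their real numbers:

```python
from collections import deque

DEACTIVATION_THRESHOLD = 4.6  # hypothetical cutoff; real ones are unpublished
ROLLING_WINDOW = 100          # hypothetical: judge only the last N ratings

class WorkerAccount:
    """A worker's employment, reduced to an average of anonymous star ratings."""

    def __init__(self):
        self.ratings = deque(maxlen=ROLLING_WINDOW)
        self.active = True

    def rate(self, stars):
        self.ratings.append(stars)
        average = sum(self.ratings) / len(self.ratings)
        if average < DEACTIVATION_THRESHOLD:
            self.active = False  # no meeting, no warning, no appeal
        return average

driver = WorkerAccount()
for stars in (5, 5, 4, 5, 3):
    print(f"average {driver.rate(stars):.2f}, active: {driver.active}")
```

Five good-to-middling rides and the account flips off. That is the whole of middle management, in a dozen lines.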

From the Verge:

Soon, you’ll be able to go to the Olive Garden and order your fettuccine alfredo from a tablet mounted to the table. After paying, you’ll rate the server.

Then you can use that tablet to hail an Uber driver, whom you’ll also rate, from one to five stars. You can take it to your Airbnb, which you’ll award one to five stars across several categories, and get a TaskRabbit or Postmates worker to pick up groceries — rate them too. Maybe you’ll check on the web developer you’ve hired through Upwork, perusing the screenshots taken automatically from her computer, and think about how you’ll rate her when the job is done. You could hire someone from Handy to clean the place before you leave. More stars.

The on-demand economy has scrambled the roles of employer and employee in ways that courts and regulators are just beginning to parse. So far, the debate has focused on whether workers should be contractors or employees, a question sometimes distilled into an argument about who’s the boss: are workers their own bosses, as the companies often claim, or is the platform their boss, policing their work through algorithms and rules?

But there’s a third party that’s often glossed over: the customer. The rating systems used by these companies have turned customers into unwitting and sometimes unwittingly ruthless middle managers, more efficient than any boss a company could hope to hire. They’re always there, working for free, hypersensitive to the smallest error. All the algorithm has to do is tally up their judgments and deactivate accordingly.

Ratings help these companies to achieve enormous scale, managing large pools of untrained contract workers without having to hire supervisors. It’s a nice arrangement for customers too, who get cheap service with a smile — even if it’s an anxious one. But for the workers, already in the precarious position of contract labor, making every customer a boss is a terrifying prospect. After all, they — we — can be entitled jerks.

“You get pretty good at kissing ass just because you have to,” an Uber driver told me. “Uber and Lyft have created this monstrous brand of customer where they expect Ritz Carlton service at McDonald’s prices.”

In March, when Judge Edward Chen denied Uber’s motion for summary judgement on the California drivers’ class action suit, he seized on the idea that ratings aren’t just a customer feedback tool — they represent a new level of monitoring, far more pervasive than any watchful boss. Customer ratings, Chen wrote, give Uber an “arguably tremendous amount of control over the ‘manner and means’ of its drivers’ performance.” Quoting from Michel Foucault’s Discipline and Punish, he wrote that a “state of conscious and permanent visibility assures the automatic functioning of power.”

Starting with Ebay, rating systems have typically been described as a way of establishing trust between strangers. Some commentators go so far as to say ratings are more effective than government regulation. “Uber and Airbnb are in fact some of the most regulated ecosystems in the world,” said Joshua Gans, an economist at the University of Toronto, at an FTC workshop earlier this year. Rather than a single certification before you can begin work, everyone is regulated constantly through a system of mutually assured judgment.

Certainly customers sometimes have awful experiences — reckless driving, creepy comments — and the rating system can help report them. But when it comes to policing dangerous behavior, most of these platforms have come to rely not on ratings but on traditional safety measures — identity verification, background checks, and the knowledge that any illegal actions can be investigated and enforced through the tracking devices every worker carries. We can’t rate for criminal histories, poor training, or negligent car maintenance.

So what do we rate for? We rate for the routes drivers take, for price fluctuations beyond their control, for slow traffic, for refusing to speed, for talking too much or too little, for failing to perform large tasks unrealistically quickly, for the food being cold when they delivered it, for telling us that, No, we can’t bring beer in the car and put our friend in the trunk — really, for any reason at all, including subconscious biases about race or gender, a proven problem on many crowdsourced platforms. This would be a nuisance if feedback were just feedback, but ratings have become the primary metric in automated systems determining employment. If you imagine the things customers rate down for as firing decisions in a traditional workplace, they look capricious and harsh. It’s a strange amount of power for customers to hold, all the more so considering that many don’t know they wield it.

Sometimes, as in Uber’s system, workers have the opportunity to rate customers back. An Uber spokesperson told me that “Uber’s priority is to connect you with a safe, reliable ride — no matter who you are, where you’re coming from, or where you’re going. Achieving that goal for our community means maintaining an environment of mutual accountability and respect. We want everyone to have a great ride, every time, and two-way feedback is one of the many ways we work to make that possible.”

Read more here.

Video: The Prisoner – I’m not a number, I’m a free man! 1967. Courtesy: Patrick McGoohan / ITC Entertainment.

 

Re-Innovation: Silicon Valley’s Trivial Pursuit Problem

I read an increasing number of articles like the one excerpted below, which cause me to sigh with exasperation yet again. Is Silicon Valley — that supposed beacon of global innovation — in danger of becoming a drainage ditch of regurgitated sameness, of me-too banality?

It’s frustrating to watch many of our self-proclaimed brightest tech minds re-package colorful “new” solutions to our local trivialities, over and over. So, here we are, celebrating the arrival of the “next big thing”; the next tech unicorn with a valuation above $1 billion, which proposes to upend and improve all our lives, yet again.

DoorDash. Seamless. Deliveroo. HelloFresh. HomeChef. SpoonRocket. Sprig. GrubHub. Instacart. These are all great examples of too much money chasing too few truly original ideas. I hope you’ll agree: a cool compound name is a cool compound name, but it certainly does not for innovation make. By the way, whatever happened to WebVan?

Where are my slippers? Yawn.

From Wired:

Founded in 2013, DoorDash is a food delivery service. It’s also the latest startup to be eying a valuation of more than $1 billion. DoorDash already raised $40 million in March; according to Bloomberg, it may soon reap another round of funding that would put the company in the same lofty territory as Uber, Airbnb, and more than 100 other so-called unicorns.

Not that DoorDash is doing anything terribly original. Startups bringing food to your door are everywhere. There’s Instacart, which wants to shop for groceries for you. Deliveroo and Postmates, like DoorDash, are looking to overtake Seamless as the way we get takeout at home. Munchery, SpoonRocket, and Sprig offer pre-made meals. Blue Apron, Gobble, HelloFresh, and HomeChef deliver ingredients to make the food ourselves. For the moment, investors are giddily rushing to subsidize this race to our doors. But skeptics say that the payout those investors are banking on might never come.

Even in a crowded field, funding for these delivery startups continues to grow. CB Insights, a research group that tracks startup investments, said this summer that the sector was “starting to get a little crowded.” Last year, venture-backed food delivery startups based in the US reaped more than $1 billion in equity funding; during the first half of this year, they pulled in $750 million more, CB Insights found.

The enormous waves of funding may prove money poorly spent if Silicon Valley finds itself in a burst bubble. Bill Gurley, the well-known investor and a partner at venture firm Benchmark, believes delivery startups may soon be due for a rude awakening. Unlike the first dotcom bubble, he said, smartphones might offer help, because startups are able to collect more data. But he compared the optimism investors are showing for such low-margin operations to the misplaced enthusiasms of 1999.  “It’s the same shit,” Gurley said during a recent appearance. (Gurley’s own investment in food delivery service, GrubHub, went public in April 2014 and is now valued at more than $2.2 billion.)

Read the entire article here.

 

Rudeness Goes Viral

We know intuitively, anecdotally and through scientific study that aggressive behavior can be transmitted to others through imitation. The famous Bobo doll experiment devised by researchers at Stanford University in the early 1960s, and numerous precursors, showed that subjects given an opportunity to observe aggressive models later reproduced a good deal of physical and verbal aggression substantially identical with that of the model. In these studies the model was usually someone with a higher social status or with greater authority (e.g., an adult) than the observer (e.g., a child).

Recent updates to these studies now show that low-intensity behaviors such as rudeness can be just as contagious as more intense behaviors like violence. Fascinatingly, the contagion seems to work equally well even if the model and observer are peers.

So, keep this in mind: watching rude behaviors leads us to be rude to others.

From Scientific American:

Flu season is nearly upon us, and in an effort to limit contagion and spare ourselves misery, many of us will get vaccinated. The work of Jonas Salk and Thomas Francis has helped restrict the spread of the nasty bug for generations, and the influenza vaccine is credited with saving tens of thousands of lives. But before the vaccine could be developed, scientists first had to identify the cause of influenza — and, importantly, recognize that it was contagious.

New research by Trevor Foulk, Andrew Woolum, and Amir Erez at the University of Florida takes that same first step in identifying a different kind of contagious menace: rudeness. In a series of studies, Foulk and colleagues demonstrate that being the target of rude behavior, or even simply witnessing rude behavior, induces rudeness. People exposed to rude behavior tend to have concepts associated with rudeness activated in their minds, and consequently may interpret ambiguous but benign behaviors as rude. More significantly, they themselves are more likely to behave rudely toward others, and to evoke hostility, negative affect, and even revenge from others.

The finding that negative behavior can beget negative behavior is not exactly new, as researchers demonstrated decades ago that individuals learn vicariously and will repeat destructive actions.  In the now infamous Bobo doll experiment, for example, children who watched an adult strike a Bobo doll with a mallet or yell at it were themselves abusive toward the doll.  Similarly, supervisors who believe they are mistreated by managers tend to pass on this mistreatment to their employees.

Previous work on the negative contagion effect, however, has focused primarily on high-intensity behaviors like hitting or abusive supervision that are (thankfully) relatively infrequent in everyday life.  In addition, in most previous studies the destructive behavior was modeled by someone with a higher status than the observer. These extreme negative behaviors may thus get repeated because (a) they are quite salient and (b) the observer is consciously and intentionally trying to emulate the behavior of someone with an elevated social status.

To examine whether this sensitivity impacts social behavior, Foulk’s team conducted another study in which participants were asked to play the part of an employee at a local bookstore. Participants first observed a video showing either a polite or a rude interaction among coworkers. They were then asked to respond to an email from a customer. The email was either neutral (e.g., “I am writing to check on an order I placed a few weeks ago.”), highly aggressive (e.g., “I guess you or one of your incompetent staff must have lost my order.”), or moderately rude (e.g., “I’m really surprised by this as EVERYBODY said you guys give really good customer service???”).

Foulk and colleagues again found that prior exposure to rude behavior creates a specific sensitivity to rudeness. Notably, the type of video participants observed did not affect their responses to the neutral or aggressive emails; instead, the nature of those emails drove the response.  That is, all participants were more likely to send a hostile response to the aggressive email than to neutral email, regardless of whether they had previously observed a polite or rude employee interaction.  However, the type of video participants observed early in the study did affect their interpretation of and response to the rude email.  Those who had seen the polite video adopted a benign interpretation of the moderately rude email and delivered a neutral response, while those who had seen the rude video adopted a malevolent interpretation and delivered a hostile response.  Thus, observing rude behaviors, even those committed by coworkers or peers, resulted in greater sensitivity and heightened response to rudeness.

Read the entire article here.

Clowns, Ducks and Dancing Girls

OK, OK. I’ve had to break my own rule (again). You know, the one that states that I’m not supposed to write about politics. The subject is far too divisive, I’m told. However, as a US-based Brit, and hence a somewhat removed observer — though I can actually vote — I cannot stay on the sidelines.


For US politics, with its never-ending election season, is a process that must be observed, studied, dissected and savored. After all, it’s not really politics — it’s a hysterically entertaining reality TV show complete with dancing girls, duck hunting, character assassination, clowns, demagogues, guns, hypocrisy, plaid shirts, lies and so much more. Best of all, there are no policies or substantive ideas of any kind; just pure entertainment. Netflix should buy the exclusive rights!


Image, top: Phil Robertson, star of the Duck Dynasty reality TV show, says Cruz is the man for the job because he is godly, loves America, and is willing to kill a duck to make gumbo soup. Courtesy of the Guardian.

Image, bottom: Political rally for Donald Trump featuring gyrating dancing girls and warnings to the “enemy”. Courtesy of Fox News.

It’s Time for a Cat 6 Hurricane


Amateur meteorologist that I am, I wonder when my professional colleagues will extend the Saffir–Simpson hurricane wind scale (SSHWS). Currently the scale classifies hurricanes — western hemisphere tropical cyclones — on a scale of 1 to 5. Category 1 hurricanes encompass winds of 74-95 miles per hour; category 5 storms gyrate in excess of 157 miles per hour. As our global climate becomes increasingly warmer — which it is — our weather is becoming more volatile and extreme — longer, hotter droughts, greater torrential rainfall, higher floods.

Category 5 will soon need to be supplemented. After all, the recent storm to hit the Pacific coast of Mexico — hurricane Patricia — reached sustained winds of 201 miles per hour; Typhoon Haiyan had a top wind speed of only 195 mph when it slammed the Philippines in 2013, killing over 7,000 residents.
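For reference, the current scale maps sustained winds to categories as sketched below; the Category 6 cutoff of 180 mph is pure speculation on my part, since no such category officially exists:

```python
def saffir_simpson(wind_mph, with_cat6=False):
    """Return the hurricane category for a sustained wind speed in mph.

    Categories 1-5 follow the published Saffir-Simpson thresholds;
    category 6 is a hypothetical extension for this post.
    """
    thresholds = [(74, 1), (96, 2), (111, 3), (130, 4), (157, 5)]
    if with_cat6:
        thresholds.append((180, 6))  # speculative cutoff, my invention
    category = 0  # below 74 mph: tropical storm or weaker
    for cutoff, cat in thresholds:
        if wind_mph >= cutoff:
            category = cat
    return category

print(saffir_simpson(201))                  # Patricia today: category 5
print(saffir_simpson(201, with_cat6=True))  # on the extended scale: 6
```

Under such an extension, both Patricia and Haiyan would have been Category 6 storms.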

From the NYT:

Hurricane Patricia was a surprise. The eastern Pacific hurricane strengthened explosively before hitting the coast of Mexico, far exceeding projections of scientists who study such storms.

And while the storm’s strength dissipated quickly when it struck land, a question remained. What made it such a monster?

Explanations were all over the map, with theories that included climate change (or not), and El Niño.

But the answer is more complicated. The interplay of all the different kinds of warming going on in the Pacific at the moment can be difficult to sort out and, as with the recent hurricane, attributing a weather event to a single cause is unrealistic.

Gabriel Vecchi, head of the climate variations and predictability group at the geophysical fluid dynamics laboratory of the National Oceanic and Atmospheric Administration in Princeton, N.J., likened the challenge to the board game Clue.

“There’s all these suspects, and we have them all in the room right now,” he said. “The key is to go and systematically figure out who was where and when, so we can exclude people or phenomena.” Extending the metaphor, he noted that criminal suspects could work together as accomplices, and there could be a character not yet known. And, as in all mysteries, “You can have a twist ending.”

At the moment, the world’s largest ocean is a troublesome place, creating storms and causing problems for people and marine life across the Pacific Rim and beyond. A partial list includes the strong El Niño system that has formed along the Equator, and another unusually persistent zone of warm water that has been sitting off the North American coast, wryly called “the Blob.”

And a longer-term cycle of heating and cooling known as the Pacific Decadal Oscillation may be switching from a cooling phase to a warming phase. On top of all that is the grinding progress of climate change, caused by accumulation of greenhouse gases generated by human activity.

Each of these phenomena operates on a different time scale, but for now they appear to be synchronized, a little like the way the second hand, minute hand and hour hand line up at the stroke of midnight. And the collective effects could be very powerful.

Read the entire story here.

Image: Hurricane Patricia at peak intensity, approaching western Mexico on October 23, 2015. Courtesy: MODIS image, NASA’s Terra satellite. Public Domain.

The Man With No Phone

If Hitchcock were alive today the title of this post — The Man With No Phone — might be a fitting description of his latest celluloid noir masterpiece. For in many of us the notion of being phone-less instills deep, nightmarish visions of blood-curdling terror.

Does The Man With No Phone lose track of all reality, family, friends, appointments, status updates, sales records, dinner, grocery list, transportation schedules and news, turning into an empty neurotic shell of a human being? Or, does lack of constant connectivity and elimination of instant, digital gratification lead The Man With No Phone to become a schizoid, feral monster? Let’s read on to find out.

[tube]uWhkbDMISl8[/tube]

Large swathes of the world are still phone-less, and much of the global population — at least those of us over the age of 35 — grew up smartphone-less and even cellphone-less. So, it’s rather disconcerting to read Steve Hilton’s story; he’s been phone-less for 3 years now. However, it’s not disconcerting that he’s without a phone — I find it inspiring (and normal); it’s disconcerting that many people are wondering how on earth he can live without one. And, even more perplexing — why would anyone need a digital detox or mindfulness app on their smartphone? Just hide the thing in your junk drawer for a week (or more) and breathe out. Long live The Man With No Phone!

From the Guardian:

Before you read on, I want to make one thing clear: I’m not trying to convert you. I’m not trying to lecture you or judge you. Honestly, I’m not. It may come over like that here and there, but believe me, that’s not my intent. In this piece, I’m just trying to … explain.

People who knew me in a previous life as a policy adviser to the British prime minister are mildly surprised that I’m now the co-founder and CEO of a tech startup. And those who know that I’ve barely read a book since school are surprised that I have now actually written one.

But the single thing that no one seems able to believe – the thing that apparently demands explanation – is the fact that I am phone-free. That’s right: I do not own a cellphone; I do not use a cellphone. I do not have a phone. No. Phone. Not even an old-fashioned dumb one. Nothing. You can’t call me unless you use my landline – yes, landline! Can you imagine? At home. Or call someone else that I happen to be with (more on that later).

When people discover this fact about my life, they could not be more surprised than if I had let slip that I was actually born with a chicken’s brain. “But how do you live?” they cry. And then: “How does your wife feel about it?” More on that too, later.

As awareness has grown about my phone-free status (and its longevity: this is no passing fad, people – I haven’t had a phone for over three years), I have received numerous requests to “tell my story”. People seem to be genuinely interested in how someone living and working in the heart of the most tech-obsessed corner of the planet, Silicon Valley, can possibly exist on a day-to-day basis without a smartphone.

So here we go. Look, I know it’s not exactly Caitlyn Jenner, but still: here I am, and here’s my story.

In the spring of 2012, I moved to the San Francisco bay area with my wife and two young sons. Rachel was then a senior executive at Google, which involved a punishing schedule to take account of the eight-hour time difference. I had completed two years at 10 Downing Street as senior adviser to David Cameron – let’s just put it diplomatically and say that I and the government machine had had quite enough of each other. To make both of our lives easier, we moved to California.

I took with me my old phone, which had been paid for by the taxpayer. It was an old Nokia phone – I always hated touch-screens and refused to have a smartphone; neither did I want a BlackBerry or any other device on which the vast, endless torrent of government emails could follow me around. Once we moved to the US my government phone account was of course stopped and telephonically speaking, I was on my own.

I tried to get hold of one of my beloved old Nokia handsets, but they were no longer available. Madly, for a couple of months I used old ones procured through eBay, with a pay-as-you-go plan from a UK provider. The handsets kept breaking and the whole thing cost a fortune. Eventually, I had enough when the charging outlet got blocked by sand after a trip to the beach. “I’m done with this,” I thought, and just left it.

I remember the exact moment when I realized something important had happened. I was on my bike, cycling to Stanford, and it struck me that a week had gone by without my having a phone. And everything was just fine. Better than fine, actually. I felt more relaxed, carefree, happier. Of course a lot of that had to do with moving to California. But this was different. I felt this incredibly strong sense of just thinking about things during the day. Being able to organize those thoughts in my mind. Noticing things.

Read the entire story here.

Video: Hanging on the Telephone, Blondie. Courtesy: EMI Music.

Design Thinking Versus Product Development

Out with product managers; in with design thinkers. Time for some corporate creativity. Think user journeys and empathy maps.

A different corporate mantra is beginning to take hold at some large companies like IBM. It’s called design thinking, and while it’s not necessarily new, it holds promise for companies seeking to meet the needs of their customers at a fundamental level. Where design is often thought of in terms of defining and constructing cool-looking products, design thinking is used to capture a business problem at a broader level, shape business strategy and deliver a more holistic, deeper solution to customers. And, importantly, to do so more quickly than through a typical product development life-cycle.

From NYT:

Phil Gilbert is a tall man with a shaved head and wire-rimmed glasses. He typically wears cowboy boots and bluejeans to work — hardly unusual these days, except he’s an executive at IBM, a company that still has a button-down suit-and-tie reputation. And in case you don’t get the message from his wardrobe, there’s a huge black-and-white photograph hanging in his office of a young Bob Dylan, hunched over sheet music, making changes to songs in the “Highway 61 Revisited” album. It’s an image, Mr. Gilbert will tell you, that conveys both a rebel spirit and hard work.

Let’s not get carried away. Mr. Gilbert, who is 59 years old, is not trying to redefine an entire generation. On the other hand, he wants to change the habits of a huge company as it tries to adjust to a new era, and that is no small task.

IBM, like many established companies, is confronting the relentless advance of digital technology. For these companies, the question is: Can you grow in the new businesses faster than your older, lucrative businesses decline?

Mr. Gilbert answers that question with something called design thinking. (His title is general manager of design.) Among other things, design thinking flips traditional technology product development on its head. The old way is that you come up with a new product idea and then try to sell it to customers. In the design thinking way, the idea is to identify users’ needs as a starting point.

Mr. Gilbert and his team talk a lot about “iteration cycles,” “lateral thinking,” “user journeys” and “empathy maps.” To the uninitiated, the canons of design thinking can sound mushy and self-evident. But across corporate America, there is a rising enthusiasm for design thinking not only to develop products but also to guide strategy and shape decisions of all kinds. The September cover article of the Harvard Business Review was “The Evolution of Design Thinking.” Venture capital firms are hiring design experts, and so are companies in many industries.

Still, the IBM initiative stands out. The company is well on its way to hiring more than 1,000 professional designers, and much of its management work force is being trained in design thinking. “I’ve never seen any company implement it on the scale of IBM,” said William Burnett, executive director of the design program at Stanford University. “To try to change a culture in a company that size is a daunting task.”

Daunting seems an understatement. IBM has more than 370,000 employees. While its revenues are huge, the company’s quarterly reports have shown them steadily declining in the last two years. The falloff in revenue is partly intentional, as the company sold off less profitable operations, but the sometimes disappointing profits are not, and they reflect IBM’s struggle with its transition. Last month, the company shaved its profit target for 2015.

In recent years, the company has invested heavily in new fields, including data analytics, cloud computing, mobile technology, security, social media software for business and its Watson artificial intelligence technology. Those businesses are growing rapidly, generating revenue of $25 billion last year, and IBM forecasts that they will contribute $40 billion by 2018, through internal growth and acquisitions. Just recently, for example, IBM agreed to pay $2 billion for the Weather Company (not including its television channel), gaining its real-time and historical weather data to feed into Watson and analytics software.

But IBM’s biggest businesses are still the traditional ones — conventional hardware, software and services — which contribute 60 percent of its revenue and most of its profit. And these IBM mainstays are vulnerable, as customers increasingly prefer to buy software as a service, delivered over the Internet from remote data centers.

Recognizing the importance of design is not new, certainly not at IBM. In the 1950s, Thomas J. Watson Jr., then the company’s chief executive, brought on Eliot Noyes, a distinguished architect and industrial designer, to guide a design program at IBM. And Noyes, in turn, tapped others including Paul Rand, Charles Eames and Eero Saarinen in helping design everything from corporate buildings to the eight-bar corporate logo to the IBM Selectric typewriter with its golf-ball-shaped head.

At that time, and for many years, design meant creating eye-pleasing, functional products. Now design thinking has broader aims, as a faster, more productive way of organizing work: Look at problems first through the prism of users’ needs, research those needs with real people and then build prototype products quickly.

Defining problems more expansively is part of the design-thinking ethos. At a course in New York recently, a group of IBM managers were given pads and felt-tip pens and told to sketch designs for “the thing that holds flowers on a table” in two minutes. The results, predictably, were vases of different sizes and shapes.

Next, they were given two minutes to design “a better way for people to enjoy flowers in their home.” In Round 2, the ideas included wall placements, a rotating flower pot run by solar power and a software app for displaying images of flowers on a home TV screen.

Read the entire story here.

North Korea + Oil = Saudi Arabia

Most of us in the West — myself included — take our rights and freedoms very much for granted. This is a mistake. We should celebrate every day. And here’s a stark reminder from the Middle East. The latest collection of royal decrees from the rulers of Saudi Arabia now declares that atheists are terrorists.

At some point in our future, I still have to believe, the majority of humanity will come to realize that morality, compassion, altruism and kindness are basic human traits — they come to be despite religion, not because of it. At that point, perhaps, more nations will remove the shackles of religious dogma that constrain their citizens and join in the celebration of truly secular and global human rights: freedom of expression, freedom of assembly, freedom to think, freedom to dance, freedom to drive, freedom to joke, freedom to be spiritual but not religious. And, of course, those who still desire to believe in whatever they wish should be free to do so.

From the Independent:

Saudi Arabia has introduced a series of new laws which define atheists as terrorists, according to a report from Human Rights Watch.

In a string of royal decrees and an overarching new piece of legislation to deal with terrorism generally, the Saudi King Abdullah has clamped down on all forms of political dissent and protests that could “harm public order”.

The new laws have largely been brought in to combat the growing number of Saudis travelling to take part in the civil war in Syria, who have previously returned with newfound training and ideas about overthrowing the monarchy.

To that end, King Abdullah issued Royal Decree 44, which criminalises “participating in hostilities outside the kingdom” with prison sentences of between three and 20 years, Human Rights Watch said.

Yet last month further regulations were issued by the Saudi interior ministry, identifying a broad list of groups which the government considers to be terrorist organisations – including the Muslim Brotherhood.

Article one of the new provisions defines terrorism as “calling for atheist thought in any form, or calling into question the fundamentals of the Islamic religion on which this country is based”.

Read the entire article here.

Celebrating 10 Years of Blogging

On this day in 2006, I began my journey into the blogosphere. What an anachronism that must seem to the many people who communicate in micro-burst-speak — 21st century equivalents of Morse code, such as Twitter, WhatsApp and SnapChat. [If only my readers knew that I also write using a fountain pen with real ink!]

Much has changed in the intervening years — wilder climate, ubiquitous social media, Uber, online shaming, selfies. Much has also — sadly — remained the same: vacuous politicians with no policies, gun violence, terrorism, rich getting richer, bigotry and racism, gender inequality. The good news is that the progress of science marches on, so there is (some) hope for humanity amid all the turmoil.


Over the last decade I’ve connected my blog to a growing readership around the globe. And, while I may only have one or two readers in the 20 lowest ranked territories, according to Google Analytics, I thank each and every one of you — and the Tubes of the Internets — for allowing my thoughts and digital pen into your home.

Image courtesy of Google Analytics.

See, Earth is at the Center of the Cosmos

A single image of the entire universe from 2012 has been attracting lots of attention recently. Not only is it beautiful, it shows the Earth and our solar system clearly in the correct location — at the rightful center!

Some seem to be using this to claim that the circa 2,000-year-old geocentric view of the cosmos must be right.


Well, sorry creationists, flat-earthers, and followers of Ptolemy, this gorgeous image is a logarithmic illustration.
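The trick that fits the Moon and the cosmic microwave background into a single frame is that each step outward in the image covers ten times the distance of the step before. Here is a small sketch of the mapping; the pixel radius and the inner cutoff are arbitrary choices for illustration:

```python
import math

IMAGE_RADIUS_PX = 1000    # arbitrary radius of the rendered illustration
MAX_DISTANCE_M = 4.4e26   # rough radius of the observable universe, meters
MIN_DISTANCE_M = 1e7      # arbitrary inner cutoff, about Earth's scale

def radial_position(distance_m):
    """Map a real distance onto the image's logarithmic radial axis."""
    span = math.log10(MAX_DISTANCE_M) - math.log10(MIN_DISTANCE_M)
    offset = math.log10(distance_m) - math.log10(MIN_DISTANCE_M)
    return IMAGE_RADIUS_PX * offset / span

print(radial_position(1.5e11))  # Earth-Sun distance lands ~21% of the way out
print(radial_position(9.5e20))  # the Milky Way's ~100,000 ly: ~71% of the way
```

So the Earth sits at the center for the same reason you sit at the center of your own horizon: the illustration measures everything outward from us.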

Image: Artist’s logarithmic scale conception of the observable universe with the Solar System at the center, inner and outer planets, Kuiper belt, Oort cloud, Alpha Centauri, Perseus Arm, Milky Way galaxy, Andromeda galaxy, nearby galaxies, Cosmic Web, Cosmic microwave radiation and Big Bang’s invisible plasma on the edge. Courtesy: Pablo Carlos Budassi / Wikipedia.

PhotoMash: A Blind Girl Sees; A Sighted Man is Blind

Today’s juxtaposition of images and stories comes courtesy of the Independent, from December 15, 2015. One is literally blind, the other figuratively.

The girl on the left is a 14-year old from Malawi. Her name is Rose. As a result of severe eye cataracts she was blind since birth. A recent operation restored her sight.

The man on the right can see, and according to his doctors is in excellent health. But he remains blind to all around him, except his own reflection.

Photomash-Blind-versus-Blind

Images courtesy of Independent, UK.

New Year. New Look

Eagle-eyed readers may have noticed a few subtle changes to the blog. While the focus remains the same, I’ve updated the look in keeping with a fresh new year — a more responsive layout, improved performance, easier browsing and content discovery, and updated typography.

I hope this is more pleasing to your eye and more efficient for your browser, whether you’re chained to a desk or on the move. Please drop me a line if you have any feedback. Thanks, and Happy New Year!

On the Joys of Not Being Twenty Again

I’m not twenty, and I’m constantly reminded of it — by internal alerts and external messages alike. Would I like to be younger? Of course. But it certainly comes at a price. So, after reading the exploits of a 20-something forced to live without her smartphone for a week, I realize it’s not all that bad being a cranky old Luddite.

I hope that the ordeal, excerpted below, is tongue-very-much-in-cheek, but I suspect it’s not: constant status refreshes, morning selfies, instant content gratification, nano-scale attention spans, over-stimulation, life-stream documentation, peer ranking, group-think, interrupted interruptions. I’m rather content not to be twenty after all.

From the Telegraph:

I have a confession to make: I am addicted to my smartphone. I use it as an alarm clock, map, notepad, mirror and camera.

I spend far too much time on Twitter and Instagram and have this week realised I have a nervous tic where I repeatedly unlock my smartphone.

And because of my phone’s many apps which organise my life and help me navigate the world, like many people my age, I am quite literally lost without it.

I am constantly told off by friends and family for using my phone during conversations, and I recently found out (to my horror) that I have taken over 5,000 selfies.

So when my phone broke I seized the opportunity to spend an entire week without it, and kept a diary each day.

Day One: Thursday

Frazzled, I reached to my bedside table, so I could take a morning selfie and send it to my friends.

Realising why that could not happen, my hand and my heart both felt empty. I knew at this point it was going to be a long week.

Day Two: Friday

I basked in the fact my colleagues could not contact me – and if I did not reply to their emails straight away it would not be the end of the world.

I then took the train home to see my parents outside London.

I couldn’t text my mother about any delays which may have happened (they didn’t), and she couldn’t tell me if she was going to be late to the station (she wasn’t). The lack of phone did nothing but make me feel anxious and prevent me from being able to tweet about the irritating children screaming on the train.

Day Three: Saturday

It is a bit weird feeling completely cut off from the outside world; I am not chained to my computer like I am at work and I am not allowed to constantly be on my laptop like a teen hacker.

It was nice though – a real detox. We went on a walk with our spaniel in the countryside near the Chiltern Hills. I had to properly talk to everyone, instead of constantly refreshing Twitter, which was novel.

I do feel like my attention span is improving every day, but I equally feel anchorless and lost without having any way of contacting anyone, or documenting my life.

….

Day Seven: Wednesday

My attention span and patience have grown somewhat, and I have noticed I daydream and have thoughts independent of Twitter far more often than usual.

Read the entire account here.

To Another Year

With the onset of a New Year, let me put aside humanity’s destructive failings for a moment to celebrate one of our most fundamental positive traits: our need to know — how things work, how and why we’re here, and whether we’re alone. We are destined to explore, discover and learn more about ourselves and our surroundings. I hope and trust that 2016 will bring us yet more knowledge (and more really cool images). We are fortunate indeed.

pluto-psychedelic

Image: New Horizons scientists’ false-color image of Pluto, created from data collected by the spacecraft’s Ralph/MVIC color camera on July 14, 2015 from a range of 22,000 miles. Courtesy: NASA/JHUAPL/SwRI.

pluto-mountainousshoreline

Image: Highest-resolution image from NASA’s New Horizons spacecraft shows huge blocks of Pluto’s water-ice crust jammed together in the informally named al-Idrisi mountains. The mountains end abruptly at the shoreline of the informally named Sputnik Planum, where the soft, nitrogen-rich ices of the plain form a nearly level surface, broken only by the fine trace work of striking, cellular boundaries. Courtesy: NASA/JHUAPL/SwRI.

Back to the Future

France_in_XXI_Century_Latest_fashion

Just over a hundred years ago, at the turn of the 20th century, Jean-Marc Côté and some of his fellow French artists were commissioned to imagine what the world would look like in 2000. Their colorful sketches and paintings portrayed some interesting inventions, though all seem grounded in familiar principles and incremental innovations — mechanical helpers, ubiquitous propellers and wings. Interestingly, none of these artist-futurists imagined a world beyond Victorian dress, gender inequality and wars. But these are gems nonetheless.

France_in_XXI_Century._Air_cab

Some of their works found their way into cigar boxes and cigarette cases; others were exhibited at the 1900 World Exhibition in Paris. My three favorites: a Tailor of the Latest Fashion, the Aero-cab Station and the Whale Bus. See the full complement of these remarkable futuristic visions at the Public Domain Review, and check out the House Rolling Through the Countryside and At School.

I suspect our contemporary futurists — born in the late 20th or early 21st century — would fall prey to the same narrow visions if asked to sketch our planet in 3000. But despite the undoubted wealth of new gadgets and gizmos a thousand years from now, the real test would be whether their imagined worlds are at peace, with equality for all.
France_in_XXI_Century_Whale_bus

Images courtesy of the Public Domain Review, a project of the Open Knowledge Foundation. Public Domain.

The American Dream: Socialism for the Rich Or Capitalism For All?

You know that something’s up when the Wall Street Journal begins running op-ed columns that question capitalism. Has even the WSJ now realized that American capitalism thrives by two sets of rules: one for the rich socialists, the crony capitalists who manipulate markets (and politics), invent loopholes, skirt regulation, and place enormous bets with others’ wealth; the other for the poor capitalists, who innovate, work hard and create tangible value?

Now even Bill Gates — the world’s richest citizen — tells us that only socialism can address climate change! It’s clear that the continued appeal of Bernie Sanders to those on the political left, and the likes of Ben Carson and that-other-guy-with-the-strange-hair-and-big-mouth-and-even-bigger-ego to those on the right, highlights significant public distaste for our societal inequality and political morass. At times I feel as if I’ve been transported to a parallel universe, a la 1Q84, where the 99 percent will rise and finally realize meaningful change through social and economic justice. Can it really happen?

Nah! It’ll never happen. The tentacles that connect politicians and their donors are too intertwined; the pathways that connect the billionaires, oligarchs, plutocrats and corporations to lobbyists to regulators to lawmakers are too well-protected, too ingrained. Until these links are broken, the rich will continue to get richer and the poor will continue to dream. So, for the time being, remember: the rich are just too big to fail.

From the WSJ:

If you want to find people who still believe in “the American dream”—the magnetic idea that anyone can build a better life for themselves and their families, regardless of circumstance—you might be best advised to travel to Mumbai. Half of the Indians in a recent poll agreed that “the next generation will probably be richer, safer and healthier than the last.”

The Indians are the most sanguine of the more than 1,000 adults in each of seven nations surveyed in early September by the market-research firm YouGov for the London-based Legatum Institute (with which I am affiliated). The percentage of optimists drops to 42 in Thailand, 39 in Indonesia, 29 in Brazil, 19 in the U.K. and 15 in Germany. But it isn’t old-world Britain or Germany that is gloomiest about the future. It is new-world America, where only 14% of those surveyed think that life will be better for their children, and 52% disagree.

The trajectory of the world doesn’t justify this pessimism. People are living longer on every continent. They’re doing less arduous, backbreaking work. Natural disasters are killing fewer people. Fewer crops are failing. Some 100,000 people are being lifted out of poverty every day, according to World Bank data.

Life is also getting better in the U.S., on multiple measures, but the survey found that 55% of Americans think the “rich get richer” and the “poor get poorer” under capitalism. Sixty-five percent agree that most big businesses have “dodged taxes, damaged the environment or bought special favors from politicians,” and 58% want restrictions on the import of manufactured goods.

Friends of capitalism cannot be complacent, however. The findings of the survey underline the extent to which people think that wealth creation is a dirty business. When big majorities in so many major nations think that big corporations behave unethically and even illegally, it is a system that is always vulnerable to attack from populist politicians.

John Mackey, the CEO of Whole Foods, has long worried about the sustainability of the free enterprise system if large numbers of voters come to think of businesses as “basically a bunch of psychopaths running around trying to line their own pockets.” If the public doesn’t think business is fundamentally good, he has argued, then business is inviting destructive regulation. If, by contrast, business shows responsibility to all its stakeholders—customers, employees, investors, suppliers and the wider community—“the impulse to regulate and control would be lessened.”

Read the entire column here.

Barbie the Surveillance Officer

Google-search-hello-barbie

There are probably any number of reasons that you, and your kids, may choose to steer clear of Barbie (the Mattel doll, that is). Detractors will point to a growing list of problems for which Barbie is to blame, including: gender stereotyping, body image distortion, vacuum cleaner accidents with her fake hair, eating disorders, and poor self-esteem. However, it may not have occurred to you that the latest incarnation of the doll — the interactive Hello Barbie — could also be spying on you and your family. Could the CIA, NSA or MI5 be keeping tabs on you through your kid’s doll? Creepy, and oh, she’s still far too thin.

From the Guardian:

Mattel’s latest Wi-Fi enabled Barbie doll can easily be hacked to turn it into a surveillance device for spying on children and listening into conversations without the owner’s knowledge.

The Hello Barbie doll is billed as the world’s first “interactive doll” capable of listening to a child and responding via voice, in a similar way to Apple’s Siri, Google’s Now and Microsoft’s Cortana.

It connects to the internet via Wi-Fi and has a microphone to record children and send that information off to third parties for processing before responding with natural language responses.

But US security researcher Matt Jakubowski discovered that when connected to Wi-Fi the doll was vulnerable to hacking, allowing him easy access to the doll’s system information, account information, stored audio files and direct access to the microphone.

Jakubowski told NBC: “You can take that information and find out a person’s house or business. It’s just a matter of time until we are able to replace their servers with ours and have her say anything we want.”

Once Jakubowski took control of where the data was sent the snooping possibilities were apparent. The doll only listens in on a conversation when a button is pressed and the recorded audio is encrypted before being sent over the internet, but once a hacker has control of the doll the privacy features could be overridden.

It was the ease with which the doll was compromised that was most concerning. The information stored by the doll could allow hackers to take over a home Wi-Fi network and from there gain access to other internet connected devices, steal personal information and cause other problems for the owners, potentially without their knowledge.

Read the entire story here.

Image courtesy of Google Search.

A Positive Female Role Model

Margaret_Hamilton_in_action

Our society does a better, but still poor, job of promoting positive female role models. Most of our — let’s face it — male-designed images of women fall into rather narrowly defined stereotypical categories: nurturing care-giver, stay-at-home soccer mom, matriarchal office admin, overly bossy middle-manager, vacuous reality-TV spouse or scantily clad vixen.

But every now and then the media seems to discover another unsung female who made significant contributions in a male-dominated and male-overshadowed world. Take the case of computer scientist Margaret Hamilton — she developed the on-board flight software for the Apollo space program while director of the Software Engineering Division of the MIT Instrumentation Laboratory. Aside from developing technology that put people on the Moon, she helped NASA understand the true power of software and the consequences of software-driven technology.

From Wired:

Margaret Hamilton wasn’t supposed to invent the modern concept of software and land men on the moon. It was 1960, not a time when women were encouraged to seek out high-powered technical work. Hamilton, a 24-year-old with an undergrad degree in mathematics, had gotten a job as a programmer at MIT, and the plan was for her to support her husband through his three-year stint at Harvard Law. After that, it would be her turn—she wanted a graduate degree in math.

But the Apollo space program came along. And Hamilton stayed in the lab to lead an epic feat of engineering that would help change the future of what was humanly—and digitally—possible.

As a working mother in the 1960s, Hamilton was unusual; but as a spaceship programmer, Hamilton was positively radical. Hamilton would bring her daughter Lauren by the lab on weekends and evenings. While 4-year-old Lauren slept on the floor of the office overlooking the Charles River, her mother programmed away, creating routines that would ultimately be added to the Apollo’s command module computer.

“People used to say to me, ‘How can you leave your daughter? How can you do this?’” Hamilton remembers. But she loved the arcane novelty of her job. She liked the camaraderie—the after-work drinks at the MIT faculty club; the geek jokes, like saying she was “going to branch left minus” around the hallway. Outsiders didn’t have a clue. But at the lab, she says, “I was one of the guys.”

Then, as now, “the guys” dominated tech and engineering. Like female coders in today’s diversity-challenged tech industry, Hamilton was an outlier. It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.

As Hamilton’s career got under way, the software world was on the verge of a giant leap, thanks to the Apollo program launched by John F. Kennedy in 1961. At the MIT Instrumentation Lab where Hamilton worked, she and her colleagues were inventing core ideas in computer programming as they wrote the code for the world’s first portable computer. She became an expert in systems programming and won important technical arguments. “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West. There was no course in it. They didn’t teach it,” Hamilton says.

This was a decade before Microsoft and nearly 50 years before Marc Andreessen would observe that software is, in fact, “eating the world.” The world didn’t think much at all about software back in the early Apollo days. The original document laying out the engineering requirements of the Apollo mission didn’t even mention the word software, MIT aeronautics professor David Mindell writes in his book Digital Apollo. “Software was not included in the schedule, and it was not included in the budget.” Not at first, anyhow.

Read the entire story here.

Image: Margaret Hamilton during her time as lead Apollo flight software designer. Courtesy NASA. Public Domain.

Forget The Millennials — It’s Time For Generation K

Blame fickle social scientists. After the baby boomers, the most researched generation has been the millennials — so-called because they came of age at the turn of the century. We know what millennials like to eat and drink, how they dress, their politics; we know about their proclivity for sharing, their need for meaning and fun at work; we know they need attention and constant feedback. In fact, we have learned so much — and perhaps so little — from the thousands of often-conflicting research studies of millennials that some researchers have decided to move on to new blood. Yes, it’s time to tap another rich vein of research material — Generation K. But I’ll stop after relating what the “K” in Generation K means, and let you form your own conclusions.

[tube]n-7K_OjsDCQ[/tube]

Generation K is named for Katniss, as in the Hunger Games‘ hero Katniss Everdeen. That’s right: if you were born between 1995 and 2002 then, according to economist Noreena Hertz, you are Gen-Katniss.

From the Guardian:

The brutal, bleak series that has captured the hearts of a generation will come to a brutal, bleak end in November when The Hunger Games: Mockingjay – Part 2 arrives in cinemas. It is the conclusion of the Hunger Games saga, which has immersed the young in a cleverly realised world of trauma, violence, mayhem and death.

For fans of Suzanne Collins’s trilogy about a young girl, Katniss Everdeen, forced to fight for survival in a country ruled by fear and fuelled by televised gladiatorial combat, this is the moment they have been waiting for.

Since the first book in the trilogy was published in 2008, Collins’s tale has sold more than 65 million copies in the US alone. The films, the first of which was released in 2012, have raked in more than $2bn worldwide at the box office and made a global star of their leading lady, Jennifer Lawrence, who plays the increasingly traumatised Katniss with a perfect mix of fury and resignation. For the huge appeal of The Hunger Games goes deeper than the fact that it’s an exciting tale well told. The generation who came to Katniss as young teens and have grown up ploughing through the books and queuing for the movies respond to her story in a particularly personal way.

As to why that might be, the economist and academic Noreena Hertz, who coined the term Generation K (after Katniss) for those born between 1995 and 2002, says that this is a generation riddled with anxiety, distrustful of traditional institutions from government to marriage, and, “like their heroine Katniss Everdeen, [imbued with] a strong sense of what is right and fair”.

“I think The Hunger Games resonates with them so much because they are Katniss navigating a dark and difficult world,” says Hertz, who interviewed 2,000 teenagers from the UK and the US about their hopes, fears and beliefs, concluding that today’s teens are shaped by three factors: technology, recession and coming of age in a time of great unease.

“This is a generation who grew up through 9/11, the Madrid bombings, the London bombings and Islamic State terrors. They see danger piped down their smartphones and beheadings on their Facebook page,” she says. “My data showed very clearly how anxious they are about everything from getting into debt or not getting a job, to wider issues such as climate change and war – 79% of those who took part in my survey worried about getting a job, 72% worried about debt, and you have to remember these are teenagers.

“In previous generations teenagers did not think in this way. Unlike the first-era millennials [who Hertz classes as those aged between 20 and 30] who grew up believing that the world was their oyster and ‘Yes we can’, this new generation knows the world is an unequal and harsh place.”

Writer and activist Laurie Penny, herself a first-era millennial at the age of 29, agrees. “I think what today’s young people have grasped that my generation didn’t get until our early 20s, is that adults don’t know everything,” she says. “They might be trying their best but they don’t always have your best interests at heart. The current generation really understands that – they’re more politically engaged and they have more sense of community because they’re able to find each other easily thanks to their use of technology.”

One of the primary appeals of the Hunger Games trilogy is its refusal to sugarcoat the scenarios Katniss finds herself in. In contrast to JK Rowling’s Harry Potter series, there are no reliable adult figures to dispense helpful advice and no one in authority she can truly trust (notably even the most likeable adult figures in the books tend to be flawed at best and fraudulent at worst). Even her friends may not always have her back, hard as they try – Dumbledore’s Army would probably find themselves taken out before they’d uttered a single counter-curse in the battlegrounds of Panem. At the end of the day, Katniss can only rely on one person, herself.

“Ultimately, the message of the Hunger Games is that everything’s not going to be OK,” says Penny. “One of the reasons Jennifer Lawrence is so good is because she lets you see that while Katniss is heroic, she’s also frightened all of the time. She spends the whole story being forced into situations she doesn’t want to be in. Kids respond because they can imagine what it’s like to be terrified but know that you have to carry on.”

It’s incontestable that we live in difficult times and that younger generations in particular may be more acutely aware that things aren’t improving any time soon, but is it a reach to say that fans of the Hunger Games are responding as much to the world around them as to the books?

Read the entire story here.

Video: The Hunger Games: Mockingjay Part 2 Official Trailer – “We March Together”. Courtesy of the Hunger Games franchise.

Perchance Art Thou Smitten by Dapper Hipsters? Verily Methinks

Linguistic-trends-2015

As the (mostly) unidirectional tide of cultural influence flows from the U.S. to the United Kingdom, the English mother tongue is becoming increasingly (and distressingly, I might add) populated by Americanisms: trash instead of rubbish, fries not chips, deplane instead of disembark, shopping cart instead of trolley, bangs rather than fringe, period instead of full stop. And there’s more: 24/7, heads-up, left-field, normalcy, a savings of, deliverable, the ask, winningest.

All, might I say, utterly cringeworthy.

Yet there may be a slight glimmer of hope, courtesy of the hipster generation. Hipsters, you see, crave an authentic, artisanal experience — think goat cheese and bespoke hats — and that craving seems to extend to language. So, in 2015, compared with a mere decade earlier, you’re more likely to hear some of the following words, more usually associated with an archaic, even Shakespearean, era:

perchance, mayhaps, parlor, amidst, amongst, whilst, unbeknownst, thou, thee, ere, hath

I’m all for it. My only hope now is that these words will flow against the tide and into the U.S. to repair some of the earlier linguistic deforestation. Methinks I’ll put some of these to immediate, good use.

From the Independent:

Hipsters are famous for their love of all things old-fashioned: 19th Century beards, pickle-making, Amish outerwear, naming their kids Clementine or Atticus. Now, they may be excavating archaic language, too.

As Chi Luu points out at JSTOR Daily  — the blog of a database of academic journals, what could be more hipster than that? — old-timey words like bespoke, peruse, smitten and dapper appear to be creeping back into the lexicon.

This data comes from Google’s Ngram viewer, which charts the frequencies of words appearing in printed sources between 1800 and 2012.

Google’s Ngram shows that lots of archaic words appear to be resurfacing — including gems like perchance, mayhaps and parlor.

The same trend is visible for words like amongst, amidst, whilst and unbeknownst, which are archaic forms of among, amid, while and unknown.

Read the story in its entirety here.

Image courtesy of Google’s Ngram viewer / Independent.
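At its heart, the Ngram viewer computes a per-year relative frequency: occurrences of a word divided by all the words printed that year. Here is a minimal sketch of that calculation over a toy corpus (the snippets and years below are invented, and Google's pipeline over millions of scanned books is vastly more elaborate):

```python
from collections import Counter, defaultdict

# Toy corpus of (publication year, text) pairs. The real Ngram data
# covers millions of scanned books; these snippets are invented.
corpus = [
    (1810, "perchance the parlor was amongst the finest whilst we waited"),
    (1810, "unbeknownst to thee the tailor was dapper and bespoke"),
    (1960, "the meeting ran long while we waited among friends"),
    (2010, "a bespoke parlor amongst artisanal goat cheese whilst we brewed"),
]

targets = {"perchance", "amongst", "whilst", "bespoke", "parlor"}

tokens_per_year = defaultdict(int)      # total words printed per year
hits_per_year = defaultdict(Counter)    # target-word counts per year

for year, text in corpus:
    words = text.lower().split()
    tokens_per_year[year] += len(words)
    for w in words:
        if w in targets:
            hits_per_year[year][w] += 1

# Relative frequency: share of all tokens that year, like the Ngram y-axis.
for year in sorted(tokens_per_year):
    for w, n in sorted(hits_per_year[year].items()):
        print(f"{year}  {w:12s}  {n / tokens_per_year[year]:.3f}")
```

Normalizing by the year's total output is the important bit: it means a word can "rise" even as the absolute number of books explodes, which is exactly the resurfacing the charts show.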

Science, Politics and Experts

NOAA-climate-data-trend

Nowhere is the prickly relationship between science and politics more evident than in the climate change debate. The skeptics, many of whom seem to reside right of center in Congress, disbelieve any and all causal links between human activity and global warming. Meanwhile, the fossil-fuel-burning truckloads of data continue to show upward trends in all measures, from mean sea level and average temperature to more frequent severe weather and longer droughts. Yet the self-proclaimed, non-expert policy-makers in Congress continue to disbelieve the science, the data, the analysis and the experts.

But, could the tide be turning? The Republican Chair of the House Committee on Science, Space, and Technology, Texas Congressman Lamar Smith, wants to see the details behind the ongoing analysis that shows an ever-warming planet; he’s actually interested in seeing the raw climate data. Joy, at last! Representative Smith has decided to become an expert, right? Wrong. He’s trawling for evidence of data tampering and biased peer review — science, after all, is just one great, elitist conspiracy theory.

One has to admire the Congressman’s tenacity. He and his herd of climate-skeptic apologists will continue to fiddle while Rome ignites and burns. But I suppose the warming of our planet is a good thing for Congressman Smith and his disbelieving (in science) followers, for it may well portend the End of Days that they believe (in biblical prophecy) and desire so passionately.

Oh, and the fact that Congressman Lamar Smith is Chair of the House Committee on Science, Space, and Technology?! Well, that will have to remain the subject of another post. What next, Donald Trump as head of the ACLU?

From ars technica:

In his position as Chair of the House Committee on Science, Space, and Technology, Texas Congressman Lamar Smith has spent much of the last few years pressuring the National Science Foundation to ensure that it only funds science he thinks is worthwhile and “in the national interest.” His views on what’s in the national interest may not include the earth sciences, as Smith rejects the conclusions of climate science—as we saw first hand when we saw him speak at the Heartland Institute’s climate “skeptic” conference earlier this year.

So when National Oceanic and Atmospheric Administration (NOAA) scientists published an update to the agency’s global surface temperature dataset that slightly increased the short-term warming trend since 1998, Rep. Smith was suspicious. The armada of contrarian blog posts that quickly alleged fraud may have stoked these suspicions. But since, again, he’s the chair of the House Committee on Science, Space, and Technology, Rep. Smith was able to take action. He’s sent a series of requests to NOAA, which Ars obtained from Committee staff.

The requests started on July 14 when Smith wrote to the NOAA about the paper published in Science by Thomas Karl and his NOAA colleagues. The letter read, in part, “When corrections to scientific data are made, the quality of the analysis and decision-making is brought into question. The conclusions brought forth in this new study have lasting impacts and provide the basis for further action through regulations. With such broad implications, it is imperative that the underlying data and the analysis are made publicly available to ensure that the conclusions found and methods used are of the highest quality.”

Rep. Smith requested that the NOAA provide his office with “[a]ll data related to this study and the updated global datasets” along with the details of the analysis and “all documents and communications” related to part of that analysis.

In the publication at issue, the NOAA researchers had pulled in a new, larger database of weather station measurements and updated to the latest version of an ocean surface measurement dataset. The ocean data had new corrections for the different methods ships have used over the years to make temperature measurements. Most significantly, they estimated the difference between modern buoy stations and older thermometer-in-a-bucket measurements.

All the major temperature datasets go through revisions like these, as researchers are able to pull in more data and standardize disparate methods more effectively. Since the NOAA’s update, for example, NASA has pulled the same ocean temperature database into its dataset and updated its weather station database. The changes are always quite small, but they can sometimes alter estimates of some very short-term trends.

The NOAA responded to Rep. Smith’s request by pointing him to the relevant data and methods, all of which had already been publicly available. But on September 10, Smith sent another letter. “After review, I have additional questions related to the datasets used to adjust historical temperature records, as well as NOAA’s practices surrounding its use of climate data,” he wrote. The available data wasn’t enough, and he requested various subsets of the data—buoy readings separated out, for example, with both the raw and corrected data provided.

Read the entire story here.

Image: NOAA temperature record. Courtesy of NOAA.
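The buoy correction at the center of the dispute is conceptually simple: estimate the systematic offset between the two instrument types from periods when both were measuring, remove it, then fit the trend. The sketch below demonstrates the general idea on synthetic numbers; it is not NOAA's algorithm, and every value in it is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual sea-surface temperature anomalies (deg C), 1998-2014,
# with a true warming trend of 0.01 C per year. All numbers are invented.
years = np.arange(1998, 2015)
true_anomaly = 0.01 * (years - 1998)

# Ship engine-intake readings run warm relative to buoys (the published
# NOAA figure is roughly 0.12 C), and the share of buoys grows over time.
SHIP_BIAS = 0.12
buoy_share = np.linspace(0.1, 0.9, years.size)

ship = true_anomaly + SHIP_BIAS + rng.normal(0, 0.03, years.size)
buoy = true_anomaly + rng.normal(0, 0.03, years.size)

# Naive merge: as buoys take over, their cooler readings drag the series
# down, masking part of the real warming trend.
naive = buoy_share * buoy + (1 - buoy_share) * ship

# Corrected merge: estimate the ship-minus-buoy offset from co-located
# readings, lift the buoy records onto the ship reference, then merge.
offset = np.mean(ship - buoy)
corrected = buoy_share * (buoy + offset) + (1 - buoy_share) * ship

for label, series in (("naive", naive), ("corrected", corrected)):
    slope = np.polyfit(years, series, 1)[0]
    print(f"{label:9s} trend: {slope * 10:+.3f} C per decade")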

Your Job is Killing You

Women_mealtime_st_pancras_workhouse

Many of us complain about the daily stresses of our jobs, our bosses, even our coworkers. We bemoan the morning commute and the work we increasingly bring home to finish in the evening. Many of us can be heard to say, “This job is killing me!” Metaphorically, of course.

Well, researchers at Stanford and Harvard now find that in some cases your job is, quite literally, killing you. This may seem self-evident, but the data show that workers with less education are significantly more likely to be employed in jobs that are more stressful and dangerous and that have less healthy workplace practices. This, in turn, leads to a significantly lower average life span than that of workers with higher educational attainment. The researchers measured typical employment-related stressors: unemployment, layoffs, absence of employer-subsidized health insurance, shift work, long working hours, job insecurity and work-family conflict. The less education a worker has, the more likely he or she is to suffer a greater burden from one or more of these stressors.

Looks like we’re gradually reverting to well-tested principles of Victorian worker exploitation. Check out more details from the study here.
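The study’s basic move, as I understand it, is to attach a relative risk to each workplace stressor, weight it by how common the stressor is within a demographic group, and convert the extra mortality into years of life lost. That logic can be sketched with a toy life-table calculation; all inputs below are invented for illustration, not the paper's estimates:

```python
# Toy life-table sketch: convert a workplace stressor's relative risk and
# prevalence into years of life lost. Every input here is invented; the
# study estimated such quantities from survey and epidemiological data.

def life_expectancy(mortality_by_age):
    """Expected further years of life, given annual mortality rates."""
    alive, years = 1.0, 0.0
    for q in mortality_by_age:        # q = probability of dying this year
        years += alive * (1 - q / 2)  # those who die mid-year live ~half of it
        alive *= 1 - q
    return years

# Hypothetical baseline mortality from age 25 to 99, rising ~9% per year.
baseline = [0.001 * 1.09 ** (age - 25) for age in range(25, 100)]

RELATIVE_RISK = 1.25   # hypothetical hazard ratio for, say, job insecurity
PREVALENCE = 0.40      # hypothetical share of one demographic group exposed

exposed = [min(q * RELATIVE_RISK, 1.0) for q in baseline]

le_base = life_expectancy(baseline)
le_exposed = life_expectancy(exposed)

# Group-level burden: only the exposed fraction pays the penalty.
group_loss = PREVALENCE * (le_base - le_exposed)
print(f"unexposed life expectancy at 25: {le_base:.1f} more years")
print(f"exposed life expectancy at 25:   {le_exposed:.1f} more years")
print(f"average years lost in the group: {group_loss:.2f}")
```

Sum that group-level loss over each of the study’s stressors and demographic cells and you get the headline figure: for the worst-off groups, nearly three years gone.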

From Washington Post:

People often like to groan about how their job is “killing” them. Tragically, for some groups of people in the U.S., that statement appears to be true.

A new study by researchers at Harvard and Stanford has quantified just how much a stressful workplace may be shaving off of Americans’ life spans. It suggests that the amount of life lost to stress varies significantly for people of different races, educational levels and genders, and ranges up to nearly three years of life lost for some groups.

Past research has shown an incredible variation in life expectancy around the United States, depending on who you are and where you live. Mapping life expectancy around the nation by both county of residence and race, you can see that people in some parts of the U.S. live as many as 33 years longer on average than people in other parts of the country, the researchers say.

Those gaps appear to be getting worse, as the wealthy extend their life spans and other groups are stagnant. One study found that men and women with fewer than 12 years of education had life expectancies that were still on par with most adults in the 1950s and 1960s — suggesting the economic gains of the last few decades have gone mostly to more educated people. The financial crisis and subsequent recession, which put many people in economic jeopardy, may have worsened this effect.

There are lots of reasons that people with lower incomes and educations tend to have lower life expectancies: differences in access to health care, in exposure to air and water pollution, in nutrition and health care early in life, and in behaviors, such as smoking, exercise and diet. Past research has also shown that job insecurity, long hours, heavy demands at work and other stresses can also cut down on a worker’s life expectancy by taking a heavy toll on a worker’s health. (If you work in an office, here are some exercises you might try to prevent this.)

But researchers say this is the first study to look at the ways that a workplace’s influence on life expectancy specifically break down by racial and educational lines.

To do their analysis, they divided people into 18 different groups by race, education and sex. They then looked at 10 different workplace factors — including unemployment and layoffs, the absence of health insurance, shift work, long working hours, job insecurity and work-family conflict — and estimated the effect that each would have on annual mortality and life expectancy.

The data show that people with less education are much more likely to end up in jobs with more unhealthy workplace practices that cut down on one’s life span. People with the highest educational attainment were less affected by workplace stress than people with the least education, the study says.

Read the entire story here.

Image: Women mealtime at St Pancras workhouse, London. Courtesy: Peter Higginbothom. Licensed under Public Domain via Commons.