Tag Archives: change

FOMO Reshaping You and Your Network

Fear of missing out (FOMO) and other negative feelings greatly outweigh the good ones generated by online social networks. The phenomenon is widespread and well-documented. Compound this with the unintuitive observation that your online friends will, on average, have more friends and be more successful than you, and you have a recipe for a growing, deep-seated inferiority complex. Add other behavioral traits that are peculiar to, or exaggerated by, online social networks and you have a more fundamental recipe, one that threatens the very fabric of the network itself. Just consider how online trolling, status lurking, persona curation, passive monitoring, stalking and deferred (dis-)liking are refashioning both our behaviors and the networks themselves.

From ars technica:

I found out my new college e-mail address in 2005 from a letter in the mail. Right after opening the envelope, I went straight to the computer. I was part of a LiveJournal group made of incoming students, and we had all been eagerly awaiting our college e-mail addresses, which had a use above and beyond corresponding with professors or student housing: back then, they were required tokens for entry to the fabled thefacebook.com.

That was nine years ago, and Facebook has now been in existence for 10. But even in those early days, Facebook’s cultural impact couldn’t be overstated. A search for “Facebook” on Google Scholar alone now produces 1.2 million results from 2006 on; “Physics” only returns 456,000.

But in terms of presence, Facebook is flopping around a bit now. The ever-important “teens” despise it, and it’s not the runaway success, happy addiction, or awe-inspiring source of information it once was. We’ve curated our identities so hard and had enough experiences with unforeseen online conflict that Facebook can now feel more isolating than absorbing. But what we are dissatisfied with is what Facebook has been, not what it is becoming.

Even if the grand sociological experiment that was Facebook is now running a little dry, the company knows this—which is why it’s transforming Facebook into a completely different entity. And the cause of all this built-up disarray that’s pushing change? It’s us. To prove it, let’s consider the social constructs and weirdnesses Facebook gave rise to, how they ultimately undermined the site, and how these ideas are shaping Facebook into the company it is now and will become.

Cue that Randy Newman song

Facebook arrived late to the concept of online friending, long after researchers started wondering about the structure of these social networks. What Facebook did for friending, especially reciprocal friending, was write it so large that it became a common concern. How many friends you had, who did and did not friend you back, and who should friend each other first all became things that normal people worried about.

Once Facebook opened beyond colleges, it became such a one-to-one representation of an actual social network that scientists started to study it. They applied social theories like those of weak ties or identity creation to see how they played out sans, or in supplement to, face-to-face interactions.

In a 2007 study, when Facebook was still largely campus-bound, a group of researchers said that Facebook “appears to play an important role in the process by which students form and maintain social capital.” They were using it to keep in touch with old friends and “to maintain or intensify relationships characterized by some form of offline connection.”

This sounds mundane now, since Facebook is so integrated into much of our lives. Seeing former roommates or childhood friends posting updates to Facebook feels as commonplace as literally seeing them nearly every day back when we were still roommates at 20 or friends at eight.

But the ability to keep tabs on someone without having to be proactive about it—no writing an e-mail, making a phone call, etc.—became the unique selling factor of Facebook. Per the 2007 study above, Facebook became a rich opportunity for “convert[ing] latent ties into weak ties,” connections that are valuable because they are with people who are sufficiently distant socially to bring in new information and opportunities.

Some romantic pixels have been spilled about the way no one is ever lost to anyone anymore; most people, including ex-lovers, estranged family members, and missed connections, are only a Wi-Fi signal away.

“Modern technology has made our worlds smaller, but perhaps it also has diminished life’s mysteries, and with them, some sense of romance,” writes David Vecsey in The New York Times. Vecsey cites a time when he tracked down a former lover “across two countries and an ocean,” something he would not have done in the absence of passive social media monitoring. “It was only in her total absence, in a total vacuum away from her, that I was able to appreciate the depth of love I felt.”

The art of the Facebook-stalk

While plenty of studies have been conducted on the productive uses of Facebook—forming or maintaining weak ties, supplementing close relationships, or fostering new, casual ones—there are plenty that also touch on the site as a means for passive monitoring. Whether it was someone we’d never met, a new acquaintance, or an unrequited infatuation, Facebook eventually had enough breadth that you could call up virtually anyone’s profile, if only to see how fat they’ve gotten.

One study referred to this process as “social investigation.” We developed particular behaviors to avoid creating suspicion: do not “like” anything by the object of a stalking session, or if we do like it, don’t “like” too quickly; be careful not to type a name we want to search into the status field by accident; set an object of monitoring as a “close friend,” even if they aren’t, so their updates show up without fail; friend their friends; surreptitiously visit profile pages multiple times a day in case we missed anything.

This passive monitoring is one of the more utilitarian uses of Facebook. It’s also one of the most addictive. The (fictionalized) movie The Social Network closes with Facebook’s founder, Mark Zuckerberg, gazing at the Facebook profile of a high-school crush. Facebook did away with the necessity of keeping tabs on anyone. You simply had all of the tabs, all of the time, with the most recent information whenever you wanted to look at them.

The book Digital Discourse cites a classic example of the Facebook stalk in an IM conversation between two teenagers:

“I just saw what Tanya Eisner wrote on your Facebook wall. Go to her house,” one says.
“Woah, didn’t even see that til right now,” replies the other.
“Haha it looks like I stalk you… which I do,” says the first.
“I stalk u too its ok,” comforts the second.

But even innocent, casual information recon in the form of a Facebook stalk can rub us the wrong way. Any instance of a Facebook interaction that ends with an unexpected third party’s involvement can taint the rest of users’ Facebook behavior, making us feel watched.

Digital Discourse states that “when people feel themselves to be the objects of stalking, creeping, or lurking by third parties, they express annoyance or even moral outrage.” It cites an example of another teenager who gets a wall post from a person she barely knows, and it explains something she wrote about in a status update. “Don’t stalk my status,” she writes in mocking command to another friend, as if talking to the interloper.

You are who you choose to be

“The advent of the Internet has changed the traditional conditions of identity production,” reads a study from 2008 on how people presented themselves on Facebook. People had been curating their presences online for a long time before Facebook, but the fact that Facebook required real names and, for a long time after its inception, association with an educational institution made researchers wonder if it would make people hew a little closer to reality.

But beyond the bounds of being tied to a real name, users still projected an idealized self to others: a type of “possible self,” or many possible selves, depending on their sharing settings. Rather than try to describe themselves to others, users projected a sort of aspirational identity.

People were more likely to associate themselves with cultural touchstones, like movies, books, or music, than really identify themselves. You might not say you like rock music, but you might write Led Zeppelin as one of your favorite bands, and everyone else can infer your taste in music as well as general taste and coolness from there.

These identity proxies also became vectors for seeking approval. “The appeal is as much to the likeability of my crowd, the desirability of my boyfriend, or the magic of my music as it is to the personal qualities of the Facebook users themselves,” said the study. The authors also noted that, for instance, users tended to post photos of themselves mostly in groups in social situations. Even the profile photos, which would ostensibly have a single subject, were socially styled.

As the study concluded, “identity is not an individual characteristic; it is not an expression of something innate in a person, it is rather a social product, the outcome of a given social environment and hence performed differently in varying contexts.” Because Facebook was so susceptible to this “performance,” so easily controlled and curated, it quickly became less about real people and more about highlight reels.

We came to Facebook to see other real people, but everyone, even casual users, saw it could be gamed for personal benefit. Inflicting our groomed identities on each other soon became its own problem.

Fear of missing out

A long-time problem of social networks has been that the bad feelings they can generate are greatly disproportional to good ones.

In strict terms of self-motivation, posting something and getting a good reception feels good. But most of Facebook use is watching other people post about their own accomplishments and good times. For a social network of 300 friends with an even distribution of auspicious life events, you are seeing 300 times as many good things happen to others as happen to you (of course, everyone has the same amount of good luck, but in bulk for the consumer, it doesn’t feel that way). If you were happy before looking at Facebook, or even after posting your own good news, you’re not now.

The feelings of inadequacy did start to drive people back to Facebook. Even in the middle of our own vacations, celebration dinners, or weddings, we might check Facebook during or after to compare notes and see if we really had the best time possible.

That feeling became known as FOMO, “fear of missing out.” As Jenna Wortham wrote in The New York Times, “When we scroll through pictures and status updates, the worry that tugs at the corners of our minds is set off by the fear of regret… we become afraid that we’ve made the wrong decision about how to spend our time.”

Even if you had your own great stuff to tell Facebook about, someone out there is always doing better. And Facebook won’t let you forget. The brewing feeling of inferiority means users don’t post about stuff that might be too lame. They might start to self-censor, and then the bar for what is worth the “risk” of posting rises higher and higher. As people stop posting, there is less to see, less reason to come back and interact, like, or comment on other people’s material. Ultimately, people, in turn, have less reason to post.

Read the entire article here.

Fast Fashion and Smartphones


Teen retail isn’t what it used to be. Once dominated by the likes of Aeropostale, Abercrombie and Fitch, and American Eagle, the sector is in a downward spiral. Many retail analysts place the blame on the internet. While discretionary income is down and unemployment is up among teens, two other key factors are driving the change: first, smartphones loaded with apps seem to matter more to a teen’s self-identity than an emblazoned tee-shirt; second, fast-fashion houses, such as H&M, can churn out fresh designs at a fraction of the cost thanks to fully integrated, on-demand supply chains. Perhaps the silver lining in all of this, if you could call it such, is that malls may soon become the hang-out for old-timers.

From the NYT:

Luring young shoppers into traditional teenage clothing stores has become a tough sell.

When 19-year-old Tsarina Merrin thinks of a typical shopper at some of the national chains, she doesn’t think of herself, her friends or even contemporaries.

“When I think of who is shopping at Abercrombie,” she said, “I think it’s more of people’s parents shopping for them.”

Sales are down across the shelves of many traditional teenage apparel retailers, and some analysts and others suggest that it’s not just a tired fashion sense causing the slump. The competition for teenage dollars, at a time of high unemployment within that age group, spans from more stores to shop in to more tempting technology.

And sometimes phones loaded with apps or a game box trump the latest in jeans.

Mainstays in the industry like Abercrombie & Fitch, American Eagle Outfitters and Aéropostale, which dominated teenage closets for years, have been among those hit hard.

The grim reports of the last holiday season have already proved punishing for senior executives at the helm of a few retailers. In a move that caught many analysts by surprise, the chief executive of American Eagle, Robert L. Hanson, announced he was leaving the company last week. And on Tuesday, Abercrombie announced it was making several changes to the company’s board and leadership, including separating the roles of chief executive and chairman.

Aside from those shake-ups, analysts are saying they do not expect much improvement in this retail sector any time soon.

According to a survey of analysts conducted by Thomson Reuters, sales at teenage apparel retailers open for more than a year, like Wet Seal, Zumiez, Abercrombie and American Eagle, are expected to be 6.4 percent lower in the fourth quarter over the previous period. That is worse than any other retail category.

“It’s enough to make you think the teen is going to be walking around naked,” said John D. Morris, an analyst at BMO Capital Markets. “What happened to them?”

Paul Lejuez, an analyst at Wells Fargo, said he and his team put out a note in May on the health of the teenage sector and department stores called “Watch Out for the Kid With the Cough.” (Aéropostale was the coughing teenager.) Nonetheless, he said, “We ended up being surprised just how bad things got so quickly. There’s really no sign of life anywhere among the traditional players.”

Causes are ticked off easily. Mentioned often is the high teenage unemployment rate, reaching 20.2 percent among 16- to 19-year-olds, far above the national rate of 6.7 percent.

Cheap fashion has also driven a more competitive market. So-called fast-fashion companies, like Forever 21 and H&M, which sell trendy clothes at low prices, have muscled into the space, while some department stores and discount retailers like T. J. Maxx now cater to teenagers, as well.

“You can buy a plaid shirt at Abercrombie that’s like $70,” said Daniela Donayre, 17, standing in a Topshop in Manhattan. “Or I can go to Forever 21 and buy the same shirt for $20.”

Online shopping, which has been roiling the industry for years, may play an especially pronounced role in the teenage sector, analysts say. A study of a group of teenagers released in the fall by Piper Jaffray found that more than three-fourths of young men and women said they shopped online.

Not only did teenagers grow up on the Internet, but it has shaped and accelerated fashion cycles. Things take off quickly and fade even faster, watched by teenagers who are especially sensitive to the slightest shift in the winds of a trend.

Matthew McClintock, an analyst at Barclays, pointed to Justin Bieber as an example.

“Today, if you saw that Justin Bieber got arrested drag-racing,” Mr. McClintock said, “and you saw in the picture that he had on a cool red shirt, then you can go online and find that cool red shirt and have it delivered to you in two days from some boutique in Los Angeles.

“Ten years ago, teens were dependent on going to Abercrombie & Fitch and buying from the select items that Mike Jeffries, the C.E.O., thought would be popular nine months ago.”

Read the entire story here.

Image courtesy of Google Search.

Of Shoons, Shakes and Slumgullions

One of the keys to the success of the English language is its flexibility — over time it has proven rather adept at borrowing and stealing from other languages. Of course, as the language adapts and evolves it sheds lesser used words and phrases. For writers this is a double-edged sword — new words enable an author to delve into the contemporary lexicon, but some beautiful old words fall out of favor and daily use.

From the New York Times:

A “slumgullion” is a stew of leftovers, and while the dish has been described as “watery,” the word itself is delectably unusual and juicily descriptive. Alas, you won’t find many people cooking up anything with that name these days, so we’re denied the pleasure of rolling the lovely sounds of slumgullion — let alone its more questionable flavors — on the tongue.

A certain kind of novelist — my kind — looks for opportunities to use such interesting bits of English, and one way to do that is to set a novel in the past. My predilection for stories of squalor and glitter, hysteria and moral complexity, led me most recently to 19th-century New York, which offers interesting parallels to the present-day city, and a dragon’s pile of linguistic loot. It’s an era recent enough that its speech is still comprehensible, but it’s sufficiently long ago to offer up lost words and expressions that reinvigorate language and make the past come alive.

The problem for a writer who has seized upon a story set in the past is how to create a narrative voice that conjures the atmosphere of its historical times, without alienating contemporary readers. It’s a complicated sort of ventriloquism. The worst perils and most intense attractions lie in dictionaries.

The Oxford English Dictionary, for example, guardian of the mother tongue, regularly offers up such treasures as “I’ll misguggle your thrapple! I’ll mashackerel ye to rights!” This dazzling way of saying, “I’ll choke you,” was written by the Scottish playwright James Bridie, in his 1930 play “The Anatomist,” using language first documented a hundred years earlier.

My favorite of all dictionaries is “The Secret Language of Crime,” a mother lode of forgotten words. This little volume was published in 1859 by the New York City police chief, George W. Matsell. Mr. Matsell was also the editor of a newspaper, The Police Gazette, which fed New Yorkers a steady diet of murder, rape, abduction and thievery.

He kept notes on the slang of thugs and criminals, and wrote up a guide, so his cops and reporters would know what the bad guys were talking about when they went on like this: “He told Jack as how Bill had flimped a yack, and pinched a swell of a spark-fawney.” In other words, “He told Jack that Bill had hustled a person, and obtained a watch, and also robbed a well-dressed gentleman of a diamond ring.”

According to Mr. Matsell, a “shickster” was a woman. “A shake” was a prostitute. A “shoon” was a lout. And that’s just three words in the “sh” section. His “vocabulum” or “Rogue’s Lexicon” is a mash-up of all the languages that have made American English the vibrant and evolving idiom we know, with words derivative of Irish, Italian, Yiddish, Spanish, German. “Shickster,” for example, is probably how Chief Matsell heard “shiksa,” the Yiddish word for a non-Jewish woman. A “fen” he defines as “a common woman” but in Ireland, a “fen” is a boggy marsh — which gives us a good idea of how an insult seeds itself and germinates on new soil.

But woe to the novelist who succumbs entirely to such specialized vernacular, whether it be a “rogue’s lexicon,” modern street slang or regional dialect. There’s no faster way to alienate a reader than to write, as Matsell did in his lexicon: “Jack speeled to the crib, when he found Johnny Doyle had been pulling down sawney for grub.” (Translation: “Jack fled home and saw that Johnny had stolen some bacon to eat.”) That’s far too much “vocabulum” to wade through, and readers have little patience for such thickets of gobbledygook. Novels overburdened in this way make good projectiles for heaving at the wall.

The best writers — from Charles Frazier in “Cold Mountain” to Junot Diaz in “The Brief Wondrous Life of Oscar Wao” — deploy foreign or arcane words sparingly, to give a realistic flavor of an era or a culture, but they also channel the atmosphere of time and place through the rhythms of speech.

“I am an old gimper,” says Knucks, a character in “The Waterworks,” E.L. Doctorow’s novel of New York in the 1870’s. “I must live by wits alone… and the wits tell me a man mustn’t show himself too inquirous about such dark matters.”

Reading this bit of dialogue, we know we’re not in the present. The word “gimper” is not in common use, but needs no translation. The syntax, too — “a man mustn’t show himself too inquirous” — is stiffer and more formal than a contemporary speaker’s. Certainly Doctorow’s characters talk in a manner true to their times, but his own narrative voice hews to a more contemporary English, and his work never crosses the line into overkill.

For novelists to get a realistic feel for “what it was like” in the past, reading original texts of the period is invaluable. Old newspapers, for example, full of advertisements for medicines like “liver invigorator,” or devices like the “toilet mask,” and headlines screaming about the crimes of a certain “Hag of Misery,” or “The Ghoul of Chatham Street,” help color the imagination with a sense of how the world looked and sounded, what people dreamed of and feared, how they went about their lives while wearing cage crinolines, deerstalker hats and whalebone corsets, before they were turned all sepia-tinted by time.

By perusing period novels, magazines, advice books, letters, medical texts and sermons, contemporary novelists can conjure up a fresh narrative voice not only out of the vocabulary of bygone days, but from the rhythms of speech, the values of an era. A 19th-century “swell” is not going to speak the “secret language of crime,” but will have his own “vocabulum,” one that will reflect a worldview. For example, the Rev. Charles Loring Brace, who founded the Children’s Aid Society in 1853, referred to homeless children as a “happy race of little heathens,” or “flibbertigibbets,” which reflected the 19th-century belief that such children were lighthearted and “merry.”

Read the entire article here.

A Post-PC, Post-Laptop World

Not too long ago the founders and shapers of much of our IT world were dreaming up new information technologies, tools and processes that we didn’t know we needed. These tinkerers became the establishment luminaries that we still love or hate — Microsoft, Dell, HP, Apple, Motorola and IBM. And, of course, they are still around.

But the world that they constructed is imploding and nobody really knows where it is heading. Will the leaders of the next IT revolution come from the likes of Google or Facebook? Or, as is more likely, is this just a prelude to a more radical shift, with the seeds being sown in anonymous garages and labs across the U.S. and other tech hubs? Regardless, we are in for some unpredictable and exciting times.

From ars technica:

Change happens in IT whether you want it to or not. But even with all the talk of the “post-PC” era and the rise of the horrifically named “bring your own device” hype, change has happened in a patchwork. Despite the disruptive technologies documented on Ars and elsewhere, the fundamentals of enterprise IT have evolved slowly over the past decade.

But this, naturally, is about to change. The model that we’ve built IT on for the past 10 years is in the midst of collapsing on itself, and the companies that sold us the twigs and straw it was built with—Microsoft, Dell, and Hewlett-Packard to name a few—are facing the same sort of inflection points in their corporate life cycles that have ripped past IT giants to shreds. These corporate giants are faced with moments of truth despite making big bets on acquisitions to try to position themselves for what they saw as the future.

Predicting the future is hard, especially when you have an installed base to consider. But it’s not hard to identify the economic, technological, and cultural forces that are converging right now to shape the future of enterprise IT in the short term. We’re not entering a “post-PC” era in IT—we’re entering an era where the device we use to access applications and information is almost irrelevant. Nearly everything we do as employees or customers will be instrumented, analyzed, and aggregated.

“We’re not on a 10-year reinvention path anymore for enterprise IT,” said David Nichols, Americas IT Transformation Leader at Ernst & Young. “It’s more like [a] five-year or four-year path. And it’s getting faster. It’s going to happen at a pace we haven’t seen before.”

While the impact may be revolutionary, the cause is more evolutionary. A host of technologies that have been the “next big thing” for much of the last decade—smart mobile devices, the “Internet of Things,” deep analytics, social networking, and cloud computing—have finally reached a tipping point. The demand for mobile applications has turned what were once called “Web services” into a new class of managed application programming interfaces. These are changing not just how users interact with data, but the way enterprises collect and share data, write applications, and secure them.

Add the technologies pushed forward by government and defense in the last decade (such as facial recognition) and an abundance of cheap sensors, and you have the perfect “big data” storm. This sea of structured and unstructured data could change the nature of the enterprise or drown IT departments in the process. It will create social challenges as employees and customers start to understand the level to which they are being tracked by enterprises. And it will give companies more ammunition to continue to squeeze more productivity out of a shrinking workforce, as jobs once done by people are turned over to software robots.

There has been a lot of talk about how smartphones and tablets have supplanted the PC. In many ways, that talk is true. Yet in practice, we’re still largely using smartphones and tablets as if they were PCs.

But aside from mobile Web browsing and the use of tablets as a replacement for notebook PCs in presentations, most enterprises still use mobile devices the same way they used the BlackBerry in 1999—for e-mail. Mobile apps are the new webpage: everybody knows they need one to engage customers, but few are really sure what to do with them beyond what customers use their websites for. And while companies are trying to engage customers using social media on mobile, they’re largely not using the communications tools available on smart mobile devices to engage their own employees.

“I think right now, mobile adoption has been greatly overstated in terms of what people say they do with mobile versus mobile’s potential,” said Nichols. “Every CIO out there says, ‘Oh, we have mobile-enabled our workforce using tablets and smartphones.’ They’ve done mobile enablement but not mobile integration. Mobility at this point has not fundamentally changed the way the majority of the workforce works, at least not in the last five to six years.”

Smartphones make very poor PCs. But they have something no desktop PC has—a set of sensors that can provide a constant flow of data about where their user is. There’s visual information pulled in through a camera, motion and acceleration data, and even proximity. When combined with backend analytics, they can create opportunities to change how people work, collaborate, and interact with their environment.

Machine-to-machine (M2M) communications is a big part of that shift, according to Nichols. “Allowing devices with sensors to interact in a meaningful way is the next step,” he said. That step spans from the shop floor to the data center to the boardroom, as the devices we carry track our movements and our activities and interact with the systems around us.

Retailers are beginning to catch on to that, using mobile devices’ sensors to help close sales. “Everybody gets the concept that a mobile app is a necessity for a business-to-consumer retailer,” said Brian Kirschner, the director of Apigee Institute, a research organization created by the application infrastructure vendor Apigee in collaboration with executives of large enterprises and academic researchers. “But they don’t always get the transformative force on business that apps can have. Some can be small. For example, Home Depot has an app to help you search the store you’re in for what you’re looking for. We know that failure to find something in the store is a cause of lost sales and that Web search is useful and signs over aisles are ineffective. So the mobile app has a real impact on sales.”

But if you’ve already got stock information, location data for a customer, and e-commerce capabilities, why stop at making the app useful only during business hours? “If you think of the full potential of a mobile app, why can’t you buy something at the store when it’s closed if you’re near the store?” Kirschner said. “Instead of dropping you to a traditional Web process and offering you free shipping, they could have you pick it up at the store where you are tomorrow.”

That’s a change that’s being forced on many retailers, as noted in an article from the most recent MIT Sloan Management Review by a trio of experts: Erik Brynjolfsson, a professor at MIT’s Sloan School of Management and the director of the MIT Center for Digital Business; Yu Jeffrey Hu of the Georgia Institute of Technology; and Mohammed Rahman of the University of Calgary. If retailers don’t offer a way to meet mobile-equipped customers, they’ll buy it online elsewhere—often while standing in their store. Offering customers a way to extend their experience beyond the store’s walls is the kind of mobile use that’s going to create competitive advantage from information technology. And it’s the sort of competitive advantage that has long been milked out of the old IT model.

Nichols sees the same sort of technology transforming not just relationships with customers but the workplace itself. Say, for example, you’re in New York, and you want to discuss something with two colleagues. You request an appointment using your mobile device, and based on your location data, the location data of your colleagues, and the timing of the meeting, backend systems automatically book you a conference room and set up a video link to a co-worker out of town.

Based on analytics and the title of the meeting, relevant documents are dropped into a collaboration space. Your device records the meeting to an archive and notes who has attended in person. And this conversation is automatically transcribed, tagged, and forwarded to team members for review.

“Having location data to reserve conference rooms and calls and having all other logistics be handled in background changes the size of the organization I need to support that,” Nichols said.

The same applies to manufacturing, logistics, and other areas where applications can be tied into sensors and computing power. “If I have a factory where a machine has a belt that needs to be reordered every five years and it auto re-orders and it gets shipped without the need for human interaction, that changes the whole dynamics of how you operate,” Nichols said. “If you can take that and plug it into a proper workflow, you’re going to see an entirely new sort of workforce. That’s not that far away.”

Wearable devices like Google’s Glass will also feed into the new workplace. Wearable tech has been in use in some industries for decades, and in some cases it’s just an evolution from communication systems already used in many retail and manufacturing environments. But the ability to add augmented reality—a data overlay on top of a real world location—and to collect information without reaching for a device will quickly get traction in many enterprises.

Read the entire article here.

Image: Commodore PET (Personal Electronic Transactor) 2001 Series, circa 1977. Courtesy of Wikipedia.

You Are Different From Yourself

The next time your spouse tells you that you’re “just not the same person anymore,” there may be some truth to it. After all, we are not who we once thought we would become, nor are we likely to become who we now think we will be. That’s the overall finding of a recent study of personality change over time in around 20,000 people.

From the Independent:

When we remember our past selves, they seem quite different. We know how much our personalities and tastes have changed over the years. But when we look ahead, somehow we expect ourselves to stay the same, a team of psychologists said Thursday, describing research they conducted of people’s self-perceptions.

They called this phenomenon the “end of history illusion,” in which people tend to “underestimate how much they will change in the future.” According to their research, which involved more than 19,000 people ages 18 to 68, the illusion persists from teenage years into retirement.

“Middle-aged people — like me — often look back on our teenage selves with some mixture of amusement and chagrin,” said one of the authors, Daniel T. Gilbert, a psychologist at Harvard. “What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong.”

Other psychologists said they were intrigued by the findings, published Thursday in the journal Science, and were impressed with the amount of supporting evidence. Participants were asked about their personality traits and preferences — their favorite foods, vacations, hobbies and bands — in years past and present, and then asked to make predictions for the future. Not surprisingly, the younger people in the study reported more change in the previous decade than did the older respondents.

But when asked to predict what their personalities and tastes would be like in 10 years, people of all ages consistently played down the potential changes ahead.

Thus, the typical 20-year-old woman’s predictions for her next decade were not nearly as radical as the typical 30-year-old woman’s recollection of how much she had changed in her 20s. This sort of discrepancy persisted among respondents all the way into their 60s.

And the discrepancy did not seem to be because of faulty memories, because the personality changes recalled by people jibed quite well with independent research charting how personality traits shift with age. People seemed to be much better at recalling their former selves than at imagining how much they would change in the future.

Why? Dr. Gilbert and his collaborators, Jordi Quoidbach of Harvard and Timothy D. Wilson of the University of Virginia, had a few theories, starting with the well-documented tendency of people to overestimate their own wonderfulness.

“Believing that we just reached the peak of our personal evolution makes us feel good,” Dr. Quoidbach said. “The ‘I wish that I knew then what I know now’ experience might give us a sense of satisfaction and meaning, whereas realizing how transient our preferences and values are might lead us to doubt every decision and generate anxiety.”

Or maybe the explanation has more to do with mental energy: predicting the future requires more work than simply recalling the past. “People may confuse the difficulty of imagining personal change with the unlikelihood of change itself,” the authors wrote in Science.

The phenomenon does have its downsides, the authors said. For instance, people make decisions in their youth — about getting a tattoo, say, or a choice of spouse — that they sometimes come to regret.

And that illusion of stability could lead to dubious financial expectations, as the researchers showed in an experiment asking people how much they would pay to see their favorite bands.

When asked about their favorite band from a decade ago, respondents were typically willing to shell out $80 to attend a concert of the band today. But when they were asked about their current favorite band and how much they would be willing to spend to see the band’s concert in 10 years, the price went up to $129. Even though they realized that favorites from a decade ago like Creed or the Dixie Chicks have lost some of their luster, they apparently expect Coldplay and Rihanna to blaze on forever.

“The end-of-history effect may represent a failure in personal imagination,” said Dan P. McAdams, a psychologist at Northwestern who has done separate research into the stories people construct about their past and future lives. He has often heard people tell complex, dynamic stories about the past but then make vague, prosaic projections of a future in which things stay pretty much the same.

Read the entire article after the jump.

Extreme Weather as the New Norm

Melting glaciers at the poles, wildfires in the western United States, severe flooding across Europe and parts of Asia, cyclones in northern Australia, warmer temperatures across the globe. According to many climatologists, including a growing number of former climate-change skeptics, this is the new normal for the foreseeable future. Welcome to the changed climate.

From the New York Times:

By many measurements, this summer’s drought is one for the record books. But so was last year’s drought in the South Central states. And it has been only a decade since an extreme five-year drought hit the American West. Widespread annual droughts, once a rare calamity, have become more frequent and are set to become the “new normal.”

Until recently, many scientists spoke of climate change mainly as a “threat,” sometime in the future. But it is increasingly clear that we already live in the era of human-induced climate change, with a growing frequency of weather and climate extremes like heat waves, droughts, floods and fires.

Future precipitation trends, based on climate model projections for the coming fifth assessment from the Intergovernmental Panel on Climate Change, indicate that droughts of this length and severity will be commonplace through the end of the century unless human-induced carbon emissions are significantly reduced. Indeed, assuming business as usual, each of the next 80 years in the American West is expected to see less rainfall than the average of the five years of the drought that hit the region from 2000 to 2004.

That extreme drought (which we have analyzed in a new study in the journal Nature Geoscience) had profound consequences for carbon sequestration, agricultural productivity and water resources: plants, for example, took in only half the carbon dioxide they do normally, thanks to a drought-induced drop in photosynthesis.

In the drought’s worst year, Western crop yields were down by 13 percent, with many local cases of complete crop failure. Major river basins showed 5 percent to 50 percent reductions in flow. These reductions persisted up to three years after the drought ended, because the lakes and reservoirs that feed them needed several years of average rainfall to return to predrought levels.

In terms of severity and geographic extent, the 2000-4 drought in the West exceeded such legendary events as the Dust Bowl of the 1930s. While that drought saw intervening years of normal rainfall, the years of the turn-of-the-century drought were consecutive. More seriously still, long-term climate records from tree-ring chronologies show that this drought was the most severe event of its kind in the western United States in the past 800 years. Though there have been many extreme droughts over the last 1,200 years, only three other events have been of similar magnitude, all during periods of “megadroughts.”

Most frightening is that this extreme event could become the new normal: climate models point to a warmer planet, largely because of greenhouse gas emissions. Planetary warming, in turn, is expected to create drier conditions across western North America, because of the way global-wind and atmospheric-pressure patterns shift in response.

Indeed, scientists see signs of the relationship between warming and drought in western North America by analyzing trends over the last 100 years; evidence suggests that the more frequent drought and low precipitation events observed for the West during the 20th century are associated with increasing temperatures across the Northern Hemisphere.

These climate-model projections suggest that what we consider today to be an episode of severe drought might even be classified as a period of abnormal wetness by the end of the century and that a coming megadrought — a prolonged, multidecade period of significantly below-average precipitation — is possible and likely in the American West.

Read the entire article after the jump.

Image courtesy of the Sun.