Tag Archives: future

Deep Time, Nuclear Semiotics and Atomic Priests

Time seems to unfold over different — lengthier — scales in the desert southwest of the United States. Perhaps it’s the vastness of the eerie landscape that puts fleeting human moments into the context of deep geologic time. Or, perhaps it’s our monumental human structures that aim to encode our present for the distant future. Structures like the Hoover Dam, which regulates the mighty Colorado River, and the ill-fated Yucca Mountain project, once designed to store the nation’s nuclear waste, were conceived to last many centuries.

Yet these monuments to our impermanence raise an important issue beyond their construction — how are we to communicate their intent to humans living in a distant future, humans who will no longer be using any of our existing languages? Directions and warnings in English, or contextual signs and images, will not suffice. Consider Yucca Mountain. Now shuttered, Yucca Mountain was designed to be a repository for nuclear byproducts and waste from military and civilian programs. Keep in mind that some products of nuclear reactors, such as various isotopes of uranium, plutonium, technetium and neptunium, remain highly radioactive for tens of thousands to millions of years. So, how would we post warnings at Yucca Mountain about the entombed dangers to generations living 10,000 years and more from now? Those behind the Yucca Mountain project considered a number of fantastic (in its original sense) programs to carry dire warnings into the distant future, including hostile architecture, radioactive cats and a pseudo-religious order. This was the work of the Human Interference Task Force.

From Motherboard:

Building the Hoover Dam rerouted the most powerful river in North America. It claimed the lives of 96 workers, and the beloved site dog, Little Niggy, who is entombed by the walkway in the shade of the canyon wall. Diverting the Colorado destroyed the ecology of the region, threatening fragile native plant life and driving several species of fish nearly to extinction. The dam brought water to 8 million people and created more than 5,000 jobs. It required 6.6 million metric tons of concrete, all made from the desert; enough, famously, to pave a two-lane road coast to coast across the US. Inside the dam’s walls that concrete is still curing, and will be for another 60 years.

Erik, photojournalist, and I have come here to try and get the measure of this place. Nevada is the uncanny locus of disparate monuments all concerned with charting deep time, leaving messages for future generations of human beings to puzzle over: a star map, a nuclear waste repository and a clock able to keep time for 10,000 years—all of them within a few hours’ drive of Las Vegas through the harsh desert.

Hoover Dam is theorized in some structural stress projections to stand for tens of thousands of years, and what could be its eventual undoing is mussels. The mollusks that grow in the dam’s grates will no longer be scraped away, and will eventually multiply to such density that the built-up stress of the river will burst the dam’s wall. That is, if the Colorado continues to flow. Otherwise erosion will take much longer to claim the structure, and possibly Oskar J.W. Hansen’s vision will be realized: future humans will find the dam 14,000 years from now, at the end of the current Platonic Year.

A Platonic Year lasts roughly 26,000 years. It’s also known as the precession of the equinoxes, first written into the historical record in the second century BC by the Greek mathematician Hipparchus, though there is evidence that earlier peoples also worked out this complex cycle. Earth rotates in three ways: around the sun every 365 days, on its 24-hour axis, and on its precessional axis. The duration of the last is the Platonic Year, during which Earth incrementally turns on its tilted axis, its true north slowly drifting as the Sun’s gravity pulls on us, leaving our planet spinning like a very slow top along its orbit around the sun.
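For a rough sense of the rate involved (a back-of-the-envelope figure, not from the article), one full precessional turn spread over about 26,000 years works out to

$$\frac{360^\circ}{26{,}000\ \text{yr}} = \frac{1{,}296{,}000''}{26{,}000\ \text{yr}} \approx 50''\ \text{per year},$$

or roughly one degree every 72 years, about two Moon-widths of drift over a human lifetime.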

Now Earth’s true-north pole star is Polaris, in Ursa Minor, as it was at the completion of Hoover Dam. At the end of the current Platonic Year it will be Vega, in the constellation Lyra. Hansen included this information in an amazingly accurate astronomical clock, or celestial map, embedded in the terrazzo floor of the dam’s dedication monument. Hansen wanted any future humans who came across the dam to be able to know exactly when it was built.

He used the clock to mark major historical events of the last several thousand years, including the birth of Christ and the building of the pyramids, events which he thought were equal to the engineering feat of men bringing water to a desert in the 1930s. He reasoned that though current languages could be dead in this future, any people who had survived that long would have advanced astronomy, math and physics in their arsenal of survival tactics. Despite this, the monument is written entirely in English, which is for the benefit of current visitors, not our descendants millennia from now.

The Hoover Dam is staggering. It is frankly impossible, even standing right on top of it, squinting in the blinding sunlight down its vertiginous drop, to imagine how it was ever built by human beings; even as I watch old documentary footage on my laptop back in the hotel at night on Fremont Street, showing me that exact thing, I don’t believe it. I cannot square it in my mind. I cannot conceive of nearly dying every day laboring in the brutally dry 100-degree heat, in a time before air-conditioning, in a time before being able to ever get even the slightest relief from the elements.

Hansen was more than aware of our propensity to build great monuments to ourselves and felt the weight of history as he submitted his bid for the job to design the dedication monument, writing, “Mankind itself is the subject of the sculptures at Hoover Dam.” Joan Didion described it as the most existentially terrifying place in America: “Since the afternoon in 1967 when I first saw Hoover Dam, its image has never been entirely absent from my inner eye.” Thirty-two people have chosen the dam as their place of suicide. It has no fences.

The reservoir is now the lowest it has ever been and California is living through the worst drought in 1200 years. You can swim in Lake Mead, so we did, sort of. It did provide some cool respite for a moment from the unrelenting heat of the desert. We waded around only up to our ankles because it smelled pretty terrible, the shoreline dirty with garbage.

Radioactive waste from spent nuclear fuel has a shelf life of hundreds of thousands of years. Maybe even more than a million; it’s not possible to predict precisely. Nuclear power plants around the US have produced 150 million metric tons of highly active nuclear waste that sits at dozens of sites around the country, awaiting a place where it can all be carted and buried thousands of feet underground to be quarantined for the rest of time. For now a lot of it sits not far from major cities.

Yucca Mountain, 120 miles from Hoover Dam, is not that place. The site is one of the most intensely geologically surveyed and politically controversial pieces of land on Earth. Since 1987 it has been, at the cost of billions of dollars, the highly contested resting place for the majority of America’s high-risk nuclear waste. Those plans were officially shuttered in 2012, after states sued each other, states sued the federal government, the government sued contractors, and the people living near Yucca Mountain, it turned out, didn’t want thousands of tons of nuclear waste to be carted through their counties and sacred lands via rail. President Obama cancelled its funding and officially ended the project.

It was said that there was a fault line running directly under the mountain; that the salt rock was not as absorbent as it was initially thought to be and posed the threat of leaking radiation into the water table; that, more recently, the possibility of fracking in the area would beget an ecological disaster; and that a 10,000-year storage solution was nowhere near long enough to insulate the Earth from the true shelf life of the waste, which is realistically thought to be dangerous for many times that length of time. The site is now permanently closed, visible only from a distance through a cacophony of government warning signs blockading a security checkpoint.

We ask around the community of Amargosa Valley about the mountain. Sitting on Highway 95, it’s the closest place to the site and consists only of a gas station, which trades in a huge amount of Area 51-themed merchandise, a boldly advertised sex shop, an alien motel and a firework store where you can let off rockets in the car park. Across the road is the vacant lot of what was once an RV park, with a couple of badly busted-up vehicles looted beyond recognition and a small aquamarine boat lying on its side in the dirt.

At the gas station register a woman explains that no one really liked the idea of having waste so close to their homes (she repeats the story of the fault line), but they did like the idea of jobs, hundreds of which disappeared along with the project, leaving the surrounding areas, mainly long-tapped-out mining communities, even more severely depressed.

We ask what would happen if we tried to actually get to the mountain itself, on government land.

“Plenty of people do try,” she says. “They’re trying to get to Area 51. They have sensors though, they’ll come get you real quick in their truck.”

Would we get shot?

“Shot? No. But they would throw you on the ground, break all your cameras and interrogate you for a long time.”

We decide just to take the road that used to go to the mountain as far as we can to the checkpoint, where in the distance beyond the electric fences at the other end of a stretch of desert land we see buildings and cars parked and most definitely some G-men who would see us before we even had the chance to try and sneak anywhere.

Before it was shut for good, Yucca Mountain had kilometers of tunnels bored into it and dozens of experiments undertaken within it, all of it now sealed behind an enormous vault door. It was also the focus of a branch of linguistics established specifically to warn future humans of the dangers of radioactive waste: nuclear semiotics. The Human Interference Task Force—a consortium of archeologists, architects, linguists, philosophers, engineers and designers—faced the opposite problem to Oskar Hansen at Hoover Dam: the Yucca Mountain repository was not meant to attract the attention of future humans and tell them of the glory of their forebears; it was to tell them that this place would kill them if they trod too near.

To create a universally readable warning system for humans living thirty generations from now, the signs would have to be instantly recognizable as expressing an immediate and lethal danger, as well as a deep sense of shunning. These impulses came up against each other: how to adequately express that the place was deadly without at the same time enticing people to explore it, thinking it must contain something of great value if so much trouble had been taken to keep people away? How to express this when all known written languages could very easily be dead? Signs as we know them now would almost certainly be completely unintelligible free of the social contexts that give them their current meaning; a nuclear waste sign is just a dot with three rounded triangles sticking out of it to anyone not taught over a lifetime to know its warning.

Read the entire story here.

Image: United Nations radioactive symbol, 2007.

The Future Tubes of the Internets


Back in 1973, when computer scientists Vint Cerf and Robert Kahn sketched out plans to connect a handful of government networks, little did they realize the scale of their invention — TCP/IP (a standard protocol for the interconnection of computer networks). Now, the two patriarchs of the Internet revolution — with no Al Gore in sight — prognosticate on the next 40 years of the internet.

From the NYT:

Will 2014 be the year that the Internet is reined in?

When Edward J. Snowden, the disaffected National Security Agency contract employee, purloined tens of thousands of classified documents from computers around the world, his actions — and their still-reverberating consequences — heightened international pressure to control the network that has increasingly become the world’s stage. At issue is the technical principle that is the basis for the Internet, its “any-to-any” connectivity. That capability has defined the technology ever since Vinton Cerf and Robert Kahn sequestered themselves in the conference room of a Palo Alto, Calif., hotel in 1973, with the task of interconnecting computer networks for an elite group of scientists, engineers and military personnel.

The two men wound up developing a simple and universal set of rules for exchanging digital information — the conventions of the modern Internet. Despite many technological changes, their work prevails.

But while the Internet’s global capability to connect anyone with anything has affected every nook and cranny of modern life — with politics, education, espionage, war, civil liberties, entertainment, sex, science, finance and manufacturing all transformed — its growth increasingly presents paradoxes.

It was, for example, the Internet’s global reach that made classified documents available to Mr. Snowden — and made it so easy for him to distribute them to news organizations.

Yet the Internet also made possible widespread surveillance, a practice that alarmed Mr. Snowden and triggered his plan to steal and publicly release the information.

With the Snowden affair starkly highlighting the issues, the new year is likely to see renewed calls to change the way the Internet is governed. In particular, governments that do not favor the free flow of information, especially if it’s through a system designed by Americans, would like to see the Internet regulated in a way that would “Balkanize” it by preventing access to certain websites.

The debate right now involves two international organizations, usually known by their acronyms, with different views: Icann, the Internet Corporation for Assigned Names and Numbers, and the I.T.U., or International Telecommunication Union.

Icann, a nonprofit that oversees the Internet’s basic functions, like the assignment of names to websites, was established in 1998 by the United States government to create an international forum for “governing” the Internet. The United States continues to favor this group.

The I.T.U., created in 1865 as the International Telegraph Union, is the United Nations telecommunications regulatory agency. Nations like Brazil, China and Russia have been pressing the United States to switch governance of the Internet to this organization.

Dr. Cerf, 70, and Dr. Kahn, 75, have taken slightly different positions on the matter. Dr. Cerf, who was chairman of Icann from 2000-7, has become known as an informal “Internet ambassador” and a strong proponent of an Internet that remains independent of state control. He has been one of the major supporters of the idea of “network neutrality” — the principle that Internet service providers should enable access to all content and applications, regardless of the source.

Dr. Kahn has made a determined effort to stay out of the network neutrality debate. Nevertheless, he has been more willing to work with the I.T.U., particularly in attempting to build support for a system, known as Digital Object Architecture, for tracking and authenticating all content distributed through the Internet.

Both men agreed to sit down, in separate interviews, to talk about their views on the Internet’s future. The interviews were edited and condensed.

The Internet Ambassador

After serving as a program manager at the Pentagon’s Defense Advanced Research Projects Agency, Vinton Cerf joined MCI Communications Corp., an early commercial Internet company that was purchased by Verizon in 2006, to lead the development of electronic mail systems for the Internet. In 2005, he became a vice president and “Internet evangelist” for Google. Last year he became the president of the Association for Computing Machinery, a leading international educational and scientific computing society.

Q. Edward Snowden’s actions have raised a new storm of controversy about the role of the Internet. Is it a significant new challenge to an open and global Internet?

A. The answer is no, I don’t think so. There are some similar analogues in history. The French historically copied every telex or every telegram that you sent, and they shared it with businesses in order to remain competitive. And when that finally became apparent, it didn’t shut down the telegraph system.

The Snowden revelations will increase interest in end-to-end cryptography for encrypting information both in transit and at rest. For many of us, including me, who believe that is an important capacity to have, this little crisis may be the trigger that induces people to spend time and energy learning how to use it.
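As a concrete illustration of the “at rest” half of that idea, here is a minimal sketch using the Fernet recipe from the widely used Python cryptography package. It is an editorial aside, not anything Dr. Cerf prescribes; real end-to-end messaging additionally involves key exchange, authentication and forward secrecy.

```python
# Minimal sketch: symmetric encryption of data "at rest" with the
# Python `cryptography` package's Fernet recipe. Illustrative only;
# end-to-end systems add key exchange, authentication and more.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # secret key; keep it separate from the data
cipher = Fernet(key)

token = cipher.encrypt(b"notes stored on disk")   # opaque ciphertext
print(token)

original = cipher.decrypt(token)                  # recover the plaintext
print(original)                                   # b'notes stored on disk'
```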

You’ve drawn the analogy to a road or highway system. That brings to mind the idea of requiring a driver’s license to use the Internet, which raises questions about responsibility and anonymity.

I still believe that anonymity is an important capacity, that people should have the ability to speak anonymously. It’s argued that people will be encouraged to say untrue things, harmful things, especially if they believe they are anonymous.

There is a tension there, because in some environments the only way you will be able to behave safely is to have some anonymity.

Read the entire article here.

Image: Vinton Cerf and Robert Kahn receiving the Presidential Medal of Freedom from President George W. Bush in 2005. Courtesy of Wikipedia.

2014: The Year of Big Stuff


Over the closing days of each year, or the first few days of the coming one, prognosticators the world over tell us about the future. Yet, while no one has yet been proven to have prescient skills — despite what your psychic tells you — we all like to dabble in the art of prediction. Google’s Eric Schmidt has one big prediction for 2014: big. Everything will be big — big data, big genomics, smartphones will be even bigger, and of course, so will mistakes.

So, with that, a big Happy New Year to all our faithful readers and seers across our fragile and beautiful blue planet.

From the Guardian:

What does 2014 hold? According to Eric Schmidt, Google’s executive chairman, it means smartphones everywhere – and also the possibility of genetics data being used to develop new cures for cancer.

In an appearance on Bloomberg TV, Schmidt laid out his thoughts about general technological change, Google’s biggest mistake, and how Google sees the economy going in 2014.

“The biggest change for consumers is going to be that everyone’s going to have a smartphone,” Schmidt says. “And the fact that so many people are connected to what is essentially a supercomputer means a whole new generation of applications around entertainment, education, social life, those kinds of things. The trend has been that mobile is winning; it’s now won. There are more tablets and phones being sold than personal computers – people are moving to this new architecture very fast.”

It’s certainly true that tablets and smartphones are outselling PCs – in fact smartphones alone have been doing that since the end of 2010. This year, it’s forecast that tablets will have passed “traditional” PCs (desktops, fixed-keyboard laptops) too.

Disrupting business

Next, Schmidt says there’s a big change – a disruption – coming for business through the arrival of “big data”: “The biggest disruptor that we’re sure about is the arrival of big data and machine intelligence everywhere – so the ability [for businesses] to find people, to talk specifically to them, to judge them, to rank what they’re doing, to decide what to do with your products, changes every business globally.”

But he also sees potential in the field of genomics – the parsing of all the data being collected from DNA and gene sequencing. That might not be surprising, given that Google is an investor in 23andme, a gene sequencing company which aims to collect the genomes of a million people so that it can do data-matching analysis on their DNA. (Unfortunately, that plan has hit a snag: 23andme has been told to cease operating by the US Food and Drug Administration because it has failed to respond to inquiries about its testing methods and publication of results.)

Here’s what Schmidt has to say on genomics: “The biggest disruption that we don’t really know what’s going to happen is probably in the genetics area. The ability to have personal genetics records and the ability to start gathering all of the gene sequencing into places will yield discoveries in cancer treatment and diagnostics over the next year that are unfathomably important.”

It may be worth mentioning that “we’ll find cures through genomics” has been the promise held up by scientists every year since the human genome was first sequenced. So far, it hasn’t happened – as much as anything because human gene variation is remarkably big, and there’s still a lot that isn’t known about the interaction between what appear to be non-functional parts of our DNA (which don’t seem to code for proteins) and the parts that do code for proteins.

Biggest mistake

As for Google’s biggest past mistake, Schmidt says it’s missing the rise of Facebook and Twitter: “At Google the biggest mistake that I made was not anticipating the rise of the social networking phenomenon – not a mistake we’re going to make again. I guess in our defence we were working on many other things, but we should have been in that area, and I take responsibility for that.” The results of that effort to catch up can be seen in the way that Google+ is popping up everywhere – though it’s wrong to think of Google+ as a social network, since it’s more of a way that Google creates a substrate on the web to track individuals.

And what is Google doing in 2014? “Google is very much investing, we’re hiring globally, we see strong growth all around the world with the arrival of the internet everywhere. It’s all green in that sense from the standpoint of the year. Google benefits from transitions from traditional industries, and shockingly even when things are tough in a country, because we’re “return-on-investment”-based advertising – it’s smarter to move your advertising from others to Google, so we win no matter whether the industries are in good shape or not, because people need our services, we’re very proud of that.”

For Google, the sky’s the limit: “the key limiter on our growth is our rate of innovation, how smart are we, how clever are we, how quickly can we get these new systems deployed – we want to do that as fast as we can.”

It’s worth noting that Schmidt has a shaky track record on predictions. At Le Web in 2011 he famously forecast that developers would be shunning iOS to start developing on Android first, and that Google TV would be installed on 50% of all TVs on sale by summer 2012.

It didn’t turn out that way: even now, many apps start on iOS, and Google TV fizzled out as companies such as Logitech found that it didn’t work as well as Android to tempt buyers.

Since then, Schmidt has been a lot more cautious about predicting trends and changes – although he hasn’t been above the occasional comment which seems calculated to get a rise from his audience, such as telling executives at a Gartner conference that Android was more secure than the iPhone – which they apparently found humorous.

Read the entire article here.

Image: Happy New Year, 2014 Google doodle. Courtesy of Google.

Predicting the Future is Highly Overrated

Contrary to what political pundits, stock market talking heads and your local strip mall psychic would have you believe, no one can yet predict the future. And it is no more possible for the current generation of tech wunderkinds, Silicon Valley venture fund investors or the armies of analysts.

From WSJ:

I believe the children aren’t our future. Teach them well, but when it comes to determining the next big thing in tech, let’s not fall victim to the ridiculous idea that they lead the way.

Yes, I’m talking about Snapchat.

Last week my colleagues reported that Facebook recently offered $3 billion to acquire the company behind the hyper-popular messaging app. Stunningly, Evan Spiegel, Snapchat’s 23-year-old co-founder and CEO, rebuffed the offer.

If you’ve never used Snapchat—and I implore you to try it, because Snapchat can be pretty fun if you’re into that sort of thing, which I’m not, because I’m grumpy and old and I have two small kids and no time for fun, which I think will be evident from the rest of this column, and also would you please get off my lawn?—there are a few things you should know about the app.

First, Snapchat’s main selling point is ephemerality. When I send you a photo and caption using the app, I can select how long I want you to be able to view the picture. After you look at it for the specified time—1 to 10 seconds—the photo and all trace of our having chatted disappear from your phone. (Or, at least, they are supposed to. Snapchat’s security measures have frequently been defeated.)
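To make the ephemerality mechanism concrete, here is a toy sketch of a message that carries a sender-chosen viewing window and is purged once that window elapses. It is purely illustrative; Snapchat’s actual implementation is not public, and every class and method name below is invented for the example.

```python
# Toy sketch of sender-timed, self-deleting messages. Illustrative only;
# not Snapchat's real design. All names here are invented for the example.
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Snap:
    sender: str
    photo: bytes
    view_seconds: int               # sender picks 1 to 10 seconds
    opened_at: Optional[float] = None

    def open(self) -> bytes:
        """Record the first viewing time and return the photo."""
        if self.opened_at is None:
            self.opened_at = time.time()
        return self.photo

    def expired(self) -> bool:
        """True once the viewing window has elapsed."""
        return (self.opened_at is not None
                and time.time() - self.opened_at >= self.view_seconds)

class Inbox:
    def __init__(self) -> None:
        self._snaps: List[Snap] = []

    def receive(self, snap: Snap) -> None:
        self._snaps.append(snap)

    def purge_expired(self) -> None:
        """Drop every snap whose viewing window has passed."""
        self._snaps = [s for s in self._snaps if not s.expired()]
```

A real service would also have to scrub copies from its servers and cope with screenshots, which is exactly where, as noted above, Snapchat’s guarantees have been defeated.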

Second, and relatedly, Snapchat is used primarily by teens and people in college. This explains much of Silicon Valley’s obsession with the company.

The app doesn’t make any money—its executives have barely even mentioned any desire to make money—but in the ad-supported tech industry, youth is the next best thing to revenue. For tech execs, youngsters are the canaries in the gold mine.

That logic follows a widely shared cultural belief: We all tend to assume that young people are on the technological vanguard, that they somehow have got an inside scoop on what’s next. If today’s kids are Snapchatting instead of Facebooking, the thinking goes, tomorrow we’ll all be Snapchatting, too, because tech habits, like hairstyles, flow only one way: young to old.

There is only one problem with elevating young people’s tastes this way: Kids are often wrong. There is little evidence to support the idea that the youth have any closer insight on the future than the rest of us do. Sometimes they are first to flock to technologies that turn out to be huge; other times, the young pick products and services that go nowhere. They can even be late adopters, embracing innovations that older people understood first. To butcher another song: The kids could be all wrong.

Here’s a thought exercise. How many of the products and services that you use every day were created or first used primarily by people under 25?

A few will spring to mind, Facebook the biggest of all. Yet the vast majority of your most-used things weren’t initially popular among teens. The iPhone, the iPad, the iPod, the Google search engine, YouTube, Twitter, Gmail, Google Maps, Pinterest, LinkedIn, the Kindle, blogs, the personal computer: none of these were initially targeted to, or primarily used by, high-school or college-age kids. Indeed, many of the most popular tech products and services were burdened by factors that were actively off-putting to kids, such as high prices, an emphasis on productivity and a distinct lack of fun. Yet they succeeded anyway.

Even the exceptions suggest we should be wary of catering to youth. It is true that in 2004, Mark Zuckerberg designed Facebook for his Harvard classmates, and the social network was first made available only to college students. At the time, though, Facebook looked vastly more “grown up” than its competitors. The site prevented you from uglifying your page with your own design elements, something you could do with Myspace, which, incidentally, was the reigning social network among the pubescent set.

Mr. Zuckerberg deliberately avoided catering to this group. He often told his co-founders that he wanted Facebook to be useful, not cool. That is what makes the persistent worry about Facebook’s supposedly declining cachet among teens so bizarre; Facebook has never really been cool, but neither are a lot of other billion-dollar companies. Just ask Myspace how far being cool can get you.

Incidentally, though 20-something tech founders like Mr. Zuckerberg, Steve Jobs and Bill Gates get a lot of ink, they are unusual. A recent study by the VC firm Cowboy Ventures found that among tech startups that have earned a valuation of at least $1 billion since 2003, the average founder’s age was 34. “The twentysomething inexperienced founder is an outlier, not the norm,” wrote Cowboy’s founder Aileen Lee.

If you think about it for a second, the fact that young people aren’t especially reliable predictors of tech trends shouldn’t come as a surprise. Sure, youth is associated with cultural flexibility, a willingness to try new things that isn’t necessarily present in older folk. But there are other, less salutary hallmarks of youth, including capriciousness, immaturity, and a deference to peer pressure even at the cost of common sense. This is why high school is such fertile ground for fads. And it’s why, in other cultural areas, we don’t put much stock in teens’ choices. No one who’s older than 18, for instance, believes One Direction is the future of music.

That brings us back to Snapchat. Is the app just a youthful fad, just another boy band, or is it something more permanent; is it the Beatles?

To figure this out, we would need to know why kids are using it. Are they reaching for Snapchat for reasons that would resonate with older people—because, like the rest of us, they’ve grown wary of the public-sharing culture promoted by Facebook and Twitter? Or are they using it for less universal reasons, because they want to evade parental snooping, send risqué photos, or avoid feeling left out of a fad everyone else has adopted?

Read the entire article here.

Image: Snapchat logo. Courtesy of Snapchat / Wikipedia.

A Post-PC, Post-Laptop World

Not too long ago the founders and shapers of much of our IT world were dreaming up new information technologies, tools and processes that we didn’t know we needed. These tinkerers became the establishment luminaries that we still love or hate — Microsoft, Dell, HP, Apple, Motorola and IBM. And, of course, they are still around.

But the world that they constructed is imploding and nobody really knows where it is heading. Will the leaders of the next IT revolution come from the likes of Google or Facebook? Or, as is more likely, is this just a prelude to a more radical shift, with seeds being sown in anonymous garages and labs across the U.S. and other tech hubs? Regardless, we are in for some unpredictable and exciting times.

From ars technica:

Change happens in IT whether you want it to or not. But even with all the talk of the “post-PC” era and the rise of the horrifically named “bring your own device” hype, change has happened in a patchwork. Despite the disruptive technologies documented on Ars and elsewhere, the fundamentals of enterprise IT have evolved slowly over the past decade.

But this, naturally, is about to change. The model that we’ve built IT on for the past 10 years is in the midst of collapsing on itself, and the companies that sold us the twigs and straw it was built with—Microsoft, Dell, and Hewlett-Packard to name a few—are facing the same sort of inflection points in their corporate life cycles that have ripped past IT giants to shreds. These corporate giants are faced with moments of truth despite making big bets on acquisitions to try to position themselves for what they saw as the future.

Predicting the future is hard, especially when you have an installed base to consider. But it’s not hard to identify the economic, technological, and cultural forces that are converging right now to shape the future of enterprise IT in the short term. We’re not entering a “post-PC” era in IT—we’re entering an era where the device we use to access applications and information is almost irrelevant. Nearly everything we do as employees or customers will be instrumented, analyzed, and aggregated.

“We’re not on a 10-year reinvention path anymore for enterprise IT,” said David Nichols, Americas IT Transformation Leader at Ernst & Young. “It’s more like [a] five-year or four-year path. And it’s getting faster. It’s going to happen at a pace we haven’t seen before.”

While the impact may be revolutionary, the cause is more evolutionary. A host of technologies that have been the “next big thing” for much of the last decade—smart mobile devices, the “Internet of Things,” deep analytics, social networking, and cloud computing—have finally reached a tipping point. The demand for mobile applications has turned what were once called “Web services” into a new class of managed application programming interfaces. These are changing not just how users interact with data, but the way enterprises collect and share data, write applications, and secure them.

Add the technologies pushed forward by government and defense in the last decade (such as facial recognition) and an abundance of cheap sensors, and you have the perfect “big data” storm. This sea of structured and unstructured data could change the nature of the enterprise or drown IT departments in the process. It will create social challenges as employees and customers start to understand the level to which they are being tracked by enterprises. And it will give companies more ammunition to continue to squeeze more productivity out of a shrinking workforce, as jobs once done by people are turned over to software robots.

There has been a lot of talk about how smartphones and tablets have supplanted the PC. In many ways, that talk is true. In fact, we’re still largely using smartphones and tablets as if they were PCs.

But aside from mobile Web browsing and the use of tablets as a replacement for notebook PCs in presentations, most enterprises still use mobile devices the same way they used the BlackBerry in 1999—for e-mail. Mobile apps are the new webpage: everybody knows they need one to engage customers, but few are really sure what to do with them beyond what customers use their websites for. And while companies are trying to engage customers using social media on mobile, they’re largely not using the communications tools available on smart mobile devices to engage their own employees.

“I think right now, mobile adoption has been greatly overstated in terms of what people say they do with mobile versus mobile’s potential,” said Nichols. “Every CIO out there says, ‘Oh, we have mobile-enabled our workforce using tablets and smartphones.’ They’ve done mobile enablement but not mobile integration. Mobility at this point has not fundamentally changed the way the majority of the workforce works, at least not in the last five to six years.”

Smartphones make very poor PCs. But they have something no desktop PC has—a set of sensors that can provide a constant flow of data about where their user is. There’s visual information pulled in through a camera, motion and acceleration data, and even proximity. When combined with backend analytics, they can create opportunities to change how people work, collaborate, and interact with their environment.

Machine-to-machine (M2M) communications is a big part of that shift, according to Nichols. “Allowing devices with sensors to interact in a meaningful way is the next step,” he said. That step spans from the shop floor to the data center to the boardroom, as the devices we carry track our movements and our activities and interact with the systems around us.

Retailers are beginning to catch on to that, using mobile devices’ sensors to help close sales. “Everybody gets the concept that a mobile app is a necessity for a business-to-consumer retailer,” said Brian Kirschner, the director of Apigee Institute, a research organization created by the application infrastructure vendor Apigee in collaboration with executives of large enterprises and academic researchers. “But they don’t always get the transformative force on business that apps can have. Some can be small. For example, Home Depot has an app to help you search the store you’re in for what you’re looking for. We know that failure to find something in the store is a cause of lost sales and that Web search is useful and signs over aisles are ineffective. So the mobile app has a real impact on sales.”

But if you’ve already got stock information, location data for a customer, and e-commerce capabilities, why stop at making the app useful only during business hours? “If you think of the full potential of a mobile app, why can’t you buy something at the store when it’s closed if you’re near the store?” Kirschner said. “Instead of dropping you to a traditional Web process and offering you free shipping, they could have you pick it up at the store where you are tomorrow.”

That’s a change that’s being forced on many retailers, as noted in an article from the most recent MIT Sloan Management Review by a trio of experts: Erik Brynjolfsson, a professor at MIT’s Sloan School of Management and the director of the MIT Center for Digital Business; Yu Jeffrey Hu of the Georgia Institute of Technology; and Mohammed Rahman of the University of Calgary. If retailers don’t offer a way to meet mobile-equipped customers, they’ll buy it online elsewhere—often while standing in their store. Offering customers a way to extend their experience beyond the store’s walls is the kind of mobile use that’s going to create competitive advantage from information technology. And it’s the sort of competitive advantage that has long been milked out of the old IT model.

Nichols sees the same sort of technology transforming not just relationships with customers but the workplace itself. Say, for example, you’re in New York, and you want to discuss something with two colleagues. You request an appointment using your mobile device, and based on your location data, the location data of your colleagues, and the timing of the meeting, backend systems automatically book you a conference room and set up a video link to a co-worker out of town.

Based on analytics and the title of the meeting, relevant documents are dropped into a collaboration space. Your device records the meeting to an archive and notes who has attended in person. And this conversation is automatically transcribed, tagged, and forwarded to team members for review.

“Having location data to reserve conference rooms and calls and having all other logistics be handled in background changes the size of the organization I need to support that,” Nichols said.
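A hypothetical sketch of the backend glue such a scenario implies might look like the following; nothing here comes from the article, and every class, function and field is an assumed placeholder rather than any real product’s API.

```python
# Hypothetical sketch of the auto-scheduling workflow described above.
# Every name and call here is an assumed placeholder, not a real API.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class Attendee:
    name: str
    city: str                      # e.g. inferred from the phone's location data

def schedule_meeting(title: str, organizer: Attendee,
                     invitees: List[Attendee], start: datetime) -> Dict[str, object]:
    """Book a room near co-located attendees and video links for the rest."""
    local = [a for a in [organizer] + invitees if a.city == organizer.city]
    remote = [a for a in invitees if a.city != organizer.city]

    return {
        "room": f"{organizer.city} conference room for {len(local)} people",
        "video_links": [a.name for a in remote],
        "documents": f"collaboration space seeded from keywords in '{title}'",
        "record_and_transcribe": True,   # transcript tagged and forwarded afterwards
        "start": start.isoformat(),
    }

if __name__ == "__main__":
    me = Attendee("Ana", "New York")
    team = [Attendee("Ben", "New York"), Attendee("Cho", "Chicago")]
    print(schedule_meeting("Q3 logistics review", me, team,
                           datetime(2014, 1, 15, 10, 0)))
```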

The same applies to manufacturing, logistics, and other areas where applications can be tied into sensors and computing power. “If I have a factory where a machine has a belt that needs to be reordered every five years and it auto re-orders and it gets shipped without the need for human interaction, that changes the whole dynamics of how you operate,” Nichols said. “If you can take that and plug it into a proper workflow, you’re going to see an entirely new sort of workforce. That’s not that far away.”

Wearable devices like Google’s Glass will also feed into the new workplace. Wearable tech has been in use in some industries for decades, and in some cases it’s just an evolution from communication systems already used in many retail and manufacturing environments. But the ability to add augmented reality—a data overlay on top of a real world location—and to collect information without reaching for a device will quickly get traction in many enterprises.

Read the entire article here.

Image: Commodore PET (Personal Electronic Transactor) 2001 Series, circa 1977. Courtesy of Wikipedia.

Ray Bradbury’s Real World Dystopia

Ray Bradbury’s death on June 5 reminds us of his uncanny gift for inventing a future that is much like our modern day reality.

Bradbury’s body of work, beginning in the early 1940s, introduced us to ATMs, wall-mounted flat-screen TVs, ear-piece radios, online social networks, self-driving cars, and electronic surveillance. Bravely and presciently he also warned us of technologically induced cultural amnesia, social isolation, indifference to violence, and dumbed-down 24/7 mass media.

An especially thoughtful opinion from author Tim Kreider on Bradbury’s life as a “misanthropic humanist”.

From the New York Times:

If you’d wanted to know which way the world was headed in the mid-20th century, you wouldn’t have found much indication in any of the day’s literary prizewinners. You’d have been better advised to consult a book from a marginal genre with a cover illustration of a stricken figure made of newsprint catching fire.

Prescience is not the measure of a science-fiction author’s success — we don’t value the work of H. G. Wells because he foresaw the atomic bomb or Arthur C. Clarke for inventing the communications satellite — but it is worth pausing, on the occasion of Ray Bradbury’s death, to notice how uncannily accurate was his vision of the numb, cruel future we now inhabit.

Mr. Bradbury’s most famous novel, “Fahrenheit 451,” features wall-size television screens that are the centerpieces of “parlors” where people spend their evenings watching interactive soaps and vicious slapstick, live police chases and true-crime dramatizations that invite viewers to help catch the criminals. People wear “seashell” transistor radios that fit into their ears. Note the perversion of quaint terms like “parlor” and “seashell,” harking back to bygone days and vanished places, where people might visit with their neighbors or listen for the sound of the sea in a chambered nautilus.

Mr. Bradbury didn’t just extrapolate the evolution of gadgetry; he foresaw how it would stunt and deform our psyches. “It’s easy to say the wrong thing on telephones; the telephone changes your meaning on you,” says the protagonist of the prophetic short story “The Murderer.” “First thing you know, you’ve made an enemy.”

Anyone who’s had his intended tone flattened out or irony deleted by e-mail and had to explain himself knows what he means. The character complains that he’s relentlessly pestered with calls from friends and employers, salesmen and pollsters, people calling simply because they can. Mr. Bradbury’s vision of “tired commuters with their wrist radios, talking to their wives, saying, ‘Now I’m at Forty-third, now I’m at Forty-fourth, here I am at Forty-ninth, now turning at Sixty-first’” has gone from science-fiction satire to dreary realism.

“It was all so enchanting at first,” muses our protagonist. “They were almost toys, to be played with, but the people got too involved, went too far, and got wrapped up in a pattern of social behavior and couldn’t get out, couldn’t admit they were in, even.”

Most of all, Mr. Bradbury knew how the future would feel: louder, faster, stupider, meaner, increasingly inane and violent. Collective cultural amnesia, anhedonia, isolation. The hysterical censoriousness of political correctness. Teenagers killing one another for kicks. Grown-ups reading comic books. A postliterate populace. “I remember the newspapers dying like huge moths,” says the fire captain in “Fahrenheit,” written in 1953. “No one wanted them back. No one missed them.” Civilization drowned out and obliterated by electronic chatter. The book’s protagonist, Guy Montag, secretly trying to memorize the Book of Ecclesiastes on a train, finally leaps up screaming, maddened by an incessant jingle for “Denham’s Dentifrice.” A man is arrested for walking on a residential street. Everyone locked indoors at night, immersed in the social lives of imaginary friends and families on TV, while the government bombs someone on the other side of the planet. Does any of this sound familiar?

The hero of “The Murderer” finally goes on a rampage and smashes all the yammering, blatting devices around him, expressing remorse only over the Insinkerator — “a practical device indeed,” he mourns, “which never said a word.” It’s often been remarked that for a science-fiction writer, Mr. Bradbury was something of a Luddite — anti-technology, anti-modern, even anti-intellectual. (“Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers,” he challenged a Wired magazine interviewer, and swore he would “outcreate” every one.)

But it was more complicated than that; his objections were not so much reactionary or political as they were aesthetic. He hated ugliness, noise and vulgarity. He opposed the kind of technology that deadened imagination, the modernity that would trash the past, the kind of intellectualism that tried to centrifuge out awe and beauty. He famously did not care to drive or fly, but he was a passionate proponent of space travel, not because of its practical benefits but because he saw it as the great spiritual endeavor of the age, our generation’s cathedral building, a bid for immortality among the stars.

Read the entire article after the jump.

Image courtesy of Technorati.