Tag Archives: information

The Post-Capitalism Dream

I’m not sure that I fully agree with the premises and conclusions that author Paul Mason outlines in his essay below, excerpted from his new book, Postcapitalism (published on 30 July 2015). However, I’d like to believe that we could all very soon thrive in a much more equitable and socially just future society. While the sharing economy has gone some way toward democratizing work effort, Mason points out other, and growing, areas of society that are marching to the beat of a different, non-capitalist drum: volunteerism, alternative currencies, cooperatives, the gig economy, self-managed spaces, social sharing, time banks. This is all good.

It will undoubtedly take generations for society to grapple with the consequences of these shifts and, more importantly, with the ongoing and accelerating upheaval wrought by ubiquitous automation. Meanwhile, the vested interests — the capitalist heads of state, the oligarchs, the monopolists, the aging plutocrats and their assorted (political) sycophants — will most certainly fight until the very bitter end to maintain an iron grip on the invisible hand of the market.

From the Guardian:

The red flags and marching songs of Syriza during the Greek crisis, plus the expectation that the banks would be nationalised, revived briefly a 20th-century dream: the forced destruction of the market from above. For much of the 20th century this was how the left conceived the first stage of an economy beyond capitalism. The force would be applied by the working class, either at the ballot box or on the barricades. The lever would be the state. The opportunity would come through frequent episodes of economic collapse.

Instead over the past 25 years it has been the left’s project that has collapsed. The market destroyed the plan; individualism replaced collectivism and solidarity; the hugely expanded workforce of the world looks like a “proletariat”, but no longer thinks or behaves as it once did.

If you lived through all this, and disliked capitalism, it was traumatic. But in the process technology has created a new route out, which the remnants of the old left – and all other forces influenced by it – have either to embrace or die. Capitalism, it turns out, will not be abolished by forced-march techniques. It will be abolished by creating something more dynamic that exists, at first, almost unseen within the old system, but which will break through, reshaping the economy around new values and behaviours. I call this postcapitalism.

As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy. The biggest information product in the world – Wikipedia – is made by volunteers for free, abolishing the encyclopedia business and depriving the advertising industry of an estimated $3bn a year in revenue.

Almost unnoticed, in the niches and hollows of the market system, whole swaths of economic life are beginning to move to a different rhythm. Parallel currencies, time banks, cooperatives and self-managed spaces have proliferated, barely noticed by the economics profession, and often as a direct result of the shattering of the old structures in the post-2008 crisis.

You only find this new economy if you look hard for it. In Greece, when a grassroots NGO mapped the country’s food co-ops, alternative producers, parallel currencies and local exchange systems they found more than 70 substantive projects and hundreds of smaller initiatives ranging from squats to carpools to free kindergartens. To mainstream economics such things seem barely to qualify as economic activity – but that’s the point. They exist because they trade, however haltingly and inefficiently, in the currency of postcapitalism: free time, networked activity and free stuff. It seems a meagre and unofficial and even dangerous thing from which to craft an entire alternative to a global system, but so did money and credit in the age of Edward III.

New forms of ownership, new forms of lending, new legal contracts: a whole business subculture has emerged over the past 10 years, which the media has dubbed the “sharing economy”. Buzzwords such as the “commons” and “peer-production” are thrown around, but few have bothered to ask what this development means for capitalism itself.

I believe it offers an escape route – but only if these micro-level projects are nurtured, promoted and protected by a fundamental change in what governments do. And this must be driven by a change in our thinking – about technology, ownership and work. So that, when we create the elements of the new system, we can say to ourselves, and to others: “This is no longer simply my survival mechanism, my bolt hole from the neoliberal world; this is a new way of living in the process of formation.”

The power of imagination will become critical. In an information society, no thought, debate or dream is wasted – whether conceived in a tent camp, prison cell or the table football space of a startup company.

As with virtual manufacturing, in the transition to postcapitalism the work done at the design stage can reduce mistakes in the implementation stage. And the design of the postcapitalist world, as with software, can be modular. Different people can work on it in different places, at different speeds, with relative autonomy from each other. If I could summon one thing into existence for free it would be a global institution that modelled capitalism correctly: an open source model of the whole economy; official, grey and black. Every experiment run through it would enrich it; it would be open source and with as many datapoints as the most complex climate models.

The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next.

Is it utopian to believe we’re on the verge of an evolution beyond capitalism? We live in a world in which gay men and women can marry, and in which contraception has, within the space of 50 years, made the average working-class woman freer than the craziest libertine of the Bloomsbury era. Why do we, then, find it so hard to imagine economic freedom?

It is the elites – cut off in their dark-limo world – whose project looks as forlorn as that of the millennial sects of the 19th century. The democracy of riot squads, corrupt politicians, magnate-controlled newspapers and the surveillance state looks as phoney and fragile as East Germany did 30 years ago.

All readings of human history have to allow for the possibility of a negative outcome. It haunts us in the zombie movie, the disaster movie, in the post-apocalyptic wasteland of films such as The Road or Elysium. But why should we not form a picture of the ideal life, built out of abundant information, non-hierarchical work and the dissociation of work from wages?

Millions of people are beginning to realise they have been sold a dream at odds with what reality can deliver. Their response is anger – and retreat towards national forms of capitalism that can only tear the world apart. Watching these emerge, from the pro-Grexit left factions in Syriza to the Front National and the isolationism of the American right, has been like watching the nightmares we had during the Lehman Brothers crisis come true.

We need more than just a bunch of utopian dreams and small-scale horizontal projects. We need a project based on reason, evidence and testable designs, that cuts with the grain of history and is sustainable by the planet. And we need to get on with it.

Read the excerpt here.

Image: The Industrial Workers of the World poster “Pyramid of Capitalist System” (1911). Courtesy of Wikipedia. Public Domain.

Multitasking: A Powerful and Diabolical Illusion

Our increasingly ubiquitous technology makes possible all manner of things that would have been insurmountable just decades ago. We carry smartphones that pack more computational power than the mainframes of just a generation ago. Yet for all this power at our fingertips we seem to forget that we are still very much human animals with limitations. One such “shortcoming” [your friendly editor believes it’s a boon] is our inability to multitask like our phones. I’ve written about this before, and am compelled to do so again after reading this thoughtful essay by Daniel J. Levitin, extracted from his book The Organized Mind: Thinking Straight in the Age of Information Overload. I even had to use his phrasing for the title of this post.

From the Guardian:

Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.

Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.

But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient.

Multitasking has been found to increase the production of the stress hormone cortisol as well as the fight-or-flight hormone adrenaline, which can overstimulate your brain and cause mental fog or scrambled thinking. Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation. To make matters worse, the prefrontal cortex has a novelty bias, meaning that its attention can be easily hijacked by something new – the proverbial shiny objects we use to entice infants, puppies, and kittens. The irony here for those of us who are trying to focus amid competing activities is clear: the very brain region we need to rely on for staying on task is easily distracted. We answer the phone, look up something on the internet, check our email, send an SMS, and each of these things tweaks the novelty-seeking, reward-seeking centres of the brain, causing a burst of endogenous opioids (no wonder it feels so good!), all to the detriment of our staying on task. It is the ultimate empty-caloried brain candy. Instead of reaping the big rewards that come from sustained, focused effort, we instead reap empty rewards from completing a thousand little sugar-coated tasks.

In the old days, if the phone rang and we were busy, we either didn’t answer or we turned the ringer off. When all phones were wired to a wall, there was no expectation of being able to reach us at all times – one might have gone out for a walk or been between places – and so if someone couldn’t reach you (or you didn’t feel like being reached), it was considered normal. Now more people have mobile phones than have toilets. This has created an implicit expectation that you should be able to reach someone when it is convenient for you, regardless of whether it is convenient for them. This expectation is so ingrained that people in meetings routinely answer their mobile phones to say, “I’m sorry, I can’t talk now, I’m in a meeting.” Just a decade or two ago, those same people would have let a landline on their desk go unanswered during a meeting, so different were the expectations for reachability.

Just having the opportunity to multitask is detrimental to cognitive performance. Glenn Wilson, former visiting professor of psychology at Gresham College, London, calls it info-mania. His research found that being in a situation where you are trying to concentrate on a task, and an email is sitting unread in your inbox, can reduce your effective IQ by 10 points. And although people ascribe many benefits to marijuana, including enhanced creativity and reduced pain and stress, it is well documented that its chief ingredient, cannabinol, activates dedicated cannabinol receptors in the brain and interferes profoundly with memory and with our ability to concentrate on several things at once. Wilson showed that the cognitive losses from multitasking are even greater than the cognitive losses from pot-smoking.

Russ Poldrack, a neuroscientist at Stanford, found that learning information while multitasking causes the new information to go to the wrong part of the brain. If students study and watch TV at the same time, for example, the information from their schoolwork goes into the striatum, a region specialised for storing new procedures and skills, not facts and ideas. Without the distraction of TV, the information goes into the hippocampus, where it is organised and categorised in a variety of ways, making it easier to retrieve. MIT’s Earl Miller adds, “People can’t do [multitasking] very well, and when they say they can, they’re deluding themselves.” And it turns out the brain is very good at this deluding business.

Then there are the metabolic costs that I wrote about earlier. Asking the brain to shift attention from one activity to another causes the prefrontal cortex and striatum to burn up oxygenated glucose, the same fuel they need to stay on task. And the kind of rapid, continual shifting we do with multitasking causes the brain to burn through fuel so quickly that we feel exhausted and disoriented after even a short time. We’ve literally depleted the nutrients in our brain. This leads to compromises in both cognitive and physical performance. Among other things, repeated task switching leads to anxiety, which raises levels of the stress hormone cortisol in the brain, which in turn can lead to aggressive and impulsive behaviour. By contrast, staying on task is controlled by the anterior cingulate and the striatum, and once we engage the central executive mode, staying in that state uses less energy than multitasking and actually reduces the brain’s need for glucose.

To make matters worse, lots of multitasking requires decision-making: Do I answer this text message or ignore it? How do I respond to this? How do I file this email? Do I continue what I’m working on now or take a break? It turns out that decision-making is also very hard on your neural resources and that little decisions appear to take up as much energy as big ones. One of the first things we lose is impulse control. This rapidly spirals into a depleted state in which, after making lots of insignificant decisions, we can end up making truly bad decisions about something important. Why would anyone want to add to their daily weight of information processing by trying to multitask?

Read the entire article here.

Google: The Standard Oil of Our Age

Google’s aim to organize the world’s information sounds benign enough. But delve a little deeper into its research and development efforts or witness its boundless encroachment into advertising, software, phones, glasses, cars, home automation, travel, internet services, artificial intelligence, robotics, online shopping (and so on), and you may get a more uneasy and prickly sensation. Is Google out to organize information or you? Perhaps it’s time to begin thinking about Google as a corporate hegemon: not quite a monopoly yet, but so powerful that countermeasures become warranted.

An open letter, excerpted below, from Mathias Döpfner, CEO of Axel Springer AG, does us all a service by raising the alarm.

From the Guardian:

Dear Eric Schmidt,

As you know, I am a great admirer of Google’s entrepreneurial success. Google’s employees are always extremely friendly to us and to other publishing houses, but we are not communicating with each other on equal terms. How could we? Google doesn’t need us. But we need Google. We are afraid of Google. I must state this very clearly and frankly, because few of my colleagues dare do so publicly. And as the biggest among the small, perhaps it is also up to us to be the first to speak out in this debate. You yourself speak of the new power of the creators, owners, and users.

In the long term I’m not so sure about the users. Power is soon followed by powerlessness. And this is precisely the reason why we now need to have this discussion in the interests of the long-term integrity of the digital economy’s ecosystem. This applies to competition – not only economic, but also political. As the situation stands, your company will play a leading role in the various areas of our professional and private lives – in the house, in the car, in healthcare, in robotronics. This is a huge opportunity and a no less serious threat. I am afraid that it is simply not enough to state, as you do, that you want to make the world a “better place”.

Google lists its own products, from e-commerce to pages from its own Google+ network, higher than those of its competitors, even if these are sometimes of less value for consumers and should not be displayed in accordance with the Google algorithm. It is not even clearly pointed out to the user that these search results are the result of self-advertising. Even when a Google service has fewer visitors than that of a competitor, it appears higher up the page until it eventually also receives more visitors.

You know very well that this would result in long-term discrimination against, and weakening of, any competition, meaning that Google would be able to develop its superior market position still further. And that this would further weaken the European digital economy in particular.

This also applies to the large and even more problematic set of issues concerning data security and data utilisation. Ever since Edward Snowden triggered the NSA affair, and ever since the close relations between major American online companies and the American secret services became public, the social climate – at least in Europe – has fundamentally changed. People have become more sensitive about what happens to their user data. Nobody knows as much about its customers as Google. Even private or business emails are read by Gmail and, if necessary, can be evaluated. You yourself said in 2010: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” This is a remarkably honest sentence. The question is: are users happy with the fact that this information is used not only for commercial purposes – which may have many advantages, yet a number of spooky negative aspects as well – but could end up in the hands of the intelligence services, and to a certain extent already has?

Google is sitting on the entire current data trove of humanity, like the giant Fafner in The Ring of the Nibelung: “Here I lie and here I hold.” I hope you are aware of your company’s special responsibility. If fossil fuels were the fuels of the 20th century, then those of the 21st century are surely data and user profiles. We need to ask ourselves whether competition can generally still function in the digital age, if data is so extensively concentrated in the hands of one party.

There is a quote from you in this context that concerns me. In 2009 you said: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” The essence of freedom is precisely the fact that I am not obliged to disclose everything that I am doing, that I have a right to confidentiality and, yes, even to secrets; that I am able to determine for myself what I wish to disclose about myself. The individual right to this is what makes a democracy. Only dictatorships want transparent citizens instead of a free press.

Against this background, it greatly concerns me that Google – which has just announced the acquisition of drone manufacturer Titan Aerospace – has been seen for some time as being behind a number of planned enormous ships and floating working environments that can cruise and operate in the open ocean. What is the reason for this development? You don’t have to be a conspiracy theorist to find this alarming.

Historically, monopolies have never survived in the long term. Either they have failed as a result of their complacency, which breeds its own success, or they have been weakened by competition – both unlikely scenarios in Google’s case. Or they have been restricted by political initiatives.

Another way would be voluntary self-restraint on the part of the winner. Is it really smart to wait until the first serious politician demands the breakup of Google? Or even worse – until the people refuse to follow?

Sincerely yours,

Mathias Döpfner

Read the entire article here.

 

A Case for Slow Reading

With 24/7 infotainment available to us through any device, anywhere, it is more than likely that these immense torrents of competing words, images and sounds will have an effect on our reading. This is particularly evident online, where consumers of information are increasingly scanning and skimming — touching only the bare surface of an article — before clicking a link and moving elsewhere (and so on) across the digital ocean. The fragmentation of this experience is actually rewiring our brains, and as some researchers suggest, perhaps not for the best.

From the Washington Post:

Claire Handscombe has a commitment problem online. Like a lot of Web surfers, she clicks on links posted on social networks, reads a few sentences, looks for exciting words, and then grows restless, scampering off to the next page she probably won’t commit to.

“I give it a few seconds — not even minutes — and then I’m moving again,” says Handscombe, a 35-year-old graduate student in creative writing at American University.

But it’s not just online anymore. She finds herself behaving the same way with a novel.

“It’s like your eyes are passing over the words but you’re not taking in what they say,” she confessed. “When I realize what’s happening, I have to go back and read again and again.”

To cognitive neuroscientists, Handscombe’s experience is the subject of great fascination and growing alarm. Humans, they warn, seem to be developing digital brains with new circuits for skimming through the torrent of information online. This alternative way of reading is competing with traditional deep reading circuitry developed over several millennia.

“I worry that the superficial way we read during the day is affecting us when we have to read with more in-depth processing,” said Maryanne Wolf, a Tufts University cognitive neuroscientist and the author of “Proust and the Squid: The Story and Science of the Reading Brain.”

If the rise of nonstop cable TV news gave the world a culture of sound bites, the Internet, Wolf said, is bringing about an eye byte culture. Time spent online — on desktop and mobile devices — was expected to top five hours per day in 2013 for U.S. adults, according to eMarketer, which tracks digital behavior. That’s up from three hours in 2010.

Word lovers and scientists have called for a “slow reading” movement, taking a branding cue from the “slow food” movement. They are battling not just cursory sentence galloping but the constant social network and e-mail temptations that lurk on our gadgets — the bings and dings that interrupt “Call me Ishmael.”

Researchers are working to get a clearer sense of the differences between online and print reading — comprehension, for starters, seems better with paper — and are grappling with what these differences could mean not only for enjoying the latest Pat Conroy novel but for understanding difficult material at work and school. There is concern that young children’s affinity and often mastery of their parents’ devices could stunt the development of deep reading skills.

The brain is the innocent bystander in this new world. It just reflects how we live.

“The brain is plastic its whole life span,” Wolf said. “The brain is constantly adapting.”

Wolf, one of the world’s foremost experts on the study of reading, was startled last year to discover her brain was apparently adapting, too. After a day of scrolling through the Web and hundreds of e-mails, she sat down one evening to read Hermann Hesse’s “The Glass Bead Game.”

“I’m not kidding: I couldn’t do it,” she said. “It was torture getting through the first page. I couldn’t force myself to slow down so that I wasn’t skimming, picking out key words, organizing my eye movements to generate the most information at the highest speed. I was so disgusted with myself.”

Adapting to read

The brain was not designed for reading. There are no genes for reading like there are for language or vision. But spurred by the emergence of Egyptian hieroglyphics, the Phoenician alphabet, Chinese paper and, finally, the Gutenberg press, the brain has adapted to read.

Before the Internet, the brain read mostly in linear ways — one page led to the next page, and so on. Sure, there might be pictures mixed in with the text, but there didn’t tend to be many distractions. Reading in print even gave us a remarkable ability to remember where key information was in a book simply by the layout, researchers said. We’d know a protagonist died on the page with the two long paragraphs after the page with all that dialogue.

The Internet is different. With so much information, hyperlinked text, videos alongside words and interactivity everywhere, our brains form shortcuts to deal with it all — scanning, searching for key words, scrolling up and down quickly. This is nonlinear reading, and it has been documented in academic studies. Some researchers believe that for many people, this style of reading is beginning to invade when dealing with other mediums as well.

“We’re spending so much time touching, pushing, linking, scrolling and jumping through text that when we sit down with a novel, your daily habits of jumping, clicking, linking is just ingrained in you,” said Andrew Dillon, a University of Texas professor who studies reading. “We’re in this new era of information behavior, and we’re beginning to see the consequences of that.”

Brandon Ambrose, a 31-year-old Navy financial analyst who lives in Alexandria, knows of those consequences.

His book club recently read “The Interestings,” a best-seller by Meg Wolitzer. When the club met, he realized he had missed a number of the book’s key plot points. It hit him that he had been scanning for information about one particular aspect of the book, just as he might scan for one particular fact on his computer screen, where he spends much of his day.

“When you try to read a novel,” he said, “it’s almost like we’re not built to read them anymore, as bad as that sounds.”

Ramesh Kurup noticed something even more troubling. Working his way recently through a number of classic authors — George Eliot, Marcel Proust, that crowd — Kurup, 47, discovered that he was having trouble reading long sentences with multiple, winding clauses full of background information. Online sentences tend to be shorter, and the ones containing complicated information tend to link to helpful background material.

“In a book, there are no graphics or links to keep you on track,” Kurup said.

It’s easier to follow links, he thinks, than to keep track of so many clauses in page after page of long paragraphs.

 

Read the entire article here (but don’t click anywhere else).

An Ode to the Sinclair ZX81

What do the PDP-11, Commodore PET, Apple II and Sinclair’s ZX81 have in common? And, more importantly for anyone under the age of 35, what on earth are they? Well, these are, respectively, the first time-share mainframe, first personal computer, first Apple computer, and the first home-based computer programmed by theDiagonal’s friendly editor back in the pioneering days of computation.

The article below on technological nostalgia pushed the recall button, bringing back vivid memories of dot matrix printers, FORTRAN, large floppy diskettes (5 1/4 inch), reel-to-reel tape storage, and the 1 KB of programmable memory on the ZX81. In fact, despite the tremendous and now laughable limitations of the ZX81 — one had to save and load programs via a tape cassette — programming the device at home was a true revelation.

Some would go so far as to say that the first computer is very much like the first kiss or the first date. Well, not so. But fun nonetheless, and responsible for much in the way of future career paths.

From ars technica:

Being a bunch of technology journalists who make our living on the Web, we at Ars all have a fairly intimate relationship with computers dating back to our childhood—even if for some of us, that childhood is a bit more distant than others. And our technological careers and interests are at least partially shaped by the devices we started with.

So when Cyborgology’s David Banks recently offered up an autobiography of himself based on the computing devices he grew up with, it started a conversation among us about our first computing experiences. And being the most (chronologically) senior of Ars’ senior editors, the lot fell to me to pull these recollections together—since, in theory, I have the longest view of the bunch.

Considering the first computer I used was a Digital Equipment Corp. PDP-10, that theory is probably correct.

The DEC PDP-10 and DECWriter II Terminal

In 1979, I was a high school sophomore at Longwood High School in Middle Island, New York, just a short distance from the Department of Energy’s Brookhaven National Labs. And it was at Longwood that I got the first opportunity to learn how to code, thanks to a time-share connection we had to a DEC PDP-10 at the State University of New York at Stony Brook.

The computer lab at Longwood, which was run by the math department and overseen by my teacher Mr. Dennis Schultz, connected over a leased line to SUNY. It had, if I recall correctly, six LA36 DECWriter II terminals connected back to the mainframe—essentially dot-matrix printers with keyboards on them. Turn one on while the mainframe was down, and it would print over and over:

PDP-10 NOT AVAILABLE

Time at the terminals was a precious resource, so we were encouraged to write out all of our code by hand first on graph paper and then take a stack of cards over to the keypunch. This process did wonders for my handwriting. I spent an inordinate amount of time just writing BASIC and FORTRAN code in block letters on graph-paper notebooks.

One of my first fully original programs was an aerial combat program that used three-dimensional arrays to track the movement of the player’s and the programmed opponent’s airplanes as each maneuvered to get the other in its sights. Since the program output to pin-fed paper, that could be a tedious process.
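As a purely hypothetical illustration of the data structure Gallagher describes, the sketch below uses a three-dimensional array to hold two aircraft positions and a simple “in sights” test. The grid size, coordinates and rule are invented, not reconstructed from his BASIC or FORTRAN original.

```python
# Hypothetical sketch of the 3-D array idea described above. The grid size,
# starting positions, and the "in sights" rule are invented for illustration;
# this is not a reconstruction of the original BASIC/FORTRAN program.
GRID = 10  # assume a 10 x 10 x 10 block of airspace

# airspace[x][y][z] holds 0 (empty), 1 (player), or 2 (opponent)
airspace = [[[0] * GRID for _ in range(GRID)] for _ in range(GRID)]

player = (2, 5, 3)     # x, y, altitude
opponent = (7, 5, 3)

def place(grid, pos, marker):
    x, y, z = pos
    grid[x][y][z] = marker

def in_sights(shooter, target):
    # Assumed rule: the target is dead ahead when it shares the shooter's
    # row and altitude (same y and z coordinates).
    return shooter[1] == target[1] and shooter[2] == target[2]

place(airspace, player, 1)
place(airspace, opponent, 2)
print("Opponent in sights:", in_sights(player, opponent))
```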

At a certain point, Mr. Schultz, who had been more than tolerant of my enthusiasm, had to crack down—my code was using up more than half the school’s allotted system storage. I can’t imagine how much worse it would have been if we had video terminals.

Actually, I can imagine, because in my senior year I was introduced to the Apple II, video, and sound. The vastness of 360 kilobytes of storage and the ability to code at the keyboard were such a huge luxury after the spartan world of punch cards that I couldn’t contain myself. I soon coded a student parking pass database for my school—while also coding a Dungeons & Dragons character tracking system, complete with combat resolution and hit point tracking.

—Sean Gallagher

A printer terminal and an acoustic coupler

I never saw the computer that gave me my first computing experience, and I have little idea what it actually was. In fact, if I ever knew where it was located, I’ve since forgotten. But I do distinctly recall the gateway to it: a locked door to the left of the teacher’s desk in my high school biology lab. Fortunately, the guardian—commonly known as Mr. Dobrow—was excited about introducing some of his students to computers, and he let a number of us spend our lunch hours experimenting with the system.

And what a system it was. Behind the physical door was another gateway, this one electronic. Since the computer was located in another town, you had to dial in by modem. The modems of the day were something different entirely from what you may recall from AOL’s dialup heyday. Rather than plugging straight in to your phone line, you dialed in manually—on a rotary phone, no less—then dropped the speaker and mic carefully into two rubber receptacles spaced to accept the standard-issue hardware of the day. (And it was standard issue; AT&T was still a monopoly at the time.)

That modem was hooked into a sort of combination of line printer and keyboard. When you were entering text, the setup acted just like a typewriter. But as soon as you hit the return key, it transmitted, and the mysterious machine at the other end responded, sending characters back that were dutifully printed out by the same machine. This meant that an infinite loop would unleash a spray of paper, and it had to be terminated by hanging up the phone.

It took us a while to get to infinite loops, though. Mr. Dobrow started us off on small simulations of things like stock markets and malaria control. Eventually, we found a way to list all the programs available and discovered a Star Trek game. Photon torpedoes were deadly, but the phasers never seemed to work, so before too long one guy had the bright idea of trying to hack the game (although that wasn’t the term that we used). We were off.

—John Timmer

Read the entire article here.

Image: Sinclair ZX81. Courtesy of Wikipedia.

Pre-Twittersphere Infectious Information

While our 21st-century, always-on media and information-sharing circus pervades every nook and cranny of our daily lives, it is useful to note that pre-Twittersphere, ideas and information did get shared. Yes, useful news and even trivial memes did go viral back in the 1800s.

From Wired:

The story had everything — exotic locale, breathtaking engineering, Napoleon Bonaparte. No wonder the account of a lamplit flat-bottom boat journey through the Paris sewer went viral after it was published — on May 23, 1860.

At least 15 American newspapers reprinted it, exposing tens of thousands of readers to the dank wonders of the French city’s “splendid system of sewerage.”

Twitter is faster and HuffPo more sophisticated, but the parasitic dynamics of networked media were fully functional in the 19th century. For proof, look no further than the Infectious Texts project, a collaboration of humanities scholars and computer scientists.

The project expects to launch by the end of the month. When it does, researchers and the public will be able to comb through widely reprinted texts identified by mining 41,829 issues of 132 newspapers from the Library of Congress. While this first stage focuses on texts from before the Civil War, the project eventually will include the later 19th century and expand to include magazines and other publications, says Ryan Cordell, an assistant professor of English at Northeastern University and a leader of the project.

Some of the stories were printed in 50 or more newspapers, each with thousands to tens of thousands of subscribers. The most popular of them most likely were read by hundreds of thousands of people, Cordell says. Most have been completely forgotten. “Almost none of those are texts that scholars have studied, or even knew existed,” he said.

The tech may have been less sophisticated, but some barriers to virality were low in the 1800s. Before modern copyright laws there were no legal or even cultural barriers to borrowing content, Cordell says. Newspapers borrowed freely. Large papers often had an “exchange editor” whose job it was to read through other papers and clip out interesting pieces. “They were sort of like BuzzFeed employees,” Cordell said.

Clips got sorted into drawers according to length; when the paper needed, say, a 3-inch piece to fill a gap, they’d pluck out a story of the appropriate length and publish it, often verbatim.

Fast forward a century and a half and many of these newspapers have been scanned and digitized. Northeastern computer scientist David Smith developed an algorithm that mines this vast trove of text for reprinted items by hunting for clusters of five words that appear in the same sequence in multiple publications (Google uses a similar concept for its Ngram viewer).
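As a rough sketch of the matching idea described here (a toy illustration, not Smith’s actual algorithm), one can index every five-word sequence in each issue and count how many sequences any two issues share:

```python
# A minimal sketch of the matching idea described above: index every run of
# five consecutive words in each newspaper issue, then count how many
# five-word sequences each pair of issues shares. This illustrates the
# concept only; it is not David Smith's actual algorithm. The issue names
# and texts below are invented.
from collections import defaultdict
from itertools import combinations

def five_grams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 5]) for i in range(len(words) - 4)}

def shared_reprints(issues):
    """issues: dict mapping issue name -> full text."""
    index = defaultdict(set)          # 5-gram -> issues containing it
    for name, text in issues.items():
        for gram in five_grams(text):
            index[gram].add(name)
    matches = defaultdict(int)        # (issue A, issue B) -> shared 5-grams
    for gram, names in index.items():
        for a, b in combinations(sorted(names), 2):
            matches[(a, b)] += 1
    return dict(matches)

issues = {
    "Paper A, 23 May 1860": "the splendid system of sewerage beneath the streets of paris",
    "Paper B, 30 May 1860": "readers marvelled at the splendid system of sewerage beneath paris",
}
print(shared_reprints(issues))
```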

The project is sponsored by the NULab for Texts, Maps, and Networks at Northeastern and the Office of Digital Humanities at the National Endowment for the Humanities. Cordell says the main goal is to build a resource for other scholars, but he’s already capitalizing on it for his own research, using modern mapping and network analysis tools to explore how things went viral back then.

Counting page views from two centuries ago is anything but an exact science, but Cordell has used Census records to estimate how many people were living within a certain distance of where a particular piece was published and combined that with newspaper circulation data to estimate what fraction of the population would have seen it (a quarter to a third, for the most infectious texts, he says).
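The estimate Cordell describes is essentially back-of-the-envelope arithmetic. A hedged sketch, with entirely invented numbers standing in for the Census and circulation figures, might look like this:

```python
# A back-of-the-envelope sketch of the estimate described above. All of the
# numbers are invented placeholders; Cordell's real inputs come from Census
# records and newspaper circulation data.
def estimated_reach(local_population, circulation, readers_per_copy=3):
    # Assume each printed copy is read by a few people (shared households,
    # reading rooms), capped at the local population.
    readers = min(circulation * readers_per_copy, local_population)
    return readers / local_population

# Hypothetical town: 40,000 people within reach, 4,000 copies in circulation.
print(f"{estimated_reach(40_000, 4_000):.0%} of the local population")
```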

He’s also interested in mapping how the growth of the transcontinental railroad — and later the telegraph and wire services — changed the way information moved across the country. The animation below shows the spread of a single viral text, a poem by the Scottish poet Charles MacKay, overlaid on the developing railroad system. The one at the very bottom depicts how newspapers grew with the country from the colonial era to modern times, often expanding into a territory before the political boundaries had been drawn.

Read the entire article here.

Image: Courtesy of Ryan Cordell / Infectious texts project. Thicker lines indicate more content-sharing between 19th century newspapers.

Personalized Care Courtesy of Big Data

The era of truly personalized medicine and treatment plans may still be a fair way off, but thanks to big data initiatives predictive and preventative health is making significant progress. This bodes well for over-stretched healthcare systems, medical professionals, and those who need care and/or pay for it.

That said, it is useful to keep in mind that similar data in other domains, such as shopping, travel and media, has been delivering personalized content and services for quite some time. So healthcare information technology certainly lags where it should be leading, and a single explanation for why may be impossible to agree upon. However, it is encouraging to see the healthcare and medical information industries catching up.

From Technology Review:

On the ground floor of the Mount Sinai Medical Center’s new behemoth of a research and hospital building in Manhattan, rows of empty black metal racks sit waiting for computer processors and hard disk drives. They’ll house the center’s new computing cluster, adding to an existing $3 million supercomputer that hums in the basement of a nearby building.

The person leading the design of the new computer is Jeff Hammerbacher, a 30-year-old known for being Facebook’s first data scientist. Now Hammerbacher is applying the same data-crunching techniques used to target online advertisements, but this time for a powerful engine that will suck in medical information and spit out predictions that could cut the cost of health care.

With $3 trillion spent annually on health care in the U.S., it could easily be the biggest job for “big data” yet. “We’re going out on a limb—we’re saying this can deliver value to the hospital,” says Hammerbacher.

Mount Sinai has 1,406 beds plus a medical school and treats half a million patients per year. Increasingly, it’s run like an information business: it’s assembled a biobank with 26,735 patient DNA and plasma samples, it finished installing a $120 million electronic medical records system this year, and it has been spending heavily to recruit computing experts like Hammerbacher.

It’s all part of a “monstrously large bet that [data] is going to matter,” says Eric Schadt, the computational biologist who runs Mount Sinai’s Icahn Institute for Genomics and Multiscale Biology, where Hammerbacher is based, and who was himself recruited from the gene sequencing company Pacific Biosciences two years ago.

Mount Sinai hopes data will let it succeed in a health-care system that’s shifting dramatically. Perversely, because hospitals bill by the procedure, they tend to earn more the sicker their patients become. But health-care reform in Washington is pushing hospitals toward a new model, called “accountable care,” in which they will instead be paid to keep people healthy.

Mount Sinai is already part of an experiment that the federal agency overseeing Medicare has organized to test these economic ideas. Last year it joined 250 U.S. doctor’s practices, clinics, and other hospitals in agreeing to track patients more closely. If the medical organizations can cut costs with better results, they’ll share in the savings. If costs go up, they can face penalties.

The new economic incentives, says Schadt, help explain the hospital’s sudden hunger for data, and its heavy spending to hire 150 people during the last year just in the institute he runs. “It’s become ‘Hey, use all your resources and data to better assess the population you are treating,’” he says.

One way Mount Sinai is doing that already is with a computer model where factors like disease, past hospital visits, even race, are used to predict which patients stand the highest chance of returning to the hospital. That model, built using hospital claims data, tells caregivers which chronically ill people need to be showered with follow-up calls and extra help. In a pilot study, the program cut readmissions by half; now the risk score is being used throughout the hospital.
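The article does not say what kind of model Mount Sinai built. A common approach to this sort of readmission risk score is a logistic regression over claims-derived features; the sketch below uses invented features and coefficients purely for illustration.

```python
# A hedged sketch of a readmission risk score of the kind described above.
# The features and coefficients are invented for illustration; the article
# does not disclose Mount Sinai's actual model.
import math

WEIGHTS = {                  # hypothetical logistic-regression coefficients
    "prior_admissions": 0.45,
    "chronic_conditions": 0.60,
    "age_over_65": 0.30,
    "days_since_discharge": -0.02,
}
BIAS = -2.5

def readmission_risk(patient):
    score = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))   # logistic function -> probability

patient = {"prior_admissions": 3, "chronic_conditions": 2, "age_over_65": 1}
risk = readmission_risk(patient)
print(f"Estimated readmission risk: {risk:.0%}")
if risk > 0.5:
    print("Flag for follow-up calls and extra support")
```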

Hammerbacher’s new computing facility is designed to supercharge the discovery of such insights. It will run a version of Hadoop, software that spreads data across many computers and is popular in industries, like e-commerce, that generate large amounts of quick-changing information.
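Hadoop itself is a large Java framework, but the pattern it popularised, splitting data into chunks, mapping work over them in parallel and reducing the partial results, can be sketched in a few lines. The example below only illustrates that pattern using local processes rather than a cluster; it is not Mount Sinai’s setup.

```python
# Toy illustration of the map/reduce pattern that Hadoop popularised, using
# local worker processes instead of a cluster. The record text is invented;
# this only shows the idea of spreading work over many workers and merging
# the partial results.
from collections import Counter
from multiprocessing import Pool

def map_count(chunk):
    # "Map" step: each worker counts terms in its own chunk of records.
    return Counter(chunk.lower().split())

def reduce_counts(partials):
    # "Reduce" step: merge the per-worker counts into one total.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    chunks = [
        "icu admission sepsis lab",
        "sepsis readmission icu",
        "lab result icu follow-up",
    ]
    with Pool(processes=3) as pool:
        partial_counts = pool.map(map_count, chunks)
    print(reduce_counts(partial_counts))
```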

Patient data are slim by comparison, and not very dynamic. Records get added to infrequently—not at all if a patient visits another hospital. That’s a limitation, Hammerbacher says. Yet he hopes big-data technology will be used to search for connections between, say, hospital infections and the DNA of microbes present in an ICU, or to track data streaming in from patients who use at-home monitors.

One person he’ll be working with is Joel Dudley, director of biomedical informatics at Mount Sinai’s medical school. Dudley has been running information gathered on diabetes patients (like blood sugar levels, height, weight, and age) through an algorithm that clusters them into a weblike network of nodes. In “hot spots” where diabetic patients appear similar, he’s then trying to find out if they share genetic attributes. That way DNA information might add to predictions about patients, too.
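Dudley’s method isn’t spelled out, but one simple way to build the kind of “weblike network” described is to treat each patient as a node and connect any two whose measurements sit close together. The features, numbers and distance cutoff below are invented for illustration only.

```python
# An illustrative sketch of a patient-similarity network of the kind
# described above: patients are nodes, and an edge links any two whose
# measurements are close. The features, values, and threshold are invented;
# Dudley's actual method is not described in the article.
import math

patients = {                       # blood sugar (mg/dL), BMI, age
    "P1": (160, 31, 54),
    "P2": (155, 30, 58),
    "P3": (210, 27, 42),
    "P4": (205, 28, 45),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 15                     # assumed cutoff for "similar" patients

edges = []
names = sorted(patients)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if distance(patients[a], patients[b]) <= THRESHOLD:
            edges.append((a, b))

print("Similarity edges:", edges)  # clusters are the connected groups
```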

A goal of this work, which is still unpublished, is to replace the general guidelines doctors often use in deciding how to treat diabetics. Instead, new risk models—powered by genomics, lab tests, billing records, and demographics—could make up-to-date predictions about the individual patient a doctor is seeing, not unlike how a Web ad is tailored according to who you are and sites you’ve visited recently.

That is where the big data comes in. In the future, every patient will be represented by what Dudley calls “large dossier of data.” And before they are treated, or even diagnosed, the goal will be to “compare that to every patient that’s ever walked in the door at Mount Sinai,” he says. “[Then] you can say quantitatively what’s the risk for this person based on all the other patients we’ve seen.”

Read the entire article here.

Totalitarianism in the Age of the Internet

Google chair Eric Schmidt is in a very elite group. Not only does he run a major and very profitable U.S. corporation (and is thus, by extrapolation, a “googillionaire”), he has also been to North Korea.

We excerpt below Schmidt’s recent essay, with co-author Jared Cohen, about freedom in both the real and digital worlds.

From the Wall Street Journal:

How do you explain to people that they are a YouTube sensation, when they have never heard of YouTube or the Internet? That’s a question we faced during our January visit to North Korea, when we attempted to engage with the Pyongyang traffic police. You may have seen videos on the Web of the capital city’s “traffic cops,” whose ballerina-like street rituals, featured in government propaganda videos, have made them famous online. The men and women themselves, however—like most North Koreans—have never seen a Web page, used a desktop computer, or held a tablet or smartphone. They have never even heard of Google (or Bing, for that matter).

Even the idea of the Internet has not yet permeated the public’s consciousness in North Korea. When foreigners visit, the government stages Internet browsing sessions by having “students” look at pre-downloaded and preapproved content, spending hours (as they did when we were there) scrolling up and down their screens in totalitarian unison. We ended up trying to describe the Internet to North Koreans we met in terms of its values: free expression, freedom of assembly, critical thinking, meritocracy. These are uncomfortable ideas in a society where the “Respected Leader” is supposedly the source of all information and where the penalty for defying him is the persecution of you and your family for three generations.

North Korea is at the beginning of a cat-and-mouse game that’s playing out all around the world between repressive regimes and their people. In most of the world, the spread of connectivity has transformed people’s expectations of their governments. North Korea is one of the last holdouts. Until only a few years ago, the price for being caught there with an unauthorized cellphone was the death penalty. Cellphones are now more common in North Korea since the government decided to allow one million citizens to have them; and in parts of the country near the border, the Internet is sometimes within reach as citizens can sometimes catch a signal from China. None of this will transform the country overnight, but one thing is certain: Though it is possible to curb and monitor technology, once it is available, even the most repressive regimes are unable to put it back in the box.

What does this mean for governments and would-be revolutionaries? While technology has great potential to bring about change, there is a dark side to the digital revolution that is too often ignored. There is a turbulent transition ahead for autocratic regimes as more of their citizens come online, but technology doesn’t just help the good guys pushing for democratic reform—it can also provide powerful new tools for dictators to suppress dissent.

Fifty-seven percent of the world’s population still lives under some sort of autocratic regime. In the span of a decade, the world’s autocracies will go from having a minority of their citizens online to a majority. From Tehran to Beijing, autocrats are building the technology and training the personnel to suppress democratic dissent, often with the help of Western companies.

Of course, this is no easy task—and it isn’t cheap. The world’s autocrats will have to spend a great deal of money to build systems capable of monitoring and containing dissident energy. They will need cell towers and servers, large data centers, specialized software, legions of trained personnel and reliable supplies of basic resources like electricity and Internet connectivity. Once such an infrastructure is in place, repressive regimes then will need supercomputers to manage the glut of information.

Despite the expense, everything a regime would need to build an incredibly intimidating digital police state—including software that facilitates data mining and real-time monitoring of citizens—is commercially available right now. What’s more, once one regime builds its surveillance state, it will share what it has learned with others. We know that autocratic governments share information, governance strategies and military hardware, and it’s only logical that the configuration that one state designs (if it works) will proliferate among its allies and assorted others. Companies that sell data-mining software, surveillance cameras and other products will flaunt their work with one government to attract new business. It’s the digital analog to arms sales, and like arms sales, it will not be cheap. Autocracies rich in natural resources—oil, gas, minerals—will be able to afford it. Poorer dictatorships might be unable to sustain the state of the art and find themselves reliant on ideologically sympathetic patrons.

And don’t think that the data being collected by autocracies is limited to Facebook posts or Twitter comments. The most important data they will collect in the future is biometric information, which can be used to identify individuals through their unique physical and biological attributes. Fingerprints, photographs and DNA testing are all familiar biometric data types today. Indeed, future visitors to repressive countries might be surprised to find that airport security requires not just a customs form and passport check, but also a voice scan. In the future, software for voice and facial recognition will surpass all the current biometric tests in terms of accuracy and ease of use.

Today’s facial-recognition systems use a camera to zoom in on an individual’s eyes, mouth and nose, and extract a “feature vector,” a set of numbers that describes key aspects of the image, such as the precise distance between the eyes. (Remember, in the end, digital images are just numbers.) Those numbers can be fed back into a large database of faces in search of a match. The accuracy of this software is limited today (by, among other things, pictures shot in profile), but the progress in this field is remarkable. A team at Carnegie Mellon demonstrated in a 2011 study that the combination of “off-the-shelf” facial recognition software and publicly available online data (such as social-network profiles) can match a large number of faces very quickly. With cloud computing, it takes just seconds to compare millions of faces. The accuracy improves with people who have many pictures of themselves available online—which, in the age of Facebook, is practically everyone.
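A toy sketch of the matching step described here: each face becomes a short feature vector, and recognition is a nearest-neighbour search over stored vectors. The four-number vectors below are invented; real systems use far longer, learned representations.

```python
# A toy sketch of the matching step described above: each face is reduced to
# a "feature vector" and recognition is a nearest-neighbour search over a
# database of stored vectors. The four-number vectors and names are invented;
# real systems use much longer, learned feature vectors.
import math

database = {
    "alice": [0.42, 0.91, 0.30, 0.55],   # e.g. eye spacing, nose width, ...
    "bob":   [0.60, 0.35, 0.80, 0.20],
    "carol": [0.45, 0.88, 0.33, 0.52],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, db, max_distance=0.15):
    name, dist = min(((n, euclidean(probe, v)) for n, v in db.items()),
                     key=lambda pair: pair[1])
    return name if dist <= max_distance else "no match"

probe = [0.44, 0.90, 0.31, 0.54]         # vector extracted from a new photo
print(identify(probe, database))          # closest stored entry: "alice"
```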

Dictators, of course, are not the only beneficiaries from advances in technology. In recent years, we have seen how large numbers of young people in countries such as Egypt and Tunisia, armed with little more than mobile phones, can fuel revolutions. Their connectivity has helped them to challenge decades of authority and control, hastening a process that, historically, has often taken decades. Still, given the range of possible outcomes in these situations—brutal crackdown, regime change, civil war, transition to democracy—it is also clear that technology is not the whole story.

Observers and participants alike have described the recent Arab Spring as “leaderless”—but this obviously has a downside to match its upside. In the day-to-day process of demonstrating, it was possible to retain a decentralized command structure (safer too, since the regimes could not kill the movement simply by capturing the leaders). But, over time, some sort of centralized authority must emerge if a democratic movement is to have any direction. Popular uprisings can overthrow dictators, but they’re only successful afterward if opposition forces have a plan and can execute it. Building a Facebook page does not constitute a plan.

History suggests that opposition movements need time to develop. Consider the African National Congress in South Africa. During its decades of exile from the apartheid state, the organization went through multiple iterations, and the men who would go on to become South African presidents (Nelson Mandela, Thabo Mbeki and Jacob Zuma) all had time to build their reputations, credentials and networks while honing their operational skills. Likewise with Lech Walesa and his Solidarity trade union in Eastern Europe. A decade passed before Solidarity leaders could contest seats in the Polish parliament, and their victory paved the way for the fall of communism.

Read the entire essay after the jump.

Image: North Korean students work in a computer lab. Courtesy of AP Photo/David Guttenfelder / Washington Post.

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers, or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, etymology of polysyllabic words, and so it goes.

So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer, smartphone, or pair of spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.

Yet, all this information will not directly make us any smarter; it is not applied knowledge nor is it experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly — freed from our need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

Your City as an Information Warehouse

Big data keeps getting bigger and computers keep getting faster. Some theorists believe that the universe is a giant computer, or a computer simulation, and that principles of information science govern the cosmos. While this notion is one of the more recent radical ideas advanced to explain our existence, there is no doubt that information is our future. Data surrounds us; we are becoming data points, and our cities are becoming information-rich databases.

From the Economist:

In 1995 George Gilder, an American writer, declared that “cities are leftover baggage from the industrial era.” Electronic communications would become so easy and universal that people and businesses would have no need to be near one another. Humanity, Mr Gilder thought, was “headed for the death of cities”.

It hasn’t turned out that way. People are still flocking to cities, especially in developing countries. Cisco’s Mr Elfrink reckons that in the next decade 100 cities, mainly in Asia, will reach a population of more than 1m. In rich countries, to be sure, some cities are sad shadows of their old selves (Detroit, New Orleans), but plenty are thriving. In Silicon Valley and the newer tech hubs what Edward Glaeser, a Harvard economist, calls “the urban ability to create collaborative brilliance” is alive and well.

Cheap and easy electronic communication has probably helped rather than hindered this. First, connectivity is usually better in cities than in the countryside, because it is more lucrative to build telecoms networks for dense populations than for sparse ones. Second, electronic chatter may reinforce rather than replace the face-to-face kind. In his 2011 book, “Triumph of the City”, Mr Glaeser theorises that this may be an example of what economists call “Jevons’s paradox”. In the 19th century the invention of more efficient steam engines boosted rather than cut the consumption of coal, because they made energy cheaper across the board. In the same way, cheap electronic communication may have made modern economies more “relationship-intensive”, requiring more contact of all kinds.

Recent research by Carlo Ratti, director of the SENSEable City Laboratory at the Massachusetts Institute of Technology, and colleagues, suggests there is something to this. The study, based on the geographical pattern of 1m mobile-phone calls in Portugal, found that calls between phones far apart (a first contact, perhaps) are often followed by a flurry within a small area (just before a meeting).

Data deluge

A third factor is becoming increasingly important: the production of huge quantities of data by connected devices, including smartphones. These are densely concentrated in cities, because that is where the people, machines, buildings and infrastructures that carry and contain them are packed together. They are turning cities into vast data factories. “That kind of merger between physical and digital environments presents an opportunity for us to think about the city almost like a computer in the open air,” says Assaf Biderman of the SENSEable lab. As those data are collected and analysed, and the results are recycled into urban life, they may turn cities into even more productive and attractive places.

Some of these “open-air computers” are being designed from scratch, most of them in Asia. At Songdo, a South Korean city built on reclaimed land, Cisco has fitted every home and business with video screens and supplied clever systems to manage transport and the use of energy and water. But most cities are stuck with the infrastructure they have, at least in the short term. Exploiting the data they generate gives them a chance to upgrade it. Potholes in Boston, for instance, are reported automatically if the drivers of the cars that hit them have an app called Street Bump on their smartphones. And, particularly in poorer countries, places without a well-planned infrastructure have the chance of a leap forward. Researchers from the SENSEable lab have been working with informal waste-collecting co-operatives in São Paulo whose members sift the city’s rubbish for things to sell or recycle. By attaching tags to the trash, the researchers have been able to help the co-operatives work out the best routes through the city so they can raise more money and save time and expense.
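
The routing piece of a project like this boils down to a shortest-path computation over a street graph. The snippet below is a minimal sketch, not the SENSEable lab’s actual system: the intersections, distances and the choice of the networkx library are all invented for illustration.

```python
# A minimal sketch of route planning over a street network, assuming the
# networkx library. The toy graph below is invented for illustration and is
# unrelated to the actual Sao Paulo data.
import networkx as nx

streets = nx.Graph()
# Edges are (corner, corner, distance in km); the values are hypothetical.
streets.add_weighted_edges_from([
    ("depot", "market", 2.1),
    ("depot", "plaza", 1.4),
    ("plaza", "market", 1.2),
    ("market", "recycler", 3.0),
    ("plaza", "recycler", 4.5),
])

# Dijkstra's algorithm finds the cheapest route from the depot to the recycler.
route = nx.shortest_path(streets, "depot", "recycler", weight="weight")
length = nx.shortest_path_length(streets, "depot", "recycler", weight="weight")
print(route, length)   # ['depot', 'market', 'recycler'] 5.1
```

In a real deployment the edge weights might be observed travel times or collection yields rather than raw distances, but the optimisation step keeps this shape.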

Exploiting data may also mean fewer traffic jams. A few years ago Alexandre Bayen, of the University of California, Berkeley, and his colleagues ran a project (with Nokia, then the leader of the mobile-phone world) to collect signals from participating drivers’ smartphones, showing where the busiest roads were, and feed the information back to the phones, with congested routes glowing red. These days this feature is common on smartphones. Mr Bayen’s group and IBM Research are now moving on to controlling traffic and thus easing jams rather than just telling drivers about them. Within the next three years the team is due to build a prototype traffic-management system for California’s Department of Transportation.
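
In outline, congestion mapping of this kind reduces to grouping anonymised speed reports by road segment and flagging segments whose average speed falls well below free flow. The following is a toy sketch with invented segment names, speeds and thresholds, not the actual Berkeley and Nokia pipeline.

```python
# A simplified sketch of congestion flagging from anonymised speed reports
# grouped by road segment. Segment names, speeds and the 40% threshold are
# invented for illustration only.
from collections import defaultdict

# (road_segment, reported speed in km/h) pairs, as they might arrive from phones.
reports = [
    ("I-880 N mile 32", 18), ("I-880 N mile 32", 22), ("I-880 N mile 32", 15),
    ("CA-24 E mile 5", 95), ("CA-24 E mile 5", 88),
]
free_flow = {"I-880 N mile 32": 105, "CA-24 E mile 5": 105}  # assumed free-flow speeds

samples = defaultdict(list)
for segment, speed in reports:
    samples[segment].append(speed)

for segment, speeds in samples.items():
    average = sum(speeds) / len(speeds)
    congested = average < 0.4 * free_flow[segment]   # the "glowing red" threshold
    print(f"{segment}: avg {average:.0f} km/h -> {'RED' if congested else 'clear'}")
```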

Cleverer cars should help, too, by communicating with each other and warning drivers of unexpected changes in road conditions. Eventually they may not even have drivers at all. And thanks to all those data they may be cleaner, too. At the Fraunhofer FOKUS Institute in Berlin, Ilja Radusch and his colleagues show how hybrid cars can be automatically instructed to switch from petrol to electric power if local air quality is poor, say, or if they are going past a school.

Read the entire article after the jump.

Images of cities courtesy of Google search.

The Promise of Quantum Computation

Advances in quantum physics and in the associated realm of quantum information promise to revolutionize computing. Imagine a computer many trillions of times faster than present-day supercomputers — well, that’s where we are heading.

From the New York Times:

This summer, physicists celebrated a triumph that many consider fundamental to our understanding of the physical world: the discovery, after a multibillion-dollar effort, of the Higgs boson.

Given its importance, many of us in the physics community expected the event to earn this year’s Nobel Prize in Physics. Instead, the award went to achievements in a field far less well known and vastly less expensive: quantum information.

It may not catch as many headlines as the hunt for elusive particles, but the field of quantum information may soon answer questions even more fundamental — and upsetting — than the ones that drove the search for the Higgs. It could well usher in a radical new era of technology, one that makes today’s fastest computers look like hand-cranked adding machines.

The basis for both the work behind the Higgs search and quantum information theory is quantum physics, the most accurate and powerful theory in all of science. With it we created remarkable technologies like the transistor and the laser, which, in time, were transformed into devices — computers and iPhones — that reshaped human culture.

But the very usefulness of quantum physics masked a disturbing dissonance at its core. There are mysteries — summed up neatly in Werner Heisenberg’s famous adage “atoms are not things” — lurking at the heart of quantum physics suggesting that our everyday assumptions about reality are no more than illusions.

Take the “principle of superposition,” which holds that things at the subatomic level can be literally two places at once. Worse, it means they can be two things at once. This superposition animates the famous parable of Schrödinger’s cat, whereby a wee kitty is left both living and dead at the same time because its fate depends on a superposed quantum particle.

For decades such mysteries were debated but never pushed toward resolution, in part because no resolution seemed possible and, in part, because useful work could go on without resolving them (an attitude sometimes called “shut up and calculate”). Scientists could attract money and press with ever larger supercolliders while ignoring such pesky questions.

But as this year’s Nobel recognizes, that’s starting to change. Increasingly clever experiments are exploiting advances in cheap, high-precision lasers and atomic-scale transistors. Quantum information studies often require nothing more than some equipment on a table and a few graduate students. In this way, quantum information’s progress has come not by bludgeoning nature into submission but by subtly tricking it to step into the light.

Take the superposition debate. One camp claims that a deeper level of reality lies hidden beneath all the quantum weirdness. Once the so-called hidden variables controlling reality are exposed, they say, the strangeness of superposition will evaporate.

Another camp claims that superposition shows us that potential realities matter just as much as the single, fully manifested one we experience. But what collapses the potential electrons in their two locations into the one electron we actually see? According to this interpretation, it is the very act of looking; the measurement process collapses an ethereal world of potentials into the one real world we experience.

And a third major camp argues that particles can be two places at once only because the universe itself splits into parallel realities at the moment of measurement, one universe for each particle location — and thus an infinite number of ever splitting parallel versions of the universe (and us) are all evolving alongside one another.

These fundamental questions might have lived forever at the intersection of physics and philosophy. Then, in the 1980s, a steady advance of low-cost, high-precision lasers and other “quantum optical” technologies began to appear. With these new devices, researchers, including this year’s Nobel laureates, David J. Wineland and Serge Haroche, could trap and subtly manipulate individual atoms or light particles. Such exquisite control of the nano-world allowed them to design subtle experiments probing the meaning of quantum weirdness.

Soon at least one interpretation, the most common sense version of hidden variables, was completely ruled out.

At the same time new and even more exciting possibilities opened up as scientists began thinking of quantum physics in terms of information, rather than just matter — in other words, asking if physics fundamentally tells us more about our interaction with the world (i.e., our information) than the nature of the world by itself (i.e., matter). And so the field of quantum information theory was born, with very real new possibilities in the very real world of technology.

What does this all mean in practice? Take one area where quantum information theory holds promise, that of quantum computing.

Classical computers use “bits” of information that can be either 0 or 1. But quantum-information technologies let scientists consider “qubits,” quantum bits of information that are both 0 and 1 at the same time. Logic circuits, made of qubits directly harnessing the weirdness of superpositions, allow a quantum computer to calculate vastly faster than anything existing today. A quantum machine using no more than 300 qubits would be a million, trillion, trillion, trillion times faster than the most modern supercomputer.
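
The scaling behind such claims is easy to see on paper: a register of n qubits is described by 2^n complex amplitudes, so merely writing down the state of 300 qubits would take roughly 2×10^90 numbers. The sketch below, a plain state-vector simulation assuming only numpy, shows where that doubling comes from; it illustrates the bookkeeping, not how a physical quantum computer operates.

```python
# A plain numpy state-vector simulation showing why n qubits need 2**n
# amplitudes. Purely illustrative; real quantum hardware does not store
# this vector explicitly.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate: |0> -> (|0> + |1>)/sqrt(2)

def uniform_superposition(n_qubits):
    """Start in |00...0> and apply a Hadamard gate to every qubit."""
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                               # the all-zeros basis state
    gate = H
    for _ in range(n_qubits - 1):
        gate = np.kron(gate, H)                  # tensor product builds the n-qubit gate
    return gate @ state                          # 2**n equal amplitudes

for n in (1, 2, 3, 10):
    amplitudes = uniform_superposition(n)
    print(n, "qubits ->", len(amplitudes), "amplitudes")   # 2, 4, 8, 1024

# At 300 qubits the vector would need 2**300 (about 2e90) entries, far beyond
# any classical memory, which is the intuition behind the speed-up claims.
```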

Read the entire article after the jump.

Image: Bloch sphere representation of a qubit, the fundamental building block of quantum computers. Courtesy of Wikipedia.

What’s All the Fuss About Big Data?

We excerpt an interview, via the Edge, with big data pioneer and computer scientist Alex Pentland. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.

While there is no exact definition of “big data”, it tends to differ both quantitatively and qualitatively from the data commonly used by most organizations. Where regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to vast collections that often lie beyond the realm of everyday computation, and so it typically requires specialized storage and enormous processing capability. Data sets that fall into this category come from fields such as climate science, genomics, particle physics, and computational social science.

Big data holds true promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.

Alex Pentland via the Edge:

Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.

I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like the location data off your cell phone or credit card; it’s the little data breadcrumbs that you leave behind you as you move around in the world.

What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.
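
Mechanically, “comparing you to the people in your crowd” is close to nearest-neighbour inference: find the people whose observed behaviour most resembles yours and read the missing attribute off them. The toy sketch below, using scikit-learn’s k-nearest-neighbours classifier on entirely invented behavioural features, shows the shape of such an inference; it is not Pentland’s actual methodology.

```python
# A toy nearest-neighbour sketch of "infer the rest by comparing you to your
# crowd". The behavioural features and labels are invented; illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Per-person features: [nights out per week, km travelled per day,
# distinct places visited per week]. All values are hypothetical.
behaviour = np.array([
    [0.5,  3, 4],
    [1.0,  5, 6],
    [4.0, 25, 18],
    [3.5, 30, 15],
    [0.2,  2, 3],
])
# An outcome we happen to know for these five people (1 = repaid a loan).
repaid_loan = np.array([1, 1, 0, 0, 1])

model = KNeighborsClassifier(n_neighbors=3).fit(behaviour, repaid_loan)

# A new person whose behaviour we observe but whose outcome we do not know.
new_person = np.array([[0.8, 4, 5]])
print(model.predict(new_person))         # [1]: their crowd looks like the repayers
print(model.predict_proba(new_person))   # a neighbourhood vote, not a certainty
```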

As a consequence, analysis of Big Data is increasingly about finding connections: connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data: not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. These are systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. It gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.

The notion that it is the connections between people that really matter is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.

Because it is so important to understand these connections, Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that are actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.

Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.

The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.

Changing The Way We Design Systems

With Big Data, traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant! As a consequence, the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart.
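
The “almost everything is significant” problem is easy to reproduce: with enough observations, even a negligible difference passes a conventional significance test. A small sketch with synthetic data, assuming numpy and scipy:

```python
# A sketch of how trivially small effects become "statistically significant"
# once the sample is large enough. The data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_difference = 0.01   # a practically meaningless gap between two groups

for n in (100, 10_000, 1_000_000):
    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(true_difference, 1.0, n)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"n={n:>9}: p-value ~ {p_value:.3g}")

# Around a million observations the p-value drops below 0.05 even though the
# effect is tiny; "significant" says nothing about whether it matters.
```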

Big Data and the notion of Connection Science are outside of our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much outside our old, familiar ballpark.

With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is: why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world, far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.
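
The flu-on-Mondays example is the multiple-comparisons trap in miniature: scan enough unrelated variables and some pairs will correlate “significantly” by chance alone. A synthetic demonstration (invented data, nothing more):

```python
# A sketch of false correlations: 200 purely random variables still produce
# many pairs that pass the usual p < 0.05 threshold. Entirely synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people, n_variables = 500, 200
data = rng.normal(size=(n_people, n_variables))   # no real relationships at all

tests = 0
spurious = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        if p < 0.05:
            spurious += 1

print(f"{spurious} of {tests} pairs look 'significant'")   # roughly 5% by chance
```

Roughly one in twenty of the ~19,900 pairs will clear the threshold despite being pure noise, which is why the causal checks Pentland calls for have to happen outside the dataset.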

The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use big data analytics, what they mean, and what to believe.

In fact, the data scientists themselves don’t have much of an intuition either… and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t understand that they were overfitting the data. They didn’t have that dialogue between intuition and the causal processes that generated the data. They just fit the model, got a good number and published it, and the reviewers didn’t catch it either. That’s pretty bad, because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
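
Overfitting of the kind described here can be reproduced in a few lines: give a flexible model more features than training examples of pure noise and its in-sample fit looks superb, while its out-of-sample fit is worthless. A hedged sketch with scikit-learn and synthetic data, not a re-analysis of any published result:

```python
# A sketch of overfitting: a flexible model "finds" structure in pure noise.
# In-sample it looks impressive; on held-out data the fit is worthless.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_train, n_test, n_features = 40, 200, 60     # more features than training rows

X_train = rng.normal(size=(n_train, n_features))
y_train = rng.normal(size=n_train)            # the target is pure noise
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.normal(size=n_test)

model = LinearRegression().fit(X_train, y_train)
print("in-sample R^2:    ", model.score(X_train, y_train))   # ~1.0, a 'great' result
print("out-of-sample R^2:", model.score(X_test, y_test))     # near or below zero

# Without a held-out check (or an intuition about the process generating the
# data), the in-sample number alone would look like a publishable finding.
```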

Read the entire article after the jump.

Image courtesy of Techcrunch.

Contain this!

From Eurozine:

WikiLeaks’ series of exposés is causing a very different news and informational landscape to emerge. Whilst acknowledging the structural leakiness of networked organisations, Felix Stalder finds deeper reasons for the crisis of information security and the new distribution of investigative journalism.

WikiLeaks is one of the defining stories of the Internet, which means by now, one of the defining stories of the present, period. At least four large-scale trends which permeate our societies as a whole are fused here into an explosive mixture whose fall-out is far from clear. First is a change in the materiality of communication. Communication becomes more extensive, more recorded, and the records become more mobile. Second is a crisis of institutions, particularly in western democracies, where moralistic rhetoric and the ugliness of daily practice are diverging ever more at the very moment when institutional personnel are being encouraged to think more for themselves. Third is the rise of new actors, “super-empowered” individuals, capable of intervening into historical developments at a systemic level. Finally, fourth is a structural transformation of the public sphere (through media consolidation at one pole, and the explosion of non-institutional publishers at the other), to an extent that rivals the one described by Habermas with the rise of mass media at the turn of the twentieth century.

Leaky containers

Imagine dumping nearly 400 000 paper documents into a dead drop located discreetly on the hard shoulder of a road. Impossible. Now imagine the same thing with digital records on a USB stick, or as an upload from any networked computer. No problem at all. Yet, the material differences between paper and digital records go much further than mere bulk. Digital records are the impulses travelling through the nervous systems of dynamic, distributed organisations of all sizes. They are intended, from the beginning, to circulate with ease. Otherwise such organisations would fall apart and dynamism would grind to a halt. The more flexible and distributed organisations become, the more records they need to produce and the faster these need to circulate. Due to their distributed aspect and the pressure for cross-organisational cooperation, it is increasingly difficult to keep records within particular organisations whose boundaries are blurring anyway. Surveillance researchers such as David Lyon have long been writing about the leakiness of “containers”, meaning the tendency for sensitive digital records to cross the boundaries of the institutions which produce them. This leakiness is often driven by commercial considerations (private data being sold), but it happens also out of incompetence (systems being secured insufficiently), or because insiders deliberately violate organisational policies for their own purposes. Either they are whistle-blowers motivated by conscience, as in the case of WikiLeaks, or individuals selling information for private gain, as in the case of the numerous employees of Swiss banks who recently copied the details of private accounts and sold them to tax authorities across Europe. Within certain organisations such as banks and the military, virtually everything is classified and a large number of people have access to this data, not least mid-level staff who handle the streams of raw data, such as individuals’ records, produced as part of daily procedure.

More from theSource here.

The society of the query and the Googlization of our lives

From Eurozine:

“There is only one way to turn signals into information, through interpretation”, wrote the computer critic Joseph Weizenbaum. As Google’s hegemony over online content increases, argues Geert Lovink, we should stop searching and start questioning.

A spectre haunts the world’s intellectual elites: information overload. Ordinary people have hijacked strategic resources and are clogging up once carefully policed media channels. Before the Internet, the mandarin classes rested on the idea that they could separate “idle talk” from “knowledge”. With the rise of Internet search engines it is no longer possible to distinguish between patrician insights and plebeian gossip. The distinction between high and low, and their co-mingling on occasions of carnival, belong to a bygone era and should no longer concern us. Nowadays an altogether new phenomenon is causing alarm: search engines rank according to popularity, not truth. Search is the way we now live. With the dramatic increase of accessed information, we have become hooked on retrieval tools. We look for telephone numbers, addresses, opening times, a person’s name, flight details, best deals, and in a frantic mood declare the ever-growing pile of grey matter “data trash”. Soon we will search and only get lost. Old hierarchies of communication have not only imploded, communication itself has assumed the status of cerebral assault. Not only has popular noise risen to unbearable levels, we can no longer stand yet another request from colleagues, and even a benign greeting from friends and family has acquired the status of a chore with the expectation of a reply. The educated class deplores the fact that chatter has entered the hitherto protected domain of science and philosophy, when instead they should be worrying about who is going to control the increasingly centralized computing grid.

What today’s administrators of noble simplicity and quiet grandeur cannot express, we should say for them: there is a growing discontent with Google and the way the Internet organizes information retrieval. The scientific establishment has lost control over one of its key research projects – the design and ownership of computer networks, now used by billions of people. How did so many people end up being that dependent on a single search engine? Why are we repeating the Microsoft saga once again? It seems boring to complain about a monopoly in the making when average Internet users have such a multitude of tools at their disposal to distribute power. One possible way to overcome this predicament would be to positively redefine Heidegger’s Gerede. Instead of a culture of complaint that dreams of an undisturbed offline life and radical measures to filter out the noise, it is time to openly confront the trivial forms of Dasein today found in blogs, text messages and computer games. Intellectuals should no longer portray Internet users as secondary amateurs, cut off from a primary and primordial relationship with the world. There is a greater issue at stake and it requires venturing into the politics of informatic life. It is time to address the emergence of a new type of corporation that is rapidly transcending the Internet: Google.

The World Wide Web, which should have realized the infinite library Borges described in his short story The Library of Babel (1941), is seen by many of its critics as nothing but a variation of Orwell’s Big Brother (1948). The ruler, in this case, has turned from an evil monster into a collection of cool youngsters whose corporate responsibility slogan is “Don’t be evil”. Guided by a much older and experienced generation of IT gurus (Eric Schmidt), Internet pioneers (Vint Cerf) and economists (Hal Varian), Google has expanded so fast, and in such a wide variety of fields, that there is virtually no critic, academic or business journalist who has been able to keep up with the scope and speed with which Google developed in recent years. New applications and services pile up like unwanted Christmas presents. Just add Google’s free email service Gmail, the video sharing platform YouTube, the social networking site Orkut, GoogleMaps and GoogleEarth, its main revenue service AdWords with the Pay-Per-Click advertisements, office applications such as Calendar, Talks and Docs. Google not only competes with Microsoft and Yahoo, but also with entertainment firms, public libraries (through its massive book scanning program) and even telecom firms. Believe it or not, the Google Phone is coming soon. I recently heard a less geeky family member saying that she had heard that Google was much better and easier to use than the Internet. It sounded cute, but she was right. Not only has Google become the better Internet, it is taking over software tasks from your own computer so that you can access these data from any terminal or handheld device. Apple’s MacBook Air is a further indication of the migration of data to privately controlled storage bunkers. Security and privacy of information are rapidly becoming the new economy and technology of control. And the majority of users, and indeed companies, are happily abandoning the power to self-govern their informational resources.

More from theSource here.