Tag Archives: computing

Art And Algorithms And Code And Cash

#!/usr/bin/perl
# 472-byte qrpff, Keith Winstein and Marc Horowitz <sipb-iap-dvd@mit.edu>
# MPEG 2 PS VOB file -> descrambled output on stdout.
# usage: perl -I <k1>:<k2>:<k3>:<k4>:<k5> qrpff
# where k1..k5 are the title key bytes in least to most-significant order

s''$/=\2048;while(<>){G=29;R=142;if((@a=unqT="C*",_)[20]&48){D=89;_=unqb24,qT,@
b=map{ord qB8,unqb8,qT,_^$a[--D]}@INC;s/...$/1$&/;Q=unqV,qb25,_;H=73;O=$b[4]<<9
|256|$b[3];Q=Q>>8^(P=(E=255)&(Q>>12^Q>>4^Q/8^Q))<<17,O=O>>8^(E&(F=(S=O>>14&7^O)
^S*8^S<<6))<<9,_=(map{U=_%16orE^=R^=110&(S=(unqT,"\xb\ntd\xbz\x14d")[_/16%8]);E
^=(72,@z=(64,72,G^=12*(U-2?0:S&17)),H^=_%64?12:0,@z)[_%8]}(16..271))[_]^((D>>=8
)+=P+(~F&E))for@a[128..$#a]}print+qT,@a}';s/[D-HO-U_]/\$$&/g;s/q/pack+/g;eval

You know that hacking has gone mainstream when the WSJ features it on the front page. Further, you know it must be passé when the WSJ claims that the art world is now purveying chunks of code as, well, art. You have to love this country for its entrepreneurial capitalist acumen!

So, if you are an enterprising (ex-)coder and have some cool Fortran, C++, or better yet, Assembler lying around, dust off the diskette (or floppy or, better yet, a punch card) and make haste to your nearest art gallery. You could become the first Picasso of programming — onward to the Gagosian! My story began with PL/1, IMS and then C, so my code may only be worthy of the artistic C-list.

From WSJ:

In March, Daniel Benitez, a cinema executive in Miami, paid $2,500 for a necktie. It wasn’t just any strip of designer neckwear. Imprinted on the blue silk were six lines of computer code that once brought the motion picture industry to its knees.

To the unschooled eye, the algorithm script on the tie, known formally as “qrpff,” looks like a lengthy typographical error.

But to Mr. Benitez and other computer cognoscenti, the algorithm it encodes is an artifact of rare beauty that embodies a kind of performance art. He framed it.

The algorithm sets out a procedure for what copyright holders once deemed a criminal act: picking the software lock on the digital scrambling system that Hollywood uses to protect its DVDs. At the turn of the century, hackers encoded it in many ways and distributed them freely—as programs, lines of poetry, lyrics in a rock song, and a square dance routine. They printed it on T-shirts and ties, like the item Mr. Benitez purchased. They proclaimed it free speech. No matter how many times the entertainment industry sued, their lawyers found the algorithm as hard to eradicate as kudzu.

Now it is exhibit A in the art world’s newest collecting trend.

Dealers in digital art are amassing algorithms, the computerized formulas that automate processes from stock-market sales to social networks.

In March, the online art brokerage Artsy and a digital code gallery called Ruse Laboratories held the world’s first algorithm art auction in New York. The Cooper Hewitt, Smithsonian Design Museum, where the auction was held as a fundraiser, is assembling a collection of computer code. In April, the Museum of Modern Art convened a gathering of computer experts and digital artists to discuss algorithms and design.

It is a small step for technology but a leap, perhaps, for the art world. “It is a whole new dimension we are trying to grapple with,” said curatorial director Cara McCarty at the Cooper Hewitt museum. “The art term I keep hearing is code.”

Read the entire article here.

Code snippet: Qrpff, a Perl script for descrambling DVD content protected by CSS, the Content Scramble System.

Neuromorphic Chips

Neuromorphic chips are here. But don’t worry, these are not the brain implants you might expect to see in a William Gibson or Iain Banks novel. Neuromorphic processors are designed to simulate brain function, and to learn or mimic certain types of human processes such as sensory perception, image processing and object recognition. The field is making tremendous advances, with companies like Qualcomm — better known for its mobile and wireless chips — leading the charge. Until recently, such complex sensory and mimetic processes had been the exclusive realm of supercomputers.

From Technology Review:

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.

Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
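The learning idea in that last paragraph — connections that strengthen when neurons fire together — is easy to sketch in a few lines of code. What follows is our own toy illustration of that Hebbian, winner-take-all scheme, not Qualcomm’s Zeroth software; the network size, learning rate and input patterns are all invented for clarity.

# Toy Hebbian, winner-take-all sketch (our illustration, not Qualcomm's Zeroth).
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 16, 4                      # a toy "retina" and four output neurons
weights = rng.normal(0, 0.01, (n_outputs, n_inputs))

def present(pattern, weights, lr=0.1):
    """Show one input pattern; strengthen the winning neuron's connections."""
    activity = weights @ pattern                 # output neuron activations
    winner = int(np.argmax(activity))            # winner-take-all response
    weights[winner] += lr * pattern              # Hebbian update for the winner
    weights[winner] /= np.linalg.norm(weights[winner])  # keep weights bounded
    return winner

# Two "objects" (think Captain America versus the chessboard).
object_a = (rng.random(n_inputs) > 0.5).astype(float)
object_b = (rng.random(n_inputs) > 0.5).astype(float)

for _ in range(20):                              # repeated exposure is the learning
    present(object_a, weights)
    present(object_b, weights)

# A slightly noisy view of object_a should still be routed to object_a's "bin".
noisy_a = np.clip(object_a + rng.normal(0, 0.1, n_inputs), 0, 1)
print(present(object_a, weights), present(noisy_a, weights))

Repeated exposure is the whole trick: after a handful of presentations, a noisy view of a familiar pattern should land in the same output “bin” as the original, which is a cartoon version of what Pioneer does when it sorts Spider-Man with Captain America.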

Read the entire article here.

Father of Distributed Computing

Distributed computing is a foundational element of most modern-day computing. It paved the way for processing to be shared across multiple computers and, nowadays, within the cloud. Most technology companies, including IBM, Google, Amazon, and Facebook, use distributed computing to provide highly scalable and reliable computing power for their systems and services. Yet, Bill Gates did not invent distributed computing, nor did Steve Jobs. In fact, it was pioneered in the mid-1970s by an unsung hero of computer science, Leslie Lamport. Now aged 73, Lamport was recognized with this year’s Turing Award.

From Technology Review:

This year’s winner of the Turing Award—often referred to as the Nobel Prize of computing—was announced today as Leslie Lamport, a computer scientist whose research made possible the development of the large, networked computer systems that power, among other things, today’s cloud and Web services. The Association for Computing Machinery grants the award annually, with an associated prize of $250,000.

Lamport, now 73 and a researcher with Microsoft, was recognized for a series of major breakthroughs that began in the 1970s. He devised algorithms that make it possible for software to function reliably even if it is running on a collection of independent computers or components that suffer from delays in communication or sometimes fail altogether.

That work, within a field now known as distributed computing, remains crucial to the sprawling data centers used by Internet giants, and is also involved in coördinating the multiple cores of modern processors in computers and mobile devices. Lamport talked to MIT Technology Review’s Tom Simonite about why his ideas have lasted.

Why is distributed computing important?

Distribution is not something that you just do, saying “Let’s distribute things.” The question is “How do you get it to behave coherently?”

My Byzantine Generals work [on making software fault-tolerant, in 1980] came about because I went to SRI and had a contract to build a reliable prototype computer for flying airplanes for NASA. That used multiple computers that could fail, and so there you have a distributed system. Today there are computers in Palo Alto and Beijing and other places, and we want to use them together, so we build distributed systems. Computers with multiple processors inside are also distributed systems.

We no longer use computers like those you worked with in the 1970s and ’80s. Why have your distributed-computing algorithms survived?

Some areas have had enormous changes, but the aspect of things I was looking at, the fundamental notions of synchronization, are the same.

Running multiple processes on a single computer is very different from a set of different computers talking over a relatively slow network, for example. [But] when you’re trying to reason mathematically about their correctness, there’s no fundamental difference between the two systems.

I [developed] Paxos [in 1989] because people at DEC [Digital Equipment Corporation] were building a distributed file system. The Paxos algorithm is very widely used now. Look inside of Bing or Google or Amazon—where they’ve got rooms full of computers, they’ll probably be running an instance of Paxos.
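For anyone who has never peeked inside it, the single-decree core of Paxos is surprisingly compact. The sketch below is our own minimal, in-memory illustration of the prepare/promise and accept phases; the class and method names are invented, and a real deployment adds leaders, retries, persistence and networking on top.

# Minimal single-decree Paxos sketch (our illustration; in-memory and synchronous).
class Acceptor:
    def __init__(self):
        self.promised = -1          # highest proposal number promised
        self.accepted_n = -1        # proposal number of the accepted value
        self.accepted_v = None      # accepted value, if any

    def prepare(self, n):
        """Phase 1b: promise not to accept proposals numbered below n."""
        if n > self.promised:
            self.promised = n
            return True, self.accepted_n, self.accepted_v
        return False, None, None

    def accept(self, n, value):
        """Phase 2b: accept the value unless a higher-numbered promise exists."""
        if n >= self.promised:
            self.promised = self.accepted_n = n
            self.accepted_v = value
            return True
        return False


def propose(acceptors, n, value):
    """Run one proposal round; return the chosen value or None."""
    # Phase 1a/1b: gather promises from a majority.
    promises = [a.prepare(n) for a in acceptors]
    granted = [(an, av) for ok, an, av in promises if ok]
    if len(granted) <= len(acceptors) // 2:
        return None                                  # no majority; retry with higher n
    # If any acceptor already accepted a value, we must propose that one.
    prior = [(an, av) for an, av in granted if av is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2a/2b: ask the acceptors to accept.
    accepted = sum(a.accept(n, value) for a in acceptors)
    return value if accepted > len(acceptors) // 2 else None


acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="open file X"))   # -> "open file X"
print(propose(acceptors, n=2, value="open file Y"))   # -> "open file X" (already chosen)

The second call illustrates the guarantee that makes Paxos useful in those rooms full of computers: once a majority has accepted a value, any later proposal can only re-propose that same value.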

More recently, you have worked on ways to improve how software is built. What’s wrong with how it’s done now?

People seem to equate programming with coding, and that’s a problem. Before you code, you should understand what you’re doing. If you don’t write down what you’re doing, you don’t know whether you understand it, and you probably don’t if the first thing you write down is code. If you’re trying to build a bridge or house without a blueprint—what we call a specification—it’s not going to be very pretty or reliable. That’s how most code is written. Every time you’ve cursed your computer, you’re cursing someone who wrote a program without thinking about it in advance.

There’s something about the culture of software that has impeded the use of specification. We have a wonderful way of describing things precisely that’s been developed over the last couple of millennia, called mathematics. I think that’s what we should be using as a way of thinking about what we build.

Read the entire story here.

Image: Leslie Lamport, 2005. Courtesy of Wikipedia.

Your Toaster on the Internet


Billions of people have access to the Internet. Now, whether a significant proportion of them do anything productive with this tremendous resource is open to debate — many prefer only to post pictures of their breakfasts (or of themselves) or to watch the latest viral video hit.

Despite all these humans clogging up the Tubes of the Internets, most traffic along the information superhighway is in fact not even human. Over 60 percent of all activity comes from computer systems, such as web crawlers, botnets, and increasingly, industrial control systems, ranging from security and monitoring devices to in-home devices such as your thermostat, refrigerator, smart TV, smart toilet and toaster. So, soon Google will know what you eat and when, and your fridge will tell you what you should eat (or not) based on what it knows of your body mass index (BMI) from your bathroom scales.

Jokes aside, the Internet of Things (IoT) promises to herald an even more significant information revolution over the coming decades as all our devices and machines, from home to farm to factory, are connected and inter-connected.

From ars technica:

If you believe what the likes of LG and Samsung have been promoting this week at CES, everything will soon be smart. We’ll be able to send messages to our washing machines, run apps on our fridges, and have TVs as powerful as computers. It may be too late to resist this movement, with smart TVs already firmly entrenched in the mid-to-high end market, but resist it we should. That’s because the “Internet of things” stands a really good chance of turning into the “Internet of unmaintained, insecure, and dangerously hackable things.”

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of “smart” functionality—fridges that know what we buy and when, TVs that know what shows we watch—all connected to the Internet 24/7, all completely insecure.

While the value of smart watches or washing machines isn’t entirely clear, at least some smart devices—I think most notably phones and TVs—make sense. The utility of the smartphone, an Internet-connected computer that fits in your pocket, is obvious. The growth of streaming media services means that your antenna or cable box are no longer the sole source of televisual programming, so TVs that can directly use these streaming services similarly have some appeal.

But these smart features make the devices substantially more complex. Your smart TV is not really a TV so much as an all-in-one computer that runs Android, WebOS, or some custom operating system of the manufacturer’s invention. And where once it was purely a device for receiving data over a coax cable, it’s now equipped with bidirectional networking interfaces, exposing the Internet to the TV and the TV to the Internet.

The result is a whole lot of exposure to security problems. Even if we assume that these devices ship with no known flaws—a questionable assumption in and of itself if SOHO routers are anything to judge by—a few months or years down the line, that will no longer be the case. Flaws and insecurities will be uncovered, and the software components of these smart devices will need to be updated to address those problems. They’ll need these updates for the lifetime of the device, too. Old software is routinely vulnerable to newly discovered flaws, so there’s no point in any reasonable timeframe at which it’s OK to stop updating the software.

In addition to security, there’s also a question of utility. Netflix and Hulu may be hot today, but that may not be the case in five years’ time. New services will arrive; old ones will die out. Even if the service lineup remains the same, its underlying technology is unlikely to be static. In the future, Netflix, for example, might want to deprecate old APIs and replace them with new ones; Netflix apps will need to be updated to accommodate the changes. I can envision changes such as replacing the H.264 codec with H.265 (for reduced bandwidth and/or improved picture quality), which would similarly require updated software.

To remain useful, app platforms need up-to-date apps. As such, for your smart device to remain safe, secure, and valuable, it needs a lifetime of software fixes and updates.

A history of non-existent updates

Herein lies the problem, because if there’s one thing that companies like Samsung have demonstrated in the past, it’s a total unwillingness to provide a lifetime of software fixes and updates. Even smartphones, which are generally assumed to have a two-year lifecycle (with replacements driven by cheap or “free” contract-subsidized pricing), rarely receive updates for the full two years (Apple’s iPhone being the one notable exception).

A typical smartphone bought today will remain useful and usable for at least three years, but its system software support will tend to dry up after just 18 months.

This isn’t surprising, of course. Samsung doesn’t make any money from making your two-year-old phone better. Samsung makes its money when you buy a new Samsung phone. Improving the old phones with software updates would cost money, and that tends to limit sales of new phones. For Samsung, it’s lose-lose.

Our fridges, cars, and TVs are not even on a two-year replacement cycle. Even if you do replace your TV after it’s a couple years old, you probably won’t throw the old one away. It will just migrate from the living room to the master bedroom, and then from the master bedroom to the kids’ room. Likewise, it’s rare that a three-year-old car is simply consigned to the scrap heap. It’s given away or sold off for a second, third, or fourth “life” as someone else’s primary vehicle. Your fridge and washing machine will probably be kept until they blow up or you move houses.

These are all durable goods, kept for the long term without any equivalent to the smartphone carrier subsidy to promote premature replacement. If they’re going to be smart, software-powered devices, they’re going to need software lifecycles that are appropriate to their longevity.

That costs money, it requires a commitment to providing support, and it does little or nothing to promote sales of the latest and greatest devices. In the software world, there are companies that provide this level of support—the Microsofts and IBMs of the world—but it tends to be restricted to companies that have at least one eye on the enterprise market. In the consumer space, you’re doing well if you’re getting updates and support five years down the line. Consumer software fixes a decade later are rare, especially if there’s no system of subscriptions or other recurring payments to monetize the updates.

Of course, the companies building all these products have the perfect solution. Just replace all our stuff every 18-24 months. Fridge no longer getting updated? Not a problem. Just chuck out the still perfectly good fridge you have and buy a new one. This is, after all, the model that they already depend on for smartphones. Of course, it’s not really appropriate even to smartphones (a mid/high-end phone bought today will be just fine in three years), much less to stuff that will work well for 10 years.

These devices will be abandoned by their manufacturers, and it’s inevitable that they are abandoned long before they cease to be useful.

Superficially, this might seem to be no big deal. Sure, your TV might be insecure, but your NAT router will probably provide adequate protection, and while it wouldn’t be tremendously surprising to find that it has some passwords for online services or other personal information on it, TVs are sufficiently diverse that people are unlikely to expend too much effort targeting specific models.

Read the entire story here.

Image: A classically styled chrome two-slot automatic electric toaster. Courtesy of Wikipedia.

An Ode to the Sinclair ZX81

What do the PDP-11, Commodore PET, Apple II and Sinclair’s ZX81 have in common? And, more importantly, for anyone under the age of 35, what on earth are they? Well, these are, respectively, the first time-share mainframe, the first personal computer, the first Apple computer, and the first home-based computer programmed by theDiagonal’s friendly editor back in the pioneering days of computation.

The article below on technological nostalgia pushed the recall button, bringing back vivid memories of dot matrix printers, FORTRAN, large floppy diskettes (5 1/4 inch), reel-to-reel tape storage, and the 1Kb of programmable memory on the ZX81. In fact, despite the tremendous and now laughable limitations of the ZX81 — one had to save and load programs via a tape cassette — programming the device at home was a true revelation.

Some would go so far as to say that the first computer is very much like the first kiss or the first date. Well, not so. But fun nonetheless, and responsible for much in the way of future career paths.

From ars technica:

Being a bunch of technology journalists who make our living on the Web, we at Ars all have a fairly intimate relationship with computers dating back to our childhood—even if for some of us, that childhood is a bit more distant than others. And our technological careers and interests are at least partially shaped by the devices we started with.

So when Cyborgology’s David Banks recently offered up an autobiography of himself based on the computing devices he grew up with, it started a conversation among us about our first computing experiences. And being the most (chronologically) senior of Ars’ senior editors, the lot fell to me to pull these recollections together—since, in theory, I have the longest view of the bunch.

Considering the first computer I used was a Digital Equipment Corp. PDP-10, that theory is probably correct.

The DEC PDP-10 and DECWriter II Terminal

In 1979, I was a high school sophomore at Longwood High School in Middle Island, New York, just a short distance from the Department of Energy’s Brookhaven National Labs. And it was at Longwood that I got the first opportunity to learn how to code, thanks to a time-share connection we had to a DEC PDP-10 at the State University of New York at Stony Brook.

The computer lab at Longwood, which was run by the math department and overseen by my teacher Mr. Dennis Schultz, connected over a leased line to SUNY. It had, if I recall correctly, six LA36 DECWriter II terminals connected back to the mainframe—essentially dot-matrix printers with keyboards on them. Turn one on while the mainframe was down, and it would print over and over:

PDP-10 NOT AVAILABLE

Time at the terminals was a precious resource, so we were encouraged to write out all of our code by hand first on graph paper and then take a stack of cards over to the keypunch. This process did wonders for my handwriting. I spent an inordinate amount of time just writing BASIC and FORTRAN code in block letters on graph-paper notebooks.

One of my first fully original programs was an aerial combat program that used three-dimensional arrays to track the movement of the player’s and the programmed opponent’s airplanes as each maneuvered to get the other in its sights. Since the program output to pin-fed paper, that could be a tedious process.

At a certain point, Mr. Schultz, who had been more than tolerant of my enthusiasm, had to crack down—my code was using up more than half the school’s allotted system storage. I can’t imagine how much worse it would have been if we had video terminals.

Actually, I can imagine, because in my senior year I was introduced to the Apple II, video, and sound. The vastness of 360 kilobytes of storage and the ability to code at the keyboard were such a huge luxury after the spartan world of punch cards that I couldn’t contain myself. I soon coded a student parking pass database for my school—while also coding a Dungeons & Dragons character tracking system, complete with combat resolution and hit point tracking.

—Sean Gallagher

A printer terminal and an acoustic coupler

I never saw the computer that gave me my first computing experience, and I have little idea what it actually was. In fact, if I ever knew where it was located, I’ve since forgotten. But I do distinctly recall the gateway to it: a locked door to the left of the teacher’s desk in my high school biology lab. Fortunately, the guardian—commonly known as Mr. Dobrow—was excited about introducing some of his students to computers, and he let a number of us spend our lunch hours experimenting with the system.

And what a system it was. Behind the physical door was another gateway, this one electronic. Since the computer was located in another town, you had to dial in by modem. The modems of the day were something different entirely from what you may recall from AOL’s dialup heyday. Rather than plugging straight in to your phone line, you dialed in manually—on a rotary phone, no less—then dropped the speaker and mic carefully into two rubber receptacles spaced to accept the standard-issue hardware of the day. (And it was standard issue; AT&T was still a monopoly at the time.)

That modem was hooked into a sort of combination of line printer and keyboard. When you were entering text, the setup acted just like a typewriter. But as soon as you hit the return key, it transmitted, and the mysterious machine at the other end responded, sending characters back that were dutifully printed out by the same machine. This meant that an infinite loop would unleash a spray of paper, and it had to be terminated by hanging up the phone.

It took us a while to get to infinite loops, though. Mr. Dobrow started us off on small simulations of things like stock markets and malaria control. Eventually, we found a way to list all the programs available and discovered a Star Trek game. Photon torpedoes were deadly, but the phasers never seemed to work, so before too long one guy had the bright idea of trying to hack the game (although that wasn’t the term that we used). We were off.

John Timmer

Read the entire article here.

Image: Sinclair ZX81. Courtesy of Wikipedia.

Big Data and Even Bigger Problems

First, a definition. Big data: typically a collection of large and complex datasets that are too cumbersome to process and analyze using traditional computational approaches and database applications. Usually the big data moniker will be accompanied by an IT vendor’s pitch for a shiny new software (and possibly hardware) solution able to crunch through petabytes (one petabyte is a million gigabytes) of data and produce a visualizable result that mere mortals can decipher.

Many companies see big data and related solutions as a panacea for a range of business challenges: customer service, medical diagnostics, product development, shipping and logistics, climate change studies, genomic analysis and so on. A great example was the last U.S. election. Many political wonks — from both sides of the aisle — agreed that President Obama’s re-election was significantly aided by big data. So, with that in mind, many are now applying big data to even more important problems.

From Technology Review:

As chief scientist for President Obama’s reëlection effort, Rayid Ghani helped revolutionize the use of data in politics. During the final 18 months of the campaign, he joined a sprawling team of data and software experts who sifted, collated, and combined dozens of pieces of information on each registered U.S. voter to discover patterns that let them target fund-raising appeals and ads.

Now, with Obama again ensconced in the Oval Office, some veterans of the campaign’s data squad are applying lessons from the campaign to tackle social issues such as education and environmental stewardship. Edgeflip, a startup Ghani founded in January with two other campaign members, plans to turn the ad hoc data analysis tools developed for Obama for America into software that can make nonprofits more effective at raising money and recruiting volunteers.

Ghani isn’t the only one thinking along these lines. In Chicago, Ghani’s hometown and the site of Obama for America headquarters, some campaign members are helping the city make available records of utility usage and crime statistics so developers can build apps that attempt to improve life there. It’s all part of a bigger idea to engineer social systems by scanning the numerical exhaust from mundane activities for patterns that might bear on everything from traffic snarls to human trafficking. Among those pursuing such humanitarian goals are startups like DataKind as well as large companies like IBM, which is redrawing bus routes in Ivory Coast (see “African Bus Routes Redrawn Using Cell-Phone Data”), and Google, with its flu-tracking software (see “Sick Searchers Help Track Flu”).

Ghani, who is 35, has had a longstanding interest in social causes, like tutoring disadvantaged kids. But he developed his data-mining savvy during 10 years as director of analytics at Accenture, helping retail chains forecast sales, creating models of consumer behavior, and writing papers with titles like “Data Mining for Business Applications.”

Before joining the Obama campaign in July 2011, Ghani wasn’t even sure his expertise in machine learning and predicting online prices could have an impact on a social cause. But the campaign’s success in applying such methods on the fly to sway voters is now recognized as having been potentially decisive in the election’s outcome (see “A More Perfect Union”).

“I realized two things,” says Ghani. “It’s doable at the massive scale of the campaign, and that means it’s doable in the context of other problems.”

At Obama for America, Ghani helped build statistical models that assessed each voter along five axes: support for the president; susceptibility to being persuaded to support the president; willingness to donate money; willingness to volunteer; and likelihood of casting a vote. These models allowed the campaign to target door knocks, phone calls, TV spots, and online ads to where they were most likely to benefit Obama.

One of the most important ideas he developed, dubbed “targeted sharing,” now forms the basis of Edgeflip’s first product. It’s a Facebook app that prompts people to share information from a nonprofit, but only with those friends predicted to respond favorably. That’s a big change from the usual scattershot approach of posting pleas for money or help and hoping they’ll reach the right people.

Edgeflip’s app, like the one Ghani conceived for Obama, will ask people who share a post to provide access to their list of friends. This will pull in not only friends’ names but also personal details, like their age, that can feed models of who is most likely to help.

Say a hurricane strikes the southeastern United States and the Red Cross needs clean-up workers. The app would ask Facebook users to share the Red Cross message, but only with friends who live in the storm zone, are young and likely to do manual labor, and have previously shown interest in content shared by that user. But if the same person shared an appeal for donations instead, he or she would be prompted to pass it along to friends who are older, live farther away, and have donated money in the past.
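Stripped of the machine-learned models, targeted sharing reduces to scoring each friend against the appeal being shared and prompting the user to share only with the best matches. Here is our own rough sketch of that filtering logic; the friend attributes, weights and cutoff are invented for illustration and are not Edgeflip’s actual model.

# Rough sketch of "targeted sharing": score each friend against the appeal and
# keep only the best matches. Attributes and weights are invented, not Edgeflip's.
from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    age: int
    lives_in_storm_zone: bool
    has_donated_before: bool
    engages_with_user: float         # 0..1, past interest in this user's posts

def score_for_cleanup(f: Friend) -> float:
    """Favor young, local friends who tend to engage with the sharer."""
    return (2.0 * f.lives_in_storm_zone
            + 1.0 * (f.age < 35)
            + 1.5 * f.engages_with_user)

def score_for_donation(f: Friend) -> float:
    """Favor older friends, anywhere, who have donated before."""
    return (2.0 * f.has_donated_before
            + 1.0 * (f.age >= 35)
            + 1.5 * f.engages_with_user)

def targets(friends, scorer, k=3):
    """Return the k friends most likely to respond to this kind of appeal."""
    return sorted(friends, key=scorer, reverse=True)[:k]

friends = [
    Friend("Ana", 24, True, False, 0.9),
    Friend("Bob", 58, False, True, 0.4),
    Friend("Cyd", 31, True, False, 0.2),
    Friend("Dee", 67, False, True, 0.8),
]
print([f.name for f in targets(friends, score_for_cleanup)])    # clean-up appeal
print([f.name for f in targets(friends, score_for_donation)])   # donation appeal

Swap the hand-tuned weights for a model trained on past responses and you have the outline of the Facebook app described above.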

Michael Slaby, a senior technology official for Obama who hired Ghani for the 2012 election season, sees great promise in the targeted sharing technique. “It’s one of the most compelling innovations to come out of the campaign,” says Slaby. “It has the potential to make online activism much more efficient and effective.”

For instance, Ghani has been working with Fidel Vargas, CEO of the Hispanic Scholarship Fund, to increase that organization’s analytical savvy. Vargas thinks social data could predict which scholarship recipients are most likely to contribute to the fund after they graduate. “Then you’d be able to give away scholarships to qualified students who would have a higher probability of giving back,” he says. “Everyone would be much better off.”

Ghani sees a far bigger role for technology in the social sphere. He imagines online petitions that act like open-source software, getting passed around and improved. Social programs, too, could get constantly tested and improved. “I can imagine policies being designed a lot more collaboratively,” he says. “I don’t know if the politicians are ready to deal with it.” He also thinks there’s a huge amount of untapped information out there about childhood obesity, gang membership, and infant mortality, all ready for big data’s touch.

Read the entire article here.

Infographic courtesy of visua.ly. See the original here.

Your City as an Information Warehouse

Big data keeps getting bigger and computers keep getting faster. Some theorists believe that the universe is a giant computer or a computer simulation; that principles of information science govern the cosmos. While this notion is one of the most recent radical ideas to explain our existence, there is no doubt that information is our future. Data surrounds us, we are becoming data points, and our cities are becoming information-rich databases.

From the Economist:

IN 1995 GEORGE GILDER, an American writer, declared that “cities are leftover baggage from the industrial era.” Electronic communications would become so easy and universal that people and businesses would have no need to be near one another. Humanity, Mr Gilder thought, was “headed for the death of cities”.

It hasn’t turned out that way. People are still flocking to cities, especially in developing countries. Cisco’s Mr Elfrink reckons that in the next decade 100 cities, mainly in Asia, will reach a population of more than 1m. In rich countries, to be sure, some cities are sad shadows of their old selves (Detroit, New Orleans), but plenty are thriving. In Silicon Valley and the newer tech hubs what Edward Glaeser, a Harvard economist, calls “the urban ability to create collaborative brilliance” is alive and well.

Cheap and easy electronic communication has probably helped rather than hindered this. First, connectivity is usually better in cities than in the countryside, because it is more lucrative to build telecoms networks for dense populations than for sparse ones. Second, electronic chatter may reinforce rather than replace the face-to-face kind. In his 2011 book, “Triumph of the City”, Mr Glaeser theorises that this may be an example of what economists call “Jevons’s paradox”. In the 19th century the invention of more efficient steam engines boosted rather than cut the consumption of coal, because they made energy cheaper across the board. In the same way, cheap electronic communication may have made modern economies more “relationship-intensive”, requiring more contact of all kinds.

Recent research by Carlo Ratti, director of the SENSEable City Laboratory at the Massachusetts Institute of Technology, and colleagues, suggests there is something to this. The study, based on the geographical pattern of 1m mobile-phone calls in Portugal, found that calls between phones far apart (a first contact, perhaps) are often followed by a flurry within a small area (just before a meeting).

Data deluge

A third factor is becoming increasingly important: the production of huge quantities of data by connected devices, including smartphones. These are densely concentrated in cities, because that is where the people, machines, buildings and infrastructures that carry and contain them are packed together. They are turning cities into vast data factories. “That kind of merger between physical and digital environments presents an opportunity for us to think about the city almost like a computer in the open air,” says Assaf Biderman of the SENSEable lab. As those data are collected and analysed, and the results are recycled into urban life, they may turn cities into even more productive and attractive places.

Some of these “open-air computers” are being designed from scratch, most of them in Asia. At Songdo, a South Korean city built on reclaimed land, Cisco has fitted every home and business with video screens and supplied clever systems to manage transport and the use of energy and water. But most cities are stuck with the infrastructure they have, at least in the short term. Exploiting the data they generate gives them a chance to upgrade it. Potholes in Boston, for instance, are reported automatically if the drivers of the cars that hit them have an app called Street Bump on their smartphones. And, particularly in poorer countries, places without a well-planned infrastructure have the chance of a leap forward. Researchers from the SENSEable lab have been working with informal waste-collecting co-operatives in São Paulo whose members sift the city’s rubbish for things to sell or recycle. By attaching tags to the trash, the researchers have been able to help the co-operatives work out the best routes through the city so they can raise more money and save time and expense.

Exploiting data may also mean fewer traffic jams. A few years ago Alexandre Bayen, of the University of California, Berkeley, and his colleagues ran a project (with Nokia, then the leader of the mobile-phone world) to collect signals from participating drivers’ smartphones, showing where the busiest roads were, and feed the information back to the phones, with congested routes glowing red. These days this feature is common on smartphones. Mr Bayen’s group and IBM Research are now moving on to controlling traffic and thus easing jams rather than just telling drivers about them. Within the next three years the team is due to build a prototype traffic-management system for California’s Department of Transportation.
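The bookkeeping behind such a congestion map is easy to sketch: pool the speed reports coming from participating phones by road segment and flag any segment running well below its free-flow speed. The toy example below is our own illustration; the segment names, speeds and thresholds are made up, and it is not the Berkeley/Nokia or IBM system.

# Toy sketch of a phone-based congestion map: aggregate speed reports by road
# segment and flag segments running well below free-flow speed. All data and
# thresholds here are invented for illustration.
from collections import defaultdict
from statistics import median

# (segment_id, observed_speed_kph) reports streamed from participating phones
reports = [
    ("I-80_E_mile12", 95), ("I-80_E_mile12", 102), ("I-80_E_mile12", 88),
    ("Bay_Bridge_W",  22), ("Bay_Bridge_W",  18), ("Bay_Bridge_W",  25),
    ("CA-13_N_mile3", 60), ("CA-13_N_mile3", 64),
]

free_flow_kph = {"I-80_E_mile12": 105, "Bay_Bridge_W": 80, "CA-13_N_mile3": 70}

def congestion_map(reports, free_flow, threshold=0.5, min_reports=2):
    """Return segment -> 'red'/'green' based on median speed vs. free flow."""
    by_segment = defaultdict(list)
    for segment, speed in reports:
        by_segment[segment].append(speed)
    status = {}
    for segment, speeds in by_segment.items():
        if len(speeds) < min_reports:            # too few probes to trust
            continue
        ratio = median(speeds) / free_flow[segment]
        status[segment] = "red" if ratio < threshold else "green"
    return status

print(congestion_map(reports, free_flow_kph))
# {'I-80_E_mile12': 'green', 'Bay_Bridge_W': 'red', 'CA-13_N_mile3': 'green'}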

Cleverer cars should help, too, by communicating with each other and warning drivers of unexpected changes in road conditions. Eventually they may not even have drivers at all. And thanks to all those data they may be cleaner, too. At the Fraunhofer FOKUS Institute in Berlin, Ilja Radusch and his colleagues show how hybrid cars can be automatically instructed to switch from petrol to electric power if local air quality is poor, say, or if they are going past a school.

Read the entire article after the jump.

Images of cities courtesy of Google search.

Computers in the Movies

Most of us now carry around inside our smartphones more computing power than NASA once had in the Apollo command module. So, it’s interesting to look back at old movies to see how celluloid fiction portrayed computers. Most from the 1950s and ’60s were replete with spinning tape drives and enough lights to resemble the Manhattan skyline. Our favorite here at theDiagonal is the first “Bat Computer” from the original 1960s TV series, which could be found churning away in Batman’s crime-fighting nerve center beneath Wayne Manor.

From Wired:

The United States government powered up its SAGE defense system in July 1958, at an Air Force base near Trenton, New Jersey. Short for Semi-Automatic Ground Environment, SAGE would eventually span 24 command and control stations across the US and Canada, warning against potential air attacks via radar and an early IBM computer called the AN/FSQ-7.

“It automated air defense,” says Mike Loewen, who worked with SAGE while serving with the Air Force in the 1980s. “It used a versatile, programmable, digital computer to process all this incoming radar data from various sites around the region and display it in a format that made sense to people. It provided a computer display of the digitally processed radar information.”

Fronted by a wall of dials, switches, neon lights, and incandescent lamps — and often plugged into spinning tape drives stretching from floor to ceiling — the AN/FSQ-7 looked like one of those massive computing systems that turned up in Hollywood movies and prime time TV during the ’60s and the ’70s. This is mainly because it is one of those massive computing systems that turned up in Hollywood movies and TV during the ’60s and ’70s — over and over and over again. Think Lost In Space. Get Smart. Fantastic Voyage. In Like Flint. Or our personal favorite: The Towering Inferno.

That’s the AN/FSQ-7 in The Towering Inferno at the top of this page, operated by a man named OJ Simpson, trying to track a fire that’s threatening to bring down the world’s tallest building.

For decades, the AN/FSQ-7 — Q7 for short — helped define the image of a computer in the popular consciousness. Nevermind that it was just a radar system originally backed by tens of thousands of vacuum tubes. For moviegoers everywhere, this was the sort of thing that automated myriad tasks not only in modern-day America but the distant future.

It never made much sense. But sometimes, it made even less sense. In the ’60s and ’70s, some films didn’t see the future all that clearly. Woody Allen’s Sleeper is set in 2173, and it shows the AN/FSQ-7 helping 22nd-century Teamsters make repairs to robotic man servants. Other films just didn’t see the present all that clearly. Independence Day was made in 1996, and apparently, its producers were unaware that the Air Force decommissioned SAGE 13 years earlier.

Of course, the Q7 is only part of the tale. The history of movies and TV is littered with big, beefy, photogenic machines that make absolutely no sense whatsoever. Sometimes they’re real machines doing unreal tasks. And sometimes they’re unreal machines doing unreal tasks. But we love them all. Oh so very much.

Mike Loewen first noticed the Q7 in a mid-’60s prime time TV series called The Time Tunnel. Produced by the irrepressible Irwin Allen, Time Tunnel concerned a secret government project to build a time machine beneath a trap door in the Arizona desert. A Q7 powered this subterranean time machine, complete with all those dials, switches, neon lights, and incandescent lamps.

No, an AN/FSQ-7 couldn’t really power a time machine. But time machines don’t exist. So it all works out quite nicely.

At first, Loewen didn’t know it was a Q7. But then, after he wound up in front of a SAGE system while in the Air Force many years later, it all came together. “I realized that these computer banks running the Time Tunnel were large sections of panels from the SAGE computer,” Loewen says. “And that’s where I got interested.”

He noticed the Q7 in TV show after TV show, movie after movie — and he started documenting these SAGE star turns on his personal homepage. In each case, the Q7 was seen doing stuff it couldn’t possibly do, but there was no doubt this was the Q7 — or at least part of it.

Here’s that subterranean time machine that caught the eye of Mike Loewen in The Time Tunnel (1966). The cool thing about the Time Tunnel AN/FSQ-7 is that even when it traps two government scientists in an endless time warp, it always sends them to dates of extremely important historical significance. Otherwise, you’d have one boring TV show on your hands.

Read the entire article following the jump.

Image: The Time Tunnel (1966). Courtesy of Wired.

The Promise of Quantum Computation

Advances in quantum physics and in the associated realm of quantum information promise to revolutionize computing. Imagine a computer several trillion times faster than present-day supercomputers — well, that’s where we are heading.

From the New York Times:

THIS summer, physicists celebrated a triumph that many consider fundamental to our understanding of the physical world: the discovery, after a multibillion-dollar effort, of the Higgs boson.

Given its importance, many of us in the physics community expected the event to earn this year’s Nobel Prize in Physics. Instead, the award went to achievements in a field far less well known and vastly less expensive: quantum information.

It may not catch as many headlines as the hunt for elusive particles, but the field of quantum information may soon answer questions even more fundamental — and upsetting — than the ones that drove the search for the Higgs. It could well usher in a radical new era of technology, one that makes today’s fastest computers look like hand-cranked adding machines.

The basis for both the work behind the Higgs search and quantum information theory is quantum physics, the most accurate and powerful theory in all of science. With it we created remarkable technologies like the transistor and the laser, which, in time, were transformed into devices — computers and iPhones — that reshaped human culture.

But the very usefulness of quantum physics masked a disturbing dissonance at its core. There are mysteries — summed up neatly in Werner Heisenberg’s famous adage “atoms are not things” — lurking at the heart of quantum physics suggesting that our everyday assumptions about reality are no more than illusions.

Take the “principle of superposition,” which holds that things at the subatomic level can be literally two places at once. Worse, it means they can be two things at once. This superposition animates the famous parable of Schrödinger’s cat, whereby a wee kitty is left both living and dead at the same time because its fate depends on a superposed quantum particle.

For decades such mysteries were debated but never pushed toward resolution, in part because no resolution seemed possible and, in part, because useful work could go on without resolving them (an attitude sometimes called “shut up and calculate”). Scientists could attract money and press with ever larger supercolliders while ignoring such pesky questions.

But as this year’s Nobel recognizes, that’s starting to change. Increasingly clever experiments are exploiting advances in cheap, high-precision lasers and atomic-scale transistors. Quantum information studies often require nothing more than some equipment on a table and a few graduate students. In this way, quantum information’s progress has come not by bludgeoning nature into submission but by subtly tricking it to step into the light.

Take the superposition debate. One camp claims that a deeper level of reality lies hidden beneath all the quantum weirdness. Once the so-called hidden variables controlling reality are exposed, they say, the strangeness of superposition will evaporate.

Another camp claims that superposition shows us that potential realities matter just as much as the single, fully manifested one we experience. But what collapses the potential electrons in their two locations into the one electron we actually see? According to this interpretation, it is the very act of looking; the measurement process collapses an ethereal world of potentials into the one real world we experience.

And a third major camp argues that particles can be two places at once only because the universe itself splits into parallel realities at the moment of measurement, one universe for each particle location — and thus an infinite number of ever splitting parallel versions of the universe (and us) are all evolving alongside one another.

These fundamental questions might have lived forever at the intersection of physics and philosophy. Then, in the 1980s, a steady advance of low-cost, high-precision lasers and other “quantum optical” technologies began to appear. With these new devices, researchers, including this year’s Nobel laureates, David J. Wineland and Serge Haroche, could trap and subtly manipulate individual atoms or light particles. Such exquisite control of the nano-world allowed them to design subtle experiments probing the meaning of quantum weirdness.

Soon at least one interpretation, the most common sense version of hidden variables, was completely ruled out.

At the same time new and even more exciting possibilities opened up as scientists began thinking of quantum physics in terms of information, rather than just matter — in other words, asking if physics fundamentally tells us more about our interaction with the world (i.e., our information) than the nature of the world by itself (i.e., matter). And so the field of quantum information theory was born, with very real new possibilities in the very real world of technology.

What does this all mean in practice? Take one area where quantum information theory holds promise, that of quantum computing.

Classical computers use “bits” of information that can be either 0 or 1. But quantum-information technologies let scientists consider “qubits,” quantum bits of information that are both 0 and 1 at the same time. Logic circuits, made of qubits directly harnessing the weirdness of superpositions, allow a quantum computer to calculate vastly faster than anything existing today. A quantum machine using no more than 300 qubits would be a million, trillion, trillion, trillion times faster than the most modern supercomputer.
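To get a feel for why qubit counts matter so much, note that describing n qubits classically takes a state vector of 2^n complex amplitudes, a number that doubles with every qubit added. The toy simulator below is our own illustration of that bookkeeping; it shows how a classical machine labors to mimic a quantum one, not how a quantum computer works internally.

# Toy state-vector illustration: n qubits in superposition need 2**n complex
# amplitudes to describe classically. A classical sketch for intuition only.
import numpy as np

def uniform_superposition(n_qubits):
    """State with every n-bit value present at once (a Hadamard on each qubit)."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

def measure(state, rng=np.random.default_rng()):
    """Collapse the superposition: pick one basis state with probability |amplitude|^2."""
    probs = np.abs(state) ** 2
    return int(rng.choice(len(state), p=probs))

state = uniform_superposition(4)          # 16 amplitudes for 4 qubits
print(len(state), measure(state))         # 16, plus one of 16 equally likely outcomes

# The memory needed doubles with every added qubit.
for n in (10, 20, 30, 300):
    print(n, "qubits ->", 2 ** n, "amplitudes")

At four qubits the vector has 16 entries; at 300 it would have roughly 10^90, which is why even a modest quantum machine could outrun any conceivable classical computer on the right problems.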

Read the entire article after the jump.

Image: Bloch sphere representation of a qubit, the fundamental building block of quantum computers. Courtesy of Wikipedia.

What’s All the Fuss About Big Data?

We excerpt an interview with big data pioneer and computer scientist, Alex Pentland, via the Edge. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.

While there is no exact definition of “big data”, it tends to be characterized quantitatively and qualitatively differently from data commonly used by most organizations. Where regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to vast collections of data that often lie beyond the realm of regular computation. So, big data often requires vast, specialized storage and enormous processing capabilities. Data sets that fall into the big data category cover such areas as climate science, genomics, particle physics, and computational social science.

Big data holds true promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.

Alex Pentland via the Edge:

Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.

I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like location data off of your cell phone or credit card, it’s the little data breadcrumbs that you leave behind you as you move around in the world.

What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.

As a consequence, analysis of Big Data is increasingly about finding connections, connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data. Not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. What these are is systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. Big data gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.

The notion that it is connections between people that is really important is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.

Because it is so important to understand these connections Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that is actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.

Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.

The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.

Changing The Way We Design Systems

With Big Data, traditional methods of system building are of limited use. The data is so big that almost any question you ask about it will have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant! As a consequence, the normal laboratory-based question-and-answer process, the method that we have used to build systems for centuries, begins to fall apart.
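A small simulation makes the point concrete. The numbers below are invented: a difference between two groups of only half a percent of a standard deviation, which is meaningless for any practical purpose, still produces a tiny p-value once n is in the millions.

import math
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
# Two groups whose true means differ by only 0.5% of a standard deviation.
a = rng.normal(loc=0.000, scale=1.0, size=n)
b = rng.normal(loc=0.005, scale=1.0, size=n)

# Classic two-sample z-test, done by hand with the normal approximation.
se = math.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
z = (b.mean() - a.mean()) / se
p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value

print(f"difference in means: {b.mean() - a.mean():.5f}")
print(f"z = {z:.2f}, p = {p:.2g}")        # far below the usual 0.05 threshold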

Big Data and the notion of Connection Science are outside our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much outside our old, familiar ballpark.

With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is: why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. We have to come up with new ways to test the causality of connections in the real world, far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.
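Here is a hedged sketch, on entirely synthetic data, of how such false correlations arise: screen many unrelated variables against an outcome, and the best one will look convincing, until a fresh sample (the stand-in here for a real-world experiment) shows it does not replicate.

import numpy as np

rng = np.random.default_rng(42)
n_people, n_vars = 500, 2000
X = rng.normal(size=(n_people, n_vars))    # unrelated "behavioral" variables
flu = rng.normal(size=n_people)            # outcome, independent of all of them

corrs = np.array([np.corrcoef(X[:, j], flu)[0, 1] for j in range(n_vars)])
best = int(np.argmax(np.abs(corrs)))
print(f"best in-sample correlation: r = {corrs[best]:+.3f}")

# Check the "winning" variable on new people: the correlation vanishes.
X_new = rng.normal(size=(n_people, n_vars))
flu_new = rng.normal(size=n_people)
r_new = np.corrcoef(X_new[:, best], flu_new)[0, 1]
print(f"same variable on fresh data:  r = {r_new:+.3f}")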

The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use Big Data analytics, what the results mean, or what to believe.

In fact, the data scientists themselves don’t have much intuition either… and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t realize that they were overfitting the data. They didn’t have that dialogue between intuition and the causal processes that generated the data. They just fit the model, got a good number, and published it, and the reviewers didn’t catch it either. That’s pretty bad, because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
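For readers who have not seen overfitting in miniature, here is an illustrative toy example (invented data, not from any published study): a 15th-degree polynomial fit to 20 noisy points scores almost perfectly on the data it was fit to and falls apart on new data drawn from the same process.

import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.3, size=n)   # true signal + noise
    return x, y

x_train, y_train = sample(20)
x_test, y_test = sample(200)

for degree in (3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    err_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    err_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {err_train:.3f}, test MSE {err_test:.3f}")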

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Techcrunch.[end-div]

Living Organism as Software

For the first time scientists have built a computer software model of an entire organism from its molecular building blocks. This allows the model to predict previously unobserved cellular biological processes and behaviors. While the organism in question is a simple bacterium, this represents another huge advance in computational biology.

[div class=attrib]From the New York Times:[end-div]

Scientists at Stanford University and the J. Craig Venter Institute have developed the first software simulation of an entire organism, a humble single-cell bacterium that lives in the human genital and respiratory tracts.

The scientists and other experts said the work was a giant step toward developing computerized laboratories that could carry out complete experiments without the need for traditional instruments.

For medical researchers and drug designers, cellular models will be able to supplant experiments during the early stages of screening for new compounds. And for molecular biologists, models that are of sufficient accuracy will yield new understanding of basic biological principles.

The simulation of the complete life cycle of the pathogen, Mycoplasma genitalium, was presented on Friday in the journal Cell. The scientists called it a “first draft” but added that the effort was the first time an entire organism had been modeled in such detail — in this case, all of its 525 genes.

“Where I think our work is different is that we explicitly include all of the genes and every known gene function,” the team’s leader, Markus W. Covert, an assistant professor of bioengineering at Stanford, wrote in an e-mail. “There’s no one else out there who has been able to include more than a handful of functions or more than, say, one-third of the genes.”

The simulation, which runs on a cluster of 128 computers, models the complete life span of the cell at the molecular level, charting the interactions of 28 categories of molecules — including DNA, RNA, proteins and small molecules known as metabolites that are generated by cell processes.

“The model presented by the authors is the first truly integrated effort to simulate the workings of a free-living microbe, and it should be commended for its audacity alone,” wrote the Columbia scientists Peter L. Freddolino and Saeed Tavazoie in a commentary that accompanied the article. “This is a tremendous task, involving the interpretation and integration of a massive amount of data.”

They called the simulation an important advance in the new field of computational biology, which has recently yielded such achievements as the creation of a synthetic life form — an entire bacterial genome created by a team led by the genome pioneer J. Craig Venter. The scientists used it to take over an existing cell.

For their computer simulation, the researchers had the advantage of extensive scientific literature on the bacterium. They were able to use data taken from more than 900 scientific papers to validate the accuracy of their software model.

Still, they said that the model of the simplest biological system was pushing the limits of their computers.

“Right now, running a simulation for a single cell to divide only one time takes around 10 hours and generates half a gigabyte of data,” Dr. Covert wrote. “I find this fact completely fascinating, because I don’t know that anyone has ever asked how much data a living thing truly holds. We often think of the DNA as the storage medium, but clearly there is more to it than that.”

In designing their model, the scientists chose an approach that parallels the design of modern software systems, known as object-oriented programming. Software designers organize their programs in modules, which communicate with one another by passing data and instructions back and forth.

Similarly, the simulated bacterium is a series of modules that mimic the different functions of the cell.

“The major modeling insight we had a few years ago was to break up the functionality of the cell into subgroups which we could model individually, each with its own mathematics, and then to integrate these sub-models together into a whole,” Dr. Covert said. “It turned out to be a very exciting idea.”
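As a rough illustration of that modular idea (a toy sketch, not the published Stanford model, and with made-up rules and rates), each cellular process can be written as its own object with its own simple mathematics, and a driver integrates the sub-models by passing a shared state back and forth.

class Metabolism:
    def step(self, cell, dt):
        # Hypothetical rule: convert nutrients into energy at a fixed rate.
        used = min(cell["nutrients"], 5.0 * dt)
        cell["nutrients"] -= used
        cell["energy"] += used

class Transcription:
    def step(self, cell, dt):
        # Hypothetical rule: making RNA costs energy.
        if cell["energy"] >= 1.0 * dt:
            cell["energy"] -= 1.0 * dt
            cell["rna"] += 0.5 * dt

class Translation:
    def step(self, cell, dt):
        # Hypothetical rule: protein production is limited by available RNA.
        cell["protein"] += 0.2 * cell["rna"] * dt

def simulate(hours=10.0, dt=0.1):
    cell = {"nutrients": 100.0, "energy": 0.0, "rna": 0.0, "protein": 0.0}
    modules = [Metabolism(), Transcription(), Translation()]
    t = 0.0
    while t < hours:
        for module in modules:      # integrate the sub-models at each step
            module.step(cell, dt)
        t += dt
    return cell

print(simulate())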

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A Whole-Cell Computational Model Predicts Phenotype from Genotype. Courtesy of Cell / Elsevier Inc.[end-div]

Quantum Computer Leap

The practical science behind quantum computers continues to make exciting progress. Quantum computers promise, in theory, immense gains in power and speed through the use of atomic-scale parallel processing.

[div class=attrib]From the Observer:[end-div]

The reality of the universe in which we live is an outrage to common sense. Over the past 100 years, scientists have been forced to abandon a theory in which the stuff of the universe constitutes a single, concrete reality in exchange for one in which a single particle can be in two (or more) places at the same time. This is the universe as revealed by the laws of quantum physics and it is a model we are forced to accept – we have been battered into it by the weight of the scientific evidence. Without it, we would not have discovered and exploited the tiny switches present in their billions on every microchip, in every mobile phone and computer around the world. The modern world is built using quantum physics: through its technological applications in medicine, global communications and scientific computing it has shaped the world in which we live.

Although modern computing relies on the fidelity of quantum physics, the action of those tiny switches remains firmly in the domain of everyday logic. Each switch can be either “on” or “off”, and computer programs are implemented by controlling the flow of electricity through a network of wires and switches: the electricity flows through open switches and is blocked by closed switches. The result is a plethora of extremely useful devices that process information in a fantastic variety of ways.

Modern “classical” computers seem to have almost limitless potential – there is so much we can do with them. But there is an awful lot we cannot do with them too. There are problems in science that are of tremendous importance but which we have no hope of solving, not ever, using classical computers. The trouble is that some problems require so much information processing that there simply aren’t enough atoms in the universe to build a switch-based computer to solve them. This isn’t an esoteric matter of mere academic interest – classical computers can’t ever hope to model the behaviour of some systems that contain even just a few tens of atoms. This is a serious obstacle to those who are trying to understand the way molecules behave or how certain materials work – without the ability to build computer models, they are hampered in their efforts. One example is the field of high-temperature superconductivity. Certain materials are able to conduct electricity “for free” at surprisingly high temperatures (still pretty cold, though, at well below -100 degrees Celsius). The trouble is, nobody really knows how they work and that seriously hinders any attempt to make a commercially viable technology. The difficulty in simulating physical systems of this type arises whenever quantum effects are playing an important role, and that is the clue we need to identify a possible way to make progress.
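A quick editorial aside on the arithmetic behind that claim, assuming each particle is idealized as a simple two-level quantum system: the state of n such systems needs 2^n complex amplitudes, so the classical memory required explodes long before you reach "a few tens of atoms".

for n in (10, 30, 50, 80):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16       # one complex128 number per amplitude
    print(f"n = {n:2d}: {amplitudes:.3e} amplitudes, about {bytes_needed / 1e9:.3e} GB")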

It was American physicist Richard Feynman who, in 1981, first recognised that nature evidently does not need to employ vast computing resources to manufacture complicated quantum systems. That means if we can mimic nature then we might be able to simulate these systems without the prohibitive computational cost. Simulating nature is already done every day in science labs around the world – simulations allow scientists to play around in ways that cannot be realised in an experiment, either because the experiment would be too difficult or expensive or even impossible. Feynman’s insight was that simulations that inherently include quantum physics from the outset have the potential to tackle those otherwise impossible problems.

Quantum simulations have, in the past year, really taken off. The ability to delicately manipulate and measure systems containing just a few atoms is a requirement of any attempt at quantum simulation and it is thanks to recent technical advances that this is now becoming possible. Most recently, in an article published in the journal Nature last week, physicists from the US, Australia and South Africa have teamed up to build a device capable of simulating a particular type of magnetism that is of interest to those who are studying high-temperature superconductivity. Their simulator is esoteric. It is a small pancake-like layer less than 1 millimetre across made from 300 beryllium atoms that is delicately disturbed using laser beams… and it paves the way for future studies into quantum magnetism that will be impossible using a classical computer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A crystal of beryllium ions confined by a large magnetic field at the US National Institute of Standards and Technology’s quantum simulator. The outermost electron of each ion is a quantum bit (qubit), and here they are fluorescing blue, which indicates they are all in the same state. Photograph courtesy of Britton/NIST, Observer.[end-div]

Turing Test 2.0 – Intelligent Behavior Free of Bigotry

One wonders what the world would look like today had Alan Turing been criminally prosecuted and jailed by the British government for his homosexuality before the Second World War, rather than in 1952. Would the British have been able to break German naval ciphers encoded by the Enigma machine? Would the German Navy have prevailed, and would the Nazis have gone on to conquer the British Isles?

Actually, Turing was not imprisoned in 1952 — rather, he “accepted” chemical castration at the hands of the British government rather than face jail. He died two years later of self-inflicted cyanide poisoning, just short of his 42nd birthday.

Now, a hundred years on from his birth, historians are reflecting on his short life and his lasting legacy. Turing is widely regarded as having founded the discipline of artificial intelligence, and he made significant contributions to computing. Yet most of his achievements went unrecognized for many decades or were given short shrift, perhaps due to his confidential work for the government or, more likely, because of his persona non grata status.

In 2009 the British government offered Turing an apology. And, of course, we now have the Turing Test, a test of a machine’s ability to exhibit intelligent behavior. So, one hundred years after Turing’s birth, to honor his life we should launch a new and improved Turing Test. Let’s call it the Turing Test 2.0.

This test would measure a human’s ability to exhibit intelligent behavior free of bigotry.

[div class=attrib]From Nature:[end-div]

Alan Turing is always in the news — for his place in science, but also for his 1952 conviction for having gay sex (illegal in Britain until 1967) and his suicide two years later. Former Prime Minister Gordon Brown issued an apology to Turing in 2009, and a campaign for a ‘pardon’ was rebuffed earlier this month.

Must you be a great figure to merit a ‘pardon’ for being gay? If so, how great? Is it enough to break the Enigma ciphers used by Nazi Germany in the Second World War? Or do you need to invent the computer as well, with artificial intelligence as a bonus? Is that great enough?

Turing’s reputation has gone from zero to hero, but defining what he achieved is not simple. Is it correct to credit Turing with the computer? To historians who focus on the engineering of early machines, Turing is an also-ran. Today’s scientists know the maxim ‘publish or perish’, and Turing just did not publish enough about computers. He quickly became perishable goods. His major published papers on computability (in 1936) and artificial intelligence (in 1950) are some of the most cited in the scientific literature, but they leave a yawning gap. His extensive computer plans of 1946, 1947 and 1948 were left as unpublished reports. He never put into scientific journals the simple claim that he had worked out how to turn his 1936 “universal machine” into the practical electronic computer of 1945. Turing missed those first opportunities to explain the theory and strategy of programming, and instead got trapped in the technicalities of primitive storage mechanisms.

He could have caught up after 1949, had he used his time at the University of Manchester, UK, to write a definitive account of the theory and practice of computing. Instead, he founded a new field in mathematical biology and left other people to record the landscape of computers. They painted him out of it. The first book on computers to be published in Britain, Faster than Thought (Pitman, 1953), offered this derisive definition of Turing’s theoretical contribution:

“Türing machine. In 1936 Dr. Turing wrote a paper on the design and limitations of computing machines. For this reason they are sometimes known by his name. The umlaut is an unearned and undesirable addition, due, presumably, to an impression that anything so incomprehensible must be Teutonic.”

That a book on computers should describe the theory of computing as incomprehensible neatly illustrates the climate Turing had to endure. He did make a brief contribution to the book, buried in chapter 26, in which he summarized computability and the universal machine. However, his low-key account never conveyed that these central concepts were his own, or that he had planned the computer revolution.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alan Mathison Turing at the time of his election to a Fellowship of the Royal Society. Photograph was taken at the Elliott & Fry studio on 29 March 1951.[end-div]

C is for Dennis Ritchie

Last week, on October 8, 2011, Dennis Ritchie passed away. Most of the mainstream media failed to report his death — after all, he was never quite as flamboyant as another technology darling, Steve Jobs. However, his contributions to the worlds of technology and computer science should certainly place him in the same club.

After all, Dennis Ritchie developed the C programming language, and he significantly influenced the development of other languages. He also pioneered the Unix operating system. Both C and Unix now underpin much of the world’s computer systems.

Dennis Ritchie and his co-developer, Ken Thompson, were awarded the National Medal of Technology in 1999 by President Bill Clinton.

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Dependable Software by Design

[div class=attrib]From Scientific American:[end-div]

Computers fly our airliners and run most of the world’s banking, communications, retail and manufacturing systems. Now powerful analysis tools will at last help software engineers ensure the reliability of their designs.

An architectural marvel when it opened 11 years ago, the new Denver International Airport’s high-tech jewel was to be its automated baggage handler. It would autonomously route luggage around 26 miles of conveyors for rapid, seamless delivery to planes and passengers. But software problems dogged the system, delaying the airport’s opening by 16 months and adding hundreds of millions of dollars in cost overruns. Despite years of tweaking, it never ran reliably. Last summer airport managers finally pulled the plug–reverting to traditional manually loaded baggage carts and tugs with human drivers. The mechanized handler’s designer, BAE Automated Systems, was liquidated, and United Airlines, its principal user, slipped into bankruptcy, in part because of the mess.

The high price of poor software design is paid daily by millions of frustrated users. Other notorious cases include costly debacles at the U.S. Internal Revenue Service (a failed $4-billion modernization effort in 1997, followed by an equally troubled $8-billion updating project); the Federal Bureau of Investigation (a $170-million virtual case-file management system was scrapped in 2005); and the Federal Aviation Administration (a lingering and still unsuccessful attempt to renovate its aging air-traffic control system).

[div class=attrib]More from theSource here.[end-div]

Computing with Quantum Knots

[div class=attrib]From Scientific American:[end-div]

A machine based on bizarre particles called anyons that represents a calculation as a set of braids in spacetime might be a shortcut to practical quantum computation.

Quantum computers promise to perform calculations believed to be impossible for ordinary computers. Some of those calculations are of great real-world importance. For example, certain widely used encryption methods could be cracked given a computer capable of breaking a large number into its component factors within a reasonable length of time. Virtually all encryption methods used for highly sensitive data are vulnerable to one quantum algorithm or another.
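The classical bottleneck referred to above, in miniature: recovering the secret factors of an RSA-style modulus by trial division. The toy modulus below is a standard textbook example, not a real key; the method is easy for small numbers and hopeless at real key sizes, which is exactly the gap a quantum factoring algorithm would close.

def factor(n):
    """Return a factor pair of n by trial division (exponential in key length)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(factor(3233))   # (53, 61) -- a toy RSA-style modulus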

The extra power of a quantum computer comes about because it operates on information represented as qubits, or quantum bits, instead of bits. An ordinary classical bit can be either a 0 or a 1, and standard microchip architectures enforce that dichotomy rigorously. A qubit, in contrast, can be in a so-called superposition state, which entails proportions of 0 and 1 coexisting together. One can think of the possible qubit states as points on a sphere. The north pole is a classical 1, the south pole a 0, and all the points in between are all the possible superpositions of 0 and 1 [see “Rules for a Complex Quantum World,” by Michael A. Nielsen; Scientific American, November 2002]. The freedom that qubits have to roam across the entire sphere helps to give quantum computers their unique capabilities.
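A small numerical illustration of that sphere picture, using the article’s labeling of the poles (north pole = 1, south pole = 0); the angles and the helper function below are just an illustrative parameterization of a single qubit.

import cmath
import math

def qubit(theta, phi):
    """Amplitudes (a1, a0) of cos(theta/2)|1> + e^(i*phi) * sin(theta/2)|0>."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

for name, theta in [("north pole", 0.0),
                    ("equator", math.pi / 2),
                    ("south pole", math.pi)]:
    a1, a0 = qubit(theta, phi=0.0)
    p1, p0 = abs(a1) ** 2, abs(a0) ** 2
    print(f"{name:10s}  P(measure 1) = {p1:.2f}   P(measure 0) = {p0:.2f}")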

[div class=attrib]More from theSource here.[end-div]