Tag Archives: computer

The Rembrandt Algorithm


Over the last few decades robots have been steadily replacing humans in industrial and manufacturing sectors. Increasingly, robots are appearing in a broader array of service sectors; they’re stocking shelves, cleaning hotels, buffing windows, tending bar, dispensing cash.

Nowadays you’re likely to be the recipient of news articles filtered, and in some cases written, by pieces of code and business algorithms. Indeed, many boilerplate financial reports are now “written” by “analysts” who exist, not in flesh and blood, but virtually, inside server farms. Just recently a collection of circuitry and software trounced a human being at the strategic board game Go.

So, can computers progress from repetitive, mechanical and programmatic roles to more creative, free-wheeling vocations? Can computers become artists?

A group of data scientists, computer engineers, software developers and art historians set out to answer the question.

Jonathan Jones over at the Guardian has a few choice words on the result:

I’ve been away for a few days and missed the April Fool stories in Friday’s papers – until I spotted the one about a team of Dutch “data analysts, developers, engineers and art historians” creating a new painting using digital technology: a virtual Rembrandt painted by a Rembrandt app. Hilarious! But wait, this was too late to be an April Fool’s joke. This is a real thing that is actually happening.

What a horrible, tasteless, insensitive and soulless travesty of all that is creative in human nature. What a vile product of our strange time when the best brains dedicate themselves to the stupidest “challenges”, when technology is used for things it should never be used for and everybody feels obliged to applaud the heartless results because we so revere everything digital.

Hey, they’ve replaced the most poetic and searching portrait painter in history with a machine. When are we going to get Shakespeare’s plays and Bach’s St Matthew Passion rebooted by computers? I cannot wait for Love’s Labours Have Been Successfully Functionalised by William Shakesbot.

You cannot, I repeat, cannot, replicate the genius of Rembrandt van Rijn. His art is not a set of algorithms or stylistic tics that can be recreated by a human or mechanical imitator. He can only be faked – and a fake is a dead, dull thing with none of the life of the original. What these silly people have done is to invent a new way to mock art. Bravo to them! But the Dutch art historians and museums who appear to have lent their authority to such a venture are fools.

Rembrandt lived from 1606 to 1669. His art only has meaning as a historical record of his encounters with the people, beliefs and anguishes of his time. Its universality is the consequence of the depth and profundity with which it does so. Looking into the eyes of Rembrandt’s Self-Portrait at the Age of 63, I am looking at time itself: the time he has lived, and the time since he lived. A man who stared, hard, at himself in his 17th-century mirror now looks back at me, at you, his gaze so deep his mottled flesh is just the surface of what we see.

We glimpse his very soul. It’s not style and surface effects that make his paintings so great but the artist’s capacity to reveal his inner life and make us aware in turn of our own interiority – to experience an uncanny contact, soul to soul. Let’s call it the Rembrandt Shudder, that feeling I long for – and get – in front of every true Rembrandt masterpiece.

Is that a mystical claim? The implication of the digital Rembrandt is that we get too sentimental and moist-eyed about art, that great art is just a set of mannerisms that can be digitised. I disagree. If it’s mystical to see Rembrandt as a special and unique human being who created unrepeatable, inexhaustible masterpieces of perception and intuition then count me a mystic.

Read the entire story here.

Image: The Next Rembrandt (based on 168,263 Rembrandt painting fragments). Courtesy: Microsoft, Delft University of Technology, Mauritshuis (The Hague), Rembrandt House Museum (Amsterdam).


The Emperor and/is the Butterfly

In an earlier post I touched on the notion proposed by some cosmologists that our entire universe is some kind of highly advanced simulation. The hypothesis is that perhaps we are merely information elements within a vast mathematical fabrication, playthings of a much superior consciousness. Some draw upon parallels to The Matrix movie franchise.

Follow some of the story and video interviews here to learn more of this fascinating and somewhat unsettling idea. More unsettling still: did our overlord programmers leave a backdoor?

Video: David Brin – Could Our Universe Be a Fake? Courtesy of Closer to Truth.


Computer Generated Reality

Computer games have come a very long way since the pioneering days of Pong and Pac-Man. Games are now so realistic that many are indistinguishable from the real-world characters and scenarios they emulate. It is a testament to the skill and ingenuity of hardware and software engineers and the creativity of the developers who bring all the diverse underlying elements of a game together. Now, however, they have a match in the form of a computer system that is able to generate richly imagined and rendered worlds for use in the games themselves. It’s all done through algorithms.
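The approach such systems typically use is procedural generation: rather than storing a hand-built world, the game derives every detail deterministically from a seed, so whole worlds can be recreated on demand without ever being stored. A minimal sketch of the idea (the function name and hashing scheme here are illustrative assumptions, not Hello Games’ actual method):

```python
import hashlib

def terrain_height(seed: int, x: int, y: int) -> float:
    """Deterministically derive a terrain height for grid cell (x, y).

    Hashing the seed together with the coordinates means any machine
    asking about the same cell gets the same answer -- the world never
    needs to be stored, only regenerated.
    """
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    # Map the first 8 bytes of the hash onto the range [0.0, 1.0).
    return int.from_bytes(digest[:8], "big") / 2**64

# The same seed and coordinates always reproduce the same terrain,
# while neighbouring cells get independent-looking values.
print(terrain_height(42, 10, 20))
print(terrain_height(42, 10, 20) == terrain_height(42, 10, 20))  # True
```

Smoother results come from interpolated noise functions (Perlin or simplex noise) layered at several scales, but the underlying principle is the same: the world is a pure function of the seed.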

From Technology Review:

Read the entire story here.

Video: No Man’s Sky. Courtesy of Hello Games.



Goostman Versus Turing


Some computer scientists believe that “Eugene Goostman” may have overcome the famous hurdle proposed by Alan Turing, by cracking the eponymous Turing test. Eugene is a 13-year-old Ukrainian “boy” constructed from computer algorithms designed to feign intelligence and mirror human thought processes. During a text-based exchange Eugene managed to convince his human interrogators that he was a real boy — and thus his creators claim to have broken the previously impenetrable Turing barrier.

Other researchers and philosophers disagree: they claim that it’s easier to construct an artificial intelligence that converses in good but limited English — Eugene is Ukrainian, after all — than it would be to simulate a native anglophone adult. So the Turing test barrier may yet stand.
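The transcripts below show the tactic at work: when a question falls outside its repertoire, Eugene answers with a canned line or deflects with a question of its own. A minimal sketch of that pattern-matching style, using lines taken from the transcripts (the keyword scheme is an illustrative assumption; the actual Goostman program’s internals are far more elaborate and not public):

```python
import random

# Canned answers keyed on keywords, in the spirit of the transcripts.
CANNED = {
    "name": 'People call me Eugene. Or "Zhenya".',
    "age": "I am only 13, so I'm attending school so far.",
}

# Stock deflections Eugene falls back on when stumped.
DEFLECTIONS = [
    "By the way, I still don't know your specialty - or, possibly, I've missed it?",
    "And I forgot to ask you where you are from ...",
]

def reply(question: str) -> str:
    # Answer from the canned repertoire if a keyword matches...
    for keyword, answer in CANNED.items():
        if keyword in question.lower():
            return answer
    # ...otherwise change the subject, as Eugene repeatedly does below.
    return random.choice(DEFLECTIONS)

print(reply("What is your name?"))
print(reply("Do you prefer a dress or trousers to work?"))
```

The persona of a non-native-speaking teenager is what makes this shallow strategy plausible to judges: grammatical slips and topic changes read as character rather than as failure.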

From the Guardian:

From 2001: A Space Odyssey to Her, the idea of an intelligent computer that can hold conversations with humans has long been a dream of science-fiction writers, but that fantasy may just have taken a step closer to becoming reality. On Saturday, a computer algorithm claiming to be a 13-year-old boy called Eugene Goostman passed the Turing test, it has been claimed. However, some commentators have suggested that pretending to be a 13-year-old Ukrainian child is not as difficult as pretending to be an Anglophone adult, and that the result should not count.

The program convinced 33% of the human judges at an event at the Royal Society that it was actually a human. The Turing test, which was devised in 1950 by the pioneering computer scientist and second world war codebreaker Alan Turing, is a classic benchmark in artificial intelligence – it determines a machine’s ability to show intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Transcripts of the conversations from the latest Royal Society test are not yet available but below are transcripts from a previous event at Bletchley Park on 23 June 2012, with the same program.

Can you tell Eugene isn’t human?

Test 1

[12:43:37] Judge: Hi, I’m Cathy.

[12:43:43] Eugene: Cathy! Nice to meet you! You’re a woman, if I’m not mistaken.

[12:44:20] Judge: Right. What is your name?

[12:44:29] Eugene: People call me Eugene. Or “Zhenya”. It is a short name for Eugene in Russia and Ukraine.

[12:45:17] Judge: Nice name. Are you from either area?

[12:45:41] Eugene: Agree 🙂 Oh, what a fruitful conversation;-)

[12:45:52] Judge: Do you really think so?

[12:46:04] Eugene: No, your guess was wrong. I don’t think so. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[12:46:56] Judge: What speciality? What’s yours?

[12:47:01] Eugene: Huh? And I forgot to ask you where you are from.

[12:47:51] Judge: The universe and you?

[12:48:01] Eugene: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

Test 2

[16:11:25] Judge: what music do you like?

[16:11:32] Eugene: I like Eminem – because he doesn’t have a thing to do with “polite correctness”.

[16:11:46] Judge: OMG i love eminem, he’s my favourite artist!

[16:12:00] Eugene: I wonder what Eminem’s nick mean. I associate it only with a word “enema”. But he is a cool guy, I like his The Real Slim Shady.

[16:12:55] Judge: i know what his nickname means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.

[16:13:06] Eugene: Eminem is a famous crapper. No, raper… ripper… Yeah, I mean – rapper. I like his Stan and The Real Slim Shady.

[16:13:32] Judge: I like Hailies Song.

[16:13:46] Eugene: Interesting tastes. I like science-fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?

[16:14:22] Judge: Epsom, home of the derby. yourself?

[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.

[16:14:41] Judge: i shopuld [sic] talk with him sometime.

[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.

[16:15:04] Judge: That’s very rude.

[16:15:09] Eugene: Ummm … Frankly, I didn’t get your question 🙁

[16:15:49] Judge: “Ummm” isnt a word. i think you need typing lessons

[16:15:58] Eugene: My thoughts are same. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Test 3

[16:06:31] Judge: Hi there, do you prefer to wear a dress or trousers to work?

[16:06:37] Eugene: I am a scholar. I’m too young to make money.

[16:07:32] Judge: What was the last film you saw at the cinema?

[16:07:38] Eugene: Huh? Could you tell me what are you? I mean your profession.

[16:08:11] Judge: I am an IT manager, and you?

[16:08:16] Eugene: Manager? So how is it to work as manager?

[16:09:13] Judge: Hard work but rewarding, what do you do?

[16:09:18] Eugene: I am only 13, so I’m attending school so far.

[16:09:51] Judge: Have you seen teletubbies?

[16:10:04] Eugene: No, I haven’t seen it, unfortunately. But I hope it is better than these crappy Star Wars. And I forgot to ask you where you are from …

[16:10:15] Judge: Brighton, and you?

Read the entire story and Eugene’s conversation with real humans here.

Image: A conversation with Eugene Goostman. Courtesy of BBC.


You May Be Living Inside a Simulation


Some theorists posit that we are living inside a simulation, that the entire universe is one giant, evolving model inside a grander reality. This is a fascinating idea, but may never be experimentally verifiable. So just relax — you and I may not be real, but we’ll never know.

In a similar vein, however, researchers have themselves developed the broadest and most detailed simulation of the universe to date. Now, there are no “living” things yet inside this computer model, but it’s probably only a matter of time before our increasingly sophisticated simulations start wondering if they are simulations as well.

From the BBC:

An international team of researchers has created the most complete visual simulation of how the Universe evolved.

The computer model shows how the first galaxies formed around clumps of a mysterious, invisible substance called dark matter.

It is the first time that the Universe has been modelled so extensively and to such great resolution.

The research has been published in the journal Nature.


The simulation will provide a test bed for emerging theories of what the Universe is made of and what makes it tick.

One of the world’s leading authorities on galaxy formation, Professor Richard Ellis of the California Institute of Technology (Caltech) in Pasadena, described the simulation as “fabulous”.

“Now we can get to grips with how stars and galaxies form and relate it to dark matter,” he told BBC News.

The computer model draws on the theories of Professor Carlos Frenk of Durham University, UK, who said he was “pleased” that a computer model should come up with such a good result assuming that it began with dark matter.

“You can make stars and galaxies that look like the real thing. But it is the dark matter that is calling the shots”.

Cosmologists have been creating computer models of how the Universe evolved for more than 20 years. It involves entering details of what the Universe was like shortly after the Big Bang, developing a computer program which encapsulates the main theories of cosmology and then letting the programme run.

The simulated Universe that comes out at the other end is usually a very rough approximation of what astronomers really see.

The latest simulation, however, comes up with a Universe that is strikingly like the real one.

Immense computing power has been used to recreate this virtual Universe. It would take a normal laptop nearly 2,000 years to run the simulation. However, using state-of-the-art supercomputers and clever software called Arepo, researchers were able to crunch the numbers in three months.
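Taken at face value, the article’s two figures imply a speedup of roughly 8,000 times:

```python
# Rough speedup implied by the figures above:
# ~2,000 years on a normal laptop vs. ~3 months on supercomputers.
laptop_months = 2000 * 12
supercomputer_months = 3
speedup = laptop_months / supercomputer_months
print(speedup)  # 8000.0
```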

Cosmic tree

In the beginning, it shows strands of mysterious material which cosmologists call “dark matter” sprawling across the emptiness of space like branches of a cosmic tree. As millions of years pass by, the dark matter clumps and concentrates to form seeds for the first galaxies.

Then emerges the non-dark matter, the stuff that will in time go on to form stars, planets and life.

But early on there are a series of cataclysmic explosions when it gets sucked into black holes and then spat out: a chaotic period which regulated the formation of stars and galaxies. Eventually, the simulation settles into a Universe that is similar to the one we see around us.

According to Dr Mark Vogelsberger of Massachusetts Institute of Technology (MIT), who led the research, the simulations back many of the current theories of cosmology.

“Many of the simulated galaxies agree very well with the galaxies in the real Universe. It tells us that the basic understanding of how the Universe works must be correct and complete,” he said.

In particular, it backs the theory that dark matter is the scaffold on which the visible Universe is hanging.

“If you don’t include dark matter (in the simulation) it will not look like the real Universe,” Dr Vogelsberger told BBC News.

Read the entire article here.

Image: On the left: the real universe imaged via the Hubble telescope. On the right: a view of what emerges from the computer simulation. Courtesy of BBC / Illustris Collaboration.


Nuclear Codes and Floppy Disks

Sometimes a good case can be made for remaining a technological Luddite; sometimes eschewing the latest-and-greatest technical gizmo may actually work for you.


Take the case of the United States’ nuclear deterrent. A recent report on CBS 60 Minutes showed us how part of the computer system responsible for launch control of US intercontinental ballistic missiles (ICBMs) still uses antiquated 8-inch floppy disks. This part of the national defense is so old and arcane that it’s actually more secure than most contemporary computing systems and communications infrastructure. So, next time your internet-connected, cloud-based tablet or laptop gets hacked, consider reverting to a pre-1980s device.

From ars technica:

In a report that aired on April 27, CBS 60 Minutes correspondent Leslie Stahl expressed surprise that part of the computer system responsible for controlling the launch of the Minuteman III intercontinental ballistic missiles relied on data loaded from 8-inch floppy disks. Most of the young officers stationed at the launch control center had never seen a floppy disk before they became “missileers.”

An Air Force officer showed Stahl one of the disks, marked “Top Secret,” which is used with the computer that handles what was once called the Strategic Air Command Digital Network (SACDIN), a communication system that delivers launch commands to US missile forces. Beyond the floppies, a majority of the systems in the Wyoming US Air Force launch control center (LCC) Stahl visited dated back to the 1960s and 1970s, offering the Air Force’s missile forces an added level of cyber security, ICBM forces commander Major General Jack Weinstein told 60 Minutes.

“A few years ago we did a complete analysis of our entire network,” Weinstein said. “Cyber engineers found out that the system is extremely safe and extremely secure in the way it’s developed.”

However, not all of the Minuteman launch control centers’ aging hardware is an advantage. The analog phone systems, for example, often make it difficult for the missileers to communicate with each other or with their base. The Air Force commissioned studies on updating the ground-based missile force last year, and it’s preparing to spend $19 million this year on updates to the launch control centers. The military has also requested $600 million next year for further improvements.

Read the entire article here.

Image: Various floppy disks. Courtesy: George Chernilevsky, 2009 / Wikipedia.


An Ode to the Sinclair ZX81

What do the PDP-11, Commodore PET, Apple II and Sinclair’s ZX81 have in common? And, more importantly, for anyone under the age of 35, what on earth are they? Well, these are, respectively, an early time-sharing minicomputer, one of the first personal computers, Apple’s first mass-market computer, and the first home-based computer programmed by theDiagonal’s friendly editor back in the pioneering days of computation.

The article below on technological nostalgia pushed the recall button, bringing back vivid memories of dot matrix printers, FORTRAN, large floppy diskettes (5 1/4 inch), reel-to-reel tape storage, and the 1 KB of programmable memory on the ZX81. In fact, despite the tremendous and now laughable limitations of the ZX81 — one had to save and load programs via a tape cassette — programming the device at home was a true revelation.

Some would go so far as to say that the first computer is very much like the first kiss or the first date. Well, not so. But fun nonetheless, and responsible for much in the way of future career paths.

From ars technica:

Being a bunch of technology journalists who make our living on the Web, we at Ars all have a fairly intimate relationship with computers dating back to our childhood—even if for some of us, that childhood is a bit more distant than others. And our technological careers and interests are at least partially shaped by the devices we started with.

So when Cyborgology’s David Banks recently offered up an autobiography of himself based on the computing devices he grew up with, it started a conversation among us about our first computing experiences. And being the most (chronologically) senior of Ars’ senior editors, the lot fell to me to pull these recollections together—since, in theory, I have the longest view of the bunch.

Considering the first computer I used was a Digital Equipment Corp. PDP-10, that theory is probably correct.

The DEC PDP-10 and DECWriter II Terminal

In 1979, I was a high school sophomore at Longwood High School in Middle Island, New York, just a short distance from the Department of Energy’s Brookhaven National Labs. And it was at Longwood that I got the first opportunity to learn how to code, thanks to a time-share connection we had to a DEC PDP-10 at the State University of New York at Stony Brook.

The computer lab at Longwood, which was run by the math department and overseen by my teacher Mr. Dennis Schultz, connected over a leased line to SUNY. It had, if I recall correctly, six LA36 DECWriter II terminals connected back to the mainframe—essentially dot-matrix printers with keyboards on them. Turn one on while the mainframe was down, and it would print over and over:

PDP-10 NOT AVAILABLE

Time at the terminals was a precious resource, so we were encouraged to write out all of our code by hand first on graph paper and then take a stack of cards over to the keypunch. This process did wonders for my handwriting. I spent an inordinate amount of time just writing BASIC and FORTRAN code in block letters on graph-paper notebooks.

One of my first fully original programs was an aerial combat program that used three-dimensional arrays to track the movement of the player’s and the programmed opponent’s airplanes as each maneuvered to get the other in its sights. Since the program output to pin-fed paper, that could be a tedious process.

At a certain point, Mr. Schultz, who had been more than tolerant of my enthusiasm, had to crack down—my code was using up more than half the school’s allotted system storage. I can’t imagine how much worse it would have been if we had video terminals.

Actually, I can imagine, because in my senior year I was introduced to the Apple II, video, and sound. The vastness of 360 kilobytes of storage and the ability to code at the keyboard were such a huge luxury after the spartan world of punch cards that I couldn’t contain myself. I soon coded a student parking pass database for my school—while also coding a Dungeons & Dragons character tracking system, complete with combat resolution and hit point tracking.

—Sean Gallagher

A printer terminal and an acoustic coupler

I never saw the computer that gave me my first computing experience, and I have little idea what it actually was. In fact, if I ever knew where it was located, I’ve since forgotten. But I do distinctly recall the gateway to it: a locked door to the left of the teacher’s desk in my high school biology lab. Fortunately, the guardian—commonly known as Mr. Dobrow—was excited about introducing some of his students to computers, and he let a number of us spend our lunch hours experimenting with the system.

And what a system it was. Behind the physical door was another gateway, this one electronic. Since the computer was located in another town, you had to dial in by modem. The modems of the day were something different entirely from what you may recall from AOL’s dialup heyday. Rather than plugging straight in to your phone line, you dialed in manually—on a rotary phone, no less—then dropped the speaker and mic carefully into two rubber receptacles spaced to accept the standard-issue hardware of the day. (And it was standard issue; AT&T was still a monopoly at the time.)

That modem was hooked into a sort of combination of line printer and keyboard. When you were entering text, the setup acted just like a typewriter. But as soon as you hit the return key, it transmitted, and the mysterious machine at the other end responded, sending characters back that were dutifully printed out by the same machine. This meant that an infinite loop would unleash a spray of paper, and it had to be terminated by hanging up the phone.

It took us a while to get to infinite loops, though. Mr. Dobrow started us off on small simulations of things like stock markets and malaria control. Eventually, we found a way to list all the programs available and discovered a Star Trek game. Photon torpedoes were deadly, but the phasers never seemed to work, so before too long one guy had the bright idea of trying to hack the game (although that wasn’t the term that we used). We were off.

John Timmer

Read the entire article here.

Image: Sinclair ZX81. Courtesy of Wikipedia.


You May Be Just a Line of Code

Some very logical and rational people — scientists and philosophers — argue that we are no more than artificial constructs. They suggest that it is more likely that we are fleeting constructions in a simulated universe rather than organic beings in a real cosmos; that we are, in essence, like the oblivious Neo in the classic sci-fi movie The Matrix. One supposes that the minds proposing this notion are themselves simulations…

From Discovery:

In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.

The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.

But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?

Worlds in Our Grasp

The first serious attempt to find the truth about our universe came in 2001, when an effort to calculate the resources needed for a universe-size simulation made the prospect seem impossible.

Seth Lloyd, a quantum-mechanical engineer at MIT, estimated the number of “computer operations” our universe has performed since the Big Bang — basically, every event that has ever happened. To repeat them, and generate a perfect facsimile of reality down to the last atom, would take more energy than the universe has.

“The computer would have to be bigger than the universe, and time would tick more slowly in the program than in reality,” says Lloyd. “So why even bother building it?”
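Lloyd’s estimate can be roughly reproduced with the Margolus-Levitin theorem, which caps any physical system’s rate of elementary operations at 2E/πħ. The input values below are rough textbook figures, not Lloyd’s exact numbers, so the result is only an order-of-magnitude sketch:

```python
import math

HBAR = 1.055e-34          # reduced Planck constant, J*s
C = 3.0e8                 # speed of light, m/s
MASS_UNIVERSE = 1e53      # rough mass of the observable universe, kg
AGE_UNIVERSE = 4.3e17     # ~13.7 billion years, in seconds

energy = MASS_UNIVERSE * C**2                   # total mass-energy, J
ops_per_second = 2 * energy / (math.pi * HBAR)  # Margolus-Levitin bound
total_ops = ops_per_second * AGE_UNIVERSE

# Lands within an order of magnitude of Lloyd's famous ~10^120 figure.
print(f"~10^{math.log10(total_ops):.0f} operations")  # ~10^121 operations
```

The point of the exercise stands regardless of the rough inputs: any faithful atom-for-atom simulator would need at least this many operations, i.e. at least as much computing as the universe itself has performed.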

But others soon realized that making an imperfect copy of the universe that’s just good enough to fool its inhabitants would take far less computational power. In such a makeshift cosmos, the fine details of the microscopic world and the farthest stars might only be filled in by the programmers on the rare occasions that people study them with scientific equipment. As soon as no one was looking, they’d simply vanish.

In theory, we’d never detect these disappearing features, however, because each time the simulators noticed we were observing them again, they’d sketch them back in.
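In software terms this is ordinary lazy evaluation: if the fine detail is a pure function of a hidden seed and the observed region, it can be computed at the moment of observation, discarded afterward, and regenerated bit-for-bit identically the next time anyone looks. A toy illustration (the function and its scheme are hypothetical, for illustration only):

```python
def detail(seed: int, region: tuple) -> int:
    """Render fine detail for a region only at the moment it is observed.

    Because the result is a pure function of the seed and the region,
    nothing needs to be stored between observations: the detail can
    "vanish" when no one is looking and be sketched back in, identical,
    on the next look.
    """
    return hash((seed, region)) & 0xFFFFFFFF

first_look = detail(7, (12, 34))
# ...no one is looking; nothing is stored...
second_look = detail(7, (12, 34))
print(first_look == second_look)  # True
```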

That realization makes creating virtual universes eerily possible, even for us. Today’s supercomputers already crudely model the early universe, simulating how infant galaxies grew and changed. Given the rapid technological advances we’ve witnessed over past decades — your cell phone has more processing power than NASA’s computers had during the moon landings — it’s not a huge leap to imagine that such simulations will eventually encompass intelligent life.

“We may be able to fit humans into our simulation boxes within a century,” says Silas Beane, a nuclear physicist at the University of Washington in Seattle. Beane develops simulations that re-create how elementary protons and neutrons joined together to form ever larger atoms in our young universe.

Legislation and social mores could soon be all that keeps us from creating a universe of artificial, but still feeling, humans — but our tech-savvy descendants may find the power to play God too tempting to resist.

They could create a plethora of pet universes, vastly outnumbering the real cosmos. This thought led philosopher Nick Bostrom at the University of Oxford to conclude in 2003 that it makes more sense to bet that we’re delusional silicon-based artificial intelligences in one of these many forgeries, rather than carbon-based organisms in the genuine universe. Since there seemed no way to tell the difference between the two possibilities, however, bookmakers did not have to lose sleep working out the precise odds.

Learning the Truth

That changed in 2007 when John D. Barrow, professor of mathematical sciences at Cambridge University, suggested that an imperfect simulation of reality would contain detectable glitches. Just like your computer, the universe’s operating system would need updates to keep working.

As the simulation degrades, Barrow suggested, we might see aspects of nature that are supposed to be static — such as the speed of light or the fine-structure constant that describes the strength of the electromagnetic force — inexplicably drift from their “constant” values.

Last year, Beane and colleagues suggested a more concrete test of the simulation hypothesis. Most physicists assume that space is smooth and extends out infinitely. But physicists modeling the early universe cannot easily re-create a perfectly smooth background to house their atoms, stars and galaxies. Instead, they build up their simulated space from a lattice, or grid, just as television images are made up from multiple pixels.

The team calculated that the motion of particles within their simulation, and thus their energy, is related to the distance between the points of the lattice: the smaller the grid size, the higher the energy particles can have. That means that if our universe is a simulation, we’ll observe a maximum energy amount for the fastest particles. And as it happens, astronomers have noticed that cosmic rays, high-speed particles that originate in far-flung galaxies, always arrive at Earth with a specific maximum energy of about 10^20 electron volts.
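The relation at work is the standard lattice momentum cutoff: a grid of spacing a cannot represent wavelengths shorter than about 2a, so particle energies top out near E_max ≈ πħc/a. Running that backward from the observed cosmic-ray cutoff gives the lattice spacing such a simulation would need (a sketch of the reasoning only, not Beane's full calculation):

```python
import math

HBAR_C = 1.97327e-7   # hbar * c in eV * metres
E_MAX = 1e20          # observed cosmic-ray energy cutoff, eV

# E_max ~ pi * hbar * c / a   =>   a ~ pi * hbar * c / E_max
lattice_spacing = math.pi * HBAR_C / E_MAX
print(f"{lattice_spacing:.1e} m")  # 6.2e-27 m
```

A spacing of roughly 10^-27 metres is far below any length scale experiments have probed, which is why the directional test described next matters: the energy cutoff alone is consistent with conventional physics too.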

The simulation’s lattice has another observable effect that astronomers could pick up. If space is continuous, then there is no underlying grid that guides the direction of cosmic rays — they should come in from every direction equally. If we live in a simulation based on a lattice, however, the team has calculated that we wouldn’t see this even distribution. If physicists do see an uneven distribution, it would be a tough result to explain if the cosmos were real.

Astronomers need much more cosmic ray data to answer this one way or another. For Beane, either outcome would be fine. “Learning we live in a simulation would make no more difference to my life than believing that the universe was seeded at the Big Bang,” he says. But that’s because Beane imagines the simulators as driven purely to understand the cosmos, with no desire to interfere with their simulations.

Unfortunately, our almighty simulators may instead have programmed us into a universe-size reality show — and are capable of manipulating the rules of the game, purely for their entertainment. In that case, maybe our best strategy is to lead lives that amuse our audience, in the hope that our simulator-gods will resurrect us in the afterlife of next-generation simulations.

The weird consequences would not end there. Our simulators may be simulations themselves — just one rabbit hole within a linked series, each with different fundamental physical laws. “If we’re indeed a simulation, then that would be a logical possibility, that what we’re measuring aren’t really the laws of nature, they’re some sort of attempt at some sort of artificial law that the simulators have come up with. That’s a depressing thought!” says Beane.

This cosmic ray test may help reveal whether we are just lines of code in an artificial Matrix, where the established rules of physics may be bent, or even broken. But if learning that truth means accepting that you may never know for sure what’s real — including yourself — would you want to know?

There is no turning back, Neo: Do you take the blue pill, or the red pill?

Read the entire article here.

Image: The Matrix, promotional poster for the movie. Courtesy of Silver Pictures / Warner Bros. Entertainment Inc.

Send to Kindle

The Outliner as Outlier

Outlining tools for the composition of text are intimately linked with the evolution of the personal computer industry. Yet while outliners were some of the earliest “apps” to appear, their true power as mechanisms for thinking new thoughts has yet to be fully realized.

From Technology Review:

In 1984, the personal-computer industry was still small enough to be captured, with reasonable fidelity, in a one-volume publication, the Whole Earth Software Catalog. It told the curious what was up: “On an unlovely flat artifact called a disk may be hidden the concentrated intelligence of thousands of hours of design.” And filed under “Organizing” was one review of particular note, describing a program called ThinkTank, created by a man named Dave Winer.

ThinkTank was outlining software that ran on a personal computer. There had been outline programs before (most famously, Doug Engelbart’s NLS or oNLine System, demonstrated in 1968 in “The Mother of All Demos,” which also included the first practical implementation of hypertext). But Winer’s software was outlining for the masses, on personal computers. The reviewers in the Whole Earth Software Catalog were enthusiastic: “I have subordinate ideas neatly indented under other ideas,” wrote one. Another enumerated the possibilities: “Starting to write. Writer’s block. Refining expositions or presentations. Keeping notes that you can use later. Brainstorming.” ThinkTank wasn’t just a tool for making outlines. It promised to change the way you thought.

It’s an elitist view of software, and maybe self-defeating. Perhaps most users, who just want to compose two-page documents and quick e-mails, don’t need the structure that Fargo imposes.

But I sympathize with Winer. I’m an outliner person. I’ve used many outliners over the decades. Right now, my favorite is the open-source Org-mode in the Emacs text editor. Learning an outliner’s commands is a pleasure, because the payoff—the ability to distill a bubbling cauldron of thought into a list, and then to expand that bulleted list into an essay, a report, anything—is worth it. An outliner treats a text as a set of Lego bricks to be pulled apart and reassembled until the most pleasing structure is found.
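The Lego-brick picture can be made concrete with a toy sketch. This is illustrative only, a minimal outline-as-tree model, and not how ThinkTank, Fargo, or Org-mode is actually implemented; all names here are made up for the example:

```python
# A document as a tree of nodes that can be rearranged and re-rendered,
# the core operation every outliner offers.

class Node:
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

    def move_child(self, src, dst):
        """Pull a subtree out and reinsert it elsewhere among the siblings."""
        node = self.children.pop(src)
        self.children.insert(dst, node)

    def render(self, depth=0):
        """Flatten the tree back into the indented text an outliner displays."""
        lines = ["  " * depth + "- " + self.text]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

doc = Node("Essay", [Node("Intro"),
                     Node("Argument", [Node("Example")]),
                     Node("Conclusion")])
doc.move_child(2, 1)              # drag "Conclusion" ahead of "Argument"
print("\n".join(doc.render()))
```

The point of the sketch is that "moving a line" in an outliner is really moving a whole subtree; everything indented beneath it travels along.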

Fargo is an excellent outline editor, and it’s innovative because it’s a true Web application, running all its code inside the browser and storing versions of files in Dropbox. (Winer also recently released Concord, the outlining engine inside Fargo, under a free software license so that any developer can insert an outline into any Web application.) As you move words and ideas around, Fargo feels jaunty. Click on one of those lines in your outline and drag it, and arrows show you where else in the hierarchy that line might fit. They’re good arrows: fat, clear, obvious, informative.

For a while, bloggers using Fargo could publish posts with a free hosted service operated by Winer. But this fall the service broke, and Winer said he didn’t see how to fix it. Perhaps that’s just as well: an outline creates a certain unresolved tension with the dominant model for blogging. For Winer, a blog is a big outline of one’s days and intellectual development. But most blog publishing systems treat each post in isolation: a title, some text, maybe an image or video. Are bloggers ready to see a blog as one continuous document, a set of branches hanging off a common trunk? That’s the thing about outlines: they can become anything.

Read the entire article here.

Send to Kindle

Time-Off for Being Productive

If you are an IT professional, knowledge worker, computer engineer, or software developer, or simply use a computer for the majority of your working day, keep the following in mind the next time you negotiate benefits with your supervisor.

In Greece, public-sector employees get six extra days off per year simply for using a computer at work. But austerity is taking its toll: the Greek government is now moving to scrap this privilege, having already eliminated a bonus for workers who show up at the office on time.

From the Wall Street Journal:

Greek civil servants stand to lose the six extra days of paid vacation they get each year—just for using a computer—after the government moved Friday to rescind a privilege that has been around for more than two decades.

The bonus, known as “computer leave,” applied to workers whose job involved sitting in front of a computer for more than five hours a day—basically most of the staff working in ministries and public services.

“It belongs to another era,” Kyriakos Mitsotakis, the administrative reform minister, said. “Today, in the era of crisis, we cannot maintain anachronistic privileges.”

Doing away with this bonus, which dates to 1989, represents “a small, yet symbolic, step in modernizing public administration,” he said.

But the public-sector union Adedy said it would fight the decision in court.

“According to the European regulation, those using a computer should take a 15-minute break every two hours,” the general secretary Ermolaos Kasses said. “It is not easy to have all those breaks during the day, so it was decided back then that it should be given as a day off every two months.”

Inspectors from the International Monetary Fund, the European Commission and the European Central Bank are expected in Athens later this month to review Greece’s performance in meeting the terms of its second bailout.

Apart from shrinking the public sector, raising taxes and cutting wages and pensions, the government wants to show that it is moving forward with abolishing costly perks.

It has already limited the pensions that unmarried daughters are allowed to collect when their father dies, and scrapped a bonus for showing up to work on time. It has also extended the work week for teachers.

Read the entire article here.

Image: City of Oia, Santorini. Courtesy of Wikipedia.

Send to Kindle

A Post-PC, Post-Laptop World

Not too long ago the founders and shapers of much of our IT world were dreaming up new information technologies, tools and processes that we didn’t know we needed. These tinkerers became the establishment luminaries that we still love or hate: Microsoft, Dell, HP, Apple, Motorola and IBM. And, of course, they are still around.

But the world that they constructed is imploding and nobody really knows where it is heading. Will the leaders of the next IT revolution come from the likes of Google or Facebook? Or, more likely, is this just a prelude to a more radical shift, with seeds being sown in anonymous garages and labs across the U.S. and other tech hubs? Regardless, we are in for some unpredictable and exciting times.

From ars technica:

Change happens in IT whether you want it to or not. But even with all the talk of the “post-PC” era and the rise of the horrifically named “bring your own device” hype, change has happened in a patchwork. Despite the disruptive technologies documented on Ars and elsewhere, the fundamentals of enterprise IT have evolved slowly over the past decade.

But this, naturally, is about to change. The model that we’ve built IT on for the past 10 years is in the midst of collapsing on itself, and the companies that sold us the twigs and straw it was built with—Microsoft, Dell, and Hewlett-Packard to name a few—are facing the same sort of inflection points in their corporate life cycles that have ripped past IT giants to shreds. These corporate giants are faced with moments of truth despite making big bets on acquisitions to try to position themselves for what they saw as the future.

Predicting the future is hard, especially when you have an installed base to consider. But it’s not hard to identify the economic, technological, and cultural forces that are converging right now to shape the future of enterprise IT in the short term. We’re not entering a “post-PC” era in IT—we’re entering an era where the device we use to access applications and information is almost irrelevant. Nearly everything we do as employees or customers will be instrumented, analyzed, and aggregated.

“We’re not on a 10-year reinvention path anymore for enterprise IT,” said David Nichols, Americas IT Transformation Leader at Ernst & Young. “It’s more like [a] five-year or four-year path. And it’s getting faster. It’s going to happen at a pace we haven’t seen before.”

While the impact may be revolutionary, the cause is more evolutionary. A host of technologies that have been the “next big thing” for much of the last decade—smart mobile devices, the “Internet of Things,” deep analytics, social networking, and cloud computing—have finally reached a tipping point. The demand for mobile applications has turned what were once called “Web services” into a new class of managed application programming interfaces. These are changing not just how users interact with data, but the way enterprises collect and share data, write applications, and secure them.

Add the technologies pushed forward by government and defense in the last decade (such as facial recognition) and an abundance of cheap sensors, and you have the perfect “big data” storm. This sea of structured and unstructured data could change the nature of the enterprise or drown IT departments in the process. It will create social challenges as employees and customers start to understand the level to which they are being tracked by enterprises. And it will give companies more ammunition to continue to squeeze more productivity out of a shrinking workforce, as jobs once done by people are turned over to software robots.

There has been a lot of talk about how smartphones and tablets have supplanted the PC. In many ways, that talk is true. Yet we’re still largely using smartphones and tablets as if they were PCs.

But aside from mobile Web browsing and the use of tablets as a replacement for notebook PCs in presentations, most enterprises still use mobile devices the same way they used the BlackBerry in 1999—for e-mail. Mobile apps are the new webpage: everybody knows they need one to engage customers, but few are really sure what to do with them beyond what customers use their websites for. And while companies are trying to engage customers using social media on mobile, they’re largely not using the communications tools available on smart mobile devices to engage their own employees.

“I think right now, mobile adoption has been greatly overstated in terms of what people say they do with mobile versus mobile’s potential,” said Nichols. “Every CIO out there says, ‘Oh, we have mobile-enabled our workforce using tablets and smartphones.’ They’ve done mobile enablement but not mobile integration. Mobility at this point has not fundamentally changed the way the majority of the workforce works, at least not in the last five to six years.”

Smartphones make very poor PCs. But they have something no desktop PC has—a set of sensors that can provide a constant flow of data about where their user is. There’s visual information pulled in through a camera, motion and acceleration data, and even proximity. When combined with backend analytics, they can create opportunities to change how people work, collaborate, and interact with their environment.

Machine-to-machine (M2M) communications is a big part of that shift, according to Nichols. “Allowing devices with sensors to interact in a meaningful way is the next step,” he said. That step spans from the shop floor to the data center to the boardroom, as the devices we carry track our movements and our activities and interact with the systems around us.

Retailers are beginning to catch on to that, using mobile devices’ sensors to help close sales. “Everybody gets the concept that a mobile app is a necessity for a business-to-consumer retailer,” said Brian Kirschner, the director of Apigee Institute, a research organization created by the application infrastructure vendor Apigee in collaboration with executives of large enterprises and academic researchers. “But they don’t always get the transformative force on business that apps can have. Some can be small. For example, Home Depot has an app to help you search the store you’re in for what you’re looking for. We know that failure to find something in the store is a cause of lost sales and that Web search is useful and signs over aisles are ineffective. So the mobile app has a real impact on sales.”

But if you’ve already got stock information, location data for a customer, and e-commerce capabilities, why stop at making the app useful only during business hours? “If you think of the full potential of a mobile app, why can’t you buy something at the store when it’s closed if you’re near the store?” Kirschner said. “Instead of dropping you to a traditional Web process and offering you free shipping, they could have you pick it up at the store where you are tomorrow.”

That’s a change that’s being forced on many retailers, as noted in an article from the most recent MIT Sloan Management Review by a trio of experts: Erik Brynjolfsson, a professor at MIT’s Sloan School of Management and the director of the MIT Center for Digital Business; Yu Jeffrey Hu of the Georgia Institute of Technology; and Mohammed Rahman of the University of Calgary. If retailers don’t offer a way to meet mobile-equipped customers, they’ll buy it online elsewhere—often while standing in their store. Offering customers a way to extend their experience beyond the store’s walls is the kind of mobile use that’s going to create competitive advantage from information technology. And it’s the sort of competitive advantage that has long been milked out of the old IT model.

Nichols sees the same sort of technology transforming not just relationships with customers but the workplace itself. Say, for example, you’re in New York, and you want to discuss something with two colleagues. You request an appointment using your mobile device, and based on your location data, the location data of your colleagues, and the timing of the meeting, backend systems automatically book you a conference room and set up a video link to a co-worker out of town.

Based on analytics and the title of the meeting, relevant documents are dropped into a collaboration space. Your device records the meeting to an archive and notes who has attended in person. And this conversation is automatically transcribed, tagged, and forwarded to team members for review.

“Having location data to reserve conference rooms and calls and having all other logistics be handled in background changes the size of the organization I need to support that,” Nichols said.

The same applies to manufacturing, logistics, and other areas where applications can be tied into sensors and computing power. “If I have a factory where a machine has a belt that needs to be reordered every five years and it auto re-orders and it gets shipped without the need for human interaction, that changes the whole dynamics of how you operate,” Nichols said. “If you can take that and plug it into a proper workflow, you’re going to see an entirely new sort of workforce. That’s not that far away.”

Wearable devices like Google’s Glass will also feed into the new workplace. Wearable tech has been in use in some industries for decades, and in some cases it’s just an evolution from communication systems already used in many retail and manufacturing environments. But the ability to add augmented reality—a data overlay on top of a real world location—and to collect information without reaching for a device will quickly get traction in many enterprises.

Read the entire article here.

Image: Commodore PET (Personal Electronic Transactor) 2001 Series, circa 1977. Courtesy of Wikipedia.

Send to Kindle

Quantum Computation: Spooky Arithmetic

Quantum computation holds the promise of performance vastly superior to traditional digital systems based on bits that are either “on” or “off”. Yet for all the theory, quantum computation remains very much a research enterprise in its infancy. And, because of the peculiarities of the quantum world — think Schrödinger’s cat, both dead and alive — it’s even difficult to measure a quantum computer at work.

From Wired:

In early May, news reports gushed that a quantum computation device had for the first time outperformed classical computers, solving certain problems thousands of times faster. The media coverage sent ripples of excitement through the technology community. A full-on quantum computer, if ever built, would revolutionize large swaths of computer science, running many algorithms dramatically faster, including one that could crack most encryption protocols in use today.

Over the following weeks, however, a vigorous controversy surfaced among quantum computation researchers. Experts argued over whether the device, created by D-Wave Systems, in Burnaby, British Columbia, really offers the claimed speedups, whether it works the way the company thinks it does, and even whether it is really harnessing the counterintuitive weirdness of quantum physics, which governs the world of elementary particles such as electrons and photons.

Most researchers have no access to D-Wave’s proprietary system, so they can’t simply examine its specifications to verify the company’s claims. But even if they could look under its hood, how would they know it’s the real thing?

Verifying the processes of an ordinary computer is easy, in principle: At each step of a computation, you can examine its internal state — some series of 0s and 1s — to make sure it is carrying out the steps it claims.

A quantum computer’s internal state, however, is made of “qubits” — a mixture (or “superposition”) of 0 and 1 at the same time, like Schrödinger’s fabled quantum mechanical cat, which is simultaneously alive and dead. Writing down the internal state of a large quantum computer would require an impossibly large number of parameters. The state of a system containing 1,000 qubits, for example, could need more parameters than the estimated number of particles in the universe.
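The scaling the article alludes to is easy to check for yourself: an n-qubit superposition carries one complex amplitude per basis state, so describing it exactly takes 2^n parameters. A few lines of arithmetic (purely illustrative, not tied to any real quantum system) show why 1,000 qubits are hopeless to write down:

```python
# One complex amplitude per basis state: n qubits need 2**n amplitudes.

def amplitudes(n_qubits):
    """Number of complex amplitudes in an exact n-qubit state description."""
    return 2 ** n_qubits

print(amplitudes(3))                # 8 amplitudes: trivial to simulate
print(amplitudes(50))               # ~10**15: at the edge of classical memory
print(len(str(amplitudes(1000))))   # a 302-digit count, dwarfing the roughly
                                    # 10**80 particles in the observable universe
```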

And there’s an even more fundamental obstacle: Measuring a quantum system “collapses” it into a single classical state instead of a superposition of many states. (When Schrödinger’s cat is measured, it instantly becomes alive or dead.) Likewise, examining the inner workings of a quantum computer would reveal an ordinary collection of classical bits. A quantum system, said Umesh Vazirani of the University of California, Berkeley, is like a person who has an incredibly rich inner life, but who, if you ask him “What’s up?” will just shrug and say, “Nothing much.”

“How do you ever test a quantum system?” Vazirani asked. “Do you have to take it on faith? At first glance, it seems that the obvious answer is yes.”

It turns out, however, that there is a way to probe the rich inner life of a quantum computer using only classical measurements, if the computer has two separate “entangled” components.

In the April 25 issue of the journal Nature, Vazirani, together with Ben Reichardt of the University of Southern California in Los Angeles and Falk Unger of Knight Capital Group Inc. in Santa Clara, showed how to establish the precise inner state of such a computer using a favorite tactic from TV police shows: Interrogate the two components in separate rooms, so to speak, and check whether their stories are consistent. If the two halves of the computer answer a particular series of questions successfully, the interrogator can not only figure out their internal state and the measurements they are doing, but also issue instructions that will force the two halves to jointly carry out any quantum computation she wishes.

“It’s a huge achievement,” said Stefano Pironio, of the Université Libre de Bruxelles in Belgium.

The finding will not shed light on the D-Wave computer, which is constructed along very different principles, and it may be decades before a computer along the lines of the Nature paper — or indeed any fully quantum computer — can be built. But the result is an important proof of principle, said Thomas Vidick, who recently completed his post-doctoral research at the Massachusetts Institute of Technology. “It’s a big conceptual step.”

In the short term, the new interrogation approach offers a potential security boost to quantum cryptography, which has been marketed commercially for more than a decade. In principle, quantum cryptography offers “unconditional” security, guaranteed by the laws of physics. Actual quantum devices, however, are notoriously hard to control, and over the past decade, quantum cryptographic systems have repeatedly been hacked.

The interrogation technique creates a quantum cryptography protocol that, for the first time, would transmit a secret key while simultaneously proving that the quantum devices are preventing any potential information leak. Some version of this protocol could very well be implemented within the next five to 10 years, predicted Vidick and his former adviser at MIT, the theoretical computer scientist Scott Aaronson.

“It’s a new level of security that solves the shortcomings of traditional quantum cryptography,” Pironio said.

Spooky Action

In 1964, the Irish physicist John Stewart Bell came up with a test to try to establish, once and for all, that the bafflingly counterintuitive principles of quantum physics are truly inherent properties of the universe — that the decades-long effort of Albert Einstein and other physicists to develop a more intuitive physics could never bear fruit.

Einstein was deeply disturbed by the randomness at the core of quantum physics — God “is not playing at dice,” he famously wrote to the physicist Max Born in 1926.

In 1935, Einstein, together with his colleagues Boris Podolsky and Nathan Rosen, described a strange consequence of this randomness, now called the EPR paradox (short for Einstein, Podolsky, Rosen). According to the laws of quantum physics, it is possible for two particles to interact briefly in such a way that their states become “entangled” as “EPR pairs.” Even if the particles then travel many light years away from each other, one particle somehow instantly seems to “know” the outcome of a measurement on the other particle: When asked the same question, it will give the same answer, even though quantum physics says that the first particle chose its answer randomly. Since the theory of special relativity forbids information from traveling faster than the speed of light, how does the second particle know the answer?

To Einstein, these “spooky actions at a distance” implied that quantum physics was an incomplete theory. “Quantum mechanics is certainly imposing,” he wrote to Born. “But an inner voice tells me that it is not yet the real thing.”

Over the remaining decades of his life, Einstein searched for a way that the two particles could use classical physics to come up with their answers — hidden variables that could explain the behavior of the particles without a need for randomness or spooky actions.

But in 1964, Bell realized that the EPR paradox could be used to devise an experiment that determines whether quantum physics or a local hidden-variables theory correctly explains the real world. Adapted five years later into a format called the CHSH game (after the researchers John Clauser, Michael Horne, Abner Shimony and Richard Holt), the test asks a system to prove its quantum nature by performing a feat that is impossible using only classical physics.

The CHSH game is a coordination game, in which two collaborating players — Bonnie and Clyde, say — are questioned in separate interrogation rooms. Their joint goal is to give either identical answers or different answers, depending on what questions the “detective” asks them. Neither player knows what question the detective is asking the other player.

If Bonnie and Clyde can use only classical physics, then no matter how many “hidden variables” they share, it turns out that the best they can do is decide on a story before they get separated and then stick to it, no matter what the detective asks them, a strategy that will win the game 75 percent of the time. But if Bonnie and Clyde share an EPR pair of entangled particles — picked up in a bank heist, perhaps — then they can exploit the spooky action at a distance to better coordinate their answers and win the game about 85.4 percent of the time.
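Both numbers above can be checked directly. In the standard formulation of the CHSH game, the detective asks random questions x, y ∈ {0, 1} and the players win when their answers satisfy a XOR b = x AND y. The sketch below brute-forces every deterministic classical strategy (shared randomness is just a mixture of these, so it cannot do better) and compares the result with the quantum bound cos²(π/8):

```python
import itertools
import math

# CHSH game: questions x, y are uniform bits; players answer a = f(x), b = g(y);
# they win iff a XOR b equals x AND y.

def classical_best():
    """Best winning probability over all deterministic strategies f, g."""
    strategies = list(itertools.product([0, 1], repeat=2))  # (f(0), f(1))
    best = 0.0
    for f in strategies:
        for g in strategies:
            wins = sum((f[x] ^ g[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

# Tsirelson's bound: the best an entangled pair can achieve.
quantum_best = math.cos(math.pi / 8) ** 2

print(classical_best())         # 0.75
print(round(quantum_best, 3))   # 0.854
```

No classical strategy wins all four question pairs, which is where the 75 percent ceiling comes from; entanglement lifts it to cos²(π/8) ≈ 85.4 percent, exactly the figures quoted in the article.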

Bell’s test gave experimentalists a specific way to distinguish between quantum physics and any hidden-variables theory. Over the decades that followed, physicists, most notably Alain Aspect, currently at the École Polytechnique in Palaiseau, France, carried out this test repeatedly, in increasingly controlled settings. Almost every time, the outcome has been consistent with the predictions of quantum physics, not with hidden variables.

Aspect’s work “painted hidden variables into a corner,” Aaronson said. The experiments had a huge role, he said, in convincing people that the counterintuitive weirdness of quantum physics is here to stay.

If Einstein had known about the Bell test, Vazirani said, “he wouldn’t have wasted 30 years of his life looking for an alternative to quantum mechanics.” He simply would have convinced someone to do the experiment.

Read the whole article here.

Send to Kindle