Tag Archives: software

The Rembrandt Algorithm

Over the last few decades robots have been steadily replacing humans in industrial and manufacturing sectors. Increasingly, robots are appearing in a broader array of service sectors; they’re stocking shelves, cleaning hotels, buffing windows, tending bar, dispensing cash.

Nowadays you’re likely to be the recipient of news articles filtered, and in some cases written, by pieces of code and business algorithms. Indeed, many boilerplate financial reports are now “written” by “analysts” who reside not as flesh and blood but virtually, inside server farms. Just recently a collection of circuitry and software trounced a human champion at the strategic board game Go.

So, can computers progress from repetitive, mechanical and programmatic roles to more creative, free-wheeling vocations? Can computers become artists?

A group of data scientists, computer engineers, software developers and art historians set out to answer the question.

Jonathan Jones over at the Guardian has a few choice words on the result:

I’ve been away for a few days and missed the April Fool stories in Friday’s papers – until I spotted the one about a team of Dutch “data analysts, developers, engineers and art historians” creating a new painting using digital technology: a virtual Rembrandt painted by a Rembrandt app. Hilarious! But wait, this was too late to be an April Fool’s joke. This is a real thing that is actually happening.

What a horrible, tasteless, insensitive and soulless travesty of all that is creative in human nature. What a vile product of our strange time when the best brains dedicate themselves to the stupidest “challenges”, when technology is used for things it should never be used for and everybody feels obliged to applaud the heartless results because we so revere everything digital.

Hey, they’ve replaced the most poetic and searching portrait painter in history with a machine. When are we going to get Shakespeare’s plays and Bach’s St Matthew Passion rebooted by computers? I cannot wait for Love’s Labours Have Been Successfully Functionalised by William Shakesbot.

You cannot, I repeat, cannot, replicate the genius of Rembrandt van Rijn. His art is not a set of algorithms or stylistic tics that can be recreated by a human or mechanical imitator. He can only be faked – and a fake is a dead, dull thing with none of the life of the original. What these silly people have done is to invent a new way to mock art. Bravo to them! But the Dutch art historians and museums who appear to have lent their authority to such a venture are fools.

Rembrandt lived from 1606 to 1669. His art only has meaning as a historical record of his encounters with the people, beliefs and anguishes of his time. Its universality is the consequence of the depth and profundity with which it does so. Looking into the eyes of Rembrandt’s Self-Portrait at the Age of 63, I am looking at time itself: the time he has lived, and the time since he lived. A man who stared, hard, at himself in his 17th-century mirror now looks back at me, at you, his gaze so deep his mottled flesh is just the surface of what we see.

We glimpse his very soul. It’s not style and surface effects that make his paintings so great but the artist’s capacity to reveal his inner life and make us aware in turn of our own interiority – to experience an uncanny contact, soul to soul. Let’s call it the Rembrandt Shudder, that feeling I long for – and get – in front of every true Rembrandt masterpiece.

Is that a mystical claim? The implication of the digital Rembrandt is that we get too sentimental and moist-eyed about art, that great art is just a set of mannerisms that can be digitised. I disagree. If it’s mystical to see Rembrandt as a special and unique human being who created unrepeatable, inexhaustible masterpieces of perception and intuition then count me a mystic.

Read the entire story here.

Image: The Next Rembrandt (based on 168,263 Rembrandt painting fragments). Courtesy: Microsoft, Delft University of Technology, Mauritshuis (The Hague), Rembrandt House Museum (Amsterdam).

Human Bloatware

Most software engineers and IT people are familiar with the term “bloatware”. The word is usually applied to a software application that takes up so much disk space and/or memory that its functional benefits are greatly diminished or rendered useless. Operating systems such as Windows and OS X are often characterized as bloatware: each new version seems to demand ever more disk space (and memory) to accommodate an expanding array of new (often trivial) features of marginal benefit.

But it seems we humans did not invent such bloat with our technology. Rather, a new genetic analysis shows that humans (and other animals) are themselves built from biological bloatware, through a process that began when molecules of DNA first assembled the genes of the earliest living organisms.

From ars technica:

Eukaryotes like us are more complex than prokaryotes. We have cells with lots of internal structures, larger genomes with more genes, and our genes are more complex. Since there seems to be no apparent evolutionary advantage to this complexity—evolutionary advantage being defined as fitness, not as things like consciousness or sex—evolutionary biologists have spent much time and energy puzzling over how it came to be.

In 2010, Nick Lane and William Martin suggested that because they don’t have mitochondria, prokaryotes just can’t generate enough energy to maintain large genomes. Thus it was the acquisition of mitochondria and their ability to generate cellular energy that allowed eukaryotic genomes to expand. And with the expansion came the many different types of genes that render us so complex and diverse.

Michael Lynch and Georgi Marinov are now proposing a counter offer. They analyzed the bioenergetic costs of a gene and concluded that there is in fact no energetic barrier to genetic complexity. Rather, eukaryotes can afford bigger genomes simply because they have bigger cells.

First they looked at the lifetime energetic requirements of a cell, defined as the number of times that cell hydrolyzes ATP into ADP, a reaction that powers most cellular processes. This energy requirement rose linearly and smoothly with cell size from bacteria to eukaryotes with no break between them, suggesting that complexity alone, independently of cell volume, requires no more energy.

Then they calculated the cumulative cost of a gene—how much energy it takes to replicate it once per cell cycle, how much energy it takes to transcribe it into mRNA, and how much energy it takes to then translate that mRNA transcript into a functional protein. Genes may provide selective advantages, but those must be sufficient to overcome and justify these energetic costs.

At the levels of replication (copying the DNA) and transcription (making an RNA copy), eukaryotic genes are more costly than prokaryotic genes because they’re bigger and require more processing. But even though these costs are higher, they take up proportionally less of the total energy budget of the cell. That’s because bigger cells take more energy to operate in general (as we saw just above), while things like copying DNA happen only once per cell division. Bigger cells help here, too, as they divide less often.
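
The arithmetic behind Lynch and Marinov’s argument is easy to sketch. The Python fragment below is a toy illustration only, not their model; every ATP figure in it is a placeholder order of magnitude, invented simply to show how a gene that is more expensive in absolute terms can still claim a far smaller slice of a much larger cellular energy budget.

    # Illustrative back-of-the-envelope sketch of the argument above.
    # All numbers are placeholder orders of magnitude, NOT measured values.

    def gene_cost_atp(replication, transcription, translation):
        """Cumulative ATP cost of one gene per cell cycle: copy the DNA,
        transcribe it into mRNA, translate the transcripts into protein."""
        return replication + transcription + translation

    def relative_cost(gene_cost, lifetime_budget):
        """Fraction of the cell's lifetime ATP budget consumed by one gene."""
        return gene_cost / lifetime_budget

    # Hypothetical prokaryote: small gene, small cell, small energy budget.
    prokaryote_gene = gene_cost_atp(replication=1e5, transcription=1e6, translation=1e7)
    prokaryote_budget = 1e10   # lifetime ATP hydrolysis events (placeholder)

    # Hypothetical eukaryote: the gene is pricier in absolute terms (more
    # processing), but the much larger cell has a far larger budget.
    eukaryote_gene = gene_cost_atp(replication=1e6, transcription=1e7, translation=1e8)
    eukaryote_budget = 1e14    # budget scales roughly with cell volume (placeholder)

    print(f"prokaryote gene: {relative_cost(prokaryote_gene, prokaryote_budget):.1e} of budget")
    print(f"eukaryote gene:  {relative_cost(eukaryote_gene, eukaryote_budget):.1e} of budget")
    # With these toy numbers the eukaryotic gene costs ~10x more ATP outright,
    # yet consumes a ~1,000x smaller fraction of its cell's total budget.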

Read the entire article here.

A Positive Female Role Model

Our society does a better job than it once did, but still a poor one, of promoting positive female role models. Most of our — let’s face it — male-designed images of women fall into rather narrowly defined stereotypical categories: nurturing caregiver, stay-at-home soccer mom, matriarchal office admin, overly bossy middle manager, vacuous reality-TV spouse or scantily clad vixen.

But every now and then the media seems to discover another unsung female who made significant contributions in a male-dominated and male-overshadowed world. Take the case of computer scientist Margaret Hamilton — she developed on-board flight software for the Apollo space program while director of the Software Engineering Division of the MIT Instrumentation Laboratory. Aside from developing technology that put people on the Moon, she helped NASA understand the true power of software and the consequences of software-driven technology.

From Wired:

Margaret Hamilton wasn’t supposed to invent the modern concept of software and land men on the moon. It was 1960, not a time when women were encouraged to seek out high-powered technical work. Hamilton, a 24-year-old with an undergrad degree in mathematics, had gotten a job as a programmer at MIT, and the plan was for her to support her husband through his three-year stint at Harvard Law. After that, it would be her turn—she wanted a graduate degree in math.

But the Apollo space program came along. And Hamilton stayed in the lab to lead an epic feat of engineering that would help change the future of what was humanly—and digitally—possible.

As a working mother in the 1960s, Hamilton was unusual; but as a spaceship programmer, Hamilton was positively radical. Hamilton would bring her daughter Lauren by the lab on weekends and evenings. While 4-year-old Lauren slept on the floor of the office overlooking the Charles River, her mother programmed away, creating routines that would ultimately be added to the Apollo’s command module computer.

“People used to say to me, ‘How can you leave your daughter? How can you do this?’” Hamilton remembers. But she loved the arcane novelty of her job. She liked the camaraderie—the after-work drinks at the MIT faculty club; the geek jokes, like saying she was “going to branch left minus” around the hallway. Outsiders didn’t have a clue. But at the lab, she says, “I was one of the guys.”

Then, as now, “the guys” dominated tech and engineering. Like female coders in today’s diversity-challenged tech industry, Hamilton was an outlier. It might surprise today’s software makers that one of the founding fathers of their boys’ club was, in fact, a mother—and that should give them pause as they consider why the gender inequality of the Mad Men era persists to this day.

As Hamilton’s career got under way, the software world was on the verge of a giant leap, thanks to the Apollo program launched by John F. Kennedy in 1961. At the MIT Instrumentation Lab where Hamilton worked, she and her colleagues were inventing core ideas in computer programming as they wrote the code for the world’s first portable computer. She became an expert in systems programming and won important technical arguments. “When I first got into it, nobody knew what it was that we were doing. It was like the Wild West. There was no course in it. They didn’t teach it,” Hamilton says.

This was a decade before Microsoft and nearly 50 years before Marc Andreessen would observe that software is, in fact, “eating the world.” The world didn’t think much at all about software back in the early Apollo days. The original document laying out the engineering requirements of the Apollo mission didn’t even mention the word software, MIT aeronautics professor David Mindell writes in his book Digital Apollo. “Software was not included in the schedule, and it was not included in the budget.” Not at first, anyhow.

Read the entire story here.

Image: Margaret Hamilton during her time as lead Apollo flight software designer. Courtesy NASA. Public Domain.

Computer Generated Reality

[tube]nLtmEjqzg7M[/tube]

Computer games have come a very long way since the pioneering days of Pong and Pac-Man. Games are now so realistic that many are indistinguishable from the real-world characters and scenarios they emulate. It is a testament to the skill and ingenuity of hardware and software engineers and the creativity of developers who bring all the diverse underlying elements of a game together. Now, however, they have a match in the form of a computer system that is able to generate richly imagined and rendered worlds for use in the games themselves. It’s all done through algorithms.
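
To make the idea concrete, here is a deliberately tiny Python sketch of procedural generation, the broad technique behind games such as No Man’s Sky. This is not Hello Games’ engine, and the terrain rules are invented for illustration: a seed fed through a deterministic hash stands in for a noise function, so the same coordinates always yield the same terrain and a vast world can be derived on demand rather than stored. Real engines layer smoother gradient noise to get coherent landscapes, but the principle is the same.

    import hashlib

    def noise(seed: int, x: int, y: int) -> float:
        """Deterministic pseudo-random value in [0, 1) for a grid coordinate.
        A cryptographic hash is overkill, but keeps the toy dependency-free."""
        digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def terrain_tile(seed: int, x: int, y: int) -> str:
        """Map the noise value to a terrain type; same seed, same world."""
        h = noise(seed, x, y)
        if h < 0.3:
            return "~"   # water
        if h < 0.7:
            return "."   # plains
        return "^"       # mountains

    def render_region(seed: int, width: int = 40, height: int = 10) -> str:
        """Generate a small region on demand; nothing is stored ahead of time."""
        return "\n".join(
            "".join(terrain_tile(seed, x, y) for x in range(width))
            for y in range(height)
        )

    print(render_region(seed=42))   # revisiting seed 42 recreates the identical region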

From Technology Review:

Read the entire story here.

Video: No Man’s Sky. Courtesy of Hello Games.

Post-Siri Relationships

What are we to make of a world in which software-driven intelligent agents, artificial intelligence and language-processing capabilities combine to deliver a human-like experience? After all, what does it really mean to be human, and can a machine be sentient? We should all be pondering such weighty issues, since this emerging reality may well arrive within our lifetimes.

From Technology Review:

In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.

Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?

Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.

But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?

Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perceptions of novel reading have traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.

Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.

There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cower it might even do some good.

The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.

Read the entire story here.

Image: Siri icon. Courtesy of Cult of Mac / Apple.

The Outliner as Outlier

Outlining tools for composing text are intimately linked with the evolution of the personal-computer industry. Yet while outliners were among the earliest “apps” to appear, their true power as mechanisms for thinking new thoughts has yet to be fully realized.

From Technology Review:

In 1984, the personal-computer industry was still small enough to be captured, with reasonable fidelity, in a one-volume publication, the Whole Earth Software Catalog. It told the curious what was up: “On an unlovely flat artifact called a disk may be hidden the concentrated intelligence of thousands of hours of design.” And filed under “Organizing” was one review of particular note, describing a program called ThinkTank, created by a man named Dave Winer.

ThinkTank was outlining software that ran on a personal computer. There had been outline programs before (most famously, Doug Engelbart’s NLS or oNLine System, demonstrated in 1968 in “The Mother of All Demos,” which also included the first practical implementation of hypertext). But Winer’s software was outlining for the masses, on personal computers. The reviewers in the Whole Earth Software Catalog were enthusiastic: “I have subordinate ideas neatly indented under other ideas,” wrote one. Another enumerated the possibilities: “Starting to write. Writer’s block. Refining expositions or presentations. Keeping notes that you can use later. Brainstorming.” ThinkTank wasn’t just a tool for making outlines. It promised to change the way you thought.

It’s an elitist view of software, and maybe self-defeating. Perhaps most users, who just want to compose two-page documents and quick e-mails, don’t need the structure that Fargo imposes.

But I sympathize with Winer. I’m an outliner person. I’ve used many outliners over the decades. Right now, my favorite is the open-source Org-mode in the Emacs text editor. Learning an outliner’s commands is a pleasure, because the payoff—the ability to distill a bubbling cauldron of thought into a list, and then to expand that bulleted list into an essay, a report, anything—is worth it. An outliner treats a text as a set of Lego bricks to be pulled apart and reassembled until the most pleasing structure is found.

Fargo is an excellent outline editor, and it’s innovative because it’s a true Web application, running all its code inside the browser and storing versions of files in Dropbox. (Winer also recently released Concord, the outlining engine inside Fargo, under a free software license so that any developer can insert an outline into any Web application.) As you move words and ideas around, Fargo feels jaunty. Click on one of those lines in your outline and drag it, and arrows show you where else in the hierarchy that line might fit. They’re good arrows: fat, clear, obvious, informative.

For a while, bloggers using Fargo could publish posts with a free hosted service operated by Winer. But this fall the service broke, and Winer said he didn’t see how to fix it. Perhaps that’s just as well: an outline creates a certain unresolved tension with the dominant model for blogging. For Winer, a blog is a big outline of one’s days and intellectual development. But most blog publishing systems treat each post in isolation: a title, some text, maybe an image or video. Are bloggers ready to see a blog as one continuous document, a set of branches hanging off a common trunk? That’s the thing about outlines: they can become anything.
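
Under the hood, an outline is simply a tree of text nodes, and the “Lego brick” operations described above amount to re-parenting branches of that tree. The sketch below is a minimal Python illustration of that data structure, not Fargo’s Concord engine or Org-mode; the class and method names are invented for clarity.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        """One line of an outline, with any number of subordinate lines."""
        text: str
        children: List["Node"] = field(default_factory=list)

        def add(self, text: str) -> "Node":
            child = Node(text)
            self.children.append(child)
            return child

        def move(self, child: "Node", new_parent: "Node") -> None:
            """Re-parent a branch: the drag-and-drop at the heart of outlining."""
            self.children.remove(child)
            new_parent.children.append(child)

        def render(self, depth: int = 0) -> str:
            lines = ["  " * depth + "- " + self.text]
            for c in self.children:
                lines.append(c.render(depth + 1))
            return "\n".join(lines)

    # Distill a few thoughts, then restructure them without retyping anything.
    root = Node("Essay")
    intro = root.add("Introduction")
    body = root.add("Argument")
    stray = intro.add("A stray idea that really belongs in the argument")
    intro.move(stray, body)          # drag the whole branch to its new home
    print(root.render())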

Read the entire article here.

Living Organism as Software

For the first time scientists have built a computer software model of an entire organism from its molecular building blocks. This allows the model to predict previously unobserved cellular biological processes and behaviors. While the organism in question is a simple bacterium, this represents another huge advance in computational biology.

From the New York Times:

Scientists at Stanford University and the J. Craig Venter Institute have developed the first software simulation of an entire organism, a humble single-cell bacterium that lives in the human genital and respiratory tracts.

The scientists and other experts said the work was a giant step toward developing computerized laboratories that could carry out complete experiments without the need for traditional instruments.

For medical researchers and drug designers, cellular models will be able to supplant experiments during the early stages of screening for new compounds. And for molecular biologists, models that are of sufficient accuracy will yield new understanding of basic biological principles.

The simulation of the complete life cycle of the pathogen, Mycoplasma genitalium, was presented on Friday in the journal Cell. The scientists called it a “first draft” but added that the effort was the first time an entire organism had been modeled in such detail — in this case, all of its 525 genes.

“Where I think our work is different is that we explicitly include all of the genes and every known gene function,” the team’s leader, Markus W. Covert, an assistant professor of bioengineering at Stanford, wrote in an e-mail. “There’s no one else out there who has been able to include more than a handful of functions or more than, say, one-third of the genes.”

The simulation, which runs on a cluster of 128 computers, models the complete life span of the cell at the molecular level, charting the interactions of 28 categories of molecules — including DNA, RNA, proteins and small molecules known as metabolites that are generated by cell processes.

“The model presented by the authors is the first truly integrated effort to simulate the workings of a free-living microbe, and it should be commended for its audacity alone,” wrote the Columbia scientists Peter L. Freddolino and Saeed Tavazoie in a commentary that accompanied the article. “This is a tremendous task, involving the interpretation and integration of a massive amount of data.”

They called the simulation an important advance in the new field of computational biology, which has recently yielded such achievements as the creation of a synthetic life form — an entire bacterial genome created by a team led by the genome pioneer J. Craig Venter. The scientists used it to take over an existing cell.

For their computer simulation, the researchers had the advantage of extensive scientific literature on the bacterium. They were able to use data taken from more than 900 scientific papers to validate the accuracy of their software model.

Still, they said that the model of the simplest biological system was pushing the limits of their computers.

“Right now, running a simulation for a single cell to divide only one time takes around 10 hours and generates half a gigabyte of data,” Dr. Covert wrote. “I find this fact completely fascinating, because I don’t know that anyone has ever asked how much data a living thing truly holds. We often think of the DNA as the storage medium, but clearly there is more to it than that.”

In designing their model, the scientists chose an approach that parallels the design of modern software systems, known as object-oriented programming. Software designers organize their programs in modules, which communicate with one another by passing data and instructions back and forth.

Similarly, the simulated bacterium is a series of modules that mimic the different functions of the cell.

“The major modeling insight we had a few years ago was to break up the functionality of the cell into subgroups which we could model individually, each with its own mathematics, and then to integrate these sub-models together into a whole,” Dr. Covert said. “It turned out to be a very exciting idea.”
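
That modular design maps naturally onto object-oriented code. The sketch below is a schematic illustration only, not the Stanford group’s software: the sub-models, their toy update rules and the molecule counts are all invented, but it shows the pattern the article describes, in which independent modules, each with its own mathematics, are stepped in turn against a shared cell state.

    # Schematic sketch of the modular whole-cell approach described above.
    # The real model integrates 28 molecule categories across many sub-models.

    class CellState:
        """Shared pool of molecule counts that every sub-model reads and writes."""
        def __init__(self):
            self.counts = {"dna": 1, "mrna": 0, "protein": 0, "metabolite": 1000}

    class Transcription:
        """Sub-model with its own (toy) mathematics: DNA -> mRNA."""
        def step(self, state: CellState, dt: float) -> None:
            state.counts["mrna"] += int(5 * state.counts["dna"] * dt)

    class Translation:
        """Sub-model: mRNA plus metabolites -> protein."""
        def step(self, state: CellState, dt: float) -> None:
            made = min(state.counts["mrna"], state.counts["metabolite"])
            state.counts["protein"] += made
            state.counts["metabolite"] -= made

    class WholeCellModel:
        """Integrates independent sub-models against one shared state per time step."""
        def __init__(self, modules):
            self.state = CellState()
            self.modules = modules

        def run(self, steps: int, dt: float = 1.0) -> CellState:
            for _ in range(steps):
                for module in self.modules:
                    module.step(self.state, dt)
            return self.state

    final = WholeCellModel([Transcription(), Translation()]).run(steps=10)
    print(final.counts)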

Read the entire article after the jump.

Image: A Whole-Cell Computational Model Predicts Phenotype from Genotype. Courtesy of Cell / Elsevier Inc.

Software is Eating the World

By Marc Andreessen for the WSJ:

This week, Hewlett-Packard (where I am on the board) announced that it is exploring jettisoning its struggling PC business in favor of investing more heavily in software, where it sees better potential for growth. Meanwhile, Google plans to buy up the cellphone handset maker Motorola Mobility. Both moves surprised the tech world. But both moves are also in line with a trend I’ve observed, one that makes me optimistic about the future growth of the American and world economies, despite the recent turmoil in the stock market.

In short, software is eating the world.

More than 10 years after the peak of the 1990s dot-com bubble, a dozen or so new Internet companies like Facebook and Twitter are sparking controversy in Silicon Valley, due to their rapidly growing private market valuations, and even the occasional successful IPO. With scars from the heyday of Webvan and Pets.com still fresh in the investor psyche, people are asking, “Isn’t this just a dangerous new bubble?”

I, along with others, have been arguing the other side of the case. (I am co-founder and general partner of venture capital firm Andreessen-Horowitz, which has invested in Facebook, Groupon, Skype, Twitter, Zynga, and Foursquare, among others. I am also personally an investor in LinkedIn.) We believe that many of the prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses.

. . .

Why is this happening now?

Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.

. . .

Perhaps the single most dramatic example of this phenomenon of software eating a traditional business is the suicide of Borders and corresponding rise of Amazon. In 2001, Borders agreed to hand over its online business to Amazon under the theory that online book sales were non-strategic and unimportant.

Oops.

Today, the world’s largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.

More from theSource here.

Dependable Software by Design

From Scientific American:

Computers fly our airliners and run most of the world’s banking, communications, retail and manufacturing systems. Now powerful analysis tools will at last help software engineers ensure the reliability of their designs.

An architectural marvel when it opened 11 years ago, the new Denver International Airport’s high-tech jewel was to be its automated baggage handler. It would autonomously route luggage around 26 miles of conveyors for rapid, seamless delivery to planes and passengers. But software problems dogged the system, delaying the airport’s opening by 16 months and adding hundreds of millions of dollars in cost overruns. Despite years of tweaking, it never ran reliably. Last summer airport managers finally pulled the plug, reverting to traditional manually loaded baggage carts and tugs with human drivers. The mechanized handler’s designer, BAE Automated Systems, was liquidated, and United Airlines, its principal user, slipped into bankruptcy, in part because of the mess.

The high price of poor software design is paid daily by millions of frustrated users. Other notorious cases include costly debacles at the U.S. Internal Revenue Service (a failed $4-billion modernization effort in 1997, followed by an equally troubled $8-billion updating project); the Federal Bureau of Investigation (a $170-million virtual case-file management system was scrapped in 2005); and the Federal Aviation Administration (a lingering and still unsuccessful attempt to renovate its aging air-traffic control system).

More from theSource here.