Five Big Additions to Darwin’s Theory of Evolution

From Discover:

Charles Darwin would have turned 200 in 2009, the same year his book On the Origin of Species celebrated its 150th anniversary. Today, with the perspective of time, Darwin’s theory of evolution by natural selection looks as impressive as ever. In fact, the double anniversary year saw progress on fronts that Darwin could never have anticipated, bringing new insights into the origin of life—a topic that contributed to his panic attacks, heart palpitations, and, as he wrote, “for 25 years extreme spasmodic daily and nightly flatulence.” One can only dream of what riches await in the biology textbooks of 2159.

1. Evolution happens on the inside, too. The battle for survival is waged not just between the big dogs but within the dog itself, as individual genes jockey for prominence. From the moment of conception, a father’s genes favor offspring that are large, strong, and aggressive (the better to court the ladies), while the mother’s genes incline toward smaller progeny that will be less of a burden, making it easier for her to live on and procreate. Genome-versus-genome warfare produces kids that are somewhere in between.

Not all genetic conflicts are resolved so neatly. In flour beetles, babies that do not inherit the selfish genetic element known as Medea succumb to a toxin while developing in the egg. Some unborn mice suffer the same fate. Such spiteful genes have become widespread not by helping flour beetles and mice survive but by eliminating individuals that do not carry the killer’s code. “There are two ways of winning a race,” says Caltech biologist Bruce Hay. “Either you can be better than everyone else, or you can whack the other guys on the legs.”

Hay is trying to harness the power of such genetic cheaters, enlisting them in the fight against malaria. He created a Medea-like DNA element that spreads through experimental fruit flies like wildfire, permeating an entire population within 10 generations. This year he and his team have been working on encoding immune-system boosters into those Medea genes, which could then be inserted into male mosquitoes. If it works, the modified mosquitoes should quickly replace competitors who do not carry the new genes; the enhanced immune systems of the new mosquitoes, in turn, would resist the spread of the malaria parasite.
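For readers who want to see how such an element can sweep through a population, here is a minimal deterministic sketch in Python. This is not Hay's actual model or construct; it simply encodes the standard Medea assumption, namely that offspring of a Medea-carrying mother die in the egg unless they inherit at least one copy of the element, and the release fraction below is an arbitrary illustration:

```python
import itertools

# Minimal, infinite-population sketch of Medea spread under random mating.
# Genotypes "MM" and "Mm" carry the Medea element; "++" does not.
TRANSMIT = {"MM": 1.0, "Mm": 0.5, "++": 0.0}  # chance a parent passes on M

def next_generation(freqs):
    """One generation of random mating with Medea selection:
    ++ offspring of a Medea-carrying mother die in the egg."""
    out = {"MM": 0.0, "Mm": 0.0, "++": 0.0}
    for mom, dad in itertools.product(freqs, repeat=2):
        pair = freqs[mom] * freqs[dad]
        pm, pd = TRANSMIT[mom], TRANSMIT[dad]
        out["MM"] += pair * pm * pd
        out["Mm"] += pair * (pm * (1 - pd) + (1 - pm) * pd)
        if mom == "++":  # only non-carrier mothers yield surviving ++ young
            out["++"] += pair * (1 - pm) * (1 - pd)
    total = sum(out.values())  # renormalize after the egg-stage deaths
    return {g: f / total for g, f in out.items()}

# Hypothetical release of Medea heterozygotes into a wild population:
freqs = {"MM": 0.0, "Mm": 0.25, "++": 0.75}
for gen in range(1, 11):
    freqs = next_generation(freqs)
    print(f"generation {gen:2d}: Medea carriers = {freqs['MM'] + freqs['Mm']:.2f}")
```

Because non-carriers pay the death toll whenever their mother carries the element, the carrier fraction printed above rises every generation even though Medea confers no survival advantage of its own: Hay's "whack the other guys on the legs" strategy in arithmetic form.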

2. Identity is not written just in the genes. According to modern evolutionary theory, there is no way that what we eat, do, and encounter can override the basic rules of inheritance: What is in the genes stays in the genes. That single rule secured Darwin’s place in the science books. But now biologists are finding that nature can break those rules. This year Eva Jablonka, a theoretical biologist at Tel Aviv University, published a compendium of more than 100 hereditary changes that are not carried in the DNA sequence. This “epigenetic” inheritance spans bacteria, fungi, plants, and animals.

More from theSource here.


The meaning of network culture

From Eurozine:

Whereas in postmodernism, being was left in a free-floating fabric of emotional intensities, in contemporary culture the existence of the self is affirmed through the network. Kazys Varnelis discusses what this means for the democratic public sphere.

Not all at once but rather slowly, in fits and starts, a new societal condition is emerging: network culture. As digital computing matures and meshes with increasingly mobile networking technology, society is also changing, undergoing a cultural shift. Just as modernism and postmodernism served as crucial heuristic devices in their day, studying network culture as a historical phenomenon allows us to better understand broader sociocultural trends and structures, to give duration and temporality to our own, ahistorical time.

If more subtle than the much-talked-about economic collapse of fall 2008, this shift in society is real and far more radical, underscoring even the logic of that collapse. During the space of a decade, the network has become the dominant cultural logic. Our economy, public sphere, culture, even our subjectivity are mutating rapidly and show little evidence of slowing the pace of their evolution. The global economic crisis only demonstrated both our faith in the network and its dangers. Over the last two decades, markets and regulators had increasingly placed their faith in the efficient market hypothesis, which posited that investors were fundamentally rational and, fed information by highly efficient data networks, would always make the right decision. The failure came when key parts of the network – the investors, regulators, and the finance industry – failed to think through the consequences of their actions, placing their trust in each other instead.

The collapse of the markets seems to have been sudden, but it was actually a long-term process, beginning with bad decisions made long before the collapse. Most of the changes in network culture are subtle and only appear radical in retrospect. Take our relationship with the press. One morning you noted with interest that your daily newspaper had established a website. Another day you decided to stop buying the paper and just read it online. Then you started reading it on a mobile Internet platform, or began listening to a podcast of your favourite column while riding a train. Perhaps you dispensed with official news entirely, preferring a collection of blogs and amateur content. Eventually the paper may well be distributed only on the net, directly incorporating user comments and feedback. Or take the way cell phones have changed our lives. When you first bought a mobile phone, were you aware of how profoundly it would alter your life? Soon enough you found yourself abandoning the tedium of scheduling dinner plans with friends in advance, instead coordinating with them en route to a particular neighbourhood. And if your friends or family moved away to university or a new career, you found that through a social networking site like Facebook and the ever-present telematic links of the mobile phone, you did not lose touch with them.

If it is difficult to recognize the radical impact of the contemporary, this is in part due to the hype in the 1990s about the near-future impact of computing on society. The failure of that near future to arrive immediately, owing to the limits of the technology of the day, made us jaded. The dot-com crash only reinforced that sense. But slowly technology advanced and society changed, finding new uses for it and in turn spurring more change. Network culture crept up on us. Its impact on us today is radical and undeniable.

More from theSource here.


The Madness of Crowds and an Internet Delusion

From The New York Times:

[Photo caption: RETHINKING THE WEB. Jaron Lanier, pictured in 1999, was an early proponent of the Internet’s open culture. His new book examines the downsides.]

In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.

Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.

He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.

“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”

I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.

So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.

He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.

It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.

Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.

In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.

But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).

Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.

“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.

“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”

Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.

“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”

To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)

Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?

More from theSource here.


Your Digital Privacy? It May Already Be an Illusion

From Discover:

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
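To make concrete why that structure guts the search space, here is a minimal illustrative sketch. The specific area numbers and the group and serial windows below are hypothetical stand-ins for values an attacker would infer from Death Master File records of people born at the same time and place as the target; nothing here reproduces Acquisti’s actual algorithm:

```python
from itertools import product

def candidate_ssns(area_numbers, group_window, serial_window):
    """Enumerate every SSN consistent with the inferred windows."""
    for area, group, serial in product(area_numbers, group_window, serial_window):
        yield f"{area:03d}-{group:02d}-{serial:04d}"

# Hypothetical inference for a target with a known birth date and state:
area_numbers = [221, 222]          # the few area numbers used by that state
group_window = range(45, 47)       # group numbers being assigned around then
serial_window = range(1200, 1500)  # serials issued in that interval

candidates = list(candidate_ssns(area_numbers, group_window, serial_window))
print(len(candidates))  # ~1,200 guesses instead of a billion
```

Brute-forcing all one billion nine-digit numbers is hopeless, but a candidate list this small is exactly what makes the roughly 1,000-try attack described in the study plausible.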

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

More from theSource here.


For Expatriates in China, Creative Lives of Plenty

From The New York Times:

There was a chill in the morning air in 2005 when dozens of artists from China, Europe and North America emerged from their red-brick studios in Beijing to find the police blocking the gates to Suojiacun, their compound on the city’s outskirts. They were told that the village of about 100 illegally built structures was to be demolished, and were given two hours to pack.

By noon bulldozers were smashing the walls of several studios, revealing ripped-apart canvases and half-glazed clay vases lying in the rubble. But then the machines ceased their pulverizing, and the police dispersed, leaving most of the buildings unscathed. It was not the first time the authorities had threatened to evict these artists, nor would it be the last. But it was still frightening.

“I had invested everything in my studio,” said Alessandro Rolandi, a sculptor and performance artist originally from Italy who had removed his belongings before the destruction commenced. “I was really worried about my work being destroyed.”

He eventually left Suojiacun, but he has remained in China. Like the artists’ colony, the country offers challenges, but expatriates here say that the rewards outweigh the hardships. Mr. Rolandi is one of many artists (five are profiled here) who have left the United States and Europe for China, seeking respite from tiny apartments, an insular art world and nagging doubts about whether it’s best to forgo art for a reliable office job. They have discovered a land of vast creative possibility, where scale is virtually limitless and costs are comically low. They can rent airy studios, hire assistants, experiment in costly mediums like bronze and fiberglass.

“Today China has become one of the most important places to create and invent,” said Jérôme Sans, director of the Ullens Center for Contemporary Art in Beijing. “A lot of Western artists are coming here to live the dynamism and make especially crazy work they could never do anywhere else in the world.”

Rania Ho

A major challenge for foreigners, no matter how fluent or familiar with life here, is that even if they look like locals, it is virtually impossible to feel truly of this culture. For seven years Rania Ho, the daughter of Chinese immigrants, born and raised in San Francisco, has lived in Beijing, where she runs a small gallery in a hutong, or alley, near one of the city’s main temples. “Being Chinese-American makes it easier to be an observer of what’s really happening because I’m camouflaged,” she said. “But it doesn’t mean I understand any more what people are thinking.”

Still, Ms. Ho, 40, revels in her role as outsider in a society that she says is blindly enthusiastic about remaking itself. She creates and exhibits work by both foreign and Chinese artists that often plays with China’s fetishization of mechanized modernity.

Because she lives so close to military parades and futuristic architecture, she said that her own pieces — like a water fountain gushing on the roof of her gallery and a cardboard table that levitates a Ping-Pong ball — chuckle at the “hypnotic properties of unceasing labor.” She said they are futile responses to the absurd experiences she shares with her neighbors, who are constantly seeing their world transform before their eyes. “Being in China forces one to reassess everything,” she said, “which is at times difficult and exhausting, but for a majority of the time it’s all very amusing and enlightening.”

More from theSource here.


Are Black Holes the Architects of the Universe?

From Discover:

Black holes are finally winning some respect. After long regarding them as agents of destruction or dismissing them as mere by-products of galaxies and stars, scientists are recalibrating their thinking. Now it seems that black holes debuted in a constructive role and appeared unexpectedly soon after the Big Bang. “Several years ago, nobody imagined that there were such monsters in the early universe,” says Penn State astrophysicist Yuexing Li. “Now we see that black holes were essential in creating the universe’s modern structure.”

Black holes, tortured regions of space where the pull of gravity is so intense that not even light can escape, did not always have such a high profile. They were once thought to be very rare; in fact, Albert Einstein did not believe they existed at all. Over the past several decades, though, astronomers have realized that black holes are not so unusual after all: Supermassive ones, millions or billions of times as hefty as the sun, seem to reside at the center of most, if not all, galaxies. Still, many people were shocked in 2003 when a detailed sky survey found that giant black holes were already common nearly 13 billion years ago, when the universe was less than a billion years old. Since then, researchers have been trying to figure out where these primordial holes came from and how they influenced the cosmic events that followed.

In August, researchers at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University ran a supercomputer simulation of the early universe and provided a tantalizing glimpse into the lives of the first black holes. The story began 200 million years after the Big Bang, when the universe’s first stars formed. These beasts, about 100 times the mass of the sun, were so large and energetic that they burned all their hydrogen fuel in just a few million years. With no more energy from hydrogen fusion to counteract the enormous inward pull of their gravity, the stars collapsed until all of their mass was compressed into a point of infinite density.
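For a sense of scale, a quick back-of-the-envelope calculation (ours, not the article’s): the event horizon of the black hole left behind by such a 100-solar-mass star is startlingly small, a sphere only a few hundred kilometers across.

```python
# Schwarzschild radius r_s = 2GM/c^2 for a 100-solar-mass remnant.
# Illustrative arithmetic only; constants rounded to four figures.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

r_s = 2 * G * (100 * M_SUN) / c**2
print(f"Schwarzschild radius: {r_s / 1000:.0f} km")  # about 295 km
```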

The first-generation black holes were puny compared with the monsters we see at the centers of galaxies today. They grew only slowly at first—adding just 1 percent to their bulk in the next 200 million years—because the hyperactive stars that spawned them had blasted away most of the nearby gas that they could have devoured. Nevertheless, those modest-size black holes left a big mark by performing a form of stellar birth control: Radiation from the trickle of material falling into the holes heated surrounding clouds of gas to about 5,000 degrees Fahrenheit, so hot that the gas could no longer easily coalesce. “You couldn’t really form stars in that stuff,” says Marcelo Alvarez, lead author of the Kavli study.

More from theSource here.

Image courtesy of KIPAC/SLAC/M. Alvarez, T. Abel, and J. Wise.
