The Chess Master and the Computer

[div class=attrib]By Garry Kasparov, From the New York Review of Books:[end-div]

In 1985, in Hamburg, I played against thirty-two different chess computers at the same time in what is known as a simultaneous exhibition. I walked from one machine to the next, making my moves over a period of more than five hours. The four leading chess computer manufacturers had sent their top models, including eight named after me from the electronics firm Saitek.

It illustrates the state of computer chess at the time that it didn’t come as much of a surprise when I achieved a perfect 32–0 score, winning every game, although there was an uncomfortable moment. At one point I realized that I was drifting into trouble in a game against one of the “Kasparov” brand models. If this machine scored a win or even a draw, people would be quick to say that I had thrown the game to get PR for the company, so I had to intensify my efforts. Eventually I found a way to trick the machine with a sacrifice it should have refused. From the human perspective, or at least from my perspective, those were the good old days of man vs. machine chess.

Eleven years later I narrowly defeated the supercomputer Deep Blue in a match. Then, in 1997, IBM redoubled its efforts—and doubled Deep Blue’s processing power—and I lost the rematch in an event that made headlines around the world. The result was met with astonishment and grief by those who took it as a symbol of mankind’s submission before the almighty computer. (“The Brain’s Last Stand” read the Newsweek headline.) Others shrugged their shoulders, surprised that humans could still compete at all against the enormous calculating power that, by 1997, sat on just about every desk in the first world.

It was the specialists—the chess players and the programmers and the artificial intelligence enthusiasts—who had a more nuanced appreciation of the result. Grandmasters had already begun to see the implications of the existence of machines that could play—if only, at this point, in a select few types of board configurations—with godlike perfection. The computer chess people were delighted with the conquest of one of the earliest and holiest grails of computer science, in many cases matching the mainstream media’s hyperbole. The 2003 book Deep Blue by Monty Newborn was blurbed as follows: “a rare, pivotal watershed beyond all other triumphs: Orville Wright’s first flight, NASA’s landing on the moon….”

[div class=attrib]More from theSource here.[end-div]

The Man Who Builds Brains

[div class=attrib]From Discover:[end-div]

On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.

Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.

Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (a teraflop being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot at this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
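The article’s round numbers are easy to sanity-check. Here is a minimal back-of-envelope sketch in Python using only the figures quoted above plus two outside assumptions (a neocortical column of roughly 10,000 neurons and a human brain of roughly 86 billion neurons, neither figure given in the article); it shows both how large the machine is and how far even full capacity remains from a whole human brain:

```python
# Back-of-envelope check of the Blue Gene figures quoted above. This is a
# sketch, not project data: the ~10,000-neuron column size and the
# ~86-billion-neuron human brain are commonly cited outside estimates,
# not numbers from the article.

racks = 4
peak_teraflops_per_rack = 56            # "a peak speed of 56 teraflops"
processors = 16_000                     # "more than 16,000 processors"
neurons_per_processor = 1_000           # "approximately one thousand virtual neurons"

rat_column_neurons = 10_000             # assumed size of one neocortical column
human_brain_neurons = 86_000_000_000    # assumed human-brain neuron count

total_peak_teraflops = racks * peak_teraflops_per_rack
simulated_neurons = processors * neurons_per_processor

print(f"Peak speed: {total_peak_teraflops} teraflops")
print(f"Virtual neurons at full capacity: {simulated_neurons:,}")
print(f"Neocortical columns that would fit: {simulated_neurons // rat_column_neurons:,}")
print(f"Fraction of a human brain: {simulated_neurons / human_brain_neurons:.5%}")
```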

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Michelangelo’s Labor Pains

[div class=attrib]By Robert Pinsky for Slate:[end-div]

After a certain point, reverence can become automatic. Our admiration for great works of art can get a bit reflexive, then synthetic, then can harden into a pious coating that repels real attention. Michelangelo’s painted ceiling of the Sistine Chapel in the Vatican might be an example of such automatic reverence. Sometimes, a fresh look or a hosing-down is helpful—if only by restoring the meaning of “work” to the phrase “work of art.”

Michelangelo (1475–1564) himself provides a refreshing dose of reality. A gifted poet as well as a sculptor and painter, he wrote energetically about despair, detailing with relish the unpleasant side of his work on the famous ceiling. The poem, in Italian, is an extended (or “tailed”) sonnet, with a coda of six lines appended to the standard 14. The translation I like best is by the American poet Gail Mazur. Her lines are musical but informal, with a brio conveying that the Italian artist knew well enough that he and his work were great—but that he enjoyed vigorously lamenting his discomfort, pain, and inadequacy to the task. No wonder his artistic ideas are bizarre and no good, says Michelangelo: They must come through the medium of his body, that “crooked blowpipe” (Mazur’s version of “cerbottana torta”). Great artist, great depression, great imaginative expression of it. This is a vibrant, comic, but heartfelt account of the artist’s work:

Michelangelo: To Giovanni da Pistoia
“When the Author Was Painting the Vault of the Sistine Chapel” —1509

I’ve already grown a goiter from this torture,
hunched up here like a cat in Lombardy
(or anywhere else where the stagnant water’s poison).
My stomach’s squashed under my chin, my beard’s
pointing at heaven, my brain’s crushed in a casket,
my breast twists like a harpy’s. My brush,
above me all the time, dribbles paint
so my face makes a fine floor for droppings!

My haunches are grinding into my guts,
my poor ass strains to work as a counterweight,
every gesture I make is blind and aimless.
My skin hangs loose below me, my spine’s
all knotted from folding over itself.
I’m bent taut as a Syrian bow.

Because I’m stuck like this, my thoughts
are crazy, perfidious tripe:
anyone shoots badly through a crooked blowpipe.

My painting is dead.
Defend it for me, Giovanni, protect my honor.
I am not in the right place—I am not a painter.

[div class=attrib]More from theSource here.[end-div]

The Graphene Revolution

[div class=attrib]From Discover:[end-div]

Flexible, see-through, one-atom-thick sheets of carbon could be a key component for futuristic solar cells, batteries, and roll-up LCD screens—and perhaps even microchips.

Under a transmission electron microscope it looks deceptively simple: a grid of hexagons resembling a volleyball net or a section of chicken wire. But graphene, a form of carbon that can be produced in sheets only one atom thick, seems poised to shake up the world of electronics. Within five years, it could begin powering faster and better transistors, computer chips, and LCD screens, according to researchers who are smitten with this new supermaterial.

Graphene’s standout trait is its uncanny facility with electrons, which can travel much more quickly through it than they can through silicon. As a result, graphene-based computer chips could be thousands of times as efficient as existing ones. “What limits conductivity in a normal material is that electrons will scatter,” says Michael Strano, a chemical engineer at MIT. “But with graphene the electrons can travel very long distances without scattering. It’s like the thinnest, most stable electrical conducting framework you can think of.”

In 2009 another MIT researcher, Tomas Palacios, devised a graphene chip that doubles the frequency of an electromagnetic signal. Using multiple chips could make the outgoing signal many times higher in frequency than the original. Because frequency determines the clock speed of the chip, boosting it enables faster transfer of data through the chip. Graphene’s extreme thinness means that it is also practically transparent, making it ideal for transmitting signals in devices containing solar cells or LEDs.
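To see why chaining such chips pays off, consider this toy Python sketch of an ideal doubler cascade (the 1 GHz input signal and the stage counts are hypothetical, chosen only for illustration; this is not a model of Palacios’s device): each stage multiplies the frequency by two, so n stages raise it by a factor of 2^n.

```python
# Toy illustration of cascaded frequency doublers. The input frequency and
# number of stages are hypothetical; real devices also lose signal power
# at each stage, which this idealized sketch ignores.

def cascade(frequency_hz: float, stages: int) -> float:
    """Output frequency after `stages` ideal frequency-doubling stages."""
    for _ in range(stages):
        frequency_hz *= 2  # each doubler multiplies the signal frequency by 2
    return frequency_hz

f_in = 1e9  # assume a 1 GHz input signal
for n in range(1, 5):
    print(f"{n} stage(s): {cascade(f_in, n) / 1e9:.0f} GHz")
```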

[div class=attrib]More from theSource here.[end-div]

J. Craig Venter

[div class=attrib]From Discover:[end-div]

J. Craig Venter keeps riding the cusp of each new wave in biology. When researchers started analyzing genes, he launched the Institute for Genomic Research (TIGR) in 1992, decoding the genome of a bacterium for the first time in 1995. When the government announced its plan to map the human genome, he claimed he would do it first—and then he delivered results in 2001, years ahead of schedule. Armed with a deep understanding of how DNA works, Venter is now moving on to an even more extraordinary project. Starting with the stunning genetic diversity that exists in the wild, he is aiming to build custom-designed organisms that could produce clean energy, help feed the planet, and treat cancer. Venter has already transferred the genome of one species into the cell body of another. This past year he reached a major milestone, using the machinery of yeast to manufacture a genome from scratch. When he combines the steps—perhaps next year—he will have crafted a truly synthetic organism. Senior editor Pamela Weintraub discussed the implications of these efforts with Venter in DISCOVER’s editorial offices.

Here you are talking about constructing life, but you started out in deconstruction: charting the human genome, piece by piece.
Actually, I started out smaller, studying the adrenaline receptor. I was looking at one protein and its single gene for a decade. Then, in the late 1980s, I was drawn to the idea of the whole genome, and I stopped everything and switched my lab over. I had the first automatic DNA sequencer. It was the ultimate in reductionist biology—getting down to the genetic code, interpreting what it meant, including all 6 billion letters of my own genome. Only by understanding things at that level can we turn around and go the other way.

In your latest work you are trying to create “synthetic life.” What is that?
It’s a catchy phrase that people have begun using to replace “molecular biology.” The term has been overused, so we have defined a separate field that we call synthetic genomics—the digitization of biology using only DNA and RNA. You start by sequencing genomes and putting their digital code into a computer. Then you use the computer to take that information and design new life-forms.

How do you build a life-form? Throw in some mitochondria here and some ribosomes there, surround it all with a membrane—and voilà?
We started down that road, but now we are coming from the other end. We’re starting with the accomplishments of three and a half billion years of evolution by using what we call the software of life: DNA. Our software builds its own hardware. By writing new software, we can come up with totally new species. It would be as if once you put new software in your computer, somehow a whole new machine would materialize. We’re software engineers rather than construction workers.

[div class=attrib]More from theSource here.[end-div]

Five Big Additions to Darwin’s Theory of Evolution

[div class=attrib]From Discover:[end-div]

Charles Darwin would have turned 200 in 2009, the same year his book On the Origin of Species celebrated its 150th anniversary. Today, with the perspective of time, Darwin’s theory of evolution by natural selection looks as impressive as ever. In fact, the double anniversary year saw progress on fronts that Darwin could never have anticipated, bringing new insights into the origin of life—a topic that contributed to his panic attacks, heart palpitations, and, as he wrote, “for 25 years extreme spasmodic daily and nightly flatulence.” One can only dream of what riches await in the biology textbooks of 2159.

1. Evolution happens on the inside, too. The battle for survival is waged not just between the big dogs but within the dog itself, as individual genes jockey for prominence. From the moment of conception, a father’s genes favor offspring that are large, strong, and aggressive (the better to court the ladies), while the mother’s genes incline toward smaller progeny that will be less of a burden, making it easier for her to live on and procreate. Genome-versus-genome warfare produces kids that are somewhere in between.

Not all genetic conflicts are resolved so neatly. In flour beetles, babies that do not inherit the selfish genetic element known as Medea succumb to a toxin while developing in the egg. Some unborn mice suffer the same fate. Such spiteful genes have become widespread not by helping flour beetles and mice survive but by eliminating individuals that do not carry the killer’s code. “There are two ways of winning a race,” says Caltech biologist Bruce Hay. “Either you can be better than everyone else, or you can whack the other guys on the legs.”

Hay is trying to harness the power of such genetic cheaters, enlisting them in the fight against malaria. He created a Medea-like DNA element that spreads through experimental fruit flies like wildfire, permeating an entire population within 10 generations. This year he and his team have been working on encoding immune-system boosters into those Medea genes, which could then be inserted into male mosquitoes. If it works, the modified mosquitoes should quickly replace competitors who do not carry the new genes; the enhanced immune systems of the new mosquitoes, in turn, would resist the spread of the malaria parasite.
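The drive dynamics Hay describes can be sketched in a few lines of Python. The model below is a deliberately simple stochastic illustration, not a reconstruction of the fruit-fly experiments: population size, starting carrier frequency, and the absence of any fitness cost are all assumptions. The only rule taken from the passage is the Medea mechanism itself: embryos of a Medea-carrying mother survive only if they inherit at least one copy of the element.

```python
import random

def next_generation(population, size):
    """One round of random mating with Medea-style maternal-effect killing.

    Each genotype is a pair of alleles: 'M' (the Medea element) or '+'.
    Offspring of a Medea-carrying mother survive only if they inherit at
    least one 'M' copy (from either parent) to counter the maternal toxin.
    """
    offspring = []
    while len(offspring) < size:
        mother, father = random.choice(population), random.choice(population)
        child = (random.choice(mother), random.choice(father))
        if 'M' in mother and 'M' not in child:
            continue  # embryo poisoned; it inherited no rescuing copy
        offspring.append(child)
    return offspring

# Hypothetical release: Medea homozygotes make up a quarter of 1,000 flies.
population = [('M', 'M')] * 250 + [('+', '+')] * 750
for generation in range(11):
    carriers = sum('M' in genotype for genotype in population) / len(population)
    print(f"generation {generation:2d}: {carriers:.0%} of flies carry Medea")
    population = next_generation(population, len(population))
```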

2. Identity is not written just in the genes. According to modern evolutionary theory, there is no way that what we eat, do, and encounter can override the basic rules of inheritance: What is in the genes stays in the genes. That single rule secured Darwin’s place in the science books. But now biologists are finding that nature can break those rules. This year Eva Jablonka, a theoretical biologist at Tel Aviv University, published a compendium of more than 100 hereditary changes that are not carried in the DNA sequence. This “epigenetic” inheritance spans bacteria, fungi, plants, and animals.

[div class=attrib]More from theSource here.[end-div]

The meaning of network culture

[div class=attrib]From Eurozine:[end-div]

Whereas in postmodernism, being was left in a free-floating fabric of emotional intensities, in contemporary culture the existence of the self is affirmed through the network. Kazys Varnelis discusses what this means for the democratic public sphere.

Not all at once but rather slowly, in fits and starts, a new societal condition is emerging: network culture. As digital computing matures and meshes with increasingly mobile networking technology, society is also changing, undergoing a cultural shift. Just as modernism and postmodernism served as crucial heuristic devices in their day, studying network culture as a historical phenomenon allows us to better understand broader sociocultural trends and structures, to give duration and temporality to our own, ahistorical time.

If more subtle than the much-talked-about economic collapse of fall 2008, this shift in society is real and far more radical, underscoring even the logic of that collapse. During the space of a decade, the network has become the dominant cultural logic. Our economy, public sphere, culture, even our subjectivity are mutating rapidly and show little evidence of slowing down the pace of their evolution. The global economic crisis only demonstrated our faith in the network and its dangers. Over the last two decades, markets and regulators had increasingly placed their faith in the efficient market hypothesis, which posited that investors were fundamentally rational and, fed information by highly efficient data networks, would always make the right decision. The failure came when key parts of the network – the investors, regulators, and the finance industry – failed to think through the consequences of their actions and placed their trust in each other.

The collapse of the markets seems to have been sudden, but it was actually a long-term process, beginning with bad decisions made long before the collapse. Most of the changes in network culture are subtle and only appear radical in retrospect. Take our relationship with the press. One morning you noted with interest that your daily newspaper had established a website. Another day you decided to stop buying the paper and just read it online. Then you started reading it on a mobile Internet platform, or began listening to a podcast of your favourite column while riding a train. Perhaps you dispensed with official news entirely, preferring a collection of blogs and amateur content. Eventually the paper may well be distributed only on the net, directly incorporating user comments and feedback. Or take the way cell phones have changed our lives. When you first bought a mobile phone, were you aware of how profoundly it would alter your life? Soon, however, you found yourself abandoning the tedium of scheduling dinner plans with friends in advance, instead coordinating with them en route to a particular neighbourhood. Or if your friends or family moved away to university or a new career, you found that through a social networking site like Facebook and through the ever-present telematic links of the mobile phone, you did not lose touch with them.

If it is difficult to realize the radical impact of the contemporary, this is in part due to the hype about the near-future impact of computing on society in the 1990s. The failure of the near-future to be realized immediately, due to the limits of the technology of the day, made us jaded. The dot.com crash only reinforced that sense. But slowly, technology advanced and society changed, finding new uses for it, in turn spurring more change. Network culture crept up on us. Its impact on us today is radical and undeniable.

[div class=attrib]More from theSource here.[end-div]

The Madness of Crowds and an Internet Delusion

[div class=attrib]From The New York Times:[end-div]

[Photo caption] Jaron Lanier, pictured here in 1999, was an early proponent of the Internet’s open culture. His new book examines the downsides.

In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.

Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.

He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.

“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”

I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.

So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.

He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.

It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.

Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.

In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.

But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).

Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.

“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.

“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”

Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.

“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”

To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)

Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?

[div class=attrib]More from theSource here.[end-div]

Your Digital Privacy? It May Already Be an Illusion

[div class=attrib]From Discover:[end-div]

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
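A rough sketch of the arithmetic behind that attack shows why the structure matters. The ranges below are entirely hypothetical (Acquisti inferred the real ones statistically from the Death Master File); the point is only that once birthplace pins down a handful of area numbers and birth date narrows the group and serial ranges, the search space collapses from a billion nine-digit numbers to a few thousand candidates:

```python
# Illustrative sketch of how the SSN's structure shrinks a brute-force search.
# All ranges below are hypothetical stand-ins; the real mapping from birth
# data to likely numbers has to be estimated from records such as the
# Death Master File.

from itertools import product

def candidate_ssns(area_numbers, group_numbers, serial_numbers):
    """Enumerate candidate SSNs for an estimated area/group/serial window."""
    for area, group, serial in product(area_numbers, group_numbers, serial_numbers):
        yield f"{area:03d}-{group:02d}-{serial:04d}"

# Hypothetical example: a birthplace served by two area numbers, and a birth
# date that narrows the group number to 3 values and the serial number to a
# band of 500 consecutive values.
area_numbers = [221, 222]
group_numbers = range(10, 13)
serial_numbers = range(4000, 4500)

candidates = list(candidate_ssns(area_numbers, group_numbers, serial_numbers))
print(f"{len(candidates):,} candidates, versus 1,000,000,000 possible 9-digit numbers")
```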

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

[div class=attrib]More from theSource here.[end-div]

For Expatriates in China, Creative Lives of Plenty

[div class=attrib]From The New York Times:[end-div]

There was a chill in the morning air in 2005 when dozens of artists from China, Europe and North America emerged from their red-brick studios here to find the police blocking the gates to Suojiacun, their compound on the city’s outskirts. They were told that the village of about 100 illegally built structures was to be demolished, and were given two hours to pack.

By noon bulldozers were smashing the walls of several studios, revealing ripped-apart canvases and half-glazed clay vases lying in the rubble. But then the machines ceased their pulverizing, and the police dispersed, leaving most of the buildings unscathed. It was not the first time the authorities had threatened to evict these artists, nor would it be the last. But it was still frightening.

“I had invested everything in my studio,” said Alessandro Rolandi, a sculptor and performance artist originally from Italy who had removed his belongings before the destruction commenced. “I was really worried about my work being destroyed.”

He eventually left Suojiacun, but he has remained in China. Like the artists’ colony, the country offers challenges, but expatriates here say that the rewards outweigh the hardships. Mr. Rolandi is one of many artists (five are profiled here) who have left the United States and Europe for China, seeking respite from tiny apartments, an insular art world and nagging doubts about whether it’s best to forgo art for a reliable office job. They have discovered a land of vast creative possibility, where scale is virtually limitless and costs are comically low. They can rent airy studios, hire assistants, experiment in costly mediums like bronze and fiberglass.

“Today China has become one of the most important places to create and invent,” said Jérôme Sans, director of the Ullens Center for Contemporary Art in Beijing. “A lot of Western artists are coming here to live the dynamism and make especially crazy work they could never do anywhere else in the world.”

Rania Ho

A major challenge for foreigners, no matter how fluent or familiar with life here, is that even if they look like locals, it is virtually impossible to feel truly of this culture. For seven years Rania Ho, the daughter of Chinese immigrants born and raised in San Francisco, has lived in Beijing, where she runs a small gallery in a hutong, or alley, near one of the city’s main temples. “Being Chinese-American makes it easier to be an observer of what’s really happening because I’m camouflaged,” she said. “But it doesn’t mean I understand any more what people are thinking.”

Still, Ms. Ho, 40, revels in her role as outsider in a society that she says is blindly enthusiastic about remaking itself. She creates and exhibits work by both foreign and Chinese artists that often plays with China’s fetishization of mechanized modernity.

Because she lives so close to military parades and futuristic architecture, she said that her own pieces — like a water fountain gushing on the roof of her gallery and a cardboard table that levitates a Ping-Pong ball — chuckle at the “hypnotic properties of unceasing labor.” She said they are futile responses to the absurd experiences she shares with her neighbors, who are constantly seeing their world transform before their eyes. “Being in China forces one to reassess everything,” she said, “which is at times difficult and exhausting, but for a majority of the time it’s all very amusing and enlightening.”

[div class=attrib]More from theSource here.[end-div]

Are Black Holes the Architects of the Universe?

[div class=attrib]From Discover:[end-div]

Black holes are finally winning some respect. After long regarding them as agents of destruction or dismissing them as mere by-products of galaxies and stars, scientists are recalibrating their thinking. Now it seems that black holes debuted in a constructive role and appeared unexpectedly soon after the Big Bang. “Several years ago, nobody imagined that there were such monsters in the early universe,” says Penn State astrophysicist Yuexing Li. “Now we see that black holes were essential in creating the universe’s modern structure.”

Black holes, tortured regions of space where the pull of gravity is so intense that not even light can escape, did not always have such a high profile. They were once thought to be very rare; in fact, Albert Einstein did not believe they existed at all. Over the past several decades, though, astronomers have realized that black holes are not so unusual after all: Supermassive ones, millions or billions of times as hefty as the sun, seem to reside at the center of most, if not all, galaxies. Still, many people were shocked in 2003 when a detailed sky survey found that giant black holes were already common nearly 13 billion years ago, when the universe was less than a billion years old. Since then, researchers have been trying to figure out where these primordial holes came from and how they influenced the cosmic events that followed.

In August, researchers at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University ran a supercomputer simulation of the early universe and provided a tantalizing glimpse into the lives of the first black holes. The story began 200 million years after the Big Bang, when the universe’s first stars formed. These beasts, about 100 times the mass of the sun, were so large and energetic that they burned all their hydrogen fuel in just a few million years. With no more energy from hydrogen fusion to counteract the enormous inward pull of their gravity, the stars collapsed until all of their mass was compressed into a point of infinite density.

The first-generation black holes were puny compared with the monsters we see at the centers of galaxies today. They grew only slowly at first—adding just 1 percent to their bulk in the next 200 million years—because the hyperactive stars that spawned them had blasted away most of the nearby gas that they could have devoured. Nevertheless, those modest-size black holes left a big mark by performing a form of stellar birth control: Radiation from the trickle of material falling into the holes heated surrounding clouds of gas to about 5,000 degrees Fahrenheit, so hot that the gas could no longer easily coalesce. “You couldn’t really form stars in that stuff,” says Marcelo Alvarez, lead author of the Kavli study.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of KIPAC/SLAC/M. Alvarez, T. Abel, and J. Wise.[end-div]