Category Archives: Technica

Equation: How GPS Bends Time

[div class=attrib]From Wired:[end-div]

Einstein knew what he was talking about with that relativity stuff. For proof, just look at your GPS. The global positioning system relies on 24 satellites that transmit time-stamped information on where they are. Your GPS unit registers the exact time at which it receives that information from each satellite and then calculates how long it took for the individual signals to arrive. By multiplying the elapsed time by the speed of light, it can figure out how far it is from each satellite, compare those distances, and calculate its own position.
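That distance-and-compare step is simple enough to sketch in code. The toy example below is only illustrative: the satellite positions and receiver location are invented, the receiver's own clock error (which a real GPS unit must also solve for) is ignored, and scipy is used to intersect the measured ranges.

```python
# Toy illustration of the position fix described above: convert each signal's
# travel time into a distance (time x speed of light), then find the point
# whose distances to the satellites match. Satellite positions and the
# receiver location are invented, and the receiver clock error that a real
# GPS unit must also solve for is ignored.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (metres, Earth-centred coordinates)
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])

true_pos = np.array([1_112e3, -4_843e3, 3_983e3])            # pretend we don't know this
travel_times = np.linalg.norm(sats - true_pos, axis=1) / C   # the "measured" delays

ranges = C * travel_times  # distance to each satellite = elapsed time x speed of light

def residuals(pos):
    # How far off each satellite distance is for a candidate receiver position
    return np.linalg.norm(sats - pos, axis=1) - ranges

estimate = least_squares(residuals, x0=np.zeros(3)).x
print(np.round(estimate - true_pos, 3))  # error vs. the true position, ~[0, 0, 0]
```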

For accuracy to within a few meters, the satellites’ atomic clocks have to be extremely precise—plus or minus 10 nanoseconds. Here’s where things get weird: Those amazingly accurate clocks never seem to run quite right. One second as measured on the satellite never matches a second as measured on Earth—just as Einstein predicted.

According to Einstein’s special theory of relativity, a clock that’s traveling fast will appear to run slowly from the perspective of someone standing still. Satellites move at about 9,000 mph—enough to make their onboard clocks slow down by 8 microseconds per day from the perspective of a GPS gadget and totally screw up the location data. To counter this effect, the GPS system adjusts the time it gets from the satellites by using the equation here. (Don’t even get us started on the impact of general relativity.)
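The article's own equation graphic isn't reproduced in this excerpt; the standard special-relativity time-dilation formula it refers to, with the quoted ~9,000 mph plugged in, works out like this (a back-of-envelope sketch, not Wired's figure):

```latex
% Back-of-envelope: special-relativistic slowdown of a satellite clock moving
% at roughly 9,000 mph (about 4 km/s), the speed quoted in the article.
\Delta t_{\text{ground}}
  = \frac{\Delta t_{\text{satellite}}}{\sqrt{1 - v^2/c^2}}
  \approx \Delta t_{\text{satellite}}\left(1 + \frac{v^2}{2c^2}\right)

\frac{v^2}{2c^2}
  \approx \frac{(4.0\times10^{3}\ \mathrm{m/s})^2}{2\,(3.0\times10^{8}\ \mathrm{m/s})^2}
  \approx 9\times10^{-11}
\quad\Rightarrow\quad
9\times10^{-11} \times 86{,}400\ \mathrm{s/day} \approx 8\ \mu\mathrm{s/day}
```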

[div class=attrib]More from theSource here.[end-div]

Hello Internet; Goodbye Memory

Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.

Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet, it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one cited below from Columbia University, explore this in detail.

[div class=attrib]From Technology Review:[end-div]

A study says that we rely on external tools, including the Internet, to augment our memory.

The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.

Researchers and writers have debated over how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.

Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.

One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.

[div class=attrib]More from theSource here.[end-div]

3D Printing – A demonstration

Three-dimensional “printing” has been around for a few years now, but the technology continues to advance by leaps and bounds. It has already progressed to such an extent that some 3D printers can now “print” objects with moving parts, and in color as well. And, we all thought those cool replicator machines in Star Trek were the stuff of science fiction.

[tube]LQfYm4ZVcVI[/tube]

The Allure of Steampunk Videotelephony and the Telephonoscope

Video telephony as imagined in 1910

A concept for the videophone surfaced just a couple of years after the telephone was patented in the United States. The telephonoscope, as it was called, first appeared in Victorian journals and early French science fiction in 1878.

In 1891 Alexander Graham Bell recorded his concept of an electrical radiophone, discussing “…the possibility of seeing by electricity”. He later went on to predict that “…the day would come when the man at the telephone would be able to see the distant person to whom he was speaking”.

The world’s first videophone entered service in 1934, in Germany. The service was offered in select post offices linking several major German cities, and provided bi-directional voice and image on 8-inch-square displays. In the U.S., AT&T launched the Picturephone in the mid-1960s. However, the costly equipment, high cost per call, and inconveniently located public video-telephone booths ensured that the service would never gain public acceptance. Similar to the U.S. experience, major telephone companies in France, Japan and Sweden had limited success with video-telephony during the 1970s-80s.

Major improvements in video technology, telecommunications deregulation and increases in bandwidth during the 1980s-90s brought the price point down considerably. However, significant usage remained mostly within the realm of major corporations due to the still considerable investment in equipment and the cost of bandwidth.

Fast forward to the 21st century. Skype and other IP (internet protocol) based services have made videochat commonplace and affordable, and in most cases free. It now seems that videochat has become almost ubiquitous. Recent moves into this space by tech heavyweights like Apple with Facetime, Microsoft with its acquisition of Skype, Google with its Google Plus social network video calling component, and Facebook’s new video calling service will in all likelihood add further momentum.

Of course, while videochat is an effective communication tool it does have a cost in terms of personal and social consequences over its non-video cousin, the telephone. Next time you videochat rather than make a telephone call you will surely be paying greater attention to your bad hair and poor grooming, your crumpled clothes, uncoordinated pajamas or lack thereof, the unwanted visitors in the background shot, and the not so subtle back-lighting that focuses attention on the clutter in your office or bedroom. Doesn’t it make you hark back to the days of the simple telephone? Either that or perhaps you are drawn to the more alluring and elegant steampunk form of videochat as imagined by the Victorians, in the image above.

The Homogeneous Culture of “Like”

[div class=attrib]Echo and Narcissus, John William Waterhouse [Public domain], via Wikimedia Commons[end-div]

About 12 months ago I committed suicide — internet suicide that is. I closed my personal Facebook account after recognizing several important issues. First, it was a colossal waste of time; time that I could and should be using more productively. Second, it became apparent that following, belonging and agreeing with others through the trivial “wall” status-in-a-can postings and the now pervasive “like” button was nothing other than a declaration of mindless group-think and a curious way to maintain social standing. So, my choice was clear: become part of a group that had similar interests, like-minded activities, same politics, parallel beliefs, common likes and dislikes; or revert to my own weirdly independent path. I chose the latter, rejecting the road towards a homogeneity of ideas and a points-based system of instant self-esteem.

This facet of the Facebook ecosystem has an effect similar to the filter bubble that I described in a previous post, The Technology of Personalization and the Bubble Syndrome. In both cases my explicit choices on Facebook, such as which friends I follow or which content I “like”, and my implicit browsing behaviors that increasingly filter what I see and don’t see, cause a narrowing of the world of ideas to which I am exposed. This cannot be good.

So, although I may incur the wrath of author Neil Strauss for including an excerpt of his recent column below, I cannot help but “like” what he has to say. More importantly, he does a much more eloquent job of describing how this culture commoditizes social relationships and, dare I say it, lowers the barrier to entry for narcissists to grow and fine-tune their skills.

[div class=attrib]By Neil Strauss for the Wall Street Journal:[end-div]

If you happen to be reading this article online, you’ll notice that right above it, there is a button labeled “like.” Please stop reading and click on “like” right now.

Thank you. I feel much better. It’s good to be liked.

Don’t forget to comment on, tweet, blog about and StumbleUpon this article. And be sure to “+1” it if you’re on the newly launched Google+ social network. In fact, if you don’t want to read the rest of this article, at least stay on the page for a few minutes before clicking elsewhere. That way, it will appear to the site analytics as if you’ve read the whole thing.

Once, there was something called a point of view. And, after much strife and conflict, it eventually became a commonly held idea in some parts of the world that people were entitled to their own points of view.

Unfortunately, this idea is becoming an anachronism. When the Internet first came into public use, it was hailed as a liberation from conformity, a floating world ruled by passion, creativity, innovation and freedom of information. When it was hijacked first by advertising and then by commerce, it seemed like it had been fully co-opted and brought into line with human greed and ambition.

But there was one other element of human nature that the Internet still needed to conquer: the need to belong. The “like” button began on the website FriendFeed in 2007, appeared on Facebook in 2009, began spreading everywhere from YouTube to Amazon to most major news sites last year, and has now been officially embraced by Google as the agreeable, supportive and more status-conscious “+1.” As a result, we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.

Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.

Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.

“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.

[div class=attrib]More from theSource here.[end-div]

Solar power from space: Beam it down, Scotty

[div class=attrib]From the Economist:[end-div]

THE idea of collecting solar energy in space and beaming it to Earth has been around for at least 70 years. In “Reason”, a short story by Isaac Asimov that was published in 1941, a space station transmits energy collected from the sun to various planets using microwave beams.

The advantage of intercepting sunlight in space, instead of letting it find its own way through the atmosphere, is that so much of it otherwise gets absorbed by the air. By converting it to the right frequency first (one of the so-called windows in the atmosphere, in which little energy is absorbed) a space-based collector could, enthusiasts claim, yield on average five times as much power as one located on the ground.

The disadvantage is cost. Launching and maintaining suitable satellites would be ludicrously expensive. But perhaps not, if the satellites were small and the customers specialised. Military expeditions, rescuers in disaster zones, remote desalination plants and scientific-research bases might be willing to pay for such power from the sky. And a research group based at the University of Surrey, in England, hopes that in a few years it will be possible to offer it to them.

This summer, Stephen Sweeney and his colleagues will test a laser that would do the job which Asimov assigned to microwaves. Certainly, microwaves would work: a test carried out in 2008 transmitted useful amounts of microwave energy between two Hawaiian islands 148km (92 miles) apart, so penetrating the 100km of the atmosphere would be a doddle. But microwaves spread out as they propagate. A collector on Earth that was picking up power from a geostationary satellite orbiting at an altitude of 35,800km would need to be spread over hundreds of square metres. Using a laser means the collector need be only tens of square metres in area.
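The underlying reason a laser allows a far smaller collector is diffraction: beam spread scales with wavelength divided by the transmitter aperture. A rough sketch of that scaling (my illustration, not the article's numbers):

```latex
% Diffraction sets the beam spread: a transmitter aperture of diameter D
% sending wavelength \lambda produces, at range L, a footprint of roughly
\theta \approx 1.22\,\frac{\lambda}{D},
\qquad
d_{\text{footprint}} \approx 2.44\,\frac{\lambda L}{D}
% so, for the same transmitter size, centimetre-scale microwaves need a ground
% collector orders of magnitude wider than micron-scale laser light.
```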

[div class=attrib]More from theSource here.[end-div]

Life of a Facebook Photo

Before photo-sharing, photo blogs, photo friending, “PhotoShopping” and countless other photo-enabled apps and services, there was compose, point, focus, click, develop, print. The process seemed a lot simpler way back then. Perhaps this was due to the lack of options for both input and output. Input? Simple. Go buy a real camera. Output? Simple. Slides or prints. The end.

The options for input and output have exploded by orders of magnitude over the last couple of decades. Nowadays, even my toaster can take pictures and I can output them on my digital refrigerator, sans, of course, real photographs with that limp, bendable magnetic backing. The entire end-to-end process of taking a photograph and sharing it with someone else is now replete with so many choices and options that it seems to have become inordinately more complex.

So, to help all prehistoric photographers like me, here’s an interesting process flow for your digital images in the age of Facebook.

[div class=attrib]From Pixable:[end-div]

The Technology of Personalization and the Bubble Syndrome

A decade ago, in another place and era, during my days as director of technology research for a Fortune X company, I tinkered with a cool array of then-new personalization tools. The aim was simple: use some of these emerging technologies to deliver a more customized and personalized user experience for our customers and suppliers. What could be wrong with that? Surely, custom tools and more personalized data could do nothing but improve knowledge and enhance business relationships for all concerned. Our customers would benefit from seeing only the information they asked for, our suppliers would benefit from better analysis and filtered feedback, and we, the corporation in the middle, would benefit from making everyone in our supply chain more efficient and happy. Advertisers would be even happier since with more focused data they would be able to deliver messages that were increasingly more precise and relevant based on personal context.

Fast forward to the present. Customization, or filtering, technologies have indeed helped optimize the supply chain; personalization tools and services have made customer experiences more focused and efficient. In today’s online world it’s so much easier to find, navigate and transact when the supplier at the other end of our browser knows who we are, where we live, what we earn, what we like and dislike, and so on. After all, if a supplier knows my needs, requirements, options, status and even personality, I’m much more likely to only receive information, services or products that fall within the bounds that define “me” in the supplier’s database.

And therein lies the crux of the issue: I have come to realize that personalization offers a false promise despite the seemingly obvious benefits to all concerned. The benefits are outweighed by two key issues: erosion of privacy and the bubble syndrome.

Privacy as Commodity

I’ll not dwell too long on the issue of privacy since in this article I’m much more concerned with the personalization bubble. However, as we have increasingly seen in recent times, privacy in all its forms is becoming a scarce, and tradable, commodity. Much of our data is now in the hands of a plethora of suppliers, intermediaries and their partners, ready for continued monetization. Our locations are constantly pinged and polled; our internet browsers note our web surfing habits and preferences; our purchases generate genius suggestions and recommendations to further whet our consumerist desires. Now in digital form, this data is open to legitimate sharing and highly vulnerable to discovery by hackers, phishers, spammers and anyone else with technical or financial resources.

Bubble Syndrome

Personalization technologies filter content at various levels, minutely and broadly, both overtly and covertly. For instance, I may explicitly signal my preferences for certain types of clothing deals at my favorite online retailer by answering a quick retail survey or checking a handful of specific preference buttons on a website.

However, my previous online purchases, browsing behaviors, time spent on various online pages, visits to other online retailers and a range of other flags deliver a range of implicit or “covert” information to the same retailer (and others). This helps the retailer filter, customize and personalize what I get to see even before I have made a conscious decision to limit my searches and exposure to information. Clearly, this is not too concerning when my retailer knows I’m male and usually purchase size 32 inch jeans; after all, why would I need to see deals or product information for women’s shoes?

But, this type of covert filtering becomes more worrisome when the data being filtered and personalized is information, news, opinion and comment in all its glorious diversity. Sophisticated media organizations, information portals, aggregators and news services can deliver personalized and filtered information based on your overt and covert personal preferences as well. So, if you subscribe only to a certain type of information based on topic, interest, political persuasion or other dimension your personalized news services will continue to deliver mostly or only this type of information. And, as I have already described, your online behaviors will deliver additional filtering parameters to these news and information providers so that they may further personalize and narrow your consumption of information.
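A toy sketch of that feedback loop, with invented topics and a deliberately crude scoring rule, shows how implicit signals alone can narrow a feed over time:

```python
# Toy sketch of implicit ("covert") filtering: each click nudges an interest
# profile, and the feed is then ranked by overlap with that profile, so topics
# the reader never clicks gradually stop surfacing. Topics and items are
# invented for illustration.
from collections import Counter

profile = Counter()  # implicit interest profile, accumulated from behaviour

def record_click(topics):
    """Covert signal: reading an item strengthens its topics in the profile."""
    profile.update(topics)

def personalised_feed(items, top_n=3):
    """Rank items by how well their topics match the accumulated profile."""
    def score(item):
        return sum(profile[t] for t in item["topics"])
    return sorted(items, key=score, reverse=True)[:top_n]

items = [
    {"title": "Candidate A rally report", "topics": ["politics-a"]},
    {"title": "Candidate B policy paper", "topics": ["politics-b"]},
    {"title": "Local football results",   "topics": ["sport"]},
    {"title": "Jeans sale",               "topics": ["retail"]},
]

for _ in range(5):                 # the reader keeps clicking one kind of story
    record_click(["politics-a"])

print([i["title"] for i in personalised_feed(items)])
# 'Candidate A rally report' now always tops the feed, purely because of past clicks.
```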

Increasingly, we will not be aware of what we don’t know. Whether explicitly or not, our use of personalization technologies will have the ability to build a filter, a bubble, around us, which will permit only information that we wish to see or that which our online suppliers wish us to see. We’ll not even get exposed to peripheral and tangential information — that information which lies outside the bubble. This filtering of the rich oceans of diverse information to a mono-dimensional stream will have profound implications for our social and cultural fabric.

I assume that our increasingly crowded planet will require ever more creativity, insight, tolerance and empathy as we tackle humanity’s many social and political challenges in the future. And, these very seeds of creativity, insight, tolerance and empathy are those that are most at risk from the personalization filter. How are we to be more tolerant of others’ opinions if we are never exposed to them in the first place? How are we to gain insight when disparate knowledge is no longer available for serendipitous discovery? How are we to become more creative if we are less exposed to ideas outside of our normal sphere, our bubble?

For some ideas on how to punch a few holes in your online filter bubble read Eli Pariser’s practical guide, here.

Filter Bubble image courtesy of TechCrunch.

Lemonade without the Lemons: New Search Engine Looks for Uplifting News

[div class=attrib]From Scientific American:[end-div]

Good news, if you haven’t noticed, has always been a rare commodity. We all have our ways of coping, but the media’s pessimistic proclivity presented a serious problem for Jurriaan Kamp, editor of the San Francisco-based Ode magazine—a must-read for “intelligent optimists”—who was in dire need of an editorial pick-me-up, last year in particular. His bright idea: an algorithm that can sense the tone of daily news and separate the uplifting stories from the Debbie Downers.

Talk about a ripe moment: A Pew survey last month found the number of Americans hearing “mostly bad” news about the economy and other issues is at its highest since the downturn in 2008. That is unlikely to change anytime soon: global obesity rates are climbing, the Middle East is unstable, and campaign 2012 vitriol is only just beginning to spew in the U.S. The problem is not trivial. A handful of studies, including one published in the Clinical Psychology Review in 2010, have linked positive thinking to better health. Another from the Journal of Economic Psychology the year prior found upbeat people can even make more money.

Kamp, realizing he could be a purveyor of optimism in an untapped market, partnered with Federated Media Publishing, a San Francisco–based company that leads the field in search semantics. The aim was to create an automated system for Ode to sort and aggregate news from the world’s 60 largest news sources based on solutions, not problems. The system, released last week in public beta testing online and to be formally introduced in the next few months, runs thousands of directives to find a story’s context. “It’s kind of like playing 20 questions, building an ontology to find either optimism or pessimism,” says Tim Musgrove, the chief scientist who designed the broader system, which has been dubbed a “slant engine”. Think of the word “hydrogen” paired with “energy” rather than “bomb.”

Web semantics developers in recent years have trained computers to classify news topics based on intuitive keywords and recognizable names. But the slant engine dives deeper into algorithmic programming. It starts by classifying a story’s topic as either a world problem (disease and poverty, for example) or a social good (health care and education). Then it looks for revealing phrases. “Efforts against” in a story, referring to a world problem, would signal something good. “Setbacks to” a social good, likely bad. Thousands of questions later every story is eventually assigned a score between 0 and 1—above 0.95 fast-tracks the story to Ode’s Web interface, called OdeWire. Below that, a score higher than 0.6 is reviewed by a human. The system is trained to only collect themes that are “meaningfully optimistic,” meaning it throws away flash-in-the-pan stories about things like sports or celebrities.
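The routing rules quoted above are easy to sketch; the optimism score itself, the genuinely hard part, is simply treated as an input here (a rough illustration, not Ode's actual code):

```python
# Sketch of the routing thresholds quoted above: scores above 0.95 are
# fast-tracked, scores above 0.6 go to a human reviewer, the rest are dropped.
# The optimism score itself is the hard part and is just an input here.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    optimism_score: float  # assumed to lie between 0 and 1, as in the article

def route(story: Story) -> str:
    if story.optimism_score > 0.95:
        return "fast-track to OdeWire"
    if story.optimism_score > 0.6:
        return "queue for human review"
    return "discard"

print(route(Story("Malaria deaths fall sharply", 0.97)))   # fast-track to OdeWire
print(route(Story("Mixed results for new policy", 0.72)))  # queue for human review
print(route(Story("Celebrity wins award", 0.30)))          # discard
```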

[div class=attrib]More from theSource here.[end-div]

Self-Published Author Sells a Million E-Books on Amazon

[div class=attrib]From ReadWriteWeb:[end-div]

Since the Kindle’s launch, Amazon has heralded each new arrival into what it calls the “Kindle Million Club,” the group of authors who have sold over 1 million Kindle e-books. There have been seven authors in this club up ’til now – some of the big names in publishing: Stieg Larsson, James Patterson, and Nora Roberts for example.

But the admission today of the eighth member of this club is really quite extraordinary. Not because John Locke is a 60-year-old former insurance salesman from Kentucky with no writing or publishing background. But because John Locke has accomplished the feat of selling one million e-books as a completely self-published author.

Rather than being published by a major publishing house – and all the perks that have long been associated with that (marketing, book tours, prime shelf space in retail stores) – Locke has sold 1,010,370 Kindle books (as of yesterday) having used Kindle Direct Publishing to get his e-books into the Amazon store. No major publisher. No major marketing.

Locke writes primarily crime and adventure stories, including Vegas Moon, Wish List, and the New York Times E-Book Bestseller, Saving Rachel. Most of the e-books sell for $.99, and he says he makes 35 cents on every sale. That sort of per book profit is something that authors would never get from a traditional book deal.
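Taken at face value, the quoted figures are consistent with a 35 per cent royalty on a $0.99 list price and imply a healthy total for the author; a back-of-envelope check rather than anything reported in the article:

```latex
% Back-of-envelope using the figures quoted above:
\$0.35 \approx 0.35 \times \$0.99
\qquad\text{and}\qquad
1{,}010{,}370\ \text{copies} \times \$0.35/\text{copy} \approx \$354{,}000
```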

[div class=attrib]More from theSource here.[end-div]

How Free Is Your Will?

[div class=attrib]From Scientific American:[end-div]

Think about the last time you got bored with the TV channel you were watching and decided to change it with the remote control. Or a time you grabbed a magazine off a newsstand, or raised a hand to hail a taxi. As we go about our daily lives, we constantly make choices to act in certain ways. We all believe we exercise free will in such actions – we decide what to do and when to do it. Free will, however, becomes more complicated when you try to think how it can arise from brain activity.

Do we control our neurons or do they control us? If everything we do starts in the brain, what kind of neural activity would reflect free choice? And how would you feel about your free will if we were to tell you that neuroscientists can look at your brain activity, and tell that you are about to make a decision to move – and that they could do this a whole second and a half before you yourself became aware of your own choice?

Scientists from UCLA and Harvard — Itzhak Fried, Roy Mukamel and Gabriel Kreiman — have taken an audacious step in the search for free will, reported in a new article in the journal Neuron. They used a powerful tool – intracranial recording – to find neurons in the human brain whose activity predicts decisions to make a movement, challenging conventional notions of free will.

Fried is one of a handful of neurosurgeons in the world who perform the delicate procedure of inserting electrodes into a living human brain, and using them to record activity from individual neurons. He does this to pin down the source of debilitating seizures in the brains of epileptic patients. Once he locates the part of the patients’ brains that sparks off the seizures, he can remove it, pulling the plug on their neuronal electrical storms.

[div class=attrib]More from theSource here.[end-div]

Search Engine History

It’s hard to believe that internet-based search engines have been in the mainstream consciousness for around twenty years now. It seems not too long ago that we were all playing Pong and searching index cards at the local library. Infographic Labs puts the last twenty years of search in summary for us below.

[div class=attrib]From Infographic Labs:[end-div]

Search Engine History

Infographic: Search Engine History by Infographiclabs

Commonplaces of technology critique

[div class=attrib]From Eurozine:[end-div]

What is it good for? A passing fad! It makes you stupid! Today’s technology critique is tomorrow’s embarrassing error of judgement, as Katrin Passig shows. Her suggestion: one should try to avoid repeating the most commonplace critiques, particularly in public.

In a 1969 study on colour designations in different cultures, anthropologist Brent Berlin and linguist Paul Kay described how the observed progression always followed the same sequence. Cultures with only two colour concepts distinguish between “light” and “dark” shades. If the culture recognizes three colours, the third will be red. If the language differentiates further, first come green and/or yellow, then blue. All languages with six colour designations distinguish between black, white, red, green, blue and yellow. The next level is brown, then, in varying sequences, orange, pink, purple and/or grey, with light blue appearing last of all.

The reaction to technical innovations, both in the media and in our private lives, follows similarly preconceived paths. The first, entirely knee-jerk dismissal is the “What the hell is it good for?” (Argument No.1) with which IBM engineer Robert Lloyd greeted the microprocessor in 1968. Even practices and techniques that only constitute a variation on the familiar – the electric typewriter as successor to the mechanical version, for instance – are met with distaste in the cultural criticism sector. Inventions like the telephone or the Internet, which open up a whole new world, have it even tougher. If cultural critics had existed at the dawn of life itself, they would have written grumpily in their magazines: “Life – what is it good for? Things were just fine before.”

Because the new throws into confusion processes that people have got used to, it is often perceived not only as useless but as a downright nuisance. The student Friedrich August Köhler wrote in 1790 after a journey on foot from Tübingen to Ulm: “[Signposts] had been put up everywhere following an edict of the local prince, but their existence proved short-lived, since they tended to be destroyed by a boisterous rabble in most places. This was most often the case in areas where the country folk live scattered about on farms, and when going on business to the next city or village more often than not come home inebriated and, knowing the way as they do, consider signposts unnecessary.”

The Parisians seem to have greeted the introduction of street lighting in 1667 under Louis XIV with a similar lack of enthusiasm. Dietmar Kammerer conjectured in the Süddeutsche Zeitung that the regular destruction of these street lamps represented a protest on the part of the citizens against the loss of their private sphere, since it seemed clear to them that here was “a measure introduced by the king to bring the streets under his control”. A simpler explanation would be that citizens tend in the main to react aggressively to unsupervised innovations in their midst. Recently, Deutsche Bahn explained that the initial vandalism of their “bikes for hire” had died down, now that locals had “grown accustomed to the sight of the bicycles”.

When it turns out that the novelty is not as useless as initially assumed, there follows the brief interregnum of Argument No.2: “Who wants it anyway?” “That’s an amazing invention,” gushed US President Rutherford B. Hayes of the telephone, “but who would ever want to use one of them?” And the film studio boss Harry M. Warner is quoted as asking in 1927, “Who the hell wants to hear actors talk?”.

[div class=attrib]More from theSource here.[end-div]

Social networking: Failure to connect

[div class=attrib]From the Guardian:[end-div]

The first time I joined Facebook, I had to quit again immediately. It was my first week of university. I was alone, along with thousands of other students, in a sea of club nights and quizzes and tedious conversations about other people’s A-levels. This was back when the site was exclusively for students. I had been told, in no uncertain terms, that joining was mandatory. Failure to do so was a form of social suicide worse even than refusing to drink alcohol. I had no choice. I signed up.

Users of Facebook will know the site has one immutable feature. You don’t have to post a profile picture, or share your likes and dislikes with the world, though both are encouraged. You can avoid the news feed, the apps, the tweet-like status updates. You don’t even have to choose a favourite quote. The one thing you cannot get away from is your friend count. It is how Facebook keeps score.

Five years ago, on probably the loneliest week of my life, my newly created Facebook page looked me square in the eye and announced: “You have 0 friends.” I closed the account.

Facebook is not a good place for a lonely person, and not just because of how precisely it quantifies your isolation. The news feed, the default point of entry to the site, is a constantly updated stream of your every friend’s every activity, opinion and photograph. It is a Twitter feed in glorious technicolour, complete with pictures, polls and videos. It exists to make sure you know exactly how much more popular everyone else is, casually informing you that 14 of your friends were tagged in the album “Fun without Tom Meltzer”. It can be, to say the least, disheartening. Without a real-world social network with which to interact, social networking sites act as proof of the old cliché: you’re never so alone as when you’re in a crowd.

The pressures put on teenagers by sites such as Facebook are well-known. Reports of cyber-bullying, happy-slapping, even self-harm and suicide attempts motivated by social networking sites have become increasingly common in the eight years since Friendster – and then MySpace, Bebo and Facebook – launched. But the subtler side-effects for a generation that has grown up with these sites are only now being felt. In March this year, the NSPCC published a detailed breakdown of calls made to ChildLine in the last five years. Though overall the number of calls from children and teenagers had risen by just 10%, calls about loneliness had nearly tripled, from 1,853 five years ago to 5,525 in 2009. Among boys, the number of calls about loneliness was more than five times higher than it had been in 2004.

This is not just a teenage problem. In May, the Mental Health Foundation released a report called The Lonely Society? Its survey found that 53% of 18-34-year-olds had felt depressed because of loneliness, compared with just 32% of people over 55. The question of why was, in part, answered by another of the report’s findings: nearly a third of young people said they spent too much time communicating online and not enough in person.

[div class=attrib]More from theSource here.[end-div]

What is HTML5?

There is much going on in the world of internet and web standards, including the gradual roll-out of IPv6 and HTML5. HTML5 is a much more functional markup language than its predecessors and is better suited to developing richer user interfaces and interactions. Major highlights of HTML5 appear in the infographic below.

[div class=attrib]From Focus.com:[end-div]

[div class=attrib]More from theSource here.[end-div]

The internet: Everything you ever need to know

[div class=attrib]From The Observer:[end-div]

In spite of all the answers the internet has given us, its full potential to transform our lives remains the great unknown. Here are the nine key steps to understanding the most powerful tool of our age – and where it’s taking us.

A funny thing happened to us on the way to the future. The internet went from being something exotic to being a boring utility, like mains electricity or running water – and we never really noticed. So we wound up being totally dependent on a system about which we are terminally incurious. You think I exaggerate about the dependence? Well, just ask Estonia, one of the most internet-dependent countries on the planet, which in 2007 was more or less shut down for two weeks by a sustained attack on its network infrastructure. Or imagine what it would be like if, one day, you suddenly found yourself unable to book flights, transfer funds from your bank account, check bus timetables, send email, search Google, call your family using Skype, buy music from Apple or books from Amazon, buy or sell stuff on eBay, watch clips on YouTube or BBC programmes on the iPlayer – or do the 1,001 other things that have become as natural as breathing.

The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That’s not because we’re short of information about the network; on the contrary, we’re awash with the stuff. It’s just that we don’t know what it all means. We’re in the state once described by that great scholar of cyberspace, Manuel Castells, as “informed bewilderment”.

Mainstream media don’t exactly help here, because much – if not most – media coverage of the net is negative. It may be essential for our kids’ education, they concede, but it’s riddled with online predators, seeking children to “groom” for abuse. Google is supposedly “making us stupid” and shattering our concentration into the bargain. It’s also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive “flash mobs” which ambush innocent columnists such as Jan Moir. And so on.

All of which might lead a detached observer to ask: if the internet is such a disaster, how come 27% of the world’s population (or about 1.8 billion people) use it happily every day, while billions more are desperate to get access to it?

So how might we go about getting a more balanced view of the net? What would you really need to know to understand the internet phenomenon? Having thought about it for a while, my conclusion is that all you need is a smallish number of big ideas, which, taken together, sharply reduce the bewilderment of which Castells writes so eloquently.

But how many ideas? In 1956, the psychologist George Miller published a famous paper in the journal Psychological Review. Its title was “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information” and in it Miller set out to summarise some earlier experiments which attempted to measure the limits of people’s short-term memory. In each case he reported that the effective “channel capacity” lay between five and nine choices. Miller did not draw any firm conclusions from this, however, and contented himself by merely conjecturing that “the recurring sevens might represent something deep and profound or be just coincidence”. And that, he probably thought, was that.

But Miller had underestimated the appetite of popular culture for anything with the word “magical” in the title. Instead of being known as a mere aggregator of research results, Miller found himself identified as a kind of sage — a discoverer of a profound truth about human nature. “My problem,” he wrote, “is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals… Either there really is something unusual about the number or else I am suffering from delusions of persecution.”

[div class=attrib]More from theSource here.[end-div]

What Is I.B.M.’s Watson?

[div class=attrib]From The New York Times:[end-div]

“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?

Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.

Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.

[div class=attrib]More from theSource here.[end-div]

Forget Avatar, the real 3D revolution is coming to your front room

[div class=attrib]From The Guardian:[end-div]

Enjoy eating goulash? Fed up with needing three pieces of cutlery? It could be that I have a solution for you – and not just for you but for picnickers who like a bit of bread with their soup, too. Or indeed for anyone who has dreamed of seeing the spoon and the knife incorporated into one, easy to use, albeit potentially dangerous instrument. Ladies and gentlemen, I would like to introduce you to the Knoon.

The Knoon came to me in a dream – I had a vision of a soup spoon with a knife stuck to its top, blade pointing upwards. Given the potential for lacerating your mouth on the Knoon’s sharp edge, maybe my dream should have stayed just that. But thanks to a technological leap that is revolutionising manufacturing and, some hope, may even change the nature of our consumer society, I now have a Knoon sitting right in front of me. I had the idea, I drew it up and then I printed my cutlery out.

3D is this year’s buzzword in Hollywood. From Avatar to Clash of the Titans, it’s a new take on an old fad that’s coming to save the movie industry. But with less glitz and a degree less fanfare, 3D printing is changing our vision of the world too, and ultimately its effects might prove a degree more special.

Thinglab is a company that specialises in 3D printing. Based in a nondescript office building in east London, its team works mainly with commercial clients to print models that would previously have been assembled by hand. Architects design their buildings in 3D software packages and pass them to Thinglab to print scale models. When mobile phone companies come up with a new handset, they print prototypes first in order to test size, shape and feel. Jewellers not only make prototypes, they use them as a basis for moulds. Sculptors can scan in their original works, adjust the dimensions and rattle off a series of duplicates (signatures can be added later).

All this work is done in the Thinglab basement, a kind of temple to 3D where motion capture suits hang from the wall and a series of next generation TV screens (no need for 3D glasses) sit in the corner. In the middle of the room lurk two hulking 3D printers. Their facades give them the faces of miserable robots.

“We had David Hockney in here recently and he was gobsmacked,” says Robin Thomas, one of Thinglab’s directors, reeling off a list of intrigued celebrities who have made a pilgrimage to his basement. “Boy George came in and we took a scan of his face.” Above the printers sit a collection of the models they’ve produced: everything from a car’s suspension system to a rendering of John Cleese’s head. “If a creative person wakes up in the morning with an idea,” says Thomas, “they could have a model by the end of the day. People who would have spent days, weeks, months on these types of models can now do it with a printer. If they can think of it, we can make it.”

[div class=attrib]More from theSource here.[end-div]

The Man Who Builds Brains

[div class=attrib]From Discover:[end-div]

On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.

Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.

Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (teraflops being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot with this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
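A quick tally of the figures in this passage, assuming each of the four refrigerator-size machines is a single rack, gives a sense of scale (my arithmetic, not Discover's):

```latex
% Rough totals implied by the figures above (four machines, 56 teraflops and
% \$2 million per rack, >16,000 processors at ~1,000 neurons each):
4 \times 56\ \text{teraflops} \approx 224\ \text{teraflops peak},
\qquad
4 \times \$2\text{M} = \$8\text{M},
\qquad
16{,}000 \times 1{,}000 \approx 1.6\times10^{7}\ \text{virtual neurons}
```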

[div class=attrib]More from theSource here.[end-div]

The Madness of Crowds and an Internet Delusion

[div class=attrib]From The New York Times:[end-div]

RETHINKING THE WEB Jaron Lanier, pictured here in 1999, was an early proponent of the Internet’s open culture. His new book examines the downsides.

In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.

Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.

He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.

“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”

I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.

So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.

He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.

It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.

Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.

In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.

But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).

Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.

“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.

“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”

Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.

“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”

To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)

Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?

[div class=attrib]More from theSource here.[end-div]

CERN celebrates 20th anniversary of World Wide Web

theDiagonal doesn’t normally post “newsy” items. So, we are making an exception in this case for two reasons: first, the “web” wasn’t around in 1989 so we wouldn’t have been able to post a news release on our blog announcing its birth; second, in 1989 Tim Berners-Lee’s then manager greeted his proposal with a noncommittal “Vague, but exciting” annotation, so without the benefit of the hindsight we now have, and lacking the foresight that we so desire, we may just have dismissed it. The rest, as they say, is history.

[div class=attrib]From Interactions.org:[end-div]

Web inventor Tim Berners-Lee today returned to the birthplace of his brainchild, 20 years after submitting his paper ‘Information Management: A Proposal’ to his manager Mike Sendall in March 1989. By writing the words ‘Vague, but exciting’ on the document’s cover, and giving Berners-Lee the go-ahead to continue, Sendall signed into existence the information revolution of our time: the World Wide Web. In September the following year, Berners-Lee took delivery of a computer called a NeXT cube, and by December 1990 the Web was up and running, albeit between just a couple of computers at CERN*.

Today’s event takes a look back at some of the early history, and pre-history, of the World Wide Web at CERN, includes a keynote speech from Tim Berners-Lee, and concludes with a series of talks from some of today’s Web pioneers.

“It’s a pleasure to be back at CERN today,” said Berners-Lee. “CERN has come a long way since 1989, and so has the Web, but its roots will always be here.”

The World Wide Web is undoubtedly the most well known spin-off from CERN, but it’s not the only one. Technologies developed at CERN have found applications in domains as varied as solar energy collection and medical imaging.

“When CERN scientists find a technological hurdle in the way of their ambitions, they have a tendency to solve it,” said CERN Director General Rolf Heuer. “I’m pleased to say that the spirit of innovation that allowed Tim Berners-Lee to invent the Web at CERN, and allowed CERN to nurture it, is alive and well today.”

[div class=attrib]More from theSource here.[end-div]

The society of the query and the Googlization of our lives

[div class=attrib]From Eurozine:[end-div]

“There is only one way to turn signals into information, through interpretation”, wrote the computer critic Joseph Weizenbaum. As Google’s hegemony over online content increases, argues Geert Lovink, we should stop searching and start questioning.

A spectre haunts the world’s intellectual elites: information overload. Ordinary people have hijacked strategic resources and are clogging up once carefully policed media channels. Before the Internet, the mandarin classes rested on the idea that they could separate “idle talk” from “knowledge”. With the rise of Internet search engines it is no longer possible to distinguish between patrician insights and plebeian gossip. The distinction between high and low, and their co-mingling on occasions of carnival, belong to a bygone era and should no longer concern us. Nowadays an altogether new phenomenon is causing alarm: search engines rank according to popularity, not truth. Search is the way we now live. With the dramatic increase in accessed information, we have become hooked on retrieval tools. We look for telephone numbers, addresses, opening times, a person’s name, flight details, best deals, and in a frantic mood declare the ever-growing pile of grey matter “data trash”. Soon we will search and only get lost. Old hierarchies of communication have not only imploded; communication itself has assumed the status of cerebral assault. Not only has popular noise risen to unbearable levels; we can no longer stand yet another request from colleagues, and even a benign greeting from friends and family has acquired the status of a chore carrying the expectation of a reply. The educated class deplores the fact that chatter has entered the hitherto protected domain of science and philosophy, when instead they should be worrying about who is going to control the increasingly centralized computing grid.
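
To make “popularity, not truth” concrete, here is a minimal sketch in the spirit of link-based ranking (the idea behind Google’s PageRank); the toy pages and the function below are ours, not Google’s production algorithm. A page’s score depends only on how many other pages point to it and how well those pages score themselves, so nothing in the computation ever inspects whether the content is accurate.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages          # dangling page: spread its score evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Invented four-page web: the heavily linked gossip page outranks the
# encyclopedia entry, regardless of which one is more truthful.
toy_web = {
    "gossip-page": [],
    "blog-a": ["gossip-page"],
    "blog-b": ["gossip-page"],
    "encyclopedia-entry": ["gossip-page"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))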

What today’s administrators of noble simplicity and quiet grandeur cannot express, we should say for them: there is a growing discontent with Google and the way the Internet organizes information retrieval. The scientific establishment has lost control over one of its key research projects – the design and ownership of computer networks, now used by billions of people. How did so many people end up being that dependent on a single search engine? Why are we repeating the Microsoft saga once again? It seems boring to complain about a monopoly in the making when average Internet users have such a multitude of tools at their disposal to distribute power. One possible way to overcome this predicament would be to positively redefine Heidegger’s Gerede. Instead of a culture of complaint that dreams of an undisturbed offline life and radical measures to filter out the noise, it is time to openly confront the trivial forms of Dasein today found in blogs, text messages and computer games. Intellectuals should no longer portray Internet users as secondary amateurs, cut off from a primary and primordial relationship with the world. There is a greater issue at stake and it requires venturing into the politics of informatic life. It is time to address the emergence of a new type of corporation that is rapidly transcending the Internet: Google.

The World Wide Web, which should have realized the infinite library Borges described in his short story The Library of Babel (1941), is seen by many of its critics as nothing but a variation of Orwell’s Big Brother (1948). The ruler, in this case, has turned from an evil monster into a collection of cool youngsters whose corporate responsibility slogan is “Don’t be evil”. Guided by a much older and experienced generation of IT gurus (Eric Schmidt), Internet pioneers (Vint Cerf) and economists (Hal Varian), Google has expanded so fast, and in such a wide variety of fields, that there is virtually no critic, academic or business journalist who has been able to keep up with the scope and speed with which Google developed in recent years. New applications and services pile up like unwanted Christmas presents. Just add Google’s free email service Gmail, the video sharing platform YouTube, the social networking site Orkut, GoogleMaps and GoogleEarth, its main revenue service AdWords with the Pay-Per-Click advertisements, office applications such as Calendar, Talks and Docs. Google not only competes with Microsoft and Yahoo, but also with entertainment firms, public libraries (through its massive book scanning program) and even telecom firms. Believe it or not, the Google Phone is coming soon. I recently heard a less geeky family member saying that she had heard that Google was much better and easier to use than the Internet. It sounded cute, but she was right. Not only has Google become the better Internet, it is taking over software tasks from your own computer so that you can access these data from any terminal or handheld device. Apple’s MacBook Air is a further indication of the migration of data to privately controlled storage bunkers. Security and privacy of information are rapidly becoming the new economy and technology of control. And the majority of users, and indeed companies, are happily abandoning the power to self-govern their informational resources.

[div class=attrib]More from theSource here.[end-div]

A Digital Life

[div class=attrib]From Scientific American:[end-div]

New systems may allow people to record everything they see and hear–and even things they cannot sense–and to store all these data in a personal digital archive.

Human memory can be maddeningly elusive. We stumble upon its limitations every day, when we forget a friend’s telephone number, the name of a business contact or the title of a favorite book. People have developed a variety of strategies for combating forgetfulness–messages scribbled on Post-it notes, for example, or electronic address books carried in handheld devices–but important information continues to slip through the cracks. Recently, however, our team at Microsoft Research has begun a quest to digitally chronicle every aspect of a person’s life, starting with one of our own lives (Bell’s). For the past six years, we have attempted to record all of Bell’s communications with other people and machines, as well as the images he sees, the sounds he hears and the Web sites he visits–storing everything in a personal digital archive that is both searchable and secure.
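
As a minimal sketch of the “searchable personal archive” idea (our illustration, not the actual Microsoft Research system, and assuming a Python build whose bundled SQLite includes the FTS5 full-text extension, as most do), every captured item can be written to one indexed table and recalled later with a free-text query:

import sqlite3

db = sqlite3.connect("lifelog.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS memories USING fts5(captured_at, kind, content)")

def remember(captured_at, kind, content):
    """Log one captured item: an email, a visited page, a transcribed call, a photo caption."""
    db.execute("INSERT INTO memories VALUES (?, ?, ?)", (captured_at, kind, content))
    db.commit()

def recall(query):
    """Full-text search across everything ever logged, newest first."""
    return db.execute(
        "SELECT captured_at, kind, content FROM memories "
        "WHERE memories MATCH ? ORDER BY captured_at DESC",
        (query,),
    ).fetchall()

remember("2007-03-02T10:15", "web", "Scientific American feature on digital memories")
print(recall("digital memories"))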

Digital memories can do more than simply assist the recollection of past events, conversations and projects. Portable sensors can take readings of things that are not even perceived by humans, such as oxygen levels in the blood or the amount of carbon dioxide in the air. Computers can then scan these data to identify patterns: for instance, they might determine which environmental conditions worsen a child’s asthma. Sensors can also log the three billion or so heartbeats in a person’s lifetime, along with other physiological indicators, and warn of a possible heart attack. This information would allow doctors to spot irregularities early, providing warnings before an illness becomes serious. Your physician would have access to a detailed, ongoing health record, and you would no longer have to rack your brain to answer questions such as “When did you first feel this way?”
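
A toy example of the kind of pattern-scanning the authors describe (ours, not their system): flag any heart-rate reading that falls far outside its own recent baseline, the sort of irregularity a physician might want surfaced early.

from statistics import mean, stdev

def flag_irregular(heart_rates, window=20, threshold=3.0):
    """Return indices of readings more than `threshold` standard deviations
    away from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(heart_rates[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# 200 unremarkable beats around 70 bpm, then one sudden spike to 140 bpm.
readings = [70 + (i % 5) for i in range(200)] + [140]
print(flag_irregular(readings))   # -> [200]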

[div class=attrib]More from theSource here.[end-div]

Viral Nanoelectronics

[div class=attrib]From Scientific American:[end-div]

M.I.T. breeds viruses that coat themselves in selected substances, then self-assemble into such devices as liquid crystals, nanowires and electrodes.

For many years, materials scientists wanted to know how the abalone, a marine snail, constructed its magnificently strong shell from unpromising minerals, so that they could make similar materials themselves. Angela M. Belcher asked a different question: Why not get the abalone to make things for us?

She put a thin glass slip between the abalone and its shell, then removed it. “We got a flat pearl,” she says, “which we could use to study shell formation on an hour-by-hour basis, without having to sacrifice the animal.” It turns out the abalone manufactures proteins that induce calcium carbonate molecules to adopt two distinct yet seamlessly melded crystalline forms–one strong, the other fast-growing. The work earned her a Ph.D. from the University of California, Santa Barbara, in 1997 and paved her way to consultancies with the pearl industry, a professorship at the Massachusetts Institute of Technology, and a founding role in a start-up company called Cambrios in Mountain View, Calif.

[div class=attrib]More from theSource here.[end-div]

A Plan to Keep Carbon in Check

[div class=attrib]By Robert H. Socolow and Stephen W. Pacala, From Scientific American:[end-div]

Getting a grip on greenhouse gases is daunting but doable. The technologies already exist. But there is no time to lose.

Retreating glaciers, stronger hurricanes, hotter summers, thinner polar bears: the ominous harbingers of global warming are driving companies and governments to work toward an unprecedented change in the historical pattern of fossil-fuel use. Faster and faster, year after year for two centuries, human beings have been transferring carbon to the atmosphere from below the surface of the earth. Today the world’s coal, oil and natural gas industries dig up and pump out about seven billion tons of carbon a year, and society burns nearly all of it, releasing carbon dioxide (CO2). Ever more people are convinced that prudence dictates a reversal of the present course of rising CO2 emissions.

The boundary separating the truly dangerous consequences of emissions from the merely unwise is probably located near (but below) a doubling of the concentration of CO2 that was in the atmosphere in the 18th century, before the Industrial Revolution began. Every increase in concentration carries new risks, but avoiding that danger zone would reduce the likelihood of triggering major, irreversible climate changes, such as the disappearance of the Greenland ice cap. Two years ago the two of us provided a simple framework to relate future CO2 emissions to this goal.
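
Two back-of-the-envelope figures help fix the scale; the pre-industrial concentration of roughly 280 parts per million is the commonly cited value rather than a number given in the excerpt:

$$ 7\ \mathrm{GtC/yr}\times\tfrac{44}{12}\approx 26\ \mathrm{Gt\,CO_2/yr}, \qquad 2\times 280\ \mathrm{ppm}\approx 560\ \mathrm{ppm}. $$

The first conversion (the ratio of the molar masses of CO2 and carbon) restates the seven billion tons of carbon dug up each year as the mass of carbon dioxide released; the second marks the doubling threshold the authors argue we should stay below.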

[div class=attrib]More from theSource here.[end-div]

Plan B for Energy

[div class=attrib]From Scientific American:[end-div]

If efficiency improvements and incremental advances in today’s technologies fail to halt global warming, could revolutionary new carbon-free energy sources save the day? Don’t count on it–but don’t count it out, either.

To keep this world tolerable for life as we like it, humanity must complete a marathon of technological change whose finish line lies far over the horizon. Robert H. Socolow and Stephen W. Pacala of Princeton University have compared the feat to a multigenerational relay race [see their article “A Plan to Keep Carbon in Check”]. They outline a strategy to win the first 50-year leg by reining back carbon dioxide emissions from a century of unbridled acceleration. Existing technologies, applied both wisely and promptly, should carry us to this first milestone without trampling the global economy. That is a sound plan A.

The plan is far from foolproof, however. It depends on societies ramping up an array of carbon-reducing practices to form seven “wedges,” each of which keeps 25 billion tons of carbon in the ground and out of the air. Any slow starts or early plateaus will pull us off track. And some scientists worry that stabilizing greenhouse gas emissions will require up to 18 wedges by 2056, not the seven that Socolow and Pacala forecast in their most widely cited model.
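
For readers wondering where the 25-billion-ton figure comes from: in the Socolow-Pacala framework each wedge is an activity that ramps linearly from zero to one billion tons of avoided carbon per year over 50 years, so its cumulative saving is the area of a triangle:

$$ \text{one wedge}=\tfrac{1}{2}\times 50\ \mathrm{yr}\times 1\ \mathrm{GtC/yr}=25\ \mathrm{GtC}, \qquad 7\ \text{wedges}\approx 175\ \mathrm{GtC}\ \text{kept out of the air by 2056}. $$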

[div class=attrib]More from theSource here.[end-div]