Tag Archives: digital life

Documenting the Self

Is Nicholas Felton the Samuel Pepys of our digital age?

Both men chronicled their observations over a period of ten years, though separated by 345 years. That, however, is where the similarity between the two ends.

Samuel Pepys was a 17th-century member of the British Parliament and naval bureaucrat, famous for his decade-long private diary. Pepys kept detailed personal notes from 1660 to 1669. The diary was subsequently published in the 19th century and is now regarded as one of the principal sources of information about the Restoration period (the return of the monarchy under Charles II). Many a British school kid [myself included] has been exposed to Pepys’ observations of momentous events, including his tales of the plague and the Great Fire of London.

Nicholas Felton, a graphic designer and former Facebook employee, cataloged his life from 2005 to 2015. Based in New York, Felton began obsessively recording the minutiae of his life, first tracking his locations and the time spent in each, followed by his music-listening habits. He then began counting his emails, correspondence, calendar entries, and photos. Felton eventually compiled these detailed digital tracks into a visually fascinating annual Feltron Report.

So, Felton is certainly no Pepys, but his data trove remains interesting nonetheless — for different reasons. Pepys recorded history during a tumultuous time in England; his very rare, detailed first-person account across an entire decade has no parallel. His diary is now an invaluable literary chronicle for scholars and history buffs.

Our world is rather different today. Our technologies enable institutions and individuals to record and relay their observations ad nauseam. Felton’s data is therefore not unique per se, though his decade-long obsession has produced a quantitative trove that is valuable less for historical reasons than for those who study our tracks and needs, and market to us.

Read Samuel Pepys’ diary here. Read more about Nicholas Felton here.

Image: Samuel Pepys by John Hayls, oil on canvas, 1666. National Portrait Gallery. Public Domain.

Biological Transporter

Molecular-biology entrepreneur and genomics-engineering pioneer Craig Venter is at it again. In his new book, Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, Venter explains his grand ideas and the coming era of discovery.

From ars technica:

J Craig Venter has been a molecular-biology pioneer for two decades. After developing expressed sequence tags in the 90s, he led the private effort to map the human genome, publishing the results in 2001. In 2010, the J Craig Venter Institute manufactured the entire genome of a bacterium, creating the first synthetic organism.

Now Venter, author of Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, explains the coming era of discovery.

Wired: In Life at the Speed of Light, you argue that humankind is entering a new phase of evolution. How so?

J Craig Venter: As the industrial age is drawing to a close, I think that we’re witnessing the dawn of the era of biological design. DNA, as digitized information, is accumulating in computer databases. Thanks to genetic engineering, and now the field of synthetic biology, we can manipulate DNA to an unprecedented extent, just as we can edit software in a computer. We can also transmit it as an electromagnetic wave at or near the speed of light and, via a “biological teleporter,” use it to recreate proteins, viruses, and living cells at another location, changing forever how we view life.

So you view DNA as the software of life?

All the information needed to make a living, self-replicating cell is locked up within the spirals of DNA’s double helix. As we read and interpret that software of life, we should be able to completely understand how cells work, then change and improve them by writing new cellular software.

The software defines the manufacture of proteins that can be viewed as its hardware, the robots and chemical machines that run a cell. The software is vital because the cell’s hardware wears out. Cells will die in minutes to days if they lack their genetic-information system. They will not evolve, they will not replicate, and they will not live.

Of all the experiments you have done over the past two decades involving the reading and manipulation of the software of life, which are the most important?

I do think the synthetic cell is my most important contribution. But if I were to select a single study, paper, or experimental result that has really influenced my understanding of life more than any other, I would choose one that my team published in 2007, in a paper with the title “Genome Transplantation in Bacteria: Changing One Species to Another.” The research that led to this paper in the journal Science not only shaped my view of the fundamentals of life but also laid the groundwork to create the first synthetic cell. Genome transplantation not only provided a way to carry out a striking transformation, converting one species into another, but would also help prove that DNA is the software of life.

What has happened since your announcement in 2010 that you created a synthetic cell, JCVI-syn1.0?

At the time, I said that the synthetic cell would give us a better understanding of the fundamentals of biology and how life works, help develop techniques and tools for vaccine and pharmaceutical development, enable development of biofuels and biochemicals, and help to create clean water, sources of food, textiles, and bioremediation. Three years on, that vision is being borne out.

Your book contains a dramatic account of the slog and setbacks that led to the creation of this first synthetic organism. What was your lowest point?

When we started out creating JCVI-syn1.0 in the lab, we had selected M. genitalium because of its extremely small genome. That decision we would come to really regret: in the laboratory, M. genitalium grows slowly. So whereas E. coli divides into daughter cells every 20 minutes, M. genitalium requires 12 hours to make a copy of itself. With logarithmic growth, it’s the difference between having an experimental result in 24 hours versus several weeks. It felt like we were working really hard to get nowhere at all. I changed the target to the M. mycoides genome. It’s twice as large as that of genitalium, but it grows much faster. In the end, that move made all the difference.
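The arithmetic behind that decision is worth spelling out: a visible colony of around a billion cells requires about 30 doublings (2^30 ≈ 10^9), so the doubling time compounds dramatically. A back-of-the-envelope check in Python (the target colony size is an illustrative assumption, not a figure from Venter):

```python
import math

def time_to_colony(doubling_time_hours, target_cells=1e9):
    """Hours for a single cell to reach target_cells by binary fission."""
    doublings = math.log2(target_cells)  # ~30 doublings for a billion cells
    return doublings * doubling_time_hours

e_coli = time_to_colony(1 / 3)     # divides every 20 minutes
m_genitalium = time_to_colony(12)  # divides every 12 hours

print(f"E. coli:       {e_coli:.0f} hours")            # ~10 hours
print(f"M. genitalium: {m_genitalium / 24:.0f} days")  # ~15 days
```

Roughly ten hours versus two weeks of growth alone, which squares with Venter’s “24 hours versus several weeks” once setup and measurement are added.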

Some of your peers were blown away by the synthetic cell; others called it a technical tour de force. But there were also those who were underwhelmed because it was not “life from scratch.”

They haven’t thought much about what they are actually trying to say when they talk about “life from scratch.” How about baking a cake “from scratch”? You could buy one and then ice it at home. Or buy a cake mix, to which you add only eggs, water and oil. Or combine the individual ingredients, such as baking powder, sugar, salt, eggs, milk, shortening and so on. But I doubt that anyone would mean formulating his own baking powder by combining sodium, hydrogen, carbon, and oxygen to produce sodium bicarbonate, or producing homemade corn starch. If we apply the same strictures to creating life “from scratch,” it could mean producing all the necessary molecules, proteins, lipids, organelles, DNA, and so forth from basic chemicals or perhaps even from the fundamental elements carbon, hydrogen, oxygen, nitrogen, phosphate, iron, and so on.

There’s a parallel effort to create virtual life, which you go into in the book. How sophisticated are these models of cells in silico?

In the past year we have really seen how virtual cells can help us understand the real things. This work dates back to 1996 when Masaru Tomita and his students at the Laboratory for Bioinformatics at Keio started investigating the molecular biology of Mycoplasma genitalium—which we had sequenced in 1995—and by the end of that year had established the E-Cell Project. The most recent work on Mycoplasma genitalium has been done in America, by the systems biologist Markus W Covert, at Stanford University. His team used our genome data to create a virtual version of the bacterium that came remarkably close to its real-life counterpart.

You’ve discussed the ethics of synthetic organisms for a long time—where is the ethical argument today?

The Janus-like nature of innovation—its responsible use and so on—was evident at the very birth of human ingenuity, when humankind first discovered how to make fire on demand. (Do I use it to burn down a rival’s settlement, or to keep warm?) Every few months, another meeting is held to discuss how powerful technology cuts both ways. It is crucial that we invest in underpinning technologies, science, education, and policy in order to ensure the safe and efficient development of synthetic biology. Opportunities for public debate and discussion on this topic must be sponsored, and the lay public must engage. But it is important not to lose sight of the amazing opportunities that this research presents. Synthetic biology can help address key challenges facing the planet and its population. Research in synthetic biology may lead to new things such as programmed cells that self-assemble at the sites of disease to repair damage.

What worries you more: bioterror or bioerror?

I am probably more concerned about an accidental slip. Synthetic biology increasingly relies on the skills of scientists who have little experience in biology, such as mathematicians and electrical engineers. The democratization of knowledge, the rise of “open-source biology,” and the availability of kitchen-sink versions of key laboratory tools, such as the DNA-copying method PCR, make it easier for anyone—including those outside the usual networks of government, commercial, and university laboratories and the culture of responsible training and biosecurity—to play with the software of life.

Following the precautionary principle, should we abandon synthetic biology?

My greatest fear is not the abuse of technology, but that we will not use it at all, and turn our backs to an amazing opportunity at a time when we are over-populating our planet and changing environments forever.

You’re bullish about where this is headed.

I am—and a lot of that comes from seeing the next generation of synthetic biologists. We can get a view of what the future holds from a series of contests that culminate in a yearly event in Cambridge, Massachusetts—the International Genetically Engineered Machine (iGEM) competition. High-school and college students shuffle a standard set of DNA subroutines into something new. It gives me hope for the future.

You’ve been working to convert DNA into a digital signal that can be transmitted to a unit which then rebuilds an organism.

At Synthetic Genomics, Inc [which Venter founded with his long-term collaborator, the Nobel laureate Ham Smith], we can feed digital DNA code into a program that works out how to re-synthesize the sequence in the lab. This automates the process of designing overlapping pieces of DNA base-pairs, called oligonucleotides, adding watermarks, and then feeding them into the synthesizer. The synthesizer makes the oligonucleotides, which are pooled and assembled using what we call our Gibson-assembly robot (named after my talented colleague Dan Gibson). NASA has funded us to carry out experiments at its test site in the Mojave Desert. We will be using the JCVI mobile lab, which is equipped with soil-sampling, DNA-isolation and DNA sequencing equipment, to test the steps for autonomously isolating microbes from soil, sequencing their DNA and then transmitting the information to the cloud with what we call a “digitized-life-sending unit”. The receiving unit, where the transmitted DNA information can be downloaded and reproduced anew, has a number of names at present, including “digital biological converter,” “biological teleporter,” and—the preference of former US Wired editor-in-chief and CEO of 3D Robotics, Chris Anderson—”life replicator”.
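Venter doesn’t give the design parameters, but the core notion of overlapping oligonucleotides is easy to sketch. The fragment length and overlap below are invented for illustration; real design pipelines also screen candidates for melting temperature, repeats, and secondary structure:

```python
def split_into_oligos(sequence, length=60, overlap=20):
    """Chop a DNA sequence into fragments whose ends overlap,
    so a downstream assembly step can stitch them back together."""
    step = length - overlap
    oligos = []
    for start in range(0, len(sequence), step):
        oligos.append(sequence[start:start + length])
        if start + length >= len(sequence):
            break
    return oligos

genome = "ATGACCGATTACGGCTAGC" * 10  # a stand-in 190-base sequence
pieces = split_into_oligos(genome)
# each piece shares its last 20 bases with the next piece's first 20
```

The shared ends are what an assembly method like Gibson assembly exploits to join the fragments into one contiguous molecule.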

Read the entire article here.

Image: J Craig Venter. Courtesy of Wikipedia.

RIP: Fare Thee Well

With smartphones and tweets taking over our planet, the art of letter writing is fast becoming a subject of history lessons. Our written communications are now modulated by the keypad, emoticons, acronyms and the backspace; our attentions ever-fractured by the noise of the digital world and the dumbed-down 24/7 media monster.

So, as Matthew Malady over at Slate argues, it’s time for the few remaining Luddites, pen still in hand, to join the trend toward curtness and ditch the signoffs. You know, the words that anyone over the age of 50 once put at the end of a hand-written letter, and that can still be found at the close of an email and, less frequently, a text: “Best regards”, “Warmest wishes”, “Most Sincerely”, “Cheers”, “Faithfully yours”.

Your friendly editor, for now, refuses to join the tidal wave of signoff slayers, and continues to take solace from his ink (fountain, if you please!) pens. There is still room for well-crafted prose in a sea of txt-speak.

From Slate:

For the 20 years that I have used email, I have been a fool. For two decades, I never called bullshit when burly, bearded dudes from places like Pittsburgh and Park Slope bid me email adieu with the vaguely British “Cheers!” And I never batted an eye at the hundreds of “XOXO” email goodbyes from people I’d never met, much less hugged or kissed. When one of my best friends recently ended an email to me by using the priggish signoff, “Always,” I just rolled with it.

But everyone has a breaking point. For me, it was the ridiculous variations on “Regards” that I received over the past holiday season. My transition from signoff submissive to signoff subversive began when a former colleague ended an email to me with “Warmest regards.”

Were these scalding hot regards superior to the ordinary “Regards” I had been receiving on a near-daily basis? Obviously they were better than the merely “Warm Regards” I got from a co-worker the following week. Then I received “Best Regards” in a solicitation email from the New Republic. Apparently when urging me to attend a panel discussion, the good people at the New Republic were regarding me in a way that simply could not be topped.

After 10 or 15 more “Regards” of varying magnitudes, I could take no more. I finally realized the ridiculousness of spending even one second thinking about the totally unnecessary words that we tack on to the end of emails. And I came to the following conclusion: It’s time to eliminate email signoffs completely. Henceforth, I do not want—nay, I will not accept—any manner of regards. Nor will I offer any. And I urge you to do the same.

Think about it. Email signoffs are holdovers from a bygone era when letter writing—the kind that required ink and paper—was a major means of communication. The handwritten letters people sent included information of great import and sometimes functioned as the only communication with family members and other loved ones for months. In that case, it made sense to go to town, to get flowery with it. Then, a formal signoff was entirely called for. If you were, say, a Boston resident writing to his mother back home in Ireland in the late 19th century, then ending a correspondence with “I remain your ever fond son in Christ Our Lord J.C.,” as James Chamberlain did in 1891, was entirely reasonable and appropriate.

But those times have long since passed. And so has the era when individuals sought to win the favor of the king via dedication letters and love notes ending with “Your majesty’s Most bounden and devoted,” or “Fare thee as well as I fare.” Also long gone are the days when explorers attempted to ensure continued support for their voyages from monarchs and benefactors via fawning formal correspondence related to the initial successes of this or that expedition. Francisco Vázquez de Coronado had good reason to end his 1541 letter to King Charles I of Spain, relaying details about parts of what is now the southwestern United States, with a doozy that translates to “Your Majesty’s humble servant and vassal, who would kiss the royal feet and hands.”

But in 2013, when bots outnumber benefactors by a wide margin, the continued and consistent use of antiquated signoffs in email is impossible to justify. At this stage of the game, we should be able to interact with one another in ways that reflect the precise manner of communication being employed, rather than harkening back to old standbys popular during the age of the Pony Express.

I am not an important person. Nonetheless, each week, on average, I receive more than 300 emails. I send out about 500. These messages do not contain the stuff of old-timey letters. They’re about the pizza I had for lunch (horrendous) and must-see videos of corgis dressed in sweaters (delightful). I’m trading thoughts on various work-related matters with people who know me and don’t need to be “Best”-ed. Emails, over time, have become more like text messages than handwritten letters. And no one in their right mind uses signoffs in text messages.

What’s more, because no email signoff is exactly right for every occasion, it’s not uncommon for these add-ons to cause affirmative harm. Some people take offense to different iterations of “goodbye,” depending on the circumstances. Others, meanwhile, can’t help but wonder, “What did he mean by that?” or spend entire days worrying about the implications of a sudden shift from “See you soon!” in one email, to “Best wishes” in the next. So, naturally, we consider, and we overthink, and we agonize about how best to close out our emails. We ask others for advice on the matter, and we give advice on it when asked.

Read the entire article after the jump.

Travel Photo Clean-up


We’ve all experienced this phenomenon on vacation: you’re at a beautiful location with a significant other, friends or kids; the backdrop is idyllic; the subjects are exquisitely posed; you need to preserve and share this perfect moment with a photograph, so you get ready to snap the shutter. Then, at that very moment, an oblivious tourist, unperturbed locals or a stray goat wanders into the picture. Too late: the picture is ruined, and it’s getting dark, so there’s no time to recreate that perfect scene! Oh well, you’ll still be able to talk about the scene’s unspoiled perfection when you get home.

But now, there’s an app for that.

From New Scientist:


It’s the same scene played out at tourist sites the world over: You’re trying to take a picture of a partner or friend in front of some monument, statue or building and other tourists keep striding unwittingly – or so they say – into the frame.

Now a new smartphone app promises to let you edit out these unwelcome intruders, leaving just your loved one and a beautiful view intact.

Remove, developed by Swedish photography firm Scalado, takes a burst of shots of your scene. It then identifies the objects that are moving, based on their relative position in each frame. These objects are then highlighted, and you can delete the ones you don’t want and keep the ones you do, leaving you with a nice, clean composite shot.
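Scalado hasn’t published its method, but a common way to approximate this effect is a per-pixel median over aligned burst frames: a moving object covers any given pixel in only a minority of frames, so the median recovers the static background. A minimal NumPy sketch, assuming the frames are already aligned:

```python
import numpy as np

def remove_transients(frames):
    """Composite a burst of aligned frames by taking the per-pixel
    median, which suppresses objects that move between shots.

    frames: list of H x W x 3 uint8 arrays of the same scene.
    """
    stack = np.stack(frames).astype(np.float32)  # shape: (N, H, W, 3)
    composite = np.median(stack, axis=0)         # transient objects vanish
    return composite.astype(np.uint8)
```

In the app’s terms this deletes every moving object at once; per-object selection, as Remove offers, would instead choose which frame’s pixels to keep inside each highlighted region.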

Loud party of schoolchildren stepping in front of the Trevi Fountain? Select and delete. Unwanted, drunken stag party making the Charles Bridge in Prague look untidy? See you later.

Remove uses similar technology to the firm’s Rewind app, launched last year, which merges composite group shots to create the best single image.

The app is just a prototype at the moment, but Scalado will demonstrate a full version at the 2012 Mobile World Congress in Barcelona later this month.

Kodak: The Final Picture?

If you’re over 30 years old, then you may still recall having used roll film with your analog, chemically-based camera. If you did, it’s likely you used a product, such as Kodachrome, manufactured by Eastman Kodak. The company was founded by George Eastman in 1892. Eastman invented roll film and helped make photography a mainstream pursuit.

Kodak had been synonymous with photography for around 100 years. However, in recent years it failed to change gears during the shift to digital media. Indeed, it finally ceased production and processing of Kodachrome in 2009. While other companies, such as Nikon and Canon, managed the transition to a digital world, Kodak failed to anticipate and capitalize. Now, the company is struggling for survival.

From Wired:

Eastman Kodak Co. is hemorrhaging money, the latest Polaroid to be wounded by the sweeping collapse of the market for analog film.

In a statement to the Securities and Exchange Commission, Kodak reported that it needs to make more money out of its patent portfolio or to raise money by selling debt.

Kodak has tried to recalibrate operations around printing, as the sale of film and cameras steadily decline, but it appears as though its efforts have been fruitless: in Q3 of last year, Kodak reported it had $1.4 billion in cash, ending the same quarter this year with just $862 million — 10 percent less than the quarter before.

Recently, the patent suits have been a crutch for the crumbling company, adding a reliable revenue to the shrinking pot. But this year the proceeds from this sadly demeaning revenue stream just didn’t pan out. With sales down 17 percent, this money is critical, given the amount of cash being spent on restructuring lawyers and continued production.

Though the company has no plans to seek bankruptcy, one thing is clear: Kodak’s future depends on its ability to turn its intellectual property into profit, no matter the method.

Read the entire article here.

Image courtesy of Wired.

Communicating Meaning in Cyberspace

Clarifying intent, emotion, wishes and meaning is a rather tricky and cumbersome process that we all navigate each day. Online in the digital world this is even more challenging, if not sometimes impossible. The pre-digital method of exchanging information in a social context would have been face-to-face. Such a method provides the full gamut of verbal and non-verbal dialogue between two or more parties. Importantly, it also provides a channel for the exchange of unconscious cues between people, which researchers are increasingly finding to be of critical importance during communication.

So, now replace the face-to-face interaction with email, texting, instant messaging, video chat, and other forms of digital communication, and you have a new playground for researchers in the cognitive and social sciences. The intriguing question for researchers, and all of us for that matter, is: how do we ensure our meaning, motivations and intent are expressed clearly through digital communications?

There are some partial answers over at Anthropology in Practice, which looks at how users of digital media express emotion, resolve ambiguity and communicate cross-culturally.

From Anthropology in Practice:

The ability to interpret social data is rooted in our theory of mind—our capacity to attribute mental states (beliefs, intents, desires, knowledge, etc.) to the self and to others. This cognitive development reflects some understanding of how other individuals relate to the world, allowing for the prediction of behaviors.1 As social beings we require consistent and frequent confirmation of our social placement. This confirmation is vital to the preservation of our networks—we need to be able to gauge the state of our relationships with others.

Research has shown that children whose capacity to mentalize is diminished find other ways to successfully interpret nonverbal social and visual cues 2-6, suggesting that the capacity to mentalize is necessary to social life. Digitally-mediated communication, such as text messaging and instant messaging, does not readily permit social biofeedback. However cyber communicators still find ways of conveying beliefs, desires, intent, deceit, and knowledge online, which may reflect an effort to preserve the capacity to mentalize in digital media.

The Challenges of Digitally-Mediated Communication

In its most basic form, DMC is text-based, although the growth of video-conferencing technology indicates DMC is still evolving. One of the biggest criticisms of DMC has been the lack of nonverbal cues, which are an important indicator of the speaker’s meaning, particularly when the message is ambiguous.

Email communicators are all too familiar with this issue. After all, in speech the same statement can have multiple meanings depending on tone, expression, emphasis, inflection, and gesture. Speech conveys not only what is said, but how it is said—and consequently, reveals a bit of the speaker’s mind to interested parties. In a plain-text environment like email only the typist knows whether a statement should be read with sarcasm.

More from theSource here.

Image courtesy of Wikipedia / Creative Commons.

A Digital Life

From Scientific American:

New systems may allow people to record everything they see and hear–and even things they cannot sense–and to store all these data in a personal digital archive.

Human memory can be maddeningly elusive. We stumble upon its limitations every day, when we forget a friend’s telephone number, the name of a business contact or the title of a favorite book. People have developed a variety of strategies for combating forgetfulness–messages scribbled on Post-it notes, for example, or electronic address books carried in handheld devices–but important information continues to slip through the cracks. Recently, however, our team at Microsoft Research has begun a quest to digitally chronicle every aspect of a person’s life, starting with one of our own lives (Bell’s). For the past six years, we have attempted to record all of Bell’s communications with other people and machines, as well as the images he sees, the sounds he hears and the Web sites he visits–storing everything in a personal digital archive that is both searchable and secure.

Digital memories can do more than simply assist the recollection of past events, conversations and projects. Portable sensors can take readings of things that are not even perceived by humans, such as oxygen levels in the blood or the amount of carbon dioxide in the air. Computers can then scan these data to identify patterns: for instance, they might determine which environmental conditions worsen a child’s asthma. Sensors can also log the three billion or so heartbeats in a person’s lifetime, along with other physiological indicators, and warn of a possible heart attack. This information would allow doctors to spot irregularities early, providing warnings before an illness becomes serious. Your physician would have access to a detailed, ongoing health record, and you would no longer have to rack your brain to answer questions such as “When did you first feel this way?”
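The article doesn’t describe a specific detection method, but the kind of pattern-spotting it imagines can be illustrated with a toy detector that flags readings deviating sharply from a trailing baseline (the window size and threshold here are invented for the example):

```python
def flag_irregular(samples, window=5, threshold=25):
    """Return (index, value) pairs for samples that deviate from the
    mean of the preceding `window` samples by more than `threshold`."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if abs(samples[i] - baseline) > threshold:
            alerts.append((i, samples[i]))
    return alerts

rates = [72, 75, 71, 74, 73, 70, 72, 128, 74, 73]  # heart rate, bpm
print(flag_irregular(rates))  # → [(7, 128)]
```

A real clinical system would use far more sophisticated models, but the principle is the same: continuous logging turns a fleeting irregularity into a datum a physician can review.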

More from theSource here.