Tag Archives: research

Clicks or Truth

The internet is a tremendous resource for learning, entertainment and communication. It’s also a vast, accreting blob of misinformation, lies, rumor, exaggeration and just plain bulls**t.

So, is there any hope for those of us who care about fact and truth over truthiness? Well, the process of combating conspiracies and mythology is likely to remain a difficult and continuous one for the foreseeable future.

But there are small pockets of the internet where the important daily fight against disinformation thrives. As Brooke Binkowski, managing editor at the fact-checking site Snopes.com, puts it, “In cases where clickability and virality trump fact, we feel that knowledge is the best antidote to fear.”

From the Washington Post:

In a famous xkcd cartoon, “Duty Calls,” a man’s partner beckons him to bed as he sits alone at his computer. “I can’t. This is important,” he demurs, pecking furiously at the keyboard. “What?” comes the reply. His answer: “Someone is wrong on the Internet.”

His nighttime frustration is my day job. I work at Snopes.com, the fact-checking site pledged to running down rumors, debunking cant and calling out liars. Just this past week, for instance, we wrestled with a mysterious lump on Hillary Clinton’s back that turned out to be a mic pack (not the defibrillator some had alleged). It’s a noble and worthwhile calling, but it’s also a Sisyphean one. On the Internet, no matter how many facts you marshal, someone is always wrong.

Every day, the battle against error begins with email. At Snopes, which is supported entirely by advertising, our staff of about a dozen writers and editors plows through some 1,000 messages that have accumulated overnight, which helps us get a feel for what our readers want to know about this morning. Unfortunately, it also means a healthy helping of venom, racism and fury. A Boston-based email specialist on staff helps sort the wheat (real questions we could answer) from the vituperative chaff.

Out in the physical world (where we rarely get to venture during the election season, unless it’s to investigate yet another rumor about Pokémon Go), our interactions with the site’s readers are always positive. But in the virtual world, anonymous communication emboldens the disaffected to treat us as if we were agents of whatever they’re perturbed by today. The writers of these missives, who often send the same message over and over, think they’re on to us: We’re shills for big government, big pharma, the Department of Defense or any number of other prominent, arguably shadowy organizations. You have lost all credibility! they tell us. They never consider that the actual truth is what’s on our website — that we’re completely independent.

Read the entire article here.

Comfort, Texas, the Timeship and Technological Immortality

There’s a small town deep in the heart of Texas’ Hill Country called Comfort. It was founded in the mid-19th century by German immigrants. Its downtown area is held to be one of the most well-preserved historic business districts in Texas. Now, just over 160 years on, there’s another preservation effort underway in Comfort.

This time, however, the work goes well beyond preserving buildings; Comfort may soon be the global hub for life-extension research and human cryopreservation. The ambitious, and not uncontroversial, project is known as the Timeship, and it is the brainchild of architect Stephen Valentine and the Stasis Foundation.

Since one of the key aims of the Timeship is to preserve biological material — DNA, tissue and organ samples, and even cryopreserved humans — the building design presents some unusually stringent challenges. The building must withstand a nuclear blast or other attack; its electrical and mechanical systems must remain functional and stable for hundreds of years; and it must be self-sustaining and highly secure.

Read more about the building and much more about the Timeship here.

Image: Timeship screenshot. Courtesy of Timeship.

Climate Change Threat Grows

Eventually, science and reason do prevail. But in the case of climate change, our global response is fast becoming irrelevant. New research shows accelerating polar ice melt, accelerating global warming and an accelerating rise in mean sea level. James Hansen and colleagues paint a much more dire picture than previously expected.

[tube]JP-cRqCQRc8[/tube]

From the Guardian:

The current rate of global warming could raise sea levels by “several meters” over the coming century, rendering most of the world’s coastal cities uninhabitable and helping unleash devastating storms, according to a paper published by James Hansen, the former Nasa scientist who is considered the father of modern climate change awareness.

The research, published in Atmospheric Chemistry and Physics, references past climatic conditions, recent observations and future models to warn the melting of the Antarctic and Greenland ice sheets will contribute to a far worse sea level increase than previously thought.

Without a sharp reduction in greenhouse gas emissions, the global sea level is likely to increase “several meters over a timescale of 50 to 150 years”, the paper states, warning that the Earth’s oceans were six to nine meters higher during the Eemian period – an interglacial phase about 120,000 years ago that was less than 1C warmer than it is today.

Global warming of 2C above pre-industrial times – the world is already halfway to this mark – would be “dangerous” and risk submerging cities, the paper said. A separate study, released in February, warned that New York, London, Rio de Janeiro and Shanghai will be among the cities at risk from flooding by 2100.

Hansen’s research, written with 18 international colleagues, warns that humanity would not be able to properly adapt to such changes, although the paper concedes its conclusions “differ fundamentally from existing climate change assessments”.

The IPCC has predicted a sea level rise of up to one meter by 2100, if emissions are not constrained. Hansen, and other scientists, have argued the UN body’s assessment is too conservative as it doesn’t factor in the potential disintegration of the polar ice sheets.

Hansen’s latest work has proved controversial because it was initially published in draft form last July without undergoing a peer review process. Some scientists have questioned the assumptions made by Hansen and the soaring rate of sea level rise envisioned by his research, which has now been peer-reviewed and published.

Michael Mann, a prominent climate scientist at Pennsylvania State University, said the revised paper still has the same issues that initially “caused me concern”.

“Namely, the projected amounts of meltwater seem … large, and the ocean component of their model doesn’t resolve key wind-driven current systems (e.g. the Gulf Stream) which help transport heat poleward,” Mann said in an email to the Guardian.

“I’m always hesitant to ignore the findings and warnings of James Hansen; he has proven to be so very prescient when it comes to his early prediction about global warming. That having been said, I’m unconvinced that we could see melting rates over the next few decades anywhere near his exponential predictions, and everything else is contingent upon those melting rates being reasonable.”
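
Mann’s objection turns on those “exponential” melt rates. To see why the shape of the melt curve matters so much, here is a rough, purely illustrative Python sketch comparing the cumulative sea level contribution of an ice-sheet melt rate that doubles on a fixed timescale with one that stays constant. The starting rate and doubling times below are hypothetical round numbers chosen for illustration, not figures from Hansen’s paper.

```python
# Purely illustrative: compare cumulative sea level contribution from a
# constant ice-sheet melt rate with one that doubles every `doubling_yr`
# years. All numbers are hypothetical, chosen only to show the compounding.

def cumulative_rise_m(initial_rate_mm_per_yr, years, doubling_yr=None):
    """Total rise in metres over `years`; exponential growth if doubling_yr is set."""
    total_mm = 0.0
    for y in range(years):
        rate = initial_rate_mm_per_yr
        if doubling_yr:
            rate *= 2 ** (y / doubling_yr)
        total_mm += rate
    return total_mm / 1000.0

print(f"constant 1 mm/yr over 100 yr:         {cumulative_rise_m(1.0, 100):.2f} m")
print(f"1 mm/yr doubling every 20 yr, 100 yr: {cumulative_rise_m(1.0, 100, 20):.2f} m")
print(f"1 mm/yr doubling every 10 yr, 100 yr: {cumulative_rise_m(1.0, 100, 10):.2f} m")
```

A constant rate stays near a tenth of a metre in this toy example, while short doubling times push the total towards a metre or well beyond within a century. Whether real ice sheets can sustain such doubling for decades is precisely the point of contention between Hansen and his critics.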

Read the entire story here.

Image: A Flood on Java (c.1865-1876) by Raden Saleh, lithograph. Courtesy: Royal Netherlands Institute of Southeast Asian and Caribbean Studies. Public Domain.

Video: Ice Melt, Sea Level Rise and Superstorms Video Abstract. Courtesy: Climate Science, Awareness and Solutions.

Forget The Millennials — It’s Time For Generation K

Blame fickle social scientists. After the baby boomers, the most researched generation has been that of the millennials — so called because they came of age at the turn of the century. We know what millennials like to eat and drink, how they dress, and their politics; we know about their proclivity for sharing, their need for meaning and fun at work; we know they need attention and constant feedback. In fact, we have learned so much — and perhaps so little — from the thousands of often-conflicting research studies of millennials that some researchers have decided to move on to new blood. Yes, it’s time to tap another rich vein of research material — Generation K. But I’ll stop after relating what the “K” in Generation K means, and let you form your own conclusions.

[tube]n-7K_OjsDCQ[/tube]

Generation K is named for Katniss, as in the Hunger Games‘ heroine Katniss Everdeen. That’s right: if you were born between 1995 and 2002 then, according to economist Noreena Hertz, you are Gen-Katniss.

From the Guardian:

The brutal, bleak series that has captured the hearts of a generation will come to a brutal, bleak end in November when The Hunger Games: Mockingjay – Part 2 arrives in cinemas. It is the conclusion of the Hunger Games saga, which has immersed the young in a cleverly realised world of trauma, violence, mayhem and death.

For fans of Suzanne Collins’s trilogy about a young girl, Katniss Everdeen, forced to fight for survival in a country ruled by fear and fuelled by televised gladiatorial combat, this is the moment they have been waiting for.

Since the first book in the trilogy was published in 2008, Collins’s tale has sold more than 65 million copies in the US alone. The films, the first of which was released in 2012, have raked in more than $2bn worldwide at the box office and made a global star of their leading lady, Jennifer Lawrence, who plays the increasingly traumatised Katniss with a perfect mix of fury and resignation. For the huge appeal of The Hunger Games goes deeper than the fact that it’s an exciting tale well told. The generation who came to Katniss as young teens and have grown up ploughing through the books and queuing for the movies respond to her story in a particularly personal way.

As to why that might be, the economist and academic Noreena Hertz, who coined the term Generation K (after Katniss) for those born between 1995 and 2002, says that this is a generation riddled with anxiety, distrustful of traditional institutions from government to marriage, and, “like their heroine Katniss Everdeen, [imbued with] a strong sense of what is right and fair”.

“I think The Hunger Games resonates with them so much because they are Katniss navigating a dark and difficult world,” says Hertz, who interviewed 2,000 teenagers from the UK and the US about their hopes, fears and beliefs, concluding that today’s teens are shaped by three factors: technology, recession and coming of age in a time of great unease.

“This is a generation who grew up through 9/11, the Madrid bombings, the London bombings and Islamic State terrors. They see danger piped down their smartphones and beheadings on their Facebook page,” she says. “My data showed very clearly how anxious they are about everything from getting into debt or not getting a job, to wider issues such as climate change and war – 79% of those who took part in my survey worried about getting a job, 72% worried about debt, and you have to remember these are teenagers.

“In previous generations teenagers did not think in this way. Unlike the first-era millennials [who Hertz classes as those aged between 20 and 30] who grew up believing that the world was their oyster and ‘Yes we can’, this new generation knows the world is an unequal and harsh place.”

Writer and activist Laurie Penny, herself a first-era millennial at the age of 29, agrees. “I think what today’s young people have grasped that my generation didn’t get until our early 20s, is that adults don’t know everything,” she says. “They might be trying their best but they don’t always have your best interests at heart. The current generation really understands that – they’re more politically engaged and they have more sense of community because they’re able to find each other easily thanks to their use of technology.”

One of the primary appeals of the Hunger Games trilogy is its refusal to sugarcoat the scenarios Katniss finds herself in. In contrast to JK Rowling’s Harry Potter series, there are no reliable adult figures to dispense helpful advice and no one in authority she can truly trust (notably even the most likeable adult figures in the books tend to be flawed at best and fraudulent at worst). Even her friends may not always have her back, hard as they try – Dumbledore’s Army would probably find themselves taken out before they’d uttered a single counter-curse in the battlegrounds of Panem. At the end of the day, Katniss can only rely on one person, herself.

“Ultimately, the message of the Hunger Games is that everything’s not going to be OK,” says Penny. “One of the reasons Jennifer Lawrence is so good is because she lets you see that while Katniss is heroic, she’s also frightened all of the time. She spends the whole story being forced into situations she doesn’t want to be in. Kids respond because they can imagine what it’s like to be terrified but know that you have to carry on.”

It’s incontestable that we live in difficult times and that younger generations in particular may be more acutely aware that things aren’t improving any time soon, but is it a reach to say that fans of the Hunger Games are responding as much to the world around them as to the books?

Read the entire story here.

Video: The Hunger Games: Mockingjay Part 2 Official Trailer – “We March Together”. Courtesy of the Hunger Games franchise.

Girlfriend or Nuclear Reactor?

Ask a typical 14-year-old boy whether he’d prefer to have a girlfriend or a home-made nuclear fusion reactor and he’s highly likely to gravitate towards the former. Not so Taylor Wilson; he seems to prefer the company of Geiger counters, particle accelerators, vacuum tubes and radioactive materials.

From the Guardian:

Taylor Wilson has a Geiger counter watch on his wrist, a sleek, sporty-looking thing that sounds an alert in response to radiation. As we enter his parents’ garage and approach his precious jumble of electrical equipment, it emits an ominous beep. Wilson is in full flow, explaining the old-fashioned control panel in the corner, and ignores it. “This is one of the original atom smashers,” he says with pride. “It would accelerate particles up to, um, 2.5m volts – so kind of up there, for early nuclear physics work.” He pats the knobs.

It was in this garage that, at the age of 14, Wilson built a working nuclear fusion reactor, bringing the temperature of its plasma core to 580 million °C – 40 times as hot as the core of the sun. This skinny kid from Arkansas, the son of a Coca-Cola bottler and a yoga instructor, experimented for years, painstakingly acquiring materials, instruments and expertise until he was able to join the elite club of scientists who have created a miniature sun on Earth.

Not long after, Wilson won $50,000 at a science fair, for a device that can detect nuclear materials in cargo containers – a counter-terrorism innovation he later showed to a wowed Barack Obama at a White House-sponsored science fair.

Wilson’s two TED talks (Yup, I Built A Nuclear Fusion Reactor and My Radical Plan For Small Nuclear Fission Reactors) have been viewed almost 4m times. A Hollywood biopic is planned, based on an imminent biography. Meanwhile, corporations have wooed him and the government has offered to buy some of his inventions. Former US under-secretary for energy, Kristina Johnson, told his biographer, Tom Clynes: “I would say someone like him comes along maybe once in a generation. He’s not just smart – he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

Seven years on from fusing the atom, the gangly teen with a mop of blond hair is now a gangly 21-year-old with a mop of blond hair, who shuttles between his garage-cum-lab in the family’s home in Reno, Nevada, and other more conventional labs. In addition to figuring out how to intercept dirty bombs, he looks at ways of improving cancer treatment and lowering energy prices – while plotting a hi-tech business empire around the patents.

As we tour his parents’ garage, Wilson shows me what appears to be a collection of nuggets. His watch sounds another alert, but he continues lovingly to detail his inventory. “The first thing I got for my fusion project was a mass spectrometer from an ex-astronaut in Houston, Texas,” he explains. This was a treasure he obtained simply by writing a letter asking for it. He ambles over to a large steel safe, with a yellow and black nuclear hazard sticker on the front. He spins the handle, opens the door and extracts a vial with pale powder in it.

“That’s some yellowcake I made – the famous stuff that Saddam Hussein was supposedly buying from Niger. This is basically the starting point for nuclear, whether it’s a weapons programme or civilian energy production.” He gives the vial a shake. A vision of dodgy dossiers, atomic intrigue and mushroom clouds swims before me, a reverie broken by fresh beeping. “That’ll be the allanite. It’s a rare earth mineral,” Wilson explains. He picks up a dark, knobbly little rock streaked with silver. “It has thorium, a potential nuclear fuel.”

I think now may be a good moment to exit the garage, but the tour is not over. “One of the things people are surprised by is how ubiquitous radiation and radioactivity is,” Wilson says, giving me a reassuring look. “I’m very cautious. I’m actually a bit of a hypochondriac. It’s all about relative risk.”

He paces over to a plump steel tube, elevated to chest level – an object that resembles an industrial vacuum cleaner, and gleams in the gloom. This is the jewel in Wilson’s crown, the reactor he built at 14, and he gives it a tender caress. “This is safer than many things,” he says, gesturing to his Aladdin’s cave of atomic accessories. “For instance, horse riding. People fear radioactivity because it is very mysterious. You want to have respect for it, but not be paralysed by fear.”

The Wilson family home is a handsome, hacienda-style house tucked into foothills outside Reno. Unusually for the high desert at this time of year, grey clouds with bellies of rain rumble overhead. Wilson, by contrast, is all sunny smiles. He is still the slightly ethereal figure you see in the TED talks (I have to stop myself from offering him a sandwich), but the handshake is firm, the eye contact good and the energy enviable – even though Wilson has just flown back from a weekend visiting friends in Los Angeles. “I had an hour’s sleep last night. Three hours the night before that,” he says, with a hint of pride.

He does not drink or smoke, is a natty dresser (in suede jacket, skinny tie, jeans and Converse-style trainers) and he is a talker. From the moment we meet until we part hours later, he talks and talks, great billows of words about the origin of his gift and the responsibility it brings; about trying to be normal when he knows he’s special; about Fukushima, nuclear power and climate change; about fame and ego, and seeing his entire life chronicled in a book for all the world to see when he’s barely an adult and still wrestling with how to ask a girl out on a date.

The future feels urgent and mysterious. “My life has been this series of events that I didn’t see coming. It’s both exciting and daunting to know you’re going to be constantly trying to one-up yourself,” he says. “People can have their opinions about what I should do next, but my biggest pressure is internal. I hate resting on laurels. If I burn out, I burn out – but I don’t see that happening. I’ve more ideas than I have time to execute.”

Wilson credits his parents with huge influence, but wavers on the nature versus nurture debate: was he born brilliant or educated into it? “I don’t have an answer. I go back and forth.” The pace of technological change makes predicting his future a fool’s errand, he says. “It’s amazing – amazing – what I can do today that I couldn’t have done if I was born 10 years earlier.” And his ambitions are sky-high: he mentions, among many other plans, bringing electricity and state-of-the-art healthcare to the developing world.

Read the entire fascinating story here.

Image: Yellowcake, a type of uranium concentrate powder, an intermediate step in the processing of uranium ores. Courtesy of United States Department of Energy. Public Domain.

Your Tax Dollars At Work — Leetspeak

It’s fascinating to see what our government agencies are doing with some of our hard-earned tax dollars.

In this head-scratching example, the FBI — the FBI’s Intelligence Research Support Unit, no less — has just completed an 83-page glossary of Internet slang, or “leetspeak”. LOL and Ugh! (the latter is not an acronym).

Check out the document via MuckRock here — they obtained the “secret” document through the Freedom of Information Act.

From the Washington Post:

The Internet is full of strange and bewildering neologisms, which anyone but a text-addled teen would struggle to understand. So the fine, taxpayer-funded people of the FBI — apparently not content to trawl Urban Dictionary, like the rest of us — compiled a glossary of Internet slang.

An 83-page glossary. Containing nearly 3,000 terms.

The glossary was recently made public through a Freedom of Information request by the group MuckRock, which posted the PDF, called “Twitter shorthand,” online. Despite its name, this isn’t just Twitter slang: As the FBI’s Intelligence Research Support Unit explains in the introduction, it’s a primer on shorthand used across the Internet, including in “instant messages, Facebook and Myspace.” As if that Myspace reference wasn’t proof enough that the FBI’s a tad out of touch, the IRSU then promises the list will prove useful both professionally and “for keeping up with your children and/or grandchildren.” (Your tax dollars at work!)

All of these minor gaffes could be forgiven, however, if the glossary itself was actually good. Obviously, FBI operatives and researchers need to understand Internet slang — the Internet is, increasingly, where crime goes down these days. But then we get things like ALOTBSOL (“always look on the bright side of life”) and AMOG (“alpha male of group”) … within the first 10 entries.

ALOTBSOL has, for the record, been tweeted fewer than 500 times in the entire eight-year history of Twitter. AMOG has been tweeted far more often, but usually in Spanish … as a misspelling, it would appear, of “amor” and “amigo.”

Among the other head-scratching terms the FBI considers can’t-miss Internet slang:

  1. AYFKMWTS (“are you f—— kidding me with this s—?”) — 990 tweets
  2. BFFLTDDUP (“best friends for life until death do us part”) — 414 tweets
  3. BOGSAT (“bunch of guys sitting around talking”) — 144 tweets
  4. BTDTGTTSAWIO (“been there, done that, got the T-shirt and wore it out”) — 47 tweets
  5. BTWITIAILWY (“by the way, I think I am in love with you”) — 535 tweets
  6. DILLIGAD (“does it look like I give a damn?”) — 289 tweets
  7. DITYID (“did I tell you I’m depressed?”) — 69 tweets
  8. E2EG (“ear-to-ear grin”) — 125 tweets
  9. GIWIST (“gee, I wish I said that”) — 56 tweets
  10. HCDAJFU (“he could do a job for us”) — 25 tweets
  11. IAWTCSM (“I agree with this comment so much”) — 20 tweets
  12. IITYWIMWYBMAD (“if I tell you what it means will you buy me a drink?”) — 250 tweets
  13. LLTA (“lots and lots of thunderous applause”) — 855 tweets
  14. NIFOC (“naked in front of computer”) — 1,065 tweets, most of them referring to acronym guides like this one.
  15. PMYMHMMFSWGAD (“pardon me, you must have mistaken me for someone who gives a damn”) — 128 tweets
  16. SOMSW (“someone over my shoulder watching”) — 170 tweets
  17. WAPCE (“women are pure concentrated evil”) — 233 tweets, few relating to women
  18. YKWRGMG (“you know what really grinds my gears?”) — 1,204 tweets

In all fairness to the FBI, they do get some things right: “crunk” is helpfully defined as “crazy and drunk,” FF is “a recommendation to follow someone referenced in the tweet,” and a whole range of online patois is translated to its proper English equivalent: hafta is “have to,” ima is “I’m going to,” kewt is “cute.”
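
As a side note for the technically curious: a glossary like this is, at bottom, just a lookup table. The short Python sketch below shows the idea; the three expansions are quoted from the article above, and everything else is illustrative.

```python
# A tiny slice of an internet-slang glossary as a plain dictionary.
# The three expansions come from the article above; the helper is illustrative.
glossary = {
    "ALOTBSOL": "always look on the bright side of life",
    "AMOG": "alpha male of group",
    "NIFOC": "naked in front of computer",
}

def expand(text: str) -> str:
    """Replace any known acronyms in a message with their expansions."""
    return " ".join(glossary.get(word.upper(), word) for word in text.split())

print(expand("AMOG says ALOTBSOL"))
# -> alpha male of group says always look on the bright side of life
```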

Read the entire article here.

Image: FBI Seal. Courtesy of U.S. Government.

Metabolism Without Life

A remarkable chance discovery in a Cambridge University research lab shows that a number of life-sustaining metabolic processes can occur spontaneously and outside of living cells. This opens a rich, new vein of theories and approaches to studying the origin of life.

From the New Scientist:

Metabolic processes that underpin life on Earth have arisen spontaneously outside of cells. The serendipitous finding that metabolism – the cascade of reactions in all cells that provides them with the raw materials they need to survive – can happen in such simple conditions provides fresh insights into how the first life formed. It also suggests that the complex processes needed for life may have surprisingly humble origins.

“People have said that these pathways look so complex they couldn’t form by environmental chemistry alone,” says Markus Ralser at the University of Cambridge who supervised the research.

But his findings suggest that many of these reactions could have occurred spontaneously in Earth’s early oceans, catalysed by metal ions rather than the enzymes that drive them in cells today.

The origin of metabolism is a major gap in our understanding of the emergence of life. “If you look at many different organisms from around the world, this network of reactions always looks very similar, suggesting that it must have come into place very early on in evolution, but no one knew precisely when or how,” says Ralser.

Happy accident

One theory is that RNA was the first building block of life because it helps to produce the enzymes that could catalyse complex sequences of reactions. Another possibility is that metabolism came first; perhaps even generating the molecules needed to make RNA, and that cells later incorporated these processes – but there was little evidence to support this.

“This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA,” Ralser says.

Remarkably, the discovery was an accident, stumbled on during routine quality control testing of the medium used to culture cells at Ralser’s laboratory. As a shortcut, one of his students decided to run unused media through a mass spectrometer, which spotted a signal for pyruvate – an end product of a metabolic pathway called glycolysis.

To test whether the same processes could have helped spark life on Earth, they approached colleagues in the Earth sciences department who had been working on reconstructing the chemistry of the Archean Ocean, which covered the planet almost 4 billion years ago. This was an oxygen-free world, predating photosynthesis, when the waters were rich in iron, as well as other metals and phosphate. All these substances could potentially facilitate chemical reactions like the ones seen in modern cells.

Metabolic backbone

Ralser’s team took early ocean solutions and added substances known to be starting points for modern metabolic pathways, before heating the samples to between 50 °C and 70 °C – the sort of temperatures you might have found near a hydrothermal vent – for 5 hours. Ralser then analysed the solutions to see what molecules were present.

“In the beginning we had hoped to find one reaction or two maybe, but the results were amazing,” says Ralser. “We could reconstruct two metabolic pathways almost entirely.”

The pathways they detected were glycolysis and the pentose phosphate pathway, “reactions that form the core metabolic backbone of every living cell,” Ralser adds. Together these pathways produce some of the most important materials in modern cells, including ATP – the molecule cells use to drive their machinery, the sugars that form DNA and RNA, and the molecules needed to make fats and proteins.

If these metabolic pathways were occurring in the early oceans, then the first cells could have enveloped them as they developed membranes.

In all, 29 metabolism-like chemical reactions were spotted, seemingly catalysed by iron and other metals that would have been found in early ocean sediments. The metabolic pathways aren’t identical to modern ones; some of the chemicals made by intermediate steps weren’t detected. However, “if you compare them side by side it is the same structure and many of the same molecules are formed,” Ralser says. These pathways could have been refined and improved once enzymes evolved within cells.

Read the entire article here.

Image: Glycolysis metabolic pathway. Courtesy of Wikipedia.

Good Mutations and Breathing

Stem cells — the factories that manufacture all our component body parts — may hold a key to divining why our bodies gradually break down as we age. A new body of research shows how the body’s population of blood stem cells mutates, and gradually dies, over a typical lifespan. Sometimes these mutations turn cancerous, sometimes not. Luckily for us, the research is centered on the blood samples of Hendrikje van Andel-Schipper — she died in 2005 at the age of 115, and donated her body to science. Her body showed a remarkable resilience — no hardening of the arteries and no deterioration of her brain tissue.  When quizzed about the secret of her longevity, she once retorted, “breathing”.

From the New Scientist:

Death is the one certainty in life – a pioneering analysis of blood from one of the world’s oldest and healthiest women has given clues to why it happens.

Born in 1890, Hendrikje van Andel-Schipper was at one point the oldest woman in the world. She was also remarkable for her health, with crystal-clear cognition until she was close to death, and a blood circulatory system free of disease. When she died in 2005, she bequeathed her body to science, with the full support of her living relatives that any outcomes of scientific analysis – as well as her name – be made public.

Researchers have now examined her blood and other tissues to see how they were affected by age.

What they found suggests, as we could perhaps expect, that our lifespan might ultimately be limited by the capacity for stem cells to keep replenishing tissues day in day out. Once the stem cells reach a state of exhaustion that imposes a limit on their own lifespan, they themselves gradually die out and steadily diminish the body’s capacity to keep regenerating vital tissues and cells, such as blood.

Two little cells

In van Andel-Schipper’s case, it seemed that in the twilight of her life, about two-thirds of the white blood cells remaining in her body at death originated from just two stem cells, implying that most or all of the blood stem cells she started life with had already burned out and died.

“Is there a limit to the number of stem cell divisions, and does that imply that there’s a limit to human life?” asks Henne Holstege of the VU University Medical Center in Amsterdam, the Netherlands, who headed the research team. “Or can you get round that by replenishment with cells saved from earlier in your life?” she says.

The other evidence for the stem cell fatigue came from observations that van Andel-Schipper’s white blood cells had drastically worn-down telomeres – the protective tips on chromosomes that burn down like wicks each time a cell divides. On average, the telomeres on the white blood cells were 17 times shorter than those on brain cells, which hardly replicate at all throughout life.

The team could establish the number of white blood cell-generating stem cells by studying the pattern of mutations found within the blood cells. The pattern was so similar in all cells that the researchers could conclude that they all came from one of two closely related “mother” stem cells.

Point of exhaustion

“It’s estimated that we’re born with around 20,000 blood stem cells, and at any one time, around 1000 are simultaneously active to replenish blood,” says Holstege. During life, the number of active stem cells shrinks, she says, and their telomeres shorten to the point at which they die – a point called stem-cell exhaustion.

Holstege says the other remarkable finding was that the mutations within the blood cells were harmless – all resulted from mistaken replication of DNA during van Andel-Schipper’s life as the “mother” blood stem cells multiplied to provide clones from which blood was repeatedly replenished.

She says this is the first time patterns of lifetime “somatic” mutations have been studied in such an old and such a healthy person. The absence of mutations posing dangers of disease and cancer suggests that van Andel-Schipper had a superior system for repairing or aborting cells with dangerous mutations.
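
The figures quoted above (roughly 20,000 blood stem cells at birth, around 1,000 active at any time, telomeres shortening with every division) lend themselves to a toy simulation of stem-cell exhaustion. The Python sketch below is purely illustrative; the division budget and divisions per year are invented parameters, not values from the study.

```python
import random

# Toy model of blood stem cell exhaustion. Pool size and active-cell count
# echo the figures quoted above; the division budget and divisions per year
# are invented parameters chosen only to make the mechanism visible.

POOL_SIZE = 20_000        # stem cells at birth
ACTIVE_PER_YEAR = 1_000   # cells recruited to replenish blood each year
DIVISION_BUDGET = 30      # divisions before telomeres are exhausted (hypothetical)
DIVISIONS_PER_YEAR = 6    # divisions per recruited cell per year (hypothetical)

random.seed(0)
budget = [DIVISION_BUDGET] * POOL_SIZE

for age in range(1, 121):
    survivors = [i for i, b in enumerate(budget) if b > 0]
    if not survivors:
        print(f"stem cell pool exhausted at age {age}")
        break
    recruited = random.sample(survivors, k=min(ACTIVE_PER_YEAR, len(survivors)))
    for i in recruited:
        budget[i] -= DIVISIONS_PER_YEAR
    if age % 20 == 0:
        print(f"age {age:3d}: {len(survivors):6d} stem cells still viable")
```

In the sketch the pool holds up for decades and then dwindles rapidly, a crude analogue of the exhaustion the researchers describe.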

Read the entire article here.

Image: Hendrikje van Andel-Schipper, aged 113. Courtesy of Wikipedia.

Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom suggests that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators, on the other hand, follow a different mantra: they invent in hallways and cubes — everywhere.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open plan, novelty strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).

Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifescience, and Elon Musk’s companies, Tesla Motors and Space X, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.

Unification of Byzantine Fault Tolerance

The title reads rather elegantly. However, I have no idea what it means, and I challenge you to find meaning in it as well. You see, while your friendly editor typed the title, the words themselves came from a non-human author, one that goes by the name SCIgen.

SCIgen is an automated scientific paper generator. Accessible via the internet, the SCIgen program generates utterly random nonsense, complete with an abstract, hypothesis, test results, detailed diagrams and charts, and even academic references. At first glance the output seems highly convincing. In fact, unscrupulous individuals have been using it to author fake submissions to scientific conferences and to generate bogus research papers for publication in academic journals.
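
For the curious, the trick behind generators like SCIgen is a hand-written context-free grammar expanded at random. The toy Python sketch below shows the general idea with a handful of made-up rules; it is a rough illustration of the technique, not SCIgen’s actual grammar or code.

```python
import random

# A toy context-free grammar in the spirit of SCIgen: each nonterminal maps to
# a list of alternative expansions, and text is produced by expanding <TITLE>
# recursively, choosing among the alternatives at random. These rules are
# invented for illustration; SCIgen's real grammar is far larger.
GRAMMAR = {
    "<TITLE>": [
        "<VERBING> <NOUN> Using <ADJ> <NOUN>",
        "The Effect of <ADJ> <NOUN> on <NOUN>",
    ],
    "<VERBING>": ["Deconstructing", "Synthesizing", "Refining"],
    "<ADJ>": ["Perfect", "Byzantine", "Pseudorandom", "Game-Theoretic"],
    "<NOUN>": ["Modalities", "Fault Tolerance", "Write-Back Caches", "Thin Clients"],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into a random string."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(token) for token in production.split())

print(expand("<TITLE>"))  # e.g. "The Effect of Pseudorandom Modalities on Thin Clients"
```

Scale a grammar like that up to cover abstracts, sections, citations and charts, and you have something capable of fooling an inattentive programme committee, which is exactly the point SCIgen’s creators set out to make.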

This says a great deal about the quality of some academic conferences and their peer review processes (or lack thereof).

Access the SCIgen generator here.

Read more about the Unification of Byzantine Fault Tolerance — our very own scientific paper — below.

The Effect of Perfect Modalities on Hardware and Architecture

Bob Widgleton, Jordan LeBouth and Apropos Smythe

Abstract

The implications of pseudorandom archetypes have been far-reaching and pervasive. After years of confusing research into e-commerce, we demonstrate the refinement of rasterization, which embodies the confusing principles of cryptography [21]. We propose new modular communication, which we call Tither.

Table of Contents

1) Introduction
2) Principles
3) Implementation
4) Evaluation
5) Related Work
6) Conclusion

1  Introduction

The transistor must work. Our mission here is to set the record straight. On the other hand, a typical challenge in machine learning is the exploration of simulated annealing. Furthermore, an intuitive quandary in robotics is the confirmed unification of Byzantine fault tolerance and thin clients. Clearly, XML and Moore’s Law [22] interact in order to achieve the visualization of the location-identity split. This at first glance seems unexpected but has ample historical precedence.
We confirm not only that IPv4 can be made game-theoretic, homogeneous, and signed, but that the same is true for write-back caches. In addition, we view operating systems as following a cycle of four phases: location, location, construction, and evaluation. It should be noted that our methodology turns the stable communication sledgehammer into a scalpel. Despite the fact that it might seem unexpected, it always conflicts with the need to provide active networks to experts. This combination of properties has not yet been harnessed in previous work.
Nevertheless, this solution is fraught with difficulty, largely due to perfect information. In the opinions of many, the usual methods for the development of multi-processors do not apply in this area. By comparison, it should be noted that Tither studies event-driven epistemologies. By comparison, the flaw of this type of solution, however, is that red-black trees can be made efficient, linear-time, and replicated. This combination of properties has not yet been harnessed in existing work.
Here we construct the following contributions in detail. We disprove that although the well-known unstable algorithm for the compelling unification of I/O automata and interrupts by Ito et al. is recursively enumerable, the acclaimed collaborative algorithm for the investigation of 802.11b by Davis et al. [4] runs in ?( n ) time. We prove not only that neural networks and kernels are generally incompatible, but that the same is true for DHCP. we verify that while the foremost encrypted algorithm for the exploration of the transistor by D. Nehru [23] runs in ?( n ) time, the location-identity split and the producer-consumer problem are always incompatible.
The rest of this paper is organized as follows. We motivate the need for the partition table. Similarly, to fulfill this intent, we describe a novel approach for the synthesis of context-free grammar (Tither), arguing that IPv6 and write-back caches are continuously incompatible. We argue the construction of multi-processors. This follows from the understanding of the transistor that would allow for further study into robots. Ultimately, we conclude.

2  Principles

In this section, we present a framework for enabling model checking. We show our framework’s authenticated management in Figure 1. We consider a methodology consisting of n spreadsheets. The question is, will Tither satisfy all of these assumptions? Yes, but only in theory.

Figure 1: An application for the visualization of DHTs [24].

Furthermore, we assume that electronic theory can prevent compilers without needing to locate the synthesis of massive multiplayer online role-playing games. This is a compelling property of our framework. We assume that the foremost replicated algorithm for the construction of redundancy by John Kubiatowicz et al. follows a Zipf-like distribution. Along these same lines, we performed a day-long trace confirming that our framework is solidly grounded in reality. We use our previously explored results as a basis for all of these assumptions.

Figure 2: A decision tree showing the relationship between our framework and the simulation of context-free grammar.

Reality aside, we would like to deploy a methodology for how Tither might behave in theory. This seems to hold in most cases. Figure 1 depicts the relationship between Tither and linear-time communication. We postulate that each component of Tither enables active networks, independent of all other components. This is a key property of our heuristic. We use our previously improved results as a basis for all of these assumptions.

3  Implementation

Though many skeptics said it couldn’t be done (most notably Wu et al.), we propose a fully-working version of Tither. It at first glance seems unexpected but is supported by prior work in the field. We have not yet implemented the server daemon, as this is the least private component of Tither. We have not yet implemented the homegrown database, as this is the least appropriate component of Tither. It is entirely a significant aim but is derived from known results.

4  Evaluation

As we will soon see, the goals of this section are manifold. Our overall evaluation method seeks to prove three hypotheses: (1) that the World Wide Web no longer influences performance; (2) that an application’s effective ABI is not as important as median signal-to-noise ratio when minimizing median signal-to-noise ratio; and finally (3) that USB key throughput behaves fundamentally differently on our system. Our logic follows a new model: performance might cause us to lose sleep only as long as usability takes a back seat to simplicity constraints. Furthermore, our logic follows a new model: performance might cause us to lose sleep only as long as scalability constraints take a back seat to performance constraints. Only with the benefit of our system’s legacy code complexity might we optimize for performance at the cost of signal-to-noise ratio. Our evaluation approach will show that increasing the instruction rate of concurrent symmetries is crucial to our results.

4.1  Hardware and Software Configuration

Figure 3: Note that popularity of multi-processors grows as complexity decreases – a phenomenon worth exploring in its own right.

Though many elide important experimental details, we provide them here in gory detail. We instrumented a deployment on our network to prove the work of Italian mad scientist K. Ito. Had we emulated our underwater cluster, as opposed to deploying it in a chaotic spatio-temporal environment, we would have seen weakened results. For starters, we added 3 2GB optical drives to MIT’s decommissioned UNIVACs. This configuration step was time-consuming but worth it in the end. We removed 2MB of RAM from our 10-node testbed [15]. We removed more 2GHz Intel 386s from our underwater cluster. Furthermore, steganographers added 3kB/s of Internet access to MIT’s planetary-scale cluster.

Figure 4: These results were obtained by Noam Chomsky et al. [23]; we reproduce them here for clarity.

Tither runs on autogenerated standard software. We implemented our model checking server in x86 assembly, augmented with collectively wireless, noisy extensions. Our experiments soon proved that automating our Knesis keyboards was more effective than instrumenting them, as previous work suggested. Second, all of these techniques are of interesting historical significance; R. Tarjan and Andrew Yao investigated an orthogonal setup in 1967.

Figure 5: The average distance of our application, compared with the other applications.

4.2  Experiments and Results

Figure 6: The expected instruction rate of our application, as a function of popularity of replication.

Figure 7: Note that hit ratio grows as interrupt rate decreases – a phenomenon worth studying in its own right.

We have taken great pains to describe out evaluation setup; now, the payoff, is to discuss our results. That being said, we ran four novel experiments: (1) we ran von Neumann machines on 15 nodes spread throughout the underwater network, and compared them against semaphores running locally; (2) we measured database and instant messenger performance on our planetary-scale cluster; (3) we ran 87 trials with a simulated DHCP workload, and compared results to our courseware deployment; and (4) we ran 58 trials with a simulated RAID array workload, and compared results to our bioware simulation. All of these experiments completed without LAN congestion or access-link congestion.
Now for the climactic analysis of the second half of our experiments. Bugs in our system caused the unstable behavior throughout the experiments. Continuing with this rationale, bugs in our system caused the unstable behavior throughout the experiments. These expected time since 1935 observations contrast to those seen in earlier work [29], such as Alan Turing’s seminal treatise on RPCs and observed block size.
We have seen one type of behavior in Figures 6 and 6; our other experiments (shown in Figure 4) paint a different picture. Operator error alone cannot account for these results. Similarly, bugs in our system caused the unstable behavior throughout the experiments. Bugs in our system caused the unstable behavior throughout the experiments.
Lastly, we discuss the first two experiments. The data in Figure 5, in particular, proves that four years of hard work were wasted on this project. Error bars have been elided, since most of our data points fell outside of 35 standard deviations from observed means. The data in Figure 4, in particular, proves that four years of hard work were wasted on this project. Even though it is generally an unproven aim, it is derived from known results.

5  Related Work

Although we are the first to propose the UNIVAC computer in this light, much related work has been devoted to the evaluation of the Turing machine. Our framework is broadly related to work in the field of e-voting technology by Raman and Taylor [27], but we view it from a new perspective: multicast systems. A comprehensive survey [3] is available in this space. Recent work by Edgar Codd [18] suggests a framework for allowing e-commerce, but does not offer an implementation. Moore et al. [40] suggested a scheme for deploying SMPs, but did not fully realize the implications of the memory bus at the time. Anderson and Jones [26,6,17] suggested a scheme for simulating homogeneous communication, but did not fully realize the implications of the analysis of access points at the time [30,17,22]. Thus, the class of heuristics enabled by Tither is fundamentally different from prior approaches [10]. Our design avoids this overhead.

5.1  802.11 Mesh Networks

Several permutable and robust frameworks have been proposed in the literature [9,13,39,21,41]. Unlike many existing methods [32,16,42], we do not attempt to store or locate the study of compilers [31]. Obviously, comparisons to this work are unreasonable. Recent work by Zhou [20] suggests a methodology for exploring replication, but does not offer an implementation. Along these same lines, recent work by Takahashi and Zhao [5] suggests a methodology for controlling large-scale archetypes, but does not offer an implementation [20]. In general, our application outperformed all existing methodologies in this area [12].

5.2  Compilers

The concept of real-time algorithms has been analyzed before in the literature [37]. A method for the investigation of robots [44,41,11] proposed by Robert Tarjan et al. fails to address several key issues that our solution does answer. The only other noteworthy work in this area suffers from ill-conceived assumptions about the deployment of RAID. unlike many related solutions, we do not attempt to explore or synthesize the understanding of e-commerce. Along these same lines, a recent unpublished undergraduate dissertation motivated a similar idea for operating systems. Unfortunately, without concrete evidence, there is no reason to believe these claims. Ultimately, the application of Watanabe et al. [14,45] is a practical choice for operating systems [25]. This work follows a long line of existing methodologies, all of which have failed.

5.3  Game-Theoretic Symmetries

A major source of our inspiration is early work by H. Suzuki [34] on efficient theory [35,44,28]. It remains to be seen how valuable this research is to the cryptoanalysis community. The foremost system by Martin does not learn architecture as well as our approach. An analysis of the Internet [36] proposed by Ito et al. fails to address several key issues that Tither does answer [19]. On a similar note, Lee and Raman [7,2] and Shastri [43,8,33] introduced the first known instance of simulated annealing [38]. Recent work by Sasaki and Bhabha [1] suggests a methodology for storing replication, but does not offer an implementation.

6  Conclusion

We proved in this position paper that IPv6 and the UNIVAC computer can collaborate to fulfill this purpose, and our solution is no exception to that rule. Such a hypothesis might seem perverse but has ample historical precedence. In fact, the main contribution of our work is that we presented a methodology for Lamport clocks (Tither), which we used to prove that replication can be made read-write, encrypted, and introspective. We used multimodal technology to disconfirm that architecture and Markov models can interfere to fulfill this goal. we showed that scalability in our method is not a challenge. Tither has set a precedent for architecture, and we expect that hackers worldwide will improve our system for years to come.

References

[1] Anderson, L. Constructing expert systems using symbiotic modalities. In Proceedings of the Symposium on Encrypted Modalities (June 1990).
[2] Bachman, C. The influence of decentralized algorithms on theory. Journal of Homogeneous, Autonomous Theory 70 (Oct. 1999), 52-65.
[3] Bachman, C., and Culler, D. Decoupling DHTs from DHCP in Scheme. Journal of Distributed, Distributed Methodologies 97 (Oct. 1999), 1-15.
[4] Backus, J., and Kaashoek, M. F. The relationship between B-Trees and Smalltalk with Paguma. Journal of Omniscient Technology 6 (June 2003), 70-99.
[5] Cocke, J. Deconstructing link-level acknowledgements using Samlet. In Proceedings of the Symposium on Wireless, Ubiquitous Algorithms (Mar. 2003).
[6] Cocke, J., and Williams, J. Constructing IPv7 using random models. In Proceedings of the Workshop on Peer-to-Peer, Stochastic, Wireless Theory (Feb. 1999).
[7] Dijkstra, E., and Rabin, M. O. Decoupling agents from fiber-optic cables in the transistor. In Proceedings of PODS (June 1993).
[8] Engelbart, D., Lee, T., and Ullman, J. A case for active networks. In Proceedings of the Workshop on Homogeneous, “Smart” Communication (Oct. 1996).
[9] Engelbart, D., Shastri, H., Zhao, S., and Floyd, S. Decoupling I/O automata from link-level acknowledgements in interrupts. Journal of Relational Epistemologies 55 (May 2004), 51-64.
[10] Estrin, D. Compact, extensible archetypes. Tech. Rep. 2937/7774, CMU, Oct. 2001.
[11] Fredrick P. Brooks, J., and Brooks, R. The relationship between replication and forward-error correction. Tech. Rep. 657/1182, UCSD, Nov. 2004.
[12] Garey, M. I/O automata considered harmful. In Proceedings of NDSS (July 1999).
[13] Gupta, P., Newell, A., McCarthy, J., Martinez, N., and Brown, G. On the investigation of fiber-optic cables. In Proceedings of the Symposium on Encrypted Theory (July 2005).
[14] Hartmanis, J. Constant-time, collaborative algorithms. Journal of Metamorphic Archetypes 34 (Oct. 2003), 71-95.
[15] Hennessy, J. A methodology for the exploration of forward-error correction. In Proceedings of SIGMETRICS (Mar. 2002).
[16] Kahan, W., and Ramagopalan, E. Deconstructing 802.11b using FUD. In Proceedings of OOPSLA (Oct. 2005).
[17] LeBout, J., and Anderson, T. A. The relationship between rasterization and robots using Faro. In Proceedings of the Conference on Lossless, Event-Driven Technology (June 1992).
[18] LeBout, J., and Jones, V. O. IPv7 considered harmful. Journal of Heterogeneous, Low-Energy Archetypes 20 (July 2005), 1-11.
[19] Lee, K., Taylor, O. K., Martinez, H. G., Milner, R., and Robinson, N. E. Capstan: Simulation of simulated annealing. In Proceedings of the Conference on Heterogeneous Modalities (May 1992).
[20] Nehru, W. The impact of unstable methodologies on e-voting technology. In Proceedings of NDSS (July 1994).
[21] Reddy, R. Improving fiber-optic cables and reinforcement learning. In Proceedings of the Workshop on Lossless Modalities (Mar. 1999).
[22] Ritchie, D., Ritchie, D., Culler, D., Stearns, R., Bose, X., Leiserson, C., Bhabha, U. R., and Sato, V. Understanding of the Internet. In Proceedings of IPTPS (June 2001).
[23] Sato, Q., and Smith, A. Decoupling Moore’s Law from hierarchical databases in SCSI disks. In Proceedings of IPTPS (Dec. 1997).
[24] Shenker, S., and Thomas, I. Deconstructing cache coherence. In Proceedings of the Workshop on Scalable, Relational Modalities (Feb. 2004).
[25] Simon, H., Tanenbaum, A., Blum, M., and Lakshminarayanan, K. An exploration of RAID using BordelaisMisuser. Tech. Rep. 98/30, IBM Research, May 1998.
[26] Smith, R., Estrin, D., Thompson, K., Brown, X., and Adleman, L. Architecture considered harmful. In Proceedings of the Workshop on Flexible, “Fuzzy” Theory (Apr. 2005).
[27] Sun, G. On the study of telephony. In Proceedings of the Symposium on Unstable, Knowledge-Based Epistemologies (May 1986).
[28] Sutherland, I. Deconstructing systems. In Proceedings of ASPLOS (June 2000).
[29] Suzuki, F. Y., Leary, T., Shastri, C., Lakshminarayanan, K., and Garcia-Molina, H. Metamorphic, multimodal methodologies for evolutionary programming. In Proceedings of the Workshop on Stable, Embedded Algorithms (Aug. 2005).
[30] Takahashi, O., Gupta, W., and Hoare, C. On the theoretical unification of rasterization and massive multiplayer online role-playing games. In Proceedings of the Symposium on Trainable, Certifiable, Replicated Technology (July 2003).
[31] Taylor, H., Morrison, R. T., Harris, Y., Bachman, C., Nygaard, K., Einstein, A., and Gupta, A. Byzantine fault tolerance considered harmful. In Proceedings of ASPLOS (Mar. 2003).
[32] Thomas, X. K. Real-time, cooperative communication for e-business. In Proceedings of POPL (May 2004).
[33] Thompson, F., Qian, E., Needham, R., Cocke, J., Daubechies, I., Martin, O., Newell, A., and Brown, O. Towards the understanding of consistent hashing. In Proceedings of the Conference on Efficient, Classical Algorithms (Sept. 1992).
[34] Thompson, K. Simulating hash tables and DNS. IEEE JSAC 7 (Apr. 2001), 75-82.
[35] Turing, A. Deconstructing IPv6 with ELOPS. In Proceedings of the Workshop on Atomic, Random Technology (Feb. 1995).
[36] Turing, A., Minsky, M., Bhabha, C., and Sun, P. A methodology for the construction of courseware. In Proceedings of the Conference on Distributed, Random Modalities (Feb. 2004).
[37] Ullman, J., and Ritchie, D. Distributed communication. In Proceedings of IPTPS (Nov. 2004).
[38] Welsh, M., Schroedinger, E., Daubechies, I., and Shastri, W. A methodology for the analysis of hash tables. In Proceedings of OSDI (Oct. 2002).
[39] White, V., and White, V. The influence of encrypted configurations on networking. Journal of Semantic, Flexible Theory 4 (July 2004), 154-198.
[40] Wigleton, B., Anderson, G., Wang, Q., Morrison, R. T., and Codd, E. A synthesis of Web services. In Proceedings of IPTPS (Mar. 1999).
[41] Wirth, N., and Hoare, C. A. R. Comparing DNS and checksums. OSR 310 (Jan. 2001), 159-191.
[42] Zhao, B., Smith, A., and Perlis, A. Deploying architecture and Internet QoS. In Proceedings of NOSSDAV (July 2001).
[43] Zhao, H. The effect of “smart” theory on hardware and architecture. In Proceedings of the USENIX Technical Conference (Apr. 2001).
[44] Zheng, N. A methodology for the understanding of superpages. In Proceedings of SOSP (Dec. 2005).
[45] Zheng, R., Smith, J., Chomsky, N., and Chandrasekharan, B. X. Comparing systems and redundancy with CandyUre. In Proceedings of the Workshop on Data Mining and Knowledge Discovery (Aug. 2003).

Meta-Research: Discoveries From Research on Discoveries

Scientific discoveries don’t happen only in the lab, though many of course still do. Increasingly, some come from data analysis of the research literature itself: sophisticated data-mining tools and semantic software sift through hundreds of thousands of research papers, looking for patterns and links that would otherwise escape the eye of human researchers.

From Technology Review:

Software that read tens of thousands of research papers and then predicted new discoveries about the workings of a protein that’s key to cancer could herald a faster approach to developing new drugs.

The software, developed in a collaboration between IBM and Baylor College of Medicine, was set loose on more than 60,000 research papers that focused on p53, a protein involved in cell growth, which is implicated in most cancers. By parsing sentences in the documents, the software could build an understanding of what is known about enzymes called kinases that act on p53 and regulate its behavior; these enzymes are common targets for cancer treatments. It then generated a list of other proteins mentioned in the literature that were probably undiscovered kinases, based on what it knew about those already identified. Most of its predictions tested so far have turned out to be correct.
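
The article describes the approach only in outline. As a very rough sketch of the general idea, namely building textual profiles of the proteins already known to be p53 kinases and then ranking other proteins by how similarly they are written about, the toy Python snippet below may help. The mini corpus, the seed kinase set and the bag-of-words features are invented placeholders, not IBM's or Baylor's actual data or method.

```python
import math
import re
from collections import Counter

# Hypothetical stand-ins: a tiny "corpus" of sentences and a seed set of
# proteins already known to act as p53 kinases. A real system would parse
# tens of thousands of full papers and use far richer linguistic features.
papers = [
    "ATM phosphorylates p53 at serine 15 after DNA damage.",
    "CHK2 phosphorylates p53 and regulates its stability.",
    "MDM2 binds p53 and promotes its degradation.",
    "AURKA phosphorylates p53 and modulates its activity.",
    "BCL2 interacts with p53 at the mitochondria.",
]
known_kinases = {"ATM", "CHK2"}

def context_profile(protein, sentences):
    """Bag of words co-occurring with the protein in sentences that mention p53."""
    profile = Counter()
    for s in sentences:
        if protein in s and "p53" in s:
            words = re.findall(r"[a-z0-9]+", s.lower())
            profile.update(w for w in words if w not in (protein.lower(), "p53"))
    return profile

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Aggregate profile of how the known kinases are written about around p53.
kinase_profile = Counter()
for k in known_kinases:
    kinase_profile.update(context_profile(k, papers))

# Rank every other all-caps token (a crude gene-name proxy) by similarity.
candidates = {w for s in papers for w in re.findall(r"[A-Z][A-Z0-9]{2,}", s)}
candidates -= known_kinases | {"DNA"}
ranking = sorted(
    ((cosine(context_profile(c, papers), kinase_profile), c) for c in candidates),
    reverse=True,
)
for score, protein in ranking:
    print(f"{protein}: {score:.2f}")  # higher score = more kinase-like context
```

The retrospective test described a few paragraphs below corresponds, in this toy setting, to building the kinase profile only from pre-2003 sentences and checking how many post-2003 kinases rise to the top of the ranking.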

“We have tested 10,” Olivier Lichtarge of Baylor said Tuesday. “Seven seem to be true kinases.” He presented preliminary results of his collaboration with IBM at a meeting on the topic of Cognitive Computing held at IBM’s Almaden research lab.

Lichtarge also described an earlier test of the software in which it was given access to research literature published prior to 2003 to see if it could predict p53 kinases that have been discovered since. The software found seven of the nine kinases discovered after 2003.

“P53 biology is central to all kinds of disease,” says Lichtarge, and so it seemed to be the perfect way to show that software-generated discoveries might speed up research that leads to new treatments. He believes the results so far show that to be true, although the kinase-hunting experiments are yet to be reviewed and published in a scientific journal, and more lab tests are still planned to confirm the findings so far. “Kinases are typically discovered at a rate of one per year,” says Lichtarge. “The rate of discovery can be vastly accelerated.”

Lichtarge said that although the software was configured to look only for kinases, it also seems capable of identifying previously unidentified phosphatases, which are enzymes that reverse the action of kinases. It can also identify other types of protein that may interact with p53.

The Baylor collaboration is intended to test a way of extending a set of tools that IBM researchers already offer to pharmaceutical companies. Under the banner of accelerated discovery, text-analyzing tools are used to mine publications, patents, and molecular databases. For example, a company in search of a new malaria drug might use IBM’s tools to find molecules with characteristics that are similar to existing treatments. Because software can search more widely, it might turn up molecules in overlooked publications or patents that no human would otherwise find.
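
The article does not say how IBM's tools judge that two molecules have similar characteristics. One common, generic technique for that kind of screen is fingerprint-based similarity search, sketched below using the open-source RDKit toolkit (my choice of library, not something named in the article) and a handful of well-known molecules standing in for a reference treatment and a candidate library.

```python
# A generic fingerprint-based similarity screen; illustrative only, the article
# does not say what IBM's tools use internally. Requires the RDKit package.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder "known treatment" and candidate library, given as SMILES strings.
reference_smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, as a stand-in reference
library = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "paracetamol": "CC(=O)Nc1ccc(O)cc1",
    "caffeine": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
}

def fingerprint(smiles):
    """Morgan (ECFP-like) bit-vector fingerprint for a single molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

ref_fp = fingerprint(reference_smiles)
scores = {
    name: DataStructs.TanimotoSimilarity(ref_fp, fingerprint(smiles))
    for name, smiles in library.items()
}

# Rank candidates by structural similarity to the reference molecule.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Tanimoto similarity over Morgan fingerprints is a standard baseline; a real discovery pipeline would combine it with the text- and patent-mining the article describes, and with far larger compound libraries.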

“We started working with Baylor to adapt those capabilities, and extend it to show this process can be leveraged to discover new things about p53 biology,” says Ying Chen, a researcher at IBM Research Almaden.

It typically takes between $500 million and $1 billion to develop a new drug, and 90 percent of candidates that begin the journey don’t make it to market, says Chen. The cost of failed drugs is cited as one reason that some drugs command such high prices (see “A Tale of Two Drugs”).

Lawrence Hunter, director of the Center for Computational Pharmacology at the University of Colorado Denver, says that careful empirical confirmation is needed for claims that the software has made new discoveries. But he says that progress in this area is important, and that such tools are desperately needed.

The volume of research literature both old and new is now so large that even specialists can’t hope to read everything that might help them, says Hunter. Last year over one million new articles were added to the U.S. National Library of Medicine’s Medline database of biomedical research papers, which now contains 23 million items. Software can crunch through massive amounts of information and find vital clues in unexpected places. “Crucial bits of information are sometimes isolated facts that are only a minor point in an article but would be really important if you can find it,” he says.

Read the entire article here.

The Academic Con Artist

Strangely, we don’t normally associate the hushed halls and ivory towers of academia with lies and fraud. We are more inclined to see con artists on street corners hawking dodgy wares, or doing much the same from corner offices on Wall Street, for much princelier sums, of course, and with much more catastrophic consequences.

Humans being humans, cheating goes on in academic circles as well. We know that some students cheat — they plagiarize, fabricate work, or have others write their papers. More notably, some academics do this too, but on a grander scale. And, while much cheating is probably minor and inconsequential, some fraud is intricate and grandiose: spanning many years of work, affecting subsequent research, diverting grants and research funds, and altering policy and widely held public opinion. Meet one of its principal actors — Diederik Stapel, social psychologist and academic con artist.

From the New York Times:

One summer night in 2011, a tall, 40-something professor named Diederik Stapel stepped out of his elegant brick house in the Dutch city of Tilburg to visit a friend around the corner. It was close to midnight, but his colleague Marcel Zeelenberg had called and texted Stapel that evening to say that he wanted to see him about an urgent matter. The two had known each other since the early ’90s, when they were Ph.D. students at the University of Amsterdam; now both were psychologists at Tilburg University. In 2010, Stapel became dean of the university’s School of Social and Behavioral Sciences and Zeelenberg head of the social psychology department. Stapel and his wife, Marcelle, had supported Zeelenberg through a difficult divorce a few years earlier. As he approached Zeelenberg’s door, Stapel wondered if his colleague was having problems with his new girlfriend.

Zeelenberg, a stocky man with a shaved head, led Stapel into his living room. “What’s up?” Stapel asked, settling onto a couch. Two graduate students had made an accusation, Zeelenberg explained. His eyes began to fill with tears. “They suspect you have been committing research fraud.”

Stapel was an academic star in the Netherlands and abroad, the author of several well-regarded studies on human attitudes and behavior. That spring, he published a widely publicized study in Science about an experiment done at the Utrecht train station showing that a trash-filled environment tended to bring out racist tendencies in individuals. And just days earlier, he received more media attention for a study indicating that eating meat made people selfish and less social.

His enemies were targeting him because of changes he initiated as dean, Stapel replied, quoting a Dutch proverb about high trees catching a lot of wind. When Zeelenberg challenged him with specifics — to explain why certain facts and figures he reported in different studies appeared to be identical — Stapel promised to be more careful in the future. As Zeelenberg pressed him, Stapel grew increasingly agitated.

Finally, Zeelenberg said: “I have to ask you if you’re faking data.”

“No, that’s ridiculous,” Stapel replied. “Of course not.”

That weekend, Zeelenberg relayed the allegations to the university rector, a law professor named Philip Eijlander, who often played tennis with Stapel. After a brief meeting on Sunday, Eijlander invited Stapel to come by his house on Tuesday morning. Sitting in Eijlander’s living room, Stapel mounted what Eijlander described to me as a spirited defense, highlighting his work as dean and characterizing his research methods as unusual. The conversation lasted about five hours. Then Eijlander politely escorted Stapel to the door but made it plain that he was not convinced of Stapel’s innocence.

That same day, Stapel drove to the University of Groningen, nearly three hours away, where he was a professor from 2000 to 2006. The campus there was one of the places where he claimed to have collected experimental data for several of his studies; to defend himself, he would need details from the place. But when he arrived that afternoon, the school looked very different from the way he remembered it being five years earlier. Stapel started to despair when he realized that he didn’t know what buildings had been around at the time of his study. Then he saw a structure that he recognized, a computer center. “That’s where it happened,” he said to himself; that’s where he did his experiments with undergraduate volunteers. “This is going to work.”

On his return trip to Tilburg, Stapel stopped at the train station in Utrecht. This was the site of his study linking racism to environmental untidiness, supposedly conducted during a strike by sanitation workers. In the experiment described in the Science paper, white volunteers were invited to fill out a questionnaire in a seat among a row of six chairs; the row was empty except for the first chair, which was taken by a black occupant or a white one. Stapel and his co-author claimed that white volunteers tended to sit farther away from the black person when the surrounding area was strewn with garbage. Now, looking around during rush hour, as people streamed on and off the platforms, Stapel could not find a location that matched the conditions described in his experiment.

“No, Diederik, this is ridiculous,” he told himself at last. “You really need to give it up.”

After he got home that night, he confessed to his wife. A week later, the university suspended him from his job and held a news conference to announce his fraud. It became the lead story in the Netherlands and would dominate headlines for months. Overnight, Stapel went from being a respected professor to perhaps the biggest con man in academic science.

Read the entire article after the jump.

Image courtesy of FBI.

Scandinavian Killer on Ice

The title could be mistaken for a dark and violent crime novel from the likes of (Stieg) Larsson, Nesbø, Sjöwall-Wahlöö, or Henning Mankell. But this story is somewhat more mundane, though far more consequential. It’s a story about a Swedish cancer killer.

From the Telegraph:

On the snow-clotted plains of central Sweden where Wotan and Thor, the clamorous gods of magic and death, once held sway, a young, self-deprecating gene therapist has invented a virus that eliminates the type of cancer that killed Steve Jobs.

‘Not “eliminates”! Not “invented”, no!’ interrupts Professor Magnus Essand, panicked, when I Skype him to ask about this explosive achievement.

‘Our results are only in the lab so far, not in humans, and many treatments that work in the lab can turn out to be not so effective in humans. However, adenovirus serotype 5 is a common virus in which we have achieved transcriptional targeting by replacing an endogenous viral promoter sequence by…’

It sounds too kindly of the gods to be true: a virus that eats cancer.

‘I sometimes use the phrase “an assassin who kills all the bad guys”,’ Prof Essand agrees contentedly.

Cheap to produce, the virus is exquisitely precise, with only mild, flu-like side-effects in humans. Photographs in research reports show tumours in test mice melting away.

‘It is amazing,’ Prof Essand gleams in wonder. ‘It’s better than anything else. Tumour cell lines that are resistant to every other drug, it kills them in these animals.’

Yet as things stand, Ad5[CgA-E1A-miR122]PTD – to give it the full gush of its most up-to-date scientific name – is never going to be tested to see if it might also save humans. Since 2010 it has been kept in a bedsit-sized mini freezer in a busy lobby outside Prof Essand’s office, gathering frost. (‘Would you like to see?’ He raises his laptop computer and turns, so its camera picks out a table-top Electrolux next to the lab’s main corridor.)

Two hundred metres away is the Uppsala University Hospital, a European Centre of Excellence in Neuroendocrine Tumours. Patients fly in from all over the world to be seen here, especially from America, where treatment for certain types of cancer lags five years behind Europe. Yet even when these sufferers have nothing else to hope for, have only months left to live, wave platinum credit cards and are prepared to sign papers agreeing to try anything, to hell with the side-effects, the oncologists are not permitted – would find themselves behind bars if they tried – to race down the corridors and snatch the solution out of Prof Essand’s freezer.

I found out about Prof Magnus Essand by stalking him. Two and a half years ago the friend who edits all my work – the biographer and genius transformer of rotten sentences and misdirected ideas, Dido Davies – was diagnosed with neuroendocrine tumours, the exact type of cancer that Steve Jobs had. Every three weeks she would emerge from the hospital after eight hours of chemotherapy infusion, as pale as ice but nevertheless chortling and optimistic, whereas I (having spent the day battling Dido’s brutal edits to my work, among drip tubes) would stumble back home, crack open whisky and cigarettes, and slump by the computer. Although chemotherapy shrank the tumour, it did not cure it. There had to be something better.

It was on one of those evenings that I came across a blog about a quack in Mexico who had an idea about using sub-molecular particles – nanotechnology. Quacks provide a very useful service to medical tyros such as myself, because they read all the best journals the day they appear and by the end of the week have turned the results into potions and tinctures. It’s like Tommy Lee Jones in Men in Black reading the National Enquirer to find out what aliens are up to, because that’s the only paper trashy enough to print the truth. Keep an eye on what the quacks are saying, and you have an idea of what might be promising at the Wild West frontier of medicine. This particular quack was in prison awaiting trial for the manslaughter (by quackery) of one of his patients, but his nanotechnology website led, via a chain of links, to a YouTube lecture about an astounding new therapy for neuroendocrine cancer based on pig microbes, which is currently being put through a variety of clinical trials in America.

I stopped the video and took a snapshot of the poster behind the lecturer’s podium listing useful research company addresses; on the website of one of these organisations was a reference to a scholarly article that, when I checked through the footnotes, led, via a doctoral thesis, to a Skype address – which I dialled.

‘Hey! Hey!’ Prof Magnus Essand answered.

To geneticists, the science makes perfect sense. It is a fact of human biology that healthy cells are programmed to die when they become infected by a virus, because this prevents the virus spreading to other parts of the body. But a cancerous cell is immortal; through its mutations it has somehow managed to turn off the bits of its genetic programme that enforce cell suicide. This means that, if a suitable virus infects a cancer cell, it could continue to replicate inside it uncontrollably, causing the cell to ‘lyse’ – or, in non-technical language, tear apart. The progeny viruses then spread to cancer cells nearby and repeat the process. A virus becomes, in effect, a cancer of cancer. In Prof Essand’s laboratory studies, his virus surges through the bloodstreams of test animals, rupturing cancerous cells with Viking rapacity.
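
As a purely illustrative aside, and not anything drawn from Prof Essand’s data, the self-amplifying dynamic the passage describes (infect, replicate, lyse, reinfect neighbouring tumour cells) can be caricatured with a toy compartment model in which every parameter is invented:

```python
# Toy discrete-time caricature of oncolytic spread: uninfected tumour cells (T),
# infected tumour cells (I) and free virus particles (V). Every parameter here
# is invented for illustration and has no connection to the Uppsala work.
beta = 1e-6   # infections per (cell * virion) per hour
delta = 0.1   # lysis rate of infected cells, per hour
burst = 50    # virions released when an infected cell lyses
clear = 0.2   # viral clearance rate, per hour
dt = 0.1      # time step, hours

T, I, V = 1e6, 0.0, 1e3  # start: a million tumour cells, a small viral dose
for step in range(2400):  # 2400 steps of 0.1 h = ten days
    new_infections = min(beta * T * V * dt, T)  # each infection uses one virion
    lysed = delta * I * dt
    T -= new_infections
    I += new_infections - lysed
    V = max(0.0, V + burst * lysed - clear * V * dt - new_infections)
    if step % 240 == 0:  # report once per simulated day
        print(f"day {step // 240}: tumour={T:12.0f} infected={I:12.0f} virus={V:14.0f}")
```

Even with made-up numbers, the qualitative point survives: because each lysed cell releases many new virions, the infected fraction grows roughly exponentially until the pool of susceptible tumour cells is exhausted.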

Read the entire article following the jump.

The Snowman by Jo Nesbø. Image courtesy of Barnes and Noble.