Helplessness and Intelligence Go Hand in Hand

From the Wall Street Journal:

Why are children so, well, so helpless? Why did I spend a recent Sunday morning putting blueberry pancake bits on my 1-year-old grandson’s fork and then picking them up again off the floor? And why are toddlers most helpless when they’re trying to be helpful? Augie’s vigorous efforts to sweep up the pancake detritus with a much-too-large broom (“I clean!”) were adorable but not exactly effective.

This isn’t just a caregiver’s cri de coeur—it’s also an important scientific question. Human babies and young children are an evolutionary paradox. Why must big animals invest so much time and energy just keeping the little ones alive? This is especially true of our human young, helpless and needy for far longer than the young of other primates.

One idea is that our distinctive long childhood helps to develop our equally distinctive intelligence. We have both a much longer childhood and a much larger brain than other primates. Restless humans have to learn about more different physical environments than stay-at-home chimps, and with our propensity for culture, we constantly create new social environments. Childhood gives us a protected time to master new physical and social tools, from a whisk broom to a winning comment, before we have to use them to survive.

The usual museum diorama of our evolutionary origins features brave hunters pursuing a rearing mammoth. But a Pleistocene version of the scene in my kitchen, with ground cassava roots instead of pancakes, might be more accurate, if less exciting.

Of course, many scientists are justifiably skeptical about such “just-so stories” in evolutionary psychology. The idea that our useless babies are really useful learners is appealing, but what kind of evidence could support (or refute) it? There’s still controversy, but two recent studies at least show how we might go about proving the idea empirically.

One of the problems with much evolutionary psychology is that it just concentrates on humans, or sometimes on humans and chimps. To really make an evolutionary argument, you need to study a much wider variety of animals. Is it just a coincidence that we humans have both needy children and big brains? Or will we find the same evolutionary pattern in animals who are very different from us? In 2010, Vera Weisbecker of Cambridge University and a colleague found a correlation between brain size and dependence across 52 different species of marsupials, from familiar ones like kangaroos and opossums to more exotic ones like quokkas.

Quokkas are about the same size as Virginia opossums, but baby quokkas nurse for three times as long, their parents invest more in each baby, and their brains are twice as big.

Read the entire article after the jump.

Startup Ideas

For technologists, the barriers to developing a new product have never been lower. The cost of the tools needed to develop, integrate and distribute software apps is, to all intents and purposes, negligible. Of course, most would recognize that development is often the easy part. The real difficulty lies in building an effective and sustainable marketing and communication strategy and getting the product adopted.

The recent headlines about 17-year-old British app developer Nick D’Aloisio selling his Summly app to Yahoo! for the tidy sum of $30 million have lots of young and seasoned developers scratching their heads. After all, if a school kid can do it, why not anybody? Why not me?

Paul Graham may have some of the answers. He sold his first company to Yahoo in 1998. He now runs Y Combinator, a successful startup incubator. We excerpt his recent, observant and insightful essay below.

From Paul Graham:

The way to get startup ideas is not to try to think of startup ideas. It’s to look for problems, preferably problems you have yourself.

The very best startup ideas tend to have three things in common: they’re something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Problems

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

I made it myself. In 1995 I started a company to put art galleries online. But galleries didn’t want to be online. It’s not how the art business works. So why did I spend 6 months working on this stupid idea? Because I didn’t pay attention to users. I invented a model of the world that didn’t correspond to reality, and worked from that. I didn’t notice my model was wrong until I tried to convince users to pay for what we’d built. Even then I took embarrassingly long to catch on. I was attached to my model of the world, and I’d spent a lot of time on the software. They had to want it!

Why do so many founders build things no one wants? Because they begin by trying to think of startup ideas. That m.o. is doubly dangerous: it doesn’t merely yield few good ideas; it yields bad ideas that sound plausible enough to fool you into working on them.

At YC we call these “made-up” or “sitcom” startup ideas. Imagine one of the characters on a TV show was starting a startup. The writers would have to invent something for it to do. But coming up with good startup ideas is hard. It’s not something you can do for the asking. So (unless they got amazingly lucky) the writers would come up with an idea that sounded plausible, but was actually bad.

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

Well

When a startup launches, there have to be at least some users who really need what they’re making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you’re making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can’t expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that’s broad but shallow, or one that’s narrow and deep, like a well.

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they’ll use it even when it’s a crappy version one made by a two-person startup they’ve never heard of? If you can’t answer that, the idea is probably bad.

You don’t need the narrowness of the well per se. It’s depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it’s a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it’s not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

Similarly for Microsoft: Basic for the Altair; Basic for other machines; other languages besides Basic; operating systems; applications; IPO.

Self

How do you tell whether there’s a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can’t. The founders of Airbnb didn’t realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn’t foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That’s probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it’s obvious from the beginning when there’s a path out of the initial niche. And sometimes I can see a path that’s not immediately obvious; that’s one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can’t predict whether there’s a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you’re the right sort of person, you have the right sort of hunches. If you’re at the leading edge of a field that’s changing fast, when you have a hunch that something is worth doing, you’re more likely to be right.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig says:

You want to know how to paint a perfect painting? It’s easy. Make yourself perfect and then just paint naturally.

I’ve wondered about that passage since I read it in high school. I’m not sure how useful his advice is for painting specifically, but it fits this situation well. Empirically, the way to have good startup ideas is to become the sort of person who has them.

Being at the leading edge of a field doesn’t mean you have to be one of the people pushing it forward. You can also be at the leading edge as a user. It was not so much because he was a programmer that Facebook seemed a good idea to Mark Zuckerberg as because he used computers so much. If you’d asked most 40 year olds in 2004 whether they’d like to publish their lives semi-publicly on the Internet, they’d have been horrified at the idea. But Mark already lived online; to him it seemed natural.

Paul Buchheit says that people at the leading edge of a rapidly changing field “live in the future.” Combine that with Pirsig and you get:

Live in the future, then build what’s missing.

That describes the way many if not most of the biggest startups got started. Neither Apple nor Yahoo nor Google nor Facebook were even supposed to be companies at first. They grew out of things their founders built because there seemed a gap in the world.

If you look at the way successful founders have had their ideas, it’s generally the result of some external stimulus hitting a prepared mind. Bill Gates and Paul Allen hear about the Altair and think “I bet we could write a Basic interpreter for it.” Drew Houston realizes he’s forgotten his USB stick and thinks “I really need to make my files live online.” Lots of people heard about the Altair. Lots forgot USB sticks. The reason those stimuli caused those founders to start companies was that their experiences had prepared them to notice the opportunities they represented.

The verb you want to be using with respect to startup ideas is not “think up” but “notice.” At YC we call ideas that grow naturally out of the founders’ own experiences “organic” startup ideas. The most successful startups almost all begin this way.

That may not have been what you wanted to hear. You may have expected recipes for coming up with startup ideas, and instead I’m telling you that the key is to have a mind that’s prepared in the right way. But disappointing though it may be, this is the truth. And it is a recipe of a sort, just one that in the worst case takes a year rather than a weekend.

If you’re not at the leading edge of some rapidly changing field, you can get to one. For example, anyone reasonably smart can probably get to an edge of programming (e.g. building mobile apps) in a year. Since a successful startup will consume at least 3-5 years of your life, a year’s preparation would be a reasonable investment. Especially if you’re also looking for a cofounder.

You don’t have to learn programming to be at the leading edge of a domain that’s changing fast. Other domains change fast. But while learning to hack is not necessary, it is for the foreseeable future sufficient. As Marc Andreessen put it, software is eating the world, and this trend has decades left to run.

Knowing how to hack also means that when you have ideas, you’ll be able to implement them. That’s not absolutely necessary (Jeff Bezos couldn’t) but it’s an advantage. It’s a big advantage, when you’re considering an idea like putting a college facebook online, if instead of merely thinking “That’s an interesting idea,” you can think instead “That’s an interesting idea. I’ll try building an initial version tonight.” It’s even better when you’re both a programmer and the target user, because then the cycle of generating new versions and testing them on users can happen inside one head.

Noticing

Once you’re living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you’re really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won’t be obvious is that they’re startup ideas. So if you want to find startup ideas, don’t merely turn on the filter “What’s missing?” Also turn off every other filter, particularly “Could this be a big company?” There’s plenty of time to apply that test later. But if you’re thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones.

Most things that are missing will take some time to see. You almost have to trick yourself into seeing the ideas around you.

But you know the ideas are out there. This is not one of those problems where there might not be an answer. It’s impossibly unlikely that this is the exact moment when technological progress stops. You can be sure people are going to build things in the next few years that will make you think “What did I do before x?”

And when these problems get solved, they will probably seem flamingly obvious in retrospect. What you need to do is turn off the filters that usually prevent you from seeing them. The most powerful is simply taking the current state of the world for granted. Even the most radically open-minded of us mostly do that. You couldn’t get from your bed to the front door if you stopped to question everything.

But if you’re looking for startup ideas you can sacrifice some of the efficiency of taking the status quo for granted and start to question things. Why is your inbox overflowing? Because you get a lot of email, or because it’s hard to get email out of your inbox? Why do you get so much email? What problems are people trying to solve by sending you email? Are there better ways to solve them? And why is it hard to get emails out of your inbox? Why do you keep emails around after you’ve read them? Is an inbox the optimal tool for that?

Pay particular attention to things that chafe you. The advantage of taking the status quo for granted is not just that it makes life (locally) more efficient, but also that it makes life more tolerable. If you knew about all the things we’ll get in the next 50 years but don’t have yet, you’d find present day life pretty constraining, just as someone from the present would if they were sent back 50 years in a time machine. When something annoys you, it could be because you’re living in the future.

When you find the right sort of problem, you should probably be able to describe it as obvious, at least to you. When we started Viaweb, all the online stores were built by hand, by web designers making individual HTML pages. It was obvious to us as programmers that these sites would have to be generated by software.

Which means, strangely enough, that coming up with startup ideas is a question of seeing the obvious. That suggests how weird this process is: you’re trying to see things that are obvious, and yet that you hadn’t seen.

Since what you need to do here is loosen up your own mind, it may be best not to make too much of a direct frontal attack on the problem—i.e. to sit down and try to think of ideas. The best plan may be just to keep a background process running, looking for things that seem to be missing. Work on hard problems, driven mainly by curiosity, but have a second self watching over your shoulder, taking note of gaps and anomalies.

Give yourself some time. You have a lot of control over the rate at which you turn yours into a prepared mind, but you have less control over the stimuli that spark ideas when they hit it. If Bill Gates and Paul Allen had constrained themselves to come up with a startup idea in one month, what if they’d chosen a month before the Altair appeared? They probably would have worked on a less promising idea. Drew Houston did work on a less promising idea before Dropbox: an SAT prep startup. But Dropbox was a much better idea, both in the absolute sense and also as a match for his skills.

A good way to trick yourself into noticing ideas is to work on projects that seem like they’d be cool. If you do that, you’ll naturally tend to build things that are missing. It wouldn’t seem as interesting to build something that already existed.

Just as trying to think up startup ideas tends to produce bad ones, working on things that could be dismissed as “toys” often produces good ones. When something is described as a toy, that means it has everything an idea needs except being important. It’s cool; users love it; it just doesn’t matter. But if you’re living in the future and you build something cool that users love, it may matter more than outsiders think. Microcomputers seemed like toys when Apple and Microsoft started working on them. I’m old enough to remember that era; the usual term for people with their own microcomputers was “hobbyists.” BackRub seemed like an inconsequential science project. The Facebook was just a way for undergrads to stalk one another.

At YC we’re excited when we meet startups working on things that we could imagine know-it-alls on forums dismissing as toys. To us that’s positive evidence an idea is good.

If you can afford to take a long view (and arguably you can’t afford not to), you can turn “Live in the future and build what’s missing” into something even better:

Live in the future and build what seems interesting.

School

That’s what I’d advise college students to do, rather than trying to learn about “entrepreneurship.” “Entrepreneurship” is something you learn best by doing it. The examples of the most successful founders make that clear. What you should be spending your time on in college is ratcheting yourself into the future. College is an incomparable opportunity to do that. What a waste to sacrifice an opportunity to solve the hard part of starting a startup—becoming the sort of person who can have organic startup ideas—by spending time learning about the easy part. Especially since you won’t even really learn about it, any more than you’d learn about sex in a class. All you’ll learn is the words for things.

The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you’ll probably see problems that software could solve. In fact, you’re doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don’t even know what the status quo is to take it for granted.

So if you’re a CS major and you want to start a startup, instead of taking a class on entrepreneurship you’re better off taking a class on, say, genetics. Or better still, go work for a biotech company. CS majors normally get summer jobs at computer hardware or software companies. But if you want to find startup ideas, you might do better to get a summer job in some unrelated field.

Or don’t take any extra classes, and just build things. It’s no coincidence that Microsoft and Facebook both got started in January. At Harvard that is (or was) Reading Period, when students have no classes to attend because they’re supposed to be studying for finals.

But don’t feel like you have to build things that will become startups. That’s premature optimization. Just build things. Preferably with other students. It’s not just the classes that make a university such a good place to crank oneself into the future. You’re also surrounded by other people trying to do the same thing. If you work together with them on projects, you’ll end up producing not just organic ideas, but organic ideas with organic founding teams—and that, empirically, is the best combination.

Beware of research. If an undergrad writes something all his friends start using, it’s quite likely to represent a good startup idea. Whereas a PhD dissertation is extremely unlikely to. For some reason, the more a project has to count as research, the less likely it is to be something that could be turned into a startup. I think the reason is that the subset of ideas that count as research is so narrow that it’s unlikely that a project that satisfied that constraint would also satisfy the orthogonal constraint of solving users’ problems. Whereas when students (or professors) build something as a side-project, they automatically gravitate toward solving users’ problems—perhaps even with an additional energy that comes from being freed from the constraints of research.

Competition

Because a good idea should seem obvious, when you have one you’ll tend to feel that you’re late. Don’t let that deter you. Worrying that you’re late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you’re probably not too late. It’s exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don’t discard the idea.

If you’re uncertain, ask users. The question of whether you’re too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

The question then is whether that beachhead is big enough. Or more importantly, who’s in it: if the beachhead consists of people doing something lots more people will be doing in the future, then it’s probably big enough no matter how small it is. For example, if you’re building something differentiated from competitors by the fact that it works on phones, but it only works on the newest phones, that’s probably a big enough beachhead.

Err on the side of doing things where you’ll face competitors. Inexperienced founders usually give competitors more credit than they deserve. Whether you succeed depends far more on you than on your competitors. So better a good idea with competitors than a bad one without.

You don’t need to worry about entering a “crowded market” so long as you have a thesis about what everyone else in it is overlooking. In fact that’s a very promising starting point. Google was that type of idea. Your thesis has to be more precise than “we’re going to make an x that doesn’t suck” though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn’t have the courage of their convictions, and that your plan is what they’d have done if they’d followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there’s demand and that none of the existing solutions are good enough. A startup can’t hope to enter a market that’s obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Filters

There are two more filters you’ll need to turn off if you want to notice startup ideas: the unsexy filter and the schlep filter.

Most programmers wish they could start a startup by just writing some brilliant code, pushing it to a server, and having users pay them lots of money. They’d prefer not to deal with tedious problems or get involved in messy ways with the real world. Which is a reasonable preference, because such things slow you down. But this preference is so widespread that the space of convenient startup ideas has been stripped pretty clean. If you let your mind wander a few blocks down the street to the messy, tedious ideas, you’ll find valuable ones just sitting there waiting to be implemented.

The schlep filter is so dangerous that I wrote a separate essay about the condition it induces, which I called schlep blindness. I gave Stripe as an example of a startup that benefited from turning off this filter, and a pretty striking example it is. Thousands of programmers were in a position to see this idea; thousands of programmers knew how painful it was to process payments before Stripe. But when they looked for startup ideas they didn’t see this one, because unconsciously they shrank from having to deal with payments. And dealing with payments is a schlep for Stripe, but not an intolerable one. In fact they might have had net less pain; because the fear of dealing with payments kept most people away from this idea, Stripe has had comparatively smooth sailing in other areas that are sometimes painful, like user acquisition. They didn’t have to try very hard to make themselves heard by users, because users were desperately waiting for what they were building.

The unsexy filter is similar to the schlep filter, except it keeps you from working on problems you despise rather than ones you fear. We overcame this one to work on Viaweb. There were interesting things about the architecture of our software, but we weren’t interested in ecommerce per se. We could see the problem was one that needed to be solved though.

Turning off the schlep filter is more important than turning off the unsexy filter, because the schlep filter is more likely to be an illusion. And even to the degree it isn’t, it’s a worse form of self-indulgence. Starting a successful startup is going to be fairly laborious no matter what. Even if the product doesn’t entail a lot of schleps, you’ll still have plenty dealing with investors, hiring and firing people, and so on. So if there’s some idea you think would be cool but you’re kept away from by fear of the schleps involved, don’t worry: any sufficiently good idea will have as many.

The unsexy filter, while still a source of error, is not as entirely useless as the schlep filter. If you’re at the leading edge of a field that’s changing rapidly, your ideas about what’s sexy will be somewhat correlated with what’s valuable in practice. Particularly as you get older and more experienced. Plus if you find an idea sexy, you’ll work on it more enthusiastically.

Recipes

While the best way to discover startup ideas is to become the sort of person who has them and then build whatever interests you, sometimes you don’t have that luxury. Sometimes you need an idea now. For example, if you’re working on a startup and your initial idea turns out to be bad.

For the rest of this essay I’ll talk about tricks for coming up with startup ideas on demand. Although empirically you’re better off using the organic strategy, you could succeed this way. You just have to be more disciplined. When you use the organic method, you don’t even notice an idea unless it’s evidence that something is truly missing. But when you make a conscious effort to think of startup ideas, you have to replace this natural constraint with self-discipline. You’ll see a lot more ideas, most of them bad, so you need to be able to filter them.

One of the biggest dangers of not using the organic method is the example of the organic method. Organic ideas feel like inspirations. There are a lot of stories about successful startups that began when the founders had what seemed a crazy idea but “just knew” it was promising. When you feel that about an idea you’ve had while trying to come up with startup ideas, you’re probably mistaken.

When searching for ideas, look in areas where you have some expertise. If you’re a database expert, don’t build a chat app for teenagers (unless you’re also a teenager). Maybe it’s a good idea, but you can’t trust your judgment about that, so ignore it. There have to be other ideas that involve databases, and whose quality you can judge. Do you find it hard to come up with good ideas involving databases? That’s because your expertise raises your standards. Your ideas about chat apps are just as bad, but you’re giving yourself a Dunning-Kruger pass in that domain.

The place to start looking for ideas is things you need. There must be things you need.

One good trick is to ask yourself whether in your previous job you ever found yourself saying “Why doesn’t someone make x? If someone made x we’d buy it in a second.” If you can think of any x people said that about, you probably have an idea. You know there’s demand, and people don’t say that about things that are impossible to build.

More generally, try asking yourself whether there’s something unusual about you that makes your needs different from most other people’s. You’re probably not the only one. It’s especially good if you’re different in a way people will increasingly be.

If you’re changing ideas, one unusual thing about you is the idea you’d previously been working on. Did you discover any needs while working on it? Several well-known startups began this way. Hotmail began as something its founders wrote to talk about their previous startup idea while they were working at their day jobs.

A particularly promising way to be unusual is to be young. Some of the most valuable new ideas take root first among people in their teens and early twenties. And while young founders are at a disadvantage in some respects, they’re the only ones who really understand their peers. It would have been very hard for someone who wasn’t a college student to start Facebook. So if you’re a young founder (under 23 say), are there things you and your friends would like to do that current technology won’t let you?

The next best thing to an unmet need of your own is an unmet need of someone else. Try talking to everyone you can about the gaps they find in the world. What’s missing? What would they like to do that they can’t? What’s tedious or annoying, particularly in their work? Let the conversation get general; don’t be trying too hard to find startup ideas. You’re just looking for something to spark a thought. Maybe you’ll notice a problem they didn’t consciously realize they had, because you know how to solve it.

When you find an unmet need that isn’t your own, it may be somewhat blurry at first. The person who needs something may not know exactly what they need. In that case I often recommend that founders act like consultants—that they do what they’d do if they’d been retained to solve the problems of this one user. People’s problems are similar enough that nearly all the code you write this way will be reusable, and whatever isn’t will be a small price to start out certain that you’ve reached the bottom of the well.

One way to ensure you do a good job solving other people’s problems is to make them your own. When Rajat Suri of E la Carte decided to write software for restaurants, he got a job as a waiter to learn how restaurants worked. That may seem like taking things to extremes, but startups are extreme. We love it when founders do such things.

In fact, one strategy I recommend to people who need a new idea is not merely to turn off their schlep and unsexy filters, but to seek out ideas that are unsexy or involve schleps. Don’t try to start Twitter. Those ideas are so rare that you can’t find them by looking for them. Make something unsexy that people will pay you for.

A good trick for bypassing the schlep and to some extent the unsexy filter is to ask what you wish someone else would build, so that you could use it. What would you pay for right now?

Since startups often garbage-collect broken companies and industries, it can be a good trick to look for those that are dying, or deserve to, and try to imagine what kind of company would profit from their demise. For example, journalism is in free fall at the moment. But there may still be money to be made from something like journalism. What sort of company might cause people in the future to say “this replaced journalism” on some axis?

But imagine asking that in the future, not now. When one company or industry replaces another, it usually comes in from the side. So don’t look for a replacement for x; look for something that people will later say turned out to be a replacement for x. And be imaginative about the axis along which the replacement occurs. Traditional journalism, for example, is a way for readers to get information and to kill time, a way for writers to make money and to get attention, and a vehicle for several different types of advertising. It could be replaced on any of these axes (it has already started to be on most).

When startups consume incumbents, they usually start by serving some small but important market that the big players ignore. It’s particularly good if there’s an admixture of disdain in the big players’ attitude, because that often misleads them. For example, after Steve Wozniak built the computer that became the Apple I, he felt obliged to give his then-employer Hewlett-Packard the option to produce it. Fortunately for him, they turned it down, and one of the reasons they did was that it used a TV for a monitor, which seemed intolerably déclassé to a high-end hardware company like HP was at the time.

Are there groups of scruffy but sophisticated users like the early microcomputer “hobbyists” that are currently being ignored by the big players? A startup with its sights set on bigger things can often capture a small market easily by expending an effort that wouldn’t be justified by that market alone.

Similarly, since the most successful startups generally ride some wave bigger than themselves, it could be a good trick to look for waves and ask how one could benefit from them. The prices of gene sequencing and 3D printing are both experiencing Moore’s Law-like declines. What new things will we be able to do in the new world we’ll have in a few years? What are we unconsciously ruling out as impossible that will soon be possible?

Organic

But talking about looking explicitly for waves makes it clear that such recipes are plan B for getting startup ideas. Looking for waves is essentially a way to simulate the organic method. If you’re at the leading edge of some rapidly changing field, you don’t have to look for waves; you are the wave.

Finding startup ideas is a subtle business, and that’s why most people who try fail so miserably. It doesn’t work well simply to try to think of startup ideas. If you do that, you get bad ones that sound dangerously plausible. The best approach is more indirect: if you have the right sort of background, good startup ideas will seem obvious to you. But even then, not immediately. It takes time to come across situations where you notice something missing. And often these gaps won’t seem to be ideas for companies, just things that would be interesting to build. Which is why it’s good to have the time and the inclination to build things just because they’re interesting.

Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.

Read the entire article after the jump.

Image: Nick D’Aloisio with his Summly app. Courtesy of Telegraph.

Farmscrapers

No, the drawing is not a construction from the mind of sci-fi illustrator extraordinaire Michael Whelan. This is reality. Or, to be more precise, an architectural rendering of buildings to come (in China, of course).

From the Independent:

A French architecture firm has unveiled their new ambitious ‘farmscraper’ project – six towering structures which promise to change the way that we think about green living.

Vincent Callebaut Architects’ innovative Asian Cairns was planned specifically for Chinese city Shenzhen in response to the growing population, increasing CO2 emissions and urban development.

The structures will consist of a series of pebble-shaped levels – each connected by a central spinal column – which will contain residential areas, offices, and leisure spaces.

Sustainability is key to the innovative project – wind turbines will cover the roof of each tower, water recycling systems will be in place to recycle waste water, and solar panels will be installed on the buildings, providing renewable energy. The structures will also have gardens on the exterior, further adding to the project’s green credentials.

Vincent Callebaut, the Belgian architect behind the firm, is well-known for his ambitious, eco-friendly projects, winning many awards over the years.

His self-sufficient amphibious city Lilypad – ‘a floating ecopolis for climate refugees’ – is perhaps his most famous design. The model has been proposed as a long-term solution to rising water levels, and successfully meets the four challenges of climate, biodiversity, water, and health, that the OECD laid out in 2008.

Vincent Callebaut Architects said: “It is a prototype to build a green, dense, smart city connected by technology and eco-designed from biotechnologies.”

Read the entire article and see more illustrations after the jump.

Image: “Farmscrapers” take eco-friendly architecture to dizzying heights in China. Courtesy of Vincent Callebaut Architects / Independent.

Custom Does Not Freedom Make

Those of us who live relatively comfortable lives in the West are confronted with numerous and not insignificant stresses on a daily basis. There are the stresses of politics, parenting, work-life balance, intolerance and finances, to name but a few.

Yet, for all the negatives, it is often useful to put our toils and troubles into a clearer perspective. Sometimes a simple story is quite enough. This story is about a Saudi woman who dared to drive. In Saudi Arabia it is not illegal for women to drive, but it is against custom. May Manal al-Sharif and other “custom fighters” like her live long and prosper.

From the Wall Street Journal:

“You know when you have a bird, and it’s been in a cage all its life? When you open the cage door, it doesn’t want to leave. It was that moment.”

This is how Manal al-Sharif felt the first time she sat behind the wheel of a car in Saudi Arabia. The kingdom’s taboo against women driving is only rarely broken. To hear her recount the experience is as thrilling as it must have been to sit in the passenger seat beside her. Well, almost.

Ms. Sharif says her moment of hesitation didn’t last long. She pressed the gas pedal and in an instant her Cadillac SUV rolled forward. She spent the next hour circling the streets of Khobar, in the kingdom’s eastern province, while a friend used an iPhone camera to record the journey.

It was May 2011, when much of the Middle East was convulsed with popular uprisings. Saudi women’s-rights activists were stirring, too. They wondered if the Arab Spring would mark the end of the kingdom’s ban on women driving. “Everyone around me was complaining about the ban but no one was doing anything,” Ms. Sharif says. “The Arab Spring was happening all around us, so that inspired me to say, ‘Let’s call for an action instead of complaining.’”

The campaign started with a Facebook page urging Saudi women to drive on a designated day, June 17, 2011. At first the page created great enthusiasm among activists. But then critics began injecting fear on and off the page. “The opponents were saying that ‘there are wolves in the street, and they will rape you if you drive,’” Ms. Sharif recalls. “There needed to be one person who could break that wall, to make the others understand that ‘it’s OK, you can drive in the street. No one will rape you.’”

Ms. Sharif resolved to be that person, and the video she posted of herself driving around Khobar on May 17 became an instant YouTube hit. The news spread across Saudi media, too, and not all of the reactions were positive. Ms. Sharif received threatening phone calls and emails. “You have just opened the gates of hell on yourself,” said an Islamist cleric. “Your grave is waiting,” read one email.

Aramco, the national oil company where she was working as a computer-security consultant at the time, wasn’t pleased, either. Ms. Sharif recalls that her manager scolded her: “What the hell are you doing?” In response, Ms. Sharif requested two weeks off. Before leaving on vacation, however, she wrote a message to her boss on an office blackboard: “2011. Mark this year. It will change every single rule that you know. You cannot lecture me about what I’m doing.”

It was a stunning act of defiance in a country that takes very seriously the Quran’s teaching: “Men are in charge of women.” But less than a week after her first outing, Ms. Sharif got behind the wheel again, this time accompanied by her brother and his wife and child. “Where are the traffic police?” she recalls asking her brother as she put pedal to the metal once more. A rumor had been circulating that, since the driving ban isn’t codified in law, the police wouldn’t confront female drivers. “I wanted to test this,” she says.

The rumor was wrong. As she recounts, a traffic officer stopped the car, and soon members of the Committee for the Promotion of Virtue and Prevention of Vice, the Saudi morality police, surrounded the car. “Girl!” screamed one. “Get out! We don’t allow women to drive!” Ms. Sharif and her brother were arrested and detained for six hours, during which time she stood her ground.

“Sir, what law did I break?” she recalls repeatedly asking her interrogators. “You didn’t break any law,” they’d say. “You violated orf”—custom.

Read the entire article after the jump.

Image: Manal al-Sharif (Manal Abd Masoud Almnami al-Sharif). Courtesy of Wikipedia.

Chomsky

Chomsky. It’s highly likely that the mere sound of his name will polarize you. You will find yourself either for Noam Chomsky or adamantly against him. You will either stand with him on the Arab-Israeli conflict or you won’t; you either support his libertarian-socialist views or you’re firmly against them; you either agree with him on issues of privacy and authority or you don’t. However, regardless of your position on the Chomsky support scale, you have to acknowledge that once he’s gone (he is 84 years old) he will be remembered as one of the world’s great contemporary thinkers and writers. In the same mold as George Orwell, who was one of his early influences, Chomsky speaks truth to power. Whether the topic is political criticism, mass media, analytic philosophy, the military-industrial complex, computer science or linguistics, the range of Chomsky’s discourse is astonishing, and his opinion is not to be ignored.

From the Guardian:

It may have been pouring with rain, water overrunning the gutters and spreading fast and deep across London’s Euston Road, but this did not stop a queue forming, and growing until it snaked almost all the way back to Euston station. Inside Friends House, a Quaker-run meeting hall, the excitement was palpable. People searched for friends and seats with thinly disguised anxiety; all watched the stage until, about 15 minutes late, a short, slightly top-heavy old man climbed carefully on to the stage and sat down. The hall filled with cheers and clapping, with whoops and with whistles.

Noam Chomsky, said two speakers (one of them Mariam Said, whose late husband, Edward, this lecture honours) “needs no introduction”. A tired turn of phrase, but they had a point: in a bookshop down the road the politics section is divided into biography, reference, the Clintons, Obama, Thatcher, Marx, and Noam Chomsky. He topped the first Foreign Policy/Prospect Magazine list of global thinkers in 2005 (the most recent, however, perhaps reflecting a new editorship and a new rubric, lists him not at all). One study of the most frequently cited academic sources of all time found that he ranked eighth, just below Plato and Freud. The list included the Bible.

When he starts speaking, it is in a monotone that makes no particular rhetorical claim on the audience’s attention; in fact, it’s almost soporific. Last October, he tells his audience, he visited Gaza for the first time. Within five minutes many of the hallmarks of Chomsky’s political writing, and speaking, are displayed: his anger, his extraordinary range of reference and experience – journalism from inside Gaza, personal testimony, detailed knowledge of the old Egyptian government, its secret service, the new Egyptian government, the historical context of the Israeli occupation, recent news reports (of sewage used by the Egyptians to flood tunnels out of Gaza, and by Israelis to spray non-violent protesters). Fact upon fact upon fact, but also a withering, sweeping sarcasm – the atrocities are “tolerated politely by Europe as usual”. Harsh, vivid phrases – the “hideously charred corpses of murdered infants”; bodies “writhing in agony” – unspool until they become almost a form of punctuation.

You could argue that the latter is necessary, simply a description of atrocities that must be reported, but it is also a method that has diminishing returns. The facts speak for themselves; the adjectives and the sarcasm have the counterintuitive effect of cheapening them, of imposing on the world a disappointingly crude and simplistic argument. “The sentences,” wrote Larissa MacFarquhar in a brilliant New Yorker profile of Chomsky 10 years ago, “are accusations of guilt, but not from a position of innocence or hope for something better: Chomsky’s sarcasm is the scowl of a fallen world, the sneer of hell’s veteran to its appalled naifs” – and thus, in an odd way, static and ungenerative.

Chomsky first came to prominence in 1959, with the argument, detailed in a book review (but already present in his first book, published two years earlier), that contrary to the prevailing idea that children learned language by copying and by reinforcement (ie behaviourism), basic grammatical arrangements were already present at birth. The argument revolutionised the study of linguistics; it had fundamental ramifications for anyone studying the mind. It also has interesting, even troubling ramifications for his politics. If we are born with innate structures of linguistic and by extension moral thought, isn’t this a kind of determinism that denies political agency? What is the point of arguing for any change at all?

“The most libertarian positions accept the same view,” he answers. “That there are instincts, basic conditions of human nature that lead to a preferred social order. In fact, if you’re in favour of any policy – reform, revolution, stability, regression, whatever – if you’re at least minimally moral, it’s because you think it’s somehow good for people. And good for people means conforming to their fundamental nature. So whoever you are, whatever your position is, you’re making some tacit assumptions about fundamental human nature … The question is: what do we strive for in developing a social order that is conducive to fundamental human needs? Are human beings born to be servants to masters, or are they born to be free, creative individuals who work with others to inquire, create, develop their own lives? I mean, if humans were totally unstructured creatures, they would be … a tool which can properly be shaped by outside forces. That’s why if you look at the history of what’s called radical behaviourism, [where] you can be completely shaped by outside forces – when [the advocates of this] spell out what they think society ought to be, it’s totalitarian.”

Chomsky, now 84, has been politically engaged all his life; his first published article, in fact, was against fascism, and written when he was 10. Where does the anger come from? “I grew up in the Depression. My parents had jobs, but a lot of the family were unemployed working class, so they had no jobs at all. So I saw poverty and repression right away. People would come to the door trying to sell rags – that was when I was four years old. I remember riding with my mother in a trolley car and passing a textile worker’s strike where the women were striking outside and the police were beating them bloody.”

He met Carol, who would become his wife, at about the same time, when he was five years old. They married when she was 19 and he 21, and were together until she died nearly 60 years later, in 2008. He talks about her constantly, given the chance: how she was so strict about his schedule when they travelled (she often accompanied him on lecture tours) that in Latin America they called her El Comandante; the various bureaucratic scrapes they got into, all over the world. By all accounts, she also enforced balance in his life: made sure he watched an hour of TV a night, went to movies and concerts, encouraged his love of sailing (at one point, he owned a small fleet of sailboats, plus a motorboat); she water-skied until she was 75.

But she was also politically involved: she took her daughters (they had three children: two girls and a boy) to demonstrations; he tells me a story about how, when they were protesting against the Vietnam war, they were once both arrested on the same day. “And you get one phone call. So my wife called our older daughter, who was at that time 12, I guess, and told her, ‘We’re not going to come home tonight, can you take care of the two kids?’ That’s life.” At another point, when it looked like he would be jailed for a long time, she went back to school to study for a PhD, so that she could support the children alone. It makes no sense, he told an interviewer a couple of years ago, for a woman to die before her husband, “because women manage so much better, they talk and support each other. My oldest and closest friend is in the office next door to me; we haven’t once talked about Carol.” His eldest daughter often helps him now. “There’s a transition point, in some way.”

Does he think that in all these years of talking and arguing and writing, he has ever changed one specific thing? “I don’t think any individual changes anything alone. Martin Luther King was an important figure but he couldn’t have said: ‘This is what I changed.’ He came to prominence on a groundswell that was created by mostly young people acting on the ground. In the early years of the antiwar movement we were all doing organising and writing and speaking and gradually certain people could do certain things more easily and effectively, so I pretty much dropped out of organising – I thought the teaching and writing was more effective. Others, friends of mine, did the opposite. But they’re not less influential. Just not known.”

Read the entire article following the jump.

Old Masters or Dirty Old Men?

A recent proposal to ban all pornography across Europe has raised some interesting questions, not least of which is how to classify the numerous canvases featuring nudes (mostly women, of course) and sexual fantasies hanging prominently in most of Europe’s museums and galleries. Are Europe’s old masters, such as Titian, Botticelli, Rubens, Rousseau and Manet, pornographers?

From the Guardian:

A proposal to ban all pornography in Europe, recently unearthed by freedom of information campaigners in an EU report, raises an intriguing question. Would this only apply to photography and video, or do reformers also plan to rid Europe of all those lewd paintings by Titian and his contemporaries that joyously celebrate sex in the continent’s most civilised art galleries?

Europe’s great artists were making pornography long before the invention of the camera, let alone the internet. In my new book The Loves of the Artists, I argue that sexual gratification – of both the viewers of art, and artists themselves – was a fundamental drive of high European culture in the age of the old masters. Paintings were used as sexual stimuli, as visual lovers’ guides, as aids to fantasy. This was considered one of the most serious uses of art by no less a thinker than Leonardo da Vinci, who claimed images are better than words because pictures can directly arouse the senses. He was proud that he once painted a Madonna so sexy the owner asked for all its religious trappings to be removed, out of shame for the inappropriate lust it inspired. His painting of St John the Baptist is similarly ambiguous.

This was not a new attitude to art in the Renaissance. As the upcoming exhibition of ancient Pompeii at the British Museum will doubtless show, the ancient Romans also delighted in pornography. Some pornographic paintings now kept in the famous “Secret Museum” of ancient erotica in Naples came from Pompeii’s brothels – which makes their function very clear. In the Renaissance, which revered everything classical, ancient Roman sexual imagery was well known to collectors and artists. A notorious classical erotic statue owned by the plutocrat Agostino Chigi caused the 16th-century writer Pietro Aretino to remark, “why should the eyes be denied what delights them most?”

Aretino was a libertarian campaigner long before today’s ethical and political conflicts over pornography. He helped get the engraver Marcantonio Raimondi released from prison after the artist was jailed for publishing a series of erotic prints called The Positions – they depict various sexual positions – then wrote a set of obscene verses to accompany a new edition of what became a European bestseller. Aretino was a close friend of Titian, whose paintings share his licentious delight in sexuality.

Read the entire article following the jump.

Image: Venus of Urbino (Venere di Urbino), 1538 by Titian, Courtesy of Uffizi, Florence / Wikipedia.

MondayMap: Quiet News Day = Map of the Universe

It was surely a quiet news day on March 21, 2013: most major online news outlets showed a fresh map of the Cosmic Microwave Background (CMB) on the front page. The map was captured by the Planck space telescope, operated by the European Space Agency, over a period of 15 months. The image shows a landscape of primordial cosmic microwaves from when the universe was only around 380,000 years old, radiation often referred to as the universe’s “first light”.

From ESA:

Acquired by ESA’s Planck space telescope, the most detailed map ever created of the cosmic microwave background – the relic radiation from the Big Bang – was released today revealing the existence of features that challenge the foundations of our current understanding of the Universe.

The image is based on the initial 15.5 months of data from Planck and is the mission’s first all-sky picture of the oldest light in our Universe, imprinted on the sky when it was just 380 000 years old.

At that time, the young Universe was filled with a hot dense soup of interacting protons, electrons and photons at about 2700ºC. When the protons and electrons joined to form hydrogen atoms, the light was set free. As the Universe has expanded, this light today has been stretched out to microwave wavelengths, equivalent to a temperature of just 2.7 degrees above absolute zero.
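
For readers who like to check the arithmetic, here is a minimal back-of-the-envelope sketch (our own, not part of the ESA release). It uses the standard scaling of the radiation temperature with cosmic expansion, T(z) = T0 * (1 + z), to show how the roughly 2700ºC plasma of that era corresponds to today’s 2.7-degree microwave glow.

```python
# Back-of-the-envelope sketch: how much has the "first light" been stretched?
# The CMB temperature scales with redshift as T(z) = T0 * (1 + z).
T_RECOMBINATION_K = 2700 + 273.15   # ~2700 degrees C quoted above, in kelvin
T_TODAY_K = 2.725                   # present-day CMB temperature, in kelvin

stretch_factor = T_RECOMBINATION_K / T_TODAY_K   # equals 1 + z
print(f"Wavelengths have been stretched by a factor of about {stretch_factor:.0f}")
# -> roughly 1100, which is why visible/infrared light has become microwaves
```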

This ‘cosmic microwave background’ – CMB – shows tiny temperature fluctuations that correspond to regions of slightly different densities at very early times, representing the seeds of all future structure: the stars and galaxies of today.

According to the standard model of cosmology, the fluctuations arose immediately after the Big Bang and were stretched to cosmologically large scales during a brief period of accelerated expansion known as inflation.

Planck was designed to map these fluctuations across the whole sky with greater resolution and sensitivity than ever before. By analysing the nature and distribution of the seeds in Planck’s CMB image, we can determine the composition and evolution of the Universe from its birth to the present day.

Overall, the information extracted from Planck’s new map provides an excellent confirmation of the standard model of cosmology at an unprecedented accuracy, setting a new benchmark in our manifest of the contents of the Universe.

But because the precision of Planck’s map is so high, it also made it possible to reveal some peculiar unexplained features that may well require new physics to be understood.

“The extraordinary quality of Planck’s portrait of the infant Universe allows us to peel back its layers to the very foundations, revealing that our blueprint of the cosmos is far from complete. Such discoveries were made possible by the unique technologies developed for that purpose by European industry,” says Jean-Jacques Dordain, ESA’s Director General.

“Since the release of Planck’s first all-sky image in 2010, we have been carefully extracting and analysing all of the foreground emissions that lie between us and the Universe’s first light, revealing the cosmic microwave background in the greatest detail yet,” adds George Efstathiou of the University of Cambridge, UK.

One of the most surprising findings is that the fluctuations in the CMB temperatures at large angular scales do not match those predicted by the standard model – their signals are not as strong as expected from the smaller scale structure revealed by Planck.

Read the entire article after the jump.

Image: Cosmic microwave background (CMB) seen by Planck. Courtesy of ESA (European Space Agency).

Jim’ll Paint It

Art can make you think; art can make you smile. Falling more towards the latter category is “Jim’ll Paint It”. Microsoft’s arcane Paint program seems positively antiquated compared with more recent and powerful drawing apps. However, in the hands of an accomplished artist Paint still shines. In the hands of Jim it radiates. At his Jim’ll Paint It Tumblr account Jim takes requests — however crazy — and renders them beautifully and with humor. In his own words:

I am here to make your wildest dreams a reality using nothing but Microsoft Paint (no tablets, no touch ups). Ask me to paint anything you wish and I will try no matter how specific or surreal your demands. While there aren’t enough hours in the day to physically paint every suggestion I will consider them all. Bonus points for originality and humour. Use your imagination!

From the Guardian:

Is all art nostalgic? Is it only when something is in the past, however recent, that it becomes interesting artistically?

I say this after perusing Jim’ll Paint It, where a guy called Jim offers to depict people’s craziest suggestions using Microsoft Paint, the graphics software included with all versions of Windows that now looks limited and “old-fashioned” compared with iPad art.

For anyone who is really trapped in the past, daddy-o, I am talking here about “painting” on a computer screen, not making a mess with gooey colours and real brushes. Using his archaically primitive Paint software, Jim has recently created scenes that include Jesus riding a motorbike into Hitler’s bunker, Nigella Lawson eating a plate of processors and Brian Blessed riding a vacuum cleaner.

His style is like a South Park storyboard, which I suppose tells us about how South Park is drawn. In fact, Jim reveals how familiar the visual lexicon of Microsoft Paint actually is in contemporary culture. By being simplified and unrealistic, it is arguably wittier, more imaginative and therefore more arty than paintings made on a tablet computer or smart phone that look like … well, like paintings.

Digital culture is as saturated in nostalgia as any previous form of culture. In a world where gadgets and software packages are constantly being reinvented, earlier phases of modernity are relegated to a sentimental past. MS Paint is still current but one day it will be as archaic as Pong.

Read the entire article following the jump.

Image: One of our favorites from Jim’ll Paint It — “Please paint me Jimi Hendrix explaining to an owl on his shoulder what a stick of chalk is, near a forest”. Courtesy of Jim’ll Paint It.

You Are a Google Datapoint

At first glance Google’s aim to make all known information accessible and searchable seems to be a fundamentally worthy goal, and in keeping with its “Don’t be evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.

However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, to your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.

Google continues to extend its technology boundaries and its vast indexed database of information. Now, with the introduction of Google Glass, the company extends its domain to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would give a wearer a fair degree of control over how and what data is collected and shared with Google, they do nothing for the people being observed.

So, beware the next time you are sitting in a Starbucks or shopping in a mall or riding the subway: you may be being recorded and your digital essence distributed over the internet. Perhaps someone somewhere will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization are no less troubling. Remember, to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).

From the Telegraph:

In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.

So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?

There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.

Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.

The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.

Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.

Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.

More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.

It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.

They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.

The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.

Read the entire article after the jump.

Image: Google’s Sergey Brin wearing Google Glass. Courtesy of CBS News.

Heard the One About the Physicist and the Fashion Model?

You could be forgiven for mistakenly assuming this story to be a work of pop fiction from the colorful and restless minds of Quentin Tarantino or the Coen brothers. But in another example of life mirroring art, it’s all true.

From the New York Times:

In November 2011, Paul Frampton, a theoretical particle physicist, met Denise Milani, a Czech bikini model, on the online dating site Mate1.com. She was gorgeous — dark-haired and dark-eyed, with a supposedly natural DDD breast size. In some photos, she looked tauntingly steamy; in others, she offered a warm smile. Soon, Frampton and Milani were chatting online nearly every day. Frampton would return home from campus — he’d been a professor in the physics and astronomy department at the University of North Carolina at Chapel Hill for 30 years — and his computer would buzz. “Are you there, honey?” They’d chat on Yahoo Messenger for a while, and then he’d go into the other room to take care of something. A half-hour later, there was the familiar buzz. It was always Milani. “What are you doing now?”

Frampton had been very lonely since his divorce three years earlier; now it seemed those days were over. Milani told him she was longing to change her life. She was tired, she said, of being a “glamour model,” of posing in her bikini on the beach while men ogled her. She wanted to settle down, have children. But she worried what he thought of her. “Do you think you could ever be proud of someone like me?” Of course he could, he assured her.

Frampton tried to get Milani to talk on the phone, but she always demurred. When she finally agreed to meet him in person, she asked him to come to La Paz, Bolivia, where she was doing a photo shoot. On Jan. 7, 2012, Frampton set out for Bolivia via Toronto and Santiago, Chile. At 68, he dreamed of finding a wife to bear him children — and what a wife. He pictured introducing her to his colleagues. One thing worried him, though. She had told him that men hit on her all the time. How did that acclaim affect her? Did it go to her head? But he remembered how comforting it felt to be chatting with her, like having a companion in the next room. And he knew she loved him. She’d said so many times.

Frampton didn’t plan on a long trip. He needed to be back to teach. So he left his car at the airport. Soon, he hoped, he’d be returning with Milani on his arm. The first thing that went wrong was that the e-ticket Milani sent Frampton for the Toronto-Santiago leg of his journey turned out to be invalid, leaving him stranded in the Toronto airport for a full day. Frampton finally arrived in La Paz four days after he set out. He hoped to meet Milani the next morning, but by then she had been called away to another photo shoot in Brussels. She promised to send him a ticket to join her there, so Frampton, who had checked into the Eva Palace Hotel, worked on a physics paper while he waited for it to arrive. He and Milani kept in regular contact. A ticket to Buenos Aires eventually came, with the promise that another ticket to Brussels was on the way. All Milani asked was that Frampton do her a favor: bring her a bag that she had left in La Paz.

While in Bolivia, Frampton corresponded with an old friend, John Dixon, a physicist and lawyer who lives in Ontario. When Frampton explained what he was up to, Dixon became alarmed. His warnings to Frampton were unequivocal, Dixon told me not long ago, still clearly upset: “I said: ‘Well, inside that suitcase sewn into the lining will be cocaine. You’re in big trouble.’ Paul said, ‘I’ll be careful, I’ll make sure there isn’t cocaine in there and if there is, I’ll ask them to remove it.’ I thought they were probably going to kidnap him and torture him to get his money. I didn’t know he didn’t have money. I said, ‘Well, you’re going to be killed, Paul, so whom should I contact when you disappear?’ And he said, ‘You can contact my brother and my former wife.’ ” Frampton later told me that he shrugged off Dixon’s warnings about drugs as melodramatic, adding that he rarely pays attention to the opinions of others.

On the evening of Jan. 20, nine days after he arrived in Bolivia, a man Frampton describes as Hispanic but whom he didn’t get a good look at handed him a bag out on the dark street in front of his hotel. Frampton was expecting to be given an Hermès or a Louis Vuitton, but the bag was an utterly commonplace black cloth suitcase with wheels. Once he was back in his room, he opened it. It was empty. He wrote to Milani, asking why this particular suitcase was so important. She told him it had “sentimental value.” The next morning, he filled it with his dirty laundry and headed to the airport.

Frampton flew from La Paz to Buenos Aires, crossing the border without incident. He says that he spent the next 40 hours in Ezeiza airport, without sleeping, mainly “doing physics” and checking his e-mail regularly in hopes that an e-ticket to Brussels would arrive. But by the time the ticket materialized, Frampton had gotten a friend to send him a ticket to Raleigh. He had been gone for 15 days and was ready to go home. Because there was always the chance that Milani would come to North Carolina and want her bag, he checked two bags, his and hers, and went to the gate. Soon he heard his name called over the loudspeaker. He thought it must be for an upgrade to first class, but when he arrived at the airline counter, he was greeted by several policemen. Asked to identify his luggage — “That’s my bag,” he said, “the other one’s not my bag, but I checked it in” — he waited while the police tested the contents of a package found in the “Milani” suitcase. Within hours, he was under arrest.

Read the entire article following the jump.

Image: Paul Frampton, theoretical physicist. Courtesy of Wikipedia.

Electronic Tattoos

Forget wearable electronics, like Google Glass. That’s so, well, 2012. Welcome to the new world of epidermal electronics — electronic tattoos that contain circuits and sensors printed directly on to the body.

From MIT Technology Review:

Taking advantage of recent advances in flexible electronics, researchers have devised a way to “print” devices directly onto the skin so people can wear them for an extended period while performing normal daily activities. Such systems could be used to track health and monitor healing near the skin’s surface, as in the case of surgical wounds.

So-called “epidermal electronics” were demonstrated previously in research from the lab of John Rogers, a materials scientist at the University of Illinois at Urbana-Champaign; the devices consist of ultrathin electrodes, electronics, sensors, and wireless power and communication systems. In theory, they could attach to the skin and record and transmit electrophysiological measurements for medical purposes. These early versions of the technology, which were designed to be applied to a thin, soft elastomer backing, were “fine for an office environment,” says Rogers, “but if you wanted to go swimming or take a shower they weren’t able to hold up.” Now, Rogers and his coworkers have figured out how to print the electronics right on the skin, making the device more durable and rugged.

“What we’ve found is that you don’t even need the elastomer backing,” Rogers says. “You can use a rubber stamp to just deliver the ultrathin mesh electronics directly to the surface of the skin.” The researchers also found that they could use commercially available “spray-on bandage” products to add a thin protective layer and bond the system to the skin in a “very robust way,” he says.

Eliminating the elastomer backing makes the device one-thirtieth as thick, and thus “more conformal to the kind of roughness that’s present naturally on the surface of the skin,” says Rogers. It can be worn for up to two weeks before the skin’s natural exfoliation process causes it to flake off.

During the two weeks that it’s attached, the device can measure things like temperature, strain, and the hydration state of the skin, all of which are useful in tracking general health and wellness. One specific application could be to monitor wound healing: if a doctor or nurse attached the system near a surgical wound before the patient left the hospital, it could take measurements and transmit the information wirelessly to the health-care providers.

Read the entire article after the jump.

Image: Epidermal electronic sensor printed on the skin. Courtesy of MIT.

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers, or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, the etymology of polysyllabic words, and so it goes.

So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer or smartphone or spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.

Yet, all this information will not directly make us any smarter; it is not applied knowledge nor is it experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly — freed from our need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.
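
In that spirit, here is a trivial sketch of the kind of instant lookup the article has in mind (our own toy example, not Slate’s):

```python
# The sort of answer a connected device (or any computer) returns instantly.
import math

print(f"sqrt(2130) = {math.sqrt(2130):.4f}")   # 46.1519, matching the quiz answer above
```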

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

The War on Apostrophes

No, we don’t mean war on apostasy, for which many have been hanged, drawn, quartered, burned and beheaded. And no, “apostrophes” are not a new sect of fundamentalist terrorists.

Apostrophes are punctuation marks, and a local council in Britain has decided to outlaw them. Why?

From the Guardian:

The sometimes vexing question of where and when to add an apostrophe appears to have been solved in one corner of Devon: the local authority is planning to do away with them altogether.

Later this month members of Mid Devon district council’s cabinet will discuss formally banning the pesky little punctuation marks from its (no apostrophe needed) street signs, apparently to avoid “confusion”.

The news of the Tory-controlled council’s (apostrophe required) decision provoked howls of condemnation on Friday from champions of plain English, fans of grammar, and politicians. Even the government felt the need to join the campaign to save the apostrophe.

The Plain English Campaign led the criticism. “It’s nonsense,” said Steve Jenner, spokesperson and radio presenter. “Where’s it going to stop. Are we going to declare war on commas, outlaw full stops?”

Jenner was puzzled over why the council appeared to think it a good idea not to have punctuation on signs. “If it’s to try to make things clearer, it’s not going to work. The whole purpose of punctuation is to make language easier to understand. Is it because someone at the council doesn’t understand how it works?”

Jenner suggested the council was providing a bad example to children who were – hopefully – being taught punctuation at school only to not see it being used correctly on street signs. “It seems a bit hypocritical,” he added.

Sian Harris, lecturer in English literature at Exeter University, said the proposals were likely to lead to greater confusion. She said: “Usually the best way to teach about punctuation is to show practical examples of it – removing [apostrophes] from everyday life would be a terrible shame and make that understanding increasingly difficult. English is a complicated language as it is — removing apostrophes is not going to help with that at all.”

Ben Bradshaw, the former culture secretary and Labour MP for Exeter, condemned the plans on Twitter. He wrote a precisely punctuated tweet: “Tory Mid Devon Council bans the apostrophe to ‘avoid confusion’ … Whole point of proper grammar is to avoid confusion!”

The council’s plans caused a stir 200 miles away in Whitehall, where the Department for Communities and Local Government came out in defence of punctuation. A spokesman said: “Whilst this is ultimately a matter for the local council, ministers’ view is that England’s apostrophes should be cherished.”

To be fair to modest Mid Devon, it is not the only authority to pick on the apostrophe. Birmingham did the same three years ago (the Mail went with the headline The city where apostrophes arent welcome).

The book retailer Waterstones caused a bit of a stir last year when it ditched the mark.

The council’s communications manager, Andrew Lacey, attempted to dampen down the controversy. Lacey said: “Our proposed policy on street naming and numbering covers a whole host of practical issues, many of which are aimed at reducing potential confusion over street names.

“Although there is no national guidance that stops apostrophes being used, for many years the convention we’ve followed here is for new street names not to be given apostrophes.”

He said there were only three official street names in Mid Devon which include them: Beck’s Square and Blundell’s Avenue, both in Tiverton, and St George’s Well in Cullompton. All were named many, many years ago.

“No final decision has yet been made and the proposed policy will be discussed at cabinet,” he said.

Read the entire story after the jump.

Image: Mid Devon District Council’s plan is presumably to avoid errors such as this (from Hackney, London). Courtesy of Guardian / Andy Drysdale / Rex Features.

Exoplanet Exploration

It wasn’t too long ago that astronomers found the first indirect evidence of a planet beyond our solar system. They inferred the presence of an exoplanet (extrasolar planet) from the periodic dimming or wobble of its parent star, rather than from much more difficult direct observation (a minimal illustration of the dimming method follows below). Since the first confirmed exoplanet around a Sun-like star was discovered in 1995 (51 Pegasi b), researchers have definitively catalogued around 800 and identified another 18,000 candidates. And the list now seems to grow daily.
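
As a rough illustration of the “periodic dimming” method, here is a short sketch (ours, not from any of the cited articles). It computes the fractional dip in starlight when a planet crosses the face of a Sun-like star, roughly (R_planet / R_star)^2; the radii are standard published values.

```python
# Transit method, simplified: the dip in brightness is about (R_planet / R_star)^2.
R_SUN_KM = 695_700       # radius of the Sun
R_JUPITER_KM = 69_911    # radius of Jupiter
R_EARTH_KM = 6_371       # radius of Earth

def transit_depth(planet_radius_km: float, star_radius_km: float = R_SUN_KM) -> float:
    """Fractional drop in a star's brightness during a central transit."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Jupiter-sized planet: {transit_depth(R_JUPITER_KM):.3%} dip")   # ~1%
print(f"Earth-sized planet:   {transit_depth(R_EARTH_KM):.4%} dip")     # ~0.008%
```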

If that weren’t amazing enough, researchers have now directly observed several exoplanets and even measured their atmospheric composition.

From ars technica:

The star system HR 8799 is a sort of Solar System on steroids: a beefier star, four possible planets that are much bigger than Jupiter, and signs of asteroids and cometary bodies, all spread over a bigger region. Additionally, the whole system is younger and hotter, making it one of only a few cases where astronomers can image the planets themselves. However, HR 8799 is very different from our Solar System, as astronomers are realizing thanks to two detailed studies released this week.

The first study was an overview of the four exoplanet candidates, covered by John Timmer. The second set of observations focused on one of the four planet candidates, HR 8799c. Quinn Konopacky, Travis Barman, Bruce Macintosh, and Christian Marois performed a detailed spectral analysis of the atmosphere of the possible exoplanet. They compared their findings to the known properties of a brown dwarf and concluded that they don’t match—it is indeed a young planet. Chemical differences between HR 8799c and its host star led the researchers to conclude the system likely formed in the same way the Solar System did.

The HR 8799 system was one of the first where direct imaging of the exoplanets was possible; in most cases, the evidence for a planet’s presence is indirect. (See the Ars overview of exoplanet science for more.) This serendipity is possible for two major reasons: the system is very young, and the planet candidates orbit far from their host star.

The young age means the bodies orbiting the system still retain heat from their formation and so are glowing in the infrared; older planets emit much less light. That makes it possible to image these planets at these wavelengths. (We mostly image planets in the Solar System using reflected sunlight, but that’s not a viable detection strategy at these distances). A large planet-star separation means that the star’s light doesn’t overwhelm the planets’ warm glow. Astronomers are also assisted by HR 8799’s relative closeness to us—it’s only about 130 light-years away.
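
To see why youth matters here, a small sketch (our own, using Wien’s displacement law) shows where a warm young planet and a Sun-like star each emit most of their light; the ~1000 K planet temperature is an assumed, representative figure.

```python
# Wien's displacement law: peak emission wavelength = b / T.
WIEN_B_METRE_KELVIN = 2.898e-3   # Wien's displacement constant

def peak_wavelength_micrometres(temperature_k: float) -> float:
    """Blackbody peak emission wavelength, in micrometres."""
    return WIEN_B_METRE_KELVIN / temperature_k * 1e6

print(f"Warm young giant (~1000 K): peaks near {peak_wavelength_micrometres(1000):.1f} um (infrared)")
print(f"Sun-like star (~5800 K):    peaks near {peak_wavelength_micrometres(5800):.2f} um (visible)")
```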

However, the brightness of the exoplanet candidates also obscures their identity. They are all much larger than Jupiter—each is more than 5 times Jupiter’s mass, and the largest could be 35 times greater. That, combined with their large infrared emission, could mean that they are not planets but brown dwarfs: star-like objects with insufficient mass to engage in hydrogen fusion. Since brown dwarfs can overlap in size and mass with the largest planets, we haven’t been certain that the objects observed in the HR 8799 system are planets.

For this reason, the two recent studies aimed at measuring the chemistry of these bodies using their spectral emissions. The Palomar study described yesterday provided a broad, big-picture view of the whole HR 8799 system. By contrast, the second study used one of the 10-meter Keck telescopes for a focused, in-depth view of one object: HR 8799c, the second-farthest out of the four.

The researchers measured relatively high levels of carbon monoxide (CO) and water (H2O, just in case you forgot the formula), which were present at levels well above the abundance measured in the spectrum of the host star. According to the researchers, this difference in chemical composition indicated that the planet likely formed via “core accretion”— the gradual, bottom-up accumulation of materials to make a planet—rather than a top-down fragmentation of the disk surrounding the newborn star. The original disk in this scenario would have contained a lot of ice fragments, which merged to make a world relatively high in water content.

In many respects, HR 8799c seemed to have properties between brown dwarfs and other exoplanets, but the chemical and gravitational analyses pushed the object more toward the planet side. In particular, the size and chemistry of HR 8799c placed its surface gravity lower than expected for a brown dwarf, especially when considered with the estimated age of the star system. While this analysis says nothing about whether the other bodies in the system are planets, it does provide further hints about the way the system formed.

One final surprise was the lack of methane (CH4) in HR 8799c’s atmosphere. Methane is a chemical component present in all the Jupiter-like planets in our Solar System. The authors argued that this could be due to vigorous mixing of the atmosphere, which is expected because the exoplanet has higher temperatures and pressures than seen on Jupiter or Neptune. This mixing could enable reactions that limit methane formation. Since the HR 8799 system is much younger than the Solar System—roughly 30 million years compared with 4.5 billion years—it’s uncertain how much this chemical balance may change over time.

Read the entire article after the jump.

Image: One of the discovery images of the system obtained at the Keck II telescope using the adaptive optics system and NIRC2 Near-Infrared Imager. The rectangle indicates the field-of-view of the OSIRIS instrument for planet C. Courtesy of NRC-HIA, C. Marois and Keck Observatory.

RIP: Fare Thee Well

With smartphones and tweets taking over our planet, the art of letter writing is fast becoming a subject of history lessons. Our written communications are now modulated by the keypad, emoticons, acronyms and the backspace; our attentions ever-fractured by the noise of the digital world and the dumbed-down 24/7 media monster.

So, as Matthew Malady over at Slate argues, it’s time for the few remaining Luddites, pen still in hand, to join the trend towards curtness and to ditch the signoffs. You know, the words that anyone over the age of 50 once put at the end of a hand-written letter, and that can still be found at the close of an email and, less frequently, a text: “Best regards”, “Warmest wishes”, “Most Sincerely”, “Cheers”, “Faithfully yours”.

Your friendly editor, for now, refuses to join the tidal wave of signoff slayers, and continues to take solace from his ink (fountain, if you please!) pens. There is still room for well-crafted prose in a sea of txt-speak.

From Slate:

For the 20 years that I have used email, I have been a fool. For two decades, I never called bullshit when burly, bearded dudes from places like Pittsburgh and Park Slope bid me email adieu with the vaguely British “Cheers!” And I never batted an eye at the hundreds of “XOXO” email goodbyes from people I’d never met, much less hugged or kissed. When one of my best friends recently ended an email to me by using the priggish signoff, “Always,” I just rolled with it.

But everyone has a breaking point. For me, it was the ridiculous variations on “Regards” that I received over the past holiday season. My transition from signoff submissive to signoff subversive began when a former colleague ended an email to me with “Warmest regards.”

Were these scalding hot regards superior to the ordinary “Regards” I had been receiving on a near-daily basis? Obviously they were better than the merely “Warm Regards” I got from a co-worker the following week. Then I received “Best Regards” in a solicitation email from the New Republic. Apparently when urging me to attend a panel discussion, the good people at the New Republic were regarding me in a way that simply could not be topped.

After 10 or 15 more “Regards” of varying magnitudes, I could take no more. I finally realized the ridiculousness of spending even one second thinking about the totally unnecessary words that we tack on to the end of emails. And I came to the following conclusion: It’s time to eliminate email signoffs completely. Henceforth, I do not want—nay, I will not accept—any manner of regards. Nor will I offer any. And I urge you to do the same.

Think about it. Email signoffs are holdovers from a bygone era when letter writing—the kind that required ink and paper—was a major means of communication. The handwritten letters people sent included information of great import and sometimes functioned as the only communication with family members and other loved ones for months. In that case, it made sense to go to town, to get flowery with it. Then, a formal signoff was entirely called for. If you were, say, a Boston resident writing to his mother back home in Ireland in the late 19th century, then ending a correspondence with “I remain your ever fond son in Christ Our Lord J.C.,” as James Chamberlain did in 1891, was entirely reasonable and appropriate.

But those times have long since passed. And so has the era when individuals sought to win the favor of the king via dedication letters and love notes ending with “Your majesty’s Most bounden and devoted,” or “Fare thee as well as I fare.” Also long gone are the days when explorers attempted to ensure continued support for their voyages from monarchs and benefactors via fawning formal correspondence related to the initial successes of this or that expedition. Francisco Vázquez de Coronado had good reason to end his 1541 letter to King Charles I of Spain, relaying details about parts of what is now the southwestern United States, with a doozy that translates to “Your Majesty’s humble servant and vassal, who would kiss the royal feet and hands.”

But in 2013, when bots outnumber benefactors by a wide margin, the continued and consistent use of antiquated signoffs in email is impossible to justify. At this stage of the game, we should be able to interact with one another in ways that reflect the precise manner of communication being employed, rather than harkening back to old standbys popular during the age of the Pony Express.

I am not an important person. Nonetheless, each week, on average, I receive more than 300 emails. I send out about 500. These messages do not contain the stuff of old-timey letters. They’re about the pizza I had for lunch (horrendous) and must-see videos of corgis dressed in sweaters (delightful). I’m trading thoughts on various work-related matters with people who know me and don’t need to be “Best”-ed. Emails, over time, have become more like text messages than handwritten letters. And no one in their right mind uses signoffs in text messages.

What’s more, because no email signoff is exactly right for every occasion, it’s not uncommon for these add-ons to cause affirmative harm. Some people take offense to different iterations of “goodbye,” depending on the circumstances. Others, meanwhile, can’t help but wonder, “What did he mean by that?” or spend entire days worrying about the implications of a sudden shift from “See you soon!” in one email, to “Best wishes” in the next. So, naturally, we consider, and we overthink, and we agonize about how best to close out our emails. We ask others for advice on the matter, and we give advice on it when asked.

Read the entire article after the jump.

Who Doesn’t Love and Hate a Dalek?

Over the decades Hollywood has remade movie monsters and aliens into ever more terrifying and nightmarish, and often slimier, versions of ourselves. In 1960s Britain, kids grew up with the thoroughly scary and evil Daleks from the sci-fi series Doctor Who. Their raspy electronic voices proclaiming “Exterminate! Exterminate!”, and their death-rays, would often consign children to a restless sleep in the comfort of their parents’ beds. Nowadays the Daleks would be dismissed as laughable and amateurish constructions — after all, how could malevolent, otherworldly beings be made from what looked too much like discarded egg cartons and toilet plungers? But they do remain iconic — a fixture of our pop culture.

From the Guardian:

The Daleks are a masterpiece of pop art. The death of their designer Raymond Cusick is rightly national news: it was Cusick who in the early 1960s gave a visual shape to this new monster invented by Doctor Who writer Terry Nation. But in the 50th anniversary of Britain’s greatest television show, the Daleks need to be seen in historical perspective. It is all too tempting to imagine Cusick and Nation sitting in the BBC canteen looking at a pepper pot on their lunch table and realising it could be a terrifying alien cyborg. In reality, the Daleks are a living legacy of the British pop art movement.

With Roy Lichtenstein whaaming ’em at London’s Tate Modern, it is all too easy to forget that pop art began in Britain – and our version of it started as science fiction. When Eduardo Paolozzi made his collage Dr Pepper in 1948, he was not portraying the real lives of austerity-burdened postwar Britons. He was imagining a future world of impossible consumer excess – a world that already existed in America, whose cultural icons from flash cars to electric cookers populate his collage of magazine clippings. But that seemed very far from reality in war-wounded Europe. Pop art began as an ironically utopian futuristic fantasy by British artists trapped in a monochrome reality.

The exhibition that brought pop art to a wider audience was This Is Tomorrow at the Whitechapel Gallery in 1956. As its title implies, This Is Tomorrow presented pop as visual sci-fi. It included a poster for the science fiction film Forbidden Planet and was officially opened by the star of the film, Robbie the Robot.

The layout of This Is Tomorrow created a futuristic landscape from fragments of found material and imagery, just as Doctor Who would fabricate alien worlds from silver foil and plastic bottles. The reason the series would face a crisis by the end of the 1970s was that its effects were deemed old-fashioned compared with Star Wars: but the whole point of Dr Who was that it demanded imagination of its audience and presented not fetishised perfect illusions, but a kind of kitchen sink sci-fi that shared the playfulness of pop art.

The Daleks are a wonder of pop art’s fantastic vision, at once absurd and marvellous. Most of all, they share the ironic juxtaposition of real and unreal one finds in the art of Richard Hamilton. Like a Hoover collaged into an ideal home, the Daleks are at their best gliding through an unexpected setting such as central London – a metal menace invading homely old Britain.

Read the entire article after the jump.

Image: The Daleks in the 1966 Doctor Who serial The Power of the Daleks. Courtesy of BBC.

The Richest Person in the Solar System

Forget Warren Buffett, Bill Gates and Carlos Slim or the Russian oligarchs and the emirs of the Persian Gulf. These guys are merely multi-billionaires. Their fortunes — combined — account for less than half of 1 percent of the net worth of Dennis Hope, the world’s first trillionaire. In fact, you could describe Dennis as the solar system’s first trillionaire, with an estimated wealth of $100 trillion.

So, why have you never heard of Dennis Hope, trillionaire? Where does he invest his money? And, how did he amass this jaw-dropping uber-fortune? The answer to the first question is that he lives a relatively ordinary and quiet life in Nevada. The answer to the second question is: property. The answer to the third, and most fascinating question: well, he owns most of the Moon. He also owns the majority of the planets Mars, Venus and Mercury, and 90 or so other celestial plots. You too could become an interplanetary property investor for the starting and very modest sum of $19.99. Please write your check to… Dennis Hope.

The New York Times has a recent story and documentary on Mr. Hope, here.

From Discover:

Dennis Hope, self-proclaimed Head Cheese of the Lunar Embassy, will promise you the moon. Or at least a piece of it. Since 1980, Hope has raked in over $9 million selling acres of lunar real estate for $19.99 a pop. So far, 4.25 million people have purchased a piece of the moon, including celebrities like Barbara Walters, George Lucas, Ronald Reagan, and even the first President Bush. Hope says he exploited a loophole in the 1967 United Nations Outer Space Treaty, which prohibits nations from owning the moon.

Because the law says nothing about individual holders, he says, his claim—which he sent to the United Nations—has some clout. “It was unowned land,” he says. “For private property claims, 197 countries at one time or another had a basis by which private citizens could make claims on land and not make payment. There are no standardized rules.”

Hope is right that the rules are somewhat murky—both Japan and the United States have plans for moon colonies—and lunar property ownership might be a powder keg waiting to spark. But Ram Jakhu, law professor at the Institute of Air and Space Law at McGill University in Montreal, says that Hope’s claims aren’t likely to hold much weight. Nor, for that matter, would any nation’s. “I don’t see a loophole,” Jakhu says. “The moon is a common property of the international community, so individuals and states cannot own it. That’s very clear in the U.N. treaty. Individuals’ rights cannot prevail over the rights and obligations of a state.”

Jakhu, a director of the International Institute for Space Law, believes that entrepreneurs like Hope have misread the treaty and that the 1967 legislation came about to block property claims in outer space. Historically, “the ownership of private property has been a major cause of war,” he says. “No one owns the moon. No one can own any property in outer space.”

Hope refuses to be discouraged. And he’s focusing on expansion. “I own about 95 different planetary bodies,” he says. “The total amount of property I currently own is about 7 trillion acres. The value of that property is about $100 trillion. And that doesn’t even include mineral rights.”
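
For a sense of what those claims imply, here is a one-line piece of arithmetic (ours, simply dividing the two figures quoted above):

```python
# Implied valuation from the figures quoted above (our arithmetic, not Hope's).
TOTAL_ACRES = 7e12          # "about 7 trillion acres"
TOTAL_VALUE_USD = 100e12    # "about $100 trillion"

print(f"Implied price: about ${TOTAL_VALUE_USD / TOTAL_ACRES:.2f} per acre")   # ~$14.29
```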

Read the entire article after the jump.

Video courtesy of the New York Times.

The United States: Land of the Creative and the Crazy

It’s unlikely that you would find many people who would argue against the notion that the United States is truly the most creative and innovative nation; from art to basic scientific research, from music to engineering, from theoretical physics to food science, from genetic studies and medicine to movies. And yet, perplexingly, the nation continues to yearn for its wild, pioneering past, rather than inventing a brighter and more civilized future. To many outsiders the many contradictions that make up the United States are a source of laughter and much incredulity. The recent news out of South Dakota shows why.

From the New York Times:

Gov. Dennis Daugaard of South Dakota on Friday signed into law a bill that would allow teachers to carry guns in the classroom.

While some other states have provisions in their gun laws that make it possible for teachers to be armed, South Dakota is believed to be the first state to pass a law that specifically allows teachers to carry firearms.

About two dozen states have proposed similar bills since the shootings in December at Sandy Hook Elementary School in Newtown, Conn., but all of them have stalled.

Supporters say that the measure signed by Mr. Daugaard, a Republican, is important in a rural state like South Dakota, where some schools are many miles away from emergency responders.

Opponents, which have included the state school board association and teachers association, say this is a rushed measure that does not make schools safer.

The law says that school districts may choose to allow a school employee, hired security officer or volunteer to serve as a “sentinel” who can carry a firearm in the school. The law does not require school districts to do this.

Mr. Daugaard said he was comfortable with the law because it gave school districts the right to choose whether they wanted armed individuals in schools, and that those who were armed would have to undergo firearms training similar to what law enforcement officers received.

“I think it does provide the same safety precautions that a citizen expects when a law enforcement officer enters onto a premises,” Mr. Daugaard said in an interview. But he added that he did not think that many school districts would end up taking advantage of the measure.

Read the entire article after the jump.

MondayMap: New Jersey Under Water

We love maps here at theDiagonal. So much so that we’ve begun a new feature: MondayMap. As the name suggests, we plan to feature fascinating new maps on Mondays. For our readers who prefer their plots served up on a Saturday, sorry. Usually we like to highlight maps that cause us to look at our world differently or provide a degree of welcome amusement, such as the wonderful trove of maps over at Strange Maps curated by Frank Jacobs.

However, this first MondayMap is a little different and serious. It’s an interactive map that shows the impact of estimated sea level rise on the streets of New Jersey. Obviously, such a tool would be a great boon for emergency services and urban planners. For the rest of us, whether we live in New Jersey or not, maps like this one — of extreme weather events and projections — are likely to become much more common over the coming decades. Kudos to researchers at Rutgers University for developing the NJ Flood Mapper.

From the Wall Street Journal:

While superstorm Sandy revealed the Northeast’s vulnerability, a new map by New Jersey scientists suggests how rising seas could make future storms even worse.

The map shows ocean waters surging more than a mile into communities along Raritan Bay, engulfing nearly all of New Jersey’s barrier islands and covering northern sections of the New Jersey Turnpike and land surrounding the Port Newark Container Terminal.

Such damage could occur under a scenario in which sea levels rise 6 feet—or a 3-foot rise in tandem with a powerful coastal storm, according to the map produced by Rutgers University researchers.

The satellite-based tool, one of the first comprehensive, state-specific maps of its kind, uses a Google-maps-style interface that allows viewers to zoom into street-level detail.

“We are not trying to unduly frighten people,” said Rick Lathrop, director of the Grant F. Walton Center for Remote Sensing and Spatial Analysis at Rutgers, who led the map’s development. “This is providing people a look at where our vulnerability is.”

Still, the implications of the Rutgers project unnerve residents of Surf City, on Long Beach Island, where the map shows water pouring over nearly all of the barrier island’s six municipalities with a 6-foot increase in sea levels.

“The water is going to come over the island and there will be no island,” said Barbara Epstein, a 73-year-old resident of nearby Barnegat Light, who added that she is considering moving after 12 years there. “The storms are worsening.”

To be sure, not everyone agrees that climate change will make sea-level rise more pronounced.

Politically, climate change remains an issue of debate. New York Gov. Andrew Cuomo has said Sandy showed the need to address the issue, while New Jersey Gov. Chris Christie has declined to comment on whether Sandy was linked to climate change.

Scientists have gone ahead and started to map sea-level-rise scenarios in New Jersey, New York City and flood-prone communities along the Gulf of Mexico to help guide local development and planning.

Sea levels have risen by 1.3 feet near Atlantic City and 0.9 feet by Battery Park between 1911 and 2006, according to data from the National Oceanic and Atmospheric Administration.

A serious storm could add at least another 3 feet, with historic storm surges—Sandy-scale—registering at 9 feet. So when planning for future coastal flooding, 6 feet or higher isn’t far-fetched when combining sea-level rise with high tides and storm surges, Mr. Lathrop said.
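
The planning figures above stack simply; a small sketch (our own arithmetic, not Rutgers’ or the Journal’s) shows how the 6-foot mapping scenario arises, and what a Sandy-scale surge would add.

```python
# Simple addition of the planning figures quoted above, in feet.
SEA_LEVEL_RISE_FT = 3.0      # projected long-term rise used in the 6-foot scenario
SERIOUS_STORM_FT = 3.0       # "a serious storm could add at least another 3 feet"
SANDY_SCALE_SURGE_FT = 9.0   # historic, Sandy-scale storm surge

print(f"Rise alone:           {SEA_LEVEL_RISE_FT:.0f} ft")
print(f"Rise + serious storm: {SEA_LEVEL_RISE_FT + SERIOUS_STORM_FT:.0f} ft")   # the mapped scenario
print(f"Rise + Sandy-scale:   {SEA_LEVEL_RISE_FT + SANDY_SCALE_SURGE_FT:.0f} ft")
```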

NOAA estimated in December that increasing ocean temperatures could cause sea levels to rise by 1.6 feet in 100 years, and by 3.9 feet if considering some level of Arctic ice-sheet melt.

Such an increase amounts to 0.16 inches per year, but the eventual impact could mean that a small storm could “do the same damage that Sandy did,” said Peter Howd, co-author of a 2012 U.S. Geological Survey report that found the rate of sea level rise had increased in the northeast.

Read the entire article after the jump.

Image: NJ Flood Mapper. Courtesy of Grant F. Walton Center for Remote Sensing and Spatial Analysis (CRSSA), Rutgers University, in partnership with the Jacques Cousteau National Estuarine Research Reserve (JCNERR), and in collaboration with the NOAA Coastal Services Center (CSC).

Ziggy Stardust and the Spiders from the Moon?

In honor of the brilliant new album by the Thin White Duke, we offer the article excerpted below, which at first glance seems to come directly from the songbook of Ziggy Stardust him- or herself. But closer inspection reveals that NASA may have designs on deploying giant manufacturing robots to construct a base on the moon. Can you hear me, Major Tom?

Video: The Stars (Are Out Tonight), by David Bowie.

Once you’ve had your fill of Bowie, read on about NASA’s spiders.

From ars technica:

The first lunar base on the Moon may not be built by human hands, but rather by a giant spider-like robot built by NASA that can bind the dusty soil into giant bubble structures where astronauts can live, conduct experiments, relax or perhaps even cultivate crops.

We’ve already covered the European Space Agency’s (ESA) work with architecture firm Foster + Partners on a proposal for a 3D-printed moonbase, and there are similarities between the two bases—both would be located in Shackleton Crater near the Moon’s south pole, where sunlight (and thus solar energy) on the crater’s rim is nearly constant thanks to the Moon’s slight axial tilt, and both use lunar dust as their basic building material. However, while the ESA’s building would be constructed almost exactly the same way a house would be 3D-printed on Earth, this latest wheeze—SinterHab—uses NASA technology for something a fair bit more ambitious.

The product of joint research first started between space architects Tomas Rousek, Katarina Eriksson and Ondrej Doule and scientists from NASA’s Jet Propulsion Laboratory (JPL), SinterHab is so-named because it involves sintering lunar dust—that is, heating it up to just below its melting point, where the fine nanoparticle powders fuse and become one solid block a bit like a piece of ceramic. To do this, the JPL engineers propose using microwaves no more powerful than those found in a kitchen unit, with tiny particles easily reaching between 1200 and 1500 degrees Celsius.
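
To see why kitchen-grade microwave power is even plausible, here is a rough energy estimate; this is our sketch, not a NASA/JPL figure, and the specific heat and starting temperature are ballpark assumptions.

```python
# Why "kitchen-grade" microwaves are plausible for sintering: a rough
# energy estimate. All numbers except the 1,200-1,500 C sintering range
# quoted above are our own ballpark assumptions, not figures from NASA/JPL.

specific_heat_regolith = 0.8   # J/(g*K), assumed ballpark for lunar soil
mass_g = 1_000                 # sinter one kilogram of dust
delta_t_k = 1_300              # assume heating from about 0 C to about 1,300 C
magnetron_power_w = 800        # typical kitchen microwave output

energy_j = specific_heat_regolith * mass_g * delta_t_k
minutes = energy_j / magnetron_power_w / 60
print(f"~{energy_j / 1e6:.1f} MJ per kg, ~{minutes:.0f} min at {magnetron_power_w} W (ignoring losses)")
```

On this crude estimate a single magnetron could bring a kilogram of dust to sintering temperature in well under an hour, before accounting for losses.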

Nanoparticles of iron within lunar soil are heated at certain microwave frequencies, enabling efficient heating and binding of the dust to itself. Not having to fly a binding agent from Earth along with a 3D printer is a major advantage over the ESA/Foster + Partners plan. The solar panels to power the microwaves would, like the moon base itself, be based near or on the rim of Shackleton Crater in near-perpetual sunlight.

“Bubbles” of bonded dust could be built by a huge six-legged robot (OK, so it’s not technically a spider) and then assembled into habitats large enough for astronauts to use as a base. This “Sinterator system” would use the JPL’s Athlete rover, a half-scale prototype of which has already been built and tested. It’s a human-controlled robotic space rover with wheels at the end of its 8.2m limbs and a detachable habitable capsule mounted at the top.

Athlete’s arms have several different functions, depending on what it needs to do at any point. It has 48 3D cameras that stream video to its operator, whether inside the capsule, elsewhere on the Moon or back on Earth; it has a payload capacity of 300kg in Earth gravity; and it can scoop, dig, grab at and generally poke around in the soil fairly easily, giving it the combined abilities of a normal rover and a construction vehicle. It can even split into two smaller three-legged rovers at any time if needed. In the Sinterator system, a microwave 3D printer would be mounted on one of the Athlete’s legs and used to build the base.

Rousek explained the background of the idea to Wired.co.uk: “Since many of my buildings have advanced geometry that you can’t cut easily from sheet material, I started using 3D printing for rapid prototyping of my architecture models. The construction industry is still lagging several decades behind car and electronics production. The buildings now are terribly wasteful and imprecise—I have always dreamed about creating a factory where the buildings would be robotically mass-produced with parametric personalization, using composite materials and 3D printing. It would be also great to use local materials and precise manufacturing on-site.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Giant NASA spider robots could 3D print lunar base using microwaves, courtesy of Wired UK. Video: The Stars (Are Out Tonight), courtesy of David Bowie, ISO Records / Columbia Records.[end-div]

Two Nations Divided by Book Covers

“England and America are two countries separated by the same language.” This oft-used quote is usually attributed to Oscar Wilde or George Bernard Shaw. Regardless of who originated the phrase, neither author would be surprised to see that book covers, too, are divided by the Atlantic Ocean. The Millions continues its fascinating annual comparative analysis.

American book covers on the left, British book covers on the right.

[div class=attrib]From The Millions:[end-div]

As we’ve done for several years now, we thought it might be fun to compare the U.S. and U.K. book cover designs of this year’s Morning News Tournament of Books contenders. Book cover art is an interesting element of the literary world — sometimes fixated upon, sometimes ignored — but, as readers, we are undoubtedly swayed by the little billboard that is the cover of every book we read. And, while many of us no longer do most of our reading on physical books with physical covers, those same cover images now beckon us from their grids in the various online bookstores. From my days as a bookseller, when import titles would sometimes find their way into our store, I’ve always found it especially interesting that the U.K. and U.S. covers often differ from one another. This would seem to suggest that certain layouts and imagery will better appeal to readers on one side of the Atlantic rather than the other. These differences are especially striking when we look at the covers side by side. The American covers are on the left, and the UK are on the right. Your equally inexpert analysis is encouraged in the comments.

[div class=attrib]Read the entire article and see more book covers after the jump.[end-div]

[div class=attrib]Book cover images courtesy of The Millions and their respective authors and publishers.[end-div]

Chocolate for the Soul and Mind (But Not Body)

Hot on the heels of the recent research finding that the Mediterranean diet improves heart health comes news that chocoholics the world over have been anxiously awaiting — chocolate improves brain function.

Researchers have found that chocolate rich in compounds known as flavanols can improve cognitive function. Now, before you rush out the door to visit the local grocery store to purchase a mountain of Mars bars (perhaps not coincidentally, Mars, Inc., partly funded the research study), Godiva pralines, Cadbury flakes or a slab of Dove, take note that not all chocolate is created equal. Flavanols tend to be found in the highest concentrations in raw cocoa. In fact, during the process of making most chocolate, including the dark kind, most flavanols tend to be removed or destroyed. Perhaps the silver lining here is that to replicate the dose of flavanols found to have a positive effect on brain function, you would have to eat around 20 bars of chocolate per day for several months. This may be good news for your brain, but not your waistline!

[div class=attrib]From Scientific American:[end-div]

It’s news chocolate lovers have been craving: raw cocoa may be packed with brain-boosting compounds. Researchers at the University of L’Aquila in Italy, with scientists from Mars, Inc., and their colleagues published findings last September that suggest cognitive function in the elderly is improved by ingesting high levels of natural compounds found in cocoa called flavanols. The study included 90 individuals with mild cognitive impairment, a precursor to Alzheimer’s disease. Subjects who drank a cocoa beverage containing either moderate or high levels of flavanols daily for eight weeks demonstrated greater cognitive function than those who consumed low levels of flavanols on three separate tests that measured factors that included verbal fluency, visual searching and attention.

Exactly how cocoa causes these changes is still unknown, but emerging research points to one flavanol in particular: (-)-epicatechin, pronounced “minus epicatechin.” Its name signifies its structure, differentiating it from other catechins, organic compounds highly abundant in cocoa and present in apples, wine and tea. The graph below shows how (-)-epicatechin fits into the world of brain-altering food molecules. Other studies suggest that the compound supports increased circulation and the growth of blood vessels, which could explain improvements in cognition, because better blood flow would bring the brain more oxygen and improve its function.

Animal research has already demonstrated how pure (-)-epicatechin enhances memory. Findings published last October in the Journal of Experimental Biology note that snails can remember a trained task—such as holding their breath in deoxygenated water—for more than a day when given (-)-epicatechin but for less than three hours without the flavanol. Salk Institute neuroscientist Fred Gage and his colleagues found previously that (-)-epicatechin improves spatial memory and increases vasculature in mice. “It’s amazing that a single dietary change could have such profound effects on behavior,” Gage says. If further research confirms the compound’s cognitive effects, flavanol supplements—or raw cocoa beans—could be just what the doctor ordered.

So, Can We Binge on Chocolate Now?

Nope, sorry. A food’s origin, processing, storage and preparation can each alter its chemical composition. As a result, it is nearly impossible to predict which flavanols—and how many—remain in your bonbon or cup of tea. Tragically for chocoholics, most methods of processing cocoa remove many of the flavanols found in the raw plant. Even dark chocolate, touted as the “healthy” option, can be treated such that the cocoa darkens while flavanols are stripped.

Researchers are only beginning to establish standards for measuring flavanol content in chocolate. A typical one and a half ounce chocolate bar might contain about 50 milligrams of flavanols, which means you would need to consume 10 to 20 bars daily to approach the flavanol levels used in the University of L’Aquila study. At that point, the sugars and fats in these sweet confections would probably outweigh any possible brain benefits. Mars Botanical nutritionist and toxicologist Catherine Kwik-Uribe, an author on the University of L’Aquila study, says, “There’s now even more reasons to enjoy tea, apples and chocolate. But diversity and variety in your diet remain key.”
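
The bar arithmetic above is easy to check. Here is a quick sketch using only the figures quoted in the article; the calorie number is our own assumption for a typical milk-chocolate bar, not a figure from the study.

```python
# Flavanol dose arithmetic from the figures quoted above.
flavanols_per_bar_mg = 50        # typical 1.5 oz chocolate bar (per the article)
bars_low, bars_high = 10, 20     # bars/day needed to approach the study's dose

target_low_mg = bars_low * flavanols_per_bar_mg    # ~500 mg/day
target_high_mg = bars_high * flavanols_per_bar_mg  # ~1,000 mg/day
print(f"Implied study dose: roughly {target_low_mg}-{target_high_mg} mg flavanols/day")

# Rough calorie cost of getting there via chocolate bars. The calorie
# figure is our assumption for a typical 1.5 oz milk-chocolate bar.
calories_per_bar = 220
print(f"Calorie cost: ~{bars_low * calories_per_bar}-{bars_high * calories_per_bar} kcal/day")
```

At roughly 2,200 to 4,400 kcal a day in candy alone, the sugars and fats really would outweigh any plausible brain benefit.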

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google Search.[end-div]

Video Game. But is it Art?

Only yesterday we posted a linguist’s claim that text-speak is an emerging language. You know, text-speak is that cryptic communication process that most teenagers engage in with their smartphones. Leaving aside the merits of including text-speak in the catalog of around 6,600 formal human languages, one thing is clear — text-speak is not Shakespearean English. So, don’t expect to see a novel written in it win the Nobel Prize for Literature, yet.

Strangely, though, the same cannot be said for another recent phenomenon, the video game. Increasingly, some video games are being described in the same language that critics would normally reserve for a contemporary painting on canvas. Yes, welcome to the world of the video game as art. If you have ever played the immersive game Myst, or its sequel Riven (the original games came on CD-ROM), you will see why many classify their beautifully designed and rendered aesthetics as art. MoMA (the Museum of Modern Art) in New York thinks so too.

[div class=attrib]From the Guardian:[end-div]

New York’s Museum of Modern Art will be home to something more often associated with pasty teens and bar scenes when it opens an exhibit on video games on Friday.

Tetris, Pac-Man and the Sims are just a few of the classic games that will be housed inside a building that also displays works by Vincent Van Gogh, Claude Monet and Frida Kahlo. And though some may question whether video games are even art, the museum is incorporating the games into its Applied Design installation.

MoMA consulted scholars, digital experts, historians and critics to select games for the gallery based on their aesthetic quality – including the programming language used to create them. MoMA’s senior curator for architecture and design, Paola Antonelli, said the material used to create games is important in the same way the wood used to create a stool is.

With that as the focus, games are presented in their original formats, absent the consoles that often define them. Some will be playable with controllers, and more complex, long-running games like SimCity 2000 are presented as specially designed walkthroughs and demos.

MoMA’s curatorial team tailored controls especially for each of the playable games, including a customized joystick created for the Tetris game.

Some of the older games, which might have fragile or rare cartridges, will be displayed as “interactive emulation”, with a programmer translating the game code to something that will work on a newer computer system.
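
For the curious, “interactive emulation” boils down to re-implementing the old machine in software: a program on the modern computer fetches, decodes and executes the original game’s instructions one at a time. A deliberately tiny sketch of that loop, for an instruction set we made up for illustration (not any real console’s), looks like this:

```python
# A toy fetch-decode-execute loop: the core idea behind emulating an old
# game on new hardware. The instruction set here is invented for
# illustration; real console emulators do the same thing for real CPUs,
# plus graphics, sound and input.

def run(program, max_steps=1_000):
    acc, pc, steps = 0, 0, 0          # accumulator, program counter, step count
    while pc < len(program) and steps < max_steps:
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP_IF_POS":      # crude branch: jump while acc > 0
            if acc > 0:
                pc = arg
                steps += 1
                continue
        elif op == "PRINT":
            print("score:", acc)
        pc += 1
        steps += 1

# "Cartridge" contents: count down from 3, printing each value.
run([("LOAD", 3), ("PRINT", None), ("ADD", -1), ("JUMP_IF_POS", 1)])
```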

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Myst, Cyan Inc. Courtesy of Cyan, Inc / Wikipedia.[end-div]

Txt-Speak: Linguistic Scourge or Beautiful New Language?

OMG! DYK wot Ur Teen is txtng?

[tube]yoF2vdLxsVQ[/tube]

Most parents of teenagers would undoubtedly side with the first characterization: texting is a disaster for the English language — and any other texted language for that matter. At first glance it would seem that most linguists and scholars of language would agree. After all, with seemingly non-existent grammar, poor syntax, complete disregard for spelling, substitution of symbols for words, and emphasis on childish phonetics, how can texting be considered anything more than a regression to a crude form of proto-human language?

Well, linguist John McWhorter holds that texting is actually a new form of speech, and, for that matter, a rather special one that is evolving in real time. LOL? Read on and you will be 😮 (surprised). Oh, and if you still need help with texting translation, check out dtxtr.

[div class=attrib]From ars technica:[end-div]

Is texting shorthand a convenience, a catastrophe for the English language, or actually something new and special? John McWhorter, a linguist at Columbia University, sides with the latter. According to McWhorter, texting is actually a new form of speech, and he outlined the reasons why today at the TED2013 conference in Southern California.

We often hear that “texting is a scourge,” damaging the literacy of the young. But it’s “actually a miraculous thing,” McWhorter said. Texting, he argued, is not really writing at all—not in the way we have historically thought about writing. To explain this, he drew an important distinction between speech and writing as functions of language. Language was born in speech some 80,000 years ago (at least). Writing, on the other hand, is relatively new (5,000 or 6,000 years old). So humanity has been talking for longer than it has been writing, and this is especially true when you consider that writing skills have hardly been ubiquitous in human societies.

Furthermore, writing is typically not a reflection of casual speech. “We speak in word packets of seven to 10 words. It’s much more loose, much more telegraphic,” McWhorter said. Of course, speech can imitate writing, particularly in formal contexts like speechmaking. He pointed out that in those cases you might speak like you write, but it’s clearly not a natural way of speaking.

But what about writing like you speak? Historically this has been difficult. Speed is a key issue. “[Texting is] fingered-speech. Now we can write the way we talk,” McWhorter said. Yet we view this as some kind of decline. We don’t capitalize words, obey grammar or spelling rules, and the like. Yet there is an “emerging complexity…with new structure” at play. To McWhorter, this structure facilitates the speed and packeted nature of real speech.

Take “LOL,” for instance. It used to mean “laughing out loud,” but its meaning has changed. People aren’t guffawing every time they write it. Now “it’s a marker of empathy, a pragmatic particle,” he said. “It’s a way of using the language between actual people.”

This is just one example of a new battery of conventions McWhorter sees in texting. They are conventions that enable writing like we speak. Consider the rules of grammar. When you talk, you don’t think about capitalizing names or putting commas and question marks where they belong. You produce sounds, not written language. Texting leaves out many of these conventions, particularly among the young, who make extensive use of electronic communication tools.

McWhorter thinks what we are experiencing is a whole new way of writing that young people are using alongside their normal writing skills. It is a “balancing act… an expansion of their linguistic repertoire,” he argued.

The result is a whole new language, one that wouldn’t be intelligible to people in the year 1993 or 1973. And where it’s headed, it will likely be unintelligible to us were we to jump ahead 20 years in time. Nevertheless, McWhorter wants us to appreciate it now: “It’s a linguistic miracle happening right under our noses,” he said.

Forget the “death of writing” talk. Txt-speak is a new, rapidly evolving form of speech.

[div class=attrib]Follow the entire article after the jump.[end-div]

[div class=attrib]Video: John McWhorter courtesy of TED.[end-div]

Your Tax Dollars at Work

Naysayers would say that government, and hence taxpayer dollars, should not be used to fund science initiatives. After all, academia and business seem to do a fairly good job of discovery and innovation without a helping hand pilfering from the public purse. And, money aside, government-funded projects do raise a number of thorny questions: On what should our hard-earned income tax be spent? Who decides on the priorities? How is progress to be measured? Do taxpayers get any benefit in return? After all, many of us cringe at the thought of an unelected bureaucrat, or a committee of them, spending millions if not billions of our dollars. Why not just spend the money on fixing our national potholes?

But despite our many human flaws and foibles, we are at heart explorers. We seek to know more about ourselves, our world and our universe. Those who seek answers to fundamental questions of consciousness, aging, and life are pioneers in this quest to expand our domain of understanding and knowledge. These answers increasingly aid our daily lives through continuous improvement in medical science and innovation in materials science. And our collective lives are enriched as we learn more about the how and the why of our own and our universe’s existence.

So, some of our dollars have gone towards big science: the Large Hadron Collider (LHC) beneath Switzerland looking for the constituents of matter, the wild laser experiment at the National Ignition Facility designed to enable controlled fusion reactions, and the Curiosity rover exploring Mars. Yet more of our dollars have gone to research and development into enhanced radar, graphene for next-generation circuitry, online courseware, stress in coral reefs, sensors to aid the elderly, ultra-high-speed internet for emergency response, erosion mitigation, self-cleaning surfaces, and flexible solar panels.

Now comes word that the U.S. government wants to spend $3 billion — over 10 years — on building a comprehensive map of the human brain. The media has dubbed this the “connectome”, following similar efforts to map our human DNA, the genome. While this is the type of big science that may yield tangible results and benefits only decades from now, it ignites the passion and curiosity of our children to continue to seek and to find answers. So, this is good news for science and for the explorer who lurks within us all.

[div class=attrib]From ars technica:[end-div]

Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.

But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University’s John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn’t speak for the administration’s plans, he did describe the outlines of what’s being proposed and why, and he provided a glimpse into what he sees as the project’s benefits.

What are we talking about doing?

We’ve already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. “There’s a big gap in our knowledge because we don’t know the intermediate scale,” Donoghue told Ars. The goal, he said, “is not a wiring diagram—it’s a functional map, an understanding.”

This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donoghue answered that question with one of his own: “At what point does the emergent property come out?” Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don’t really know what that level is. It’s certainly “above 10,” according to Donoghue. “I don’t think we need to study every neuron,” he said. Beyond that, part of the project will focus on what Donoghue called “the big question”—what emerges in the brain at these various scales?

While he may have called emergence “the big question,” it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don’t always understand the code well enough to understand the meaning of our recordings. When I asked Donoghue about this, he said, “This is it! One of the big goals is cracking the code.”

Donoghue was enthused about the idea that the different aspects of the project would feed into each other. “They go hand in hand,” he said. “As we gain more functional information, it’ll inform the connectional map and vice versa.” In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.

As we build on these feedbacks to understand more complex examples of the brain’s emergent behaviors, the big picture will emerge. Donoghue hoped that the work would ultimately provide “a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition.”

How will we actually do this?

Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donoghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We’ve now reached the point where, thanks to advances in nanotechnology, we’re able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donoghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell’s activity (perhaps stored in DNA itself) for later retrieval.

Right now, in Donoghue’s view, the problem is that the people developing these technologies and the neuroscience community aren’t talking enough. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.

Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we’ll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.
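
That “heavy computational component” is easy to illustrate. A minimal sketch of the kind of pattern-finding described here, correlating the activity of many recorded neurons, might look like the following; the data are synthetic stand-ins, and this is our illustration rather than any actual project pipeline.

```python
# Minimal sketch of population-level analysis: pairwise correlation of
# neural activity traces. Synthetic data stands in for real recordings;
# this illustrates the computational problem, not a specific brain-map
# pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 200, 5_000          # 200 cells, 5,000 time bins

# Fake recordings: a shared slow signal drives half the population,
# so there is correlated structure to be found.
shared = rng.normal(size=n_bins)
rates = rng.normal(size=(n_neurons, n_bins))
rates[: n_neurons // 2] += 0.5 * shared

corr = np.corrcoef(rates)               # n_neurons x n_neurons correlation matrix
upper = corr[np.triu_indices(n_neurons, k=1)]
print(f"mean pairwise correlation: {upper.mean():.3f}")
print(f"strongly coupled pairs (>0.15): {(upper > 0.15).sum()}")
```

Scale 200 neurons up to the millions of cells the project envisions and both the storage and the all-pairs analysis become serious infrastructure problems, which is exactly why data sharing and computation feature so prominently in the plan.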

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: White matter fiber architecture of the human brain. Courtesy of the Human Connectome Project.[end-div]

Yourself, The Illusion

A growing body of evidence suggests that our brains live in the future and construct explanations for the past, and that our notion of the present is an entirely fictitious concoction. On the surface this makes our lives seem like nothing more than a construction taken right out of The Matrix movies. However, while we may not be pawns in an illusion constructed by malevolent aliens, our perception of “self” does appear to be illusory. As researchers delve deeper into the inner workings of the brain, it becomes clearer that our conscious selves are a beautifully derived narrative, built by the brain to make sense of the past and prepare for our future actions.

[div class=attrib]From the New Scientist:[end-div]

It seems obvious that we exist in the present. The past is gone and the future has not yet happened, so where else could we be? But perhaps we should not be so certain.

Sensory information reaches us at different speeds, yet appears unified as one moment. Nerve signals need time to be transmitted and time to be processed by the brain. And there are events – such as a light flashing, or someone snapping their fingers – that take less time to occur than our system needs to process them. By the time we become aware of the flash or the finger-snap, it is already history.

Our experience of the world resembles a television broadcast with a time lag; conscious perception is not “live”. This on its own might not be too much cause for concern, but in the same way the TV time lag makes last-minute censorship possible, our brain, rather than showing us what happened a moment ago, sometimes constructs a present that has never actually happened.

Evidence for this can be found in the “flash-lag” illusion. In one version, a screen displays a rotating disc with an arrow on it, pointing outwards (see “Now you see it…”). Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Yet this is not what we perceive. Instead, the flash lags behind, apparently occurring after the arrow has passed.

One explanation is that our brain extrapolates into the future. Visual stimuli take time to process, so the brain compensates by predicting where the arrow will be. The static flash – which it can’t anticipate – seems to lag behind.
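
How large an effect would such extrapolation predict? A quick sketch, with a processing latency and rotation speed that are purely illustrative assumptions (the study does not give these figures):

```python
# If the brain compensated for processing delay by extrapolating the
# arrow's motion, the flash should appear to lag by roughly
# (processing latency) x (angular velocity). Both numbers are assumed
# for illustration only.

latency_s = 0.08            # assumed ~80 ms visual processing delay
rotation_deg_per_s = 360.0  # assumed one full turn per second

predicted_lag_deg = latency_s * rotation_deg_per_s
print(f"predicted flash-lag offset: ~{predicted_lag_deg:.0f} degrees of rotation")
```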

Neat as this explanation is, it cannot be right, as was shown by a variant of the illusion designed by David Eagleman of the Baylor College of Medicine in Houston, Texas, and Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, California.

If the brain were predicting the spinning arrow’s trajectory, people would see the lag even if the arrow stopped at the exact moment it was pointing at the spot. But in this case the lag does not occur. What’s more, if the arrow starts stationary and moves in either direction immediately after the flash, the movement is perceived before the flash. How can the brain predict the direction of movement if it doesn’t start until after the flash?

The explanation is that rather than extrapolating into the future, our brain is interpolating events in the past, assembling a story of what happened retrospectively (Science, vol 287, p 2036). The perception of what is happening at the moment of the flash is determined by what happens to the disc after it. This seems paradoxical, but other tests have confirmed that what is perceived to have occurred at a certain time can be influenced by what happens later.

All of this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment in time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.

[div class=attrib]Read the entire article after the jump.[end-div]

Engineering Your Food Addiction

Fast food, snack foods and all manner of processed foods are a multi-billion-dollar global industry. So, it’s no surprise that companies collectively spend hundreds of millions of dollars each year to engineer the perfect bite. Importantly, part of this perfection (for the businesses) is ensuring that you keep coming back for more.

By all accounts the Cheeto is as close to processed-food-addiction heaven as we can get — so far. It has just the right amount of salt (too much) and fat (too much), the right crunchiness, and something known as vanishing caloric density (it melts in the mouth at the optimum rate). Aesthetically sad, but scientifically true.

[div class=attrib]From the New York Times:[end-div]

On the evening of April 8, 1999, a long line of Town Cars and taxis pulled up to the Minneapolis headquarters of Pillsbury and discharged 11 men who controlled America’s largest food companies. Nestlé was in attendance, as were Kraft and Nabisco, General Mills and Procter & Gamble, Coca-Cola and Mars. Rivals any other day, the C.E.O.’s and company presidents had come together for a rare, private meeting. On the agenda was one item: the emerging obesity epidemic and how to deal with it. While the atmosphere was cordial, the men assembled were hardly friends. Their stature was defined by their skill in fighting one another for what they called “stomach share” — the amount of digestive space that any one company’s brand can grab from the competition.

James Behnke, a 55-year-old executive at Pillsbury, greeted the men as they arrived. He was anxious but also hopeful about the plan that he and a few other food-company executives had devised to engage the C.E.O.’s on America’s growing weight problem. “We were very concerned, and rightfully so, that obesity was becoming a major issue,” Behnke recalled. “People were starting to talk about sugar taxes, and there was a lot of pressure on food companies.” Getting the company chiefs in the same room to talk about anything, much less a sensitive issue like this, was a tricky business, so Behnke and his fellow organizers had scripted the meeting carefully, honing the message to its barest essentials. “C.E.O.’s in the food industry are typically not technical guys, and they’re uncomfortable going to meetings where technical people talk in technical terms about technical things,” Behnke said. “They don’t want to be embarrassed. They don’t want to make commitments. They want to maintain their aloofness and autonomy.”

A chemist by training with a doctoral degree in food science, Behnke became Pillsbury’s chief technical officer in 1979 and was instrumental in creating a long line of hit products, including microwaveable popcorn. He deeply admired Pillsbury but in recent years had grown troubled by pictures of obese children suffering from diabetes and the earliest signs of hypertension and heart disease. In the months leading up to the C.E.O. meeting, he was engaged in conversation with a group of food-science experts who were painting an increasingly grim picture of the public’s ability to cope with the industry’s formulations — from the body’s fragile controls on overeating to the hidden power of some processed foods to make people feel hungrier still. It was time, he and a handful of others felt, to warn the C.E.O.’s that their companies may have gone too far in creating and marketing products that posed the greatest health concerns.

 


The discussion took place in Pillsbury’s auditorium. The first speaker was a vice president of Kraft named Michael Mudd. “I very much appreciate this opportunity to talk to you about childhood obesity and the growing challenge it presents for us all,” Mudd began. “Let me say right at the start, this is not an easy subject. There are no easy answers — for what the public health community must do to bring this problem under control or for what the industry should do as others seek to hold it accountable for what has happened. But this much is clear: For those of us who’ve looked hard at this issue, whether they’re public health professionals or staff specialists in your own companies, we feel sure that the one thing we shouldn’t do is nothing.”

As he spoke, Mudd clicked through a deck of slides — 114 in all — projected on a large screen behind him. The figures were staggering. More than half of American adults were now considered overweight, with nearly one-quarter of the adult population — 40 million people — clinically defined as obese. Among children, the rates had more than doubled since 1980, and the number of kids considered obese had shot past 12 million. (This was still only 1999; the nation’s obesity rates would climb much higher.) Food manufacturers were now being blamed for the problem from all sides — academia, the Centers for Disease Control and Prevention, the American Heart Association and the American Cancer Society. The secretary of agriculture, over whom the industry had long held sway, had recently called obesity a “national epidemic.”

Mudd then did the unthinkable. He drew a connection to the last thing in the world the C.E.O.’s wanted linked to their products: cigarettes. First came a quote from a Yale University professor of psychology and public health, Kelly Brownell, who was an especially vocal proponent of the view that the processed-food industry should be seen as a public health menace: “As a culture, we’ve become upset by the tobacco companies advertising to children, but we sit idly by while the food companies do the very same thing. And we could make a claim that the toll taken on the public health by a poor diet rivals that taken by tobacco.”

“If anyone in the food industry ever doubted there was a slippery slope out there,” Mudd said, “I imagine they are beginning to experience a distinct sliding sensation right about now.”

Mudd then presented the plan he and others had devised to address the obesity problem. Merely getting the executives to acknowledge some culpability was an important first step, he knew, so his plan would start off with a small but crucial move: the industry should use the expertise of scientists — its own and others — to gain a deeper understanding of what was driving Americans to overeat. Once this was achieved, the effort could unfold on several fronts. To be sure, there would be no getting around the role that packaged foods and drinks play in overconsumption. They would have to pull back on their use of salt, sugar and fat, perhaps by imposing industrywide limits. But it wasn’t just a matter of these three ingredients; the schemes they used to advertise and market their products were critical, too. Mudd proposed creating a “code to guide the nutritional aspects of food marketing, especially to children.”

“We are saying that the industry should make a sincere effort to be part of the solution,” Mudd concluded. “And that by doing so, we can help to defuse the criticism that’s building against us.”

What happened next was not written down. But according to three participants, when Mudd stopped talking, the one C.E.O. whose recent exploits in the grocery store had awed the rest of the industry stood up to speak. His name was Stephen Sanger, and he was also the person — as head of General Mills — who had the most to lose when it came to dealing with obesity. Under his leadership, General Mills had overtaken not just the cereal aisle but other sections of the grocery store. The company’s Yoplait brand had transformed traditional unsweetened breakfast yogurt into a veritable dessert. It now had twice as much sugar per serving as General Mills’ marshmallow cereal Lucky Charms. And yet, because of yogurt’s well-tended image as a wholesome snack, sales of Yoplait were soaring, with annual revenue topping $500 million. Emboldened by the success, the company’s development wing pushed even harder, inventing a Yoplait variation that came in a squeezable tube — perfect for kids. They called it Go-Gurt and rolled it out nationally in the weeks before the C.E.O. meeting. (By year’s end, it would hit $100 million in sales.)

According to the sources I spoke with, Sanger began by reminding the group that consumers were “fickle.” (Sanger declined to be interviewed.) Sometimes they worried about sugar, other times fat. General Mills, he said, acted responsibly to both the public and shareholders by offering products to satisfy dieters and other concerned shoppers, from low sugar to added whole grains. But most often, he said, people bought what they liked, and they liked what tasted good. “Don’t talk to me about nutrition,” he reportedly said, taking on the voice of the typical consumer. “Talk to me about taste, and if this stuff tastes better, don’t run around trying to sell stuff that doesn’t taste good.”

To react to the critics, Sanger said, would jeopardize the sanctity of the recipes that had made his products so successful. General Mills would not pull back. He would push his people onward, and he urged his peers to do the same. Sanger’s response effectively ended the meeting.

“What can I say?” James Behnke told me years later. “It didn’t work. These guys weren’t as receptive as we thought they would be.” Behnke chose his words deliberately. He wanted to be fair. “Sanger was trying to say, ‘Look, we’re not going to screw around with the company jewels here and change the formulations because a bunch of guys in white coats are worried about obesity.’ ”

The meeting was remarkable, first, for the insider admissions of guilt. But I was also struck by how prescient the organizers of the sit-down had been. Today, one in three adults is considered clinically obese, along with one in five kids, and 24 million Americans are afflicted by type 2 diabetes, often caused by poor diet, with another 79 million people having pre-diabetes. Even gout, a painful form of arthritis once known as “the rich man’s disease” for its associations with gluttony, now afflicts eight million Americans.

The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Cheeto puffs. Courtesy of tumblr.[end-div]

2013: Mississippi Officially Abolishes Slavery

The 13th Amendment to the United States Constitution was enacted in December 1865. It abolished slavery.

But it seems that someone in Mississippi did not follow the formal process, so the state’s ratification became official only a couple of weeks ago — 147 years late. Thanks go to two enterprising scholars and the movie Lincoln.

[div class=attrib]From the Guardian:[end-div]

Mississippi has officially ratified the 13th amendment to the US constitution, which abolishes slavery and which was officially noted in the constitution on 6 December 1865. All 50 states have now ratified the amendment.

Mississippi’s tardiness has been put down to an oversight that was only corrected after two academics embarked on research prompted by watching Lincoln, Steven Spielberg’s Oscar-nominated film about president Abraham Lincoln’s efforts to secure the amendment.

Dr Ranjan Batra, a professor in the department of neurobiology and anatomical sciences at the University of Mississippi Medical Center, saw Spielberg’s film and wondered about the implementation of the 13th amendment after the Civil War. He discussed the issue with Ken Sullivan, an anatomical material specialist at UMC, who began to research the matter.

Sullivan, a longtime resident of Mississippi, remembered that a 1995 move to ratify the 13th amendment had passed the state Senate and House. He tracked down a copy of the bill and learned that its last paragraph required the secretary of state to send a copy to the office of the federal register, to officially sign it into law. That copy was never sent.

Sullivan contacted the current Mississippi secretary of state, Delbert Hosemann, who filed the paperwork for the passage of the bill on 30 January. The bill passed on 7 February. Hosemann said the passage of the bill “was long overdue”.

 

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Seal of the State of Mississippi. Courtesy of Wikipedia.[end-div]