Category Archives: Technica

Startup Ideas

For technologists, the barriers to developing a new product have never been lower. The cost of tools to develop, integrate and distribute software apps is, to all intents and purposes, negligible. Of course, most would recognize that development is often the easy part. The real difficulty lies in building an effective and sustainable marketing and communication strategy and getting the product adopted.

The recent headlines about 17-year-old British app developer Nick D’Aloisio selling his Summly app to Yahoo! for the tidy sum of $30 million have lots of young and seasoned developers scratching their heads. After all, if a school kid can do it, why not anybody? Why not me?

Paul Graham may have some of the answers. He sold his first company to Yahoo in 1998 and now runs Y Combinator, a successful startup incubator. We excerpt his recent, observant and insightful essay below.

From Paul Graham:

The way to get startup ideas is not to try to think of startup ideas. It’s to look for problems, preferably problems you have yourself.

The very best startup ideas tend to have three things in common: they’re something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Problems

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

I made it myself. In 1995 I started a company to put art galleries online. But galleries didn’t want to be online. It’s not how the art business works. So why did I spend 6 months working on this stupid idea? Because I didn’t pay attention to users. I invented a model of the world that didn’t correspond to reality, and worked from that. I didn’t notice my model was wrong until I tried to convince users to pay for what we’d built. Even then I took embarrassingly long to catch on. I was attached to my model of the world, and I’d spent a lot of time on the software. They had to want it!

Why do so many founders build things no one wants? Because they begin by trying to think of startup ideas. That m.o. is doubly dangerous: it doesn’t merely yield few good ideas; it yields bad ideas that sound plausible enough to fool you into working on them.

At YC we call these “made-up” or “sitcom” startup ideas. Imagine one of the characters on a TV show was starting a startup. The writers would have to invent something for it to do. But coming up with good startup ideas is hard. It’s not something you can do for the asking. So (unless they got amazingly lucky) the writers would come up with an idea that sounded plausible, but was actually bad.

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

Well

When a startup launches, there have to be at least some users who really need what they’re making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you’re making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can’t expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that’s broad but shallow, or one that’s narrow and deep, like a well.

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they’ll use it even when it’s a crappy version one made by a two-person startup they’ve never heard of? If you can’t answer that, the idea is probably bad.

You don’t need the narrowness of the well per se. It’s depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it’s a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it’s not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

Similarly for Microsoft: Basic for the Altair; Basic for other machines; other languages besides Basic; operating systems; applications; IPO.

Self

How do you tell whether there’s a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can’t. The founders of Airbnb didn’t realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn’t foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That’s probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it’s obvious from the beginning when there’s a path out of the initial niche. And sometimes I can see a path that’s not immediately obvious; that’s one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can’t predict whether there’s a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you’re the right sort of person, you have the right sort of hunches. If you’re at the leading edge of a field that’s changing fast, when you have a hunch that something is worth doing, you’re more likely to be right.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig says:

You want to know how to paint a perfect painting? It’s easy. Make yourself perfect and then just paint naturally.

I’ve wondered about that passage since I read it in high school. I’m not sure how useful his advice is for painting specifically, but it fits this situation well. Empirically, the way to have good startup ideas is to become the sort of person who has them.

Being at the leading edge of a field doesn’t mean you have to be one of the people pushing it forward. You can also be at the leading edge as a user. It was not so much because he was a programmer that Facebook seemed a good idea to Mark Zuckerberg as because he used computers so much. If you’d asked most 40 year olds in 2004 whether they’d like to publish their lives semi-publicly on the Internet, they’d have been horrified at the idea. But Mark already lived online; to him it seemed natural.

Paul Buchheit says that people at the leading edge of a rapidly changing field “live in the future.” Combine that with Pirsig and you get:

Live in the future, then build what’s missing.

That describes the way many if not most of the biggest startups got started. Neither Apple nor Yahoo nor Google nor Facebook were even supposed to be companies at first. They grew out of things their founders built because there seemed a gap in the world.

If you look at the way successful founders have had their ideas, it’s generally the result of some external stimulus hitting a prepared mind. Bill Gates and Paul Allen hear about the Altair and think “I bet we could write a Basic interpreter for it.” Drew Houston realizes he’s forgotten his USB stick and thinks “I really need to make my files live online.” Lots of people heard about the Altair. Lots forgot USB sticks. The reason those stimuli caused those founders to start companies was that their experiences had prepared them to notice the opportunities they represented.

The verb you want to be using with respect to startup ideas is not “think up” but “notice.” At YC we call ideas that grow naturally out of the founders’ own experiences “organic” startup ideas. The most successful startups almost all begin this way.

That may not have been what you wanted to hear. You may have expected recipes for coming up with startup ideas, and instead I’m telling you that the key is to have a mind that’s prepared in the right way. But disappointing though it may be, this is the truth. And it is a recipe of a sort, just one that in the worst case takes a year rather than a weekend.

If you’re not at the leading edge of some rapidly changing field, you can get to one. For example, anyone reasonably smart can probably get to an edge of programming (e.g. building mobile apps) in a year. Since a successful startup will consume at least 3-5 years of your life, a year’s preparation would be a reasonable investment. Especially if you’re also looking for a cofounder.

You don’t have to learn programming to be at the leading edge of a domain that’s changing fast. Other domains change fast. But while learning to hack is not necessary, it is for the foreseeable future sufficient. As Marc Andreessen put it, software is eating the world, and this trend has decades left to run.

Knowing how to hack also means that when you have ideas, you’ll be able to implement them. That’s not absolutely necessary (Jeff Bezos couldn’t) but it’s an advantage. It’s a big advantage, when you’re considering an idea like putting a college facebook online, if instead of merely thinking “That’s an interesting idea,” you can think instead “That’s an interesting idea. I’ll try building an initial version tonight.” It’s even better when you’re both a programmer and the target user, because then the cycle of generating new versions and testing them on users can happen inside one head.

Noticing

Once you’re living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you’re really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won’t be obvious is that they’re startup ideas. So if you want to find startup ideas, don’t merely turn on the filter “What’s missing?” Also turn off every other filter, particularly “Could this be a big company?” There’s plenty of time to apply that test later. But if you’re thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones.

Most things that are missing will take some time to see. You almost have to trick yourself into seeing the ideas around you.

But you know the ideas are out there. This is not one of those problems where there might not be an answer. It’s impossibly unlikely that this is the exact moment when technological progress stops. You can be sure people are going to build things in the next few years that will make you think “What did I do before x?”

And when these problems get solved, they will probably seem flamingly obvious in retrospect. What you need to do is turn off the filters that usually prevent you from seeing them. The most powerful is simply taking the current state of the world for granted. Even the most radically open-minded of us mostly do that. You couldn’t get from your bed to the front door if you stopped to question everything.

But if you’re looking for startup ideas you can sacrifice some of the efficiency of taking the status quo for granted and start to question things. Why is your inbox overflowing? Because you get a lot of email, or because it’s hard to get email out of your inbox? Why do you get so much email? What problems are people trying to solve by sending you email? Are there better ways to solve them? And why is it hard to get emails out of your inbox? Why do you keep emails around after you’ve read them? Is an inbox the optimal tool for that?

Pay particular attention to things that chafe you. The advantage of taking the status quo for granted is not just that it makes life (locally) more efficient, but also that it makes life more tolerable. If you knew about all the things we’ll get in the next 50 years but don’t have yet, you’d find present day life pretty constraining, just as someone from the present would if they were sent back 50 years in a time machine. When something annoys you, it could be because you’re living in the future.

When you find the right sort of problem, you should probably be able to describe it as obvious, at least to you. When we started Viaweb, all the online stores were built by hand, by web designers making individual HTML pages. It was obvious to us as programmers that these sites would have to be generated by software.

Which means, strangely enough, that coming up with startup ideas is a question of seeing the obvious. That suggests how weird this process is: you’re trying to see things that are obvious, and yet that you hadn’t seen.

Since what you need to do here is loosen up your own mind, it may be best not to make too much of a direct frontal attack on the problem—i.e. to sit down and try to think of ideas. The best plan may be just to keep a background process running, looking for things that seem to be missing. Work on hard problems, driven mainly by curiosity, but have a second self watching over your shoulder, taking note of gaps and anomalies.

Give yourself some time. You have a lot of control over the rate at which you turn yours into a prepared mind, but you have less control over the stimuli that spark ideas when they hit it. If Bill Gates and Paul Allen had constrained themselves to come up with a startup idea in one month, what if they’d chosen a month before the Altair appeared? They probably would have worked on a less promising idea. Drew Houston did work on a less promising idea before Dropbox: an SAT prep startup. But Dropbox was a much better idea, both in the absolute sense and also as a match for his skills.

A good way to trick yourself into noticing ideas is to work on projects that seem like they’d be cool. If you do that, you’ll naturally tend to build things that are missing. It wouldn’t seem as interesting to build something that already existed.

Just as trying to think up startup ideas tends to produce bad ones, working on things that could be dismissed as “toys” often produces good ones. When something is described as a toy, that means it has everything an idea needs except being important. It’s cool; users love it; it just doesn’t matter. But if you’re living in the future and you build something cool that users love, it may matter more than outsiders think. Microcomputers seemed like toys when Apple and Microsoft started working on them. I’m old enough to remember that era; the usual term for people with their own microcomputers was “hobbyists.” BackRub seemed like an inconsequential science project. The Facebook was just a way for undergrads to stalk one another.

At YC we’re excited when we meet startups working on things that we could imagine know-it-alls on forums dismissing as toys. To us that’s positive evidence an idea is good.

If you can afford to take a long view (and arguably you can’t afford not to), you can turn “Live in the future and build what’s missing” into something even better:

Live in the future and build what seems interesting.

School

That’s what I’d advise college students to do, rather than trying to learn about “entrepreneurship.” “Entrepreneurship” is something you learn best by doing it. The examples of the most successful founders make that clear. What you should be spending your time on in college is ratcheting yourself into the future. College is an incomparable opportunity to do that. What a waste to sacrifice an opportunity to solve the hard part of starting a startup—becoming the sort of person who can have organic startup ideas—by spending time learning about the easy part. Especially since you won’t even really learn about it, any more than you’d learn about sex in a class. All you’ll learn is the words for things.

The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you’ll probably see problems that software could solve. In fact, you’re doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don’t even know what the status quo is to take it for granted.

So if you’re a CS major and you want to start a startup, instead of taking a class on entrepreneurship you’re better off taking a class on, say, genetics. Or better still, go work for a biotech company. CS majors normally get summer jobs at computer hardware or software companies. But if you want to find startup ideas, you might do better to get a summer job in some unrelated field.

Or don’t take any extra classes, and just build things. It’s no coincidence that Microsoft and Facebook both got started in January. At Harvard that is (or was) Reading Period, when students have no classes to attend because they’re supposed to be studying for finals.

But don’t feel like you have to build things that will become startups. That’s premature optimization. Just build things. Preferably with other students. It’s not just the classes that make a university such a good place to crank oneself into the future. You’re also surrounded by other people trying to do the same thing. If you work together with them on projects, you’ll end up producing not just organic ideas, but organic ideas with organic founding teams—and that, empirically, is the best combination.

Beware of research. If an undergrad writes something all his friends start using, it’s quite likely to represent a good startup idea. Whereas a PhD dissertation is extremely unlikely to. For some reason, the more a project has to count as research, the less likely it is to be something that could be turned into a startup. [10] I think the reason is that the subset of ideas that count as research is so narrow that it’s unlikely that a project that satisfied that constraint would also satisfy the orthogonal constraint of solving users’ problems. Whereas when students (or professors) build something as a side-project, they automatically gravitate toward solving users’ problems—perhaps even with an additional energy that comes from being freed from the constraints of research.

Competition

Because a good idea should seem obvious, when you have one you’ll tend to feel that you’re late. Don’t let that deter you. Worrying that you’re late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you’re probably not too late. It’s exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don’t discard the idea.

If you’re uncertain, ask users. The question of whether you’re too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

The question then is whether that beachhead is big enough. Or more importantly, who’s in it: if the beachhead consists of people doing something lots more people will be doing in the future, then it’s probably big enough no matter how small it is. For example, if you’re building something differentiated from competitors by the fact that it works on phones, but it only works on the newest phones, that’s probably a big enough beachhead.

Err on the side of doing things where you’ll face competitors. Inexperienced founders usually give competitors more credit than they deserve. Whether you succeed depends far more on you than on your competitors. So better a good idea with competitors than a bad one without.

You don’t need to worry about entering a “crowded market” so long as you have a thesis about what everyone else in it is overlooking. In fact that’s a very promising starting point. Google was that type of idea. Your thesis has to be more precise than “we’re going to make an x that doesn’t suck” though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn’t have the courage of their convictions, and that your plan is what they’d have done if they’d followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there’s demand and that none of the existing solutions are good enough. A startup can’t hope to enter a market that’s obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Filters

There are two more filters you’ll need to turn off if you want to notice startup ideas: the unsexy filter and the schlep filter.

Most programmers wish they could start a startup by just writing some brilliant code, pushing it to a server, and having users pay them lots of money. They’d prefer not to deal with tedious problems or get involved in messy ways with the real world. Which is a reasonable preference, because such things slow you down. But this preference is so widespread that the space of convenient startup ideas has been stripped pretty clean. If you let your mind wander a few blocks down the street to the messy, tedious ideas, you’ll find valuable ones just sitting there waiting to be implemented.

The schlep filter is so dangerous that I wrote a separate essay about the condition it induces, which I called schlep blindness. I gave Stripe as an example of a startup that benefited from turning off this filter, and a pretty striking example it is. Thousands of programmers were in a position to see this idea; thousands of programmers knew how painful it was to process payments before Stripe. But when they looked for startup ideas they didn’t see this one, because unconsciously they shrank from having to deal with payments. And dealing with payments is a schlep for Stripe, but not an intolerable one. In fact they might have had net less pain; because the fear of dealing with payments kept most people away from this idea, Stripe has had comparatively smooth sailing in other areas that are sometimes painful, like user acquisition. They didn’t have to try very hard to make themselves heard by users, because users were desperately waiting for what they were building.

The unsexy filter is similar to the schlep filter, except it keeps you from working on problems you despise rather than ones you fear. We overcame this one to work on Viaweb. There were interesting things about the architecture of our software, but we weren’t interested in ecommerce per se. We could see the problem was one that needed to be solved though.

Turning off the schlep filter is more important than turning off the unsexy filter, because the schlep filter is more likely to be an illusion. And even to the degree it isn’t, it’s a worse form of self-indulgence. Starting a successful startup is going to be fairly laborious no matter what. Even if the product doesn’t entail a lot of schleps, you’ll still have plenty dealing with investors, hiring and firing people, and so on. So if there’s some idea you think would be cool but you’re kept away from by fear of the schleps involved, don’t worry: any sufficiently good idea will have as many.

The unsexy filter, while still a source of error, is not as entirely useless as the schlep filter. If you’re at the leading edge of a field that’s changing rapidly, your ideas about what’s sexy will be somewhat correlated with what’s valuable in practice. Particularly as you get older and more experienced. Plus if you find an idea sexy, you’ll work on it more enthusiastically.

Recipes

While the best way to discover startup ideas is to become the sort of person who has them and then build whatever interests you, sometimes you don’t have that luxury. Sometimes you need an idea now. For example, if you’re working on a startup and your initial idea turns out to be bad.

For the rest of this essay I’ll talk about tricks for coming up with startup ideas on demand. Although empirically you’re better off using the organic strategy, you could succeed this way. You just have to be more disciplined. When you use the organic method, you don’t even notice an idea unless it’s evidence that something is truly missing. But when you make a conscious effort to think of startup ideas, you have to replace this natural constraint with self-discipline. You’ll see a lot more ideas, most of them bad, so you need to be able to filter them.

One of the biggest dangers of not using the organic method is the example of the organic method. Organic ideas feel like inspirations. There are a lot of stories about successful startups that began when the founders had what seemed a crazy idea but “just knew” it was promising. When you feel that about an idea you’ve had while trying to come up with startup ideas, you’re probably mistaken.

When searching for ideas, look in areas where you have some expertise. If you’re a database expert, don’t build a chat app for teenagers (unless you’re also a teenager). Maybe it’s a good idea, but you can’t trust your judgment about that, so ignore it. There have to be other ideas that involve databases, and whose quality you can judge. Do you find it hard to come up with good ideas involving databases? That’s because your expertise raises your standards. Your ideas about chat apps are just as bad, but you’re giving yourself a Dunning-Kruger pass in that domain.

The place to start looking for ideas is things you need. There must be things you need.

One good trick is to ask yourself whether in your previous job you ever found yourself saying “Why doesn’t someone make x? If someone made x we’d buy it in a second.” If you can think of any x people said that about, you probably have an idea. You know there’s demand, and people don’t say that about things that are impossible to build.

More generally, try asking yourself whether there’s something unusual about you that makes your needs different from most other people’s. You’re probably not the only one. It’s especially good if you’re different in a way people will increasingly be.

If you’re changing ideas, one unusual thing about you is the idea you’d previously been working on. Did you discover any needs while working on it? Several well-known startups began this way. Hotmail began as something its founders wrote to talk about their previous startup idea while they were working at their day jobs. [15]

A particularly promising way to be unusual is to be young. Some of the most valuable new ideas take root first among people in their teens and early twenties. And while young founders are at a disadvantage in some respects, they’re the only ones who really understand their peers. It would have been very hard for someone who wasn’t a college student to start Facebook. So if you’re a young founder (under 23 say), are there things you and your friends would like to do that current technology won’t let you?

The next best thing to an unmet need of your own is an unmet need of someone else. Try talking to everyone you can about the gaps they find in the world. What’s missing? What would they like to do that they can’t? What’s tedious or annoying, particularly in their work? Let the conversation get general; don’t be trying too hard to find startup ideas. You’re just looking for something to spark a thought. Maybe you’ll notice a problem they didn’t consciously realize they had, because you know how to solve it.

When you find an unmet need that isn’t your own, it may be somewhat blurry at first. The person who needs something may not know exactly what they need. In that case I often recommend that founders act like consultants—that they do what they’d do if they’d been retained to solve the problems of this one user. People’s problems are similar enough that nearly all the code you write this way will be reusable, and whatever isn’t will be a small price to start out certain that you’ve reached the bottom of the well.

One way to ensure you do a good job solving other people’s problems is to make them your own. When Rajat Suri of E la Carte decided to write software for restaurants, he got a job as a waiter to learn how restaurants worked. That may seem like taking things to extremes, but startups are extreme. We love it when founders do such things.

In fact, one strategy I recommend to people who need a new idea is not merely to turn off their schlep and unsexy filters, but to seek out ideas that are unsexy or involve schleps. Don’t try to start Twitter. Those ideas are so rare that you can’t find them by looking for them. Make something unsexy that people will pay you for.

A good trick for bypassing the schlep and to some extent the unsexy filter is to ask what you wish someone else would build, so that you could use it. What would you pay for right now?

Since startups often garbage-collect broken companies and industries, it can be a good trick to look for those that are dying, or deserve to, and try to imagine what kind of company would profit from their demise. For example, journalism is in free fall at the moment. But there may still be money to be made from something like journalism. What sort of company might cause people in the future to say “this replaced journalism” on some axis?

But imagine asking that in the future, not now. When one company or industry replaces another, it usually comes in from the side. So don’t look for a replacement for x; look for something that people will later say turned out to be a replacement for x. And be imaginative about the axis along which the replacement occurs. Traditional journalism, for example, is a way for readers to get information and to kill time, a way for writers to make money and to get attention, and a vehicle for several different types of advertising. It could be replaced on any of these axes (it has already started to be on most).

When startups consume incumbents, they usually start by serving some small but important market that the big players ignore. It’s particularly good if there’s an admixture of disdain in the big players’ attitude, because that often misleads them. For example, after Steve Wozniak built the computer that became the Apple I, he felt obliged to give his then-employer Hewlett-Packard the option to produce it. Fortunately for him, they turned it down, and one of the reasons they did was that it used a TV for a monitor, which seemed intolerably déclassé to a high-end hardware company like HP was at the time.

Are there groups of scruffy but sophisticated users like the early microcomputer “hobbyists” that are currently being ignored by the big players? A startup with its sights set on bigger things can often capture a small market easily by expending an effort that wouldn’t be justified by that market alone.

Similarly, since the most successful startups generally ride some wave bigger than themselves, it could be a good trick to look for waves and ask how one could benefit from them. The prices of gene sequencing and 3D printing are both experiencing Moore’s Law-like declines. What new things will we be able to do in the new world we’ll have in a few years? What are we unconsciously ruling out as impossible that will soon be possible?

Organic

But talking about looking explicitly for waves makes it clear that such recipes are plan B for getting startup ideas. Looking for waves is essentially a way to simulate the organic method. If you’re at the leading edge of some rapidly changing field, you don’t have to look for waves; you are the wave.

Finding startup ideas is a subtle business, and that’s why most people who try fail so miserably. It doesn’t work well simply to try to think of startup ideas. If you do that, you get bad ones that sound dangerously plausible. The best approach is more indirect: if you have the right sort of background, good startup ideas will seem obvious to you. But even then, not immediately. It takes time to come across situations where you notice something missing. And often these gaps won’t seem to be ideas for companies, just things that would be interesting to build. Which is why it’s good to have the time and the inclination to build things just because they’re interesting.

Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.

Read the entire article after the jump.

Image: Nick D’Aloisio with his Summly app. Courtesy of Telegraph.

You Are a Google Datapoint

At first glance Google’s aim to make all known information accessible and searchable seems to be a fundamentally worthy goal, and in keeping with its “Don’t Be Evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.

However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.

Google continues to extend its technology boundaries and its vast indexed database of information. Now with the introduction of Google Glass the company extends its domain to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would allow a user a fair degree of control over how and what data is collected and shared with Google, they do nothing to address those being observed.

So beware: the next time you are sitting in a Starbucks, shopping in a mall or riding the subway, you may be recorded and your digital essence distributed over the internet. Perhaps someone somewhere will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization is no less troubling. Remember, to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).

From the Telegraph:

In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.

So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?

There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.

Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.

The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.

Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.

Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.

More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.

It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.

They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.

The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.

Read the entire article after the jump.

Image: Google’s Sergey Brin wearing Google Glass. Courtesy of CBS News.

Electronic Tattoos

Forget wearable electronics, like Google Glass. That’s so, well, 2012. Welcome to the new world of epidermal electronics — electronic tattoos that contain circuits and sensors printed directly on to the body.

From MIT Technology Review:

Taking advantage of recent advances in flexible electronics, researchers have devised a way to “print” devices directly onto the skin so people can wear them for an extended period while performing normal daily activities. Such systems could be used to track health and monitor healing near the skin’s surface, as in the case of surgical wounds.

So-called “epidermal electronics” were demonstrated previously in research from the lab of John Rogers, a materials scientist at the University of Illinois at Urbana-Champaign; the devices consist of ultrathin electrodes, electronics, sensors, and wireless power and communication systems. In theory, they could attach to the skin and record and transmit electrophysiological measurements for medical purposes. These early versions of the technology, which were designed to be applied to a thin, soft elastomer backing, were “fine for an office environment,” says Rogers, “but if you wanted to go swimming or take a shower they weren’t able to hold up.” Now, Rogers and his coworkers have figured out how to print the electronics right on the skin, making the device more durable and rugged.

“What we’ve found is that you don’t even need the elastomer backing,” Rogers says. “You can use a rubber stamp to just deliver the ultrathin mesh electronics directly to the surface of the skin.” The researchers also found that they could use commercially available “spray-on bandage” products to add a thin protective layer and bond the system to the skin in a “very robust way,” he says.

Eliminating the elastomer backing makes the device one-thirtieth as thick, and thus “more conformal to the kind of roughness that’s present naturally on the surface of the skin,” says Rogers. It can be worn for up to two weeks before the skin’s natural exfoliation process causes it to flake off.

During the two weeks that it’s attached, the device can measure things like temperature, strain, and the hydration state of the skin, all of which are useful in tracking general health and wellness. One specific application could be to monitor wound healing: if a doctor or nurse attached the system near a surgical wound before the patient left the hospital, it could take measurements and transmit the information wirelessly to the health-care providers.

Read the entire article after the jump.

Image: Epidermal electronic sensor printed on the skin. Courtesy of MIT.

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly abstruse domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, the etymology of polysyllabic words, and so it goes.

So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer or smartphone or spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.

Yet, all this information will not directly make us any smarter; it is not applied knowledge nor is it experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly — freed from our need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

The Police Drones Next Door

You might expect to find police drones in the pages of a science fiction novel by Philip K. Dick or Iain M. Banks. But by 2015, citizens of the United States may well see these unmanned flying machines patrolling the skies over the homeland. The U.S. government recently pledged to loosen Federal Aviation Administration (FAA) restrictions, a change that would allow local law enforcement agencies to use drones in just a few short years. Soon the least of your worries will be traffic signal cameras and the local police officer armed with a radar gun. Our home-grown drones are likely to be deployed first for surveillance, but armaments will undoubtedly follow. Hellfire missiles over Helena, Montana, anyone?

From National Geographic:

At the edge of a stubbly, dried-out alfalfa field outside Grand Junction, Colorado, Deputy Sheriff Derek Johnson, a stocky young man with a buzz cut, squints at a speck crawling across the brilliant, hazy sky. It’s not a vulture or crow but a Falcon—a new brand of unmanned aerial vehicle, or drone, and Johnson is flying it. The sheriff’s office here in Mesa County, a plateau of farms and ranches corralled by bone-hued mountains, is weighing the Falcon’s potential for spotting lost hikers and criminals on the lam. A laptop on a table in front of Johnson shows the drone’s flickering images of a nearby highway.

Standing behind Johnson, watching him watch the Falcon, is its designer, Chris Miser. Rock-jawed, arms crossed, sunglasses pushed atop his shaved head, Miser is a former Air Force captain who worked on military drones before quitting in 2007 to found his own company in Aurora, Colorado. The Falcon has an eight-foot wingspan but weighs just 9.5 pounds. Powered by an electric motor, it carries two swiveling cameras, visible and infrared, and a GPS-guided autopilot. Sophisticated enough that it can’t be exported without a U.S. government license, the Falcon is roughly comparable, Miser says, to the Raven, a hand-launched military drone—but much cheaper. He plans to sell two drones and support equipment for about the price of a squad car.

A law signed by President Barack Obama in February 2012 directs the Federal Aviation Administration (FAA) to throw American airspace wide open to drones by September 30, 2015. But for now Mesa County, with its empty skies, is one of only a few jurisdictions with an FAA permit to fly one. The sheriff’s office has a three-foot-wide helicopter drone called a Draganflyer, which stays aloft for just 20 minutes.

The Falcon can fly for an hour, and it’s easy to operate. “You just put in the coordinates, and it flies itself,” says Benjamin Miller, who manages the unmanned aircraft program for the sheriff’s office. To navigate, Johnson types the desired altitude and airspeed into the laptop and clicks targets on a digital map; the autopilot does the rest. To launch the Falcon, you simply hurl it into the air. An accelerometer switches on the propeller only after the bird has taken flight, so it won’t slice the hand that launches it.

The stench from a nearby chicken-processing plant wafts over the alfalfa field. “Let’s go ahead and tell it to land,” Miser says to Johnson. After the deputy sheriff clicks on the laptop, the Falcon swoops lower, releases a neon orange parachute, and drifts gently to the ground, just yards from the spot Johnson clicked on. “The Raven can’t do that,” Miser says proudly.

Offspring of 9/11

A dozen years ago only two communities cared much about drones. One was hobbyists who flew radio-controlled planes and choppers for fun. The other was the military, which carried out surveillance missions with unmanned aircraft like the General Atomics Predator.

Then came 9/11, followed by the U.S. invasions of Afghanistan and Iraq, and drones rapidly became an essential tool of the U.S. armed forces. The Pentagon armed the Predator and a larger unmanned surveillance plane, the Reaper, with missiles, so that their operators—sitting in offices in places like Nevada or New York—could destroy as well as spy on targets thousands of miles away. Aerospace firms churned out a host of smaller drones with increasingly clever computer chips and keen sensors—cameras but also instruments that measure airborne chemicals, pathogens, radioactive materials.

The U.S. has deployed more than 11,000 military drones, up from fewer than 200 in 2002. They carry out a wide variety of missions while saving money and American lives. Within a generation they could replace most manned military aircraft, says John Pike, a defense expert at the think tank GlobalSecurity.org. Pike suspects that the F-35 Lightning II, now under development by Lockheed Martin, might be “the last fighter with an ejector seat, and might get converted into a drone itself.”

At least 50 other countries have drones, and some, notably China, Israel, and Iran, have their own manufacturers. Aviation firms—as well as university and government researchers—are designing a flock of next-generation aircraft, ranging in size from robotic moths and hummingbirds to Boeing’s Phantom Eye, a hydrogen-fueled behemoth with a 150-foot wingspan that can cruise at 65,000 feet for up to four days.

More than a thousand companies, from tiny start-ups like Miser’s to major defense contractors, are now in the drone business—and some are trying to steer drones into the civilian world. Predators already help Customs and Border Protection agents spot smugglers and illegal immigrants sneaking into the U.S. NASA-operated Global Hawks record atmospheric data and peer into hurricanes. Drones have helped scientists gather data on volcanoes in Costa Rica, archaeological sites in Russia and Peru, and flooding in North Dakota.

So far only a dozen police departments, including ones in Miami and Seattle, have applied to the FAA for permits to fly drones. But drone advocates—who generally prefer the term UAV, for unmanned aerial vehicle—say all 18,000 law enforcement agencies in the U.S. are potential customers. They hope UAVs will soon become essential too for agriculture (checking and spraying crops, finding lost cattle), journalism (scoping out public events or celebrity backyards), weather forecasting, traffic control. “The sky’s the limit, pun intended,” says Bill Borgia, an engineer at Lockheed Martin. “Once we get UAVs in the hands of potential users, they’ll think of lots of cool applications.”

The biggest obstacle, advocates say, is current FAA rules, which tightly restrict drone flights by private companies and government agencies (though not by individual hobbyists). Even with an FAA permit, operators can’t fly UAVs above 400 feet or near airports or other zones with heavy air traffic, and they must maintain visual contact with the drones. All that may change, though, under the new law, which requires the FAA to allow the “safe integration” of UAVs into U.S. airspace.

If the FAA relaxes its rules, says Mark Brown, the civilian market for drones—and especially small, low-cost, tactical drones—could soon dwarf military sales, which in 2011 totaled more than three billion dollars. Brown, a former astronaut who is now an aerospace consultant in Dayton, Ohio, helps bring drone manufacturers and potential customers together. The success of military UAVs, he contends, has created “an appetite for more, more, more!” Brown’s PowerPoint presentation is called “On the Threshold of a Dream.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Unmanned drone used to patrol the U.S.-Canadian border. (U.S. Customs and Border Protection/AP).[end-div]

First, Build A Blue Box; Second, Build Apple

Edward Tufte built the first little blue box in 1962. The blue box contained home-made circuitry and a tone generator that could place free calls over the phone network to anywhere in the world.

This electronic revelation spawned groups of “phone phreaks” (hackers) who would build their own blue boxes to fight Ma Bell (AT&T), illegally of course. The phreaks assumed suitably disguised names, such as Captain Crunch and Cheshire Cat, to hide from the long arm of the FBI.

This later caught the attention of a pair of new recruits to the subversive cause, Berkeley Blue and Oaf Tobar, who would go on to found Apple under their real names: Steve Wozniak and Steve Jobs. The rest, as the saying goes, is history.

Put it down to curiosity, an anti-authoritarian streak and a restless drive to improve.

[div class=attrib]From Slate:[end-div]

One of the most heartfelt—and unexpected—remembrances of Aaron Swartz, who committed suicide last month at the age of 26, came from Yale professor Edward Tufte. During a speech at a recent memorial service for Swartz in New York City, Tufte reflected on his secret past as a hacker—50 years ago.

“In 1962, my housemate and I invented the first blue box,” Tufte said to the crowd. “That’s a device that allows for undetectable, unbillable long distance telephone calls. We played around with it and the end of our research came when we completed what we thought was the longest long-distance phone call ever made, which was from Palo Alto to New York … via Hawaii.”

Tufte was never busted for his youthful forays into phone hacking, also known as phone phreaking. He rose to become one of Yale’s most famous professors, a world authority on data visualization and information design. One can’t help but think that Swartz might have followed in the distinguished footsteps of a professor like Tufte, had he lived.

Swartz faced 13 felony charges and up to 35 years in prison for downloading 4.8 million academic articles from the digital repository JSTOR, using MIT’s network. In the face of the impending trial, Swartz—a brilliant young hacker and activist who was a key force behind many worthy projects, including the RSS 1.0 specification and Creative Commons—killed himself on Jan. 11.

“Aaron’s unique quality was that he was marvelously and vigorously different,” Tufte said, a tear in his eye, as he closed his speech. “There is a scarcity of that. Perhaps we can all be a little more different, too.”

Swartz was too young to be a phone phreak like Tufte. In our present era of Skype and smartphones, the old days of outsmarting Ma Bell with 2600 Hertz sine wave tones and homemade “blue boxes” seem quaint, charmingly retro. But there is a thread that connects these old-school phone hackers to Swartz—common traits that Tufte recognized. It’s not just that, like Swartz, many phone phreaks faced trumped-up charges (wire fraud, in their cases). The best of these proto-computer hackers possessed Swartz’s enterprising spirit, his penchant for questioning authority, and his drive to figure out how a complicated system works from the inside. They were nerds, they were misfits; like Swartz, they were a little more different.

In his new history of phone phreaking, Exploding the Phone, engineer and consultant Phil Lapsley details the story of the 1960s and 1970s culture of hackers who, like Tufte, devised numerous ways to outwit the phone system. The foreword of the book is by Steve Wozniak, co-founder of Apple—and, as it happens, an old-school hacker himself. Before Wozniak and Steve Jobs built Apple in the 1970s, they were phone phreaks. (Wozniak’s hacker name was Berkeley Blue; Jobs’ handle was Oaf Tobar.)

In 1971, Esquire published an article about phone phreaking called “Secrets of the Little Blue Box,” by Ron Rosenbaum (a Slate columnist). It chronicled a ragtag crew sporting names like Captain Crunch and the Cheshire Cat, who prided themselves on using ingenuity and rudimentary electronics to outsmart the many-tentacled monstrosities of Ma Bell and the FBI. A blind 22-year-old named Joe Engressia was one of the scene’s heroes; according to Rosenbaum, Engressia could whistle at exactly the right frequency to place a free phone call.

Wozniak, age 20 in ’71, devoured the now-legendary article. “You know how some articles just grab you from the first paragraph?” he wrote in his 2006 memoir, iWoz, quoted in Lapsley’s book. “Well, it was one of those articles. It was the most amazing article I’d ever read!” Wozniak was entranced by the way these hackers seemed so much like himself. “I could tell that the characters being described were really tech people, much like me, people who liked to design things just to see what was possible, and for no other reason, really.” Building a blue box—a device that could generate the same tones that the phone system used to route phone calls, in a certain sequence—required technical smarts, and Wozniak loved nerdy challenges. Plus, the payoff—and the potential for epic pranks—was irresistible. (Wozniak once used a blue box to call the Vatican; impersonating Henry Kissinger he asked to talk to the pope.)

Wozniak immediately called Jobs, who was then a 17-year-old senior in high school. The friends drove to the technical library at Stanford’s Linear Accelerator Center to find a phone manual that listed tone frequencies. That same day, as Lapsley details in the book, Wozniak and Jobs bought analog tone generator kits, but were soon frustrated that the generators weren’t good enough for really high-quality phone phreaking.

Wozniak had a better, geekier idea: They needed to build their own blue boxes, but make them with digital circuits, which were more precise and easier to control than the usual analog ones. Wozniak and Jobs didn’t just build one blue box—they went on to build dozens of them, which they sold for about $170 apiece. In a way, their sophisticated, compact design foreshadowed the Apple products to come. Their digital circuitry incorporated several smart tricks, including a method to make the battery last longer. “I have never designed a circuit I was prouder of,” Wozniak says.
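For readers curious what those tones actually were, here is a minimal sketch in Python of the kind of audio a blue box generated. Only the 2600 Hz tone is taken from the article; the dual-tone frequencies and durations below are illustrative assumptions, not the real signalling specification.

```python
# A minimal sketch (not a working phreaking tool): synthesize the sort of
# audio tones described above and write them to a WAV file.
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def sine_mix(freqs, seconds, amplitude=0.4):
    """Return raw 16-bit PCM samples of the summed sine waves in `freqs`."""
    n = int(SAMPLE_RATE * seconds)
    frames = bytearray()
    for i in range(n):
        t = i / SAMPLE_RATE
        sample = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
        frames += struct.pack("<h", int(amplitude * 32767 * sample))
    return bytes(frames)

def write_wav(path, chunks):
    """Concatenate PCM chunks into a mono 16-bit WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        for chunk in chunks:
            w.writeframes(chunk)

if __name__ == "__main__":
    chunks = [
        sine_mix([2600], 1.0),             # the 2600 Hz tone mentioned above
        sine_mix([2600], 0.2, amplitude=0),  # short silence
        sine_mix([700, 1100], 0.1),        # illustrative dual tone (frequencies assumed)
    ]
    write_wav("tones.wav", chunks)
```

Running it writes a short tones.wav file; a real blue box, of course, produced such tones with analog or digital circuitry rather than software.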

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Exploding the Phone by Phil Lapsley, book cover. Courtesy of Barnes & Noble.[end-div]

Shakespearian Sonnets Now Available on DNA

Shakespeare, meet thy DNA. The most famous literary figure in the English language had a recent rendezvous with that most famous and studied of molecules. Together, chemists, cell biologists, geneticists and computer scientists are doing some amazing things — storing information in the sequence of nucleotide bases along the DNA molecule.

[div class=attrib]From ars technica:[end-div]

It’s easy to get excited about the idea of encoding information in single molecules, which seems to be the ultimate end of the miniaturization that has been driving the electronics industry. But it’s also easy to forget that we’ve been beaten there—by a few billion years. The chemical information present in biomolecules was critical to the origin of life and probably dates back to whatever interesting chemical reactions preceded it.

It’s only within the past few decades, however, that humans have learned to speak DNA. Even then, it took a while to develop the technology needed to synthesize and determine the sequence of large populations of molecules. But we’re there now, and people have started experimenting with putting binary data in biological form. Now, a new study has confirmed the flexibility of the approach by encoding everything from an MP3 to the decoding algorithm into fragments of DNA. The cost analysis done by the authors suggests that the technology may soon be suitable for decade-scale storage, provided current trends continue.

Trinary encoding

Computer data is in binary, while each location in a DNA molecule can hold any one of four bases (A, T, C, and G). Rather than using all that extra information capacity, however, the authors used it to avoid a technical problem. Stretches of a single type of base (say, TTTTT) are often not sequenced properly by current techniques—in fact, this was the biggest source of errors in the previous DNA data storage effort. So for this new encoding, they used one of the bases to break up long runs of any of the other three.

(To explain how this works practically, let’s say the A, T, and C encoded information, while G represents “more of the same.” If you had a run of four A’s, you could represent it as AAGA. But since the G doesn’t encode for anything in particular, TTGT can be used to represent four T’s. The only thing that matters is that there are no more than two identical bases in a row.)

That leaves three bases to encode information, so the authors converted their information into trinary. In all, they encoded a large number of works: all 154 Shakespeare sonnets, a PDF of a scientific paper, a photograph of the lab some of them work in, and an MP3 of part of Martin Luther King’s “I have a dream” speech. For good measure, they also threw in the algorithm they use for converting binary data into trinary.
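To make the scheme concrete, here is a toy Python sketch of the two steps just described: converting bytes to trinary, then mapping trits to bases so that no base ever appears more than twice in a row. It follows the simplified description above (A, T and C carry information, G means “more of the same”), not the researchers’ actual published code.

```python
def bytes_to_trits(data: bytes):
    """Convert each byte to base-3 digits (six trits per byte, since 3**6 > 255)."""
    trits = []
    for byte in data:
        digits = []
        for _ in range(6):
            digits.append(byte % 3)
            byte //= 3
        trits.extend(reversed(digits))
    return trits

def trits_to_dna(trits):
    """Map trits 0/1/2 to A/T/C; emit G ('more of the same') whenever the
    next base would create a run of three identical bases."""
    base_for = "ATC"
    out = []
    for t in trits:
        b = base_for[t]
        if len(out) >= 2 and out[-1] == b and out[-2] == b:
            out.append("G")   # G repeats the previous information base
        else:
            out.append(b)
    return "".join(out)

def dna_to_trits(seq):
    """Invert trits_to_dna: each G stands for the base that precedes it."""
    trit_for = {"A": 0, "T": 1, "C": 2}
    trits, prev = [], None
    for b in seq:
        if b == "G":
            b = prev
        trits.append(trit_for[b])
        prev = b
    return trits

# Round trip on a tiny example.
msg = b"Shall I compare thee"
encoded = trits_to_dna(bytes_to_trits(msg))
assert dna_to_trits(encoded) == bytes_to_trits(msg)
```

The assertion at the end simply checks that the toy decoder recovers the original trits.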

Once in trinary, the results were encoded into the error-avoiding DNA code described above. The resulting sequence was then broken into chunks that were easy to synthesize. Each chunk came with parity information (for error correction), a short file ID, and some data that indicates the offset within the file (so, for example, that the sequence holds digits 500-600). To provide an added level of data security, 100-base-long DNA inserts were staggered by 25 bases so that consecutive fragments had a 75-base overlap. Thus, many sections of the file were carried by four different DNA molecules.
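The staggered layout is easy to picture in code. A sketch, again illustrative only, that cuts a long sequence into 100-base fragments stepped by 25 bases (the file IDs, offsets and parity fields described above are omitted):

```python
def overlapping_fragments(seq, length=100, step=25):
    """Split a long DNA string into fragments of `length` bases, each starting
    `step` bases after the previous one, so consecutive fragments share a
    75-base overlap and most positions are covered by four fragments."""
    return [seq[i:i + length] for i in range(0, max(1, len(seq) - length + 1), step)]

fragments = overlapping_fragments("ACGT" * 100)   # a 400-base toy sequence
print(len(fragments), len(fragments[0]))          # 13 fragments of 100 bases
```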

And it all worked brilliantly—mostly. For most of the files, the authors’ sequencing and analysis protocol could reconstruct an error-free version of the file without any intervention. One, however, ended up with two 25-base-long gaps, presumably resulting from a particular sequence that is very difficult to synthesize. Based on parity and other data, they were able to reconstruct the contents of the gaps, but understanding why things went wrong in the first place would be critical to understanding how well suited this method is to long-term archiving of data.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Title page of Shakespeare’s Sonnets (1609). Courtesy of Wikipedia / Public Domain.[end-div]

Your City as an Information Warehouse

Big data keeps getting bigger and computers keep getting faster. Some theorists believe that the universe is a giant computer or a computer simulation; that principles of information science govern the cosmos. While this notion is one of the most recent radical ideas to explain our existence, there is no doubt that information is our future. Data surrounds us; we are becoming data points, and our cities are information-rich databases.

[div class=attrib]From the Economist:[end-div]

IN 1995 GEORGE GILDER, an American writer, declared that “cities are leftover baggage from the industrial era.” Electronic communications would become so easy and universal that people and businesses would have no need to be near one another. Humanity, Mr Gilder thought, was “headed for the death of cities”.

It hasn’t turned out that way. People are still flocking to cities, especially in developing countries. Cisco’s Mr Elfrink reckons that in the next decade 100 cities, mainly in Asia, will reach a population of more than 1m. In rich countries, to be sure, some cities are sad shadows of their old selves (Detroit, New Orleans), but plenty are thriving. In Silicon Valley and the newer tech hubs what Edward Glaeser, a Harvard economist, calls “the urban ability to create collaborative brilliance” is alive and well.

Cheap and easy electronic communication has probably helped rather than hindered this. First, connectivity is usually better in cities than in the countryside, because it is more lucrative to build telecoms networks for dense populations than for sparse ones. Second, electronic chatter may reinforce rather than replace the face-to-face kind. In his 2011 book, “Triumph of the City”, Mr Glaeser theorises that this may be an example of what economists call “Jevons’s paradox”. In the 19th century the invention of more efficient steam engines boosted rather than cut the consumption of coal, because they made energy cheaper across the board. In the same way, cheap electronic communication may have made modern economies more “relationship-intensive”, requiring more contact of all kinds.

Recent research by Carlo Ratti, director of the SENSEable City Laboratory at the Massachusetts Institute of Technology, and colleagues, suggests there is something to this. The study, based on the geographical pattern of 1m mobile-phone calls in Portugal, found that calls between phones far apart (a first contact, perhaps) are often followed by a flurry within a small area (just before a meeting).

Data deluge

A third factor is becoming increasingly important: the production of huge quantities of data by connected devices, including smartphones. These are densely concentrated in cities, because that is where the people, machines, buildings and infrastructures that carry and contain them are packed together. They are turning cities into vast data factories. “That kind of merger between physical and digital environments presents an opportunity for us to think about the city almost like a computer in the open air,” says Assaf Biderman of the SENSEable lab. As those data are collected and analysed, and the results are recycled into urban life, they may turn cities into even more productive and attractive places.

Some of these “open-air computers” are being designed from scratch, most of them in Asia. At Songdo, a South Korean city built on reclaimed land, Cisco has fitted every home and business with video screens and supplied clever systems to manage transport and the use of energy and water. But most cities are stuck with the infrastructure they have, at least in the short term. Exploiting the data they generate gives them a chance to upgrade it. Potholes in Boston, for instance, are reported automatically if the drivers of the cars that hit them have an app called Street Bump on their smartphones. And, particularly in poorer countries, places without a well-planned infrastructure have the chance of a leap forward. Researchers from the SENSEable lab have been working with informal waste-collecting co-operatives in São Paulo whose members sift the city’s rubbish for things to sell or recycle. By attaching tags to the trash, the researchers have been able to help the co-operatives work out the best routes through the city so they can raise more money and save time and expense.

Exploiting data may also mean fewer traffic jams. A few years ago Alexandre Bayen, of the University of California, Berkeley, and his colleagues ran a project (with Nokia, then the leader of the mobile-phone world) to collect signals from participating drivers’ smartphones, showing where the busiest roads were, and feed the information back to the phones, with congested routes glowing red. These days this feature is common on smartphones. Mr Bayen’s group and IBM Research are now moving on to controlling traffic and thus easing jams rather than just telling drivers about them. Within the next three years the team is due to build a prototype traffic-management system for California’s Department of Transportation.

Cleverer cars should help, too, by communicating with each other and warning drivers of unexpected changes in road conditions. Eventually they may not even have drivers at all. And thanks to all those data they may be cleaner, too. At the Fraunhofer FOKUS Institute in Berlin, Ilja Radusch and his colleagues show how hybrid cars can be automatically instructed to switch from petrol to electric power if local air quality is poor, say, or if they are going past a school.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Images of cities courtesy of Google search.[end-div]

Light From Gravity

Often the best creative ideas and the most elegant solutions are the simplest. GravityLight is an example of this type of innovation. Here’s the problem: replace damaging and expensive kerosene fuel lamps in Africa with a less harmful and cheaper alternative. And, the solution:

[tube]1dd9NIlhvlI[/tube]

[div class=attrib]From ars technica:[end-div]

A London design consultancy has developed a cheap, clean, and safer alternative to the kerosene lamp. Kerosene burning lamps are thought to be used by over a billion people in developing nations, often in remote rural parts where electricity is either prohibitively expensive or simply unavailable. Kerosene’s potential replacement, GravityLight, is powered by gravity without the need of a battery—it’s also seen by its creators as a superior alternative to solar-powered lamps.

Kerosene lamps are problematic in three ways: they release pollutants which can contribute to respiratory disease; they pose a fire risk; and, thanks to the ongoing need to buy kerosene fuel, they are expensive to run. Research out of Brown University from July of last year called kerosene lamps a “significant contributor to respiratory diseases, which kill over 1.5 million people every year” in developing countries. The same paper found that kerosene lamps were responsible for 70 percent of fires (which cause 300,000 deaths every year) and 80 percent of burns. The World Bank has compared the indoor use of a kerosene lamp with smoking two packs of cigarettes per day.

The economics of the kerosene lamps are nearly as problematic, with the fuel costing many rural families a significant proportion of their income. The designers of the GravityLight say 10 to 20 percent of household income is typical, and they describe kerosene as a poverty trap, locking people into a “permanent state of subsistence living.” Considering that the median rural price of kerosene in Tanzania, Mali, Ghana, Kenya, and Senegal is $1.30 per liter, and the average rural income in Tanzania is under $9 per month, the designers’ figures seem depressingly plausible.

Approached by the charity Solar Aid to design a solar-powered LED alternative, London design consultancy Therefore shifted the emphasis away from solar, which requires expensive batteries that degrade over time. The company’s answer is both more simple and more radical: an LED lamp driven by a bag of sand, earth, or stones, pulled toward the Earth by gravity.

It takes only seconds to hoist the bag into place, after which the lamp provides up to half an hour of ambient light, or about 18 minutes of brighter task lighting. Though it isn’t clear quite how much light the GravityLight emits, its makers insist it is more than a kerosene lamp. Also unclear are the precise inner workings of the device, though clearly the weighted bag pulls a cord, driving an inner mechanism with a low-powered dynamo, with the aid of some robust plastic gearing. Talking to Ars by telephone, Therefore’s Jim Fullalove was loath to divulge details, but did reveal the gearing took the kinetic energy from a weighted bag descending at a rate of a millimeter per second to power a dynamo spinning at 2000rpm.
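The quoted figures invite a quick back-of-envelope check. In the sketch below the descent speed comes from the article, while the bag mass and drop height are assumptions chosen only for illustration:

```python
# Back-of-envelope check of the GravityLight numbers quoted above.
g = 9.81          # m/s^2, gravitational acceleration
mass = 12.0       # kg, assumed weight of the bag of sand or stones (not from the article)
drop = 1.8        # m, assumed height the bag descends (not from the article)
speed = 0.001     # m/s, the roughly one millimeter per second quoted above

mech_power = mass * g * speed          # watts available before losses
run_time_min = drop / speed / 60       # minutes of light from one hoist

print(f"{mech_power:.2f} W of mechanical power")   # ~0.12 W
print(f"{run_time_min:.0f} minutes per hoist")     # ~30 minutes
```

With those assumptions the half-hour of light quoted above drops out directly, and roughly a tenth of a watt of mechanical power is in the right range for a single low-power LED driven through gearing and a dynamo.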

[div class=attrib]Read more about GravityLight after the jump.[end-div]

[div class=attrib]Video courtesy of GravityLight.[end-div]

Consumer Electronics Gone Mad

If you eat too quickly, then HAPIfork is the new eating device for you. If you have trouble seeing text on your palm-sized iPad, then Lenovo’s 27 inch tablet is for you. If you need musical motivation from One Direction to get your children to brush their teeth, then the Brush Buddies toothbrush is for you, and your kids. If you’re tired of technology, then stay away from this year’s Consumer Electronics Show (CES 2013).

If you’d like to see other strange products looking for a buyer, follow this jump.

[div class=attrib]Image: The HAPIfork monitors how fast its user is eating and vibrates to alert them if their speed exceeds a pre-determined rate, which altogether sounds like an incredibly strange eating experience. Courtesy of CES / Telegraph.[end-div]

Big Brother is Mapping You

One hopes that Google’s intention to “organize the world’s information” will remain benign for the foreseeable future. Yet, as more and more of our surroundings and moves are mapped and tracked online, and increasingly offline, it would be wise to remain ever vigilant. Many put up with the encroachment of advertisers and promoters into almost every facet of their daily lives as a necessary, modern evil. But where is the dividing line that separates an ignorable irritation from an intrusion of privacy and a grab for control? For the paranoid amongst us, it may only be a matter of time before our digital footprints come under the increasing scrutiny, and control, of organizations with grander designs.

[div class=attrib]From the Guardian:[end-div]

Eight years ago, Google bought a cool little graphics business called Keyhole, which had been working on 3D maps. Along with the acquisition came Brian McClendon, aka “Bam”, a tall and serious Kansan who in a previous incarnation had supplied high-end graphics software that Hollywood used in films including Jurassic Park and Terminator 2. It turned out to be a very smart move.

Today McClendon is Google’s Mr Maps – presiding over one of the fastest-growing areas in the search giant’s business, one that has recently left arch-rival Apple red-faced and threatens to make Google the most powerful mapping company the world has ever seen.

Google is throwing its considerable resources into building arguably the most comprehensive map ever made. It’s all part of the company’s self-avowed mission to organize all the world’s information, says McClendon.

“You need to have the basic structure of the world so you can place the relevant information on top of it. If you don’t have an accurate map, everything else is inaccurate,” he says.

It’s a message that will make Apple cringe. Apple triggered howls of outrage when it pulled Google Maps off the latest iteration of its iPhone software for its own bug-riddled and often wildly inaccurate map system. “We screwed up,” Apple boss Tim Cook said earlier this week.

McClendon won’t comment on when and if Apple will put Google’s application back on the iPhone. Talks are ongoing and he’s at pains to point out what a “great” product the iPhone is. But when – or if – Apple caves, it will be a huge climbdown. In the meantime, what McClendon really cares about is building a better map.

This is not the first time Google has made a land grab in the real world, as the publishing industry will attest. Unhappy that online search was missing all the good stuff inside old books, Google – controversially – set about scanning the treasures of Oxford’s Bodleian library and some of the world’s other most respected collections.

Its ambitions in maps may be bigger, more far reaching and perhaps more controversial still. For a company developing driverless cars and glasses that are wearable computers, maps are a serious business. There’s no doubting the scale of McClendon’s vision. His license plate reads: ITLLHPN.

Until the 1980s, maps were still largely a pen and ink affair. Then mainframe computers allowed the development of geographic information system software (GIS), which was able to display and organise geographic information in new ways. By 2005, when Google launched Google Maps, computing power allowed GIS to go mainstream. Maps were about to change the way we find a bar, a parcel or even a story. Washington DC’s homicidewatch.org, for example, uses Google Maps to track and follow deaths across the city. Now the rise of mobile devices has pushed mapping into everyone’s hands and to the front line in the battle of the tech giants.

It’s easy to see why Google is so keen on maps. Some 20% of Google’s queries are now “location specific”. The company doesn’t split the number out but on mobile the percentage is “even higher”, says McClendon, who believes maps are set to unfold themselves ever further into our lives.

Google’s approach to making better maps is about layers. Starting with an aerial view, in 2007 Google added Street View, an on-the-ground photographic map snapped from its own fleet of specially designed cars that now covers 5 million of the 27.9 million miles of roads on Google Maps.

Google isn’t stopping there. The company has put cameras on bikes to cover harder-to-reach trails, and you can tour the Great Barrier Reef thanks to diving mappers. Luc Vincent, the Google engineer known as “Mr Street View”, carried a 40lb pack of snapping cameras down to the bottom of the Grand Canyon and then back up along another trail as fellow hikers excitedly shouted “Google, Google” at the man with the space-age backpack. McClendon has also played his part. He took his camera to Antarctica, taking 500 or more photos of a penguin-filled island to add to Google Maps. “The penguins were pretty oblivious. They just don’t care about people,” he says.

Now the company has projects called Ground Truth, which corrects errors online, and Map Maker, a service that lets people make their own maps. In the western world the product has been used to add a missing road or correct a one-way street that is pointing the wrong way, and to generally improve what’s already there. In Africa, Asia and other less well covered areas of the world, Google is – literally – helping people put themselves on the map.

In 2008, it could take six to 18 months for Google to update a map. The company would have to go back to the firm that provided its map information and get them to check the error, correct it and send it back. “At that point we decided we wanted to bring that information in house,” says McClendon. Google now updates its maps hundreds of times a day. Anyone can correct errors with roads signs or add missing roads and other details; Google double checks and relies on other users to spot mistakes.

Thousands of people use Google’s Map Maker daily to recreate their world online, says Michael Weiss-Malik, engineering director at Google Maps. “We have some Pakistanis living in the UK who have basically built the whole map,” he says. Using aerial shots and local information, people have created the most detailed, and certainly most up-to-date, maps of cities like Karachi that have probably ever existed. Regions of Africa and Asia have been added by map-mad volunteers.

[div class=attrib]Read the entire article following the jump.[end-div]

Fly Me to the Moon: Mere Millionaires Need Not Apply

Golden Spike, a Boulder, Colorado-based company, has an interesting proposition for the world’s restless billionaires. It is offering a two-seat trip to the Moon, and back, for a tidy sum of $1.5 billion. And the company is even throwing in a moonwalk. The first trip is planned for 2020.

[div class=attrib]From the Washington Post:[end-div]

It had to happen: A start-up company is offering rides to the moon. Book your seat now — though it’s going to set you back $750 million (it’s unclear if that includes baggage fees).

At a news conference scheduled for Thursday afternoon in Washington, former NASA science administrator Alan Stern plans to announce the formation of Golden Spike, which, according to a news release, is “the first company planning to offer routine exploration expeditions to the surface of the Moon.”

“We can do this,” an excited Stern said Thursday morning during a brief phone interview.

The gist of the company’s strategy is that it’ll repurpose existing space hardware for commercial lunar missions and take advantage of NASA-sanctioned commercial rockets that, in a few years, are supposed to put astronauts in low Earth orbit. Stern said a two-person lunar mission, complete with moonwalking and, perhaps best of all, a return to Earth, would cost $1.5 billion.

“Two seats, 750 each,” Stern said. “The trick is 40 years old. We know how to do this. The difference is now we have rockets and space capsules in the inventory. … They’re already developed. … We don’t have to invent them from a clean sheet of paper. We don’t have to start over.”

The statement says, “The company’s plan is to maximize use of existing rockets and to market the resulting system to nations, individuals, and corporations with lunar exploration objectives and ambitions.” Golden Spike says its plans have been vetted by a former space shuttle commander, a space shuttle program manager and a member of the National Academy of Engineering.

And Newt Gingrich is involved: The former speaker of the House, who was widely mocked this year when, campaigning for president, he talked at length about ambitious plans for a permanent moon base by 2021, is listed as a member of Golden Spike’s board of advisers.

Also on that list is Bill Richardson, the former New Mexico governor and secretary of the Department of Energy. The chairman of the board is Gerry Griffin, a former Apollo mission flight director and former director of NASA’s Johnson Space Center.

The private venture fills a void, as it were, in the wake of President Obama’s decision to cancel NASA’s Constellation program, which was initiated during the George W. Bush years as the next step in space exploration after the retirement of the space shuttle. Constellation aimed to put astronauts back on the moon by 2020 for what would become extended stays at a lunar base.

A sweeping review from a presidential committee led by retired aerospace executive Norman Augustine concluded that NASA didn’t have the money to achieve Constellation’s goals. The administration and Congress have given NASA new marching orders that require the building of a heavy-lift rocket that would give the agency the ability to venture far beyond low Earth orbit.

Routine access to space is being shifted to companies operating under commercial contracts. But as those companies try to develop commercial spaceflight, the United States lacks the ability to launch astronauts directly and must purchase flights to the international space station from the Russians.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of The Golden Spike Company.[end-div]

Steam Without Boiling Water

Despite what seems to be an overwhelmingly digital shift in our lives, we still live in a world of steam. Steam plays a vital role in generating most of the world’s electricity, steam heats our buildings (especially if you live in New York City), steam sterilizes our medical supplies.

So, in a research discovery with far-reaching implications, scientists have succeeded in making steam at room temperature without actually boiling water. All courtesy of some ingenious nanoparticles.

[div class=attrib]From Technology Review:[end-div]

Steam is a key ingredient in a wide range of industrial and commercial processes—including electricity generation, water purification, alcohol distillation, and medical equipment sterilization.

Generating that steam, however, typically requires vast amounts of energy to heat and eventually boil water or another fluid. Now researchers at Rice University have found a shortcut. Using light-absorbing nanoparticles suspended in water, the group was able to turn the water molecules surrounding the nanoparticles into steam while scarcely raising the temperature of the remaining water. The trick could dramatically reduce the cost of many steam-reliant processes.

The Rice team used a Fresnel lens to focus sunlight on a small tube of water containing high concentrations of nanoparticles suspended in the fluid. The water, which had been cooled to near freezing, began generating steam within five to 20 seconds, depending on the type of nanoparticles used. Changes in temperature, pressure, and mass revealed that 82 percent of the sunlight absorbed by the nanoparticles went directly to generating steam while only 18 percent went to heating water.

“It’s a new way to make steam without boiling water,” says Naomi Halas, director of the Laboratory for Nanophotonics at Rice University. Halas says that the work “opens up a lot of interesting doors in terms of what you can use steam for.”

The new technique could, for instance, lead to inexpensive steam-generation devices for small-scale water purification, sterilization of medical instruments, and sewage treatment in developing countries with limited resources and infrastructure.

The use of nanoparticles to increase heat transfer in water and other fluids has been well studied, but few researchers have looked at using the particles to absorb light and generate steam.

In the current study, Halas and colleagues used nanoparticles optimized to absorb the widest possible spectrum of sunlight. When light hits the particles, their temperature quickly rises to well above 100 °C, the boiling point of water, causing surrounding water molecules to vaporize.

Precisely how the particles and water molecules interact remains somewhat of a mystery. Conventional heat-transfer models suggest that the absorbed sunlight should dissipate into the surrounding fluid before causing any water to boil. “There seems to be some nanoscale thermal barrier, because it’s clearly making steam like crazy,” Halas says.

The system devised by Halas and colleagues exhibited an efficiency of 24 percent in converting sunlight to steam.
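Those two percentages measure different things: 82 percent is the share of the light actually absorbed by the particles that ends up in steam, while 24 percent is the overall sunlight-to-steam conversion. A rough sketch of what the 24 percent figure implies, assuming 100 W of concentrated sunlight reaching the tube (an arbitrary number, not from the study) and the standard latent heat of vaporization of water:

```python
# Rough estimate of steam output at the reported 24% sunlight-to-steam efficiency.
incident_power = 100.0   # W of concentrated sunlight (assumed, for illustration only)
efficiency = 0.24        # overall sunlight-to-steam conversion reported by the team
latent_heat = 2.26e6     # J/kg, latent heat of vaporization of water

steam_rate = incident_power * efficiency / latent_heat        # kg of steam per second
print(f"{steam_rate * 3600 * 1000:.0f} g of steam per hour")  # ~38 g/hour
```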

Todd Otanicar, a mechanical engineer at the University of Tulsa who was not involved in the current study, says the findings could have significant implications for large-scale solar thermal energy generation. Solar thermal power stations typically use concentrated sunlight to heat a fluid such as oil, which is then used to heat water to generate steam. Otanicar estimates that by generating steam directly with nanoparticles in water, such a system could see an increased efficiency of 3 to 5 percent and a cost savings of 10 percent because a less complex design could be used.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Stott Park Bobbin Mill Steam Engine. Courtesy of Wikipedia.[end-div]

The Rise of the Industrial Internet

As the internet that connects humans reaches a stable saturation point, the industrial internet — the network that connects things — continues to grow in reach.

[div class=attrib]From the New York Times:[end-div]

When Sharoda Paul finished a postdoctoral fellowship last year at the Palo Alto Research Center, she did what most of her peers do — considered a job at a big Silicon Valley company, in her case, Google. But instead, Ms. Paul, a 31-year-old expert in social computing, went to work for General Electric.

Ms. Paul is one of more than 250 engineers recruited in the last year and a half to G.E.’s new software center here, in the East Bay of San Francisco. The company plans to increase that work force of computer scientists and software developers to 400, and to invest $1 billion in the center by 2015. The buildup is part of G.E.’s big bet on what it calls the “industrial Internet,” bringing digital intelligence to the physical world of industry as never before.

The concept of Internet-connected machines that collect data and communicate, often called the “Internet of Things,” has been around for years. Information technology companies, too, are pursuing this emerging field. I.B.M. has its “Smarter Planet” projects, while Cisco champions the “Internet of Everything.”

But G.E.’s effort, analysts say, shows that Internet-era technology is ready to sweep through the industrial economy much as the consumer Internet has transformed media, communications and advertising over the last decade.

In recent months, Ms. Paul has donned a hard hat and safety boots to study power plants. She has ridden on a rail locomotive and toured hospital wards. “Here, you get to work with things that touch people in so many ways,” she said. “That was a big draw.”

G.E. is the nation’s largest industrial company, a producer of aircraft engines, power plant turbines, rail locomotives and medical imaging equipment. It makes the heavy-duty machinery that transports people, heats homes and powers factories, and lets doctors diagnose life-threatening diseases.

G.E. resides in a different world from the consumer Internet. But the major technologies that animate Google and Facebook are also vital ingredients in the industrial Internet — tools from artificial intelligence, like machine-learning software, and vast streams of new data. In industry, the data flood comes mainly from smaller, more powerful and cheaper sensors on the equipment.

Smarter machines, for example, can alert their human handlers when they will need maintenance, before a breakdown. It is the equivalent of preventive and personalized care for equipment, with less downtime and more output.
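As a toy illustration of that idea, and emphatically not G.E.’s software, a simple alert might flag a machine whenever a sensor reading drifts well outside its recent normal range:

```python
# Toy illustration of sensor-based maintenance alerts (not G.E.'s system):
# flag a machine when a reading drifts far from its recent average.
from collections import deque
from statistics import mean, stdev

def maintenance_alerts(readings, window=50, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations away from the mean of the previous `window` readings."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        recent.append(value)

# Example: a steady vibration signal followed by a sudden spike.
signal = [1.0 + 0.01 * (i % 5) for i in range(200)] + [5.0]
print(list(maintenance_alerts(signal)))   # flags the final spike
```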

“These technologies are really there now, in a way that is practical and economic,” said Mark M. Little, G.E.’s senior vice president for global research.

G.E.’s embrace of the industrial Internet is a long-term strategy. But if its optimism proves justified, the impact could be felt across the economy.

The outlook for technology-led economic growth is a subject of considerable debate. In a recent research paper, Robert J. Gordon, a prominent economist at Northwestern University, argues that the gains from computing and the Internet have petered out in the last eight years.

Since 2000, Mr. Gordon asserts, invention has focused mainly on consumer and communications technologies, including smartphones and tablet computers. Such devices, he writes, are “smaller, smarter and more capable, but do not fundamentally change labor productivity or the standard of living” in the way that electric lighting or the automobile did.

But others say such pessimism misses the next wave of technology. “The reason I think Bob Gordon is wrong is precisely because of the kind of thing G.E. is doing,” said Andrew McAfee, principal research scientist at M.I.T.’s Center for Digital Business.

Today, G.E. is putting sensors on everything, be it a gas turbine or a hospital bed. The mission of the engineers in San Ramon is to design the software for gathering data, and the clever algorithms for sifting through it for cost savings and productivity gains. Across the industries it covers, G.E. estimates such efficiency opportunities at as much as $150 billion.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Internet of Things. Courtesy of Intel.[end-div]

Startup Culture: New is the New New

Starting up a new business was once a demanding and complex process, often undertaken in anonymity in the long shadows between the hours of a regular job. It still is, of course. However, nowadays “the startup” has become more of an event. The tech sector has raised this to a fine art by spawning an entire self-sustaining and self-promoting industry around startups.

You’ll find startup gurus, serial entrepreneurs and digital prophets — yes, AOL has a digital prophet on its payroll — strutting around on stage, twittering tips in the digital world, leading business plan bootcamps, pontificating on accelerator panels, hosting incubator love-ins in coffee shops or splashed across the covers of Entrepreneur or Inc or FastCompany magazines on an almost daily basis. Beware! The back of your cereal box may be next.

[div class=attrib]From the Telegraph:[end-div]

I’ve seen the best minds of my generation destroyed by marketing, shilling for ad clicks, dragging themselves through the strip-lit corridors of convention centres looking for a venture capitalist. Just as X Factor has convinced hordes of tone deaf kids they can be pop stars, the startup industry has persuaded thousands that they can be the next rockstar entrepreneur. What’s worse is that while X Factor clogs up the television schedules for a couple of months, tech conferences have proliferated to such an extent that not a week goes by without another excuse to slope off. Some founders spend more time on panels pontificating about their business plans than actually executing them.

Earlier this year, I witnessed David Shing, AOL’s Digital Prophet – that really is his job title – delivering the opening remarks at a tech conference. The show summed up the worst elements of the self-obsessed, hyperactive world of modern tech. A 42-year-old man with a shock of Russell Brand hair, expensive spectacles and paint-splattered trousers, Shingy paced the stage spouting buzzwords: “Attention is the new currency, man…the new new is providing utility, brothers and sisters…speaking on the phone is completely cliche.” The audience lapped it all up. At these rallies in praise of the startup, enthusiasm and energy matter much more than making sense.

Startup culture is driven by slinging around superlatives – every job is an “incredible opportunity”, every product is going to “change lives” and “disrupt” an established industry. No one wants to admit that most startups stay stuck right there at the start, pub singers pining for their chance in the spotlight. While the startups and hangers-on milling around in the halls bring in stacks of cash for the event organisers, it’s the already successful entrepreneurs on stage and the investors who actually benefit from these conferences. They meet up at exclusive dinners and in the speakers’ lounge where the real deals are made. It’s Studio 54 for geeks.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Startup, WA. Courtesy of Wikipedia.[end-div]

The Most Annoying Technology? The Winner Is…

We have all owned, used, or come far too close to a technology that we absolutely abhor, wishing numerous curses upon its inventors. Said gizmo may be the unfathomable VCR, the forever lost TV remote, the tinny-sounding Sony Walkman replete with unraveling cassette tape, the Blackberry, or even Facebook.

Ours over here at theDiagonal is the voice recognition system used by 99 percent of so-called customer service organizations. You know how it goes, something like this: “please say ‘one’ for new accounts”, “please say ‘two’ if you are an existing customer”, please say ‘three’ for returns”, “please say ‘Kyrgyzstan’ to speak with a customer service representative”.

Wired recently listed their least favorite, most hated technologies. No surprises here — winners of this dubious award include the Bluetooth headset, CDROM, and Apple TV remote.

[div class=attrib]From Wired:[end-div]

Bluetooth Headsets

Look, here’s a good rule of thumb: Once you get out of the car, or leave your desk, take off the headset. Nobody wants to hear your end of the conversation. That’s not idle speculation, it’s science! Headsets just make it worse. At least when there’s a phone involved, there are visual cues that say “I’m on the phone.” I mean, other than hearing one end of a shouted conversation.

Leaf Blower

Is your home set on a large wooded lot with acreage to spare between you and your closest neighbor? Did a tornado power through your yard last night, leaving your property covered in limbs and leaves? No? Then get a rake, dude. Leaf blowers are so irritating, they have been outlawed in some towns. Others should follow suit.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Sun/Mercury News.[end-div]

The Tubes of the Internets

Google lets the world peek at the many tubes that form a critical part of its search engine infrastructure — functional and pretty too.

[div class=attrib]From the Independent:[end-div]

They are the cathedrals of the information age – with the colour scheme of an adventure playground.

For the first time, Google has allowed cameras into its high security data centres – the beating hearts of its global network that allow the web giant to process 3 billion internet searches every day.

Only a small band of Google employees have ever been inside the doors of the data centres, which are hidden away in remote parts of North America, Belgium and Finland.

Their workplaces glow with the blinking lights of LEDs on internet servers reassuring technicians that all is well with the web, and hum to the sound of hundreds of giant fans and thousands of gallons of water, that stop the whole thing overheating.

“Very few people have stepped inside Google’s data centers [sic], and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard,” the company said yesterday. Row upon row of glowing servers send and receive information from 20 billion web pages every day, while towering libraries store all the data that Google has ever processed – in case of a system failure.

With data speeds 200,000 times faster than an ordinary home internet connection, Google’s centres in America can share huge amounts of information with European counterparts like the remote, snow-packed Hamina centre in Finland, in the blink of an eye.

[div class=attrib]Read the entire article after the jump, or take a look at more images from the bowels of Google after the leap.[end-div]

3D Printing Coming to a Home Near You

It seems that not too long ago we were writing about pioneering research into 3D printing and start-up businesses showing off their industrially focused, prototype 3D printers. Now, only a couple of years later, there is a growing consumer market, home-based printers for under $3,000, and even a 3D printing expo — 3D Printshow. The future looks bright and very much three dimensional.

[div class=attrib]From the Independent:[end-div]

It is Star Trek science made reality, with the potential for production-line replacement body parts, aeronautical spares, fashion, furniture and virtually any other object on demand. It is 3D printing, and now people in Britain can try it for themselves.

The cutting-edge technology, which layers plastic resin in a manner similar to an inkjet printer to create 3D objects, is on its way to becoming affordable for home use. Some of its possibilities will be on display at the UK’s first 3D-printing trade show from Friday to next Sunday at The Brewery in central London.

Clothes made using the technique will be exhibited in a live fashion show, which will include the unveiling of a hat designed for the event by the milliner Stephen Jones, and a band playing a specially composed score on 3D-printed musical instruments.

Some 2,000 consumers are expected to join 1,000 people from the burgeoning industry to see what the technique has to offer, including jewellery and art. A 3D body scanner, which can reproduce a “mini” version of the person scanned, will also be on display.

Workshops run by Jason Lopes of Legacy Effects, which provided 3D-printed models and props for cinema blockbusters such as the Iron Man series and Snow White and the Huntsman, will add a sprinkling of Hollywood glamour.

Kerry Hogarth, the woman behind 3D Printshow, said yesterday she aims to showcase the potential of the technology for families. While prices for printers start at around £1,500 – with DIY kits for less – they are expected to drop steadily over the coming year. One workshop, run by the Birmingham-based Black Country Atelier, will invite people to design a model vehicle and then see the result “printed” off for them to take home.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: 3D scanning and printing. Courtesy of Wikipedia.[end-div]

GigaBytes and TeraWatts

Online social networks have expanded to include hundreds of millions of twitterati and their followers. An ever increasing volume of data, images, videos and documents continues to move into the expanding virtual “cloud”, hosted in many nameless data centers. Virtual processing and computation on demand is growing by leaps and bounds.

Yet while business models for the providers of these internet services remain ethereal, one segment of this business ecosystem — electricity companies and utilities — is salivating at the staggering demand for electrical power.

[div class=attrib]From the New York Times:[end-div]

Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.

The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.

Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.

That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.

They are a mere fraction of the tens of thousands of data centers that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.

A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.

Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.

To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.

Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.
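A little arithmetic puts those estimates in perspective. The sketch below uses only the figures quoted above, plus one assumption of our own: an average household draw of about 1.2 kW, a round number not taken from the article.

```python
# Putting the quoted estimates side by side.
world_load_w = 30e9              # ~30 billion watts worldwide (article estimate)
us_share = (0.25, 1 / 3)         # US share: one-quarter to one-third
waste_fraction = 0.90            # worst-case 90% of grid power wasted, per The Times
avg_home_w = 1200                # assumed average US household draw (~1.2 kW)

us_load_w = tuple(world_load_w * s for s in us_share)
useful_w = tuple(w * (1 - waste_fraction) for w in us_load_w)
homes_equiv = tuple(w / avg_home_w for w in us_load_w)

print(f"US data centers: {us_load_w[0]/1e9:.1f}-{us_load_w[1]/1e9:.1f} GW")
print(f"Doing useful work at 90% waste: {useful_w[0]/1e9:.2f}-{useful_w[1]/1e9:.2f} GW")
print(f"Roughly {homes_equiv[0]/1e6:.0f}-{homes_equiv[1]/1e6:.0f} million homes' worth of power")
```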

“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the AP / Thanassis Stavrakis.[end-div]

Social Media and Vanishing History

Social media is great for notifying members in one’s circle of events in the here and now. Of course, most events turn out to be rather trivial, of the “what I ate for dinner” kind. However, social media also has a role in spreading word of more momentous social and political events; the Arab Spring comes to mind.

But, while Twitter and its peers may be a boon for those who live in the present moment and need to transmit their current status, it seems that our social networks are letting go of the past. Will history become lost and irrelevant to the Twitter generation?

A terrifying thought.

[div class=attrib]From Technology Review:[end-div]

On 25 January 2011, a popular uprising began in Egypt that  led to the overthrow of the country’s brutal president and to the first truly free elections. One of the defining features of this uprising and of others in the Arab Spring was the way people used social media to organise protests and to spread news.

Several websites have since begun the task of curating this content, which is an important record of events and how they unfolded. That led Hany SalahEldeen and Michael Nelson at Old Dominion University in Norfolk, Virginia, to take a deeper look at the material to see how many of the shared links were still live.

What they found has serious implications. SalahEldeen and Nelson say a significant proportion of the websites that this social media points to has disappeared. And the same pattern occurs for other culturally significant events, such as the H1N1 virus outbreak, Michael Jackson’s death and the Syrian uprising.

In other words, our history, as recorded by social media, is slowly leaking away.

Their method is straightforward. SalahEldeen and Nelson looked for tweets on six culturally significant events that occurred between June 2009 and March 2012. They then filtered the URLs these tweets pointed to and checked to see whether the content was still available on the web, either in its original form or in an archived form.
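Their liveness check is straightforward to reproduce in outline. Below is a minimal sketch, not the authors’ actual pipeline, that takes a list of URLs harvested from old tweets and reports how many still resolve; it ignores archived copies entirely.

```python
# Minimal sketch: check which shared URLs still resolve.
# This outlines the idea only; it is not the authors' pipeline and it
# does not consult web archives such as the Internet Archive.
import requests

def still_live(url, timeout=10):
    """Return True if the URL still answers with a non-error status."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code == 405:  # some servers reject HEAD; retry with GET
            resp = requests.get(url, allow_redirects=True, timeout=timeout, stream=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

urls = ["http://example.com/some-shared-story", "http://example.org/"]  # hypothetical list
live = sum(still_live(u) for u in urls)
print(f"{live}/{len(urls)} of the shared links are still live")
```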

They found that the older the social media, the more likely its content was to be missing. In fact, they found an almost linear relationship between time and the percentage lost.

The numbers are startling. They say that 11 per cent of the social media content had disappeared within a year and 27 per cent within 2 years. Beyond that, SalahEldeen and Nelson say the world loses 0.02 per cent of its culturally significant social media material every day.

That’s a sobering thought.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Movie poster for the 2002 film ”The Man Without a Past”. The Man Without a Past (Finnish: Mies vailla menneisyyttä) is a 2002 Finnish comedy-drama film directed by Aki Kaurismäki. Courtesy of Wikipedia.[end-div]

What’s All the Fuss About Big Data?

We excerpt an interview with big data pioneer and computer scientist, Alex Pentland, via the Edge. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.

While there is no exact definition of “big data”, it tends to be characterized quantitatively and qualitatively differently from data commonly used by most organizations. Where regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to vast collections of data that often lie beyond the realm of regular computation. So big data often requires vast and specialized storage and enormous processing capabilities. Data sets that fall into this category span such fields as climate science, genomics, particle physics, and computational social science.

Big data holds true promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.

[div class=attrib]Alex Pentland via the Edge:[end-div]

Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.

I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like location data off of your cell phone or credit card; it’s the little data breadcrumbs that you leave behind you as you move around in the world.

What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.

As a consequence, analysis of Big Data is increasingly about finding connections: connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data. Not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. These are systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. Big Data gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.

The notion that it is the connections between people that really matter is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.

Because it is so important to understand these connections, Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that are actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.

Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.

The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.

Changing The Way We Design Systems

With Big Data traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant!  As a consequence the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart.
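
A toy illustration of that point, using NumPy and SciPy on purely synthetic data (the numbers are invented): a difference of one-hundredth of a standard deviation, far too small to matter in practice, still produces a vanishingly small p-value once the sample runs into the millions.

```python
# Illustrative sketch: at "big data" scale, a negligible effect is still "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5_000_000                        # five million observations per group
a = rng.normal(0.00, 1.0, n)         # group A
b = rng.normal(0.01, 1.0, n)         # group B: practically identical to A

t, p = stats.ttest_ind(a, b)
print(f"difference in means: {b.mean() - a.mean():.4f}")   # about 0.01
print(f"p-value: {p:.1e}")           # astronomically small despite a trivial effect
```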

Big Data and the notion of Connection Science are outside of our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much out of our old, familiar ballpark.

With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is, why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.
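
The Monday-flu example can be mimicked with invented data. In the sketch below, a hidden common cause drives two measurements, so they correlate strongly even though neither causes the other; shuffling one of them, a crude stand-in for intervening on it experimentally, makes the correlation vanish, which is why experiments in the real world are needed.

```python
# Illustrative confounding sketch: Z drives both X and Y, so X and Y correlate
# even though neither causes the other. Purely synthetic numbers.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                     # hidden common cause
x = z + rng.normal(scale=0.5, size=n)      # observed behaviour A
y = z + rng.normal(scale=0.5, size=n)      # observed outcome B

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")        # strong, roughly 0.8
x_intervened = rng.permutation(x)          # crude stand-in for randomizing x
print(f"corr(x_intervened, y) = {np.corrcoef(x_intervened, y)[0, 1]:.2f}")  # roughly 0.0
```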

The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use big data analytics, what they mean, and what to believe.

In fact, the data scientists themselves don’t have much intuition either… and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t understand that they were overfitting the data. They didn’t have that dialogue between intuition and the causal processes that generated the data. They just fit the model, got a good number and published it, and the reviewers didn’t catch it either. That’s pretty bad because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
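
Overfitting of the kind described here is easy to demonstrate. In the scikit-learn sketch below, on synthetic data invented for illustration, a needlessly flexible model scores far better on the data it was fit to than on held-out data; the training score is the “good number” that should never have been trusted on its own.

```python
# Illustrative overfitting sketch: an overly flexible model fits noise in the
# training data and generalizes poorly to held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(60, 1))
y = 0.5 * X.ravel() + rng.normal(scale=0.3, size=60)   # simple linear truth plus noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(f"degree {degree:2d}: train R^2 = {model.score(X_tr, y_tr):.2f}, "
          f"test R^2 = {model.score(X_te, y_te):.2f}")
```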

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Techcrunch.[end-div]

Scientifiction

Science fiction stories and illustrations from our past provide a wonderful opportunity for us to test the predictive and prescient capabilities of their creators. Some, like Arthur C. Clarke, we are often reminded, foresaw the communications satellite and the space elevator. Others, such as science fiction great Isaac Asimov, fared less well in predicting future technology; while he is considered to have coined the term “robotics”, he famously predicted that future computers and robots would still use punched cards.

Illustrations of our future from the past are even more fascinating. One of the leading proponents of the science fiction illustration genre, or scientifiction, as it was known in the mid-1920s, was Frank R. Paul. Beginning in the 1920s, Paul illustrated many of the now classic U.S. pulp science fiction magazines with vivid visuals of aliens, spaceships, destroyed worlds and bizarre technologies. One of his less apocalyptic, but perhaps prescient, works showed a web-footed alien smoking a cigarette through a lengthy proboscis.

Of Frank R. Paul, Ray Bradbury is quoted as saying, “Paul’s fantastic covers for Amazing Stories changed my life forever.”

See more of Paul’s classic illustrations after the jump.

[div class=attrib]Image courtesy of 50Watts / Frank R. Paul.[end-div]

How Apple, With the Help of Others, Invented the iPhone

Apple’s invention of the iPhone is a story of insight, collaboration, cannibalization and dogged persistence over the course of a decade.

[div class=attrib]From Slate:[end-div]

Like many of Apple’s inventions, the iPhone began not with a vision, but with a problem. By 2005, the iPod had eclipsed the Mac as Apple’s largest source of revenue, but the music player that rescued Apple from the brink now faced a looming threat: The cellphone. Everyone carried a phone, and if phone companies figured out a way to make playing music easy and fun, “that could render the iPod unnecessary,” Steve Jobs once warned Apple’s board, according to Walter Isaacson’s biography.

Fortunately for Apple, most phones on the market sucked. Jobs and other Apple executives would grouse about their phones all the time. The simplest phones didn’t do much other than make calls, and the more functions you added to phones, the more complicated they were to use. In particular, phones “weren’t any good as entertainment devices,” Phil Schiller, Apple’s longtime marketing chief, testified during the company’s patent trial with Samsung. Getting music and video on 2005-era phones was too difficult, and if you managed that, getting the device to actually play your stuff was a joyless trudge through numerous screens and menus.

That was because most phones were hobbled by a basic problem—they didn’t have a good method for input. Hard keys (like the ones on the BlackBerry) worked for typing, but they were terrible for navigation. In theory, phones with touchscreens could do a lot more, but in reality they were also a pain to use. Touchscreens of the era couldn’t detect finger presses—they needed a stylus, and the only way to use a stylus was with two hands (one to hold the phone and one to hold the stylus). Nobody wanted a music player that required two-handed operation.

This is the story of how Apple reinvented the phone. The general outlines of this tale have been told before, most thoroughly in Isaacson’s biography. But the Samsung case—which ended last month with a resounding victory for Apple—revealed a trove of details about the invention, the sort of details that Apple is ordinarily loath to make public. We got pictures of dozens of prototypes of the iPhone and iPad. We got internal email that explained how executives and designers solved key problems in the iPhone’s design. We got testimony from Apple’s top brass explaining why the iPhone was a gamble.

Put it all together and you get a remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple’s bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple’s inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too—a device that would eat the iPod’s lunch. Apple’s only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple’s brass saw the phone as an opportunity for real innovation. “We wanted to build a phone for ourselves,” Scott Forstall, who heads the team that built the phone’s operating system, said at the trial. “We wanted to build a phone that we loved.”

The problem was how to do it. When Jobs unveiled the iPhone in 2007, he showed off a picture of an iPod with a rotary-phone dialer instead of a click wheel. That was a joke, but it wasn’t far from Apple’s initial thoughts about phones. The click wheel—the brilliant interface that powered the iPod (which was invented for Apple by a firm called Synaptics)—was a simple, widely understood way to navigate through menus in order to play music. So why not use it to make calls, too?

In 2005, Tony Fadell, the engineer who’s credited with inventing the first iPod, got hold of a high-end desk phone made by Samsung and Bang & Olufsen that you navigated using a set of numerical keys placed around a rotating wheel. A Samsung cell phone, the X810, used a similar rotating wheel for input. Fadell didn’t seem to like the idea. “Weird way to hold the cellphone,” he wrote in an email to others at Apple. But Jobs thought it could work. “This may be our answer—we could put the number pad around our clickwheel,” he wrote. (Samsung pointed to this thread as evidence for its claim that Apple’s designs were inspired by other companies, including Samsung itself.)

Around the same time, Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there are no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.
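
For readers curious what those two behaviours amount to mechanically, here is a tiny hypothetical sketch in Python, not Apple’s code, of how a flick can be turned into an inertial scroll that slows under friction and springs back when it overshoots the end of a list; the constants are made up.

```python
# Hypothetical sketch of inertial scrolling plus a rubber-band edge (not Apple's code).
def simulate_flick(velocity, content_height=1000.0, view_height=300.0,
                   friction=0.95, rubber_band=0.5, steps=60):
    """Return the scroll offsets produced by one flick, one value per frame."""
    offset, offsets = 0.0, []
    max_offset = content_height - view_height
    for _ in range(steps):
        offset += velocity
        velocity *= friction                  # inertia: the flick's speed decays each frame
        if offset < 0:                        # overshot the top of the list
            offset *= rubber_band             # spring back toward the edge
        elif offset > max_offset:             # overshot the bottom of the list
            offset = max_offset + (offset - max_offset) * rubber_band
        offsets.append(round(offset, 1))
    return offsets

print(simulate_flick(velocity=40.0)[:10])     # first few per-frame offsets of the animation
```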

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Retro design iPhone courtesy of Ubergizmo.[end-div]

Happy Birthday :-)

Thirty years ago today Professor Scott Fahlman of Carnegie Mellon University sent what is believed to be the first emoticon embedded in an email. The symbol, :-), which he proposed as a joke marker, spread rapidly, morphed and evolved into a universe of symbolic nods, winks, and cyber-emotions.

For a lengthy list of popular emoticons, including some very interesting Eastern ones, jump here.

[div class=attrib]From the Independent:[end-div]

To some, an email isn’t complete without the inclusion of :-) or :-(. To others, the very idea of using “emoticons” – communicative graphics – makes the blood boil and represents all that has gone wrong with the English language.

Regardless of your view, as emoticons celebrate their 30th anniversary this month, it is accepted that they are here to stay. Their birth can be traced to the precise minute: 11:44am on 19 September 1982. At that moment, Professor Scott Fahlman, of Carnegie Mellon University in Pittsburgh, sent an email on an online electronic bulletin board that included the first use of the sideways smiley face: “I propose the following character sequence for joke markers: :-) Read it sideways.” More than anyone, he must take the credit – or the blame.

The aim was simple: to allow those who posted on the university’s bulletin board to distinguish between those attempting to write humorous emails and those who weren’t. Professor Fahlman had seen how simple jokes were often misunderstood and attempted to find a way around the problem.

This weekend, the professor, a computer science researcher who still works at the university, says he is amazed his smiley face took off: “This was a little bit of silliness that I tossed into a discussion about physics,” he says. “It was ten minutes of my life. I expected my note might amuse a few of my friends, and that would be the end of it.”

But once his initial email had been sent, it wasn’t long before it spread to other universities and research labs via the primitive computer networks of the day. Within months, it had gone global.

Nowadays, dozens of variations are available, mainly as little yellow computer graphics. There are emoticons that wear sunglasses; some cry, while others don Santa hats. But Professor Fahlman isn’t a fan.

“I think they are ugly, and they ruin the challenge of trying to come up with a clever way to express emotions using standard keyboard characters. But perhaps that’s just because I invented the other kind.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Mobile Phone as Survival Gear

So, here’s the premise. You have hiked alone for days and now find yourself isolated and lost in a dense forest half-way up a mountain. Yes! You have a cell phone. But, oh no, there is no service in this remote part of the world. So, no call for help and no GPS. And, it gets worse: you have no emergency supplies and no food. What can you do? The neat infographic offers some tips.

[div class=attrib]Infographic courtesy of Natalie Bracco / AnsonAlex.com.[end-div]

The Pros and Cons of Online Reviews

There is no doubt that online reviews for products and services, from books to new cars to vacation spots, have revolutionized shopping behavior. Internet and mobile technology has made gathering, reviewing and publishing open and honest crowdsourced opinion simple, efficient and ubiquitous.

However, the same tools that allow frank online discussion empower those wishing to cheat and manipulate the system. Cyberspace is rife with fake reviews, fake reviewers, inflated ratings, edited opinion, and paid insertions.

So, just as in any purchase transaction since the time when buyers and sellers first met, caveat emptor still applies.

[div class=attrib]From Slate:[end-div]

The Internet has fundamentally changed the way that buyers and sellers meet and interact in the marketplace. Online retailers make it cheap and easy to browse, comparison shop, and make purchases with the click of a mouse. The Web can also, in theory, make for better-informed purchases—both online and off—thanks to sites that offer crowdsourced reviews of everything from dog walkers to dentists.

In a Web-enabled world, it should be harder for careless or unscrupulous businesses to exploit consumers. Yet recent studies suggest that online reviewing is hardly a perfect consumer defense system. Researchers at Yale, Dartmouth, and USC have found evidence that hotel owners post fake reviews to boost their ratings on review sites—and might even be posting negative reviews of nearby competitors.

The preponderance of online reviews speaks to their basic weakness: Because it’s essentially free to post a review, it’s all too easy to dash off thoughtless praise or criticism, or, worse, to construct deliberately misleading reviews without facing any consequences. It’s what economists (and others) refer to as the cheap-talk problem. The obvious solution is to make it more costly to post a review, but that eliminates one of the main virtues of crowdsourcing: There is much more wisdom in a crowd of millions than in select opinions of a few dozen.

Of course, that wisdom depends on reviewers giving honest feedback. A few well-publicized incidents suggest that’s not always the case. For example, when Amazon’s Canadian site accidentally revealed the identities of anonymous book reviewers in 2004, it became apparent that many reviews came from publishers and from the authors themselves.

Technological idealists, perhaps not surprisingly, see a solution to this problem in cutting-edge computer science. One widely reported study last year showed that a text-analysis algorithm proved remarkably adept at detecting made-up reviews. The researchers instructed freelance writers to put themselves in the role of a hotel marketer who has been tasked by his boss with writing a fake customer review that is flattering to the hotel. They also compiled a set of comparison TripAdvisor reviews that the study’s authors felt were likely to be genuine. Human judges could not distinguish between the real ones and the fakes. But the algorithm correctly identified the reviews as real or phony with 90 percent accuracy by picking up on subtle differences, like whether the review described specific aspects of the hotel room layout (the real ones do) or mentioned matters that were unrelated to the hotel itself, like whether the reviewer was there on vacation or business (a marker of fakes). Great, but in the cat-and-mouse game of fraud vs. fraud detection, phony reviewers can now design feedback that won’t set off any alarm bells.
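
For a sense of what such a text-analysis approach looks like mechanically, here is a deliberately tiny, hypothetical sketch: a bag-of-words classifier in scikit-learn trained on a handful of invented reviews. It is not the researchers’ model, features, or data; it only illustrates the general technique of learning surface cues that separate the two classes, and a real system would need thousands of labeled examples.

```python
# Hypothetical sketch of a fake-review text classifier (invented toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The corner room overlooked the courtyard and the bed faced the window.",   # concrete layout details
    "Check-in took two minutes and the desk by the minibar was ideal for work.",
    "My husband and I were in town on business and this hotel was amazing!",    # off-topic framing
    "Best vacation ever, the staff made our anniversary trip unforgettable!",
]
labels = ["real", "real", "fake", "fake"]    # invented labels, illustration only

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["The room layout put the desk right next to the balcony door."]))
```
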
Just how prevalent are fake reviews? A trio of business school professors, Yale’s Judith Chevalier, Yaniv Dover of Dartmouth, and USC’s Dina Mayzlin, have taken a clever approach to inferring an answer by comparing the reviews on two travel sites, TripAdvisor and Expedia. In order to post an Expedia review, a traveler needs to have made her hotel booking through the site. Hence, a hotel looking to inflate its rating or malign a competitor would have to incur the cost of paying itself through the site, accumulating transaction fees and tax liabilities in the process. On TripAdvisor, all you need to post fake reviews are a few phony login names and email addresses.

Differences in the overall ratings on TripAdvisor versus Expedia could simply be the result of a more sympathetic community of reviewers. (In practice, TripAdvisor’s ratings are actually lower on average.) So Mayzlin and her co-authors focus on the places where the gaps between TripAdvisor and Expedia reviews are widest. In their analysis, they looked at hotels that probably appear identical to the average traveler but have different underlying ownership or management. There are, for example, companies that own scores of franchises from hotel chains like Marriott and Hilton. Other hotels operate under these same nameplates but are independently owned. Similarly, many hotels are run on behalf of their owners by large management companies, while others are owner-managed. The average traveler is unlikely to know the difference between a Fairfield Inn owned by, say, the Pillar Hotel Group and one owned and operated by Ray Fisman. The study’s authors argue that the small owners and independents have less to lose by trying to goose their online ratings (or torpedo the ratings of their neighbors), reasoning that larger companies would be more vulnerable to punishment, censure, and loss of business if their shenanigans were uncovered. (The authors give the example of a recent case in which a manager at Ireland’s Clare Inn was caught posting fake reviews. The hotel is part of the Lynch Hotel Group, and in the wake of the fake postings, TripAdvisor removed suspicious reviews from other Lynch hotels, and unflattering media accounts of the episode generated negative PR that was shared across all Lynch properties.)

The researchers find that, even comparing hotels under the same brand, small owners are around 10 percent more likely to get five-star reviews on TripAdvisor than they are on Expedia (relative to hotels owned by large corporations). The study also examines whether these small owners might be targeting the competition with bad reviews. The authors look at negative reviews for hotels that have competitors within half a kilometer. Hotels where the nearby competition comes from small owners have 16 percent more one- and two-star ratings than those with neighboring hotels that are owned by big companies like Pillar.

This isn’t to say that consumers are making a mistake by using TripAdvisor to guide them in their hotel reservations. Despite the fraudulent posts, there is still a high degree of concordance between the ratings assigned by TripAdvisor and Expedia. And across the Web, there are scores of posters who seem passionate about their reviews.

Consumers, in turn, do seem to take online reviews seriously. By comparing restaurants that fall just above and just below the threshold for an extra half-star on Yelp, Harvard Business School’s Michael Luca estimates that an extra star is worth an extra 5 to 9 percent in revenue. Luca’s intent isn’t to examine whether restaurants are gaming Yelp’s system, but his findings certainly indicate that they’d profit from trying. (Ironically, Luca also finds that independent restaurants—the establishments that Mayzlin et al. would predict are most likely to put up fake postings—benefit the most from an extra star. You don’t need to check out Yelp to know what to expect when you walk into McDonald’s or Pizza Hut.)

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Mashable.[end-div]

Shirking Life-As-Performance of a Social Network

Ex-Facebook employee number 51 gives us a glimpse from within the social network giant. It’s a tale of social isolation, shallow relationships, voyeurism, and narcissistic performance art. It’s also a tale of the re-discovery of life prior to “likes”, “status updates”, “tweets” and “followers”.

[div class=attrib]From the Washington Post:[end-div]

Not long after Katherine Losse left her Silicon Valley career and moved to this West Texas town for its artsy vibe and crisp desert air, she decided to make friends the old-fashioned way, in person. So she went to her Facebook page and, with a series of keystrokes, shut it off.

The move carried extra import because Losse had been the social network’s 51st employee and rose to become founder Mark Zuckerberg’s personal ghostwriter. But Losse gradually soured on the revolution in human relations she witnessed from within.

The explosion of social media, she believed, left hundreds of millions of users with connections that were more plentiful but also narrower and less satisfying, with intimacy losing out to efficiency. It was time, Losse thought, for people to renegotiate their relationships with technology.

“It’s okay to feel weird about this because I feel weird about this, and I was in the center of it,” said Losse, 36, who has long, dark hair and sky-blue eyes. “We all know there is an anxiety, there’s an unease, there’s a worry that our lives are changing.”

Her response was to quit her job — something made easier by the vested stock she cashed in — and to embrace the ancient toil of writing something in her own words, at book length, about her experiences and the philosophical questions they inspired.

That brought her to Marfa, a town of 2,000 people in an area so remote that astronomers long have come here for its famously dark night sky, beyond the light pollution that’s a byproduct of modern life.

Losse’s mission was oddly parallel. She wanted to live, at least for a time, as far as practical from the world’s relentless digital glow.

Losse was a graduate student in English at Johns Hopkins University in 2004 when Facebook began its spread, first at Harvard, then other elite schools and beyond. It provided a digital commons, a way of sharing personal lives that to her felt safer than the rest of the Internet.

The mix has proved powerful. More than 900 million people have joined; if they were citizens of a single country, Facebook Nation would be the world’s third largest.

At first, Losse was among those smitten. In 2005, after moving to Northern California in search of work, she responded to a query on the Facebook home page seeking résumés. Losse soon became one of the company’s first customer-service reps, replying to questions from users and helping to police abuses.

She was firmly on the wrong side of the Silicon Valley divide, which prizes the (mostly male) engineers over those, like Losse, with liberal arts degrees. Yet she had the sense of being on the ground floor of something exciting that might also yield a life-altering financial jackpot.

In her first days, she was given a master password that she said allowed her to see any information users typed into their Facebook pages. She could go into pages to fix technical problems and police content. Losse recounted sparring with a user who created a succession of pages devoted to anti-gay messages and imagery. In one exchange, she noticed the man’s password, “Ilovejason,” and was startled by the painful irony.

Another time, Losse cringed when she learned that a team of Facebook engineers was developing what they called “dark profiles” — pages for people who had not signed up for the service but who had been identified in posts by Facebook users. The dark profiles were not to be visible to ordinary users, Losse said, but if the person eventually signed up, Facebook would activate those latent links to other users.

All the world a stage

Losse’s unease sharpened when a celebrated Facebook engineer was developing the capacity for users to upload video to their pages. He started videotaping friends, including Losse, almost compulsively. On one road trip together, the engineer made a video of her napping in a car and uploaded it remotely to an internal Facebook page. Comments noting her siesta soon began appearing — only moments after it happened.

“The day before, I could just be in a car being in a car. Now my being in a car is a performance that is visible to everyone,” Losse said, exasperation creeping into her voice. “It’s almost like there is no middle of nowhere anymore.”

Losse began comparing Facebook to the iconic 1976 Eagles song “Hotel California,” with its haunting coda, “You can check out anytime you want, but you can never leave.” She put a copy of the record jacket on prominent display in a house she and several other employees shared not far from the headquarters (then in Palo Alto, Calif.; it’s now in Menlo Park).

As Facebook grew, Losse’s career blossomed. She helped introduce Facebook to new countries, pushing for quick, clean translations into new languages. Later, she moved to the heart of the company as Zuckerberg’s ghostwriter, mimicking his upbeat yet efficient style of communicating in blog posts he issued.

But her concerns continued to grow. When Zuckerberg, apparently sensing this, said to Losse, “I don’t know if I trust you,” she decided she needed to either be entirely committed to Facebook or leave. She soon sold some of her vested stock. She won’t say how much, but the proceeds provided enough of a financial boon for her to go a couple of years without a salary, though not enough to stop working altogether, as some former colleagues have.

‘Touchy, private territory’

Among Losse’s concerns were the vast amount of personal data Facebook gathers. “They are playing on very touchy, private territory. They really are,” she said. “To not be conscious of that seems really dangerous.”

It wasn’t just Facebook. Losse developed a skepticism for many social technologies and the trade-offs they require.

Facebook and some others have portrayed proliferating digital connections as inherently good, bringing a sprawling world closer together and easing personal isolation.

Moira Burke, a researcher who trained at the Human-Computer Interaction Institute at Carnegie Mellon University and has since joined Facebook’s Data Team, tracked the moods of 1,200 volunteer users. She found that simply scanning the postings of others had little effect on well-being; actively participating in exchanges with friends, however, relieved loneliness.

Summing up her findings, she wrote on Facebook’s official blog, “The more people use Facebook, the better they feel.”

But Losse’s concerns about online socializing track with the findings of Sherry Turkle, a Massachusetts Institute of Technology psychologist who says users of social media have little understanding of the personal information they are giving away. Nor, she said, do many understand the potentially distorting consequences when they put their lives on public display, in what amounts to an ongoing performance on social media.

“In our online lives, we edit, we retouch, we clean up,” said Turkle, author of “Alone Together: Why We Expect More From Technology and Less From Each Other,” published in 2011. “We substitute what I call ‘connection for real conversation.’”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Boy Kings by Katherine Losse.[end-div]