Tag Archives: technology

Technology and the Exploitation of Children

Many herald the forward motion of technological innovation as progress. In many cases the momentum does genuinely seem to carry us towards a better place; it broadly alleviates pain and suffering; it delivers more and better nutrition to our bodies and our minds. Yet for all the positive steps, this progress is often accompanied by retrograde, and frequently paradoxical, leaps. Particularly disturbing is the relative ease with which technology allows us, the responsible adults, to sexualise and exploit children. This is certainly not a new phenomenon, but our technical prowess makes the problem far more pervasive. A case in point: the Instagram beauty pageant. Move over, Honey Boo-Boo.

From the Washington Post:

The photo-sharing site Instagram has become wildly popular as a way to trade pictures of pets and friends. But a new trend on the site is making parents cringe: beauty pageants, in which thousands of young girls — many appearing no older than 12 or 13 — submit photographs of themselves for others to judge.

In one case, the mug shots of four girls, middle-school-age or younger, have been pitted against each other. One is all dimples, wearing a hair bow and a big, toothy grin. Another is trying out a pensive, sultry look.

Any of Instagram’s 30 million users can vote on the appearance of the girls in a comments section of the post. Once a girl’s photo receives a certain number of negative remarks, the pageant host, who can remain anonymous, can update it with a big red X or the word “OUT” scratched across her face.

“U.G.L.Y,” wrote one user about a girl, who submitted her photo to one of the pageants identified on Instagram by the keyword “#beautycontest.”

The phenomenon has sparked concern among parents and child safety advocates who fear that young girls are making themselves vulnerable to adult strangers and participating in often cruel social interactions at a sensitive period of development.

But the contests are the latest example of how technology is pervading the lives of children in ways that parents and teachers struggle to understand or monitor.

“What started out as just a photo-sharing site has become something really pernicious for young girls,” said Rachel Simmons, author of “Odd Girl Out” and a speaker on youth and girls. “What happened was, like most social media experiences, girls co-opted it and imposed their social life on it to compete for attention and in a very exaggerated way.”

It’s difficult to track when the pageants began and who initially set them up. A keyword search of #beautycontest turned up 8,757 posts, while #rateme had 27,593 photo posts. Experts say those two terms represent only a fraction of the activity. Contests are also appearing on other social media sites, including Tumblr and Snapchat — mobile apps that have grown in popularity among youth.

Facebook, which bought Instagram last year, declined to comment. The company has a policy of not allowing anyone under the age of 13 to create an account or share photos on Instagram. But Facebook has been criticized for allowing pre-teens to get around the rule — two years ago, Consumer Reports estimated their presence on Facebook was 7.5 million. (Washington Post Co. Chairman Donald Graham sits on Facebook’s board of directors.)

Read the entire article after the jump.

Image: Instagram. Courtesy of Wired.


Blame (Or Hug) Martin Cooper

Martin Cooper. You may not know that name, but you and a fair proportion of the world’s 7 billion inhabitants have surely held or dropped or prodded or cursed his offspring.

You see, forty years ago Martin Cooper used his baby to make the first public mobile phone call. Martin Cooper invented the cell phone.

From the Guardian:

It is 40 years this week since the first public mobile phone call. On 3 April, 1973, Martin Cooper, a pioneering inventor working for Motorola in New York, called a rival engineer from the pavement of Sixth Avenue to brag and was met with a stunned, defeated silence. The race to make the first portable phone had been won. The Pandora’s box containing txt-speak, pocket-dials and pig-hating suicidal birds was open.

Many people at Motorola, however, felt mobile phones would never be a mass-market consumer product. They wanted the firm to focus on business carphones. But Cooper and his team persisted. Ten years after that first boastful phonecall they brought the portable phone to market, at a retail price of around $4,000.

Thirty years on, the number of mobile phone subscribers worldwide is estimated at six and a half billion. And Angry Birds games have been downloaded 1.7bn times.

This is the story of the mobile phone in 40 facts:

1 That first portable phone was called a DynaTAC. The original model had 35 minutes of battery life and weighed one kilogram.

2 Several prototypes of the DynaTAC were created just 90 days after Cooper had first suggested the idea. He held a competition among Motorola engineers from various departments to design it and ended up choosing “the least glamorous”.

3 The DynaTAC’s weight was reduced to 794g before it came to market. It was still heavy enough to beat someone to death with, although this fact was never used as a selling point.

4 Nonetheless, people cottoned on. DynaTAC became the phone of choice for fictional psychopaths, including Wall Street’s Gordon Gekko, American Psycho’s Patrick Bateman and Saved by the Bell’s Zack Morris.

5 The UK’s first public mobile phone call was made by comedian Ernie Wise in 1985 from St Katharine dock to the Vodafone head offices over a curry house in Newbury.

6 Vodafone’s 1985 monopoly of the UK mobile market lasted just nine days before Cellnet (now O2) launched its rival service. A Vodafone spokesperson was probably all like: “Aw, shucks!”

7 Cellnet and Vodafone were the only UK mobile providers until 1993.

8 It took Vodafone just less than nine years to reach the one million customers mark. They reached two million just 18 months later.

9 The first smartphone was IBM’s Simon, which debuted at the Wireless World Conference in 1993. It had an early LCD touchscreen and also functioned as an email device, electronic pager, calendar, address book and calculator.

10 The first cameraphone was created by French entrepreneur Philippe Kahn. He took the first photograph with a mobile phone, of his newborn daughter Sophie, on 11 June, 1997.

Read the entire article after the jump.

Image: Dr. Martin Cooper, inventor of the cell phone, photographed in 2007 holding a DynaTAC prototype from 1973. Courtesy of Wikipedia.

Next Up: Apple TV

Robert Hof argues that the time is ripe for Steve Jobs’ corporate legacy to reinvent the TV. Apple transformed the personal computer industry, the mobile phone market and the music business. Clearly the company has all the components in place to assemble another innovation.

From Technology Review:

Steve Jobs couldn’t hide his frustration. Asked at a technology conference in 2010 whether Apple might finally turn its attention to television, he launched into an exasperated critique of TV. Cable and satellite TV companies make cheap, primitive set-top boxes that “squash any opportunity for innovation,” he fumed. Viewers are stuck with “a table full of remotes, a cluster full of boxes, a bunch of different [interfaces].” It was the kind of technological mess that cried out for Apple to clean it up with an elegant product. But Jobs professed to have no idea how his company could transform the TV.

Scarcely a year later, however, he sounded far more confident. Before he died on October 5, 2011, he told his biographer, ­Walter Isaacson, that Apple wanted to create an “integrated television set that is completely easy to use.” It would sync with other devices and Apple’s iCloud online storage service and provide “the simplest user interface you could imagine.” He added, tantalizingly, “I finally cracked it.”

Precisely what he cracked remains hidden behind Apple’s shroud of secrecy. Apple has had only one television-related product—the black, hockey-puck-size Apple TV device, which streams shows and movies to a TV. For years, Jobs and Tim Cook, his successor as CEO, called that device a “hobby.” But under the guise of this hobby, Apple has been steadily building hardware, software, and services that make it easier for people to watch shows and movies in whatever way they wish. Already, the company has more of the pieces for a compelling next-generation TV experience than people might realize.

And as Apple showed with the iPad and iPhone, it doesn’t have to invent every aspect of a product in order for it to be disruptive. Instead, it has become the leader in consumer electronics by combining existing technologies with some of its own and packaging them into products that are simple to use. TV seems to be at that moment now. People crave something better than the fusty, rigidly controlled cable TV experience, and indeed, the technologies exist for something better to come along. Speedier broadband connections, mobile TV apps, and the availability of some shows and movies on demand from Netflix and Hulu have made it easier to watch TV anytime, anywhere. The number of U.S. cable and satellite subscribers has been flat since 2010.

Apple would not comment. But it’s clear from two dozen interviews with people close to Apple suppliers and partners, and with people Apple has spoken to in the TV industry, that television—the medium and the device—is indeed its next target.

The biggest question is not whether Apple will take on TV, but when. The company must eventually come up with another breakthrough product; with annual revenue already topping $156 billion, it needs something very big to keep growth humming after the next year or two of the iPad boom. Walter Price, managing director of Allianz Global Investors, which holds nearly $1 billion in Apple shares, met with Apple executives in September and came away convinced that it would be years before Apple could get a significant share of the $345 billion worldwide market for televisions. But at $1,000, the bare minimum most analysts expect an Apple television to cost, such a product would eventually be a significant revenue generator. “You sell 10 million of those, it can move the needle,” he says.

Cook, who replaced Jobs as CEO in August 2011, could use a boost, too. He has presided over missteps such as a flawed iPhone mapping app that led to a rare apology and a major management departure. Seen as a peerless operations whiz, Cook still needs a revolutionary product of his own to cement his place next to Saint Steve. Corey Ferengul, a principal at the digital media investment firm Apace Equities and a former executive at Rovi, which provided TV programming guide services to Apple and other companies, says an Apple TV will be that product: “This will be Tim Cook’s first ‘holy shit’ innovation.”

What Apple Already Has

Rapt attention would be paid to whatever round-edged piece of brushed-aluminum hardware Apple produced, but a television set itself would probably be the least important piece of its television strategy. In fact, many well-connected people in technology and television, from TV and online video maven Mark Cuban to venture capitalist and former Apple executive Jean-Louis Gassée, can’t figure out why Apple would even bother with the machines.

For one thing, selling televisions is a low-margin business. No one subsidizes the purchase of a TV the way your wireless carrier does with the iPhone (an iPhone might cost you $200, but Apple’s revenue from it is much higher than that). TVs are also huge and difficult to stock in stores, let alone ship to homes. Most of all, the upgrade cycle that powers Apple’s iPhone and iPad profit engine doesn’t apply to television sets—no one replaces them every year or two.

But even though TVs don’t line up neatly with the way Apple makes money on other hardware, they are likely to remain central to people’s ever-increasing consumption of video, games, and other forms of media. Apple at least initially could sell the screens as a kind of Trojan horse—a way of entering or expanding its role in lines of business that are more profitable, such as selling movies, shows, games, and other Apple hardware.

Read the entire article following the jump.

Image courtesy of Apple, Inc.

Startup Ideas

For technologists the barriers to developing a new product have never been so low. The cost of the tools needed to develop, integrate and distribute software apps is, to all intents and purposes, negligible. Of course, most would recognize that development is often the easy part. The real difficulty lies in building an effective and sustainable marketing and communication strategy, and in getting the product adopted.

The recent headlines about 17-year-old British app developer Nick D’Aloisio selling his Summly app to Yahoo! for the tidy sum of $30 million have lots of young and seasoned developers scratching their heads. After all, if a school kid can do it, why not anybody? Why not me?

Paul Graham may have some of the answers. He sold his first company to Yahoo in 1998. He now runs Y Combinator, a successful startup incubator. We excerpt his recent, observant and insightful essay below.

From Paul Graham:

The way to get startup ideas is not to try to think of startup ideas. It’s to look for problems, preferably problems you have yourself.

The very best startup ideas tend to have three things in common: they’re something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Problems

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

I made it myself. In 1995 I started a company to put art galleries online. But galleries didn’t want to be online. It’s not how the art business works. So why did I spend 6 months working on this stupid idea? Because I didn’t pay attention to users. I invented a model of the world that didn’t correspond to reality, and worked from that. I didn’t notice my model was wrong until I tried to convince users to pay for what we’d built. Even then I took embarrassingly long to catch on. I was attached to my model of the world, and I’d spent a lot of time on the software. They had to want it!

Why do so many founders build things no one wants? Because they begin by trying to think of startup ideas. That m.o. is doubly dangerous: it doesn’t merely yield few good ideas; it yields bad ideas that sound plausible enough to fool you into working on them.

At YC we call these “made-up” or “sitcom” startup ideas. Imagine one of the characters on a TV show was starting a startup. The writers would have to invent something for it to do. But coming up with good startup ideas is hard. It’s not something you can do for the asking. So (unless they got amazingly lucky) the writers would come up with an idea that sounded plausible, but was actually bad.

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

Well

When a startup launches, there have to be at least some users who really need what they’re making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you’re making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can’t expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that’s broad but shallow, or one that’s narrow and deep, like a well.

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they’ll use it even when it’s a crappy version one made by a two-person startup they’ve never heard of? If you can’t answer that, the idea is probably bad.

You don’t need the narrowness of the well per se. It’s depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it’s a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it’s not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

Similarly for Microsoft: Basic for the Altair; Basic for other machines; other languages besides Basic; operating systems; applications; IPO.

Self

How do you tell whether there’s a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can’t. The founders of Airbnb didn’t realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn’t foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That’s probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it’s obvious from the beginning when there’s a path out of the initial niche. And sometimes I can see a path that’s not immediately obvious; that’s one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can’t predict whether there’s a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you’re the right sort of person, you have the right sort of hunches. If you’re at the leading edge of a field that’s changing fast, when you have a hunch that something is worth doing, you’re more likely to be right.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig says:

You want to know how to paint a perfect painting? It’s easy. Make yourself perfect and then just paint naturally.

I’ve wondered about that passage since I read it in high school. I’m not sure how useful his advice is for painting specifically, but it fits this situation well. Empirically, the way to have good startup ideas is to become the sort of person who has them.

Being at the leading edge of a field doesn’t mean you have to be one of the people pushing it forward. You can also be at the leading edge as a user. It was not so much because he was a programmer that Facebook seemed a good idea to Mark Zuckerberg as because he used computers so much. If you’d asked most 40 year olds in 2004 whether they’d like to publish their lives semi-publicly on the Internet, they’d have been horrified at the idea. But Mark already lived online; to him it seemed natural.

Paul Buchheit says that people at the leading edge of a rapidly changing field “live in the future.” Combine that with Pirsig and you get:

Live in the future, then build what’s missing.

That describes the way many if not most of the biggest startups got started. Neither Apple nor Yahoo nor Google nor Facebook were even supposed to be companies at first. They grew out of things their founders built because there seemed a gap in the world.

If you look at the way successful founders have had their ideas, it’s generally the result of some external stimulus hitting a prepared mind. Bill Gates and Paul Allen hear about the Altair and think “I bet we could write a Basic interpreter for it.” Drew Houston realizes he’s forgotten his USB stick and thinks “I really need to make my files live online.” Lots of people heard about the Altair. Lots forgot USB sticks. The reason those stimuli caused those founders to start companies was that their experiences had prepared them to notice the opportunities they represented.

The verb you want to be using with respect to startup ideas is not “think up” but “notice.” At YC we call ideas that grow naturally out of the founders’ own experiences “organic” startup ideas. The most successful startups almost all begin this way.

That may not have been what you wanted to hear. You may have expected recipes for coming up with startup ideas, and instead I’m telling you that the key is to have a mind that’s prepared in the right way. But disappointing though it may be, this is the truth. And it is a recipe of a sort, just one that in the worst case takes a year rather than a weekend.

If you’re not at the leading edge of some rapidly changing field, you can get to one. For example, anyone reasonably smart can probably get to an edge of programming (e.g. building mobile apps) in a year. Since a successful startup will consume at least 3-5 years of your life, a year’s preparation would be a reasonable investment. Especially if you’re also looking for a cofounder.

You don’t have to learn programming to be at the leading edge of a domain that’s changing fast. Other domains change fast. But while learning to hack is not necessary, it is for the foreseeable future sufficient. As Marc Andreessen put it, software is eating the world, and this trend has decades left to run.

Knowing how to hack also means that when you have ideas, you’ll be able to implement them. That’s not absolutely necessary (Jeff Bezos couldn’t) but it’s an advantage. It’s a big advantage, when you’re considering an idea like putting a college facebook online, if instead of merely thinking “That’s an interesting idea,” you can think instead “That’s an interesting idea. I’ll try building an initial version tonight.” It’s even better when you’re both a programmer and the target user, because then the cycle of generating new versions and testing them on users can happen inside one head.

Noticing

Once you’re living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you’re really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won’t be obvious is that they’re startup ideas. So if you want to find startup ideas, don’t merely turn on the filter “What’s missing?” Also turn off every other filter, particularly “Could this be a big company?” There’s plenty of time to apply that test later. But if you’re thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones.

Most things that are missing will take some time to see. You almost have to trick yourself into seeing the ideas around you.

But you know the ideas are out there. This is not one of those problems where there might not be an answer. It’s impossibly unlikely that this is the exact moment when technological progress stops. You can be sure people are going to build things in the next few years that will make you think “What did I do before x?”

And when these problems get solved, they will probably seem flamingly obvious in retrospect. What you need to do is turn off the filters that usually prevent you from seeing them. The most powerful is simply taking the current state of the world for granted. Even the most radically open-minded of us mostly do that. You couldn’t get from your bed to the front door if you stopped to question everything.

But if you’re looking for startup ideas you can sacrifice some of the efficiency of taking the status quo for granted and start to question things. Why is your inbox overflowing? Because you get a lot of email, or because it’s hard to get email out of your inbox? Why do you get so much email? What problems are people trying to solve by sending you email? Are there better ways to solve them? And why is it hard to get emails out of your inbox? Why do you keep emails around after you’ve read them? Is an inbox the optimal tool for that?

Pay particular attention to things that chafe you. The advantage of taking the status quo for granted is not just that it makes life (locally) more efficient, but also that it makes life more tolerable. If you knew about all the things we’ll get in the next 50 years but don’t have yet, you’d find present day life pretty constraining, just as someone from the present would if they were sent back 50 years in a time machine. When something annoys you, it could be because you’re living in the future.

When you find the right sort of problem, you should probably be able to describe it as obvious, at least to you. When we started Viaweb, all the online stores were built by hand, by web designers making individual HTML pages. It was obvious to us as programmers that these sites would have to be generated by software.

Which means, strangely enough, that coming up with startup ideas is a question of seeing the obvious. That suggests how weird this process is: you’re trying to see things that are obvious, and yet that you hadn’t seen.

Since what you need to do here is loosen up your own mind, it may be best not to make too much of a direct frontal attack on the problem—i.e. to sit down and try to think of ideas. The best plan may be just to keep a background process running, looking for things that seem to be missing. Work on hard problems, driven mainly by curiosity, but have a second self watching over your shoulder, taking note of gaps and anomalies.

Give yourself some time. You have a lot of control over the rate at which you turn yours into a prepared mind, but you have less control over the stimuli that spark ideas when they hit it. If Bill Gates and Paul Allen had constrained themselves to come up with a startup idea in one month, what if they’d chosen a month before the Altair appeared? They probably would have worked on a less promising idea. Drew Houston did work on a less promising idea before Dropbox: an SAT prep startup. But Dropbox was a much better idea, both in the absolute sense and also as a match for his skills.

A good way to trick yourself into noticing ideas is to work on projects that seem like they’d be cool. If you do that, you’ll naturally tend to build things that are missing. It wouldn’t seem as interesting to build something that already existed.

Just as trying to think up startup ideas tends to produce bad ones, working on things that could be dismissed as “toys” often produces good ones. When something is described as a toy, that means it has everything an idea needs except being important. It’s cool; users love it; it just doesn’t matter. But if you’re living in the future and you build something cool that users love, it may matter more than outsiders think. Microcomputers seemed like toys when Apple and Microsoft started working on them. I’m old enough to remember that era; the usual term for people with their own microcomputers was “hobbyists.” BackRub seemed like an inconsequential science project. The Facebook was just a way for undergrads to stalk one another.

At YC we’re excited when we meet startups working on things that we could imagine know-it-alls on forums dismissing as toys. To us that’s positive evidence an idea is good.

If you can afford to take a long view (and arguably you can’t afford not to), you can turn “Live in the future and build what’s missing” into something even better:

Live in the future and build what seems interesting.

School

That’s what I’d advise college students to do, rather than trying to learn about “entrepreneurship.” “Entrepreneurship” is something you learn best by doing it. The examples of the most successful founders make that clear. What you should be spending your time on in college is ratcheting yourself into the future. College is an incomparable opportunity to do that. What a waste to sacrifice an opportunity to solve the hard part of starting a startup—becoming the sort of person who can have organic startup ideas—by spending time learning about the easy part. Especially since you won’t even really learn about it, any more than you’d learn about sex in a class. All you’ll learn is the words for things.

The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you’ll probably see problems that software could solve. In fact, you’re doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don’t even know what the status quo is to take it for granted.

So if you’re a CS major and you want to start a startup, instead of taking a class on entrepreneurship you’re better off taking a class on, say, genetics. Or better still, go work for a biotech company. CS majors normally get summer jobs at computer hardware or software companies. But if you want to find startup ideas, you might do better to get a summer job in some unrelated field.

Or don’t take any extra classes, and just build things. It’s no coincidence that Microsoft and Facebook both got started in January. At Harvard that is (or was) Reading Period, when students have no classes to attend because they’re supposed to be studying for finals.

But don’t feel like you have to build things that will become startups. That’s premature optimization. Just build things. Preferably with other students. It’s not just the classes that make a university such a good place to crank oneself into the future. You’re also surrounded by other people trying to do the same thing. If you work together with them on projects, you’ll end up producing not just organic ideas, but organic ideas with organic founding teams—and that, empirically, is the best combination.

Beware of research. If an undergrad writes something all his friends start using, it’s quite likely to represent a good startup idea. Whereas a PhD dissertation is extremely unlikely to. For some reason, the more a project has to count as research, the less likely it is to be something that could be turned into a startup. I think the reason is that the subset of ideas that count as research is so narrow that it’s unlikely that a project that satisfied that constraint would also satisfy the orthogonal constraint of solving users’ problems. Whereas when students (or professors) build something as a side-project, they automatically gravitate toward solving users’ problems—perhaps even with an additional energy that comes from being freed from the constraints of research.

Competition

Because a good idea should seem obvious, when you have one you’ll tend to feel that you’re late. Don’t let that deter you. Worrying that you’re late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you’re probably not too late. It’s exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don’t discard the idea.

If you’re uncertain, ask users. The question of whether you’re too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

The question then is whether that beachhead is big enough. Or more importantly, who’s in it: if the beachhead consists of people doing something lots more people will be doing in the future, then it’s probably big enough no matter how small it is. For example, if you’re building something differentiated from competitors by the fact that it works on phones, but it only works on the newest phones, that’s probably a big enough beachhead.

Err on the side of doing things where you’ll face competitors. Inexperienced founders usually give competitors more credit than they deserve. Whether you succeed depends far more on you than on your competitors. So better a good idea with competitors than a bad one without.

You don’t need to worry about entering a “crowded market” so long as you have a thesis about what everyone else in it is overlooking. In fact that’s a very promising starting point. Google was that type of idea. Your thesis has to be more precise than “we’re going to make an x that doesn’t suck” though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn’t have the courage of their convictions, and that your plan is what they’d have done if they’d followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there’s demand and that none of the existing solutions are good enough. A startup can’t hope to enter a market that’s obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Filters

There are two more filters you’ll need to turn off if you want to notice startup ideas: the unsexy filter and the schlep filter.

Most programmers wish they could start a startup by just writing some brilliant code, pushing it to a server, and having users pay them lots of money. They’d prefer not to deal with tedious problems or get involved in messy ways with the real world. Which is a reasonable preference, because such things slow you down. But this preference is so widespread that the space of convenient startup ideas has been stripped pretty clean. If you let your mind wander a few blocks down the street to the messy, tedious ideas, you’ll find valuable ones just sitting there waiting to be implemented.

The schlep filter is so dangerous that I wrote a separate essay about the condition it induces, which I called schlep blindness. I gave Stripe as an example of a startup that benefited from turning off this filter, and a pretty striking example it is. Thousands of programmers were in a position to see this idea; thousands of programmers knew how painful it was to process payments before Stripe. But when they looked for startup ideas they didn’t see this one, because unconsciously they shrank from having to deal with payments. And dealing with payments is a schlep for Stripe, but not an intolerable one. In fact they might have had net less pain; because the fear of dealing with payments kept most people away from this idea, Stripe has had comparatively smooth sailing in other areas that are sometimes painful, like user acquisition. They didn’t have to try very hard to make themselves heard by users, because users were desperately waiting for what they were building.

The unsexy filter is similar to the schlep filter, except it keeps you from working on problems you despise rather than ones you fear. We overcame this one to work on Viaweb. There were interesting things about the architecture of our software, but we weren’t interested in ecommerce per se. We could see the problem was one that needed to be solved though.

Turning off the schlep filter is more important than turning off the unsexy filter, because the schlep filter is more likely to be an illusion. And even to the degree it isn’t, it’s a worse form of self-indulgence. Starting a successful startup is going to be fairly laborious no matter what. Even if the product doesn’t entail a lot of schleps, you’ll still have plenty dealing with investors, hiring and firing people, and so on. So if there’s some idea you think would be cool but you’re kept away from by fear of the schleps involved, don’t worry: any sufficiently good idea will have as many.

The unsexy filter, while still a source of error, is not as entirely useless as the schlep filter. If you’re at the leading edge of a field that’s changing rapidly, your ideas about what’s sexy will be somewhat correlated with what’s valuable in practice. Particularly as you get older and more experienced. Plus if you find an idea sexy, you’ll work on it more enthusiastically.

Recipes

While the best way to discover startup ideas is to become the sort of person who has them and then build whatever interests you, sometimes you don’t have that luxury. Sometimes you need an idea now. For example, if you’re working on a startup and your initial idea turns out to be bad.

For the rest of this essay I’ll talk about tricks for coming up with startup ideas on demand. Although empirically you’re better off using the organic strategy, you could succeed this way. You just have to be more disciplined. When you use the organic method, you don’t even notice an idea unless it’s evidence that something is truly missing. But when you make a conscious effort to think of startup ideas, you have to replace this natural constraint with self-discipline. You’ll see a lot more ideas, most of them bad, so you need to be able to filter them.

One of the biggest dangers of not using the organic method is the example of the organic method. Organic ideas feel like inspirations. There are a lot of stories about successful startups that began when the founders had what seemed a crazy idea but “just knew” it was promising. When you feel that about an idea you’ve had while trying to come up with startup ideas, you’re probably mistaken.

When searching for ideas, look in areas where you have some expertise. If you’re a database expert, don’t build a chat app for teenagers (unless you’re also a teenager). Maybe it’s a good idea, but you can’t trust your judgment about that, so ignore it. There have to be other ideas that involve databases, and whose quality you can judge. Do you find it hard to come up with good ideas involving databases? That’s because your expertise raises your standards. Your ideas about chat apps are just as bad, but you’re giving yourself a Dunning-Kruger pass in that domain.

The place to start looking for ideas is things you need. There must be things you need.

One good trick is to ask yourself whether in your previous job you ever found yourself saying “Why doesn’t someone make x? If someone made x we’d buy it in a second.” If you can think of any x people said that about, you probably have an idea. You know there’s demand, and people don’t say that about things that are impossible to build.

More generally, try asking yourself whether there’s something unusual about you that makes your needs different from most other people’s. You’re probably not the only one. It’s especially good if you’re different in a way people will increasingly be.

If you’re changing ideas, one unusual thing about you is the idea you’d previously been working on. Did you discover any needs while working on it? Several well-known startups began this way. Hotmail began as something its founders wrote to talk about their previous startup idea while they were working at their day jobs.

A particularly promising way to be unusual is to be young. Some of the most valuable new ideas take root first among people in their teens and early twenties. And while young founders are at a disadvantage in some respects, they’re the only ones who really understand their peers. It would have been very hard for someone who wasn’t a college student to start Facebook. So if you’re a young founder (under 23 say), are there things you and your friends would like to do that current technology won’t let you?

The next best thing to an unmet need of your own is an unmet need of someone else. Try talking to everyone you can about the gaps they find in the world. What’s missing? What would they like to do that they can’t? What’s tedious or annoying, particularly in their work? Let the conversation get general; don’t be trying too hard to find startup ideas. You’re just looking for something to spark a thought. Maybe you’ll notice a problem they didn’t consciously realize they had, because you know how to solve it.

When you find an unmet need that isn’t your own, it may be somewhat blurry at first. The person who needs something may not know exactly what they need. In that case I often recommend that founders act like consultants—that they do what they’d do if they’d been retained to solve the problems of this one user. People’s problems are similar enough that nearly all the code you write this way will be reusable, and whatever isn’t will be a small price to start out certain that you’ve reached the bottom of the well.

One way to ensure you do a good job solving other people’s problems is to make them your own. When Rajat Suri of E la Carte decided to write software for restaurants, he got a job as a waiter to learn how restaurants worked. That may seem like taking things to extremes, but startups are extreme. We love it when founders do such things.

In fact, one strategy I recommend to people who need a new idea is not merely to turn off their schlep and unsexy filters, but to seek out ideas that are unsexy or involve schleps. Don’t try to start Twitter. Those ideas are so rare that you can’t find them by looking for them. Make something unsexy that people will pay you for.

A good trick for bypassing the schlep and to some extent the unsexy filter is to ask what you wish someone else would build, so that you could use it. What would you pay for right now?

Since startups often garbage-collect broken companies and industries, it can be a good trick to look for those that are dying, or deserve to, and try to imagine what kind of company would profit from their demise. For example, journalism is in free fall at the moment. But there may still be money to be made from something like journalism. What sort of company might cause people in the future to say “this replaced journalism” on some axis?

But imagine asking that in the future, not now. When one company or industry replaces another, it usually comes in from the side. So don’t look for a replacement for x; look for something that people will later say turned out to be a replacement for x. And be imaginative about the axis along which the replacement occurs. Traditional journalism, for example, is a way for readers to get information and to kill time, a way for writers to make money and to get attention, and a vehicle for several different types of advertising. It could be replaced on any of these axes (it has already started to be on most).

When startups consume incumbents, they usually start by serving some small but important market that the big players ignore. It’s particularly good if there’s an admixture of disdain in the big players’ attitude, because that often misleads them. For example, after Steve Wozniak built the computer that became the Apple I, he felt obliged to give his then-employer Hewlett-Packard the option to produce it. Fortunately for him, they turned it down, and one of the reasons they did was that it used a TV for a monitor, which seemed intolerably déclassé to a high-end hardware company like HP was at the time.

Are there groups of scruffy but sophisticated users like the early microcomputer “hobbyists” that are currently being ignored by the big players? A startup with its sights set on bigger things can often capture a small market easily by expending an effort that wouldn’t be justified by that market alone.

Similarly, since the most successful startups generally ride some wave bigger than themselves, it could be a good trick to look for waves and ask how one could benefit from them. The prices of gene sequencing and 3D printing are both experiencing Moore’s Law-like declines. What new things will we be able to do in the new world we’ll have in a few years? What are we unconsciously ruling out as impossible that will soon be possible?

Organic

But talking about looking explicitly for waves makes it clear that such recipes are plan B for getting startup ideas. Looking for waves is essentially a way to simulate the organic method. If you’re at the leading edge of some rapidly changing field, you don’t have to look for waves; you are the wave.

Finding startup ideas is a subtle business, and that’s why most people who try fail so miserably. It doesn’t work well simply to try to think of startup ideas. If you do that, you get bad ones that sound dangerously plausible. The best approach is more indirect: if you have the right sort of background, good startup ideas will seem obvious to you. But even then, not immediately. It takes time to come across situations where you notice something missing. And often these gaps won’t seem to be ideas for companies, just things that would be interesting to build. Which is why it’s good to have the time and the inclination to build things just because they’re interesting.

Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.

Read the entire article after the jump.

Image: Nick D’Aloisio with his Summly app. Courtesy of Telegraph.

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers, or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, the etymology of polysyllabic words, and so it goes.

So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer, smartphone or pair of spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter, we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.

Yet all this information will not directly make us any smarter; it is not applied knowledge, nor is it experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly: freed from the need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

Printing Human Cells

The most fundamental innovation tends to happen at the intersection of disciplines. So, what do you get if you cross 3-D printing technology with embryonic stem cell research? Well, you get a device that can print lines of cells with similar functions, such as heart muscle or kidney cells. Welcome to the new world of biofabrication. The science-fiction future seems ever closer.

From Scientific American:

Imagine if you could take living cells, load them into a printer, and squirt out a 3D tissue that could develop into a kidney or a heart. Scientists are one step closer to that reality, now that they have developed the first printer for embryonic human stem cells.

In a new study, researchers from the University of Edinburgh have created a cell printer that spits out living embryonic stem cells. The printer was capable of printing uniform-size droplets of cells gently enough to keep the cells alive and maintain their ability to develop into different cell types. The new printing method could be used to make 3D human tissues for testing new drugs, grow organs, or ultimately print cells directly inside the body.

Human embryonic stem cells (hESCs) are obtained from human embryos and can develop into any cell type in an adult person, from brain tissue to muscle to bone. This attribute makes them ideal for use in regenerative medicine — repairing, replacing and regenerating damaged cells, tissues or organs.

In a lab dish, hESCs can be placed in a solution that contains the biological cues that tell the cells to develop into specific tissue types, a process called differentiation. The process starts with the cells forming what are called “embryoid bodies.” Cell printers offer a means of producing embryoid bodies of a defined size and shape.

In the new study, the cell printer was made from a modified CNC machine (a computer-controlled machining tool) outfitted with two “bio-ink” dispensers: one containing stem cells in a nutrient-rich soup called cell medium and another containing just the medium. These embryonic stem cells were dispensed through computer-operated valves, while a microscope mounted to the printer provided a close-up view of what was being printed.

The two inks were dispensed in layers, one on top of the other to create cell droplets of varying concentration. The smallest droplets were only two nanoliters, containing roughly five cells.

The cells were printed onto a dish containing many small wells. The dish was then flipped over so that the droplets hung from it, allowing the stem cells to form clumps inside each well. (The printer lays down the cells in precisely sized droplets and in a certain pattern that is optimal for differentiation.)

Tests revealed that more than 95 percent of the cells were still alive 24 hours after being printed, suggesting they had not been killed by the printing process. More than 89 percent of the cells were still alive three days later, and also tested positive for a marker of their pluripotency — their potential to develop into different cell types.

Biomedical engineer Utkan Demirci, of Harvard University Medical School and Brigham and Women’s Hospital, has done pioneering work in printing cells, and thinks the new study is taking it in an exciting direction. “This technology could be really good for high-throughput drug testing,” Demirci told LiveScience. One can build mini-tissues from the bottom up, using a repeatable, reliable method, he said. Building whole organs is the long-term goal, Demirci said, though he cautioned that it “may be quite far from where we are today.”

[div class=attrib]Read the entire article after the leap.[end-div]

[div class=attrib]Image: 3D printing with embryonic stem cells. Courtesy of Alan Faulkner-Jones et al./Heriot-Watt University.[end-div]

Consumer Electronics Gone Mad

If you eat too quickly, then HAPIfork is the new eating device for you. If you have trouble seeing text on your palm-sized iPad, then Lenovo’s 27-inch tablet is for you. If you need musical motivation from One Direction to get your children to brush their teeth, then the Brush Buddies toothbrush is for you, and your kids. If you’re tired of technology, then stay away from this year’s Consumer Electronics Show (CES 2013).

If you’d like to see other strange products looking for a buyer follow this jump.

[div class=attrib]Image: The HAPIfork monitors how fast its user is eating and alerts them if their speed is faster than a pre-determined rate by vibrating, which altogether sounds like an incredibly strange eating experience. Courtesy of CES / Telegraph.[end-div]

The Rise of the Industrial Internet

As the internet that connects humans reaches a stable saturation point, the industrial internet — the network that connects things — is growing in both size and reach.

[div class=attrib]From the New York Times:[end-div]

When Sharoda Paul finished a postdoctoral fellowship last year at the Palo Alto Research Center, she did what most of her peers do — considered a job at a big Silicon Valley company, in her case, Google. But instead, Ms. Paul, a 31-year-old expert in social computing, went to work for General Electric.

Ms. Paul is one of more than 250 engineers recruited in the last year and a half to G.E.’s new software center here, in the East Bay of San Francisco. The company plans to increase that work force of computer scientists and software developers to 400, and to invest $1 billion in the center by 2015. The buildup is part of G.E.’s big bet on what it calls the “industrial Internet,” bringing digital intelligence to the physical world of industry as never before.

The concept of Internet-connected machines that collect data and communicate, often called the “Internet of Things,” has been around for years. Information technology companies, too, are pursuing this emerging field. I.B.M. has its “Smarter Planet” projects, while Cisco champions the “Internet of Everything.”

But G.E.’s effort, analysts say, shows that Internet-era technology is ready to sweep through the industrial economy much as the consumer Internet has transformed media, communications and advertising over the last decade.

In recent months, Ms. Paul has donned a hard hat and safety boots to study power plants. She has ridden on a rail locomotive and toured hospital wards. “Here, you get to work with things that touch people in so many ways,” she said. “That was a big draw.”

G.E. is the nation’s largest industrial company, a producer of aircraft engines, power plant turbines, rail locomotives and medical imaging equipment. It makes the heavy-duty machinery that transports people, heats homes and powers factories, and lets doctors diagnose life-threatening diseases.

G.E. resides in a different world from the consumer Internet. But the major technologies that animate Google and Facebook are also vital ingredients in the industrial Internet — tools from artificial intelligence, like machine-learning software, and vast streams of new data. In industry, the data flood comes mainly from smaller, more powerful and cheaper sensors on the equipment.

Smarter machines, for example, can alert their human handlers when they will need maintenance, before a breakdown. It is the equivalent of preventive and personalized care for equipment, with less downtime and more output.

“These technologies are really there now, in a way that is practical and economic,” said Mark M. Little, G.E.’s senior vice president for global research.

G.E.’s embrace of the industrial Internet is a long-term strategy. But if its optimism proves justified, the impact could be felt across the economy.

The outlook for technology-led economic growth is a subject of considerable debate. In a recent research paper, Robert J. Gordon, a prominent economist at Northwestern University, argues that the gains from computing and the Internet have petered out in the last eight years.

Since 2000, Mr. Gordon asserts, invention has focused mainly on consumer and communications technologies, including smartphones and tablet computers. Such devices, he writes, are “smaller, smarter and more capable, but do not fundamentally change labor productivity or the standard of living” in the way that electric lighting or the automobile did.

But others say such pessimism misses the next wave of technology. “The reason I think Bob Gordon is wrong is precisely because of the kind of thing G.E. is doing,” said Andrew McAfee, principal research scientist at M.I.T.’s Center for Digital Business.

Today, G.E. is putting sensors on everything, be it a gas turbine or a hospital bed. The mission of the engineers in San Ramon is to design the software for gathering data, and the clever algorithms for sifting through it for cost savings and productivity gains. Across the industries it covers, G.E. estimates such efficiency opportunities at as much as $150 billion.
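
A quick technical aside from us here at theDiagonal: the kind of sifting G.E.’s engineers describe can be pictured with a very small sketch. The example below is ours, not G.E.’s, and the sensor values, names and thresholds are invented. It shows the basic shape of predictive maintenance: watch a stream of sensor readings and raise a flag when recent values drift well outside their historical baseline.

# A minimal, hypothetical sketch of predictive maintenance: flag a machine
# when the average of its recent sensor readings drifts far above the
# historical baseline. All numbers and names are invented for illustration.
from statistics import mean, stdev

def needs_maintenance(readings, window=20, sigma=3.0):
    """Return True if the last `window` readings average more than `sigma`
    standard deviations above the earlier baseline readings."""
    if len(readings) <= window + 1:
        return False
    baseline = readings[:-window]
    recent = readings[-window:]
    mu, sd = mean(baseline), stdev(baseline)
    if sd == 0:
        return False
    return mean(recent) > mu + sigma * sd

# Example: a turbine whose vibration level creeps upward at the end.
vibration = [0.50 + 0.01 * (i % 5) for i in range(200)] + [0.75] * 20
print(needs_maintenance(vibration))  # True -> schedule service before failure

Real systems layer far more statistics and domain knowledge on top, but the alert-before-the-breakdown logic starts roughly here.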

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Internet of Things. Courtesy of Intel.[end-div]

Startup Culture: New is the New New

Starting up a new business was once a demanding and complex process, often undertaken in anonymity in the long shadows between the hours of a regular job. It still is, of course. However, nowadays “the startup” has become more of an event. The tech sector has raised this to a fine art by spawning an entire self-sustaining and self-promoting industry around startups.

You’ll find startup gurus, serial entrepreneurs and digital prophets — yes, AOL has a digital prophet on its payroll — strutting around on stage, twittering tips in the digital world, leading business plan bootcamps, pontificating on accelerator panels, hosting incubator love-ins in coffee shops or being splashed across the covers of Entrepreneur or Inc or FastCompany magazines on an almost daily basis. Beware! The back of your cereal box may be next.

[div class=attrib]From the Telegraph:[end-div]

I’ve seen the best minds of my generation destroyed by marketing, shilling for ad clicks, dragging themselves through the strip-lit corridors of convention centres looking for a venture capitalist. Just as X Factor has convinced hordes of tone deaf kids they can be pop stars, the startup industry has persuaded thousands that they can be the next rockstar entrepreneur. What’s worse is that while X Factor clogs up the television schedules for a couple of months, tech conferences have proliferated to such an extent that not a week goes by without another excuse to slope off. Some founders spend more time on panels pontificating about their business plans than actually executing them.

Earlier this year, I witnessed David Shing, AOL’s Digital Prophet – that really is his job title – delivering the opening remarks at a tech conference. The show summed up the worst elements of the self-obsessed, hyperactive world of modern tech. A 42-year-old man with a shock of Russell Brand hair, expensive spectacles and paint-splattered trousers, Shingy paced the stage spouting buzzwords: “Attention is the new currency, man…the new new is providing utility, brothers and sisters…speaking on the phone is completely cliche.” The audience lapped it all up. At these rallies in praise of the startup, enthusiasm and energy matter much more than making sense.

Startup culture is driven by slinging around superlatives – every job is an “incredible opportunity”, every product is going to “change lives” and “disrupt” an established industry. No one wants to admit that most startups stay stuck right there at the start, pub singers pining for their chance in the spotlight. While the startups and hangers-on milling around in the halls bring in stacks of cash for the event organisers, it’s the already successful entrepreneurs on stage and the investors who actually benefit from these conferences. They meet up at exclusive dinners and in the speakers’ lounge where the real deals are made. It’s Studio 54 for geeks.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Startup, WA. Courtesy of Wikipedia.[end-div]

Computers in the Movies

Most of us now carry around inside our smartphones more computing power than NASA once had in the Apollo command module. So, it’s interesting to look back at old movies to see how celluloid fiction portrayed computers. Most films from the 1950s and 60s were replete with spinning tape drives and enough lights to resemble the Manhattan skyline. Our favorite here at theDiagonal is the first “Bat Computer” from the original 1960s TV series, which could be found churning away in Batman’s crime-fighting nerve center beneath Wayne Manor.

[div class=attrib]From Wired:[end-div]

The United States government powered up its SAGE defense system in July 1958, at an Air Force base near Trenton, New Jersey. Short for Semi-Automatic Ground Environment, SAGE would eventually span 24 command and control stations across the US and Canada, warning against potential air attacks via radar and an early IBM computer called the AN/FSQ-7.

“It automated air defense,” says Mike Loewen, who worked with SAGE while serving with the Air Force in the 1980s. “It used a versatile, programmable, digital computer to process all this incoming radar data from various sites around the region and display it in a format that made sense to people. It provided a computer display of the digitally processed radar information.”

Fronted by a wall of dials, switches, neon lights, and incandescent lamps — and often plugged into spinning tape drives stretching from floor to ceiling — the AN/FSQ-7 looked like one of those massive computing systems that turned up in Hollywood movies and prime time TV during the ’60s and the ’70s. This is mainly because it is one of those massive computing systems that turned up in Hollywood movies and TV during the ’60s and ’70s — over and over and over again. Think Lost In Space. Get Smart. Fantastic Voyage. In Like Flint. Or our personal favorite: The Towering Inferno.

That’s the AN/FSQ-7 in The Towering Inferno at the top of this page, operated by a man named OJ Simpson, trying to track a fire that’s threatening to bring down the world’s tallest building.

For decades, the AN/FSQ-7 — Q7 for short — helped define the image of a computer in the popular consciousness. Never mind that it was just a radar system originally backed by tens of thousands of vacuum tubes. For moviegoers everywhere, this was the sort of thing that automated myriad tasks not only in modern-day America but in the distant future.

It never made much sense. But sometimes, it made even less sense. In the ’60s and ’70s, some films didn’t see the future all that clearly. Woody Allen’s Sleeper is set in 2173, and it shows the AN/FSQ-7 helping 22nd-century Teamsters make repairs to robotic manservants. Other films just didn’t see the present all that clearly. Independence Day was made in 1996, and apparently, its producers were unaware that the Air Force decommissioned SAGE 13 years earlier.

Of course, the Q7 is only part of the tale. The history of movies and TV is littered with big, beefy, photogenic machines that make absolutely no sense whatsoever. Sometimes they’re real machines doing unreal tasks. And sometimes they’re unreal machines doing unreal tasks. But we love them all. Oh so very much.

Mike Loewen first noticed the Q7 in a mid-’60s prime time TV series called The Time Tunnel. Produced by the irrepressible Irwin Allen, Time Tunnel concerned a secret government project to build a time machine beneath a trap door in the Arizona desert. A Q7 powered this subterranean time machine, complete with all those dials, switches, neon lights, and incandescent lamps.

No, an AN/FSQ-7 couldn’t really power a time machine. But time machines don’t exist. So it all works out quite nicely.

At first, Loewen didn’t know it was a Q7. But then, after he wound up in front of a SAGE system while in the Air Force many years later, it all came together. “I realized that these computer banks running the Time Tunnel were large sections of panels from the SAGE computer,” Loewen says. “And that’s where I got interested.”

He noticed the Q7 in TV show after TV show, movie after movie — and he started documenting these SAGE star turns on his personal homepage. In each case, the Q7 was seen doing stuff it couldn’t possibly do, but there was no doubt this was the Q7 — or at least part of it.

Here’s that subterranean time machine that caught the eye of Mike Loewen in The Time Tunnel (1966). The cool thing about the Time Tunnel AN/FSQ-7 is that even when it traps two government scientists in an endless time warp, it always sends them to dates of extremely important historical significance. Otherwise, you’d have one boring TV show on your hands.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The Time Tunnel (1966). Courtesy of Wired.[end-div]

The Most Annoying Technology? The Winner Is…

We all have owned or have used or have come far too close to a technology that we absolutely abhor and wish numerous curses upon its inventors. Said gizmo may be the unfathomable VCR, the forever lost TV remote, the tinny sounding Sony Walkman replete with unraveling cassette tape, the Blackberry, or even Facebook.

Ours over here at theDiagonal is the voice recognition system used by 99 percent of so-called customer service organizations. You know how it goes, something like this: “please say ‘one’ for new accounts”, “please say ‘two’ if you are an existing customer”, please say ‘three’ for returns”, “please say ‘Kyrgyzstan’ to speak with a customer service representative”.

Wired recently listed its least favorite, most hated technologies. No surprises here — winners of this dubious award include the Bluetooth headset, the CD-ROM, and the Apple TV remote.

[div class=attrib]From Wired:[end-div]

Bluetooth Headsets

Look, here’s a good rule of thumb: Once you get out of the car, or leave your desk, take off the headset. Nobody wants to hear your end of the conversation. That’s not idle speculation, it’s science! Headsets just make it worse. At least when there’s a phone involved, there are visual cues that say “I’m on the phone.” I mean, other than hearing one end of a shouted conversation.

Leaf Blower

Is your home set on a large wooded lot with acreage to spare between you and your closest neighbor? Did a tornado power through your yard last night, leaving your property covered in limbs and leaves? No? Then get a rake, dude. Leaf blowers are so irritating, they have been outlawed in some towns. Others should follow suit.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Sun/Mercury News.[end-div]

The Tubes of the Internets

Google lets the world peek at the many tubes that form a critical part of its search engine infrastructure — functional and pretty too.

[div class=attrib]From the Independent:[end-div]

They are the cathedrals of the information age – with the colour scheme of an adventure playground.

For the first time, Google has allowed cameras into its high security data centres – the beating hearts of its global network that allow the web giant to process 3 billion internet searches every day.

Only a small band of Google employees have ever been inside the doors of the data centres, which are hidden away in remote parts of North America, Belgium and Finland.

Their workplaces glow with the blinking lights of LEDs on internet servers, reassuring technicians that all is well with the web, and hum to the sound of hundreds of giant fans and thousands of gallons of water that stop the whole thing from overheating.

“Very few people have stepped inside Google’s data centers [sic], and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard,” the company said yesterday. Row upon row of glowing servers send and receive information from 20 billion web pages every day, while towering libraries store all the data that Google has ever processed – in case of a system failure.

With data speeds 200,000 times faster than an ordinary home internet connection, Google’s centres in America can share huge amounts of information with European counterparts like the remote, snow-packed Hamina centre in Finland, in the blink of an eye.

[div class=attrib]Read the entire article after the jump, or take a look at more images from the bowels of Google after the leap.[end-div]

Mourning the Lost Art of Handwriting

In this age of digital everything, handwriting still matters. Some of you may even still have a treasured fountain pen. Novelist Philip Hensher suggests why handwriting has import and value in his new book, The Missing Ink.

[div class=attrib]From the Guardian:[end-div]

About six months ago, I realised that I had no idea what the handwriting of a good friend of mine looked like. I had known him for over a decade, but somehow we had never communicated using handwritten notes. He had left voice messages for me, emailed me, sent text messages galore. But I don’t think I had ever had a letter from him written by hand, a postcard from his holidays, a reminder of something pushed through my letter box. I had no idea whether his handwriting was bold or crabbed, sloping or upright, italic or rounded, elegant or slapdash.

It hit me that we are at a moment when handwriting seems to be about to vanish from our lives altogether. At some point in recent years, it has stopped being a necessary and inevitable intermediary between people – a means by which individuals communicate with each other, putting a little bit of their personality into the form of their message as they press the ink-bearing point on to the paper. It has started to become just one of many options, and often an unattractive, elaborate one.

For each of us, the act of putting marks on paper with ink goes back as far as we can probably remember. At some point, somebody comes along and tells us that if you make a rounded shape and then join it to a straight vertical line, that means the letter “a”, just like the ones you see in the book. (But the ones in the book have a little umbrella over the top, don’t they? Never mind that, for the moment: this is how we make them for ourselves.) If you make a different rounded shape, in the opposite direction, and a taller vertical line, then that means the letter “b”. Do you see? And then a rounded shape, in the same direction as the first letter, but not joined to anything – that makes a “c”. And off you go.

Actually, I don’t think I have any memory of this initial introduction to the art of writing letters on paper. Our handwriting, like ourselves, seems always to have been there.

But if I don’t have any memory of first learning to write, I have a clear memory of what followed: instructions in refinements, suggestions of how to purify the forms of your handwriting.

You longed to do “joined-up writing”, as we used to call the cursive hand when we were young. Instructed in print letters, I looked forward to the ability to join one letter to another as a mark of huge sophistication. Adult handwriting was unreadable, true, but perhaps that was its point. I saw the loops and impatient dashes of the adult hand as a secret and untrustworthy way of communicating that one day I would master.

There was, also, wanting to make your handwriting more like other people’s. Often, this started with a single letter or figure. In the second year at school, our form teacher had a way of writing a 7 in the European way, with a cross-bar. A world of glamour and sophistication hung on that cross-bar; it might as well have had a beret on, be smoking Gitanes in the maths cupboard.

Your hand is formed by aspiration to the hand of others – by the beautiful italic strokes of a friend which seem altogether wasted on a mere postcard, or a note on your door reading “Dropped by – will come back later”. It’s formed, too, by anti-aspiration, the desire not to be like Denise in the desk behind who reads with her mouth open and whose writing, all bulging “m”s and looping “p”s, contains the atrocity of a little circle on top of every i. Or still more horrible, on occasion, usually when she signs her name, a heart. (There may be men in the world who use a heart-shaped jot, as the dot over the i is called, but I have yet to meet one. Or run a mile from one.)

Those other writing apparatuses, mobile phones, occupy a little bit more of the same psychological space as the pen. Ten years ago, people kept their mobile phone in their pockets. Now, they hold them permanently in their hand like a small angry animal, gazing crossly into our faces, in apparent need of constant placation. Clearly, people do regard their mobile phones as, in some degree, an extension of themselves. And yet we have not evolved any of those small, pleasurable pieces of behaviour towards them that seem so ordinary in the case of our pens. If you saw someone sucking one while they thought of the next phrase to text, you would think them dangerously insane.

We have surrendered our handwriting for something more mechanical, less distinctively human, less telling about ourselves and less present in our moments of the highest happiness and the deepest emotion. Ink runs in our veins, and shows the world what we are like. The shaping of thought and written language by a pen, moved by a hand to register marks of ink on paper, has for centuries, millennia, been regarded as key to our existence as human beings. In the past, handwriting has been regarded as almost the most powerful sign of our individuality. In 1847, in an American case, a witness testified without hesitation that a signature was genuine, though he had not seen an example of the handwriting for 63 years: the court accepted his testimony.

Handwriting is what registers our individuality, and the mark which our culture has made on us. It has been seen as the unknowing key to our souls and our innermost nature. It has been regarded as a sign of our health as a society, of our intelligence, and as an object of simplicity, grace, fantasy and beauty in its own right. Yet at some point, the ordinary pleasures and dignity of handwriting are going to be replaced permanently.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Stipula fountain pen. Courtesy of Wikipedia.[end-div]

What’s All the Fuss About Big Data?

We excerpt an interview with big data pioneer and computer scientist, Alex Pentland, via the Edge. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.

While there is no exact definition of “big data”, it tends to be characterized, both quantitatively and qualitatively, as different from the data commonly used by most organizations. Where regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to vast collections of data that often lie beyond the realm of regular computation. So big data often requires vast, specialized storage and enormous processing capabilities. Data sets that fall into the big data category cover such areas as climate science, genomics, particle physics, and computational social science.

Big data holds true promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.

[div class=attrib]Alex Pentland via the Edge:[end-div]

Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.

I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like location data off of your cell phone or credit card, it’s the little data breadcrumbs that you leave behind you as you move around in the world.

What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.

As a consequence, analysis of Big Data is increasingly about finding connections, connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data. Not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. What these are is systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. Big data gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.

The notion that it is connections between people that is really important is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.

Because it is so important to understand these connections Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that is actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.

Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.

The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.

Changing The Way We Design Systems

With Big Data traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant!  As a consequence the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart.
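
A brief aside from theDiagonal: Pentland’s point about significance is easy to demonstrate with a toy calculation of our own (nothing below comes from his work). Plant a difference between two groups so small that nobody would care about it, less than one hundredth of a standard deviation, and the standard test statistic still sails past the usual 1.96 cutoff once the sample is large enough.

# Toy illustration of "everything is significant" at scale: a negligible
# difference between two groups clears the conventional z > 1.96 bar once
# the sample is large enough. All numbers are invented.
import math
import random

random.seed(0)

def z_statistic(a, b):
    """Two-sample z statistic for the difference in means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

for n in (1_000, 1_000_000):
    group_a = [random.gauss(0.000, 1.0) for _ in range(n)]
    group_b = [random.gauss(0.008, 1.0) for _ in range(n)]  # tiny real effect
    print(f"n = {n:>9,}   z = {z_statistic(group_b, group_a):5.2f}")
# At n = 1,000 the difference is usually lost in the noise; at n = 1,000,000
# the same negligible effect comfortably clears the significance bar.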

Big data and the notion of Connection Science is outside of our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building of systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much out of our old, familiar ballpark.

With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.

The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use big data analytics, what they mean, and what to believe.

In fact, the data scientists themselves don’t have much intuition either…and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results that are found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t understand that they were overfitting the data. They didn’t have that dialogue between intuition and causal processes that generated the data. They just fit the model and got a good number and published it, and the reviewers didn’t catch it either. That’s pretty bad because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
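
Another aside: the overfitting trap Pentland describes is easy to reproduce on synthetic data. The sketch below is a generic toy example, not taken from any of the studies he alludes to. As the model gets more flexible, its fit to the training points keeps improving while its predictions on held-out points eventually get worse, which is exactly the flattering number that should not have been published.

# Toy overfitting demo: higher-degree polynomials fit the training data
# ever more closely while predicting held-out data worse. Synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, size=x.size)  # noisy signal

train_x, test_x = x[::2], x[1::2]   # alternate points: half train, half test
train_y, test_y = y[::2], y[1::2]

for degree in (1, 3, 9, 13):
    coeffs = np.polyfit(train_x, train_y, degree)
    train_mse = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, test_x) - test_y) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}   test MSE {test_mse:.3f}")
# Training error only ever shrinks as the degree rises; test error bottoms
# out near the true complexity of the signal and then typically climbs,
# which is the signature of overfitting.

A held-out test set is the cheapest guard against publishing that flattering training-set number.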

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Techcrunch.[end-div]

Scientifiction

Science fiction stories and illustrations from our past provide a wonderful opportunity for us to test the predictive and prescient capabilities of their creators. Some, like Arthur C. Clarke, we are often reminded, foresaw the communications satellite and the space elevator. Others, such as science fiction great Isaac Asimov, fared less well in predicting future technology; while he is considered to have coined the term “robotics”, he famously depicted future computers and robots as still using punched cards.

Illustrations of our future from the past are even more fascinating. One of the leading proponents of the science fiction illustration genre, or scientifiction, as it was titled in the mid-1920s, was Frank R. Paul. Paul illustrated many of the now classic U.S. pulp science fiction magazines beginning in the 1920s with vivid visuals of aliens, spaceships, destroyed worlds and bizarre technologies. One of his less apocalyptic, though perhaps prescient, works showed a web-footed alien smoking a cigarette through a lengthy proboscis.

Of Frank R. Paul, Ray Bradbury is quoted as saying, “Paul’s fantastic covers for Amazing Stories changed my life forever.”

See more of Paul’s classic illustrations after the jump.

[div class=attrib]Image courtesy of 50Watts / Frank R. Paul.[end-div]

How Apple, With the Help of Others, Invented the iPhone

Apple’s invention of the iPhone is a story of insight, collaboration, cannibalization and dogged persistence over the course of a decade.

[div class=attrib]From Slate:[end-div]

Like many of Apple’s inventions, the iPhone began not with a vision, but with a problem. By 2005, the iPod had eclipsed the Mac as Apple’s largest source of revenue, but the music player that rescued Apple from the brink now faced a looming threat: The cellphone. Everyone carried a phone, and if phone companies figured out a way to make playing music easy and fun, “that could render the iPod unnecessary,” Steve Jobs once warned Apple’s board, according to Walter Isaacson’s biography.

Fortunately for Apple, most phones on the market sucked. Jobs and other Apple executives would grouse about their phones all the time. The simplest phones didn’t do much other than make calls, and the more functions you added to phones, the more complicated they were to use. In particular, phones “weren’t any good as entertainment devices,” Phil Schiller, Apple’s longtime marketing chief, testified during the company’s patent trial with Samsung. Getting music and video on 2005-era phones was too difficult, and if you managed that, getting the device to actually play your stuff was a joyless trudge through numerous screens and menus.

That was because most phones were hobbled by a basic problem—they didn’t have a good method for input. Hard keys (like the ones on the BlackBerry) worked for typing, but they were terrible for navigation. In theory, phones with touchscreens could do a lot more, but in reality they were also a pain to use. Touchscreens of the era couldn’t detect finger presses—they needed a stylus, and the only way to use a stylus was with two hands (one to hold the phone and one to hold the stylus). Nobody wanted a music player that required two-handed operation.

This is the story of how Apple reinvented the phone. The general outlines of this tale have been told before, most thoroughly in Isaacson’s biography. But the Samsung case—which ended last month with a resounding victory for Apple—revealed a trove of details about the invention, the sort of details that Apple is ordinarily loath to make public. We got pictures of dozens of prototypes of the iPhone and iPad. We got internal email that explained how executives and designers solved key problems in the iPhone’s design. We got testimony from Apple’s top brass explaining why the iPhone was a gamble.

Put it all together and you get a remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple’s bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple’s inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too—a device that would eat the iPod’s lunch. Apple’s only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple’s brass saw the phone as an opportunity for real innovation. “We wanted to build a phone for ourselves,” Scott Forstall, who heads the team that built the phone’s operating system, said at the trial. “We wanted to build a phone that we loved.”

The problem was how to do it. When Jobs unveiled the iPhone in 2007, he showed off a picture of an iPod with a rotary-phone dialer instead of a click wheel. That was a joke, but it wasn’t far from Apple’s initial thoughts about phones. The click wheel—the brilliant interface that powered the iPod (which was invented for Apple by a firm called Synaptics)—was a simple, widely understood way to navigate through menus in order to play music. So why not use it to make calls, too?

In 2005, Tony Fadell, the engineer who’s credited with inventing the first iPod, got hold of a high-end desk phone made by Samsung and Bang & Olufsen that you navigated using a set of numerical keys placed around a rotating wheel. A Samsung cell phone, the X810, used a similar rotating wheel for input. Fadell didn’t seem to like the idea. “Weird way to hold the cellphone,” he wrote in an email to others at Apple. But Jobs thought it could work. “This may be our answer—we could put the number pad around our clickwheel,” he wrote. (Samsung pointed to this thread as evidence for its claim that Apple’s designs were inspired by other companies, including Samsung itself.)

Around the same time, Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there were no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.
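
A short aside from theDiagonal: neither effect is mysterious from a programming standpoint; the magic is in the tuning, not the math. The sketch below is our own generic illustration, not Apple’s code, and the constants are made up. A flick leaves the list with a velocity that decays a little every frame, and any travel past the end of the content is pulled back toward the edge like a stretched rubber band.

# Generic sketch of inertial scrolling with a rubber-band edge. The decay
# and bounce constants are invented; this illustrates the idea only.
def simulate_scroll(position, velocity, content_height, frames=180,
                    friction=0.95, bounce_back=0.85):
    """Yield the scroll position for each animation frame after a flick."""
    for _ in range(frames):
        position += velocity
        velocity *= friction                      # inertia: the flick dies out
        if position < 0:                          # overshot the top edge
            position *= bounce_back               # rubber band pulls back toward 0
            velocity *= bounce_back               # and bleeds off speed quickly
        elif position > content_height:           # overshot the bottom edge
            excess = position - content_height
            position = content_height + excess * bounce_back
            velocity *= bounce_back
        yield position

# A hard downward flick near the end of a 1,000-pixel list overshoots the
# bottom edge, then settles back onto the boundary.
trace = list(simulate_scroll(position=950.0, velocity=40.0, content_height=1000.0))
print(round(max(trace)), round(trace[-1]))   # peak overshoot, then back to ~1000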

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Retro design iPhone courtesy of Ubergizmo.[end-div]

Happy Birthday :-)

Thirty years ago today Professor Scott Fahlman of Carnegie Mellon University sent what is believed to be the first emoticon embedded in an email. The symbol, :-), which he proposed as a joke marker, spread rapidly, morphed and evolved into a universe of symbolic nods, winks, and cyber-emotions.

For a lengthy list of popular emoticons, including some very interesting Eastern ones, jump here.

[div class=attrib]From the Independent:[end-div]

To some, an email isn’t complete without the inclusion of :-) or :-(. To others, the very idea of using “emoticons” – communicative graphics – makes the blood boil and represents all that has gone wrong with the English language.

Regardless of your view, as emoticons celebrate their 30th anniversary this month, it is accepted that they are here to stay. Their birth can be traced to the precise minute: 11:44am on 19 September 1982. At that moment, Professor Scott Fahlman, of Carnegie Mellon University in Pittsburgh, sent an email on an online electronic bulletin board that included the first use of the sideways smiley face: “I propose the following character sequence for joke markers: :-) Read it sideways.” More than anyone, he must take the credit – or the blame.

The aim was simple: to allow those who posted on the university’s bulletin board to distinguish between those attempting to write humorous emails and those who weren’t. Professor Fahlman had seen how simple jokes were often misunderstood and attempted to find a way around the problem.

This weekend, the professor, a computer science researcher who still works at the university, says he is amazed his smiley face took off: “This was a little bit of silliness that I tossed into a discussion about physics,” he says. “It was ten minutes of my life. I expected my note might amuse a few of my friends, and that would be the end of it.”

But once his initial email had been sent, it wasn’t long before it spread to other universities and research labs via the primitive computer networks of the day. Within months, it had gone global.

Nowadays dozens of variations are available, mainly as little yellow, computer graphics. There are emoticons that wear sunglasses; some cry, while others don Santa hats. But Professor Fahlman isn’t a fan.

“I think they are ugly, and they ruin the challenge of trying to come up with a clever way to express emotions using standard keyboard characters. But perhaps that’s just because I invented the other kind.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Mobile Phone as Survival Gear

So, here’s the premise. You have hiked alone for days and now find yourself isolated and lost in a dense forest half-way up a mountain. Yes! You have a cell phone. But, oh no, there is no service in this remote part of the world. So, no call for help and no GPS. And, it gets worse: you have no emergency supplies and no food. What can you do? This neat infographic offers some tips.

[div class=attrib]Infographic courtesy of Natalie Bracco / AnsonAlex.com.[end-div]

The Emperor Has Transparent Clothes

Hot from the TechnoSensual Exposition in Vienna, Austria, come clothes that can be made transparent or opaque, and clothes that can detect a wearer telling a lie. While the value of the former may seem dubious outside of the home, the latter invention should be a mandatory garment for all politicians and bankers. Or, for the less adventurous millinery fashionistas, how about a hat that reacts to ambient radio waves?

All these innovations find their way from the realms of a Philip K. Dick science fiction novel, courtesy of the confluence of new technologies and innovative textile design.

[div class=attrib]From New Scientist:[end-div]

WHAT if the world could see your innermost emotions? For the wearer of the Bubelle dress created by Philips Design, it’s not simply a thought experiment.

Aptly nicknamed “the blushing dress”, the futuristic garment has an inner layer fitted with sensors that measure heart rate, respiration and galvanic skin response. The measurements are fed to 18 miniature projectors that shine corresponding colours, shapes, and intensities onto an outer layer of fabric – turning the dress into something like a giant, high-tech mood ring. As a natural blusher, I feel like I already know what it would be like to wear this dress – like going emotionally, instead of physically, naked.

The Bubelle dress is just one of the technologically enhanced items of clothing on show at the Technosensual exhibition in Vienna, Austria, which celebrates the overlapping worlds of technology, fashion and design.

Other garments are even more revealing. Holy Dress, created by Melissa Coleman and Leonie Smelt, is a wearable lie detector – that also metes out punishment. Using voice-stress analysis, the garment is designed to catch the wearer out in a lie, whereupon it twinkles conspicuously and gives her a small shock. Though the garment is beautiful, a slim white dress under a geometric structure of copper tubes, I’d rather try it on a politician than myself. “You can become a martyr for truth,” says Coleman. To make it, she hacked a 1990s lie detector and added a novelty shocking pen.

Laying the wearer bare in a less metaphorical way, a dress that alternates between opaque and transparent is also on show. Designed by the exhibition’s curator, Anouk Wipprecht with interactive design laboratory Studio Roosegaarde, Intimacy 2.0 was made using conductive liquid crystal foil. When a very low electrical current is applied to the foil, the liquid crystals stand to attention in parallel, making the material transparent. Wipprecht expects the next iteration could be available commercially. It’s time to take the dresses “out of the museum and get them on the streets”, she says.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Taiknam Hat, a hat sensitive to ambient radio waves. Courtesy of Ricardo O’Nascimento, Ebru Kurbak, Fabiana Shizue / New Scientist.[end-div]

Beware, Big Telecom is Watching You

Facebook trawls your profile, status and friends to target ads more effectively. It also allows third parties, for a fee, to mine mountains of aggregated data for juicy analyses. Many online companies do the same. However, some companies are taking this to a whole new and very personal level.

Here’s an example from Germany. Politician Malte Spitz gathered 6 months of his personal geolocation data from his mobile phone company. Then, he combined this data with his activity online, such as Twitter updates, blog entries and website visits. The interactive results seen here, plotted over time and space, show the detailed extent to which an individual’s life is being tracked and recorded.

[div class=attrib]From Zeit Online:[end-div]

By pushing the play button, you will set off on a trip through Malte Spitz’s life. The speed controller allows you to adjust how fast you travel, the pause button will let you stop at interesting points. In addition, a calendar at the bottom shows when he was in a particular location and can be used to jump to a specific time period. Each column corresponds to one day.

Not surprisingly, Spitz had to sue his phone company, Deutsche Telekom, to gain access to his own phone data.

[div class=attrib]From TED:[end-div]

On August 31, 2009, politician Malte Spitz traveled from Berlin to Erlangen, sending 29 text messages as he traveled. On November 5, 2009, he rocked out to U2 at the Brandenburg Gate. On January 10, 2010, he made 10 outgoing phone calls while on a trip to Dusseldorf, and spent 22 hours, 53 minutes and 57 seconds of the day connected to the internet.

How do we know all this? By looking at a detailed, interactive timeline of Spitz’s life, created using information obtained from his cell phone company, Deutsche Telekom, between September 2009 and February 2010.

In an impassioned talk given at TEDGlobal 2012, Spitz, a member of Germany’s Green Party, recalls his multiple-year quest to receive this data from his phone company. And he explains why he decided to make this shockingly precise log into public information in the newspaper Die Zeit – to sound a warning bell of sorts.

“If you have access to this information, you can see what your society is doing,” says Spitz. “If you have access to this information, you can control your country.”

[div class=attrib]Read the entire article after the jump.[end-div]

How Do Startup Companies Succeed?

A view from Esther Dyson, one of the world’s leading digital technology entrepreneurs. She has served as an early investor in numerous startups, including Flickr, del.icio.us, ZEDO, and Medspace, and is currently focused on startups in medical technology and aviation.

[div class=attrib]From Project Syndicate:[end-div]

The most popular stories often seem to end at the beginning. “…and so Juan and Alice got married.” Did they actually live happily ever after? “He was elected President.” But how did the country do under his rule? “The entrepreneur got her startup funding.” But did the company succeed?

Let’s consider that last one. Specifically, what happens to entrepreneurs once they get their money? Everywhere I go – and I have been in Moscow, Libreville (Gabon), and Dublin in the last few weeks – smart people ask how to get companies through the next phase of growth. How can we scale entrepreneurship to the point that it has a measurable and meaningful impact on the economy?

The real impact of both Microsoft and Google is not on their shareholders, or even on the people that they employ directly, but on the millions of people whom they have made more productive. That argues for companies that solve real problems, rather than for yet another photo-sharing app for rich, appealing (to advertisers) people with time on their hands.

It turns out that money is rarely enough – not just that there is not enough of it, but that entrepreneurs need something else. They need advice, contacts, customers, and employees immersed in a culture of effectiveness to succeed. But they also have to create something of real value to have meaningful economic impact in the long term.

The easy, increasingly popular answer is accelerators, incubators, camps, weekends – a host of locations and events to foster the development of startups. But these are just buildings and conferences unless they include people who can help with the software – contacts, customers, and culture. The people in charge, from NGOs to government officials, have great ideas about structures – tax policy, official financing, etc. – while the entrepreneurs themselves are too busy running their companies to find out about these things.

But this week in Dublin, I found what we need: not policies or theories, but actual living examples. Not far from the fancy hotel at which I was staying, and across from Google’s modish Irish offices, sits a squat old warehouse with a new sign: Startupbootcamp. You enter through a side door, into a cavern full of sawdust and cheap furniture (plus a pool table and a bar, of course).

What makes this place interesting is its sponsor: venerable old IBM. The mission of Startupbootcamp Europe is not to celebrate entrepreneurs, or even to educate them, but to help them scale up to meaningful businesses. Their new products can use IBM’s and other mentors’ contacts with the much broader world, whether for strategic marketing alliances, the power of an IBM endorsement, or, ultimately, an acquisition.

I was invited by Martin Kelly, who represents IBM’s venture arm in Ireland. He introduced me to the manager of the place, Eoghan Jennings, and a bunch of seasoned executives.

There was a three-time entrepreneur, Conor Hanley, co-founder of BiancaMed (recently sold to Resmed), who now has a sleep-monitoring tool and an exciting distribution deal with a large company he can’t yet mention; Jim Joyce, a former sales executive for Schering Plough who is now running Point of Care, which helps clinicians to help patients to manage their own care after they leave hospital; and Johnny Walker, a radiologist whose company operates scanners in the field and interprets them through a network of radiologists worldwide. Currently, Walker’s company, Global Diagnostics, is focused on pre-natal care, but give him time.

These guys are not the “startups”; they are the mentors, carefully solicited by Kelly from within the tightly knit Irish business community. He knew exactly what he was looking for: “In Ireland, we have people from lots of large companies. Joyce, for example, can put a startup in touch with senior management from virtually any pharma company around the world. Hanley knows manufacturing and tech partners. Walker understands how to operate in rural conditions.”

According to Jennings, a former chief financial officer of Xing, Europe’s leading social network, “We spent years trying to persuade people that they had a problem we could solve; now I am working with companies solving problems that people know they have.”  And that usually involves more than an Internet solution; it requires distribution channels, production facilities, market education, and the like. Startupbootcamp’s next batch of startups, not coincidentally, will be in the health-care sector.

Each of the mentors can help a startup to go global. Precisely because the Irish market is so small, it’s a good place to find people who know how to expand globally. In Ireland right now, as in so many countries, many large companies are laying off people with experience. Not all of them have the makings of an entrepreneur. But most of them have skills worth sharing, whether it’s how to run a sales meeting, oversee a development project, or manage a database of customers.

[div class=attrib]Read the entire article after the jump.[end-div]

Extending Moore’s Law Through Evolution

[div class=attrib]From Smithsonian:[end-div]

In 1965, Intel co-founder Gordon Moore made a prediction about computing that has held true to this day. Moore’s law, as it came to be known, forecasted that the number of transistors we’d be able to cram onto a circuit—and thereby, the effective processing speed of our computers—would double roughly every two years. Remarkably enough, this rule has been accurate for nearly 50 years, but most experts now predict that this growth will slow by the end of the decade.
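
A quick back-of-the-envelope from us at theDiagonal shows just how remarkable that half-century run has been. The starting figure below, roughly 2,300 transistors for a very early 1970s microprocessor, is approximate and used only for illustration.

# Rough Moore's-law arithmetic: a doubling every two years, starting from
# an approximate count of 2,300 transistors in 1971. Purely illustrative.
def transistors(start_count, start_year, year, doubling_period=2.0):
    return start_count * 2 ** ((year - start_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(2_300, 1971, year):,.0f}")
# 2,300 in 1971 becomes roughly 73,600 by 1981, 2.4 million by 1991,
# 75 million by 2001 and about 2.4 billion by 2011, which is close to the
# order of magnitude of real chips at each step.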

Someday, though, a radical new approach to creating silicon semiconductors might enable this rate to continue—and could even accelerate it. As detailed in a study published in this month’s Proceedings of the National Academy of Sciences, a team of researchers from the University of California at Santa Barbara and elsewhere have harnessed the process of evolution to produce enzymes that create novel semiconductor structures.

“It’s like natural selection, but here, it’s artificial selection,” Daniel Morse, professor emeritus at UCSB and a co-author of the study, said in an interview. After taking an enzyme found in marine sponges and mutating it into many various forms, “we’ve selected the one in a million mutant DNAs capable of making a semiconductor.”

In an earlier study, Morse and other members of the research team had discovered silicatein—a natural enzyme used by marine sponges to construct their silica skeletons. The mineral, as it happens, also serves as the building block of semiconductor computer chips. “We then asked the question—could we genetically engineer the structure of the enzyme to make it possible to produce other minerals and semiconductors not normally produced by living organisms?” Morse said.

To make this possible, the researchers isolated and made many copies of the part of the sponge’s DNA that codes for silicatein, then intentionally introduced millions of different mutations in the DNA. By chance, some of these would likely lead to mutant forms of silicatein that would produce different semiconductors, rather than silica—a process that mirrors natural selection, albeit on a much shorter time scale, and directed by human choice rather than survival of the fittest.
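
One more aside: strip away the biochemistry and this “artificial selection” is a loop any programmer would recognize: copy, mutate, score, keep the best, repeat. The toy sketch below is only an analogy; a string-matching score stands in for “produces the mineral we want”, and nothing here models real enzymes.

# Toy directed-evolution loop: mutate many copies of a "gene", keep the
# fittest, repeat. The fitness function is a stand-in; real experiments
# score mutants by the material the resulting enzyme actually produces.
import random

random.seed(1)
ALPHABET = "ACGT"
TARGET = "GATTACAGGCCATTGA"    # hypothetical sequence we want to evolve toward

def fitness(gene):
    return sum(a == b for a, b in zip(gene, TARGET))

def mutate(gene, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in gene)

gene = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
for generation in range(60):
    library = [mutate(gene) for _ in range(500)]   # the mutant library
    gene = max(library + [gene], key=fitness)      # artificial selection
print(gene, fitness(gene))
# A few dozen rounds of mutation and selection recover the target without
# ever enumerating the 4**16 possible sequences.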

[div class=attrib]Read the entire article after the jump.[end-div]

La Macchina: The Machine as Art, for Caffeine Addicts

You may not know their names, but Desiderio Pavoni and Luigi Bezzera are to coffee what Steve Jobs and Steve Wozniak are to computers. Modern espresso machines owe everything to the innovative design and business savvy of this early 20th-century Italian duo.

[div class=attrib]From Smithsonian:[end-div]

For many coffee drinkers, espresso is coffee. It is the purest distillation of the coffee bean, the literal essence of a bean. In another sense, it is also the first instant coffee. Before espresso, it could take up to five minutes –five minutes!– for a cup of coffee to brew. But what exactly is espresso and how did it come to dominate our morning routines? Although many people are familiar with espresso these days thanks to the Starbucksification of the world, there is often still some confusion over what it actually is – largely due to “espresso roasts” available on supermarket shelves everywhere. First, and most importantly, espresso is not a roasting method. It is neither a bean nor a blend. It is a method of preparation. More specifically, it is a preparation method in which highly-pressurized hot water is forced over coffee grounds to produce a very concentrated coffee drink with a deep, robust flavor. While there is no standardized process for pulling a shot of espresso, Italian coffeemaker Illy’s definition of the authentic espresso seems as good a measure as any:

A jet of hot water at 88°-93°C (190°-200°F) passes under a pressure of nine or more atmospheres through a seven-gram (.25 oz) cake-like layer of ground and tamped coffee. Done right, the result is a concentrate of not more than 30 ml (one oz) of pure sensorial pleasure.

For those of you who, like me, are more than a few years out of science class, nine atmospheres of pressure is equivalent to nine times the pressure normally exerted by the earth’s atmosphere. As you might be able to tell from the precision of Illy’s description, good espresso is good chemistry. It’s all about precision and consistency and finding the perfect balance between grind, temperature, and pressure. Espresso happens at the molecular level. This is why technology has been such an important part of the historical development of espresso and a key to the ongoing search for the perfect shot. While espresso was never designed per se, the machines – or macchina – that make our cappuccinos and lattes have a history that stretches back more than a century.
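For anyone who wants the arithmetic spelled out, here is a quick, approximate unit check on the Illy figures quoted above; the conversion constants are standard, and the dose-to-yield line simply divides the 30 ml shot by the 7 g dose.

```
# Approximate unit check on the quoted espresso parameters.
ATM_IN_KILOPASCALS = 101.325   # one standard atmosphere
ATM_IN_PSI = 14.696

brew_pressure_atm = 9
print(round(brew_pressure_atm * ATM_IN_KILOPASCALS))  # ~912 kPa
print(round(brew_pressure_atm * ATM_IN_PSI))          # ~132 psi

# 30 ml of espresso pulled from 7 g of ground coffee:
print(round(30 / 7, 1))  # a little over 4 ml of shot per gram of coffee
```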

In the 19th century, coffee was a huge business in Europe with cafes flourishing across the continent. But coffee brewing was a slow process and, as is still the case today, customers often had to wait for their brew. Seeing an opportunity, inventors across Europe began to explore ways of using steam machines to reduce brewing time – this was, after all, the age of steam. Though there were surely innumerable patents and prototypes, the invention of the machine and the method that would lead to espresso is usually attributed to Angelo Moriondo of Turin, Italy, who was granted a patent in 1884 for “new steam machinery for the economic and instantaneous confection of coffee beverage.” The machine consisted of a large boiler, heated to 1.5 bars of pressure, that pushed water through a large bed of coffee grounds on demand, with a second boiler producing steam that would flash the bed of coffee and complete the brew. Though Moriondo’s invention was the first coffee machine to use both water and steam, it was purely a bulk brewer created for the Turin General Exposition. Not much more is known about Moriondo, due in large part to what we might think of today as a branding failure. There were never any “Moriondo” machines, there are no verifiable machines still in existence, and there aren’t even photographs of his work. With the exception of his patent, Moriondo has been largely lost to history. The two men who would improve on Moriondo’s design to produce a single-serving espresso would not make that same mistake.

Luigi Bezzera and Desiderio Pavoni were the Steve Wozniak and Steve Jobs of espresso. Milanese manufacturer and “maker of liquors” Luigi Bezzera had the know-how. He invented single-shot espresso in the early years of the 20th century while looking for a method of quickly brewing coffee directly into the cup. He made several improvements to Moriondo’s machine, introducing the portafilter, multiple brewheads, and many other innovations still associated with espresso machines today. In Bezzera’s original patent, a large boiler with built-in burner chambers filled with water was heated until it pushed water and steam through a tamped puck of ground coffee. The mechanism through which the heated water passed also functioned as a heat radiator, lowering the temperature of the water from 250°F in the boiler to the ideal brewing temperature of approximately 195°F (90°C). Et voilà, espresso. For the first time, a cup of coffee was brewed to order in a matter of seconds. But Bezzera’s machine was heated over an open flame, which made it difficult to control pressure and temperature, and nearly impossible to produce a consistent shot. And consistency is key in the world of espresso. Bezzera designed and built a few prototypes of his machine but his beverage remained largely unappreciated because he didn’t have any money to expand his business or any idea how to market the machine. But he knew someone who did. Enter Desiderio Pavoni.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A 1910 Ideale espresso machine. Courtesy of Smithsonian.[end-div]

Keeping Secrets in the Age of Technology

[div class=attrib]From the Guardian:[end-div]

With the benefit of hindsight, life as I knew it came to an end in late 1994, round Seal’s house. We used to live round the corner from each other and if he was in between supermodels I’d pop over to watch a bit of Formula 1 on his pop star-sized flat-screen telly. I was probably on the sofa reading Vogue (we had that in common, albeit for different reasons) while he was “mucking about” on his computer (then the actual technical term for anything non-work-related, vis-à-vis computers), when he said something like: “Kate, have a look at this thing called the World Wide Web. It’s going to be massive!”

I can’t remember what we looked at then, at the tail-end of what I now nostalgically refer to as “The Tipp-Ex Years” – maybe The Well, accessed by Web Crawler – but whatever it was, it didn’t do it for me: “Information dual carriageway!” I said (trust me, this passed for witty in the 1990s). “Fancy a pizza?”

So there we are: Seal introduced me to the interweb. And although I remain a bit of a petrol-head and (nothing if not brand-loyal) own an iPad, an iPhone and two Macs, I am still basically rubbish at “modern”. Pre-Leveson, when I was writing a novel involving a phone-hacking scandal, my only concern was whether or not I’d come up with a plot that was: a) vaguely plausible and/or interesting, and b) technically possible. (A very nice man from Apple assured me that it was.)

I would gladly have used semaphore, telegrams or parchment scrolls delivered by magic owls to get the point across. Which is that ever since people started chiselling cuneiform on to big stones they’ve been writing things that will at some point almost certainly be misread and/or misinterpreted by someone else. But the speed of modern technology has made the problem rather more immediate. Confusing your public tweets with your Direct Messages and begging your young lover to take-me-now-cos-im-gagging-4-u? They didn’t have to worry about that when they were issuing decrees at Memphis on a nice bit of granodiorite.

These days the mis-sent (or indeed misread) text is still a relatively intimate intimation of an affair, while the notorious “reply all” email is the stuff of tired stand-up comedy. The boundary-less tweet is relatively new – and therefore still entertaining – territory, as evidenced most recently by American model Melissa Stetten, who, sitting on a plane next to a (married) soap actor called Brian Presley, tweeted as he appeared to hit on her.

Whenever and wherever words are written, somebody, somewhere will want to read them. And if those words are not meant to be read they very often will be – usually by the “wrong” people. A 2010 poll announced that six in 10 women would admit to regularly snooping on their partner’s phone, Twitter, or Facebook, although history doesn’t record whether the other four in 10 were then subjected to lie-detector tests.

Our compelling, self-sabotaging desire to snoop is usually informed by… well, if not paranoia, exactly, then insecurity, which in turn is more revealing about us than the words we find. If we seek out bad stuff – in a partner’s text, an ex’s Facebook status or best friend’s Twitter timeline – we will surely find it. And of course we don’t even have to make much effort to find the stuff we probably oughtn’t. Employers now routinely snoop on staff, and while this says more about the paranoid dynamic between boss classes and foot soldiers than we’d like, I have little sympathy for the employee who tweets their hangover status with one hand while phoning in “sick” with the other.

Take Google Maps: the more information we are given, the more we feel we’ve been gifted a licence to snoop. It’s the kind of thing we might be protesting about on the streets of Westminster were we not too busy invading our own privacy, as per the recent tweet-spat between Mr and Mrs Ben Goldsmith.

Technology feeds an increasing yet non-specific social unease – and that uneasiness inevitably trickles down to our more intimate relationships. For example, not long ago, I was blown out via text for a lunch date with a friend (“arrrgh, urgent deadline! SO SOZ!”), whose “urgent deadline” (their Twitter timeline helpfully revealed) turned out to involve lunch with someone else.

Did I like my friend any less when I found this out? Well yes, a tiny bit – until I acknowledged that I’ve done something similar 100 times but was “cleverer” at covering my tracks. Would it have been easier for my friend to tell me the truth? Arguably. Should I ever have looked at their Twitter timeline? Well, I had sought to confirm my suspicion that they weren’t telling the truth, so given that my paranoia gremlin was in charge it was no wonder I didn’t like what it found.

It is, of course, the paranoia gremlin that is in charge when we snoop – or are snooped upon – by partners, while “trust” is far more easily undermined than it has ever been. The randomly stumbled-across text (except they never are, are they?) is our generation’s lipstick-on-the-collar. And while Foursquare may say that your partner is in the pub, is that enough to stop you checking their Twitter/Facebook/emails/texts?

[div class=attrib]Read the entire article after the jump.[end-div]

The SpeechJammer and Other Innovations to Come

The mind boggles at the possible situations when a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children.

[div class=attrib]From the New York Times:[end-div]

When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.”
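The delayed-auditory-feedback effect itself is simple to sketch. The snippet below shifts a recorded buffer by a couple of hundred milliseconds, which is conceptually all the SpeechJammer’s playback stage does; it operates on an in-memory NumPy array rather than a live microphone stream, and the sample rate, tone, and delay are placeholder values, not details of Kurihara’s device.

```
import numpy as np

def delayed_playback(samples, sample_rate, delay_ms=200):
    """Return a copy of the audio delayed by delay_ms (the delay must be
    shorter than the buffer). A real speech jammer would do this continuously
    against a live microphone and directional speaker."""
    delay_samples = int(sample_rate * delay_ms / 1000)
    out = np.zeros_like(samples)
    out[delay_samples:] = samples[:len(samples) - delay_samples]
    return out

# Example: one second of a placeholder 220 Hz tone, played back ~200 ms late.
sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate
voice_stand_in = np.sin(2 * np.pi * 220 * t).astype(np.float32)
jammed = delayed_playback(voice_stand_in, sample_rate, delay_ms=200)
```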

[div class=attrib]Read the entire list of 32 weird and wonderful innovations after the jump.[end-div]

[div class=attrib]Graphic courtesy of Chris Nosenzo / New York Times.[end-div]

Killer Ideas

It’s possible that most households on the planet have one. It’s equally possible that most humans have used one — excepting members of PETA (People for the Ethical Treatment of Animals) and other tolerant souls.

United States Patent 640,790 covers a simple and effective technology, invented by Robert Montgomery. The patent for a “Fly Killer”, or fly swatter as it is now more commonly known, was issued in 1900.

Sometimes the simplest design is the most pervasive and effective.

[div class=attrib]From the New York Times:[end-div]

The first modern fly-destruction device was invented in 1900 by Robert R. Montgomery, an entrepreneur based in Decatur, Ill. Montgomery was issued Patent No. 640,790 for the Fly-Killer, a “cheap device of unusual elasticity and durability” made of wire netting, “preferably oblong,” attached to a handle. The material of the handle remained unspecified, but the netting was crucial: it reduced wind drag, giving the swatter a “whiplike swing.” By 1901, Montgomery’s invention was advertised in Ladies’ Home Journal as a tool that “kills without crushing” and “soils nothing,” unlike, say, a rolled-up newspaper.

Montgomery sold the patent rights in 1903 to an industrialist named John L. Bennett, who later invented the beer can. Bennett improved the design — stitching around the edge of the netting to keep it from fraying — but left the name.

The various fly-killing implements on the market at the time got the name “swatter” from Samuel Crumbine, secretary of the Kansas Board of Health. In 1905, he titled one of his fly bulletins, which warned of flyborne diseases, “Swat the Fly,” after a chant he heard at a ballgame. Crumbine took an invention known as the Fly Bat — a screen attached to a yardstick — and renamed it the Fly Swatter, which became the generic term we use today.

Fly-killing technology has advanced to include fly zappers (electrified tennis rackets that roast flies on contact) and fly guns (spinning discs that mulch insects). But there will always be less techy solutions: flypaper (sticky tape that traps the bugs), Fly Bottles (glass containers lined with an attractive liquid substance) and the Venus’ flytrap (a plant that eats insects).

During a 2009 CNBC interview, President Obama killed a fly with his bare hands, triumphantly exclaiming, “I got the sucker!” PETA was less gleeful, calling it a public “execution” and sending the White House a device that traps flies so that they may be set free.

But for the rest of us, as the product blogger Sean Byrne notes, “it’s hard to beat the good old-fashioned fly swatter.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Goodgrips.[end-div]

Your Tween Online

Many parents with children in the pre-teenage years probably have a containment policy that keeps them off adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.

[div class=attrib]From the WSJ:[end-div]

Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.

Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.

It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.

Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.

Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.

As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.

Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.

Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.

“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.

Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.

[div class=attrib]Read the entire article after the jump.[end-div]

First, There Was Bell Labs

The results of innovation surround us. Innovation nourishes our food supply and helps us heal when we are sick; innovation lubricates our businesses, underlies our products, and facilitates our interactions. Innovation stokes our forward momentum.

But before many of our recent technological marvels could come into being, some fundamental innovations were necessary. These were the technical precursors and catalysts that paved the way for the iPad and the smartphone, GPS and search engines and microwave ovens. The building blocks that made much of this possible included the transistor, the laser, the Unix operating system, and the communication satellite. And all of these came from one place, Bell Labs, during a short but highly productive period from the 1920s to the 1980s.

In his new book, “The Idea Factory”, Jon Gertner explores how and why so much innovation sprang from the visionary leaders, engineers and scientists of Bell Labs.

[div class=attrib]From the New York Times:[end-div]

In today’s world of Apple, Google and Facebook, the name may not ring any bells for most readers, but for decades — from the 1920s through the 1980s — Bell Labs, the research and development wing of AT&T, was the most innovative scientific organization in the world. As Jon Gertner argues in his riveting new book, “The Idea Factory,” it was where the future was invented.

Indeed, Bell Labs was behind many of the innovations that have come to define modern life, including the transistor (the building block of all digital products), the laser, the silicon solar cell and the computer operating system called Unix (which would serve as the basis for a host of other computer languages). Bell Labs developed the first communications satellites, the first cellular telephone systems and the first fiber-optic cable systems.

The Bell Labs scientist Claude Elwood Shannon effectively founded the field of information theory, which would revolutionize thinking about communications; other Bell Labs researchers helped push the boundaries of physics, chemistry and mathematics, while defining new industrial processes like quality control.

In “The Idea Factory,” Mr. Gertner — an editor at Fast Company magazine and a writer for The New York Times Magazine — not only gives us spirited portraits of the scientists behind Bell Labs’ phenomenal success, but he also looks at the reasons that research organization became such a fount of innovation, laying the groundwork for the networked world we now live in.

It’s clear from this volume that the visionary leadership of the researcher turned executive Mervin Kelly played a large role in Bell Labs’ sense of mission and its ability to institutionalize the process of innovation so effectively. Kelly believed that an “institute of creative technology” needed a critical mass of talented scientists — whom he housed in a single building, where physicists, chemists, mathematicians and engineers were encouraged to exchange ideas — and he gave his researchers the time to pursue their own investigations “sometimes without concrete goals, for years on end.”

That freedom, of course, was predicated on the steady stream of revenue provided (in the years before the AT&T monopoly was broken up in the early 1980s) by the monthly bills paid by telephone subscribers, which allowed Bell Labs to function “much like a national laboratory.” Unlike, say, many Silicon Valley companies today, which need to keep an eye on quarterly reports, Bell Labs in its heyday could patiently search out what Mr. Gertner calls “new and fundamental ideas,” while using its immense engineering staff to “develop and perfect those ideas” — creating new products, then making them cheaper, more efficient and more durable.

Given the evolution of the digital world we inhabit today, Kelly’s prescience is stunning in retrospect. “He had predicted grand vistas for the postwar electronics industry even before the transistor,” Mr. Gertner writes. “He had also insisted that basic scientific research could translate into astounding computer and military applications, as well as miracles within the communications systems — ‘a telephone system of the future,’ as he had said in 1951, ‘much more like the biological systems of man’s brain and nervous system.’ ”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Jack A. Morton (left) and J. R. Wilson at Bell Laboratories, circa 1948. Courtesy of Computer History Museum.[end-div]