Tag Archives: IT

Retailing: An Engineering Problem

Traditional retailers view retailing primarily as a problem of marketing, customer acquisition, and relationship management. For Amazon, it is more of an engineering and IT problem, with solutions to be found in innovation and optimization.

From Technology Review:

Why do some stores succeed while others fail? Retailers constantly struggle with this question, battling one another in ways that change with each generation. In the late 1800s, architects ruled. Successful merchants like Marshall Field created palaces of commerce that were so gorgeous shoppers rushed to come inside. In the early 1900s, mail order became the “killer app,” with Sears Roebuck leading the way. Toward the end of the 20th century, ultra-efficient suburban discounters like Target and Walmart conquered all.

Now the tussles are fiercest in online retailing, where it’s hard to tell if anyone is winning. Retailers as big as Walmart and as small as Tweezerman.com all maintain their own websites, catering to an explosion of customer demand. Retail e-commerce sales expanded 15 percent in the U.S. in 2012—seven times as fast as traditional retail. But price competition is relentless, and profit margins are thin to nonexistent. It’s easy to regard this $186 billion market as a poisoned prize: too big to ignore, too treacherous to pursue.

Even the most successful online retailer, Amazon.com, has a business model that leaves many people scratching their heads. Amazon is on track to ring up $75 billion in worldwide sales this year. Yet it often operates in the red; last quarter, Amazon posted a $41 million loss. Amazon’s founder and chief executive officer, Jeff Bezos, is indifferent to short-term earnings, having once quipped that when the company achieved profitability for a brief stretch in 1995, “it was probably a mistake.”

Look more closely at Bezos’s company, though, and its strategy becomes clear. Amazon is constantly plowing cash back into its business. Its secretive advanced-research division, Lab 126, works on next-generation Kindles and other mobile devices. More broadly, Amazon spends heavily to create the most advanced warehouses, the smoothest customer-service channels, and other features that help it grab an ever-larger share of the market. As former Amazon manager Eugene Wei wrote in a recent blog post, “Amazon’s core business model does generate a profit with most every transaction … The reason it isn’t showing a profit is because it’s undertaken a massive investment to support an even larger sales base.”

Much of that investment goes straight into technology. To Amazon, retailing looks like a giant engineering problem. Algorithms define everything from the best way to arrange a digital storefront to the optimal way of shipping a package. Other big retailers spend heavily on advertising and hire a few hundred engineers to keep systems running. Amazon prefers a puny ad budget and a payroll packed with thousands of engineering graduates from the likes of MIT, Carnegie Mellon, and Caltech.

Other big merchants are getting the message. Walmart, the world’s largest retailer, two years ago opened an R&D center in Silicon Valley where it develops its own search engines and looks for startups to buy. But competing on Amazon’s terms doesn’t stop with putting up a digital storefront or creating a mobile app. Walmart has gone as far as admitting that it may have to rethink what its stores are for. To equal Amazon’s flawless delivery, this year it even floated the idea of recruiting shoppers out of its aisles to play deliveryman, whisking goods to customers who’ve ordered online.

Amazon is a tech innovator by necessity, too. The company lacks three of conventional retailing’s most basic elements: a showroom where customers can touch the wares; on-the-spot salespeople who can woo shoppers; and the means for customers to take possession of their goods the instant a sale is complete. In one sense, everything that Amazon’s engineers create is meant to make these fundamental deficits vanish from sight.

Amazon’s cunning can be seen in the company’s growing patent portfolio. Since 1994, Amazon.com and a subsidiary, Amazon Technologies, have won 1,263 patents. (By contrast, Walmart has just 53.) Each Amazon invention is meant to make shopping on the site a little easier or a little more seductive, or to trim away costs. Consider U.S. Patent No. 8,261,983, on “generating customized packaging,” which came into being in late 2012.

“We constantly try to drive down the percentage of air that goes into a shipment,” explains Dave Clark, the Amazon vice president who oversees the company’s nearly 100 warehouses, known as fulfillment centers. The idea of shipping goods in a needlessly bulky box (and paying a few extra cents to United Parcel Service or other carriers) makes him shudder. Ship nearly a billion packages a year, and those pennies add up. Amazon over the years has created more than 40 sizes of boxes, but even that isn’t enough. That’s the glory of Amazon’s packaging patent: when a customer’s odd pairing of items creates a one-of-a-kind shipment, Amazon now has systems that will compute the best way to pack that order and create a perfect box for it within 30 minutes.
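The details of Amazon's packing systems aren't public, but the underlying idea is easy to caricature. Below is a minimal, hypothetical Python sketch: given item dimensions, it naively stacks the items and either picks the smallest standard box that fits or falls back to made-to-order dimensions. The box catalog, the stacking heuristic, and all numbers are invented; a production system would be solving a far harder three-dimensional packing problem.

```python
# Minimal sketch of "right-sizing" a shipping box. Not Amazon's algorithm:
# a naive heuristic that stacks items along one axis and compares the
# resulting custom box against a fixed catalog of standard box sizes.

from dataclasses import dataclass

@dataclass
class Item:
    length: float  # cm
    width: float   # cm
    height: float  # cm

# Hypothetical catalog of standard boxes (L, W, H) in cm.
STANDARD_BOXES = [(20, 15, 10), (30, 25, 15), (45, 35, 25), (60, 45, 40)]

def custom_box(items):
    """Naively stack items by height; the footprint is the largest seen."""
    length = max(i.length for i in items)
    width = max(i.width for i in items)
    height = sum(i.height for i in items)
    return (length, width, height)

def choose_box(items):
    """Return the smallest-volume standard box that fits the naive stack,
    or the custom dimensions if nothing in the catalog is big enough."""
    need = custom_box(items)
    fitting = [b for b in STANDARD_BOXES
               if all(b[k] >= need[k] for k in range(3))]
    if fitting:
        return min(fitting, key=lambda b: b[0] * b[1] * b[2])
    return need  # signal that a made-to-order box is required

order = [Item(18, 12, 4), Item(25, 20, 3), Item(10, 8, 8)]
print(choose_box(order))  # -> (30, 25, 15) for this invented order
```

Even this toy version makes the economics visible: every unnecessary centimeter of box is air that gets billed by the carrier, a billion times a year.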

For thousands of online merchants, it’s easier to live within Amazon’s ecosystem than to compete. So small retailers such as EasyLunchboxes.com have moved their inventory into Amazon’s warehouses, where they pay a commission on each sale for shipping and other services. That is becoming a highly lucrative business for Amazon, says Goldman Sachs analyst Heath Terry. He predicts Amazon will reap $3.5 billion in cash flow from third-party shipping in 2014, creating a very profitable side business that he values at $38 billion—about 20 percent of the company’s overall stock market value.

Jousting directly with Amazon is tougher. Researchers at Internet Retailer calculate that Amazon’s revenue exceeds that of its next 12 competitors combined. In a regulatory filing earlier this year, Target—the third-largest retailer in the U.S.—conceded that its “digital sales represented an immaterial amount of total sales.” For other online entrants, the most prudent strategies generally involve focusing on areas that the big guy hasn’t conquered yet, such as selling services, online “flash sales” that snare impulse buyers who can’t pass up a deal, or particularly challenging categories such as groceries. Yet many, if not most, of these upstarts are losing money.

Read the entire article here.

Image: Amazon fulfillment center, Scotland. Courtesy of Amazon / Wired.

Personalized Care Courtesy of Big Data

The era of truly personalized medicine and treatment plans may still be a fair way off, but, thanks to big data initiatives, predictive and preventive health is making significant progress. This bodes well for over-stretched healthcare systems, medical professionals, and those who need care and/or pay for it.

That said, it is useful to keep in mind that similar data in other domains, such as shopping, travel, and media, has been delivering personalized content and services for quite some time. Healthcare information technology certainly lags where it should be leading, and there may be no single explanation everyone can agree on. Still, it is encouraging to see the healthcare and medical information industries catching up.

From Technology Review:

On the ground floor of the Mount Sinai Medical Center’s new behemoth of a research and hospital building in Manhattan, rows of empty black metal racks sit waiting for computer processors and hard disk drives. They’ll house the center’s new computing cluster, adding to an existing $3 million supercomputer that hums in the basement of a nearby building.

The person leading the design of the new computer is Jeff Hammerbacher, a 30-year-old known for being Facebook’s first data scientist. Now Hammerbacher is applying the same data-crunching techniques used to target online advertisements, but this time for a powerful engine that will suck in medical information and spit out predictions that could cut the cost of health care.

With $3 trillion spent annually on health care in the U.S., it could easily be the biggest job for “big data” yet. “We’re going out on a limb—we’re saying this can deliver value to the hospital,” says Hammerbacher.

Mount Sinai has 1,406 beds plus a medical school and treats half a million patients per year. Increasingly, it’s run like an information business: it’s assembled a biobank with 26,735 patient DNA and plasma samples, it finished installing a $120 million electronic medical records system this year, and it has been spending heavily to recruit computing experts like Hammerbacher.

It’s all part of a “monstrously large bet that [data] is going to matter,” says Eric Schadt, the computational biologist who runs Mount Sinai’s Icahn Institute for Genomics and Multiscale Biology, where Hammerbacher is based, and who was himself recruited from the gene sequencing company Pacific Biosciences two years ago.

Mount Sinai hopes data will let it succeed in a health-care system that’s shifting dramatically. Perversely, because hospitals bill by the procedure, they tend to earn more the sicker their patients become. But health-care reform in Washington is pushing hospitals toward a new model, called “accountable care,” in which they will instead be paid to keep people healthy.

Mount Sinai is already part of an experiment that the federal agency overseeing Medicare has organized to test these economic ideas. Last year it joined 250 U.S. doctors’ practices, clinics, and other hospitals in agreeing to track patients more closely. If the medical organizations can cut costs with better results, they’ll share in the savings. If costs go up, they can face penalties.

The new economic incentives, says Schadt, help explain the hospital’s sudden hunger for data, and its heavy spending to hire 150 people during the last year just in the institute he runs. “It’s become ‘Hey, use all your resources and data to better assess the population you are treating,’” he says.

One way Mount Sinai is doing that already is with a computer model where factors like disease, past hospital visits, even race, are used to predict which patients stand the highest chance of returning to the hospital. That model, built using hospital claims data, tells caregivers which chronically ill people need to be showered with follow-up calls and extra help. In a pilot study, the program cut readmissions by half; now the risk score is being used throughout the hospital.
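Mount Sinai's actual model isn't public, but the general shape of such a risk score is familiar. Here is a minimal sketch, assuming a logistic regression over a handful of claims-derived features; the feature names, weights, and patient record are all invented for illustration.

```python
# Sketch of a readmission-risk score: logistic regression over a few
# claims-derived features. The features, weights, and data are illustrative
# only; the hospital's real model is not public.

from math import exp

def readmission_risk(features, weights, bias=-2.0):
    """Logistic regression: estimated probability of readmission within 30 days."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + exp(-z))

# Hypothetical weights a model might learn from historical claims data.
WEIGHTS = {
    "prior_admissions_12mo": 0.45,   # each prior admission raises risk
    "chronic_conditions": 0.30,      # count of chronic diagnoses
    "age_over_65": 0.60,             # binary flag
    "er_visits_6mo": 0.25,
}

patient = {"prior_admissions_12mo": 3, "chronic_conditions": 2,
           "age_over_65": 1, "er_visits_6mo": 1}

risk = readmission_risk(patient, WEIGHTS)
print(f"30-day readmission risk: {risk:.0%}")  # high scores trigger follow-up calls
```

In practice such a score would be fit to historical outcomes, validated on held-out patients, and monitored for bias before caregivers acted on it.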

Hammerbacher’s new computing facility is designed to supercharge the discovery of such insights. It will run a version of Hadoop, software that spreads data across many computers and is popular in industries, like e-commerce, that generate large amounts of quick-changing information.
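Hadoop's contribution is less the computation itself than running it across many machines, close to where the data lives. As a toy illustration of the map/reduce pattern it popularizes, the sketch below counts events per patient across two "shards" of records in a single process; the records and field names are invented.

```python
# Toy illustration of the map/reduce pattern Hadoop implements at scale:
# count events (e.g., lab tests) per patient across shards of records.
# This runs in one process; Hadoop's value is doing the same across many
# machines and much larger, faster-changing data.

from collections import defaultdict
from itertools import chain

shard_1 = [("patient_17", "hba1c"), ("patient_17", "glucose"), ("patient_42", "glucose")]
shard_2 = [("patient_42", "hba1c"), ("patient_17", "glucose")]

def map_phase(records):
    # Emit (key, 1) for every record; in Hadoop this runs near the data.
    for patient_id, _test in records:
        yield patient_id, 1

def reduce_phase(pairs):
    # Sum the counts for each key, regardless of which shard produced them.
    totals = defaultdict(int)
    for patient_id, count in pairs:
        totals[patient_id] += count
    return dict(totals)

mapped = chain(map_phase(shard_1), map_phase(shard_2))
print(reduce_phase(mapped))  # {'patient_17': 3, 'patient_42': 2}
```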

Patient data are slim by comparison, and not very dynamic. Records get added to infrequently—not at all if a patient visits another hospital. That’s a limitation, Hammerbacher says. Yet he hopes big-data technology will be used to search for connections between, say, hospital infections and the DNA of microbes present in an ICU, or to track data streaming in from patients who use at-home monitors.

One person he’ll be working with is Joel Dudley, director of biomedical informatics at Mount Sinai’s medical school. Dudley has been running information gathered on diabetes patients (like blood sugar levels, height, weight, and age) through an algorithm that clusters them into a weblike network of nodes. In “hot spots” where diabetic patients appear similar, he’s then trying to find out if they share genetic attributes. That way DNA information might add to predictions about patients, too.
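Dudley's published method may differ substantially, but the basic idea, clustering patients by similarity and treating tight clusters as "hot spots," can be sketched simply. The example below is hypothetical: it scales a few invented measurements, links patients whose scaled distance falls under a threshold, and reports the connected groups.

```python
# Rough sketch of the clustering idea: connect diabetes patients whose
# measurements are similar and treat connected groups as "hot spots."
# Features, scales, and the threshold are invented for illustration.

from itertools import combinations
from math import sqrt

# Hypothetical records: (blood sugar mg/dL, BMI, age)
patients = {
    "A": (160, 31, 54), "B": (158, 30, 57), "C": (240, 36, 44),
    "D": (238, 35, 46), "E": (120, 24, 65),
}

def distance(p, q):
    # Scale each feature to roughly comparable units before measuring distance.
    scales = (50.0, 5.0, 10.0)
    return sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(p, q, scales)))

# Build an undirected similarity graph: an edge if two patients are "close enough."
THRESHOLD = 1.0
edges = {(a, b) for a, b in combinations(patients, 2)
         if distance(patients[a], patients[b]) < THRESHOLD}

def hot_spots(nodes, edges):
    """Connected components of the similarity graph."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    groups, unseen = [], set(nodes)
    while unseen:
        stack, group = [unseen.pop()], set()
        while stack:
            n = stack.pop()
            group.add(n)
            for other in adjacency[n] & unseen:
                unseen.remove(other)
                stack.append(other)
        groups.append(group)
    return groups

print(hot_spots(patients, edges))  # e.g., [{'A', 'B'}, {'C', 'D'}, {'E'}]
```

Once such groups exist, the interesting question is the one Dudley poses: do the patients inside a hot spot also share genetic attributes?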

A goal of this work, which is still unpublished, is to replace the general guidelines doctors often use in deciding how to treat diabetics. Instead, new risk models—powered by genomics, lab tests, billing records, and demographics—could make up-to-date predictions about the individual patient a doctor is seeing, not unlike how a Web ad is tailored according to who you are and sites you’ve visited recently.

That is where the big data comes in. In the future, every patient will be represented by what Dudley calls a “large dossier of data.” And before they are treated, or even diagnosed, the goal will be to “compare that to every patient that’s ever walked in the door at Mount Sinai,” he says. “[Then] you can say quantitatively what’s the risk for this person based on all the other patients we’ve seen.”
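Taken literally, "compare that to every patient that's ever walked in the door" resembles nearest-neighbor estimation. A minimal, made-up sketch: rank past dossiers by similarity to the new patient and average the outcomes of the closest few. Real systems would use far richer features, distance measures, and validation than this.

```python
# Sketch of the "compare to every past patient" idea as nearest-neighbor
# risk estimation. Features, outcomes, and k are all hypothetical.

from math import sqrt

# Historical "dossiers": (blood sugar, BMI, age) plus whether a complication occurred.
history = [
    ((150, 29, 60), 1), ((110, 22, 45), 0), ((200, 33, 58), 1),
    ((130, 26, 50), 0), ((180, 31, 62), 1), ((115, 23, 39), 0),
]

def estimate_risk(new_patient, history, k=3):
    """Average the outcomes of the k most similar past patients."""
    ranked = sorted(history,
                    key=lambda rec: sqrt(sum((a - b) ** 2
                                             for a, b in zip(new_patient, rec[0]))))
    nearest = ranked[:k]
    return sum(outcome for _, outcome in nearest) / k

print(estimate_risk((170, 30, 59), history))  # -> 1.0: closest peers all had complications
```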

Read the entire article here.

A Post-PC, Post-Laptop World

Not too long ago the founders and shapers of much of our IT world were dreaming up new information technologies, tools, and processes that we didn’t know we needed. These tinkerers became the establishment luminaries that we still love or hate — Microsoft, Dell, HP, Apple, Motorola and IBM. And, of course, they are still around.

But the world that they constructed is imploding, and nobody really knows where it is heading. Will the leaders of the next IT revolution come from the likes of Google or Facebook? Or, as is more likely, is this just a prelude to a more radical shift, with seeds being sown in anonymous garages and labs across the U.S. and other tech hubs? Regardless, we are in for some unpredictable and exciting times.

From ars technica:

Change happens in IT whether you want it to or not. But even with all the talk of the “post-PC” era and the rise of the horrifically named “bring your own device” hype, change has happened in a patchwork. Despite the disruptive technologies documented on Ars and elsewhere, the fundamentals of enterprise IT have evolved slowly over the past decade.

But this, naturally, is about to change. The model that we’ve built IT on for the past 10 years is in the midst of collapsing on itself, and the companies that sold us the twigs and straw it was built with—Microsoft, Dell, and Hewlett-Packard to name a few—are facing the same sort of inflection points in their corporate life cycles that have ripped past IT giants to shreds. These corporate giants are faced with moments of truth despite making big bets on acquisitions to try to position themselves for what they saw as the future.

Predicting the future is hard, especially when you have an installed base to consider. But it’s not hard to identify the economic, technological, and cultural forces that are converging right now to shape the future of enterprise IT in the short term. We’re not entering a “post-PC” era in IT—we’re entering an era where the device we use to access applications and information is almost irrelevant. Nearly everything we do as employees or customers will be instrumented, analyzed, and aggregated.

“We’re not on a 10-year reinvention path anymore for enterprise IT,” said David Nichols, Americas IT Transformation Leader at Ernst & Young. “It’s more like [a] five-year or four-year path. And it’s getting faster. It’s going to happen at a pace we haven’t seen before.”

While the impact may be revolutionary, the cause is more evolutionary. A host of technologies that have been the “next big thing” for much of the last decade—smart mobile devices, the “Internet of Things,” deep analytics, social networking, and cloud computing—have finally reached a tipping point. The demand for mobile applications has turned what were once called “Web services” into a new class of managed application programming interfaces. These are changing not just how users interact with data, but the way enterprises collect and share data, write applications, and secure them.

Add the technologies pushed forward by government and defense in the last decade (such as facial recognition) and an abundance of cheap sensors, and you have the perfect “big data” storm. This sea of structured and unstructured data could change the nature of the enterprise or drown IT departments in the process. It will create social challenges as employees and customers start to understand the level to which they are being tracked by enterprises. And it will give companies more ammunition to continue to squeeze more productivity out of a shrinking workforce, as jobs once done by people are turned over to software robots.

There has been a lot of talk about how smartphones and tablets have supplanted the PC. In many ways, that talk is true. Yet we’re still largely using smartphones and tablets as if they were PCs.

But aside from mobile Web browsing and the use of tablets as a replacement for notebook PCs in presentations, most enterprises still use mobile devices the same way they used the BlackBerry in 1999—for e-mail. Mobile apps are the new webpage: everybody knows they need one to engage customers, but few are really sure what to do with them beyond what customers use their websites for. And while companies are trying to engage customers using social media on mobile, they’re largely not using the communications tools available on smart mobile devices to engage their own employees.

“I think right now, mobile adoption has been greatly overstated in terms of what people say they do with mobile versus mobile’s potential,” said Nichols. “Every CIO out there says, ‘Oh, we have mobile-enabled our workforce using tablets and smartphones.’ They’ve done mobile enablement but not mobile integration. Mobility at this point has not fundamentally changed the way the majority of the workforce works, at least not in the last five to six years.”

Smartphones make very poor PCs. But they have something no desktop PC has—a set of sensors that can provide a constant flow of data about where their user is. There’s visual information pulled in through a camera, motion and acceleration data, and even proximity. When combined with backend analytics, they can create opportunities to change how people work, collaborate, and interact with their environment.

Machine-to-machine (M2M) communications is a big part of that shift, according to Nichols. “Allowing devices with sensors to interact in a meaningful way is the next step,” he said. That step spans from the shop floor to the data center to the boardroom, as the devices we carry track our movements and our activities and interact with the systems around us.

Retailers are beginning to catch on to that, using mobile devices’ sensors to help close sales. “Everybody gets the concept that a mobile app is a necessity for a business-to-consumer retailer,” said Brian Kirschner, the director of Apigee Institute, a research organization created by the application infrastructure vendor Apigee in collaboration with executives of large enterprises and academic researchers. “But they don’t always get the transformative force on business that apps can have. Some can be small. For example, Home Depot has an app to help you search the store you’re in for what you’re looking for. We know that failure to find something in the store is a cause of lost sales and that Web search is useful and signs over aisles are ineffective. So the mobile app has a real impact on sales.”

But if you’ve already got stock information, location data for a customer, and e-commerce capabilities, why stop at making the app useful only during business hours? “If you think of the full potential of a mobile app, why can’t you buy something at the store when it’s closed if you’re near the store?” Kirschner said. “Instead of dropping you to a traditional Web process and offering you free shipping, they could have you pick it up at the store where you are tomorrow.”

That’s a change that’s being forced on many retailers, as noted in an article from the most recent MIT Sloan Management Review by a trio of experts: Erik Brynjolfsson, a professor at MIT’s Sloan School of Management and the director of the MIT Center for Digital Business; Yu Jeffrey Hu of the Georgia Institute of Technology; and Mohammed Rahman of the University of Calgary. If retailers don’t offer mobile-equipped customers a way to buy what they came for, those customers will buy it online elsewhere, often while standing in the store. Offering customers a way to extend their experience beyond the store’s walls is the kind of mobile use that’s going to create competitive advantage from information technology. And it’s the sort of competitive advantage that has long been milked out of the old IT model.

Nichols sees the same sort of technology transforming not just relationships with customers but the workplace itself. Say, for example, you’re in New York, and you want to discuss something with two colleagues. You request an appointment using your mobile device, and based on your location data, the location data of your colleagues, and the timing of the meeting, backend systems automatically book you a conference room and set up a video link to a co-worker out of town.

Based on analytics and the title of the meeting, relevant documents are dropped into a collaboration space. Your device records the meeting to an archive and notes who has attended in person. And this conversation is automatically transcribed, tagged, and forwarded to team members for review.
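None of this requires exotic technology; the scheduling step alone can be caricatured in a few lines. The sketch below is purely illustrative, with invented rooms and coordinates: it splits attendees into on-site and remote groups based on location, then picks the conference room nearest the on-site group.

```python
# Toy sketch of the meeting-workflow idea: pick the conference room closest
# to the attendees who are on site and flag remote ones for a video link.
# Rooms, coordinates, and the workflow itself are invented for illustration.

ROOMS = {"NYC-4A": (40.75, -73.99), "NYC-7C": (40.76, -73.98)}

def plan_meeting(attendees, rooms=ROOMS, office=(40.75, -73.99), radius=0.05):
    """attendees: {name: (lat, lon)}. Returns (room, on_site, remote)."""
    on_site = {n for n, (lat, lon) in attendees.items()
               if abs(lat - office[0]) < radius and abs(lon - office[1]) < radius}
    remote = set(attendees) - on_site
    if on_site:
        # Choose the room nearest the centroid of on-site attendees.
        cx = sum(attendees[n][0] for n in on_site) / len(on_site)
        cy = sum(attendees[n][1] for n in on_site) / len(on_site)
        room = min(rooms, key=lambda r: (rooms[r][0] - cx) ** 2 + (rooms[r][1] - cy) ** 2)
    else:
        room = None  # everyone is remote: video-only meeting
    return room, on_site, remote

print(plan_meeting({"you": (40.75, -73.99), "alice": (40.76, -73.98),
                    "bob": (51.50, -0.12)}))
```

The hard part, as Nichols implies, isn't the logic; it's wiring calendars, location data, document stores, and transcription into one workflow that runs in the background.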

“Having location data to reserve conference rooms and calls and having all other logistics be handled in background changes the size of the organization I need to support that,” Nichols said.

The same applies to manufacturing, logistics, and other areas where applications can be tied into sensors and computing power. “If I have a factory where a machine has a belt that needs to be reordered every five years and it auto re-orders and it gets shipped without the need for human interaction, that changes the whole dynamics of how you operate,” Nichols said. “If you can take that and plug it into a proper workflow, you’re going to see an entirely new sort of workforce. That’s not that far away.”

Wearable devices like Google’s Glass will also feed into the new workplace. Wearable tech has been in use in some industries for decades, and in some cases it’s just an evolution from communication systems already used in many retail and manufacturing environments. But the ability to add augmented reality—a data overlay on top of a real world location—and to collect information without reaching for a device will quickly get traction in many enterprises.

Read the entire article here.

Image: Commodore PET (Personal Electronic Transactor) 2001 Series, circa 1977. Courtesy of Wikipedia.

Mind Over Mass Media

From the New York Times:

NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.

So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.

But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.

For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.

Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your Blackberry at dinner time, ask your spouse to call you to bed at a designated hour.

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”

More from theSource here.