Tag Archives: ethics

Lab-Grown Beef and Zombie Cows


Writer and student of philosophy Rhys Southan provides some food for thought in his essay over at Aeon on the ethics of eating meat. The question is simple enough: would our world be better if humans ate only lab-grown meat or meat from humanely raised farm animals?

The answer may not be as simple or as black and white as you first thought. For instance, were we to move to 100 percent lab-grown beef, there would likely be little, if any, need for real cattle. We’d thus be depriving an entire species of the chance to live and to experience some degree of sentience and happiness. Or, if we were to retain some cows, but only in the wild, wouldn’t that be tantamount to torture for an animal domesticated over millennia? This might actually be worse than allowing cows to graze on humane farms for a good portion of their lives before being humanely killed — if there is such a thing — and readied for our plates.

From Aeon:

Three years ago, a televised taste test of a lab-grown burger proved it was possible to grow a tiny amount of edible meat in a lab. This flesh was never linked to any central nervous system, and so there was none of the pain, boredom and fear that usually plague animals unlucky enough to be born onto our farms. That particular burger coalesced in a substrate of foetal calf serum, but the goal is to develop an equally effective plant-based solution so that a relatively small amount of animal cells can serve as the initial foundation for glistening mounds of brainless flesh in vats – meat without the slaughter.

For many cultured-meat advocates, a major motive is the reduction of animal suffering. Vat meat avoids both the good and the bad of the mixed blessing that is sentient existence. Since the lives of animals who become our food are mostly a curse, producing mindless, unfeeling flesh to replace factory farming is an ethical (as well as literal) no-brainer.

A trickier question is whether the production of non-sentient flesh should replace what I will call ‘low-suffering animal farming’ – giving animals good lives while still raising them for food. Ideally, farmed animals would be spared the routine practices that cause severe pain: dehorning, castration, artificial insemination, branding, the separation of mothers from calves for early weaning, and long, cramped truck rides to slaughterhouses. But even in its Platonic form, low-suffering animal farming has detractors. If we give farm animals good lives, it presumably means that they like their lives and want to keep living – so how do we justify killing them just to enjoy the tastes and textures of meat? By avoiding all the good aspects of subjective experience, growing faceless flesh in vats also escapes this objection. Since vat meat cannot have any experiences at all, we don’t take a good life away by eating it.

This could avoid what many see as the fatal contradiction of humane animal farming: it commits us to treating animals with love and kindness… before slashing their throats so that we can devour their insides. It’s not the most compassionate end to a mutually respectful cross-species friendship. However, conscientiously objecting to low-suffering animal husbandry can be paradoxical as well. Those who want plants and nerveless animal cells to replace all animal farming because they think it wrong to kill happy creatures seem to believe that life for these farmed animals is such a good thing that it’s a shame for them to lose it – and so we should never create their lives at all. They love sentience so much, they want this to be a less sentient world.

So, which of these awkward positions has more going for it? In order to figure this out, I’m afraid we’ll need a thought experiment involving, well, zombie cows.

Read the entire essay here.

Image: Highland cow, in southern Dartmoor, England, 2009. Courtesy: Nilfanion / Wikipedia. Creative Commons Attribution-Share Alike 3.0.

Hedging With Death


I’ve never met a hedge fund guy. I don’t think I ever will. They’re invariably male and white. Hedge fund guys move in very different circles than mere mortgage-bound mortals like me, usually orbited by billions of dollars and extravagant toys like 200 ft yachts, Tuscan palazzos and a Lamborghini on every continent. At least that’s the popular stereotype.

I’m not sure I like the idea of hedge funds and hedge fund guys with their complex and obfuscated financial transactions, nanosecond trading, risk-shifting strategies, corporate raids and restructurings. I’m not against gazillionaires per se — but I much prefer the billionaires who invent and make things over those who simply bet and gamble and destroy.

So, it comes as no surprise to learn that one predatory hedge fund guy has found a way to make money from the death of strangers. His name is Donald F. “Jay” Lathen Jr., and his hedge fund is known as Eden Arc Capital Management. Lathen found a neat way for his fund to profit from bonds and CDs (certificates of deposit) that carry survivor options. For each of his “death transactions” there would be two joint account holders: himself (or an associate) and a terminally ill patient at a nursing home or hospice. In exchange for naming Lathen as a joint owner and financial beneficiary, the patient would collect $10,000 from Lathen. Lathen would then rake in far greater sums from the redeemed bonds when the patient died.

Lathen’s trick was to enter into such deals only with patients that he calculated to be closest to death. Nothing illegal here, but certainly ethically head-scratching. Don’t you just love capitalism!

From Bloomberg:

A vital function of the financial system is to shift risk, but that is mostly a euphemism. Finance can’t make risks go away, or even really move them all that much. When the financial system shifts the risk of X happening from Y to Z, all that means is that Z gives Y money if X happens. If X was going to happen to Y, it’s still going to happen to Y. But now Y gets money.

Death is a central fact of human existence, the fundamental datum that gives meaning to life, but it is also a risk — you never know when it will happen! — and so the financial industry has figured out ways to shift it. Not in any supernatural sense, I mean, but in the regular financial-industry sense: by giving people money when death happens to them. One cannot know for certain how much of a consolation that is.

Another vital function of the financial system is to brutally punish the mispricing of risk through arbitrage. Actually I don’t really know how vital that one is, but people are pretty into it. If someone under- or overestimates a risk, someone else will find a way to make them pay for it. That’s how markets, even the market for death, stay efficient.

The normal way to shift the risk of death is life insurance — you die, the insurance company gives you money — but there are other, more esoteric versions, and they are more susceptible to arbitrage. One version involves “medium and long-term bonds and certificates of deposit (‘CDs’) that contain ‘survivor options’ or ‘death puts.'” Schematically, the idea is that a financial institution issues a bond that pays back $100 when it matures in 2040 or whatever. But if the buyer of the bond dies, he gets his $100 back immediately, instead of having to wait until 2040. He’s still dead, though.

But the bond can be owned jointly by two people, and when one of them dies, the other one gets the $100 back. If you and your friend buy a bond like that for $80, and then your friend dies, you make a quick $20.

But what are the odds of that? “Pretty low” was presumably the thinking of the companies issuing these bonds. But they didn’t reckon with Donald F. “Jay” Lathen Jr. and his hedge fund Eden Arc Capital Management:

Using contacts at nursing homes and hospices to identify patients that had a prognosis of less than six months left to live, and conducting due diligence into the patients’ medical condition, Lathen found Participants he could use to execute the Fund’s strategy. In return for agreeing to become a joint owner on an account with Lathen and/or another individual, the Participants were promised a fixed fee—typically, $10,000.

That is, needless to say, from the Securities and Exchange Commission administrative action against Lathen and Eden Arc. Lathen and a terminally ill patient would buy survivor-option bonds in a joint account, using Eden Arc’s money; the patient would die, Lathen would redeem the bonds, and Eden Arc would get the money. You are … somehow … not supposed to do this?
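To make the arithmetic concrete, here is a minimal sketch, in Python, of the payoff such a strategy targets. The account size and the discount to par are illustrative assumptions of mine; the only figure taken from the reporting above is the $10,000 participant fee.

# Minimal sketch of the survivor-option ("death put") arbitrage described above.
# All figures are illustrative assumptions, apart from the $10,000 participant
# fee mentioned in the SEC action quoted above.

def account_profit(face_value_bought: float,
                   avg_price_pct_of_par: float,
                   participant_fee: float = 10_000.0) -> float:
    """Profit for one joint account: bonds bought below par are redeemed
    at par (face value) when the terminally ill joint owner dies."""
    cost = face_value_bought * avg_price_pct_of_par / 100.0
    return face_value_bought - cost - participant_fee

# Suppose a fund places $1 million of face value in one joint account and the
# bonds trade at an assumed average of 92 cents on the dollar.
profit = account_profit(face_value_bought=1_000_000, avg_price_pct_of_par=92)
print(f"Profit on early redemption: ${profit:,.0f}")  # Profit on early redemption: $70,000

The trade only pays if the discount to par exceeds the fee plus transaction costs, which is presumably why Lathen sought out patients with the shortest prognoses: the sooner the redemption, the higher the annualized return and the smaller the chance that bond prices move against the position.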

Read the entire story here.

Image: Antoine Wiertz’s painting of a man buried alive, 1854. Courtesy: Wiertz Museum, Brussels / Wikipedia. Public Domain.

Litigation Financing

Have some loose change under your mattress? If so, and the loose change comes in the millions of dollars, you may want to consider investing it. But not in a paltry savings account or the stock market: consider investing it in litigation. Yes, there are funds, run by money managers and lawyers, that do nothing but sue for financial gain. And if that so-called “litigation fund” happens to be suing for a cause you believe in, you’ll reap a two-fold reward: you’ll collect a handsome investment return, and you’ll get the pleasure of ruining your legal adversary in the process.

Here’s just one example. Burford Capital, a British litigation investment company, has recorded an almost 400 percent growth in profits over the last five years. The firm reported recent profits of $77 million and declared a staggering 70 percent net return on its investments.

So, perhaps you should ditch the notion of becoming the next Warren Buffett; trash the thought of investing in companies that innovate, create and build, and pour your retirement funds into companies that sue and litigate. Furthermore, if you seek a really stellar return on your hard-earned cash, then you should consider investing in litigation funds that sue media companies over the First Amendment — that’s where the action and the money are today, and that’s where the next part of this ethically questionable story leads.

From Wired:

The revelation that Silicon Valley billionaire Peter Thiel bankrolled Hulk Hogan’s sex tape lawsuit against Gawker sent shockwaves through the media industry. Commentators had barely recovered from the $140 million in damages awarded to Hogan. Now they were grappling with a bigger question: Is this kind of financial arrangement even legal? Could it happen to them?

The short answer to both is yes—picking up the tab on someone else’s lawsuit is now perfectly legal (it wasn’t always), and people who do it aren’t required to reveal that they’re doing it or why. The practice is reviled by the business community, and yet Thiel, a staunch pro-business libertarian, has shown billionaires everywhere that it’s possible to not only sue a media company indirectly for revenge but to make money doing it. Now that the message is out, there’s nothing to stop other billionaires from following his lead.

“This [case] could really change the landscape, because everyone who has gripes about what the media has done is going to start thinking about dollars and cents and running to their lawyers,” says Thomas Julin, a partner at Miami-based law firm Hunton and Williams who focuses on First Amendment litigation.

“And it’s going to get lawyers thinking, ‘Maybe I should be more willing to represent other individuals against the media.’”

Regardless of how you feel about Gawker, Hogan, or Thiel, this financial arrangement sets a dangerous precedent for anyone running a business—especially a media business. Litigation finance is a booming industry, and Thiel’s success likely makes the entire media industry vulnerable to professional litigation financiers willing to fund other vendettas.

“Litigation financing is really dangerous,” says Bryan Quigley from the Institute for Legal Reform, the civil justice arm of the U.S. Chamber of Commerce, an advocate for American businesses. “There’s no doubt it’s going to create more litigation in general.”

Read the entire story here.

 

 

Crispr – Designer DNA

The world welcomed basic genetic engineering in the early 1970s, when biotech pioneers Herbert Boyer and Stanley Cohen transferred DNA from one organism into another (a bacterium). In so doing they created the first genetically modified organism (GMO). A mere forty years later, we have extremely powerful and accessible (cheap) biochemical tools for tinkering with the molecules of heredity. One of these tools, known as Crispr-Cas9, makes it easy and fast to move any genes around, within and across any species.

The technique promises immense progress in the fight against inherited illness, cancer and viral infection. It also opens the door to untold manipulation of DNA in lower organisms and plants, to develop an infection-resistant and faster-growing food supply and to reimagine a whole host of biochemical and industrial processes (such as ethanol production).

Yet, as with many technological advances that hold great promise, this next revolution also brings tremendous peril. Our bioengineering prowess has yet to be matched by a sound and pervasive ethical framework. Can humans reach a consensus on how to shape, focus and limit the application of such techniques? And, equally important, can we enforce these bioethical constraints before designer babies and bioweapons become impossible to “uninvent”?

From Wired:

Spiny grass and scraggly pines creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California’s Monterey Peninsula hammerheads into the Pacific. It’s a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren’t allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn’t look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don’t happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it’s perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

In a way, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn’t always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That’s where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren’t genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn’t know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr’s palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they’re called bacteriophages, or phages for short. Barrangou’s team went on to show that the segments served an important role in the bacteria’s defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr’s sequences didn’t encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country’s best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco’s skyline. It certainly wasn’t what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That’s where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna’s curiosity, but when Doudna pried Crispr apart, she didn’t see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn’t the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn’t mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr’s associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Read the whole story here.

Time For a New Body, Literally


Let me be clear. I’m not referring to a hair transplant, but a head transplant.

A disturbing story has been making the media rounds recently. Dr. Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy suggests that the time is right to attempt the transplantation of a human head onto a different body. Canavero believes that advances in surgical techniques and immunotherapy are such that a transplant could be attempted by 2017. Interestingly enough, he has already had several people volunteer for a new body.

Ethics aside, it certainly doesn’t stretch the imagination to believe Hollywood’s elite would clamor for this treatment. Now, I wonder if some people, liking their own body, would want a new head?

From New Scientist:

It’s heady stuff. The world’s first attempt to transplant a human head will be launched this year at a surgical conference in the US. The move is a call to arms to get interested parties together to work towards the surgery.

The idea was first proposed in 2013 by Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy. He wants to use the surgery to extend the lives of people whose muscles and nerves have degenerated or whose organs are riddled with cancer. Now he claims the major hurdles, such as fusing the spinal cord and preventing the body’s immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017.

Canavero plans to announce the project at the annual conference of the American Academy of Neurological and Orthopaedic Surgeons (AANOS) in Annapolis, Maryland, in June. Is society ready for such momentous surgery? And does the science even stand up?

The first attempt at a head transplant was carried out on a dog by Soviet surgeon Vladimir Demikhov in 1954. A puppy’s head and forelegs were transplanted onto the back of a larger dog. Demikhov conducted several further attempts but the dogs only survived between two and six days.

The first successful head transplant, in which one head was replaced by another, was carried out in 1970. A team led by Robert White at Case Western Reserve University School of Medicine in Cleveland, Ohio, transplanted the head of one monkey onto the body of another. They didn’t attempt to join the spinal cords, though, so the monkey couldn’t move its body, but it was able to breathe with artificial assistance. The monkey lived for nine days until its immune system rejected the head. Although few head transplants have been carried out since, many of the surgical procedures involved have progressed. “I think we are now at a point when the technical aspects are all feasible,” says Canavero.

This month, he published a summary of the technique he believes will allow doctors to transplant a head onto a new body (Surgical Neurology International, doi.org/2c7). It involves cooling the recipient’s head and the donor body to extend the time their cells can survive without oxygen. The tissue around the neck is dissected and the major blood vessels are linked using tiny tubes, before the spinal cords of each person are cut. Cleanly severing the cords is key, says Canavero.

The recipient’s head is then moved onto the donor body and the two ends of the spinal cord – which resemble two densely packed bundles of spaghetti – are fused together. To achieve this, Canavero intends to flush the area with a chemical called polyethylene glycol, and follow up with several hours of injections of the same stuff. Just like hot water makes dry spaghetti stick together, polyethylene glycol encourages the fat in cell membranes to mesh.

Next, the muscles and blood supply would be sutured and the recipient kept in a coma for three or four weeks to prevent movement. Implanted electrodes would provide regular electrical stimulation to the spinal cord, because research suggests this can strengthen new nerve connections.

When the recipient wakes up, Canavero predicts they would be able to move and feel their face and would speak with the same voice. He says that physiotherapy would enable the person to walk within a year. Several people have already volunteered to get a new body, he says.

The trickiest part will be getting the spinal cords to fuse. Polyethylene glycol has been shown to prompt the growth of spinal cord nerves in animals, and Canavero intends to use brain-dead organ donors to test the technique. However, others are sceptical that this would be enough. “There is no evidence that the connectivity of cord and brain would lead to useful sentient or motor function following head transplantation,” says Richard Borgens, director of the Center for Paralysis Research at Purdue University in West Lafayette, Indiana.

Read the entire article here.

Image: Theatrical poster for the movie The Brain That Wouldn’t Die (1962). Courtesy of Wikipedia.

Are Most CEOs Talented or Lucky?

According to Harold G. Hamm, founder and CEO of Continental Resources, most CEOs are lucky, not talented. You see, Hamm’s net worth has reached around $18 billion, and in recent divorce filings he claims to have been responsible for generating only around 10 percent of this wealth since his marriage in 1988 (he founded the company well before then). Interestingly, even though he made most of the key company appointments and oversaw all the key business decisions, he seems rather reluctant to claim much of the company’s success as his own. Strange, then, that his company would compensate him to the tune of around $43 million during 2006-2013 for essentially being a lucky slacker!

This, of course, enables him to minimize the amount owed to his ex-wife. Thus, one has to surmise from these shenanigans that some CEOs are not merely lucky; they’re also stupid.

On a broader note, this does raise the question of why so many CEOs are rewarded such extraordinary sums when it’s mostly luck guiding their companies’ progress!

From NYT:

The divorce of the oil billionaire Harold G. Hamm from Sue Ann Arnall has gained attention largely for its outsize dollar amounts. Mr. Hamm, the chief executive and founder of Continental Resources, who was worth more than $18 billion at one point, wrote his ex-wife a check last month for $974,790,317.77 to settle their split. She’s appealing to get more; he’s appealing to pay less.

Yet beyond the staggering sums, the Hamm divorce raises a fundamental question about the wealth of executives and entrepreneurs: How much do they owe their fortunes to skill and hard work, and how much comes from happenstance and luck?

Mr. Hamm, seeking to exploit a wrinkle in divorce law, made the unusual argument that his wealth came largely from forces outside his control, like global oil prices, the expertise of his deputies and other people’s technology. During the nine-week divorce trial, his lawyers claimed that although Mr. Hamm had founded Continental Resources and led the company to become a multibillion-dollar energy giant, he was responsible for less than 10 percent of his personal and corporate success.

Some in the courtroom started calling it the “Jed Clampett defense,” after the lead character in “The Beverly Hillbillies” TV series who got rich after tapping a gusher in his swampland.

In a filing last month supporting his appeal, Mr. Hamm cites the recent drop in oil prices and subsequent 50 percent drop in Continental’s share price and his fortune as further proof that forces outside his control direct his company’s fortunes.

Lawyers for Ms. Arnall argue that Mr. Hamm is responsible for more than 90 percent of his fortune.

While rooted in a messy divorce, the dispute frames a philosophical and ethical debate over inequality and the obligations of the wealthy. If wealth comes mainly from luck or circumstance, many say the wealthy owe a greater debt to society in the form of taxes or charity. If wealth comes from skill and hard work, perhaps higher taxes would discourage that effort.

Sorting out what value is created by luck or skill is a tricky proposition in itself. The limited amount of academic research on the topic, which mainly looks at how executives can influence a company’s value, has often found that broader market forces often have a bigger impact on a company’s success than an executive’s actions.

“As we know from the research, the performance of a large firm is due primarily to things outside the control of the top executive,” said J. Scott Armstrong, a professor at the Wharton School at the University of Pennsylvania. “We call that luck. Executives freely admit this — when they encounter bad luck.”

A study conducted from 1992 to 2011 of how C.E.O. compensation changed in response to luck or events beyond the executives’ control showed that their pay was 25 percent higher when luck favored the C.E.O.

Some management experts say the role of luck is nearly impossible to measure because it depends on the particular industry. Oil, for instance, is especially sensitive to outside forces.

“Within any industry, a more talented management team is going to tend to do better,” said Steven Neil Kaplan of the University of Chicago Booth School of Business. “That is why investors and boards of directors look for the best talent to run their companies. That is why company stock prices often move a lot, in both directions, when a C.E.O. dies or a new C.E.O. is hired.”

The Hamm case hinged on a quirk in divorce law known as “active versus passive appreciation.” In Oklahoma, and many other states, if a spouse owns an asset before the marriage, the increase in the value of an asset during marriage is not subject to division if the increase was because of “passive” appreciation. Passive appreciation is when an asset grows on its own because of factors outside either spouse’s control, like land that appreciates without any improvements or passively held stocks. Any value that’s not deemed as “passive” is considered “active” — meaning it increased because of the efforts, skills or funding of a spouse and can therefore be subject to division in a divorce.

The issue has been at the center of some other big divorces. In the 2002 divorce of the Chicago taxi magnate David Markin and Susan Markin, filed in Palm Beach, Fla., Mr. Markin claimed he was “merely a passenger on this corporate ship traveling through the ocean,” according to the judge. But he ruled that Mr. Markin was more like “the captain of the ship. Certainly he benefited by sailing through some good weather. However, he picked the course and he picked the crew. In short, he was directly responsible for everything that happened.” Ms. Markin was awarded more than $30 million, along with other assets.

Mr. Hamm, now 69, also had favorable conditions after founding Continental Resources well before his marriage in 1988 to Sue Ann, then a lawyer at the company. By this fall, when the trial ended, Continental had a market capitalization of over $30 billion; Mr. Hamm’s stake of 68 percent and other wealth exceeded $18 billion.

Their divorce trial was closed to the public, and all but a few of the documents are under seal. Neither Mr. Hamm nor his lawyers or representatives would comment. Ms. Arnall and her spokesman also declined to comment.

According to people with knowledge of the case, however, Mr. Hamm’s chief strategy was to claim most of his wealth as passive appreciation, and therefore not subject to division. During his testimony, the typically commanding Mr. Hamm, who had been the face of the company for decades, said he couldn’t recall certain decisions, didn’t know much about the engineering aspects of oil drilling and didn’t attend critical meetings.

Mr. Hamm’s lawyers calculated that only 5 to 10 percent of his wealth came from his own effort, skill, management or investment. It’s unclear how they squared this argument with his compensation, which totaled $42.7 million from 2006 to 2013, according to Equilar, an executive compensation data company.

Ms. Arnall called more than 80 witnesses — from Continental executives to leading economists like Glenn Hubbard and Kenneth Button — to show how much better Continental had done than its peers and that Mr. Hamm made most or all of the key decisions about the company’s strategy, finances and operations. They estimated that Mr. Hamm was responsible for $14 billion to $17 billion of his $18 billion fortune.

Read the entire article here.

 

The Sandwich of Corporate Exploitation


If ever you needed a vivid example of corporate exploitation of the most vulnerable, this is it. So-called free-marketeers will sneer at any suggestion of corporate over-reach — they will chant that it’s just the free market at work. But the rules of this market, like those of many others, are written and enforced by the patricians and stacked heavily against the plebs.

From NYT:

If you are a chief executive of a large company, you very likely have a noncompete clause in your contract, preventing you from jumping ship to a competitor until some period has elapsed. Likewise if you are a top engineer or product designer, holding your company’s most valuable intellectual property between your ears.

And you also probably have a noncompete agreement if you assemble sandwiches at Jimmy John’s sub sandwich chain for a living.

But what’s most startling about that information, first reported by The Huffington Post, is that it really isn’t all that uncommon. As my colleague Steven Greenhouse reported this year, employers are now insisting that workers in a surprising variety of relatively low- and moderate-paid jobs sign noncompete agreements.

Indeed, while HuffPo has no evidence that Jimmy John’s, a 2,000-location sandwich chain, ever tried to enforce the agreement to prevent some $8-an-hour sandwich maker or delivery driver from taking a job at the Blimpie down the road, there are other cases where low-paid or entry-level workers have had an employer try to restrict their employability elsewhere. The Times article tells of a camp counselor and a hair stylist who faced such restrictions.

American businesses are paying out a historically low proportion of their income in the form of wages and salaries. But the Jimmy John’s employment agreement is one small piece of evidence that workers, especially those without advanced skills, are also facing various practices and procedures that leave them worse off, even apart from what their official hourly pay might be. Collectively they tilt the playing field toward the owners of businesses and away from the workers who staff them.

You see it in disputes like the one heading to the Supreme Court over whether workers at an Amazon warehouse in Nevada must be paid for the time they wait to be screened at the end of the workday to ensure they have no stolen goods on them.

It’s evident in continuing lawsuits against Federal Express claiming that its “independent contractors” who deliver packages are in fact employees who are entitled to benefits and reimbursements of costs they incur.

And it is shown in the way many retailers assign hourly workers inconvenient schedules that can change at the last minute, giving them little ability to plan their lives (my colleague Jodi Kantor wrote memorably about the human effects of those policies on a Starbucks coffee worker in August, and Starbucks rapidly said it would end many of them).

These stories all expose the subtle ways that employers extract more value from their entry-level workers, at the cost of their quality of life (or, in the case of the noncompete agreements, freedom to leave for a more lucrative offer).

What’s striking about some of these labor practices is the absence of reciprocity. When a top executive agrees to a noncompete clause in a contract, it is typically the product of a negotiation in which there is some symmetry: The executive isn’t allowed to quit for a competitor, but he or she is guaranteed to be paid for the length of the contract even if fired.

Read the entire story here.

Image courtesy of Google Search.

Caveat Asterisk and Corporate Un-Ethics

We have to believe that most companies are in business to help us with their products and services, not hurt us. Yet more and more enterprises are using novel ways to shield themselves and their executives from the consequences and liabilities of shoddy and dangerous products and questionable business practices.

Witness the latest corporate practice: buried deep within a company’s privacy policy you may be surprised to find a clause stating that, if you have purchased one of its products, downloaded a coupon, or “liked” the company on a social network, you give up your right to sue and must resolve any dispute through arbitration!

You have to admire the combined creativity of these corporate legal teams — who needs real product innovation with tangible consumer benefits when you can boost the corporate bottom line through legal shenanigans that abrogate ethical responsibility?

So if you ever find a dead rodent in your next box of Cheerios, which you purchased with a $1-off coupon, you may be out of luck; and General Mills executives will be as happy as the families in their blue sky cereal commercials.

From the NYT:

Might downloading a 50-cent coupon for Cheerios cost you legal rights?

General Mills, the maker of cereals like Cheerios and Chex as well as brands like Bisquick and Betty Crocker, has quietly added language to its website to alert consumers that they give up their right to sue the company if they download coupons, “join” it in online communities like Facebook, enter a company-sponsored sweepstakes or contest or interact with it in a variety of other ways.

Instead, anyone who has received anything that could be construed as a benefit and who then has a dispute with the company over its products will have to use informal negotiation via email or go through arbitration to seek relief, according to the new terms posted on its site.

In language added on Tuesday after The New York Times contacted it about the changes, General Mills seemed to go even further, suggesting that buying its products would bind consumers to those terms.

“We’ve updated our Privacy Policy,” the company wrote in a thin, gray bar across the top of its home page. “Please note we also have new Legal Terms which require all disputes related to the purchase or use of any General Mills product or service to be resolved through binding arbitration.”

The change in legal terms, which occurred shortly after a judge refused to dismiss a case brought against the company by consumers in California, made General Mills one of the first, if not the first, major food companies to seek to impose what legal experts call “forced arbitration” on consumers.

“Although this is the first case I’ve seen of a food company moving in this direction, others will follow — why wouldn’t you?” said Julia Duncan, director of federal programs and an arbitration expert at the American Association for Justice, a trade group representing plaintiff trial lawyers. “It’s essentially trying to protect the company from all accountability, even when it lies, or say, an employee deliberately adds broken glass to a product.”

General Mills declined to make anyone available for an interview about the changes. “While it rarely happens, arbitration is an efficient way to resolve disputes — and many companies take a similar approach,” the company said in a statement. “We even cover the cost of arbitration in most cases. So this is just a policy update, and we’ve tried to communicate it in a clear and visible way.”

A growing number of companies have adopted similar policies over the years, especially after a 2011 Supreme Court decision, AT&T Mobility v. Concepcion, that paved the way for businesses to bar consumers claiming fraud from joining together in a single arbitration. The decision allowed companies to forbid class-action lawsuits with the use of a standard-form contract requiring that disputes be resolved through the informal mechanism of one-on-one arbitration.

Credit card and mobile phone companies have included such limitations on consumers in their contracts, and in 2008, the magazine Mother Jones published an article about a Whataburger fast-food restaurant that hung a sign on its door warning customers that simply by entering the premises, they agreed to settle disputes through arbitration.

Companies have continued to push for expanded protection against litigation, but legal experts said that a food company trying to limit its customers’ ability to litigate against it raised the stakes in a new way.

What if a child allergic to peanuts ate a product that contained trace amounts of nuts but mistakenly did not include that information on its packaging? Food recalls for mislabeling, including failures to identify nuts in products, are not uncommon.

“When you’re talking about food, you’re also talking about things that can kill people,” said Scott L. Nelson, a lawyer at Public Citizen, a nonprofit advocacy group. “There is a huge difference in the stakes, between the benefit you’re getting from this supposed contract you’re entering into by, say, using the company’s website to download a coupon, and the rights they’re saying you’re giving up. That makes this agreement a lot broader than others out there.”

Big food companies are concerned about the growing number of consumers filing class-action lawsuits against them over labeling, ingredients and claims of health threats. Almost every major gathering of industry executives has at least one session on fighting litigation.

Last year, General Mills paid $8.5 million to settle lawsuits over positive health claims made on the packaging of its Yoplait Yoplus yogurt, saying it did not agree with the plaintiff’s accusations but wanted to end the litigation. In December 2012, it agreed to settle another suit by taking the word “strawberry” off the packaging label for Strawberry Fruit Roll-Ups, which did not contain strawberries.

General Mills amended its legal terms after a judge in California on March 26 ruled against its motion to dismiss a case brought by two mothers who contended that the company deceptively marketed its Nature Valley products as “natural” when they contained processed and genetically engineered ingredients.

“The front of the Nature Valley products’ packaging prominently displays the term ‘100% Natural’ that could lead a reasonable consumer to believe the products contain only natural ingredients,” wrote the district judge, William H. Orrick.

He wrote that the packaging claim “appears to be false” because the products contain processed ingredients like high-fructose corn syrup and maltodextrin.

Read the entire article here.

Image: Bowl of cereal. Courtesy of Wikipedia / Evan-Amos.

You Are a Google Datapoint

At first glance, Google’s aim to make all known information accessible and searchable seems a fundamentally worthy goal, in keeping with its “Don’t Be Evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.

However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.

Google continues to extend its technology boundaries and its vast indexed database of information. Now, with the introduction of Google Glass, the company extends its reach to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would allow a wearer a fair degree of control over how and what data is collected and shared with Google, they do nothing to address those being observed.

So beware: the next time you are sitting in a Starbucks, shopping in a mall or riding the subway, you may be being recorded and your digital essence distributed over the internet. Perhaps someone, somewhere, will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization are no less troubling. Remember: to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).

From the Telegraph:

In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.

So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?

There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.

Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.

The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.

Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.

Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.

More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.

It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.

They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.

The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.

Read the entire article after the jump.

Image: Google’s Sergey Brin wearing Google Glass. Courtesy of CBS News.

Sign First; Lie Less

A recent paper published in the Proceedings of the National Academy of Sciences (PNAS) shows that we are more likely to be honest if we sign a form before, rather than after, completing it. So, over the coming years, look out for Uncle Sam to revise the ubiquitous IRS 1040 form by placing the signature line at the top of the form rather than at the bottom of the last page.

From Ars Technica:

What’s the purpose of signing a form? On the simplest level, a signature is simply a way to make someone legally responsible for the content of the form. But in addition to the legal aspect, the signature is an appeal to personal integrity, forcing people to consider whether they’re comfortable attaching their identity to something that may not be completely true.

Based on some figures in a new PNAS paper, the signatures on most forms are miserable failures, at least from the latter perspective. The IRS estimates that it misses out on about $175 billion because people misrepresent their income or deductions. And the insurance industry calculates that it loses about $80 billion annually due to fraudulent claims. But the same paper suggests a fix that is as simple as tweaking the form. Forcing people to sign before they complete the form greatly increases their honesty.

It shouldn’t be a surprise that signing at the end of a form does not promote accurate reporting, given what we know about human psychology. “Immediately after lying,” the paper’s authors write, “individuals quickly engage in various mental justifications, reinterpretations, and other ‘tricks’ such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied.” By the time they get to the actual request for a signature, they’ve already made their peace with lying: “When signing comes after reporting, the morality train has already left the station.”

The problem isn’t with the signature itself. Lots of studies have shown that focusing the attention on one’s self, which a signature does successfully, can cause people to behave more ethically. The problem comes from its placement after the lying has already happened. So, the authors posited a quick fix: stick the signature at the start. Their hypothesis was that “signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting.”

To test this proposal, they designed a series of forms that required self reporting of personal information, either involving performance on a math quiz where higher scores meant higher rewards, or the reimbursable travel expenses involved in getting to the study’s location. The only difference among the forms? Some did not ask for a signature, some put the signature on top, and some placed it in its traditional location, at the end.

In the case of the math quiz, the researchers actually tracked how well the participants had performed. With the signature at the end, a full 79 percent of the participants cheated. Somewhat fewer cheated when no signature was required, though the difference was not statistically significant. But when the signature was required on top, only 37 percent cheated—less than half the rate seen in the signature-at-bottom group. A similar pattern was seen when the authors analyzed the extent of the cheating involved.

Although they didn’t have complete information on travel expenses, the same pattern prevailed: people who were given the signature-on-top form reported fewer expenses than either of the other two groups.

The authors then repeated this experiment, but added a word completion task, where participants were given a series of blanks, some filled in with letters, and asked to complete the word. These completion tasks were set up so that they could be answered with neutral words or with those associated with personal ethics, like “virtue.” They got the same results as in the earlier tests of cheating, and the word completion task showed that the people who had signed on top were more likely to fill in the blanks to form ethics-focused words. This supported the contention that the early signature put people in an ethical state of mind prior to completion of the form.

But the really impressive part of the study came from its real-world demonstration of this effect. The authors got an unnamed auto insurance company to send out two versions of its annual renewal forms to over 13,000 policy holders, identical except for the location of the signature. One part of this form included a request for odometer readings, which the insurance companies use to calculate typical miles travelled, which are proportional to accident risk. These are used to calculate insurance cost—the more you drive, the more expensive it is.

Those who signed at the top reported nearly 2,500 miles more than the ones who signed at the end.
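To make the pricing link concrete, here is a back-of-the-envelope calculation in Python. The base premium and per-mile rate are invented for illustration; only the roughly 2,500-mile reporting gap comes from the study.

    # Toy mileage-based pricing: premium grows linearly with reported miles.
    # The base premium and per-mile rate are made-up numbers; only the
    # ~2,500-mile gap between the two signature conditions comes from the study.
    def annual_premium(reported_miles, base=500.0, rate_per_mile=0.05):
        return base + rate_per_mile * reported_miles

    signed_at_end = annual_premium(10_000)          # lower reported mileage
    signed_on_top = annual_premium(10_000 + 2_500)  # ~2,500 more miles reported
    print(f"signed at end: ${signed_at_end:,.2f}")
    print(f"signed on top: ${signed_on_top:,.2f}")
    print(f"difference:    ${signed_on_top - signed_at_end:,.2f}")

Under these invented numbers, the more honest mileage reports are worth about $125 a year per policyholder to the insurer – small individually, but meaningful across 13,000 policies.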

[div class=attrib]Read the entire article after the jump, or follow the article at PNAS, here.[end-div]

[div class=attrib]Image courtesy of University of Illinois at Urbana-Champaign.[end-div]

Is It Good That Money Can Buy (Almost) Anything?

Money is a curious invention. It enables efficient, almost frictionless commerce, and it allows us to assign tangible value to our time. Yet it poses enormous societal challenges and ethical dilemmas. For instance, should we bribe our children with money in return for better grades? Should we allow a chronically ill kidney patient to purchase a replacement organ from a donor?

Raghuram Rajan, professor of finance at the University of Chicago, reviews a fascinating new book that attempts to answer some of these questions. The book, “What Money Can’t Buy: The Moral Limits of Markets,” is written by noted Harvard philosopher Michael Sandel.

[div class=attrib]From Project Syndicate:[end-div]

In an interesting recent book, What Money Can’t Buy: The Moral Limits of Markets, the Harvard philosopher Michael Sandel points to the range of things that money can buy in modern societies and gently tries to stoke our outrage at the market’s growing dominance. Is he right that we should be alarmed?

While Sandel worries about the corrupting nature of some monetized transactions (do kids really develop a love of reading if they are bribed to read books?), he is also concerned about unequal access to money, which makes trades using money inherently unequal. More generally, he fears that the expansion of anonymous monetary exchange erodes social cohesion, and argues for reducing money’s role in society.

Sandel’s concerns are not entirely new, but his examples are worth reflecting upon. In the United States, some companies pay the unemployed to stand in line for free public tickets to congressional hearings. They then sell the tickets to lobbyists and corporate lawyers who have a business interest in the hearing but are too busy to stand in line.

Clearly, public hearings are an important element of participatory democracy. All citizens should have equal access. So selling access seems to be a perversion of democratic principles.

The fundamental problem, though, is scarcity. We cannot accommodate everyone in the room who might have an interest in a particularly important hearing. So we have to “sell” entry. We can either allow people to use their time (standing in line) to bid for seats, or we can auction seats for money. The former seems fairer, because all citizens seemingly start with equal endowments of time. But is a single mother with a high-pressure job and three young children as well endowed with spare time as a student on summer vacation? And is society better off if she, the chief legal counsel for a large corporation, spends much of her time standing in line?

Whether it is better to sell entry tickets for time or for money thus depends on what we hope to achieve. If we want to increase society’s productive efficiency, people’s willingness to pay with money is a reasonable indicator of how much they will gain if they have access to the hearing. Auctioning seats for money makes sense – the lawyer contributes more to society by preparing briefs than by standing in line.

On the other hand, if it is important that young, impressionable citizens see how their democracy works, and that we build social solidarity by making corporate executives stand in line with jobless teenagers, it makes sense to force people to bid with their time and to make entry tickets non-transferable. But if we think that both objectives – efficiency and solidarity – should play some role, perhaps we should turn a blind eye to hiring the unemployed to stand in line in lieu of busy lawyers, so long as they do not corner all of the seats.

What about the sale of human organs, another example Sandel worries about? Something seems wrong when a lung or a kidney is sold for money. Yet we celebrate the kindness of a stranger who donates a kidney to a young child. So, clearly, it is not the transfer of the organ that outrages us – we do not think that the donor is misinformed about the value of a kidney or is being fooled into parting with it. Nor, I think, do we have concerns about the scruples of the person selling the organ – after all, they are parting irreversibly with something that is dear to them for a price that few of us would accept.

I think part of our discomfort has to do with the circumstances in which the transaction takes place. What kind of society do we live in if people have to sell their organs to survive?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google.[end-div]

Morality and Machines

Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains – especially as our machines become ever more powerful and independent. Though perhaps humans ought first to agree on a set of fundamental principles for themselves. A toy sketch of the laws’ strict priority ordering appears below.
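As a thought experiment only, here is what that priority ordering might look like in Python. This is a minimal sketch, assuming a crude action model of a few yes/no flags; the flag names are invented for illustration, and it is not a serious proposal for machine ethics.

    # Asimov's Three Laws as an ordered rule check (illustrative only).
    # The boolean "action model" below is invented; real machine ethics would
    # need far richer notions of harm, intent and consequence.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool = False           # would the action injure a human?
        inaction_harms_human: bool = False  # would *not* acting let a human come to harm?
        ordered_by_human: bool = False      # was the action commanded by a human?
        endangers_robot: bool = False       # does the action risk the robot itself?

    def permitted(action: Action) -> bool:
        # First Law: never injure a human, or allow harm through inaction.
        if action.harms_human:
            return False
        if action.inaction_harms_human:
            return True  # must act, overriding the Second and Third Laws
        # Second Law: obey human orders that do not conflict with the First Law.
        if action.ordered_by_human:
            return True
        # Third Law: otherwise, avoid self-destruction.
        return not action.endangers_robot

    # An order to harm a human is refused (First Law trumps the Second) ...
    print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
    # ... while self-preservation yields to a human order (Second trumps Third).
    print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True

Even this toy version exposes the hard part: deciding whether an action actually “harms a human” is exactly the judgment the laws take for granted.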

Colin Allen reflects on this moral dilemma in the Opinionator column. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

[div class=attrib]From the New York Times:[end-div]

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Asimov Foundation / Wikipedia.[end-div]