Tag Archives: sentience

Are You Smarter Than My Octopus?


My pet octopus has moods. It can change the color of its skin on demand. It watches me with its huge eyes. It’s inquisitive and can manipulate objects. Importantly, my octopus has around half a billion neurons in its nervous system, compared with around 100 billion in my brain, and around 50 million in your pet gerbil.

Ok, let me stop for a moment. I don’t actually have a pet octopus. But the rest is true: octopuses really do have these remarkable abilities. So, does an octopus have a mind, and is it sentient?

From the Atlantic:

Drawing on the work of other researchers, from primatologists to fellow octopologists and philosophers, Godfrey-Smith suggests two reasons for the large nervous system of the octopus. One has to do with its body. For an animal like a cat or a human, details of the skeleton dictate many of the motions the animal can make. You can’t roll your arm into a neat spiral from wrist to shoulder — your bones and joints get in the way. An octopus, having no skeleton, has no such constraint. It can, and frequently does, roll up some of its arms; or it can choose to make one (or several) of them stiff, creating an elbow. Surely the animal needs a huge number of neurons merely to be well coordinated when roaming about the reef.

At the same time, octopuses are versatile predators, eating a wide variety of food, from lobsters and shrimps to clams and fish. Octopuses that live in tide pools will occasionally leap out of the water to catch passing crabs; some even prey on incautious birds, grabbing them by the legs, pulling them underwater, and drowning them. Animals that evolve to tackle diverse kinds of food may tend to evolve larger brains than animals that always handle food in the same way (think of a frog catching insects).

Like humans, octopuses learn new skills. In some species, individuals inhabit a den for only a week or so before moving on, so they are constantly learning routes through new environments. Similarly, the first time an octopus tackles a clam, say, it has to figure out how to open it—can it pull it apart, or would it be more effective to drill a hole? If consciousness is necessary for such tasks, then perhaps the octopus does have an awareness that in some ways resembles our own.

Perhaps, indeed, we should take the “mammalian” behaviors of octopuses at face value. If evolution can produce similar eyes through different routes, why not similar minds? Or perhaps, in wishing to find these animals like ourselves, what we are really revealing is our deep desire not to be alone.

Read the entire article here.

Image: Common octopus. Courtesy: Wikipedia. CC BY-SA 3.0.


Lab-Grown Beef and Zombie Cows


Writer and student of philosophy Rhys Southan provides some food for thought in his essay over at Aeon on the ethics of eating meat. The question is simple enough: would our world be better if humans ate only lab-grown meat or meat from humanely raised farm animals?

The answer may not be as simple or as black and white as you first thought. For instance, were we to move to 100 percent lab-grown beef, there would likely be little need, if any, for real cattle. Thus, we’d be depriving an entire species of living and experiencing some degree of sentience and happiness. Or, if we were to retain some cows, but only in the wild, wouldn’t that be tantamount to torture for animals bred over millennia for domesticity? This might actually be worse than allowing cows to graze on humane farms for a good portion of their lives before being humanely killed — if there is such a thing — and readied for our plates.

From Aeon:

Three years ago, a televised taste test of a lab-grown burger proved it was possible to grow a tiny amount of edible meat in a lab. This flesh was never linked to any central nervous system, and so there was none of the pain, boredom and fear that usually plague animals unlucky enough to be born onto our farms. That particular burger coalesced in a substrate of foetal calf serum, but the goal is to develop an equally effective plant-based solution so that a relatively small amount of animal cells can serve as the initial foundation for glistening mounds of brainless flesh in vats – meat without the slaughter.

For many cultured-meat advocates, a major motive is the reduction of animal suffering. Vat meat avoids both the good and the bad of the mixed blessing that is sentient existence. Since the lives of animals who become our food are mostly a curse, producing mindless, unfeeling flesh to replace factory farming is an ethical (as well as literal) no-brainer.

A trickier question is whether the production of non-sentient flesh should replace what I will call ‘low-suffering animal farming’ – giving animals good lives while still raising them for food. Ideally, farmed animals would be spared the routine practices that cause severe pain: dehorning, castration, artificial insemination, branding, the separation of mothers from calves for early weaning, and long, cramped truck rides to slaughterhouses. But even in its Platonic form, low-suffering animal farming has detractors. If we give farm animals good lives, it presumably means that they like their lives and want to keep living – so how do we justify killing them just to enjoy the tastes and textures of meat? By avoiding all the good aspects of subjective experience, growing faceless flesh in vats also escapes this objection. Since vat meat cannot have any experiences at all, we don’t take a good life away by eating it.

This could avoid what many see as the fatal contradiction of humane animal farming: it commits us to treating animals with love and kindness… before slashing their throats so that we can devour their insides. It’s not the most compassionate end to a mutually respectful cross-species friendship. However, conscientiously objecting to low-suffering animal husbandry can be paradoxical as well. Those who want plants and nerveless animal cells to replace all animal farming because they think it wrong to kill happy creatures seem to believe that life for these farmed animals is such a good thing that it’s a shame for them to lose it – and so we should never create their lives at all. They love sentience so much, they want this to be a less sentient world.

So, which of these awkward positions has more going for it? In order to figure this out, I’m afraid we’ll need a thought experiment involving, well, zombie cows.

Read the entire essay here.

Image: Highland cow, in southern Dartmoor, England, 2009. Courtesy: Nilfanion / Wikipedia. Creative Commons Attribution-Share Alike 3.0.


I Think, Therefore I Am, Not Robot


A sentient robot is the long-held dream of artificial intelligence researchers and science fiction authors alike. Yet some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”
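Maguire’s point about the XOR gate can be sketched in a few lines of Python (an illustrative aside, not from the article): two bits go in, one comes out, so distinct input pairs collapse onto the same output and the inputs cannot be recovered.

```python
def xor(a: int, b: int) -> int:
    """Output is 1 when the two input bits differ, 0 when they match."""
    return a ^ b

# The gate maps four possible input pairs onto only two outputs.
assert xor(0, 0) == 0 and xor(1, 1) == 0   # both (0,0) and (1,1) give 0
assert xor(0, 1) == 1 and xor(1, 0) == 1   # both (0,1) and (1,0) give 1

# Given only the output, two input pairs always remain possible,
# which is the information loss Maguire describes.
preimages = {out: [(a, b) for a in (0, 1) for b in (0, 1) if xor(a, b) == out]
             for out in (0, 1)}
print(preimages)
```

Running this prints `{0: [(0, 0), (1, 1)], 1: [(0, 1), (1, 0)]}` — each one-bit output is shared by two distinct two-bit inputs, so a system that integrated information this way would indeed discard a bit at every step.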

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robby the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.
