Sentience and brain size


Sentience and brain size

Postby Brian Tomasik on 2009-05-05T03:39:00

Edit on 4 Aug. 2013:
My current thoughts on this topic: "Is Brain Size Morally Relevant?"

Edit on 23 June 2012:
I wrote the following post in May 2009, back when I was confused about consciousness and didn't fully comprehend the reductionist interpretation of it. So forgive some of the language, but most of the content is still sensible and important.

My current position on this topic is as follows. I remain uncertain whether I want to give greater weight to more powerful brains (i.e., ones with greater computational throughput). On the one hand, Bostrom's thought experiment (discussed further below) is compelling. But on the other hand, I have an intuition that what matters most is a single organism's unified experience, and I might bite the bullet and say that if a single mind is divided into two separate minds, then the moral weight has thereby doubled. After all, there's no law of conservation of consciousness analogous to the law of conservation of mass/energy. If the context changes, by converting one unified conscious actor into two, then our assessment can change.

Let me put it another way. It may be that the amount of neural tissue that responds when I prick my finger is more than the amount that responds when a fruit fly is shocked to death. (I don't know if this is true, but I have almost a million times more neurons than a fruit fly.) However, the pinprick is trivial to me, but to the fruit fly, the pain that it endures before death comes is the most overwhelming thing in the world. There's a unified quality to the fruit fly's experience that isn't captured merely by number of neurons, or even a more sophisticated notion of computational horsepower of the brain. See also this discussion about Harry the human and Sam the snail.

Another factor that may be relevant is the "clock speed" of the brain in question. If smaller animals tend to have higher clock speeds (at least within a given class of animals, like mammals), then other things being equal, they would get more weight in our calculations, although it's not clear how big the effect size would be.

In addition, if anthropics are not weighted by brain size, then not giving moral weight to brain size begins to seem more plausible as well.

There are many more subtleties and arguments bearing on this tough question, but I shall leave those to the discussion that follows. So without further ado, here's the original post.


Original piece from May 2009:
--------------------------
Several people have suggested to me the idea that capacity for suffering may vary in rough proportion with -- or at least according to some approximately monotonic function of -- brain size. (I'll use "brain size" as a shorthand term referring to the amount of neural tissue an organism has. Perhaps a more relevant measure, though one for which it's harder to find good statistics, is the amount of neural tissue devoted specifically to producing pain emotions, rather than, say, vision processing or planning.) When I first heard this idea, I found it somewhat surprising. Introspectively, consciousness feels like a binary state: I'm aware and feeling emotions now, but I don't do either to any degree when I'm under general anesthesia. I can't recall feeling "more intensely conscious" at one time rather than another, unless you count the groggy state right before falling asleep. On the other hand, personal introspection doesn't prove much, because during my adult life, I've always had the same brain size, so that even if sentience did vary with brain size, I wouldn't know it. (I might be able to notice the difference if my child self had experienced less intense emotions than my adult self. Introspectively this doesn't feel true, but I don't remember my feelings as a very young child well enough to be sure.)

There do seem to be some good arguments for the size-proportionality position.
  • As the cortical homunculus notion suggests, regions of the body that are more sensitive (e.g., the hands) have far more neurons devoted to them in the brain. My feelings of touch are much more refined (and somewhat more intense) on my hands than on my back. Presumably the same would be true of pain nerves.
  • Painkillers work by reducing the number of pain signals that are produced or the number that actually reach the spinal cord and brain. A smaller number of signals translates into less intense experience.
  • We suspect that other abilities of the brain also vary with its size, notably intelligence. That ability is apparently not binary, despite the fact that it feels somewhat binary to any one individual. (I do have a sense of my intelligence varying from moment to moment, but not drastically.)
However, there are some objections.
  • If sentience scales with brain size, then would men have more intensity of emotion than women, since their brains are larger? And should we worry less about pain in children than adults? Maybe -- politically unpopular ideas needn't be incorrect.
  • Intelligence does not seem to be related to absolute brain size but more to relative brain size. Men and women seem to have basically the same IQs even though men have an extra 4 billion brain cells. Moreover, brain size scales very closely with body size (especially lean body mass), so that in fact, whales have much bigger brains (4000-7000 g) than humans (1300-1700 g). Indeed, the standard metric of interest in intelligence studies is not absolute brain size but brain-to-body mass ratio, or Encephalization Quotient (EQ). Shouldn't we expect a similar sort of trend for emotional intensity?
  • In his "Toward Welfare Biology," Yew-Kwang Ng proposes the principle that because hedonic experiences are energetically costly, evolution generally endows organisms with the minimal levels of emotion needed to motivate them to action, perhaps to differential degrees depending on the situation. Ng is an economist rather than a biologist, so I'm not sure how realistic this assumption is, but it does seem true that intense emotion is more metabolically taxing -- thus explaining why we're close to hedonic neutrality most of the time. The question for the sentience-scaling view is then why evolution would give larger-brained organisms substantially more intense emotions in order to motivate them to do similar sorts of things (eat, mate, avoid predators). Maybe the reply would be that, because of their greater prevalence of neurons, larger organisms simply experience greater emotional intensity for the same level of metabolic cost? Or maybe larger-brained organisms, being perhaps more intelligent, have a larger number of options available to them and so need a larger number of levels of pleasure and pain for motivating a wider range of possible responses?
  • If consciousness results from implementing the right algorithm, then maybe it doesn't matter exactly how that algorithm is run? This suggests the notion that consciousness is either on or off, at least for serial computers. As an illustration, there are lots of functions for computing factorial(n) -- some fast, some slow, some simple, some complex -- but whether a given function computes factorial(n) is either true or false. It doesn't depend on lines of code or the computational burden of running the code. (On the other hand, the number of instances of factorial(n) that can be run on a given machine does depend on the latter factor.) See the sketch just after this list.
  • Certain qualia, such as the redness of red, do seem to be binary -- indeed, that's the whole premise behind David Chalmers's "fading qualia" thought experiment. Might consciousness itself be the same way? This is especially interesting viewed in light of the suggestion that consciousness should be substrate-independent. I'm not sure if qualia can be produced by Turing-machine simulations, but if so, what does "brain size" look like in this context? Of course, we do know that different degrees of pain and pleasure are possible, so this last point may be a red herring.
  • On the painkiller point, it may be that fewer pain signals translates into less pain for a given organism. But maybe what matters are the relative amounts of pain signals vs. other signals the brain receives. Larger brains have to process more total inputs. Maybe a tiny brain receiving only a few pain signals feels subjectively worse than a large brain that, while receiving a larger number of pain signals, is distracted by lots of other information. (The gate control theory of pain may be relevant here?)
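To make the factorial illustration above concrete, here is a minimal Python sketch (the function names are mine, purely for illustration): two implementations that differ enormously in computational cost, yet whether each one "computes factorial(n)" is a strictly yes-or-no property of its input-output behavior.

    from itertools import permutations

    def factorial_fast(n):
        # Straightforward loop: n - 1 multiplications.
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    def factorial_by_counting(n):
        # Absurdly wasteful: enumerate and count all n! permutations.
        # Vastly more work, identical input-output behavior.
        return sum(1 for _ in permutations(range(n)))

    # Both compute factorial(n); whether a function does so is binary,
    # independent of code length or the cost of running the code.
    assert all(factorial_fast(n) == factorial_by_counting(n) for n in range(8))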
The question of whether and how emotional intensity scales with brain size is, in my view, extremely important, because it affects the urgency we attach to potential suffering by insects in the wild, assuming they can feel pain at all. If, for instance, insect suffering is 1000 times less intense than human suffering, then we should discount the 10^18 insects in the world not only by their probability of sentience (say, 0.1, to use a round number) but also by their reduced emotional intensity if sentient (0.001). In that case, there would be effectively 10^14 human-equivalent insects, much closer to the 10^10 humans that exist.

There's a fundamental problem here, though. A good Bayesian will not pick a point estimate for the ratio of insect sentience to human sentience but will maintain a probability distribution over plausible ratios. In view of the uncertainty on this question, I think it's reasonable to maintain some probability on the hypothesis that insects do suffer about as much as humans. For instance, maybe we assign probability 0.2 to insects being able to suffer at least half as much. But in that case, a lower bound on the "expected ratio" of sentience between insects and humans is 0.1, which implies much more concern for insects than a ratio like 0.001, even if the latter is the value we consider most likely.
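As a rough illustration of this Bayesian point, here is a small Python sketch using the numbers from the last two paragraphs plus a toy two-point distribution over the insect:human intensity ratio (the distribution itself is my own invention, chosen only for illustration):

    num_insects = 10**18
    prob_sentient = 0.1

    # Toy distribution over the insect:human intensity ratio, given sentience:
    # a 0.2 chance that insects suffer at least half as intensely as humans,
    # and a 0.8 chance of the "most likely" low ratio of 0.001.
    ratio_distribution = {0.5: 0.2, 0.001: 0.8}

    expected_ratio = sum(r * p for r, p in ratio_distribution.items())
    human_equivalents = num_insects * prob_sentient * expected_ratio

    print(expected_ratio)              # ~0.1008, dominated by the small chance of a high ratio
    print(f"{human_equivalents:.1e}")  # ~1.0e+16 human-equivalent insects

The point estimate of 0.001 would have given ~10^14 human-equivalent insects; taking the expectation over the distribution gives roughly a hundred times more.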

Note that if the number of neurons devoted to pain processing is the relevant measure, then the disparity between insects and humans will probably be smaller than straight division based on brain mass would suggest, since I assume humans devote a larger fraction of neurons to non-hedonic brain functions. On the other hand, humans are endothermic and, so a friend tells me, have a larger proportion of synapses than insects. Exactly how to incorporate these factors into a weighting, even if we do apply one, is not obvious.

Re: Sentience and brain size

Postby DanielLC on 2009-05-06T00:40:00

I figure pain is what makes you stop trying to do that, and pleasure is what makes you want to do it more. Do insects learn? If not, they can't feel pain or pleasure. What's more, without something like that, there's no way to say what an insect finds painful, and what it finds pleasurable, or whether or not their lives are worth living.

Sentience is an approximation of the degree to which one can feel happiness and sadness. It is not binary. The question is, if there's more "mass" getting happy, does that make it happier?

I figure it's all about information. Happiness changes the information generated. The more information content a mind has, the more change happiness produces, and thus the more happiness.

As far as I can figure, qualia should not cause anything. It should thus be impossible to tell if you have qualia. Your insistence on there being qualia is an artifact of the way you think. Even if there somehow is a result of qualia, it should be possible to get the result another way. There should be no way to tell if you're a p-zombie or not. That all just seems so wrong, but no matter how much I think about it, I can't think of any reason it would be wrong, or any logical alternative. I guess that just means I can't contribute meaningfully to whether or not it would be possible to notice if your sentience changes.

I find it very difficult to believe that hedonic experiences are energetically costly, or at least that that's the main reason that beings don't experience more. They don't experience more because, if they did, they wouldn't respond to the environment correctly. They'd change what they do based on their last result too much, when it matters how it went before that, and before that.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-05-06T01:51:00

I figure pain is what makes you stop trying to do that, and pleasure is what makes you want to do it more.

I would guess it's not quite that simple. I don't think, say, reinforcement learning algorithms induce pleasure and pain in their simulated agents. Similarly, I doubt that plants feel pleasure when they move toward sunlight.

Do insects learn?

Indeed they do. One 1986 paper, "Invertebrate Learning and Memory: From Behavior to Molecules," reviewed studies on organisms like bees, slugs, molluscs, snails, leeches, locusts, and fruit flies, and concluded (pp. 473-76):

The progress achieved over the last 10-15 years in studying a wide variety of forms of learning in simple invertebrate animals is quite striking. There is now no question, for example, that associative learning is a common capacity in several invertebrate species. In fact, the higher-order features of learning seen in some invertebrates (notably bees and Limax) rivals that commonly observed in such star performers in the vertebrate laboratory as pigeons, rats, and rabbits.

[... W]e have reason to hope that the distinction between vertebrate and invertebrate learning and memory is one that will diminish as our understanding of underlying mechanisms increases.


I figure it's all about information. Happiness changes the information generated. With more information content of a mind, there will be more change from happiness, and thus more happiness.

So the idea is that smarter organisms feel more intensity because more "goes on" in their brain?

Viewed from the standpoint of information transmission, one might argue that more neural mass doesn't mean more sentience, because bigger organisms need more tissue in order to send the same basic signal to the brain -- namely, "I'm in pain. Do something."

There should be no way to tell if you're a p-zombie or not.

Does the fact that we both appear to be members of the same species help? It doesn't provide certainty, but it would seem to make it highly likely that you experience qualia, given that I experience qualia, no?

Your point about energy costs makes sense.

Re: Sentience and brain size

Postby DanielLC on 2009-05-06T16:40:00

GLaDOS wrote:Although the euthanizing process is remarkably painful, 8 out of 10 Aperture Science engineers believe that the companion cube is most likely incapable of feeling much pain.

I just felt like quoting that.

I don't think, say, reinforcement learning algorithms induce pleasure and pain in their simulated agents.

How would you know? I agree that it's probably more complicated, but I'd guess that said algorithms feel less pleasure and pain.


So the idea is that smarter organisms feel more intensity because more "goes on" in their brain?

Viewed from the standpoint of information transmission, one might argue that more neural mass doesn't mean more sentience, because bigger organisms need more tissue in order to send the same basic signal to the brain -- namely, "I'm in pain. Do something."


It's telling the same message to more brain. Every part of the brain reacts in its own way. Think of it like this: The free market is one giant hive mind. When something bad happens, it feels pain. But it's not the same as just one person feeling pain. Each piece of the market is affected slightly differently. As such, a market containing a million people feels about a million times as much pain as one person.

There should be no way to tell if you're a p-zombie or not.

Let me rephrase this: there should be no way for you to tell if you're a p-zombie or not.


In my psychology class they said that people do not generally respond nearly as well to punishment as to reinforcement. Does that mean that humans feel significantly more happiness than sadness? Does anyone know if other animals respond similarly?

A problem with my Pavlovian idea of happiness is learned helplessness, in which a person doesn't think they have control over their situation, and thus does not react in the Pavlovian manner, but still shows other signs of pleasure or pain. For example, a person would tell you that they're happy/sad.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-05-06T17:52:00

How would you know? I agree that it's probably more complicated, but I'd guess that said algorithms feel less pleasure and pain.

If they do feel any amount of pleasure and pain comparable to that of ordinary organisms, then running such simulations (with lots of rewards and no punishment) would be an extraordinarily cheap way to produce utility. I ought to start running some right now with my spare computing cycles! Indeed, we could create a "Happiness@home" project similar to SETI's in which people run massive numbers of enormously blissful simulated agents.

I'm not being completely facetious here, because I do think that post-humans ought to fill the universe with utilitronium. But I'm guessing that consciousness requires more than, say, a Python object with a field "Happiness = +5," even one that responds to events in a simulated environment. Perhaps you could elaborate on your position here?
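For concreteness, here is the sort of trivial construction I have in mind -- a toy sketch of my own, not anyone's proposed implementation -- a few lines of Python that store a "happiness" number and bump it in response to simulated events. The question is whether anything this simple could possibly instantiate experience.

    class SimulatedAgent:
        # A toy "agent" that does nothing but track a happiness score.
        def __init__(self):
            self.happiness = 0

        def experience(self, event_value):
            # Respond to a simulated event by updating the stored number.
            self.happiness += event_value

    agent = SimulatedAgent()
    for _ in range(10**6):
        agent.experience(+5)  # a million "blissful" events, at almost no computational cost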

As such, a market containing a million people feels about a million times as much pain as one person.

Interesting. And yet, I only feel like I'm one mind. Or is that an illusion? Are there really lots of "minds" in my head that all feel like they're the only one?

In my psychology class they said that people do not generally respond nearly as well to punishment as to reinforcement. Does that mean that humans feel significantly more happiness than sadness?

Do you know what was meant by "responding better" to one than the other? Could that just mean that reinforcement is more effective than punishment at changing behavior (regardless of how it feels subjectively)? I'd be interested to hear more, because I'm not aware of such research. Indeed, the concept of negativity bias would seem to suggest the opposite.

Your learned-helplessness point is well taken.

Possible experiments

Postby Brian Tomasik on 2009-05-06T18:15:00

It seems that the dependence of emotional intensity on neural mass may be a testable hypothesis, at least in theory. For example, scientists could conceivably take away people's nerve tissue and see whether the intensity of their experiences decreased. (Indeed, maybe one could ask that question of patients who have undergone hemispherectomies.)

Re: Sentience and brain size

Postby DanielLC on 2009-05-06T21:56:00

Maybe I was a bit too fast saying that sentience is just how much emotion you feel. I figure a person losing brain mass would feel less happiness because there's less of them to feel it, but that wouldn't make them think they have less happiness. The parts of them you take out wouldn't respond to the emotion, but the person wouldn't notice, since those parts wouldn't be around to act as though the emotion were missing. They just wouldn't be there, and thus would have no effect on how the person feels.

You could give someone a drug to make them feel like time passes slower. (This is a thought experiment. I don't actually know of such a drug.) It wouldn't actually make them think faster. It would just make them think more time has passed. They'd think they feel happy (or sad) longer, and therefore more. Would they actually feel the emotion faster? If so, what if you completely destroyed their ability to judge the passage of time?

If they do feel any amount of pleasure and pain comparable to that of ordinary organisms, then running such simulations (with lots of rewards and no punishment) would be an extraordinarily cheap way to produce utility.

I doubt it's anywhere near what, say, an ant would feel. Possibly on par with an ant, if it's done very well. Also, there's no obvious way to say whether something is being more or less like it was before. As a simple example (if you have a background with computers), let's say you have a program that outputs numbers, and it normally picks around 2. If it then picks -1 and starts picking around 1, it seems like it's doing something closer to what it did before. On the other hand, if it picks 4294967295 (2^32-1) and then starts picking around 1, you'd say that it's doing something further away. The problem here is that the only difference is that the first program uses signed integers and the second uses unsigned integers. Both programs are doing exactly the same thing. The only difference is how you interpret the 32-bit binary value "11111111111111111111111111111111".
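A minimal Python sketch of that last point (my own illustration): the identical 32-bit pattern of all ones reads as -1 or as 4294967295 depending purely on whether you unpack it as a signed or an unsigned integer.

    import struct

    bits = b"\xff\xff\xff\xff"  # the 32-bit pattern of all ones

    (as_signed,) = struct.unpack("<i", bits)   # two's-complement signed 32-bit integer
    (as_unsigned,) = struct.unpack("<I", bits) # unsigned 32-bit integer

    print(as_signed)    # -1
    print(as_unsigned)  # 4294967295, i.e. 2**32 - 1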

Unfortunately, the post-humans won't have much better an ability to deal with this. There is simply no objective way to measure pleasure. They can only guess what it is and measure (or optimize) that.

Interesting. And yet, I only feel like I'm one mind. Or is that an illusion? Are there really lots of "minds" in my head that all feel like they're the only one?

I find it rather hard to believe that qualia can think. You think you're the only mind because of how your brain works. Your conscious mind is probably the only one that even realizes it's a mind at all. Unfortunately, this is all too closely related to that qualia paradox I mentioned earlier.

Of course, you can't feel like you're two minds, as you can only feel your own mind.

Could that just mean that reinforcement is more effective than punishment at changing behavior (regardless of how it feels subjectively)?

Yes. That's what I meant.

My psychology teacher suggested that it was because you feel happy with getting away with something when you're not punished, which, if correct, would mean that it is subjectively not as bad.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-05-06T23:04:00

I figure a person losing brain mass would feel less happiness because there's less of them to feel it, but that wouldn't make them think they have less happiness.

That sounds somewhat plausible. In that case, the brain-mass hypothesis would be harder to test than I thought.

Both programs are doing exactly the same thing. The only difference is how you interpret the binary value "1111111111111111".

Nice example. Related is this quote from Eliezer Yudkowsky, which I referenced in a piece on the hard problem of consciousness:
If you redesigned the brain to represent the intensity of pleasure using IEEE 754 double-precision floating-point numbers, a mere 64 bits would suffice to feel pleasures up to 10^308 hedons... in, um, whatever base you were using. [...]

Now we have lost a bit of fine-tuning by switching to IEEE-standard hedonics. The 64-bit double-precision float has an 11-bit exponent and a 52-bit fractional part (and a 1-bit sign). So we'll only have 52 bits of precision (16 decimal places) with which to represent our pleasures, however great they may be. An original human's orgasm would soon be lost in the rounding error... which raises the question of how we can experience these invisible hedons, when the finite-precision bits are the whole substance of the pleasure.

We also have the odd situation that, starting from 1 hedon, flipping a single bit in your brain can make your life 10^154 times more happy.

And Hell forbid you flip the sign bit. Talk about a need for cosmic ray shielding.

But really - if you're going to go so far as to use imprecise floating-point numbers to represent pleasure, why stop there? Why not move to Knuth's up-arrow notation?

For that matter, IEEE 754 provides special representations for +/- INF, that is to say, positive and negative infinity. What happens if a bit flip makes you experience infinite pleasure? Does that mean you Win The Game?

Now all of these questions I'm asking are in some sense unfair, because right now I don't know exactly what I have to do with any structure of bits in order to turn it into a "subjective experience". Not that this is the right way to phrase the question. It's not like there's a ritual that summons some incredible density of positive qualia that could collapse in its own right and form an epiphenomenal black hole.

But don't laugh - or at least, don't only laugh - because in the long run, these are extremely important questions.

I'm rather skeptical that these suggestions have much to do with qualia at all. Even if the functionalists are right that qualia can be produced by executing the right algorithm on an arbitrary Turing machine (which itself I find dubious), I suspect it might be impossible for someone to specify "amount of happiness / pain" just by encoding a symbolic number. Rather, I'm imagining that one execution of the "simulate happiness" loop produces some fixed amount of experience, and you have to execute the loop lots of times to get lots of experience.

But I really don't know. I agree with Eliezer that "these are extremely important questions"!
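For what it's worth, the arithmetic in the quote is easy to check. Here is a short Python sketch (my own, purely illustrative) that flips single bits in the IEEE 754 double representation of 1.0: one exponent-bit flip changes the magnitude by a factor of roughly 10^154, the other exponent flip available from 1.0 yields infinity, and the sign bit turns the pleasure negative.

    import struct

    def flip_bit(x: float, k: int) -> float:
        # Flip bit k (0 = least significant) of x's 64-bit IEEE 754 representation.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
        return flipped

    one_hedon = 1.0
    print(flip_bit(one_hedon, 61))  # ~7.5e-155: a single exponent bit, a ~10^154-fold change
    print(flip_bit(one_hedon, 62))  # inf: the other exponent flip
    print(flip_bit(one_hedon, 63))  # -1.0: the sign bit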

Unfortunately, the post-humans won't have much better an ability to deal with this.

So do you think the hard problem is insoluble (or perhaps ill-formed)?

Re: Sentience and brain size

Postby DanielLC on 2009-05-07T04:51:00

I'm rather skeptical that these suggestions have much to do with qualia at all. Even if the functionalists are right that qualia can be produced by executing the right algorithm on an arbitrary Turing machine (which itself I find dubious), I suspect it might be impossible for someone to specify "amount of happiness / pain" just by encoding a symbolic number. Rather, I'm imagining that one execution of the "simulate happiness" loop produces some fixed amount of experience, and you have to execute the loop lots of times to get lots of experience.

You could interpret anything in an infinite number of ways, so I figure the Turing machine would have to be sufficiently simple. Other than that, I find it more likely that people have qualia because of the way they think, not because of some chance physical process. Of course, there's much more to it than a number. Even my idea, which I figure is over-simplified, requires something that continually does things closer to how it's doing them right now. There's no simple way to create ridiculous amounts of happiness with it (unless you count ridiculously fast computers).

So do you think the hard problem is insoluble (or perhaps ill-formed)?

It's insoluble. Even on the off chance that there is some measurable effect of qualia, there's still no way to tell that the qualia is an intermediate step. It would be indistinguishable from whatever is causing the qualia to cause the result directly.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby davidpearce on 2009-05-07T08:32:00

Electrode studies suggest large parts of the brain contribute very little to the intensity of consciousness, whereas stimulation of extraordinarily small areas can be agonizingly painful or intensely pleasurable.
Perhaps it's the particular architecture and gene expression profile of a firing nerve cell (and its connectivity) that matters most to the intensity of experience rather than brain size per se.

However, it might be illuminating to conduct a variant of the Wada test (http://en.wikipedia.org/wiki/Wada_test) and ask subjects to report on their subjective intensity of awareness after one cerebral hemisphere is anaesthetized at a time.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-05-07T19:59:00

Other than that, I find it more likely that people have qualia because of the way they think, not because of some chance physical process.

I guess the question is whether "the way they think" is all that matters for consciousness. If so, and if this thinking process is algorithmic, then it can be implemented on, say, a Lego universal Turing machine, as the functionalists claim.

Regarding "chance physical processes," does whether something happens by chance make a difference? If the population of China just happens to make cell-phone calls corresponding to the algorithm for the way you respond when you stub your toe, does that fail to produce pain? What if the Chinese population sets out to make the phone calls deliberately? These are just general questions for anyone who wants to answer. As I've suggested, I'm skeptical but uncertain about the existence of consciousness in either case.

Perhaps it's the particular architecture and gene expression profile of a firing nerve cell (and its connectivity) that matters most to the intensity of experience rather than brain size per se.

Interesting suggestion. Of course, we might suspect that an organism's number of such firing nerve cells is roughly proportional to total brain size (?), in which case the implications for how much we care about various animals would be roughly the same.

Re: Sentience and brain size

Postby RyanCarey on 2009-05-11T04:22:00

I agree with DanielLC that Pavlovian change is linked to consciousness. The fact that doing things that are evolutionarily bad for us (touching a hot object) tends to hurt and that doing things that are evolutionarily good for us (eating food, sex) tends to feel pleasurable can be no coincidence.

Without asserting that it's necessarily true, I'll clarify this idea and explore it a little. What causes consciousness is a change-generator. For us, that means an impulse that moves along a neuron and tunes us to respond to our environment. We're conscious when messages reach our decision-making centres. We consciously experience these messages whether they're true or false. Possibly, this is equivalent to remodelling of neurons.

Sometimes, a decision arises from misunderstanding of the environment: the brain can be conscious of false perception just as it can be conscious of true information. When you touch a hot object, you may feel pain. But pain in the arm can also occur when there's nothing wrong with your arm, like if your pain is referred from a heart attack. You can even experience arm pain after your arm has been amputated.

I would guess it's not quite that simple. I don't think, say, reinforcement learning algorithms induce pleasure and pain in their simulated agents. Similarly, I doubt that plants feel pleasure when they move toward sunlight.

It might depend on whether we interpret such a plant's movement as behaviour or as a change in behaviour. But I'm not sure I see the problem with regarding plants as conscious. What's really problematic is that so many of the processes that occur inside our human brains are unconscious. I think we need to formulate a theory that explains why we feel what goes on in our cerebral cortex but not the stuff that goes on in the bits of our brain that are close to the spine and that are evolutionarily older.

Interesting. And yet, I only feel like I'm one mind. Or is that an illusion? Are there really lots of "minds" in my head that all feel like they're the only one?

I don't know, but I'll share an analogy: maybe it's kind of like the interaction between magnets. When you put a couple of small magnets next to each other, they align themselves so that they stick together. Then, their magnetic fields combine and they exert a magnetic force on the things at either end of the newly formed larger magnet.
You can read my personal blog here: CareyRyan.com

Re: Sentience and brain size

Postby Brian Tomasik on 2009-05-11T13:55:00

I agree with DanielLC that Pavlovian change is linked to consciousness. The fact that doing things that are evolutionarily bad for us (touching a hot object) tends to hurt and that doing things that are evolutionarily good for us (eating food, sex) tends to feel pleasurable can be no coincidence.

Need it be Pavlovian change? Either nurture (learning) or nature (genes) can encode our revulsion toward hot objects and attraction toward food and sex. Indeed, I strongly suspect that it's genetic hard-coding at work in each of those examples. Suppose the sharp pain of hot objects was due to classical conditioning rather than evolutionary hard-wiring. Then aversion to hot objects would be learned as a conditioned response in reaction to the association between hot objects (conditioned stimulus) and subsequent damage to the hand (unconditioned stimulus) that produces the unconditioned response of not wanting tissue damage. But organisms don't have to pick up hot objects and observe tissue damage to learn that hot objects hurt.

The pleasure of sex is even more clearly not due to classical conditioning. What would be the unconditioned response that reinforces it? Having a child? (That implies a delay of at least 9 months in the learning process.) What could possibly reinforce sex with condoms?

But I'm not sure I see the problem with seeing plants to be conscious.

Interesting -- I think that's a rare position, except among, say, panpsychists. The standard arguments against plant sentience are
  • Plants have no nervous system, and yet the nervous system seems necessary for consciousness in humans.
  • Consciousness evolved to improve organisms' fitness by allowing them to react intelligently to novel situations. Stress serves the purpose of informing an organism about tissue damage and/or motivating it to avoid danger. Plants can't locomote or make decisions and so would seem to derive no evolutionary benefit from consciousness.

I think we need to formulate a theory that explains why we feel what goes on in our cerebral cortex but not the stuff that goes on in the bits of our brain that are close to the spine and that are evolutionarily older.

Yes, that's a fascinating and important question. :P

Re: Sentience and brain size

Postby DanielLC on 2009-05-11T15:00:00

When I mentioned Pavlov, I was referring to operant conditioning, not classical conditioning. I don't think anybody was confused by this, but I think I should correct it anyway.

Before you touch something hot, you don't see anything wrong with it. After you touch it, you want to avoid touching hot objects. Ergo, touching hot objects generates negative utility.

I suspect that it's not only the cerebral cortex that generates qualia. It's just the only part that generates the qualia of thinking that we have qualia.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby RyanCarey on 2009-05-13T07:39:00

I'd misused the phrase Pavlovian conditioning too (sorry).

I suppose we have to resolve some features of consciousness that might seem contradictory:
1) We can feel pleasure or pain from events (e.g. sexual activity) the first time that they occur and long before we see their outcomes. In this way, our consciousness predicts the evolutionary favourability of an event really accurately.
2) We can have consciousness of things that aren't really happening. You can experience pain referred from your heart to your arm. You can experience (phantom) pain from an arm that has long been amputated. So our consciousness is sometimes rather inaccurate at representing what has occurred.

Pavlovian conditioning (association of one event with another) can only explain some consciousness. As Alan explained, we don't learn that sex is enjoyable because a baby comes along afterward each time! More seriously, Pavlovian conditioning can transfer our experience of one event onto another event. But eventually, there must be some event that we were conscious of first, and we need to decide the origin of that consciousness.

Now, the idea that feelings are what guide our behaviour is intuitive. On that view, consciousness is an evolutionary adaptation. This explains 1, and 2 can then be described as a misfiring of evolution. However, it must face the objection: "But we already understand the behaviour of all of the components of our brain to be determined by their previous physical states. Why include emotion in our model of the brain's operation?"
You can read my personal blog here: CareyRyan.com

Re: Sentience and brain size

Postby Brian Tomasik on 2009-05-13T14:48:00

Ryan, I agree with your explanations of 1 and 2. And your final question is a good one -- indeed, if I understand it correctly, it's precisely the hard problem of consciousness: "Why doesn't all this information-processing go on 'in the dark,' free of any inner feel?" (to quote David Chalmers).

Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-08T21:59:00

A friend of mine responded to the "Towards Welfare Biology"-related point from the original post as follows:
I would just say that hedonic experiences basically ARE patterns of activity propagating through a nervous system in order to change that nervous system's large-scale behavior, both momentary (approach/flight) and in the long term (positive and negative reinforcement, allocation of attention to learning new patterns of whatever level of complexity a given nervous system needs to allocate attention to in order to learn, etc).


I also had the following exchange with him:
[Friend]: How could pain experience possibly not be proportional to brain size? Do only the first set of neurons in a given brain count? If you put many brains parallel to one another in a single skull do the qualia go away?

[Me]: What about the suggestion that conscious experience only requires doing the right algorithm, regardless of how much hardware it's run on? That seems implausible, but perhaps not massively so. For instance, what if conscious experience of pain were like opening a text file on an operating system: You can do it just as well on a mobile-phone OS or on Windows Vista, but the computational burden of doing so is very different.

[Friend]: Is opening two copies of a text file on two versions of Vista different from opening one copy on one computer with twice the voltage in its logic gates?

[Me]: I guess it depends on what sense of "different" is relevant to talk about. The former allows two people in two different places to read the file, while the latter doesn't. But maybe at the fundamental physical level, they're identical insofar as they both involve the same sorts of physical manipulations -- just, in the two-copies case, those operations can be more separated in space and time. I guess the question hinges on where exactly the consciousness comes from?


A second friend recommended "reading up on the literature on what it is to implement a computation," including Nick Bostrom's article, "Quantity of Experience: Brain-Duplication and Degrees of Consciousness." That piece includes a thought experiment suggesting that, indeed, one version of Vista running with twice the wire voltage is equivalent to two versions running separately: The basic idea is to imagine gradually splitting the wires of the logic gates down the center with an insulator and then eventually separating the two halves. Bostrom goes on to propose thought experiments suggesting that the number of instances of a conscious experience can be not just one, or two, or ten, but potentially 0.85 or 1.6. These thought experiments are done with computer components, but footnote 8 (p. 189) suggests that the same types of ideas would apply to biological brains-in-vats.

Bostrom replies to my objection that consciousness seems binary (p. 196):
‘‘How can this be the case?’’ one might ask. ‘‘Either the experience occurs or it doesn’t. How can there be a question of quantity, other than all or nothing?’’ But [when we degrade the reliability of a computer implementing a consciousness algorithm] the underlying reality, the system upon which the experience supervenes, does not change abruptly from a condition of implementing the relevant program to a condition of not implementing it. Instead, the supervenience base changes gradually from one condition to the other. It would be arbitrary and implausible to suppose that the phenomenology did not follow a similarly gradual trajectory.

Footnote 11 (p. 196) adds:
The point here is that systems that have associated phenomenal experience can have it in varying amounts or degrees of ‘‘intensity,’’ even when the duration and the qualitative character of the experience does not vary. Moreover, this particular quantity of degree does not come only in integer increments. Formally, this is no more mysterious than the fact that sticks come in different lengths and that length is a continuous variable (at least on the macroscopic scale).


At the end of the paper, Bostrom notes that an alternative response to the "fading qualia" thought experiment (also mentioned in the opening post) is that, as biological neurons are replaced by silicon, the qualia that the mind generates do not fade in a qualitative sense (e.g., red becomes less red) but merely in the fractional degree to which that experience is being implemented.

I didn't see much discussion of Bostrom's paper in the academic literature, though perhaps there are some responses? There's an email-list discussion here.

Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-09T04:27:00

Felicifia participant EmbraceUnity posted some interesting contributions to this discussion here.

Re: Sentience and brain size

Postby DanielLC on 2009-07-09T04:33:00

In response to the idea that it might be true that consciousness is binary and insects are as conscious as humans, it seems at least as likely that consciousness is binary and every tiny piece of the human brain is as conscious as the full human.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-09T04:41:00

DanielLC, what if all those little parts of the human brain are not running the full consciousness algorithm? Windows Vista opening a text file consists of a number of function calls to lots of subroutines, but none of those subroutines individually would make the file open.

Perhaps this analogy doesn't hold well in practice, because brains -- unlike operating systems on conventional computers -- run in parallel?

Re: Sentience and brain size

Postby DanielLC on 2009-07-09T15:51:00

I just said it was as likely as an insect's brain being sentient. It seems just as likely that a random tiny piece of my brain is running it as that the brain of an insect is.

It isn't so much brains running in parallel as running in fractals. You could cut half of someone's brain off and they're still a person. You could probably cut out a tiny piece of it and still do classical conditioning on it.

Does anyone know any research as to the extent to which that can be done? I strongly suspect some pieces work that way, but I doubt every piece does.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-09T21:03:00

That's an interesting question, DanielLC. I think it makes sense that if an insect-brain-sized chunk of human brain, in isolation from the rest of the human brain, does the same sorts of operations that a full insect brain can, then the insect brain is no more likely conscious than the chunk of human brain. But I doubt whether that is the case. I'm not aware of particular studies on your question, but I do know that in hemispherectomy patients, it takes time for the functions usually performed by the excised side of the brain to move to the other side. Describing a patient Nico, this page says, "it appears that Nico's so-called right-hemisphere skills--mathematics, visual arts, and music--have migrated to the left hemisphere." The phrase "migrated" makes me think those skills weren't already present and operating before his left brain adapted itself.

Re: Sentience and brain size

Postby DanielLC on 2009-07-09T22:40:00

As I've mentioned elsewhere, I believe consciousness is based on classical conditioning, or at least that you need it to tell what's pleasure or pain. Because classical conditioning is linked to learning, I suggested using that to tell if something is conscious. If the skills were already in that part of the brain, it would tell me nothing. The fact that they can migrate shows that that little piece of the brain is capable of learning.

That being said, learning isn't the same as classical conditioning. That is why I'd want to see a study on trying it on a piece of a brain.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-15T23:49:00

Dave Pearce mentioned the following excellent comments:

We know from microelectrode studies that stimulation of certain tiny areas of the limbic system can be agonizing or delightful - whereas stimulation of large parts of the prefrontal cortex or cerebellum induces very little in the way of intense experience. So IMO a much stronger case could be made that emotional intensity correlates with the size/cellular density and connectivity of these limbic areas rather than absolute brain size.

Also, we only have 40,000-odd dopamine cells. Complete loss of dopamine cells in advanced Parkinson's disease often leaves its victims completely apathetic, whereas dopaminergic drugs intensify experience.

Re: Sentience and brain size

Postby DanielLC on 2009-07-16T01:23:00

I don't find that argument very compelling. Correlation does not imply causation. The reverse is also true. Those areas may cause the emotion, but there are a variety of ways your brain could amplify them before it affects the rest of the brain. It's only the result that has significant evolutionary pressure, so the actual size of those areas will be pretty much random, and the amplification will cancel it out.

Think of it like this: those areas store the number for happiness, but whatever unit it's using has to do with the rest of the structure of the brain.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-16T01:33:00

DanielLC wrote:Think of it like this: those areas store the number for happiness, but whatever unit it's using has to do with the rest of the structure of the brain.


Isn't the brain somewhat compartmentalized, such that the limbic system could be doing its own thing without much influence by the rest? Actually, that's part of the triune brain hypothesis, which -- while largely discredited now -- may have some truth behind it. And a priori, isn't it simpler to assume that the limbic system acts on its own, rather than supposing that the neocortex, etc. are intimately involved?

But maybe your model (a number plus units) is totally different. What reason is there to choose your model? Why would human units be different from frog units?

Re: Sentience and brain size

Postby DanielLC on 2009-07-16T01:51:00

I don't mean that the rest of the brain influences the limbic system. I mean the limbic system influences the rest of the brain. Happiness is intimately linked to learning and a very important part of a brain. If the limbic system is the only part of the brain that feels happiness, it is the only part that will learn. I think we can safely say that that isn't the case. The idea that how much happiness is felt is decided by a small area of the brain, on the other hand, would be entirely expected.

Human units would be different from frog units because the limbic system would be linked to the rest of the brain with different strengths. There is no reason for a specific strength of a connection, and there is little reason for a specific size of that area, but there is much reason for a specific amount of happiness. Either of the former alone would be a horrible indicator of the latter.

And now for something else:
There is some chance that consciousness is binary, and a certain chance it isn't. How would we compare the two? If we were to say that the binary consciousness of one organism was the same as the analog consciousness of an insect, the expected consciousness of a small animal would be significantly smaller than if we were to say that the binary consciousness of one organism was the same as the analog consciousness of a human. There's no reason to say it's one or the other, or even something much larger or smaller.

A while ago on the old forum, I suggested dividing by the standard deviation of utility, which would fix this paradox along with some infinity paradoxes. Unfortunately, I realized that that was totally incompatible with eternalism, since the standard deviation would always be zero: there is going to be one specific amount of utility.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-16T23:02:00

DanielLC wrote:There is some chance that consciousness is binary, and a certain chance it isn't. How would we compare the two?


If consciousness is binary, then our consciousness is binary, so it would be equivalent to the experience of a human, no? In other words, if consciousness is binary, there's a whole lot more total emotional intensity in the world than if it's not!

I prefer not to divide by the standard deviation (however that's defined -- perhaps across individual organisms?). Suppose we're not sure which type of world we're in: It could be world A or world B. In world A, organisms only ever suffer from pinpricks and enjoy pleasant breezes. In world B, they either get burned at the stake or sent to paradise. Only one of worlds A and B actually exists (the other is a mistaken notion on our part), but at the moment, we assign them equal probability. We come across a button that, if we're in world A, gives two people a pleasant breeze, but if we're in world B, causes someone to be burned at the stake. Dividing by the standard deviation given a particular possible way the world is would suggest pressing the button. But I don't think we should.
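To see why I distrust the standard-deviation normalization, here is a toy Python calculation with made-up magnitudes (all the numbers are mine, chosen only to exhibit the structure of the example): breezes and pinpricks are worth plus or minus 1, paradise and burning at the stake plus or minus 1000.

    # Hypothetical hedonic magnitudes (illustrative only).
    world_A = {"breeze": 1.0, "pinprick": -1.0}        # mild world
    world_B = {"paradise": 1000.0, "burned": -1000.0}  # extreme world

    def spread(world):
        # Stand-in for "the standard deviation of utility" within a world:
        # here simply the typical magnitude of its experiences.
        return max(abs(v) for v in world.values())

    p_A = p_B = 0.5  # equal credence in the two hypotheses about which world exists

    # Pressing the button: two breezes if world A is real, one burning if world B is real.
    raw = p_A * (2 * world_A["breeze"]) + p_B * world_B["burned"]
    normalized = (p_A * (2 * world_A["breeze"]) / spread(world_A)
                  + p_B * world_B["burned"] / spread(world_B))

    print(raw)         # -499.0 -> don't press
    print(normalized)  # +0.5   -> the normalized rule says press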

Re: Sentience and brain size

Postby DanielLC on 2009-07-16T23:39:00

Personally, I suspect that we don't know how much consciousness we have. You can easily make a smaller brain that thinks it has as much consciousness as a human, so if consciousness is proportional to brain size, that would mean the smaller brain is simply wrong. As such, you can't just treat human consciousness as a known constant to compare the two universes.

How can you tell how much consciousness you have? If you mean by the intensity of your emotion, then what you're really talking about is happiness being binary. But we know happiness isn't binary because we know you can feel different intensities of emotion.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-17T00:05:00

I'm not sure we need to know how much consciousness a human has. Why doesn't the following reasoning suffice?

Let a unit of human-level intensity be 1 iu ("intensity unit"), which is some unknown amount. If consciousness is binary, then all conscious organisms also have 1 iu of intensity (different from amount of happiness or pain). If consciousness is not binary, then maybe smaller brains have smaller intensities. Say a chicken has only 0.1 iu's in that case. Then the expected suffering of the chicken is

Prob(consciousness is binary) * 1 + [1 - Prob(consciousness is binary)] * 0.1.

Re: Sentience and brain size

Postby DanielLC on 2009-07-17T00:38:00

For the same reason the following reasoning won't work:

Let a unit of insect-level intensity be 1 iu ("intensity unit"), which is some unknown amount. If consciousness is binary, then all conscious organisms also have 1 iu of intensity (different from amount of happiness or pain). If consciousness is not binary, then maybe larger brains have larger intensities. Say a chicken has 10 iu's in that case. Then the expected suffering of the chicken is

Prob(consciousness is binary) * 1 + [1 - Prob(consciousness is binary)] * 10.

This way, there's a lot more total emotional intensity if consciousness isn't binary. There are two separate unknowns. You can't assume that they're the same.
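The incomparability can be made numerical. Here is a small Python sketch with assumed numbers (probability of binary consciousness 0.5, and a 1 : 10 : 100 insect : chicken : human scale in the analog case -- all figures mine, for illustration): the chicken's expected intensity relative to a human's depends on which creature's experience you pin at 1 iu.

    p_binary = 0.5  # assumed probability that consciousness is all-or-nothing

    # Assumed analog-case intensities on a 1 : 10 : 100 insect : chicken : human scale.
    analog = {"insect": 1.0, "chicken": 10.0, "human": 100.0}

    def expected_intensity(animal, unit):
        # 1 iu is defined as the intensity of `unit`'s experience.
        # Binary case: every conscious animal has exactly 1 iu.
        # Analog case: intensity scales as analog[animal] / analog[unit].
        return p_binary * 1.0 + (1 - p_binary) * (analog[animal] / analog[unit])

    for unit in ("human", "insect"):
        chicken = expected_intensity("chicken", unit)
        human = expected_intensity("human", unit)
        print(unit, "units -> chicken/human =", round(chicken / human, 3))

    # human units -> chicken/human = 0.55
    # insect units -> chicken/human = 0.109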
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-17T00:46:00

Hmm, what's wrong with that? Sure, the units are different, but utility functions (even hedonistic ones) are always non-unique up to a scale factor. We never need to compare numbers computed using human iu's with those computed using mouse iu's. We just use one or the other and go. Or am I missing something?

Re: Sentience and brain size

Postby DanielLC on 2009-07-17T02:10:00

If you use human iu's, there is more total emotional intensity when consciousness is binary. If you use mouse iu's, there's more if it isn't. How do you decide which to use?

It's possible that the analog amount is utterly ridiculous. Should we just assume it's analog because it's possible that it's analog and a human's sentience is a googol times as much as what it would be if it was binary?
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-17T02:42:00

I guess I didn't read your previous comment carefully enough. That is a problem -- you're right!

I'm stumped as to how to solve it. I suppose we'd just have to decide on a conversion factor between binary intensity and non-binary intensity, but where to set that isn't obvious.

In fact, because "intensity" is basically a measure of "how much we care," this is actually a special case of the general problem of aggregating over different value systems when you're not certain which one to use. I don't think there's an objective moral truth (rather, I subscribe to emotivism), but if I did, I would have this problem in trying to maximize my expected rightness.

Re: Sentience and brain size

Postby DanielLC on 2009-07-17T04:21:00

I bypassed the issue by pointing out that if an insect has sentience, so does a tiny piece of a human brain, so larger brains are more important either way.

It seems like if it's binary it should be treated as a constant, but if it's analog it could be anything. If you do it that way, and accept that there is a non-negligible probability that the analog value is something huge -- which doesn't seem unexpected, since in the universe it's hard to find a constant that isn't -- then the possibility of its being analog and huge would be the deciding factor.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby JamesEvans on 2009-07-20T00:40:00

Just adding my thoughts from an email exchange I've had with Alan:

I'm immediately skeptical when humans say we should worry most about human suffering. Reminds me of what Xenophanes said about horses drawing gods which looked like horses. I would guess that brain size isn't what matters to the perception of pain or pleasure. Small regions can be stimulated to elicit pain or pleasure. So probably there are some functional properties of neurons which are responsible for generating the experience. But neurobiology aside, if 50 billion animals are slaughtered each year and live completely valueless lives, it isn't necessary to understand the nitty-gritty to come to the conclusion that non-human animal welfare must take priority on utilitarian grounds. But the biological question is still extremely interesting and important and can be empirically tested. Have I recommended Revonsuo's "Inner Presence: Consciousness as a Biological Phenomenon"?

Also what is the total weight of non-human brains compared to human brains? Taking the total would also be telling even if we accepted the theory.


Re: Sentience and brain size

Postby Brian Tomasik on 2009-07-20T02:46:00

JamesEvans wrote:Also what is the total weight of non-human brains compared to human brains? Taking the total would also be telling even if we accepted the theory.


I replied, "I can't find great figures, but here's a start. Ants have 2.5 * 10^5 brain cells while humans have 10^10. The number of humans is 10^10, while the number of arthropods is 10^18. Assuming an ant is typical, that implies ~10^23 arthropod neurons and 10^20 human neurons. The disparity is still big, but not nearly so big as I usually picture it."

Re: Sentience and brain size

Postby Brian Tomasik on 2009-10-26T06:24:00

I updated my essay on the hard problem, and it now contains some additional discussion of sentience and brain size: See, in particular, the bullet point, "Is degree of pain proportional to the amount of neural matter involved in generating it?" To some extent, the exposition raises more questions than answers -- I haven't completely decided what I think about many of these issues.

Re: Sentience and brain size

Postby Brian Tomasik on 2010-04-25T12:37:00

The article "Insects may have consciousness and could even be able to count, claim experts" and the associated research paper seem relevant to this discussion. In particular, here's the abstract of the paper:
Attempts to relate brain size to behaviour and cognition have rarely integrated information from insects with that from vertebrates. Many insects, however, demonstrate that highly differentiated motor repertoires, extensive social structures and cognition are possible with very small brains, emphasising that we need to understand the neural circuits, not just the size of brain regions, which underlie these feats. Neural network analyses show that cognitive features found in insects, such as numerosity, attention and categorisation-like processes, may require only very limited neuron numbers. Thus, brain size may have less of a relationship with behavioural repertoire and cognitive capacity than generally assumed, prompting the question of what large brains are for. Larger brains are, at least partly, a consequence of larger neurons that are necessary in large animals due to basic biophysical constraints. They also contain greater replication of neuronal circuits, adding precision to sensory processes, detail to perception, more parallel processing and enlarged storage capacity. Yet, these advantages are unlikely to produce the qualitative shifts in behaviour that are often assumed to accompany increased brain size. Instead, modularity and interconnectivity may be more important.

Re: Sentience and brain size

Postby Brian Tomasik on 2010-05-17T07:36:00

There's an excellent report, "Aspects of the biology and welfare of animals used for experimental and other scientific purposes" that's linked to from this page. Pages 26-27 of the report (PDF pages 73-74) include the following discussion as section 2.3.2:
Although, as mentioned above, it is better to judge animal cognition and awareness by their functioning, it is still of interest to consider the numbers of brain cells available for processing. As noted later in relation to spider capabilities, sophisticated processing can occur with smaller numbers of cells at the expense of the rate of processing. Spiders may be clever if allowed enough time. The remainder of this section refers solely to number of cells.

Studies of complexity of brain function can give much information about ability as well as about welfare (Broom and Zanella, 2004). One measure of brain complexity is the total numbers of nerve cells present in the central nervous systems, for these cells are the basic elements responsible for neural integration, memory and the generation of behaviour. Nerve cell numbers in central nervous systems vary enormously across different animal groups with around 10^10 in mammalian brains, 10^8 in cephalopod brains (Young, 1971), 10^6 in the nervous systems of social insects such as honey bees (Giurfa, 2003), 10^5 in other insects (Burrows, 1996), 10^4 in noncephalopod molluscs, such as Aplysia (Kandel, 2001) and less than this in simpler invertebrates, such as leeches, worms and nematodes (Williams and Herrup, 1988). This rank order seems well correlated with the performance ability and behavioural sophistication of the different animal groups.

Re: Sentience and brain size

Postby Richard Pearce on 2010-09-29T16:01:00

'We might suspect that an organism's number of such firing nerve cells is roughly proportional to total brain size (?), in which case the implications for how much we care about various animals would be roughly the same' (Alan Dawrst). Do you have evidence to back up this suspicion? Or is your suspicion merely guesswork?
I would tread cautiously on the 'animals feel less pain than humans route' as it has gathered momentum and is repeated often by people who fail to substantiate that claim.
Here I will say why I doubt that non-human mammals feel less pain and fear than we do. I will recall from memory, so forgive any inaccuracies, especially David Pearce. David says in either 'The Hedonistic Imperative' or 'The Post-Darwinian Transition' that an organism's capacity to feel pain, anxiety or pleasure relates to the size and density of neurons in the part of the brain responsible for feeling that emotion, in proportion to the body size of the organism. He also says that non-human mammals have proportionately larger brain parts that sense fear and anxiety than humans have. If this is correct, then non-human mammals have the capacity to feel more fear and anxiety than humans.
Here I present another reason to doubt that non-human animals feel less pain than humans. Evan Thompson has written a fascinating essay 'Empathy and Consciousness' on mirror neurons and their relationship to empathy in sentient organisms. Here is a link to that essay:

http://www.imprint.co.uk/pdf/Thompson.pdf

His essay explains that our mirror neurons allow us to both learn physical activities by copying others and feel some of the pain that others feel. He says that studies suggest that:

1. the closer another person's likeness is to oneself, the more accurately one's mirror neurons will be able to represent that person's actions.
2. the greater the other person's likeness to oneself, the more intensely we can feel the pain that they feel.

Does this resonate with anyone's experience? For example, does anyone here sympathise more with someone who looks very much like him or herself, whether male or female? There are a few women I have met who looked like me and shared some of my idiosyncrasies, and on seeing them I was drawn into feeling what seemed like their pain and pleasure. I felt a more intense bodily and emotional experience of the present, because I was suddenly feeling both my own feelings and, I imagine, theirs almost as intensely as they did.
From Thompson's evidence, it also makes sense that identical twins feel intense empathy. Their similarity allows their mirror neurons to create in the mind of one twin the pain, actions, pleasure etc. of the other. What genetic benefits do mirror neurons have? They would help organisms favour exploiting organisms with whom they have a more distant genetic relationship.
Mirror neurons might also explain the regular occurrence of sexual partners looking similar to each other. If two sexual partners are similar, then they will have a child who looks similar to both of them, and they will both care for it intensely. In caring intensely for the child, the male will be more likely to nurture the child instead of philandering. This is an advantage to the mother, whose child is reared by a caring father, and it is also an advantage to the father, who can be more sure that the child is his (either consciously sure in the case of humans, or sure in genetic terms if we personify his genes in evolutionary speak).
So our lack of mirror neurons that accurately represent the pain of non-human animals should make us wary of guessing that they feel less pain than we do. The makeup of our mirror-neuron network makes us feel much of the pain of fellow humans and less of the pain of non-humans.


Re: Sentience and brain size

Postby Richard Pearce on 2010-09-30T08:25:00

'I would tread cautiously on the 'animals feel less pain than humans route' as it has gathered momentum and is repeated often by people who fail to substantiate that claim' (Richard Pearce).
Sorry for the way I expressed that, Alan. I typed it quickly and did not realise it would sound almost threatening. Oops. Please take my sentiment as more gentle than the above expression would have you think.


Re: Sentience and brain size

Postby Brian Tomasik on 2010-10-10T22:55:00

Thanks for the comments, Richard!

Richard Pearce wrote:Do you have evidence to back up this suspicion? Or is your suspicion merely guesswork?

Since writing that comment, I've realized that the question of "how much pain an organism feels" has no objective answer but depends on how much we want to care about the organism. Here's a section from my "How the Hard Problem Matters" (see the original source for hyperlinks):

•Is degree of pain proportional to the amount of neural matter involved in generating it? Answer: Do we want to care about algorithms run on a greater amount of hardware more than those same algorithms run on less hardware? I think I probably do, especially in light of Nick Bostrom's thought experiments, but I'm not entirely sure -- my intuitions are still fuzzy.

Proportioning concern based on brain size may be a rough heuristic, but I think we should be wary of extending it too far. For instance, suppose certain insects do run algorithms that self-model their own reaction to pain signals in a similar way to how this happens in humans. (See, for instance, pp. 77-81 of Jeffrey A. Lockwood, “The Moral Standing of Insects and the Ethics of Extinction” for a summary of evidence suggesting that some insects may be conscious.) If bees have only 950,000 neurons while humans have 100 billion, should we count human suffering exactly 105,263 times as much as comparable bee suffering? I would argue not, for a few reasons.

◦The relevant figures should not be the number or mass of neurons in the whole brain but only in those parts of the brain that “run the given pieces of pain code.” I would guess that human brains contain a lot more “executable code” corresponding to non-pain brain functions that bees lack than vice versa, so that the proportion of neurons involved in pain production in insects is likely higher.
◦The kludgey human brain, presumably containing significant amounts of "legacy code," is probably a lot "bulkier" than the more highly optimized bee cognitive architecture. This is no doubt partly because evolution constrained bee brains to run on small amounts of hardware and with low power requirements, in contrast to what massive human brains can do, powered by an endothermic metabolism. Think of the design differences between an operating system for, say, a hearing aid versus a supercomputer. If we care about the number of instances of an algorithm that are run more than about, e.g., the number of CPU instructions executed, the difference between bees and humans shrinks further.

These points raise some important general questions: How much extra weight (if any) should we give to brains that contain lots of extra features that aren't used? For instance, if we cared about the number of hearing-aid audio-processing algorithms run, would it matter if the same high-level algorithm were executed on a device using the ADRO operating system versus a high-performance computer running Microsoft Vista? What about an algorithm that uses quicksort vs. one using bubblesort? Obviously these are just computer analogies to what are probably very different wetware operations in biological brains, but the underlying concepts remain. (I should add that current computers don't presently run self-modeling algorithms anywhere near similar enough to those of conscious animals for me to extend them moral concern.)
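To make the neuron-counting point concrete, here is a toy Python calculation. The pain-circuit fractions below are entirely made up for illustration; the point is only that the whole-brain ratio shrinks once we weight just the neurons "running the pain code":

    human_neurons = 100e9
    bee_neurons = 950e3

    whole_brain_ratio = human_neurons / bee_neurons
    print(round(whole_brain_ratio))  # ~105,263

    # Hypothetical fractions of each brain devoted to pain processing.
    human_pain_fraction = 0.01   # made up: most human neurons do non-pain work
    bee_pain_fraction = 0.10     # made up: a larger share of a small, optimized brain

    pain_ratio = (human_neurons * human_pain_fraction) / (bee_neurons * bee_pain_fraction)
    print(round(pain_ratio))  # ~10,526 -- already an order of magnitude smaller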


Richard Pearce wrote:He also says that non-human mammals have proportionately larger brain parts that sense fear and anxiety than humans have. If this is correct, then non-human mammals have the capacity to feel more fear and anxiety than humans.

A great point -- one I made in the above quotation as well. Of course, I still think the relevant question is probably about absolute rather than relative amounts of neural tissue, but my intuitions are still fuzzy here.

Richard Pearce wrote:Does this resonate with anyone's experience? For example, does anyone here sympathise more with someone who looks very much like him or herself, whether male or female? There are a few women I have met who looked like me and shared some of my idiosyncrasies, and on seeing them I was drawn into feeling what seemed like their pain and pleasure. I felt a more intense bodily and emotional experience of the present, because I was suddenly feeling both my own feelings and, I imagine, theirs almost as intensely as they did.

I agree with the mirror-neuron point in general. Indeed, that's the reason I care about, say, rats but not rocks.

Personally, I can't think of examples of animals that I care more about because of similarity to myself (for instance, I would be just as appalled to see Hitler tortured as I would to see a friend tortured), but perhaps this reflects years of cognitive override on my part. Hard to say. I do think I'm rather unusual in this regard.

Richard Pearce wrote:So our lack of mirror neurons that accurately represent the pain of non-human animals should make us wary of guessing that they feel less pain than we do.

Indeed! I couldn't agree more.

Re: Sentience and brain size

Postby EmbraceUnity on 2012-02-09T08:20:00

What does everyone think of Metcalfe's Law in relation to this question? Is the brain a network? If not, explain how and why it is different, or what causes you to doubt that it is.

https://en.wikipedia.org/wiki/Metcalfe%27s_law


Re: Sentience and brain size

Postby DanielLC on 2012-02-09T18:46:00

It's an economic law based on the idea that the value of having a node is proportional to the number of nodes it connects to. Why would it relate to this?

Also, the brain isn't a network like the ones that law is talking about. Each neuron is indirectly connected to every other neuron, but those indirect connections don't matter much. It can't just pass information to an arbitrary neuron.
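For concreteness, here is a small Python sketch of what the two weighting rules would imply; the neuron counts are round placeholder figures, not measurements:

    def linear_weight(n_neurons):
        # Weight proportional to the number of neurons.
        return n_neurons

    def metcalfe_weight(n_neurons):
        # Metcalfe-style weight: proportional to the number of possible pairwise
        # connections, roughly n^2 for large n.
        return n_neurons * (n_neurons - 1) / 2

    bee, human = 1e6, 1e11  # round placeholder neuron counts

    print(linear_weight(human) / linear_weight(bee))      # 1e5
    print(metcalfe_weight(human) / metcalfe_weight(bee))  # ~1e10

    # A quadratic rule makes small brains count vastly less than a linear rule does,
    # which is why the choice of rule would matter morally -- if connections mattered
    # intrinsically at all.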
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2012-02-12T02:34:00

Thanks for the reference, EmbraceUnity.

Certainly the brain is a kind of network, but as DanielLC said, "economic value" for a network is different from the "ethical value" of happiness. The latter is determined by what the neurons are doing, i.e., are they implementing the processes for pleasure-glossing of experience? If they aren't, then whatever connections they have are irrelevant. If they are, then again the number of connections doesn't matter, so long as they have the connections that they need. And the connections in this case provide only instrumental value, not intrinsic value.

Well, at least that's what my intuition says at first glance. However, by the same logic, the amount of neural tissue itself shouldn't matter either, in which case brain size has no relation to sentience. Is that true? I'm not sure, as this thread has shown. So I suppose one could argue for some importance of network connections as one factor determining how much it matters when a brain of a given size suffers.

Re: Sentience and brain size

Postby Ruairi on 2012-02-12T23:53:00

Maybe he means that a more connected brain is more sentient?

Or sorry, maybe you just covered that -- is the last part of your post arguing for the first or second view you present?

Re: Sentience and brain size

Postby Brian Tomasik on 2012-02-14T12:23:00

Ruairi wrote:is the last part of your post arguing for the first or second view you present?

I guess I meant that if you think brain size in general matters at all, then you might care about number of connections as at least one part of a holistic assessment about which brains are more vs. less sentient. However, I'm not sure if I agree that brain size in general does matter at all (I'm still undecided). I lean toward biting the bullet of a thought experiment that says if you take the same amount of neural tissue and divide it into two different brains, then you thereby double the total happiness/suffering involved, since each of the two brains now counts as a full organism.

At the very least, almost everyone agrees that there's no "law of conservation of sentience" similar to the "law of conservation of mass/energy" because if you arrange formerly non-sentient matter in the right way, it becomes sentient. I say "almost everyone" because panpsychists might remonstrate. :)

Re: Sentience and brain size

Postby Pablo Stafforini on 2012-03-06T10:20:00

At the very least, almost everyone agrees that there's no "law of conservation of sentience" similar to the "law of conservation of mass/energy" because if you arrange formerly non-sentient matter in the right way, it becomes sentient. I say "almost everyone" because panpsychists might remonstrate.

Even some panpsychists would agree with that statement. David Pearce, for example, believes that there is more pain in an ordinary painful episode than would be in an aggregate of its constituent pain "micro-qualia". (At least that is how I interpret his position. An alternative interpretation could be that painfulness is not a building block of conscious experience, but that it "emerges" when the constituent micro-qualia are organized in certain ways. The problem with this position, however, is that now this emergence of macro-qualia from micro-qualia becomes as hard to explain as the emergence of mind from matter that posed the original hard problem of consciousness, and that panpsychism was supposed to solve.)
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth

Re: Sentience and brain size

Postby Pablo Stafforini on 2012-03-06T11:28:00

Coincidentally, I just stumbled upon a post by Jesper Östman that raises the same objection I hinted at in my parenthetical comment above. Here is the relevant part:
A unified qualia field of e.g. a red and a blue qualia (two bits of mind-dust) isn't identical to the mere sum of the red and blue qualia. Since there is no identity, the field is a new emergent thing, distinct from the red qualia and the blue qualia. But now the existence of these other things doesn't do anything to explain the character of this new thing. Sure, it would be a law that a unified red-blue field would arise when we have a red qualia, a blue qualia and some further physical conditions. However, this law doesn't seem less arbitrary than a law saying that the purely physical correlates of the qualia and the further physical conditions give rise to a unified red-blue field.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth

Re: Sentience and brain size

Postby Brian Tomasik on 2012-03-07T03:58:00

Thanks, Pablo! I don't know if I fully understood Jesper's point when he said it originally, but in this context it makes perfect sense.

Re: Sentience and brain size

Postby Brian Tomasik on 2012-06-23T21:15:00

From Wikipedia's "Pain in invertebrates":
One suggested reason for rejecting a pain experience in invertebrates is that invertebrate brains are too small. However, brain size does not necessarily equate to complexity of function.[8] Moreover, weight for body-weight, the cephalopod brain is in the same size bracket as the vertebrate brain, smaller than that of birds and mammals, but as big or bigger than most fish brains.[9][10]

The article quotes Charles Darwin:
It is certain that there may be extraordinary activity with an extremely small absolute mass of nervous matter; thus the wonderfully diversified instincts, mental powers, and affections of ants are notorious, yet their cerebral ganglia are not so large as the quarter of a small pin’s head. Under this point of view, the brain of an ant is one of the most marvellous atoms of matter in the world, perhaps more so than the brain of man.

Re: Sentience and brain size

Postby Jonatas on 2012-11-17T18:07:00

I have some observations about some objections.

1. While men have larger brains, women have a higher neuronal density (their neurons are more tightly packed into a smaller area) and higher connectivity, rendering a similar functionality. I think that brains' functionality is largely limited by available energy rather than by area.

2. Intelligence is moderately correlated with absolute brain size (about 0.4) among humans, but the causation of intelligence is much more complex than this. There is also the encephalization quotient, but it doesn't help much. One could count the number of neurons left for cognitive purposes after those involved with bodily functions are subtracted, and consider their degree of connectivity, myelination, etc. Whales and elephants have larger brains, but humans have more cortical neurons than either of them (human brains, like female brains relative to male ones, are much more tightly packed than those of larger animals).

http://www.subjectpool.com/ed_teach/y3p ... igence.pdf

http://www.ploscompbiol.org/article/inf ... bi.1000395

According to Wikipedia, the cingulate cortex seems to be highly involved with feelings of suffering and "unpleasantness" in humans, so number of cortical neurons may actually be a good proxy for capacity for bad feelings.

http://en.wikipedia.org/wiki/Cingulate_cortex

http://en.wikipedia.org/wiki/Suffering# ... psychology

I think that feeling intensity should be proportional to the number of neurons involved in causing bad and good feelings, and perhaps bear some relationship to the number of neurons involved in consciousness, and to the bandwidth between the two, if consciousness is a separate process. For causing superhuman intensities of good feelings, it seems to be necessary to increase the number of neurons (or related artificial substrates) and possibly increase the efficiency of their organization. As pointed out, there seems to be a relationship whereby a greater quantity of impulses increases the intensity of feelings in humans. I based the estimated correlation on suffering in humans depending on the cingulate cortex, a region whose size should correlate with the absolute number of cortical neurons, and also on integration of neuronal networks as a degree of consciousness. However, this is still quite imprecise, because people and animals vary in their sensitivity to pain, for example, according to their genes. Giulio Tononi's theory of consciousness as integrated information seems to support the quantitative hypothesis.

http://www.biomedcentral.com/1471-2202/5/42/

http://www.youtube.com/watch?v=AgQgfb-HkQk

The first study I linked to in this post has a table estimating the number of cortical neurons in some animals. For instance:

Humans - 11,500,000,000
African elephants - 11,000,000,000
Chimpanzees - 6,200,000,000
Horses - 1,200,000,000
Dogs - 160,000,000
Rats - 15,000,000
Mice - 4,000,000

There are different estimates for the number of cortical neurons in humans. This other study estimates the number of cortical neurons in average humans to be higher, 16,000,000,000. ( http://www.frontiersin.org/Human_Neuros ... .2009/full ). This other study estimates the number of cortical neurons in average human brains to be 13,500,000,000. ( http://www.sciencedirect.com/science/ar ... 1305000823 ). According to this study, the number of cortical neurons in humans ranges from 15 to 31 billion and averages about 21 billion. ( http://www.ncbi.nlm.nih.gov/pubmed/9215725 ).

The difference from mice to humans is quite steep, a 2,875-fold increase, perhaps more than one would intuitively think. Furthermore, brain connectivity and myelination should be taken into account, such that intelligence may actually be a better proxy for feeling capacity than number of neurons.

For cautiousness, we may still give a non-negligible probability to the hypothesis that intensity of feelings has been evolutionarily contained to a low number, similar between different species, regardless of number of neurons involved, in order not to overdo its role. But this case seems very improbable to me.

When estimating the capacity of different animals for bad feelings in decision theory, the resulting values of both hypotheses (relationship and non-relationship) can be combined, weighted by their probabilities. I would assign about a 95% chance to the relationship hypothesis and a 5% chance to the non-relationship hypothesis.

So, for instance, in decision theory I would take the bad feelings of mice to probably be worth approximately 20 times less than those of humans (0.05). If instead we assume a linear relationship between number of neurons and feelings as certain, their feelings would be estimated as about 2,875 times less intense.

(0.95 x 4,000,000) + (0.05 x 11,500,000,000) = 578,800,000
578,800,000 / 11,500,000,000 = 0.05
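Here is the same probability-weighted calculation written out as a small Python sketch (the 95%/5% split and the cortical-neuron counts are just the figures given above):

    P_SCALING = 0.95      # probability that intensity scales with cortical neurons
    P_NO_SCALING = 0.05   # probability that intensity is the same across species
    HUMAN_CORTICAL = 11_500_000_000

    def expected_weight(cortical_neurons):
        # Expected moral weight relative to a human under the two hypotheses.
        expected_equivalent = (P_SCALING * cortical_neurons
                               + P_NO_SCALING * HUMAN_CORTICAL)
        return expected_equivalent / HUMAN_CORTICAL

    mouse = expected_weight(4_000_000)
    print(mouse)      # ~0.0503, i.e. roughly "20 times less"
    print(1 / mouse)  # ~19.9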


Re: Sentience and brain size

Postby Jonatas on 2012-11-17T21:12:00

This article estimates the number of cortical neurons for pigs as 432,000,000. So in decision theory, giving a probability of 95% to the linear correlation hypothesis, pigs' feelings could be considered 12 times less valuable than those of humans, or 26 times less, assuming the linear correlation hypothesis as certain.

http://jeb.biologists.org/content/209/8/1454.abstract

Cows' brains weigh about 440g, compared to 532g for horses, which have 1,200,000,000 cortical neurons, so cows probably have around 1,000,000,000 cortical neurons. This would mean, in decision theory with a probability of 95% given to the linear correlation hypothesis, that cows' feelings could be considered 7.5 times less valuable than those of humans, or 11.5 times less valuable assuming the linear correlation hypothesis as certain.

http://faculty.washington.edu/chudler/facts.html

For birds and fishes, the comparison is probably not as straightforward as between different mammals, because their brain structures and neuron cells are different. However, one could roughly estimate.

The brain of a chicken is estimated to weigh about 4g. In comparison, a rat's brain weighs about 2g, and has 15,000,000 cortical neurons, giving chickens a figure of about 30,000,000 cortical neurons. However, birds have smaller neurons and a higher brain efficiency per weight. So it could be hypothesized that chickens have some 50,000,000 cortical neurons. So, in decision theory, giving a probability of 95% to the linear correlation hypothesis, chickens' feelings could be valued about 18 times less than those of humans, or about 230 times less, assuming a linear relationship between number of cortical neurons and feeling intensity as certain.

I couldn't find data on the number of neurons in fishes' brains, but a fish with a body weight of 1kg should have a central nervous system of about 1g. The brain of a 30g mouse weighs about 0.5g. However, while the telencephalon of a fish occupies an area of about 20% of its central nervous system, the proportion of a mouse's cortex is much higher, about 50%. So I would estimate that a fish of 1kg may have a number of "cortical" neurons comparable to that of a mouse of 30g, and the relation to humans in terms of feelings would be similar to that of mice. The brain of a goldfish weighs about 0.01g, so it should have a number of neurons about 100 times lower than that of a 1kg fish.

The fruit fly seems to have about 100,000 neurons in its whole central nervous system. In comparison, humans have about 100,000,000,000. This would mean that fruit flies could be taken in decision theory to have bad feelings 20 times less intense than those of humans. This number is driven almost entirely by the 5% chance given to the non-relationship hypothesis (which supposes that fruit flies feel the same as humans, which seems ludicrous) and is essentially its minimum; otherwise the estimate would be that fruit flies have bad feelings 1,000,000 times less intense than those of humans. Perhaps a different estimation in decision theory should be taken for animals below the "20 times less" threshold, because flies couldn't possibly feel the same as humans. I think that there may be a minimum threshold for bad feelings to appear at all, one which fruit flies and other insects do not reach. Cockroaches, as one of the bigger insects, have 1,000,000 neurons, 10 times more than fruit flies, but still 100,000 times fewer than humans.
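Continuing the sketch from my previous post, here is the same weighting applied to the figures in this post (the fruit-fly line uses whole-nervous-system counts, as above):

    P_SCALING, P_NO_SCALING = 0.95, 0.05
    HUMAN_CORTICAL = 11_500_000_000

    def expected_weight(n):
        return (P_SCALING * n + P_NO_SCALING * HUMAN_CORTICAL) / HUMAN_CORTICAL

    for name, n in [("pig", 432_000_000), ("cow", 1_000_000_000), ("chicken", 50_000_000)]:
        print(name, 1 / expected_weight(n), HUMAN_CORTICAL / n)
    # pig:     ~11.7x less than a human in expectation, ~26.6x if scaling is certain
    # cow:     ~7.5x and ~11.5x
    # chicken: ~18.5x and 230x

    # The fruit-fly comparison uses whole-nervous-system counts instead:
    fly_cns, human_cns = 100_000, 100_000_000_000
    w_fly = (P_SCALING * fly_cns + P_NO_SCALING * human_cns) / human_cns
    print(1 / w_fly, human_cns / fly_cns)  # ~20x in expectation; 1,000,000x if certain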


Re: Sentience and brain size

Postby Brian Tomasik on 2012-11-22T09:34:00

Very interesting, Jonatas. Thanks for all the great info! I have more to learn on this topic.

I didn't realize that females have tighter and more connected brains than males, nor that the same is true for humans vs. cetaceans.

It is indeed a strawman to suggest that those who think our concern for suffering should scale with brain complexity believe that the raw number of (cortical or otherwise) neurons is an adequate proxy. However, number of neurons is easier to look up than more advanced measures, and it does at least capture claims that, say, the suffering of small fish matters intrinsically less than that of humans, a stance on which I remain ambivalent.

Jonatas wrote:As pointed out, there seems to be a relationship of quantity of impulses increasing intensity of feelings in humans.

Yes, but it seems to me that this is essentially "normalized" per organism. This is the main intuition behind why I feel brain size may not matter ethically.

Harry the human is doing his morning jumping jacks, while listening to music. Suddenly he feels a pain in his knee. The pain comes from nociceptive firings of 500 afferent neurons. At the same time, Harry is enjoying his music and the chemicals that exercise is releasing into his body, so his brain simultaneously generates 500 "I like this" messages. Harry is unsure whether to keep exercising. But after a minute, the nociceptive firings decrease to 50 neurons, so he decides his knee doesn't really hurt anymore. He continues his jumping routine.

Meanwhile, Sam the snail is sexually aroused by an object and is moving toward it. His nervous system generates 5 "I like this" messages. But when he reaches the object, an experimenter applies a shock that generates 50 "ouch" messages. This is the same as the number of "ouch" messages that Harry felt from his knee at the end of the previous example, yet in this case, because the comparison is against only 5 "I like this" messages, Sam wastes no time in recoiling from the object.

Now, we can still debate whether the moral significance of 50 of Harry's "ouch" messages and 50 of Sam's is equal, but I'm pointing out that, to the organism himself, they're like night and day. Sam hated the shock much more than Harry hated his diminished knee pain. Sam might describe his experience as one of his most painful in recent memory; Harry might describe his as "something I barely noticed."
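If it helps, here is a toy Python sketch of that "normalized per organism" intuition, using the made-up signal counts from the story (purely illustrative, not a model of real nervous systems):

    def relative_salience(ouch, like):
        # How dominant the pain is within this organism's own current experience.
        return ouch / (ouch + like)

    harry_end = relative_salience(ouch=50, like=500)  # ~0.09: barely noticeable to Harry
    sam_shock = relative_salience(ouch=50, like=5)    # ~0.91: overwhelming for Sam

    print(harry_end, sam_shock)
    # The same absolute number of "ouch" messages (50), but normalized against each
    # organism's own ongoing signals they are night and day -- which is the intuition
    # against weighting by raw signal or neuron counts.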

<EDIT, 12 Sept 2013>
Carl Shulman makes the following comment:
Harry could still make choices (eat this food or not, go here or there) if the intensity of his various pleasures and pains were dialed down by a factor of 10. The main behavioral disruption would be the loss of gradations (some levels of relative importance would have to be dropped or merged, e.g. the smallest pains and pleasures dropping to imperceptibility/non-existence).

But he would be able to remember the more intense experiences when he got 10x the signals and say that those were stronger, richer, more intense, more morally noteworthy.

I find his point about remembering the more intense experiences interesting. I'm not sold on it, but I wouldn't rule it out either.
</EDIT, 12 Sept 2013>

Jonatas wrote:For cautiousness, we may still give a non-negligible probability to the hypothesis that intensity of feelings has been evolutionarily contained to a low number, similar between different species, regardless of number of neurons involved, in order not to overdo its role. But this case seems very improbable to me.

At the end of the day, there is no "right answer" to this question. There's no such thing as a "true" intensity of suffering that organisms tap into. That said, we can (and I do) still have moral uncertainty about how we want to apportion our concern depending on various size, connectivity, etc. characteristics of relevant brain regions.

Thanks for all the expected-value estimates for the ratios between various animals and humans. Fascinating! My own probability that the relationship is not linear is much higher than 5% -- maybe ~50%.

Jonatas wrote:The brain of a goldfish weighs about 0.01g

I think you mean 0.01 kg.

Jonatas wrote:Perhaps a different estimation in decision theory should be taken for animals below the "20 times less" threshold, because flies couldn't possibly feel the same as humans.

Haha, I think it's not completely obvious. :) That said, there are two separate issues: (1) Whether insects possess the relevant structures for conscious pain at all. (2) If so, how does the scale of those structures compare with the scale in humans? My probability for (1) is maybe ~40%? My probability for (2) is ~50%; yours is ~5% unless you want to decrease it here.

Jonatas wrote:Cockroaches, as one of the bigger insects

And one of the most likely to be able to experience suffering IMO.

Re: Sentience and brain size

Postby Brian Tomasik on 2012-11-26T10:54:00

Jonatas mentioned to me Giulio Tononi's theory of consciousness as something he might write about in an upcoming reply. I hadn't heard of this and so thought I would spend 5 minutes looking up a brief summary. I got a little carried away and spent more like 50 minutes reading about it, but now I have a few comments. Of course, I claim no expertise on this topic, so take everything I say with some grains of sodium chloride.

Christof Koch wrote a summary of Tononi's theory for Scientific American Mind titled "A Theory of Consciousness." As usual, that's sensationalism by the popular press to get people to read the article. (Don't get me wrong -- I love the popular-science press. But that doesn't mean they don't regularly overblow the significance of their stories.)

Tononi's Phi formula is not an explanation of how consciousness works. It's a measure on network systems that describes their information content and connectedness. Phi is a more sophisticated measure than, say, "number of neurons" or "number of synapses," but it's no more fundamental than those. The "number of neurons" of an organism is surely relevant to consciousness, but it's hardly a theory of consciousness. :)

Phi is presumed to be highest for those brain regions that seem most crucial for consciousness in humans, which is great. However, it's not clear that Phi is identical with the features of a mind that we care about. After all, almost any system has nonzero Phi. Do we want to care about almost any system to a nonzero degree? There may be particular, specific things that happen in brains during emotional experiences that are the only features we wish to value. Perhaps these usually accompany high Phi, but they may be a small subset of all possible systems which have high Phi. Work remains to articulate exactly what those features are that we care about. Such work would be aided by a deeper understanding of the mechanisms of conscious experience, in addition to this aggregate measure that seems to be generally correlated with consciousness. (Of course, that aggregate measure is a great tool, but it's far from the whole story.)

In any event, as far as the discussion at hand goes: If we are seeking a measure that captures the intuition that more complex brains deserve more weight, it's possible Phi (or some function of Phi) would be closer than "number of neurons" or "number of synapses" or whatever. However, the moral question still remains whether we want to use that measure, or whether we'd rather treat brains equally based on their belonging to a separate organism. Tononi's measure doesn't have implications for that question (except insofar as learning more about these ideas may mold our underlying moral intuitions).

In his article, Koch conflates Phi with consciousness, but it would be sophistical reasoning to say, "I'm defining Phi as consciousness. Therefore, consciousness is not binary but varies depending on brain complexity. Therefore, this moral question on this Felicifia thread is resolved." The "consciousness" that we care about morally may differ from the "consciousness" that someone defines some measure to be.

Anyway, I really do like this stuff from Tononi, and I like Koch, so thanks, Jonatas, for teaching me about something new. :)

Re: Sentience and brain size

Postby Brian Tomasik on 2012-11-27T10:23:00

Follow-up note: Why don't I like the panpsychist view? Why can't it be the case that Phi actually is the consciousness that we care about, with no extra complications? Well, it could be, and I won't rule it out. But it seems to me that panpsychism doesn't explain anything. Consciousness is not a reified thing; it's not a physical property of the universe that just exists intrinsically. Rather, it's an algorithm that's implemented in specific steps. As Eliezer Yudkowsky has said, if you can't write me the source code for a process, you don't really understand it. Consciousness involves specific things that brains do.

Now, maybe the consciousness algorithm is really general, like basic propagation of any kind of information in any kind of network. If so, then panpsychism would once again hold, because that algorithm is run by all kinds of physical systems. But my hunch is that the algorithm is more specific, and indeed, if we so choose, we can declare that the types of consciousness that we care about are more specific.

It's not incoherent to care about something really simple. We could, for example, decide that consciousness is the number of times that electrons jump energy levels in atoms. If we did so, then indeed consciousness would scale with brain size (and body size). But that measure doesn't capture what moves us when we see an animal writhing in pain. Similarly, it may be that a Phi measure that declares hydrogen atoms as marginally conscious also does not capture what moves us. Or maybe it will once our understanding of neuroscience improves and our intuitions are more refined. We'll see.

Postscript, 18 Mar 2013:
A good way to think about consciousness is to ask, "What process is going on in my brain that makes me think to myself, 'Wow, I'm conscious!'?" The process that gives rise to that feeling and that exclamation is probably pretty close to what we're getting at when we want to point to what we find consciousness to be. But it's more plausible to suppose that this process is a specific series of steps in the brain than that it's a generic information-processing system that spills over into the more specific perception of being aware of one's emotions and sensations. How exactly would the spilling over happen? Well, take whatever that spilling-over process would be, and ask why that can't happen on its own, without the more fundamental form of consciousness kicking off the spilling process. This is very similar to the argument against dualism: Why do we need a separate consciousness stuff if the material brain already does all the work? Well, why do we need a fundamental, panpsychist form of consciousness (Phi or whatever) if the specific steps of thinking that you're conscious could be done anyway?

Re: Sentience and brain size

Postby Pablo Stafforini on 2012-12-02T05:26:00

Brian Tomasik wrote:it seems to me that panpsychism doesn't explain anything.

Well, if one is a dualist, panpsychism would explain the puzzling phenomenon that consciousness seems to exist only in certain regions of spacetime. For the panpsychist, there is really no puzzle, since consciousness exists everywhere.

Brian Tomasik wrote:Consciousness is not a reified thing; it's not a physical property of the universe that just exists intrinsically. Rather, it's an algorithm that's implemented in specific steps.

Consciousness may be a "physical property of the universe that just exists intrinsically" and also "an algorithm that's implemented in specific steps." This is precisely the view of David Chalmers. He believes that consciousness is an irreducible aspect of the natural world, but he also believes that there is a systematic connection between the character of a conscious state and the associated pattern of functional organization. As he writes:

If consciousness arises from the physical, in virtue of what sort of physical properties does it arise? Presumably these will be properties that brains can instantiate, but it is not obvious just which properties are the right ones. Some have suggested biochemical properties; some have suggested quantum properties; many have professed uncertainty. A natural suggestion is that consciousness arises in virtue of the functional organization of the brain. On this view, the chemical and indeed the quantum substrate of the brain is irrelevant to the production of consciousness. What counts is the brain's abstract causal organization, an organization that might be realized in many different physical substrates.


Brian Tomasik wrote:It's not incoherent to care about something really simple. We could, for example, decide that consciousness is the number of times that electrons jump energy levels in atoms.

I think we should keep the moral question of what states we ultimately care for distinct from the empirical question of what physical states realize consciousness. Of course, we could simply define 'consciousness' as that which we ultimately care for. But I think that would just introduce more confusion into the discussion. It seems preferable to use the term 'consciousness' in its standard sense (or the more precise sense given to it by philosophers and cognitive scientists), and leave open the question of whether the physical states that realize this property are states for which we ultimately care.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth

Re: Sentience and brain size

Postby Brian Tomasik on 2012-12-02T14:46:00

Cool -- thanks, Pablo!

Pablo Stafforini wrote:Well, if one is a dualist, panpsychism would explain the puzzling phenomenon that consciousness seems to exist only in certain regions of spacetime. For the panpsychist, there is really no puzzle, since consciousness exists everywhere.

Makes sense. But I am not a dualist.

Pablo Stafforini wrote:Consciousness may be a "physical property of the universe that just exists intrinsically" and also "an algorithm that's implemented in specific steps."

I don't understand why one would claim both at the same time. Why on Mars would there be two independent properties of something that are inextricably linked? With the caveat that I don't understand Chalmers's view in detail, I feel like it's time to hire Occam the hedge trimmer.

Pablo Stafforini wrote:It seems preferable to use the term 'consciousness' in its standard sense (or the more precise sense given to it by philosophers and cognitive scientists), and leave open the question of whether the physical states that realize this property are states for which we ultimately care.

Sure. There are already too many meanings of consciousness. It can mean being awake, being aware of one's surroundings, having the "feeling of what it's like" experience, what we care about ethically, etc. I'm fine with whatever definitions we settle on, so long as we avoid confusing the scientific meanings with the ethical ones.

Re: Sentience and brain size

Postby Pablo Stafforini on 2012-12-02T21:53:00

Brian Tomasik wrote:I don't understand why one would claim both at the same time. Why on Mars would there be two independent properties of something that are inextricably linked? With the caveat that I don't understand Chalmers's view in detail, I feel like it's time to hire Occam the hedge trimmer.


The problem is that you cannot use Occam's razor to deny the existence of something with which you are directly acquainted. Occam's razor says that you shouldn't multiply entities beyond necessity; but in this case recognizing the existence of conscious properties is necessary to do justice to appearances.

Of course, you could argue that we can recognize that consciousness exists, and then claim that consciousness is reducible. Yet Chalmers's position is that no reductive explanation of consciousness can be given (he supports this claim with a battery of sophisticated arguments).

Note, by the way, that denying that there are "two independent properties of something that are inextricably linked" doesn't automatically lead to materialism. One may instead regard physical properties as redundant, and embrace the view that reality is ultimately mental.

Brian Tomasik wrote:Sure. There are already too many meanings of consciousness. It can mean being awake, being aware of one's surroundings, having the "feeling of what it's like" experience, what we care about ethically, etc. I'm fine with whatever definitions we settle on, so long as we avoid confusing the scientific meanings with the ethical ones.


Just to be picky: I don't think there is any "ethical" meaning of 'consciousness'; there is no sense in which 'consciousness' means "what we care about ethically". Moreover, if 'consciousness' had that meaning, it would then be vacuous to say that we care ethically about consciousness. In order for ethical statements of that sort to be non-trivial, the term 'consciousness' in those statements needs to be used in a non-moral sense.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth

Re: Sentience and brain size

Postby Brian Tomasik on 2012-12-02T23:42:00

Pablo Stafforini wrote:Of course, you could argue that we can recognize that consciousness exists, and then claim that consciousness is reducible. Yet Chalmers's position is that no reductive explanation of consciousness can be given (he supports this claim with a battery of sophisticated arguments).

Yeah, I think consciousness is reducible, but I haven't read Chalmers's battery of arguments.

Pablo Stafforini wrote:Just to be picky: I don't think there is any "ethical" meaning of 'consciousness'; there is no sense in which 'consciousness' means "what we care about ethically".

Haha, maybe I'm the only one who uses it like that. ;) I agree it's not a good way to employ terms, but I haven't invented a shorter alternative for "the kinds of mental operations that I care about."

Re: Sentience and brain size

Postby Arepo on 2012-12-03T12:50:00

Brian Tomasik wrote:Haha, maybe I'm the only one who uses it like that. ;) I agree it's not a good way to employ terms, but I haven't invented a shorter alternative for "the kinds of mental operations that I care about."


I have a lot of sympathy for that view. It seems at least plausible that the only distinguishing features of what we think of as consciousness are also the distinguishing features of what we think of as emotion. I'd like to follow that path more at some stage, but most people I've said it to have dismissed it outright as seeming too implausible, and I don't really have much of an argument for it except Occam's Razor - two mysteriously significant invisible processes in the same physical location is a priori less likely than one.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Sentience and brain size

Postby Brian Tomasik on 2012-12-03T13:09:00

Arepo wrote:It seems at least plausible that the only distinguishing features of what we think of as consciousness are also the distinguishing features as what we think of as emotion.

Cool.

The ethical definition of "consciousness" is akin to the functional definition of "tableness." The most practical question about consciousness for most of us is, "Do we care about what looks like suffering by this organism?" For tables, the practical question is, "Is this something I want to put my dinner plates on?" One could come up with alternate requirements for tableness -- e.g., that it must have at least four legs -- and those would make sense too, but they're less relevant to the practical question I'm answering. They may also not be totally irrelevant. Maybe I'm more inclined to put plates on things that have at least four legs.

Re: Sentience and brain size

Postby Arepo on 2012-12-03T17:32:00

Right, but it might be of interest if you could prove that a plate can sit on things iff they're four-legged wooden constructions. Similarly here (though I think my equivalence is more likely :P).
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.


Re: Sentience and brain size

Postby Brian Tomasik on 2013-05-03T01:31:00

This brain-size topic is really hard. Of anything that comes to mind, it's the question on which I have the greatest (emotivist) moral uncertainty. On most other questions, I pretty much know what I think, and the moral uncertainty is more of a guard against overconfidence than something I viscerally feel.

I was thinking today about what I would do if it turned out that unicellular organisms are sentient. I hasten to add that they almost certainly are not sentient, but the probability shouldn't be set lower than, say, 1 in 10,000, given model uncertainty in neuroscience.

My intuition is that when a biological entity is separate from another and not part of a larger whole, it counts as its own organism and gets its own (near-)equal weight in the calculations. If you can personify it, tell a story about how it's feeling, and see it as a distinct agent, then it counts separately. If, instead, it's a more mechanical process that doesn't look like it's acting on its own, then it doesn't count as separate. So, for example, a neuron in a brain is not a separate organism, but maybe a sperm cell would be a separate organism by this definition, because it's self-driven in pursuit of something, rather than just mechanically firing in response to incoming stimuli. Again, sperm cells are almost certainly not sentient, but my point is that if they were, I would probably classify them as being separate organisms, unlike most cells. Unicellular organisms, especially those that move around independently without being part of a big group, would also be separate organisms.

Imagining a world where unicellular organisms were sentient prompted me to wonder if brain-size weighting has more going for it than I was acknowledging. It feels like it would be weird to weight a unicellular organism on par with, say, a mouse. Maybe part of this intuition comes from the fact that a unicellular organism is not actually sentient, and that muddies the question of what we would do if it were. Maybe, also, I'm just being a wimp and not wanting to adjust my moral conclusions to a radically different world where unicellular organisms matter that much. It would be a lot harder to live in such a world, so I just try to deny the possibility. Still, it wouldn't be impossible to accommodate one's actions to such a world, and in fact, the implications would be not unlike those that I accept now: For example, we'd want to prevent the spread of wild-animal life (because it allows for more unicellular organisms to exist and suffer), and we'd want to avert colonization in general. Simulated suffering would probably still dominate calculations because the simulations might include unicellular organisms, or they might include other tiny-brained computations in vast numbers. So it actually might not be a nightmare to decide we need to give high weight to unicellular critters if they turned out to be sentient.

One thing the brain-size view has going for it is that it seems somewhat less arbitrary, in that it doesn't depend on a fuzzy notion of what counts as a "separate" organism. For example, are worker ants not separate because they don't really act on their own but for the good of the colony? Would extremely servile people not even be separate individuals? In both cases, I would say they are separate because structurally, they're nearly identical to other animals that do act independently, and they could act independently if they were separated from the group. However, I agree it feels a little fishy.

What about a slime mold? Again, it's not sentient, but what if it were?
When food is abundant a slime mold exists as a single-celled organism, but when food is in short supply, slime molds congregate and start moving as a single body. [...]
When a slime mold mass or mound is physically separated, the cells find their way back to re-unite. Studies on Physarum have even shown an ability to learn and predict periodic unfavorable conditions in laboratory experiments (Saigusa et al. 2008). Professor John Tyler Bonner, who has spent a lifetime studying slime molds, argues that they are "no more than a bag of amoebae encased in a thin slime sheath, yet they manage to have various behaviours that are equal to those of animals who possess muscles and nerves with ganglia – that is, simple brains."

Would the slime mold count as many separate organisms when the parts were separated and then as only one when they fused together? That doesn't feel right.

Yet, when I think about why I care about something suffering, I realize that I care about the organism responding negatively to the experience in its mind, and this negative response doesn't seem to depend on how big the brain is. It's more of a unified response by the whole organism. I can't pick apart pieces of the organism's brain and identify with them as separate individuals. If I could, then I would take brain-size weighting more seriously.

Re: Sentience and brain size

Postby Hedonic Treader on 2013-05-04T19:36:00

Brian Tomasik wrote:Simulated suffering would probably still dominate calculations because the simulations would include unicellular organisms, or they might include other tiny-brained computations in vast numbers.

Yes. One may hope that unicellular organisms, or any algorithms at a similar level of complexity, are not capable of suffering, and indeed I would be surprised if they were - suffering in human brains seems far more complex than that. However, insects may well suffer, and simulations of that sort would probably be computationally cheap.

But ultimately, even cheap computation is expensive (in opportunity cost) in a world in which you can always use it to create more copies of happy people. One might hope for a certain selection pressure against nonfunctional suffering algorithms, even if they are cheap. The reason there are currently so many insects is that we don't have a strong motivation or an easy way to get rid of them. In the future, resource use may be managed much more rigorously.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Re: Sentience and brain size

Postby DanielLC on 2013-05-05T17:01:00

What's the probability that humans count as sentient many times over?

There are a lot of sub-circuits in your brain that are more complex than an insect's brain.
Consequentialism: The belief that doing the right thing makes the world a better place.



Re: Sentience and brain size

Postby Brian Tomasik on 2013-05-06T06:09:00

Very good point, DanielLC. In fact, I don't think I had heard it until yesterday when Carl Shulman brought it up on Facebook. It may indeed have important implications, but perhaps not radical ones. For example, humans have ~10^11 neurons, while small insects have ~10^5 -- a ratio of 10^6. But the number of humans is ~10^10, while the number of insects may be 10^18-10^19, a ratio of ~10^8-10^9. So there are still roughly 100 to 1,000 times more insect neurons than human neurons.
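
To make the arithmetic explicit, here's a minimal back-of-the-envelope sketch (Python, added just for illustration; the counts are the rough order-of-magnitude figures from the paragraph above, not precise data):

    # Rough order-of-magnitude figures from the paragraph above (not precise data).
    human_neurons_each  = 1e11   # ~10^11 neurons per human
    insect_neurons_each = 1e5    # ~10^5 neurons per small insect
    num_humans          = 1e10   # ~10^10 humans

    total_human_neurons = human_neurons_each * num_humans    # ~10^21
    for num_insects in (1e18, 1e19):                          # ~10^18-10^19 insects
        total_insect_neurons = insect_neurons_each * num_insects
        print(total_insect_neurons / total_human_neurons)     # ~100 and ~1000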

In any event, as Elijah points out, we don't know if all the subsystems are suffering when the big system suffers, whereas we do know an insect is likely suffering when it appears to suffer (if insects can suffer at all). Also, the big-picture implications don't change: It's still good to reduce wild-animal populations, still good to prevent simulations and space colonization, etc.

Finally, we may decide that the subcircuits in the brain are too interconnected to count as separate organisms. The choice is up to us.

Re: Sentience and brain size

Postby Brian Tomasik on 2013-05-14T03:55:00

This table puts the number of ants at 10^16-10^17. Even assuming the remaining insects have too few neurons to matter, and taking roughly 10^5 neurons per ant, this gives 10^21-10^22 neurons for ants, compared with ~10^21 for humans.
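
For concreteness, the same multiplication written out (Python, for illustration; the ~10^5 neurons-per-ant figure is the assumption implicit in the totals above):

    neurons_per_ant   = 1e5    # assumed order of magnitude, implicit in the totals above
    neurons_per_human = 1e11
    num_humans        = 1e10

    for num_ants in (1e16, 1e17):
        print(num_ants * neurons_per_ant)       # ~10^21 to ~10^22 ant neurons
    print(num_humans * neurons_per_human)       # ~10^21 human neurons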

Re: Sentience and brain size

Postby DanielLC on 2013-05-14T18:43:00

I very much doubt that humans are more important than all other life combined. However, there's not a whole lot we can do for the wild at the moment (except mass euthanasia). Also, I'm not yet vegetarian, and I want to know if I should be trying to avoid eating large animals or small animals. I went with just eating medium animals, on the basis that it's definitely better for everyone to eat medium animals than for half of people to eat large animals and half to eat small ones.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Hedonic Treader on 2013-05-15T02:52:00

DanielLC wrote:I very much doubt that humans are more important than all other life combined. However, there's not a whole lot we can do for the wild at the moment (except mass euthanasia).

Raising awareness in associated discussions is cheap. I usually just point out that wild animals suffer a great deal, and that none of these experiences are really voluntary. It is still an exotic enough thought (though offensive to some if you push it to an anti-wildlife position) that simply mentioning it is valuable.

Also, I'm not yet vegetarian, and I want to know if I should be trying to avoid eating large animals or small animals. I went with just eating medium animals, on the basis that it's definitely better for everyone to eat medium animals than half to eat large animals and half to eat small.

Are meat substitutes so bad? I think there are some good ones, and the more demand there is for them, the more incentive food producers have to create better and cheaper ones.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Sentience and brain size

Postby Brian Tomasik on 2013-06-16T09:06:00

Thanks for the comments, Hedonic Treader! I would have said similar things.

I wanted to add an additional thought. It seems to me (and to another friend I asked) that anthropics does not weight by brain size. The Self-Sampling Assumption says, "All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class." I myself prefer to do away with reference classes and just say, "All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future)." I would also replace "observers" with "observer-moments." Anyway, nowhere in this specification is brain size involved. As long as a brain is sophisticated enough to generate a conscious observer, anthropics applies to it.

If this is true, it may furnish some intuitive support for the position that brain size doesn't matter morally beyond making it more likely that the mind is conscious at all. For example, in a Rawlsian veil-of-ignorance scenario, you would not weight by brain size when deciding policies for the future world.

Re: Sentience and brain size

Postby xodarap on 2013-07-07T02:28:00

Brian, does SSA argue against insect sentience? If insects were sentient, the overwhelming majority of sentient beings would be insects, so it would be surprising to find ourselves as humans.


Re: Sentience and brain size

Postby DanielLC on 2013-07-07T04:13:00

Are meat substitutes so bad?


Just hard to find. I don't have a car, and the only meat substitute that my local grocery store carries is intended for food storage. It might be okay, but I'm not particularly interested in finding out. Also, there's some kind of meat substitute in the vegetarian chili I buy.

I have finished stopping eating meat. I guess now it's time to be an aspiring vegan.

Anyway, nowhere in this specification is brain size involved.


Are the moments the same length? If the same brain runs twice as fast, I'd definitely expect it to contain twice as many observer-moments. If you replace it with one twice as big, it seems likely that the same effect would happen. If you have a non-human brain, how do you figure out how fast to run it to get observer-moments at the same rate as a human brain?

Brian, does SSA argue against insect sentience? If insects were sentient, the overwhelming majority of sentient beings would be insects.


Yes. It also argues that animals in general are less sentient than we believe them to be.

I've noticed that if you accept timeless decision theory, it doesn't matter so much which of those is correct. You have evidence that insects aren't sentient, but you shouldn't act on it, for the same reason that, in Parfit's hitchhiker, you shouldn't act on the evidence that you've already been picked up when deciding whether or not to pay the driver who picked you up.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Postby Brian Tomasik on 2013-07-31T07:51:00

xodarap wrote:Brian, does SSA argue against insect sentience?

From my essay on anthropics:
If non-human animals are observers, this leaves us with a puzzle about why I'm not a non-human animal. Neurologically, it certainly seems like at least some non-human animals should be conscious, although the primates and cetaceans are few enough that the anthropic update against their being conscious is smaller than against, say, insects. Another possibility is that animals were indeed conscious in basement Earth, but they're less interesting to simulate, so there ended up being lots more humans in the long run through sims. In this way, greater evidence for animal consciousness implies greater probability that we're in a sim. (Of course, why aren't the insects that we interact with in our sim also conscious? Can they be simulated without being conscious? Maybe we don't interact with enough insects on a regular basis to tip the anthropics too heavily. Insects in an isolated forest don't need to be simulated.) Anyway, this discussion is all conditional on the assumption that animals should be observers, which is not at all clear. The possibility that animals are not observers is a reason why concern for potential animal suffering remains extremely important in expectation. It would also remain important if my general anthropic framework were wrong -- e.g., if we used a reference class that excluded non-human animals or if we adopted SIA. In general, doomsday reasoning about the future is directly parallel with these anthropic questions about animal consciousness.

Update: My recent realization about the multiple-civilizations deflation of the doomsday argument also has something to say about the animal question. Consider two planets: one where only human-level life is conscious, and one where insects and pre-human animals are as well. Even after finding yourself as a human, you're equally likely to be on either of these planets, because the second one has a lot more observers, offsetting the low probability that any particular one is human. There is a universal update away from animal sentience across all planets, but that update changes the probability distribution more slowly than a single-planet update would. (Of course, if the animals are identical on both planets, then knowing whether they're conscious on one tells you whether they're conscious on the other, but in general, the animals on different planets should be different, allowing for some to be conscious while others are not.)


DanielLC wrote:I have finished stopping eating meat. I guess now it's time to be an aspiring vegan.

:)

DanielLC wrote:If you replace it with one twice as big, it seems likely that the same effect would happen.

Hmm, that's a fascinating point! I'll update my essay to mention it.

DanielLC wrote:You have evidence that insects aren't sentient, but you shouldn't act on it, for the same reason that you shouldn't act on the evidence that you were clearly picked up when deciding whether or not to pay the guy that picked you up in Parfit's hitchhiker.

Are you using updateless decision theory, then? Care to explain further?

Re: Sentience and brain size

Postby Brian Tomasik on 2013-07-31T08:20:00

Previously in this thread we saw that you can't naively use a Pascalian argument for the dominance of insect sentience in the calculations.

The naive argument says: Let the importance unit (iu) for humans be 1. Then either insects matter a lot less, in which case the world doesn't have that many iu's, or insects count equally, in which case it has a lot more iu's. In the latter case, things are much more important, so we should act as if the latter is true.

The counter-naive argument says: Let the importance unit (iu) for insects be 1. Then either humans matter the same, in which case the world doesn't have that many iu's, or humans matter vastly more, in which case the world has more iu's. The latter kind of world would matter more. (Of course, this argument doesn't have quite as much force, because there are still more insect neurons than human neurons on the planet, and it's also plausible that one insect neuron matters more than one human neuron if it's more efficient and more dedicated to basic emotions than to higher-level cognition.)

A friend of mine compared this with the two-envelopes problem: When you hold one unit fixed and vary the other, it seems naively like the other has higher expected value. The solution to the two-envelopes problem is to realize that the prior distribution of possible amounts of money in the envelopes needs to be bounded, so for instance, if you find a huge amount of money in one envelope, it's not equally likely that the other envelope contains half as much or twice as much; it's more likely it contains half as much. Applying a similar approach to the case of insects could be tricky but may not be impossible.

It seems like the naive argument is more intuitive than the counter-naive argument, for the following reason: I am a human, so I can take the importance of my experience as a given amount. Then maybe insects matter this much, or maybe they matter less. I'm not an insect wondering how important a human is, because I'm already a human, and I can see that my experience matters, but not a huge amount. Maybe this is like finding that my envelope contains a low-ish amount of money. Anyway, one objection to this reasoning is that my own brain may not realize all the value it contains: If there are insect-like substructures that have insect-level value, I may not have access to them and so may be underestimating the value (and potential for disvalue) contained in my skull.

Re: Sentience and brain size

Postby utilitymonster on 2013-07-31T19:05:00

It seems like the naive argument is more intuitive than the counter-naive argument, for the following reason: I am a human, so I can take the importance of my experience as a given amount.


This seems analogous to saying that you should first look in the envelope you are given, and then reason as follows:

1. I know how much is in this envelope.
2. But I'm uncertain about the other one.
3. Therefore I should buy the argument for switching from this envelope, rather than the argument for switching from the other envelope.

But that seems like bad reasoning because if you had gotten the other envelope, you would have made an analogous argument in favor of switching in the other direction.

Analogously, if we had been insects and we knew how much value our lives had, we'd have to go for the other Pascalian argument.

In the face of theoretical arguments that point in no clear direction, I favor sticking with common sense, which in this case means ignoring both Pascalian arguments and relying on other considerations to say how to compare the value of insects and humans.


Re: Sentience and brain size

Postby Brian Tomasik on 2013-08-01T03:37:00

Yes, that's exactly right. :) I tried to save the argument by saying
Maybe this is like finding that my envelope contains a low-ish amount of money.

in which case the argument is no longer fallacious, except insofar as my prior probabilities for how much money the envelopes should contain are themselves tainted by the kind of reasoning I suggested.

Re: Sentience and brain size

Postby Brian Tomasik on 2013-08-01T04:09:00

Comment from Carl Shulman:
Brian:
Harry the human is doing his morning jumping jacks, while listening to music. Suddenly he feels a pain in his knee. The pain comes from nociceptive firings of 500 afferent neurons. At the same time, Harry is enjoying his music and the chemicals that exercise is releasing into his body, so his brain simultaneously generates 500 "I like this" messages. Harry is unsure whether to keep exercising. But after a minute, the nociceptive firings decrease to 50 neurons, so he decides his knee doesn't really hurt anymore. He continues his jumping routine.
Meanwhile, Sam the snail is sexually aroused by an object and is moving toward it. His nervous system generates 5 "I like this" messages. But when he reaches the object, an experimenter applies a shock that generates 50 "ouch" messages. This is the same as the number of "ouch" messages that Harry felt from his knee at the end of the previous example, yet in this case, because the comparison is against only 5 "I like this" messages, Sam wastes no time in recoiling from the object.
Now, we can still debate whether the moral significance of 50 of Harry's and Sam's "ouch" messages are equal, but I'm pointing out that, to the organism himself, they're like night and day. Sam hated the shock much more than Harry hated his diminished knee pain. Sam might describe his experience as one of his most painful in recent memory; Harry might describe his as "something I barely noticed."

Carl:
In a split-brain patient each hemisphere can react separately to stimuli with awareness and cognitive processing, with access to half the body (more or less). So each hemisphere gets half as many "ouch" and "liking" messages, but takes them just as seriously. Similarly, each smaller subregion/subsystem is getting only a small number of positive and negative reinforcement signals, scaled down with size (other things equal). For the conditioning/reinforcement of particular learning in a particular system, the messages it is getting are enough to drastically change synapse strength, its behaviors/outputs, and so on.
"Harry might describe his as "something I barely noticed."" This is just limiting yourself to what is accessible for speech. In split-brain patients the left hemisphere doesn't know what the right is experiencing, and confabulates explanations. The split-brain case is easier because the right hemisphere can communicate with us through control of the motor processes of the left half of the body, but other tinier systems would be more voiceless (except through careful experiments or neurological scanning), just as many animals are voiceless.


I replied:
I agree with a lot of your intuitions, but I'm not sure whether I prefer consciousness to be seen as the uppermost level of unified reflection rather than as the subsystems. It's not speech specifically, but maybe the part right before speech. Given this view of consciousness, it's more dubious that animals (and especially insects) have it at all.

If the stuff below "almost speech" does matter, then that's a pretty knock-down argument for brain weighting.


Carl then cited Cartesian-theater confusion. I said that was a good point, but what's weird about Carl's position (with which I sympathize) is that it's not clear where the boundary of the mind lies. Even the body could be seen as part of the mind to some degree, since it interacts with the reinforcement-learning (RL) subsystems. And then even the external environment does somewhat too.

A paragraph from my essay on computations I care about is relevant:
As the homunculus discussion above highlights, there's not a distinct point in the brain that produces consciousness while everything else is non-conscious. Lots of parts work together to do the processes that overall we call consciousness. It does seem clear that some parts are not very important (e.g., much of a person's body, many peripheral nerves, some cortical regions, etc.), but as we keep stripping away things that look non-essential, we risk having nothing left. By way of analogy, I imagine looking in a big box full of stuffing for a tiny ring, only to find that there is no ring and that the stuffing itself was the content of the gift. (In the case of the brain, it's not the stuffing itself that matters but the structure and algorithmic behavior.)

Re: Sentience and brain size

Postby CarlShulman on 2013-08-01T18:10:00

Combined expected value of population count and neuron count, avoiding two-envelope problems
Let's work things out using ex ante priors that avoid the two-envelope problem.

We start off with a prior distribution over the value of population count -- the value of each individual organism, regardless of size -- whose expected value we will represent with X.

We also have a prior distribution over the value of some minimum scale of neural structure, whose expected value we'll represent with Y.

Now we take a lump of N neural structures, and divvy them up into Z separate organisms, each of which gets N/Z neural structures. How does the expected value change as we move between the extremes of a separate organism for each neural structure and a single big brain?

When Z=1 (one big brain), the expected value is 1*X + N*Y. With Z=N (each neural structure on its own), the expected value is N*X + N*Y.

For N=1,000,000 and X=Y=1, going from a single brain to maximum subdivision takes us from 1,000,001 units of value to 2,000,000. With X=10Y=10 (i.e., X=10, Y=1), we would go from 1,000,010 units to 11,000,000 units.

If instead we look at slightly bigger organisms at the small end, e.g. Z=N/10, then most of the expected value from population count goes away (with X=10 and Y=1, the total drops to 100,000*10 + 1,000,000*1 = 2,000,000 units).

So unless one starts with an extreme ratio of X to Y, or gets very strong additional evidence, expected value considering both population count and neural structure count will track fairly closely with neural count alone.
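
Here's a small sketch of the model above (Python, for illustration; X and Y are the per-organism and per-neural-structure values defined in this post, and the numbers are the ones used above):

    def total_value(N, Z, X, Y):
        # Z organisms, each worth X, plus N neural structures in total, each worth Y.
        return Z * X + N * Y

    N = 1_000_000
    for X, Y in [(1, 1), (10, 1)]:
        one_brain   = total_value(N, Z=1, X=X, Y=Y)        # 1,000,001 and 1,000,010
        max_split   = total_value(N, Z=N, X=X, Y=Y)        # 2,000,000 and 11,000,000
        tenth_split = total_value(N, Z=N // 10, X=X, Y=Y)  # most population value gone
        print(one_brain, max_split, tenth_split)

Unless X dwarfs Y by a factor comparable to the organisms' average neuron count, the N*Y term dominates, which is the point of the paragraph above.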

Implications of neural count
Wild invertebrates still make up a majority of neural count (although by a much smaller factor than for raw population count -- a lead that could easily be passed by human population expansion if the trends of expanding human population and shrinking invertebrate populations continue somewhat longer), and the suffering of factory-farmed animals still outweighs the pleasure of human carnivory. However, neural count suggests changes of focus: e.g., cows dominate the neural count of farmed land animals whereas chickens dominate the population count, and by neural count the interests of farmed and wild animals are much more in the same ballpark.


Re: Sentience and brain size

Postby Brian Tomasik on 2013-08-02T09:20:00

Thanks, Carl!
CarlShulman wrote:So unless one starts with an extreme ratio of X to Y, or gets very strong additional evidence, expected value considering both population count and neural structure count will track fairly closely with neural count alone.

Well, it's not obvious that we should put X and Y in the same ballpark. Naively, it seems like we might want X to be around Y times the average number of neural structures per organism, so that individual count can compete with neural count for value -- otherwise one will dominate the other by fiat.

The total value can be written as Z*X + (N/Z)*Z*Y, which is proportional to X + (N/Z)*Y. So in order for X and Y to have a shot at competing with one another, we need X to be roughly (N/Z) times as big as Y, i.e., around the same magnitude as the average number of neurons per organism.

Re: Sentience and brain size

Postby Brian Tomasik on 2013-08-02T10:49:00

The two-elephants problem

We can turn the brain-size question into something structurally like the two-envelopes problem as follows. Suppose naively that we weight by brain size, measured as number of neurons. An elephant has 23 billion neurons, compared with a human's 85 billion -- call it 1/4 as many.

Two elephants and one human are about to be afflicted with temporary pain. There are two envelopes in front of us: One contains a ticket that will let us stop the human from being hurt, and the other will let us stop the two elephants from being hurt. We can only pick one ticket. Which should we take?

Suppose you're a human, and right now you think there's a 50% chance that brain-size weighting is correct and a 50% chance that each organism counts equally. If organisms count equally, then helping the elephants saves 2 individuals instead of 1. If you weight by brain size, then helping the elephants is only 2 * (1/4) = 1/2 as worthwhile as helping the human. 50% * 2 + 50% * 1/2 = 5/4 > 1, so you should help the elephants.

Now suppose you're an elephant. If humans are equal to you, then helping the 1 human is only 1/2 as good as helping 2 of your elephant brethren. If you weight by brain size, then helping the human is 4 times as good per organism, or 2 times as good overall, as helping the elephants. 50% * 1/2 + 50% * 2 = 5/4, so you should save the human.
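
For illustration, the two expected-value calculations above written out (Python; the 1/4 neuron ratio and the 50/50 credences are just the figures used in this post):

    p_brain_size = 0.5   # credence in brain-size weighting
    ratio = 1 / 4        # elephant neurons / human neurons (rough figure above)

    # Human's point of view: value of helping the 1 human fixed at 1.
    help_elephants = p_brain_size * (2 * ratio) + (1 - p_brain_size) * 2
    print(help_elephants)   # 1.25 > 1, so help the elephants

    # Elephant's point of view: value of helping the 2 elephants fixed at 1.
    help_human = p_brain_size * ((1 / ratio) / 2) + (1 - p_brain_size) * (1 / 2)
    print(help_human)       # 1.25 > 1, so help the human -- the two-envelopes pattern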

Applying a prior distribution

The Bayesian solution to the two-envelopes paradox is to realize that given the value of your current envelope, it's not possible for the other envelope to be equally likely to have 1/2 or 2 times the value of yours, for all possible values of your envelope. As the value in your envelope increases, it becomes more likely you got the bigger of the two.

One simple way to model the situation could be to use a fixed uniform prior distribution: The value in the larger envelope is uniform on [0,1000], which implies that the value in the smaller envelope is uniform on [0, 500]. Suppose you find that your envelope contains an amount in the range 300 +/- 1/2. (I'm using a range here, because my probability distribution is continuous, so the probability of any given point value is 0.) The probability of this is 1/500 if this is the smaller amount or 1/1000 if this is the larger amount. Therefore, if you started with equal priors between getting the smaller and larger amount (which you should have, given the symmetry of envelope picking), the posterior is that you got the smaller envelope with 2/3 probability. Then (2/3)*600 + (1/3)*150 > 300, so you should switch, and this is not fallacious to do.

Similar reasoning should work for a more complicated prior distribution.
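
A quick sketch of that calculation (Python, for illustration; the [0, 1000] uniform prior and the 300 observation are the numbers used above):

    observed = 300.0
    density_if_smaller = 1 / 500    # smaller amount is uniform on [0, 500]
    density_if_larger  = 1 / 1000   # larger amount is uniform on [0, 1000]

    p_smaller = density_if_smaller / (density_if_smaller + density_if_larger)  # 2/3
    ev_other  = p_smaller * (2 * observed) + (1 - p_smaller) * (observed / 2)  # 450
    print(p_smaller, ev_other)      # switching beats keeping the 300 you hold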

The human envelope has a low amount?

I have the intuition that what I experience as a human isn't vastly important compared to how important I can imagine things being. In this way, it feels like when I open the envelope for how much a human is worth, I find a smaller amount than I would have expected if this were actually the higher of the two envelopes. If this is true, it would not be an error to "switch envelopes," i.e., care more about insects than brain-size weighting suggests.

A good counterargument to this is that my prior distribution is biased by factors like my past experience of caring a lot about insects, which makes it seem counterintuitive that the universe could matter so little.

Pascalian wagers in the other direction

Let n be the number of neurons in a single brain and W(n) be the moral weight of that brain's experiences. The debate here is whether to take W(n) = n (brain-size weighting) or W(n) = 1 (individual-count weighting). Of course, there are other options, like W(n) = n^2 or W(n) = 2^n or W(n) = busybeaver(n). The Kolmogorov complexity of the busybeaver weighting is small relative to the astronomical size of the busybeaver values, so its prior probability isn't correspondingly tiny, and Pascalian calculations may cause it to dominate. In particular, there's some extremely tiny chance that a mind 100 times as big as mine counts busybeaver((85 billion) * 100) / busybeaver(85 billion) times as much, in which case my decisions would be dominated by the tiniest chance of the biggest possible mind, with everything else not mattering at all by comparison.
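
To see how a fast-growing weighting can swamp everything else, here's a toy sketch (Python, for illustration only; it uses W(n) = 2^n as a computable stand-in for busybeaver(n), and made-up toy neuron counts and credences, since the real quantities are astronomically large):

    small_brain, big_brain = 20, 40   # toy neuron counts, not realistic

    # (credence, weighting function); the 1% option stands in for the busybeaver weighting.
    hypotheses = [
        (0.50, lambda n: n),          # brain-size weighting
        (0.49, lambda n: 1),          # count-each-organism-equally weighting
        (0.01, lambda n: 2 ** n),     # superlinear weighting
    ]

    def expected_weight(n):
        return sum(p * W(n) for p, W in hypotheses)

    print(expected_weight(small_brain))                                # dominated by the 1% term
    print(expected_weight(big_brain) / expected_weight(small_brain))   # a huge ratio

Even a 1% credence in the superlinear option drives both expected weights, so the bigger mind swamps everything else, which is the bullet described above.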

So the Pascalian wagers don't just go one way, and if I'm not willing to bite the above bullet, it's not clear I can sustain the Pascalian wager on insects either.

However: Moral-uncertainty calculations do not need to conform to Occam's razor. I'm allowed to care about whatever I want however much I want, and I'm not obligated to enforce consistent probabilistic calculations over possible moral values the way I have to over factual variables. So I can keep caring a lot about insects if I want; it's just that I can't ground this in a Pascalian moral-uncertainty wager without accepting the attendant consequences.

Re: Sentience and brain size

Postby CarlShulman on 2013-08-02T11:48:00

"I have the intuition that what I experience as a human isn't vastly important compared to how important I can imagine things being."

Could you say more about what the higher alternative could have been? You can't directly evaluate the quantity of your experiences (at issue here), only the quality, and then only the part connected to the speech centers. Aren't your judgments of quality calibrated in relative terms, i.e., comparisons of your different experiences at different times?

Also, in thinking about the probabilistic parallel architecture of the human brain, it might be helpful to think about decision-making (at least partially) conducted by a democratic plebiscite, or a randomly selected jury.

"In particular, there's some extremely tiny chance that a mind 100 times as big as mine counts busybeaver((85 billion) * 100) / busybeaver(85 billion) times as much, in which case my decisions would be dominated by the tiniest chance of the biggest possible mind, with everything else not mattering at all by comparison."

It is also possible that all experiences are multiplied similarly to the largest scale structures we could make. In general, all actions and states have some Pascalian tail in expected value through exotic hypotheses. Wedges between the expected values of actions then come from the strength of evidence distinguishing them. Superlinear scale and unbounded growth (like baby universes) are worth investigating but they don't automatically dominate in expected value over more mundane things.

"However: Moral-uncertainty calculations do not need to conform to Occam's razor. I'm allowed to care about whatever I want however much I want, and I'm not obligated to enforce consistent probabilistic calculations over possible moral values the way I have to over factual variables. So I can keep caring a lot about insects if I want; it's just that I can't ground this in a Pascalian moral-uncertainty wager without accepting the attendant consequences."

In some sense this is true: it is certainly physically possible to select do-gooding actions purely on the basis of immediate, unreflective emotional reactions, even ones based on clear empirical, logical or philosophical mistakes. But abandoning reflection is a very big bullet to swallow, one which even moral anti-realist philosophers generally consider to be a mistake.

Consider an extreme case, e.g. someone who takes the view "I want the warm feeling of reducing suffering effectively by torturing squirrels, and I don't care if there are mistakes in the reasoning/information-processing going into that feeling, or whether this actually reduces suffering." There is a nihilistic take on which there's nothing wrong with this stance, but it's normal to say that the expected revision under various levels of idealization is relevant, i.e. that we predict this person would revise their view if better informed, having thought more carefully and precisely, etc. And such a view runs against many of the well-supported heuristics of everyday life, such as that better informed decisions tend to be better.

The reification of current policy conclusions, even if the underlying reasons don't hold up, seems vulnerable to such considerations, and likely to be ultimately considered a mistake by one's own lights in less extreme cases as well.

If one just wants warm feelings for oneself, one can get those in other ways (David Pearce has discussed a lot of them). But if one is pursuing good in the world, then it seems one should try to be sensitive to the world in one's judgments.


Re: Sentience and brain size

Postby Brian Tomasik on 2013-08-02T13:43:00

CarlShulman wrote:Could you say more about what the higher alternative could have been? You can't directly evaluate the quantity of your experiences (at issue here), only the quality, and then only that connected to the speech centers. Aren't your judgments of quality calibrated in relative terms, i.e. comparisons of your different experiences at different times?

Hmm, good points! Yes, it is all relative, and I can't make assessments of quantity, because to me, I am the whole world. That's precisely the intuition behind brain size not mattering: At the highest levels of reporting, at least (e.g., verbalization), you seem to be just one entity, ignoring all the subcomponents, conflicting coalitions, etc. that went on to get to that point.

CarlShulman wrote:Also, in thinking about the probabilistic parallel architecture of the human brain it might be helpful to think about decision-making (at least partially) conducted by a democratic plebescite, or a randomly selected jury.

And if you only care about the direction in which the final vote came out, it doesn't matter how many people voted.

CarlShulman wrote:Superlinear scale and unbounded growth (like baby universes) are worth investigating but they don't automatically dominate in expected value over more mundane things.

Why not?

CarlShulman wrote:The reification of current policy conclusions, even if the underlying reasons don't hold up, seems vulnerable to such considerations, and likely to be ultimately considered a mistake by one's own lights in less extreme cases as well.

I somewhat agree. I would just caution that our brains are leaky, so there's not a clear distinction between "learning more" and "changing your brain in a way that biases you toward something you used not to agree with." If you try heroin, you're certainly learning what it feels like, but that's not all you're doing -- you're also setting yourself up to be prone to take it again next time. If you "try out" what it feels like to spend money lavishly, you might get used to the lifestyle and then not return to your old ideals. And so on.

Re: Sentience and brain size

Postby CarlShulman on 2013-08-03T02:04:00

One of David Pearce's hedonic hotspot facts:

Neuroscientists are already homing in on the twin cubic-millimetre sized “hedonic hotspots” in the ventral pallidum and nucleus accumbens of the rodent brain. The equivalent hedonic hotspots in humans may be as large as a cubic centimeter.


So we have a ~1000x difference in hedonic-hotspot volume (a cubic millimeter vs. a cubic centimeter) and a 600-700x difference in brain size between rats and humans. These hotspots have more neurons and more tissue to condition/reinforce.
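
A rough check of those two numbers (Python, for illustration; the brain masses are approximate figures I'm assuming, roughly 2 g for a rat and ~1.35 kg for a human, and are not from the quoted source):

    hotspot_ratio = 10.0 ** 3          # a cubic centimeter is (10 mm)^3 = 1000 cubic millimeters
    print(hotspot_ratio)               # ~1000x difference in hotspot volume

    rat_brain_g, human_brain_g = 2.0, 1350.0   # assumed approximate brain masses
    print(human_brain_g / rat_brain_g)         # ~675x, in the 600-700x range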


Re: Sentience and brain size

Postby Darklight on 2014-02-14T01:07:00

Sorry to necro an old thread, but I thought these interesting facts might be somewhat relevant.

According to my old undergrad Biopsychology textbook:

Lack of Clear Cortical Representation of Pain
The second paradox of pain is that it has no obvious cortical representation (Rainville, 2002). Painful stimuli often activate areas of cortex, but the areas of activation have varied greatly from study to study (Apkarian, 1995).

Painful stimuli usually elicit responses in SI and SII. However, removal of SI and SII in humans is not associated with any change in the threshold for pain. Indeed, hemispherectomized patients (those with one cerebral hemisphere removed) can still perceive pain from both sides of their bodies.

The cortical area that has been most frequently linked to the experience of pain is the anterior cingulate cortex (the cortex of the anterior cingulate gyrus; see figure 7.21). For example, using PET, Craig and colleagues (1996) demonstrated increases in anterior cingulate cortex activity when subjects placed a hand on painfully cold bars, painfully hot bars, or even on a series of alternating cool and warm bars, which produce an illusion of painful stimulation.

Evidence suggests that the anterior cingulate cortex is involved in the emotional reaction to pain rather than to the perception of pain itself (Panksepp, 2003; Price, 2000). For example, prefrontal lobotomy, which damages the anterior cingulate cortex and its connections, typically reduces the emotional reaction to pain without changing the threshold for pain.


According to my old undergrad Sensation & Perception textbook:

COGNITIVE ASPECTS OF PAIN

Pain is actually a subjective state with two distinguishable components: the sensation of the painful source, and the emotion that accompanies it (Melzack and Casey, 1968). The latter aspect of pain can be affected by social and cultural contexts and higher-level cognition. For example, reports of painful strains of the arm from tasks requiring repetitive motion spread rapidly in Australia during the 1980s--like a contagious disease--but they were communicated by workers who did nothing more than talk to one another about their experiences.

We have known for some time that areas S1 and S2 are responsible for the sensory aspects of pain, but researchers have recently been able to use new methods to identify the areas of the brain that correspond to the more cognitive aspects of painful experiences. In one study (Rainville et al., 1997) (Figure 12.11), participants were hypnotized and their hands were placed in lukewarm or very hot water (which activated thermal nociceptors). The participants were sometimes told that the unpleasantness from the water was increasing or decreasing, and their brains were imaged during these periods by positron emission tomography (PET). The primary sensory areas of the cortex, S1 and S2, were activated by the hot water, but the suggestion of greater unpleasantness did not increase their response relative to the suggestion of decreased unpleasantness. In contrast, another area, the anterior cingulate cortex (ACC), did respond differentially to the two hypnotic suggestions, by increasing or decreasing its activity according to the suggestion of increased or decreased unpleasantness. The researchers concluded that the ACC processes the raw sensory data from S1 and S2 in such a way as to produce an emotional response.

At a higher level still, pain can produce what Price (2000) has called "secondary pain affect." This is the emotional response associated with long-term suffering that occurs when painful events are imagined or remembered. For example, cancer patients who face a second round of chemotherapy may remember the first and dread what is forthcoming. This component of pain is associated with the prefrontal cortex, an area concerned with cognition and executive control.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

Re: Sentience and brain size

Postby Brian Tomasik on 2014-02-14T05:39:00

Thanks. :) Good background info. I hope you didn't spend too long typing that up. :P

The question on this thread is whether, if we shrunk those whole mechanisms to 1/2 their size, we would care 1/2 as much. It's clear that some parts of the brain are more important for pain than others, and reducing brain size by removing unrelated parts doesn't matter.

Re: Sentience and brain size

Postby Darklight on 2014-02-14T22:29:00

Thanks. :) Good background info. I hope you didn't spend too long typing that up. :P


You're welcome. Nah, it was pretty quick. I type fast.

The question on this thread is whether, if we shrunk those whole mechanisms to 1/2 their size, we would care 1/2 as much. It's clear that some parts of the brain are more important for pain than others, and reducing brain size by removing unrelated parts doesn't matter.


It seems like it wouldn't matter for our sensory experience of pain, but it may matter for our emotional experience of pain. So I think an important thing to clarify first is which of these two we actually care about. Someone may register pain signals, but might not "feel" like the experience was particularly unpleasant to them. I know from first-hand experience that certain psychiatric medication can do things like "zombify" a person, and make them have generally less affect. It's like experiencing everything with a veil of cotton around your head. Pain and pleasure both matter less to you, and the experience is almost like being less conscious than before. There are also other medications that can have the opposite effect, essentially making you feel more alert and aware of things, and able to process information better. This is actually partly why there's a black market for "study aid" stimulants. In my view, they don't just make you more focused, but actually seem to increase your IQ temporarily. It's also known that sleep deprivation lowers your IQ. Think about how you felt the last time you were really sleep deprived. How conscious would you rate yourself then compared to normal?

Anyways, if just the pure sensation of pain is what matters, then it follows that what we should be counting is the number of nociceptors that an organism has, or the ratio of nociceptors to other neurons.

But if what matters is the emotional experience of pain, then arguably we also have to count the neurons and connections that contribute to this distributed experience.

I'm just thinking of a simple example of how a seemingly unrelated area of the brain might have a significant influence on emotional pain. Insects generally don't have a concept of their mother. Thus, they don't feel separation anxiety the way a human infant would. We would feel pain at the death of a loved one, mainly because we have the capacity to do a bunch of things, like represent close relationships and attachments to other beings, which itself requires an ability to model such beings; and then there are the faculties required to imagine what it would be like to be without them for the foreseeable future.

Similarly, I would think that animals that can pass the mirror test and are self-aware, like a pig, would suffer more emotional/psychological pain from feeling the threat of existential annihilation, than say, an insect.

Having more neurons and connections, and more advanced structures that enable one to cognize at higher levels, arguably allows the experience of emotional pain and emotional pleasure to be much richer because of all these feedbacks and interactions and our imagination. Thus, the "quality" of the pain is greater, even if the quantity of physical pain is not. A human being is able to experience emotional pain in many more ways than an insect.

So I guess it's a matter of whether or not emotional/psychological pain stacks with sensory pain.

There's also the Gate Control Theory of pain (both my textbooks go into some detail about it, but I'm feeling too lazy today to type it all up), which notes that there are ways for the brain to inhibit pain when it needs to do so. Examples include soldiers who have reported feeling no pain in the middle of battle despite severe wounds. So it's possible that the pain we actually experience isn't just the nociceptors going "OW OW OW", but a complicated interplay of different emotional conditions.

So to take a stab at answering the question "if we shrunk those whole mechanisms to 1/2 their size, we would care 1/2 as much": I think, assuming size means number of relevant neurons, the answer is yes, because it is the emotional state or experience of pain, its actual unpleasantness to us, that matters more than the pure sensory experience itself. I imagine this is why insects that lose limbs don't seem to show nearly as much stress as a mammal that loses a limb (not that I've tested this in any way). Both the insect and the mammal have neurons going "OW OW OW OW", but the mammal also has a complex of neurons going "OMG I'M GOING TO DIE!!! A PART OF ME IS MISSING!!! PANIC PANIC PANIC!!!" etc., while the insect is just going "OW OW OW OW OW, MOVE AWAY FROM BAD THING!!!"

So, it kinda depends on what you mean by "care". Do they care less in the sense that they are less conscious and intelligent and therefore can't cognitively comprehend the pain in as many ways as we can? Yes. Do they care less in the sense that they aren't as averse to the negative stimuli as us? Probably not. Given their much simpler world view, it's possible that physical sensations may actually matter more to them because they have little else to care about. So it all boils down to whether you think caring on a higher cognitive level matters or not.

It's also possible that insects, or more likely micro-organisms, are basically Braitenberg vehicles -- which is to say that they aren't sufficiently advanced to reach the threshold of consciousness, and their experience would be more like the way we might experience sleepwalking.

Learning actually doesn't require conscious awareness. The ur-example here is H. M., the man who lost his ability to add things to long-term memory after an invasive surgical procedure. Even though he couldn't consciously recollect new things he learned, he was still somehow able to unconsciously learn certain motor skills. Also, there's the famous anecdote about how he repeatedly met a scientist "for the first time" and shook hands with him the first actual time, but because the scientist had an electric buzzer in his hand, in subsequent "first meetings" H. M. eventually started to refuse to shake hands with the scientist, even though he couldn't explain why. XD

So, what does this all mean? It means that even if insects can feel pain and care about it at a very basic cognitive level, it's probable that their conscious experience is very different from ours, and thus their emotional "feeling" of pain may be difficult to compare to our own. We therefore can't assume that the experience will be the same but more or less intense. They don't have a cerebral cortex (though they have a pallium). They probably can't feel existential angst, or fears that are based on it. I don't think a bee stings an intruder only after a careful deliberation about its sacrifice for the greater good of the hive. It probably does so instinctively, without any consideration of the consequences to itself. Though at the same time, the fact that such insects are social suggests that they have a concept of "other bees who are my friends". That does require some degree of cognitive sophistication. So a bee may have a very simple concept of self -- the part of the world that it can control directly, versus other things. But it apparently doesn't reason much about the self, because bees generally are not very self-interested.

I guess where I'm going with this is that, while sensing pain doesn't scale with brain size, experiencing the negative emotions and cognitions associated with sensing pain does. There is probably a threshold of consciousness, but it appears to be very low. Nevertheless, there may be other thresholds that determine the quality or character of conscious experience. I think, for instance, that the gap between mammals and non-mammals is probably greater than many people realize in terms of their cognitive ability to comprehend their happiness and suffering. But at the same time, I do lean towards thinking that even very primitive animals like insects experience some very basic level of consciousness, and that probably has to be factored into our considerations.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

Re: Sentience and brain size

Postby Brian Tomasik on 2014-02-15T13:11:00

Once again, I'm back only briefly. Your writing output is impressive, and I don't have time at the moment to keep up with all of it, sadly. So as usual, the fact that I don't reply doesn't mean I wouldn't like to given more time.

I think you should consider getting a website or at least a blog. Your writings are too good to be lost in the depths of Felicifia threads. Let me know what I need to do to persuade you. :)

Also, if you're on Facebook, you are warmly encouraged to add me there.

Darklight wrote:Nah, it was pretty quick. I type fast.

That's good!

Darklight wrote:Someone may register pain signals, but might not "feel" like the experience was particularly unpleasant to them.

Pain asymbolia is often mentioned in this regard. I think most people agree it's the emotional reaction that matters rather than the quale itself.

Darklight wrote:Insects generally don't have a concept of their mother. Thus, they don't feel separation anxiety the way a human infant would.

As Robert Elwood says, fish don't have lungs, but they still breathe. Some insects are extremely social and may even develop behaviors that look like isolation syndromes when separated from the group.

Darklight wrote:We would feel pain at the death of the loved one, mainly because we have the capacity to do a bunch of things, like represent close relationships and attachments to other beings, which itself requires an ability to model such beings

You can miss food without being able to model food. Pain upon separation from others could be implemented at a more basic level.

Darklight wrote:Similarly, I would think that animals that can pass the mirror test and are self-aware, like a pig, would suffer more emotional/psychological pain from feeling the threat of existential annihilation, than say, an insect.

I feel like threat of existential annihilation is probably trivial in most animals compared with living in the moment. Maybe this is an extension of my own peculiar psychology, which doesn't fear death.

Darklight wrote:A human being is able to experience emotional pain in many more ways than an insect.

On the flip side, a human may feel less pain at some things than an insect. For instance, when you're sick, you know you'll get better. An animal doesn't know that. When your loved one goes away, you can look forward to his/her return and therefore suffer less in the interim.

Darklight wrote:There's also the Gate Control Theory of pain (both my textbooks go into some detail about it but I'm feeling too lazy today to type it all up) which notes that there are ways for the brain to inhibit pain when it needs to do so. Examples of this include when soldiers have reported feeling no pain in the middle of battle despite severe wounds. So it's possible that the pain that we actually experience isn't just the nociceptors going "OW OW OW", but a complicated interplay of different emotional conditions.

Yeah. :) I mentioned Gate Control Theory briefly in the opening post, though I now think some of what I said there is confused. For my current views on this topic, see "Is Brain Size Morally Relevant?"

Wikipedia says that GCT is outdated, though I'm not sure what has replaced it.

Darklight wrote:I imagine this is why insects that lose limbs don't seem to show nearly as much stress as a mammal that loses a limb (not that I've tested this in any way). Both the insect and the mammal have neurons going "OW OW OW OW", but the mammal also has a complex of neurons going "OMG I'M GOING TO DIE!!! A PART OF ME IS MISSING!!! PANIC PANIC PANIC!!!" etc., while the insect is just going "OW OW OW OW OW, MOVE AWAY FROM BAD THING!!!"

It's plausible evolution hasn't built in the mechanisms for the insect to act upon the broken leg, i.e., avoid using it until it gets better, tend the wounds, etc. It's also possible evolution builds in less deterrence against injury for shorter-lived creatures.

How about a mouse vs. a human? A mouse reacts very badly to a broken leg too. Here it's not clear there's a big difference in behavioral response by the organism itself.

Darklight wrote:Do they care less in the sense that they are less conscious and intelligent and therefore can't cognitively comprehend the pain in as many ways as we can? Yes.

Do you think people with cognitive disabilities feel pain in fewer ways?

Darklight wrote:Do they care less in the sense that they aren't as averse to the negative stimuli as us? Probably not. Given their much simpler world view, it's possible that physical sensations may actually matter more to them because they have little else to care about.

Yes. :)

Darklight wrote:Learning actually doesn't require conscious awareness.

More on that here.

Getting concrete with numbers: How many insects would you let be shocked to death to prevent one human from being shocked to death? (I'm assuming that insects react about as aversively to shocking as humans do, relative to everything they care about.) How about mice vs. humans?

Re: Sentience and brain size

Postby Darklight on 2014-02-16T20:43:00

Once again, I'm back only briefly. Your writing output is impressive, and I don't have time at the moment to keep up with all of it, sadly. So as usual, the fact that I don't reply doesn't mean I wouldn't like to given more time.


No worries. My two skills in life are probably writing and programming, so I've gotten very good at turning my thoughts into words with a minimal amount of editing. I actually probably output a lot more than I should, as it means I'm probably procrastinating on my thesis. >_>

I think you should consider getting a website or at least a blog. Your writings are too good to be lost in the depths of Felicifia threads. Let me know what I need to do to persuade you. :)


I actually do have a website, but it's mostly a personal portfolio rather than a proper site about my ideas. I used to have a few personal blogs, but I found that, in my less mature years, I would waste a lot of time posting emo stuff rather than meaningfully productive things, so I eventually shut down my blogs to cull that unfortunate habit. Also, I feel like there's already a million blogs out there, and I would just be adding yet another blog to people's already long reading lists.

One of the reasons I choose to post on Felicifia is that I like the idea of a utilitarian forum and want to support it with my writing. Rather than making yet another website or blog on the Internet, I feel my time might be better spent supporting existing ones and helping sites like Felicifia reach a critical mass of intellectual activity that will produce synergies, for lack of a better word.

Brian Tomasik wrote:As Robert Elwood says, fish don't have lungs, but they still breathe. Some insects are extremely social and may even develop behaviors that look like isolation syndromes when separated from the group.


Indeed, this is why I discuss bees later on. I'm also aware that cockroaches apparently show distress at social isolation.

Brian Tomasik wrote:You miss food without being able to model food. Pain upon separation from others could be implemented at a more basic level.


Again, it's quite possible.

Brian Tomasik wrote:I feel like the threat of existential annihilation is probably trivial in most animals compared with living in the moment. Maybe this is an extension of my own peculiar psychology, which doesn't fear death.


It's not -that- peculiar, given that Epicurus was apparently the same way (or at the very least he taught that the fear of death was silly).

Brian Tomasik wrote:On the flip side, a human may feel less pain at some things than an insect. For instance, when you're sick, you know you'll get better. An animal doesn't know that. When your loved one goes away, you can look forward to his/her return and therefore suffer less in the interim.


This does complicate things a bit. I admit I'm making a big assumption: that the added pain and pleasure from all the extra considerations a human can have somehow outweighs this.

Brian Tomasik wrote:Wikipedia says that GCT is outdated, though I'm not sure what has replaced it.


I'm not aware of any new theory, but maybe my textbooks are out of date? I took those courses in 2007 though...

Brian Tomasik wrote:How about a mouse vs. a human? A mouse reacts very badly to a broken leg too. Here it's not clear there's a big difference in behavioral response by the organism itself.


A mouse is a mammal with a cerebral cortex, so it's not surprising that it would behave very similarly to a human. Most mammals are actually quite intelligent (the rats in my Experimental Psychology: Learning labs who failed to learn to press the lever for a food pellet notwithstanding XD). I would say a mouse probably feels emotional pain more similarly to a human than an insect does.

Brian Tomasik wrote:Do you think people with cognitive disabilities feel pain in fewer ways?


It depends on the cognitive disability. If it's comprehensive rather than something specific like left field object blindness, then they probably do feel pain in fewer ways, but the experience of pain itself may still be as intense as in non-impaired humans. There are some very unusual conditions, like Congenital Analgesia, in which the person feels no sensory pain at all. I also think someone who's had a lobotomy probably doesn't feel as much pain. Again, it really depends.

Brian Tomasik wrote:More on that here.


I find myself constantly impressed by just how thoroughly you've researched these things. I apologize if I often go over things you've already looked at. While I've read a lot of your essays, I'll admit I've skimmed them for the gist more than given them the due diligence they deserve.

Brian Tomasik wrote:Getting concrete with numbers: How many insects would you let be shocked to death to prevent one human from being shocked to death? (I'm assuming that insects react about as aversively to shocking as humans do, relative to everything they care about.) How about mice vs. humans?


Right now, I'm not confident enough about my knowledge of the subject to make such calculations with any certainty. On the one hand, I did say I consider humans to be utility monsters, which would suggest the ratio could be arbitrarily large... but I hesitate to take that stand because it would potentially justify enormous amounts of suffering. So I guess I have to say I don't know. It depends on whether a strict hierarchy of sentience can be justified, or whether the calculation should instead weigh each sentient being according to its relative sentience level.

If it were the latter, we could assume that a human being is a million times as sentient as an insect; in that case the number could be a million insects to one human. And if a human being is a hundred times as sentient as a mouse, a hundred mice to one human. But again, I'm not sure this can be justified any more than weighing them equally can be. My intuition is that a human should be worth more than a mouse or an insect, but I admit that could be a bias.
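
To make that arithmetic concrete, here's a toy sketch in Python of the proportional-weighting option. The specific ratios (an insect at 1, a mouse at 10,000, a human at 1,000,000) are just the hypothetical numbers from the paragraph above, not empirical estimates:

[code]
# Toy sketch of weighting beings by an assumed "sentience level".
# The ratios are the hypothetical ones discussed above (a human as
# 1,000,000x an insect and 100x a mouse), not measurements.

SENTIENCE_WEIGHT = {
    "insect": 1,
    "mouse": 10_000,     # implied by the two ratios above
    "human": 1_000_000,
}

def tradeoff(species_a, species_b):
    """How many individuals of species_a carry the same total weight
    as one individual of species_b under proportional weighting."""
    return SENTIENCE_WEIGHT[species_b] / SENTIENCE_WEIGHT[species_a]

print(tradeoff("insect", "human"))  # 1,000,000 insects ~ 1 human
print(tradeoff("mouse", "human"))   # 100 mice ~ 1 human
[/code]

Equal weighting, by contrast, would just set every entry in that table to 1.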

On the other hand, if we look at extreme edge cases, I really doubt we should give a photo-diode the same weight we give to humans, mice, or insects, even though, according to Integrated Information Theory, a photo-diode would be the most minimally conscious thing possible. A photo-diode doesn't feel pain or pleasure at its "experience" of discriminating between different levels of light. So I'm inclined to think that there are certain thresholds that need to be reached before we start granting "sentient" things moral worth. Thus, it may well matter more what structures are in the "brain" of these entities than the pure number of neurons. It's imaginable, for instance, to create an artificial neural network with billions of neurons that rivals a human brain in size, but with all those neurons used purely for image classification rather than any sort of pain/pleasure evaluation. Indeed, Google recently built a massive network that just classifies cats in YouTube videos. It had more neurons than a snail, but arguably it was less sentient, because all those neurons made up far fewer and simpler structures.

Thus, while the number of neurons is a good approximation in practice, that's only because evolved brain complexity, in terms of structures, seems to correlate well with the number of neurons.
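
One crude way to express that caveat in Python: weight an entity not by its raw neuron count but by the portion of those neurons sitting in pain/pleasure-like structures. The counts and fractions below are made-up placeholders, purely to illustrate how the cat classifier could come out below the snail despite having more units:

[code]
# Toy sketch: moral weight as (neuron count) x (fraction of neurons in
# affect-related structures). All numbers are made-up placeholders.

def moral_weight(total_neurons, affective_fraction):
    return total_neurons * affective_fraction

snail = moral_weight(10_000, 0.10)             # some affective circuitry (assumed)
cat_classifier = moral_weight(1_000_000, 0.0)  # many units, none doing pain/pleasure

print(snail, cat_classifier)  # 1000.0 0.0 -- more "neurons", but zero weight
[/code]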
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein