Optimal giving, x-risk, and the ethics of disagreement


Postby GregoryLewis on 2011-10-16T16:24:00

Hello all. Half-baked thoughts - my apologies in advance.

There's lots of discussion about how valuable x-risk charities (SingInst, FHI, etc.) are. Proponents point to the massive divergence in utility between a happy singularity and extinction (or "I have no mouth and I must scream" malevolence) and say that even if these charities are really ineffective, and even if the terminal events are extremely unlikely, and even if your error bars for all these things are massive, your expectation for donating to these charities is still zillions of times greater than for health-based charities like VillageReach or SCI.

Sceptical people (like me) think there is something fishy about this sort of reasoning - on its face, it seems to generalize to Pascal's wager, and my intuition (dirty word, I know) is that it simply sucks to spend all your money changing the balance of existential risk by (if you're lucky!) a nigh-infinitesimal margin, in the same way it sucks to spend your Sunday in church on the off-chance of infinite payoff. Far better, surely, to guarantee hundreds of lives saved by giving to NTDs or similar. So, in the time-honoured tradition, let's go fishing for philosophy to shore up these intuitions after the fact.

One option is risk-aversion, which I quite like (I far prefer a very negatively skewed distribution of utility across the worlds in my future cone over a higher average but very positively skewed distribution - I want my universe to be happy, not just sharing an antecedent with some really happy universe). But let's ignore that. Another option is playing statistics as to why high unbiased estimates with massive error bars shouldn't be trusted, even if you're only interested in maximizing the aggregate (I think both LW and GiveWell have chatted about this). But let's also ignore that: I'm not convinced we can use winner's-curse-type objections, if only because of the repeatability of people thinking "woah, x-risk is really important!"

Epistemic disagreement can ride to the rescue. The situation about x-risk seems fairly described as one of epistemic peers in disagreement: similarly able and careful reasoners, with similar access to the data, are drawing completely different conclusions. Unless we have lots of credence that we are better than our epistemic peers (which is pretty close to false by definition), there are two candidate epistemic practices I can think of.

1) Correction over the sum of beliefs. If only you and Nick Bostrom exist, and your credence that your utility calculations are right and his are wrong is about 60%, then your final credences should be a 60:40 weighted sum between you and him. In real life, where there are lots of people, you need to do a much more complicated weighting (with lots o' Bayes to avoid Dutch book situations). In effect, you smear out your credence estimates: if you're convinced that x-risk is really important, but accept that the odds ratio of your being right over an arch x-risk sceptic is only 1.5, you need to adjust downwards, and so on.
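
(A minimal sketch of that weighting - every credence and weight below is invented purely for illustration:)

# Linear pooling of credences: weight each person's credence by your
# odds that they, rather than you, have got the calculation right.
def pool(credences, weights):
    """Weighted average of credences; weights are normalised internally."""
    total = sum(weights)
    return sum(c * w for c, w in zip(credences, weights)) / total

# You put p(x-risk giving beats health giving) at 0.2; Bostrom puts it
# at 0.9, and you give yourself 60:40 odds of being the one who's right:
print(pool([0.2, 0.9], [0.6, 0.4]))  # -> 0.48, smeared towards his estimate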

2) Rousseau-esque renunciation. I think it was in The Social Contract that Rousseau said that, in cases where our beliefs contradict the popular will, we should conclude our belief must be mistaken. This isn't as crazy as I once thought: we have limited insight into our own belief-forming practices, but it's a pretty safe bet our cranial contents are a melange of rational and irrational elements. If so, then perhaps we should say that the central cluster of epistemic peers is most likely to be right, on the hope that irrational quirks will cancel each other out. If we ourselves are outliers, and even if we can't help but hold the outlier position (it might be a fact of our psychology that our brain, trying to be as rational as it can, cannot help but find the outlier position more persuasive than the central cluster), we should still act as if the central cluster is correct, because it is more likely that an outlier like ourselves has fallen victim to a quirk of irrationality we lack insight into than that the bulk of our epistemic peers have.
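
(A toy simulation of that hope - the truth value and noise level are invented: give a hundred peers the same underlying truth plus independent irrational noise, and the centre of the cluster lands far closer to the truth than an outlier does.)

import random

# Each peer's credence = truth + an independent 'irrational quirk',
# clipped to [0, 1]. The quirks roughly cancel at the cluster's centre.
random.seed(0)
truth = 0.2
peers = sorted(min(max(truth + random.gauss(0, 0.15), 0.0), 1.0)
               for _ in range(100))
median = peers[50]
outlier = peers[-1]  # the most extreme believer in the room
print(abs(median - truth))   # small: the central cluster is near the truth
print(abs(outlier - truth))  # large: the outlier is mostly quirk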


Back to x-risk:

If 1) is the right course to take, then evaluating x-risk giving becomes even murkier. We need to somehow work out what lots of other reasonable people think about p(singularity-esque-event), p(x-risk), and p(charity-doing-any-good), work out how confident we are that we're right rather than they are, and correct our credences accordingly. Guess no one said reflective equilibrium would be easy...

It's a toss-up whether x-risk would get the nod here, but probably so. Again, we can point to the ridiculously vast payoff to defray worries about low probability, and the 'spread' isn't a big deal compared to the central estimates. Even if we're exceptionally modest, and reckon that non-utilitarians (and non-transhumanists/x-riskers) are only just more likely to be wrong than we are, the integral of our corrected calculations is still ginormous for x-risk over anything else. The reason is that the mere fact we hold these beliefs means we must think we are more likely right than someone who disagrees with us (if not, why are we disagreeing with them?), so the degree of smearing can only be so much.

(Indeed, 1) is probably more effective in shifting sceptics - if I think I'm only a smidgen more likely to be right than Bostrom or Yudkowsky, then that drags up my corrected credence estimates to a level that makes x-risk giving a very high-expectation activity.)
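
(To see the arithmetic, here's a toy calculation - every figure below is made up for illustration:)

# Toy expected-value comparison after 'smearing'; all numbers invented.
lives_if_xrisk_averted = 1e10    # stand-in for an astronomical payoff
p_donation_helps = 1e-9          # a sceptic's tiny probability estimate
lives_per_health_donation = 1.0  # a near-guaranteed life saved

# Defer heavily to sceptical peers by halving your credence:
ev_xrisk = 0.5 * p_donation_helps * lives_if_xrisk_averted
print(ev_xrisk)                    # 5.0 expected lives
print(lives_per_health_donation)   # 1.0 expected life - x-risk still wins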

If 2) is right, though, x-risk is almost certainly a bad idea. X-riskers are certainly outliers, and (I suspect) those who believe "giving money to x-risk charities will reduce x-risk" are even smaller outliers. So either the bulk of non-x-riskers are much less rational than x-riskers, and consequently aren't close to being epistemic peers, or the x-riskers are outliers, and should think that they are probably victims of some irrational quirk.

Obviously, picking the community of epistemic peers is crucial, and this might well be issue-specific (I'd be inclined to think the vast bulk of people are not my epistemic peers re. religion, for example, so I don't have to go to church, but they are re. keeping promises or whatever). However, I find 2) generally persuasive: when even the smartest of us, explicitly striving to be rational, still end up (in effect) agreeing to disagree, we should stop treating our judgements as part of our rational selves, and instead treat them as the output of a not-very-reliable channel to the truth. On this measure, giving to NTDs or other effective health interventions (which seems to command agreement from everyone asked about it) completely beats x-risk (which I suspect 'the man on the street' would think utterly crazy, and even experts are divided about).

(Final aside: even if this is true, it doesn't necessarily demand we dump the x-risk stuff. There's a case made elsewhere - Hanson, I think? - that having cantankerous outliers stubbornly combing their own patch of idea space is a better search strategy than everyone flocking to the central cluster - the outliers cost little, and might find lower troughs in the confirmation space for us to switch into. So (and I find this conclusion utterly mind-bending) an x-risker might find x-risk by far the most important issue facing humanity, yet realise this is probably due to an irrationality quirk he doesn't have insight into, yet still continue trying to limit x-risk just as before, because he realises that even though he is probably wrong, it is better for him to explore this far-flung bit of idea space than to join the crowd. What this would mean for charitable giving I have no idea: there'd be a trade-off between maximizing expectation and our guesstimate that blowing money on unlikely-to-be-valuable outliers will yield a breakthrough to even more effective utilon generators.)

Anyway, I've run my brain into the ground here. Anyone with better ideas?


Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Brian Tomasik on 2011-10-17T05:16:00

Hey Gregory, thanks for the post!

I guess my main reaction is that I don't see how disagreement theory applies much to this case, because the principal divergence between the men on the street and the x-riskers is about values: x-riskers take an undiscounted sum of future utility, while ordinary people (including economists, explicitly) discount the future, and they often don't even care about animals or people beyond their communities. Other obvious differences: (1) Most people aren't utilitarians or even consequentialists. (2) Most people are risk-averse, as you suggested, so that they don't buy Pascalian wagers.

It seems to me all of the above are questions about what you value, rather than what the truth is. So looking at other people doesn't much help, unless you place value on trying to value what other people value.

As far as my own thoughts on x-risk, I'm not necessarily supportive of trying to reduce it -- not because I don't think the Pascalian calculation works (I think it probably does), but because I'm worried about the massive increases in suffering that could result. Here is one thread on that topic.

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Arepo on 2011-10-17T12:41:00

Greetings from a fellow sceptic. I’ve discussed this a fair bit on here, so I won’t go into it at length, but my objections to the ER crowd can be distilled to something like this (all interrelated):

a) Precisely what you said about error bars. They make enormous assumptions based on almost nonexistent data about what the future will look like. As Alan’s discussed, it could be heavily negative.
b) I am not particularly impressed by the prominent names in the field. Both Bostrom and Yudkowsky are actively scorned by most of the mathematicians/statisticians/theoretical physicists I know who've seen their work, sometimes with clear examples of their poor practice (eg Bostrom dismissing things as 'implausible', or failing to cite the originators of his ideas, instead citing one of his own previous papers that discusses them), sometimes with the complaint (related to a) that they just don't have well-defined premises. They also seem to have a heavy bias against conventional views, which I rarely see them try to justify - the thought seems to be that large amounts of money already go into these, which is true, but GWWC and GiveWell offer a great example and stark warning that just because people have been pouring money into an end doesn't mean they've been doing it *well*. Giving money to poverty reduction now is thousands of times more effective than it was even a couple of years ago, whereas I don't know of any concrete result SIAI and co could offer to justify their funding.
c) I’m not convinced by the views they’ve advanced. Eg coherent extrapolated volition seems like a trendy description of preference utilitarianism, and preference utilitarianism seems obviously less sound than standard hedonistic util, since it invokes obscure metaphysics with no physical correlate to claim that the universe somehow gets ‘better’ even if no-one in it actually feels anything different. HU only has to invoke obscure philosophy of mind with no physical correlate, which is still unsatisfactory, but given that most of us are clear that we experience better and worse mental states, seems like a far sounder starting point to me.
d) (related to b and c) It’s far from clear that ER organisations offer the best way of reducing ER. Nuclear/biological war and deliberately malicious gray goos etc all tend to happen because people are unhappy with each other. In a world with fewer people suffering and more people encouraging each other to help each other *now*, you have far less motivation for anyone to make malicious use of these technologies, or to build malicious versions of them in the first place.

Not convinced your objections work if mine don't, though. 2) seems like an extreme version of 1), with little reason to believe it. I would much rather sum the beliefs of people who've given the topic serious thought than of every random person on the street, when most of the world is effectively logically illiterate. As for 1), it's true enough, but if you assign any credence to their Pascalian wager - ie a dollar given to them is worth thousands of times more in expected utility than a dollar given elsewhere - then reducing your credence in their views by 50% isn't going to have much effect on your expectation of that dollar relative to alternative causes.
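
(Toy numbers: if a dollar to SIAI is worth 1,000 times a dollar to VillageReach in expected utility, halving your credence in the x-riskers still leaves it worth 500 times as much - 1000 × 0.5 = 500 - so the correction barely moves the ranking.)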
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Brian Tomasik on 2011-10-19T07:06:00

Arepo wrote:I am not particularly impressed by the prominent names in the field.

Even if your criticisms are true, I think Bostrom and Yudkowsky are some of the smartest people on earth, and they've both profoundly shaped my outlook on the world (as well as the outlooks of scores of other people). That's definitely worth something. However, they'll both get funding regardless of what happens to SIAI or FHI, so they don't represent the marginal value of dollars donated.

Arepo wrote:and preference utilitarianism seems obviously less sound than standard hedonistic util, since it invokes obscure metaphysics with no physical correlate to claim that the universe somehow gets ‘better’ even if no-one in it actually feels anything different.

Glad you agree. :) It's funny how modern academic utilitarians consider classical utilitarianism dead, with preference utilitarianism the only plausible candidate. Seems most of the people around here are more fond of good old Bentham.

Arepo wrote:I’m not convinced by the views they’ve advanced.

I'm not a fan of many of Yudkowsky's moral beliefs: Average utilitarianism instead of total, caring about "saving lives" independent of hedonistic considerations, opposition to 'crude' utilitronium (aka wireheading), support of CEV as a metaethically desirable (rather than just instrumentally useful) value-resolution process, and uncertainty about whether we should aim to abolish all suffering. I think he's also epistemically crazy to place the relatively high probability he does on living forever (or at least a long time).

That said, his ethical views are better than average. At least he's a "shut up and multiply" consequentialist, and I think he would eliminate wild-animal suffering, as well as suffering by sentient computational subroutines. However, he doesn't think frogs are conscious.

Arepo wrote:It’s far from clear that ER organisations offer the best way of reducing ER. Nuclear/biological war and deliberately malicious gray goos etc all tend to happen because people are unhappy with each other.

I think nixing nukes, guarding against grey goo, and planning for paperclippers do yield higher expected reduction in existential risk per dollar. Much easier to keep scissors out of the reach of children than just to ask them not to run with scissors. And most serious terrorists come from rich families.

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Arepo on 2011-10-19T09:40:00

Alan Dawrst wrote:Even if your criticisms are true, I think Bostrom and Yudkowsky are some of the smartest people on earth,


I've heard this a lot (not necessarily about them, but as a general claim about philosopher x whom one agrees with), and have become convinced that it's little more than a claim that you agree with them or like the sound of what they say. Ie it describes a blend of how charismatic they are, how similar their starting premises are to yours, and analytical ability, rather than straight analytical ability. Yudkowsky is extremely charismatic, but that just means his expected analytical ability is significantly lower than his level of success implies.

What *can* it actually mean to say a philosopher is smarter than another, when virtually by definition nothing they do or say can be directly tested?

Alan Dawrst wrote:Glad you agree. :) It's funny how modern academic utilitarians consider classical utilitarianism dead, with preference utilitarianism the only plausible candidate. Seems most of the people around here are more fond of good old Bentham.


Yep. I have the sense hedonistic utilitarianism is coming back into fashion, if not the moniker 'classical' - Toby Ord seems to be one, as are Torbjörn Tännsjö and Alastair Norcross. Maybe it's just a reflection of my own knowledge (Tännsjö has certainly been around for a while), but I would struggle to name three professional HUs who were around a decade ago.

Alan Dawrst wrote:Much easier to keep scissors out of the reach of children than just to ask them not to run with scissors.


But perhaps better still to reduce the number of children ;)

Alan Dawrst wrote:And most serious terrorists come from rich families.


What's your source for this? What I've read has been equivocal on the issue. 'Serious terrorist' sounds like it means something like 'successful terrorist', which I would expect to be more an issue of competence than motivation, and you'd clearly expect to find competence of all types coming more from rich families.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Brian Tomasik on 2011-10-20T08:17:00

Arepo wrote:What *can* it actually mean to say a philosopher is smarter than another, when virtually by definition nothing they do or say can be directly tested?

Creativity. Clear writing. Ability to manipulate arguments quickly and accurately. Knowledge of other fields (physics, math, etc.).

What does it mean to say that Shakespeare was one of the smartest people in history?

We needn't argue further. As you say, these debates involve a large degree of opinion, and this point isn't important for the discussion at hand.

Arepo wrote:I would struggle to name three professional HUs who were around a decade ago.

Yew-Kwang Ng is the only one I can think of.

Arepo wrote:What's your source for this?

Oh, no particular source, but searching something like "terrorism poverty" brings up relevant articles, e.g., this by Alan B. Krueger and Jitka Malečková.

Arepo wrote:What I've read has been equivocal on the issue.

Yeah, it obviously is complex. I'm sure there are many instances in which poverty, disease, etc. do contribute to terrorism.

Arepo wrote:'Serious terrorist' sounds like it means something like 'successful terrorist',

I meant people or groups that seem capable of creating existential risks, either directly (biological weapons, nanotech, etc.) or indirectly (by aggravating a major power through an attack that leads to nuclear war). People in poor countries killing other people in poor countries don't typically fall into these categories, except when the countries have nuclear weapons, like India and Pakistan. Car-bombers in Iraq probably don't contribute much to existential risk, at least compared with, say, the 9/11 attacks.

Arepo wrote:and you'd clearly expect to find competence of all types coming more from rich families.

Exactly. So more wealth / education / technological knowledge lead to more serious terrorism.

Of course, we already knew that. Existential risk would be smaller per year if humans were still in the Dark Ages. But I guess the proper measure isn't "probability of extinction per year" but "probability of extinction per unit of progress toward space colonization," after which extinction risks will abate somewhat. You need technological knowledge to get to that point, so higher risk of extinction per year isn't necessarily a bad sign for those who want to prevent extinction.
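
(One way to make that measure concrete - a toy model of mine, not anything anyone has endorsed: with a constant annual extinction risk r and T years until colonization, cumulative risk is 1 - (1 - r)^T, so a technology that raises r but shrinks T can still lower the total.)

def cumulative_risk(annual_risk, years_to_colonization):
    """P(extinction before colonization) under a constant annual risk."""
    return 1 - (1 - annual_risk) ** years_to_colonization

# Dark Ages: tiny annual risk, but colonization is millennia away.
print(cumulative_risk(1e-4, 10_000))  # ~0.63
# High tech: ten times the annual risk, colonization within a century.
print(cumulative_risk(1e-3, 100))     # ~0.10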

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Arepo on 2011-10-20T09:08:00

Alan Dawrst wrote:What does it mean to say that Shakespeare was one of the smartest people in history?

Very little to me, to be honest. I think claims like this are simply better off broken down into modest, falsifiable claims - eg 'Shakespeare evidenced the widest vocabulary and has retained a greater degree of public interest than any other writer of his era'.

Alan Dawrst wrote:Yew-Kwang Ng is the only one I can think of.

I had never heard of him! Have you read his book? It looks interesting, though I don't think I can justify £60 on it :?

Alan Dawrst wrote:Oh, no particular source, but searching something like "terrorism poverty" brings up relevant articles, e.g., this by Alan B. Krueger and Jitka Malečková.


I skimmed it fairly briskly, but didn't spot any direct support for what you're saying, except that the Hezbollah militants tended to be slightly better off than the average Lebanese citizen.

Alan Dawrst wrote:Of course, we already knew that. Existential risk would be smaller per year if humans were still in the Dark Ages. But I guess the proper measure isn't "probability of extinction per year" but "probability of extinction per unit of progress toward space colonization," after which extinction risks will abate somewhat. You need technological knowledge to get to that point, so higher risk of extinction per year isn't necessarily a bad sign for those who want to prevent extinction.


Good point, though it doesn't yet indicate which causes offer the best ratio.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Brian Tomasik on 2011-10-21T06:17:00

Arepo wrote:I had never heard of him! Have you read his book? It looks interesting, though I don't think I can justify £60 on it :?

I've just read articles (several of which he sent me for free when I wrote to him :)). He's the author of the classic paper "Towards Welfare Biology," which argues that there's a predominance of suffering over happiness in nature.

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby GregoryLewis on 2011-10-27T22:10:00

Sorry for my delay in getting back to you guys. I've been following the wider discussion re. x-risk charities and their thinkers, and don't have anything of my own to add. But I would like to talk a bit more about my pet counter-argument. ;)

@ Alan Dawrst

I agree that lots of the difference between x-riskers and the rest boils down to value-beliefs. But value-beliefs are surely still beliefs, and I think most people are willing to acknowledge some chance their beliefs about ethics are mistaken: maybe util is just wrong, or your preferred subspecies of util is, etc. And so long as there are right answers to these questions, then I think disagreement stuff is at least 'in play'. I could be missing something...


@ Arepo

You are right re. 1). My mistake.

My development of 2) was very sketchy. I agree you should only apply 2) to your epistemic peers (and superiors) - although I'm perhaps more optimistic about the rationality of the 'man on the street' than you are. What I was thinking was something like this:

Suppose you and a bunch of your epistemic peers think about X, and a central cluster forms around some belief about it, but you are one of the outliers. I think standard epistemology of disagreement would say it's fine for you to keep your belief and hold that the central cluster is wrong. I think a better approach is to assume that your brain is doing something erroneous. In a sense, you stop treating the deliverances of your rationality as the highest court of your beliefs, and instead treat them as one result to be weighed against others.

It is tricky to apply that to x-risk, because I'd probably want to sub-stratify by a) people who've heard of the x-risk stuff and b) some basic criterion of general rationality. On those measures it may well be that the 'central cluster' is actually the FHI folks. There are other tricky problems (people with particular error-tendencies in their brains might be more likely to come across particular ideas, etc.). I'm unsure. But I'm definitely sceptical about giving to x-risk.


Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Brian Tomasik on 2011-10-28T10:33:00

GregoryLewis wrote:And so long as there are right answers to these questions, then I think disagreement stuff is at least 'in play'.

Well, that's the thing: I don't think there are 'right answers' to these questions, any more than there's a 'right answer' to the question of whether chocolate tastes better than vanilla, Beethoven was a better composer than Mozart, or a toothache is worse than a headache. I'm an emotivist about ethics. There's no more objectively right reason to maximize happiness than to maximize suffering, or to maximize paperclips. ;)

(Of course, I really, really don't want people to cause suffering. But among a community of superintelligent suffering-promoters, my view is the one that would be discarded as the outlier.)

Re: Optimal giving, x-risk, and the ethics of disagreement

Postby GregoryLewis on 2011-10-31T22:00:00

@ Alan Dawrst

Fair enough, but that seems to just shift matters one step back. Because so long as you assign some possibility moral realism could be false, then I think we can repeat similar sorts of moves.

I know a bit about emotivism (or at least its descendants, like norm-expressivism etc.), but my understanding is that emotivism doesn't really have much normative content: no emotional reactions are better than any others. If so, then if emotivism is true, it doesn't really matter what your ethical values/beliefs are. So you should just play it safe and assume it is wrong, because if it is wrong you might be losing big time, but if you are right you draw anyway. ;)

Obviously feel free to correct me if I've mistaken you or confused myself.


Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Argothair on 2011-11-01T08:07:00

GregoryLewis wrote:So (and I find this conclusion utterly mind-bending) an x-risker might find x-risk by far the most important issue facing humanity, yet realise this is probably due to an irrationality quirk he doesn't have insight into, yet still continue trying to limit x-risk just as before, because he realises that even though he is probably wrong, it is better for him to explore this far-flung bit of idea space than to join the crowd.


That sounds correct to me! The upside of all this mind-bending is that an x-risker can quietly devote a few years to decreasing x-risk without feeling desperate or marginalized or proudly heroic. The x-risker, like zir epistemic peers, is doing zir part to improve the world as ze reckons that this can and should be done, i.e., by zir own lights. Nothing would be gained by you (or the x-risker) trying to improve the world some other way, at least in the absence of evidence that was weighty enough to truly change your mind, and not just to induce a bit of skepticism. You (or the x-risker) might spend some energy recruiting or fundraising or explaining to people why your point of view makes relatively more sense, but the bulk of the energy would go to the cause itself, and there would be no need to stay up nights worrying about whether you've got it right.


Re: Optimal giving, x-risk, and the ethics of disagreement

Postby Brian Tomasik on 2011-11-01T10:14:00

GregoryLewis wrote:Because so long as you assign some possibility moral realism could be false, then I think we can repeat similar sorts of moves.

You mean "possibility moral realism could be true"? Yeah, but the thing is that even if moral realism were true (whatever that means), I wouldn't care. Why should it matter to me what the moral fabric of the universe dictates?

GregoryLewis wrote:If so, then if emotivism is true, it doesn't really matter what your ethical values/beliefs are. So you should just play it safe and assume it is wrong, because if it is wrong you might be losing big time, but if you are right you draw anyway. ;)

Well, if emotivism is true, there's no absolute sense in which one value is better than another. But I don't know what it would look like for values to be absolute anyway -- as I said above, why should I care what the universal values are? I care about what I care about. I think suffering is horrible and want to prevent as much of it as I can.

As an analogy: You can like Beethoven without believing that his music is "objectively" better than, say, the sound of fingernails on a blackboard.

GregoryLewis wrote:Obviously feel free to correct me if I've mistaken you or confused myself.

Neither. :) Thanks for the good discussion.

