find the most important cause on which to work. I started with climate
change, but when I compared it to global poverty, I found that global
poverty was about two orders of magnitude more cost effective from a
human perspective. From a biodiversity perspective, it was not clear whether climate change or rainforest destruction was the more cost-effective cause to work on. In the last few years, I've become interested in
existential risks. A lot of important work on this topic has been done
recently by people like Nick Bostrom, Eliezer Yudkowsky, and Ray
Kurzweil. Yudkowsky in particular has made the argument that since the
impact of existential risk is so large and the current amount of work
being done is so small, it is the most cost-effective area to work on.
I have gone out on a limb and tried to actually quantify this
statement. One has to make many assumptions in order to do this, so
there is large uncertainty. However, as long as you believe the uncertainty in my analysis is less than 10 orders of magnitude, we can draw some conclusions about priorities.
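To make the 10-orders-of-magnitude point concrete, here is a toy sketch (the specific numbers are illustrative placeholders, not figures from the spreadsheet): if the estimated cost-effectiveness ratio between two causes is larger than the uncertainty factor, the ranking survives.

```python
# Toy sketch: suppose the point estimate says existential risk work is 1e12
# times as cost-effective as a baseline cause (placeholder number, not from
# the spreadsheet), and the analysis is uncertain by 10 orders of magnitude.
point_estimate_ratio = 1e12  # assumed cost-effectiveness ratio vs. baseline
uncertainty_factor = 1e10    # 10 orders of magnitude of uncertainty

# Even in the worst case, existential risk work still comes out ahead:
worst_case_ratio = point_estimate_ratio / uncertainty_factor
print(worst_case_ratio > 1)  # True: the ranking survives the uncertainty
```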
Please see my spreadsheet at: http://utilitarian-essays.com/cost-effe ... ation.xlsx . First I will talk about some
conclusions, and then I will consider the assumptions and uncertainty.
Working on physics disasters, the singularity (which includes trying
to eliminate suffering in simulations and gradients of bliss), gray
goo, and an asteroid large enough to cause extinction are clear
winners from the perspective of economics, biodiversity, and positive
utilitarians (maximizing happiness). The difficulty lies with negative utilitarians, who want to minimize suffering. As has been discussed on this website, these disasters could actually be good things from that perspective. However, if we think the galaxy is likely to be colonized by some civilization eventually, and that a civilization descended from us might produce less suffering than a random one, then even negative utilitarians might be convinced that preventing existential risks is a good thing.
The extremely high cost-effectiveness numbers, blindly extrapolated, imply that we should be spending more than our entire economy on solving these problems. This is problematic, but in reality the effectiveness of putting more and more money into these problems would eventually fall off. An interesting conclusion is that, for a typical American, spending money on preventing existential risks rather than on themselves is far more cost-effective than it is for a very rich American to fund American or even global poverty relief rather than spend the money on themselves. This means that even people of modest means should be giving to charity.
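To illustrate, here is a toy comparison assuming logarithmic utility of consumption, so that a dollar's value to a person scales inversely with their income; the incomes and the existential-risk multiplier below are placeholders of my own, not the spreadsheet's figures.

```python
# Toy comparison under a log-utility assumption: a marginal dollar is worth
# roughly 1/income to the person spending it. All numbers are placeholders.
typical_income = 5e4  # typical American
rich_income    = 5e6  # very rich American
poor_income    = 5e2  # recipient of global poverty aid

# A rich American's dollar does about this many times more good given to
# the global poor than spent on themselves:
poverty_ratio = rich_income / poor_income  # 1e4

# Assumed multiplier for a typical American's dollar given to existential
# risk reduction rather than spent on themselves (placeholder standing in
# for the spreadsheet's much more detailed estimate):
xrisk_ratio = 1e8

print(xrisk_ratio > poverty_ratio)  # True under these assumptions
```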
A very important assumption, which affects the relative importance of human extinction versus a smaller catastrophe, is how much is lost with extinction. Even if you think there is only a 1% chance of going to computer consciousnesses and a 10% chance of going to a Dyson sphere (and not even considering expanding into the galaxy), the expected number of future consciousnesses is at least 1E30.
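As a back-of-the-envelope check, here is the expected-value arithmetic; the 1% and 10% probabilities are the ones above, while the population size in each scenario is my own illustrative assumption.

```python
# Expected number of future consciousnesses. Probabilities are from the
# text; the per-scenario population sizes are illustrative assumptions.
p_sim,   n_sim   = 0.01, 1e32  # computer consciousnesses scenario
p_dyson, n_dyson = 0.10, 1e31  # Dyson sphere (ignoring galactic expansion)

expected = p_sim * n_sim + p_dyson * n_dyson
print(f"{expected:.0e}")  # 2e+30 -- at least 1E30
```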
The basic framework for the cost-effectiveness of working on disasters where we have little direct information is to scale with the expected damage and inversely with the amount of current effort. I even allow you to change the functional form, for instance so that cost-effectiveness scales inversely with the square root of the current effort, but this only changes the result by a couple of orders of magnitude.
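A minimal sketch of that framework, with placeholder damage and effort figures:

```python
# Cost-effectiveness of marginal work on a disaster: scales with expected
# damage and inversely with current effort (exponent=1.0), or inversely
# with the square root of current effort (exponent=0.5).
def cost_effectiveness(expected_damage, current_effort, exponent=1.0):
    return expected_damage / current_effort ** exponent

expected_damage = 1e30  # e.g. future consciousnesses lost (placeholder)
current_effort = 1e4    # e.g. dollars per year on the problem (placeholder)

print(cost_effectiveness(expected_damage, current_effort))       # 1e+26
print(cost_effectiveness(expected_damage, current_effort, 0.5))  # 1e+28
```

With these placeholders the two functional forms differ by a factor of sqrt(1e4), i.e. two orders of magnitude.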
Of course, there is uncertainty in extrapolating cost-effectiveness from one problem to another. I have used gray to indicate uncertainty greater than one
order of magnitude, and red to indicate uncertainty greater than 10
orders of magnitude. Many of these numbers are knowable, like the
current effort on different problems, so I am very open to people's
input. Also, please play around with the assumptions, and see if the
conclusions change. Justifications for many of my assumptions are in the comments for the relevant cells (just mouse over to see them).
The main differences between the utilitarian and economic perspectives are that utilitarians value all humans equally and do not discount the future.
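To see why the no-discounting assumption matters so much, consider what a conventional discount rate (3% per year is just an example) does to benefits 1,000 years from now:

```python
# An economist discounting at 3% per year values a benefit 1,000 years out
# at a tiny fraction of its face value; an undiscounted utilitarian counts
# it at full value, which is what makes the far future dominate.
discount_rate = 0.03  # illustrative annual rate
years = 1000
factor = (1 + discount_rate) ** -years
print(f"{factor:.1e}")  # 1.5e-13
```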