Matheny's result, without time-discounting, is that an asteroid screening program gives us 0.4 hly/$ (happy life years per dollar). The hly come from the expected lives of all future human generations that will live on earth if an asteroid does not drive humans extinct within the next 100 years (he assumes that beyond that point we will be able to handle asteroids in any case). Since he assumes the human population remains on earth, the number of expected future humans becomes astronomically larger if we assume a successful human space colonization. I will make a rough estimate of this number by multiplying Matheny's hly/$ result with the ratio between the colonization-adjusted (c-adjusted) total hly number and Matheny's hly number.
We get the total number of human hly by multiplying the population size (hly/y) by its lifetime (y). Matheny assumes a population of 10^10 lasting 1.6*10^6 years, which totals 1.6*10^16 hly. How much larger is the c-adjusted number? Bostrom considers the utility of colonizing our local supercluster, Virgo. According to him the supercluster can support 10^23 humans, assuming a conservative average of just 10^10 humans around each star (note3). This is my c-adjusted hly/y estimate. Assuming this output can be sustained for 10^11 years (note4), the total is 10^34 hly. That gives a ratio between the c-adjusted result and Matheny's hly number of about 10^18.
Matheny's non-discounted hly/$ number was 0.4, so the c-adjusted hly/$ will be 4*10^17. Even if we assume the probability of a happy supercluster colonization is only 0.01, we still get the result that the c-adjusted number is an astronomical 4*10^15 hly/$. This is a huge number: it says that each dollar spent on the asteroid screening program nets us an expected four million billion (4*10^15) happy life years.
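The chain of multiplications above can be written out as a quick sanity check. All inputs are the assumptions quoted in the text from Matheny and Bostrom, not independent data:

```python
# Back-of-the-envelope check of the hly/$ estimate above.

matheny_hly_per_dollar = 0.4            # asteroid screening, no discounting

earth_population = 1e10                 # hly/y: humans living on earth
earth_duration = 1.6e6                  # years the population persists
earth_total_hly = earth_population * earth_duration    # 1.6*10^16 hly

virgo_population = 1e23                 # hly/y: humans across the Virgo Supercluster
virgo_duration = 1e11                   # years of continued star formation
virgo_total_hly = virgo_population * virgo_duration    # 10^34 hly

# The text rounds the exact ratio (6.25*10^17) up to 10^18.
ratio = 1e18
c_adjusted_hly_per_dollar = matheny_hly_per_dollar * ratio            # 4*10^17

p_colonization = 0.01                   # probability of a happy colonization
expected_hly_per_dollar = c_adjusted_hly_per_dollar * p_colonization  # 4*10^15
```

Note that rounding the ratio to 10^18 rather than using the exact 6.25*10^17 changes the final figure by less than a factor of three, which is negligible at this level of precision.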
For comparison, estimated hly/$ for top non-existential-risk utilitarian interventions:
Vegan Outreach: 0.55-25 hly/$ (estimate by Alan Dawrst, "How much is a dollar worth? The case of vegan outreach")
VillageReach vaccination: 0.15 hly/$ (saves a life for $545; assuming, perhaps generously, that one saved life gives 80 hly, we get some 0.15 hly/$)
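The VillageReach figure is just the quoted cost-per-life converted into hly/$ (the 80 hly per saved life is, as noted, a generous assumption):

```python
# Converting a cost-per-life-saved figure into hly/$.
cost_per_life = 545.0        # dollars per life saved (quoted VillageReach figure)
hly_per_life = 80.0          # assumed happy life years per saved life
hly_per_dollar = hly_per_life / cost_per_life
print(round(hly_per_dollar, 2))   # 0.15
```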
My conclusion is that even if an (by utilitarian standards acceptable) supercluster colonization is much less likely than 1/100, asteroid screening and other long-term existential risk interventions will, for a utilitarian, give far more expected utility than even the most effective short-term interventions.
a) Note that nothing more speculative than technology for self-replicating space colonization and some hedonic enhancement is needed for this result (e.g. no singularity, superintelligence, molecular nanotechnology or uploading).
b) In particular, note that with access to conscious AI or brain emulation and powerful energy-harvesting technology we could get up to 17 orders of magnitude more hly/$ (see note3).
c) The 1/100 likelihood assumption includes the possibilities that we cannot get space colonization technology, that we go extinct for some other reason, or that we would not be motivated to colonize that much. Subjectively, I think the biggest obstacle here is the other risks. However, two considerations may make it reasonable not to give survival estimates several orders of magnitude lower than this: (1) many experts on global catastrophic risk believe that the era before we start colonizing space will be especially dangerous; (2) all of these experts give survival estimates above 50% (note5).
d) Asteroid screening may not be the most effective way to reduce existential risk. Compare, for example, the utility of building a self-sustaining bunker. According to Matheny its cost would be of the same order of magnitude as the asteroid program, which would reduce a 1-in-a-billion risk by 50%. Subjectively, a conservative estimate seems to be that a bunker could reduce a total risk of at least 10% from biotech and other technologies to at least 9.5%. If that's the case, then we would get some 7 orders of magnitude more utility. If we also use Bostrom's more liberal estimate, we could get up to 4*10^39 hly/$ from a bunker project.
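The comparison in d) can be sketched the same way, using the risk figures assumed above and taking the two projects' costs as equal:

```python
# Risk reduction per dollar: self-sustaining bunker vs asteroid screening.
asteroid_risk_reduction = 0.5 * 1e-9     # halving a 1-in-a-billion risk
bunker_risk_reduction = 0.10 - 0.095     # total risk cut from 10% to 9.5%

# ~10^7, i.e. the "7 orders of magnitude more utility" in the text
gain = bunker_risk_reduction / asteroid_risk_reduction

bunker_hly_per_dollar = 4e15 * gain                      # ~4*10^22 hly/$
liberal_hly_per_dollar = bunker_hly_per_dollar * 1e17    # ~4*10^39 with note3's scenario
```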
(note1) Minimum, because it is based on the numbers for an asteroid screening program; there may be other ways of reducing existential risk that are even more effective.
(note2) I assume that future hedonic enhancement technology will ensure that the overwhelming majority of all these future life years will be "happy" from a hedonistic utilitarian perspective.
(note3) Bostrom also considers a more ambitious scenario, in which we use advanced molecular nanotechnology to harness the total computing power of each star and run as many human minds as possible. In this case we would get 17 orders of magnitude more hly.
(note4) 100*10^9 years = 10^11 years, assuming the energy output will be roughly constant during the "current era of star formation":
"The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (1013-1014 years), as the smallest, longest-lived stars in our astrosphere, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions."
(note5) These are some of the main sources on existential risk estimates; note that all are far below 100% risk. Of course, if the risk after space colonization has not decreased enough, cumulative risk over long periods of time could get very high. But it seems that with an exponentially increasing expansion most risks would quickly decrease.
Estimated probability of extinction (or similar):
50% (of a disastrous setback of civilization) in the next 100 years according to Sir Martin Rees - Our Final Century/Hour (2004)
30% in the next 500 years according to John Leslie - The End of the World (1996)
"Significant risk" according to Richard Posener - Catastrophe: Risk and Response (2005)
>25% according to Nick Bostrom - "Existential Risks" (2002)