Summary
Nick Bostrom's "Astronomical Waste" suggests that a reasonable estimate of the number of human minds that could be computed in the future is ~10^38 at any given moment. This figure seems potentially high to me because it assumes colonization of the entire Virgo Supercluster, and I also don't know if the estimates of the computation per star and computation per brain are overly optimistic. An extremely skeptical outlook could reduce the expected number of minds to a much smaller magnitude -- possibly even not substantially more than the number of biological minds. However, in practice, we have to look at expected values rather than conservative estimates, and the Pascalian tail scenarios will ensure that the expected importance of influencing the future remains high even if we think our likely impact on it is low.
Note: As many readers know, I fear that massive amounts of future computation may lead to astronomical amounts of suffering, and I'm not a fan of colonization for this reason. I would prefer to avoid colonizing our supercluster, but, barring that, I hope we can at least reduce the number of suffering computations that take place.
Introduction
Predictions are hard, especially about the future. There's immense uncertainty about how things will play out and whether something we do now will have a good or bad impact. In addition, entropy diffuses the influence that we do have, such that over time, the probability that what we do now still matters grows smaller and smaller, except where specific mechanisms prevent this damping effect, such as a goal-preserving AGI.
These factors militate against trying to influence the future, especially the far future. However, the usual reply is that the amount of consciousness (including suffering) in the future could be so vast that, even if it's unlikely to come about, and even if our control over it is likely to be small, our expected impact by working on futurism issues still dominates present-day considerations. Thus, we'll tend to view present-day considerations mainly from the lens of how they shape the future.
I think all of this is correct, but I'm curious just how big the number of future minds could be using a truly conservative estimate. In "Astronomical Waste," Bostrom talks about 10^38 human-level minds that could exist for "the lifespan of galaxies," i.e., "billions of years." 10^38 is a product of 10^13 stars in the Virgo Supercluster times 10^42 computing operations per second per star divided by 10^17 operations per second per human mind. Let's examine these numbers.
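Just to make the arithmetic explicit, here's a minimal sanity check in Python. The figures are Bostrom's; the code merely reproduces his multiplication:

```python
# Bostrom's figures from "Astronomical Waste"
stars = 1e13               # stars in the Virgo Supercluster
ops_per_star = 1e42        # computing operations per second per star
ops_per_human_mind = 1e17  # operations per second to run a human mind

minds = stars * ops_per_star / ops_per_human_mind
print(f"{minds:.0e} human minds at any given moment")  # ~1e+38
```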
10^13 stars
The Virgo Supercluster is big. It's ~10^8 light-years across, and given that even the best space travel would probably be many times slower than light, it could take billions of years to cross. By the time we finished, the party might already be over. (Here I use "party" as a figure of speech; in fact, the computations of the future may contain vast amounts of suffering.)
Alpha Centauri, the nearest star, is 4.3 light-years from us, and one article estimates: "The Voyagers [spacecraft] aren’t aimed toward Alpha Centauri, but if they were, they’d take tens of thousands of years to get there." So maybe spacecraft would be ~10^4 times slower than light, which means it could take 10^12 years to cross the Virgo Supercluster.
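A quick sketch of this travel-time estimate; the ~10^4 slowdown factor is my own rough inference from the Voyager figure (4.3 light-years in tens of thousands of years):

```python
distance_ly = 1e8  # diameter of the Virgo Supercluster in light-years
slowdown = 1e4     # assumed spacecraft speed relative to light (Voyager-like)

crossing_time_years = distance_ly * slowdown
print(f"{crossing_time_years:.0e} years to cross")  # ~1e+12
```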
The scenario for the end of the universe currently considered most likely is the Big Freeze, in which "Over a time scale on the order of 10^14 years or less, existing stars burn out, stars cease to be created, and the universe goes dark." If this is true, then even if it took 10^12 years to cross the Virgo Supercluster, we'd still have 10^14 - 10^12 ~= 10^14 years left.
Of course, this isn't the only possibility. In the Big Rip scenario, "the end of the universe is approximately 22 billion years from the present," although this particular number "is not considered a prediction, but a hypothetical example." If it were correct, though, the universe would last ~10^10 more years, which means we might not be able to colonize very much of the Virgo Supercluster before time ran out. Anyway, note that "Experimental evidence currently suggests that" the Big Rip is not accurate.
10^42 computing operations per second per star
For this figure, Bostrom cites R. J. Bradbury's "Matrioshka Brains" paper, which I can't find immediately on the web. Without having read it, I remain mildly skeptical because there may be large differences between theory and practical feasibility.
Maybe we can take a bottom-up approach. Today's typical computers run at speeds of tens of gigaflops (say, 10^10 flops), and there are probably about as many computers as there are humans (~10^10?), for a total of ~10^20 flops today. It seems reasonable to expect we can increase both the speed per computer and the number of computers, so maybe ~10^25 to ~10^30 flops would be a conservative estimate for future computing power on Earth. (Here I'm assuming a flop is the same as what Bostrom calls an "operation per second.")
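Here's that bottom-up estimate as a sketch. The growth factors of 10^5 to 10^10 are loose assumptions of mine, not figures from any source:

```python
flops_per_computer = 1e10  # tens of gigaflops per machine today
num_computers = 1e10       # roughly one computer per human (assumption)

todays_flops = flops_per_computer * num_computers  # ~1e20

# Assumed future growth: 10^5 to 10^10 combined improvement in speed
# per computer and number of computers (my guess, not Bostrom's).
low, high = todays_flops * 1e5, todays_flops * 1e10
print(f"future Earth computing: {low:.0e} to {high:.0e} flops")
```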
10^17 operations per second per human mind
Since I may care roughly as much about insect-level minds as about human minds (pending discoveries about whether insects are sentient), we can actually reduce this number from 10^17. I don't know exactly how the scaling would work, but assume it's roughly proportional to the number of neurons. A human has ~10^11 neurons, compared with ~10^5 in a fruit fly. The ratio is 10^6, so maybe it would require only ~10^17 / ~10^6 = ~10^11 operations per second to simulate a fruit fly.
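Spelling out the neuron-proportional scaling assumption:

```python
human_neurons = 1e11
fruit_fly_neurons = 1e5
ops_per_human = 1e17

# Assume required ops/sec scale linearly with neuron count (a big assumption).
ops_per_fly = ops_per_human * (fruit_fly_neurons / human_neurons)
print(f"{ops_per_fly:.0e} ops/sec per fruit fly")  # ~1e+11
```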
Combining the numbers
On Earth, assuming ~10^25 to ~10^30 flops, and ~10^11 flops per fruit fly, we would have ~10^14 to ~10^19 (suffering) fruit flies, which is at most about the same as the number of insects in nature. If we use the original 10^42 flops estimate, then we could simulate 10^31 (suffering) fruit flies on Earth alone, which is noticeably bigger.
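Putting the pieces together under the same assumptions as above:

```python
ops_per_fly = 1e11  # from the neuron-scaling estimate

for earth_flops in (1e25, 1e30, 1e42):
    flies = earth_flops / ops_per_fly
    print(f"{earth_flops:.0e} flops -> {flies:.0e} simulated flies")
# 1e+25 -> 1e+14, 1e+30 -> 1e+19, 1e+42 -> 1e+31
```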
However, to this 10^31 figure, we have to apply major discounts for (a) the probability that a computational future doesn't happen, (b) the probability that the computing power is used for non-sentient processes, (c) the probability that the sentient minds are much bigger than fruit flies, (d) the probability that what we do now isn't useful, (e) damping factors for our influence even if what we do now is useful, and (f) many other things. Multiplying these together could conceivably give a combined discount as small as 10^-10, say. Of course, there are plenty of discounts and damping factors for preventing the suffering of biological insects that already exist on Earth too, but not as many as for future computational suffering.
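For concreteness, here's one hypothetical way such discounts could multiply out to ~10^-10. Every individual factor below is an illustrative placeholder, not a number I'm defending:

```python
# Hypothetical probabilities that each condition for future
# computational suffering holds (made-up illustrative values).
discounts = {
    "computational future happens": 0.1,
    "computing power runs sentient minds": 0.1,
    "minds are fruit-fly-sized or smaller": 0.1,
    "what we do now is useful": 0.01,
    "our influence survives damping": 0.01,
    "everything else": 0.001,
}

combined = 1.0
for factor in discounts.values():
    combined *= factor
print(f"combined discount: {combined:.0e}")  # 1e-10 in this made-up example
```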
The tails dominate
A super-conservative estimate of future computational consciousness may not be really huge, but the expected value of the magnitude is a lot bigger. Maybe the universe will last longer than we thought, or maybe we'll find other loopholes in physics that allow escaping these limits. Maybe Matrioshka Brains can simulate more minds than we thought. Maybe fruit flies are inefficient, and we can actually get away with even smaller brains. Or maybe, even with the same number of flops, intensities of emotion could be multiplied many-fold over what fruit flies (if they're sentient) experience.
Because I fear that, by my personal assessments of happiness vs. suffering, the expected amount of suffering in the future may exceed the expected amount of happiness, I hope these big-computation scenarios are not true; but if they are, the results would be devastating. In our calculations, we have to take them seriously, and the probabilities of weird big-computation scenarios may not fall as fast as the size of the potential computation grows. (This is trivially true if we assign nonzero probability to physical hypercomputation.) So even if you're skeptical about how big future computation might be, you'll still likely be compelled by the Pascalian tails of the probability distributions.
Comments?
Let me know if you think I'm being overly skeptical or if there are major factors I haven't considered. This whole analysis could use a lot of refinement and expansion, so consider this just a quick, first-pass attempt.