These high-stakes, low-probability issues are really tricky to think about.
I don't think we should go for the solution according to which any prospect with any chance of yielding infinite value is equally good. For one thing, this violates the sure-thing axiom of decision theory. That axiom says that if one prospect is guaranteed to be at least as good as another, and might turn out better, you should prefer it. If we are indifferent between all alternatives with infinite expected value, we violate this axiom. It's easy to see why:
Deal 1: If heads, heaven forever. If tails, heaven forever.
Deal 2: If heads, heaven forever. If tails, an ice cream cone.
If you are indifferent between prospects with infinite expected value, you'll be indifferent between these deals. Not only does that seem irrational, it violates the sure-thing principle. And the most natural way to get into the business of maximizing expected utility in the first place is to appeal to things like the sure-thing principle. If someone insists on being indifferent between these deals, we should ask why: what compelling assumption about rationality forces that conclusion?
We're not mathematically forced into being indifferent between options with infinite expected value. You can avoid this by using non-standard arithmetic to model expected utility theory, or by a number of other means. See "The Infinitarian Challenge to Aggregative Ethics" by Nick Bostrom.
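To make that concrete, here is a minimal sketch in Python of one such move. This is a toy lexicographic construction I'm using purely for illustration (it is not Bostrom's formalism): represent a utility as a pair of an "infinite part" and a "finite part", and compare pairs lexicographically. Both deals then count as infinitely valuable, yet Deal 1 still strictly beats Deal 2.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Utility:
    """Toy non-standard utility: `inf_part` counts multiples of an
    infinite unit (think 'heaven forever'), `fin_part` is ordinary
    finite utility. Comparison is lexicographic."""
    inf_part: float
    fin_part: float

    def __add__(self, other):
        return Utility(self.inf_part + other.inf_part,
                       self.fin_part + other.fin_part)

    def scale(self, p):
        return Utility(p * self.inf_part, p * self.fin_part)

    def __lt__(self, other):
        return (self.inf_part, self.fin_part) < (other.inf_part, other.fin_part)

def expected_utility(prospect):
    """prospect: list of (probability, Utility) outcomes."""
    total = Utility(0.0, 0.0)
    for p, u in prospect:
        total = total + u.scale(p)
    return total

HEAVEN = Utility(1.0, 0.0)      # one unit of infinite value
ICE_CREAM = Utility(0.0, 1.0)   # one unit of finite value

deal_1 = [(0.5, HEAVEN), (0.5, HEAVEN)]     # heads: heaven, tails: heaven
deal_2 = [(0.5, HEAVEN), (0.5, ICE_CREAM)]  # heads: heaven, tails: ice cream

print(expected_utility(deal_2) < expected_utility(deal_1))  # True
```

Nothing hangs on the particulars here; the point is only that "has infinite expected value" doesn't force "all such prospects are equally good."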
It's true that there are many puzzling issues surrounding infinities, where merely permuting the goods and bads in a world seems to make things better or worse. But these issues are no less problematic for people who don't care about infinities. If infinities are possible, everyone has to come to grips with these cases, whether or not they think such cases carry infinite value. Saying that these infinite cases have finite value doesn't solve the core issue, which is: how much value do we place on these seemingly permutation-dependent alternatives?
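As an illustration of the permutation worry (my own stock example, not one taken from the discussion above): take a world with infinitely many days, alternating between a good day (+1) and a bad day (-1). The very same collection of days, counted in two different orders, gives running totals that behave completely differently.

```python
from itertools import islice

def partial_sums(values, n=12):
    """Running totals of the first n values of an infinite sequence."""
    total, sums = 0, []
    for v in islice(values, n):
        total += v
        sums.append(total)
    return sums

def alternating():
    """Original order: good day, bad day, good day, bad day, ..."""
    while True:
        yield 1
        yield -1

def rearranged():
    """The same days, counted two good days per bad day."""
    while True:
        yield 1
        yield 1
        yield -1

print(partial_sums(alternating()))  # oscillates: [1, 0, 1, 0, ...]
print(partial_sums(rearranged()))   # drifts upward: [1, 2, 1, 2, 3, 2, ...]
```

The original order bounces between 1 and 0 forever, while the reordering grows without bound, even though no day was added or removed. That's the kind of case everyone has to price somehow, whether or not they count infinite value.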
It's also important to note that some existential risks, like an asteroid impact, are well understood. We know that the odds of destruction via asteroid in the next century are about 1 in 1 million, and we know that we could substantially decrease those odds (a 50-90% reduction) for somewhere between $2 billion and $20 billion (see http://www.jgmatheny.org/matheny_extinction_risk.htm). So even if AI risk seems too hand-wavy, other things aren't.
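For a rough sense of what those figures buy, here's the implied cost per unit of extinction probability averted, using only the numbers quoted above. Pairing the best-case reduction with the lowest cost and the worst-case reduction with the highest cost is my own bracketing assumption.

```python
# Back-of-the-envelope cost-effectiveness using the figures quoted above.
# The calculation is just expected-value arithmetic over the stated ranges.

baseline_risk = 1e-6  # odds of asteroid-caused destruction this century

for reduction, cost_billions in [(0.5, 20), (0.9, 2)]:
    risk_averted = baseline_risk * reduction
    cost_per_unit = cost_billions * 1e9 / risk_averted
    print(f"{reduction:.0%} reduction for ${cost_billions}B "
          f"=> ${cost_per_unit:,.0f} per unit of extinction probability averted")
```

Whether that's a good deal then depends on how much value we place on the future, but at least the inputs are concrete.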