"such organisations clearly have to assess themselves as a potential recipient of funding. Aside from their inevitable bias (which could amount to massive overestimation of the amount of finance they should receive, given that they're theoretically wielding most of the global budget"
It seems very unlikely that futurist organisations will have control over most of the global budget in the near future. But perhaps you mean most of the resources that could be put to especially good use? And yes, overestimating the value of one's own work seems likely (although this is a problem anyone will have). For a fun criticism of much philosophy from that perspective, see this. This is a good reason for caution in one's estimations of funding for oneself or those similar to oneself. But I can only see it as a reason for the importance of more careful, bias-aware work on existential risks, not as a reason why E-risk research isn't important.
"how can you reach any reliable estimate of the value of the (non-data-dependent) conclusions you're going to reach without actually having reached them? In other words, if such research is reliable, it's obsolete, since you've performed your most important task - if it's not obsolete then it's not reliable."
It seems a similar problem can be generalized to most cases of, e.g., grant writing. How can any researcher know in advance what conclusions they'll reach? One way of solving this would be to do an induction on relevantly similar work. I can predict that the next paper David Chalmers writes will be something a certain group of people will consider insightful (making an important contribution to the issue) with, e.g., probability >40%.
It wouldn't be that hard to take a representative sample of what SIAI has produced, judge its importance, make predictions about further work, and then check the new work to see to what extent those predictions are fulfilled (a minimal sketch of such a scoring procedure follows below).
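To make this concrete, here is a minimal Python sketch of how one could score such predictions once the later judgments come in, using the standard Brier score. All names, probabilities, and outcomes below are hypothetical illustrations, not real data:

```python
# Minimal sketch of the prediction-checking procedure described above.
# Every figure here is a hypothetical illustration, not real data.

def brier_score(predictions):
    """Mean squared error between forecast probabilities and outcomes.
    0.0 is a perfect score; always guessing 50% yields 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Each pair: (probability assigned in advance that a piece of work would be
# judged an important contribution, whether judges later agreed: 1 or 0).
hypothetical_forecasts = [
    (0.4, 1),
    (0.7, 1),
    (0.4, 0),
    (0.2, 0),
    (0.6, 1),
]

print(f"Brier score: {brier_score(hypothetical_forecasts):.3f}")  # 0.162 here
```

A forecaster, or a forecasting method, that consistently scored well on predictions like these would earn more trust when estimating the value of an organisation's future output.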
"No, me neither. But the media seem prone if anything to overreporting successful predictions, so I doubt we massively underestimate their history."
It's also great fun to read about the foolishness of past predictions. It would probably be most useful to compare a class of people who use similar prediction methods.
"But an example of the kind of thing that frustrates me is Bostrom's simulation argument. We have no way of knowing how likely any of his possibilities are"
I agree that this is a useful criticism; it is easy to get frustrated at examples like this. I've been a bit skeptical about SIAI myself, since I've thought there's a risk they focus too much of their energy on interesting but ultimately not very relevant paradoxes and technical problems that easily catch the attention of philosophers/logicians/mathematicians. On the other hand, if we look at what SIAI officially plans on doing, most of it seems less arcane and more useful from a utilitarian perspective: http://singinst.org/challenge#grantproposals. In particular, very little of SIAI/FHI/Bostrom's total time and resources is spent on the simulation argument.
Furthermore, it is always dangerous to dismiss issues that many people one considers at least somewhat authoritative are concerned about (as should follow from what you point out below). In particular, a reason for caution is that the mechanisms that make one frustrated by questions like these may be the same ones that make most people think most plans discussed on Felicifia are "absurd".
"I was thinking mainly of the people, actually, since I think argument from authority is seriously underrated as a potential source of guidance."
I agree very much.
"Peak resources, peak oil in particular, meteor impact, climate change, and more specifically the strain they put on international relations leading to us using existing or imminent technologies to wipe ourselves out (turns out this is a risk even for meteors). Or leading us to destroy civilisation and it never recovering."
Agreed. This is important, and as I've mentioned before, there's a risk that futuristically minded people don't give these risks enough attention because they seem too mundane and boring.
"I disagree. I think we maximise our impact by dealing with problems that we expect soon and have quite a lot of info about now, rather than looking for data to deal with future problems. Even if we don't make the difference between extinction and continuation, we might well improve the global economy enough that we can afford to make up the research difference in other areas later on. One exception might be studies that examine some aspect of the present that future studies couldn't replicate - happiness measurements, for eg. But such studies are relatively cheap, don't require specialised organisations, and don't seem to be the focus of groups like SIAI."
1. I think more research is needed on precisely this issue, comparing long-term (futurist) and short-term risks. Disagree? (A toy sketch of the kind of comparison I mean follows below.) 2. The problem with spending resources on increasing economic growth is that growth itself might increase existential risk.
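As a purely illustrative sketch of what such a comparison involves, here is the toy arithmetic in Python. Every figure is a made-up assumption chosen only to show the structure of the calculation, not an estimate I endorse:

```python
# Toy expected-value comparison of a short-term and a long-term intervention.
# All figures are illustrative assumptions, chosen only to show the arithmetic.

FUTURE_VALUE = 1e15      # assumed value (in life-years, say) of humanity's future
SHORT_TERM_VALUE = 1e9   # assumed value of the short-term benefit

def ev_per_unit_cost(prob_success, value, cost):
    """Expected value purchased per unit of cost."""
    return prob_success * value / cost

# Long-term project: tiny assumed chance of averting extinction, huge stakes.
long_term = ev_per_unit_cost(prob_success=1e-7, value=FUTURE_VALUE, cost=1e6)

# Short-term project: high assumed chance of a modest benefit.
short_term = ev_per_unit_cost(prob_success=0.5, value=SHORT_TERM_VALUE, cost=1e6)

print(f"long-term EV per dollar:  {long_term:,.1f}")   # 100.0
print(f"short-term EV per dollar: {short_term:,.1f}")  # 500.0
```

With these made-up inputs the two come out within an order of magnitude of each other, which is exactly why careful research on the inputs, rather than intuition alone, matters.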
Perhaps spending a huge part of humanity's resources on investigating futurist questions couldn't be justified. My main point is that barely any resources at all are being spent. SIAI, for instance, is extremely cheap, even compared to something like happiness studies.