The claim that it's optimal for utilitarians to research speculative scenarios (including unsettled methodological problems like understanding qualia or evaluating our impacts in an infinite universe) derives from the observation that small changes to the quality of our understanding could drastically alter our conclusions about which courses of action are good and bad. For instance, suppose we discovered that entities we never thought conscious actually do experience qualia and, in fact, suffer greatly in a preventable way. (This isn't an absurd suggestion -- it happened to me several years ago when I realized that animals can feel pain. To the extent that the question of which animals can suffer remains open, such a discovery process is still going on right now.) If these entities outnumbered the sentient organisms we currently know about by orders of magnitude, then the optimal course of action could come to be dominated by whatever would prevent the most suffering among those new entities.
As for focusing on futuristic speculation, the argument is basically that there's a non-negligible chance that humans will have vast impacts on their future light cone, affecting many orders of magnitude more sentient organisms than have or will ever populate Earth during the few billion years for which life exists there. The chance that humans do have such an astronomical impact is small, but the expected value is still likely enormous.
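To make the structure of that argument concrete, here's a toy calculation. Every number in it (the count of Earth-bound sentients, the size of a colonized light cone, the one-in-a-million probability) is a placeholder I've made up for illustration, not an estimate I'm defending:

```python
# Toy sketch of the astronomical-stakes expected-value argument.
# All quantities below are hypothetical placeholders, not real estimates.

earth_sentients = 1e20           # stand-in for all sentients Earth ever hosts
lightcone_sentients = 1e35       # stand-in for sentients in a colonized light cone
p_astronomical_impact = 1e-6     # assumed (tiny) chance humans have that impact

ev_earth_focus = 1.0 * earth_sentients                            # near-certain but bounded
ev_lightcone_focus = p_astronomical_impact * lightcone_sentients  # improbable but vast

print(f"Earth-bound focus: {ev_earth_focus:.1e} expected sentients at stake")
print(f"Light-cone focus:  {ev_lightcone_focus:.1e} expected sentients at stake")
```

Even at a one-in-a-million probability, the speculative scenario comes out nine orders of magnitude ahead in this made-up example, which is the sense in which the expected value is "still likely enormous."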
As a follow-up to my previous question about where to donate, I'll note that I'm currently leaning toward donating the money to research at SIAI. While in general that organization's work is probably something that utilitarians would endorse, this particular project is one that I've coordinated to be of special interest to utilitarians concerned about preventing massive amounts of suffering in the universe -- possibly even outside our light cone. In general, I recommend that utilitarians consider contacting SIAI to see if the group can arrange for research that may be of mutual interest.
The main objection I have to this strategy is the following. I am a total hedonistic utilitarian with an "exchange rate" between pleasure and pain that gives significant weight to the badness of pain. In addition, I care more about animal suffering than I think most people do, in part because hedonism implies a lot more potential value and disvalue on the part of animals than do consequentialisms that value more abstract traits that seem to be possessed mainly by humans and their evolutionary kin. The number of people who hold my particular values is very small; the number who hold utilitarianism proper is somewhat larger; and the number of rationalists who hold some brand of consequentialism is larger still.
Now, knowledge is important, but so is ideology. For instance, I have concerns about what might result from a superintelligent friendly AI that -- perhaps influenced by deep ecology and impulses to propagate life, or perhaps just through giving insufficient thought to animal suffering -- ended up increasing the number of wild animals throughout the universe, or even in new universes. So there's a question: At what point is it better to promote your specific memes (hedonistic anti-speciesism, in my case) rather than general knowledge or AI that's generally "human-friendly" but perhaps not Benthamite? This might include, for instance, promoting concern about wild-animal suffering, so that -- if humans do have a huge impact on the future of the universe -- they do so in a positive rather than negative way. Sure, research on decision theory is important, but unless people use it to maximize the right things, it's of no benefit and could even be harmful.
However, I should point out that while SIAI has no explicit ideology, several of its members do lean strongly utilitarian, and many more lean strongly toward some sort of rationalist consequentialism. So even on the question of ideology, SIAI may not be a bad choice for Benthamites, because the degree of philosophical overlap remains extremely high relative to the overlap with the general population. And if one arranges for specific research on a utilitarian-oriented project, the marginal impact of a utilitarian's donation can be even greater. But I still think contributing to SIAI's general funds is (probably, based on my current knowledge) an excellent choice.
What do others think here? Are there other reasons SIAI and the like are not optimal for utilitarians? For instance, perhaps the Singularity scenario is highly improbable. Or perhaps SIAI's ability to have an impact on it, if it did occur, would likely be minuscule. Or maybe real "friendly AI" is a utilitarian pipe dream that will almost certainly never amount to anything. While I agree with all of those statements, I still think the vast potential consequences of success here dominate the expected-value calculation.
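For what it's worth, here's the shape of the calculation that lets me agree with all three objections and still reach that conclusion. Again, every probability and payoff below is a placeholder I've invented, not a credence I'd defend:

```python
# Toy sketch: a chain of individually improbable steps can still dominate
# an expected-value comparison when the payoff is astronomical.
# All quantities are hypothetical placeholders.

p_singularity = 0.01           # assumed chance a Singularity-like transition happens
p_siai_matters = 0.01          # assumed chance SIAI meaningfully affects the outcome
p_friendliness_works = 0.05    # assumed chance "friendly AI" succeeds as hoped

payoff = 1e35                  # stand-in for sentient lives affected by success
conventional_payoff = 1e9      # stand-in for what the same donation does on Earth

chained_probability = p_singularity * p_siai_matters * p_friendliness_works
ev_speculative = chained_probability * payoff

print(f"Chained probability: {chained_probability:.1e}")
print(f"Speculative EV:      {ev_speculative:.1e}")
print(f"Conventional EV:     {conventional_payoff:.1e}")
```

With these made-up numbers, a five-in-a-million chance of success still leaves the speculative option more than twenty orders of magnitude ahead, though of course the comparison is only as good as the placeholder figures.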
But maybe there are other causes that would have a higher chance of success? Or other organizations more qualified to address these matters? Or other donation strategies (e.g., funding research informally by coordinating with undergraduate students) that have higher leverage? In other words, tell me why SIAI is not an optimal recipient of charitable-donation dollars for expected-value maximizers.