I guess there can be two kinds of risk aversion:
1) A bias towards avoiding negative shifts
2) A bias towards ignoring low probability outcomes
Let's pretend there is only our universe. So when I take or reject one of Omega's gambles, I am choosing between two fields of epistemic possibilities.
On this view, being risk averse makes sense, and it still seems intuitive even if we assume we have perfect insight into the epistemic space we are casting ourselves into. It is hard to articulate, but there should be some discounting of unlikely outcomes. Picking a high-aggregate (or high-average) gamble where all the utility is crammed into one minute part of the possibility space just seems crazy. We could almost call this 'median util'.
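To make the 'median util' idea concrete, here is a rough sketch (the gambles and numbers are made up purely for illustration): a gamble can beat another on expected utility while its typical, middle-of-the-possibility-space outcome is nothing.

```python
import statistics

# Two made-up gambles, as (probability, utility) pairs.
# Gamble A crams all its utility into a minute part of the possibility space;
# Gamble B spreads a modest utility over the whole space.
gamble_a = [(0.001, 1_000_000), (0.999, 0)]
gamble_b = [(1.0, 500)]

def expected_utility(gamble):
    """Standard expected utility: the probability-weighted average outcome."""
    return sum(p * u for p, u in gamble)

def median_utility(gamble, samples=100_000):
    """'Median util': the outcome sitting in the middle of the possibility space."""
    outcomes = []
    for p, u in gamble:
        outcomes.extend([u] * round(p * samples))
    return statistics.median(outcomes)

print(expected_utility(gamble_a))  # 1000.0 -- higher average...
print(median_utility(gamble_a))    # 0.0    -- ...but the typical outcome is nothing
print(expected_utility(gamble_b))  # 500.0
print(median_utility(gamble_b))    # 500.0
```

Expected utility favours Gamble A; the median-discounting intuition favours Gamble B.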
For the reverse case (where we cram all the disutility into a tiny bit of the probability space), I think we likewise discount it, but discount it less, because of the second sense of risk aversion. This seems intuitively about right, and it avoids issues like Pascal's mugging.
Now what about if many worlds is true?
On this view, my taking Omega's gamble will change which space of possible worlds will exist*: either an ensemble with a higher average, where some worlds are really good but most are really bad, or an ensemble where the good is more evenly distributed but the average is lower.
I think we get similar risk-averse results in this case too, on reflection. The extra ingredient is a Rawlsian veil of ignorance: what would you pick if you didn't know which outcome would be yours? With that in place, you get similar results in avoiding less-equal distributions of good, whether the inequality is between particular lifetimes within a given world or between worlds. In the same way that we should prefer a more equal distribution of utility across society over one with a higher average concentrated on very few people, we should make the same equality-versus-aggregate trade** when selecting world ensembles.
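The veil-of-ignorance comparison could be sketched like this (the ensembles are invented, and the maximin rule is just one illustrative way to cash out Rawlsian choice):

```python
# Two hypothetical world ensembles, each listed as per-world utilities.
ensemble_high_avg = [100, 1, 1, 1]    # great for one world, bad for the rest
ensemble_equal    = [20, 20, 20, 20]  # lower average, evenly spread

def average(ensemble):
    """Aggregate view: the mean utility across worlds."""
    return sum(ensemble) / len(ensemble)

def maximin(ensemble):
    """Behind a veil of ignorance about which world is yours,
    judge an ensemble by its worst-off world (Rawlsian maximin)."""
    return min(ensemble)

print(average(ensemble_high_avg), maximin(ensemble_high_avg))  # 25.75 1
print(average(ensemble_equal), maximin(ensemble_equal))        # 20.0 20
```

Average utility favours the unequal ensemble; from behind the veil, the more equal ensemble looks better, mirroring the within-society case.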
I might be missing a lot.
* Obviously both spaces will exist, because in many worlds there will be worlds in which I take Omega's gamble and worlds in which I refuse it. But that doesn't really matter here.
** In case it wasn't covered in prior discussion: obviously this only goes so far. I wouldn't prefer 1 hedon for everyone over 2 hedons for everyone plus 2 million more for one person, for example. Even in less easy cases, I'd be willing to go for a better aggregate at the expense of greater equality (so that some people might be worse off). But the point is that there is a balance to be struck, and we shouldn't just maximize aggregate utility.