What's so bad about being risk averse, anyway?

Whether it's pushpin, poetry or neither, you can discuss it here.

What's so bad about being risk averse, anyway?

Postby GregoryLewis on 2011-10-27T21:06:00

Hello all,

I'm intuitively quite a big fan of risk aversion: with appropriate veil-of-ignorance stuff it seems to capture most of our intuitions about justice, and it doesn't fall into the same traps I see with aggregate util or average util.

I don't want to rehearse all the moves in utilitarianism (it is a literature I am more ignorant of than I would like). I was just wondering a) whether there are some really powerful objections/counter-examples to risk aversion I should know about, and b) whether there are good books/articles/whatever to read.

Enjoy life!

GregoryLewis
 
Posts: 13
Joined: Sat Oct 15, 2011 10:59 pm

Re: What's so bad about being risk averse, anyway?

Postby rehoot on 2011-10-27T23:53:00

I have heard people complain about both risk aversion and risk-taking. I think people who criticize a given risk orientation believe that they know what the best choice is: some say that, in a given situation, taking some risk produces the best expected outcome, while another person can look at the same situation and declare that avoiding risk produces the best expected outcome.

Teenagers are known for taking risks that produce life-altering consequences (driving recklessly, driving drunk, stupid skateboard stunts...) and adults view those risks as stupid. Philosophically, somebody could argue that the thrill of driving drunk is worth an X% chance of dying in a car crash and a Y% chance of killing innocent people, but those with different values would disagree. If you value continuation of life more than momentary thrills, you will be more risk-averse than somebody who values momentary thrills more than continued life without them. It seems that older people tend to fall into one category and younger people into the other (with a significant amount of variation everywhere).

If you want to know which is the "correct" answer, first study the "is-ought problem" and the "naturalistic fallacy."

rehoot
 
Posts: 161
Joined: Wed Dec 15, 2010 7:32 pm

Re: What's so bad about being risk averse, anyway?

Postby utilitymonster on 2011-10-28T00:44:00

What does it mean to be risk averse about some quantity, such as money, number of friends, years you live, your well-being, or goodness? It means that you prefer to get amount X of that quantity for sure rather than getting an expected value of X of that quantity, where the amount you get is uncertain. OK.

So, why is it bad to be risk averse about goodness? I basically think this comes down to terminology, but what do we mean when we say that A is twice as good as B? I think the best definition of this claim is that a .5 chance of A is as good as B for sure. If this is how you define "is twice as good as," then you should be risk-neutral. If this is not how you define "twice as good as," then maybe it is OK not to be risk-neutral, but no one seems to be able to tell me what it means.
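For concreteness, that definition of risk aversion can be made computable. Here is a minimal sketch: an agent's certainty equivalent for a gamble, where the square-root utility function is just an illustrative stand-in for any concave function (none of these names come from the thread):

```python
import math

def certainty_equivalent(outcomes, probs, utility, inverse_utility):
    """The amount received for sure that the agent values exactly as much
    as the gamble: invert the utility of the gamble's expected utility."""
    expected_utility = sum(p * utility(x) for x, p in zip(outcomes, probs))
    return inverse_utility(expected_utility)

# A 50/50 gamble between 0 and 100 utils has expected value 50.
outcomes, probs = [0.0, 100.0], [0.5, 0.5]

# Risk-neutral agent: linear utility, so the certainty equivalent is the mean.
neutral = certainty_equivalent(outcomes, probs, lambda x: x, lambda u: u)

# Risk-averse agent: concave (square-root) utility, certainty equivalent < mean.
averse = certainty_equivalent(outcomes, probs, math.sqrt, lambda u: u * u)

print(neutral)  # 50.0
print(averse)   # 25.0
```

The risk-averse agent would accept a sure 26 utils over the gamble even though the gamble's expected value is 50, which is exactly the preference utilitymonster describes.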

utilitymonster
 
Posts: 54
Joined: Wed Jan 20, 2010 12:57 am

Re: What's so bad about being risk averse, anyway?

Postby Pat on 2011-10-28T00:55:00

To me, risk aversion means favoring a smaller, certain reward over a larger, less certain reward, even when the expected value of the second is greater (e.g., taking $10 instead of a 50% chance of $30, assuming negligible diminishing marginal returns). It extends to losses, too, so a risk-averse millionaire might pay $100 a year to insure against a 1% risk of losing $5000 (for people with more ordinary incomes, insurance might make more sense because bankruptcy or large financial losses could be disruptive).
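Spelling out the arithmetic in those two examples (a quick sketch; the dollar figures are the ones given above):

```python
# $10 for sure vs. a 50% chance of $30:
sure_payoff = 10
gamble_ev = 0.5 * 30           # expected value of the gamble: $15

# The millionaire's insurance: $100/year against a 1% risk of losing $5000.
premium = 100
expected_loss = 0.01 * 5000    # expected annual loss: only $50

# In both cases the risk-averse choice gives up expected value for certainty.
print(gamble_ev > sure_payoff)    # True: the gamble is worth more on average
print(premium > expected_loss)    # True: the premium exceeds the expected loss
```

Paying more than the expected loss is only rational if uncertainty itself carries a cost, which is the definition of risk aversion at work.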

What's wrong with risk aversion depends on the situation. If you've set aside money to donate later, investing it in Treasuries rather than an emerging-markets index fund reduces the amount you can expect to eventually give. If you're a public-policy maker trying to decide which pollution-control measures to implement, using the precautionary principle as a guide will trade away too much economic growth for the perception of safety. If you're the U.S. after the 11 September attacks, you'll spend money on security that could have been better spent elsewhere.

Regarding choosing a charity, I'm not sure what risk aversion would entail. Let's say you believe x-risk is bad. Maybe you should give to a GiveWell-approved charity, and be pretty sure that your money is doing some good. Or maybe you should donate your money to an x-risk charity, since the loss of humanity would be immense and irreversible.

I'm not sure that the theory that people would be risk averse behind a veil of ignorance makes much difference relative to utilitarianism. When the assumption of risk aversion is applied to wealth disparities, you come to about the same conclusion that a utilitarian does: people in rich countries should do a lot more to help poor countries (they should lower trade barriers and open their borders, as well as give aid). If you apply it to intergenerational equity, you find that x-risk is something that matters a great deal. (I'm thinking that people would prefer a sure humdrum existence over a chance at a fantastic existence. But decreasing x-risk increases the chance of both humdrum and fantastic existences, so there's no trade-off.)

Pat
 
Posts: 111
Joined: Sun Jan 16, 2011 10:12 pm
Location: Bethel, Alaska

Re: What's so bad about being risk averse, anyway?

Postby DanielLC on 2011-10-28T01:27:00

First off, I only care about what actually happens. Risk is just uncertainty. It's not a fact of the universe. It's a fact of my state of mind.

Beyond that, if you do decide on risk aversion, it tends not to matter in the long run. For example, if you're looking at a 100% chance of saving one person vs. a 50.1% chance of saving two, it looks like you'd be absorbing a lot of risk for a small increase in utility. In fact, you're not. If you have this option 2500 times, the variance will be 1/4*2500 = 625. The standard deviation will be √625 = 25. Now you're choosing between saving 2500 people vs. 2550 ± 25 people. Of course, 25 is just one standard deviation. There's still a good chance of being below that. But suppose it was 250000 times. now it's 255000 ± 250. There's no way it's going to fall 20 standard deviations below the mean. I'd give a probability, but I can't seem to find a calculator that can give numbers that low. Since it's clearly a good idea to choose 50.1% of two over 100% of one in all the cases, you'd expect it to be good in the marginal case. Now consider: there's six billion people in the world. There's 60 to 120 billion people who have already lived. There's untold billions who are yet to live. There's trillions of animals that can likely feel happiness and sadness. There's who knows how many other life-bearing planets. The current varience in utility is many orders of magnitude beyond everything experience on this planet put together. There is no possible way you are so risk averse as to it to actually make a difference on a choice you'd ever have to make.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: What's so bad about being risk averse, anyway?

Postby utilitymonster on 2011-10-28T13:11:00

There is no possible way you are so risk averse for it to actually make a difference on a choice you'd ever have to make.


For roughly the reasons you give, risk aversion doesn't really matter when you face a long sequence of similar decisions and set a common policy for making all of those decisions. However, risk aversion does matter a lot in one-off decisions. What you personally decide to do about x-risk is a one-off decision, so if you are risk-averse about lives saved or lives created, risk-aversion does matter.

utilitymonster
 
Posts: 54
Joined: Wed Jan 20, 2010 12:57 am

Re: What's so bad about being risk averse, anyway?

Postby RyanCarey on 2011-10-28T13:30:00

I think DanielLC's point is good. Risk aversion doesn't matter when you face a long sequence of similar decisions.

Pat's point leads me to demand the following clarification: what do we mean by "risk averse"? Is it:
1. We should discount the unlikeliest outcomes, such as the chance of going to heaven and the chance of the destruction of our civilisation.
or 2. We should give extra weight to avoiding bad outcomes. Bad outcomes are worse than they appear.
These two versions of 'risk aversion' seem to be coherent, but neither of them seems to capture our common-sense notion of risk aversion. That is, neither would seem to lead us to intuitive outcomes in all situations. And I would be interested to see anyone attempt to justify either of them without recourse to our intuitions. I don't know where I would even begin with such an effort.
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: What's so bad about being risk averse, anyway?

Postby GregoryLewis on 2011-10-28T17:21:00

I agree that risk aversion is silly when you have to make the same 'bet' lots of times, so that you almost certainly win, but my worry is about decisions more like this:

Omega offers you a 10^-30 chance of heaven, but a 1 − 10^-30 chance of hell. Hell has a payoff of −1 million utils, but heaven has a utility bonus of 10^1000. So if you're an aggregator, you are forced to pick the option that almost certainly means you suffer. It strikes me as pretty obvious that you should refuse the deal (and if you don't find it persuasive, just stick more zeros on P(good payoff) and the payoff to suit).
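For what it's worth, the aggregator's bookkeeping on this gamble is easy to make exact. A sketch in exact rational arithmetic, using the numbers from the post:

```python
from fractions import Fraction

# Omega's gamble: a 10^-30 chance of heaven (+10^1000 utils) against a
# near-certain hell (-1,000,000 utils).
p_heaven = Fraction(1, 10 ** 30)
ev = p_heaven * 10 ** 1000 + (1 - p_heaven) * (-10 ** 6)

# The expected value is astronomically positive (about 10^970 utils), so a
# pure expected-utility aggregator takes the bet despite the near-certain
# bad outcome -- which is exactly the intuition pump at issue.
print(ev > 0)          # True
print(ev > 10 ** 969)  # True
```

Exact fractions avoid any floating-point worry about a 10^-30 probability being rounded to zero, so the conclusion isn't a numerical artifact.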

You might say negative util can sort these problems out, but I don't think it can unless you don't weigh good stuff at all (which seems to lead to pinprick reductios), because again, we can just pile on as many zeros as we like. Even if we do, we can just talk about scenarios where you have to weigh an almost-certainly-zero but potentially massive payoff against a safer bet of substantial utility for all (e.g. solving world hunger, tropical disease, education etc. versus a 1 in 10^30 chance of improving the likelihood of a positive singularity by 1%).

You may then say that the only reason our intuition gets this result is that our brains suck at huge numbers. But I think risk aversion applies to much more 'down-to-earth' examples too. Following Daniel, let's have a modified Sophie's choice: certainly save one vs. a 50.1% chance of saving two and a 49.9% chance of both dying. The fact that the aggregator wins if you can iterate several thousand times isn't going to convince me not to be risk averse here.

I think we can give a reason why we should be risk averse that isn't just intuition. In a sense, an aggregator will maximize the sum of good in their future cone/counter-factual tree. So, on average, the people in these worlds will be best off.

In contrast, the risk-averse care about the median. They don't just care about maximising the future cone (and thus their average payoff), but about making sure that they're in a good world, which is surely what we should be interested in. Future cones where all the good is concentrated in a tiny sliver should be avoided. I'm not going to be consoled in hell by knowing that a lucky few on a far-flung Everett branch are having inestimable bliss in heaven: I want a far safer bet that makes sure my universe and I are happy. In contrast, I'd be happy to take Daniel's 'riskier' bet if I know I can iterate loads of times, because almost certainly I'll be in a world that does much better. This just makes sense to me, but I might be missing something.


@Pat:

I agree that most real-world decisions are no-brainers regarding your views on justice and utilitarianism, because you get to choose between more equitable, higher-aggregate solutions and less equitable, lower-aggregate ones. That said, there are a few times those are in conflict, and I think life lotteries with risk aversion can get us pretty much the same answers as justice, with fewer strange conclusions. We shouldn't care about even massive injustices if everyone's utility is increased, for example, but we should prefer a lower-aggregate, more flatly distributed set of people over a higher-aggregate set where the good is wildly unevenly distributed: 'cos we're risk averse and would prefer a fairly good median over a small shot at the big time paired with a larger chance of squalor.

GregoryLewis
 
Posts: 13
Joined: Sat Oct 15, 2011 10:59 pm

Re: What's so bad about being risk averse, anyway?

Postby Pat on 2011-10-28T23:06:00

I think that our intuitions in your first thought experiment are due to problems thinking about big numbers. If −1,000,000 utils is equal to something trivial, like stubbing my toe, it seems reasonable to take the bet even if I wouldn't be able to repeat it ad infinitum. But I wouldn't intuitively choose a life of toe-stubbing to increase my chances. This doesn't make sense to me, but maybe you have different intuitions.

Regarding Sophie's choice, I can see it going either way. The 0.2% difference would be insignificant compared to the other ramifications of the situation. [spoiler=Sophie's Choice]I didn't watch the movie but I remember reading that Sophie gets to save only one child or none, so she saves just one. But the decision drives her to suicide. Two people's lives are ruined, and the child whom she does save presumably doesn't have it easy, either.[/spoiler]So it might make sense to risk everything.

A purer version of the thought experiment would involve two orphans who live in different places. Would you rather save one, allowing the other to die, or have a 50.1% chance of saving them both? I'd choose the second option. Favoring the first is what leads people to contribute to "Adopt a Child" sorts of aid organizations, which save fewer lives. Perhaps this revised thought experiment is misleading, though, because it's repeatable: even if it's the last decision you make, it's part of the larger class of "acts that might save lives."

Pat
 
Posts: 111
Joined: Sun Jan 16, 2011 10:12 pm
Location: Bethel, Alaska

Re: What's so bad about being risk averse, anyway?

Postby DanielLC on 2011-10-28T23:43:00

if you are risk-averse about lives saved


Lives saved doesn't matter. What matters is how many there are. The fact that you had anything to do with it is irrelevant.
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: What's so bad about being risk averse, anyway?

Postby utilitymonster on 2011-10-29T14:08:00

Lives saved doesn't matter. What matters is how many there are. The fact that you had anything to do with it is irrelevant.


Fine, but it doesn't affect the substance of the point.

utilitymonster
 
Posts: 54
Joined: Wed Jan 20, 2010 12:57 am

Re: What's so bad about being risk averse, anyway?

Postby rehoot on 2011-10-29T19:30:00

Several posts here describe decision-making when the probabilities of good or bad outcomes are known precisely—this rarely occurs when dealing with humans. Part of the risk-orientation question is how people respond to the uncertainty of risk, and many people misinterpret their risk.

First, the hardest example, for those who have studied some statistics. Let's say that I study all the data on the risk of cancer from smoking and determine that the observed risk is X% (or X% per year of smoking). What does that X% mean? It is NOT a precise measurement of my risk and might not even be remotely accurate for me. "X%" is the best unbiased estimate of the risk of smoking, but an omniscient observer might know that I have a previously undiscovered genetic trait that makes my risk of cancer 20 times higher or 1/20 of the unbiased estimate. Even if I make a more precise estimate that takes five other variables or genetic traits into consideration, there is no way to know my true risk of cancer! The X% is a decent estimate, but it is only an estimate. There is an additional, unstated risk besides the "X%" number: the risk that the unbiased estimate does not apply to me due to confounding factors or misinterpretation of the condition.

Teens are notorious for statements like "I'm not going to crash my car" after chugging a 6-pack of beer. Many adults say the same kind of stupid things because they have a gut feeling that the risks don't apply to them, then they proceed to risk their lives and those of innocent bystanders. Given the tendency of people to make poor decisions in the presence of risk and uncertainty, a prudent response is to err on the side of caution when the risks suggest substantial harm. I can't describe exactly what the "right" answer is in any given instance.

rehoot
 
Posts: 161
Joined: Wed Dec 15, 2010 7:32 pm

Re: What's so bad about being risk averse, anyway?

Postby DanielLC on 2011-10-30T01:51:00

Several posts here describe decision-making when the probabilities of good or bad outcomes are known precisely—this rarely occurs when dealing with humans.


There is only one kind of uncertainty. If you had a probability of 10% ± 5%, it would just work out as 10%. You might not be very good at coming up with the probability, but it's all you have. If you have to make a choice where the best answer depends on something unknown (whether or not to place a bet, for example), you can't not choose, any more than you can make time not pass.

... there is no way to know my true risk of cancer!


Your true risk of cancer is not the fraction of people that get cancer. It is not the fraction of Everett Branches in which you get cancer. It is not 0 or 1 depending on whether or not you will get cancer. It is the certainty you have that you will get cancer. It is an attribute of your brain, not your lungs. Probability is in the mind.

... a prudent response is to err on the side of caution when the risks suggest substantial harm.


The best response is to recognize that when you think you're 99% sure, you're only right about 75% of the time, and adjust for that. The fact that your normal thought process gave 99% as an answer provides a certain amount of evidence, which points to the event happening about 75% of the time. You do this whether or not there's danger. The only reason dangerous cases stand out is that they tend to have a low (and therefore underestimated) probability of high risk. If someone thinks they have a 50% chance of dying, there's probably little need to compensate for bias, but they won't do it, so you won't notice that.
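One way to picture that adjustment is as shrinking stated confidence toward even odds. A toy sketch, where the shrink factor of 0.5 is invented purely to reproduce the 99%-to-75% figure above, not an empirical constant:

```python
def calibrated(stated_p, shrink=0.5):
    """Shrink a stated probability toward 0.5 to correct for overconfidence.
    The shrink factor is a made-up illustration, not a measured value."""
    return 0.5 + shrink * (stated_p - 0.5)

# "99% sure" gets treated as roughly 75%, matching the example in the post.
print(calibrated(0.99))  # ~0.745

# At even odds there is nothing to correct, matching the 50%-chance-of-dying case.
print(calibrated(0.50))  # 0.5
```

Whatever the exact shape of the correction, the key property is that it bites hardest on extreme stated confidences and leaves 50/50 judgments alone.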
Consequentialism: The belief that doing the right thing makes the world a better place.

DanielLC
 
Posts: 703
Joined: Fri Oct 10, 2008 4:29 pm

Re: What's so bad about being risk averse, anyway?

Postby GregoryLewis on 2011-10-31T22:58:00

Pat wrote:I think that our intuitions in your first thought experiment are due to problems thinking about big numbers. If −1,000,000 utils is equal to something trivial, like stubbing my toe, it seems reasonable to take the bet even if I wouldn't be able to repeat it ad infinitum. But I wouldn't intuitively choose a life of toe-stubbing to increase my chances. This doesn't make sense to me, but maybe you have different intuitions.


Fair enough; picking numbers without giving some sense of scale was silly of me. How about these:

Omega 1:

A 1 in 10^30 chance of a utilitronium explosion/universal wireheading/heaven or similar, but a 1 − 10^-30 chance of something unpleasant, say everyone going blind and getting depression.


Omega 2:

As above, but this time the 1 − 10^-30 event is all sapient life being painlessly euthanized (to get around neg-util).


It seems (to me) that in both cases the aggregator should gamble, as this has the highest payoff. Yet this intuitively strikes me as crazy. Again, I think we can do better than intuition: we should say we care about maximizing our path through the future cone, not just the average of all possible paths in that cone. Obviously maximizing the aggregate will often maximise our path (if you get to gamble lots of times, it pays off), but I think examples like these, where the stakes are really high and you can't simply 'go again' so as to win by large numbers, suggest that a touch of risk aversion, sometimes, is a good thing.

Or are my examples still too shoddy to pump any intuitions?

GregoryLewis
 
Posts: 13
Joined: Sat Oct 15, 2011 10:59 pm

Re: What's so bad about being risk averse, anyway?

Postby Hedonic Treader on 2011-11-02T15:54:00

GregoryLewis wrote:a lucky few on a far-flung Everett branch

I think this is misleading. The concept of expected utility would imply that, if so much good only happens in a far-flung Everett branch, there must be far more than just a lucky few in those branches; or else the experienced utility of those few per time unit must be completely through the roof.

I think the probability that many-worlds is true has a high relevance for this question.

This also applies to pure neg-util, since we could have a tradeoff "extreme disutility in a far-flung Everett branch" vs. "general smaller disutility in the mainstream probability worlds". Example: Small probability of creating hell-worlds vs. high probability of creating intense, but short, distributed and less total-sum suffering for many others otherwise. What would it mean to be risk-averse here, and how much intuitive sense would it make? Should we gamble then?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: What's so bad about being risk averse, anyway?

Postby GregoryLewis on 2011-12-19T05:12:00

[quote=Hedonic Treader]I think this is misleading. The concept of expected utility would imply that, if so much good only happens in a far-flung Everett branch, there must be far more than just a lucky few in those branches; or else the experienced utility of those few per time unit must be completely through the roof.[/quote]

The examples are designed to be extremely high payoff. Positive singularities get estimates thrown around like 10^50 lives in incomprehensible bliss, etc. So in my Omega 1 scenario above, the 1/10^30 chance seems like an excellent deal (it is probably an excellent deal even if you think a positive singularity is certainly going to happen: having it happen 50-200 years sooner in the history of the universe is a vast amount of utility), and on a 'maximise expected utility' view, surely worth the very high risk of worldwide blindness and depression. Yet taking that gamble strikes me as crazy, and so motivates me to be risk averse.

I agree that many-worlds might remove this concern. Arguably, risk aversion on many-worlds is selfish, as it privileges the fates of closer branches over further ones. However, I'm pretty sure there are all sorts of horrible Newcomb-like issues in opting for certain ways worlds will split in the interests of some (but not others) of your successors. We might want to talk about justice over possible worlds and suggest that people would still prefer risk-averse future cones at the expense of missing out on very-high-utility slivers. If anyone good at this sort of thing has thought about it, please chime in!

W.r.t. neg-util: I'd intuit that as we get closer to an extremely low probability of a hell world, I'd be willing to risk less and less to forestall this now extremely unlikely catastrophe. I'd want to defend this by saying that the "I want to be in a good world, not just in the maximum-aggregate space of possibilities" stance will discount utility (+ or −) disproportionately at the low-probability branches. This sort of risk aversion is symmetric between positive and negative utility. To the degree you are slanting towards avoiding negative utility, you should slant your gambling to some commensurate degree when avoiding hell-worlds.

My worry is this starts sounding a lot like rationalized scope insensitivity.

GregoryLewis
 
Posts: 13
Joined: Sat Oct 15, 2011 10:59 pm

Re: What's so bad about being risk averse, anyway?

Postby Hedonic Treader on 2011-12-19T17:23:00

GregoryLewis wrote:I'd want to defend this by saying that the "I want to be in a good world, not just in the maximum-aggregate space of possibilities" stance will discount utility (+ or −) disproportionately at the low-probability branches.

Again, I think it's important to distinguish what probability means here. If it means uncertainty of your map vs. territory, then I intuitively agree with you. If it means percentage of Everett branches (which are all real), I don't. (I would adjust for confidence in Many-worlds though). I think the distinction is important, as earlier pointed out, but we confounded both by talking about "far-flung Everett branches" and probability.

Not being an expected utility maximizer can solve some problems. Intuitively, I'd rather have 5 certain hedons than a 10^-7 chance at 10^8 hedons. You could adjust expected utility by exponentially discounting small-probability utilities, maybe to a minimum probability threshold below which all utility is ignored. This also solves problems like Pascal's Mugging, while expressing a preference for higher probability of good outcomes. But then again:
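That adjustment is easy to write down. A minimal sketch (the exponent and the threshold value here are arbitrary illustrations, not proposed constants):

```python
def discounted_eu(gambles, threshold=1e-6, alpha=1.5):
    """Expected utility with small probabilities penalized: each probability p
    is raised to the power alpha (alpha > 1 shrinks small p disproportionately),
    and outcomes below a probability threshold are ignored entirely.
    `threshold` and `alpha` are illustrative parameters only."""
    return sum((p ** alpha) * u for p, u in gambles if p >= threshold)

# 5 certain hedons vs. a 10^-7 chance of 10^8 hedons (plain expected utility
# would say the long shot is worth 10 hedons and should be preferred):
sure_thing = discounted_eu([(1.0, 5)])
long_shot = discounted_eu([(1e-7, 1e8)])

print(sure_thing)  # 5.0
print(long_shot)   # 0: below the threshold, so ignored entirely
```

A Pascal's mugger can always inflate the payoff faster than the probability shrinks under plain expected utility, but here any gamble below the threshold contributes nothing no matter how large its payoff, which is what blocks the mugging.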

My worry is this starts sounding a lot like rationalized scope insensitivity.

Yes, I was worrying the same thing.

we might want to talk about justice over possible worlds and suggest that people would still prefer risk averse future cones at the expense of missing out on very high utility slivers.

This sounds much like the normal problem of aggregation to me.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: What's so bad about being risk averse, anyway?

Postby GregoryLewis on 2011-12-29T17:33:00

I guess there can be two kinds of risk aversion:

1) A bias towards avoiding negative shifts
2) A bias towards ignoring low probability outcomes

Let's pretend there is only our universe. So when I take or reject one of Omega's gambles, I am choosing between two fields of epistemic possibilities.

On this view, being risk averse makes sense, and it still seems intuitive even if we assume we have perfect insight into the epistemic space we are casting ourselves into. It is hard to articulate, but there should be some discounting of unlikely outcomes. Picking a high-aggregate (or high-average) gamble where all the utility is crammed into one minute part of the possibility space just seems crazy. We could almost call it 'median util' ;)
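The 'median util' intuition can be made concrete with a toy distribution (the numbers are invented purely for illustration):

```python
import statistics

# A gamble where the good is crammed into a minute part of the possibility
# space: 1000 equally likely outcomes, 999 of them worth 1 util and one
# worth 100,000 utils.
outcomes = [1] * 999 + [100000]

mean_util = statistics.mean(outcomes)      # ~101: looks great to the aggregator
median_util = statistics.median(outcomes)  # 1: what the typical outcome gets

print(mean_util, median_util)
```

The aggregator ranks this gamble as if it were worth about 101 utils, while the median tracks what almost every outcome actually experiences, which is the divide being described here.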

For the reverse case (where we cram all the disutility into a tiny bit of the probability space), I think we likewise discount it, but discount it less, because of the second sense of risk aversion. This seems about right intuitively, and avoids issues like Pascal's mugging.


Now what about if many worlds is true?

On this view, my taking Omega's gamble will change which space of possible worlds will exist*: either a higher-average distribution where some worlds are really good but most are really bad, or a world ensemble where the good is more evenly distributed but the average is lower.

I think we get similarly risk-averse results in this case too, on reflection. The extra ingredient is a Rawlsian account of picking behind a veil of ignorance (what would you pick if you didn't know which outcome would be yours?): you get similar results in avoiding less-equal distributions of good, whether the inequality is between particular lifetimes in a given world or between worlds. In the same way that we should prefer a more equal distribution of utility across society over one with a higher average concentrated on very few people, we should make the same equality-versus-aggregate trade** when selecting world ensembles.

I might be missing a lot.


* Obviously both spaces will exist, because on many-worlds there will be worlds in which I take Omega's gamble and worlds in which I refuse it. But that doesn't really matter.

** In case it wasn't covered in prior discussion: obviously this only goes so far. I wouldn't prefer 1 hedon for all over 2 hedons for all plus 2 million more for one person, for example. Even in less easy cases I'd be willing to go for a better aggregate even at the expense of greater equality (so that some people might be worse off). But the point is that there is a balance to be struck, and we shouldn't just go on maximizing aggregate utility.

GregoryLewis
 
Posts: 13
Joined: Sat Oct 15, 2011 10:59 pm

Re: What's so bad about being risk averse, anyway?

Postby Pat on 2012-01-09T05:14:00

I think that only (1) should be considered risk aversion. The two tendencies seem independent. (1) occurs because we especially dislike losing. (2) occurs because we round down sufficiently small probabilities to 0. If we do consider very small probabilities, our tendency is to give them too much weight.

If there's no name for it, I propose that (2) be called "small-probability discounting." I'd prefer to call it "small-probability neglect" or "small-probability underweighting," since I don't believe it's a source of wisdom that evolution blessed us with. I think most people would agree that (1) and (2) both lead us to do the wrong thing in non-Omega, ordinary-life situations.

What you said about one world versus many worlds made sense. It would be utterly insane to choose the option that has a minute chance of large positive utility, or to choose the "safe" option in which everybody gets depressed to eliminate a minute chance of large negative utility. But it would be utterly insane to do the opposite, too, at least in the second scenario.

Pat
 
Posts: 111
Joined: Sun Jan 16, 2011 10:12 pm
Location: Bethel, Alaska
