CosmicPariah wrote:Tree-planting is mainly a Canadian thing and I could probably get up to earning $300 a day in the next couple of years.
Wow, that's really good! $300/day is $110K per year, assuming you could work every day of the year. (Maybe you can only do it in the summer?)
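To make the arithmetic concrete, here's a quick sketch (the 120-day season is just an assumption for illustration, not something you said):

```python
# Back-of-the-envelope annual earnings at $300/day.
daily_rate = 300  # dollars per day

year_round = daily_rate * 365  # 109,500 -- roughly $110K/year
seasonal = daily_rate * 120    # 36,000 if the season runs ~4 months

print(f"Year-round: ${year_round:,}; seasonal: ${seasonal:,}")
```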
CosmicPariah wrote:I think that by listening to non-fiction audiobooks all day I could get a better education than most other careers (Most careers seem like they only provide highly specific knowledge) and that they will keep me content enough to work long hours.
Agree. If you can listen to audiobooks on the job, then those hours have almost no opportunity cost. (Of course, some things like math are hard to do by audio alone.) If you want to listen to things that haven't already been turned into audio format, you could get a text-to-speech converter and put the converted files on your iPod as well. (I've done this a few times, but I don't listen to things often enough to make it worth the effort.)
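For what it's worth, here's a minimal sketch of how such a conversion could look in Python using the gTTS library (my choice of library and the filenames are just illustrative; any text-to-speech tool would do):

```python
# Minimal text-to-speech conversion sketch using gTTS (pip install gTTS).
# Note: gTTS calls Google's TTS service, so it needs an internet connection.
from gtts import gTTS

with open("article.txt", encoding="utf-8") as f:  # hypothetical input file
    text = f.read()

tts = gTTS(text=text, lang="en")
tts.save("article.mp3")  # then copy the MP3 onto your iPod/player
```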
CosmicPariah wrote:The Singularity Institute would probably be my main choice right now.
I'm currently less enthusiastic about SIAI than I used to be, in part because it works to reduce existential risk, and I think doing so could increase rather than decrease suffering in the multiverse:
"Are increases in existential risks good or bad?"
"A few dystopic future scenarios"
"Friendly AI and utilitarianism"
"Should we be optimistic or pessimistic about the future?"
Here's a quote from one of my comments on a blog post that mentioned the topic:
As I said in one Felicifia discussion: “my current stance is to punt on the question of existential risk and instead to support activities that, if humans do survive, will encourage our descendants to reduce rather than multiply suffering in their light cone. This is why I donate to Vegan Outreach, to spread awareness of how bad suffering is and how much animal suffering matters, with the hope that this will eventually blossom into greater concern for the preponderate amounts of suffering in the wild.”
“Safe AI” sounds like a great goal, but what’s safe in the eyes of many people may not be safe for wild animals. Most people would prefer an AI with human values over a paperclipper. However, it’s quite possible that a paperclipper would be less likely to cause massive suffering than a human-inspired AI. The reason is that humans have motivations to spread life and to simulate minds closer to their own in mind-space; simulations of completely foreign types of minds don’t count as “suffering” in my book and so don’t pose a direct risk. (The main concern would be if paperclippers simulated human or animal minds for instrumental reasons.) In other words, I might prefer an unsafe AI over a “safe” one. Most unsafe AIs are paperclippers rather than malevolent torturers.