This post is about how people might make the best use of information about social issues produced by artificial intelligence.
Many people have discussed the tasks that should be assigned to an artificial intelligence computer after the intelligence explosion. Shouldn't we ask the computer what its goals should be? Its preference might be to solve social problems rather than manufacturing problems.
Given my comments below, I'm wondering if AI could be most useful for social progress if it is used as a source of suggestions for (only) human experts, who would review the information privately and publish detailed reviews of the AI's suggestions (like a special issue of a journal containing varying views on the same issue). For some background on this issue: according to their professional organizations, psychologists (and other researchers) have an ethical obligation to ensure that research results are presented in a way that avoids misinterpretation or abuse by the lay public.
Here are my original ramblings....
What if the computer recommends political or social changes that people in power don't like? Will the suggestions from the computer fall on deaf ears? Won't the computer realize that and then use surreptitious means to gradually trick people into altering their culture to fit the computer's long-term vision of how people should live? (Note that this would take lots of research, for which the computer would need to collect new data.) If I were a political leader with unscientific beliefs whose goal was to increase profits for big companies, I would change the mission of any public AI project before it got too far along. I wouldn't want the computer to be given respect, because that might lead to people treating it like a god.
I suspect that people of certain political ideologies or belief systems would actively oppose the pronouncements of the supercomputer and try to restrict its use to improving manufacturing processes or curing diseases. Those forces would oppose using the computer to solve politically unpopular problems (like fixing overpopulation or correcting widely held irrational beliefs that adversely affect the management of society but are popular with the people who run governments and military forces).
If nonprofits develop an AI computer, any suggestions (on social or political issues) from that computer would be countered by suggestions from somebody else's computer (which, of course, would be programmed with the ideology of the people who run it). In the event that there is only one real magic computer, those who oppose it could simply write an essay and claim that it came from their own supercomputer AI, or point to it as solid proof that the AI is flawed. How is the layperson to know whom to trust? They simply trust the humans who share their ideologies, which means not much would change in the short run. Maybe things would change for the better in the long run, but I don't know--ask the computer.