Ayatollah So
In another thread I wrote:
The most critical issue my society and all societies are facing hardly bothers us at all, yet. It probably won't make the front page of the newspaper more than a few handfuls of times in the next decade or two. Naturally this means we're mostly not facing it. Trouble is, if we wait until the issue is obviously serious, that's probably way too late. And I'm not talking about climate change.
The problem is that we - and by "we" I mean human beings - may be making ourselves obsolete.
Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica.[1] By the late 1990s, 'expert systems' had surpassed human skill for a wide range of tasks.[2] In 1997, IBM's Deep Blue computer surpassed human ability in chess.[3] In 2011, IBM's Watson computer beat the best human players at a much more complicated game: Jeopardy![4] Recently, a robot named Adam was programmed with our scientific knowledge about yeast; it then posed its own hypotheses, tested them, and assessed the results.[5][6]
We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. By this method the machine could become vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence.[7][8][9][10]
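To make that feedback loop concrete, here's a toy numerical sketch (mine, not the post's - the growth law and numbers are pure assumptions): compare a tool whose ability is improved from outside by a fixed amount each round with one whose improvements compound, because every gain also raises its capacity to produce the next gain.

    # Toy model only: 'ability' and 'gain' are made-up quantities.
    def fixed_improver(steps=30, ability=1.0, gain=0.2):
        # Improved from outside: the same-sized step every round.
        for _ in range(steps):
            ability += gain
        return ability

    def self_improver(steps=30, ability=1.0, gain=0.2):
        # Improves itself: each step scales with current ability,
        # so every gain enlarges the next one.
        for _ in range(steps):
            ability += gain * ability
        return ability

    print(fixed_improver())   # 7.0   - linear growth
    print(self_improver())    # ~237  - compound growth

Nothing says real systems would follow such a clean compounding law, but the gap between those two curves is the 'explosion' argument in miniature.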
The consequences of an intelligence explosion would be enormous, because intelligence is powerful.[11][12] Intelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.
A machine that surpassed us in all these domains could significantly reshape our world, for good or bad. Thus, intelligence explosion scenarios demand our attention.
One of the early warning signs, I expect, will be economic. As machines become better than humans at various tasks - medical diagnosis, science, engineering, math, whatever - the economic value of that sort of human labor will fall. In most cases it will fall to such low levels that even low-status jobs will pay better. The problem is that low-status jobs - flipping burgers, waitressing, whatever - will also become mechanized. Now of course, some jobs, by their very definition as stipulated by the customers, require a human being to perform them. But we can't all be prostitutes.
The provision of basic life necessities to vast numbers of people who can't earn the wages to purchase them will become a very pressing political question, to put it mildly.
Then there are the military implications of artificially intelligent autonomous vehicles to consider.
Which leads to the obvious question: who will hold the power in such a future? Will it be presidents, generals, or programmers? But any of those answers implies the highly optimistic idea that one or more humans will be in control. In practice, few programs do exactly what the programmer (never mind his boss) intended - just look at all the updates and bug fixes. And that's in programs directly written by humans. When the human merely programmed the machine that programmed the machine that ... (etc etc) programmed the machine, any claim to control the result threatens to be a bad joke.
But, no way! We're special! Computers utterly fail at writing poetry, or philosophy, or just generally at learning from experience! Well, yeah - for now. But if we look back to the time (say the '70s and '80s) when computers were just beginning to be important, we can find pundits listing things that computers would "never be able to do" - many of which have since been done.
Our uber-special brains are the products of millions of years of evolution. Random mutations, occurring at a rate not guided by results, occasionally improving body-types in many survival-relevant ways other than just intelligence. Whole orders and phyla being killed off now and then by asteroids or volcanoes. With no intelligent agent in charge. By contrast, modern evolutionary algorithm design methods optimize mutation rates, can select for intelligence alone, don't scrap experiments just when things are getting good, and can occasionally tweak the results to get past traps like local optima. And can run many "generations" per second. With today's technology. Oh, and we have a good working model to copy (selectively!) from, if only we can understand how it works.
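For concreteness, here's a minimal evolutionary-algorithm sketch along those lines (purely illustrative: the genome, the fitness function, and every parameter are inventions for the example, not a model of brains or of any real research system). It has exactly the knobs listed above: an explicit mutation rate, a fitness function that scores a single trait, elitism so the best candidates are never scrapped, and occasional fresh genomes to jump out of local optima.

    import random

    GENES = 32        # genome: a flat list of floats
    POP = 100         # population size
    ELITE = 10        # best candidates kept unchanged each generation
    MUT_RATE = 0.1    # per-gene mutation probability - an explicit,
                      # tunable knob, unlike biology's undirected rate
    MUT_SIZE = 0.3    # standard deviation of one mutation

    def fitness(genome):
        # Selects for exactly one trait: how close the genome's sum
        # is to a target. (Biology scores whole organisms on survival.)
        return -abs(sum(genome) - 10.0)

    def mutate(genome):
        return [g + random.gauss(0, MUT_SIZE) if random.random() < MUT_RATE else g
                for g in genome]

    def random_genome():
        return [random.uniform(-1, 1) for _ in range(GENES)]

    population = [random_genome() for _ in range(POP)]

    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        elite = population[:ELITE]        # never scrap the best so far
        children = [mutate(random.choice(elite)) for _ in range(POP - ELITE)]
        if generation % 100 == 0:
            # occasional random immigrants help escape local optima
            children[-5:] = [random_genome() for _ in range(5)]
        population = elite + children

    print(f"best fitness: {fitness(max(population, key=fitness)):.4f}")

A thousand generations of a hundred candidates finishes in well under a second on an ordinary laptop - that speed, and the ability to aim selection at a single trait, is the contrast with natural evolution this paragraph is drawing.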
----
I'll quote hobbsyoyo's reply next.