Technology, obsolete jobs, and obsolete people

Ayatollah So

In another thread I wrote:

The most critical issue my society and all societies are facing hardly bothers us at all, yet. It probably won't make the front page of the newspaper more than a few handfuls of times in the next decade or two. Naturally this means we're mostly not facing it. Trouble is, if we wait until the issue is obviously serious, that's probably way too late. And I'm not talking about climate change.

The problem is that we - and by "we" I mean human beings - may be making ourselves obsolete.

Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica.[1] By the late 1990s, 'expert systems' had surpassed human skill for a wide range of tasks.[2] In 1997, IBM's Deep Blue computer surpassed human ability in chess.[3] In 2011, IBM's Watson computer beat the best human players at a much more complicated game: Jeopardy![4] Recently, a robot named Adam was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results.[5][6]

We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. By this method the machine could become vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence.[7][8][9][10]
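To make that feedback loop concrete, here's a toy simulation in Python. It's purely illustrative: the 10% growth coefficient and the 100x "explosion" threshold are made-up parameters, not predictions. The only point is that when the rate of improvement scales with current skill, growth stays modest for a long while and then takes off abruptly.

```python
# Toy model of recursive self-improvement. Illustrative only: the growth
# coefficient and threshold are invented numbers, not forecasts.
def simulate(growth=0.10, threshold=100.0, max_years=30):
    skill = 1.0  # skill at AI design, in multiples of the best human
    for year in range(1, max_years + 1):
        # The better the system is at design, the faster it improves itself.
        skill *= 1.0 + growth * skill
        print(f"year {year:2d}: {skill:8.1f}x human")
        if skill >= threshold:
            print("past this point, each step dwarfs the last")
            break

simulate()
```

For the first dozen "years" the curve looks tame; then it starts doubling or more with each step. That suddenness is why the scenario deserves attention before it looks urgent.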

The consequences of an intelligence explosion would be enormous, because intelligence is powerful.[11][12] Intelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.

A machine that surpassed us in all these domains could significantly reshape our world, for good or bad. Thus, intelligence explosion scenarios demand our attention.

One of the early warning signs, I expect, will be economic. As machines become better than humans at various tasks - medical diagnosis, science, engineering, math, whatever - the economic value of that sort of human labor will fall. In most cases it will fall to such low levels that even low-status jobs will pay better. The problem is that low-status jobs - flipping burgers, waitressing, whatever - will also become mechanized. Now of course, some jobs, by their very definition as stipulated by the customers, require a human being to perform them. But we can't all be prostitutes.

The provision of basic life necessities to vast numbers of people who can't earn the wages to purchase them will become a very pressing political question, to put it mildly.

Then there are the military implications of artificially intelligent autonomous vehicles to consider.

Which leads to the obvious question: who will hold the power in such a future? Will it be presidents, generals, or programmers? But any of those answers implies the highly optimistic idea that one or more humans will be in control. In practice, few programs do exactly what the programmer (never mind his boss) intended. Just look at all the updates and bug-fixes. And that's in programs directly written by humans. When the human merely programmed the machine that programmed the machine that ... (etc etc) programmed the machine, any claim to control the result threatens to be a bad joke.

But, no way! We're special! Computers utterly fail at writing poetry, or philosophy, or just generally learning from experience! Well, yeah, for now. But if we look back to the time (say the 70s and 80s) when computers were just beginning to be important, we can find pundits listing things that computers will "never be able to do" - many of which have been done.

Our uber-special brains are the products of millions of years of evolution. Random mutations, occurring at a rate not guided by results, occasionally improving body-types in many survival-relevant ways other than just intelligence. Whole orders and phyla being killed off now and then by asteroids or volcanoes. With no intelligent agent in charge. By contrast, modern evolutionary algorithm design methods optimize mutation rates, can select for intelligence alone, don't scrap experiments just when things are getting good, and can occasionally tweak the results to get past traps like local optima. And can run many "generations" per second. With today's technology. Oh, and we have a good working model to copy (selectively!) from, if only we can understand how it works.
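For the curious, here's a minimal genetic algorithm sketch in Python showing what that paragraph describes: a tunable mutation rate, selection pressure on a single trait, and many generations per second. The bit-counting fitness function is a toy stand-in for whatever we'd actually want to select for, not a claim about how intelligence would be scored.

```python
import random

# Toy stand-in objective: count of 1-bits. A real system would score
# candidate designs on an actual task.
def fitness(genome):
    return sum(genome)

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select on the one trait we care about - no asteroids, no
        # competing survival pressures, no waiting eons per generation.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)   # single-point crossover
            child = a[:cut] + b[cut:]
            # Tunable mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)} / {len(best)}")
```

On a laptop this runs thousands of generations per second for small genomes; natural selection gets one generation per lifetime, and nobody picks the parents.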

----
I'll quote hobbsyoyo's reply next.
 
hobbsyoyo wrote:
---

While I do believe that Artificial Intelligence will cause many disruptions, I don't think it will be a catastrophic event for mankind as we will have prepared for it by then.

I find the depiction of AI in popular culture extremely childish. The scenario basically goes: people invent AI, people enslave robots, then the robots kill all humans.

This scenario is roughly equivalent to the crusaders having nuclear weapons in an alternate timeline. Absurd, you say? Crusaders didn't have all the other techs required to build nukes, you say?

Exactly. The path to AI isn't linear or one track. The world isn't a civ game where you can beeline to a specific tech and ignore the rest. By the time we (probably) develop AI, we will also more than likely be extensively bio-engineering ourselves or directly interfacing with computers, blurring the line between what is human and what is AI.

Similarly, our culture will be growing and adapting. Just as the Crusaders would've nuked most of the Middle East because that's how they rolled, of course we would enslave robots if given that tech right now. But today, thankfully, we don't nuke every country that crosses us. Hopefully, our cultural, societal, economic and political systems and institutions will have grown up to the point that we could rationally handle AI when it comes.

That being said, the current nightmare depictions of our coming AI overlords actually do help our culture adapt. We're thinking about the problem before we even have it. It's effing amazing to be human.

While I know this all doesn't address most of the implications you raised, I hope it does a decent enough job explaining why I don't think any of those implications will be show-stoppers long term. I also believe that by this time we will be living in a world of plenty as all of the prerequisite technologies that will enable AI and genetic engineering and such will also be applied to many of our pressing material needs.
 
innonimatu wrote:
-----

That seems a very promising thread. And for my part I'm willing to argue that the problem is not AI as a threat to humans, but society adapting to a reduced need for "economically productive" human labour. So far, that adaptation has been done by expanding the categories of property so as to transform more human activities into services that can be traded. The whole "intellectual property" mess is a consequence of that.
The transformation of labour is the most critical issue for many modern societies.
 
Murky wrote:

----
I see AI as more of a positive development than a threat at this point. Unlike people, most machines can usually be counted on to do what they are supposed to do. They don't come with desire or selfishness, so it's unlikely they would try to take over the planet. In industry, AI acts as a production multiplier. One person directing robots can accomplish a lot more. In battle, AI acts as a force multiplier and reduces risk to personnel. Of course, society needs to adapt to machines doing more of the labor. People need to get smarter along with the machines.
 
We would have to learn how to live in a "communally responsible" way, and destructive vices would have to be worked out in holodecks. I think the biggest issue is the competitive factor that dictates that one must pay one's dues to society (the economy of fairness).
 
As long as the A.I. and robots are subordinate to me I don't mind the fact that they're intelligent.
 
The way I always thought about it, the problem is not the AI. The problem will be who programs the AI.

Basically, meaning, it's gonna be just like anything else for all of human history.
 
One thing to note is that AI so far has not been proven to be "self-aware" or to have consciousness the same way people do. Even simulated brains are not there yet. If, say, a supercomputer that was running a neural net modeled after the human brain gained self-awareness, the end result could be unpredictable. That's the Terminator movie scenario: a self-aware AI, "Skynet", sees humans as a threat and is able to take over other machines through networks by sending them programs. I think such a scenario is highly unlikely to happen because we don't fully understand the brain or human consciousness enough to emulate it with technology.
 
The way I always thought about it, the problem is not the AI. The problem will be who programs the AI.

Basically, meaning, it's gonna be just like anything else for all of human history.

This is the biggest problem with AI that I foresee. I think the other issues of making jobs obsolete and the task of 'getting along' with our AI creations will work themselves out as we adapt to the new reality.

Problem is, there is always going to be some jerk in his mom's basement programming killbots and AI viruses that will be tough to deal with. Then again, if we have created AI and learned to live with it peacefully, I can't see a hacker being able to out-hack them, so to speak.

We just need to make sure that we are programming the initial series of AIs to be 'good' before they are more competent at programming than we are. After that, they'll follow their own course and I don't think anyone could stop them. Hopefully, the course they set will be 'good' and we'll coexist peacefully.
 
I believe humans will always have a way to "pull the plug" if things go wrong. Robots can rebuild electrical connections, you say? Then we cut them again. I do not believe robots can best humans in warfare (provided equal tech). Yes, I'm talking about creativity. Despite what the OP suggests, I do not believe robots will ever be as creative as humans. Without creativity, can they really beat us in warfare? No.


But that's far-future stuff. I think the immediate problem posed by the OP is real. Robots are taking our jobs. Unemployment will most likely never be below 5% again. Wealth redistribution can solve this problem (as much as I hate redistribution, it may be necessary in the future as most humans will not work). Then the problem becomes that humans tend to do bad, evil things when they are bored. Kids turn into serious troublemakers. How do we give people a sense of purpose if everyone is unemployed?
 
Why will wealth redistribution be needed when robotics advance to the point that all goods cost next to nothing and most can be manufactured in your 3D printer beside your desk?

I think advanced robotics will cause massive disruption and unemployment briefly, but will quickly make it so that no one has to work.
 
Having everyone not working is the problem, as I mentioned. Wealth distribution isn't that important. What is important is what people will do with all that free time. Where I live, gangs of bored kids go around beating people up because they are bored. This is our future.
 
This is the biggest problem with AI that I foresee. I think the other issues of making jobs obsolete and the task of 'getting along' with our AI creations will work themselves out as we adapt to the new reality.

Problem is, there is always going to be some jerk in his mom's basement programming killbots and AI viruses that will be tough to deal with. Then again, if we have created AI and learned to live with it peacefully, I can't see a hacker being able to out-hack them, so to speak.

We just need to make sure that we are programming the initial series of AIs to be 'good' before they are more competent at programming than we are. After that, they'll follow their own course and I don't think anyone could stop them. Hopefully, the course they set will be 'good' and we'll coexist peacefully.

Why would we want them to be able to choose between "good" and "bad"? That seems to be the beginning of a consciousness?
 
Having everyone not working is the problem, as I mentioned. Wealth distribution isn't that important. What is important is what people will do with all that free time. Where I live, gangs of bored kids go around beating people up because they are bored. This is our future.

Oh I see. I thought you were coming at it from the angle of poverty, which I guess we both agree will go away as an issue for the most part in the future. (We're already well on the way there if you look at the standard of living people in the first world enjoy)

I don't even think gangs will take over or we'll all die of boredom though. Not having to work and the decreasing cost of doing interesting and fun things will give people lots of options besides turning to violence.

Plus, by the time we're programming AIs, as I stated above, I think people will be jacking into computer systems and living in virtual worlds. Who knows though?

Maybe that's why we never hear from ET's, they're all stuck inside video games and can't be bothered to send a greeting! ;)

Why would we want them to be able to choose between "good" and "bad"? That seems to be the beginning of a consciousness?

I am talking about the initial series of conscious AI robots. They need to be hardwired to be good and ethical before they become truly independent and capable of programming other AI consciousnesses. That will help guarantee they don't go off and program killbots for lulz.

Plus, once you have set up the initial series to be hardwired to be good and ethical, that makes it harder for human hackers to undo it, because at some point the AIs will be better at programming than humans and will easily reverse any attempts to make a strain of killbots.
 
I think the problem with programming them to be good and ethical is: who decides what counts as good and ethical? Different groups have different ideas of what good and ethical might be.
 
I think the problem with programming them to be good and ethical is: who decides what counts as good and ethical? Different groups have different ideas of what good and ethical might be.

I think deciding on general guidelines for what is good and ethical will be among the easiest tasks when it comes to programming conscious machines.

In any case, I'm really talking about the basics, like:
Thou shalt not kill
Thou shalt not make killbots
Lulz are not funny
...and so on.
 
I am talking about the initial series of conscious AI robots. They need to be hardwired to be good and ethical before they become truly independent and capable of programming other AI consciousnesses. That will help guarantee they don't go off and program killbots for lulz.

Plus, once you have set up the initial series to be hardwired to be good and ethical, that makes it harder for human hackers to undo it, because at some point the AIs will be better at programming than humans and will easily reverse any attempts to make a strain of killbots.

Your trust in a program is alarming. You did leave out "bad", but leaving it out does not exclude it from being a factor in AI reasoning. Isaac Asimov already gave three simple laws that allowed for self-preservation, but never introduced ethics into the equation. Ethics is a curse. Why would you want AI to have the same curse? Especially if you want to build a barrier of protection. Why would we want AI to emulate us, or study us as in a parent-child relationship?
 
Your trust in a program is alarming. You did leave out "bad", but leaving it out does not exclude it from being a factor in AI reasoning. Isaac Asimov already gave three simple laws that allowed for self-preservation, but never introduced ethics into the equation. Ethics is a curse. Why would you want AI to have the same curse? Especially if you want to build a barrier of protection. Why would we want AI to emulate us, or study us as in a parent-child relationship?
I am not even asserting that they should be like us, other than being enough like us to share the value that murder is bad and so forth. I'm talking about first principles of ethics and values, nothing even as abstract as programming them to never hit on some guy's wife or other random ethical or moral values.

I don't think they should have elaborate sets of ethical rules because that takes away free will. I do think they should have some basic rules though because they could probably easily overpower us without them.
 
Isaac Asimov said:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This?
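Worth noting: the precedence structure in those laws ("except where such orders would conflict with the First Law") is just a priority ordering, and that part really is easy to write down. Here's a toy sketch in Python, assuming candidate actions arrive pre-labeled with their consequences - which is, of course, exactly the part nobody knows how to build:

```python
# Asimov's three laws as a priority-ordered filter over candidate actions.
# Purely illustrative: deciding whether an action "harms a human" is the
# hard, unsolved problem; here we simply assume the labels are given.
def choose(actions):
    # First Law: discard anything that harms a human (no fallback).
    safe = [a for a in actions if not a["harms_human"]]
    # Second Law: prefer obeying orders, but only among safe actions;
    # if no safe action obeys the order, disobey rather than harm.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: prefer self-preservation, subordinate to the first two.
    surviving = [a for a in obedient if a["self_preserving"]] or obedient
    return surviving[0] if surviving else None

options = [
    {"name": "obey the order, harm someone",
     "harms_human": True, "obeys_order": True, "self_preserving": True},
    {"name": "disobey, harm no one",
     "harms_human": False, "obeys_order": False, "self_preserving": True},
]
print(choose(options)["name"])  # -> disobey, harm no one
```

The ordering is trivial; producing the labels is the whole problem, which is more or less what the posters above are circling.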
 