"Humans Need Not to Apply"

Well, I don't have a crystal ball, nor do I think that's ever going to completely happen...
But, to make a wild guess:
We'll drink more and engage in life rather than office work.
Well, El Machinae hit the nail on the head - the transition will be a massive hurdle to overcome.

And even still, do we just tacitly accept that one day we'll be enslaving sentient robots? Or am I looking at it wrong?

I think the transition will be important. Once we're all in networked holodecks and given food replicators, things will be fine(ish). Social currency is really a great motivator; the urge to entertain (and be liked) is always present. Anyone on rpg.net who puts the least bit of thought into making their posts entertaining understands: whether it's thoughtful or funny, it's crafted to generate some type of social currency.

I think the transition is the hardest part. Socially, we do value work. Currently I have to tell people I am unemployed, and it's not easy, seeing their noses wrinkle. I've been thinking about this hard, and have nearly posted to rpg.net many times. I don't see a solution other than deliberately inefficient workfare. And that needs to be funded using progressive taxation. The workfare should either generate some social good, or better, correct some market inefficiency. The hardest part will be the deliberate inefficiency. We like to get a bang for our buck. The free market rewards that type of innovation, and we encourage the public sector to do so too. If we can get a team of pothole fixers to fix more quickly, we'd like them to, please.

Inefficiencies can disappear as we invent/discover new forms of morale-valuable workfare. Morale-wise, not moral-wise. Plus, it would be awesome if they created public goods.

The demand-side benefit of workfare will disappear. Eventually you're using more and more of your paycheque to rent time from some network of robots so that the uber-rich can collect that income. The uber-rich will continue to modify their products and services to compete for the consumer dollar of Joe Potholefixer. But Joe's purchases will cause increasingly less employment.

Straight-out welfare would be nearly as good. Two downsides: (a) less ability to correct market failures, and (b) the shitstorm of offense when people think that other people are getting something for free. The uber-rich would still continue to compete for the consumer dollars given to us through progressive taxation, but I personally think that avoiding the shitstorm is valuable. It's there, sitting as an obstacle. No amount of angry blogging can eliminate it. Certainly not quickly enough.

So, my recommendation: workfare. An increasing number of workfare projects, with intentional inefficiencies that can be eliminated as more workfare projects are discovered. The department of lolcat production (with its quota of 20 funny lolcats per day) can get Photoshop (and layoffs) once the department of playing with kids gets into full swing. The goal here is employment and a nod to creating a societally useful product.

The feeling of performing valuable work is an important one. We can use a lot of feedback from people we like ("lol!"). We can use our own internal morals (satisfaction in a job well done). We can even (though few people do this) just use the Free Market as validation; if someone is paying you to do something, then there are really strong odds they find it important, at least to them.

It's nice when all three align. I'd greatly enjoy being a rich attorney who's respected for suing large corporations when they trounce welfare moms' rights. Or a pediatric cardiac surgeon who can afford yearly Caribbean cruises. I'm not one, obviously, and rarely do all three of those align. You might wonder why you should be proud you can align and collate Excel spreadsheets, for example.

It's why I think the intentional workfare needs to have some obvious social benefit. Digging and filling ditches isn't as emotionally satisfying as keeping your boots polished and feeling like you're contributing to your nation's defense. Or even hand-filling potholes a few times a day.

On workfare - a great example is Goodwill (in the US). The place essentially exists solely to give people decent-paying jobs. It doesn't really contribute that much to society outside of that and is inefficient. But it does employ people and is not itself a net drain on the economy, I think.

Let robots do all the work, divert the wealth to the people that would have been paid via benefits, and let them chill and play and party all their lives.

Paradise.

Same question - if we require sentient robots to do a lot of the work (which we probably will but possibly not), is it OK to force them to do it?
 
They're robots made by us; they have no rights. Make them work, give the wealth to humans.
 
I'd imagine that the robots would not be made sentient. Such a thing is mostly just used in sci-fi to make stories out of; in reality, there would be no purpose in making a sentient robot.
 
There's no reason to "enslave" a sentient robot, is there? Just make it so it gets an insane, maybe even essentially sexual thrill out of obeying your every command. Bam! and done. Or just make them all wireless extensions of your "will." You don't need to build C3PO and then slap a slave collar on him, that's silly.
 
Marxism is my favorite form of government ... On paper.

In practice, it fails because humans on top are too greedy.

Hmmm, maybe that would be another good use for robots, to be our future leaders and politicians.
 
I'd imagine that the robots would not be made sentient. Such a thing is mostly just used in sci-fi to make stories out of; in reality, there would be no purpose in making a sentient robot.

That is a really good point, actually. It is of course impossible to tell if it will prove correct until we get there.
 
Here's the problem: there's way too much prediction/speculation going on.

Example: I think we can agree that Ben Franklin was at least as intelligent as the average CFC poster.

Ben Franklin's idea is central to what I want to say. That was during the Industrial Revolution, when things were changing: with all these great machines, we were supposed to be able to produce everything we need in 4 hours a week. That may even be arguably true. However, humans don't stop at just what they need, the way animals in the wild do.
Point being, we'll have to see what happens... it's fun for some people to talk about it, but let's not get worried or feel guilty about potentially enslaving robots well after our lifetimes.
 
It is an interesting ethical question and given the pace of advancement, it is one we should face sooner rather than later. You are free to talk about or not talk about whatever you want.
 
The problem with any such company has been that when profits increase, only the people at the top of the workforce become rich, while the ones at the bottom carry on being paid as little as possible.

So the idea of the 4-hour work week would never become reality. Even if, with the industrial and a possible future robotic revolution, we could produce in 4 hours what we once produced in a whole week, the people at the bottom would not be paid a week's salary for 4 hours of work. They would continue being paid the same for 4 hours as they did before, due to the greed of the people at the top.
 
A solution would be to abolish the minimum wage and generally reduce the economy's reliance on it, to allow this to happen without all too much of a bump. Because of the robots, there will be no decrease in real purchasing power, though humans can always replace robots in certain sectors. Additionally, we could - for instance - make industry a free-for-all for automatisation and limit automatisation in agriculture and services.
 
There's no reason to "enslave" a sentient robot, is there? Just make it so it gets an insane, maybe even essentially sexual thrill out of obeying your every command. Bam! and done. Or just make them all wireless extensions of your "will." You don't need to build C3PO and then slap a slave collar on him, that's silly.

Even then, is that ethical to do? I guess the litmus test is whether or not you would feel OK with genetically programming a person to be like this.
 
Here's the problem, there's way too much prediction/speculation going on.
Yes, but it's a potential issue. I think it's better to have the conversation before a problem actualizes, even if it's for naught, than waiting for the problem to actualize first.
Even then, is that ethical to do?

It's what we do, implicitly, when we buy dogs. We tend to buy dogs for reason X and shop for personality Y. And the fact that dogs seem to love doing what we want them to do really placates our sensibilities.

I wanted a dog that loves playing with kids, and that's what I got. The breeder intentionally bred dogs that get along well with kids, and marketed them as such.

Whereas, I can assure you the mice we use in the lab tend to not be lollingly ecstatic to be there.
 
Sure it can, and has. You don't have to call it art, but that doesn't really matter. Plenty of stuff that I wouldn't consider art that's made by humans is considered to be art, no matter what I might think.

There is always going to be *some* human input anyway, whether it's with the actual creation of the AI or its maintenance.

define "art"
 
I'd imagine that the robots would not be made sentient. Such a thing is mostly just used in sci-fi to make stories out of; in reality, there would be no purpose in making a sentient robot.

It is of course impossible to say now, but it is conceivable that any entity intelligent enough to take the job of, say, a scientist or sales manager would HAVE to be sentient.
 
Even then, is that ethical to do? I guess the litmus test is whether or not you would feel OK with genetically programming a person to be like this.

That is an interesting question and I would counter with "what is a human?" It's not the kind of thing you can just fling out an easy answer to, really.

The easy answer is to just gasp and be like "No, you're right, of course not!!!" But the being in question genuinely enjoys its servitude, experiences it as pleasurable, and would be bereft without the opportunity to serve. Keep in mind this isn't a pure Darwinian self-interested organism anymore, but a creature made for a specific role. Is being a slave tragic if you enjoy it and would consider not being a slave hellish?
 
It is of course impossible to say now, but it is conceivable that any entity intelligent enough to take the job of, say, a scientist or sales manager would HAVE to be sentient.

I wasn't thinking of making them capable of such high-end jobs, mainly just lower-end white-collar / industrial jobs.

I'm sure that humans with high intelligence would still enjoy being scientists, but in this case they could have an army of robots to do all their research, typing, and menial tasks for them, and the human(s) could simply oversee and monitor the robots' work.
 
Thought experiment: automatisation eventually takes over entire sectors, yet is kept out of others (say services and agriculture) through market forces and social pressure. Plausible?
 