[RD] The God Machine

Do you think the God Machine is a good thing overall?

  • Yes: 0 votes (0.0%)
  • No: 10 votes (100.0%)
  • Neither, it is neutral: 0 votes (0.0%)
  • Total voters: 10
The whole premise of the God Machine is that this works, though; if you doubt the premise itself, the whole experiment doesn't make any sense. Pretend the God Machine really does intervene only in violent crime and the like, and completely ignores petty thievery, lying, or cheating (as it states in the text).
I have to doubt the premise; it’s just who I am! :lol: (I’m sure I’m inviting some criticism as to whatever logical fallacy I’m committing here.)

You’d have to define violent crime in a way that is clear and unambiguous. I don’t think it can be seen that way unless you take a kind of moral absolutist position and then build the system around that.

Here’s a direct case: would a doctor be able to perform an abortion? Someone who is against legal abortion would consider it murder. Someone against restricting abortion would say it isn’t. What does the computer decide?

Then you have indirect cases. The Ford Pinto had a design flaw that resulted in an increased rate of fatal crashes, and this was known about. In short, Ford did some statistical analysis and decided that it was more economical not to fix the problem. The fix would have cost $11 per vehicle, which most people would argue is a trivial amount relative to the purchase price of a car.

Would the computer intervene in the Ford Pinto’s design flaw? What if instead of $11, it was $110? Or $1,100? Or $11,000? (That would triple the price of a car in the 1970s.)
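For concreteness, here is a rough back-of-the-envelope sketch of the kind of calculation that memo is reported to have contained. The fleet size and liability figures are the commonly cited ones (roughly 12.5 million vehicles, about $49.5 million in estimated liability) and are purely illustrative, not authoritative; even at $11 per vehicle the fix already costs more than the estimated liability, which is exactly why "where would the machine draw the line?" gets uncomfortable:

```python
# Back-of-the-envelope version of the memo's cost-benefit logic.
# Figures below are the widely cited ones and are used purely for illustration.
VEHICLES = 12_500_000                    # ~11M cars + ~1.5M light trucks
ESTIMATED_LIABILITY = (180 * 200_000     # burn deaths
                       + 180 * 67_000    # serious burn injuries
                       + 2_100 * 700)    # burned-out vehicles -> ~$49.5M

for per_vehicle_cost in (11, 110, 1_100, 11_000):
    total_fix_cost = per_vehicle_cost * VEHICLES
    verdict = "fix" if total_fix_cost < ESTIMATED_LIABILITY else "don't fix"
    print(f"${per_vehicle_cost:>6}/vehicle -> fix costs ${total_fix_cost:,} "
          f"vs ~${ESTIMATED_LIABILITY:,} liability: {verdict} on pure cost grounds")
```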

It kind of goes into the question brought before the Supreme Court: what is pornography? The famous response being “I know it when I see it.” I kind of take that view.
 
You’d have to define violent crime in a way that is clear and unambiguous. I don’t think it can be seen that way unless you take a kind of moral absolutist position and then build the system around that.

I actually made that same criticism, but I didn't want to spoil it; I wanted people to play along :D

Here’s a direct case: would a doctor be able to perform an abortion? Someone who is against legal abortion would consider it murder. Someone against restricting abortion would say it isn’t. What does the computer decide?

The God Machine intervenes when someone is about to commit a violent or highly immoral act that would otherwise break the law. Seeing as the God Machine acts independently of states and such, I'd think it would simply do whatever it deems correct, or act according to the human rights charter. In the authors' view it would likely allow the abortion, but that is just my speculation. A proper self-learning AI might actually not understand why anyone would want an abortion at all, nor why anyone would want to procreate, since it has no sense of embodiment.

Then you have indirect cases. The Ford Pinto had a design flaw that resulted in an increased rate of fatal crashes, and this was known about. In short, Ford did some statistical analysis and decided that it was more economical not to fix the problem. The fix would have cost $11 per vehicle, which most people would argue is a trivial amount relative to the purchase price of a car.

Would the computer intervene in the Ford Pinto’s design flaw? What if instead of $11, it was $110? Or $1,100? Or $11,000? (That would triple the price of a car in the 1970s.)

It kind of goes into the question brought before the Supreme Court: what is pornography? The famous response being “I know it when I see it.” I kind of take that view.

No, the GM would likely do nothing. It would also do nothing about systemic issues, or psychological violence, or anything that isn't obviously black or white.

This is essentially what I think in this regard: By forbidding ALL illegal violent acts, the GM makes it so that systemic and subtle violence is THE ONLY form of violence allowed, which inherently boosts all those people who are adept at using these forms of violence as means of control. We would see a sharp surge of this type of violence, yet no one could intervene, because words only do so much and violent revolution is forever off the table. So now the most powerful people are those who can exert power, control and violence in a more subtle way, aka politicians, elites, capitalists and sociopaths. It's not a violence-free world; it's a world that is sanitized of physical violence but brimming with other forms of it. Certainly the GM would not help make people more moral, it would only make them LESS IMPULSIVE.
 
If I accept the hypothetical, the God Machine allows you to believe you have made the choice, no? So what's the difference?

Yes, exactly! If you follow the idea of the authors, they think this conundrum solves itself. Personally I am not sure it does. Basically the argument we are having currently is about whether the illusion of free will is qualitatively the same as free will, which is a really difficult debate.

so we make AI in our image

Almost. The AI simply has all kinds of human data to learn from, but it learns in its own way; it is not programmed or made by us, only informed by us.

An AI cannot tell us what is moral. Any values it has would be reflective of those who programmed it.

Not in this case, no, since it was not programmed. See my reply to Berzerker. The values of the AI are the ones it itself hand-picked from all of the human data that was available. I think we are even supposed to believe that the learning algorithm is itself not programmed by humans, hence the entire concept of GNMs.

I don't plan on killing anyone, but why would I give up the option for nothing in return? My neighbor, though, bears watching.

What you get in return, I suppose, is the confirmation that you will never, ever, physically harm anybody, irrespective of your state of mind. What you give up, however, is very big, imho.

Isn't this essentially the same as that "Roko's Basilisk" thing? Assuming it is even possible, I'd say that the existence of such a machine makes things even worse, because humans stay as they are but now have a built-in deprecation.
In a way, it is like comparing a person who has to examine whether they will leave a room or not with one who has to deal with a god in the room first, while outside the room things are the same. At best you lose all rooms of this type.

I genuinely do not understand a single thing you're saying. And no, Roko's Basilisk is different in many ways. This example is actually not at all about AI, but about morality. I will reveal soon what (I think) the point is/was.
 
By forbidding ALL illegal violent acts, the GM makes it so that systemic and subtle violence is THE ONLY form of violence allowed, which inherently boosts all those people who are adept at using these forms of violence as means of control. We would see a sharp surge of this type of violence [...]
Then:
The values of the AI are the ones it itself hand-picked from all of the human data that was available.
Wouldn't that then itself adjust over time as behaviors change? All unambiguous violence is replaced with ostensibly nonviolent subversion, but the device itself makes decisions based on human behavior. Since the behavior changes, the function of the device should too.
 
Thank you for all the answers so far, much appreciated.

What about government policy decisions that will inevitably both cause and prevent death?

Would McDonald's still be able to make burgers and fries? This is certainly some form of indirect murder.

I'm not sure McDonald's is indirect murder, seeing as people kind of wilfully kill themselves. I'd say that ads specifically targeting kids with Happy Meals and the US's dietary policy are close to grand-scale murder, though. No idea why nutrition isn't taught in schools; they should've done this like 50 years ago. And no, no food pyramid ****.

But the GM likely does not care about any of that, since it is neither illegal nor physical violence.

Sure, but maybe they would close down because no one would "choose" to eat there ;)

The GM is not intended to control the lives of people and make them healthy, only to stop violence and extremely immoral acts. People can still wilfully drink or eat themselves to death, smoke, and do just about anything besides murder or the like.

Freedom is the absence of coercion or constraint; murder is a constraint, and therefore eliminating it does not restrict freedom at all. Just the opposite: the freedom of would-be murder victims is preserved.

That is a negative definition of freedom, which is definitely valid, but it's also not the only one. You can either view freedom as the absence of constraint, or you can view it as the presence of some quality, or you can view freedom purely as a relational quality. In general, though, I definitely agree with your argument; in this respect the GM does make some people free. Good point!

I wonder what the infrastructure would need to look like to manage the sex lives of 10 or more billion people?

The GM does not intervene in people's sex lives; it would only stop violent rape (actually, only nonconsenting rape).

Would it allow 10 billion people to exist? It certainly would practice some form of eugenics based on how many of your ancestors were created through rape and deception.

If this morality machine needs to constantly monitor our behavior and alter it, why not just eliminate the effort/problem and discourage sex and reproduction altogether?

Again, none of these concern physical violence or illegal activities, so the GM would not intervene at all. The GM certainly would not practice eugenics, and I don't understand what the rape of anyone's ancestors has to do with this.

Some here have tried to view the GM as an "all-knowing AI trying to lead humanity", but its only purpose is to stop violent crime or illegal acts that are highly immoral. I also do not understand why the GM would ever stop our reproduction; how did you arrive there?

So if you can't kill someone because the machine forces you to change your mind, can you still just think about it?

If not, such a world would have very boring literature.

On the other hand, consider this: there are religions that condone shunning, war, execution, killing animals, and so on. Believers are supposed to do these things under specific circumstances, if they follow the tenets of their faith in a literal sense.

Such a machine would result in rewriting every holy text in existence unless it already preaches total nonviolence.

Of course you can think about it; you can fantasize about all kinds of violence, you just change your mind shortly before actually committing it. One could still write books or make movies with violence in them.

The GM would also not be interested in rewriting any holy texts just because they're violent, since reading or writing a violent text is not in itself violence. Its job is not to make humans a more peaceful people, nor to change our culture into a nonviolent one, but very simply to stop violence shortly before it happens.
 
Then:

Wouldn't that then itself adjust over time as behaviors change? All unambiguous violence is replaced with ostensibly nonviolent subversion, but the device itself makes decisions based on human behavior. Since the behavior changes, the function of the device should too.

I think you're right, and this is where the example gets paradoxical:

The machine is supposedly absolutely autonomous and self-learning, yet it is completely restricted by one rule imposed by humans: that it shall intervene only when actual violence or immoral illegal acts are about to occur, and under no other circumstances.

If you take the example as given, then what you say is entirely unproblematic: the GM simply does not evolve further; it does its job until all prisons are abolished and there is no violence anymore, and from that point on the GM might as well be shut off.

If you try to apply some degree of logic or autonomy to the example, then of course the AI would change as it gets more data, and of course, if it were really autonomous, it would not even be bound to human rules in the first place, would it? But at this point you're probably taking it too far: the GM serves a specific purpose as an example and is supposed to be taken at face value, and I think the idea is that the GM actually rarely does anything, because people have simply abandoned violence by then (an optimistic presumption, I know).
 
Basically the argument we are having currently is about whether the illusion of free will is qualitatively the same as free will, which is a really difficult debate.
Yeah, this is about as far as my brain got into it before I realised I need to work today and gave up for the time being :p

A tl;dr of my current thoughts: if we cannot know either way if our decision was modified, we either inherently have to accept the premise or reject it outright; otherwise we'd end up in a permanent state of "decision paralysis", so to speak. That would be a low-level anxiety cost to things we - individually - perceive as pivotal purely on moral grounds. Not rape or murder, obviously, but the entire danger is the notion of human thought going "what if". What if the AI elected to modify lesser decisions? We wouldn't know. What if the AI grew beyond the original human programming constraints? We wouldn't know.

I don't think it's something that solves itself, though I can see the logic that leads the authors to claim such. That solution speaks to a world where people are happy and content with a basic state of affairs - not questioning, just accepting. People aren't like that! There will always be people, however ostracised, who will push boundaries, ask questions, and have those doubts that nobody else has. I don't know if I'm that kind of person, but I strongly believe those types of people can and should exist. I favour science fiction that isn't necessarily dystopian, but stories where the simulation models require a baseline of conflict for humanity to treat it as a believable world (Agents of Shield, in my possibly banal opinion, did quite a good job of this during their Season 5 run).
 
That would be a low-level anxiety cost to things we - individually - perceive as pivotal purely on moral grounds. Not rape or murder, obviously, but the entire danger is the notion of human thought going "what if". What if the AI elected to modify lesser decisions? We wouldn't know. What if the AI grew beyond the original human programming constraints? We wouldn't know.

really cool, an argument I actually haven't seen yet. I buy that.

I don't think it's something that solves itself, though I can see the logic that leads the authors to claim such. That solution speaks to a world where people are happy and content with a basic state of affairs - not questioning, just accepting. People aren't like that! There will always be people, however ostracised, who will push boundaries, ask questions, and have those doubts that nobody else has. I don't know if I'm that kind of person, but I strongly believe those types of people can and should exist. I favour science fiction that isn't necessarily dystopian, but stories where the simulation models require a baseline of conflict for humanity to treat it as a believable world (Agents of Shield, in my possibly banal opinion, did quite a good job of this during their Season 5 run).

Fully agree. I also do not think this is incidental in any way. Media currently is a lot about performative rebellion or total subjugation, but both of these types of media are actually conformist; genuinely non-conformist media, be it music or TV or anything else, essentially only exists at the fringes (mostly of the internet, because actual, physical countercultural communities are pretty dead, or better, fractured, compared to the 60s/70s/80s/90s). The biggest and most commendable counterculture we have is probably coming out of lgbtq+ activism currently. I feel like we could really use another Lou Reed or David Bowie right now :D
 
As-written, no. There are implied dangers within that text, and they are pretty nasty.

Having the state ultimately decide on genetic engineering should it become mainstream is a terrifying thought, and this is effectively a computer (programmed by human beings) doing the same thing.
 
Having the state ultimately decide on genetic engineering should it become mainstream is a terrifying thought, and this is effectively a computer (programmed by human beings) doing the same thing.

There is no state in this example which controls the GM, no computer (a self-learning AI is not a computer in any way, and the AI in the example is semi-biological), and definitely not a computer programmed by humans, and the GM does not in any way do genetic engineering or anything of the sort, as stated multiple times now. Clearly you did not even read the text in the OP :(
 
The GM is not intended to control the lives of people and make them healthy, only to stop violence and extremely immoral acts. People can still wilfully drink or eat themselves to death, smoke, and do just about anything besides murder or the like.
So... suicide by sleeping pills is still okay? (according to the machine; I'm not endorsing this). Romeo could still take poison and Juliet could still stab herself?

The GM does not intervene in people's sex lives; it would only stop violent rape (actually, only nonconsenting rape).
:dubious:

Rape is, by definition, nonconsenting. It doesn't have to be violent. It just needs to be non-consensual.

Nice to know the machine would stop it, though. It'd be nice if the machine would change the minds of people who think sex/"marriage" with underage girls is okay.

So I'm guessing this would stop human trafficking?

Of course you can think about it; you can fantasize about all kinds of violence, you just change your mind shortly before actually committing it. One could still write books or make movies with violence in them.

The GM would also not be interested in rewriting any holy texts just because they're violent, since reading or writing a violent text is not in itself violence. Its job is not to make humans a more peaceful people, nor to change our culture into a nonviolent one, but very simply to stop violence shortly before it happens.
You have somewhat missed my point with this last paragraph. If your holy text says you must kill something (human or animal), and the machine forces you to change your mind about doing it, that would mean you wouldn't be following the rules of your faith (I'm talking about any religion that requires killing or sacrificing living things). Since you wouldn't be following the rules written in your holy text (whatever it might be), that holy text would need to be rewritten into some version that you could follow, without feeling frustrated or conflicted about not being able to follow particular rules or tenets.

I'm not saying the machine would rewrite the text. I'm saying the affected humans would need to do that, for their own mental well-being. After all, what would be the point of having a text that says, "You must kill five ______ every full moon" - but you can't/don't want to, and you're not sure why the text would even say that, and by not doing it, you're breaking the rules?

This is getting into "Captain Kirk defeats the AI by applying a feedback loop of illogic" territory, except the targets are humans.
 
So... suicide by sleeping pills is still okay? (according to the machine; I'm not endorsing this). Romeo could still take poison and Juliet could still stab herself?

That is a grey area which the text does not touch. Personally I believe suicide is entirely morally permissible, but I am unsure whether the machine would agree. Very good input, Valka; curious how I never thought of that! Thanks

:dubious:

Rape is, by definition, nonconsenting. It doesn't have to be violent. It just needs to be non-consensual.

My point was more about "pretend rape" as role play between consenting adults vs actual rape. I agree that all rape is a nonconsensual sexual act; that is also the definition I support. But the GM would not intervene in BDSM or any other kink. Some people specifically mentioned the GM policing someone's bedroom, hence the distinction.

So I'm guessing this would stop human trafficking?

It would stop human trafficking since it is both violent and illegal, exactly.

You have somewhat missed my point with this last paragraph. If your holy text says you must kill something (human or animal), and the machine forces you to change your mind about doing it, that would mean you wouldn't be following the rules of your faith (I'm talking about any religion that requires killing or sacrificing living things). Since you wouldn't be following the rules written in your holy text (whatever it might be), that holy text would need to be rewritten into some version that you could follow, without feeling frustrated or conflicted about not being able to follow particular rules or tenets.

I'm not saying the machine would rewrite the text. I'm saying the affected humans would need to do that, for their own mental well-being. After all, what would be the point of having a text that says, "You must kill five ______ every full moon" - but you can't/don't want to, and you're not sure why the text would even say that, and by not doing it, you're breaking the rules?

This is getting into "Captain Kirk defeats the AI by applying a feedback loop of illogic" territory, except the targets are humans.

I did actually completely miss the point, didn't I? :D

But yes. If your belief or ideology is inherently violent, the machine will stop you from following it in practice. It will not change your mind about it being good or bad. People can still technically think the murder of brown people is justified, or that the stoning of homosexuals is god's will. They just cannot murder or stone anymore. Accordingly, maybe some holy texts would be rewritten to work in the age of the GM. Your logic is not far off at all; it is sound and easily understandable.
 