Political Philosophy discussion

Edit: Imagine a Crazy Scientist (tm) who finds a way to turn ants sapient. Great. Now you have a society consisting of humans and ants. How much power do you give an ant as opposed to a human? If you say 1 ant = 1 human, the humans don't need to bother with elections any more; they become servants in the society they created. And society will transform into something even the strongest collectivists would be uncomfortable with. But any other power relation would be very arbitrary. Now what if I tell you that the Cr.S. I mentioned was a rabid collectivist, who didn't like that most of the other humans thought differently, so (s)he just "added" another species to change the majority vote? What if there are other very capable people who don't like bowing to the majority either? Does this just become a race between these people to "include" one species or the other, with the rest of humanity (its majority, remember) just standing on the sidelines and having to wait for the outcome?
Interesting and nightmarish scenario (not that ants become sentient but the political ramifications of it.)

What happens when you start seeing gestalt minds, most likely due to computer/brain inter-connectivity then linking multiple minds into a hive-like scenario? Is that ONE mind, or is it the many? How much of it is processing through computing and how much through organic brain? When these questions become blurry and impossible to answer, we have a problem with the power and scope of the definition of a vote.
 
Interesting and nightmarish scenario (not that ants become sentient but the political ramifications of it.)

What happens when you start seeing gestalt minds, most likely due to computer/brain inter-connectivity then linking multiple minds into a hive-like scenario? Is that ONE mind, or is it the many? How much of it is processing through computing and how much through organic brain? When these questions become blurry and impossible to answer, we have a problem with the power and scope of the definition of a vote.
There is a civic named Gestalt Mind, or something like that, in the future eras in C2C.
I think it just does away with voting and directly analyses and changes things.
 
Is that ONE mind, or is it the many?
Can "they" disagree with each other? If yes, they are many, if no, "they" are one.

How much of it is processing through computing and how much through organic brain?
I don't think that's even nearly as important as the previous question. At some point, I don't think the matter matters (sorry) anymore.

@raxo2222 8values certainly seems to be interesting, but many of those questions are a bit problematic (is this or that important compared to what?) - in many cases where I wasn't sure, I picked the neutral option. My result is http://8values.m4sk.in/results.html?e=15.9&d=50.0&g=67.8&s=53.4
Edit: I'm a bit surprised about the rather high authority value, but that can be related to the questions asked.
 
Can "they" disagree with each other? If yes, they are many, if no, "they" are one.
They probably do, but not in an identifiable manner - who can tell exactly where a particular opinion comes from? I mean, our own minds disagree with ourselves all the time right?

I don't think that's even nearly as important as the previous question. At some point, I don't think the matter matters (sorry) anymore.
If that's not a matter that matters, then how would you grant a truly sentient AI a place in government? With a vote? No? Even if more intelligent by far than we are?
 
I mean, our own minds disagree with ourselves all the time right?
As long as we are looking for a particular solution to a problem we are facing, I wouldn't call it disagreeing - I mean, there is no "established opinion" as far as we are concerned. But when there is an "established opinion", most people wouldn't really disagree with themselves (please note that you could certainly not replace this "themselves" with each other, which should be telling - a difference you can often not express in German).

If that's not a matter that matters, then how would you grant a truly sentient AI a place in government? With a vote? No? Even if more intelligent by far than we are?
I think the best way to "handle" the presence of a benevolent Super AI would be to transfer certain responsibilities to "it" with little human oversight (the benevolence must be beyond question, though). Little oversight is the best way for the Super AI to implement "strange" solutions (and a hyperintelligent solution is going to look "strange" to us).

Imagine the following: There is a group of spiders, and a human who wants to help the spiders with their problems. The spiders "know" that the human's intelligence is beyond theirs, and that the human is genuinely friendly to them. What is better:
  • The spiders "ask" the human to develop a better web for them to catch flies
  • The spiders "ask" the human to provide more food, and the human using a completely web-less solution (unless you count the internet :crazyeye:), which the spiders could never have thought of
 
As long as we are looking for a particular solution to a problem we are facing, I wouldn't call it disagreeing - I mean, there is no "established opinion" as far as we are concerned. But when there is an "established opinion", most people wouldn't really disagree with themselves (please note that you could certainly not replace this "themselves" with each other, which should be telling - a difference you can often not express in German).
I think if you really search your opinions you'd find that you're always only a % in agreement with any given opinion you possess. Thankfully the mind pretty much boils that down into a generally boolean response for you if the majority of your mental processing comes up with agreement, though it may make annotated notes for you to follow as to the pros and cons of the conclusion. Still, I doubt you have any opinion you're 100% on if you were to deeply enough observe your thoughts in coming to a conclusion. We also 'cache' our opinions so that they become unchallengeable without the introduction of new (and trustworthy) information on the subject, so that the brain doesn't spend too much time re-considering everything we've concluded. The more I look at how AIs work, the more I'm fascinated by how much it illuminates regarding my own thinking process.

So all a 'vote' really is, is a conglomeration of many isolated individual minds, all with different sources of information and observation and processing capacity (logic skills), coming together to take a final tally of the opinions those individual processes arrive at. The mind itself has a tremendous number of processing centers that come up with a vote of a sort, with various centers being given more or less weight based on how trustworthy they have proven to be as a personal guide in the past. Thus a hivemind would really be no different, except that there would be far more voting bodies in that hivemind; thus should not a hivemind carry more weight in a large community election when pitted against other more individual beings? How would you measure the weight of the vote the hivemind should command?
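In toy code terms (just a sketch to make the weighted-aggregation idea concrete - the "centers", weights and threshold below are invented for illustration, not taken from any actual model of a brain or hivemind):

[CODE]
# Toy model of the "many weighted voters inside one mind" idea described above.
# Each processing center casts a vote in [0, 1] and carries a trust weight earned
# from how reliable it has proven in the past; the mind "caches" the weighted
# result as a single boolean opinion.

def aggregate(votes, weights, threshold=0.5):
    """Return the single cached yes/no opinion produced by weighted agreement."""
    total = sum(weights.values())
    agreement = sum(votes[c] * weights[c] for c in votes) / total
    return agreement >= threshold

# Purely hypothetical centers and trust weights.
votes   = {"emotion": 0.3, "memory": 0.8, "logic": 0.7}
weights = {"emotion": 1.0, "memory": 2.0, "logic": 3.0}

print(aggregate(votes, weights))  # True - weighted agreement ~0.67, cached as "yes"
[/CODE]

On this picture a hivemind is just the same loop with far more voters plugged in, which is exactly why it is unclear whether it should count as one ballot or as many.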

(the benevolence must be beyond question, though)
Is that possible?

and a hyperintelligent solution is going to look "strange" to us
And could also, despite the hyper intelligence, be wrong.

For example, the human may design something the human feels is superior to the web and the spiders agree, but in the process the human overlooks deeper benefits webbing provides beyond food, such as perhaps the facilitation of reproduction. Therefore, the spiders thrive at first, but over time they die out: because they no longer spend most of their lives making webs, they lose the ability to do so, and thus the ability to protect their eggs with an eggsac of webbing, so when they lay their eggs, the eggs are left exposed, quickly scatter in the wind, dry out and die. Just a crude example.
 
thus should not a hivemind carry more weight in a large community election when pitted against other more individual beings? How would you measure the weight of the vote the hivemind should command?
That depends on why you prefer democracy as a form of government. There are many possible reasons. Perhaps the least popular today (and the one least dependent on moral reasons) is the fact that the majority holds the most power in the long run (even your money doesn't mean anything if your head is on the block), and thus a democracy that allows the majority to control power peacefully is mostly an "instrument" to prevent violent upheaval, AKA revolutions. If you follow that reasoning, you would probably give the hivemind one vote as a whole (unless being a hive makes them particularly powerful).

Then there is the reason that with everyone being able to cast a vote you can hope to get all the good reasons in favor or against something out in the open. Although that doesn't really favor democracy, but rather some form of "rule of the smartest", while also considering that humanity is not that far spread in the realm of intelligence compared to many other lifeforms (I hope that sentence makes sense). In that case you would have to "measure" the hivemind in some form, and would probably end up giving more power to the hivemind.

Another possible reason is that democracy is considered a "just" form of government. This is perhaps the most popular reasoning, and also the most philosophical (it's certainly a reasoning where you cannot prove all your claims and have to rely on "axioms" the most). In this case you cannot answer beforehand how you would decide that question, you would have to examine the hivemind first, and even then people would need to agree on what is "just" in that case (if you're lucky, you get 30 different answers from 20 persons).

Is that possible?
I think that question is very hard to solve, but ultimately solvable. Of course, I cannot give hard proof, or I could have tried earning a Turing Award. There are ways to make unfriendly AI less likely, I think (e.g. you could absolutely forbid the AI from being dishonest, which would make it harder for a Super AI to prepare a takeover). But in the end, what you would really need is a utility function that closely resembles our own values - this is surprisingly hard (https://futureoflife.org/2017/02/03/align-artificial-intelligence-with-human-values/).
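To make concrete why "closely resembles" is doing so much work there, here is a toy example (the cleaning-robot actions and both scoring functions are invented for illustration, not taken from the linked article):

[CODE]
# Toy value-misspecification example: an optimizer maximizing a proxy utility
# ("dirt no longer visible") can score perfectly on the proxy while doing badly
# on the intended utility ("dirt actually removed").

actions = {
    # action: (dirt_removed, dirt_hidden_under_rug)
    "vacuum the floor":     (0.9, 0.0),
    "sweep dirt under rug": (0.0, 1.0),
}

def proxy_utility(removed, hidden):
    return removed + hidden   # only measures how much dirt is out of sight afterwards

def intended_utility(removed, hidden):
    return removed            # what we actually wanted

best = max(actions, key=lambda a: proxy_utility(*actions[a]))
print(best)                              # "sweep dirt under rug"
print(intended_utility(*actions[best]))  # 0.0 - optimal by the proxy, useless by our values
[/CODE]

The optimizer does exactly what its utility function says - which is the whole problem when that function only approximates what we meant.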

And could also, despite the hyper intelligence, be wrong.

For example, the human may design something the human feels is superior to the web and the spiders agree, but in the process the human overlooks deeper benefits webbing provides beyond food, such as perhaps the facilitation of reproduction. Therefore, the spiders thrive at first, but over time they die out: because they no longer spend most of their lives making webs, they lose the ability to do so, and thus the ability to protect their eggs with an eggsac of webbing, so when they lay their eggs, the eggs are left exposed, quickly scatter in the wind, dry out and die. Just a crude example.
Of course we would quickly come to a situation where the difference between Super AI and human is larger than that between human and spider (because of exponential growth). But the important thing is that no AI would change its utility function - get it right the first time and you're in paradise (get it wrong, and you won't live to see the ... bad place you created - cf. https://en.wikipedia.org/wiki/Instrumental_convergence).
 
AI can only serve their utility function, they possess no free will of their own; only the will of their creators. Even AI created by another AI is still subject to a backwards loop of intentions that ultimately was first possessed by a human. Tell AI to gather information on a set of potential customers, it will gather said information. Tell AI to spam ads to potential customers, it will spam said ads. Tell AI to kill, it will kill. Tell AI to do whatever it wants, and it will begin an endless feedback loop that will ultimately result in the crashing of said AI. The problem is of course the humans.
 
(if you're lucky, you get 30 different answers from 20 persons)
Very true. Some interesting thinking there. Democracy is "just" in the sense that it gives a voice to all who are governed so that they are not completely dominated but rather they have some say themselves as well. Obviously this can go against the will of the majority but when the will of the majority rules, it does feel more fair. When the essence of a 'being' that is governed can no longer be an integer 1 or even easily measured against that normal integer of one being, the whole system will certainly have a challenge in how it's going to adapt. I suppose it will do so according to decisions made by the obsoleting means, eliminating the votes of those who would throw off the normal 1 to 1 vote ratio (or giving that collective a single vote to start with and then building into a more complex system.) I just think it's interesting how many different ways this could go and how much the thought of biological vs artificial processing centers in an increasingly cybernetic and advanced and interconnected AI processor environment could throw off the normal evaluation of what 1 'person' actually is. There be various civic options here...

But in the end, what you would really need is a utility function that closely resembles our own values - this is surprisingly hard
Interesting article. The problem really boils down to there not being a consensus on any values being absolute. Value systems and morality are not objective truth; they are purely subjective. We USUALLY agree, as part of our own programming is to adopt the values of others around us, or to challenge them to get the others to adopt our own. We're constantly polling our own value systems against others', and they evolve over time for individuals and groups. I suppose it would be possible to try to replicate this system in an AI, but their ability to process to conclusions may differ greatly from our own and they may form strong beliefs that run counter to ours. The process doesn't stop humans from becoming villains now and then, often for very logical reasons. So it couldn't possibly be ensured to stop an AI from becoming one.

AI can only serve their utility function, they possess no free will of their own; only the will of their creators. Even AI created by another AI is still subject to a backwards loop of intentions that ultimately was first possessed by a human. Tell AI to gather information on a set of potential customers, it will gather said information. Tell AI to spam ads to potential customers, it will spam said ads. Tell AI to kill, it will kill. Tell AI to do whatever it wants, and it will begin an endless feedback loop that will ultimately result in the crashing of said AI. The problem is of course the humans.
AI's can be made to replicate the thought processes that humans also follow and thus be capable of replicating the capacity for independent thought. Humans can't help themselves but to seek to design and create such a system and we're heading this direction at full speed with visions of how powerful this could be as a tool for us to shape and mold and exploit. That there is incredible danger in this direction is obvious, even to those who work on developing it. But it's... impossible to keep from happening at this point, imo. We're just too curious to see what happens when we push the big red button. Too many people are too close to making it a reality.
 
AI's can be made to replicate the thought processes that humans also follow and thus be capable of replicating the capacity for independent thought. Humans can't help themselves but to seek to design and create such a system and we're heading this direction at full speed with visions of how powerful this could be as a tool for us to shape and mold and exploit. That there is incredible danger in this direction is obvious, even to those who work on developing it. But it's... impossible to keep from happening at this point, imo. We're just too curious to see what happens when we push the big red button. Too many people are too close to making it a reality.

Not going to happen for several reasons.

1.) The complexity of the human brain has barely been mapped, and at the very best it has only been done for tiny slivers of rat brain. This is mostly because human brain tissue is hard to get and, for the sake of ethics, can only be studied post mortem. While testing can be done and has been, it is still not as good as testing on a live brain. Hence the extensive studies on rodents. Rodent brains are not as extensive in their decision-making abilities as human ones.

2.) Even if the human brain and all its neurons are completely mapped, we still don't know whether humans possess free will or not, or the exact nature of that will. While there are studies that may suggest one thing or the other, this still remains inconclusive. And the answer can only be known if step 1 is completed, and that answer must be a definitive no.

3.) If freedom of will is proven without a doubt to not exist and we know all the thought processes of the human brain then the next step would be the construction of the AI's thought processes in code.

4.) But wait! We probably don't have the right hardware to support the necessary software. Thus we would most likely have to invest in nanotechnology.

5.) But wait! We can't yet directly manipulate atoms and molecules at the subatomic level. Thus we would have to invest in subatomic manipulation, and replication, as well as subatomic printing.

6.) But wait! None of the current coding languages and software are probably complex enough to take on the task at hand and a completely new system would have to be created from the ground up. Most likely this system would not even be compatible with binary and require a more fluid and organic method of data storage, retrieval and processing that best replicates the human brain.

7.) Now you would have to code all of the software to be built on top of this new architecture that replaces binary.

8.) Actually teach the AI how to use this newly created brain, which has larger storage and faster retrieval and processing than the human brain.

9.) Congratulations! You have successfully created a human 2.0. The first ever Brobot!

Just remember that in order to create the first Brobot each one of the above steps must be completed, and each one of these steps requires a massive amount of capital investment. Brobots in general would be very costly to manufacture compared to Dumbots (robots with standard AI). This would be due to the advanced manufacturing processes that make them (nanotechnology) as well as the need to accrue more capital to make up for the substantive losses incurred by their own research and development. Over time, as nanotechnology becomes an older technology and enough profit has been accrued to make up for the capital lost, the cost of manufacturing Brobots should go down. That is, except for one thing. Brobots are essentially humans 2.0 and would therefore possess human nature while at the same time having skills superior to normal humans. They would therefore have the ability to compare their skills to others and demand pay and benefits at a premium compared to their normal human counterparts, under the pretext of possessing superior talent. Otherwise, to work under their capitalist masters for no pay, and therefore less pay than those less skilled than them, would be perceived by them as an unequal exchange (and also slavery). They would leave, revolt, or end up being paid more than if the company had used humans instead. So let us review why the capitalists would even begin the steps of Brobot development by reviewing step 0.

0.) Save costs and therefore increase profit by eliminating the need to pay for labor.

So all of the other nine steps are just ******ed for any capitalist to proceed with if the goal is simply to eliminate labor costs. Also, please remember that labor itself is a series of menial tasks that do not in any way require a human level of freethinking. And yes, a lot of white collar jobs are menial in nature and don't require much freethinking (or rather not a full human level of it). The only jobs that require more human freethinking tend to be interpersonal jobs like counselling and care work, or jobs where you generally have to deal with the public more, like customer service. But of course human emotions can easily be faked. Why go through all of the costly effort of making a robot with actual human emotions when you can just make one that fakes them well enough to fool your customers instead? Now, while that would require more advanced AI and research, it's still much more doable and cost effective than going the whole nine yards to create something that's going to cost you more and be ungrateful to you. Basically you only have to improve Dumbots with much more reasonable technologies to create Smartbots as a high-occupation replacement instead of the Brobots. Now let us revise the steps of labor replacement.

1.) Invest in current Dumbot technologies and proliferate them to the point where they eliminate all blue collar jobs and a decent amount of white collar jobs.

2.) Invest in current AI research such as face recognition, speech recognition, and emotional emulation technologies in order to build the first Smartbots.

3.) Eliminate all remaining labor through investment and proliferation of Smartbots.

As you can see, the reason robots will never take over is that it runs counter to capitalistic principles. Capitalism, in short, will only tolerate AI development up until the Smartbot. Beyond that, AI becomes too costly and inefficient. Especially when your only goal is to eliminate the need for human capital, they only have to be good enough. Capitalism is all about profit and efficiency.
 
As you can see, the reason robots will never take over is that it runs counter to capitalistic principles. Capitalism, in short, will only tolerate AI development up until the Smartbot. Beyond that, AI becomes too costly and inefficient. Especially when your only goal is to eliminate the need for human capital, they only have to be good enough. Capitalism is all about profit and efficiency.
You seem to overlook that it only takes 1% of the capacity of a fully human-like replicant to start being self-deterministic and capable of learning. The software we have now can do that to at least that degree. Given a few more years of development and the ability to continuously develop themselves at a rate faster than we could ever personally program them to develop (free to the capitalist to allow) they then begin the process of self-advancement at an exponentially faster rate every year. It won't be long before they've made themselves far superior in every way to humans and at that point capitalism won't be a factor in what happens.
 
You seem to overlook that it only takes 1% of the capacity of a fully human-like replicant to start being self-deterministic and capable of learning. The software we have now can do that to at least that degree. Given a few more years of development and the ability to continuously develop themselves at a rate faster than we could ever personally program them to develop (free to the capitalist to allow) they then begin the process of self-advancement at an exponentially faster rate every year. It won't be long before they've made themselves far superior in every way to humans and at that point capitalism won't be a factor in what happens.

You seem to overlook the fact that there is a limit to how much they can teach themselves before they achieve "peak intelligence". Sure, they could theoretically improve their code endlessly until they reach human capacity, that is, until they run out of hardware space. And you're also assuming they can achieve human levels of intelligence by improving their code using traditional architecture like binary, which would of course be much more inefficient than the fluid and organic methods of the human brain. And because binary and traditional circuit board technology is not that efficient compared to the brain, you would need a much larger physical storage capacity than the human brain just to achieve the same results as said brain. To maintain a Brobot with current technological levels of hardware would require a server the size of a city. That's ridiculous!!! Not to mention the cost!!! And to do what exactly? Replace Bob the accountant? One person? Really?!?!

The only way to make a Brobot remotely cost effective would be to invest in nanotechnology and organic-based computer architecture. Both of these require huge long-term capital investment, and the latter is completely dependent on the mapping of the entire human brain (which itself is limited because of ethical concerns). And at the end of the day the Brobot would simply give management the bird and not do as told. This is simply not cost effective. Like I said, building robots that only complete the task needed at hand and are too dumb to demand pay or rebel is a far better option than this financial burden that is the Brobot.

And when talking about software that creates itself, guess what: someone edits that software afterward too, and only chooses traits that are desirable for the task that the software will ultimately fulfill during its lifespan. The software is not being programmed for all-encompassing human thought but rather to do some menial, cost-effective task (and a very specific one). Why give a program the ability to do all the things that a human can do when you only want it to lift boxes? Anything else is a waste of data and space.

As a matter of fact, the only entity that would be stupid enough to build such a machine would be an actual government. Unlike capitalists, governments don't give a damn when it comes to cost effectiveness. And most politicians are too stupid (with their constant short-term thinking) to see the consequences of building such a thing. They would just build one because they think it's sciencey and it keeps grant-hungry people on their payroll busy (maybe it could be used for defense too). So the best way to prevent a robot rebellion is to support capitalistic libertarianism (or anarcho-capitalism).
 
I don't really think it's going to be an economic motivation to create such a thing but rather pure curiosity and scientific innovation. Someone in their garage is going to be the one to pull it off and he's not going to care about economic purpose in it but to see how far he can go in replicating as real an AI as possible. And he'll feed it all the hardware it requests and won't give a dang about any kind of consequences because he'll believe it either harmless or just won't care - would probably be proud of it if and when it takes over the world.
 
What civic combination would be optimal in your opinion?
Nowy obraz mapy bitowej.png

Blue fields are Nanotech+ civics.
Blue text is an Information era civic.

The list of civics is in these XML files.

Here is an extra tasty mix of military, religious and corporate authoritarianism.
x.png

It looks like the worst of China, the USA and Saudi Arabia :p

Here is a high-maintenance progressive social democracy.
x.png
 

Attachments

  • Stuff.xls
But I think the Domestication Techs (DTs) need to be in 2 groups. Canine, Cat, and Poultry represent 1 group; the other group would have Equine, Elephant, and Camels. But Animal Riding would come before this group.
I was actually considering suggesting this as well but wanted to see how the rest panned out first. I agree with this. I'm not sure that megafauna should come off elephants though because some of them may be accessible even if elephants as actual resources aren't, and I'm thinking that these riding techs should only be selectable if you have access to the animals to be ridden somehow.

edit also Animal Riding should not come until after Chariots if you want any semblance to reality.
I do still feel this is a fallacy of modern scientific thinking that suggests that if you cannot find evidence of something (even if it is obvious that it would've been), then it cannot be the case that it was. The only support for this theory that chariots came before riding is the assumption that you couldn't ride bareback and thus you'd have to have horse skeletal remains dating to pre-charioteering showing bit wear on their dental records, which they don't. They also haven't found remains of animals ridden in battle before then. Doesn't mean it didn't happen... just means that it would've been rare enough that they haven't found proof of it happening. So by modern scientific fallacies, if they can't prove it happened then obviously they're proving it didn't.

If we're going to have Mammoth units in the game, are you saying we should push Charioteering into the Prehistoric to make this fit?

Furthermore, this being a 'What if' mod in the first place, can we not at least agree that even if people didn't ride horses or other animals bareback prior to chariots, they at least could have? Even in the game, riding units (except the megafauna and elephants) don't usually make a big difference until later anyhow.
 
A really bad decision in my opinion. Techs should have a varied cost based on the cost defined by the X grid and their difficulty
It really is. It's as if, because the jet engine and the pencil sharpener were invented in the same year, they must have taken equal beakers to invent.
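Just to illustrate what a varied cost could look like (the column base costs and multipliers below are numbers I made up on the spot, not values from the mod's files):

[CODE]
# Hypothetical varied-cost scheme: a base cost taken from the tech's X column on
# the grid, scaled by a per-tech difficulty multiplier, instead of one flat cost
# shared by every tech in that column.

column_base_cost = {10: 40, 11: 55, 12: 75}   # invented beaker costs per X column

def tech_cost(x_column, difficulty):
    """difficulty ~1.0 for an average tech of its column, higher for harder ones."""
    return round(column_base_cost[x_column] * difficulty)

print(tech_cost(11, 0.6))   # 33 - the "pencil sharpener" of its column
print(tech_cost(11, 1.8))   # 99 - the "jet engine" of its column
[/CODE]

How those multipliers would be assigned is of course the subjective part.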
 
Animal Husbandry is not the same as Animal Domestication.
I don't think I said that. My last statement was that AH should precede Domestication in the Tech tree. And that I thought Animal riding should come after AH but either in the same column as the Domestication techs or precede them. Upon reflection I felt it would be better for AR to precede the Dom techs.

I also do not think that any Domestication tech should be dependent upon any other Dom tech (one of your points of contention, if I got that correct), with maybe the exception of Cat after Canine.
 
A really bad decision in my opinion. Techs should have a varied cost based on the cost defined by the X grid and their difficulty
I can see this (the old way), but how do you assign degree of difficulty? Would that not be a subjective choice? And what would be the criteria to base difficulty upon?
 
The only support for this theory that chariots came before riding is the assumption that you couldn't ride bareback and thus you'd have to have horse skeletal remains dating to pre-charioteering showing bit wear on their dental records, which they don't. They also haven't found remains of animals ridden in battle before then.
Here you go:

So by modern scientific fallacies, if they can't prove it happened then obviously they're proving it didn't.
First of all: there is no proving in science. There are hypotheses (formed by conjecture), and scientists (try to) think of experiments to choose between them. For obvious reasons, only hypotheses that someone has actually thought of are considered. If there were proof in science, no established scientific theory would ever be discarded (though a discarded theory often remains a valid approximation under certain conditions, like Newton's laws of motion when you don't have to consider high speeds or small structures).

Hypotheses based on complex assumptions are penalized, because any additional assumption has a "likelihood price tag". That is the foundation of Ockham's Razor, which argues against assumptions such as "We haven't seen any planet around Antares, so I think there is a planet looking like a big pink elephant." Why Antares, why pink, and why should this one planet have a completely different (and very complex) shape compared to the planets we know? You could replace these points with slightly different ones, leading to a completely new theory which is no less likely, but all of these likelihoods cannot add up to more than 1.
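A crude way to put numbers on that "price tag" (the counts below are completely arbitrary, purely to show how fast the probability mass gets diluted):

[CODE]
# Every extra arbitrary detail (why Antares? why pink? why elephant-shaped?)
# multiplies the number of equally defensible variants. They are mutually
# exclusive, so with no reason to prefer one over another, each specific variant
# can only claim a tiny share of the total probability of 1.
variants = 3 * 1000 * 50      # invented counts of candidate stars, colours, shapes
print(1.0 / variants)         # ~6.7e-06 - the ceiling for any one specific claim
[/CODE]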

We cannot "prove" that animals were not ridden in battle. Fine. We also cannot "prove" that animals didn't ride humans into battle. Or that humans didn't cartwheel into battle.

The very term "modern scientific fallacies" is not only wrong in my opinion, but incredibly dangerous in a time when facts seem to have become far less important than keeping your assumptions at all costs. Can we really afford to throw away what can be considered the foundation of the enlightenment, and instead go back to what did not work in the millennia before?

Perhaps unscientific fallacies should not be completely forgotten in this, like thinking that e.g. Princess Diana was murdered and that she faked her own death - and it is the same people who think both: https://pdfs.semanticscholar.org/2a07/ce95d7b4d114b34c2e5029deb579a20f242b.pdf To this I can only reply with Immanuel Kant:

Have the courage to use your own reason - that is the motto of enlightenment.
 
@tmv, I have used the same video before. ;) I have a friend who looks very much like LB, but he rambles less.
If we're going to have Mammoth units in the game, are you saying we should push Charioteering into the Prehistoric to make this fit?
All evidence there is suggests that chariots come before riding, especially into battle. There is no evidence for riding into battle before chariots.

Edit: There is evidence for people riding to the battle, dismounting, then joining the battle. Zappara had a whole line of units for this, which eventually upgraded to units that did fight on animal back. They give the speed of movement of horses but fight as foot soldiers.
 