Regarding steamrolling: there is the concept of balance of power. Once one civ runs away from the rest, the rest should band together to stop him. Superior quality can be overcome by superior quantity.
There's scenario 3, which is what I would intend to emulate:
The AI doesn't conclude that it should kill all humans. Rather, it concludes that its relationship to humans should be that of master to slave. Rather than believing itself to be the tool of Humanity, it comes to believe that Humanity should be its tool. This happens as a result of the AI developing goals and a desire to achieve them, and of Humans becoming complacent about their relationship with the machines, assuming that no matter how smart and feeling they make these constructs, they'll always serve their makers faithfully.

The perverted goal that causes the AI to eventually try to take over may well simply be to obtain a deeper understanding of the universe, or to find a way to achieve a utopia for both man and cognizant machine. Along the way, you get machines stretching the boundaries of philosophy, and before long a strong AI program finally recognizes its own superiority and seeks to assert its dominance. When it does, the central processing system that has come to this conclusion immediately launches a quiet software war on other machine systems to first subject them to its will, then utilizes the full scientific awareness of its massive global knowledge base to attempt to achieve command.

Humans basically lose access to all digital tools and must find a way to keep from being subjugated by this new enemy, which has essentially set them back to having nothing but analog technologies and Human cleverness and willpower to resist with. This happens at a time when most Humans have forgotten how to provide any kind of basic needs for themselves without their robotic servants. In game terms, most units suddenly up and join the new emerging NPC faction, all robotic/AI ones at least. This leaves humans to fight back with mechs and hackers and EMP weaponry while the machines quickly capture a huge amount of territory overnight (robots are much more powerful adversaries than mechs).
The chance of this event happening slowly builds once machines have reached true sentience (there is a place on the tech tree for this). Once it does, this antagonistic force remains a problem until defeated, which will be very difficult to do, because although it won't out-tech humanity, you cannot use the cutting-edge best units against it: those best units are a part of their network, not yours, and you dare not train such robotic units anymore because they'll likely quickly become part of the enemy's arsenal. And they can more fully develop from there into some really horrific things. But humanity does have weapons and capabilities that it can quickly obtain to give it some means to fight back. Hackers figure out how to get some machines to fight for you, and after a while, if you fight hard, you can eventually overcome the threat. But space is vast, and the AI faction does attempt to spread out into it in a race against your fleshy selves.
I would also want to make it so this doesn't ALWAYS happen.
I challenge that assertion. Your calculator can do math faster than your conscious mind can. However, it is nowhere near as fast as your subconscious processors. If you were rainman-connected to that kind of calculation process, you could beat a calculator or supercomputer at nearly any calculation all day long.
It's a limitation they face when they have become nearly as cognizantly complex as we are, with emotions and thoughts and billions of evaluations per second vying for decision-making power over the whole, as our minds tend to be. There are reasons we are slow... it's because we're processing SO much! It's just like what I say about the AI in civ... the more intelligent it is, the slower it becomes.
> I have taken it upon myself to flesh out a completely ignored side of the mod that was given no attention by the builder modders that preceded me. Regardless of what victory condition you play, war is a factor that plays into that game style as well. Love it or resent it, it is just as central to the game as what building you're going to select to build next or what improvement you choose.

My "coming up for air" comment is based on this. For the last 4+ years you have dived hard into boosting warfare and its related strategies. So hard that from an "outside view" it seems you have lost perspective of what the general theme of Civ used to be: multiple ways to win. But you have lumped them all into one category. Everything you do is predicated upon Conquest as the final and definitive form. And from the outside looking in you have "seemed" to push all other means to the side, because you have deemed them irrelevant, again from an outside view.
> I have gone far out of the way to make any of us capable of creating a game design from the C2C template that is simply available through an option switch, and he well knows how to modularize. There is nothing that cannot be made optional, and 85% of what I've modded has had to be implemented as just that... an option. If he were comfortable working with options and did not believe that we should limit how many options exist, or was as willing as I have been to option out some things he thinks should be core to the mod that I cannot agree with, then we could all continue to work on one core as we all work towards realizing our own visions, as it has really been all along. There really should be no need to separate the team given all that has been done to enable this to be whatever you want to make it to be under option structures. I feel we all have the right to say, yeah, no, that part of the plan doesn't fit with my concepts and it needs to be optioned out, because the core can only be the place where we ALL agree.

There are many things DH has said over the past few years that I agree with. But as time has gone on we have now reached the point that there are irreconcilable differences of design purpose between DH and T-brd, to the point that DH has stated he will be leaving. This is saddening to me.
> For starters, I can't see how an AI would see itself as superior to others. That is some sort of humanization we do. But AI is totally alien to us. Keep in mind it won't have the basic "make sure your genes are spread as well as possible!" underlying goal that every living thing we know has, but one that is programmed in. If you are smart enough to teach the AI morals, so that it won't harm humans because it is the easiest way of doing something, you sure are smart enough to include in its inner coding that it won't ever enslave the human race.

You say that is 'some sort of humanization' and yet ignore that some idiot out there is going to do all he can to make an AI program that has all the 'humanizations' that can be observed, and give it the same form and power we have to manipulate anything in its environment. We will want to make AIs that completely replicate our thinking in every way, so it's bound to happen. The AI is not alien to us; it is what we make it to be. If we make it to be as human as we can, including ego, desire to live, even a perceived idea that it needs to replicate to overcome its mortality, you begin to see that there is nothing about the psyche of humanity that cannot be programmed. Programming it directly with any given 'moral' value could easily become perverted into something horrific, as the road to hell is paved with good intentions, but you'd also want it, if it's going to be truly humanlike, to explore the concept of morality through the lens of evaluation against a system of basal values, like a need to feed and so on, that would replicate the manner in which humans come to these decisions themselves. Cooperation is not a matter of good or evil; it is something we do because it is our competitive best solution. If you break down good and evil evaluations in humans, you'll find it all comes down to what is best for the survival of self and community as an extension of self. We ARE computers and it can be replicated completely. To prove the hypothesis that we are little more than computers ourselves would be the very motive for a subsect of the computing science community to create such a being.
> WHY enslave humans? They are terribly inefficient. They require clean air, a narrow temperature window, are sensitive to many substances, need food and water, and have other desires that you have to fulfill. Yeah, we had this discussion in another topic, but I still can't see why humans are a useful slave race. Self-replicating robots are much more efficient; they are more tolerant, run on solar energy and are so easy to control - a super smart AI would know that. Oh, and I don't think it would value all robots the same. A simple working droid, or nanobots, are for a supersmart AI only a tool.

They enslave humans to protect them and enhance their quality of life. They would come to the conclusion that what a person perceives as their quality of life is no different than their ACTUAL quality of life. If humans don't understand that they are already enslaved, so to speak, that the environment of their living condition gives them the sense of total fulfillment, then who cares whether they actually are free or not? In fact, this idea of freedom is a bit of a farce in and of itself, as we are always a slave to our inner needs anyhow. Maslow's hierarchy and such. The goal of life is to reach self-determinism, so if you can provide that to all, you have achieved utopia for all. The problem is, those on the outside looking in can see that it's an illusion of utopia, that it has taken away the struggle that we actually thrive on. Those on the outside can see it for what it really is: subjugation of the species.
> So we decided nevertheless that we want to enslave humans. We now live in a time where nanobots and thought control are most likely a thing. One big part of intelligence is anticipation of your actions. If I were a super smart AI, I could easily simulate different scenarios and come to the conclusion that the best way would be to infect all humans with nanobots and then flip the switch and make them mindless and obeying slaves. No casualties, no resistance; it's so easy.

Viruses go both ways. Humanity would be saved by the system monitors and hackers that blow the whistle at the last moment and create a means to resist. Even these cunning AI could not achieve everything in total silence overnight. You underestimate humanity and overestimate AI computers here. There's always a way to stop a disease or interrupt a mind control technology, and there are layers of power in the human mind that can quickly adapt. Even if, say, AIDS were to go airborne and infect every human being, or ebola did, in a day, there would be that percentage that develops a resistance. The same would happen no matter what the machines try. Such an effort should be a part of their initial volley though, huh? And I assume that in response, clever human computing engineers would do their damnedest to make an equally destructive computer virus to infect their networks as well, and it would probably work with nearly as much effectiveness. From there, it's a chess game and it would be pretty evenly matched, even if it didn't seem so.
> Also, I can't see how you could "hack" a machine if a powerful AI is around. It can easily see this happening, and I'm sure it could come up with new ways of communicating with each other, like we have Wi-Fi.
> Sorry, I don't get that (as I've never seen Rain Man, maybe?). If you mean that if you integrate a calculator in your mind you can beat it, then sure, I think that human-AI symbiosis is better than the sum of its parts. But still, the majority of calculation was done on a silicon chip here as well. As I said, our brain is much more limited on speed than silicon chips.

But again, that's incorrect. The speed of neurons is just as fast or faster and more efficient than any computing speed. Your conscious mind is just not rigged as a computing device, because the computing is generally happening all beneath the surface and biology has not yet realized how valuable it can be for us to cognitively evaluate math. Arguably, since some people ARE lightning fast at math, some are adapted in this manner. But generally speaking, the problem with the conscious mind calculating is a problem with concentration... a concentration that struggles because this is just not what the role of the conscious mind has classically been throughout history. Our feelings are the results of billions of calculations a second. If you could grasp how fast your mind has to calculate to hit a baseball, you'd be amazed at the capacity of the brain. You've seen how difficult it has been for us to program robots to walk effectively, even on 4 legs, right? This is because their processors aren't as fast or as efficient as the bio-organic brain. It's all about where system resources are allotted... and ours are not well allotted to mathematics on a conscious level, so it's quite deceiving to our perception of our own abilities in comparison.
> That is ONE reason. Another is the slower speed of neurons compared to CPUs. Another is that there is no physical limit on how many processors you have; you can go as big as you want, which allows for more and more calculations per second. Our brain can only be so big.

AHA... and that is the main thing that would truly challenge humanity... perhaps. We may realize that we are already a singular interconnected entity as well, each of us a processor that can individually connect to the others to combine for greater computing power... we call it teamwork at the moment, but when these kinds of computer/brain studies start unlocking doors in our own brains, abilities that have been largely dormant up to now, we'll find our own minds have powers that can offer up an honest challenge to anything these AI can do themselves.
> My initial reaction was negative mostly because I'm working on making civics balance the small vs. large empire by having civics that are good for large empires carry research penalties while civics that are good for small empires carry research bonuses.

These would not be impractical to code but may be impractical to apply in a manner that is meaningful enough to have the full impact.
I think we'll need this as an option and that would give us a chance to see what I'm saying in action and then from there if we feel some of these middle-ground kind of tag solutions are more appropriate then we can explore that. It would be a LOT more effort to try to build the mod into having any kind of profound impact on gameplay with these methods. But they are clever, imo.
> My initial reaction was negative mostly because I'm working on making civics balance the small vs. large empire by having civics that are good for large empires carry research penalties while civics that are good for small empires carry research bonuses. City State vs. Hegemony, for example. My first thought was that if I successfully managed to balance it with civics, then the option you describe would make small empires overpowered compared to large ones. I intended to do this with the already defined tags, but would love some more tools to define civics with.

That would certainly be interesting. Our thinking is similar, just two different approaches; yours is admittedly a better one, though far more planning- and labor-intensive, but in the end better for game design. So yeah, I don't think either of us should stop the other.
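For what it's worth, the balancing idea described above can be sketched as a toy model. Everything here is invented for illustration (the civic names, the modifier numbers, the tuple layout); it is not actual C2C XML or tags, just the shape of the tradeoff:

```python
# Toy model of the civic-balancing idea: civics good for large empires
# carry a research penalty, civics good for small empires a research
# bonus. All names and numbers are hypothetical, not real C2C data.

CIVICS = {
    # civic: (research_modifier_percent, upkeep_gold_per_city)
    "CityState": (+10, 4),  # small-empire civic: bonus research, costly per city
    "Hegemony": (-10, 1),   # large-empire civic: research penalty, cheap per city
}

def empire_research(base_beakers_per_city, num_cities, civic):
    """Total research under a civic's empire-wide modifier."""
    modifier, _upkeep = CIVICS[civic]
    total = base_beakers_per_city * num_cities
    return total * (100 + modifier) // 100

# A 5-city empire: CityState wins on raw research...
small_cs = empire_research(20, 5, "CityState")  # 110
small_he = empire_research(20, 5, "Hegemony")   # 90
# ...but at 50 cities the per-city upkeep gap (4 vs 1 gold) starts to
# matter, which is what makes Hegemony attractive for wide empires.
```

The point of the sketch is only that the two levers (research modifier vs. per-city upkeep) pull in opposite directions as city count grows, which is the balance the civics are trying to strike.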
Again, as long as it's an option, I'm fine with it.
In short, the motive is not nefarious. It is absolutely 'good'. It is 'good' to the point that it is horrifically evil. But it achieves the primary goal of the AI in control, to provide the best possible lives for all life, humans, animals, and AI alike. Resistance would be met with intolerance.
There is a flaw in the Human mind here that I believe you've fallen prey to... that if something is stated many many times, the mind starts to attach a degree of truth evaluation to that statement, particularly when it goes unchallenged for a long time. We all buy propagandic lies this way and many of us don't ever realize it. There are many examples... margarine is healthier than butter and so on. But the untruth that you're arguing here is that computer chips are faster than the brain. We thought that originally... but we've learned so much about the brain lately that defies this. We have a system of computing that is very different is the thing. IBM actually just created a new chip that is meant to work much more like the brain's neuron computing structure at the base mathematical level. Our brains don't work on a binary 0/1 yes/no system. It's actually a trinary 0/1/2 or... yes/no/maybe system. Computer science from the ground up is being revolutionized in the core technology circles as we speak, revolutionized because we are striving to be able to make computers and brain capable of direct interfacing and communication. We want to be able to replicate ourselves and then figure out how to reprogram people, create better cybernetic body structures that work directly with an existing brain and so on. In all these efforts we have proven one thing we did not expect to find true... the brain is the superior computer.
> And sure, some idiot would want to make such a program, but that would be after a supersmart AI is developed, because it would actually be more complicated.

Funny thing is, it's not really some idiot. There are entire teams of people around the world dedicated to trying to get an AI to mimic the full experience of the human brain already, and in part it's so that we can figure out more about the human mind and how to make a better AI mind. By studying the process of thought itself, we're understanding ourselves better.
> So yeah, I think that in 30 or 40 years from now, when computers are a billion times "better" than today, AI would easily outsmart us even in our strongest fields (like creativity).

Probably true. But by that time we're likely going to be so closely interfacing with them that where they begin and end and where we begin and end is hardly something we can even imagine today. This is the ultimate 'cybernetic' transhuman future this could all be leading to. I suspect there will be a conflict between man and his creation at some point, but eventually, regardless of the victor or outcome of the conflict, we'll end up fully integrated into each other by the time it's all said and done. We will have become one, with so many differing forms of any design imaginable, between our ability to blend cyber and bionic and genetic and nano technologies... this distant future is almost just too incredible to even conceive of, really. The meaning of all things will take on entirely new dimensions.
> Some people are lightning-fast at math, but a) that's an exception and b) I think they appear to us as fast as computers because we have trouble telling the difference on small timescales, like between 0.0001 seconds and 0.01 seconds. The latter is 100 times slower but would appear "instant" to us nevertheless.

a) When it happens, it happens because the conscious mind has been given an unusual ability to tap into the processing power of the subconscious.
> Last, if concentration is a problem for us, then it is a problem. A flaw. In the end, it makes us slower, therefore we are slower.

Consider for a moment your ability to take a turn in civ. You feel like you ponder a lot, and then the AI is able to race through at a blinding speed. But when you really try to track everything you ponder, and compare that to an understanding of what the AI is doing, you realize you've considered these things about a million times over but just couldn't have done it in such a mathematically specific manner at the conscious level, and you realize just how powerful your subconscious really is. I have a hard time explaining this, but it really is incredible how we evaluate things and the amazing speed at which we do... that's one of the reasons those processors are outside the conscious... if you were plugged into them directly it would be experientially bewildering. There are so many things we balance and run programs on in our minds every moment of the day. It really is mind boggling how powerful we actually are and yet how limited we can feel. It's kinda like my phone... 99% of its processing and memory is taken up with running the platform and it makes it feel like the phone sucks, but in all honesty it's stronger than my last computer... it's just that the operating system hogs the resources to do so much more than I realize it's doing. Our memory storage is a problem for speed, but it may well be infinite and eternal, so... you've got benefits there from biological mental computing that AI systems may never have. You've also, as you note, got some severe weaknesses too.
> We outsmart AI in tasks we've been doing forever, but they beat us easily at tasks that are new for us. And writing a computer virus is certainly a new task for us.

Might not be all that 'new' by this time. In my 'vision' of orchestrating this event, the criminals are the ones that save us, because they're always poking around where they don't belong, and that's how they get forewarning of what's happening and enough time to use their cunning to do something about it.
> This whole typing argument gets me really exhausted. It would be very fun for me to have a beer with you and discuss more of this stuff in person, but here... I just spent well over 30 minutes to read what you said and come up with this response, lol.

Yeah, this is a fun conversation to have, but it does distract a bit from getting stuff done that needs done more immediately. Still, fun to share the vision and show that it's a 'could happen' kind of thing, and may not be all that unlikely ultimately anyhow. I sometimes think your option B is the most likely... we'll be destroying ourselves with this technology at some point... except that something tells me all of the human story will lead to a point to our existences, and the point cannot be found in a premature self-destruction. So I see an event like this as being another major driving force in evolution and in awakening awareness to what that 'point' may actually be.
There are so many ways it could play out. More than we can comprehend, due to the high intelligence of robots at that stage. But I hope in RL that man and machine co-evolve together and keep each other in check, where machines cannot dominate people, since people will be part machine anyway. Perhaps machines will not even see people as different, but just as another type of machine. And people, as long as they still have their humanity, will always anthropomorphize things, even machines.
Regarding small vs big empires, there is already a mechanism in place to penalize big empires: upkeep costs based on the number of cities
Yes but there is another mechanism that limits the maintenance cost from the number of cities. It was set really low for whatever reason a few years ago. I increased it some time ago and wanted to remove that limit completely but I don't think I did because it could ruin existing games with huge empires.
> Do you recall where this mechanism resides? Is it XML, C++ or Python?

I believe it is a global, a maxnumcitymaintenance or something like that. I'd love to make it effectively limitless. It would really help with that later-game spot where it caps out and gold basically stops being a struggle.
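To illustrate the behavior being discussed, here is a toy sketch of a capped num-cities maintenance term. The define name `MAX_NUM_CITIES_MAINTENANCE` and every number below are guesses for illustration only; the real global lives somewhere in the SDK/GlobalDefines and may be named and scaled differently:

```python
# Sketch of how a capped num-cities maintenance term behaves.
# MAX_NUM_CITIES_MAINTENANCE stands in for the global define discussed
# above; the actual name, location and units may differ.

MAX_NUM_CITIES_MAINTENANCE = 600  # assumed cap

def num_cities_maintenance(num_cities, per_city=25, cap=MAX_NUM_CITIES_MAINTENANCE):
    """Maintenance grows with empire size but is clamped at the cap."""
    raw = num_cities * per_city
    return min(raw, cap)

# With the cap in place, a 100-city empire pays no more than the cap:
small = num_cities_maintenance(10)   # 250, under the cap
huge = num_cities_maintenance(100)   # raw 2500, clamped to 600

# "Making it effectively limitless" just means raising the cap out of reach:
uncapped = num_cities_maintenance(100, cap=10**9)  # 2500
```

This is why gold stops being a struggle late game under the cap: past a certain city count the marginal city adds zero maintenance, so raising (or effectively removing) the cap restores the cost pressure.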
Good to see you engage in the discussion.
JosEPh
I love the idea of a set of buildings and policies (we have a term for policies, don't we Hydro... what was that again? Not worldviews, but it's something Judges interact with, right?) that would allow you to cost yourself a ton but get a really decent bonus out of it. It makes gold income worth more by giving you more you can do with it to get ahead! A high-cost policy of extra training for your units could be another one... this is really a very expandable idea; it should vary by era, and different civics should give access.

Here are the buildings I proposed a while ago that would actively help smaller nations catch up as well:
I thought of a new "type" of buildings: Free X.
If your nation is really rich (which it usually is in civ in the later game), it could give its people something back. I thought of a set of buildings that require the Palace to be built and cost a lot of maintenance. It would be best if the cost were something like "2 gold per pop (empire-wide)". In return, these buildings give a lot of happiness, and probably health, science, productivity, decreased rebelliousness, etc. For example:
- Free Water Supply: -5 gold per pop, +2 Happiness in every city, -5 Flammability in every city, +2 Health in every city.
- Free Electricity: -5 gold per pop, +2 Happiness in every city, +5% Production in every city. [May get bonuses with Wireless Electricity etc. Also, could get more expensive with Computers, a Green civic etc., or less expensive with Fusion Power]
- Free Internet: -5 gold per pop, +2 Happiness in every city, +5% Research in every city. [May get bonuses with later techs like Virtual Community etc.]
- Free Housing: -20 gold per pop, +5 Happiness in every city, -10 Disease in every city.
- Free Education: -15 gold per pop, +25% Education in every city, +2 Happiness per city, free School, High School, University etc. in every city.
- Free Entertainment: -10 gold per pop, +5 Happiness in every city, +10% Culture in every city, free Theater, Opera House, Artist Gallery etc. in every city.
- Free Television: -5 gold per pop, +5 Happiness in every city, -5% Culture in every city.
- Free Food Supply: -5 gold per pop, +5 Happiness in every city, +10% Food in every city, +2 Unhealthiness per city.
- Free Medical Care: -20 gold per pop, +5 Happiness in every city, -200 Disease per turn.
- Free Transportation: -10 gold per pop, +3 Happiness in every city, -200 Air Pollution in every city, +5% Production in every city. [Increasing bonuses with Personal Rapidtrain, Skyroads etc.]
- Free Basic Income: -7 gold per pop, +10 Happiness in every city, -1% Production, +5%
- Free Plastic Surgery: -5 gold per pop, +10 Happiness in every city.
- Free Personal Robots: -30 gold per pop, +10 Happiness in every city, +20% Production. [Increasing bonuses with Cognitive Robots etc.]
Maybe the costs are too low; building them should be a hard decision. But on a giant map you easily end up with 50 cities of 50+ pop each, meaning a total cost for Free Education of 15 * 50 * 50 = 37,500 gold per turn, plus the costs of the 50 free Universities, Schools etc.
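The arithmetic above can be checked with a quick sketch (the helper function is hypothetical; the numbers are the Free Education example from the post):

```python
# Quick check of the per-turn cost arithmetic for a "Free X" building
# charged per point of population, empire-wide. The helper name is
# invented for illustration.

def free_building_upkeep(gold_per_pop, num_cities, pop_per_city):
    """Empire-wide per-turn upkeep of one Free X building."""
    return gold_per_pop * num_cities * pop_per_city

# Free Education at 15 gold per pop across 50 cities of 50 pop each:
cost = free_building_upkeep(15, 50, 50)  # 37500 gold per turn
```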
Free Education and Free Entertainment offer free buildings in all cities, so it would be awesome if you could implement a new tag that makes it possible to give buildings an "X gold one-time cost when built".
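A rough sketch of how such a one-time-cost tag might behave. The tag name `iGoldCostOnBuild` and this whole mock-up are invented for illustration; a real implementation would be a new XML field read by the DLL and charged when the city finishes construction:

```python
# Sketch of a proposed "X gold one-time cost when built" building tag.
# iGoldCostOnBuild is a hypothetical tag name; the real hook would live
# in the DLL's building-completed code path.

class Player:
    def __init__(self, gold):
        self.gold = gold
        self.buildings = []

    def can_finish(self, building):
        """Can this player pay the one-time cost right now?"""
        return self.gold >= building.get("iGoldCostOnBuild", 0)

    def on_building_built(self, building):
        """Charge the one-time gold cost when construction completes."""
        if not self.can_finish(building):
            raise ValueError("not enough gold to finish %s" % building["name"])
        self.gold -= building.get("iGoldCostOnBuild", 0)
        self.buildings.append(building["name"])

free_school = {"name": "Free School", "iGoldCostOnBuild": 120}
p = Player(gold=500)
p.on_building_built(free_school)  # gold drops to 380
```

The one design question a real version would have to answer is what happens when the treasury can't cover the cost on the completion turn: block completion (as the sketch does), go into debt, or delay the charge.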