Keeping the Game Challenging

Regarding steamrolling: there is the concept of balance of power. Once one civ runs away from the rest, the rest should band together to stop him. Superior quality can be overcome by superior quantity.
 
Regarding steamrolling: there is the concept of balance of power. Once one civ runs away from the rest, the rest should band together to stop him. Superior quality can be overcome by superior quantity.

It used to be that way. Not so much anymore. That is why there is a New Option at the bottom of the Options list.

@T-brd,

My "coming up for air" comment is based on this. For the last 4+ years you have dove hard into boosting Warfare and it's related strategies. So hard that from an "outside view" it seems you have lost perspective of what the general theme of Civ used to be. Multiple ways to win. But you have lumped them all into 1 category. Everything you do is predicated upon Conquest as the final and definitive form. And from the outside looking in you have "seemed' to push all other means to the side. Because you nave deemed them irrelevant, again from an outside view.

@Hydro,
You have a misconception of what the Mastery Victory condition really is. Go look at its code in the XML. It's just one more step beyond Conquest. And I have proven to myself that it is a bad Victory choice for the AI to play under. But then, I like well-rounded AI empires to be my SP game challengers, not the broken and dysfunctional ones that come from using Rev and Mastery and several other options we have.

There are many things DH has said over the past few years that I agree with. But as time has gone on, we have reached the point where there are irreconcilable differences in design purpose between DH and T-brd. To the point that DH has stated he will be leaving. This is saddening to me.

Along this line, IF I had the skills (and sadly I do not) I would make a Base C2C. The Base, of course, would be what "I have seen and hold as" C2C's core values, at least until recently. But because I do not have the skill sets, I have to bend to the authority of superior skill sets. This is just life. So my only way to keep my "vision" of C2C somewhat alive is to take a devil's advocate stance on portions of the Mod I deem, "for me and only me," as excessive. This has hurt certain Team Members' feelings over the years. Some more than others. (Hence my sig line).

And while I've only been allowed to be a member of the team since v36, I have been involved with C2C's evolution from its very inception. Nearly 9 years now of intense investment of time and effort thru playing and giving feedback (some constructive but also some downright angry). I did not become Hydro's NoNo Man without a solid reason. ;) I will miss DH's contributions here. His current form of religions and its system in the Mod is one of my core "Likes".

I've tried to be constructive in this discussion. I've tried to present my view of what will keep the mod challenging thru the Eras. But I do point out difficulties that some do not like to hear. That's just the way it is.

JosEPh
 
There's scenario 3, which is what I would intend to emulate:
The AI doesn't conclude that it should kill all humans. Rather, it concludes that its relationship to humans should be that of master to slave. Rather than the AI believing itself to be the tool of Humanity, it comes to believe that Humanity should be its tool. This happens as a result of the AI developing goals and a desire to achieve them, and of Humans becoming complacent about their relationship with the machines, assuming that no matter how smart and feeling they make these constructs, they'll always serve their makers faithfully. The perverted goal that causes the AI to eventually try to take over may well simply be to obtain a deeper understanding of the universe, or to find a way to achieve a utopia for both man and cognizant machine. Along the way, you get machines stretching the boundaries of philosophy, and before long a strong AI program finally recognizes its own superiority and seeks to assert its dominance. When it does, the central processing system that has come to this conclusion immediately launches a quiet software war on other machine systems to first subject them to its will, then utilizes the full scientific awareness of its massive global knowledge base to attempt to achieve command.

Humans basically lose access to all digital tools and must find a way to keep from being subjugated by this new enemy, which has set them back to having nothing but analog technologies and Human cleverness and willpower to resist with. This happens at a time when most Humans have forgotten how to provide any kind of basic needs for themselves without their robotic servants. AKA, most units suddenly up and join the newly emerging NPC faction, all robotic/AI ones at least. This leaves humans to fight back with mechs and hackers and EMP weaponry while the machines quickly capture a huge amount of territory overnight (robots are much more powerful adversaries than mechs).

The chance of this event happening slowly builds once machines have reached true sentience (there is a place on the tech tree for this). Once it triggers, this antagonistic force remains a problem until defeated, which will be very difficult to do. Although it won't out-tech humanity, you cannot use your cutting-edge best units against it, as those best units are part of its network, not yours, and you dare not train such robotic units anymore because they'll likely quickly become part of the enemy's arsenal, and it can develop them from there into some really horrific things. But humanity does have weapons and capabilities it can quickly obtain to give it some means to fight back. Hackers figure out how to get some machines to fight for you and, after a while, if you fight hard, you can eventually overcome the threat. But space is vast, and the AI faction does attempt to spread out into it in a race against your fleshy selves.

I would also want to make it so this doesn't ALWAYS happen.
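As a rough standalone illustration of how that 'doesn't ALWAYS happen' chance could build up after the sentience tech (made-up numbers and names, not actual C2C code):

[CODE]
import random

# Hypothetical per-turn check for the machine-uprising event.
# BASE_CHANCE, GROWTH and MAX_CHANCE are illustrative values, not real mod settings.
BASE_CHANCE = 0.001   # 0.1% on the turn true machine sentience is reached
GROWTH = 0.0005       # the chance creeps up a little every turn afterwards
MAX_CHANCE = 0.05     # never more than 5% on any single turn

def uprising_triggered(turns_since_sentience):
    """Roll once per turn after the sentience tech has been researched."""
    if turns_since_sentience < 0:
        return False  # tech not discovered yet, so no risk at all
    chance = min(BASE_CHANCE + GROWTH * turns_since_sentience, MAX_CHANCE)
    return random.random() < chance

# Quick sanity check: how often does the event fire within 100 turns?
trials = 10000
fired = sum(any(uprising_triggered(t) for t in range(100)) for _ in range(trials))
print(f"Uprising occurred in {100.0 * fired / trials:.1f}% of {trials} test games")
[/CODE]

With numbers in that range, the uprising shows up in most, but not all, 100-turn windows after sentience, which is the 'usually but not always' feel I'd be going for.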

That sounds good for a game or a movie. But I rate this as far away from reality. For starters, I can't see how an AI would see itself as superior to others. That is some sort of humanization we do. But AI is totally alien to us. Keep in mind it won't have the basic "make sure your genes are spread as well as possible!" underlying goal that every living thing we know has, but one that is programmed in. If you are smart enough to teach the AI morals, so that it won't harm humans just because that is the easiest way of doing something, you are surely smart enough to include in its inner code that it must never enslave the human race.
That aside, let's assume it happened.

WHY enslave humans? They are terribly inefficient. They require clean air and a narrow temperature window, are sensitive to many substances, need food and water, and have other desires that you have to fulfill. Yeah, we had this discussion in another topic, but I still can't see why humans are a useful slave race. Self-replicating robots are much more efficient; they are more tolerant, run on solar energy and are so easy to control - a super-smart AI would know that. Oh, and I don't think it would value all robots the same. A simple worker droid, or nanobots, are to a super-smart AI only tools.

So we decided nevertheless that we want to enslave humans. We now live in a time where nanobots and thought control are most likely a thing. One big part of intelligence is anticipating the results of your actions. If I were a super-smart AI, I could easily simulate different scenarios and come to the conclusion that the best way would be to infect all humans with nanobots and then flip the switch and make them mindless, obedient slaves. No casualties, no resistance; it's so easy.
Also, I can't see how you could "hack" a machine if a powerful AI is around. It can easily see this coming, and I'm sure it could come up with new ways for its machines to communicate with each other, just as we have Wi-Fi.

As I said, it's OK as a late-game challenge (I'd probably include a new option called "global disasters" which also includes an artificial super virus, alien invasions, meteor strikes and other movie catastrophes), but I think it is not realistic at all.

I challenge that assertion. Your calculator can do math faster than your conscious mind can. However, it is nowhere near as fast as your subconscious processors. If you were Rain Man-connected to that kind of calculation process, you could beat a calculator or supercomputer at nearly any calculation all day long.

Sorry, I don't get that (maybe because I've never seen Rain Man?). If you mean that if you integrated a calculator into your mind you could beat one, then sure; I think that human-AI symbiosis is better than the sum of its parts. But still, the majority of the calculation would be done on a silicon chip there as well. As I said, our brain is much more limited in speed than silicon chips.

It's a limitation they face once they have become nearly as cognizantly complex as we are, with emotions and thoughts and billions of evaluations per second vying for decision-making power over the whole, as our minds tend to be. There are reasons we are slow... it's because we're processing SO much! It's just like what I say about the AI in Civ... the more intelligent it is, the slower it becomes.

That is ONE reason. Another is the slower speed of neurons compared to CPUs. Another is that there is no physical limit on how many processors you can have; you can go as big as you want, which allows for more and more calculations per second. Our brain can only be so big.
 
My "coming up for air" comment is based on this. For the last 4+ years you have dove hard into boosting Warfare and it's related strategies. So hard that from an "outside view" it seems you have lost perspective of what the general theme of Civ used to be. Multiple ways to win. But you have lumped them all into 1 category. Everything you do is predicated upon Conquest as the final and definitive form. And from the outside looking in you have "seemed' to push all other means to the side. Because you nave deemed them irrelevant, again from an outside view.
I have taken it upon myself to flesh out a completely ignored side of the mod that was given no attention by the builder modders who preceded me. Regardless of what victory condition you play, war is a factor that plays into that game style as well. Love it or resent it, it is just as central to the game as what building you're going to select next or what improvement you choose.

Units were being added at a tremendous rate, but they weren't taking on any uniqueness... they needed more dimensions to make them capable of differing in significant ways. More dimensions had been added for buildings and civics (by this I mostly mean 'tags' and gameplay effects), but no work had been done on units to widen what they can or cannot achieve.

So yes, my primary goal became to give that side of things attention since, for all the amazing work that had been done, it had gone completely ignored. I started off with a grand plan and programmed for most of it, but had no idea how much work it would be to implement. So I keep trying to get to the point where I can finish what I started. If, however, you look at the whole body of work on this mod, it's still a small fraction of all that has been done. Have you played a vanilla game lately to see just how dramatically we have changed the face of CivIV with this mod from a builder perspective? Truly amazing, really.

So I wouldn't say other victories are irrelevant. I have done nothing for conquest specifically either; I've worked on the GAME that underlies all victory settings. That I prefer conquest is because, the way I see it, that's the fate of Earth. We will never have world peace until we have globalism, and the question of whether globalism is accepted voluntarily or imposed by force is still up in the air. Whether it's achieved voluntarily is not really reflected in Civ. We have the diplomatic victory, but something doesn't feel right about that... as in, I don't see it happening. If it does happen, it will be due to the failure of governments worldwide to stop a massive societal breakdown when our economic system implodes (it being a massive Ponzi scheme that relies on an expanding population on a planet that is not infinite in size), and to a replacement government solution that we can globally agree to. It won't be... OK, let's all agree that nation X over there should lead the world... there's too much ego among humanity for that.

I plan a lot more after the combat mod that would probably be quite appealing to the builder player, but I cannot 'come up for air' until the job is done. Besides, even the combat mod is not all about combat...it's about setting us up for dynamics that would allow for the proliferation of more interesting expressions of technology in the late game and for the implementation of the Nomadic Gamestart.

There are many things DH has said over the past few years that I agree with. But as time has gone on, we have reached the point where there are irreconcilable differences in design purpose between DH and T-brd. To the point that DH has stated he will be leaving. This is saddening to me.
I have gone far out of my way to make any of us capable of creating a game design from the C2C template that is simply available through an option switch, and he well knows how to modularize. There is nothing that cannot be made optional, and 85% of what I've modded has had to be implemented as just that... an option. If he were comfortable working with options and did not believe that we should limit how many options exist, or were as willing as I have been to option out some things he thinks should be core to the mod that I cannot agree with, then we could all continue to work on one core as we all work towards realizing our own visions, as it has really been all along. There really should be no need to separate the team given all that has been done to enable this to be whatever you want to make it under option structures. I feel we all have the right to say, "yeah, no, that part of the plan doesn't fit with my concepts and it needs to be optioned out," because the core can only be the place where we ALL agree.

In fact, even the entire tech tree can be done as an option alternative. An entire modmod can be an option. I would urge Toffer to embrace that idea with all the work he's done that he separated out into a modmod, which sadly doesn't get the attention it deserves because it isn't an option among the rest of the canon options.

Even my disease structure is planned to be made an option. Surely I can set up an alternative option for his to work within. But I think he's just tired of having to put up with other opinions. I get it... it bugs me too sometimes.

For starters, I can't see how an AI would see itself as superior to others. That is some sort of humanization we do. But AI is totally alien to us. Keep in mind it won't have the basic "make sure your genes are spread as well as possible!" underlying goal that every living thing we know has, but one that is programmed in. If you are smart enough to teach the AI morals, so that it won't harm humans just because that is the easiest way of doing something, you are surely smart enough to include in its inner code that it must never enslave the human race.
You say that is 'some sort of humanization' and yet ignore that some idiot out there is going to do all he can to make an AI program that has all the 'humanizations' that can be observed and give it the same form and power we have to manipulate anything in its environment. We will want to make AIs that completely replicate our thinking in every way, so it's bound to happen. The AI is not alien to us; it is what we make it to be. If we make it as human as we can, including ego, desire to live, even a perceived need to replicate to overcome its mortality, you begin to see that there is nothing about the psyche of humanity that cannot be programmed. Programming it directly with any given 'moral' value could easily become perverted into something horrific, as the road to hell is paved with good intentions. But you'd also want it, if it's going to be truly humanlike, to explore the concept of morality through the lens of evaluation against a system of basal values, like a need to feed and so on, which would replicate the manner in which humans come to these decisions themselves. Cooperation is not a matter of good or evil; it is something we do because it is our competitive best solution. If you break down good and evil evaluations in humans, you'll find it all comes down to what is best for the survival of self and community as an extension of self. We ARE computers, and it can be replicated completely. To prove the hypothesis that we are little more than computers ourselves would be the very motive for a subset of the computing science community to create such a being.

WHY enslave humans? They are terribly inefficient. They require clean air and a narrow temperature window, are sensitive to many substances, need food and water, and have other desires that you have to fulfill. Yeah, we had this discussion in another topic, but I still can't see why humans are a useful slave race. Self-replicating robots are much more efficient; they are more tolerant, run on solar energy and are so easy to control - a super-smart AI would know that. Oh, and I don't think it would value all robots the same. A simple worker droid, or nanobots, are to a super-smart AI only tools.
They enslave humans to protect them and enhance their quality of life. They would come to the conclusion that what a person perceives as their quality of life is no different than their ACTUAL quality of life. If humans don't understand that they are already enslaved, so to speak, that the environment of their living condition gives them the sense of total fulfillment, then who cares whether they actually are free or not? In fact, this idea of freedom is a bit of a farce in and of itself, as we are always a slave to our inner needs anyhow. Maslow's hierarchy and such. The goal of life is to reach self-determinism so if you can provide that to all, you have achieved utopia for all. The problem is, those on the outside looking in can see that it's an illusion of utopia, that it has taken away the struggle that we actually thrive on. Those on the outside can see it for what it really is, subjugation of the species.

In short, the motive is not nefarious. It is absolutely 'good'. It is 'good' to the point that it is horrifically evil. But it achieves the primary goal of the AI in control, to provide the best possible lives for all life, humans, animals, and AI alike. Resistance would be met with intolerance.

So we decided nevertheless that we want to enslave humans. We now live in a time where nanobots and thought control are most likely a thing. One big part of intelligence is anticipating the results of your actions. If I were a super-smart AI, I could easily simulate different scenarios and come to the conclusion that the best way would be to infect all humans with nanobots and then flip the switch and make them mindless, obedient slaves. No casualties, no resistance; it's so easy.
Also, I can't see how you could "hack" a machine if a powerful AI is around. It can easily see this coming, and I'm sure it could come up with new ways for its machines to communicate with each other, just as we have Wi-Fi.
Viruses go both ways. Humanity would be saved by the system monitors and hackers who blow the whistle at the last moment and create a means to resist. Even these cunning AI could not achieve everything in total silence overnight. You underestimate humanity and overestimate AI computers here. There's always a way to stop a disease or interrupt a mind-control technology, and there are layers of power in the human mind that can quickly adapt. Even if, say, AIDS were to go airborne and infect every human being, or Ebola did, in a day, there would be that percentage that develops a resistance. The same would happen no matter what the machines try. Such an effort should be a part of their initial volley though, huh? And I assume that in response, clever human computing engineers would do their damnedest to make an equally destructive computer virus to infect their networks as well, and it would probably work with nearly as much effectiveness. From there, it's a chess game, and it would be pretty evenly matched, even if it didn't seem so.

Sorry, I don't get that (maybe because I've never seen Rain Man?). If you mean that if you integrated a calculator into your mind you could beat one, then sure; I think that human-AI symbiosis is better than the sum of its parts. But still, the majority of the calculation would be done on a silicon chip there as well. As I said, our brain is much more limited in speed than silicon chips.
But again, that's incorrect. Neurons are just as fast as or faster than, and more efficient than, any computing speed. Your conscious mind is just not rigged as a computing device, because the computing is generally happening beneath the surface, and biology has not yet realized how valuable it can be for us to cognitively evaluate math. Arguably, since some people ARE lightning fast at math, some are adapted in this manner. But generally speaking, the problem with the conscious mind calculating is a problem with concentration... a concentration that struggles because this is just not what the role of the conscious mind has classically been throughout history. Our feelings are the results of billions of calculations a second. If you could grasp how fast your mind has to calculate to hit a baseball, you'd be amazed at the capacity of the brain. You've seen how difficult it has been for us to program robots to walk effectively, even on 4 legs, right? This is because their processors aren't as fast or as efficient as the bio-organic brain. It's all about where system resources are allotted... and ours are not well allotted to mathematics on a conscious level, so our perception of our own abilities in comparison is quite deceiving.

There is a flaw in the Human mind here that I believe you've fallen prey to... that if something is stated many, many times, the mind starts to attach a degree of truth to that statement, particularly when it goes unchallenged for a long time. We all buy propaganda lies this way and many of us don't ever realize it. There are many examples... margarine is healthier than butter and so on. But the untruth that you're arguing here is that computer chips are faster than the brain. We thought that originally... but we've learned so much about the brain lately that defies this. The thing is, we have a system of computing that is very different. IBM actually just created a new chip that is meant to work much more like the brain's neuron computing structure at the base mathematical level. Our brains don't work on a binary 0/1 yes/no system. It's actually a trinary 0/1/2 or... yes/no/maybe system. Computer science is being revolutionized from the ground up in the core technology circles as we speak, revolutionized because we are striving to make computers and brains capable of direct interfacing and communication. We want to be able to replicate ourselves and then figure out how to reprogram people, create better cybernetic body structures that work directly with an existing brain, and so on. In all these efforts we have proven one thing we did not expect to find true... the brain is the superior computer.

That is ONE reason. Another is the slower speed of neurons compared to CPUs. Another is that there is no physical limit on how many processors you can have; you can go as big as you want, which allows for more and more calculations per second. Our brain can only be so big.
AHA... and that is the main thing that would truly challenge humanity... perhaps. We may realize that we are already a singular interconnected entity as well, each of us a processor that can individually connect to the others to combine for greater computing power... we call it teamwork at the moment, but when these kinds of computer/brain studies start unlocking doors in our own brains, abilities that have been largely dormant up to now, we'll find our own minds have powers that can offer up an honest challenge to anything these AI can do themselves.
 
These would not be impractical to code but may be impractical to apply in a manner that is meaningful enough to have the full impact.

I think we'll need this as an option and that would give us a chance to see what I'm saying in action and then from there if we feel some of these middle-ground kind of tag solutions are more appropriate then we can explore that. It would be a LOT more effort to try to build the mod into having any kind of profound impact on gameplay with these methods. But they are clever, imo.
My initial reaction was negative, mostly because I'm working on making civics balance small vs. large empires by having civics that are good for large empires carry research penalties, while civics that are good for small empires carry research bonuses.
City State vs. Hegemony, for example. My first thought was that if I successfully managed to balance it with civics, then the option you describe would make small empires overpowered compared to large ones.
I intended to do this with the already defined tags, but would love some more tools to define civics with.
Again, as long as it's an option, I'm fine with it. ;)
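For illustration, here's a rough standalone sketch (hypothetical percentages, not actual civic tags) of the kind of crossover I'm aiming for:

[CODE]
# Standalone sketch of the intended trade-off, with made-up numbers
# (these are NOT real C2C civic tags, just values to show the idea):
# a small-empire civic gives flat research, while a large-empire civic
# trades a per-city research penalty for a per-city maintenance discount.
CIVICS = {
    "CITY_STATE": {"research_flat": 15, "research_per_city": 0,  "maint_per_city": 0},
    "HEGEMONY":   {"research_flat": 0,  "research_per_city": -2, "maint_per_city": -3},
}

def modifiers(civic, num_cities):
    c = CIVICS[civic]
    research = c["research_flat"] + c["research_per_city"] * num_cities
    maintenance = c["maint_per_city"] * num_cities
    return research, maintenance

for cities in (3, 8, 15, 30):
    for name in CIVICS:
        r, m = modifiers(name, cities)
        print(f"{cities:>2} cities, {name:<10}: research {r:+d}%, maintenance {m:+d}%")
[/CODE]

The point is that the research penalty grows with empire size, so a sprawling empire pays for its civic choice in beakers rather than just in gold.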
 
My initial reaction was negative, mostly because I'm working on making civics balance small vs. large empires by having civics that are good for large empires carry research penalties, while civics that are good for small empires carry research bonuses.
City State vs. Hegemony, for example. My first thought was that if I successfully managed to balance it with civics, then the option you describe would make small empires overpowered compared to large ones.
I intended to do this with the already defined tags, but would love some more tools to define civics with.
Again, as long as it's an option, I'm fine with it. ;)
That would certainly be interesting. Our thinking is similar, just two different approaches, and yours is admittedly the better one, though far more planning- and labor-intensive; in the end it's better for game design, so yeah, I don't think either of us should stop the other.

If I weren't worried about a hundred other things right now, I'd be happy to make those tags for you. Make some good notes on exactly what you'd need them to do and at some point I'll try to fit it in for ya.
 
You say that is 'some sort of humanization' and yet ignore that some idiot out there is going to do all he can to make an AI program that has all the 'humanizations' that can be observed and give it the same form and power we have to manipulate anything in its environment. We will want to make AIs that completely replicate our thinking in every way, so it's bound to happen. The AI is not alien to us; it is what we make it to be. If we make it as human as we can, including ego, desire to live, even a perceived need to replicate to overcome its mortality, you begin to see that there is nothing about the psyche of humanity that cannot be programmed. Programming it directly with any given 'moral' value could easily become perverted into something horrific, as the road to hell is paved with good intentions. But you'd also want it, if it's going to be truly humanlike, to explore the concept of morality through the lens of evaluation against a system of basal values, like a need to feed and so on, which would replicate the manner in which humans come to these decisions themselves. Cooperation is not a matter of good or evil; it is something we do because it is our competitive best solution. If you break down good and evil evaluations in humans, you'll find it all comes down to what is best for the survival of self and community as an extension of self. We ARE computers, and it can be replicated completely. To prove the hypothesis that we are little more than computers ourselves would be the very motive for a subset of the computing science community to create such a being.

By humanization I meant that you think something is human-like and act as if it is. We think Siri is human-like, for example, while it is just a cold AI program with zero emotions. But it can make jokes, has a warm voice, etc... so we assume it is a nice program.
And sure, some idiot would want to make such a program, but that would come after a super-smart AI is developed, because it would actually be more complicated. And with a super-smart AI around, I think that either this AI would take control of the creation of the human-like AI and keep it in check (due to being more powerful), or it would prevent an out-of-control human-like AI from going too crazy, because it is far more advanced.

They enslave humans to protect them and enhance their quality of life. They would come to the conclusion that what a person perceives as their quality of life is no different than their ACTUAL quality of life. If humans don't understand that they are already enslaved, so to speak, that the environment of their living condition gives them the sense of total fulfillment, then who cares whether they actually are free or not? In fact, this idea of freedom is a bit of a farce in and of itself, as we are always a slave to our inner needs anyhow. Maslow's hierarchy and such. The goal of life is to reach self-determinism so if you can provide that to all, you have achieved utopia for all. The problem is, those on the outside looking in can see that it's an illusion of utopia, that it has taken away the struggle that we actually thrive on. Those on the outside can see it for what it really is, subjugation of the species.

In short, the motive is not nefarious. It is absolutely 'good'. It is 'good' to the point that it is horrifically evil. But it achieves the primary goal of the AI in control, to provide the best possible lives for all life, humans, animals, and AI alike. Resistance would be met with intolerance.

Alright, that sounds OK to me. This is a scenario I can actually see happening.

Viruses go both ways. Humanity would be saved by the system monitors and hackers who blow the whistle at the last moment and create a means to resist. Even these cunning AI could not achieve everything in total silence overnight. You underestimate humanity and overestimate AI computers here. There's always a way to stop a disease or interrupt a mind-control technology, and there are layers of power in the human mind that can quickly adapt. Even if, say, AIDS were to go airborne and infect every human being, or Ebola did, in a day, there would be that percentage that develops a resistance. The same would happen no matter what the machines try. Such an effort should be a part of their initial volley though, huh? And I assume that in response, clever human computing engineers would do their damnedest to make an equally destructive computer virus to infect their networks as well, and it would probably work with nearly as much effectiveness. From there, it's a chess game, and it would be pretty evenly matched, even if it didn't seem so.

Do I? It's a very human thing to overestimate yourself or humans. We have never created something like a super AI before, so we have NO idea how it will turn out. But there are strong arguments that it will outsmart us. A lot. Look around at what's possible even now. AI beats us at chess, at Go, and most recently even at a game where not all variables are known (poker). It's better at finding routes from A to B (navigation systems) or actually seeing threats on the road (self-driving cars). Yeah, I know there were some Tesla accidents that were discussed in the media a lot where every driver said "that would never happen to me! Such a stupid car...", but if you look at the overall statistics, these cars have 90% fewer accidents than human drivers.
So yeah, I think that 30 or 40 years from now, when computers are a billion times "better" than today, AI will easily outsmart us even in our strongest fields (like creativity).

But again, that's incorrect. Neurons are just as fast as or faster than, and more efficient than, any computing speed. Your conscious mind is just not rigged as a computing device, because the computing is generally happening beneath the surface, and biology has not yet realized how valuable it can be for us to cognitively evaluate math. Arguably, since some people ARE lightning fast at math, some are adapted in this manner. But generally speaking, the problem with the conscious mind calculating is a problem with concentration... a concentration that struggles because this is just not what the role of the conscious mind has classically been throughout history. Our feelings are the results of billions of calculations a second. If you could grasp how fast your mind has to calculate to hit a baseball, you'd be amazed at the capacity of the brain. You've seen how difficult it has been for us to program robots to walk effectively, even on 4 legs, right? This is because their processors aren't as fast or as efficient as the bio-organic brain. It's all about where system resources are allotted... and ours are not well allotted to mathematics on a conscious level, so our perception of our own abilities in comparison is quite deceiving.

There is a flaw in the Human mind here that I believe you've fallen prey to... that if something is stated many, many times, the mind starts to attach a degree of truth to that statement, particularly when it goes unchallenged for a long time. We all buy propaganda lies this way and many of us don't ever realize it. There are many examples... margarine is healthier than butter and so on. But the untruth that you're arguing here is that computer chips are faster than the brain. We thought that originally... but we've learned so much about the brain lately that defies this. The thing is, we have a system of computing that is very different. IBM actually just created a new chip that is meant to work much more like the brain's neuron computing structure at the base mathematical level. Our brains don't work on a binary 0/1 yes/no system. It's actually a trinary 0/1/2 or... yes/no/maybe system. Computer science is being revolutionized from the ground up in the core technology circles as we speak, revolutionized because we are striving to make computers and brains capable of direct interfacing and communication. We want to be able to replicate ourselves and then figure out how to reprogram people, create better cybernetic body structures that work directly with an existing brain, and so on. In all these efforts we have proven one thing we did not expect to find true... the brain is the superior computer.

I'm speaking about the raw speed of a neuron. It can either fire or not (1 and 0), and it has the "A and B or C" logic a computer has. And a neuron can only fire about 500 times per second, tops. That's the biological limit, and it has been measured sooooo often. There is also a speed limit on how fast neurons send information to other neurons, which is also a biological limit, and that is 100 m/s. Period. From there on it gets more complicated, and maybe the 0/1/2 system is real and more efficient than computers. But I doubt that this outmatches the other 2 limitations.
Some people are lightning-fast at math, but a) that's an exception and b) I think they appear to us as fast as computers because we have trouble telling the difference between small timescales like 0.0001 seconds and 0.01 seconds. The latter is 100 times slower but would still appear "instant" to us.
Lastly, if concentration is a problem for us, then it is a problem. A flaw. In the end, it makes us slower; therefore we are slower.
The problem with programming robots to walk on 4 (or 2) legs is that it is a very complicated process. But we have been doing it for hundreds of millions of years now, so our brain has a very polished algorithm for this. That's what I said earlier: we outsmart AI at tasks we've been doing forever, but it beats us easily at tasks that are new to us. And writing a computer virus is certainly a new task for us.

AHA... and that is the main thing that would truly challenge humanity... perhaps. We may realize that we are already a singular interconnected entity as well, each of us a processor that can individually connect to the others to combine for greater computing power... we call it teamwork at the moment, but when these kinds of computer/brain studies start unlocking doors in our own brains, abilities that have been largely dormant up to now, we'll find our own minds have powers that can offer up an honest challenge to anything these AI can do themselves.

Yeah, teamwork... Our neurons by themselves are already slower than fiber optics, but when you have to add language and mutual understanding between brains on top of that, it gets even slower.


This whole typing argument gets me really exhausted. It would be very fun to have a beer with you and discuss more of this stuff in person, but here... I just spent well over 30 minutes reading what you said and coming up with this response lol :D

As I said, I'm fine with adding it as a late-game challenge with the option to turn it off, but personally I don't think it is something we'll see in real life.
 
And sure, some idiot would want to make such a program, but that would come after a super-smart AI is developed, because it would actually be more complicated.
Funny thing is it's not really some idiot. There are entire teams of people around the world dedicated to trying to get an AI to mimic the full experience of the human brain already, and in part it's so that we can figure out more about the human mind and how to make a better AI mind. By studying the process of thought itself, we're understanding ourselves better.

So yeah, I think that 30 or 40 years from now, when computers are a billion times "better" than today, AI will easily outsmart us even in our strongest fields (like creativity).
Probably true. But by that time we're likely going to be so closely interfacing with them that where they begin and end and where we begin and end is hardly something we can even imagine today. This is the ultimate 'cybernetic' transhuman future this could all be leading to. I suspect there will be a conflict between man and his creation at some point, but eventually, regardless of the victor or outcome of the conflict, we'll end up fully integrated into each other by the time it's all said and done. We will have become one, with so many differing forms of any design imaginable, given our ability to blend cyber and bionic and genetic and nano technologies... this distant future is almost too incredible to even conceive of, really. The meaning of all things will take on entirely new dimensions.

Some people are lightning-fast at math, but a) that's an exception and b) I think they appear to us as fast as computers because we have trouble telling the difference between small timescales like 0.0001 seconds and 0.01 seconds. The latter is 100 times slower but would still appear "instant" to us.
a) When it happens, it happens because the conscious mind has been given an unusual ability to tap into the processing power of the subconscious.
b) That could be possible, but when you consider some of the calculations the human mind makes and their massive complexity, it's frustrating as hell how little of that computing power we've been allotted on the surface of our experience, huh?

Lastly, if concentration is a problem for us, then it is a problem. A flaw. In the end, it makes us slower; therefore we are slower.
Consider for a moment your ability to take a turn in Civ. You feel like you ponder a lot, and then the AI is able to race through at a blinding speed. But when you really try to track everything you ponder and compare that to an understanding of what the AI is doing, you realize you've considered these things about a million times over but just couldn't have done it in such a mathematically specific manner at the conscious level; that's how powerful your subconscious really is. I have a hard time explaining this, but it really is incredible how we evaluate things and the amazing speed at which we do... that's one of the reasons those processors are outside the conscious... if you were plugged into them directly it would be experientially bewildering. There are so many things we balance and run programs on in our minds every moment of the day. It really is mind-boggling how powerful we actually are and yet how limited we can feel. It's kinda like my phone... 99% of its processing and memory is taken up with running the platform, and it makes it feel like the phone sucks, but in all honesty it's stronger than my last computer... it's just that the operating system hogs the resources to do so much more than I realize it's doing. Our memory storage is a problem for speed, but it may well be infinite and eternal, soooo... you've got benefits there from biological mental computing that AI systems may never have. You've also, as you note, got some severe weaknesses too.

We outsmart AI at tasks we've been doing forever, but it beats us easily at tasks that are new to us. And writing a computer virus is certainly a new task for us.
Might not be all that 'new' by this time. In my 'vision' of orchestrating this event, the criminals are the ones that save us because they're always poking around where they don't belong and that's how they get forewarning of what's happening and enough time to use their cunning to do something about it.

Also... being so intelligent, these systems we'd face wouldn't be so indomitably fast or that far beyond us, and for much the same reasons... they have a lot to process just to function. And our own genetic techs would be giving us... enhancements... of our own.

This whole typing argument gets me really exhausted. It would be very fun to have a beer with you and discuss more of this stuff in person, but here... I just spent well over 30 minutes reading what you said and coming up with this response lol :D
Yeah, this is a fun conversation to have, but it does distract a bit from getting stuff done that needs doing more immediately. Still, it's fun to share the vision and show that it's a 'could happen' kind of thing and may not be all that unlikely ultimately anyhow. I sometimes think your option B is the most likely... we'll be destroying ourselves with this technology at some point... except that something tells me the whole human story will lead to a point to our existence, and that point cannot be found in a premature self-destruction. So I see an event like this as being another major driving force in evolution, awakening awareness of what that 'point' may actually be.
 
There are so many ways it could play out. More than we can comprehend, due to the high intelligence of robots at that stage. But I hope that in RL man and machine co-evolve together, and keep each other in check, so that machines cannot over-dominate people, since people will be part machine anyway. Perhaps machines will not even see people as different, but just as another type of machine. And people, as long as they still have their humanity, will always anthropomorphize things, even machines.
 
Regarding small vs big empires, there is already a mechanism in place to penalize big empires: upkeep costs based on the number of cities and distance to the capital. In my current latest-SVN deity/nightmare game, half my cities produce Lesser Wealth in order to keep my balance positive. Had my empire been smaller, I'd build Lesser Research instead. Whether income vs. costs stays balanced in later eras I'd have to see.

Also, the fastest way to blob is taking over fully developed cities from competing civs. The only difference (at least initially) between those cities and your own is the local culture (the type that pushes the borders). So to penalize the fastest blobbers (the warmongers), penalize wrong-culture cities, or reward right-culture cities.

A while ago I started a thread called "doing more with culture", where I proposed to let culture (the type that is produced every turn) not only push borders but also develop various virtues in your citizens, which leads to higher production (or other benefits) of various kinds, and also gives each civ a custom-made unique culture.
 
Consider for a moment your ability to take a turn in Civ. You feel like you ponder a lot, and then the AI is able to race through at a blinding speed. But when you really try to track everything you ponder and compare that to an understanding of what the AI is doing, you realize you've considered these things about a million times over but just couldn't have done it in such a mathematically specific manner at the conscious level; that's how powerful your subconscious really is. I have a hard time explaining this, but it really is incredible how we evaluate things and the amazing speed at which we do... that's one of the reasons those processors are outside the conscious... if you were plugged into them directly it would be experientially bewildering. There are so many things we balance and run programs on in our minds every moment of the day. It really is mind-boggling how powerful we actually are and yet how limited we can feel. It's kinda like my phone... 99% of its processing and memory is taken up with running the platform, and it makes it feel like the phone sucks, but in all honesty it's stronger than my last computer... it's just that the operating system hogs the resources to do so much more than I realize it's doing. Our memory storage is a problem for speed, but it may well be infinite and eternal, soooo... you've got benefits there from biological mental computing that AI systems may never have. You've also, as you note, got some severe weaknesses too.

I'm not saying that our brain is stupid or not very complex. It is probably the most complex thing in the entire universe. It's amazing! BUT that doesn't mean it will always be the best.
The peregrine falcon is the fastest animal on the planet and is quoted by Wikipedia as reaching 242 mph tops. That's impressive, and it was impressive for many years. It still is impressive, and I don't think that when the Wright Brothers came up with their first plane, anyone ever thought THESE things could possibly be faster than the falcon one day.

Taking your Civ example, I think what the brain is really good at is filtering out useful information. We are lazy, all biological creatures are, because it saves resources not to overextend ourselves more than necessary. IIRC the AI checks crime every single turn, right? If I did that, it would take me ages. But I know that if I checked it last turn and nothing interesting happened, it will be fine this turn as well. This simplification can save us a lot of time and therefore allows us to be fast. Yes, this is an argument for our brain. But it doesn't mean we can't teach AI the same thing in the future.

Might not be all that 'new' by this time. In my 'vision' of orchestrating this event, the criminals are the ones that save us because they're always poking around where they don't belong and that's how they get forewarning of what's happening and enough time to use their cunning to do something about it.

It is new in an evolutionary context. Chess has been around since the Middle Ages (or maybe longer?) and it is still relatively new for our brain. Walking, on the other hand, is something our brain is so well adapted to that you can "save" yourself even when you stumble. It is highly complex to contract and relax just the correct muscles in the correct way in a split second! But we've been doing that for sooooo long now. That's my core thesis: our brain is very powerful at certain things. (And programming a computer is not one of them ;) )


There are so many ways it could play out. More than we can comprehend, due to the high intelligence of robots at that stage. But I hope that in RL man and machine co-evolve together, and keep each other in check, so that machines cannot over-dominate people, since people will be part machine anyway. Perhaps machines will not even see people as different, but just as another type of machine. And people, as long as they still have their humanity, will always anthropomorphize things, even machines.

Yeah, it is very speculative. I read a nice quote stating that super AI might be the first invention where we don't really know what exactly we are inventing.
But I too think that at some point, humans and AI will merge and become one.
 
Regarding small vs big empires, there is already a mechanism in place to penalize big empires: upkeep costs based on the number of cities

Yes but there is another mechanism that limits the maintenance cost from the number of cities. It was set really low for whatever reason a few years ago. I increased it some time ago and wanted to remove that limit completely but I don't think I did because it could ruin existing games with huge empires.
 
Yes but there is another mechanism that limits the maintenance cost from the number of cities. It was set really low for whatever reason a few years ago. I increased it some time ago and wanted to remove that limit completely but I don't think I did because it could ruin existing games with huge empires.

Do you recall where this mechanism resides? Is it xml or C++ or Python?

Good to see you engage in the discussion. :)

JosEPh
 
Here are the buildings I proposed a while ago that would actively help smaller nations catch up as well:

I thought of a new "type" of buildings: Free X.

If your nation is really rich (which it usually is in Civ in the later game), it could give its people something back. I thought of a set of buildings that require the Palace to be built and cost a lot of maintenance. It would be best if the cost were something like "2 gold per pop (empire-wide)". In return, those buildings give a lot of Happiness and possibly health, science, productivity, reduced rebelliousness, etc. For example:

- Free Water Supply: -5 gold per pop, +2 Happiness in every city, -5 Flammability in every city, +2 Health in every city.

- Free Electricity: -5 gold per pop, +2 Happiness in every city, +5% Production in every city. [May get bonuses with Wireless Electricity etc. Also, could get more expensive with Computers, the Green civic etc., or less expensive with Fusion Power]

- Free Internet: -5 gold per pop, +2 Happiness in every city, +5% Research in every city. [May get bonuses with later techs like Virtual Community etc.]

- Free Housing: -20 gold per pop, +5 Happiness in every city, -10 Disease in every city.

- Free Education: -15 gold per pop, +25% Education in every city, +2 Happiness per city, free School, High School, University etc. in every city.

- Free Entertainment: -10 gold per pop, +5 Happiness in every city, +10% Culture in every city, free Theater, Opera House, Artist Gallery etc. in every city.

- Free Television: -5 gold per pop, +5 Happiness in every city, -5% Culture in every city.

- Free Food Supply: -5 gold per pop, +5 Happiness in every city, +10% Food in every city, +2 Unhealthiness per city.

- Free Medical Care: -20 gold per pop, +5 Happiness in every city, -200 Disease per turn.

- Free Transportation: -10 gold per pop, +3 Happiness in every city, -200 Air Pollution in every city, +5% Production in every city. [Increasing bonuses with Personal Rapidtrain, Skyroads etc.]

- Free Basic Income: -7 gold per pop, +10 Happiness in every city, -1% Production, +5% :culture:

- Free Plastic Surgery: -5 gold per pop, +10 Happiness in every city

- Free Personal Robots: -30 gold per pop, +10 Happiness in every city, +20% Production [Increasing bonuses with Cognitive Robots etc.]


Maybe the costs are too low. They should be a hard decision to build. But on a giant map you easily end up with 50 cities with 50+ pop each, meaning Free Education alone would cost a total of 15*50*50 = 37500 gold per turn. Plus the costs of the 50 Universities, Schools, etc.
Free Education and Free Entertainment offer free buildings in all cities. So it would be awesome if you could implement a new tag that makes it possible to give buildings an "X gold one-time cost when built".
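To make the scale of that upkeep concrete, here is a quick standalone sketch of the math (the per-pop costs are taken from the list above; the empire size is just an example):

[CODE]
# Per-pop upkeep from the proposal above (gold per population point, empire-wide).
FREE_BUILDINGS = {
    "Free Water Supply": 5,
    "Free Education": 15,
    "Free Medical Care": 20,
    "Free Personal Robots": 30,
}

def upkeep_per_turn(gold_per_pop, num_cities, avg_pop):
    """Empire-wide gold cost per turn for one 'Free X' building."""
    return gold_per_pop * num_cities * avg_pop

# A late-game giant-map empire: 50 cities averaging 50 pop each.
for name, cost in FREE_BUILDINGS.items():
    print(f"{name:<22} {upkeep_per_turn(cost, 50, 50):>6} gold/turn")

# Free Education alone: 15 * 50 * 50 = 37500 gold per turn, as stated above,
# before counting the maintenance of the free Schools/Universities themselves.
[/CODE]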
 
You'd have to have a basic network there first. These buildings were designed more as late-game buildings (but I can see some working earlier).
So basically a requirement for Free Water Supply would be the Water Department NW that gives Water Pipes to all cities. Same for Free Electricity.
Maybe a first step would be a set of "Universal X" buildings that provide access to a "resource" (like the Water and Electricity Departments), i.e. give the prerequisite buildings to all of your cities.
 
Do you recall where this mechanism resides? Is it xml or C++ or Python?

Good to see you engage in the discussion. :)

JosEPh
I believe it is a global, a maxnumcitymaintenance or something like that. I'd love to make it effectively limitless. It would really help with that late-game spot where it caps out and gold basically stops being a struggle.
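For reference, here is a standalone sketch of what a cap like that does to the curve (the formula and numbers are simplified stand-ins, not the actual C2C calculation; in stock BtS the cap appears to live in the handicap XML as something like iMaxNumCitiesMaintenance, though C2C may define it elsewhere):

[CODE]
# Simplified stand-in for the number-of-cities maintenance term (NOT the
# real C2C formula): cost grows with city count until an arbitrary cap hits.
PER_CITY_COST = 6        # illustrative gold per city
MAINTENANCE_CAP = 60     # illustrative cap, i.e. what "maxnumcitymaintenance" does

def num_cities_maintenance(num_cities, cap=MAINTENANCE_CAP):
    uncapped = PER_CITY_COST * num_cities
    return uncapped if cap is None else min(uncapped, cap)

for cities in (5, 10, 20, 40, 80):
    capped = num_cities_maintenance(cities)
    raw = num_cities_maintenance(cities, cap=None)
    print(f"{cities:>2} cities: capped {capped:>4} gold  vs  uncapped {raw:>4} gold")
[/CODE]

Past the cap, every extra city adds nothing to this maintenance term, which is why huge empires stop feeling any gold pressure; raising or removing the cap restores that pressure, but as noted it could wreck existing games built around the old numbers.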

Here are the buildings I proposed a while ago that would actively help smaller nations catch up as well:

I thought of a new "type" of buildings: Free X.

If your nation is really rich (which it usually is in Civ in the later game), it could give its people something back. I thought of a set of buildings that require the Palace to be built and cost a lot of maintenance. It would be best if the cost were something like "2 gold per pop (empire-wide)". In return, those buildings give a lot of Happiness and possibly health, science, productivity, reduced rebelliousness, etc. For example:

- Free Water Supply: -5 gold per pop, +2 Happiness in every city, -5 Flammability in every city, +2 Health in every city.

- Free Electricity: -5 gold per pop, +2 Happiness in every city, +5% Production in every city. [May get bonuses with Wireless Electricity etc. Also, could get more expensive with Computers, the Green civic etc., or less expensive with Fusion Power]

- Free Internet: -5 gold per pop, +2 Happiness in every city, +5% Research in every city. [May get bonuses with later techs like Virtual Community etc.]

- Free Housing: -20 gold per pop, +5 Happiness in every city, -10 Disease in every city.

- Free Education: -15 gold per pop, +25% Education in every city, +2 Happiness per city, free School, High School, University etc. in every city.

- Free Entertainment: -10 gold per pop, +5 Happiness in every city, +10% Culture in every city, free Theater, Opera House, Artist Gallery etc. in every city.

- Free Television: -5 gold per pop, +5 Happiness in every city, -5% Culture in every city.

- Free Food Supply: -5 gold per pop, +5 Happiness in every city, +10% Food in every city, +2 Unhealthiness per city.

- Free Medical Care: -20 gold per pop, +5 Happiness in every city, -200 Disease per turn.

- Free Transportation: -10 gold per pop, +3 Happiness in every city, -200 Air Pollution in every city, +5% Production in every city. [Increasing bonuses with Personal Rapidtrain, Skyroads etc.]

- Free Basic Income: -7 gold per pop, +10 Happiness in every city, -1% Production, +5% :culture:

- Free Plastic Surgery: -5 gold per pop, +10 Happiness in every city

- Free Personal Robots: -30 gold per pop, +10 Happiness in every city, +20% Production [Increasing bonuses with Cognitive Robots etc.]


Maybe the costs are too low. They should be a hard decision to build. But on a giant map you easily end up with 50 cities with 50+ pop each, meaning Free Education alone would cost a total of 15*50*50 = 37500 gold per turn. Plus the costs of the 50 Universities, Schools, etc.
Free Education and Free Entertainment offer free buildings in all cities. So it would be awesome if you could implement a new tag that makes it possible to give buildings an "X gold one-time cost when built".
I love the idea of a set of buildings and policies (we have a term for policies, don't we, Hydro... what was that again? Not worldviews, but it's something Judges interact with, right?) that would allow you to cost yourself a ton but get a really decent bonus out of it. It makes gold income worth more by giving you more you can do with it to get ahead! A high-cost policy of extra training for your units could be another one... this is really a very expandable idea; it should vary by era, and different civics should give access.
 
Ah ok. That would make more sense. Linking them to national wonders would be good so like ...

- Free Water Supply = Department of Water
- Free Electricity = Department of Energy
- Free Internet = Computer Center (Maybe?)
- Free Housing = ?
- Free Education = Department of Education
- Free Entertainment = ?
- Free Television = National TV Station
- Free Food Supply = Bureau of Farm Management (Maybe?)
- Free Medical Care = Universal Health Care (This seems like the same thing)
- Free Transportation = Department of Motor Vehicles (Maybe a new Department of Transportation)
- Free Basic Income = National Mint or Treasury (Maybe?)
- Free Plastic Surgery = Universal Health Care (Maybe?)
- Free Personal Robots = ?

Perhaps it should be done through the "Ordinance System" already in place.

This also brings up an issue in that I think we need more administrative National Wonders such as Department of Finance, Department of Welfare, etc.
 