It would also rule out deals that would not push the AI into negative income. And if I understood the essence of this thread correctly, most people agree that this is the exploit: generating gold out of thin air by pushing the AI into the negative. But if I merely deprive the AI of its "normal" income by canceling a deal, why should that be prohibited?
I would still consider it an exploit even if the AI can afford it, but definitely not on the same scale.
It would not cover the way I exploited the AI in my GOTM72 game: in that game I had noticed that America, Aztekia and China had joined up against poor Arabia. Arabia was losing like two cities per turn, and when they were down to one or two cities, I expected them to be gone pretty soon... At that point I signed alliances with America, Aztekia and China for all the gold I had and then sold my tech lead to them. Indeed, about two turns later the Aztecs took the last Arabian city, and from then on I was making about 500 extra gpt...

This would not be covered by Niklas' rule, because I did not cancel any deal! The Aztecs did it, so it's their own fault. If they wanted my money to keep coming in, they shouldn't have taken the last Arabian city...!
In some sense this could be argued to be perfectly acceptable, though I don't think it should be. But it is not as bad an exploit as the Emsworth Agreement, simply because it is (almost) impossible for the player to engineer such a situation. If Emsworth Agreements are allowed, you would be forced to play that way in order to be competitive; you don't have that problem here. In some sense it is comparable to signing gpt deals and then asking for a "remove or declare", since it requires the AI to give you the opportunity in the first place. But yes, it happens often enough for it to be exploitish, and you are right that we should probably find a definition that covers this case as well.
Instead I would propose a rule like this:
"A player is not allowed to pay an AI more than the AI demands."
I like this idea, good thinking. Though it still isn't perfect. If the AI has a (monopoly) tech that you don't have, and you have a (monopoly) tech that the AI lacks, you could easily trade for his tech using gpt and a luxury, then sell him your tech for gpt, and finally disconnect the luxury. Sure, you destroy your trade reputation, but the gain would far outweigh that cost. And if you can do it with an MA instead (harder to set up, of course) whose target then gets killed, you don't even get that rep hit. And you are not breaking the rule you propose.
So how about a combination? If we put a big fat AND between our two rules, we get something that captures all the cases proposed, but it could also potentially generate a few "false positives", as you point out in your first point. I'm not sure what "legally" depriving the AI of its "normal" income would look like, though. Could you give an example that you don't consider exploitish? If it still involves making him pay more for something than he normally would have, I don't see how it could not be an exploit.
I can see two ways of combining them. Either we simply concatenate the rules, like so:
The player is not allowed to actively cancel a deal that gives gpt to an AI if the player is at the same time receiving gpt from that same AI in another deal that would not be cancelled. Also, a player is not allowed to pay an AI more than the AI demands.
Or we tie it all to the deliberate act of actually taking the gpt from the AI, which would be my preference. It might be possible to turn the original rule around, like so:
The player is not allowed to trade for gpt from an AI if there is an active gpt deal from the player to that AI that the player knows will be cancelled very shortly.
But clearly this would yield too many false positives; there are certainly perfectly legal situations that would break the above. So to temper it a bit further, how about:
The player is not allowed to trade for gpt from an AI if there is an active gpt deal from the player to that AI that the player knows will be cancelled very shortly, unless the player leaves the AI with at least as much free gpt as the value of the deal that will be cancelled.
How about it? A very convoluted sentence, I know, but we can worry about the formulation after we decide on the actual contents of the rule. I'm not sure I'm perfectly happy with either proposed rule here (first or third), but both of them are IMO better than my first proposal or Lanzelot's proposal. And any proposal is of course much better than nothing.
On a side note, I see that the exploits listed on the GOTM site are not formulated like this at all, but rather as a (loose) description of a situation that would be an exploit. Using such a formulation here, we might have something like:
Emsworth Agreements
It is possible to make an AI pay much more gold per turn for something than he normally has available, by
- giving him gold per turn through a deal that includes a luxury or a military alliance
- trading back that gold per turn for techs
- breaking the luxury route or the alliance, either deliberately or through circumstances known to be about to happen.
This forces the AI to pay you gold per turn that he wouldn't otherwise have had, sometimes even forcing him into deficit spending. This is not allowed.
To me that would be the best formulation of all.
Lanzelot said:
However, one more thought about whether the "Emsworth Agreement" is really exploitative or not: if I understood Lord Emsworth's writeup correctly, then it was not the "plain Emsworth Agreements" that gave him such an absurd income; it was the fact that he applied these agreements "iteratively".
What I mean is: he used the gpt that he gained by such an agreement to set up a second Emsworth Agreement with even higher sums! For example, if he starts with a natural income of 100gpt and gifts it to 4 AIs, he will end up with 500gpt. Now if he still has enough stuff to sell (and another victim that can be destroyed), he can set up a second round using the 500gpt as the starting point, and that will give him 2500 gpt. This is what leads to exponential growth, which I readily admit is exploitative.
So perhaps a compromise would be: "Plain Emsworth Agreements" are allowed, but "Iterative Emsworth Agreements" are not, so you need to wait the full 20 turns before you are allowed to set it up again?!
This would eliminate the absurd sums you get from the exponential growth, and it would eliminate the problem that "free money is generated from thin air", because with one such deal the AI may not yet be in negative gpt, or perhaps only in like minus 50-100gpt, and that's not yet really generating money from thin air, because the AI pays for that by dissolving units and city improvements.
No, I don't like this reasoning at all. 500 gpt is still an incredible amount of money "for free"; you are still crippling the AI beyond the point he would normally have been willing to go. Likely you are still forcing him into negative spending, but even if you don't, you are still crippling him and making him a much easier target for yourself.
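Just to spell out the arithmetic behind that growth: assuming, as in Lanzelot's example, that one round with k AI partners turns an income of g into roughly (k+1)*g, i.e. your own g plus g coming back from each of the k AIs, then after n rounds:

g_n \approx (k+1)^n \cdot g_0, \quad \text{so } k = 4,\ g_0 = 100 \text{ gpt gives } 500,\ 2500,\ 12500,\ \ldots \text{ gpt after } 1, 2, 3 \text{ rounds.}

Even a single round quintuples the income; the iteration only turns an already huge gain into an absurd one.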
Lanzelot said:
(I can see that minus 1000gpt would be exploitative, because one unit/building is never worth that much. But to me it seems that losing a unit/building per turn is an appropriate punishment for running minus 50gpt, so I have no problem with that.)
No way. We have banned it as an exploit if the player runs a large deficit and "only" loses a unit/building per turn. In this situation we're forcing it onto the AI, so we don't even take the hit ourselves, even though we are the ones benefiting from it. If the AI chose of its own volition to go into negative spending, then by our own rule he would be using an exploit. In this situation he doesn't even get a choice.