
Last Pre-Integration AI Games Analysis (4.21.1)

Did the Ottomans build unusually many wonders in the early game? In my playthrough on t.91 (Epic), the Ottomans have built Stonehenge, Artemis, Great Lighthouse, Statue of Zeus, Great Library and Angkor Wat, which is insane. It's n=1, so I'm curious whether you saw this in your data too. That heavy focus on wonders might explain their unusually good performance.
This is an issue in general if someone runs away in tech. I now play with the "multiple wonders" cost increase doubled, and I see a much better spread pre-Industrial.
 
Thanks! And very pretty graph.
It's remarkably linear in the middle! Medieval T2 tech time is a bit of an outlier amidst fairly flat numbers... hmm...
I think a bigger problem than my outline earlier is that the form in the DLL is not super flexible.
I started experimenting with the policy costs as
Code:
50 + (4n+0.2)**2.22 --> 51 + (4.32n+0.3)**2.246

Spoiler :

I set the optimization as
Code:
import numpy as np
from scipy.optimize import minimize

initial_guess = [50, 4, 0.2, 2.22]

def modified_cost(n, a, b, c, d):
    return a + (b * n + c) ** d

def original_cost(n):
    return modified_cost(n, *initial_guess)

# minimize deviation at n < 7 and keep ~3 policies ahead at n = 12 to ~25
def objective(params):
    a, b, c, d = params
    ns = np.arange(0, 7)
    original = original_cost(ns)
    modified = modified_cost(ns, a, b, c, d)
    low_n_error = np.sum((modified - original) ** 2) / len(ns)  # per point error

    # Maintain ~3 ahead in cost: modified(n) ≈ original(n + 3)
    ns_mid = np.arange(12, 26)
    target = original_cost(ns_mid + 3)
    modified_mid = modified_cost(ns_mid, a, b, c, d)
    offset_error = np.sum((modified_mid - target) ** 2) / len(ns_mid)

    # Scale goes up by at least 10x, so weight early more to preserve behavior for small n
    return 20 * low_n_error + offset_error
# note c cannot go negative or we will get a complex number
result = minimize(objective, initial_guess, bounds=[(40, 60), (3, 5), (0, 1), (2, 2.5)])
print(result.x)
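As a quick sanity check on whatever parameters the optimizer returns (using the fitted values quoted above purely as an example), the two curves can be compared directly at low n and across the 12-25 band, without rerunning the optimization:

```python
def cost(n, a, b, c, d):
    return a + (b * n + c) ** d

original = (50, 4, 0.2, 2.22)
fitted = (51, 4.32, 0.3, 2.246)   # example values from the run above

# Low n: the two curves should stay close
for n in range(7):
    print(n, round(cost(n, *original), 1), round(cost(n, *fitted), 1))

# Mid game: fitted(n) should sit near the original(n + 3) target
for n in (12, 18, 25):
    print(n, round(cost(n, *fitted), 1), round(cost(n + 3, *original), 1))
```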
We'll see how it feels while I study the .csv. Thanks again.
 
Probably Poland?

I love overlaying the raw data on the histograms, it's my favorite thing.
Artistry being slightly earlier makes sense. I suppose the same logic explains why Imperialism is often later: civs in this position probably need to pay more attention to catching up; perhaps Industry was the better choice for some of them?


What I'd like to do is adjust the Policy Cost formula of
Code:
cost = 50 + (4x+0.2)**2.22
(whoever commented this code didn't know what exponential scaling was but ok)
to take into account the addition of N new techs.

So if I had the turn each policy was adopted I could back out a total culture at turn n and from that interpolate an effective culture rate.
Then use the technology research times figure to give me the number of turns, T, I expect to be added by the N new techs.
Then adjust the formula accordingly so the same (or, if balancing, maybe fewer) policies are adopted in the additional T turns; subject to changing the acquisition before Renaissance as little as possible (on your figure, roughly turn 180).
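That back-out could be sketched roughly like this (the adoption turns and T below are made up, standing in for the attached csv; `policy_cost` is the current formula):

```python
def policy_cost(x):
    return 50 + (4 * x + 0.2) ** 2.22

# Hypothetical average adoption turns for policies 1..6 (stand-in for the csv)
adoption_turns = [18, 35, 55, 78, 104, 133]

# Total culture spent by the time policy k was adopted
cumulative_cost = [sum(policy_cost(i) for i in range(k + 1))
                   for k in range(len(adoption_turns))]

# Effective culture per turn at each adoption point
rates = [c / t for c, t in zip(cumulative_cost, adoption_turns)]

# If the N new techs add T turns, the latest rate gives the extra culture
# the formula adjustment has to absorb
T = 20  # hypothetical
extra_culture = rates[-1] * T
```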

Does that sound right to you?
Here's what the individual policy pick times look like:
policy_picks_turns.png

I've attached the numbers you asked for (average turn each policy was adopted) as a csv file
 

Attachments

I started experimenting with the policy costs as
Code:
50 + (4n+0.2)**2.22 --> 51 + (4.32n+0.3)**2.246
Progress is very sensitive to the early policy costs because of the culture <-> science feedback loop. Just be careful not to nerf Progress too much.
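To illustrate that sensitivity, here is a toy simulation (not VP's actual mechanics, just an assumed model in which each adopted policy boosts culture output by 10%) comparing policy counts by a given turn under the two cost formulas:

```python
def policy_cost(x, a=50, b=4, c=0.2, d=2.22):
    return a + (b * x + c) ** d

def policies_by_turn(turns, base_rate=5.0, boost=1.10, **kw):
    """Count policies adopted by `turns` under a compounding culture model."""
    culture, adopted = 0.0, 0
    for _ in range(turns):
        culture += base_rate * boost ** adopted  # each policy boosts output 10%
        while culture >= policy_cost(adopted, **kw):
            culture -= policy_cost(adopted, **kw)
            adopted += 1
    return adopted

old = policies_by_turn(200)                                  # original formula
new = policies_by_turn(200, a=51, b=4.32, c=0.3, d=2.246)    # modified formula
```

In this toy model the slightly higher early costs compound: delaying the first policy delays every boost after it, which is the same shape of feedback loop Progress depends on.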
 