Best CPU for Modding Civ

bostonbongrips

I am in the process of upgrading my CPU and wanted to get some feedback from the Civ community, because Civ is my most-played game. I am into a lot of mods; I usually have 60-80 enabled at a time. I am considering the Ryzen 3900X vs Intel i9 9900K.

My initial thought was Ryzen because of the higher core count and the fact that it is somewhat future-proof for when I upgrade my GPU, with the X570 boards utilizing PCIe Gen 4. But Intel always seems to get better gaming performance.

I'm wondering if you guys think late-game, heavily modded Civ games will benefit from the Ryzen having more cores and threads? My GPU is a GTX 1080 and it will eventually be upgraded. Paired with my current CPU, an i5-5500, it has a lot of trouble with late-game Civ, mainly when modded.

Thoughts?
 
There is a wonderful thread about AI benchmarks where a bunch of us have posted PC specs and AI turn times:

https://forums.civfanatics.com/threads/post-your-ai-benchmark-score.628767/

Yes, Intel still holds the throne in raw performance, with the best times coming from overclocked top-end Intel processors compared to similarly specced AMDs, but it is bottom of the barrel in performance per price.
Still, the big bulk of newish CPUs fall into the 30-35s category. I'd say: buy a higher-tier AMD for the same price as (or cheaper than) a lower-tier Intel and sleep happy.

More standardized tests would give easier-to-read results. We have reported from different graphics and resolution configurations, which will slightly affect the end results, but in the end, any mid-to-high-end processor from either Intel or AMD is going to give you similar results in Civ VI.

I don't think you need any graphics card upgrade to play Civ VI at max settings, though. I have an RTX 2070, which is quite comparable to the 1080, and it's always waiting on the CPU, honestly.
 
I am considering the Ryzen 3900X vs Intel i9 9900K.
For reference to some of the replies, the Ryzen 3900X & i9-9900K are ~$500 CPUs. This is clearly a high-end build. OP, since you're pairing with a graphics card anyway, you might be interested in the i9-9900KF, which is a tad cheaper since it's the same chip with the on-board graphics disabled.
Remark #1:
That said, for almost all applications, single-core performance is going to be the bottleneck. Most software isn't designed to be multi-threaded, and even things that are still fall victim to Amdahl's law. Parallel computing is no panacea for computer performance: imagine the total number of tasks it takes to process a turn of Civ. Some of these tasks can be computed at the same time; some can't, because they require information from previous tasks. Even if 50% of the workload were parallelized with no overhead cost and infinite cores, you'd still only speed up performance by 2x.
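The Amdahl's-law math above can be sketched in a few lines of Python (the 50% figure and core counts are purely illustrative):

```python
# Amdahl's law: speedup is capped by the fraction of work that stays serial.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when `parallel_fraction` of the workload scales
    perfectly across `cores` and the rest runs on a single core."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# 50% parallelizable: even with an absurd core count, speedup approaches 2x.
for cores in (2, 8, 16, 1_000_000):
    print(cores, round(amdahl_speedup(0.5, cores), 3))
# 2 -> 1.333, 8 -> 1.778, 16 -> 1.882, 1,000,000 -> 2.0
```

Note how quickly the curve flattens: going from 8 to 16 cores buys almost nothing once the serial half dominates.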

TLDR: even applications that support multithreading still have huge blocks of code that have to run on a single core, and most applications that do support multithreading aren't leveraging unlimited numbers of cores. Higher clock speed (GHz) and how efficient the processor is at handling tasks (some optimization can be done so fewer clock cycles are needed for the same instructions) directly affect both single-core performance and parallel workloads. You need a lot of extra cores actually being utilized before they start making up for lower per-core efficiency. While AMD has certainly made leaps, Intel has generally shown they still have the most efficient core designs on the market.

Remark #2:
Everything I just said about how good those cores are at crunching numbers goes out the window when the CPU itself needs to go get more data from RAM, or, god forbid, from disk. In CPU terms this takes an eternity: a well-greased stick of DDR4 2666 RAM might have a latency of 10 nanoseconds. Do you know how long that is? That i9-9900K can hit around 5 GHz on a single core, i.e. a clock cycle of 0.2 nanoseconds. Getting a single byte of information from even ultra-high-performance RAM is going to cost you about 50 CPU clock cycles. The way CPUs get around this is by implementing what are called "caches," where they keep some data nearby so they don't always have to go out to RAM. The downside to a bigger cache is that it takes longer to access (this is true for any type of addressed memory), so what chipmakers did was layer them: the cores pull from a small, fast cache, which in turn pulls from a larger, slower one. Hence we see L2 and L3 cache numbers quoted on the spec sheet for CPUs. Like all things, bigger is better (up to a point). This is the primary advantage of the Ryzen, its caches are huge, and likely the source of why it sometimes beats Intel and sometimes loses, imo. (That, and it's on a 7nm process instead of 14nm like Intel's currently is. That smaller size greatly improves any design.)
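To put rough numbers on that stall, here's the arithmetic, using the figures above (10 ns RAM latency, 5 GHz core); real round-trip latency through the memory controller is typically a good deal higher:

```python
# Cycles a core spends waiting on a memory access:
# one cycle takes (1 / clock_ghz) ns, so wait = latency_ns / (1 / clock_ghz).
def stall_cycles(latency_ns: float, clock_ghz: float) -> float:
    return latency_ns * clock_ghz

print(stall_cycles(10, 5.0))  # 50.0 cycles burned on a single RAM access
```

Fifty wasted cycles per cache miss is exactly why those big L2/L3 caches matter so much for a branchy, data-heavy workload like a Civ turn.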
A side comment on future-proofing
My initial thought was Ryzen because of the higher core count and the fact that it is somewhat future-proof for when I upgrade my GPU, with the X570 boards utilizing PCIe Gen 4. But Intel always seems to get better gaming performance.
Future-proofing is a funny subject. In my experience, going to the higher end of the range can often be a decent investment, such as, say, splurging on an i9 K-series. But paying the further premium to move from the i9-9900K to the i9-9900X is often not worth it, because at the very top end you pay a huge premium to the chipmaker for marginal gain. (Edit: I mean by this that the absolute best chips are expensive because they are hard to make, not because they are so superior in performance.)

The real bet right now for AMD vs Intel at this price point depends on whether you think Intel's response to AMD is going to be a repeat of the 2000s, when the CEO's stated goal was to "bury AMD." This 9th gen was pretty rushed out the door because Ryzen has been a real wake-up call to stop being lazy, but Intel certainly has the resources to become untouchable again. For example: Ryzen is on a 7nm process while Intel's is on 14nm. I would guarantee Intel could produce a 7nm chip before AMD moves to a smaller process node, and just making the i9 on 7nm would give it a nice bump in performance. The reason this matters is that the two processors use different sockets; but if you'll buy a new mobo in a few years anyway, then it doesn't matter.
 
What's not been mentioned yet are the limitations of Civ VI itself, and unfortunately there's nothing you can do about these. So to answer the question in bold at the end of your post: upgrading your i5 will help to a limited degree, but there are intrinsic issues that we cannot address.

The following issues will occur even if you're using a top-of-the-line, extremely expensive build.

- Increased turn times as the game goes on are simply unavoidable. If you're using a modded big map with 30+ civs, no amount of expensive hardware will make for a perfectly smooth experience.

- The game engine can no longer handle modded large map sizes after Gathering Storm. Most sizes greater than the vanilla Huge will crash when the sea levels rise. Doesn't matter if you have a $2000 graphics card.

- Games take longer and longer to load from the main menu with more mods active. Doesn't matter how fast your RAM is or anything else - the game takes time to add the modded content to its database.

- Mods with a lot of Lua scripting may cause multiplayer desyncs as well as lag throughout the game or increased turn times. These scripts often have to check the game for certain conditions every turn, and the more of these functions the game has, the longer things take.

- Experiences with certain mods like the excellent Terra Mirabilis have revealed limitations in the game's capacity to handle large amounts of new "modifiers" and the "requirements" that trigger them.

And so on. My ultimate point is that unfortunately, poor performance with extremely large maps and excessive amounts of certain mods (especially Lua mods) is inevitable.
 
I don't want to quote @Sostratus's entire post, because it's too long. But, a few points.

1. Intel's "equivalent" to TSMC's 7nm process is their 10nm process, which is currently in a revised 10nm+ form. This is the process upon which Ice Lake mobile CPUs are built. Intel has no plans to use this process for at least the next two sets of desktop CPUs (Comet Lake, Rocket Lake). So, don't expect anything other than 14nm this year and don't be surprised if you're still getting 14nm next year.

2. That said, who cares? What really matters is overall performance of the CPU. The process technology contributes to that, but ultimately, it's academic. So far, Intel still holds a small lead in most gaming benchmarks because of the very high clock rates on the 9900K. AMD's best processors are very close, though.

3. If you're planning to play at a resolution above 1080p, then either processor is likely to give you equal results because at that point, the GPU usually becomes the limiting factor.

4. DDR4 2666 isn't really high end. It's the maximum officially supported speed for Intel's current processors (AMD's processors support 3200). In either case, you can go higher without spending much more. The sweet spot is probably around 3200-3600, leaning toward 3600 if you go AMD. @Sostratus's point about latency still stands, though. And, as you increase frequency, you usually also increase the rated CAS latency in cycles, so the real-world latency in nanoseconds barely moves.

5. Don't worry about PCIe4 graphics cards or whatever. The current cards don't even utilize the full bandwidth of PCIe3. The real benefit of having PCIe4 lanes is for the M.2 disks that will soon take advantage of the extra bandwidth. We aren't really there, yet.
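The latency arithmetic behind point 4 is simple enough to sketch: DDR4 transfers twice per memory clock, so first-word (CAS) latency in nanoseconds is CL divided by half the transfer rate. The kit speeds and CL values below are typical retail examples, not specific recommendations:

```python
# DDR4's memory clock in MHz is transfer_rate (MT/s) / 2, so
# CAS latency in ns = CL / (transfer_rate / 2) * 1000 = CL * 2000 / rate.
def cas_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    return cl * 2000 / transfer_rate_mts

# Faster kits usually carry a higher CL, so wall-clock latency barely moves.
print(cas_latency_ns(16, 2666))  # ~12.0 ns
print(cas_latency_ns(16, 3200))  # 10.0 ns
print(cas_latency_ns(18, 3600))  # 10.0 ns
```

This is why chasing raw frequency alone is a wash for latency-bound workloads: the extra bandwidth helps, but the nanoseconds-to-first-byte stay roughly constant across sensibly binned kits.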

I ultimately went with the 3950X because I just couldn't pass up on having 16 cores. It's plenty fast enough to play whatever modded game you can come up with.
 