[TL;DR: It might help slightly in some circumstances, but CiV is not a good contender for benefitting from a RAM drive]
One reviewer said turns are 5-10x faster.
There's a word for that, but I think it's against this forum's T&Cs.
So, there are some very specific circumstances in which a RAM drive is useful; this is a general overview, and might sound somewhat patronising, but I want to be sure the background is clear.
There are numerous factors that affect the performance of a process. Some of the most influential ones for games are:
* Processing power, which boils down to number of CPUs/cores, and their speed
* Disk/SSD read speed
* Disk/SSD access or seek time - for an SSD the access time doesn't change much dependent upon the distance between two pieces of data, but on a disk this is a big deal
* Disk/SSD write speed - rarely important in gaming, but can make a difference to save speed in some circumstances
* GPU processing power
* Amount of memory
* Amount of video memory
* Speed of memory - rarely of interest outside microbenchmarks
* Speed of transferring from system memory to video memory - can sometimes have a real effect, but mostly only when there's not enough video memory, so it's constantly being swapped in and out
Ideally, an imaginary perfect program perfectly optimised to run on a given computer will make completely even use of all resources, so that no one aspect is waiting for any other. So, for example, let's say you're trying to load some data from disk, process it into the format that the video card wants in order to use it as a texture (if it's an image), or a mesh (if it's a model), or whatever, and send it to be displayed:
In a perfect world you would load one chunk from the disk; then process it while loading the next chunk; then transfer it to video memory while processing the next chunk and loading the next-next chunk; then display it while transferring the next chunk to video memory, processing the next-next chunk, and loading the next-next-next chunk; and so on. All this time, every part of the system would be working at 100%, with every other part finishing its current job at the exact moment that it gets new work to do.
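For the curious, the overlap described above is just a classic producer/consumer pipeline. Here's a toy sketch; the doubling stands in for real processing, and appending to a list stands in for the copy to video memory:

```python
import queue
import threading

# Toy three-stage pipeline: while one chunk is being "uploaded", the next
# is being processed and the one after that is being read, so no stage
# has to sit idle waiting for the whole of the previous stage to finish.
def pipeline(chunks):
    to_process = queue.Queue(maxsize=1)
    to_upload = queue.Queue(maxsize=1)
    results = []

    def reader():
        for chunk in chunks:
            to_process.put(chunk)
        to_process.put(None)  # sentinel: no more chunks

    def processor():
        while (chunk := to_process.get()) is not None:
            to_upload.put(chunk * 2)  # stand-in for real processing
        to_upload.put(None)

    def uploader():
        while (chunk := to_upload.get()) is not None:
            results.append(chunk)  # stand-in for the copy to video memory

    stages = [threading.Thread(target=s) for s in (reader, processor, uploader)]
    for s in stages:
        s.start()
    for s in stages:
        s.join()
    return results

print(pipeline([1, 2, 3]))  # [2, 4, 6]
```

The bounded queues (`maxsize=1`) are what make it a pipeline rather than "read everything, then process everything": each stage can only run one chunk ahead of the next.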
In practice, this never happens. What actually happens is that the processor issues a read request and then waits... then the disk eventually reads a block of data in a big chunk (this is dramatically improved with SSDs, since they don't have the extremely expensive seek times of a disk, but the access time is still never zero). Then, depending on how much processing needs to be done, either the disk finishes reading the next chunk and sits around waiting for the processor to do something with it, or the processor finishes its work, sends the data to the video card, and then sits around waiting for the disk to provide the next chunk. In both cases, if the amount of data the CPU can process per second is higher than the amount of data the storage can provide per second, then the storage becomes the bottleneck and the rest of the process waits for it.

With a disk, the data rate is highly dependent upon whether each read follows directly after the previous one or requires a seek to another part of the disk - it might manage, say, 90MB/s for reading large chunks of sequential data, but perhaps a thousandth of that for small requests that go back and forth all over the platter. For an SSD the sequential rate might be several times that, and the random rate will be far, far higher than a disk's. (Note that some people simplify by saying that for an SSD it doesn't matter at all whether reads are random or sequential; this isn't strictly accurate, but the slowdown is so much smaller than with a disk that it's close enough in many practical scenarios.)
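If you want to see the sequential-versus-random gap on your own hardware, a rough sketch along these lines will show it. Bear in mind this is deliberately unscientific: file size, chunk size, and OS caching all muddy the numbers, so treat them as indicative only.

```python
import os
import random
import tempfile
import time

# Read the same total amount of data from a scratch file, first in order,
# then at shuffled offsets, and compare wall-clock times. On a spinning
# disk the gap can be orders of magnitude; on an SSD it's much smaller.
# OS caching will mask the effect on repeat runs.
CHUNK = 4096
COUNT = 256

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(CHUNK * COUNT))
    path = f.name

def read_at(offsets):
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    return total

sequential = [i * CHUNK for i in range(COUNT)]
shuffled = sequential[:]
random.shuffle(shuffled)

t0 = time.perf_counter()
n_seq = read_at(sequential)
t1 = time.perf_counter()
n_rnd = read_at(shuffled)
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.4f}s, random: {t2 - t1:.4f}s")
os.remove(path)
```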
I've focussed on the aspect of loading from storage, as this is obviously the bottleneck that a RAM drive is meant to address. Ultimately, if you can't reduce the amount of data you need to load, then you can really only do one thing: ensure that all accesses are as sequential as possible in order to get the best performance from the hardware. One way to do this as a program author is to optimise your resource loading to grab big chunks of data in order. If you don't have any control over the program's design, though, there's really only one option: preload all of that data sequentially in big chunks, and store it somewhere where access times are so small as to be inconsequential - i.e. main memory - then use that to satisfy any requests from the application, rather than having to go back to the disk. And if you keep that data around rather than discarding it as soon as the request has been satisfied, then you can respond immediately to any future requests for the same data.
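As a toy illustration of that preload-then-serve idea (the paths and data are hypothetical stand-ins for a game's resource files):

```python
import os
import tempfile

# Toy version of "preload everything sequentially up front, then serve
# all later requests from memory instead of the disk".
class PreloadCache:
    def __init__(self, paths):
        self._data = {}
        for p in paths:
            with open(p, "rb") as f:
                self._data[p] = f.read()  # one big sequential read per file

    def read(self, path):
        return self._data[path]  # served from memory; the disk isn't touched

# Demo: once preloaded, requests succeed even after the file is gone.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"texture bytes")
    p = f.name
cache = PreloadCache([p])
os.remove(p)
print(cache.read(p))  # b'texture bytes'
```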
Your operating system does this for you, to an extent: when a request is made to a storage device, it will typically read somewhat further ahead than it was asked to, on the (usually correct) assumption that that's what will be requested next. The effect is particularly pronounced on a disk, as reading a little more from the same place is practically free compared to seeking to another part of the disk. Furthermore, as long as there is memory free, the OS will keep a cached copy of whatever data it thinks is most important - which might be judged by recency or frequency of access.
Using a RAM drive is an attempt to tailor this behaviour to your exact needs: you know that you're about to be running a game, so you manually preload all of its data into memory, without waiting for the game to do it, then you use that to satisfy the game's requests. Hence,
* if the game spends more time waiting for data to load from storage than actually doing anything with it, and
* if it's loading it in a suboptimal manner that can be improved by first preloading it sequentially, and
* if you have enough memory to dedicate to storing the preloaded copy without the system having to resort to the disk anyway, and
* if that leaves enough memory free for the game to get all it wanted,
then you might see a measurable performance improvement.
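For completeness, on Linux a RAM drive is just a tmpfs mount, along these lines; the sizes and paths are illustrative, and on Windows you'd need a third-party tool such as ImDisk instead:

```shell
# A tmpfs mount is effectively a RAM drive - its contents live in memory
# (and can be swapped out under pressure). Size and paths are examples.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk

# Preload the game's data onto it in one big sequential pass...
cp -r ~/games/civ5 /mnt/ramdisk/

# ...then point the game at the copy (e.g. via a symlink).
```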
You can get a vague idea of what the effect might be fairly easily: time how long it takes to start the game and load a save, then close the game and repeat the process. For bonus points, repeat a few times. What will happen is that the game loads all of its data, then if you have enough free memory for the OS to think it's worthwhile it will save that data in memory ready for the next time. Repeating the process is just in case the OS decides the first time that it wasn't important enough to keep compared to whatever you have cached already, in which case accessing it repeatedly acts as a hint that it's more likely to be needed in the future.
If the effect is a distinctly improved loading time, then you know that the storage was the bottleneck the first time, and thus you might get a benefit from using a RAM drive.
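The same experiment in miniature: time a large read, then time it again immediately. A much faster second read means the OS cache - the same mechanism a RAM drive exploits deliberately - kicked in. (Since the sketch writes the file itself, even its "first" read is likely warm; a genuinely cold timing needs a cache flush or a reboot.)

```python
import os
import tempfile
import time

# Miniature version of the load-twice experiment: read a largish file,
# then read it again and compare timings. File size is arbitrary.
SIZE = 16 * 1024 * 1024  # 16 MB

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE))
    path = f.name

def timed_read():
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

first_time, n1 = timed_read()
second_time, n2 = timed_read()
print(f"first: {first_time:.4f}s, second: {second_time:.4f}s")
os.remove(path)
```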
In my case, it makes no difference. If I monitor my system while CiV is loading, I can see that there's always at least one CPU core pegged while the rest of the system is largely idle, and the average storage queue length (i.e. the number of outstanding unsatisfied requests at any given time) is well below 1. The game is therefore CPU-bound almost all of the time while loading, and it makes no difference how fast my storage is. This might be different for you if you have a very fast CPU but a very slow storage device (I have an i5-3570k and a blazing-fast SSD), but I have a hard time believing it will be much different unless your system is seriously unbalanced.
If you do decide to give it a go (and bear in mind that you might have other games that are likely to get more of a benefit) and you find that I'm wrong about this in practice, then I would very much like to hear about it as it might indicate that I should investigate better ways of monitoring the loading process.