
Smoothing the combat curve

Discussion in 'Civ4 - Fall from Heaven' started by xanaqui42, Sep 5, 2007.

  1. MagisterCultuum

    MagisterCultuum Great Sage

    Joined:
    Feb 14, 2007
    Messages:
    16,108
    Location:
    Kael's head
    So long as a unit can only attack once per turn (without blitz), then this is a terrible idea. If units were allowed multiple attacks but suffered some penalty for attacking multiple times without the blitz ability (iirc, that's kind of how it worked in Alpha Centauri, but I haven't played that in years so I could be wrong) then it could be quite interesting.
     
  2. Gamestation

    Gamestation Introducing Servo

    Joined:
    May 24, 2006
    Messages:
    551
    Agreed that it would be terrible if everyone could only attack once per turn and everyone could take more than one attack before dying; wars would take forever then, especially with stacks and healing in between. That system from SMAC (?) that you refer to could certainly make things interesting. I was also thinking that only certain units should have this "extra life" concept, though - units with heroic and great commander, for example. Of course, now that I think about it, that would be like putting these units on a whole other plane.
     
  3. psychoak

    psychoak Warlord

    Joined:
    May 15, 2007
    Messages:
    159
    I didn't say that. I'm thinking somewhere around 2-1 strength it should be impossible for a unit to die to that weaker opponent. I don't want two warriors to spend multiple turns trying to kill each other, I want to be able to use a hero in something besides a 100% combat and be reasonably safe. I want to lose a key unit because I was stupid or had to use it in an unsafe combat, not because there are no unsafe combats. If you cap it by attack rounds, someone with better understanding of the combat code could tell you what the odds of an even fight not having a resolution would be depending on the limitation. If you cap it by successful strikes by either party, it's guaranteed to resolve to whatever point you're stopping at, and never after.
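
    A minimal sketch of the cap-by-successful-strikes idea (my own illustration; the unit struct and the per-round odds formula are assumptions, not actual game code):

    #include <cstdlib>

    // Hypothetical combat loop that ends once either side lands maxStrikes
    // successful hits. Unlike a cap on total combat rounds, this always
    // resolves: every round exactly one side's counter advances.
    struct Unit { int strength; };

    bool attackerWins(const Unit& att, const Unit& def, int maxStrikes)
    {
        int attHits = 0, defHits = 0;
        while (attHits < maxStrikes && defHits < maxStrikes) {
            // odds of the attacker winning this round, scaled to 0..999
            int odds = att.strength * 1000 / (att.strength + def.strength);
            if (std::rand() % 1000 < odds)
                ++attHits;   // attacker lands a strike
            else
                ++defHits;   // defender lands a strike
        }
        return attHits >= maxStrikes;  // first to maxStrikes wins
    }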

    In vanilla it's not so terribly bad, you lose a 98%er and the defending unit gets a couple levels, no biggy. You lose a 98%er here and you just doubled the strength of that unit, maybe better. Having heroes and upgraded units you can't afford to lose to them just makes it all the worse. Yesterday I was playing and won a .3, 3.6 and 7.6% combat in the same turn with a 6 warrior stack in a rush. Having three warriors that highly promoted was the end of the ai, it could just as easily have been the end of the war if I'd lost three combats that easy with my initial strike.
     
  4. xanaqui42

    xanaqui42 King

    Joined:
    Sep 5, 2006
    Messages:
    780
    I assume that you've looked at this thread and this mod, and found them insufficient. They don't limit the number of combat rounds, but they do immensely decrease the chance of the stronger unit losing a single combat.
     
  5. psychoak

    psychoak Warlord

    Joined:
    May 15, 2007
    Messages:
    159
    The increased health makes turns take even longer, and I'm barely patient enough to play through a game as it is. I don't mind randomness either; I'd just prefer to lessen the screwed factor in it. Hyborem not killing the rider won't make me pull my hair out and hit things, just Hyborem dying to the rider. Predictability is just as bad as getting screwed by luck, unfortunately.
     
  6. Hanny

    Hanny Prince

    Joined:
    Sep 3, 2007
    Messages:
    343
    Location:
    IOW UK
    Another couple of shekels...


    Yes, I understood that was what you wanted; my point was that the work involved hardly seems justified by the expected end result.

    Firaxis are not dumb; they did it this way for a reason. My best guess is that the AI is primed to get to steps by its promotion goals; in the combat model this gives it a unit that *thinks* it has a better chance because of the step increase system, resulting in more attacks from those with a step advantage over other units.

    Sure, from a human point of view you rotate your units to max out step acquisition and gain an advantage, but the AI does not do this, so any perceived advantage you're working towards will, it appears to me, be in multiplayer, not solo.
     
  7. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    xanaqui42:
    Here are my thoughts; my apologies if some bits are a bit disjointed, as this is a compilation of a few different thoughts over the last few days. This is just giving the idea of an implementation method; we should double-check the formulas on the chosen method of implementation. Sometimes I didn't use hundredths notation, but this can trivially be changed to hundredths, which would be faster than floating point.

    I wonder if we need to implement both absolute and relative ranges simultaneously. I think we only really need to implement one: the absolute or the relative random factor. They both serve a similar function, just with a slightly shifted probability distribution depending on the method chosen.

    They are actually interchangeable in a way; we just need to calculate a different range depending on which system we are using.

    Probability_absolute_maximum + Random_absolute_maximum = Probability_relative_maximum * Random_relative_maximum

    So if we make them the same, i.e. Probability_absolute_maximum = Probability_relative_maximum = P,

    then it follows that
    Ra = P*(Rr - 1)

    Naturally, absolute and relative would operate slightly differently in practice, but I think we will only really need to implement one of the options. Once the relative or absolute range is worked out, it can easily be converted into the other calculation system. Hence we can do the calculation with either method, and set the dynamic range according to the ratio.

    Personally I am leaning towards a relative factor, so when you are 25% of the way from one midpoint to the next jump point there would be a 25% chance of needing one extra hit. This would apply the same on both sides, although it does make the midpoint and range calculations a bit different, as Minimum_damage = Base_damage/D and Maximum_damage = Base_damage*D.

    Therefore for D > 1:
    1/D < Random_value < D
    But we can't use a simple uniform random factor: e.g. if D = 1.1 then 0.90909 < Random_value < 1.1,
    so the mean = 1.00455,
    meaning that we slightly shift the mean number of hits required, by about a +0.4% chance of needing an extra hit. This is insignificant and could be ignored, but being the perfectionist I am, I came up with another way to get the correct distribution.

    Float Random = new random value from the ranges [-D, -1] or [1, D]; // i.e. uniform over -D to D but excluding (-1, 1); or you could use multiple dice

    If (Random > 1)
    Damage = base_damage*Random;
    Else
    Damage = -base_damage/Random; // the negative signs will cancel out

    Hence we get a random distribution centrally located around 1.
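
    A minimal runnable sketch of that sampling trick (my own illustration, not mod code; drawing a magnitude plus a coin flip gives the same distribution as the two-range draw above):

    #include <random>

    // Draw a multiplicative damage factor that is symmetric around 1, in the
    // sense that a factor of x and a factor of 1/x are equally likely:
    // draw a magnitude uniformly from [1, D], then a fair coin decides
    // whether to multiply or divide.
    double symmetricDamageRoll(double baseDamage, double D, std::mt19937& rng)
    {
        std::uniform_real_distribution<double> magnitude(1.0, D);
        std::bernoulli_distribution invert(0.5);
        double m = magnitude(rng);
        return invert(rng) ? baseDamage / m : baseDamage * m;
    }

    This balances the chance of needing an extra hit exactly against the chance of needing one fewer, which is the point of centring the distribution on 1.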


    I'm wondering if setting up matrices for the AI to instantly look up would be the simplest and fastest way to implement this.

    I'm actually thinking we could set the range depending on where we are in the ratio range. Matrices could be good for this.
    JP[] = {100, 50, 33.33, 25, 20, 16.67, 14.29, 12.50, 11.11, 10.00, 9.09, 8.33, 7.69}
    Or JP[N] = round(100/N)
    JP[N+1] = JP[N]*N/(N+1)
    This gives our jump point values JP[]/100

    And a midpoint:
    M[N] = JP[N]*Rmin[N] = JP[N+1]*Rmax[N+1]


    Now look at the midpoint (M) between the jump locations (D = range of the random factor, as you defined in your post).

    The midpoints between the jumps need to match (note: this is relative, not absolute midpoints). Each midpoint M[N] sits between JP[N] and JP[N+1], and the smoothing range has to reach it from both sides:
    M[N]*(1 + D[N]/2) = JP[N]
    M[N]*(1 - D[N]/2) = JP[N+1] = JP[N]*N/(N+1)
    Dividing the two gives (1 + D[N]/2)/(1 - D[N]/2) = (N+1)/N, so D[N] = 2/(2N+1) and M[N] = (JP[N] + JP[N+1])/2.

    For example, between JP[5] = 20 and JP[6] = 16.67:
    M*(1 - D/2) = 16.67
    M*(1 + D/2) = 20
    D = 0.182
    M = 18.33
    etc
    so we get another couple of matrices (you can turn them all into "int hundredths" if that's easier)
    D[] = {0.66, 0.4, 0.29, 0.222, 0.182, 0.153} etc
    M[] = {75, 41.66, 29.17, 22.5, 18.33, 15.48} // etc. Note: these are the absolute midpoints; the matrix should actually be made up of the relative midpoints, not the absolute midpoints
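
    A small sketch of how those tables could be precomputed (my own illustration; the names are assumptions, but the computed values reproduce the D[] and M[] tables above, since D[N] works out to exactly 2/(2N+1)):

    #include <vector>

    // Precompute jump points JP[N] = 100/N, the absolute midpoints M[N]
    // between consecutive jump points, the absolute half-ranges
    // C[N] = JP[N] - M[N], and the relative half-widths D[N] = 2*C[N]/M[N].
    struct SmoothingTables {
        std::vector<double> JP, M, C, D;
    };

    SmoothingTables buildTables(int maxN)
    {
        SmoothingTables t;
        t.JP.assign(maxN + 2, 0.0);
        t.M.assign(maxN + 1, 0.0);
        t.C.assign(maxN + 1, 0.0);
        t.D.assign(maxN + 1, 0.0);
        for (int n = 1; n <= maxN + 1; ++n)
            t.JP[n] = 100.0 / n;                    // jump point for n hits
        for (int n = 1; n <= maxN; ++n) {
            t.M[n] = (t.JP[n] + t.JP[n + 1]) / 2;   // absolute midpoint
            t.C[n] = t.JP[n] - t.M[n];              // absolute half-range
            t.D[n] = 2.0 * t.C[n] / t.M[n];         // relative half-width
        }
        return t;
    }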

    So for our example of 3 Vs 4
    R=1.333
    Da=23.07
    Dd=17.33
    Na=rounddown(100/Da+.5)=4
    Nd= rounddown(100/Dd+.5)=6
    I'm rounding down just out of convention, and then calculating Na and Na+1 to get the two options coming from the single variable.

    M[Na]=29.17
    M[Na-1]=22.5

    M[Nd]= 18.33
    M[Nd-1]= 15.48

    D[Na]=0.29
    D[Nd]=0.18



    One idea:
    For calculating the probability of victory, we could use the fact that each hit-ratio curve seems to be a linear graph (or almost linear).

    We could do the AI calculation based on a lookup table system; each curve can be treated like a linear function on the graph, like any standard linear function: Pv(r) = Q*r + H

    Actually I like writing linear equations in the form Y = M*X + C, but I'll use the variables you defined in your post, and since you've already defined the variable C I'll call the offset H so we don't get our variable references mixed up.

    Where
    Pv=probability of victory
    Q=slope of the graph
    H=vertical offset
    R=combat ratio

    We can calculate the line of the graph for each combat ratio as described in the war academy article
    Now the graph has a different line for each different A, B.
    BTW, is the slope the same for different numbers of first strikes? It looks similar on the graph, just with a vertical offset.
    The number of first strikes is the same for all scenarios within a given combat, so I'll treat it like a constant here (and maybe leave it off some of the notations).
    We have prior knowledge of the basic lines from the war academy article; we could store the information for these lines in a couple of matrices.

    Pv(A,D,S,r)=Q[A][D]*r + H[A][D]

    Actually the matrices could probably be simplified, as there would only be certain combinations called, where A = 100/D±1, and first strikes -4 <= S <= 4.

    So it is a simple linear function call for each of the 4 scenarios.
    I.e. in the example above of 3 vs 4, we work out there is a chance of the attacker needing 4 or 5 hits, and the defender needing 6 or 7 hits,
    for R = 1.333.

    So we have
    Pv[5][6] = 1.333*Q[5][6] + H[5][6] = (data from linear graph equation)
    Pv[5][7] = 1.333*Q[5][7] + H[5][7] = (data from linear graph equation)
    Pv[4][6] = 1.333*Q[4][6] + H[4][6] = (data from linear graph equation)
    Pv[4][7] = 1.333*Q[4][7] + H[4][7] = (data from linear graph equation)


    If setting the ranges as described in previous posts, then there are only 4 scenarios, so the calculations are limited to looking up 4 lines from the matrices.


    Note: below, I define this function to return an int value, max 100, giving the percentage probability; i.e. the probability is Ph(R,C)/100.

    Now we can define a function for the probability of a certain number of hits; we'd have something like
    Int Ph(R,C) or Int Ph(R,D) (depending on the method chosen)
    Where
    Ph = probability of needing that many hits
    R = ratio
    C = range of absolute damage
    D = range of relative damage

    This returns the probability of N = Round_down(5/R + .5) hits, i.e. N = round(100/Base_Damage) with Base_Damage = 20*R.
    The alternative outcome is N+1 hits.

    Now if there are two options (N or N+1 hits) then they are mutually exclusive, so the probability of one more hit (N+1) is 100 - Ph(R,D). Hence the function only needs to be called once, not twice, and we don't need to pass N into the function, as that is implied when we pass R.

    Note: you can write all this in hundredths; I am just writing it here as a float so I don't need to think about the hundredths multiplier while thinking about the concept.
    Note: as a second thought, it could also be more efficient to pass N etc. into the function as a variable instead of re-calculating it every time.

    Note: in a single-die situation the function could look something like this; a Gaussian-like distribution can be added with more dice. I probably wouldn't go over 2 dice, and I wonder if that's even necessary.

    Note: maybe passing D/2 would be faster, to save dividing by 2 all the time. Minimum damage = Base_Damage/D and maximum damage = Base_Damage*D, so the total spread of damage becomes Base_Damage*(D - 1/D).

    Pseudocode along the lines of this:

    Int Ph(float R, float D)
    {
    N = Round_down(5/R + .5); // which jump point region we're in: round(100/Base_Damage)
    Base_Damage = 20*R;
    JP[N] = round(100/N); // this could be faster to look up from a pre-defined matrix instead of calculating it every time
    Required_roll = JP[N] / Base_Damage;
    If (Required_roll > 1)
    Probability_of_N_hits = 50 + 50*(Required_roll - 1)/(D - 1);
    Else
    Probability_of_N_hits = 50*(D*Required_roll - 1)/(D - 1); // at the minimum point D*Required_roll = 1, at the midpoint D*Required_roll = D

    Return Probability_of_N_hits;
    }

    So then we put it all together, using a matrix with the data stored for the linear relation of combat chances depending on R for each specific hit requirement.

    Giving an equation along the lines of

    Pv[Na][Nd]*Ph(R,D[Na])*Ph(1/R,D[Nd]) + Pv[Na+1][Nd]*(100-Ph(R,D[Na]))*Ph(1/R,D[Nd]) + Pv[Na][Nd+1]*Ph(R,D[Na])*(100-Ph(1/R,D[Nd])) + Pv[Na+1][Nd+1]*(100-Ph(R,D[Na]))*(100-Ph(1/R,D[Nd]))

    This can be expanded into the lookup table matrices defined from the graph:
    (R*Q[Na][Nd] + H[Na][Nd]) * Ph(R,D[Na]) * Ph(1/R,D[Nd])
    + (R*Q[Na+1][Nd] + H[Na+1][Nd]) * (100-Ph(R,D[Na])) * Ph(1/R,D[Nd])
    + (R*Q[Na][Nd+1] + H[Na][Nd+1]) * Ph(R,D[Na]) * (100-Ph(1/R,D[Nd]))
    + (R*Q[Na+1][Nd+1] + H[Na+1][Nd+1]) * (100-Ph(R,D[Na])) * (100-Ph(1/R,D[Nd]))

    This would require two calls of the previous function (actually, calling the function prior to the operation might be faster, so the information from Ph(1/R,D[Nd]) can be reused for 100 - Ph(1/R,D[Nd]), which are the two possibilities), assuming that the smoothing function doesn't extend over more than 2 hitpoint outcomes.

    This means that the function o(x) as you defined in your post wouldn't ever have to be called, just extraction of data points from the matrices which define the probabilities for each combat scenario. Hence this could possibly even be faster than the current O(x) or O(X+AB+F*Glg(F)).
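
    A sketch of how those four lookups could be combined (my own illustration; Ph is the smoothing function from the pseudocode above, while the table size, the Drel array, and the signatures are assumptions):

    // assumed from the pseudocode above: returns hundredths, 0..100
    int Ph(double R, double D);

    const int MAXN = 16;  // assumed table size; Na+1 and Nd+1 must stay below it

    // Combine the four hit-count scenarios. Q and H hold the linear
    // approximation Pv = Q*R + H of each scenario's victory odds (in
    // hundredths); Drel holds the per-region relative smoothing ranges.
    double overallVictoryOdds(double R, int Na, int Nd,
                              const double Q[MAXN][MAXN],
                              const double H[MAXN][MAXN],
                              const double Drel[MAXN])
    {
        int pA = Ph(R, Drel[Na]);        // chance the attacker needs Na hits
        int pD = Ph(1.0 / R, Drel[Nd]);  // chance the defender needs Nd hits

        double odds =
            (R*Q[Na][Nd]     + H[Na][Nd])     * pA         * pD +
            (R*Q[Na+1][Nd]   + H[Na+1][Nd])   * (100 - pA) * pD +
            (R*Q[Na][Nd+1]   + H[Na][Nd+1])   * pA         * (100 - pD) +
            (R*Q[Na+1][Nd+1] + H[Na+1][Nd+1]) * (100 - pA) * (100 - pD);

        return odds / 10000.0;  // the two hundredths weights contribute 1/10000
    }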

    What are your thoughts?
     
  8. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    I don't think Firaxis thought about jump points when designing the combat calculations (although they would have thought about the chances of a weaker unit defeating a stronger unit). Jump points are a side effect of rounding to an integer number of hits required, just like I don't think Firaxis purposely designed the optimum city spacing for a ring of no-distance-maintenance cities in Civ3, or any of the other bugs that come out of rounding. The fact is that there are so many oddities in Civ due to rounding errors, and it has been obvious that Firaxis has been aiming at reducing these rounding anomalies in the change from Civ3 to Civ4, thus making the game cleaner by removing loopholes in the calculations. Or maybe we're just geeks, and can't resist finding a solution when there is a problem that needs to be solved. :D
     
  9. Hanny

    Hanny Prince

    Joined:
    Sep 3, 2007
    Messages:
    343
    Location:
    IOW UK
    Was it not a geek who invented whatever was required by necessity? Or was that a Greek... :)

    You may well be correct that it is an unintended consequence. That said, how do you envision the AI responding to your proposed change? Will it affect the AI's choice-making in combat determination drastically, or not noticeably?
     
  10. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    If we implement this correctly it shouldn't affect the AI choice-making drastically, as our intention is to keep approximately the same strength-ratio relations (and hence game dynamics) as the original combat, just with the jump points removed.

    If anything it should improve the AI decision-making by making it more accurate. As I understand it, the program has two combat calculation methods: the real one (which has the jump point oddities etc.) and a faster approximate one, which the AI uses, and which gets displayed to humans on right-click+drag. In reality, at the moment you could right-click+drag and it'll say 95% when it could really be 90% or something else.

    By removing the jump points and making the predicted outcome closer to the actual outcome we'll improve AI decision-making, and also help human decision-making, as the number displayed should be a more accurate representation of what your real chances are.
     
  11. xanaqui42

    xanaqui42 King

    Joined:
    Sep 5, 2006
    Messages:
    780
    Floating point is only a linear loss of speed, and a pretty minor one. I'm more concerned about the lack of accuracy. For example, 0.99 can't be represented exactly in floating point; 0.75 can. Only sums of powers of 2 (within the precision of the mantissa) can be represented exactly in floating point.
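
    A quick illustration of that (my own example):

    #include <cstdio>

    // 0.75 = 1/2 + 1/4 is a sum of powers of 2, so it is exact in binary
    // floating point; 0.99 is not, and picks up a small representation error.
    int main()
    {
        std::printf("%.20f\n", 0.75);  // prints 0.75000000000000000000
        std::printf("%.20f\n", 0.99);  // prints roughly 0.98999999999999999112
        return 0;
    }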

    I'm not following this. As a trivial example, let's assume that Probability_absolute_maximum = 1/3, Random_absolute_maximum = 0.01, Probability_relative_maximum = 0.05, and Random_relative_maximum = 1/11. These numbers don't appear to plug in correctly to the above equation.

    Note that "relative" and "absolute" as I'm using them are for damage. So if a creature is doing 10 points of damage average, +/- 0.01 absolute grants 9.99-10.01 points of damage; +/-0.01 relative grants 9.9-10.1 points of damage. If they are doing 30 points of damage average, the former becomes 29.99-30.01, and the later becomes 29.7-30.3. So basically, absolute does not scale with damage; relative does. I don't know which one we'll finally want, but I don't think that implementing both will be significantly more expensive than implementing one.
    This would work, although you'd have to be very careful to respect the number of bits of precision in the floating point. You'd still have problems at the edges with the extreme values not being precise. Note that these problems goes away in denominator-based math.


    I essentially agree. I'm planning to use maps to help optimize the most complex portion(s) of the equation(s). They are slower than an array, but there are substantial implementation advantages (and O(log X) is pretty easy to swallow).
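
    A minimal sketch of the kind of map-based caching that could be meant here (the key choice and all names are my assumptions):

    #include <map>
    #include <utility>

    // Cache odds results keyed by (rounds needed, per-round odds in
    // hundredths). std::map lookup is O(log X) in the number of cached
    // entries, versus O(1) for an array, but it needs no pre-sizing and
    // handles sparse keys cleanly.
    std::map<std::pair<int, int>, double> g_oddsCache;

    double cachedOdds(int roundsNeeded, int roundOddsHundredths,
                      double (*compute)(int, int))
    {
        std::pair<int, int> key(roundsNeeded, roundOddsHundredths);
        std::map<std::pair<int, int>, double>::iterator it = g_oddsCache.find(key);
        if (it != g_oddsCache.end())
            return it->second;               // cache hit
        double value = compute(roundsNeeded, roundOddsHundredths);
        g_oddsCache[key] = value;            // cache miss: compute and store
        return value;
    }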


    I think that you're correct - the apparent lines are actually lines.

    Correction: The above is false, save in the most trivial of cases. They can't be lines, any more than 3d6 grants a linear distribution of the values from 3..18 (effectively the ratio is a weight to the 2-sided dice). On the other hand, they likely look a lot like lines, at least on a fine scale.

    I like the basic idea, although you're focusing more than I would on speeding up some of the relatively fast code.
    This is exactly the point I'm presently hung up on - how to calculate the chances of various ranges of 2 or more dice efficiently. If you have a suggestion, please fire away :)
    This has gotten me to thinking: might it be worthwhile to combine the AI and non-AI versions of the combat odds, use the non-AI version as the standard version, and use lookup tables to speed up the calculations to speeds that the AI would like? That would actually be a mild AI improvement, at little (possibly negative overall) time cost, and a fairly small memory cost. I think I'll look into how difficult this would be to implement.

    I think that your suggestions would work, given your assumptions. The main thing is that I'd like to make it a bit more generic (so that for example, the code won't have problems when expected to blur 3 line segments on the graph, instead of 2). However, doing so is mostly a matter of looping.
    O(x) isn't a function; it's a notation. O is short for "order"; it's a simple way of describing the complexity of calculating an equation (So, as a trivial example, O(X^2) has higher complexity than O(X)). The canonical way to use O notation is to remove all non-dominant portions for useful values.
    X is needed to scan the units in the defender's plot for any Lifesparks (I don't see a way around this one). AB is needed if and only if we calculate separately for each combination of rounds of combat. F^(G-1) is needed if and only if I can't figure out a way that's much faster than the trivial way to figure out the probabilities of getting various rounds results (I'm thinking that a simple solution is using the trivial method, and pre-calculating, but I'm pretty certain that there must be a faster way).
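
    For reference, a sketch of the no-first-strikes core that any of these methods has to reproduce (standard negative-binomial combat odds; my own illustration, not the SDK code): each round the attacker scores with probability p, and must land A hits before the defender lands B.

    #include <cmath>

    // P(attacker wins) = sum over j = 0..B-1 of C(A-1+j, j) * p^A * (1-p)^j,
    // i.e. the defender lands j < B hits before the attacker's A-th hit.
    double attackerOdds(int A, int B, double p)
    {
        double odds = 0.0;
        double term = std::pow(p, A);  // j = 0 term: C(A-1, 0) * p^A
        for (int j = 0; j < B; ++j) {
            odds += term;
            // next coefficient: C(A+j, j+1) = C(A-1+j, j) * (A+j)/(j+1)
            term *= (double)(A + j) / (j + 1) * (1.0 - p);
        }
        return odds;
    }

    Presumably the F^(G-1) term above then comes from enumerating the first-strike outcomes on top of this.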
     
  12. xanaqui42

    xanaqui42 King

    Joined:
    Sep 5, 2006
    Messages:
    780
    Both versions have jump point oddities; they're present just from calculating the number of rounds (which is accurate in both). The AI version has the following problems:
    1) It doesn't calculate first strikes accurately - it just uses an approximation based on the average number from each unit.
    2) The final calculation that the AI uses is a linear approximation of the actual one.

    Going back to the post above, it seems that the following would be needed to calculate the odds exactly (this is prior to the proposed change):
    1) Rounds needed for the attacker
    2) Rounds needed for the defender
    3) odds of the attacker winning a round
    4) Minimum first strikes for the attacker
    5) Maximum first strikes for the attacker
    6) Minimum first strikes for the defender
    7) Maximum first strikes for the defender
    I think that this is too much information to store concisely; I'm getting something in the multiple megabyte range as a rough estimate for the storage of this data.

    On the other hand, it would help for those with memory to burn - perhaps I should make it optional?
     
  13. xanaqui42

    xanaqui42 King

    Joined:
    Sep 5, 2006
    Messages:
    780
    I was wrong - they aren't lines, save in the most trivial of cases - where there's only 1 round left for either side. The AI function presently approximates them as lines.
     
  14. xanaqui42

    xanaqui42 King

    Joined:
    Sep 5, 2006
    Messages:
    780
    Sorry if the thoughts are a bit disjoint :D
    I think I have a solution which is faster than the one I propose above. Essentially, I view each die as a finite dimension, in a geometry where only movements parallel to the directions of the dimensions are valid. Then, each value of the dice is a constant distance from the origin, and the sum of those and lesser values can be obtained through the integral under the curve of all points of that constant distance (back to the origin).

    The advantage of using such a geometry in N-space is largely that it makes it easy to visualize which components are useful from one calculation to the next....

    Wait - I'm being stupid, aren't I - these are just the N-space simplex numbers (triangular, tetrahedral, and so on), aren't they? In 2-space, with evenly sized dice, they'd be:
    X*(X+1)/2
    In 3-space, they'd be:
    X*(X+1)*(X+2)/6
    So in general, where A = the number of dice, they're:
    (X*(X+1)*(X+2)*...*(X+A-2)*(X+A-1))/(A*(A-1)*(A-2)*...*3*2*1), which is the binomial coefficient C(X+A-1, A)

    This obviously only works directly up to the mid-way point, but the same series can be used to calculate beyond that point; it merely needs to be applied multiple times: once for a superset of the N-space we want to use, then once for each portion of that set we aren't planning to use. Since the maximum is inside the set we're planning to use, we don't need to worry about the overlap beyond that point.

    I think that this can even be used for non-evenly sized dice, but I'll have to work at that more. In any case, a direct equation around the order of the square of the number of dice is far better than I expected to be able to obtain.
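
    For what it's worth, a small sketch of that inclusion-exclusion idea for equal-sized dice (my own illustration): the count of outcomes of A fair F-sided dice with sum <= s is the sum over j of (-1)^j * C(A,j) * C(s - j*F, A), where each C(n,k) term is exactly one of those simplex numbers.

    #include <cstdint>

    // C(n, k): binomial coefficient; defined as 0 when n < k.
    static int64_t binom(int64_t n, int64_t k)
    {
        if (k < 0 || n < k) return 0;
        int64_t r = 1;
        for (int64_t i = 1; i <= k; ++i)
            r = r * (n - k + i) / i;  // exact integer division at every step
        return r;
    }

    // Number of outcomes of A fair F-sided dice whose sum is <= s,
    // by inclusion-exclusion over the dice that "overflow" past F.
    int64_t countSumAtMost(int A, int F, int s)
    {
        int64_t total = 0;
        for (int j = 0, sign = 1; j <= A; ++j, sign = -sign)
            total += sign * binom(A, j) * binom((int64_t)s - (int64_t)j * F, A);
        return total;
    }

    // Example: countSumAtMost(2, 6, 4) == 6, so P(2d6 <= 4) = 6/36.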
     
  15. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    I remember from statistics lectures many years ago that there was a formula for this; it was much simpler, calculating the statistical variation due to the number of dice, and then you get a formula for the probability of any point away from the midpoint. You could probably easily find this formula on the internet.
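
    Presumably that is the normal (central limit) approximation; a sketch of what it looks like for A fair F-sided dice (standard statistics, my addition rather than anything from the mod):

    #include <cmath>

    // Normal approximation to the sum of A fair F-sided dice.
    // One die has mean (F+1)/2 and variance (F*F - 1)/12.
    // P(sum <= s) ~ Phi((s + 0.5 - mean)/sigma), with a continuity correction.
    double approxSumAtMost(int A, int F, double s)
    {
        double mean  = A * (F + 1) / 2.0;
        double sigma = std::sqrt(A * (F * F - 1) / 12.0);
        double z = (s + 0.5 - mean) / sigma;
        return 0.5 * std::erfc(-z / std::sqrt(2.0));  // standard normal CDF
    }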
     
  16. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    I agree.


    Actually, neither static option scales exactly the way we want, as the jump point spacing varies by N/(N+1) as your damage increases. Relative is the closer scaling of the two you mentioned, but even relative doesn't scale perfectly with damage, as the jump point spacing changes by N/(N+1), and so the range that you'd need to smooth over changes accordingly as you progress up to higher ratios.

    Hence if we want to make the smoothing zone perfectly extend to the midpoints (such as in the last graph in post 3 in this thread) then we'll actually have to change the relative factor range depending on what our ratio is.

    This can be calculated dynamically (I think I made a formula for random max and random max/min as a function of R in one of the previous posts), so the relative factor also dynamically scales as a function of R with the N/(N+1) jump point spacing factor.

    Then I figured it could also be done with pre-stored values in an array, e.g. D[N] or alternatively C[N], so depending on where you are in the spectrum it'll pull out the value of the random factor. If you have a base damage of 10 then N=10, while if you are doing 30 damage then N=3, and therefore D[N] or C[N] will be larger.

    So 1/D[N] < Random_value < D[N]
    Or -C[N] < Random_value < C[N]

    N is an indicator of which midpoints you are between. So for any N there is the possibility of N or N+1 hits.

    Since we are choosing the minimum and maximum ourselves we can choose whatever we want, so we can choose a relative or absolute factor size that extends from each jump point to the midpoint between jump points. (Note there is a slightly different midpoint depending on whether we're talking absolute or relative.)

    Now ideally, for the best function with smooth damage, we'd have C(R) or D(R) as dynamic functions across R to get a smooth output of hitpoints left. But the key thing we care about is how the damage affects the number of hits, i.e. the jump points, so we can simplify our calculations by setting the range C[N] or D[N] depending on which jump point (N) we are talking about.

    Example when base_damage=30
    N=3 // note: N is not the number of hits; the number of hits is actually N or N+1. But this N works out better for array indexing; it is really telling you which two midpoints you are between

    JP[N]=100/N
    JP[3]=33.333
    JP[3+1]=25

    For the absolute case:
    Ma[N] = 29.166 // absolute midpoint
    C[N] = 4.1666

    -4.166 < Random_absolute_damage_component < 4.166
    Final_Damage = Base_damage + Random_absolute_damage_component
    Required_random_absolute_damage_component_for_JP = JP[N] - Base_damage = 33.333 - 30 = 3.333

    Probability of needing 4 hits (N=3):
    If using the linear relation (single die):
    Probability_of_N = (C[N] + Required_random_damage_component_for_JP)/(2*C[N]) = (4.1666+3.333)/(4.1666+4.1666) = 90%
    // this is the probability of needing N+1 hits, i.e. needing 4 hits when base damage = 30.

    Probability of needing 3 hits:
    Probability_of_(N-1) = (C[N] - Required_random_damage_component_for_JP)/(2*C[N]) = (4.1666-3.333)/(4.1666+4.1666) = 10%
    This is the probability of being in the N-1 region, i.e. needing N hits, or 3 hits required.

    For the relative case:
    When Base_damage = 30, N = 3
    M[N] = JP[N]/D[N] = JP[N+1]*D[N] // note you need to be careful of which way you are moving along the spectrum, as JP[N+1] < JP[N]

    M[3] = 33.33/D[3] = 25*D[3]
    D[3] = Sqrt(JP[3]/JP[3+1]) = 1.1547
    M[N] = JP[N]/D[N] = 28.8675 // note: we are looking at a different midpoint depending on whether we do the absolute or the relative calculation
    1/D[N] < Random_relative_damage_component < D[N]
    0.866026 < Random_relative_damage_component < 1.1547

    Final_Damage = Base_damage * Random_relative_damage_component
    Required_random_relative_damage_component_for_JP = JP[N] / Base_damage = 33.33/30 = 1.111

    Note: the case when random_factor > 1 needs to be treated differently than random_factor < 1.

    Probability of needing 4 hits (N=3):
    If Required_random_relative_damage_component_for_JP > 1 then:
    Probability_of_N = 50 + 50*(Required_roll - 1)/(D - 1) = 50 + 50*(1.111-1)/(1.1547-1) = 85.9%

    Probability of needing 3 hits:
    If Required_random_relative_damage_component_for_JP > 1 then:
    Probability_of_(N-1) = 100 - Probability_of_N = 50*(D - Required_roll)/(D - 1) = 50*(1.1547-1.111)/(1.1547-1) = 14.09%

    Note:
    The absolute or relative factor range can be scaled according to which jump point section (N) you are in.
    Notice that these probabilities are slightly different; this is largely because the absolute and the relative midpoints between jump points are slightly different.


    Example when base_damage = 10:
    Interesting that you chose an example exactly on the jump point.

    Now the same applies with base_damage = 10:
    N = 10
    JP[10] = 10
    JP[11] = 9.0909
    JP[9] = 11.1111

    For the absolute case:
    M[N] = (JP[N] + JP[N+1])/2 = 9.545
    C[N] = JP[N] - M[N] = 0.454545

    -0.454545 < Random_absolute_damage_component < 0.454545
    Final_Damage = Base_damage + Random_absolute_damage_component
    Required_random_absolute_damage_component_for_JP = JP[N] - Base_damage = 10 - 10 = 0

    Probability of needing 11 hits (N=10):
    If using the linear relation (single die):
    Probability_of_N = (C[N] + Required_random_damage_component_for_JP)/(2*C[N]) = (0.454545+0)/(2*0.454545) = 50%
    // this is the probability of needing N+1 hits, i.e. needing 11 hits when base damage = 10.

    Probability of needing 10 hits:
    Probability_of_(N-1) = (C[N] - Required_random_damage_component_for_JP)/(2*C[N]) = (0.454545-0)/(2*0.454545) = 50%
    This is the probability of being in the N-1 region, i.e. needing N hits, or 10 hits required.


    For the relative case:
    When Base_damage = 10, N = 10
    JP[10] = 10
    JP[11] = 9.0909
    JP[9] = 11.1111

    M[N] = JP[N]/D[N] = JP[N+1]*D[N] // note you need to be careful of which way you are moving along the spectrum, as JP[N+1] < JP[N]

    M[10] = 10/D[10] = 9.0909*D[10]
    D[10] = Sqrt(JP[10]/JP[10+1]) = Sqrt(1.1) = 1.04881
    M[10] = JP[10]/D[10] = 9.53463
    1/D[N] < Random_relative_damage_component < D[N]
    0.953463 < Random_relative_damage_component < 1.04881

    Final_Damage = Base_damage * Random_relative_damage_component
    Required_random_relative_damage_component_for_JP = JP[N] / Base_damage = 10/10 = 1

    Note: the case when random_factor > 1 needs to be treated differently than random_factor < 1; at a jump point it doesn't make any difference, as they are both the same.

    Probability of needing 11 hits (N=10):
    If Required_random_relative_damage_component_for_JP > 1 then:
    Probability_of_N = 50 + 50*(Required_roll - 1)/(D - 1) = 50 + 50*(1-1)/(1.04881-1) = 50%

    Probability of needing 10 hits:
    If Required_random_relative_damage_component_for_JP > 1 then:
    Probability_of_(N-1) = 50*(D - Required_roll)/(D - 1) = 50*(1.04881-1)/(1.04881-1) = 50%


    Note:
    Please note that the array indices N refer to which two midpoints the ratio lies between. So for a given N, it is after midpoint N and before midpoint N+1; the jump points are between the midpoints, so for any N there is the possibility of N or N+1 hits required.

    This is also what I mean in the previous section by saying that absolute and relative are similar, as the range C[N] or D[N] varies depending on where you are in the spectrum.

    This array doesn't need to be that big; e.g., you could easily have an array of length 25, which would provide you with information about the workings of every jump point up to a ratio of 5:1, i.e. when one unit does 100 damage and the other does 4 damage. Past that, I think ratios higher than 5:1 will rarely occur, and when they do occur I don't think the jump point smoothing will be an issue, as it's at about 100% probability of victory; a jump point going from something like 99.9997% to 99.9999% is still smoothed with D[25] to 99.9998%. It's just that the smoothing might extend over the midpoint, and you can just use D[25] for all ratios above 5:1.

    This won't cause any exceptions when using the relative damage method: even with a ratio of 200:1 you'll only do 0.1 base damage, and the relative proportion will never go below zero, as 0.1/1.0001 is still positive. But this situation will need to be checked for if you are using absolute damage, as at extreme ratios you need to be careful not to accidentally subtract a damage component larger than the base damage. Hence with relative you can use D[25] for all N above 25, but with absolute damage you can't use C[25] for all N > 25 unless you put in a failsafe for extreme situations: final_damage = max(base_damage + absolute_random, 0). Naturally these situations shouldn't occur unless someone modifies one of their creatures to have 200 strength, but it's good to put in the failsafe just in case, so the program doesn't crash.
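
    A minimal sketch pulling the two methods together, including the failsafe (my own illustration; the array contents would come from tables like those earlier in the thread, and all names are assumptions):

    #include <algorithm>
    #include <random>

    const int MAX_N = 25;  // covers every jump point down to 4 base damage

    // Apply jump-point smoothing to a base damage value. C[] holds the
    // absolute half-ranges and D[] the relative factors (D[N] =
    // sqrt(JP[N]/JP[N+1]), as above), indexed by which pair of midpoints
    // N = round(100/baseDamage) the damage falls between.
    double smoothedDamage(double baseDamage, bool useRelative,
                          const double C[], const double D[], std::mt19937& rng)
    {
        int N = (int)(100.0 / baseDamage + 0.5);
        if (N < 1) N = 1;            // 200+ damage: always one hit anyway
        if (N > MAX_N) N = MAX_N;    // reuse the last entry for extreme ratios

        if (useRelative) {
            // factor f and 1/f equally likely; damage can never go below 0
            std::uniform_real_distribution<double> mag(1.0, D[N]);
            double f = mag(rng);
            return (rng() & 1) ? baseDamage * f : baseDamage / f;
        }
        // absolute: additive roll, clamped at 0 as the failsafe
        std::uniform_real_distribution<double> roll(-C[N], C[N]);
        return std::max(baseDamage + roll(rng), 0.0);
    }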
     
  17. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    Oh yeah, you'll need to chuck in another couple of lines to avoid a potential array out-of-bounds error.

    N = round(100/base_damage); // rounded to the nearest integer
    if (N == 0) {
    N = 1;
    return 100; // the probability of needing 1 hit to kill is 100% (in hundredths), as you always need at least one hit, even if you are doing over 200 damage/hit
    }
    Naturally you can't kill someone in half a hit if you are doing more than 200 damage/hit in some extreme combat-ratio situation (I doubt this'll come up often), so it's best to guard against this potential array out-of-bounds error.
     
  18. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    That's a pity; we could have drastically simplified the probability estimations if they were truly linear.

    At the moment we need to calculate Q^2 combat scenarios when there are Q different possibilities for the hits required. So having 2 possibilities takes it from 1 to 4 calculations; if we extended it to 3 possibilities for the number of hits required, then we'd have to calculate 9 combat scenarios.

    So I think it would be good if we look at how to optimize the calculation of specific combat scenarios. Looking up a linear function from a matrix of variables would have been trivial, but it's a pity we don't have a perfectly linear function. I'm sure there are also other optimizations we can make to speed up the process.
     
  19. Vulcans

    Vulcans Prince

    Joined:
    Aug 9, 2005
    Messages:
    326
    Yes I think that we could optimize the code and make the odds calculation much faster, so the more accurate calculations can be used for AI decision making.

    Yes, we could make it generic. Probably the easiest way is to make it a function that returns the probability of needing over N hits (or the probability of needing under N), and run it in a while loop.

    A more generic function gets significantly larger as you extend the smoothing range over multiple hit-requirement ranges.

    A more generic function could be defined like this:
    First define two functions (as described previously):

    Hundredth probability_over(int N, R) // the chance of needing over N hits
    Probability_of_scenario_victory(int defender_hits, int attacker_hits, R) // calculation of that scenario's outcome

    Then in a loop you can check each case. Of course in most situations N is around 3-7, but I guess there isn't much harm in just running the loop from N=1.

    This is not proper code, and it is probably full of bugs, but at least it'll probably give you a good idea of what I'm thinking.


    // for the defender
    Int M = 0;
    Do {
    M++;
    Probability_of_defender_needing_exactly_M_hits[M] = probability_over(M-1, 1/R) - probability_over(M, 1/R);
    If (probability_over(M-1, 1/R) == 100) {
    Minimum_defender_possibility = M;
    // this stores the last value that has 100% probability
    }
    } While (probability_over(M, 1/R) > 0);
    // M will stop at the final value, the maximum value in the range


    // for the attacker
    Int N = 0;
    Do {
    N++;
    Probability_of_attacker_needing_exactly_N_hits[N] = probability_over(N-1, R) - probability_over(N, R);
    If (probability_over(N-1, R) == 100) {
    Minimum_attacker_possibility = N;
    // this stores the last value that has 100% probability
    }
    } While (probability_over(N, R) > 0);
    // N will stop at the final value, the maximum value in the range


    // Now combine with two loops, one for the attacker and one for the defender, to calculate all possible scenarios.
    Overall_probability_of_victory = 0;
    For (int O = Minimum_attacker_possibility; O <= N; O++) {
    For (int P = Minimum_defender_possibility; P <= M; P++) {
    Overall_probability_of_victory += Probability_of_defender_needing_exactly_M_hits[P] * Probability_of_attacker_needing_exactly_N_hits[O] * Probability_of_scenario_victory(P, O, R);
    }
    }
     
  20. xanaqui42

    xanaqui42 King

    Joined:
    Sep 5, 2006
    Messages:
    780
    Yes; you could derive that formula from this one (also, this one looks a lot simpler if you use real mathematics symbols). This is solving a somewhat different problem - how do I calculate the probability of a value or less on N dice that may not be equal in size in a relatively brief period of time? The other approach would require significant calculation to get what I'm looking for (which is exactly what I'm trying to avoid).

    In other words, the problem was the complexity, not the solution.

    In any case, I have an algorithm based on the above that seems like it will work in O(Number_of_dice ^ 2) in the worst case, and O(Number_of_dice) in the best case.
     
