Let's discuss Mathematics


I got about half way through. The logic is completely flawed.

The video claims that the sum of all the natural numbers is -1/12.

However, the sum of all the integers from 1 to N is S = N(N+1)/2. Obviously as you keep increasing N, S keeps getting higher.
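As a quick sanity check, here's a tiny sketch of that closed form (the function name is just illustrative):

```python
# Sum of the integers 1..N via the closed form N*(N+1)/2.
def partial_sum(n):
    return n * (n + 1) // 2

for n in [10, 100, 1000]:
    print(n, partial_sum(n))
```

The partial sums just keep growing with N, exactly as you'd expect.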

And this is why you don't listen to youtube comments for anything remotely intellectual.

This.

EDIT: Sometime on the weekend I might go through it more thoroughly.
 
Actually, some divergent series can be assigned values by rigorous methods (as explained in the linked Wikipedia article). The series

1 + 2 + 3 + ... = -1/12

is derived from zeta function regularization. I haven't watched the YouTube video, but that may be what's being discussed.

That video does not go into details regarding the zeta function; however, there is an additional 20 min. video about the Riemann zeta function and how it is used to argue that the sum of all natural numbers is -1/12 (http://www.youtube.com/watch?v=E-d9mgo8FGk&feature=youtu.be).

Apparently the result is very important for string theory and quantum electrodynamics.

The closed form N(N+1)/2 is only valid for a finite N, and since the infinite series is divergent, that formula cannot be used to assess the sum of the infinite series.
 
Thanks for the link. I've now watched both videos. The guys are good at explaining the ideas, but I have one reservation. You could come away from their videos believing that the only possible interpretation of 1 + 2 + 3 + ... is that it equals -1/12. I would have preferred a statement such as "Although the series 1 + 2 + 3 + ... diverges and can't be assigned a finite value under the usual definitions, it's possible to give an alternate definition in which the value equals -1/12."
 
That makes it sound like -1/12 was specifically chosen to be the result. Like it could just as easily be said that 1 + 2 + 3 ... equals 42.
 

In lay(ish) terms what happened is this:

The sum of 1/n^(x) with n=1...infinity defines the Riemann zeta function zeta(x) where it converges (for x > 1). For obvious reasons this isn't defined when x=-1, since there the series becomes the sum of the natural numbers, which is divergent.
Using a technique called analytic continuation, the Riemann zeta function can be extended analytically (think infinitely differentiable) to a larger domain, namely one that includes x=-1. However, on this larger domain the function is no longer given by that original series, and the continued function satisfies zeta(-1) = -1/12. All of that is fairly reasonable; the trickery is when we then say that means zeta(-1) = 1 + 2 + ... . There is certainly a strong connection between them, but you can't say it is equality, at least not in the traditional sense.
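You can actually watch the -1/12 emerge numerically without computing the continuation itself, via one standard route: Abel-sum the alternating series 1 - 2 + 3 - 4 + ... (which tends to 1/4), then use the identity eta(s) = (1 - 2^(1-s)) * zeta(s) at s = -1. A rough sketch (the function name is mine):

```python
# Abel summation: sum (-1)^(n+1) * n * x^n equals x/(1+x)^2 for |x| < 1,
# which tends to 1/4 as x -> 1. This assigns 1 - 2 + 3 - 4 + ... = 1/4
# (the Dirichlet eta function at -1). The relation
# eta(s) = (1 - 2^(1-s)) * zeta(s) at s = -1 then gives
# zeta(-1) = eta(-1) / (1 - 2^2) = (1/4) / (-3) = -1/12.

def abel_sum(x, terms=100000):
    """Truncated Abel-regularized sum of 1 - 2 + 3 - 4 + ... at x < 1."""
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, terms + 1))

eta_minus_1 = abel_sum(0.999)                 # very close to 1/4
zeta_minus_1 = eta_minus_1 / (1 - 2 ** 2)     # very close to -1/12
print(eta_minus_1, zeta_minus_1)
```

Nothing here magically sums the divergent series; it just shows that the regularized value sits at -1/12 consistently across methods.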
 
A similar radius of convergence issue, without needing zeta functions is:
1+x+x^2+... = \sum_{n=0}^\infty x^n = 1/(1-x)

and then putting x = -2, even though the series only converges for |x| < 1.
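You can see the mismatch directly: the partial sums at x = -2 bounce around ever more wildly, while the closed form cheerfully reports 1/3. A quick sketch (function name is mine):

```python
# Partial sums of the geometric series 1 + x + x^2 + ... at x = -2,
# versus the closed form 1/(1-x), which is only valid for |x| < 1.
def geom_partial(x, terms):
    return sum(x ** n for n in range(terms))

for terms in [5, 10, 15]:
    print(terms, geom_partial(-2, terms))
print("closed form:", 1 / (1 - (-2)))   # 1/3, outside the radius of convergence
```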
 
nevermind, Harv beat me to it
 
[Image: CY8W2Ix.png — graph of probabilities of each team finishing in each band of Premier League positions]


The above graph shows probabilities of each team finishing in a range of positions in the Premier League. Let's just focus on the green bit: the Europa League position. In order to qualify for the Europa League, the team has to finish 5th. That is, only the 5th placed team qualifies for the Europa league. Shouldn't the sum of the green bars, therefore, add up to 1? (For the record, 1 team can finish 1st, 3 qualify for champs league, 1 for the europa league, 3 get relegated, and the rest are safe.)

I haven't quite got my head around it. Does it depend on how the statistic is phrased? If you say "the probability of the Europa league spot going to team Tn = Yn", then the sum of all Yn for n=1 to 20 should be 1. But if you say "the probability of team Tn finishing in position Pm = Zn,m", (where P1 = first, P2 = champs league, P3 = europa, P4 = safe, P5 = relegation) then the sum of Z1,m for m=1 to 5 should be 1, but the sum of all Zn,3 for n=1 to 20 might not equal 1?

I can't think of a reductio argument for any of this. Everything I think of suggests that the probabilities should add up to 1. I think it can make mathematical sense for them not to add up to 1, but no physical/intuitive sense. E.g. you can build a model that predicts the probability of teams finishing first, where 4 teams each have a 90% chance (say) of finishing first. But that doesn't make intuitive sense, because they can't all finish first. I think the data in the chart might just be wrong. I guess it must be using betting odds or something, which obviously don't add up to 1 and aren't going to be logical.
 
The data in the chart is either wrong, or their model is seriously flawed. The sum of probabilities for any individual position should equal 1, whether that be 1st, 5th, 8th, 13th, 20th, whatever.

If they're taking the betting odds, the raw implied probabilities (1/odds) include the bookmaker's overround and sum to more than 1; unless they normalize that away, it would definitely explain the bars not adding up to 1.
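For anyone curious, here's a quick sketch of what that normalization looks like (the team names and odds are made up for illustration):

```python
# Hypothetical decimal odds for one league position. The implied
# probabilities 1/odds sum to more than 1 (the bookmaker's overround);
# dividing by the total renormalizes them to sum to exactly 1.
odds = {"Arsenal": 4.0, "Spurs": 3.0, "Everton": 5.0, "United": 2.5}

implied = {team: 1 / o for team, o in odds.items()}
overround = sum(implied.values())            # greater than 1
normalized = {team: p / overround for team, p in implied.items()}

print(round(overround, 3))
print(round(sum(normalized.values()), 10))   # 1.0 up to rounding
```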


As an aside, modelling EPL (and soccer in general) is an absolute pain in the arse thanks to the frequency of draws.
 
Nobody has posted: Happy pi-day everybody! Maybe next year I will try to post at 9:27am.

Easy one: Suppose I use an approximation pi(a) ~= 180*sin(a)/a, with a in degrees. Prove that the approximation pi(0.5a) ~= 360*sin(0.5a)/a is four times better than the approximation pi(a).

Another one: The approximation pi(a) is an inside polygon approximation. The outside polygon approximation is pi(a)o ~= 180*tan(a)/a. Prove that the approximation pi(a) is twice as close to the true value of pi as pi(a)o.

Admittedly, I have not looked at or studied Euler's method in quite some time.
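Both claims are easy to check numerically before proving them (angles in degrees, function names mine):

```python
import math

def pi_in(a):
    """Inscribed-polygon estimate of pi: 180*sin(a deg)/a."""
    return 180 * math.sin(math.radians(a)) / a

def pi_out(a):
    """Circumscribed-polygon estimate of pi: 180*tan(a deg)/a."""
    return 180 * math.tan(math.radians(a)) / a

a = 1.0
err_in = math.pi - pi_in(a)        # inscribed estimate falls short of pi
err_half = math.pi - pi_in(a / 2)  # halving a should quarter the error
err_out = pi_out(a) - math.pi      # circumscribed estimate overshoots pi

print(err_in / err_half)   # close to 4
print(err_out / err_in)    # close to 2
```

Both ratios follow from the leading error terms: sin underestimates by about pi*x^2/6 and tan overestimates by about pi*x^2/3, with x the angle in radians.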
 
I've been trying to figure out how to convert to Julian dates manually, for scifi worldbuilding purposes. I found this: ( http://www.aavso.org/computing-jd ), but the value I found on Wikipedia for an existing wide binary star is given in decimal years (1991.605 in this case), so I'm not sure how to convert it directly from that.

Any ideas?
 

Well 1991 is obvious I suppose. Then there are approximately 365 days/year so .605*365~=220.8.

Now count days
31 (January)
28
31
30
31
30
31 (July)
add up and get 212 days through the end of July, leaving 8.8 days...
So it happened around the 8th or 9th of August 1991 at roughly 7pm (whether you count 1 January as day 0 or day 1 shifts the answer by a day).

Because the months aren't uniform I'm not sure of a clean way to do it with less work than that. Though if you just say months are 30 days you could easily place it within a day or two of being correct without as many calculations.
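If you don't mind doing it by machine, the month bookkeeping disappears entirely; a minimal sketch (function name mine, and it assumes the fraction measures time elapsed since 1 January):

```python
from datetime import datetime, timedelta

def decimal_year_to_date(dy):
    """Convert a decimal year like 1991.605 to a calendar datetime,
    treating the fraction as time elapsed since Jan 1 of that year."""
    year = int(dy)
    start = datetime(year, 1, 1)
    year_seconds = (datetime(year + 1, 1, 1) - start).total_seconds()
    return start + timedelta(seconds=(dy - year) * year_seconds)

print(decimal_year_to_date(1991.605))
```

Under this convention 1991.605 lands on 9 August 1991 around 19:48; counting 1 January as day 1 instead shifts it a day earlier, which is why hand counts sometimes say the 8th.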
 
in excel you can subtract 1900 from the number, then multiply by 365.2425 days (as per Gregorian calendar), then format it as a date. To convert this to a Julian calendar date, follow the following instructions: http://office.microsoft.com/en-us/excel-help/insert-julian-dates-HP003056114.aspx

EDIT: wait, I didn't realise that "Julian date" is a different thing to the "Julian calendar"!! In that case you can just do "=ROUNDDOWN(A1,0)&(A1-ROUNDDOWN(A1,0))*365.2425". If you don't want decimals you can wrap the whole thing in "=round( )" or do it in a separate cell.
 
I have a question about confidence intervals.

In some lecture notes this guy says that once you've calculated the CI, say a 95% CI, it isn't correct to say that it contains the parameter with 95% probability. It either contains it or it doesn't. He goes on with a lengthy explanation of what you are allowed to say, which could be compressed into "If we produce intervals with this method, 95% of them will contain the right value".

Now, I accept that the interval either contains the right value or it doesn't. But if the value is unknown, isn't it "philosophical nitpicking" to say that it's incorrect to claim there's a 95% chance it does?

Couldn't you similarly say that Brazil either wins or doesn't win the World Cup 2014? So assigning a probability value to that would be incorrect.

Or, to have an example that doesn't involve time: with playing cards, either the card on top of the deck is the card you're wishing to draw or it is not. By that logic there's no sense in calculating probabilities.

So, my whole point is: isn't this kind of rigorousness about the terminology concerning CIs losing all the power that statistics has? Isn't it a similar kind of fatalism that makes people make bad decisions against the odds?
 
I suppose it's a nuanced way of looking at it, but I think they're practically equivalent. "There is a 95% chance that any individual CI will contain the value" presents a slightly different perspective from saying there is a 95% chance that the value is within this particular interval: it treats the value as the absolute thing, and the CI as the thing that is ephemeral and moveable. I guess people in practice have a tendency to view the model as the real thing and the actual value as a bit of unwanted randomness that is adequately encompassed by sticking CIs on everything. At least I do that, anyway.
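The frequentist reading is easy to demonstrate by simulation: generate lots of 95% CIs for a known mean and count how many contain it. A rough sketch, assuming normal data and using the z-value 1.96 rather than the slightly wider t-interval:

```python
import random
import statistics

# Repeatedly sample from a distribution with a KNOWN mean, build a 95% CI
# each time, and count how often the interval actually contains that mean.
random.seed(42)
TRUE_MEAN, N, TRIALS = 10.0, 50, 2000

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    m = statistics.mean(sample)
    half = 1.96 * statistics.stdev(sample) / N ** 0.5
    if m - half <= TRUE_MEAN <= m + half:
        hits += 1

print(hits / TRIALS)   # close to 0.95
```

Roughly 95% of the intervals cover the true value, which is exactly the "95% of intervals produced by this method" statement from the lecture notes.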
 

I would say that these are 2 different things. To use your deck of cards example, if you know that the deck is a normal, well-shuffled 52-card deck, you could say that P(red card) = 0.5 with no confidence interval needed. However, if I put an unknown deck of cards in front of you, and offer you $1.10 for your $1 if the next card is red, you may want to check. You could sample a few cards, estimate P(red card), and calculate a confidence interval for this estimate. This would allow you to make a better decision on whether to play or not.
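A minimal sketch of that estimate, using the standard normal-approximation interval for a proportion (the function name and the 7-of-13 sample are made up for illustration):

```python
import math

def prop_ci(successes, n, z=1.96):
    """Point estimate and 95% normal-approximation CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Say 7 of the 13 cards you sampled were red:
p, lo, hi = prop_ci(7, 13)
print(round(p, 3), round(lo, 3), round(hi, 3))
```

With a sample that small the interval is wide enough to straddle 0.5, so it wouldn't tell you much about whether the $1.10 offer is a good bet; the point is that the interval quantifies how much checking you've actually done.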
 