Let's discuss Mathematics

Sure it can be estimated! For example, the probability is bigger than that of getting exactly ten H's in n tosses, which is
n!/(10!(n-10)!) · 1/2^n.
(I assume we're talking about only H's here, because as PS said, you can get the probability for H's or T's by doubling.)


On the other hand it's smaller than the probability of getting THHHHHHHHHH at the end of the tosses; this is your approach of disregarding q_n. So for exactly the n:th toss that is 1/2048 (n > 11), and for "by the n:th toss"
\sum_{k=1}^{n-10} 1/2048 = (n-10)/2048.

I'm pretty sure I did something stupid there. :)
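For what it's worth, the exact probability is easy to get with a small dynamic program, which can serve as a check on both bounds. A sketch of my own (like above, it only counts runs of H's):

```python
from fractions import Fraction

def prob_head_run(n, r):
    """Exact probability that n fair-coin tosses contain a run of at
    least r consecutive heads, via a dynamic program over the length
    of the current trailing head-run."""
    half = Fraction(1, 2)
    state = [Fraction(0)] * r   # state[k] = P(no run yet, trailing run of k heads)
    state[0] = Fraction(1)
    hit = Fraction(0)           # P(a run of length r has occurred)
    for _ in range(n):
        new = [Fraction(0)] * r
        for k, p in enumerate(state):
            new[0] += p * half          # tails resets the head-run
            if k + 1 == r:
                hit += p * half         # heads completes a run of r
            else:
                new[k + 1] += p * half  # heads extends the run
        state = new
    return hit

print(prob_head_run(11, 10))   # 3/2048
```

For n = 11 this gives 3/2048: the run can finish at toss 10 (two ways, last toss free) or at toss 11 (one way, THHHHHHHHHH), and HHHHHHHHHHH is counted once.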

If you want to know whether you or your opponent wins 10 times first, there are easier methods for that; I actually calculated it yesterday...
 
Yeah, that's probably it!

I've been doing probability thingies lately, because I think I need a real job....

That's something I've been wanting to rant about for a while now: how frustrating these things can be, especially for a mathematician!

First I borrowed Probability with Martingales by David Williams. I had used it ~10 years ago to quickly learn martingales for Banach space studies. Then I noticed I didn't like his conversational style, for which it is often lauded, by the way. Also, I don't remember anything about generating functions and all these things, so I went to borrow the undergraduate book for our local university.

Well, that in turn is unreadable, because the writer doesn't define anything: he uses words like "distribution" or "random variable" without definition. This is of course because they don't want to have measure theory as a prerequisite.

Then I rant about it to a mathematician friend, and she advises me to read another book, also used at the local university, but this one is for graduate students. Again, I can't stand it, because this book doesn't have enough simple exercises, and it still contains undefined or poorly defined things. Like distribution, which is supposed to be "the set of probabilities for a random variable". So if p(1)=p(2)=1/4 and p(3)=...=p(6)=1/8, does this random variable have the same distribution as the one for which p(1)=p(2)=1/8 and p(3)=p(4)=p(5)=1/4, namely {1/4, 1/8} for both of them?

Ok, now I borrow Shiryaev's Probability, which is more rigorous, but it has very polarized exercises, either very hard or trivial, and it also uses unorthodox symbols. And this, by the way, is a general issue with these probability guys: they always have to use symbols the wrong way. They can't write X^{-1}(A) like normal people (that is, if normal people ever called functions X), but instead it must be {X=A} or something else equally stupid.

Now I've finally found a good book, Feller's Introduction to probability, which is from the 40s or 50s, and has lots of interesting examples, good exercises, very good justification for things, and also lovely nostalgia of the time when Harvard computer lab printed heavy books like "binomial probabilities for n<100".

I've learned that the dropping of bombs in London Blitz conformed to Poisson distribution, and there were actually people who cast 12 dice some 12000 times to find out whether it conforms to multinomial distribution. I've also calculated, how many raisins there should be in cookie dough for a probability less than 1/100 for a cookie with no raisins to emerge. :)
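The raisin count is a nice Poisson exercise: if the dough averages lam raisins per cookie, then P(cookie with no raisin) ≈ exp(-lam), so you need the smallest lam with exp(-lam) < 1/100. A sketch (my own working, assuming raisins land independently and uniformly in the dough):

```python
import math

# Poisson model: with an average of lam raisins per cookie,
# P(cookie has no raisin) is approximately exp(-lam).
# Find the smallest whole number of raisins per cookie
# for which that probability drops below 1/100.
lam = 1
while math.exp(-lam) >= 0.01:
    lam += 1
print(lam)             # 5 raisins per cookie on average
print(math.log(100))   # the exact threshold, ln(100), about 4.605
```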

It's an interesting subject, but it's very difficult to find proper literature.
[/rant]
 
Distribution is just the shape of the probability density curve. Scaling/shifting is just a change of the parameters of the distribution.

They use X as a random variable (from a given pdf usually), so it's both a variable and a function. I guess you're talking about stuff like p(X=a).

We normally use capital Greek letters for cumulative distribution functions (the inverse pdf doesn't come up very often), and we do use proper inverse notation for that.

Generating functions are just a convenient way to express a pdf based on formal power series (and integrals, I think, for continuous pdf's). EDIT: They have some nice features which make calculating the expected value and variance very easy via differentiation.
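As a concrete sketch (my own example, a fair six-sided die): the probability generating function is G(s) = (s + s^2 + ... + s^6)/6, and then E[X] = G'(1) and Var(X) = G''(1) + G'(1) - G'(1)^2.

```python
from fractions import Fraction

# PGF of a fair die: G(s) = sum_k p_k s^k with p_1 = ... = p_6 = 1/6.
p = {k: Fraction(1, 6) for k in range(1, 7)}

G1 = sum(k * pk for k, pk in p.items())            # G'(1)  = E[X]
G2 = sum(k * (k - 1) * pk for k, pk in p.items())  # G''(1) = E[X(X-1)]

mean = G1
var = G2 + G1 - G1 ** 2

print(mean)  # 7/2
print(var)   # 35/12
```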
 
Yeah, well those are no longer a problem since I encountered Feller! :love:

I understood that distribution is "the shape" of the density function; that's the intuitive meaning of the word. The problem however was that these writers didn't bother to give it an accurate definition, and yet they had sentences like "these two have the same distribution" and "find the distribution of this". That's shameful!
 
You just mistyped the first line, it should be (nx-1)/n, on the second line it's correct. ;)

I didn't like topology that much. Functional analysis was my favourite course. Our course contained a good deal of topology at the beginning, the best part of it, I'd say: Baire's category theorem and such. Maybe that's why the actual topology course felt boring. Plus I never really saw the point of topologies more general than those induced by a metric. Sure, they are more general, and so on, but they felt very artificial. I think I've seen a non-metric topology only once or twice apart from the classes.

EDIT: X-post. Adult content.

The one-point compactification (for example the circle as a compactification of the reals) is one example which can't be done by staying within the original metric space. It is quite useful.
 
Is that adding the point at infinity to the reals? Like the Riemann sphere in complex analysis?

Does infinity have a neighbourhood with that model? Would it be the union of (k, inf) and (-inf, -k) or something?
 
I might be missing something, but I don't think it's hard to get an exact % for a run of x in y trials, given the values of y & x. Coming up with a general formula might take a bit longer, I'll see what I can do tomorrow.
 
The right hand side of the distribution is easy to calculate.

Define the variable L(n,r) as the probability that the longest run in n tosses is r.

If r > n/2 there is only 1 possible run of this length, and it must be surrounded by tosses that aren't of the same kind, but apart from those "fixed" values the other toss results can be anything we like.

The case for r < n/2 is trickier. If r = n/2 that's not hard to calculate either.
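For small n you can brute-force L(n,r) to check any formula against. A sketch of my own (I'm assuming "run" here means consecutive identical outcomes of either face):

```python
from itertools import product, groupby
from fractions import Fraction

def longest_run_dist(n):
    """L(n, r): probability that the longest run of identical tosses
    (heads or tails) in n fair tosses is exactly r, by enumerating
    all 2^n sequences."""
    counts = {}
    for seq in product("HT", repeat=n):
        r = max(len(list(g)) for _, g in groupby(seq))
        counts[r] = counts.get(r, 0) + 1
    return {r: Fraction(c, 2 ** n) for r, c in sorted(counts.items())}

# For n = 4 the longest run is 1, 2, 3, 4 with
# probability 1/8, 1/2, 1/4, 1/8 respectively.
print(longest_run_dist(4))
```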
 
I think you're overcomplicating things by looking at possible runs. The tosses will always be in order: a tails, b heads, c tails, d heads, etc. All you need to look at is the chance that one of a, b, c, d is equal to or greater than x. You know the average length of a, b, c, d, etc. is 2 (actually a tiny bit below; it only equals 2 when y is infinite).

So you will have roughly y/2 runs.

The probability of a given run being x is 1/2^x.

The probability of a given run being less than x is 1/2 + 1/4 + 1/8 + ... + 1/2^(x-1)

Therefore the probability of a given run being x or greater is 1/2^(x-1)

So the probability of a given run being less than x is [2^(x-1)-1]/2^(x-1)

The probability that all runs are less than x is {[2^(x-1)-1]/2^(x-1)}^(y/2)

The probability of at least one run of x or greater is 1 - {[2^(x-1)-1]/2^(x-1)}^(y/2)


I don't think there are any problems in there, except for using y/2 as the approximation.

The actual number of runs won't be y/2; it will be y/(1/2 + 2/4 + 3/8 + 4/16 + ... + y/2^y). It's too late at night for me to try to get a formula for that sum. But come up with one, and you should have an exact formula for the probability of a run of at least x in y flips, for y > 1.
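You can see how good the y/2 approximation already is by comparing it against brute force for small cases. A sketch of my own check (y = 10 tosses, runs of x = 4 either face):

```python
from itertools import product, groupby

def exact(y, x):
    """Exact P(at least one run of >= x identical tosses in y fair
    tosses), by enumerating all 2^y sequences."""
    hits = sum(
        1
        for seq in product("HT", repeat=y)
        if max(len(list(g)) for _, g in groupby(seq)) >= x
    )
    return hits / 2 ** y

def approx(y, x):
    """The estimate from above: 1 - {[2^(x-1)-1]/2^(x-1)}^(y/2)."""
    return 1 - ((2 ** (x - 1) - 1) / 2 ** (x - 1)) ** (y / 2)

print(exact(10, 4))    # 0.46484375  (= 476/1024)
print(approx(10, 4))   # about 0.487
```

So for 10 tosses the estimate is already within about 0.02 of the exact value.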
 
Is that adding the point at infinity to the reals? Like the Riemann sphere in complex analysis?

Does infinity have a neighbourhood with that model? Would it be the union of (k, inf) and (-inf, -k) or something?

Yeah, I think it's similar to the Riemann sphere
http://en.wikipedia.org/wiki/One-point_compactification

The stereographic projection of R is the most insightful example. Imagine the real line as the x-axis in R^2 and the circle S^1 as a circle with radius 1/2 around (0,1/2). Given a point x in R, we draw a line from x to the north pole (0,1) of the circle. This line intersects the circle in another point y. When we identify x and y, we have found a continuous bijection between R and S^1\{(0,1)}. We then add back the point (0,1) and identify it with +/- infinity; the resulting space is compact.
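Working out the intersection explicitly (my own derivation from the setup above): x maps to (x/(1+x^2), x^2/(1+x^2)), which approaches the north pole (0,1) as x goes to +/- infinity. A quick sketch:

```python
def stereo(x):
    """Map a real x to the point where the line from (x, 0) to the
    north pole (0, 1) meets the circle of radius 1/2 about (0, 1/2)."""
    d = 1 + x * x
    return (x / d, x * x / d)

# The image always lies on the circle: X^2 + (Y - 1/2)^2 = 1/4.
X, Y = stereo(3.0)
print(X * X + (Y - 0.5) ** 2)   # 0.25 (up to float rounding)
print(stereo(0.0))              # (0.0, 0.0) -- the south pole
```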

A set Y is an open neighbourhood of infinity in this new space if the complement of Y is compact.
 
Why can't you add a metric to that space?

Can't you define d(x,inf) = inf for real x and d(inf,inf) = 0?

I think this still works with the triangle inequality

d(a,c) <= d(a,b) + d(b,c)

Or does d have to be S X S -> R?

EDIT: I guess divergent sequences wouldn't converge to inf in that metric, therefore the space wouldn't be compact.

EDIT2: Because there's really no way to express "getting closer to infinity" with my "metric".
 
I've never come across anything requiring that, or at least I don't remember it; or if I have, it has been pretty trivial. If my grandfather could live without it, I don't need it either.
 
Knots and classification of surfaces (manifolds)?

Topology is basically study of invariants innit.
 
No, I mean that specific one point compactification of R that Dutchfire mentioned.

Metric topology is of course essential. And there's nothing bad about general topology either, but I just found it boring. One reason for that boringness is that exotic topologies feel artificial to me. Also I'm more into analysis. Topology without metric is too algebra-like for my taste.
 
Well the complex variant (Riemann sphere) is very useful.
 
Do you really need some special topology with it? Isn't it just the ordinary C with infinity treated as a special case?

Complex analysis is one of my embarrassing weak spots...
 
Another interesting topological exercise is proving that there is an infinite number of primes. You do it by taking the sets N_(a,b) = a + bZ (b ≠ 0) as the base for your topology. These sets N_(a,b) are all clopen. It can be shown that all open sets in this topology are either infinite or empty. Now Z\{1,-1} = the union over primes p of N_(0,p). If there were a finite number of primes, this would be a finite union of closed sets, so Z\{1,-1} would be closed and {1,-1} would be open, but that contradicts "all open sets in this topology are either infinite or empty".
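The covering step, Z\{1,-1} = union over primes p of N_(0,p) = pZ, can be sanity-checked numerically on a finite window (a small sketch of my own; note 0 lies in every pZ, and +/-1 lie in none):

```python
def small_primes(limit):
    """Primes up to limit by trial division."""
    return [p for p in range(2, limit + 1)
            if all(p % q for q in range(2, p))]

primes = small_primes(100)

# Every integer in [-100, 100] other than 1 and -1 is divisible
# by some prime (any prime factor of |n| is at most 100 here).
for n in range(-100, 101):
    covered = any(n % p == 0 for p in primes)
    assert covered == (n not in (1, -1))
print("covered")
```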
 
No, I mean that specific one point compactification of R that Dutchfire mentioned.

Metric topology is of course essential. And there's nothing bad about general topology either, but I just found it boring. One reason for that boringness is that exotic topologies feel artificial to me. Also I'm more into analysis. Topology without metric is too algebra-like for my taste.

Well, defining lim_{x -> infinity} f(x) gets quite messy in the metric sense, while it's quite clean topologically. That's true for many things done using topology. You don't need all those epsilons and deltas.
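For example (my phrasing of the two definitions of f(x) -> L as x -> infinity):

```latex
% Metric / epsilon style on the reals:
\forall \varepsilon > 0 \;\exists K \;\forall x > K : \; |f(x) - L| < \varepsilon

% Topological style on the one-point compactification:
% f extends continuously to \infty with f(\infty) = L, i.e. for every
% neighbourhood V of L, the preimage f^{-1}(V) contains a
% neighbourhood of \infty (a set whose complement is compact).
```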
 