Let's discuss Mathematics

Nope, I had a night off ;)

I haven't been thinking about the question much, been playing scrabble and civ ;) I may have a ponder again.

Have you got a proof that (p.2^n)-1 where p is odd can represent any odd number? (Goes to look at Mathworld in the meantime)
 
I didn't, because it was just obvious. But it should be pretty easy to do...

Take any odd number, call it x.

x+1 is even. The prime factorisation of x+1 will be just 2^n, or 2^n times some odd primes. If it's just 2^n, p=1. If there are odd primes, then p = their product, which is odd. So p.2^n = x+1, where p is odd, so (p.2^n)-1 = x
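
The argument above can be turned into a little sketch: strip the factors of 2 out of x+1 and whatever is left is the odd p. (The function name `decompose` is just for illustration, not from anywhere.)

```python
def decompose(x):
    """Write an odd x as p * 2**n - 1 with p odd, by factoring
    the powers of two out of x + 1, as in the argument above."""
    assert x % 2 == 1
    m = x + 1          # even, since x is odd
    n = 0
    while m % 2 == 0:  # strip factors of 2
        m //= 2
        n += 1
    p = m              # whatever remains is odd
    assert p % 2 == 1 and p * 2**n - 1 == x
    return p, n

# e.g. 11 = 3 * 2^2 - 1, and every odd number up to 99 decomposes:
for x in range(1, 100, 2):
    decompose(x)
```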
 
OK, gotcha. Lack of alcohol is affecting my ability to think maths today ;)
 
i was totally disappointed when i found out the taylor expansion of the terms in the integral from -inf to inf of e^(-x^2) dx don't obviously sum to pi. well, i don't think they do; they all seem divergent to me. i'm a physicist. sue me. :[
 
Well, elaborate.

I don't think it's trivial to show that the sum is pi (I think it's sqrt(2pi) too? drunk though).
 
well, i guess the limits will be implicit in most of this. can't latex here, i think. and you're right, it is root pi!

(int e^(-x^2) dx)^2 = int int (e^(-x^2))(e^(-y^2)) dx dy = int int e^(-(x^2 + y^2)) dx dy

change to polar coordinates

= int from 0 to 2pi (d phi) * int from 0 to inf (r e^(-r^2) dr) = -pi e^(-r^2) | from 0 to inf = pi
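
A crude numerical check of the end result (a rectangle-rule sum over a wide truncated interval; the step size and cutoff are just picked to make the tails negligible):

```python
import math

# Sum e^(-x^2) over [-10, 10] in steps of 0.001; the tails beyond
# +-10 are about e^(-100), so this approximates the whole real line.
dx = 0.001
total = sum(math.exp(-(k * dx) ** 2) for k in range(-10000, 10001)) * dx
print(total, math.sqrt(math.pi))  # both about 1.7724...
```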

(disclaimer: im not totally sober either so im not guaranteeing that isnt nonsense...)

so the original integral is equal to the square root of that. but what i mean is, i don't THINK the sum is obviously root pi if you taylor expand the e^(-x^2) in the integral and evaluate the terms directly: each one's divergent! but again, i could be wrong there. i just always find it funny how an apparently divergent series can equal something finite.
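
Concretely: the Taylor terms are (-1)^n x^(2n)/n!, and integrating the nth one over (-A, A) gives (-1)^n 2 A^(2n+1)/((2n+1) n!), which blows up as A grows for every fixed n. A quick sketch (the function name is made up for illustration):

```python
import math

# Each Taylor term of e^(-x^2), integrated over (-A, A), is
#   (-1)^n * 2 * A**(2n+1) / ((2n+1) * n!),
# which is unbounded in A for any fixed n -- so no single term has a
# finite integral over the whole real line, even though the full
# integral is sqrt(pi).
def term_integral(n, A):
    return (-1) ** n * 2 * A ** (2 * n + 1) / ((2 * n + 1) * math.factorial(n))

for A in (10, 100, 1000):
    print(A, term_integral(2, A))  # grows like A**5
```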

but i'm not such a great mathematician, so i could be totally wrong there. it's also probably a silly sort of thing to expect. but i was a bit miffed, because i thought it would be such an easy way of computing pi
 
feynman's still my hero :p not technically a mathematician... but c'mon!
 
Euler is the best! Gauss and Newton joint second. IMHO. I forgot to put Ramanujan in my poll of greatest mathematicians so I am repenting.
 
All algebras with a t-norm T and a t-conorm S, when restricted to {0, 1}, are Boolean algebras. If anyone around here seriously has an interest in AI, I suggest you learn what that means. It relates to an AI approach (fuzzy logic) which has had at least some success.

There exist different, equivalent axiomatisations of a Boolean algebra, but I prefer the definition which J. Eldon Whitesitt uses in Boolean Algebra and Its Applications, which he says E. V. Huntington gave in 1904, and which I've modified to fit here.

Definition of a Boolean Algebra: A set (or class) B of elements with two binary operations ^, v (i. e. functions with two arguments... f(x, y)=z) satisfies the following axioms:
B1: a ^ b=b ^ a, a v b=b v a
B2: There exist distinct elements 0, and 1 belonging to B such that a ^ 1=a, a v 0=a.
B3: a ^ (b v c)= (a ^ b) v (a ^ c), a v (b ^ c)=(a v b) ^ (a v c)
B4: For all elements "a" belonging to B, there exists an element a' in B such that a ^ a'=0, a v a'=1.

Definition of a T-norm: Let [0, 1] denote the interval of all real numbers between 0 and 1, including 0 and 1. Let a, b, c denote real numbers belonging to [0, 1]. A T-norm is a binary operation which takes both of its arguments from [0, 1] and returns a number in [0, 1], and which satisfies the following axioms:
T1: T(a, 1)=a... or a T 1=a if you prefer that notation.
T2: If a<=b, then T(a, c)<=T(b, c).
T3: T(a, b)=T(b, a)
T4: T(a, T(b, c))=T(T(a, b), c)

Definition of an S-norm (t-conorm): Let [0, 1] denote the interval of all real numbers between 0 and 1, including 0 and 1. Let a, b, c denote real numbers belonging to [0, 1]. An S-norm is a binary operation which takes both of its arguments from [0, 1] and returns a number in [0, 1], and which satisfies the following axioms:
S1: S(a, 0)=a.
S2: If a<=b, then S(a, c)<=S(b, c).
S3: S(a, b)=S(b, a)
S4: S(a, S(b, c))=S(S(a, b), c).

So, say we have two functions which satisfy the axioms for a T-norm and an S-norm, and we restrict those functions to {0, 1} for their inputs and outputs instead of [0, 1]. Anyone care to show that if we have {0, 1} as the set B, T as ^, and S as v, then ({0, 1}, T, S) satisfies B1-B4?
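
For anyone who wants to see it mechanically: min is a standard t-norm and max its dual t-conorm, and a brute-force check over {0, 1} (a sketch, with a' taken to be 1 - a) confirms B1-B4:

```python
from itertools import product

T = min   # min is a standard t-norm
S = max   # max is the dual t-conorm
B = (0, 1)

# B1: commutativity
assert all(T(a, b) == T(b, a) and S(a, b) == S(b, a)
           for a, b in product(B, B))
# B2: identities (1 for T, 0 for S)
assert all(T(a, 1) == a and S(a, 0) == a for a in B)
# B3: each operation distributes over the other
assert all(T(a, S(b, c)) == S(T(a, b), T(a, c)) and
           S(a, T(b, c)) == T(S(a, b), S(a, c))
           for a, b, c in product(B, B, B))
# B4: complements -- a' = 1 - a works on {0, 1}
assert all(T(a, 1 - a) == 0 and S(a, 1 - a) == 1 for a in B)
```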
 
If it's a definition and axioms I'd imagine it can't be proven. Axioms are the minimal set of things you need to assume to derive all the needed results.
 
I thought I'd wait until someone tried to demonstrate this before supplying what I've got, but since you said this, I may as well do it now. Sometimes, if not often, in an axiomatic system it is possible to derive the axioms from a different, smaller set of axioms. For example, if we adapt the first definition used on the Wolfram site to define a Boolean Algebra to fit what we had above, we would have three more axioms:
B5: a^(b^c)=(a^b)^c, a v (b v c)=(a v b) v c. (associativity)
B6: a^(a v b)=a, a v (a ^ b)=a. (absorption)
B7: a^a=a, a v a=a (idempotence).

These are all derivable from B1-B4, as Whitesitt's book shows, as a few Schaum's Outlines show for some or all of B5-B7 (Boolean Algebra, Abstract Algebra, Set Theory and Related Topics, Discrete Mathematics), and as the Wolfram page cited above implies.

One could also state a^1=1^a in B2, but this is derivable given a^b=b^a. On to the demonstration.

If T(a, b)=T(b, a), and we replace T with ^, then a^b=b^a. If S(a, b)=S(b, a), and we replace S by v, then a v b=b v a. So, B1 holds for ({0, 1}, T, S) or ({0, 1}, ^, v).

0 and 1 are distinct in [0, 1], so they are distinct in {0, 1} also. If T(a, 1)=a, and we replace T with ^, we have a^1=a. If S(a, 0)=a, and we replace S with v, we have a v 0=a. So, B2 holds.

Since a, b, and c each represent any element of {0, 1}, and we have a^1=a, we have 0^1=0; by B1 and the symmetric property of equality, 1^0=0 as well. So with respect to ^, the a' element of 1 is 0 (since 1^0=0), and the a' element of 0 is 1 (since 0^1=0). Since a v 0=a, we have 1 v 0=1, which again gives 1'=0; and since a v b=b v a, we have 0 v 1=1 v 0=1, which again gives 0'=1. So the a' elements match for both operations: 1^0=0 and 1 v 0=1, and 0^1=0 and 0 v 1=1, and B4 holds.

I only know how to check B3 on a case-by-case basis.

In the above paragraph we derived that 0=0^1=1^0, and 1=1 v 0=0 v 1. Since a^1=a, we also have 1^1=1. Since a v 0=a, we also have 0 v 0=0. {0, 1} has only two elements, so for each operation ^ or v (T or S), there are four possible inputs (0, 0), (0, 1), (1, 0), (1, 1), and two possible outputs 0 and 1. We've already determined the outputs for three inputs of ^, namely 0^1=1^0=0 and 1^1=1, and for three inputs of v, namely 1 v 0=0 v 1=1 and 0 v 0=0. So we only need to determine the output of 0^0 and the output of 1 v 1.

0<=1. So, T(0, 0)<=T(1, 0) by T2. T(1, 0)=0, so T(0, 0)<=0. Since T only gives outputs in {0, 1}, we then have T(0, 0)=0. Thus replacing accordingly we have 0^0=0. Also since 0<=1, S(0, 1)<=S(1, 1). S(0, 1)=1, so 1<=S(1, 1). Consequently, S(1, 1)=1, and by replacement of operations we have 1 v 1=1.

Since we have two elements, and three variables in B3 in each equation, we can now check all 2^3 cases for each equation.
0^(0 v 0)=0^0=0. (0^0) v (0^0)=0 v 0=0.
0^(0 v 1)=0^1=0. (0^0) v (0^1)=0 v 0=0.
0^(1 v 0)=0^1=0. (0^1) v (0^0)=0 v 0=0.
0^(1 v 1)=0^1=0. (0^1) v (0^1)=0 v 0=0.
1^(0 v 0)=1^0=0. (1^0) v (1^0)=0 v 0=0.
1^(0 v 1)=1^1=1. (1^0) v (1^1)=0 v 1=1.
1^(1 v 0)=1^1=1. (1^1) v (1^0)=1 v 0=1.
1^(1 v 1)=1^1=1. (1^1) v (1^1)=1 v 1=1.
So, a^(b v c)=(a^b) v (a^c).
0 v (0^0)=0 v 0=0. (0 v 0) ^ (0 v 0)=0^0=0.
0 v (0^1)=0 v 0=0. (0 v 0) ^ (0 v 1)=0^1=0.
0 v (1^0)=0 v 0=0. (0 v 1) ^ (0 v 0)=1^0=0.
0 v (1^1)=0 v 1=1. (0 v 1) ^ (0 v 1)=1^1=1.
1 v (0^0)=1 v 0=1. (1 v 0) ^ (1 v 0)=1^1=1.
1 v (0^1)=1 v 0=1. (1 v 0) ^ (1 v 1)=1^1=1.
1 v (1^0)=1 v 0=1. (1 v 1) ^ (1 v 0)=1^1=1.
1 v (1^1)=1 v 1=1. (1 v 1) ^ (1 v 1)=1^1=1.
So, a v (b^c)=(a v b) ^ (a v c). Consequently, B3 holds.

Therefore, if we have {0, 1} as the set B, T as ^, and S as v, then ({0, 1}, T, S), or ({0, 1}, ^, v) satisfies B1-B4.
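
Note that nothing here depended on which t-norm we started from: any T and S satisfying T1-T4 and S1-S4 restrict to the same truth tables on {0, 1}. A quick sketch with a few standard t-norms (product and Lukasiewicz alongside min):

```python
# Standard examples of t-norms; all of them agree with min -- and
# hence with Boolean AND -- once restricted to inputs in {0, 1}.
t_norms = {
    'min':         min,
    'product':     lambda a, b: a * b,
    'lukasiewicz': lambda a, b: max(0, a + b - 1),
}
for name, T in t_norms.items():
    for a in (0, 1):
        for b in (0, 1):
            assert T(a, b) == (a and b), name
```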
 
Yeah, sometimes you can get away with fewer axioms and derive the others from them, but since MathWorld's primary purpose is to explain clearly, it probably helps to list some redundant ones.
 
Statistics question:

Hypothetical (sic): I'm testing a hypothesis (either single mean or difference of two means) and I use a confidence interval at some confidence coefficient (e.g. 1 - alpha) to support or reject the null hypothesis, and I end up supporting it by the confidence interval. Is that the same as saying I rejected or failed to reject the null hypothesis at the alpha% level of significance?

I'm thinking the answer is "no" just because the two equations (the confidence interval definition and the test statistic equation) are not identical (even though they are similar).

Am I right?
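
For the simplest case (a two-sided z-test with known sigma), the two procedures are actually equivalent: rejecting H0: mu = mu0 at level alpha is the same as mu0 falling outside the (1 - alpha) confidence interval. A sketch with made-up numbers (the function name and inputs are hypothetical):

```python
import math

# Duality of the two-sided z-test and the confidence interval:
# |(xbar - mu0)/se| > z_crit  iff  mu0 lies outside xbar +- z_crit*se.
def z_decisions(xbar, mu0, sigma, n, z_crit=1.96):  # z_crit ~ alpha = 0.05
    se = sigma / math.sqrt(n)
    ci = (xbar - z_crit * se, xbar + z_crit * se)
    reject_by_test = abs((xbar - mu0) / se) > z_crit
    reject_by_ci = not (ci[0] <= mu0 <= ci[1])
    return reject_by_test, reject_by_ci

# The two decisions agree for any sample mean:
for xbar in (9.0, 9.5, 10.0, 10.5):
    t, c = z_decisions(xbar, mu0=10.0, sigma=2.0, n=25)
    assert t == c
```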
 
Why does this thread only ever get bumped on Friday nights :(?

Dunno exactly what you're on about. I also don't know what you mean by the CI definition and the test statistic equation not being identical... the CI is just the statistic value where the area under the pdf is alpha, is it not?

Show your working anyway, might help.
 
It's disturbing how little I can follow in this thread, I only graduated a year ago! I'm just gonna take solace in the fact that there hasn't been anything in my studied areas for a few pages...
 