Let's discuss Mathematics

Well it seems obvious that with a long enough string of 1s and 0s there's gonna be another integer that N multiplies by to give that giant string. Like you can just say, take any arbitrary string of 1s, e.g. 111111111111111111111111111111111111111111111111, and then keep multiplying it by 10 until it divides by N. I bet eventually you'll find one.
 
Nah, cos multiplying by 10 only introduces extra factors 2 and 5.
 
Okay, how about this. Let's say there's a function f(n) that generates your giant strings of 1s of length n. There exist n1, n2 such that f(n1) = N*k1 + remainder, f(n2) = N*k2 + remainder, where the k's and the (shared) remainder are integers. If you do f(n1) - f(n2), you get N*(k1-k2) with no remainder. Et voila!
 

I'll accept that if you can prove the bolded statement ;)

If remainder comes out as zero, obviously f(x) is an answer, e.g. f(6) = 111111 = 7*15873 and f(9) = 111111111 = 9 * 12345679, good.

If we try that with N = 2 we get f(2) = 11 = 2*5 + 1, f(3) = 111 = 2*55 + 1

and f(3) - f(2) = 111 - 11 = 100 = 2*50

Looks good so far.

EDIT: Yep, looks correct, since after considering N+1 strings we will have at most N distinct remainders, so two must be equal by the pigeonhole principle.

Woot Mise!

EDIT2: The max number of strings we must consider is N+1:

strings f(1) to f(N+1)

since among N+1 strings, two must leave the same remainder when divided by N.

EDIT3: This applies to any base as well. Nice question Petek!
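The construction above is easy to run directly. Here's a minimal Python sketch (the helper names `ones` and `zero_one_multiple` are made up for illustration):

```python
def ones(n):
    # f(n): the string of n ones, read as an integer.
    return int("1" * n)

def zero_one_multiple(N):
    # Pigeonhole search: remember the first length that achieves each
    # remainder mod N. Either some f(n) is divisible by N outright, or
    # two lengths share a remainder, and their difference (a string of
    # 1s followed by 0s) is the multiple we want.
    seen = {}
    n = 1
    while True:
        r = ones(n) % N
        if r == 0:
            return ones(n)
        if r in seen:
            return ones(n) - ones(seen[r])
        seen[r] = n
        n += 1
```

For example, zero_one_multiple(2) returns 10 (= f(2) - f(1)) and zero_one_multiple(7) returns 111111, and the pigeonhole argument guarantees the loop stops within N+1 iterations.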
 
Looks like the above proof can be modified to work for strings made of any unit (an element which has an inverse) in any Euclidean domain (i.e. where a division algorithm exists).

So there is a way to express an integer multiple of any Gaussian integer (x+iy for x, y integers) in the form of a string of 1s and 0s, plus a string of 1s and 0s times i (with a possible factor of -1 in the real and/or imaginary part). You need to consider more strings, since the number of possible remainders gets squared (one remainder for the real part and one for the imaginary part) before the pigeonhole principle kicks in.
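A brute-force sanity check of the Gaussian-integer claim, sketched in Python (`find_multiple` and `zero_one_ints` are hypothetical helpers, and the search range is arbitrary):

```python
from itertools import product

def zero_one_ints(max_digits):
    # 0 plus all integers whose decimal digits are only 0s and 1s:
    # 0, 1, 10, 11, 100, ... (binary strings reread in base 10).
    return [0] + [int(format(n, "b")) for n in range(1, 2 ** max_digits)]

def find_multiple(x, y, max_digits=6):
    # Look for a Gaussian integer re + im*i, with re and im strings of
    # 0s and 1s up to sign, that is divisible by x + y*i.
    norm = x * x + y * y
    for a, b in product(zero_one_ints(max_digits), repeat=2):
        if a == 0 and b == 0:
            continue
        for s, t in product((1, -1), repeat=2):
            re, im = s * a, t * b
            # (re + im*i)/(x + y*i) is a Gaussian integer iff both parts
            # of (re + im*i)(x - y*i) are divisible by x^2 + y^2.
            if (re * x + im * y) % norm == 0 and (im * x - re * y) % norm == 0:
                return re, im
    return None
```

For example, find_multiple(1, 2) returns (0, 10), i.e. 10i = (1 + 2i)(4 + 2i).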
 
Good work! I know a second proof that relies on Euler's Theorem. Here's how it works if N = p is prime. I'll leave it to anyone who's interested to work out the details for the general case.

We can suppose that p =/= 2, 3, 5 since those cases are obvious. Then, by Fermat's Little Theorem, we have

10^(p-1) == 1 (mod p)

Thus, p|10^(p-1) - 1 = 999...9 = 9*(111...1)

and so p|111...1 since p =/= 3.
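The prime case is easy to check numerically; a quick Python sketch of the argument:

```python
# Sanity check: for primes p other than 2, 3, 5, Fermat's little theorem
# gives 10^(p-1) == 1 (mod p), so p divides the repunit with p-1 ones.
for p in [7, 11, 13, 17, 19, 23, 29]:
    assert pow(10, p - 1, p) == 1          # Fermat's little theorem
    repunit = (10 ** (p - 1) - 1) // 9     # 111...1 with p-1 ones
    assert repunit % p == 0
```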
 
Spoiler :
I say YES! If a proof works, then it is at least somewhat good. If proof A and proof B prove the same theorem, and A has fewer steps than B and has more readily thought hypotheses, then A comes out more good than B. All proofs are good, it's only a matter of degree. There does exist an aesthetic appeal to so-called "brute force" approaches to proofs. It comes as much the same as the aesthetics of brick-laying or meditation or ritual where one does the same things over and over and over again. Wanting all proofs to come as short, elegant, and "intuitive" to professional mathematicians, comes as analogous to wanting all people to think well quickly, speak in beautiful words, and simply.

But, not only does that come as unrealistic, if that ever were to happen there wouldn't exist any possibility (or much less possibility) of creativity in seeing new ways of understanding and appreciating language. In the same way, if no brute force proofs existed, then there wouldn't exist the possibility of new proofs emerging, the possibility of simplification of proofs, and there would exist fewer ways in which proofs can work. Proofs would exist in a less diverse context. Considering that diversity in the natural world produces and allows for far more beauty and possibility, if we had the choice to do so, why would we want to limit the diversity of how proofs may appear? If you enjoy listening to Mozart, Bach, and Beethoven, do you want only those composers' music ever to get heard and all music sheets of Duke Ellington, The Beatles, Queen, Count Basie, Berlioz, Scott Joplin, Arnold Schoenberg, and Weird Al Yankovic to get burnt?

It would be one thing if "brute force" proofs forced everyone to always use them, or we all had to constantly get subjected to them, like people who don't know how to shut their music off in the interest of courtesy. However, has this happened with brute force proofs? I don't think so. And barring something like that, rejecting brute force techniques in proofs as "good" comes as no different than not liking Edgar Allan Poe's "The Raven" or Coleridge's "The Rime of the Ancient Mariner" or Shakespeare's Hamlet, because they evoke some dark emotion in you. All of those are art despite their lack of prettiness. And proofs like those of the four-colour theorem and the brute-force techniques used in the referenced link have an aesthetic quality to them despite some mathematicians' personal, a priori, conceptions of "elegance". If Mathematics were conscious, it would feel insulted at people who call such brute force techniques "bad". For if Mathematics were conscious, it surely comes as large enough in scope to end up beyond the "good and bad" or "good and evil" of the mathematicians who don't like those techniques.

If we could choose to do so, why not perceive Mathematics in its True form, and see its Beauty in how it is, in itself?

I agree with the gist of what you're saying. One comment I have:

If a proof works, then it is at least somewhat good. If proof A and proof B prove the same theorem, and A has fewer steps than B and has more readily thought hypotheses, then A comes out more good than B. All proofs are good, it's only a matter of degree.

It is probably quite difficult to actually determine how many steps a (non-formalized) proof really has. It can always be faked: put the whole argument except the last few simple steps into a main lemma, then derive the theorem from the lemma in only a few additional steps. Short proof?

To give a less forced example, from formal logic, the compactness theorem:

If every finite subset of a given set of sentences A has a model, then A has a model.

(Trivial of course if A is finite.)

Proof 1: uses the completeness and correctness (soundness) theorems. Suppose not; then by the completeness theorem there is a formal proof of a contradiction from A. But that proof can only use finitely many premises from A, and then by the correctness theorem these finitely many premises from A don't have a model.

Proof 2: from the stipulated models of the finite subsets of A, we use an ultraproduct construction to create a model of all of A.

Proof 1 is short but fairly uninformative and relies heavily on 2 other theorems.
Proof 2 is long but direct, maybe not constructive in the strict sense as it does use infinite constructions, but constructive in spirit it is.

In this case, I go for the long proof, as it is direct, elegant, and way more informative.
 
Hey all, I posted a question in the General discussion forum of Civilization IV, but I think you would actually be the best to answer it. The link is here, and I would really like the answer... any chance you could help me out?

Also, as an introduction, I'm a freshman college student taking Multivariable calculus, and am attempting a mathematics major. So... hi!
 

I haven't played cIV for a while, but I'm sure you could find threads on those questions in the Strategy and Tips forum, probably in the Articles sub-forum.
 
PieceOfMind is the man who knows everything about battle odds, so I'd PM him (he wrote the advanced combat odds mod which is in BUG and BUFFY).

For trade routes, I have seen the formula as C++ code in a thread somewhere; you could try doing a search. I can read C++ if you do find the code for it ;)
 

http://www.civfanatics.com/civ4/strategy/combat_explained.php explains the mechanics. Sounds accurate to me. Work out the initial values, run the combat rounds until one unit is dead or retreats.

Generating the probabilities of the various outcomes for a battle (who wins, who retreats, how much strength remains) is trivial, just takes some basic calculation. Which is what any combat odds mod will be doing.

http://www.civfanatics.com/civ4/strategy/trade_routes.php has trade route mechanics, I have no idea how accurate or well tested that one is.
 
A string consists of a sequence of letters from some alphabet. For logical connectives (that is, functions, or functors if you prefer), there exists an ingenious technique invented by Lukasiewicz which allows us to write p^q (p AND q), pvq (p OR q), etc. in a simpler manner. We can use capital letters to represent logical functors and lower case letters to represent truth values. Given that we have the truth set {T, F}, we use K to stand for the logical conjunction functor with two arguments, Kxy; N for negation with one argument, Nx; C for the if-then material conditional with two arguments, Cxy; D for logical disjunction with two arguments, Dxy; and E for logical equivalence with two arguments, Exy. We can then write all logical statements in a uniform alphabet. E.g. Kab (or K ab, if you find that clearer) indicates the conjunction of a and b. NCab indicates the negation of "if a, then b". We can also write, in suffix notation, abcCD for a statement like "a or (b implies c)". Every formal proof can therefore get written in three different ways (prefix, infix, and suffix), or every proof can get triplicated, one might say.

If we assume that closure holds, we can use a letter to represent an expression like Pxy (or Pxy=a), which represents a truth value T or F, which can in turn get represented by a letter. So, an expression like C Cxy Cxz can get checked as a well-formed form in this way:
C Cxy Cxz = C Cxy v, where v = Cxz
C Cxy v = C u v, where u = Cxy
C u v = r, and therefore we've proven CCxyCxz a well-formed form.

Suppose that we have {K, C, D, E, N} as our set of logical functors, and {t, f} as our set of truth values, giving 7 letters in all. The "words" of one letter which are well-formed forms are just t and f, so we have 2 out of 7 possible letters. The well-formed forms of two letters all have the algebraic form Nx, and thus we have 2 well-formed forms out of 7^2 = 49 possible concatenations of letters. For "words" with three letters we have 4 possibilities (K, C, D, or E) for the first place and 2^2 possibilities for truth value combinations in places two and three, plus the two forms NNt and NNf, thus giving us 18 well-formed forms out of 7^3 = 343 "words" in such an alphabet. How many well-formed forms are there for "words" with 5 places? How many for words with 6 places? How many for words with 7 places?
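One way to count these mechanically: under the convention that t and f alone already count as well-formed forms (and noting that forms like NNt qualify too), a brute-force Python counter (`is_wff` and `count_wffs` are hypothetical helpers):

```python
from itertools import product

BINARY = "KCDE"  # two-argument functors
ATOMS = "tf"     # truth values; N is the one-argument functor

def is_wff(word):
    # Standard check for Lukasiewicz prefix notation: scan left to right,
    # tracking how many complete expressions are still needed. A binary
    # functor needs one more, an atom supplies one, N changes nothing.
    need = 1
    for ch in word:
        if need == 0:
            return False  # word continues past a complete formula
        if ch in BINARY:
            need += 1
        elif ch in ATOMS:
            need -= 1
        elif ch != "N":
            return False  # unknown symbol
    return need == 0

def count_wffs(length):
    alphabet = BINARY + "N" + ATOMS
    return sum(is_wff("".join(w)) for w in product(alphabet, repeat=length))
```

This gives 2 well-formed words of length 1 (t and f), 2 of length 2 (Nt and Nf), and 18 of length 3 (NNt, NNf plus the 16 binary combinations); lengths 5, 6, and 7 are left to run for yourself.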
 
Could someone please tell me what tensors, quaternions, and octonions really are?
 
Tensors are a generalisation of matrices.

A vector is a 1-tensor, a matrix is a 2-tensor, and a 3D matrix is a 3-tensor, etc. The tensor product is a generalisation of matrix multiplication to higher dimensions than 2. I think that is the case anyway, it's more a physics/applied maths thing than what I know about (mainly pure maths).

A quaternion is an extension of complex numbers into 4 dimensions. Instead of having 1 principal root of -1, you have 3, i, j and k (-i, -j and -k are also roots).

However, the roots of -1 are also related by the following formulae discovered by Hamilton (who invented them)

i^2 = j^2 = k^2 = ijk = -1

In general quaternions are not commutative, so order of multiplication is important. We can use the above equation to derive

ij = k, ji = -k
jk = i, kj = -i
ki = j, ik = -j

from which we get the modern vector cross product.

Every non zero quaternion has an inverse, let q = w + xi + yj + zk

then define q* = w - xi - yj - zk, the conjugate of q

Then q^-1 = q*/|q|^2

where |q| = sqrt(w^2 + x^2 + y^2 + z^2)

similar to complex numbers.

This makes quaternions a division ring (every non-zero element has an inverse), but not a field (since they don't commute).

They are useful because unit quaternions (i.e. length 1) can be used to express any 3D rotation and multiplying them results in concatenation of rotations. In a sense they don't commute in exactly the same way that rotations don't ;) EDIT: Also, they only need 4 numbers to be stored to express them so are more compact for computers than a 3x3 matrix.



Octonions are like quaternions, but have 7 distinct principal roots of -1, and furthermore are not associative as well as being non-commutative. They also have zero divisors if I recall correctly (i.e. ab = 0 but neither a nor b is 0). This makes them complicated beasts indeed and I think they have little use anyway outside of being an interesting algebraic structure.

EDIT: I think octonions don't have zero-divisors (they form a division algebra), but sedenions (16-dimensional) do have them ;)
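The relations above are easy to play with in code. A minimal Python sketch (this Quaternion class is made up for illustration, not from any library):

```python
class Quaternion:
    def __init__(self, w, x, y, z):
        self.w, self.x, self.y, self.z = w, x, y, z

    def __mul__(self, o):
        # Expand (w + xi + yj + zk)(w' + x'i + y'j + z'k) using
        # ij = k, jk = i, ki = j and i^2 = j^2 = k^2 = -1.
        return Quaternion(
            self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
            self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
            self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
            self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w,
        )

    def conjugate(self):
        return Quaternion(self.w, -self.x, -self.y, -self.z)

    def norm_sq(self):
        return self.w**2 + self.x**2 + self.y**2 + self.z**2

    def inverse(self):
        # q^-1 = q*/|q|^2
        c, n = self.conjugate(), self.norm_sq()
        return Quaternion(c.w / n, c.x / n, c.y / n, c.z / n)

    def __eq__(self, o):
        return (self.w, self.x, self.y, self.z) == (o.w, o.x, o.y, o.z)

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)
```

Here i * j gives k while j * i gives -k, and q * q.inverse() recovers 1 up to floating-point error.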
 
Yep lots of hot totty on my maths course.

The blokes were a bit ugly though ;)
 
A quaternion is an extension of complex numbers into 4 dimensions. Instead of having 1 principal root of -1, you have 3, i, j and k (-i, -j and -k are also roots).

However, the roots of -1 are also related by the following formulae discovered by Hamilton (who invented them)

i^2 = j^2 = k^2 = ijk = -1

In general quaternions are not commutative, so order of multiplication is important. We can use the above equation to derive

ij = k, ji = -k
jk = i, kj = -i
ki = j, ik = -j

Admittedly, one can introduce a table which makes this so, and it's a great and useful hypothesis. But this isn't as straightforward to show, in my opinion, as it seems.

ijk=-1, so ijkk=-1k. This comes as valid by introducing k on both sides (hypothesis introduction).
ij(-1)=-1k, since kk=k^2=-1 by definition and the leading hypothesis.
Now, how do we eliminate -1 on both sides? We could introduce x(-1)=-1x as a definition. This isn't full commutativity, but it can look like it. There's another way which I think makes it clearer that commutativity doesn't get assumed. If we have x(-1) or -1x, we can just regard it as x- or -x, where - indicates a unary operation (that is, a function). We know from logic that if ~a=~b, then a=b, and likewise in suffix notation, if a~=b~, then a=b. We can treat -1, or 1- in other notation, also as a unary function -, so ij(-1)=-1k implies ij-=-k, which implies ij=k. Here it seems clear that commutativity doesn't get used, and we still get the same result.

ijk=-1, so iijk=i(-1) by left multiplication. So, then (-1)jk=i(-1), -jk=i-, from which we get jk=i.
ii=-1, so iij=-1j by right multiplication. Since ij=k, ik=-j.
jj=-1, so jjk=-1k by right multiplication. Since jk=i, ji=-k.
ii=-1, so jii=j(-1) by left multiplication. Since ji=-k, -ki=j(-1)=j-=-j. Since -ki=-j, ki=j.
jj=-1, so ijj=i(-1) by left multiplication. Since ij=k, kj=i(-1)=i-=-i.
 
-1 is real and hence commutes is the way to resolve that I think.
 
-1 is real and hence commutes is the way to resolve that I think.

Huh? If -1 is real and commutes, then we have commutation on a single element of the reals. Maybe someone has given us a definition of commutativity for which that makes sense, but usually commutativity gets taken as a property of a binary operation on pairs of elements. I think you want to say that for any pair (-1, a), the pair commutes. That is, for our operation *,
*(-1)a = *a(-1). This does hold true. But by definition, * does not commute here in general.

If x=y, then x*z=y*z is the right multiplication rule in infix notation. So, if x=y, then *xz=*yz is the right multiplication rule in Lukasiewicz prefix notation. And, if x=y, then xz*=yz* is the right multiplication rule in Lukasiewicz suffix notation.

If x=y, then z*x=z*y in infix. If x=y, *zx=*zy in prefix. If x=y, then zx*=zy* in suffix. Those give us the left multiplication rules.

Define *(-1)k as -k, *k(-1)=k-, and (-1)k*=-k, k(-1)*=k-.

*ii=*jj=*kk=**ijk=-1.
***iijk=*i(-1) by left multiplication in Lukasiewicz prefix notation. So, **(-1)jk=*i(-1). So, -*jk=i-=-i. So, *jk=i.

*kk=-1, so **jkk=*j(-1). Thus, since *jk=i, *ik=*j(-1)=j-=-j.

**ijk=*i*jk since association holds. Since **ijk=-1, ***ijkk=*(-1)k.
So, **i*jkk=*(-1)k. So, *i*j*kk=(-1)k. So, *i*j(-1)=(-1)k. So *ij-=-k. Thus, *ij=k.

*jj=-1, so **ijj=*i(-1). So, *kj=i-=-i.

*kk=-1, so **kkj=*(-1)j by right multiplication. Thus, *k*kj=*(-1)j. So, since *kj=-i, *k(-i)=*(-1)j. Thus, *ki=j.

*ii=-1. So, **kii=*k(-1). Since *ki=j, **kii=*ji=k-=-k.

Anyone want to re-derive these in suffix Lukasiewicz notation?
 
All reals commute with quaternions. This could be a circular argument though, relying on the definition for quaternion multiplication that relies on the rules for the base elements 1, i, j, k

EDIT: Also, all quaternions with only one imaginary unit commute with each other too, since then it's an embedding of C in H.

EDIT2: H being the ring of quaternions (named after Hamilton).

I prefer the complex 2x2 matrix definition of quaternions myself

where 1 is represented as

[ 1 0 ]
[ 0 1 ]

and the 3 roots of -1 are represented by

[ i 0 ]
[ 0 -i]

[ 0 1 ]
[-1 0 ]

and

[ 0 i ]
[ i 0 ]
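A quick check, in Python, that these matrices really do behave like 1, i, j, k (plain nested lists; `mat_mul` is a hypothetical helper):

```python
# Verify the 2x2 complex-matrix representation: each claimed root of -1
# squares to -1, and Hamilton's relation ijk = -1 holds.
def mat_mul(A, B):
    return [[sum(A[r][m] * B[m][c] for m in range(2)) for c in range(2)]
            for r in range(2)]

NEG_ONE = [[-1, 0], [0, -1]]
I = [[1j, 0], [0, -1j]]
J = [[0, 1], [-1, 0]]
K = [[0, 1j], [1j, 0]]

assert mat_mul(I, I) == NEG_ONE
assert mat_mul(J, J) == NEG_ONE
assert mat_mul(K, K) == NEG_ONE
assert mat_mul(mat_mul(I, J), K) == NEG_ONE          # ijk = -1
assert mat_mul(I, J) == K                            # ij = k
assert mat_mul(J, I) == [[0, -1j], [-1j, 0]]         # ji = -k
```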
 