Let's discuss Mathematics

Separate for me. Vector calculus was also applied maths and linear algebra was in pure maths. First year linear algebra isn't hard though usually (just goes up to reducing matrices to echelon form and maybe Cayley-Hamilton theorem). We went over most of the same ground again in the 2nd year Linear Maths course.
 

It's a 2nd year subject, 2 hour lecture each week for each half of the subject, no practicals or tutorials. According to the course guide, it covers
knowledge of vector spaces, subspaces and their bases, linear transformations and their matrix representatives, the inner product on a vector space, eigenvalues and eigenvectors of a matrix and diagonalization of matrices, vector fields, space curves and surfaces, gradient, curl and divergence; multiple integrals, line integrals and surface integrals; Green, Gauss’ divergence and Stokes’ theorem.

Now, I'm not expecting it to be easy, but there are very few in the class who aren't struggling with it, and they're mostly the students who are repeating it. Hence I'm wondering whether it seems excessive.
 
no practicals or tutorials.

Does this mean you don't do exercises? Then that's the reason why people struggle with it.

Otherwise it depends on the depth of treatment and how many total lecture hours there are.

We had linear algebra and differential calculus in Euclidean spaces as separate courses: 48 hours of lectures and 12 exercise sets of 7-10 problems each. Then we also had a slightly shorter course on integral calculus in Euclidean spaces.

I'd think it's a good idea to let people digest things a bit before going further.
 
Does this mean you don't do exercises? Then that's the reason why people struggle with it.

We do some web-based questions, but these have the obvious downside of only caring that the answer is correct, and not worrying about the process. We also have 4 assignments in total, but they don't really count as exercises.

Otherwise it depends on the depth of treatment and how many total lecture hours there are.

Assuming no more interruptions to the schedule, we'll have roughly 45 hours of lectures, split between Lin Alg and Vector Calc. So roughly 22.5 hours each.

We had linear algebra and differential calculus in Euclidean spaces as separate courses: 48 hours of lectures and 12 exercise sets of 7-10 problems each. Then we also had a slightly shorter course on integral calculus in Euclidean spaces.

I'd think it's a good idea to let people digest things a bit before going further.

The students taking the class intend to recommend splitting them up in the future, because apparently we aren't allowed more than 5 contact hours per subject, so adding a practice class is infeasible. We're pretty unanimous in the opinion that there's something wrong with the structure of the course. The only question is what the heads of the department can actually do about it.
 
ParadigmShifter said:
Don't let dutchfire hear you say that. He's one of those weirdos who thinks 0 is a member of N.

If 0 is not a member of N, then at an abstract level (N, +) has less structure to it than (N, *) (+ indicates addition, * indicates multiplication). Suppose that instead of using the Peano axioms to look at the natural number system in a general context, we characterize basic properties of the natural numbers, much as abstract algebra often characterizes the integers as a ring, the real numbers as a field, etc.

Then, letting "@" indicate universal quantification, letting "!" indicate existential quantification, N denoting {1, 2, ...}, N' denoting {0, 1, 2, ...}, n denotes a neutral element, we have that (N, +) consists of a commutative semigroup. In other words (N, +) satisfies the axioms

1. @x@y (x+y)=(y+x)
2. @x@y@z (x+(y+z))=((x+y)+z)

But (N, *) consists of a commutative monoid. In other words, (N, *) satisfies the same axioms as (N, +) does abstractly, just with "*" instead of "+", but with one additional axiom:

1. @x@y (x*y)=(y*x)
2. @x@y@z (x*(y*z))=((x*y)*z)
3. @x!n (x*n)=x.

So, the basic or abstract structure of N under addition differs from that of N under multiplication. However, the structure of N' under addition and the structure of N' under multiplication don't differ abstractly. They both consist of commutative monoids. In other words, where ^ indicates a member of {*, +}, (N', +) and (N', *) both satisfy the axioms

1. @x@y (x^y)=(y^x)
2. @x@y@z (x^(y^z))=((x^y)^z)
3. @x!n (x^n)=x.

So, if one thinks that the natural numbers would preferably have the same structure under multiplication as they do under addition, then Dutchfire's perspective makes a lot of sense.

Edit: If one thinks that the natural numbers under addition and under multiplication have the same structure *up to a certain point*, then Dutchfire's perspective makes a lot of sense. The natural numbers with 0 under multiplication also have a nullifier, while the natural numbers under addition with 0 don't. In other words the natural numbers with 0 under multiplication satisfy
4. @x!m (m*x)=m, where "m" indicates a nullifier. This affects the order structure, since for (N', +), where z doesn't equal the neutral element, we have that

@x@y@z if xLy, then (x+z)L(y+z), where "L" indicates "less than". This doesn't hold for the natural numbers with 0 under multiplication, because of the nullifier zero. Instead we have
@x@y@z if xLEy, then (x*z)LE(y*z), where "LE" indicates "less than or equal to".
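Here's a quick finite-sample check of these structural claims in Python (a sketch of my own; a finite sample obviously proves nothing, but it shows the asymmetry between N and N'):

Code:
# Search small initial segments of N = {1, 2, ...} and N' = {0, 1, 2, ...}
# for neutral elements and nullifiers under + and *.
N_sample  = range(1, 10)
Np_sample = range(0, 10)

def neutrals(sample, op):
    return [n for n in sample if all(op(x, n) == x for x in sample)]

def nullifiers(sample, op):
    return [m for m in sample if all(op(m, x) == m for x in sample)]

add = lambda x, y: x + y
mul = lambda x, y: x * y

print(neutrals(N_sample, add))     # []  : (N, +) has no neutral element
print(neutrals(N_sample, mul))     # [1] : (N, *) has the neutral element 1
print(neutrals(Np_sample, add))    # [0] : (N', +) gains the neutral element 0
print(neutrals(Np_sample, mul))    # [1]
print(nullifiers(Np_sample, mul))  # [0] : 0 is a nullifier for *
print(nullifiers(Np_sample, add))  # []  : but + has no nullifier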

Here's an easy (in my opinion) problem:

Show that if we have a commutative structure with a nullifier, then the nullifier is unique. In other words, show that if the following formulas hold for a binary operation "^"

1. @x@y (x^y)=(y^x)
2. @x!m (x^m)=m,

then for any nullifiers m_1, m_2, we have m_1=m_2.
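A quick brute-force confirmation of the claim in Python, if anyone wants one (only a finite sanity check over a 3-element carrier, not a proof):

Code:
from itertools import product

elems = (0, 1, 2)
pairs = [(x, y) for x in elems for y in elems if x <= y]   # unordered pairs

count = 0
for values in product(elems, repeat=len(pairs)):
    table = dict(zip(pairs, values))
    op = lambda x, y: table[(x, y) if x <= y else (y, x)]  # commutative by construction (axiom 1)
    nullifiers = [m for m in elems if all(op(x, m) == m for x in elems)]  # axiom 2 candidates
    assert len(nullifiers) <= 1
    count += 1
print(count, "commutative operations checked; none has two distinct nullifiers")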
 
Interesting discussion. Are there some typos in the following part of your post?

Edit: If one thinks that the natural numbers under addition and under multiplication have the same structure *up to a certain point*, then Dutchfire's perspective makes a lot of sense. The natural numbers with 0 under multiplication also have a nullifier, while the natural numbers with 0 don't. In other words the natural numbers with 0 under multiplication satisfy
4 @x!m (m*x)=m where "m" indicates a nullifier.

The part that I bolded ("while the natural numbers with 0 don't") appears to be missing some words. Also, shouldn't axiom 4 read

4 @x!m (m*x) = x, where "m" indicates a nullifier

since the nullifier presumably leaves x unchanged under the operation?

Assuming that my version is correct, if m1 and m2 are nullifiers in a commutative structure, then

m1 = m2^m1 (since m2 is a nullifier)
= m1^m2 (commutativity)
= m2 (since m1 is a nullifier)

and so m1 = m2.

Historically, Peano defined the natural numbers and let 1 denote the element that was not the successor of any other number. Later, it was realized that it was more convenient to denote such an element by 0. Otherwise, you have to account somehow for 0 when extending the naturals to the integers. This is mentioned in the Wikipedia article on the Peano axioms: "Peano's original formulation of the axioms used 1 instead of 0 as the "first" natural number." I think von Neumann was the first to start with 0 instead of 1, but I'm not positive.
 
Whoops! I left out "under addition". Thanks Petek! I meant that (N', *) has a nullifier, while (N', +) does not. Axiom 4, @x!m (m*x)=m, on the other hand, comes as correct. The "nullifier" (maybe there exists some more standard term for this) is 0, since for all x, (0*x)=0. Actually, as stated, axiom 4 @x!m (m*x)=m just says that we have a left-nullifier. But, since commutativity holds, we have a right-nullifier [@x!m (x*m)=m] also. I would have called what you wrote

Petek said:
4 @x!m (m*x) = x, where "m" indicates a nullifier

since the nullifier presumably leaves x unchanged under the operation?

a "neutral" or "identity".

Your demonstration
Petek said:
Assuming that my version is correct, if m1 and m2 are nullifiers in a commutative structure, then

m1 = m2^m1 (since m2 is a nullifier)
= m1^m2 (commutativity)
= m2 (since m1 is a nullifier)

and so m1 = m2.

shows that if we have commutativity and a neutral (identity element) in a structure, then the neutral qualifies as unique. But, as I believe you know already, the identity element of any magma (or groupoid) whatsoever comes as unique, since two identities e_1 and e_2 satisfy e_1 = e_1^e_2 = e_2; neither the commutativity used in your demonstration nor the associativity of a monoid is actually needed.

From my understanding of logic, a proof of uniqueness has the upshot that we can just put the unique element(s) into the description of the structure and get rid of the existential quantifiers. For example, where N" denotes some set, and *" denotes some binary operation, for (N", *") with the axioms

1. @x@y (x*y)=(y*x)
2. @x@y@z (x*(y*z))=((x*y)*z)
3. @x!n (x*n)=x.
4. @x!m (m*x)=m.

since both "n" and "m" come as unique, we could just write (N", *", m, n) with the axioms

1. @x@y (x*y)=(y*x)
2. @x@y@z (x*(y*z))=((x*y)*z)
3. @x (x*n)=x
4. @x (m*x)=m.

So, for (N', *, 0, 1), where all lower case letters get understood as variables with universal quantifiers on them (or something equivalent, which I think universal algebra allows us to do), and putting all operations in prefix position, we could more compactly write that it satisfies the following axioms:

1. *xy=*yx
2. **xyz=*x*yz
3. *x1=x
4. *0x=0.
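As a quick sanity check (my own sketch, on a finite sample only), here are those four axioms tested directly for natural-number multiplication with the constants 0 and 1 named in the structure:

Code:
star = lambda x, y: x * y
m, n = 0, 1                      # nullifier and neutral, now named constants
sample = range(0, 20)

assert all(star(x, y) == star(y, x) for x in sample for y in sample)    # 1. *xy = *yx
assert all(star(x, star(y, z)) == star(star(x, y), z)
           for x in sample for y in sample for z in sample)             # 2. **xyz = *x*yz
assert all(star(x, n) == x for x in sample)                             # 3. *x1 = x
assert all(star(m, x) == m for x in sample)                             # 4. *0x = 0
print("axioms 1-4 hold on the sample")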
 
Fair point. Didn't think it through enough. Though I think it's still the case that if it converges to a, then x = e^((ln a)/a).

I did some more thinking about this. x = e^((ln a)/a) simplifies to x = a^(1/a). Graph that, and it has a turning point at a = e. Plug in larger values for a, and you'll still get a value of x for which the tower converges; it just won't converge to a. If x > e^(1/e), it won't converge.

Then I read
Petek said:
This section of a Wikipedia article discusses the convergence of x^(x^(x^...)). It converges for e^(-e) < x < e^(1/e).
and discovered I've been beaten to it. :lol:
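For anyone who wants to poke at it numerically, here's a quick iteration sketch (my own; the divergence cutoff is crude, but it illustrates the boundary at e^(1/e)):

Code:
import math

def tower(x, iterations=2000, cap=100.0):
    """Iterate t -> x**t starting from t = x; return the last value, or inf if it blows up."""
    t = x
    for _ in range(iterations):
        if t > cap:              # treat as divergent before x**t can overflow
            return float('inf')
        t = x ** t
    return t

for x in (0.5, 1.2, math.e ** (1 / math.e), 1.5):
    print(x, tower(x))
# For x = e^(1/e) ~ 1.4447 the tower creeps up towards e;
# for x = 1.5 > e^(1/e) it blows up.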
 
Consider logical conjunction as an algebraic structure ({0, 1}, AND) and logical disjunction ({0, 1}, OR) as an algebraic structure also. Find an isomorphism between the two structures by finding two homomorphisms between the structures. Jokes follow (don't read them unless you've tried the problem; the second joke actually has a serious side to it):

Spoiler :
1. An algebraist is a logician who can't distinguish AND from OR!
2. Classical logic is post-structural.
3. "NOT" is a way of saying that two particular structures are "structurally "the same""
 
De Morgan beat us all to it ;)
 
De Morgan beat us all to it ;)

Laughs. He did indeed, though I think Lukasiewicz first pointed out that some people knew them before him, just not in symbolic notation. For instance, William of Ockham wrote them in words, and old Aristotle seems to have known about them in some sense. Homomorphism laws, or exchange laws, would seem more appropriate names in light of this, but I have no pretension to believe that people will want to change the name here.

Problem: For some algebraic system A1 with some operation which always satisfies some formal property P (equation), find some algebraic system A2 such that there exists a homomorphism from A1 to A2, and A2 does not always satisfy P for all of its elements.

To make the notion of homomorphism clearer here, I'll state a definition:

Let lower case letters denote sets, and upper case letters denote functions.

A homomorphism H from an algebraic system (a, A) to algebraic system (b, B), where the arity of A equals that of B, consists of a unary function H:a->b which satisfies an exchange formula:

For all x_1, x_2, ..., x_n belonging to a, H(A(x_1, ..., x_n))=B(H(x_1), ..., H(x_n)), where n matches the arity of A and B. More compactly,
HA(x_1, ..., x_n)=B(Hx_1, ..., Hx_n)

For example, if A and B come as binary, then a homomorphism H satisfies the exchange formula H(A(x_1, x_2))=B(H(x_1), H(x_2)), or more compactly
HA(x_1, x_2)=B Hx_1 Hx_2.
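To make it concrete, here's a small check of the exchange formula (my own example, using the map H(x) = 2^x from (N, +) to (N, *)):

Code:
A = lambda x, y: x + y        # binary operation of the source system (a, A)
B = lambda x, y: x * y        # binary operation of the target system (b, B)
H = lambda x: 2 ** x          # candidate homomorphism

# Check H(A(x, y)) == B(H(x), H(y)) on a finite sample of natural numbers.
sample = range(10)
assert all(H(A(x, y)) == B(H(x), H(y)) for x in sample for y in sample)
print("H(x) = 2^x satisfies the exchange formula from (N, +) to (N, *) on the sample")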
 
Bobby George has just claimed on BBC 5 Live that you can do a 9 dart finish in over 3000 ways! EDIT: 3944 IIRC

I'm not so sure about that...

Need to get exactly 501 in 9 darts (1-20, doubles and triples), also 25 and 50 available per throw.

Must finish on a double or a bullseye (50).

Anyone gonna do the math(s) or use mathematica to show Bobby is correct or talking out his bling?

EDIT: I expect different orders of dart scores are factored in to Bobby's claim.

EDIT2: Usual way is 180x2 (6 triple 20s), 60 (treble 20), treble 17 (51) and then double 15 (30) I think.

EDIT3: Actually I think 19x3, 12x2 is the more usual finish.
 
I think the first step is to exclude all fields that make it impossible to reach 501 in 9 darts. Anything below 3*12 or 2*17 during the first 8 darts prevents a 9 dart finish. So from the 62 fields, only 14 can be relevant for such a finish.

The last dart has to be at least 2*12, so there are only 10 possibilities for the 9th dart.

That still leaves plenty of room for more than 3000 ways, though.

Edit: And the first 8 darts have to score more than 56 points on average. At least half your darts need to be 3*19 or 3*20.
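If anyone wants to actually count them, here's a rough counting sketch in Python (my own; it counts ordered sequences of nine scoring darts, which may or may not be how Bobby's figure was arrived at):

Code:
# Count ordered 9-dart sequences totalling exactly 501 whose last dart is a
# double or the bull (50). Fields: singles 1-20 and 25, doubles 2-40 and 50,
# trebles 3-60 (62 fields in total).
singles = list(range(1, 21)) + [25]
doubles = [2 * s for s in range(1, 21)] + [50]
trebles = [3 * s for s in range(1, 21)]
fields = singles + doubles + trebles

# ways[s] = number of ordered sequences of darts thrown so far that total s
ways = {0: 1}
for _ in range(8):                         # the first 8 darts
    new = {}
    for s, count in ways.items():
        for f in fields:
            if s + f <= 501:
                new[s + f] = new.get(s + f, 0) + count
    ways = new

finishes = sum(ways.get(501 - d, 0) for d in doubles)   # 9th dart: a double or the bull
print(finishes)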
 
A dart gets thrown from point A to point B. Define a darting as the action of the throw of any dart. So, each darting can be defined as a map from the singleton {A} to the singleton {B}. Over the lifetime of a dart, of course, many dartings occur. If we consider the set of all dartings over the lifetime of a dart, does such a set qualify as a function? Does there exist one time when a darting starts at point A and ends at point B, and another time when a darting starts at point A and ends at point C such that point B does not equal point C? If the set of all dartings consists of a function, does it qualify as injective, surjective, bijective, or none of those?
 
Here's a puzzle whose solution I found surprising:

Suppose that you have an infinite supply of balls, each numbered with a positive integer. For each positive integer you have an infinite number of balls bearing that integer. You also have a box that contains a finite quantity of similarly-numbered balls. Your goal is to empty the box in a finite number of steps. Each step consists of removing one ball and replacing it with as many other balls as you like, but the replacement balls have to bear lower integers. However, if you remove a ball labeled with the number 1, then you don't replace it with anything.

It's obvious that you can always remove all the balls in a finite number of steps by replacing each ball numbered higher than one with a ball numbered one. Eventually all the balls will bear the number one and will be removed without replacement. However, the question is whether you have a strategy to avoid removing all the balls. After all you could, for example, remove a ball numbered 1000 and replace it with a billion balls numbered 999, or replace it with a billion billion balls numbered 998, and so on. So, is it possible to avoid completing the task (emptying the box) in a finite number of steps?

Assume that the box initially contains lots of balls numbered higher than one and also disregard such issues as whether it's physically possible to have an infinite number of balls or whether you would live long enough to complete the task.
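Here's a small simulation of one aggressive stalling strategy, if anyone wants to play with it (my own sketch; it obviously doesn't answer the question, but it shows how the step count behaves):

Code:
from collections import Counter

def steps_to_empty(initial, copies):
    """Always remove a highest-numbered ball; replace it with `copies` balls of the next lower number."""
    box = Counter(initial)
    steps = 0
    while box:
        n = max(box)                 # remove one highest-numbered ball
        box[n] -= 1
        if box[n] == 0:
            del box[n]
        if n > 1:
            box[n - 1] += copies     # replacements must bear lower numbers
        steps += 1
    return steps

for copies in (1, 10, 100):
    print(copies, steps_to_empty([3, 3], copies))
# In this simulation the box empties every time, but the number of steps can be made enormous.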
 
My instinct says it is impossible to force an infinite number of steps.
 
How big is finite? :p
I'm not sure I have the right context, but any positive integer is "finite."
My instinct says it is impossible to force an infinite number of steps.

Here's a spoilered hint:

Spoiler :
Suppose that the box initially consisted only of balls bearing the numbers one and two. What then?
 