Ask a Mathematician!

A couple of questions; anyone may answer (I don't care what people call themselves, it's their accomplishments that matter more):

First, what's your Erdős number?

Second, I'd like to go back to Riemann's zeta function and the zeros. I've read a little about this (Marcus du Sautoy's The Music of the Primes) and I really don't understand it.

The way it was described in the book (assuming I remember it correctly!!), the zeros are areas of a complex landscape that are i distant from the y axis and never cross the line. The problem everyone was working on was finding a proof that no zero was ever off the line through i. Riemann's housekeeper burned many of his personal documents after he died, so any proof that he had figured out must be rediscovered.

Can you explain to me how the graph of the zeta function in polar coordinates relates to my description above? Again, I'm probably mangling parts of this - so I apologize if the question is ill-formed or a frustrating waste of mathematicians' time ;)

:D The Zeta function! My favorite function of all, and the focus of the book I was reading. First, it's important to note that there are two categories of zeros for the Riemann Zeta function. The first are called trivial zeros, and occur at every negative even integer: -2, -4, -6, etc. The others are called non-trivial zeros, and so far as we know, they all seem to lie on the critical line Re(s) = 1/2. Now, the Riemann Zeta function exhibits some symmetry around this critical line thanks to the functional equation: if there's a zero at some s with 1/2 < Re(s) < 1, then there's going to be an equivalent zero at 1 - s! Couple this with the fact that the Riemann Zeta function's zeros are also symmetric across the real line (they come in complex-conjugate pairs), and any zero that does not lie on Re(s) = 1/2 implies the existence of 3 other zeros that also do not lie on Re(s) = 1/2. But that's not important now, is it? I sort of completely lost track of what question you were asking.
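
Just to back that symmetry talk with numbers, here's a quick check of the functional equation in Python (my own snippet, assuming you have the mpmath library; the test point is an arbitrary choice):

Code:
# Numerical check of the functional equation behind the s <-> 1-s symmetry:
#   zeta(s) = 2^s * pi^(s-1) * sin(pi*s/2) * gamma(1-s) * zeta(1-s)
from mpmath import mp, mpc, zeta, gamma, sin, pi

mp.dps = 25                     # work with 25 decimal digits
s = mpc(0.3, 4.7)               # arbitrary test point off the critical line

lhs = zeta(s)
rhs = 2**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)
print(lhs)
print(rhs)                      # the two printouts agree to working precision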

All right, so the picture I put up is a polar representation of Zeta(1/2 + i t): instead of writing each value as a complex number x + iy, it's drawn using the polar form of complex numbers. In this representation, every time the curve, looping around in its series of circles, passes through the point (0,0), that's a zero of the Zeta function. I can't remember the exact wording, but I believe there's a proof that if the curve ever makes a complete circle WITHOUT passing through the point (0,0), then the Riemann Hypothesis is disproven. I'll have to look that up when I get home.
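
If you want to poke at it yourself in the meantime, here's a rough sketch in Python (my own snippet; it assumes you have the mpmath library) that walks up the critical line and prints Zeta(1/2 + i t) in polar form. Watch the modulus r dip to 0 near t = 14.13..., the first non-trivial zero - that's the curve passing through (0,0):

Code:
# Walk along the critical line and print zeta(1/2 + i t) in polar form.
# Wherever the modulus r hits 0, the polar curve passes through (0,0):
# that's a non-trivial zero. The range and step size are arbitrary.
from mpmath import mp, mpc, zeta, fabs, arg

mp.dps = 15

for k in range(0, 301):
    t = k / 10.0
    z = zeta(mpc(0.5, t))
    r, theta = fabs(z), arg(z)      # polar form: z = r * e^(i*theta)
    print("t = %5.1f   r = %8.5f   theta = %+8.5f" % (t, float(r), float(theta)))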
 
I wasn't aware that definition contained a distinction. :huh:

It's not like Mathematician is a concrete title. As long as he is honest about his status and serious in his study of it, let him call himself what he wants.

It's a bit pretentious, which is why I am bothered by it. It's like calling myself a "researcher" or "scientist" just because I've worked for a month in a cell biology laboratory. I do not feel that I have earned either designation yet, and as such I will refrain from using them. I don't think I'll be calling the business students who have internships at major downtown financial institutions "bankers" either. I could go on with a lot of other examples too.
 
How many of the old classical "unsolved problems" have been solved in the past hundred years? I'm thinking mostly of things like the Goldbach Conjecture, Fermat's Last Theorem, etc.

Which solved one do you feel is the most significant? Which unsolved one?
 
Combinatorics is fun! I like to experiment with things like that in my spare time, and although I haven't gone into combinatorics much, the times I have gotten into it were fun and enlightening. Kind of like my dabbling in probability!

It was really fun, especially graph theory. I wish they had introduced me to combinatorics back in high school; I would probably have taken more courses in the field had I known about it earlier.

I was thinking of something like this (random pick from image search)

Heh, you don't need a degree to be a mathematician.

nc-1701 said:
I've done a decent amount with it, but I've never really enjoyed combinatorics.

Blasphemy!

Integral said:
Combinatorics was one of the rare math classes at my uni whose problem sets made seasoned math majors cry.

:lol: At Waterloo what made people cry were first year "weeding out" courses like linear algebra. My brain really liked combinatorics for some reason, it just *made sense*.
 
I always loved this one:

If a hotel with infinite rooms, each room already occupied, gets another guest ... can he get a room? :)
 
:lol: At Waterloo what made people cry were first year "weeding out" courses like linear algebra. My brain really liked combinatorics for some reason, it just *made sense*.

"Combinatorics? It's counting! How hard could counting be?"

*dies*
 
I always loved this one:

If a hotel with infinite rooms, each room already occupied, gets another guest ... can he get a room? :)

Yup. Hilbert ran an infinite hotel, and even if an infinite number of new guests showed up when he was already fully booked, they all got rooms.
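
In case anyone wants the trick spelled out, here's a toy sketch (my own illustration, nothing official) of the two standard reassignment rules. We can only print finitely many rooms, of course, but each rule works for every room number n:

Code:
# Hilbert's hotel, as reassignment rules on room numbers.

def one_new_guest(n):
    # Occupant of room n moves to room n + 1; room 1 is now free.
    return n + 1

def infinitely_many_new_guests(n):
    # Occupant of room n moves to room 2n; every odd room is now free,
    # so newcomer number k can take room 2k - 1.
    return 2 * n

for n in range(1, 6):
    print(n, "->", one_new_guest(n), "and", n, "->", infinitely_many_new_guests(n))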
 
Yup. Hilbert ran an infinite hotel, and even if an infinite number of new guests showed up when he was already fully booked, they all got rooms.

And the follow-up question is, of course: how can he do that while keeping his guest book straight (which is infinitely large, of course)?
 
And the follow-up question is, of course: how can he do that while keeping his guest book straight (which is infinitely large, of course)?

The infinite monkeys in the back room do it for him.

--

On a more substantive note, I discovered a theorem today that makes me happy.

Let f be defined on an interval I. If f is monotonic, then it is differentiable almost everywhere on I.
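
In symbols, if you prefer (this is Lebesgue's theorem on monotone functions - a standard formulation, not a quote from wherever I found it):

Code:
% Lebesgue: a monotone f on an interval I is differentiable outside
% a set of Lebesgue measure zero.
\[
  f : I \to \mathbb{R} \ \text{monotone}
  \;\Longrightarrow\;
  \lambda\bigl(\{\, x \in I : f'(x)\ \text{does not exist} \,\}\bigr) = 0.
\]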

--

A few of my favorite theorems:

1. The Kuhn-Tucker generalization of Lagrange's theorem, for obvious reasons.

2. Berge's maximum theorem, which has a host of useful results for maximization problems.

3. Kakutani's fixed point theorem, a set of sufficient conditions for the existence of a fixed point for set-valued functions.
 
Integral said:
1. The Kuhn-Tucker generalization of Lagrange's theorem, for obvious reasons.

Would a non-mathematician be interested in it? I'm hoping I don't come across as snarky, but it's not at all obvious to someone who doesn't know any of the people mentioned, nor the theorem referenced...

Thomas Kuhn?

Lagrange?? As in the Lagrange points of an orbit? (Which I've heard of, but don't know anything about - perhaps related to geosynchronous orbits, or specific points ahead of and behind an orbiting object sharing the same orbit.)
 
I wasn't thinking of those seven problems in particular... I had in mind the older, "classical" problems in number theory such as Goldbach's conjecture.

Well, there are plenty of conjectures posed before 1850 (which I'm just setting as a baseline here) that have since been proven, and plenty that haven't been yet. For example, the Poincaré Conjecture in 3 dimensions has been proven, and that's somewhat important of course. But the smooth Poincaré Conjecture in 4 dimensions hasn't been proven yet.
 
Would a non-mathematician be interested in it? I'm hoping I don't come across as snarky, but it's not at all obvious to someone who doesn't know any of the people mentioned, nor the theorem referenced...

Thomas Kuhn?

Lagrange?? As in the Lagrange points of an orbit? (Which I've heard of, but don't know anything about - perhaps related to geosynchronous orbits, or specific points ahead of and behind an orbiting object sharing the same orbit.)

I'm being a bit glib, sure. :)

And it is the same Lagrange who found the Lagrange points in astronomy. Busy guy!

Lagrange's theorem provides a set of necessary conditions for solving constrained optimization problems. Problems like, "maximize the function F(x,y) subject to the constraint that you have to be on the unit circle, x^2 + y^2 = 1."

Of course it works for any general function, and for any (finite) number of equality constraints. Not only does he give you necessary conditions, he gives you a method for finding the optima you so desperately seek!

Spoiler :
The method is this.

1. Use the function, and the constraint, to write down a "Lagrange function":

L(x,y,T) = F(x,y) + T*(x^2 + y^2 - 1)

where T is a brand new variable called the multiplier. Note that maximizing L is now an unconstrained problem.

Now find the stationary points of L. So, take the derivatives of L with respect to your original choice variables (x,y) and with respect to the multiplier (T), and set them all to zero.

You now have a set of 3 equations in 3 unknowns, and can solve them. This gives you the critical points of L.

Here's the magic: the points you have found are also candidate optima for the original, constrained problem.

The multiplier T itself has some neat interpretations when the method is applied to economic problems.
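
Here's the recipe run on a concrete instance, in Python with sympy (my own example - the F above is left general, so I just picked F(x,y) = x + y):

Code:
# Maximize F(x, y) = x + y subject to x^2 + y^2 = 1, via the recipe above.
import sympy as sp

x, y, T = sp.symbols('x y T', real=True)

F = x + y                          # the objective
g = x**2 + y**2 - 1                # the constraint, written as g = 0
L = F + T * g                      # the Lagrange function

# Set all three partial derivatives to zero and solve the system.
eqs = [sp.diff(L, v) for v in (x, y, T)]
candidates = sp.solve(eqs, [x, y, T], dict=True)

for c in candidates:
    print(c, "  F =", F.subs(c))
# (sqrt(2)/2, sqrt(2)/2) is the constrained maximum; the mirror point is the minimum.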


The Kuhn-Tucker generalization broadens Lagrange's theorem by allowing for inequality constraints. "Maximize F(x,y) subject to x >=0, y>=0, and px + qy <= 10". We encounter these types of problems all the time in economics; indeed the one I just stated is the classical optimization problem for a person, with a budget of 10, trying to consume x and y at prices p and q.
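
And here's a sketch of that consumer problem solved numerically, again in Python (my toy example: scipy's SLSQP handles the inequality constraints; the utility function sqrt(x) + sqrt(y) and the prices p = 1, q = 2 are just made-up assumptions, since the statement leaves F general):

Code:
# Maximize F(x, y) = sqrt(x) + sqrt(y) subject to x >= 0, y >= 0,
# and p*x + q*y <= 10, with p = 1 and q = 2.
import numpy as np
from scipy.optimize import minimize

p, q, budget = 1.0, 2.0, 10.0

def neg_utility(v):
    x, y = v
    return -(np.sqrt(x) + np.sqrt(y))   # minimize the negative => maximize

constraints = [{'type': 'ineq', 'fun': lambda v: budget - p * v[0] - q * v[1]}]
bounds = [(0, None), (0, None)]          # x >= 0, y >= 0

result = minimize(neg_utility, x0=[1.0, 1.0], bounds=bounds,
                  constraints=constraints, method='SLSQP')
print(result.x, -result.fun)             # optimum spends the whole budget: x = 20/3, y = 5/3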

Lagrange/Kuhn-Tucker is the cornerstone of mathematical economics. You literally cannot get anywhere without it!

Interestingly, Lagrange first proved his theorem back in the 1700s, but the Kuhn-Tucker generalization was not proven until 1951. It's a surprisingly "recent" result.
 
Would a non-mathematician be interested in it? I'm hoping I don't come across as snarky, but it's not at all obvious to someone who doesn't know any of the people mentioned, nor the theorem referenced...

Thomas Kuhn?

Lagrange?? As in the Lagrange points of an orbit? (Which I've heard of, but don't know anything about - perhaps related to geosynchronous orbits, or specific points ahead of and behind an orbiting object sharing the same orbit.)

The Lagrange points of an orbit are somewhat connected to Lagrange's Theorem, in that Lagrange's theorem is used to find minima and maxima of multivariate functions subject to a constraint. The Lagrange points themselves, I believe, are the points in a two-body system where the gravitational pulls of the two bodies and the centrifugal force of the orbit balance out, so a small object placed there stays fixed relative to the two bodies.
 
Something that you often hear mathematicians talk about: beauty. Is all of mathematics beautiful or is it only some concepts, theories and proofs? What makes them beautiful? And is it a subjective assessment that some mathematicians can disagree with, or is it objective and undeniable by all?
 
What's your favorite number?

Do you own a Rubik's Cube?

What's your favorite computational aid?

How many digits of pi have you memorized?

Who is your favorite mathematician?

What do you think of statistics?

What do you think of non-standard analysis?

What do you use for writing equations on a PC?

What's your favorite kind of triangle?

Do you own a slide rule?

Which is best, M&Ms, Reese's Pieces, or Skittles?
 
Something that you often hear mathematicians talk about: beauty. Is all of mathematics beautiful or is it only some concepts, theories and proofs? What makes them beautiful? And is it a subjective assessment that some mathematicians can disagree with, or is it objective and undeniable by all?

Mathematical beauty is connected to timelessness, certainty, abstraction, generality, and clarity.

Timelessness is always present: mathematical concepts are defined without relation to time, so the theorems that describe them will remain true forever. Certainty and rigorous methods have been associated with mathematics since ancient times; it is by far the most rigorous of all the sciences. In the late 19th century mathematicians reached a new level of rigor based on set theory and pushed for axiomatic perfection, but by the 1930s Gödel's incompleteness result showed that such a perfect axiomatic system was impossible. Since then the focus has shifted away from logical foundations, towards more practical levels of certainty.

Abstraction and generality are closely related: a good abstract concept will be very general, applying to many different particular situations. Abstraction and generality are what give power to mathematical methods. Mathematicians face an infinite wilderness of infinitely strong beasts, and the only way forward is by finesse - leveraging our finite human intelligence - not brute force.

The opposite of abstraction and generality is an ad hoc solution, a term which means "for the problem at hand"; concepts, methods, and results are more beautiful if they solve a large number of disparate problems at once. For example, the 19th century mathematician C.F. Gauss said he was not interested in Fermat's Last Theorem, as one could easily write down many similar equations and statements that were equally hard to prove, and that he would only be interested in the problem if there were a general theory encompassing all of them (which is what happened with Wiles's proof over a century later, based on the algebraic geometry of elliptic curves).


Not all mathematics would be regarded as beautiful. Newer proofs tend to be raw, undigested and unpolished, which makes them less beautiful by the five attributes above. Some subjects are inherently unwieldy and are left for future generations to hopefully tame. A lot of mathematical research is generated to fill grant quotas or to supply students with problems; these unimportant results may never be polished enough to be particularly beautiful.

In judging whether a work is beautiful, there is certainly a component of subjective taste: there are many sub-areas of mathematics, and everyone has preferences. Still, just as with movies, books, wines, etc., it is possible with practice to evaluate the excellence of a mathematical work based on objective features, unclouded by personal preference. This happens all the time, because research is peer reviewed.
 