Let's discuss Mathematics

Hmm, is that actually true?
Let F(z) be the function that gives the complex conjugate of z, so F(x+iy) = x - iy.
F(z) = U(z) + iV(z) with U(z) = Re(z) = x and V(z) = -Im(z) = -y

dU/dx=1
dV/dy=-1

?
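Since dU/dx and dV/dy disagree, the limit defining the derivative can't come out the same in every direction. A quick numerical sketch of that (the diff_quotient helper is just illustrative shorthand):

```python
# Difference quotient of F(z) = conj(z), approaching along the real axis
# and along the imaginary axis.
def diff_quotient(f, z, h):
    return (f(z + h) - f(z)) / h

z0 = 1.3 + 0.7j
h = 1e-8

print(diff_quotient(lambda z: z.conjugate(), z0, h))        # ~  1 (real direction)
print(diff_quotient(lambda z: z.conjugate(), z0, 1j * h))   # ~ -1 (imaginary direction)
```

The two directions give different answers at every point, so the limit does not exist anywhere.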

Or am I not properly understanding you?
 
Right, Chelsea 0 - 0 Everton.

So a natural number of goals was not scored ;)

Back to complex derivatives:

* Unlike the real case, a function that can be complex-differentiated once on an open set can be differentiated there infinitely many times; such a function is called holomorphic
* The Cauchy-Riemann equations (written out a few lines below) are a necessary condition for a function to be complex differentiable, and together with continuity of the partial derivatives they are also sufficient

* Complex conjugation can't have a derivative anywhere on C, but I haven't found a proof of this. I might do some back-of-the-envelope calculations.

However, complex differentiation is more demanding than real differentiation, since the limit must be the same along every direction of approach (unlike the real case, where the limit only has to agree from the left and from the right).
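Written out, the limit in question and the Cauchy-Riemann equations for F = U + iV as a function of z = x + iy are (standard statements, added here for reference):

```latex
F'(z_0) = \lim_{h \to 0} \frac{F(z_0 + h) - F(z_0)}{h} \quad (h \in \mathbb{C}),
\qquad
\frac{\partial U}{\partial x} = \frac{\partial V}{\partial y},
\quad
\frac{\partial U}{\partial y} = -\frac{\partial V}{\partial x}.
```

The conjugation example above fails the first equation, and the z^2 example below satisfies both.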
 
No, it's best shown by an example.
Say F(z) = z^2 = (x^2 - y^2) + i(2xy)
so U(z) = x^2 - y^2 and V(z) = 2xy

so dU/dx = 2x
and dV/dy = 2x

and similarly

dU/dy = -2y and dV/dx = 2y

Strange huh?
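A quick symbolic sanity check of those relations (a sketch, assuming sympy is available; the variable names are just illustrative):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x + sp.I * y) ** 2                   # F(z) = z^2 written in terms of x and y
U = sp.re(sp.expand(f))                   # x**2 - y**2
V = sp.im(sp.expand(f))                   # 2*x*y

print(sp.diff(U, x) - sp.diff(V, y))      # 0  ->  dU/dx = dV/dy
print(sp.diff(U, y) + sp.diff(V, x))      # 0  ->  dU/dy = -dV/dx
```

Both differences simplify to zero for every x and y, which is exactly the Cauchy-Riemann pattern above.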
 
Yeah that's true, those are the Cauchy-Riemann equations you posted. If it's in a physics textbook, it probably uses "analytic" instead of "holomorphic".

I'm saying real functions that can't be differentiated infinitely many times aren't complex differentiable when embedded in C.

I'm looking for an example ;)
 
A, A>B, D>~C, B>C |- ~D

1.A>B ------------------------------hyp
2.B>C ------------------------------hyp
3.D>~C -----------------------------hyp
4.A ---------------------------------hyp
5.B ---------------------------------MP for. 1, for. 4
6.C ---------------------------------MP for. 2, for. 5
7.C>~D -----------------------------CPI for. 3 (plus double negation)
8.~D -------------------------------MP for. 7, for. 6
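For anyone who'd rather see a semantic check, here is a brute-force truth-table sketch of the same sequent (the implies helper is just illustrative shorthand):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Search for a countermodel: all premises true but the conclusion ~D false (i.e. D true).
countermodels = [
    (A, B, C, D)
    for A, B, C, D in product([False, True], repeat=4)
    if A and implies(A, B) and implies(D, not C) and implies(B, C) and D
]

print(countermodels)   # []  ->  no countermodel, so A, A>B, D>~C, B>C |- ~D is valid
```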
 
Are you tripping on acid or just spamming?
 
That is a mathematical proof.
 
OK I'll humour you. What is ~ defined as?
 
~ = not
> = implies
/\ = and
\/ = or
<=> = biconditional
MP = Modus Ponens
where from P>Q and P you get Q
CPI = Contrapositive Inverse
where from P>Q you get ~Q>~P
 
Right, not much discussion though.

I was thinking you were using > as greater-than rather than implies, and ~ as set complement, which would make D>~C silly.
 
I am bored, solve this one, if you can (I have no knowledge of your understanding of this, why would I; if not, then someone else can do it)

A, A>B, D/\G, D>N, [B/\N]>~[~P\/~Q] |- P\/Q
 
I'll have to bow out of formal logic, I'm afraid. Fifty has some knowledge in that area IIRC.
 
A, A>B, D/\G, D>N, [B/\N]>~[~P\/~Q] |- P\/Q

not gonna use your notation cuz i don't feel like it

6. (B&N)->~~(P&Q) 5, DeM
7. (B&N)->(P&Q) 6, DN
8. B 1,2, MP
9. D 3, simp
10. N 4,9,MP
11. B&N 8,10 conj.
12. P&Q 7,11 MP
13. P 12, Simp
14. PvQ 13, add.
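The same kind of brute-force semantic check works here too (a sketch over all seven letters; the implies helper is again just shorthand):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Premises: A, A>B, D/\G, D>N, [B/\N]>~[~P\/~Q]; conclusion: P\/Q.
valid = all(
    P or Q
    for A, B, D, G, N, P, Q in product([False, True], repeat=7)
    if A and implies(A, B) and (D and G) and implies(D, N)
       and implies(B and N, not ((not P) or (not Q)))
)

print(valid)   # True  ->  every model of the premises satisfies P\/Q
```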

Why the crap are we doing logic 101 proofs in a thread that is supposed to be about interesting math?
 
Speaking of logic, here's a problem for you gamemaster77:

Consider the following system:

Primitive symbols: infinitely many propositional symbols P, Q, R, S, T, with and without subscripts, parentheses (or dots), and the single operator symbol "|". The rules for wffs in N may be stated as:

1. Any single letter of N is a wff.
2. If P and Q are wffs, then (P)|(Q) is a wff.
(No formula of N will be regarded as being a wff unless its being so follows from this definition)
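As a minimal sketch, that definition can be turned into a recursive wff checker (this ignores subscripts and the dot shorthand used below; the function name is just for illustration):

```python
def is_wff(s):
    # Clause 1: any single letter of N is a wff.
    if s in ("P", "Q", "R", "S", "T"):
        return True
    # Clause 2: (P)|(Q) is a wff when P and Q are wffs.
    if s.startswith("("):
        depth = 0
        for i, c in enumerate(s):
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
            elif c == "|" and depth == 0:
                left, right = s[:i], s[i + 1:]
                return (left.startswith("(") and left.endswith(")")
                        and right.startswith("(") and right.endswith(")")
                        and is_wff(left[1:-1]) and is_wff(right[1:-1]))
    return False

print(is_wff("P"))               # True  (clause 1)
print(is_wff("(P)|(Q)"))         # True  (clause 2)
print(is_wff("((P)|(Q))|(R)"))   # True
print(is_wff("P|Q"))             # False (clause 2 requires the parentheses)
```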

Single axiom (in the metalanguage... it's a pattern for infinitely many axioms in the object language):

Ax. P.|.Q|R:|::T.|.T|T:.|:.S|Q:|:P|S.|.P|S

Single rule of inference:

Rule. From P and P.|.R|Q to infer Q.

Now:

Prove: Q:.|:.Q|P.|.P:|:Q|P.|.P
 
Sorry fifty, I am yet to figure that out.
 
That is fine, ParadigmShifter. I only know basic logic, fifty.
 