
artificial intelligence

I'm sorry, Dave, I can't let you in.

Hopefully by the time we as a species are smart enough to create a true AI, we'll be smart enough to hardwire or imprint Asimov's three laws into the AI.


My choosing of my avatarish avatar and title yesterday before this thread showed up was purely coincidence, by the way. :lol:
 
The 3 laws of robotics:

1. A robot (read: AI) may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given by humans, except where such orders would conflict with the first law.

3. A robot must protect its own existence as long as doing so does not conflict with laws 1 and 2.

If you force those rules on it, then you don't have a true AI.
 
Originally posted by Speedo


If you force those rules on it, then you don't have a true AI.

You should read Asimov then, I tell you. Even I thought at the beginning, "hey, a sci-fi about robots, it must be simplistic". I was wrong; it is a very good read.

Those 3 laws are like our unconscious archetypes, but without religious belief. We have a lot more forced rules in our minds than those 3 laws. Some are even programmed into our genetic code, which is a biological program.
 
Can we create an AI that's self-aware? Yes, eventually. But we can already produce software capable of mimicking self-awareness so well that it might as well be 'real' self-awareness, for all intents and purposes.
 
I can't wait till we make a true AI, give it ultimate power, till it turns on us and makes us into batteries.
Humans ARE inefficient and way too weak for interstellar travel, so we'll just create the machines "in our image" (if not physically, then intellectually) and they will carry on the heritage of humanity.

We could even call them HUMANS

Human's Undoing Made Around Nano Systems
 
Well, according to a very good book about robots I read recently:

Humans have 3 basic fields of intelligence: Computational, Reasoning, and Motor function (movement/interaction)

It's the same with robots. Computers are millions upon millions of times better than us in the computational aspect of intelligence.

Computers are nearly as good as us in most areas of reasoning. Most experts gauge a robot's reasoning ability by the high-level chess programs available today. X3D Fritz is the best one right now IIRC, and it recently lost to Kasparov, but it was close.

Motor function is where humans still rule. Even the most complex and advanced robots have less motor ability than an infant.

I think it's not just possible... it's highly likely.. that robots will surpass humans in all fields of intelligence, and sooner than we may think.
 
will machines ever surpass humans? if so, when?
In some ways they do now. In the future it is likely they will do so in more ways. We are very far from building an android, however, too far to judge.

will this be a good thing or a bad thing?
People will need more moral stimulus to do manual labor. Machines will be an issue to manage, so the actual labor is likely not to decrease, just to change form.

And one more bad thing: People might start speaking up for machine rights, and programming machines to do the same.
 
Originally posted by Giotto
Computers are nearly as good as us in most areas of reasoning. Most experts gauge a robot's reasoning ability by the high-level chess programs available today. X3D Fritz is the best one right now IIRC, and it recently lost to Kasparov, but it was close.

X3D Fritz is no better at reasoning than a slimy rock. It's all in the computing (a great deal of computing, might I add). Also, X3D Fritz is not the best there is (with the same software, the defunct Deep Blue would be some 100 times stronger) :p
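The point that engine strength is "all in the computing" can be sketched with the game-tree search that chess programs are built on. The toy game and evaluation function below are invented purely for illustration; real engines add alpha-beta pruning and elaborate hand-tuned evaluation on top of the same brute-force idea.

```python
# A toy minimax search: the "reasoning" of a chess engine is really
# exhaustive evaluation of future positions, not insight.

def minimax(state, depth, maximizing, moves, evaluate):
    """Return the best achievable score looking `depth` plies ahead."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# Stand-in game: a state is a number; each player may add 1 or 2;
# the evaluation is the number itself (the maximizer wants it high).
moves = lambda n: [n + 1, n + 2] if n < 10 else []
score = minimax(0, 4, True, moves, lambda n: n)
print(score)  # 6: max adds 2 on its turns, min adds 1 on its turns
```

Nothing in the search "understands" the game; scale the same loop up to billions of positions per second and you get Deep Blue.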
 
Penrose put this issue very succinctly. There are essentially four viewpoints on AI:

(A) The human mind is running an algorithm and if we can figure out the algorithm and run it on a computer then it will show the same consciousness and intelligence.

(B) The human mind is running an algorithm and if we can figure out the algorithm then it will show the same intelligence but not consciousness, because consciousness is a result of the actual hardware in the brain (neurons etc.)

(C) The human mind is not running an algorithm and hence cannot be computationally simulated. Hence human intelligence and consciousness can never be simulated by existing computers irrespective of how fast they become.

(D) The human mind is not within the realms of science.

---------

(A) is called strong AI. (B) is called weak AI. (D) is a matter of faith and hence not within the purview of science.

Although there are many proponents of (A) it can be very convincingly shown that (A) is probably not true.

Thus (B) or (C) seems to be closer to the truth. The jury is still out on this one, and hence we still do not know if we can make computers that can think like us.

Personally, I tend towards (C). It just seems that I think differently than a computer. :)
 
Originally posted by betazed


Personally, I tend towards (C). It just seems that I think differently than a computer. :)


Yes, maybe today, but what about when you were born? You think the way you do now because of learning and studying, but if you had been born in a forest and grown up like a primitive man, then your thinking process would have been very different.

So it depends on the capacity to learn and the ability to communicate that learning; if robots achieve that, then they are in very good shape to surpass human beings.
 
Originally posted by Tassadar
Yes, maybe today, but what about when you were born? You think the way you do now because of learning and studying, but if you had been born in a forest and grown up like a primitive man, then your thinking process would have been very different.

So it depends on the capacity to learn and the ability to communicate that learning; if robots achieve that, then they are in very good shape to surpass human beings.

@Tassadar: You may very well be right. But here is why I doubt you are correct.

Learning and the ability to communicate are not enough. Thinking outside the box and improvising seem to me the key characteristics of the human mind.

In all our endeavours in AI, we have not been able to endow a computer with any reasoning process (one that would allow it to improvise) that we have not taught it already.

For example, the moves of Deep Blue are not the result of some original strategy that it thinks up but the result of a brute force calculation.

All expert systems work under some rules. They never create new rules. Neural networks create new rules, but the set of rules that they will work with (and come up with) is a predetermined, computationally describable set. A brute-force calculation can come up with that set and just rely on it. So it is nothing fancier than an expert system with a fixed set of rules.
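The fixed-rules point can be made concrete with a toy forward-chaining expert system. The rules and facts below are invented for the example; what matters is that the program can only chain the rules it was given, never invent a new one.

```python
# A minimal forward-chaining expert system. Each rule maps a set of
# premises to one conclusion; the rule set is fixed before inference.

RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def infer(facts, rules=RULES):
    """Repeatedly apply rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_feathers", "can_fly"}))
# derives is_bird, then can_migrate -- but only because we wrote those rules
```

However long it runs, `infer` can never conclude anything outside the closure of its given rule set, which is the sense in which such a system cannot improvise.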

Yet the human brain must be something different. When we were nothing more than hunter-gatherers, we changed the rules we were born with. We improvised. We found new ways of doing things.

We are yet to make a computer do the last one.

Computers have been doubling in speed every 18 months. Yet the ability to do something new is still stuck where it was in the 1950s: essentially zero. That is why I think (C) may be the true answer.
 
Betazed, very interesting:goodjob:

Do you know anything new about the quantum computer? I read something on that a few years ago, but my memory is kind of full and I don't remember exactly.
 
Originally posted by Tassadar
Do you know anything new about the quantum computer? I read something on that a few years ago, but my memory is kind of full and I don't remember exactly.

A pretty good resource on quantum computing.
 
Originally posted by betazed


A pretty good resource on quantum computing.

Thanks:goodjob:
From your link,

Furthermore, Feynman asserted that a quantum computer could function as a kind of simulator for quantum physics, potentially opening the doors to many discoveries in the field. Currently the power and capability of a quantum computer is primarily theoretical speculation; the advent of the first fully functional quantum computer will undoubtedly bring many new and exciting applications.

Didn't Penrose say that our brain relies on quantum physics instead of classical physics?

And I had a good laugh with the cartoon at the end of the page.
:lol: Shift happens :lol:
 
Originally posted by Tassadar
Didn't Penrose say that our brain relies on quantum physics instead of classical physics?

Actually Penrose claimed something far more speculative. He argued that some new physics that we do not know of is going on in our mind.

He based his arguments on non-computability of the processes of the human mind.

He argued that all physics that we know of is computable. So if the human mind is non-computable, then it must be using non-computable physics.

He even argued that it could be deterministic and yet noncomputable.


To give an example of a physics that could be non-computational he gave the following example.

------

Imagine a process that works like this: whenever it needs to make a choice, it selects, by some algorithm, a specific shape. If this shape can tile the plane, it makes one choice; otherwise it makes another.

The above process is deterministic. However, no computer can simulate it, because the tiling problem is a known noncomputable problem (it is in fact undecidable, not merely NP-hard).

-----

Whether our brain really follows a noncomputable process is the key question that no one has answered.
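The tiling example can be made concrete with Wang tiles: unit squares with colored edges, placed without rotation so that touching edges match. Berger proved that deciding whether a given tile set can tile the infinite plane is undecidable. A program can brute-force any finite n x n square, and by a compactness argument the plane is tilable iff every finite square is, so no finite amount of checking ever settles the infinite question. The tile set below is an arbitrary illustration:

```python
from itertools import product

# Wang tiles as (top, right, bottom, left) edge colors.
# This particular tile set is invented for the example.
TILES = [(0, 1, 0, 1), (1, 0, 1, 0)]

def tiles_square(tiles, n):
    """Brute-force check: can an n x n grid be filled with copies of
    the tiles so that every pair of touching edges matches?"""
    for assignment in product(tiles, repeat=n * n):
        grid = [assignment[r * n:(r + 1) * n] for r in range(n)]
        horizontal_ok = all(grid[r][c][1] == grid[r][c + 1][3]
                            for r in range(n) for c in range(n - 1))
        vertical_ok = all(grid[r][c][2] == grid[r + 1][c][0]
                          for r in range(n - 1) for c in range(n))
        if horizontal_ok and vertical_ok:
            return True
    return False

# Only a failure is conclusive: if some n x n square cannot be tiled,
# neither can the plane. Successes for every n we try prove nothing
# about the infinite plane.
print(tiles_square(TILES, 2))  # True
```

A machine following Penrose's hypothetical process would need the answer for the infinite plane in one step, which is exactly what no algorithm can provide in general.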
 
Personally I am a determinist, so I believe that the human mind, or a version thereof, could be simulated given enough computation space. It would be an AI. Whether it would be self-aware is another question.

To paraphrase Red Dwarf: "I think, therefore I am. You think you are thinking, therefore you possibly are."
Sentence taken from the ship's psychologist speaking to a dead man being simulated.

BTW, nothing says that if Asimov's three laws were hard-coded into an AI, it would no longer be an AI. There are some driving forces that are nearly hard-coded into the human mind... some fears (phobias), the need to reproduce, the sexual drive, instincts (pulling your hand away from strong heat)... All of these are, if you will, hard-coded.
 
Originally posted by Achinz
I guess I was referring to the fact that power has already been given to computers without the need for AI :)
The power is in the hands of the users, which, as any tech support person will agree, is a very scary thing indeed.

Truth is that we are miles off from any true intelligence. Pretty much everything I've seen so far is predetermined action-reaction.

Originally posted by col
It may be stupid but it still beats most people
only cos it cheats :)
 
Originally posted by betazed
Penrose put this issue very succinctly. There are essentially four viewpoints on AI[...]

(B) The human mind is running an algorithm and if we can figure out the algorithm then it will show the same intelligence but not consciousness because consciousness it is a result of the actual hardware in the brain (neurons etc.)

(C) The human mind is not running an algorithm and hence cannot be computationally simulated. Hence human intelligence and consciousness can never be simulated by existing computers irrespective of how fast they become.
I think that (B) is essentially correct. But it's not necessary to figure out the algorithm of the human mind; there might be other - better! - algorithms enabling intelligence. It's not even necessary that the human mind is truly algorithmic; only that it sufficiently resembles an algorithm.

(C) contains a non-sequitur at the first "hence". Simulations need not match the target in every respect, only in the features of interest. The non-algorithmic nature of human thought (if true) doesn't necessarily make us more intelligent.

Penrose's specific arguments why human thought is non-algorithmic do imply that we are smarter than any algorithm - but I don't find his arguments convincing.
 
Originally posted by Ayatollah So

(C) contains a non-sequitur at the first "hence". Simulations need not match the target in every respect, only in the features of interest. The non-algorithmic nature of human thought (if true) doesn't necessarily make us more intelligent.


I do not think (C) is a non-sequitur.

(C) assumes that the human mind is not an algorithm. Not being an algorithm it cannot be computationally simulated. There is no logical fallacy here.

But you are right in the point that it need not be necessary to simulate the human mind in its entirety to evoke consciousness and intelligence. For example, a dog does not simulate the human mind but it would be very difficult to argue that dogs do not show any consciousness and intelligence.

I think Penrose's argument was specifically whether we could develop a "human" consciousness and intelligence (that displays all the subjective human traits) and not some putative machine intelligence which might be better than human intelligence.

Also you mention that you do not agree with his arguments. Can you elaborate?
 