Just another of those "Can A.I. ever be created?" threads/polls

Can A.I. ever be created by humans?

  • Yes (categorically)

    Votes: 5 38.5%
  • Likely Yes (unsure but leaning to Yes)

    Votes: 5 38.5%
  • No (categorically)

    Votes: 1 7.7%
  • Likely No (unsure but leaning to No)

    Votes: 0 0.0%
  • Other/the way upwards and downwards is the same/there is no spoon and no is

    Votes: 2 15.4%

  • Total voters
    13
  • Poll closed.

Kyriakos

Creator
Joined
Oct 15, 2003
Messages
78,218
Location
The Dream
This is about whether you think Artificial Intelligence can ever be created by humans (by which is meant AI without DNA used in it, i.e. just computer/mechanical parts, not biological material tied to senses/life).

There have been some of these. Yesterday one was resurrected in the science forum, so I thought I could make this poll in the OT :)

The poll question is

"Can A.I. (Artificial Intelligence) ever be created by humans"?

Poll options are:

1. Yes (categorically)
2. Likely Yes
3. No (categorically)
4. Likely No
5. Other/As above, so below/AI is you/Think outside the box/You are the box and a diagonal in it/etc.

You can also discuss why you chose the option you did ;) I am between 3 and 4, but to be honest far closer to 3, so I chose 3. My view is that AI cannot be created (as long as we are talking about computer parts without any DNA tied to them), because I don't see how a system organised into a foreground process (consciousness, in humans) and a chaotic background of non-consciousness vital to that same system can be achieved without DNA, which for largely unknown reasons causes a living being to have at least a primary 'sense' of something, and therefore the potential not to follow all of its programming at any given moment.
 
I'm sure it can be, with a heck of a lot of programming.
 
There's an entirely different question that needs to (but maybe never can) be answered first: can we ever know that an AI created by humans is a real (as in sentient) AI?
My intuition says that that is a question without a reliable answer.
 
There's an entirely different question that needs to (but maybe never can) be answered first: can we ever know that an AI created by humans is a real (as in sentient) AI?
My intuition says that that is a question without a reliable answer.

Likely, yes, because we don't "know" the answer in the case of another human either :scan:

:D

It might just be a nicely formed/dressed automaton, such as in the stories of the 19th-century writer E.T.A. Hoffmann.

"He has stolen my best Automaton!!!"
 
Could we produce a machine capable of passing the Turing Test? Sure. Why not?

But could we produce a machine that is self-conscious? I doubt it. We don't even know what makes us self-conscious. Or even whether we fully are.
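
For flavour, here is a minimal ELIZA-style responder in Python, the sort of shallow pattern matching that has fooled people in restricted Turing-style chats. The rules and phrasings below are made up for illustration, not taken from any real system:

```python
# An ELIZA-style responder: shallow pattern matching, no understanding.
# The rules are a minimal, made-up illustration.
import re

RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the conversation going

print(respond("I am worried about machines"))
# -> "How long have you been worried about machines?"
```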
 
There's an entirely different question that needs to (but maybe never can) be answered first: can we ever know that an AI created by humans is a real (as in sentient) AI?
My intuition says that that is a question without a reliable answer.
Agreed, and actually I think the first problem with these questions is that intelligence is an ill-defined concept itself.

Even in the OP it is implied that intelligence is somehow tied to organic life (because an AI with organic components "does not count"), which kind of comes out of nowhere. Lack of proper definitions aside, I think the creation of what is commonly regarded as true AI will become reality given enough time, but it will probably take even longer for us to overcome our humanist/biologist narcissism and recognise it as such.

Im sure it can be with a heck of a lot of programming.
How?
 
I would say somewhere between 1 and 2, and the difference is really what counts as AI. You seem to put consciousness forward as an important factor, and I would debate that. As others have said, we do not really know what consciousness is or what role it plays in our abilities.

I would suggest that a better measure of AI would be its abilities. If a computer could look at a dataset, come up with novel hypotheses and suggest further experiments to test them as well as a trained scientist could, then that would indicate AI (a toy sketch of that kind of test follows below). If a robot could perform a complex task (I am thinking space exploration) as well as if a human were controlling it, then that would indicate AI. A bit like the Turing test, but with a higher bar.

I feel that neither of these is that far off, so as long as human society can survive another century or so, I feel that AI is almost certain to happen.
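
A toy version of that outcome-based test, as a rough sketch: fit a few candidate hypotheses to a dataset and rank them by error. The data, the "hidden law", and the candidate list are all invented for illustration (numpy only):

```python
# Toy version of the "look at a dataset and propose hypotheses" test:
# fit a few candidate models to (x, y) data and rank them by error.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, x.size)  # hidden "law" is linear

# Candidate hypotheses, expressed as polynomial degrees.
hypotheses = {"constant": 0, "linear": 1, "quadratic": 2}

for name, degree in hypotheses.items():
    coeffs = np.polyfit(x, y, degree)             # least-squares fit
    mse = np.mean((y - np.polyval(coeffs, x)) ** 2)
    print(f"{name:10s} MSE = {mse:.3f}")
```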
 
I'm not sure where machines' ability to learn is right now. Programming a machine to beat a Turing Test or play Jeopardy! or whatever is kind of a brute-force approach to AI. Programming a machine to program itself will, imo, be a big threshold for AI. Advances in artificial sensing are related, I think - teaching a robot vehicle to interpret visual data to avoid collisions, for example. On the front page of MIT's CSAIL website is something about an algorithm that can remove reflections from photographs taken through windows.

At the Computer Vision and Pattern Recognition conference in June, CSAIL researchers will present a new algorithm that, in a broad range of cases, can automatically remove reflections from digital photos.

There's also a blurb about people working on scene recognition ("this photo is of a kitchen") and object recognition ("there is a refrigerator in the kitchen in this photo").
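
For a concrete taste of present-day object recognition, here is a rough sketch using an off-the-shelf pretrained ImageNet classifier from torchvision as a stand-in (this is not the CSAIL system; "kitchen.jpg" is a hypothetical input file):

```python
# Rough sketch of the object-recognition idea ("there is a refrigerator
# in this photo"), using a pretrained ImageNet classifier as a stand-in.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("kitchen.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

top = torch.topk(probs, 3)                 # three most likely labels
labels = weights.meta["categories"]
for p, idx in zip(top.values, top.indices):
    print(f"{labels[idx.item()]}: {p.item():.2%}")
```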
 
"Instant AI, just add neural nets!" One of the most overused buzzwords right now. Machine learning does not AI make.
 
"Instant AI, just add neural nets!" One of the most overused buzzwords right now. Machine learning does not AI make.

Neural nets have been used to produce intelligent behaviour. Not nearly on the same level as a human or even a dolphin or whatever (I don't think), and obviously you can't just "add neural nets", but.. yeah, we have AI already! It can solve some problems and act in an intelligent manner. It just isn't nearly as advanced as humans are, but that doesn't mean it isn't AI.
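
As a minimal illustration of "intelligent behaviour" in that narrow sense, here is a tiny two-layer neural net (numpy only) that learns XOR, a mapping no single-layer perceptron can represent, rather than having it hand-coded:

```python
# Minimal two-layer neural net that learns XOR via backpropagation.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop through sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```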
 
You can also discuss why you chose the option you did ;) I am between 3 and 4, but to be honest far closer to 3, so I chose 3. My view is that AI cannot be created (as long as we are talking about computer parts without any DNA tied to them), because I don't see how a system organised into a foreground process (consciousness, in humans) and a chaotic background of non-consciousness vital to that same system can be achieved without DNA, which for largely unknown reasons causes a living being to have at least a primary 'sense' of something, and therefore the potential not to follow all of its programming at any given moment.
I can't not follow my DNA programming.

I see this as a form of special pleading: saying DNA must be special because computers, for such-and-such reason, can't do x, without really giving any strong indication of what the heck it is about DNA that allows it to pull that trick off.

Why don't you think computers can pull off the trick you describe?
 
Trivially, you can simulate every molecule in a human brain.
We probably can't make a working digital model of the brain, at least through algorithmic means (a Turing machine).
And not because of technical difficulties; it may be theoretically impossible.
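
Simulating every molecule is far beyond a sketch, but for scale, here is what a much coarser digital neuron model looks like: a leaky integrate-and-fire unit. All constants are illustrative, not biologically fitted:

```python
# Coarse digital neuron (leaky integrate-and-fire), nowhere near
# molecule-level simulation; parameters are illustrative only.
dt, tau = 0.1, 10.0                       # time step, membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # potentials (mV)

v = v_rest
spikes = []

for t in range(1000):
    current = 20.0 if 200 <= t < 800 else 0.0  # injected input (a.u.)
    v += dt / tau * (-(v - v_rest) + current)  # leaky integration
    if v >= v_thresh:                          # threshold crossing
        spikes.append(t * dt)
        v = v_reset                            # reset after spike

print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms" if spikes
      else "no spikes")
```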
 
There's an entirely different question that needs to (but maybe never can) be answered first: can we ever know that an AI created by humans is a real (as in sentient) AI?
My intuition says that that is a question without a reliable answer.

^This.^

How do we ever find out if a "machine" truly "feels" "pain" in the same way we feel it? We can make a machine that does something that looks like feeling "pain" from the outside, but how would we find out what the machine itself "experiences", if it truly "experiences" anything? Would we ask the machine, "did that hurt?", to which the machine replies "yes"? So did the machine reply "yes" because it really did hurt, or because someone programmed it to do so when certain inputs are received?
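
That point can be made concrete in a few lines. A sketch, with a hypothetical Automaton class: the "yes" below is just a programmed comparison, indistinguishable from the outside from a genuine report of pain:

```python
# A "pain report" can be a programmed rule. From the outside this
# answers "did that hurt?" like a feeling creature would; nothing
# here experiences anything.
class Automaton:
    def __init__(self, pain_threshold=5.0):
        self.pain_threshold = pain_threshold  # hypothetical damage scale

    def stimulate(self, intensity):
        # No inner experience: just a comparison and a canned string.
        return "yes" if intensity > self.pain_threshold else "no"

robot = Automaton()
print(robot.stimulate(9.0))  # -> "yes"  (did that hurt?)
print(robot.stimulate(2.0))  # -> "no"
```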
 
I haven't looked too much into Penrose, but I haven't found much compelling about his ideas of quantum consciousness. From what evidence I've seen, the brain doesn't act like a quantum computer. I think because consciousness and quantum mechanics are both mysterious and counterintuitive there's a temptation to link the two, but I think that should be resisted.

Funny enough, my copy of "Explaining Consciousness: The Hard Problem" is scheduled to arrive today. Penrose is one of the contributors, so I'll actually read his argument in his own words, not just what critics say.
 
^This.^

How do we ever find out if a "machine" truly "feels" "pain" in the same way we feel it? We can make a machine that does something that looks like feeling "pain" from the outside, but how would we find out what the machine itself "experiences", if it truly "experiences" anything? Would we ask the machine, "did that hurt?", to which the machine replies "yes"? So did the machine reply "yes" because it really did hurt, or because someone programmed it to do so when certain inputs are received?

I do not see why that would be relevant to the question at hand. We do not consider humans (one of) the most intelligent species on the planet because we feel pain, but because we can use our brains to create outcomes that no other species has managed. So I feel the yardstick should be outcome-based, not experience-based. It is also fortunate that this is measurable, unlike feelings, as you point out.
 
How do we ever find out if a "machine" truly "feels" "pain" in the same way we feel it? We can make a machine that does something that looks like feeling "pain" from the outside, but how would we find out what the machine itself "experiences", if it truly "experiences" anything? Would we ask the machine, "did that hurt?", to which the machine replies "yes"? So did the machine reply "yes" because it really did hurt, or because someone programmed it to do so when certain inputs are received?
Strictly speaking, we can't even find out whether another person feels pain, or anything else, the same way we do. All we can do is perform experiments trying to find differences in behavior, but there is no way to prove it conclusively.

In other words, a strong AI will behave like a conscious being, but will not necessarily have consciousness. And we have no way to differentiate between the two.
 
^This.^

How do we ever find out if a "machine" truly "feels" "pain" in the same way we feel it? We can make a machine that does something that looks like feeling "pain" from the outside, but how would we find out what the machine itself "experiences", if it truly "experiences" anything? Would we ask the machine, "did that hurt?", to which the machine replies "yes"? So did the machine reply "yes" because it really did hurt, or because someone programmed it to do so when certain inputs are received?
Pain response is related to self-preservation, and I think self-preservation would be a component of AI. Whether it experiences "pain" in the same way we do would be irrelevant. The question to ask would be: is the machine invested in its own existence?
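
A toy sketch of that "invested in its own existence" idea: an agent that learns, in a made-up environment, to prefer actions that do not damage it; a behavioural stand-in for pain, with nothing felt anywhere:

```python
# An agent that learns to avoid self-damaging actions: a behavioural
# stand-in for pain. Environment and numbers are invented.
import random

damage = {"touch_fire": 5.0, "touch_water": 0.0}  # made-up environment
value = {a: 0.0 for a in damage}                  # learned action values

for _ in range(100):
    action = random.choice(list(damage))
    reward = -damage[action]                      # damage = negative reward
    value[action] += 0.1 * (reward - value[action])  # running estimate

# After learning, the agent prefers the harmless action.
print(max(value, key=value.get))  # -> "touch_water"
```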
 