Just another of those "Can A.I. ever be created?" threads/polls

Can A.I. ever be created by humans?

  • Yes (categorically)

    Votes: 5 38.5%
  • Likely Yes (unsure but leaning to Yes)

    Votes: 5 38.5%
  • No (categorically)

    Votes: 1 7.7%
  • Likely No (unsure but leaning to No)

    Votes: 0 0.0%
  • Other/the way upwards and downwards is the same/there is no spoon and no is

    Votes: 2 15.4%

  • Total voters
    13
  • Poll closed.
I haven't looked too much into Penrose, but I haven't found much that's compelling about his ideas of quantum consciousness. From what evidence I've seen, the brain doesn't act like a quantum computer. I think because consciousness and quantum mechanics are both mysterious and counterintuitive there's a temptation to link the two, but I think that should be resisted.
If I remember correctly, his idea is different. A quantum computer is still a computer, a Turing machine. Penrose's hypothesis is that human intelligence is based on more complex, non-algorithmic principles. Quantum effects in specific brain structures are necessary for human intelligence and consciousness to function, but this has nothing to do with the present idea of quantum computers.
 
Neural nets have been used to produce intelligent behaviour. Not nearly on the same level as a human or even a dolphin or whatever (I don't think), and obviously you can't just "add neural nets", but... yeah, we have AI already! It can solve some problems and act in an intelligent manner. It just isn't nearly as advanced as humans are, but that doesn't mean it isn't AI.
I'm not going to argue with you if you want to call neural nets AI :)

Personally I do not think they qualify as intelligence. Neural networks are a (partial) solution to enable computers to learn, in particular to learn from data and optimize their behavior to accomplish some goal that couldn't have been achieved as effectively with explicit programming. Neural networks are not that different in principle from stuff like Markov models and so on.

But I would say intelligence is more than the ability to learn. Most importantly, it is the ability to solve new problems, including abstracting known solutions to apply them to something else. That is really outside the scope of neural networks.

I think "neural network" is a particularly problematic term because (and that is not directed at you) many people assume that because they are inspired by the human brain, they have the potential for the same abilities as the human brain.
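To make the "learn from data and optimize behavior" point concrete, here is a minimal sketch (purely illustrative; the data and numbers are made up) of what that kind of learning amounts to: a single artificial "neuron" nudging its weights to reduce its error on a handful of points. Nothing more mysterious than numerical optimization is going on.

```python
# Illustrative sketch only: a single "neuron" learning to separate two
# clusters of points by the classic perceptron update rule.
# The point is that "learning" here is just numeric optimisation.

import random

# Toy data: (x1, x2, label) where label is 0 or 1.
data = [(0.1, 0.2, 0), (0.2, 0.1, 0), (0.9, 0.8, 1), (0.8, 0.9, 1)]

w1, w2, b = random.random(), random.random(), 0.0
lr = 0.5  # learning rate

def predict(x1, x2):
    # Crude step "activation": output 1 if the weighted sum is positive.
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

for _ in range(100):                      # repeated passes over the data
    for x1, x2, label in data:
        error = label - predict(x1, x2)   # how wrong we were on this point
        w1 += lr * error * x1             # nudge weights to reduce error
        w2 += lr * error * x2
        b  += lr * error

print([predict(x1, x2) for x1, x2, _ in data])  # ideally [0, 0, 1, 1]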

I haven't looked too much into Penrose, but I haven't found much that's compelling about his ideas of quantum consciousness. From what evidence I've seen, the brain doesn't act like a quantum computer. I think because consciousness and quantum mechanics are both mysterious and counterintuitive there's a temptation to link the two, but I think that should be resisted.
I have a similar feeling.
 
^This. ^

How do we ever find out whether a "machine" truly "feels" "pain" in the same way we feel it? We can make a machine that does something that looks like feeling "pain" on the outside, but how would we find out what the machine itself "experiences", if it truly "experiences" anything? Would we ask the machine, "did that hurt"? To which the machine replies, "yes". So did the machine reply "yes" because it really did hurt, or did it reply "yes" because someone programmed it to do so when certain inputs are received?
How do we know if other people experience pain? How do we know if non-human animals experience pain?

I think the way to go is to look at how pain functions in the human psyche and see whether it is matched in the machine. A machine that experiences true pain will, for example, be quite insistent on avoiding situations that cause it.
 
Strictly speaking, we can't even find out whether another person can feel pain or anything else the same way we do. All we can do is perform experiments trying to find differences in behavior, but there is no way to prove it conclusively.

In other words, strong AI will act like a conscious being, but will not necessarily have consciousness. And we have no way to differentiate between these two things.

Granted, I probably can't prove conclusively that others feel pain the same way I do; however, it seems like a reasonable inference, since presumably I share similar origins with most people around me. We all came into the world and behave in the world in a similar manner. It would be a bit of an additional extravagance to presume that although others share so much apparent commonality with me, they nevertheless don't experience pain in the same way. Of course, then comes the problem of the proverbial sadist who "likes" to feel "pain".

Of course, it opens up a huge ethical can of worms to even try to create a computer that we think is "conscious". If we can create a computer that is conscious, would that conscious being then have the same sort of rights and obligations that people do?

So if my computer were conscious, when I turn it off am I in effect putting it to "sleep"? And if I throw it in a trash compactor, am I "killing" it? And if such things can be done to sentient machines with little or no apparent ethical consequence, then how does that play out with other beings we ascribe sentience to? As the saying goes, life imitates art and vice versa. It seems like the same can be said of technology. Our world views could perhaps become shaped by the ways we interact with technology.

I realize that humans will probably inevitably try to cross the threshold from creating machines that are not conscious to creating machines that (at least) seem conscious. It's just what we seem to collectively want to do. Technology is taking us in the direction of a Brave New World where our ethical sensibilities are challenged at their very core. :eek:
 
How do we know if other people experience pain? How do we know if non-human animals experience pain?

I think the way to go is to look at how pain functions in the human psyche and see whether it is matched in the machine. A machine that experiences true pain will, for example, be quite insistent on avoiding situations that cause it.

That machine would be even more plausible if there are situations where it is willing to endure differing levels of it.

Pain is a fact of life and virtually everyone deliberately chooses to endure at least some pain multiple times in their life if they deem the result of their choice to be worth the discomfort.
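As a purely illustrative toy (all the options and numbers below are invented), that kind of trade-off can be caricatured in a few lines: weigh the expected benefit against the discomfort, avoid pointless pain, but still accept pain that is judged worth it.

```python
# Toy sketch only: a machine that "avoids pain" but will endure it
# when the expected payoff outweighs the discomfort.

options = [
    # (description, pain_cost, expected_benefit) -- made-up numbers
    ("do nothing",        0.0, 0.0),
    ("minor surgery",     3.0, 8.0),
    ("touch a hot stove", 5.0, 0.5),
]

def net_value(option):
    _, pain, benefit = option
    return benefit - pain          # payoff minus discomfort

choice = max(options, key=net_value)
print(choice[0])  # picks "minor surgery": pain endured because it's worth it
```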
 
How do we know if other people experience pain? How do we know if non-human animals experience pain?

I think the way to go is to look at how pain functions in the human psyche and see whether it is matched in the machine. A machine that experiences true pain will, for example, be quite insistent on avoiding situations that cause it.

But would that insistence on avoiding such situations be "programmed" to appear real, or would it be the "real thing"? How could we definitively know whether a machine (even a neural net) truly experiences "pain"? I assume there is a right or wrong answer to the question, but it also seems like a "black box" (or maybe like Wittgenstein's beetle in a box) which we will never be able to see inside of for ourselves.
 
Intelligence IS computation, so yes. What we don't yet know is the physical substrate necessary for consciousness. But there's no reason why we couldn't mechanically emulate those fields.
 
I suppose I'd have to say a weak yes, but I honestly dunno if we will ever create an AI that is human enough with, for instance, consciousness, as others have said. At least not within our lifetimes, singularity fans be damned.
 
If I remember correctly, his idea is different. A quantum computer is still a computer, a Turing machine. Penrose's hypothesis is that human intelligence is based on more complex, non-algorithmic principles. Quantum effects in specific brain structures are necessary for human intelligence and consciousness to function, but this has nothing to do with the present idea of quantum computers.
Fair enough. I am aware of that distinction but was being pretty loose with my words here. My intuition says an argument can be made that either the brain must be wired to preserve quantum weirdness at a large scale (which is unrealistic), or any of the supposed special algorithms could be effectively replaced with finite lookup tables (which would allow for conventional computers to be a basis for AI).
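A rough sketch of the lookup-table point (illustrative only; the "mysterious" function below is just a placeholder, not a claim about how the brain works): any process over a finite set of inputs can be tabulated in advance, and after that an ordinary computer only needs to look the answer up.

```python
# Illustrative sketch: any function over a *finite* set of inputs can be
# replaced by a precomputed table, and table lookup is something a
# perfectly ordinary (non-quantum) computer can do.

def mysterious_process(x):
    # Stand-in for whatever "non-algorithmic" step is being hypothesised;
    # here it's just a placeholder computation.
    return (x * x + 1) % 7

FINITE_INPUTS = range(1000)

# Precompute the table once...
table = {x: mysterious_process(x) for x in FINITE_INPUTS}

# ...and from then on the "special" process is just a lookup.
def lookup_version(x):
    return table[x]

assert all(lookup_version(x) == mysterious_process(x) for x in FINITE_INPUTS)
```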
 
Granted, I probably can't prove conclusively that others feel pain the same way I do; however, it seems like a reasonable inference, since presumably I share similar origins with most people around me. We all came into the world and behave in the world in a similar manner.
Yes, this is a reasonable assumption. But as with AI, assumptions are all we have to go on when determining whether a particular entity has consciousness or not. Consciousness, or the lack of it, cannot be proven in a strict mathematical sense.

And yes, interaction with artificial intelligence will be a whole new can of worms in an ethical sense for us. Or for them.

But would that insistence on avoiding such situations be "programmed" to appear real, or would it be the "real thing"?
But we are also, in a way, programmed to avoid situations which cause pain. Is our feeling of pain a "real thing"? In a physical sense it's just electrical impulses.
 
I'm not going to argue with you if you want to call neural nets AI :)

Personally I do not think they qualify as intelligence. Neural networks are a (partial) solution to enable computers to learn, in particular to learn from data and optimize their behavior to accomplish some goal that couldn't have been achieved as effectively with explicit programming. Neural networks are not that different in principle from stuff like Markov models and so on.

But I would say intelligence is more than the ability to learn. Most importantly, it is the ability to solve new problems, including abstracting known solutions to apply them to something else. That is really outside the scope of neural networks.

I think "neural network" is a particularly problematic term because (and that is not directed at you) many people assume that because they are inspired by the human brain, they have the potential for the same abilities as the human brain.

I do not wish to argue or anything either; it's just that if you look into the Artificial Intelligence research being done by computer scientists (and I presume others), these people are building Artificial Intelligence systems. AI already exists.

I threw "neural nets" out there because that's one approach I learned in an AI class at university. I don't mean to imply that it's all you need for human-like intelligence. It's just one method that researchers and programmers have been using to implement AI. It's just one tool - not a silver-bullet type of solution.

My point is that AI doesn't mean human-like intelligence. It just means what it says, artificial intelligence. Maybe we need a new term - Artificial human-like intelligence?

What I take to be meant by AI:

wikipedia said:
Major AI researchers and textbooks define this field as "the study and design of intelligent agents", in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

It doesn't necessarily have to be intelligence on par with human intelligence. It's just got to be an agent that can learn and to some degree adapt to incoming input. A neural net is one simple example of a way to make something like that happen.
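For what it's worth, here's a bare-bones sketch (names, actions, and probabilities all invented, not taken from any textbook or library) of that agent definition in code: something that acts on its environment, observes the outcome, and adapts its estimates so that better-scoring actions get chosen more often.

```python
# Minimal illustration of an "intelligent agent" in the sense quoted above:
# act, observe the reward, and adapt so success becomes more likely.

import random

ACTIONS = ["left", "right"]
value = {a: 0.0 for a in ACTIONS}   # the agent's running estimate per action

def environment(action):
    # Hypothetical environment: "right" pays off more often than "left".
    return 1.0 if (action == "right" and random.random() < 0.8) else 0.0

for step in range(500):
    # Decide: mostly pick the action believed best, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])

    reward = environment(action)                      # act and observe
    value[action] += 0.1 * (reward - value[action])   # adapt the estimate

print(value)  # "right" should end up with the higher estimated value
```

Obviously nothing like human intelligence, but it does perceive, act, and adapt, which is all the quoted definition asks for.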
 
Intelligence IS computation, so yes. What we don't yet know is the physical substrate necessary for consciousness. But there's no reason why we couldn't mechanically emulate those fields.

So if I see and therefore "experience" the color "blue", is "blueness" a "computation"? It seems to me that experiencing qualia somehow differs in character from strictly spatial and temporal tasks which clearly involve mathematics and computation. We can certainly show that when I see "blue" XYZ is happening in my brain and that is all computational, but I don't see a "computation" (so to speak). I see or experience "blueness".

Presumably it might be possible to create a machine that mimics a person in everything it overtly does, rotates its machine eyes toward (what I experience as) a "blue" sky, sighs, and produces a recorded message saying, "Wow, that is a beautiful sky today", all with the exception of actually experiencing "blueness". In other words, the machine could be what David Chalmers would call a "zombie": something which mimics consciousness in every observable way, except that it isn't conscious.
 
Since we haven't done it yet I won't say categorically yes, but based on what we know of neurology and intelligence so far I can't see any reason why it isn't technically possible. How difficult it may be is another question but at this point I lean strongly yes, it is possible.
 
But we are also, in a way, programmed to avoid situations which cause pain. Is our feeling of pain a "real thing"? In a physical sense it's just electrical impulses.

Of course I don't assume that consciousness could never be created. For all we know, maybe the PCs we are using to communicate right now are themselves independently conscious. It just seems like a universal constant that we will never truly know for sure.

However, one thing I don't think I will ever be able to accept is that I don't truly experience qualia, that it is "folk psychology" or something. As David Chalmers points out, consciousness is the most immediate and directly known phenomenon to me. I can't be sure that anything I look at is "real", but I can know for certain that I experience what I experience, whatever it may be "in-itself". Call it "neo-Cartesianism" (in a sense), maybe.
 
So I suppose I should have assumed that that's what this thread was about.

Alas I read the OP quite literally instead..
It's not unreasonable not to make that assumption; the whole problem with this topic is that you can define intelligence in so many different ways.

Unfortunately that leads to the AI label being applied to many different things, from computer game AI agents (usually nothing more than simple heuristics + randomness) and machine learning AI to "strong" AI.
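For the game-AI end of that spectrum, "simple heuristics + randomness" really can be as modest as the following sketch (purely illustrative; the move data and weights are made up): score each legal move with a hand-written rule, then add a little noise so the opponent isn't perfectly predictable.

```python
# Rough sketch of "simple heuristics + randomness" in a game AI opponent.

import random

def heuristic_score(move):
    # Hand-tuned stand-in rule: prefer capturing, then advancing.
    return 10 * move["captures"] + move["advance"]

def choose_move(legal_moves, noise=2.0):
    # Pick the best-scoring move after adding a bit of random noise.
    return max(legal_moves,
               key=lambda m: heuristic_score(m) + random.uniform(0, noise))

moves = [
    {"name": "capture pawn", "captures": 1, "advance": 0},
    {"name": "push forward", "captures": 0, "advance": 3},
]
print(choose_move(moves)["name"])
```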
 
It's not unreasonable not to make that assumption; the whole problem with this topic is that you can define intelligence in so many different ways.

Unfortunately that leads to the AI label being applied to many different things, from computer game AI agents (usually nothing more than simple heuristics + randomness) and machine learning AI to "strong" AI.

I agree that the AI label is a bit.. problematic.

I find it odd that, in general, people will call things like the computer player in Civ "AI", as well as the human-level type of artificial intelligence... but then seem to ignore the AI research and systems already in place today.

AI exists :p You can look it up. The building blocks for human-level intelligence are already being worked on. Nobody knows how we're going to get there, but it seems that if we're going to have a discussion on the future of AI, we should at least look at the state of AI research as it exists today.
 
So if I see and therefore "experience" the color "blue", is "blueness" a "computation"? It seems to me that experiencing qualia somehow differs in character from strictly spatial and temporal tasks which clearly involve mathematics and computation. We can certainly show that when I see "blue" XYZ is happening in my brain and that is all computational, but I don't see a "computation" (so to speak). I see or experience "blueness".

Presumably it might be possible to create a machine that mimics a person in everything it overtly does, rotates its machine eyes toward (what I experience as) a "blue" sky, sighs, and produces a recorded message saying, "Wow, that is a beautiful sky today", all with the exception of actually experiencing "blueness". In other words, the machine could be what David Chalmers would call a "zombie": something which mimics consciousness in every observable way, except that it isn't conscious.

It's why I distinguished between intelligence and consciousness. There's no doubt that computation is required to distinguish blue from not blue. We don't know what physical chemistry is required to generate the qualia.
 
So I suppose I should have assumed that that's what this thread was about.

Alas I read the OP quite literally instead..
It's interesting that originally (IIRC) the term Artificial Intelligence was defined only in this form, like "thinking machine". Only later did the term become more vague and start to be used in game development, as a part of computer science, etc. Twenty years ago computer programs could already play chess at grandmaster level, but they weren't called AI. At least much less often than today.

However, one thing I don't think I will ever be able to accept is that I don't truly experience qualia, that it is "folk psychology" or something. As David Chalmers points out, consciousness is the most immediate and directly known phenomenon to me. I can't be sure that anything I look at is "real", but I can know for certain that I experience what I experience, whatever it may be "in-itself". Call it "neo-Cartesianism" (in a sense), maybe.
I think most people will agree with this. It's rather a question of whether qualia (or consciousness) objectively exist, or whether we can truly experience anything at all.
 