Machines taking over in 2045?

That sounds like a hardware problem, which can be handled with a sufficient number of sensors!

Can it? Because I think it's more about how the brain interprets the data, and that's intrinsically tied to its biological nature. The "meat" makes us who we are. I think it's doubtful whether a human mind can exist outside a human body, because they're basically the same thing (despite all the spiritual nonsense that claims otherwise ;) ).
 
Well, I don't believe in a soul. I do believe in the experience of continuity of consciousness (even if it's interspersed with sleeping). We believe in this continuity because it's familiar, I expect. So, the question of the soul will (imo) be answered by proxy, whether the consciousness can be maintained in a continued state while shifting around the substrates.

We already know that consciousness can be maintained while shifting around in a substrate, because we do this with our brains. Components of our consciousness are maintained in different neuroanatomical structures, and as we change what we're thinking, the physical focus of our consciousness changes. We know that the consciousness can spread into certain regions, because we can shift our attention there.

IF we hook up to machines that allow us to migrate consciousness between silicon and biological substrate, then we can experience migrating our consciousness to our satisfaction. If we actually cannot shift the location of our consciousness, then we won't feel like the uploading is working. It will offend our instincts, and we'll feel like it's not 'safe'.

That sounds like a hardware problem, which can be handled with a sufficient number of sensors!

How do you know that consciousness is not a soul (God) that allows us to exist? Would firing millions of neurons or "protons" in a machine actually "create" the same consciousness?
 
Kurzweil has always been an idiot spewing nonsense due to his fear of death. The "singularity" is always just within reach of his own mortality...

I'm with Winner here, this is religious thinking. And worse than that, it masquerades as science, and some people fall for it!

As for artificial intelligence, it has been and remains an intractable problem. There have been zero breakthroughs since the idea came up in the 1950s, when a bunch of Kurzweil-like idiot savants (yes, it's heresy to say that about famous scientists, so what? They are, in the sense that they become so specialized that they live in the proverbial ivory tower and fail to grasp the complexity of the real world) gathered at Dartmouth in 1956, thinking that they would crack the problem! Instead, all we've managed to develop are specialized systems.

Kurzweil, like Dawkins and others, is a fraud masquerading as a scientist.
 
How do you know that consciousness is not a soul (God) that allows us to exist? Would firing millions of neurons or "protons" in a machine actually "create" the same consciousness?

Cognitive sciences are indeed sciences. There's no reason to start thinking of metaphysical reasons when you can instead work with the reliable information.
 
Kurzweil has always been an idiot spewing nonsense due to his fear of death.

death isn't really that scary; it's just nothingness, very much like sleeping without dreams. the pain in the process of dying is worse than death itself. hyperintelligent computer systems that blatantly don't need us and could subject humans to fates-worse-than-death on the other hand... just look at how humans treat animals for a taste of what we could be in for.

there are a lot of people that would downplay, deny or obstruct developments in AI out of a fearful inability to accept their long-run ramifications.
 
Well, I don't believe in a soul. I do believe in the experience of continuity of consciousness (even if it's interspersed with sleeping). We believe in this continuity because it's familiar, I expect. So, the question of the soul will (imo) be answered by proxy, whether the consciousness can be maintained in a continued state while shifting around the substrates.

We already know that consciousness can be maintained while shifting around in a substrate, because we do this with our brains. Components of our consciousness are maintained in different neuroanatomical structures, and as we change what we're thinking, the physical focus of our consciousness changes. We know that the consciousness can spread into certain regions, because we can shift our attention there.

IF we hook up to machines that allow us to migrate consciousness between silicon and biological substrate, then we can experience migrating our consciousness to our satisfaction. If we actually cannot shift the location of our consciousness, then we won't feel like the uploading is working. It will offend our instincts, and we'll feel like it's not 'safe'.
Consciousness doesn't migrate in the brain. The brain just uses different parts to do different things. The brain is the machine that facilitates consciousness. But as it is just a machine, it is possible to reverse engineer how it works, and simulate it in software. If that process involves copying an existing human, then the copy would think of itself as the original.

Well... I truly understand what you mean. It's like, even if a machine could "cut and paste" you atom by atom to a place 10,000 miles away, that new "you" would already not be the same person. (It's like some of the teleportation devices in sci-fi.)

But my argument would be that when humans advance to that level, our ethics and definition of "life" will also evolve. We may call this killing yourself, but it might come to be understood as a "natural step" to the next level of life.

A lot of religions today also talk about life after death/resurrection/ascension, which is a pretty similar idea. For all intents and purposes, an exact copy of you would be the same person to your friends/relatives/workmates; it doesn't matter whether you "die" or not. The same can apply to a complete upload of your consciousness into a machine (not just your memories).

I really, really, really hope this can happen, as it is one of the only ways to give humans everlasting life. (The dream of all kings and dictators since Civ exists XD)
Well put.

First off, it was a general statement and not a reply to you. Awareness is very far removed from show-off machine learning like Watson. It's not that impressive in the end, just a set of algorithms that work out the connections between words. Robotics, for what it's worth, is just a field of programming applied to a physical mechanism, so you can just consider the advances in programming. The distinction that needs to be made is between a proper simulation (doing exactly what a brain is doing) and heuristics (simple if-then-else behaviour patterns that are just meant to look convincing). A horse heuristic would not require much effort to fool anyone not actively involved with horses. A full-on simulation is still impossible with modern technology and algorithms.
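That heuristics-versus-simulation distinction can be sketched in a few lines. The function below is a purely illustrative, invented "horse heuristic" (none of these names come from the thread): a handful of hard-coded rules that might look convincing from the outside, with no internal model of a horse at all.

```python
# A "horse heuristic": hard-coded if-then-else rules meant only to look
# convincing from the outside, with no internal model of a horse.
def horse_heuristic(stimulus: str) -> str:
    if stimulus == "apple":
        return "approach"
    elif stimulus == "loud noise":
        return "flee"
    else:
        return "graze"

# A full simulation would instead model internal state (fear, hunger,
# fatigue, ...) and derive behaviour from it, which is vastly harder.
print(horse_heuristic("apple"))  # approach
```

The heuristic only has to fool an observer; a simulation has to get the underlying mechanism right, which is why the two differ so much in difficulty.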
I disagree that it is necessary to simulate a brain to achieve the same function. It is possible to create two very different machines, that are indistinguishable in function.

There are two approaches to "true" AI. One is to copy the brain. Our ability to do this is dependent on our understanding of neuroscience. The other is to create a functional model of the brain. Our ability to do this is limited by our understanding of psychology. Either approach can ultimately lead to Turing Test passing machines.

why would you kill the old "you" when uploading your mind to the computer? there is no need or incentive whatsoever for this.
Presumably inspecting your neural pathways for copying is easier if the scope is allowed to damage the original.
 
IF we hook up to machines that allow us to migrate consciousness between silicon and biological substrate, then we can experience migrating our consciousness to our satisfaction. If we actually cannot shift the location of our consciousness, then we won't feel like the uploading is working. It will offend our instincts, and we'll feel like it's not 'safe'.

Hmm. Why do you assume that, if it's not actually possible and would result in death, we'll sense danger at the moment of transfer besides the gut feeling, experienced all along, that it's not that simple?

Personally, I do think that it's not that simple, whether or not there is such a thing as the soul, because I think we have not quite identified the root of the qualitative difference between the human mind and present-day AI. I suspect it has to do with how memory and consciousness are stored.
 
Can it? Because I think it's more about how the brain interprets the data, and that's intrinsically tied to its biological nature. The "meat" makes us who we are. I think it's doubtful whether a human mind can exist outside a human body, because they're basically the same thing (despite all the spiritual nonsense that claims otherwise ;) ).

I quite agree that one needs a human body to have a human mind (with some wiggle room regarding specific definitions). I think that uploading has the potential to create a mind that's objectively superior to a mere human mind. The transhumanists call this 'posthuman'. And, as a second agreement, I'd resist machine integration if I suspected that it would make me 'less' than my current human consciousness.

But I don't agree that it's intrinsically tied to its biological nature. It's a function of sensory integration across a large number of modalities. In principle, this is just a sensor problem. For example, I suspect that it would be possible to create an artificial simulation (a la the Matrix) that fully captured the human sensation. At least, theoretically. And, once that is possible, I don't know if we'd want to limit our experiences like that.

How do you know that consciousness is not a soul (God) that allows us to exist? Would firing millions of neurons or "protons" in a machine actually "create" the same consciousness?

I don't know that consciousness is not a soul. It seems like an unknowable thing, at least at present levels of neuroscience. My experience is that people who build worldviews off of Scripture or theological thinking tend to be really wrong about the nature of reality. These models of thinking cause assumptions that just don't hold up, and time-and-again it's been shown to be a false set of ideas. Because of that, I don't really think that the idea of the 'soul' offers much. But like I said, we can feel our consciousness, and I think that if we can migrate our consciousness between substrates, we'll be convinced.

That said, I don't see why a machine cannot be conscious, if built correctly. From the outside perspective, it really appears to be a question of building multi-modal sensory integration (in simplified terms) in the form of active metaphor. To learn more, I recommend the Almaden lectures on artificial intelligence, available on Google Video. (Lecture 1 of 12) With 12 hours of video, there's a lot to learn in this field for the casual observer.
 
I quite agree that one needs a human body to have a human mind (with some wiggle room regarding specific definitions).

I don't know if I agree with that. All we'd need to do is duplicate the central nervous system, really. I suppose the loss of organs (or whatever) would feel weird, but.. Take out a kidney and you still feel pretty much the same.
 
I disagree that it is necessary to simulate a brain to achieve the same function. It is possible to create two very different machines, that are indistinguishable in function.

There are two approaches to "true" AI. One is to copy the brain. Our ability to do this is dependent on our understanding of neuroscience. The other is to create a functional model of the brain. Our ability to do this is limited by our understanding of psychology. Either approach can ultimately lead to Turing Test passing machines.

Attempting to base an AI on psychology means you are lacking the foundations of learning and experience. Then you've just created a sophisticated machine learning system. That's not a true AI.
 
Attempting to base an AI on psychology means you are lacking the foundations of learning and experience. Then you've just created a sophisticated machine learning system. That's not a true AI.

What would you then consider true AI? Copying the human brain and providing it with infallible memory, massive databases and processing power seems like a solid way to me. Of course, I have nothing to base this on, so feel free to rip me/enlighten me.
 
Attempting to base an AI on psychology means you are lacking the foundations of learning and experience. Then you've just created a sophisticated machine learning system. That's not a true AI.
I'm not sure what you mean, so I'll elaborate on what I mean.

Human behavior is highly predictable and can be modeled. Psychology explores the emotions that motivate or cause certain behavior. By observing people we can identify the variables that define our emotional state. It's not exactly easy to identify all the emotions that go into a single human action, mostly because we aren't generally aware of them. But with study, and the help of hindsight, it can be done. Or perhaps it can be found that some things people do are random, which is just as useful. We also reason about how to achieve the more conscious goals. This process is even easier to analyse. Together, these observed processes and variables describe human behavior. And anything that can be accurately described can be programmed. I'm injecting my own preconceived notions of the human psyche by splitting it into emotions and reason, but if other models are more accurate, they can be decomposed in similar ways.

To put it more succinctly, psychology seeks to describe human behavior. By understanding human behavior, it is possible to build a machine that behaves the same way. And if it thinks like a person, acts like a person, who's to say it's not a person?
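The emotions-plus-reason decomposition described above can be sketched as a toy program. Everything here is invented for illustration (the variables, thresholds, and actions are assumptions, not anything from psychology): emotional state is a set of observed variables, and a "reasoning" step picks the action that serves the dominant one.

```python
# Toy decomposition of behaviour into emotional state plus reasoning.
# The variable names and thresholds are purely illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EmotionalState:
    fear: float    # 0.0 .. 1.0
    hunger: float  # 0.0 .. 1.0

def choose_action(state: EmotionalState) -> str:
    # Emotion sets the goal; "reason" picks the action that serves it.
    if state.fear > 0.7:
        return "retreat"
    if state.hunger > 0.5:
        return "seek food"
    return "explore"

print(choose_action(EmotionalState(fear=0.9, hunger=0.2)))  # retreat
```

The point is not that two thresholds capture a psyche, but that once a behavioural model is described this precisely, it can be programmed; a more accurate model would just mean more variables and rules.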
 
The brain is not just memory, nor even the ability to randomly access that memory. It would be more like wireless packets being shot all over from one side and somehow rearranging themselves properly on another side without a "road" map. When a machine can figure out how to "create" its own road map, in this fashion (wirelessly), then you have an AI. BTW, even humans do not know the "road" map of the brain. Medication is used to change pathways and "block" pathways, but no one can "map" or determine where and when things "need" to be fired.

This is a really simplistic analogy and there are a lot more complex things going on also.

@ Warpus

It is more than a net. There are no "hard" connections. There are "senders" and "receivers" that are constantly firing and receiving. And there are multiple "packets" from multiple "ports" all firing at the same time for each sender and receiver. It is this "process" that humans need to figure out how to replicate.
 
True AI = neural net
Neural nets are a tool for finding patterns, often patterns that are not obvious to human inspection. That makes them an alternative to human intelligence, not a copy of it. You could probably build a neural net that behaves like a human if you knew enough, but nothing about a neural net's behavior is inherently AI.

Also, assuming that neural nets do accurately model neurons, it may still be possible to create a machine that behaves like the brain without using that model as inspiration. This is much like how I can define a software function in many different ways while still having the same input and output.
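That last point, two different implementations sharing the same input/output behaviour, is easy to demonstrate with an invented example (the functions below are illustrative, not from the thread):

```python
# Two very different machines, indistinguishable in function:
# from the outside, only input/output behaviour is observable.
def sum_iterative(n: int) -> int:
    total = 0
    for i in range(1, n + 1):  # step through every value
        total += i
    return total

def sum_formula(n: int) -> int:
    return n * (n + 1) // 2    # closed-form, no loop at all

# Identical behaviour despite entirely different internals.
assert all(sum_iterative(n) == sum_formula(n) for n in range(100))
```

By the same logic, a machine could in principle match the brain's observable behaviour without copying its internal mechanism.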

Awareness. Actual intelligence. Modelling an AI on a set of rules is too constraining to be considered a real AI.
Don't awareness and human thought follow set rules?

If somebody behaves erratically, that's not a sign of consciousness or intelligence or humanity or anything like that. It's a sign of craziness.
 
Don't awareness and human thought follow set rules?

If somebody behaves erratically, that's not a sign of consciousness or intelligence or humanity or anything like that. It's a sign of craziness.

Those rules are able to change dynamically.
 
The brain is not just memory, nor even the ability to randomly access that memory. It would be more like wireless packets being shot all over from one side and somehow rearranging themselves properly on another side without a "road" map. When a machine can figure out how to "create" its own road map, in this fashion (wirelessly), then you have an AI. BTW, even humans do not know the "road" map of the brain. Medication is used to change pathways and "block" pathways, but no one can "map" or determine where and when things "need" to be fired.

This is a really simplistic analogy and there are a lot more complex things going on also.
Our inability to describe how we think is indeed the roadblock to creating AI. However, we don't know our own road map, yet we are surely intelligent, so knowing the roadmap cannot be a requirement for intelligence.


Those rules are able to change dynamically.
Can you give an example? When do the rules of human behavior change, and how are those changes intrinsic to intelligence?

I suspect that rules changing dynamically is simply a case of not understanding and stating all the rules. But I await your example of dynamic rules.
 
Our inability to describe how we think is indeed the roadblock to creating AI. However, we don't know our own road map, yet we are surely intelligent, so knowing the roadmap cannot be a requirement for intelligence.


Can you give an example? When do the rules of human behavior change, and how are those changes intrinsic to intelligence?

I suspect that rules changing dynamically is simply a case of not understanding and stating all the rules. But I await your example of dynamic rules.

Being intelligent and knowing how to get there are two different issues.
 