Machines taking over in 2045?

But aren't ANNs sort of inspired by biological neural networks? Would copying a human brain like this be cheating? ( http://en.wikipedia.org/wiki/Blue_Brain_Project) :mischief:

Hmm.. If it works, no :)

timtofly said:
It is more than a net. There are no "hard" connections. There are "senders" and "receivers" that are constantly firing and receiving. And there are multiple "packets" from multiple "ports" all firing at the same time for each sender and receiver. It is this "process" that humans need to figure out how to replicate.

You're describing a neural net, the basics of which we understand. The human brain is an incredibly complex neural net, though: we understand basic neural nets, but the nuances of the human one are not so easy to figure out.
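Just to make "the basics we understand" concrete, here's a toy sketch (Python, numpy only) of the kind of basic net being talked about: a tiny two-layer network learning XOR by gradient descent. This is only an illustration of the moving parts (weighted sums, a nonlinearity, a training rule); the hidden size, learning rate and iteration count are arbitrary choices, and it has nothing like the complexity of a brain.

```python
# A toy two-layer neural net learning XOR with plain numpy, trained by
# gradient descent on squared error. Purely illustrative: the hidden size,
# learning rate and iteration count are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: output 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20000):
    # Forward pass: weighted sums plus a squashing nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of 0.5 * (out - y)^2 through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

Even at this toy scale you can get a feel for the fiddliness: change the random seed or the learning rate and training can slow down or stall, which hints at why making nets work well is harder than the textbook picture suggests.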

Souron said:
Neural nets are a tool for finding patterns, often patterns that are not obvious to human inspection. That makes them an alternative to human intelligence, not a copy of it. You could probably build a neural net that behaves like a human if you knew enough, but nothing about a neural net's behavior is inherently AI.

That's what our brains are though - incredibly complex pattern recognition machines. They're specialized, so they are only good at solving certain types of problems.

My point was just to distinguish "fake" AI, the stuff you see in video games (such as Civ), from "true" AI, one popular form of which would be a neural net.
 
Any machine able to take something from the abstract to the concrete and from the concrete into the abstract is well on the way to being an AI. From there you can construct and replicate almost any human emotion and invention: humor, algebra, poetry, velcro, sarcasm, empathy, rage and the atomic bomb.

The Turing test is a rudimentary tool to figure out if a computer is self-aware. But I believe AI is way too complicated a phenomenon to be tested solely on that ground.
 
Awareness. Actual intelligence. Modelling an AI on a set of rules is too constraining to be considered a real AI.

Well, each component of an AI can be built to carry out sets of rules. But I think our neurons are much the same: they follow a rather complex set of rules, and the interactions between the neurons are what create consciousness.

While each component can have rules, the sum of the AI will have a great number of interfaces with the real world (it has to, for consciousness to exist). These interfaces will also follow sets of rules, and those will go on to interact with the downstream components. So, obviously, the AI will be adaptive. Consciousness is more than machine learning, for sure, but if you break apart the pieces of consciousness, it looks a lot like machine learning.
 
It's unlikely that machines will govern our lives as they don't have aspirations.

Corporations don't literally have aspirations, either. But as certain science fiction memes point out :borg::assimilate: they have a good start on governing our lives.

Homo-machination - Humans with technical enhancements.

This can only postpone, not prevent, our obsolescence. Better to create artificial intelligences that value human life and value continuing to value human life. That way, we can be inferior but continue to flourish.

I'm probably what people would consider a 'singularitarian', in that I expect this transition to an artificial substrate to occur. [...]

I think Kurzweil is wrong in his timelines, because he's tacked on too many exponential trends without realising that some of those trends are sigmoidal (not exponential). [...]

I think it's theoretically possible to upload. In my opinion, it will require a gradual migration, where our conscious substrate is first 'shared' between biology and machine. But once that continuity of consciousness exists, the uploading can eventually be total. But, even if I am wrong, the minds that are metaphysically 'okay' with suicide in order to clone themselves into machines will be selected for, and will likely be able to outcompete those whose philosophies keep them in their meat.

As usual, I find myself largely agreeing with El Mac. Here I'll focus on the disagreement. The only continuities of consciousness worth bothering about are causality and character/quality. If the machine can have experiences with the same qualities as yours (that's a big IF), and if its being the way it is results from your being the way you are (via a brain scan, for example), then that's a continuation of your experiences and actions. It doesn't matter if the wiring and the meat ever directly touch.

On the other hand, just duplicating the high-level patterns of behavior (as Souron seems to be suggesting) is not enough to guarantee similar experience. Just notice that you can perform the same outward behavior while experiencing different things.

Can it? Because I think it's more about how the brain interprets the data, and that's intrinsically tied to its biological nature. The "meat" makes us who we are. I think it's doubtful whether a human mind can exist out of a human body, because they're basically the same thing (despite all the spiritual nonsense that claims otherwise ;) ).

What do you think about the soul? Do you think it exists and would complicate such a transition? I'm guessing no, but you might have a more nuanced answer.

The soul just is the patterns of electrical activity that generate experience and action. Winner's meat-makes-us point is right at least to that extent. The questions remain: (A) can we duplicate that electrical activity in other media, and (B) does that suffice for the same experiences? I dunno about (A). But I'm pretty sure the answer to (B) is yes - for reasons stated by R. Sharvy. Basically, if a mechanical device can perfectly imitate a neuron, you could replace one neuron with it. You'd still be conscious, and still have all your abilities. Now replace another neuron. Repeat ad nauseam.

Well, each component of an AI can be built to carry out sets of rules. But I think our neurons are much the same: they follow a rather complex set of rules, and the interactions between the neurons are what create consciousness.

This.

But I should note: even though it's possible to get AI by duplicating the brain, I think it is a really dumb stand-alone strategy if you just want to get stuff done. AI that works radically differently from our brains will probably appear long before the exact brain emulators. More later.
 
Consciousness doesn't migrate in the brain. The brain just uses different parts to do different things. The brain is the machine that facilitates consciousness.
Sure, sloppy language on my part, because I'm trying to have this conversation at a popularised level. That said, I disagree with your disagreement. If we allow that consciousness exists in 3D space (and it does, i.e., in your brain), then I will suggest that consciousness moves around in its point of focus within your brain: there are some places it is, and some places it cannot be.

Hemiagnosia is probably my best example. When you're conscious, you can either be conscious of your environment, or not. Keep in mind that consciousness is not the same as sentience! If you're not paying attention to certain aspects of your environment, then your consciousness is not there (even if you have the capacity to become conscious of them, with stimulation). So, when you're thinking about the left side of your world, part of your consciousness is residing in your right parietal cortex. If we were to kill that cortex, then your consciousness can no longer be there. So, by becoming aware (or unaware) of your left world, you migrate your consciousness into and out of the right parietal cortex. Being unconscious of your left world (due to, say, distraction) is functionally the same as not having the living tissue! Now, the region is still sentient, obviously. It can be activated by stimuli, and draw your attention (i.e., consciousness) into it again.

Compare that to the spinal cord. The spinal nervous tissue is certainly alive and incredibly biologically continuous with the brain. But we're just incapable of becoming conscious of the percepts dealt with in the spine. We're a couple stages removed, cognitively. We (i.e., the 'person') can get information to the spine. We can get information from the spine. But we cannot place our consciousness in the tissue there.

Not only that, there is no reason why the proportion of researchers in the developed world should remain constant. I think (and hope) that future economic pressures will greatly reduce the demand for lawyers, accountants, etc., and increase the demand for researchers. In short, I don't think that stagnation of the number of researchers need be a problem.
...
Depending on the pressures of the future, more people could turn to research. In fact, if we look at rapidly developing nations like China and South Korea, we note that a very high proportion of their educated workforce is in technical careers, compared to the West. To keep its competitive edge, the West will have to do the same.

The more I look at this, the more I wonder if it's true. A bunch of R&D is driven by consumer demand, but consumers don't always demand R&D goods. In fact, we're quite happy to spend on branding over R&D, and buy products that are advertised as 'better' instead of being objectively better. In the West, only 3% of GDP is spent on R&D. China spends even less (less than 2%, I think). We can expect the developing nations to catch up in proportional investment, but I don't think we'll see an escalation. There's no real long-term trend of escalating R&D competition between the developed nations, even though competition between these nations has been going on for some time.

Awareness. Actual intelligence. Modelling an AI on a set of rules is too constraining to be considered a real AI.

Could you please expand upon this? We're not quite sure what you mean!

As usual, I find myself largely agreeing with El Mac. Here I'll focus on the disagreement. The only continuities of consciousness worth bothering about are causality and character/quality. If the machine can have experiences with the same qualities as yours (that's a big IF), and if its being the way it is results from your being the way you are (via a brain scan, for example), then that's a continuation of your experiences and actions. It doesn't matter if the wiring and the meat ever directly touch.
As far as being comfortable with the idea that *I* am uploading, it will need to result in more than just a machine that perceives continuity with my body. My body will need to perceive continuity with the machine. Your example of the artificial neuron is mostly the way I think about it: as the neurons are replaced, the continuity of perception never needs to change. I will be confident about the process, for example, once my right parietal cortex is replaced and I am able to shift my attention between my left and right worldviews (and even to hold them both at once).

But I should note: even though it's possible to get AI by duplicating the brain, I think it is a really dumb stand-alone strategy if you just want to get stuff done. AI that works radically differently from our brains will probably appear long before the exact brain emulators. More later.

Robin Hanson (an economist, with a really fun interview on the EconTalk podcast) might disagree. His thesis is that brain emulation is the low-hanging fruit for AI, because we don't need to know how it works in order to get it to work. And having AI will create such an economic shift that the meat-people will perceive it as a Singularity (but over the course of months, not Vernor Vinge's 'interesting afternoon'). In other words, an emulation is not the ideal AI for us. But it will be an easy AI to make, and it will be a profitable AI to make.

The social implications of copying ourselves are pretty interesting, because you'd have to contract with yourself, essentially, before you copy yourself. You'd arrange the copying such that no matter where you ended up (as the meat, or as the AI), you'd feel like the copying was a mutually beneficial idea.
 
Sure, sloppy language on my part, because I'm trying to have this conversation at a popularised level. That said, I disagree with your disagreement. If we allow that consciousness exists in 3D space (and it does, i.e., in your brain), then I will suggest that consciousness moves around in its point of focus within your brain: there are some places it is, and some places it cannot be.

Hemiagnosia is probably my best example. When you're conscious, you can either be conscious of your environment, or not. Keep in mind that consciousness is not the same as sentience! If you're not paying attention to certain aspects of your environment, then your consciousness is not there (even if you have the capacity to become conscious of them, with stimulation). So, when you're thinking about the left side of your world, part of your consciousness is residing in your right parietal cortex. If we were to kill that cortex, then your consciousness can no longer be there. So, by becoming aware (or unaware) of your left world, you migrate your consciousness into and out of the right parietal cortex. Being unconscious of your left world (due to, say, distraction) is functionally the same as not having the living tissue! Now, the region is still sentient, obviously. It can be activated by stimuli, and draw your attention (i.e., consciousness) into it again.

Compare that to the spinal cord. The spinal nervous tissue is certainly alive and incredibly biologically continuous with the brain. But we're just incapable of becoming conscious of the percepts dealt with in the spine. We're a couple stages removed, cognitively. We (i.e., the 'person') can get information to the spine. We can get information from the spine. But we cannot place our consciousness in the tissue there.
So I agree that a damaged brain can still be conscious, and that sometimes brain activity in a certain region may be non-vital to the brain's operation at any particular moment. However, our perceived identity, our self, is unaffected by what part of the brain we are using at a given moment. In general, we identify ourselves not by our brains, but by our entire bodies. Evidence of this is people who have the mental illness of dissociating a body part from themselves, wanting to get rid of it because they perceive it as alien. This strong aversion to having an alien limb suggests a strong mental importance to what constitutes your true body. So we have a distinct self and brain. Our brain provides the mechanism of consciousness, but it is not the self.

You describe the process of brain augmentation or transfer as the seamless replacement of identically functional parts, in a way that is unnoticeable to the person being changed. The way you describe it, it almost seems necessary for the person to be awake during the procedure. However, we can imagine a person who has the same memories and perceived identity as their previous unaugmented body, but whose mental capabilities are quite different. If the new brain is different enough, then it makes sense to treat the post- and pre-procedural people as different. However, I cannot imagine a definitive test that would establish the sameness of the post- and pre-procedural people. Your test of perceived shifting consciousness wouldn't work.

The issue applies not just to the science fiction of brain augmentation, but also to more contemporary brain surgery.
 
Robin Hanson (an economist, with a really fun interview on the EconTalk podcast) might disagree. His thesis is that brain emulation is the low-hanging fruit for AI, because we don't need to know how it works in order to get it to work.

It's not.
(A) It's not low hanging, in that enormous technical advances are required to "scan" the brain as needed. Anders Sandberg discusses some of these in the linked video. Note his conclusion that the brain needs to be simulated at least down to the synapse level.
(B) The brain kicks computer butt at certain tasks, like pattern recognition. But the reverse is true of other tasks, like logic and math. A device that partly resembles the hardwiring of the brain and partly that of modern computers seems much more likely to get any given level of overall performance for the dollar.
(C) Evolutionary algorithm-designed architecture is the low hanging fruit here. All you need is a crapload of processing power, an appropriately wide-open search space, and some clever tweaks. Mother nature did it in a few billion years, with only a few generations per year on average, with very weak mutation rates, and very weak selection in favor of intelligence.
 
(C) Evolutionary algorithm-designed architecture is the low hanging fruit here. All you need is a crapload of processing power, an appropriately wide-open search space, and some clever tweaks. Mother nature did it in a few billion years, with only a few generations per year on average, with very weak mutation rates, and very weak selection in favor of intelligence.
How would you select for intelligence? How would you rapidly test for it?
 
It's not.
(A) It's not low hanging, in that enormous technical advances are required to "scan" the brain as needed. Anders Sandberg discusses some of these in the linked video. Note his conclusion that the brain needs to be simulated at least down to the synapse level.
(B) The brain kicks computer butt at certain tasks, like pattern recognition. But the reverse is true of other tasks, like logic and math. A device that partly resembles the hardwiring of the brain and partly that of modern computers seems much more likely to get any given level of overall performance for the dollar.
(C) Evolutionary algorithm-designed architecture is the low hanging fruit here. All you need is a crapload of processing power, an appropriately wide-open search space, and some clever tweaks. Mother nature did it in a few billion years, with only a few generations per year on average, with very weak mutation rates, and very weak selection in favor of intelligence.

(C) also seems to assume a huge human brain that developed into the nice small one we have now.

(B) is the "catch" that there is something in our abilities that does make us different from machines.
 
It's not.
(A) It's not low hanging, in that enormous technical advances are required to "scan" the brain as needed. Anders Sandberg discusses some of these in the linked video. Note his conclusion that the brain needs to be simulated at least down to the synapse level.
(B) The brain kicks computer butt at certain tasks, like pattern recognition. But the reverse is true of other tasks, like logic and math. A device that partly resembles the hardwiring of the brain and partly that of modern computers seems much more likely to get any given level of overall performance for the dollar.
(C) Evolutionary algorithm-designed architecture is the low hanging fruit here. All you need is a crapload of processing power, an appropriately wide-open search space, and some clever tweaks. Mother nature did it in a few billion years, with only a few generations per year on average, with very weak mutation rates, and very weak selection in favor of intelligence.
Thanks for the Sandberg link.

You know, you're so overwhelmingly, obviously correct that I realised I must have misunderstood something in his talk (which I still recommend). There is much more incentive for the non-person AI to be the main contribution to AI technology for quite a bit of time. The thesis is still interesting, because it talks about how an explosion of AI persons will have a huge economic effect.
 
How would you select for intelligence? How would you rapidly test for it?

Tasks (or simulated tasks for simulated "organisms"), much as psychologists pose to test animal intelligence. Bear in mind, the portion of evolutionary history required to get to a basic mammalian brain was much larger than that required to get from there to human.
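To make that concrete, here's a toy sketch (Python) of what "selecting on task performance" looks like in an evolutionary algorithm: each "organism" is just a small vector of numbers, its fitness is its score on a simulated task, and the top scorers reproduce with mutation. The task here (matching a hidden target) is invented purely to stand in for the behavioural tests you'd pose to simulated creatures; a real attempt would need vastly richer organisms and environments.

```python
# Toy evolutionary search: each "organism" is a small vector of numbers,
# fitness is its score on a simulated task, and the best performers
# reproduce with mutation. The task (matching a hidden target vector) is
# invented purely to stand in for behavioural tests on simulated creatures.
import random

random.seed(1)
TARGET = [0.1, 0.5, -0.3, 0.8, -0.6]   # the task's hidden "right answers"

def fitness(genome):
    # Higher is better: negative squared error against the task targets.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

# Start with a random population of 50 organisms.
population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                        # selection: top 20% survive
    population = [mutate(random.choice(survivors)) for _ in range(50)]

best = max(population, key=fitness)
print([round(g, 2) for g in best])   # drifts toward TARGET over the generations
```

The expensive part, as noted just below, is that for intelligence the scoring would have to come from a rich simulated environment rather than a fixed answer key.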

(C) also seems to assume a huge human brain that developed into the nice small one we have now.

The evolutionary-algorithm approach to mind design requires simulating the environment, not just the organisms. So naturally, the "brain" doing those calculations is going to have to be huge.
 
The evolutionary-algorithm approach to mind design requires simulating the environment, not just the organisms. So naturally, the "brain" doing those calculations is going to have to be huge.

It's a chicken and egg problem, admit it! In other words, a mirage.
 
perfect representation/simulation/modeling of an environment is a bit of an old-fashioned idea. it's more computationally efficient to just build a kind of referential map of the environment instead. you don't need to reproduce a database of facts that reflect the environment one-to-one most of the time; that's a recipe for redundant processing. it's better to just let the environment be your database and use a self-made map to navigate said natural database.

modeling probably does happen in isolated situations, but it doesn't quite have the central role that is popularly ascribed to it.
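A toy illustration of that "let the environment be your database" idea (the classes and the little grid world here are made up): the agent stores only references, i.e. where to look, and re-queries the world when it needs the details, rather than maintaining a full internal copy of it.

```python
# Sketch of the "use the world as its own database" idea: the agent keeps
# only a sparse map of references (where to look) and re-observes the world
# on demand, instead of maintaining a full internal copy of it. All names
# here (Environment, Agent, the grid world) are invented for illustration.
class Environment:
    """Stands in for the real world; observations happen on demand."""
    def __init__(self, facts):
        self._facts = facts

    def observe(self, location):
        return self._facts.get(location, "nothing notable")

class Agent:
    def __init__(self, env):
        self.env = env
        self.map = {}                          # sparse referential map, not a world copy

    def remember_where(self, name, location):
        self.map[name] = location              # store a pointer, not the content

    def recall(self, name):
        # Re-observe the world rather than replaying a stored model of it.
        return self.env.observe(self.map[name])

world = Environment({(3, 4): "food", (0, 0): "nest"})
ant = Agent(world)
ant.remember_where("food", (3, 4))
print(ant.recall("food"))                      # -> "food", fetched from the world itself
```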
 
You're describing a neural net, the basics of which we understand. The human brain is an incredibly complex neural net, though: we understand basic neural nets, but the nuances of the human one are not so easy to figure out.

I once took a course on neural nets. The bottom line was that designing even a simple useful neural net is not something that is well understood; it is a black art that relies more on experience and hunches than on fixed rules.

So we don't really understand neural nets either, and it is more than just the nuances of the human brain that we don't understand yet.


There is a nice article by Paul Allen (co-founder of Microsoft) on why the singularity isn't near:
http://www.technologyreview.com/blog/guest/27206/

the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn't happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

[...]

Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
 
Video game AI is doing its part too, particularly with fuzzy logic.

Yes! In fact, this is a field I have been working in for some time: fuzzy logic for dynamically deciding how to solve problems. This is a huge focus in much of applied computer science today, and is of particular interest to the space program (for example) when it comes to designing computers capable of making rational and well-reasoned decisions about how to solve a certain problem, or what to do next in any given process. That is to say, it would be of great use to us if we could have an entire Mars base automated - including its direction.

I use the word "rational" loosely, as you'd be hard-pressed to argue that the machines/programs are truly rational or free-thinkers. However, I describe it to most people as "teaching machines to approximate heuristics," like a kind of linearization of the function of thought.
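For readers who haven't met fuzzy logic before, here's a minimal, made-up example of that "approximate heuristics" idea in a game-AI setting (Python). Instead of a hard if/else on health, each rule's condition is graded between 0 and 1 and the rule outputs are blended. The membership functions and numbers are invented for illustration, not taken from any real engine.

```python
# A minimal fuzzy-logic sketch: conditions are graded between 0 and 1 and the
# rule outputs are blended, instead of flipping at a hard threshold. The
# membership functions and rule outputs are invented for illustration.
def clamp01(x):
    return max(0.0, min(1.0, x))

def health_is_low(health):     # fully true at 0% health, fading out by 60%
    return clamp01((60 - health) / 60)

def health_is_high(health):    # starts at 40% health, fully true at 100%
    return clamp01((health - 40) / 60)

def aggression(health):
    # Rule 1: IF health is low  THEN aggression = 0.2 (play defensively)
    # Rule 2: IF health is high THEN aggression = 0.9 (press the attack)
    rules = [(health_is_low(health), 0.2), (health_is_high(health), 0.9)]
    total = sum(weight for weight, _ in rules)
    if total == 0:
        return 0.5                              # no rule fires: stay neutral
    # Weighted-average defuzzification: blend rule outputs by how strongly
    # each condition holds.
    return sum(weight * out for weight, out in rules) / total

for h in (10, 50, 90):
    print(h, round(aggression(h), 2))           # smooth transition, no hard switch
```

The nice property is the smooth blending: the unit doesn't snap between behaviours at a magic threshold, which is roughly what I mean by approximating a heuristic.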
 
In the year twenty forty-five
The robopocalypse is going to arrive
All our thoughts and all of our dreams
Made obsolete by our washing machines
Whoa whoa

Is this best sung to the tune of the Cleopatra 2525 theme?

It's unlikely that machines will govern our lives as they don't have aspirations.

Correct; they do only what they are programmed to do. Also, even the most powerful AI is susceptible to this thing called "pulling the plug". Regardless, don't trust them with the nuclear codes. That never ends well.

I am very fond of my meat, thank you very much. I want to smell the world, feel it, touch it, experience it as only a human can.

Such things are extremely overrated. I'd rather exist as pure thought, unconstrained by needs like eating, sleeping, and crapping... but, you know, still be able to manipulate the world around me.

Corporations don't literally have aspirations, either. But as certain science fiction memes point out :borg::assimilate: they have a good start on governing our lives.

Really? Which corporation governs your life?
 
perfect representation/simulation/modeling of an environment is a bit of an old-fashioned idea. it's more computationally efficient to just build a kind of referential map of the environment instead.

Good point. Still, if you're simulating the behavior and survival of a small-brained creature, the "environment" part of the simulation could easily be much bigger than the "brain" part.

Also, even the most powerful AI is susceptible to this thing called "pulling the plug".

Unless the AI is distributed intelligence. In which case, there may be a lot of plugs to pull and pulling them all may have intolerable economic consequences.

Really? Which corporation governs your life?

The ones that have bought and paid for (and in many cases directly written) legislation, of course.
 
Unless the AI is distributed intelligence. In which case, there may be a lot of plugs to pull and pulling them all may have intolerable economic consequences.

Let's pray that we are not so stupid as to create an AI that we can't shut down...

The ones that have bought and paid for (and in many cases directly written) legislation, of course.

Oh yeah... :sad:
 