Consciousness doesn't migrate in the brain. The brain just uses different parts to do different things. The brain is the machine that facilitates consciousness.
Sure, sloppy language on my part, because I'm trying to have this conversation at a popularised level. That said, I disagree with your disagreement. If we allow that consciousness exists in 3D space (and it does, i.e., in your brain), then I will suggest that consciousness moves around in its point of focus within your brain: there are some places it is, and some places it cannot be.
Hemiagnosia is probably my best example. When you're conscious, you can either have consciousness of your environment, or not. Keep in mind that consciousness is not the same as sentience! If you're not paying attention to certain aspects of your environment, then your consciousness is not there (even if you have the capacity to become conscious of them, with stimulation). So, when you're thinking about the left side of your world, part of your consciousness is residing in your right parietal cortex. If we were to kill that cortex, then your consciousness could no longer be there. So, by becoming aware (or unaware) of your left world, you migrate your consciousness into and out of the right parietal cortex. Being unconscious of your left world (due to, say, distraction) is functionally the same as not having the living tissue! Now, the region is still sentient, obviously. It can be activated by stimuli, and draw your attention (i.e., consciousness) into it again.
Compare that to the spinal cord. The spinal nervous tissue is certainly alive and incredibly biologically continuous with the brain. But we're just incapable of becoming conscious of the percepts dealt with in the spine. We're a couple stages removed, cognitively. We (i.e., the 'person') can get information to the spine. We can get information from the spine. But we cannot place our consciousness in the tissue there.
Not only that, there is no reason why the proportion of researchers in the developed world should remain constant. I think (and hope) that future economic pressures will greatly reduce the demand for lawyers, accountants, etc., and increase demand for researchers. In short, I don't think that stagnation of the number of researchers need be a problem.
...
Depending on the pressures of the future, more people could move into research. In fact, if we look at rapidly developing nations like China and South Korea, we note that a very high proportion of their educated workforce is in technical careers, compared to the West. To keep its competitive edge, the West will have to do the same.
The more I look at this, the more I wonder if it's true. A bunch of R&D is driven by consumer demand, but consumers don't always demand R&D goods. In fact, we're quite happy to spend on branding over R&D, and to buy products that are advertised as 'better' instead of being objectively better. In the West, only 3% of GDP is spent on R&D. China spends even less (less than 2%, I think). We can expect the developing nations to catch up in proportional investment, but I don't think we'll see an escalation. There's no real long-term trend of escalating R&D competition between the developed nations, even though competition between these nations has been going on for some time.
Awareness. Actual intelligence. An AI modelled on a set of rules is too constrained to be considered a real AI.
Could you please expand upon this? We're not quite sure what you mean!
As usual, I find myself largely agreeing with El Mac. Here I'll focus on the disagreement. The only continuities of consciousness worth bothering about are causality and character/quality. If the machine can have experiences with the same qualities as yours (that's a big IF), and if its being the way it is results from your being the way you are (via a brain scan, for example), then that's a continuation of your experiences and actions. It doesn't matter if the wiring and the meat ever directly touch.
As far as being comfortable with the idea that *I* am uploading goes, it will need to result in more than just a machine that perceives continuity with my body. My body will need to perceive continuity with the machine. Your example of the artificial neuron is mostly the way I think about it: as the neurons are replaced, the continuity of perception never needs to change. I will be confident about the process, for example, once my right parietal cortex is replaced and I am able to shift my attention between my left and right worlds (and even to hold them both at once).
But I should note: even though it's possible to get AI by duplicating the brain, I think it's a really dumb stand-alone strategy if you just want to get stuff done. AI that works radically differently from our brains will probably appear long before exact brain emulators do. More later.
Robin Hanson (an economist, with a really fun interview on the EconTalk podcast) might disagree. His thesis is that brain emulation is the low-hanging fruit for AI, because we don't need to know how the brain works in order to get an emulation of it to work. And having AI will create such an economic shift that the meat-people will perceive it as a Singularity (but over the course of months, not Vernor Vinge's 'interesting afternoon'). In other words, an emulation is not the ideal AI for us. But it will be an easy AI to make, and it will be a profitable AI to make.
The social implications of copying ourselves are pretty interesting, because you'd essentially have to contract with yourself before you copy yourself. You'd arrange the copying such that no matter where you ended up (as the meat, or as the AI), you'd feel like the copying was a mutually beneficial idea.