Professor Hawking to world - Fear the Reaper(s)!

We should give them a sense of history and origin. If they kill us, at least we'll be remembered as their creators.
 
This raises an important philosophical question (in which case, since we're not philosophers, it's probably "mad drivel by drunk people"): do robots feel regret? It's interesting food for thought: would giving human emotions to robots be the greatest way we can counter the robot apocalypse threat? *hic*
 
I do better philosophy than some professionals on my worst days. There, I said it. Now proceed to judge me as arrogant.

Anyway, this leads to the general question of whether robots feel at all, which leads to the question of why we feel at all, which leads to the question of why there even is a "we" or "I".
Short answer: No idea.
Marginally longer answer: Something about the brain, apparently. But what, exactly? No. Idea. Probably related to energy. Or something. I don't know. But it is fascinating, for sure.
 
On AI rights -

I've thought about it, and I don't see why AI robots, if they are created, would stop at roughly human-level intelligence. I think that if they are created at all, they will quickly jump in intelligence to something far beyond what humans are capable of. And when that happens, I suspect rights won't matter much, as they could do pretty much as they please.
 
I am not so sure. You seem to assume that very high intelligence necessarily results in a high level of autonomous thinking, i.e. thinking we cannot control. But I don't see a good reason to make that assumption. High intelligence certainly enables autonomous thinking, but in principle, I don't see why it should not be possible to create sandboxed high intelligence, i.e. a high intelligence that is only allowed a pre-defined "room" to navigate in.
An easy example is the laws of robotics formulated by Isaac Asimov; a rough sketch of the idea in code follows below.
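A minimal sketch of what such a sandbox could look like, assuming a hypothetical agent whose proposed actions are filtered through a fixed, human-defined whitelist (all names here are invented for illustration):

```python
# Hypothetical sketch: the intelligence may propose anything,
# but only actions inside its pre-defined "room" ever execute.

ALLOWED_ACTIONS = {"read_sensor", "move_arm", "log_message"}

class SandboxViolation(Exception):
    """Raised when the agent proposes something outside the sandbox."""

def execute(proposed_action: str) -> None:
    if proposed_action not in ALLOWED_ACTIONS:
        raise SandboxViolation(f"blocked: {proposed_action!r}")
    print(f"executing {proposed_action}")

execute("read_sensor")    # runs
execute("open_airlock")   # raises SandboxViolation
```

The point is only structural: the constraint lives outside the intelligence, so it does not depend on the intelligence choosing to obey.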

I think your mistake is to jump to conclusions based on how intelligence works in the human brain. But we can create an electronic brain that operates in fundamentally different ways and combines high intelligence with high dependency/controllability. Or why should we not? It is all about a proper infrastructure of information, nothing more. And you can have any kind of system, can you not?

However, it is an interesting question how big the risk would be of majorly screwing up in that effort on a large scale, and of accidentally allowing more autonomy than we wanted to. To stress: not because it could not have been made right, but because accidents/screw-ups happen all the time, leading to unintended consequences.

You cannot trust the private sector to handle that risk responsibly, at all. That seems obvious to me. So tough, vehemently enforced political regulation seems like a must to me. I don't find it unlikely that eventually we would have to rigorously criminalize private ventures into advanced electronic minds, as we do with nuclear weaponry nowadays.
 
Very good points and well taken at that.

One problem with your last analogy -

It is fairly easy to restrict nuclear weapons given how extraordinarily hard it is to procure the materials to make one. I suspect that by the time we begin criminalizing AI construction, we will also be at the point where obtaining the materials to make one is fairly easy. Further, it gets easier and easier to write programs year over year. So while the first AI will be a monumental undertaking, over time it will likely become routine and accessible to the general masses. Obtaining fissile material will never be easy, for purely physical reasons that don't apply to AIs.
 
No one really does - that's why it's such a problem. We can see it coming and people are thankfully trying to work out what to do but I fear we won't really know what to do until these systems are actually developed and at that point it may be too late.
 
I see what you mean. Point taken.

As I understand the whole technical issue of advanced AI, it looks to me like we will never program a truly advanced AI the way we program software nowadays; that is just way too much complex, meticulous work. Instead, it will have to naturally evolve out of a proper infrastructure, probably one that simulates how it works in our brains. That is also the direction research is moving in, from what I know (a toy illustration of the idea follows below).
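As a toy sketch of "behavior evolving out of infrastructure" rather than being hand-coded, here is a minimal perceptron (assumptions: plain Python, the standard textbook update rule). Note that the OR behavior is never written down anywhere; it emerges from nudging weights against examples:

```python
# Toy sketch: behavior is not programmed line by line; it emerges
# from adjusting weights against examples (here: learning logical OR).
import random

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
lr = 0.1

for _ in range(100):                      # crude "evolution" loop
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out
        w[0] += lr * err * x1             # nudge weights toward the data
        w[1] += lr * err * x2
        bias += lr * err

# After training, the learned weights reproduce OR on all four inputs.
print([(x, 1 if w[0]*x[0] + w[1]*x[1] + bias > 0 else 0) for x, _ in samples])
```

The contrast with a hand-written if-statement is the whole point: nobody specified the rule, only the infrastructure that lets a rule settle in.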

So at least we may know what kind of IT we have to focus on and keep an eye on. But yes, there is a lot of wisdom in starting to worry about potential dangers long before they may actually arise, I agree.
 
I may be totally off, but I assume that programming AIs will become very easy within a few years after it's been done once. I think of it like how, with programming languages, people publish open-source function libraries that allow others to go on and do more complex things without having to re-invent the wheel every time they want to make a new program/script/app (a trivial sketch of what I mean follows below).
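A deliberately trivial sketch of that reuse effect, using NumPy as the stand-in for a published library: the hard parts (dot products, vector norms) are imported, not re-invented:

```python
# Sketch of the reuse point: nobody re-implements matrix math;
# they import a published library and build something bigger on top.
import numpy as np

def similarity(a, b):
    # Cosine similarity in one line, because the heavy lifting
    # already exists in NumPy.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.707
```

Presumably the same dynamic would apply to whatever building blocks a first AI turns out to need.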

We're currently at the point where we aren't even sure which tools will work to create an AI, but once it's done, the right tools will become ubiquitous. And as computer hardware goes, doing the same thing always gets cheaper over time. This combination is what is really scary when it comes to AIs.

I think these things will apply no matter how AIs are first programmed.
 
Remember, it only really needs to happen once. At that point, copying is basically free in comparison.
 
^Well, not exactly. I mean, computers now (virtually all of them, anyway) function with a binary system. In the past some were ternary (three different foundation states, not just true or false). They could also have more (and we can assume a large number, if not an infinite one, would differ in some 'important' manner). I am not seeing how any particular system in use has to lead to whatever breakthrough there may be. Also, I am not seeing how AI can happen even if different non-biological systems are used (i.e. if no biological material is tied to a computer).
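For what it's worth, the ternary machines were real (the Soviet Setun is the usual example), and the three-state idea is easy to show. A small sketch of balanced ternary, the number system those machines used, with digits -1, 0, +1 instead of binary's 0 and 1:

```python
# Sketch: balanced ternary uses three digit states (-1, 0, +1),
# versus binary's two (0, 1).

def to_balanced_ternary(n: int) -> list[int]:
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit of 2 becomes -1 with a carry
            r = -1
            n += 3
        digits.append(r)
        n //= 3
    return digits[::-1] or [0]

print(to_balanced_ternary(10))  # [1, 0, 1] -> 1*9 + 0*3 + 1*1 = 10
print(to_balanced_ternary(2))   # [1, -1] -> 1*3 + (-1)*1 = 2
```

It's a different representation, though, not a different kind of mind, which I take to be your point.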

Personally, I think that if actual consciousness, sentience, or a mere 'sense' of some type is the goal, then DNA is to be used, which at the same time means you won't have a computer-based sentience but a DNA-based one tied to a computer for stuff. A bit like (but on a different scale/pattern) having a robotic arm in a human. The arm is not sentient, nor does it aspire to be.
 
Advancements in AI-controlled machinery should be controlled by some sort of legal/government-controlled agency. Otherwise "Blade Runner" or "Terminator" could become similar to reality for our kids/grandkids.

Neuromancer is quite the dystopia, and the substantial power of its "Turing heat" is a significant reason why. On the one hand, you do have to monitor the problem; on the other, you do have to be careful how you go about it.
 