Everyone is so bothered about extinction. AI will be our children, and our children will play with the Cosmos in ways we never could.
I'm saying we will develop the ability to upload our consciousness into a digital world long before we develop a truly sapient AI.
If we create it then we can also build in behavioral blocks to prevent it from doing anything we don't want it to do.
You can also limit an AI's ability to "evolve" by deliberately putting it in hardware that only has the capacity to handle the functions it was intended to perform.
It may "think" like you, act like you (minus those apparently "inferior" animal urges and needs), it may even have your memories, but it's no more "you" than a twin, and a society that embraces singularity seems to be killing itself off so that computer programs can simulate happiness. I find that pointless and horrifying.
But is sentience necessary for AI? I don't think so. I think you can have sentience without the "intelligence" part, and you can have AI without sentience.
Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek have taken the opportunity of Hollywood's flop Transcendence to point out that, hey, AI risk is a real issue. Link.
Well said. They call for amping up research and planning to match the risks and rewards at stake. They point out that self-modifying AI might make a relatively sudden advance from below-human intelligence to above-. It's about time a bunch of smart people noticed.
Sure there would be, humans merged with machines would still be human and wouldn't necessarily have the extreme computational power that makes AIs so appealing, as human computational power would still be limited by the human brain.

Okay, but again I think the development of AI is a matter of if, not a matter of when. I really think we will merge with machines before we create a synthetic intelligence capable of what you describe. And once we merge with machines, there really is no need to develop AI at that point.
I think you're wrong to say the most basic sense is "that you're there". I think that's the essence of self-awareness, of consciousness, and it's extremely rare in the animal kingdom. You mentioned an ant - ants have a sort of intelligence but almost certainly no self-awareness.
Well, you're just dead wrong that machines can't sense. I'm typing this on a pocket computer that can see and hear, know if it's moving through space, know its orientation to Earth's magnetic field, know if it's being held to my face, and know if it's being dropped.
At work I sometimes make little machines that have to sense changes in their environment.
Machines can sense, analyze, and then take action based on all that. That's been true for hundreds of years.
Currently there is no "AI intelligence". It is not as if we have AI with the intelligence of a cow, or even an ant, with the prospect that the AI will reach the intelligence of a human. As of now, AI has zero intelligence, given that it is not aware that it is doing anything (including, in the most 'basic' sense, that it is there at all).
I mean sense as in a microprocessor gathering input from sensors. React has a passive sense to it (like paper reacting to flame), but if that flame was near a photocell then a voltage change would be induced, and the microprocessor could be programmed to change its state (either internal electronic state or external physical state) based on the voltage value until a target state is reached.

At first I thought you were making an uncharacteristic joke (or Socratic irony) in expectation of a new conclusion, but:
- Surely the way you used the term "sense" in your post above is not the one that bears on the whole 'intelligence' issue. I mean, if you use "sense" as "is programmed to passively react to something / show some info when it reacts", then you are (in my view) far better off using a term like "reacts" instead of the loaded term "sense". I use this term because (obviously) it has been used since aeons ago for other sorts of changes which happen without any intelligence being there in the objects which change through them, as in a chemical reaction.
This is what I was reacting to. Machines can definitely sense that they are doing something. That's the whole point about limiters and feedback loops.
I mean sense as in a microprocessor gathering input from sensors. React has a passive sense to it (like paper reacting to flame), but if that flame was near a photocell then a voltage change would be induced, and the microprocessor could be programmed to change its state (either internal electronic state or external physical state) based on the voltage value until a target state is reached.
I'm not calling that intelligence in the strong AI sense, but it's clearly an example of a machine doing something, and being aware that it's doing it.
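For what it's worth, here is a minimal sketch of that photocell feedback loop in Python rather than on a real microcontroller. The sensor, the actuator, and the numbers are hypothetical stand-ins; the point is just the sense-analyze-act cycle with a limiter, running until the target state is reached.

```python
# Toy sense-analyze-act loop: a simulated photocell reading drives an
# actuator (a shutter) toward a target voltage, with a limiter on its range.
import random

TARGET_VOLTAGE = 2.5   # desired steady-state sensor reading (volts)
TOLERANCE = 0.05       # acceptable error band (volts)

def read_photocell(shutter: float) -> float:
    """Hypothetical sensor: wider shutter -> more light -> higher voltage."""
    ambient = 5.0 + random.uniform(-0.1, 0.1)   # noisy light source
    return ambient * shutter

shutter = 1.0                                   # actuator state: fraction open
for step in range(100):
    voltage = read_photocell(shutter)           # sense
    error = voltage - TARGET_VOLTAGE            # analyze
    if abs(error) <= TOLERANCE:                 # target state reached
        print(f"step {step}: settled at {voltage:.2f} V")
        break
    shutter -= 0.05 * error                     # act: adjust internal state
    shutter = max(0.0, min(1.0, shutter))       # limiter on the actuator
```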
Entropy.
I don't really understand what you just wrote, so let me put it another way:
"what is ... so distinct between the two changes (paper burning, and some ... machine being triggered to change due to the same external object, a flame)?"
In the case of paper burning in the presence of a flame it's a simple thermodynamic reaction that rides down the free-energy gradient. Total entropy is higher after the interaction.
In the case of a microprocessor performing a function upon being triggered by the flame, the entropy of the device's internal state may or may not decrease: because the processor is externally powered, it can even be driven into a more ordered state, though the total entropy of device plus surroundings still increases. But I'm not at all a theoretician, as is obvious!
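To put the burning-paper half on standard footing, the second-law bookkeeping (a sketch, with Q the heat the flame dumps into surroundings at temperature T) is:

```latex
\Delta S_{\text{total}} = \Delta S_{\text{paper}} + \Delta S_{\text{surroundings}} \geq 0,
\qquad
\Delta S_{\text{surroundings}} = \frac{Q}{T} > 0 .
```

The powered microprocessor is an open system, so its local entropy can be pushed either way while the same total inequality still holds.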
Or to look at it another way, presenting a flame to a piece of paper (whether or not it's oiled) will definitely result in the paper burning. But presenting the flame to a sensor attached to a microprocessor will have various results depending on the software the computer is running. From the outside it's unpredictable, even though the outcome is entirely predictable to the programmer.
And with AI, it's the program itself that is adjusting the code based on past experience. No human input required. That's the key difference.
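A minimal sketch of that self-adjusting idea, with the caveat that most real systems tune parameters rather than literally rewrite their own source. Here a one-number "policy" (a trigger threshold) is updated from feedback alone, with no human input after the start; the environment and reward function are hypothetical stand-ins.

```python
# Toy self-adjusting program: a trigger threshold tuned from experience.
import random

threshold = 0.5          # the behavior the program will tune by itself
LEARNING_RATE = 0.1
TRUE_BOUNDARY = 0.72     # hidden property of the environment

def feedback(reading: float, fired: bool) -> int:
    """+1 if the action matched the hidden correct behavior, -1 otherwise."""
    correct = reading > TRUE_BOUNDARY
    return 1 if fired == correct else -1

for trial in range(500):
    reading = random.random()         # sense
    fired = reading > threshold       # act according to the current policy
    if feedback(reading, fired) < 0:  # past experience adjusts the policy
        threshold += LEARNING_RATE * (reading - threshold)

print(f"learned threshold ~ {threshold:.2f} (true boundary {TRUE_BOUNDARY})")
```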