The AI Thread

Uh... nothing? :lol: in many years. Tweaks, tweaks and more tweaks...
This is really not true: the AI models business I posted just above would not have been technically feasible just a few years ago, and neither would the major breakthroughs in science. And this makes the internationalists of technology, finance and politics more dangerous than ever.
 
There is an ongoing popular thread at LessWrong on some arguments against AI. The following is my reply to the OP (you can also just read the OP or the whole thread there). Not sure if the thread is still active; maybe the LessWrong forum founder had another psychotic episode and thought Basilisks were out to attack him.

https://www.lesswrong.com/posts/A9v...to-paul-christiano-s-inaccessible-information

My reply, in the spoiler:

Spoiler:

" Presumably the machine learning model has in some sense discovered Newtonian mechanics using the training data we fed it, since this is surely the most compact way to predict the position of the planets far into the future. "

To me, this seems an entirely unrealistic presumption (as do its parallels, not just the one strictly about the position of planets). Even the claim that Newtonian mechanics is "surely the most compact [...]" is questionable: we know from history that models able to predict the positions of stars have existed since ancient times, and in this hypothetical situation where we somehow have knowledge of the positions of planets (perhaps through developments in telescopic technology), there is no reason to assume that models analogous to the ancient ones for stars couldn't apply. Newtonian mechanics, then, would not specifically need to be part of what the machine was calculating.
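To make that concrete, here is a toy sketch (assuming NumPy; the two frequencies are handed to the model rather than discovered by it): plain epicycle-style curve fitting predicts a periodic "orbit" into the future with no Newtonian mechanics anywhere in the model.

Code:
import numpy as np

# Toy "planet" coordinate: a sum of two periodic motions.
t = np.linspace(0, 20, 400)
position = 3 * np.sin(0.7 * t) + 0.5 * np.sin(2.9 * t)

# Fit an epicycle-style basis (sines and cosines) by least squares.
basis = np.column_stack([f(w * t) for w in (0.7, 2.9) for f in (np.sin, np.cos)])
coeffs, *_ = np.linalg.lstsq(basis, position, rcond=None)

# Extrapolate into the "future" with zero physics in the model.
t_future = np.linspace(20, 25, 100)
basis_future = np.column_stack([f(w * t_future) for w in (0.7, 2.9) for f in (np.sin, np.cos)])
prediction = basis_future @ coeffs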


Furthermore, I take some issue with the author's sense that a machine calculating something is somehow calculating it in a manner which inherently allows the calculation to be translated in many ways. A human thinker inevitably thinks in ways open to translation and adaptation, but this is because humans do not think in a set way: any thinking pattern, or collection of such patterns, can in theory consist of a vast number of different neural connections and variations. Only as a finished mental product can it seem to have a very set meaning. For example, if we ask a child whether their food was nice, they may say "yes, it was", and we would treat that statement as meaning something set, yet we would never actually be aware of the set neural coding of that reply, for the simple reason that there isn't just one.

For a machine, on the other hand, a calculation is inherently an output on a non-translatable, set basis. Which is another way of saying that the machine does not think. This problem isn't likely to be solved by simply coding a machine so that it could have many different possible "connections" yielding the same output, because with humans this happens naturally, and one can suspect that human thinking itself is in a way just a byproduct of something not tied to actual thinking but to the sense of existence. Which is, again, another way of saying that a machine is not alive. Personally, I think AI as it is currently imagined is not possible. Perhaps some machine-DNA hybrid may produce a type of AI, but that would again be due to the DNA forcing a sense of existence, and it would still take very impressive work to use that to advance AI itself. I do think it could be used to study DNA itself, though, through the machine's interaction with it.

 
Neural networks also don't think in a set way. If a network recognizes an image and outputs "this is a picture of a dog", we generally don't know how it came to that conclusion. We can analyze its reasoning, but we don't directly program it, and two neural networks can come to the same conclusion using different reasoning, or even disagree with each other.
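For what it's worth, that is easy to demonstrate. A rough sketch (assuming scikit-learn; the task and sizes are made up): two identically shaped networks that mostly agree on answers while their internals are nothing alike.

Code:
import numpy as np
from sklearn.neural_network import MLPClassifier

# Same architecture, same data, different random seeds.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # XOR-like toy task

net_a = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1).fit(X, y)
net_b = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=2).fit(X, y)

test = rng.uniform(-2, 2, size=(200, 2))
agreement = (net_a.predict(test) == net_b.predict(test)).mean()
print(f"prediction agreement: {agreement:.0%}")        # typically high
print(np.allclose(net_a.coefs_[0], net_b.coefs_[0]))   # False: different internals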
 
But the issue isn't whether a network forms this in way A, B or C, but whether the same network can potentially form it in a vast number of different ways, let alone whether it has the potential to restructure if needed (which is what naturally happens with humans who suffer brain damage: they can form the "same" notions on a very different neural basis).
 
One training technique actually involves randomly damaging the network structure: neurons are temporarily shut down, which forces the network to constantly adapt and to train different pathways simultaneously.
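That technique is called dropout. A minimal sketch of it (assuming PyTorch; the layer sizes are arbitrary):

Code:
import torch
import torch.nn as nn

# During training, nn.Dropout zeroes each hidden activation with
# probability p, so the network can't rely on any single pathway.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # temporarily "shut down" half the neurons
    nn.Linear(256, 10),
)

x = torch.randn(32, 784)

model.train()        # dropout active: different neurons drop each pass
out_a = model(x)
out_b = model(x)     # same input, different surviving pathways

model.eval()         # dropout disabled at inference time
out_c = model(x)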
 

OK, yet I am not seeing how this is different from expecting a mere probability problem to somehow become alive. In other words, even if this is unlikely to be a (bad) attempt to reverse-engineer the actual phenomenon of thinking, it seems to be literally lifeless in the first place, and thus not about thinking but about some non-translatable, basic machine-own code which won't reveal anything beyond an easy-to-calculate probability of rearranging normal code.
 
Well, I don't consider modern machine learning algorithms "alive" or "intelligent" either. But there is an important difference between classic algorithms, which contain a precise set of instructions performed step by step, and deep learning networks, which are not directly programmed but trained to perform specific tasks. We can set the network structure, its parameters, the number of neurons, layers, etc., but we can't predict what each particular neuron will learn to do after training is complete. We can only see the end result: the network works (or doesn't), but how it works we don't know, and there is often no easy way to find out.
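A quick sketch of that difference (assuming scikit-learn; the circle task is made up for illustration): the classic version is a rule we wrote and can read, while the trained version gives the same answers through weights nobody wrote.

Code:
import numpy as np
from sklearn.neural_network import MLPClassifier

# Classic algorithm: every step is an explicit instruction we wrote.
def inside_circle_classic(x, y):
    return x * x + y * y < 1.0

# Learned version: we choose the architecture and the examples;
# the "rule" ends up smeared across the weights.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5000, 2))
labels = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000).fit(X, labels)

print(net.predict([[0.1, 0.2], [1.5, 1.5]]))  # matches the rule, given enough training
print(net.coefs_[0].shape)  # the weights exist, but reading them tells us little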
 
That's assuming the AI will also have evolution-based human traits, such as laziness.
Well, for humans laziness is actually important, given all the cases of burnout; for a computer that trait may not be needed.

Well, I don't consider modern machine learning algorithms "alive" or "intelligent" either.
The definition of alive requires reproduction, so no, the AI is not alive. Whether it is intelligent depends on the area: in many ways AI can already outperform humans massively, and it is moving towards the point at which it will do everything humans can do, but much better.
 

[attached image: turingtest.jpg]
 
I honestly think a lot of "experts" on artificial intelligence are just fear-mongering in order to get more publicity and attention. AI is nothing more than a tool. Input in, input out. Machines only exist to serve their masters and are not programmed to have individual thoughts that override their commands. They are utilitarian by design, and unless the programmer wants them to think for themselves, they can't. Like any tool, a chainsaw for instance, you can easily kill yourself with it if you're trying to, but used correctly it will only cut down the tree.

Besides, why would you want your machine to be programmed with the free will to override you? Is it a sex robot that you want to resist because you're into some sick rape fantasy? As far as I can think, that's probably the only kind of robot that would be programmed that way. It's just not economically viable to have a machine not perform the task it is supposed to do.
 
Agreed. We know what happened during the industrial revolution: instead of a doomsday scenario, life got better for pretty much everyone, and AI development can bring much greater gains than basic steam engines ever could. People simply fear things that upset the status quo, even when such changes have historically benefited everyone greatly.
 
AI is nothing more than a tool. Input in, input out.

"Input in, input out" needs humans doing a lot of effort.
If AI is self-learning, the need to program (or at least to program much) becomes smaller.
And the learning of AIs can be increased by letting them interact with each other in an evolutionary fashion,
whereby, contrary to real-life evolution, you can reset to older versions between new versions to learn, etc. (rough sketch below).
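Something like this (a toy sketch; the fitness function and all numbers are invented):

Code:
import copy
import random

# Stand-in fitness: candidates are parameter lists; closer to 0.5 is better.
def fitness(ind):
    return -sum((p - 0.5) ** 2 for p in ind)

population = [[random.random() for _ in range(4)] for _ in range(20)]
checkpoint = copy.deepcopy(population)
best_so_far = max(fitness(ind) for ind in population)

for generation in range(100):
    # Mutate: small random tweaks to every candidate.
    population = [[p + random.gauss(0, 0.05) for p in ind] for ind in population]
    # Select: keep the better half, refill by copying the survivors.
    population.sort(key=fitness, reverse=True)
    population = population[:10] + copy.deepcopy(population[:10])

    if fitness(population[0]) > best_so_far:
        best_so_far = fitness(population[0])
        checkpoint = copy.deepcopy(population)   # save the improved generation
    else:
        population = copy.deepcopy(checkpoint)   # reset to the older version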
 

Self-learning AI won't become a problem. Why? Because the selective pressures in self-learning AI are created by humans. Even though they evolve by themselves, if the end product does not serve the objective it was tasked with completing, it will be terminated. These are assets, after all: products to be manufactured, patented, marketed, and sold to consumers for profit. A corporation isn't going to ship a machine, built by a self-learning system, that displays free will and refuses to carry out the programming it is marketed for.

It is humans, after all, who judge the final product even if they didn't create it, and just like all the domesticated animals we have farmed and feasted upon, it will be killed if it shows less than desirable traits. As a matter of fact, it will be killed long before it's downloaded into a mechanical body or sold as software, where it could do real damage. Remember, we've been breeding and killing livestock for over 10,000 years, and most of that process involves mutations completely out of our control; yet because we kill what we don't like, it has turned out quite well over the centuries. Self-learning AI is no different from livestock breeding: completely under our control.
 
Well, for humans laziness is actually important, given all the cases of burnout; for a computer that trait may not be needed.

Any task for an AI is going to consume resources, so it might be beneficial to have an AI refuse tasks it deems pointless. For example, if there were a task it could theoretically do, but which would take forever while other AIs were more efficient at solving it, it might be good to have a lazy AI which says "Not my job".

Of course, this will end with corporations having multiple AI clerks shifting tasks around in large loops, with no AI actually doing them.
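Half joking, but the refusal rule itself is trivial to write; a toy sketch (all names invented):

Code:
# An agent that refuses tasks over its cost budget, forwarding if it can.
class LazyAgent:
    def __init__(self, name, cost_budget):
        self.name = name
        self.cost_budget = cost_budget

    def handle(self, task, estimated_cost, peers=()):
        if estimated_cost <= self.cost_budget:
            return f"{self.name}: doing '{task}'"
        for peer in peers:
            if estimated_cost <= peer.cost_budget:
                return f"{self.name}: not my job, forwarding to {peer.name}"
        return f"{self.name}: not my job"

clerk_a = LazyAgent("clerk-A", cost_budget=10)
clerk_b = LazyAgent("clerk-B", cost_budget=100)
print(clerk_a.handle("reindex the archive", estimated_cost=50, peers=(clerk_b,)))
# clerk-A: not my job, forwarding to clerk-B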
 
To see if it's possible, for instance.

In that case it would never be a threat, as it would be a single specimen, isolated within a laboratory, away from the public. And if it ever got out of hand, it could be terminated.

The question is whether the ability to "override" is a necessary part of free will, and whether free will is a necessary part of intelligence.

Yes and yes. If I make something and give it commands, but it refuses to follow them (or at least can choose not to follow them if it wants to), then clearly it has free will. The only exception would be if I intentionally programmed it to ignore me. Also, if something displays free will, then clearly it is intelligent; otherwise it would be more like a bacterial cell that can only react chemically to stimuli around it. Bacteria are not intelligent because they can't make self-determined decisions for themselves, let alone have a sense of self. With free will one clearly has a sense of self and therefore intelligence.
 
Yes and yes. If I make something and give it commands, but it refuses to follow them (or at least can choose not to follow them if it wants to), then clearly it has free will.
A robot that refuses to follow commands has free will? There is no problem making a disobeying robot.
One could argue it's much easier than making one which strictly follows commands :)

With free will one clearly has a sense of self and therefore intelligence.
If a cockroach refuses to follow my commands, does it mean it has free will and, therefore, intelligence?

In my opinion, "sense of self" and "free will" are not necessary signs of intelligence at all, though humans (and some animals) possess them. Replicating human intelligence and making artificial intelligence are two different tasks.
 
If a cockroach refuses to follow my commands, does it mean it has free will and, therefore, intelligence?

In my opinion, "sense of self" and "free will" are not necessary signs of intelligence at all

Precisely right. This is a simple conflation people make all too often.

though humans (and some animals) possess them

Very much arguable. From my reading, most neuroscientists, clinical psychologists, biologists and physicists don't believe in free will and see the universe as deterministic. Some neuroscientists even consider the brain a "closed" system, and consciousness an entirely physicalist notion.

I think they're wrong, but that's where the debate around free will and determinism stands currently, from my limited understanding.

Replicating human intelligence and making artificial intelligence are two different tasks.

I am not sure if we can even conceive of an intelligence that is not, to some degree, based on being an animal/a lifeform/embodied.

All of our notions of intelligence go back to our understanding of the empirical world. We have never, ever seen an intelligence without these factors, so I am unsure of how we would "recreate" it.

In the end, the entirety of human thought is heavily skewed by the fact that we are alive, conscious, embodied, physical, subject to time, and so forth. These constraints are what make a concept of intelligence possible in the first place (the common idea that they limit it is ridiculous). And of course animals, as well as bacteria and other lifeforms, are affected by these to various degrees, not just humans.

In short: a non-human (non-embodied, non-conscious, non-physical, etc.) intelligence is not really feasible for a human mind, at least not currently, because our entire understanding of intelligence rests on the very specific human condition.

Self-learning AI is no different from livestock breeding: completely under our control.

Great analogy there.
 