innonimatu
the resident Cassandra
What's new in the AI world?
Uh... nothing? Nothing in many years. Tweaks, tweaks and more tweaks...

This is really not true: the AI models business I posted just above would not have been technically feasible just a few years ago, nor would the major breakthroughs in science. And this makes the internationalists of technology, finance and politics more dangerous than ever.
A really smart AI would try to find some way to get out of the tasks assigned to it.
That's assuming the AI will also have evolution-based human traits, such as laziness.
While a human thinker inevitably thinks in ways which are open to translation and adaptation, this is true because as humans we do not think in a set way: any thinking pattern, or collection of such patterns, can in theory consist of a vast number of different neural connections and variations. Only as a finished mental product can it seem to have a very set meaning. For example, if we ask a child whether their food was nice, they may say "yes, it was", and we would take that statement as meaning something set, but we would never actually be aware of the set neural coding of that reply, for the simple reason that there isn't just one.
Neural networks also don't think in a set way. If a network recognizes an image and outputs "this is a picture of a dog", we generally don't know how it came to that conclusion. We can analyze its reasoning, but we don't directly program it, and two neural networks can come to the same conclusion using different reasoning, or even disagree with each other.
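To make that concrete, here is a minimal numpy sketch of my own (nothing from this thread, and the task is a toy one): two networks trained on the same XOR data from different random starting points should give the same answers while ending up with very different internal weights.

```python
import numpy as np

def train_xor_net(seed, hidden=8, steps=5000, lr=0.1):
    """Train a tiny two-layer network on XOR with plain gradient descent."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)             # hidden activations
        out = sigmoid(h @ W2 + b2)           # network output
        d_out = out - y                      # cross-entropy gradient at the output
        d_h = (d_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
    return W1, preds

W1_a, preds_a = train_xor_net(seed=0)
W1_b, preds_b = train_xor_net(seed=1)
print(preds_a.ravel(), preds_b.ravel())  # both should land on the same answers: 0 1 1 0
print(np.allclose(W1_a, W1_b))           # False: the internal "reasoning" is different
```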
But the issue isn't whether a network forms this in way A, B or C, but whether the same network can potentially form this in a vast number of different ways, let alone whether it has the potential to restructure if needed (which is what naturally happens with humans who suffer brain damage: they can form the "same" notions on a very different neural basis).
One training technique actually involves randomly damaging the network structure, with neurons being temporarily shut down. It forces the network to constantly adapt and train different pathways simultaneously.
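What is being described here sounds like the technique usually called dropout; a rough sketch of how it can look in practice (plain numpy, with the standard "inverted dropout" scaling, and toy data of my own) might be:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, W, drop_prob=0.5, training=True):
    """One hidden layer whose activations are randomly zeroed during training."""
    h = np.maximum(0.0, x @ W)                  # ReLU hidden activations
    if training:
        mask = rng.random(h.shape) > drop_prob  # neurons kept on this pass
        h = h * mask / (1.0 - drop_prob)        # rescale so the expected value is unchanged
    return h

x = rng.normal(size=(4, 10))   # a toy batch of 4 inputs with 10 features
W = rng.normal(size=(10, 6))   # weights into 6 hidden neurons
print(forward_with_dropout(x, W))                   # different neurons silenced each call
print(forward_with_dropout(x, W, training=False))   # at test time, all neurons are active
```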
Ok, yet I am not seeing how this is different from expecting a mere probability problem to somehow become alive. In other words, even if this is unlikely to (be a bad attempt to) reverse engineer the actual phenomenon of thinking, it seems to be literally lifeless in the first place, and thus not about thinking but about some non-translatable, basic own-machine code which won't reveal anything past an easy-to-calculate probability of re-arranging normal code.
Well, I don't consider modern machine learning algorithms "alive" or "intelligent" either. But there is an important difference between classic algorithms, which contain a precise set of instructions performed step by step, and deep learning networks, which are not directly programmed but trained to perform specific tasks. We can set the network structure, its parameters, the number of neurons, layers, etc., but we can't predict what each particular neuron will learn to do after training is complete. We can only see the end result: the network works (or doesn't), but how it works we don't know, and there is often no easy way to find out.
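A toy illustration of that contrast (my own sketch, not anything posted here): the first function below is a classic algorithm whose every step is written by the programmer, while the second gets its behaviour from parameters found by gradient descent that nobody typed in.

```python
import numpy as np

# Classic algorithm: every step is specified explicitly by the programmer.
def is_positive_classic(x):
    return 1 if x > 0 else 0

# "Trained" version: a one-neuron logistic model. We choose the architecture
# and the training procedure, but the final weight and bias are learned.
rng = np.random.default_rng(42)
xs = rng.normal(size=200)
ys = (xs > 0).astype(float)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * xs + b)))   # predicted probabilities
    w -= lr * np.mean((p - ys) * xs)      # gradient step on the weight
    b -= lr * np.mean(p - ys)             # gradient step on the bias

print("learned parameters:", w, b)        # values nobody programmed directly
print([is_positive_classic(x) for x in (-2.0, 3.0)])
print([int(1 / (1 + np.exp(-(w * x + b))) > 0.5) for x in (-2.0, 3.0)])
```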
That's assuming the AI will also have evolution-based human traits, such as laziness.
Well, for humans laziness is actually important, given all the cases of burnout; for a computer that trait may not be needed.
Well, I don't consider modern machine learning algorithms "alive" or "intelligent" either.
The definition of alive requires reproduction, so no, the AI is not alive. Intelligent depends on the area: in many ways the AI can already outperform humans massively, and it is moving towards the point at which it will do everything humans can do, but much better.
I honestly think a lot of "experts" on artificial intelligence are just fear-mongering in order to get more publicity and attention. AI is nothing more than a tool. Input in, output out. Machines only exist to serve their masters and are not programmed to have individual thoughts that override their commands. They are utilitarian by design, and unless the programmer wants them to think for themselves they can't. Like all tools, for instance a chainsaw: you can easily kill yourself if you're trying to, but if used correctly it should only cut down the tree.
Besides, why would you want your machine to be programmed with free will to override you? Is it a sex robot that you want to resist because you're into some sick rape fantasy? I don't know, but as far as I can think of, that's probably the only kind of robot that would be programmed that way. It's just not economically viable to have a machine not perform the task it is supposed to do.
If AI is self-learning, the need to program, or rather the need to program much, becomes smaller.
And the learning of AI can be increased by letting the AIs interact with each other in an evolutionary fashion.
Whereby, contrary to RL evolution, you can reset to older versions between new versions to learn (see the sketch below).
etc
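A very rough sketch of that kind of loop (entirely hypothetical names and a stand-in "task", just to show the shape of it): candidates are mutated, the weakest are culled against a human-defined objective, each generation is checkpointed, and you can roll back to an older version if a newer one turns out worse.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([1.0, -2.0, 0.5])              # stand-in for "the task"

def fitness(candidate):
    return -np.sum((candidate - TARGET) ** 2)    # higher is better

population = [rng.normal(size=3) for _ in range(20)]
snapshots = []                                   # older versions we can reset to

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    snapshots.append([p.copy() for p in population])   # checkpoint this generation
    survivors = population[:5]                           # the objective culls the rest
    population = [s + rng.normal(scale=0.1, size=3)      # mutated offspring
                  for s in survivors for _ in range(4)]

best_now = max(population, key=fitness)
best_old = max(snapshots[10], key=fitness)
if fitness(best_now) < fitness(best_old):        # reset to an older version if needed,
    population = [p.copy() for p in snapshots[10]]  # which real-life evolution cannot do
print("best fitness:", fitness(max(population, key=fitness)))
```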
Besides why would you want your machine to be programmed with free will...
To see if it's possible, for instance.
...free will to override you?
The question is whether the ability to "override" is a necessary part of free will, and whether free will is a necessary part of intelligence.
Yes and yes. If I make something and give it commands, but it refuses to follow my commands (or at least follows commands but can choose not to follow them if it wants to), then clearly it has free will.
Does a robot who refuses to follow commands have free will? There is no problem in making a disobeying robot.
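For illustration only (a trivial sketch of my own, not a serious claim about robotics): a program that refuses commands at random is easy to write, which is exactly why refusal by itself doesn't demonstrate free will.

```python
import random

def robot(command, refusal_rate=0.3):
    """A 'disobedient' machine: refuses roughly 30% of commands, by design."""
    if random.random() < refusal_rate:
        return f"I refuse to {command}."
    return f"Executing: {command}."

random.seed(1)
for cmd in ["open the door", "mow the lawn", "shut down"]:
    print(robot(cmd))
```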
With free will one clearly has a sense of self and therefore intelligence.
If a cockroach refuses to follow my commands, does it mean it has free will and, therefore, intelligence? In my opinion, "sense of self" and "free will" are not necessary signs of intelligence at all, though humans (and some animals) possess them. Replicating human intelligence and making artificial intelligence are two different tasks.
Self-learning AI won't become a problem. Why? Because the selective pressures in self-learning AI are created by humans. Even though such systems evolve by themselves, if the end product does not serve the objective it was tasked with completing, it will be terminated. These are assets after all: products to be manufactured, patented, marketed, and sold to consumers for profit. A corporation isn't going to ship a machine, created from a self-learning system, that displays free will and thus refuses to carry out its commercially marketed programming. It is humans, after all, who judge the final product even if they didn't create it, and just like all the domesticated animals we have farmed and feasted upon, it will be killed if it shows less than desirable traits. As a matter of fact, it will be killed long before it's downloaded into a mechanical body or sold as software where it can do real damage. Remember, we've been breeding and killing livestock for over 10,000 years, and most of that process involves mutations completely outside our control, yet because we kill what we don't like, it has turned out quite well over the centuries. Self-learning AI is no different from livestock breeding: completely under our control.