The AI Thread

But while I'm at it: The whole differentiation between "weak AI" and "strong AI" is BS made up by politicians.

"Strong AI" is science fiction and will remain science fiction for a loooong time. And weak AI is just a fancy name ("artificial intelligence") used for promotion value to sell what can be better succinctly described as specialized systems.
 
Most people, when they talk about A.I., wrongly have games in mind. And sure, it is important to give the player the sense that the adversary is competitive, but it is arguably more important to examine what actually exists below the surface of the game. In the case of A.I., I don't think there is any actual ability to practice. In practice, outside of a game, A.I. is incapacitated, and while it is reflexive to focus on the result - the game - the mere ability to practice, or even the ability to feel anything about actual practice, is important.
 
Strong AI is intuitively associated with the ability of a computer system to "understand" things. What we see now in the speech recognition and NLP (natural language processing) fields already resembles "understanding". Modern translation engines take input text, convert it into an internal representation and then translate this representation into another language. I think over the next several years we will continue to see different strong AI elements emerging.
 
The problem with calling that "understanding" is that there is no sense of acting or context or building something. If you code a simple program which will keep giving as output any number of the form 3x + 1 (x an integer), the computer won't have a notion of "integer" or individual notions of the numbers in that list. A human, on the other hand, can't help but sense (e.g.) 4 as different from 7, 10, etc. Also, we don't actually know how anyone senses anything (because that isn't used in communication; it isn't practical).
Even an ant has a sense of things (we don't know what that sense is). Any bio life seems to. A computer cannot.
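A minimal sketch of the kind of program described above (my own illustration, not part of the argument): it emits numbers of the form 3x + 1, yet nothing in it carries any notion of "integer" or of the individual numbers it prints.

Code:
# Prints 1, 4, 7, 10, ... (3x + 1 for x = 0, 1, 2, ...).
# The program only shuffles bit patterns; it has no notion of "four" being different from "seven".
for x in range(10):
    print(3 * x + 1)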
 
The problem with calling that "understanding" is that there is no sense of acting or context or building something.
There are tests used to check whether a person can read and understand foreign-language text. A person reads the text and answers questions about it, which requires understanding the meaning. Computers already give good results on these tests. Basically, as long as you can define a quantitative metric of "understanding" and provide the algorithm with sufficient data, it's possible to train a neural network which will give good results by that metric. Whether it's real "understanding" or just imitation, we don't know. But the funny thing is that as long as we can find a metric where humans outperform the algorithm, we can use this metric to improve the algorithm.
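A minimal sketch of what such a quantitative metric could look like, assuming a SQuAD-style exact-match score over question/answer pairs (the questions and answers below are made up purely for illustration):

Code:
def exact_match(predicted, gold):
    # Normalize case and whitespace before comparing the model's answer to the reference.
    normalize = lambda s: " ".join(s.lower().strip().split())
    return normalize(predicted) == normalize(gold)

# Hypothetical reading-comprehension items: (question, reference answer, model's answer).
examples = [
    ("Who wrote War and Peace?", "Leo Tolstoy", "leo tolstoy"),
    ("In which country is the novel set?", "Russia", "France"),
]

score = sum(exact_match(pred, gold) for _, gold, pred in examples) / len(examples)
print("Exact-match 'understanding' score:", score)

Any model that scores lower than humans on a metric like this leaves room, in the sense described above, for further training against that metric.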
 

I think that claiming we "do not know" whether that is real understanding or imitation is a bit like claiming (to use a favorite example) that we don't know whether a rock senses gravity when it falls. Of course, maybe even minerals have something like a "sense" (as bio matter does), but even in that case the computer would have sense due to its material acting like DNA, not due to the actual program.
Now, in scifi territory, maybe even pure routines or other events could lead to some kind of sense. But this isn't what is being researched, and even if it existed it would have to be triggered specifically (I mean, rocks don't seem to develop a sense just by gravity affecting them). We don't have such tech. What AI currently seems to be examined as is just programs somehow being more than a collection of programs, forming a focal point, something like an ego, which isn't realistic given what we know. Sure, a thermostat doesn't have to know what heat is, or words for it, or things related to it or metaphors about it, but if it were actually an actor / had sense, it should at least form some account of it, built out of a pool of materials it has for forming senses. An ant, from what one can tell, does have a sense of the ground it moves on, regardless of what that actual sense may be, or of what the sensation of pheromones is, etc.

Maybe the matter also gets complicated by the remains of prehistoric animistic beliefs (neither conscious nor direct) in contemporary humans: at some point in prehistory, I am pretty sure, rocks and other such objects were imagined to have life of this kind. Not because prehistoric people were dumb, but because it is a very easy mental connection/thought to express.
 
Whether the rock has a sense of gravity is a purely philosophical question. If we want to check whether a person has read and understood "War and Peace", we can ask him to answer questions about the novel or write a short essay about it. If a computer is able to do the same, it will IMO qualify as understanding, because it will act like a human who read and understood the novel, and there will be no reason to assume human understanding is better or more real. We can't be sure whether the computer's understanding is "real", but to be fair we don't know what "real" means in this context, or whether human understanding is real either.
 

It's not understanding to code something which can produce output that passes a test of whether the thing answering is human or intelligent. I mean, if you have a computer do an IQ test, having been fed answers to similar questions and correctly linking the questions it was given answers to with derivative questions, you don't actually have any action there, nor any sense.
I think we do already have a definition of sense or understanding: it is the ability to form something out of a largely adaptable pool of potential. Some servant in ancient Athens could understand what a diagonal was, if Socrates explained the notions, but the key here is that the specific way in which the servant, Socrates, or anyone else, formed a sense of the diagonal, is neither the same nor actually examined in depth. In a computer, depth isn't there in the first place, though I see no reason why a bio-computer hybrid cannot provide new insight into both computers and bio-matter (and by hybrid I don't mean an actual brain; just some cells).
 
I mean, if you have a computer do an IQ test, having been fed answers to similar questions and correctly linking the questions it was given answers to with derivative questions, you don't actually have any action there, nor any sense.
But why? Let's say an AI achieves results similar to humans' on some complex task; how can we claim the AI is just coded to answer questions while we have a deeper understanding or sense of them? In my opinion, if the problem is complex enough to require deep understanding in order to achieve good results, then a computer which is able to solve it should also understand it.

I think we do already have a definition of sense or understanding: it is the ability to form something out of a largely adaptable pool of potential.
I can't understand your definition of understanding :)

Some servant in ancient Athens could understand what a diagonal was, if Socrates explained the notions, but the key here is that the specific way in which the servant, Socrates, or anyone else, formed a sense of the diagonal, is neither the same nor actually examined in depth.
You can train ten neural networks to recognize cats and dogs in pictures. They will do it in different ways and with different accuracy (sometimes disagreeing with each other), and there won't be an easy way to find out the exact reasoning behind their answers in each case.
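A toy sketch of this point (my own example, using scikit-learn and a synthetic dataset in place of actual cat/dog pictures): ten small networks trained on the same data, differing only in random initialization, reach different accuracies and disagree on some test points, and inspecting their learned weights would not easily reveal why.

Code:
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic two-class dataset standing in for "cat vs dog" images.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ten networks that differ only in their random initialization.
models = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                        random_state=seed).fit(X_train, y_train)
          for seed in range(10)]

predictions = [m.predict(X_test) for m in models]
accuracies = [round(m.score(X_test, y_test), 2) for m in models]
disagreements = sum(len({p[i] for p in predictions}) > 1 for i in range(len(X_test)))

print("accuracies:", accuracies)
print("test points where the ten models disagree:", disagreements, "out of", len(X_test))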
 

I don't see that as training, but as coding some tie. E.g. you can code the machine to look for color patterns, or size or movement, etc. The thing missing would be an active entity actually sensing that it has some input and then producing some output (right or wrong, that isn't important), in effect forming something new because it has the ability to form something out of a pool which is not tied to forming that something or anything else specific. For the same reason, if you take an ant and place it in some environment it never would be in by itself, the ant won't just stop having a sense of the environment; it will still react and form a sense, it just won't be able to cope. The machine by definition cannot produce output if its code does not present the input as actual input, while for a living entity input itself is not a task to be identified but something sensed automatically.
An infant may sense the lava flowing down the hill you placed it on as something funny, and smile. The computer won't print any output if you give it something its code doesn't pick up, because there is no pool for forming anything in the computer; it just follows the code without sensing that there is code, or that reacting to the code is a thing either.
 
Most people, when they talk about A.I., wrongly have games in mind. And sure, it is important to give the player the sense that the adversary is competitive, but it is arguably more important to examine what actually exists below the surface of the game. In the case of A.I., I don't think there is any actual ability to practice. In practice, outside of a game, A.I. is incapacitated, and while it is reflexive to focus on the result - the game - the mere ability to practice, or even the ability to feel anything about actual practice, is important.
Why is feeling things important for machines?

The purpose of machines is to solve problems, not catch feelings.

Shoot, even the point of feelings is to solve problems (loneliness, hunger, anger, joy all drive and reinforce behavior).

Maybe it would be useful for machines to "feel", but it's probably not necessary, and potentially dangerous.
 
For the same reason, if you take an ant and place it in some environment it never would be in by itself, the ant won't just stop having a sense of the environment; it will still react and form a sense, it just won't be able to cope.
Wouldn't it be the same if you replaced the ant with an autonomous vacuum cleaner?

The machine by definition cannot produce output if its code does not present the input as actual input, while for a living entity input itself is not a task to be identified but something sensed automatically.
A living entity's brain also just processes input from sensory organs and produces reactions. You can do the same with a computer if you connect a camera and a microphone to it. There is no difference in principle; we are just programmed by evolution to be autonomous and to survive in our environment. Computers in most cases are programmed to do different things and are less advanced than we are.
 
I am not seeing how computers "do" something, in the sense that they actually have any sense of doing something. If you throw rocks at the sea, they may create small circles on the surface - the sea didn't sense the event, much like a computer doesn't sense anything. The event is caused not by an actor participating in something but by laws which affect both sea and rock as non-actors, without there being any translation of their effect by an actual actor (e.g. a human or other living being).
Any bio matter, on the other hand, will react as part of a system which has its own end and progresses with some (conscious or not) reference to that end. E.g. a fungus will react to fungicide and revert to spore status until the fungicide is no longer there. It is basic, but also not something which can be manufactured (i.e. it requires bio matter). A machine senses nothing at all, like the rock and the sea, regardless of the circles you may like watching as output (and which a human wouldn't easily create without using those lifeless media).

To tie this to the Kasparov video I posted: Yes, the computer(s) beat Kasparov, but they obviously didn't sense Kasparov, the game, a progression, thinking ahead, the difference between opening and ending moves, or anything else in any way. They were better than Kasparov at coming up with moves, much like any pocket calculator is better than any human at presenting the result of multiplying huge numbers.
 
Any bio matter, on the other hand, will react as part of a system which has its own end and progresses with some (conscious or not) reference to that end. E.g. a fungus will react to fungicide and revert to spore status until the fungicide is no longer there. It is basic, but also not something which can be manufactured (i.e. it requires bio matter). A machine senses nothing at all, like the rock and the sea, regardless of the circles you may like watching as output (and which a human wouldn't easily create without using those lifeless media).
What's so special about bio matter? We and computers are both made from atoms; the differences are programming and level of complexity, which is not a qualitative difference. If we build a complex enough computer, find the right way to program it, and give it sensors, wouldn't it also be able to sense and understand things? I think we can, and we are already on that path.
 

We and dogs both have limbs; does that mean dogs are on their way to building computers?

Bio matter is "special" in that it seems to be a prerequisite for anything to have a sense*

*Maybe there are properties we don't know about which allow sense to arise from non-bio matter. But if that is the case, it certainly has nothing to do with the AI talk going on in our time. Maybe mineral or electrical or other matter can also give rise to sense under some conditions (or in states we do not pick up on, due to how we sense the world), but that isn't tied to the current talk of AI in our age (I mentioned this as "scifi" stuff).
 
And finally, because I like OPs with pictures:
[Image - a cool image manipulation algorithm]

The algorithm manipulates the left-most images according to the categories blonde hair, gender, etc.

[Image - but the true state-of-the-art AI is...]


What I'm getting from this is that these are all presumably celebrities, and I only recognise one of them. That makes me happy somehow.
 
Strong AI is intuitively associated with the ability of a computer system to "understand" things. What we see now in the speech recognition and NLP (natural language processing) fields already resembles "understanding". Modern translation engines take input text, convert it into an internal representation and then translate this representation into another language. I think over the next several years we will continue to see different strong AI elements emerging.
I'd add to this that "internal representations" are actually a pretty general feature of deep learning algorithms, whether it's computer vision, natural language processing, or anything else. In fact, it's very closely connected to one of the key reasons why deep learning is good at a lot of things that other algorithms fail at - the idea of "automatic feature learning."

In case you're not familiar (or if anyone else is interested), a short explanation is that a classic problem in AI is figuring out what "features" of an input are needed for classifying it. For example, what are the cat-like features of an image we need in order to build a cat detector? A naive (and extremely difficult) way would be to try to manually code structures to detect things like whiskers, ears, and tails. Deep learning, on the other hand, essentially does this on its own. This picture tries to illustrate:
[Image - visualization of a CNN's learned convolutional filters]


This is actually a convolutional neural network, or CNN. The different cells are showing the "learned" values of the "convolutional filters." Once the filters are learned, you classify a new image by essentially sliding each of them around the image and using them as "detectors" for the features they've learned.

So a CNN for detecting cats will learn filters that act as internal representations of cat-like features. Though this is just a CNN, all deep learning algorithms do something similar. A lot of people compare this kind of thing to "concepts." Maybe that's anthropomorphizing neural nets too much, but it's not a bad analogy imo.
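To make the "sliding filters as detectors" idea concrete, here is a toy sketch (not from the post above; the filter is hand-written rather than learned, standing in for what a CNN would learn from data): a 3x3 vertical-edge filter is slid over a small image and responds most strongly where the image actually contains a dark-to-bright edge.

Code:
import numpy as np

# An 8x8 "image" whose left half is dark and right half is bright.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A hand-written 3x3 filter that responds to vertical dark-to-bright edges.
# In a real CNN, values like these would be learned from data.
vertical_edge_filter = np.array([[-1.0, 0.0, 1.0],
                                 [-1.0, 0.0, 1.0],
                                 [-1.0, 0.0, 1.0]])

# Slide the filter over every valid 3x3 patch and record how strongly it responds.
response = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        patch = image[i:i + 3, j:j + 3]
        response[i, j] = np.sum(patch * vertical_edge_filter)

print(response)  # largest where the filter straddles the dark-to-bright edge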
 