Artificial Intelligence, friend or foe?

You are basically describing physics. Trillions of inanimates do not an animate make. (Well, not usually. There was this one exception.) All things interact, whether dead or alive. But the key question is: why would a man-made object have a mind? Obviously, single cells are alive. But would anyone argue that a single cell has a mind?
 
...So I don't see machines acquiring it any time soon. I reckon the best bet is a designer stumbling on it and creating a sentient machine by accident.

OK, you are aware that many corporations, governments and militaries are actively working to achieve sentient AI - it won't just be "stumbled upon" by some designer.

For instance, check out JAIR. Or this. Or this. Or this. Or especially this.
 
I wish them good luck with that. In order to give a machine intelligence, one should first know what intelligence (or mind) is. So excuse me if I'm not overly impressed by the efforts of those many corporations, governments and militaries. It sounds a bit like many entities (fill in your preferred) 'working towards peace'. Not going to happen, but I wish them all the luck in the world.
 
As I said: it is all about the data. Put racist (or sexist, or otherwise biased) data in, and you get an AI that is all of those things. Unfortunately, it is quite hard to provide an unbiased dataset for training.
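To make "bias in, bias out" concrete, here is a toy sketch in Python; the dataset and counts are invented for illustration, and the "model" is the crudest one possible:

```python
# "Bias in, bias out" in miniature: a trivial model trained on a skewed
# sample simply reproduces the skew. All data here is invented.
from collections import Counter

# Hypothetical labelled data: 90% of the "baby" examples show white babies.
training_data = [("baby", "white")] * 90 + [("baby", "black")] * 10

def train(data):
    """Count label frequencies per tag (the crudest possible 'model')."""
    model = {}
    for tag, label in data:
        model.setdefault(tag, Counter())[label] += 1
    return model

def predict(model, tag):
    """Return the label most often seen with this tag."""
    return model[tag].most_common(1)[0][0]

print(predict(train(training_data), "baby"))  # -> 'white': the skew wins
```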
 
Is it though? For a search for baby pictures to return exclusively white babies, some specific parameter enabling that must have been in place. I doubt it's all about data, since it's kind of hard to imagine that only white babies get their pictures posted online. In short, it's not about data, but about data selection. Or, in other words, AI is only as intelligent as its designer. And therein lies the problem with trying to create artificial intelligence.
 
Is it though? For a search for baby pictures to return exclusively white babies, some specific parameter enabling that must have been in place.

I very much doubt that. It would make no sense to code a specific parameter for that. Why invest time and money just to exclude babies of non-European ancestry? Any search engine developer will try to minimize special-case code for specific terms, both for economic reasons and because the more special cases they add, the less they can point to the AI as an excuse ("We did not intend that; it was just the result of our algorithm in this very specific case").

I doubt it's all about data, since it's kind of hard to imagine that only white babies get their pictures posted online. In short, it's not about data, but about data selection.

And how do you think the data selection is made? It is made with data. I do not know exactly how Google's algorithm works, but generally it will try to find the most popular images on the internet (the American part of it, for google.com) that are somehow tagged "baby". Since a majority of the people who use google.com are of European ancestry, their babies end up as the most popular and the algorithm decides to display those. Go to google.de and the fraction of babies of African ancestry will be even smaller. Go to google.co.in and you start to see other babies, but as long as you put in the English word, the result will reflect the preferences of the majority of English-speaking people. Put in the Swahili word for baby and almost all results will be East African babies.
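A toy sketch of the kind of popularity ranking I mean (the records and click counts below are invented, not Google's actual signals or algorithm):

```python
# Toy popularity ranking: filter images by tag, sort by a popularity signal.
images = [
    {"url": "a.jpg", "tags": {"baby"}, "clicks": 9500},
    {"url": "b.jpg", "tags": {"baby"}, "clicks": 120},
    {"url": "c.jpg", "tags": {"cat"},  "clicks": 8000},
]

def search(images, query, top_n=10):
    """Return the most popular images carrying the queried tag."""
    hits = [img for img in images if query in img["tags"]]
    return sorted(hits, key=lambda img: img["clicks"], reverse=True)[:top_n]

print([img["url"] for img in search(images, "baby")])  # -> ['a.jpg', 'b.jpg']
```

Note that no "whiteness" parameter appears anywhere; whichever group posts and clicks the most simply dominates the top of the list.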

Or, in other words, AI is only as intelligent as its designer. And therein lies the problem with trying to create artificial intelligence.

Intelligence is a very nebulous term, but the statement is at best misleading. Given good data, I can code an AI that makes decisions about a subject I have no idea about. For that specific subject, the AI would have more intelligence than I do as its designer. Take AlphaGo as an example: it is a much better Go player than any of its designers. Yet its intelligence is limited to that very specific subject. For any subject it was not trained on, like AI design, it has no intelligence at all, so the designer is clearly more intelligent in those areas.
 
And how do you think the data selection is made? It is made with data. I do not know exactly how Google's algorithm works, but generally it will try to find the most popular images on the internet (the American part of it, for google.com) that are somehow tagged "baby". Since a majority of the people who use google.com are of European ancestry, their babies end up as the most popular and the algorithm decides to display those. Go to google.de and the fraction of babies of African ancestry will be even smaller. Go to google.co.in and you start to see other babies, but as long as you put in the English word, the result will reflect the preferences of the majority of English-speaking people. Put in the Swahili word for baby and almost all results will be East African babies.

So... it's about data selection. Wasn't that what I just said? A selection is already being made based on 'most popular results', apparently.

Intelligence is a very nebulous term, but the statement is at best misleading. Given good data, I can code an AI that makes decisions about a subject I have no idea about. For that specific subject, the AI would have more intelligence than I do as its designer. Take AlphaGo as an example: it is a much better Go player than any of its designers. Yet its intelligence is limited to that very specific subject. For any subject it was not trained on, like AI design, it has no intelligence at all, so the designer is clearly more intelligent in those areas.

In this argument, intelligence does indeed seem a nebulous term. How would a program have intelligence on a subject you know nothing about? It's like trying to have a computer translate (which involves zero calculation, but plenty of judgement). If you don't have any knowledge of a particular subject, you wouldn't even know where to start. You couldn't even program the data selection, for one.
 
So... it's about data selection. Wasn't that what I just said? A selection is already being made based on 'most popular results', apparently.
Which is data - data selection is made based on data. So if you have the data, the algorithm knows how to select.

In this argument, intelligence does indeed seem a nebulous term. How would a program have intelligence on a subject you know nothing about? It's like trying to have a computer translate (which involves zero calculation, but plenty of judgement). If you don't have any knowledge of a particular subject, you wouldn't even know where to start. You couldn't even program the data selection, for one.

You need data. In this case, the data would be a collection of texts and their translations. You do not have to understand either of the languages; the only thing you need to know is that the translations are accurate (even better would be a score of accuracy). You would then divide these texts into three parts. For the first part, you would feed both the texts and the translations into the AI, and the algorithm would try to find a mapping between them. For the second part, you would feed only the texts into the AI, compare the result from the mappings it has learned to the verified translations, and generate a score for how good the result is. You would do this multiple times and select the variant with the best results. Finally, you would use the third part to see how good your final result is.

Language is a difficult subject, and you would need to know how to design language algorithms in general. But you would not need to know anything specific about the languages themselves, and if it works, the algorithm can translate texts you cannot translate yourself. So I would claim that it has more "intelligence" in the subject of translation than you do. Now, if you knew how to translate between these languages, you could try to make up specific rules to improve the translations. However, this would be a tedious task and might not even lead to better results, because the machine can process more texts than you have ever read and might come up with better translations than you could.
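A runnable toy version of that three-part procedure; the "model" here is just a memorized phrase table standing in for a real learner, and the word pairs are invented:

```python
# Train / validation / test discipline in miniature.
import random

# Invented (source, translation) pairs; a real corpus would be vastly larger.
pairs = [("hallo", "hello"), ("welt", "world"), ("katze", "cat"),
         ("hund", "dog"), ("haus", "house"), ("baum", "tree"),
         ("wasser", "water"), ("brot", "bread"), ("milch", "milk"),
         ("buch", "book")]

rng = random.Random(0)
rng.shuffle(pairs)
n = len(pairs)
train = pairs[:int(0.6 * n)]                   # part 1: learn the mapping
validation = pairs[int(0.6 * n):int(0.8 * n)]  # part 2: compare and select
test = pairs[int(0.8 * n):]                    # part 3: final, untouched check

def fit(data):
    """'Learn' by memorizing, a stand-in for a real learning algorithm."""
    return dict(data)

def score(model, data):
    """Fraction of texts translated exactly right."""
    return sum(model.get(src) == tgt for src, tgt in data) / len(data)

model = fit(train)
print("train score:", score(model, train))            # 1.0: it memorized these
print("validation score:", score(model, validation))  # 0.0: memorizing doesn't generalize
print("test score:", score(model, test))              # the honest final number
```

The gap between the train and validation scores is exactly why the second and third parts exist: they catch a model that has merely memorized its inputs.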
 
As Agent327 has pointed out repeatedly, the predictions that Machine Intelligence will literally attack and kill us are exaggerated and overblown. But machines are gradually taking our jobs and controlling our economy. They, of course, are not yet sentient. But by the time we do give them true intelligence, they will already be running things.

The head of AI research at Google said something very similar last year. (Paraphrasing...)
Don't worry about nonsense like killer robots, start thinking about more immediate, realistic
problems, like what you are going to do with millions of unemployed truck drivers.
 
Which is data - data selection is made based on data. So if you have the data, the algorithm knows how to select.



You need data. In this case, the data would be a collection of texts and their translations. You do not have to understand either of the languages; the only thing you need to know is that the translations are accurate (even better would be a score of accuracy). You would then divide these texts into three parts. For the first part, you would feed both the texts and the translations into the AI, and the algorithm would try to find a mapping between them. For the second part, you would feed only the texts into the AI, compare the result from the mappings it has learned to the verified translations, and generate a score for how good the result is. You would do this multiple times and select the variant with the best results. Finally, you would use the third part to see how good your final result is.

Language is a difficult subject, and you would need to know how to design language algorithms in general. But you would not need to know anything specific about the languages themselves, and if it works, the algorithm can translate texts you cannot translate yourself. So I would claim that it has more "intelligence" in the subject of translation than you do. Now, if you knew how to translate between these languages, you could try to make up specific rules to improve the translations. However, this would be a tedious task and might not even lead to better results, because the machine can process more texts than you have ever read and might come up with better translations than you could.

How will you provide "context"? Without that, translation could be
completely wrong, and no more than a dictionary look-up.
Without context, machines would just be "craunching marmosets".
https://en.wikipedia.org/wiki/English_As_She_Is_Spoke

How would you accommodate variable, dynamically-changing contexts,
e.g. the way teenagers (in particular) use opposite meanings to include
or exclude people from their cliques and social groups?

Language translation is difficult enough with complete information.
Without complete information, or given equally-likely opposite meanings of
words and phrases, it's going to be very tough.
 
Psychologically, consciousness may well be an illusion to the same extent as free will. But I don't see it developing any time soon from inanimate objects. Hence the idea of 'stumbling upon it by accident.' This may very well be how life itself started.

We know how to create human consciousness. It's easy, and can be great fun too. :)
(Some problems can arise at 2, again around 13 and, according to my wife, at 61.)

And why aim low and try to create human-like consciousness from available biochemicals
and systems? Far superior types could emerge from more focused, directed evolutionary
experiments with molecules and systems that aren't used in standard Earth biology.
 
How will you provide "context"? Without that, translation could be
completely wrong, and no more than a dictionary look-up.
Without context, machines would just be "craunching marmosets".
https://en.wikipedia.org/wiki/English_As_She_Is_Spoke

How would you accommodate variable, dynamically-changing contexts,
e.g. the way teenagers (in particular) use opposite meanings to include
or exclude people from their cliques and social groups?

Language translation is difficult enough with complete information.
Without complete information, or given equally-likely opposite meanings of
words and phrases, it's going to be very tough.

If you had a large enough sample of texts, examples of all of these would be included in those texts. So you could gather the necessary context from the texts themselves. It is comparable to a child learning a language: children start with the superficial meaning of sentences and begin to grasp the contexts as they are exposed to a wider range of speech and text.

Nevertheless, I agree that it is going to be very tough, but that is how you would go about it if you were to create an AI for translation. I doubt it would be very good on current hardware, and it is hard to say how far this can be pushed. I admit it is somewhat of a bad example because of the difficulty, but I felt it was good enough to convey the point.
 
If you had a large enough sample of texts, examples of all of these would be included in those texts. So you could gather the necessary context from the texts themselves. It is comparable to a child learning a language: children start with the superficial meaning of sentences and begin to grasp the contexts as they are exposed to a wider range of speech and text.

Texts are frozen in time. Any machine learning from texts will be inflexible,
and unable to cope with, among many others, jargon, argot, and thieves' cant.
There is no text or reference work that the machine can even call on to help in
translation.

If you don't know it, you might like to consider the enormous problems raised by
the Chinese Room Argument.
https://en.wikipedia.org/wiki/Chinese_room
For some AI functions, that situation can be dismissed almost out of hand; for other
translation objectives it presents an insurmountable obstacle.

Humans also pick up a myriad of subtle visual cues during face-to-face interactions,
and these can differ widely depending on where the conversation is taking place,
and on the age and ethnicity of the speaker and listener.

Suppose you and I were talking, face-to-face, and I said I was going down to the bank.
It's very unlikely that you would think I was going to a building to withdraw money if we
were near a river and I was carrying a fishing rod at the time. Nor would you even
bother to ask which I meant - building or river bank. I might not even be carrying the
rod that day, but you saw me with it yesterday. I'm not sure how a machine would
cope with that missing context.
 
Which is data - data selection is made based on data. So if you have the data, the algorithm knows how to select.

I'm beginning to see a misunderstanding here. Data selection isn't about data: it's about selection. (And algorithms don't 'know' anything. Someone needs to tell them what to do. This is called programming.) It is similar to how a news program isn't about news, but primarily about news selection. There is no shortage of news, but a program needs to select which (minute) part of the overall collection of data that represents news will be shown.

You need data. In this case, the data would be a collection of texts and their translations. You do not have to understand either of the languages; the only thing you need to know is that the translations are accurate (even better would be a score of accuracy). You would then divide these texts into three parts. For the first part, you would feed both the texts and the translations into the AI, and the algorithm would try to find a mapping between them. For the second part, you would feed only the texts into the AI, compare the result from the mappings it has learned to the verified translations, and generate a score for how good the result is. You would do this multiple times and select the variant with the best results. Finally, you would use the third part to see how good your final result is.

I gather you've never tried to use Google Translate, or worked as a translator. The problem with translation is that most words have multiple meanings, and the specific meaning of a word depends on the context. (Also, you may note that a translation program actually uses existing translations. In other words, it uses work already done by actual translators.)

Language is a difficult subject, and you would need to know how to design language algorithms in general. But you would not need to know anything specific about the languages themselves, and if it works, the algorithm can translate texts you cannot translate yourself. So I would claim that it has more "intelligence" in the subject of translation than you do. Now, if you knew how to translate between these languages, you could try to make up specific rules to improve the translations. However, this would be a tedious task and might not even lead to better results, because the machine can process more texts than you have ever read and might come up with better translations than you could.

I'm not even sure what a 'language algorithm' is supposed to be. But you are right that language is a difficult subject. Algorithms can't translate texts; what they can do is select a meaning from a fixed list of meanings. That selected meaning is as likely to be wrong as right. (In fact, more likely to be wrong, but let's leave that out for the sake of argument.) The problem is with understanding the context within which a word is used, as that determines its actual meaning. In other words, the meaning of any given word is determined by the words surrounding it (as well as the order of those words). My best guess is that even a linguist couldn't program a translation algorithm (assuming that linguist had programming skills). Even if the program had a list of the most probable meanings of any given word, that would not be particularly helpful with a translation. In short: in no way would the result be more intelligent than the person programming it. (The program might have a wider vocabulary, though, since that would be a list, i.e. calculable.)

That's one problem. Another is, of course, that languages tend to be 'updated': the meanings of actual words tend to shift over time. So any translation program would need regular updates. And again, you'd need an intelligence to execute that (or even to find the updated word meanings). Lastly, the argument seems to be that the calculus is 'more intelligent' than the inventor of the calculus. That is patently absurd.

If you had a large enough sample of texts, examples of all of these would be included in those texts. So you could gather the necessary context from the texts themselves. It is comparable to a child learning a language: children start with the superficial meaning of sentences and begin to grasp the contexts as they are exposed to a wider range of speech and text.

The difference, of course, is that a child has actual intelligence. Even at 2. A program, however, has no means to determine how a specific meaning derives from a specific context. That's because a program, unlike a child of 2, lacks the capability to understand anything. (Simply put: it doesn't see the connection between context and meaning.)

Nevertheless, I agree that it is going to be very tough, but that is how you would go about it if you were to create an AI for translation. I doubt it would be very good on current hardware, and it is hard to say how far this can be pushed. I admit it is somewhat of a bad example because of the difficulty, but I felt it was good enough to convey the point.

Theoretically you may be right - except there is no way to go about it. So in practice, a computer translation gives a number of possible meanings you might have found anyway if you had looked the words up yourself. In a dictionary.

You can program an algorithm that 'understands' a + b = c, because that is not intelligence. It's logic. Oddly, logic derives from language. But language isn't logical. It has rules entirely of its own, completely unguided by logic. Unlike mathematics, the basis of all programming.
 
Suppose you and I were talking, face-to-face, and I said I was going down to the bank.
It's very unlikely that you would think I was going to a building to withdraw money if we
were near a river and I was carrying a fishing rod at the time. Nor would you even
bother to ask which I meant - building or river bank. I might not even be carrying the
rod that day, but you saw me with it yesterday. I'm not sure how a machine would
cope with that missing context.

I was thinking about an AI that translates text. Face-to-face conversations are a level above that. However, it would still be (theoretically) possible to pick up these clues. For example, I could use videos of conversations from which image recognition could spot the fishing rod. Or I could collect your movement profile, from which I could discern that you go from this spot to the river almost every time and almost never to a bank building. Such cues can mislead and the algorithm might be wrong, but there are plenty of misunderstandings between humans as well.

I'm beginning to see a misunderstanding here. Data selection isn't about data: it's about selection. (And algorithms don't 'know' anything. Someone needs to tell them what to do. This is called programming.) It is similar to how a news program isn't about news, but primarily about news selection. There is no shortage of news, but a program needs to select which (minute) part of the overall collection of data that represents news will be shown.

And any AI algorithm that deserves the name makes the selection with data. As I will explain below, the algorithm knows more about how to select than the programmer, because it is able to process vast amounts of data.

I gather you've never tried to use Google Translate, or worked as a translator. The problem with translation is that most words have multiple meanings, and the specific meaning of a word depends on the context. (Also, you may note that a translation program actually uses existing translations. In other words, it uses work already done by actual translators.)

I have done both, and I know about the problems that exist. But if you take a text and translate it, you have the same input as the AI, and the problem is to extract the context from the surrounding text. This is a very hard problem, and I do not claim that there is an AI that can do this yet, but I see no particular reason why it should be impossible (it might be limited by available computing power, of course).


I'm not even sure what a 'language algorithm' is supposed to be. But you are right that language is a difficult subject. Algorithms can't translate texts; what they can do is select a meaning from a fixed list of meanings. That selected meaning is as likely to be wrong as right. (In fact, more likely to be wrong, but let's leave that out for the sake of argument.) The problem is with understanding the context within which a word is used, as that determines its actual meaning. In other words, the meaning of any given word is determined by the words surrounding it (as well as the order of those words). My best guess is that even a linguist couldn't program a translation algorithm (assuming that linguist had programming skills). Even if the program had a list of the most probable meanings of any given word, that would not be particularly helpful with a translation. In short: in no way would the result be more intelligent than the person programming it. (The program might have a wider vocabulary, though, since that would be a list, i.e. calculable.)

By language algorithm I mean rules that can compare two texts (in the same language) and say how close they are; that say how you can (or cannot) rearrange words; how to spot the structure of a sentence, and so on.
Anyway, I think you have no idea how AI works. For an AI, you do not get a linguist to formalize all he knows about the language and then put it into an algorithm. Rather, you program an algorithm that can analyze texts and deduce those rules from them. With the former approach you are obviously limited by the knowledge of the linguist, but with the latter you can feed more and more texts into the algorithm to improve it beyond my own understanding of language.
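As a minimal sketch of one such rule (comparing two texts and saying how close they are, with nothing language-specific coded in), here is bag-of-words cosine similarity; the example sentences are invented:

```python
# Bag-of-words cosine similarity: a crude measure of how close two texts are.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Compare word-count vectors of the two texts; 1.0 means an identical mix."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("the cat sat on the mat", "a cat sat on a mat"))  # high
print(cosine_similarity("the cat sat on the mat", "stock prices fell"))   # 0.0
```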

That's one problem. Another is, of course, that languages tend to be 'updated': the meanings of actual words tend to shift over time. So any translation program would need regular updates. And again, you'd need an intelligence to execute that (or even to find the updated word meanings).

Of course you would need to update it (and the date when a text was written would be an important piece of context when trying to translate it). But if you have the learning algorithm in place, there is no additional intelligence needed. You would feed the new texts into the same algorithm as the old texts and let the AI gather the new meanings from them.

Lastly, the argument seems to be that the calculus is 'more intelligent' than the inventor of the calculus. That is patently absurd.

It is not, if you think about it: the best chess program can beat any human. Surely it is more "intelligent" at playing chess than its programmers (who would probably be easily beaten by the world champion). It has not been conclusively proven yet, but I suspect the situation will soon be the same with Go.

Since my argument about translation has met so many (not entirely invalid) objections, let me give a less hypothetical scenario, one I actually know works:
Suppose I have a lot of devices which each provide a bunch of technical parameters, and I want to know which are broken. I have no idea what these parameters mean or how they are connected with broken devices. I have sent someone to look at a fraction of the devices to check them, and he has provided me with a list of which ones are broken and which are not (let's say he checked whether a light was blinking, which cannot be seen from far away). I can take the parameters and the list of broken devices, feed them into an AI, and let it learn from these data sets. If I do this correctly, the AI now has a model of which of these parameters signify a broken device. Because I have only supplied the learning algorithm, I have no idea about that model. The guy I sent to check the devices never saw the parameters, so he cannot know anything about the model either. Therefore, the AI is now more "intelligent" at recognizing broken devices from far away than either of us. Of course, I can now try to understand the model the AI has generated, but first, I do not have to for the thing to work, and second, if the model is complicated enough, I might not even be able to understand it.
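Here is that scenario as a runnable toy; the numbers are invented and the learner is a deliberately simple nearest-centroid classifier (a real system would use a proper learner such as a decision tree or logistic regression):

```python
# Nearest-centroid toy: learn "broken" vs "ok" from the labelled fraction,
# without anyone interpreting what the parameters mean.
import math

# (parameters, label) pairs for the fraction of devices someone checked
labelled = [
    ([3.1, 0.9], "ok"), ([2.9, 1.1], "ok"), ([3.0, 1.0], "ok"),
    ([7.8, 4.2], "broken"), ([8.1, 3.9], "broken"), ([7.9, 4.1], "broken"),
]

def centroids(data):
    """The learned 'model': the average parameter vector per label."""
    sums, counts = {}, {}
    for params, label in data:
        acc = sums.setdefault(label, [0.0] * len(params))
        for i, p in enumerate(params):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def classify(model, params):
    """Predict the label whose centroid is nearest."""
    return min(model, key=lambda lbl: math.dist(params, model[lbl]))

model = centroids(labelled)
print(classify(model, [8.0, 4.0]))  # -> 'broken', for a device nobody visited
```

Neither the person who checked the lights nor the person who wrote these few lines knows which parameter values mean "broken"; the fitted centroids encode that.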



So now back to language (ugh):

The difference, of course, is that a child has actual intelligence. Even at 2. A program, however, has no means to determine how a specific meaning derives from a specific context. That's because a program, unlike a child of 2, lacks the capability to understand anything. (Simply put: it doesn't see the connection between context and meaning.)

That is a bold statement that is easily disproven. Let's take the "bank" example from above. I feed an AI several texts which use both concepts and have accurate translations into a language that uses different words for the two concepts. From a dictionary, the algorithm can know which words in the other language can translate "bank". I program the learning algorithm in such a way that it only considers those usages of the word "bank" where it is clear what the translation is. From a simple word frequency analysis of the surrounding sentences, the algorithm will find that one translation of "bank" comes with words like building, money, door, deposits and so on. The other will be surrounded by river, water, sand, etc. So if it now encounters a text without a translation, it can look at the words surrounding "bank" and then choose the word with which to translate it. The algorithm would not know what a bank actually is, but for a translation that is not necessary. Note that I did not put the words "river" or "money" into the algorithm; I just instructed it to look at the surrounding words. The same procedure can be applied to any other word, including words I do not know myself.
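That procedure, as a runnable toy; the sentences are invented and the statistics deliberately crude, but the mechanism is the one just described:

```python
# Word-sense disambiguation from surrounding-word frequencies.
from collections import Counter

# Sense-labelled examples (in practice, the labels come from translations).
labelled = [
    ("i deposited money at the bank near the door", "bank_financial"),
    ("the bank approved the money transfer", "bank_financial"),
    ("we fished with rod and line from the bank of the river", "bank_river"),
    ("sand and water covered the river bank after the flood", "bank_river"),
]

def train(examples):
    """Count the words seen around each sense of 'bank'."""
    profiles = {}
    for sentence, sense in examples:
        words = [w for w in sentence.split() if w != "bank"]
        profiles.setdefault(sense, Counter()).update(words)
    return profiles

def disambiguate(profiles, sentence):
    """Pick the sense whose neighbour profile overlaps the sentence most."""
    words = [w for w in sentence.split() if w != "bank"]
    return max(profiles, key=lambda s: sum(profiles[s][w] for w in words))

profiles = train(labelled)
print(disambiguate(profiles,
                   "he walked along the bank carrying a fishing rod by the water"))
# -> 'bank_river': nobody told it about rods or water; the texts did
```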

You can program an algorithm that 'understands' a + b = c, because that is not intelligence. It's logic. Oddly, logic derives from language. But language isn't logical. It has rules entirely of its own, completely unguided by logic. Unlike mathematics, the basis of all programming.

The program does not really understand a + b = c (and to be fair, not many humans do). It just follows instructions. But you can write instructions to learn things like language rules. These rules do not have to be logical in any way; there just have to be rules that can be learned.
 
I was thinking about an AI that translates text. Face-to-face conversations are a level above that. However, it would still be (theoretically) possible to pick up these clues. For example, I could use videos of conversations from which image recognition could spot the fishing rod. Or I could collect your movement profile, from which I could discern that you go from this spot to the river almost every time and almost never to a bank building. Such cues can mislead and the algorithm might be wrong, but there are plenty of misunderstandings between humans as well.

I think you are beginning to grasp the immense problems translation involves. Try translating a manual with no pictures attached for explanation.

And any AI algorithm that deserves the name makes the selection with data. As I will explain below, the algorithm knows more about how to select than the programmer, because it is able to process vast amounts of data.

Yes. Except algorithms don't know anything more than the person who programmed them.

I have done both, and I know about the problems that exist. But if you take a text and translate it, you have the same input as the AI, and the problem is to extract the context from the surrounding text. This is a very hard problem, and I do not claim that there is an AI that can do this yet, but I see no particular reason why it should be impossible (it might be limited by available computing power, of course).

The future will no doubt be better tomorrow.

By language algorithm I mean rules that can compare two texts (in the same language) and say how close they are; that say how you can (or cannot) rearrange words; how to spot the structure of a sentence, and so on.
Anyway, I think you have no idea how AI works. For an AI, you do not get a linguist to formalize all he knows about the language and then put it into an algorithm. Rather, you program an algorithm that can analyze texts and deduce those rules from them. With the former approach you are obviously limited by the knowledge of the linguist, but with the latter you can feed more and more texts into the algorithm to improve it beyond my own understanding of language.

Your whole program is based on the work already done by humans. Grammar, syntax, linguistics. Without that you can't even begin to start an algorithm on language. (And arguments that start with 'you have no clue' generally don't hold up well.)

It is not, if you think about it: the best chess program can beat any human. Surely it is more "intelligent" at playing chess than its programmers (who would probably be easily beaten by the world champion). It has not been conclusively proven yet, but I suspect the situation will soon be the same with Go.

Ah, games. The basic rules of which any child can grasp. But you need a room-sized computer to 'analyze' them. Deep Blue isn't more 'intelligent' than its programmers: it can hold vastly more data than its programmers. It's an enhanced calculator, after all.

Since my argument about translation has met so many (not entirely invalid) objections, let me give a less hypothetical scenario, one I actually know works:
Suppose I have a lot of devices which each provide a bunch of technical parameters, and I want to know which are broken. I have no idea what these parameters mean or how they are connected with broken devices. I have sent someone to look at a fraction of the devices to check them, and he has provided me with a list of which ones are broken and which are not (let's say he checked whether a light was blinking, which cannot be seen from far away). I can take the parameters and the list of broken devices, feed them into an AI, and let it learn from these data sets. If I do this correctly, the AI now has a model of which of these parameters signify a broken device. Because I have only supplied the learning algorithm, I have no idea about that model. The guy I sent to check the devices never saw the parameters, so he cannot know anything about the model either. Therefore, the AI is now more "intelligent" at recognizing broken devices from far away than either of us. Of course, I can now try to understand the model the AI has generated, but first, I do not have to for the thing to work, and second, if the model is complicated enough, I might not even be able to understand it.

Similarly, a computer needs no clue about math while doing math. It just uses its memory. I'm not sure how you get from that to 'the program is more intelligent than the programmer'.

That is a bold statement that is easily disproven. Let's take the "bank" example from above. I feed an AI several texts which use both concepts and have accurate translations into a language that uses different words for the two concepts. From a dictionary, the algorithm can know which words in the other language can translate "bank". I program the learning algorithm in such a way that it only considers those usages of the word "bank" where it is clear what the translation is. From a simple word frequency analysis of the surrounding sentences, the algorithm will find that one translation of "bank" comes with words like building, money, door, deposits and so on. The other will be surrounded by river, water, sand, etc. So if it now encounters a text without a translation, it can look at the words surrounding "bank" and then choose the word with which to translate it. The algorithm would not know what a bank actually is, but for a translation that is not necessary. Note that I did not put the words "river" or "money" into the algorithm; I just instructed it to look at the surrounding words. The same procedure can be applied to any other word, including words I do not know myself.

You did prove something: that language is immensely more complex than math. I wouldn't hold my breath waiting for a programmer to produce a translation program that makes sense of translations anytime soon.

The program does not really understand a + b = c (and to be fair, not many humans do). It just follows instructions. But you can write instructions to learn things like language rules. These rules do not have to be logical in any way; there just have to be rules that can be learned.

Exactly my point: programs don't know anything. They lack understanding. Understanding, like emotion, is not something programmable. A program simply regurgitates its input, applied to data. That's not knowledge, and it's not even learning.

Let's finally be frank about one thing: a computer only makes the mistakes that a programmer put in. Any human can make mistakes entirely on their own. And making mistakes is an important part of any learning process. You can learn from them. No program can do that. It will simply make the same mistake over and over again - until a programmer decides to correct it.
 
Exactly my point: programs don't know anything. They lack understanding. Understanding, like emotion, is not something programmable. A program simply regurgitates its input, applied to data. That's not knowledge, and it's not even learning.

Let's finally be frank about one thing: a computer only makes the mistakes that a programmer put in. Any human can make mistakes entirely on their own. And making mistakes is an important part of any learning process. You can learn from them. No program can do that. It will simply make the same mistake over and over again - until a programmer decides to correct it.

Uhh...no.
https://en.wikipedia.org/wiki/Reinforcement_learning
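For what it's worth, here is the idea in miniature: a two-armed bandit with the basic action-value update at the core of reinforcement learning. The hidden environment is invented, and no programmer corrects the program between tries; its own failed attempts are what push it away from the bad action.

```python
# Learning from mistakes: epsilon-greedy action-value learning on a bandit.
import random

rng = random.Random(0)

def reward(action):
    """Hidden environment: action 0 usually fails, action 1 usually pays."""
    return 1.0 if rng.random() < (0.2 if action == 0 else 0.8) else -1.0

q = [0.0, 0.0]             # the program's running estimate of each action
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

for _ in range(1000):
    # explore occasionally, otherwise exploit what has been learned so far
    action = rng.randrange(2) if rng.random() < epsilon else q.index(max(q))
    q[action] += alpha * (reward(action) - q[action])  # learn from the outcome

print(q)  # q[1] ends clearly higher: early failures taught it to avoid action 0
```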
 
I'm not sure what 'Uhhh... no', followed by a link, is supposed to argue. I was talking about bugs (mistakes), in case that wasn't clear. 'Learning programs' have nothing to do with that, and only confirm that it's the programmers who learn; the AI just executes its program. Seriously, it's the humans that do all the actual thinking. So 'uhhh... yes' would be a more appropriate reply.
 
Your whole program is based on the work already done by humans. Grammar, syntax, linguistics. Without that you can't even begin to start an algorithm on language.

Yes, if it was simple we could just scan in the hundreds of thousands
of extant cuneiform tablets and the program would spit out translations.

Without human help, the program wouldn't know which way up
the tablet was supposed to be, or whether the writing runs left-right,
or top to bottom, or whether the fashion at the time was to spell out
words in full, or whether it was from a time when using "text speak"
type abbreviations was in vogue.
 