
Artificial Intelligence, friend or foe?

Discussion in 'Science & Technology' started by Glassfan, Mar 16, 2017.

  1. Agent327

    Agent327 Observer

    You are basically describing physics. Trillions of inanimates do not an animate make. (Well, not usually. There was this one exception.) All things interact, whether dead or alive. But the key question is: why would a man-made object have a mind? Obviously, singular cells are alive. But would anyone argue that a single cell has a mind?
     
  2. Glassfan

    Glassfan Mostly harmless

    OK, you are aware that many corporations, governments and militaries are actively working to achieve sentient AI - it won't just be "stumbled upon" by some designer.

    For instance, check out JAIR. Or this. Or this. Or this. Or especially this.
     
  3. Agent327

    Agent327 Observer

    I wish them good luck with that. In order to give a machine intelligence, one first should know what intelligence (or mind) is. So excuse me if I'm not overly impressed by the efforts of those many corporations, governments and militaries. It sounds a bit like many entities (fill in your preferred) 'working towards peace'. Not going to happen, but I wish them all the luck in the world.
     
  4. Agent327

    Agent327 Observer

  5. uppi

    uppi Deity

    As I said: It is all about the data. Put racist (or sexist, or otherwise biased) data in and you get an AI that is all those things. Unfortunately it is quite hard to provide an unbiased dataset for training.
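
    As a deliberately minimal sketch of that point (invented data and a toy counting "model", not any real system): a classifier that only learns outcome frequencies from a skewed dataset will reproduce the skew exactly.

    ```python
    from collections import Counter, defaultdict

    # Deliberately skewed, made-up training data: (group, outcome) pairs.
    # The labels encode a historical bias, not any property of the applicants.
    training_data = ([("A", "approve")] * 90 + [("A", "reject")] * 10
                     + [("B", "approve")] * 30 + [("B", "reject")] * 70)

    # "Training": count which outcome each group received most often.
    counts = defaultdict(Counter)
    for group, outcome in training_data:
        counts[group][outcome] += 1

    def predict(group):
        # Pick the most frequent outcome seen for this group in the data.
        return counts[group].most_common(1)[0][0]

    print(predict("A"))  # approve -- the model faithfully reproduces the bias
    print(predict("B"))  # reject  -- same model, different group
    ```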
     
  6. Agent327

    Agent327 Observer

    Is it though? For a search for baby pictures to return exclusively white babies, some specific parameter must have been available to enable that. I doubt it's all about data, since it's kind of hard to imagine that only white babies get their pictures posted online. In short, it's not about data, but about data selection. Or, in other words, AI is only as intelligent as its designer. And therein lies the problem with trying to create artificial intelligence.
     
  7. uppi

    uppi Deity

    I very much doubt that. It would make no sense to code a specific parameter for that. Why invest time and money just to exclude babies of non-European ancestry? Any search engine developer will try to minimize special code for specific terms, both for economic reasons and because the more they do that, the less they can point to the AI as an excuse ("We did not intend that, it was just the result of our algorithm in this very specific case").

    And how do you think the data selection is made? It is made with data. I do not know how Google's algorithm works exactly, but generally it will try to find the most popular images on the internet (the American part of it, for google.com) that are somehow tagged "baby". Since a majority of people that use google.com are of European ancestry, their babies will end up as the most popular and the algorithm will decide to display these. Go to google.de and the fraction of babies with African ancestry will be even smaller. Go to google.co.in and you start to see other babies, but as long as you put the English word in there, the result will reflect the preferences of the majority of English-speaking people. Put in the Swahili word for baby and almost all results will be East African babies.
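
    As a toy sketch of that popularity mechanism (the tags, click counts and locales below are all invented; real ranking is vastly more complex):

    ```python
    from collections import Counter

    # Invented click logs per locale: (query_tag, image_id) pairs.
    clicks = {
        "google.com":   [("baby", "img_eu_1")] * 800 + [("baby", "img_af_1")] * 150,
        "google.co.in": [("baby", "img_in_1")] * 600 + [("baby", "img_eu_1")] * 300,
    }

    def top_results(locale, tag, n=3):
        # Rank images for a tag purely by click popularity in one locale.
        counter = Counter(img for t, img in clicks[locale] if t == tag)
        return [img for img, _ in counter.most_common(n)]

    print(top_results("google.com", "baby"))    # ['img_eu_1', 'img_af_1']
    print(top_results("google.co.in", "baby"))  # ['img_in_1', 'img_eu_1']
    ```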

    Intelligence is a very nebulous term, but the statement is at best misleading. Given good data, I can code an AI that makes decisions about a subject I have no idea about. For that specific subject, this AI would have more intelligence than I do as its designer. Take AlphaGo as an example: it is a much better Go player than any of its designers. Yet the intelligence is limited to that very specific subject. For any subject it was not trained for (like AI design), it has no intelligence at all, so the designer is clearly more intelligent in those areas.
     
  8. Agent327

    Agent327 Observer

    So... it's about data selection. Wasn't that what I just said? Already a selection is being made based on 'most popular results', apparently.

    In this argument intelligence seems indeed a nebulous term. How will a program have intelligence on a subject you know nothing about? It's like trying to let a computer translate (which involves zero calculation, but plenty of judgement). If you don't have any knowledge on a particular subject, you wouldn't even know where to start. You couldn't even program data selection, for one.
     
  9. uppi

    uppi Deity

    Which is data - data selection is made based on data. So if you have the data, the algorithm knows how to select.

    You need data. In this case, data would be a collection of texts and their translations. You do not have to understand either of these languages; the only thing you know is that the translation is accurate (even better would be a score of accuracy). You would then divide these texts into three parts. For the first part, you would feed both the texts and the translations into the AI, and the algorithm would try to find a mapping between them. For the second part, you would feed just the texts into the AI, compare the result from the mappings it has learned to the verified translation, and generate a score for how good the result is. You would do this multiple times and select the mapping with the best results. Finally, you would use the third part to see how good your final result is.
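
    In code, that three-part procedure might look something like this sketch (the fit and score functions and the candidate configurations are placeholders, not a real translation system):

    ```python
    import random

    def train_validate_test(pairs, fit, score, candidates, seed=0):
        # pairs:      (text, verified_translation) tuples
        # fit:        fit(train_pairs, candidate) -> model
        # score:      score(model, held_out_pairs) -> quality, higher is better
        # candidates: model configurations to try (all placeholders here)
        data = list(pairs)
        random.Random(seed).shuffle(data)
        third = len(data) // 3
        train, validate, test = data[:third], data[third:2 * third], data[2 * third:]

        # Part 1: learn a mapping from texts to translations, once per candidate.
        models = [fit(train, c) for c in candidates]
        # Part 2: keep the model that best reproduces the verified translations.
        best = max(models, key=lambda m: score(m, validate))
        # Part 3: untouched data gives an honest estimate of the final quality.
        return best, score(best, test)
    ```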

    Language is a difficult subject and you would need to know how to design language algorithms in general. But you would not need to know anything specific about the languages themselves, and if it works, the algorithm can translate texts you cannot translate yourself. So I would claim that it has more "intelligence" in the subject of translation than you do. Now if you knew how to translate between these languages, you could try to make up specific rules to improve the translations. However, this would be a tedious task and might not even lead to better results, because the machine can process more texts than you have ever read and might come up with better translations than you could.
     
  10. Ferocitus

    Ferocitus Deity

    The head of AI research at Google said something very similar last year. (Paraphrasing...) Don't worry about nonsense like killer robots; start thinking about more immediate, realistic problems, like what you are going to do with millions of unemployed truck drivers.
     
  11. Ferocitus

    Ferocitus Deity

    How will you provide "context"? Without that, translation could be completely wrong, and no more than a dictionary look-up. Without context, machines would just be "craunching marmosets".
    https://en.wikipedia.org/wiki/English_As_She_Is_Spoke

    How would you accommodate variable, dynamically-changing contexts, e.g. the way teenagers (in particular) use opposite meanings to include or exclude people from their cliques and social groups?

    Language translation is difficult enough with complete information. Without complete information, or given equally-likely opposite meanings of words and phrases, it's going to be very tough.
     
  12. Ferocitus

    Ferocitus Deity

    We know how to create human consciousness. It's easy, and can be great fun too. :) (Some problems can arise at 2, again around 13 and, according to my wife, at 61.)

    And why aim low and try to create human-like consciousness from available biochemicals and systems? Far superior types could emerge from more focused, directed evolutionary experiments with molecules and systems that aren't used in standard Earth biology.
     
  13. uppi

    uppi Deity

    If you had a large enough sample of texts, examples of these would all be included in those texts. So you could gather the necessary context from the texts themselves. It is comparable with a child learning a language: children start with the superficial meaning of sentences and come to grasp the contexts as they are exposed to a wider range of speech and text.

    Nevertheless, I agree that it is going to be very tough, but that is the way you would go about it if you were to create an AI for translation. I doubt that it would be very good on current hardware and it is hard to say how far this can be pushed. I admit that it is somewhat of a bad example because of the difficulty, but I felt it was good enough to convey the point.
     
  14. Ferocitus

    Ferocitus Deity

    Texts are frozen in time. Any machine learning from texts will be inflexible, and unable to cope with, among many others, jargon, argot, and thieves' cant. There is no text or reference work that the machine can even call on to help in translation.

    If you don't know it, you might like to consider the enormous problems raised by the Chinese Room Argument.
    https://en.wikipedia.org/wiki/Chinese_room
    For some AI functions, that situation can be dismissed almost out of hand; for other translation objectives it presents an insurmountable obstacle.

    Humans also pick up a myriad of subtle visual cues during face-to-face interactions, and these can differ widely depending on where the conversation is taking place, and on the age and ethnicity of the speaker and listener.

    Suppose you and I were talking, face-to-face, and I said I was going down to the bank. It's very unlikely that you would think I was going to a building to withdraw money if we were near a river and I was carrying a fishing rod at the time. Nor would you even bother to ask which I meant - building or river bank. I might not even be carrying the rod that day, but you saw me with it yesterday. I'm not sure how a machine would cope with that missing context.
     
  15. Agent327

    Agent327 Observer

    I'm beginning to see a misunderstanding here. Data selection isn't about data: it's about selection. (And algorithms don't 'know' anything. Someone needs to tell them what to do. This is called programming.) It's similar to how a news program isn't about news, but primarily about news selection. There is no shortage of news, but a program needs to select which (minute) part of the overall collection of data that represents news will be shown.

    I gather you've never tried to use Google translate, or worked as a translator. The problem with translations is that most words have multiple meanings and the specific meaning of a word depends on the context. (Also, you may note that a translation program actually uses existing translations. In other words, it uses the work already done by actual translators.)

    I'm not even sure what a 'language algorithm' is supposed to be. But you are right that language is a difficult subject. Algorithms can't translate texts; what they can do is select a meaning from a fixed list of meanings. That selected meaning is as likely to be wrong as right. (In fact, more likely to be wrong, but let's leave that out for the sake of argument.) The problem is with understanding the context within which a word is used, as that determines its actual meaning. In other words, the meaning of any given word is determined by the words surrounding it (as well as the order of those words). My best guess is that even a linguist couldn't program a translation algorithm (assuming that linguist has programming skills). Even if the program had a list of the most probable meanings of any given word, that would not be particularly helpful with a translation. In short: in no way would the result be more intelligent than the person programming. (The program might have a wider vocabulary, though, since that would be a list, i.e. calculable.)

    That's one. Another is, of course, that languages tend to be 'updated', meaning that the meanings of actual words tend to shift over time. So any translation program would need regular updates. And again, you'd need an intelligence to execute that (or even to find the updated word meanings). Lastly, the argument seems to be that the calculus is 'more intelligent' than the inventor of the calculus. That is patently absurd.

    The difference, of course, is that a child has actual intelligence. Even at 2. A program, however, has no means to determine how a specific meaning derives from a specific context. That's because a program - unlike a child of 2 - lacks the capability of understanding anything. (Simply put: it doesn't see the connection between context and meaning.)

    Theoretically you may be right - except there is no way to go about it. So practically a computer translation gives a number of possible meanings you might have found anyway if you looked the words up yourself. In a dictionary.

    You can program an algorithm that 'understands' a + b = c, because that is not intelligence. It's logic. Oddly, logic derives from language. But language isn't logical. It has rules entirely of its own, completely unguided by logic. Unlike mathematics, the basis of all programming.
     
  16. uppi

    uppi Deity

    I was thinking about an AI that translates text. Face-to-face conversations are a level above that. However, it would still be (theoretically) possible to pick up these cues. For example, I could use videos of conversations from which an image recognition system could spot the fishing rod. Or I could collect your movement profile, from which I can discern that you go from this spot to the river almost every time and almost never to a bank building. Such cues can mislead and the algorithm might be wrong, but there are plenty of misunderstandings between humans as well.

    And any AI algorithm that deserves that name makes the selection with data. As I will explain below, the algorithm knows more about how to select than the programmer, because it is able to process vast amounts of data.

    I have done both, and I know about the problems that exist. But if you take a text and translate it, you have the same input as the AI, and the problem is to extract the context from the surrounding text. This is a very hard problem and I do not claim that there is an AI that can do this yet, but I see no particular reason why it should be impossible (it might be limited by available computing power, of course).


    By 'language algorithm' I mean rules that can compare two texts (in the same language) and say how close they are; that say how you can (or cannot) rearrange words; how to spot the structure of a sentence, and so on.

    Anyway, I think you have no idea how AI works. For an AI you do not get a linguist to formalize all he knows about the language and then put it into an algorithm. Rather, you would program an algorithm that can analyze texts and deduce these rules from them. With the former approach you are obviously limited by the knowledge of the linguist, but with the latter you can feed more and more texts into the algorithm to improve it beyond my own understanding of language.

    Of course you would need to update it (and the date when a text was written would be an important context when trying to translate it). But if you have the learning algorithm in place, there is no additional intelligence needed. You would feed the new text into the same algorithm as the old texts and let the AI gather the new meanings from those.

    It is not, if you think about it: the best chess program can beat any human. Surely it is more "intelligent" at playing chess than its programmers (who would probably be easily beaten by the world champion). It has not been conclusively proven yet, but I suspect the situation will soon be the same with Go.

    Since my argument about translation has met so many (not entirely invalid) objections, let me offer a less hypothetical scenario, one that I actually know works:
    Suppose I have a lot of devices which each provide a bunch of technical parameters, and I want to know which ones are broken. I have no idea what these parameters mean or how they are connected with broken devices. I have sent someone to look at a fraction of these devices to check them, and he has provided me with a list of which ones are broken and which are not (let's say he checked whether a light was blinking, which cannot be seen from far away). I can take the parameters and the list of broken devices, feed them into an AI and let it learn from these data sets. If I do this correctly, the AI now has a model of which of these parameters signify a broken device. Because I have only supplied it with the learning algorithm, I have no idea about that model. The guy I sent to check the devices never saw these parameters, so he cannot know anything about the model either. Therefore, the AI is now more "intelligent" at recognizing broken devices from far away than either of us. Of course, I can now try to understand the model the AI has generated, but first, I do not have to for the thing to work, and second, if the model is complicated enough, I might not even be able to understand it.
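
    A sketch of that scenario in Python (all parameter readings invented; scikit-learn's decision tree stands in for whatever learning algorithm one would actually use):

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Invented readings: [temperature, voltage, error_count] for each device.
    inspected = [
        [45.0, 11.9, 0], [48.0, 12.1, 1], [44.0, 12.0, 0],   # inspector: ok
        [71.0, 10.2, 9], [69.0,  9.8, 7], [75.0, 10.5, 12],  # inspector: broken
    ]
    labels = ["ok", "ok", "ok", "broken", "broken", "broken"]

    # The model is learned from the data; neither the programmer nor the
    # inspector ever wrote down a rule connecting parameters to breakage.
    model = DecisionTreeClassifier().fit(inspected, labels)

    # Devices nobody has inspected yet:
    print(model.predict([[46.0, 12.0, 1], [73.0, 10.0, 10]]))  # ['ok' 'broken']
    ```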



    So now back to language (ugh):

    That is a bold statement that is easily disproven: let's take the "bank" example from above. I feed an AI several texts which use both concepts and have accurate translations of these into a language that uses different words for the two concepts. From a dictionary, the algorithm can know which words in the other language can translate "bank". I program the learning algorithm in such a way that it only considers those usages of the word "bank" where it is clear what the translation is. From a simple word frequency analysis of the surrounding sentences, the algorithm will find that one translation of "bank" comes with words like building, money, door, deposits and so on. The other one will be surrounded by river, water, sand, etc. So if it now encounters a text without a translation, it can look at the words surrounding "bank" and then choose the word with which to translate it. The algorithm would not know what a bank actually is, but for a translation that would not be necessary. Note that I did not put the words "river" or "money" into the algorithm; I just instructed it to look at the surrounding words. The same procedure can be applied to any other word, including words I do not know myself.
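
    That procedure is simple enough to sketch directly (a tiny invented corpus with German as the target language; a real system would use vastly more text, and the stopword list is my addition, not part of the argument):

    ```python
    from collections import Counter

    # Function words carry little topical context, so ignore them.
    STOPWORDS = {"the", "a", "an", "to", "at", "on", "i", "we", "he", "with", "for"}

    def content_words(sentence):
        return [w for w in sentence.lower().split()
                if w not in STOPWORDS and w != "bank"]

    # Invented sentences where the correct German translation of "bank"
    # is already known (say, from an aligned bilingual corpus).
    labeled = [
        ("I deposited money at a bank branch", "Bank"),           # building
        ("the bank approved a loan for a building", "Bank"),
        ("we sat on the bank watching the river water", "Ufer"),  # riverside
        ("sand covered the bank along the river", "Ufer"),
    ]

    # Word-frequency profile of the words surrounding each known translation.
    context = {}
    for sentence, translation in labeled:
        context.setdefault(translation, Counter()).update(content_words(sentence))

    def translate_bank(sentence):
        # Choose the translation whose surrounding-word profile matches best.
        words = content_words(sentence)
        return max(context, key=lambda t: sum(context[t][w] for w in words))

    print(translate_bank("he walked to the bank to get money"))    # Bank
    print(translate_bank("the river flooded the bank with sand"))  # Ufer
    ```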

    The program does not really understand a + b = c (and to be fair, not many humans do). It just follows instructions. But you can write instructions for learning things like language rules. These rules do not have to be logical in any way; there just have to be rules that can be learned.
     
  17. Agent327

    Agent327 Observer

    I think you are beginning to grasp the immense problems translation poses. Try translating a manual with no pictures attached for explanation.

    Yes. Except algorithms don't know anything more than the person who programmed them.

    The future will no doubt be better tomorrow.

    Your whole program is based on the work already done by humans. Grammar, syntax, linguistics. Without that you can't even begin to start an algorithm on language. (And arguments that start with 'you have no clue' generally don't hold up well.)

    Ah, games. The basic rules of which any child can grasp. But you need a room-sized computer to 'analyze' them. Deep Blue isn't more 'intelligent' than its programmers: it can hold vastly more data than its programmers. It's an enhanced calculator, after all.

    Similarly a computer needs to have no clue about math while doing math. It just uses its memory. I'm not sure how you get from that to 'the program is more intelligent than the programmer'.

    You did prove something: that language is immensely more complex than math. I wouldn't hold my breath waiting to see a programmer write a translation program that makes sense of translations anytime soon.

    Exactly my point: programs don't know anything. They lack understanding. Understanding, like emotion, is not something programmable. A program simply regurgitates its input, applied to data. That's not knowledge, and it's not even learning.

    Let's finally be frank about one thing: a computer only makes the mistakes that a programmer put in. Any human can make mistakes entirely on their own. And making mistakes is an important part of any learning process. You can learn from it. No program can do that. It will simply make the same mistake over and over again - until a programmer decides to correct it.
     
  18. uppi

    uppi Deity

    Uhh...no.
    https://en.wikipedia.org/wiki/Reinforcement_learning
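
    The point of the link, as a minimal sketch (a toy one-state problem with arbitrary numbers): the program below starts out ignorant, gets penalized for its own mistakes, and changes its behavior with no programmer correcting it.

    ```python
    import random

    # Toy one-state task: action "left" fails (reward -1), "right" works (+1).
    # Nobody tells the program which is which; it finds out by trying.
    rewards = {"left": -1.0, "right": 1.0}
    q = {"left": 0.0, "right": 0.0}   # the program's own value estimates
    rng = random.Random(0)

    for step in range(200):
        # Mostly exploit the current best guess, sometimes explore at random.
        if rng.random() < 0.1:
            action = rng.choice(["left", "right"])
        else:
            action = max(q, key=q.get)
        # Try the action, observe the outcome (including its own mistakes)...
        reward = rewards[action]
        # ...and shift the estimate toward what actually happened.
        q[action] += 0.1 * (reward - q[action])

    print(q)                  # "right" ends up valued far above "left"
    print(max(q, key=q.get))  # right
    ```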
     
  19. Agent327

    Agent327 Observer

    I'm not sure what 'Uhhh... no', followed by a link, is supposed to argue. I was talking about bugs (mistakes), in case that wasn't clear. 'Learning programs' have nothing to do with that, and only confirm that it's the programmers that learn; the AI just executes its program. Seriously, it's the humans that do all the actual thinking. So 'uhhh... yes' would be a more appropriate reply.
     
  20. Ferocitus

    Ferocitus Deity

    Yes, if it was simple we could just scan in the hundreds of thousands of extant cuneiform tablets and the program would spit out translations.

    Without human help, the program wouldn't know which way up the tablet was supposed to be, or whether the writing runs left-right or top to bottom, or whether the fashion at the time was to spell out words in full, or whether it was from a time when using "text speak" type abbreviations was in vogue.
     
