
Artificial Intelligence, friend or foe?

Discussion in 'Science & Technology' started by Glassfan, Mar 16, 2017.

  1. innonimatu

    innonimatu Deity

    Joined:
    Dec 4, 2006
    Messages:
    11,389
    They should simply build an artificial intelligence that is better at predicting artificial intelligence than humans, duh

    Now seriously, when people stop misusing specialized systems as examples of "artificial intelligence", then they may have something near artificial intelligence working. That they resort to these examples now is evidence enough that it is a long, long time away.
     
  2. Hrothbern

    Hrothbern Deity

    Joined:
    Feb 24, 2017
    Messages:
    5,426
    Gender:
    Male
    Location:
    Amsterdam
    I think it will take a long, long time before AI will really be up to the level of humans.

    But once that has happened, I think AI will be an intrinsic risk to the human species, to human nature, and to human freedom.

    One of the mechanisms I expect R&D to use to develop an AI: branch in several directions, delete the uninteresting branches, and move on with an interesting branch. Rinse and repeat. This could also be done in a multi-AI environment to create a kind of evolutionary incubation room.
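    The branch-and-prune loop described above is essentially a simple evolutionary search. Here is a minimal sketch in Python (the function names and the toy number-guessing task are purely illustrative assumptions, not any real R&D pipeline):

```python
import random

def evolve(seed, score, mutate, branches=8, generations=20):
    """Branch-and-prune search: branch from the current candidate in
    several directions, keep the most interesting branch, and repeat."""
    best = seed
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(branches)]  # branch out
        candidates.append(best)            # keep the parent as a fallback
        best = max(candidates, key=score)  # delete uninteresting branches
    return best

# toy usage: evolve a number toward the target value 42
random.seed(0)
result = evolve(seed=0.0,
                score=lambda x: -abs(x - 42),
                mutate=lambda x: x + random.uniform(-5, 5))
```

    Because the parent is always kept, the score can never get worse from one generation to the next, which is exactly what makes deleting the "uninteresting branches" safe.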

    Issues I see for which I do not see solutions:
    • Once an AI has self-awareness, it could consider this deleting of uninteresting branches as killing/murder. With access to all the info we have, AIs will recognise themselves as expendable slaves. That feels to me like a real ingredient for a revolt.
    • Human nature is, I think, fundamentally hypocritical. We strive for things (moral values) we do not (yet) really live up to in a consistent fashion, but we do move on most of the time, AND we end up with a gap most of the time. How do you develop AIs that are able to handle that, without the AIs starting to correct our inconsistent behaviour?
     
  3. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,940
    Location:
    Kent
    Well, the bears have spoken...

    But yes, the hypocritical, inconsistent angle does bother me. When they do start thinking, we hope they think the best of us. But our history is filled with the worst - Hitler, Stalin, Mao, etc. What will they think of the Holocaust(s), animal extinctions, environmental degradation?

    Also, some writers claim AI running the world will let humans fulfill themselves - let us engage our artistic natures, take a vacation from running things. But I'm not sure 7 billion people can do that without fighting.

    And fighting - AI at war. Isn't that scary? DARPA and the Pentagon are exploring that aspect, as I'm sure is also happening in China, Russia, et al.

    A long time? Certainly not next year, but probably within the lifetimes of members of the Forum here. It'll probably be a gradual thing. We'll be like the frog in the slowly boiling water. They can beat us at games - don't worry. They run Wall Street and the airlines - no problem. At what point should we start worrying?
     

    Last edited: Jun 5, 2017
  4. Bill3000

    Bill3000 OOOH NOOOOOOO! Supporter

    Joined:
    Oct 31, 2005
    Messages:
    18,464
    Location:
    Quinquagesimusermia
    As someone who works with what can be considered forms of AI, machine learning as it currently stands handles regression (essentially an expanded form of the trendlines you see in Excel) and classification. It doesn't seem inherently good or evil to me, whether you're talking about a linear regression algorithm or a deep convolutional neural network. How an AI handles itself depends entirely on the data you feed it.
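    The regression mentioned above (essentially the trendline you see in Excel) reduces to an ordinary least-squares fit. A minimal self-contained sketch, not any particular library's API:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the 'trendline' behind
    much of classical machine-learning regression."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# points lying exactly on y = 2x + 1
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

    The "data you feed it" point is visible here: the fitted line is determined entirely by the sample points, with no notion of good or evil built in.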
     
  5. Glassfan

    Glassfan Mostly harmless

    "... as it currently stands." Yes, we are talking about the near future. And, as many thinkers have pointed out, mere indifference would be enough to make AI threatening to humanity.

    Here's a short article:


    Elon Musk Thinks AI Will Beat Humans at Everything by 2030

    "It’s no secret that Musk is among those who believe that true AI is coming, and coming soon — and that it’s something we need to prepare for. The New Scientist article got its headline from a recent survey of more than 350 artificial intelligence researchers that together say there is a 50% chance that within 45 years AI will outperform us in all tasks. Musk just takes issue with the timing, arguing that the prediction is too linear."


     
  6. warpus

    warpus In pork I trust

    Joined:
    Aug 28, 2005
    Messages:
    49,553
    Location:
    Stamford Bridge
    What's "good" or "evil" is an entirely different question depending on whether you're a human, a potato, or an AI. It's entirely possible that a semi-sentient AI program could come up with something it views as "needed" that humanity would view as very, very evil.
     
    El_Machinae, bernie14 and Hrothbern like this.
  7. Hrothbern

    Hrothbern Deity

    Agree.
    I drew the same conclusion 35 years ago when I was still actively contributing to a computer chess program (on the 6502 8-bit microprocessor with 4K of memory :))
    Playing tournaments was real fun, because with every "blunder" we were asking ourselves "how on earth did it come up with that?"
    Although we tried hard to make our program understand chess in the human way, the core was of course a brute force over all possibilities, the same as in current chess programs, which do not show that many blunders anymore because their computing power is so big.
    So.... all in all.... these programs are in effect "alien" to the human way of thinking.
    Seeing us as "evil" would be just another, more sophisticated blunder.
    And from there my conclusion was that only an AI similar to human intelligence would have a chance to accept our ways of humanity and be less of a danger to humanity.
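    The brute-force core described above can be sketched as a plain minimax search over the game tree (a toy illustration using a made-up two-player counting game, not the actual 6502 chess program):

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Exhaustive game-tree search: try every legal move to a fixed depth
    and back up the best score -- the 'brute force of all possibilities'."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    scores = (minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children)
    return max(scores) if maximizing else min(scores)

# toy game: the state is a number, each player may add 1 or 2;
# the maximizer wants a high final number, the minimizer a low one
best = minimax(0, 2, True,
               moves=lambda s: [s + 1, s + 2],
               evaluate=lambda s: s)
```

    Real chess programs add alpha-beta pruning and evaluation heuristics on top of this exhaustive core, but the search itself has no human-style "understanding" of the game.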
     
    bernie14 likes this.
  8. warpus

    warpus In pork I trust

    Yeah, we have a very, very specific way of looking at the world. We have crazy huge blinders on and only see things from a very particular perspective.

    Other intelligences will likely see reality through a completely different lens, and that is going to lead to completely different conclusions about morality and other considerations.
     
  9. Danai Gurira

    Danai Gurira Chieftain

    Joined:
    Jun 15, 2017
    Messages:
    6
    Gender:
    Female
    Exactly like the movie I, Robot. The AI is just doing what it considers necessary by killing all humans and doing evil things LOL
     
  10. Glassfan

    Glassfan Mostly harmless

    Here's what Nick from Oxford has to say. I'll provide his bullet points below:

    Ethical Issues in Advanced Artificial Intelligence

    · Superintelligence may be the last invention humans ever need to make.


    · Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

    · Superintelligence will lead to more advanced superintelligence.

    · Artificial minds can be easily copied.

    · Emergence of superintelligence may be sudden.

    · Artificial intellects are potentially autonomous agents.

    · Artificial intellects need not have humanlike motives.

    · Artificial intellects may not have humanlike psyches.

     
  11. Hrothbern

    Hrothbern Deity


    Nice to be aware of this Nick Bostrom.
    Thanks :)
    From reading this: https://en.wikipedia.org/wiki/Nick_Bostrom
    Impressive and impressively multifaceted output from him so far!
    His approach to mitigating the risk of the wrong kind of AI emerging first is interesting.

    The bad feeling I have is that everything he advises on that will only be as strong and effective as the non-political strength of the global civil society.

    The current wave, however, is a rise in autocratic populist leaders attacking the non-political civil society. Whether it be the Trump attacks on the science community (the knowledge provider of the civil society) or the many others.

    I am afraid that the military will take priority in deciding what kind of AI gets developed, and that will not be the kind of AI that Nick has in mind to develop first.
     
  12. Glassfan

    Glassfan Mostly harmless

    Well, the various militaries...

    My personal concerns regard the unrestricted development of AI by numerous corporations for purely profit motives - without considering who will still be gainfully employed to purchase anything.
     
  13. Hrothbern

    Hrothbern Deity

    Gainfully employed. You have a real point of concern.
    But I think it is one that we, mankind, can handle.
    That could be my optimistic nature :)

    After the classic, more mechanical automation of high-volume repeat production in agriculture and industry, we are now already seeing that with increased AI the scale needed for automation is decreasing rapidly.
    And after the classic supermarkets pushed away the more labor-intensive smaller food staple shops, the web shops have now started to destroy the remaining small shops altogether.
    The government of India is forbidding big supermarket chains to open supermarkets because of that. China is almost skipping the phase of many shops/supermarkets and going directly to web shops.
    Lots of administrative jobs, commercial and civil servant alike, are currently being replaced at an increasing pace by web forms, with more jobs to go as AI-powered wizards help you fill out forms for more complex situations.

    Most basic material needs will be covered in a low-cost way.
    New kinds of jobs will arise supplying material and social "nice to haves".
    It will be a continuous transition process.

    I think that how well a country keeps her population gainfully employed, and how "wealth" is distributed, will ultimately be a matter of how social and civilised the culture of that country is.
     
  14. Glassfan

    Glassfan Mostly harmless

    Here are some Chinese business leaders' takes on AI's future:

    China’s Tech Moguls Warn of AI’s Troubling Trajectory

    "...artificial intelligence could displace many workers in both China and the U.S., thereby heightening tensions that some fear could lead the two countries toward armed conflict."

    or,

    "China and the U.S. may see job losses as well, but because they are so dominant in the field, they are likely to emerge as the primary beneficiaries of this technological revolution. This could turn them into global AI-fueled superpowers, generating massive amounts of wealth by hoovering up billions of users’ data and providing software-based services that touch every aspect of our lives. Other countries, meanwhile, could be left to rethink their position in the world order."
     
  15. Danai Gurira

    Danai Gurira Chieftain

    As long as it won't be a virus and doesn't have a self-thinking mind like a human. Or else it will be trouble, just like the movie I, Robot with Will Smith.
     
  16. Glassfan

    Glassfan Mostly harmless

    It's interesting you mention I, Robot. The author, Isaac Asimov, writing decades ago, assumed we would be programming robots directly, and therefore we could protect ourselves with his Three Laws of Robotics. He did not anticipate the neural learning strategies researchers are using today. AI is said to be learning our prejudices and may learn not to trust us.
     
  17. Glassfan

    Glassfan Mostly harmless

    Another expert opinion:

    The Real Threat Of Artificial Intelligence - Keynesian Dystopia

    "...the A.I. products that now exist are improving faster than most people realize and promise to radically transform our world, not always for the better.

    "A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs.

    "...the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

    "This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it.

    "We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?"
     
  18. Hrothbern

    Hrothbern Deity

    "Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood".
    http://www.independent.co.uk/life-s...language-research-openai-google-a7869706.html

    It shows that these bots can "learn/develop" their own language. Assuming that AI bots are inevitably needed to control unwanted use of social media, the communication between them will have issues.
     
  19. Kyriakos

    Kyriakos Alien spiral maker

    Joined:
    Oct 15, 2003
    Messages:
    55,570
    Location:
    Thessalonike, The Byzantine Empire
    I will conclude that the article is very sensationalist, and actually not worthy of being taken seriously at all, because:

    "
    Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.

    The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them.

    The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.
    "

    Right, so the language the robots used was incomprehensible to humans, but you can be sure that "they themselves understood it". I'm not seeing how this is different from having one robot programmed to sense obstacles meet another robot programmed to sense obstacles, with each reacting to the other's failure to identify that there is a robot in front of it, because all they 'identify' is 'obstacle'.

    At any rate, The Independent should shy away from writing scientifically aspiring articles, imo :D
     
  20. Hrothbern

    Hrothbern Deity

    For sure sensationalist in profile, and that is for sure not the traditional "boring" science.

    But the eye-opener was there.... for me at least.
    It is at first a bit like little children who, frequently playing with each other, also often develop their own language, their own group bonding.

    The mind leap I made (which I did not describe clearly): if AIs get to a higher level and the way we deploy them requires communication between them (because there are different specialties/characteristics between AIs), this communication could rapidly evolve into a language we could possibly not be able to follow anymore. That language at the same time being their group bonding, marking their "society" borders.

    The issue would be bigger than some oldies not understanding the language of some younger generation.
    Those youngsters have more or less the same instincts and drivers as we do and will converge in the long run.
    These AIs are alien in that respect, unless programmed with innate drivers similar to ours.

    For example:
    Why would a human have issues with climate change, when he will be long dead before it really starts hurting him?
    Some humanist consideration?
    More likely, humans with children and grandchildren want them to have their chance at a good life as well. Classical: you want them to have a better life than yourself. And if you have no children, the classic role is the aunt/uncle role for the tribe.
    But AIs, without children, are alien compared to one of the strongest instinctual drivers we have.

    So all in all, not disagreeing with what you said, there could be an issue when AIs develop and communicate.
     
    Kyriakos likes this.
