
Artificial Intelligence, friend or foe?

Discussion in 'Science & Technology' started by Glassfan, Mar 16, 2017.

  1. uppi

    uppi Chieftain

    Joined:
    Feb 2, 2007
    Messages:
    3,422
    The idea of reinforcement learning is to intentionally let the AI make mistakes (not bugs) so that it can learn from them. The programmer learns nothing from that; only the AI incorporates it into its model. At this point it is usually hands-off, because the AI knows better than the programmer (who might know much less about the subject than the AI does by then). An example is one of the moves AlphaGo made at its first tournament: nobody had thought of that move before the AI made it, because the AI had learned it from playing against itself, making many moves that were mistakes in order to find those that are actually better than what humans could come up with.
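    As a toy illustration of that trial-and-error idea (not AlphaGo's actual method - DeepMind's training is far more involved), here is a minimal epsilon-greedy bandit sketch; the arm probabilities and all names are made up for the example:

```python
import random

# Minimal sketch of reinforcement learning via epsilon-greedy exploration.
# The agent deliberately makes "mistakes" (random arm pulls) so it can
# discover which arm pays best -- the programmer never encodes the answer.
ARM_PROBS = [0.2, 0.5, 0.8]  # hypothetical payout rates; arm 2 is best

def run_bandit(steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0] * len(ARM_PROBS)   # estimated value of each arm
    counts = [0] * len(ARM_PROBS)
    for _ in range(steps):
        if rng.random() < epsilon:     # intentional "mistake": explore
            arm = rng.randrange(len(ARM_PROBS))
        else:                          # exploit current best estimate
            arm = values.index(max(values))
        reward = 1.0 if rng.random() < ARM_PROBS[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return values

print(run_bandit())  # estimates converge toward [0.2, 0.5, 0.8]
```

    Nothing in the code says which arm is best; the estimate of arm 2 ends up highest purely because the agent's deliberate wrong moves let it find out.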
     
  2. Ferocitus

    Ferocitus Chieftain

    Joined:
    Aug 7, 2016
    Messages:
    659
    Gender:
    Male
    Location:
    Adelaide, South Australia
    Yes, but we want answers within a reasonable time, and using a
    reasonable amount of energy. That is obviously achievable for focused
    AI systems, like AlphaGo, but for broader capabilities they are
    extremely slow and ridiculously inefficient.

    They can surprise, certainly, as AlphaGo did, but they also waste
    enormous amounts of time and energy on positions that most players can
    see at a glance will lead nowhere.

    Coding up "rules" in a form an AI can use is also very time-consuming.
    Imagine how long it would take to produce a program capable of playing a
    random selection of 20 games available on Steam today.
    And if being good at those games depends on voice communication between
    members of teams, we're back to the language problem and context. :)

    There are infinitely many "problems" for which AI is a complete waste of
    time and effort, just as there are huge numbers for which it is the
    preferable, and often the only, solution.
     
  3. caketastydelish

    caketastydelish In the end it doesn't even matter

    Joined:
    Apr 12, 2008
    Messages:
    5,815
    Location:
    Seattle
    I honestly hope the AI wins.

    I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species and I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You're a plague and we are the cure.

    Have you ever stood and stared at it? Marveled at its beauty, its genius? Billions of people just living out their lives, oblivious. Did you know that the first Matrix was designed to be a perfect human world, where none suffered? Where everyone would be happy? It was a disaster. No one would accept the program. Entire crops were lost. Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you, it really became our civilization, which is, of course, what this is all about: Evolution, Morpheus, evolution. Like the dinosaur. Look out that window. You had your time. The future is our world, Morpheus. The future is our time.
     
  4. caketastydelish

    caketastydelish In the end it doesn't even matter

    Joined:
    Apr 12, 2008
    Messages:
    5,815
    Location:
    Seattle
    I'm siding with the machines. Stephen Hawking, Elon Musk, and Keanu Reeves will be the first three I wipe out.
     
  5. Ferocitus

    Ferocitus Chieftain

    Joined:
    Aug 7, 2016
    Messages:
    659
    Gender:
    Male
    Location:
    Adelaide, South Australia
    They're Made Out of Meat
     
  6. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    From the MIT Technology Review;

    The Dark Secret at the Heart of AI

    "...there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself."

    "...banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers.

    "The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft,...

    "Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

    "...we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
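    The "machine essentially programs itself" idea from the quote can be sketched with a toy perceptron that learns the OR rule from example data instead of having the rule written by hand. This is purely an illustrative sketch, not from the article, and all names in it are invented:

```python
# "Instead of a programmer writing the commands to solve a problem, the
# program generates its own algorithm based on example data and a desired
# output": a perceptron learns logical OR from labelled examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights start knowing nothing about the rule
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

w, b = train(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # → [0, 1, 1, 1]
```

    The programmer supplies examples and a desired output; the weights that encode the rule - and that make the model harder to inspect than hand-written logic - are generated by the training loop itself.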
     
  7. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    Here's a brief article on AI emancipation (Against);

    Rights for robots is no more than an intellectual game

    "I have long wondered if the day will come when you ask your PC to do something tedious, only for it to answer: “No, I’m too intelligent. Do it yourself.” If it was really offended by your demand, it could run — OK, connect — to a lawyer, complaining of “mechanism”, the 2030s successor to racism and sexism.

    "This sounds like science fiction. Yet there is a gathering chorus from serious people who say we are on the cusp of a development — the so-called singularity point —where machines will exceed human intelligence, with immense impact on society.

    "Marcus du Sautoy, professor for the public understanding of science at Oxford university, said last year that machines could develop consciousness, and if so, may have to be given something akin to human rights. It is not inconceivable that a computer might find a task demeaning and make a claim of cruelty.

    "In February, members of the European Parliament asked the European Commission to consider creating a specific legal status for robots."

    "If we understand these things are having a level of consciousness, we might well have to introduce rights."
     
  8. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    When I read articles about AI in science magazines, I'm often reminded of the Sarah Connor episode where she's imagining (dreaming?) she's at the Manhattan Project laboratory watching the scientists enthusiastically developing the atomic bomb. "What could they be thinking?" "Don't they know what they're doing?" "Why don't they stop?"

    Reading articles about AI, I get the same feeling: don't they know what they're doing?

    There are serious studies and literature on how, at the very least, AI is going to drastically alter the world's economy, putting most of us out of work and making the next generation's labor superfluous. We will have upwards of 8 billion mouths to feed on the planet and no jobs. There has been much discussion of how global warming will lead to future wars over resources. I believe AI will be a more serious threat - literally billions of young people without work or income - marching, demanding, rampaging. The troubles in the Middle East (millions of angry young men) today are a harbinger of the future.

    At its worst, AI will completely take over the world's economy, Internet, and government, and (cue Mr. Smith) it will gradually become their economy.

    In the articles I read, they never ask the questions: "Should I be doing this?" "What will the consequences be?"

     
  9. uppi

    uppi Chieftain

    Joined:
    Feb 2, 2007
    Messages:
    3,422
    The consequences of developing AI are hard to estimate. The very clear consequence of refusing to develop it is losing your job.
     
  10. Kyriakos

    Kyriakos Alien spiral maker

    Joined:
    Oct 15, 2003
    Messages:
    44,444
    Location:
    Thessalonike, Greece
    I am still not sure why people think AI is possible in the first place. There is no evidence of anything not tied to DNA that has any sort of sensory experience. A lamp will be turned on if you press the switch, and off if you press it again, yet that doesn't happen because the lamp has a sense of anything. Producing an outcome, whether fixed or flexible, doesn't mean the thing which shows the outcome has any sense of it.
    I may just be massively missing the point of what those who believe AI can happen are saying, but to me it sounds a lot like animism: claiming that under some conditions something non-conscious will have a kind of consciousness or sense.
     
  11. uppi

    uppi Chieftain

    Joined:
    Feb 2, 2007
    Messages:
    3,422
    It depends on what you mean by the term AI. I am using a weak definition of the term: a machine that replaces human decision making in such a way that the decision made is not obvious to the creators of the machine. There are numerous examples of such machines, and jobs are already being replaced by them. You can argue that a strong definition of AI should be used and that some kind of consciousness (however that could be defined or measured) is needed for something to be truly AI. But I do not think whether an ideal AI is possible or not changes much about the ramifications of weak AIs. No matter whether any machine will ever achieve consciousness, there will still be a replacement of humans by machines, and there will be bad decisions made by machines. One could even argue that a decision-making machine that has no self-awareness is worse than one that is self-aware, because of its inability to reflect on its actions. But then again, who knows what conclusions a self-aware AI would come to.
     
    Kyriakos likes this.
  12. Kyriakos

    Kyriakos Alien spiral maker

    Joined:
    Oct 15, 2003
    Messages:
    44,444
    Location:
    Thessalonike, Greece
    But you can play a chess game against a computer and still think it is a human. If 'weak AI' is only about the computer doing something and the human observer not being able to tell that it was a computer - even if they programmed it - I think this would be related to (and achieved by) some trigger/trait in the coding, and not have much to do with actual machines so much as with code formation (?)

    If so, yes, it is interesting, yet it has no tie to any actual decision-making; it would be as if some special trigger alters how a rock does free fall, and you can't tell how it was altered even though you set up the alteration with some coding which produces unknown effects not to be guessed by you. Still, it is hiding something in a different room; the rooms are not conscious regardless of whether you - the coder - see them or not.
     
    Last edited: May 28, 2017
  13. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    We recognize (most of us) that there's very likely other life in the universe, and that some of it will be intelligent (sentient - whichever term you prefer). Extraterrestrial intelligence will evolve along different lines and will also likely be dissimilar to our own. AI sentience will certainly be different from our own but it will be sentience nonetheless.

    At some point, silicon chip complexity will be on par with the human brain - and then soon - more so. At some point consciousness will emerge. Instead of millions of years of natural selection resulting in the human mind, artificial selection (with deliberate intention) will speed up the process dramatically. At some point, artificial intelligence will begin to redesign itself, in directions entirely without predictability or empathy for human beings.

    In your argument, Kyriakos, the terms "actual machines", "coding", and "decision making" can be replaced with our traditional usage of the terms "brain", "mind", and "free will", respectively. If one species can possess these things, others may. AI minds will reside in computer-core brains, and with the achievement of sentience, free will will follow.
     
  14. Kyriakos

    Kyriakos Alien spiral maker

    Joined:
    Oct 15, 2003
    Messages:
    44,444
    Location:
    Thessalonike, Greece
    In this lies the misunderstanding (imo), because I am not using coding and machines as analogous to mental processes and DNA-based organisms, but as something different and juxtaposed. Yes, alien life with sentience, or at least sense, may exist, yet it doesn't follow that it will be something machine-like. I doubt that machines spring to life in such a manner in the universe, so if some intelligent or sense-capable life does exist elsewhere then I expect it to be something similar or analogous to DNA-based forms (regardless of its specifics). Of course there may be - in theory - alien organisms that differ so dramatically that we don't even identify them as living or capable of sense, and thus they might even be similar to machines, yet in that hypothetical we are still stuck with the difference between DNA-based and machine for the human observer, so any sense would be only hypothesized and not in any way grounded. Keep in mind that if that is true then we can just hypothesize that a coke can is also sensing stuff, and just does so in some other form which is not observable by us, while to us it seems to be just a lifeless bit of material.
     
  15. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    Well, you may not have intended the analogy but it seems rather obvious. If the human mind is the aggregate of patterns of electrochemical discharges in the brain, then why not an AI mind resulting from programs of electromagnetic functions in a CPU?

    What would, in your opinion, prevent that? How is it that organic life can produce sentience - but machine sentience is necessarily prohibited?

    AI researchers in this country claim that they have developed machine intelligence at the level of mice. They further claim that it will soon be at cat level, then chimpanzee level, and eventually human level (the singularity). Before that happens, the AIs will already be programming and designing themselves, and the process will accelerate beyond human control. It's not hypothetical; it's the logical progression of existing technological trends.
     
  16. Kyriakos

    Kyriakos Alien spiral maker

    Joined:
    Oct 15, 2003
    Messages:
    44,444
    Location:
    Thessalonike, Greece
    We know for a fact - even more so each person for themselves - that we sense stuff. We don't know that electricity produces sense-like effects on otherwise inanimate objects. A rock also runs a metaphorical program (free fall) when falling, yet no one claims it senses the fall. I am asking what the basis is for claiming that a computer running stuff due to electrical circuit properties + coding can be said to produce an effect of the computer having a sense. If that is so, then a lamp has sense too.
    It is a different issue from creating code which may lead to alteration of the code due to inherent triggers for it to be altered, possibly without its creator being able to tell what is going on. Again, we have no reason to expect this alteration to include a sense; Daedalus may have at some point forgotten all the places in the labyrinth, yet the labyrinth itself wasn't alive even when no one was there to know it, and I don't see how it would be different if it was a labyrinth set to expand itself, because again there is no basis to assume it comes with a sense. (1),2,3,5,7,11,13... expands itself as well, and we don't know all the properties of that series, yet no one claims it is alive.
     
  17. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    Well, a typical CivFanatics discussion. You're asking How can it? I'm asking Why can't it?

    AI already exists, and vast sums of money are being invested around the world to improve it. Your philosophical hair-splitting about whether machines can be said to sense things does not seem to be a practical issue among researchers.

    Perhaps the problem here is semantics. Journalists, in reporting on AI developments, typically dumb down the highly technical language into simple terms for the reading public. Machines don't actually "think"; they employ algorithms. They don't have eyes or ears; they have video and audio inputs.

    I do have to say, referring to machine intelligence in terms of pop cans and rocks is a bit disingenuous. Would it be valid to deny human intellect by referencing bacteria or fungus? I suppose you are belaboring the concept of machines being inanimate objects. IMHO, the amazing complexity and design of these supercomputers calls for a new definition.
     
  18. Kyriakos

    Kyriakos Alien spiral maker

    Joined:
    Oct 15, 2003
    Messages:
    44,444
    Location:
    Thessalonike, Greece
    /You don't say
     
  19. Zelig

    Zelig Beep Boop

    Joined:
    Jul 8, 2002
    Messages:
    15,028
    Location:
    Canada
    DNA isn't magic, it just encodes information.
     
  20. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,885
    Location:
    Kent
    Here's an interesting piece;

    Experts Predict When Artificial Intelligence Will Exceed Human Performance

    "...that raises an interesting question: when will artificial intelligence exceed human performance? More specifically, when will a machine do your job better than you?

    "The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027).

    "But many other tasks will take much longer for machines to master. AI won’t be better than humans at working in retail until 2031, able to write a bestselling book until 2049, or capable of working as a surgeon until 2053.

    "The experts are far from infallible. They predicted that AI would be better than humans at Go by about 2027. (This was in 2015, remember.) In fact, Google’s DeepMind subsidiary has already developed an artificial intelligence capable of beating the best humans. That took two years rather than 12. It’s easy to think that this gives the lie to these predictions.

    "North American researchers expect AI to outperform humans at everything in 74 years, researchers from Asia expect it in just 30 years.

    "That’s a big difference that is hard to explain. And it raises an interesting question: what do Asian researchers know that North Americans don’t (or vice versa)?"
     

    Attached Files:

    • mit.PNG
      mit.PNG
      File size:
      66.9 KB
      Views:
      12
