Artificial Intelligence, friend or foe?

Discussion in 'Science & Technology' started by Glassfan, Mar 16, 2017.

  1. caketastydelish

    caketastydelish By any means necessary

    Joined:
    Apr 12, 2008
    Messages:
    8,513
    Gender:
    Male
    If Robocops aren't racist, they could be a legit upgrade in America, at least to some extent.
     
  2. caketastydelish

    caketastydelish By any means necessary

    Joined:
    Apr 12, 2008
    Messages:
    8,513
    Gender:
    Male


     
    Kaitzilla likes this.
  3. Glassfan

    Glassfan Mostly harmless

    Joined:
    Sep 17, 2006
    Messages:
    3,940
    Location:
    Kent

    We've discussed the UAE Robocops above. They are not true AI, but rather programmable machines.

    They would only be as good or as bad as the corporation's proprietary software, written in conformity with the rules, regulations, and human-rights standards of the local police department's culture. In the UAE, this would presumably include sexism and religious bigotry rather than overt racism.

    In America, cities are generally Blue territory. This might suggest extreme PC behavior on the part of robotic cops. One wonders how that would work out.
     
  4. Truthy

    Truthy Ambulating

    Joined:
    Oct 9, 2010
    Messages:
    1,934
    Late response, but I didn't even notice this thread until now (can we not have an AI thread in OT?). Anyway, I don't think the "singularity is nigh" at all, but I also don't think this paper is a very alarming rebuke of the state of ML research.

    Myth 1: TensorFlow is a Tensor manipulation library - In the worst case, this is an implementation problem that the TensorFlow and PyTorch teams can fix (and maybe have already fixed). However, the issue also strikes me as greatly overstated. It's extremely rare for anyone to use Newton's method or anything else that needs the full Hessian. They also mention SVM optimization, but everyone uses Liblinear or LibSVM for that, which are super well-designed and efficient toolboxes. And no one uses deep learning libraries for things like Lasso regression. The severity and relevance of this point seem off to me.
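
    For a sense of scale on the Hessian point, here is a minimal sketch of my own (assuming PyTorch 1.5+ and its torch.autograd.functional.hessian helper) of what asking autograd for a full Hessian looks like. It works, but the result is a dense d-by-d object, which is a big part of why almost nobody reaches for Newton-style methods when training networks:

        import torch
        from torch.autograd.functional import hessian

        def loss(w):
            # Scalar objective with a non-trivial second derivative.
            return (w ** 2).sum() + torch.sin(w).sum()

        w = torch.randn(5)
        H = hessian(loss, w)   # shape (5, 5); cost grows quickly with dimension
        print(H.shape)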

    Myth 2: Image datasets are representative of real images found in the wild - They point out some important limitations of deep learning, but this isn't really a myth. Everyone knows this isn't true. And ML people are generally quite concerned with the distributional limitations of their datasets.

    Myth 3: Machine Learning researchers do not use the test set for validation - This one's pretty bad, and I think the criticism is largely correct. A lot of people, both inadvertently and deliberately, leak information from the test set in one form or another, causing overfitting and overly optimistic performance estimates.
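
    To make the leakage point concrete, here is a hypothetical sketch of the split discipline that avoids it (scikit-learn and its breast_cancer toy dataset are my choices, not anything from the paper): hyperparameters are tuned against a validation split, and the held-out test set is scored exactly once at the end.

        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        # Carve off a test set first and do not touch it during model selection.
        X_trainval, X_test, y_trainval, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0)
        X_train, X_val, y_train, y_val = train_test_split(
            X_trainval, y_trainval, test_size=0.25, random_state=0)

        best_model, best_val = None, -1.0
        for C in [0.01, 0.1, 1.0, 10.0]:   # hyperparameter search sees val only
            model = LogisticRegression(C=C, max_iter=5000).fit(X_train, y_train)
            val_acc = model.score(X_val, y_val)
            if val_acc > best_val:
                best_model, best_val = model, val_acc

        # Reported once, at the very end.
        print("test accuracy:", best_model.score(X_test, y_test))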

    Myth 4: Every datapoint is used in training a neural network - I guess this one is kind of a myth? But I'm not sure the statement "Shockingly, 30% of the datapoints in CIFAR-10 can be removed, without changing test accuracy by much" is actually shocking. It's also something that a lot of people study. If I recall correctly, the "forgetting" phenomenon they mention has gotten a lot of attention through things like the "information bottleneck" literature, and sample importance is a hot topic in its own right (for example, computing Shapley values for individual samples). So this just isn't really an "alarming" issue.
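
    The 30% claim is also cheap to sanity-check at toy scale. This is a stand-in experiment of my own (sklearn's digits dataset instead of CIFAR-10, logistic regression instead of a network), not the paper's setup:

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0)

        # Drop roughly 30% of the training points at random and retrain.
        rng = np.random.default_rng(0)
        keep = rng.random(len(X_train)) > 0.3

        full = LogisticRegression(max_iter=5000).fit(X_train, y_train)
        sub = LogisticRegression(max_iter=5000).fit(X_train[keep], y_train[keep])

        print("full:", full.score(X_test, y_test),
              "~70% subset:", sub.score(X_test, y_test))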

    Myth 5: We need (batch) normalization to train very deep residual networks - I don't know much about resnets or how important batch normalization is. Overall, this myth seems pretty minor and esoteric to me.
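
    For anyone else who hasn't looked at these, here is a minimal sketch of the standard residual block the myth is about (assuming PyTorch; the use_bn flag is only there for illustration). The debated question is whether the BatchNorm layers are essential for training very deep stacks of such blocks, or merely convenient:

        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            def __init__(self, channels, use_bn=True):
                super().__init__()
                norm = nn.BatchNorm2d if use_bn else nn.Identity
                self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=not use_bn)
                self.bn1 = norm(channels)
                self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=not use_bn)
                self.bn2 = norm(channels)

            def forward(self, x):
                out = torch.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                return torch.relu(out + x)   # skip connection

        block = ResidualBlock(16, use_bn=False)   # the "no normalization" variant
        print(block(torch.randn(2, 16, 8, 8)).shape)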

    Myth 6: Attention > Convolution - I think you can interpret convolution as a type of attention. But otherwise... so what? It's not like attention isn't a super important feature of recent models (transformers, for example), or like people don't already consider convolution one of the most important building blocks.
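
    To spell out that first sentence a bit, here is a small construction of my own (in PyTorch) showing a 1D convolution rewritten as an attention-like weighted sum over a local window, where the weights depend only on relative position and not on the content of the inputs:

        import torch
        import torch.nn.functional as F

        x = torch.randn(1, 1, 10)   # (batch, channels, length)
        w = torch.randn(1, 1, 3)    # kernel of size 3

        # Reference: the library convolution (cross-correlation, as usual in DL).
        ref = F.conv1d(x, w)

        # The same thing as an explicit weighted sum over sliding windows,
        # with one fixed weight per relative offset.
        windows = x.unfold(2, 3, 1)                          # (1, 1, 8, 3)
        manual = (windows * w.view(1, 1, 1, 3)).sum(dim=-1)  # (1, 1, 8)

        print(torch.allclose(ref, manual, atol=1e-6))        # True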

    Myth 7: Saliency maps are robust ways to interpret neural networks - I think this is one of those cases where by the time someone has written a paper like this, a lot of the field's insiders are well aware of the criticism.
     
  5. hobbsyoyo

    hobbsyoyo Deity

    Joined:
    Jul 13, 2012
    Messages:
    22,931
    Gender:
    Male
    Location:
    The pale blue dot.
    You're free to start one. I'd be an enthusiastic participant.
     
    Truthy likes this.
