Artificial Intelligence, friend or foe?

Inevitable, really;

Real-life Robocops will soon replace human police


The first robot police officer will be on patrol in Dubai, the wealthy United Arab Emirates city, by May this year, Dubai Police have confirmed.

Members of the public will be able to report crimes to the multilingual police robot using a touchscreen on its chest.


If Robocops aren't racist they could be a legit upgrade in America, at least to some extent.
 


We've discussed the UAE Robocops above. They are not true AI, but rather programmable machines.

They would only be as good or as bad as the vendor's proprietary software, written to conform to the rules, regulations, and human-rights standards of the local police department's culture. In the UAE, this would presumably include sexism and religious bigotry rather than overt racism.

In America, cities are generally Blue territories. This might suggest extreme PC behavior on the part of robotic cops. One wonders how that would work out.
 
A pre-print released today, entitled "Seven Myths in Machine Learning Research,"
should give some pause to those who think that the success of AlphaGo/AlphaZero and other
AI systems indicates that the Singularity is nigh.
https://arxiv.org/pdf/1902.06789.pdf
Late response, but I didn't even notice this thread until now (can we not have an AI thread in OT?). Anyway, I don't think the "singularity is nigh" at all, but I also don't think this paper is a very alarming rebuke of the state of ML research.

Myth 1: TensorFlow is a Tensor manipulation library - in the worst case, this is an implementation problem that the TensorFlow and PyTorch teams can fix (and maybe already have). However, the issue also strikes me as greatly overstated. It's extremely rare for anyone to use Newton's method or anything else that needs the Hessian. They also mention SVM optimization, but everyone uses LIBLINEAR or LIBSVM for that, which are well-designed, efficient toolboxes. And no one uses deep learning libraries for things like Lasso regressions. The severity and relevance of this point seem off to me.
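For reference, Newton's method is the textbook example of an optimizer that actually needs second derivatives. A minimal one-dimensional sketch (the quadratic objective is just an illustration, not from the paper; in 1-D the "Hessian" is simply the second derivative):

```python
# Newton's method for minimization uses the second derivative
# (the 1-D analogue of the Hessian). On the quadratic below it
# converges exactly in a single step.

def f(x):          # objective: minimum at x = 3
    return (x - 3.0) ** 2 + 1.0

def grad(x):       # first derivative
    return 2.0 * (x - 3.0)

def hess(x):       # second derivative ("Hessian" in 1-D)
    return 2.0

x = 0.0
for _ in range(5):
    x -= grad(x) / hess(x)   # Newton update

print(x)  # → 3.0
```

First-order methods like SGD or Adam only ever need `grad`, which is the point: deep learning frameworks rarely have to produce Hessians at all.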

Myth 2: Image datasets are representative of real images found in the wild - They point out some important limitations of deep learning, but this isn't really a myth. Everyone knows this isn't true. And ML people are generally quite concerned with the distributional limitations of their datasets.

Myth 3: Machine Learning researchers do not use the test set for validation - This one's pretty bad and I think it's largely correct. A lot of people, both inadvertently and deliberately, leak information from the test set in one form or another, causing overfitting and optimistic performance.
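A quick way to see the problem: if you "tune" by repeatedly peeking at test accuracy, even pure noise ends up looking better than chance. A self-contained sketch with entirely random data (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, 50)          # held-out binary labels

# "Model selection" by repeatedly peeking at the test set:
# among 1000 completely random classifiers, report the best
# test accuracy found.
accs = [(rng.integers(0, 2, 50) == y_test).mean() for _ in range(1000)]
best = max(accs)

# Every classifier here is 50% accurate by construction, yet the
# reported number sits well above that -- pure test-set overfitting.
print(best)
```

The same effect, in milder form, is what happens when a field collectively iterates on a fixed benchmark test set for years.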

Myth 4: Every datapoint is used in training a neural network - I guess this one is kind of a myth? But I'm not sure the statement "Shockingly, 30% of the datapoints in CIFAR-10 can be removed, without changing test accuracy by much" is actually shocking. It's also something that a lot of people study. If I recall correctly, the "forgetting" phenomenon they mention has gotten a lot of attention through things like the "information bottleneck" literature. And sample importance is a hot topic. For example, computing Shapley scores for the samples. So this just is not really an "alarming" issue.

Myth 5: We need (batch) normalization to train very deep residual networks - I don't know much about resnets or how important batch normalization is. Overall, this myth seems pretty minor and esoteric to me.

Myth 6: Attention > Convolution - I think you can interpret convolution as a type of attention. But otherwise... so what? It's not like attention isn't a super important feature of recent models (for example, transformers). Or that people don't think convolution is one of the most important features.
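To make the "convolution as a type of attention" reading concrete: a convolution is attention whose weights are fixed by relative position rather than computed from content. A small numpy sketch with toy sizes (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)               # a length-8 sequence
kernel = np.array([0.25, 0.5, 0.25])     # symmetric width-3 kernel
                                         # (symmetric, so convolution's
                                         # kernel flip has no effect)

# Ordinary zero-padded convolution, 'same' output length.
conv = np.convolve(x, kernel, mode="same")

# The same operation as an "attention" matrix A, where A[i, j] is
# nonzero only when j lies in the kernel window around i.
A = np.zeros((8, 8))
for i in range(8):
    for k, w in enumerate(kernel):
        j = i + k - 1                    # relative offsets -1, 0, +1
        if 0 <= j < 8:
            A[i, j] = w
attn = A @ x

print(np.allclose(conv, attn))  # → True
```

In a transformer, the analogous matrix is recomputed from the input at every step; in a convolution it is constant. That difference, not the matrix-times-sequence form, is what the "attention vs. convolution" debate is really about.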

Myth 7: Saliency maps are robust ways to interpret neural networks - I think this is one of those cases where by the time someone has written a paper like this, a lot of the field's insiders are well aware of the criticism.
 
Artificial Intelligence: it will kill us | Jay Tuck | TEDxHamburgSalon

Def: AI is software that writes itself.

It's already here; it controls the stock markets, drives medical research, is involved with public and government surveillance, and is a crucial branch of military research.

"...kill decisions made by machines."


"Your job is to figure out we're going to stop this."
 
The advantage and whole idea of AI machines is that they teach themselves while trying to solve a specific problem, for example driving from Chicago to New York or beating someone in chess.

You can't debug them because you didn't write the algorithm, they did. Limiting their options becomes the issue, rather than giving them more options.

I think that's where the danger lies. When we humans want to push technology further, our only choice with AI is to gradually let go of its restrictions.

And of course connect everything.
 
And of course connect everything
to that "4 channel" site.
HTH. :)
 
I think that's where the danger lies. When we humans want to push technology further, our only choice with AI is to gradually let go of its restrictions.

And then...

No stopping AI? Scientists conclude there would be no way to control super-intelligent machines

"Now, a new study concludes there may be no way to stop the rise of machines. An international team says humans would not be able to prevent super artificial intelligence from doing whatever it wanted to."



US General Suggests Human Control Over AI Not Always Possible

"The head of the army command said that some drones move too quickly for soldiers to track and require AI to defeat them."


 