The AI Thread

I think regulating research on AI is a stupid idea, because:
- "AI" is a nebulous buzzword (hint: if someone says AI, he is likely trying to sell you something). Trying to regulate "AI" could easily result in some algorithms being heavily regulated and similar ones not at all, because the former fulfill some formal definition of "AI"
- I do not see how the research on AI could be harmful. The application of the developed algorithms is dangerous, but not the research.
- Software does not need to be fancy AI to be dangerous. A single wrong line of code can have massive implications.
- Algorithms are essentially ideas and it is hard to suppress ideas. As soon as something is known by enough people, it will find its way to you eventually.

Instead, I think there are applications that should be regulated and how heavily would depend on how dangerous the application is. I do not see how an AI for playing Go could hurt anything but human pride. A self-driving car, however, endangers human life and should be regulated, no matter whether it is a fancy AI or a hand-crafted conventional decision tree.
 
The AI is good enough to cause serious harm if weaponized.
What AI and how does one weaponize it in a way that's relevant to civil regulations? You can use AI for autonomous weapons, but we don't let people buy explosives.

As an aside, specifically with machine learning, the fancy/new stuff isn't necessarily what would be weaponized. A lot of it just comes down to good sensors, low network latency, good programming, and old-fashioned algorithms (A*, Fast Fourier Transform, etc.).
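For anyone who hasn't seen it, here's roughly what I mean by "old-fashioned algorithms": a minimal A* path-finding sketch in Python. It's illustrative only; a real guidance system would use a proper map representation and a better heuristic.

```python
# Minimal A* sketch on a 4-connected grid (0 = free cell, 1 = wall).
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    g_cost = {start: 0}
    came_from = {}

    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:                 # reconstruct the path backwards
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

Nothing "AI" about it, but it's exactly the kind of thing that shows up in guidance and routing systems.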
 
What AI and how does one weaponize it in a way that's relevant to civil regulations? You can use AI for autonomous weapons, but we don't let people buy explosives.
This is more relevant to international treaties, maybe, rather than domestic regulations.

As an aside, specifically with machine learning, the fancy/new stuff isn't necessarily what would be weaponized.
All new technologies will be weaponized; that's what humans do.
Satellite images post-processing, optical guidance systems, submarine detection, whatever comes to mind.
 
Elon Musk says all AI development (including Tesla's), should be regulated.

Agree or disagree? Hot takes?

That'll be impossible to regulate in a sensible way.
Or to control.
Do you want every end-user product that involves AI to be checked? Programmers reviewing the code and certifying that it's all okay? That would probably take as much time as developing the end product itself, and seriously slow everything down. And it probably still wouldn't catch major flaws, since reading code is sometimes like reading someone's mind, and sufficiently bad code is impossible to review.
 
Ok so the idea has flaws but can anyone explain how best to handle the potential issues that AI can cause?

Uppi had an idea, but he fell back on some form of regulation after saying it couldn't be regulated. I don't really see how to square this circle. I do not see a difference between regulating the applications and 'regulating AI'. Regulating the application is regulating AI, even if only in a specific way.
 
I think the most prominent recent issues with AI going wrong involve models that are biased because of the underlying dataset.
Things that got attention recently are face-recognition algorithms that don't recognize women or people of colour very well, or advertising algorithms that show certain job adverts only to men and not to women.
Other examples were predictions of the recidivism rate (the crime relapse rate) that turned out to be biased, because they were based on prior data correlated with ethnicity and home address... and I had something else in mind, but it has slipped my mind right now :think:.

EDIT: I read that as asking what the issues themselves are, so disregard.

EDIT2: Not all use of AI ends up in an application. There's a lot of R&D going on. You don't want to regulate everything that is still in a testing phase, right? Also, some of that work ends up being research only (like AI research itself). I don't think these things should be regulated, but rather only things which actively affect people.

EDIT3: Unless it's unethical, but for these things you already have the Helsinki declaration etc., which can already be applied to AI research.
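To make the dataset-bias point above concrete, here is a minimal, purely synthetic sketch (scikit-learn assumed available; the data and the "hiring" scenario are made up). The point is that a model trained on historically skewed labels reproduces the skew even though nothing in the code mentions discrimination.

```python
# Synthetic illustration: biased training data -> biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)       # 0/1 group membership (e.g. gender)
skill = rng.normal(0, 1, n)         # the attribute we actually care about

# Historical labels were skewed: group 1 was hired less often at equal skill.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Same skill, different group -> different predicted probability of hiring.
print(model.predict_proba([[0.5, 0], [0.5, 1]]))
```

The model isn't "malicious"; it just faithfully learned whatever pattern was in the data it was given.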
 
You don't want to regulate anything which is in a testing phase, right?
This regulation doesn't necessarily have to be heavy-handed. I apologize that I keep relating this back to space - it's just the example I am familiar with - but in the space industry, R&D itself is regulated in the US, if only lightly. It's only when it comes to selling the technology or moving the research overseas that it gets problematic and more heavy-handed.

I agree that we shouldn't make every researcher feel like Big Brother is looking over their shoulder. But I think there is enough potential for harm to warrant some basic checks, like requiring them to follow best practices. There are other industries, like the medical fields, that are successful even under a heavy regulatory burden. So a bit of regulation, at least to put guardrails on the road, probably won't crush the industry or discourage research.
 
Ok so the idea has flaws but can anyone explain how best to handle the potential issues that AI can cause?

Uppi had an idea, but he fell back on some form of regulation after saying it couldn't be regulated. I don't really see how to square this circle. I do not see a difference between regulating the applications and 'regulating AI'. Regulating the application is regulating AI, even if only in a specific way.

First we need to define what we actually want to regulate. My opinion is that it should apply to any program that makes decisions which potentially endanger humans.

Then we need to come up with some actual useful rules.

I think the most basic rule is that any such program needs to have an easily accessible kill switch, which can be operated by multiple people. Humans and machines both make mistakes, but humans are likely to pause at some point and reflect on whether this is really the best course of action. A machine lacking the correct feedback loop (either because it was not implemented or because it is broken) will make the same mistake over and over again, and usually much faster than a human would. From personal experience, a program gone rogue is hard to fight directly and usually needs to be taken offline to limit the damage. If you cannot do so, then you have a real problem. The existence of a kill switch should be mandatory and could be quite easily legislated.
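For what it's worth, here's a minimal sketch of what I mean by a kill switch: the worker checks an external stop flag before every action, so anyone with access can halt it. The file path and the "action" are made up; a real deployment would use proper access controls and monitoring.

```python
# Kill-switch sketch: halt before the next action if the stop flag exists.
import os
import time

KILL_SWITCH = "/tmp/trading_bot.STOP"   # hypothetical path; anyone who can
                                        # create this file stops the program

def kill_switch_engaged():
    return os.path.exists(KILL_SWITCH)

def take_action():
    print("placing order...")           # stand-in for the risky action

def main_loop():
    while True:
        if kill_switch_engaged():
            print("kill switch engaged, shutting down")
            break
        take_action()
        time.sleep(1)                   # pacing; also gives humans time to react

if __name__ == "__main__":
    main_loop()
```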
 
These are pretty fun. You can try a lot of word math and get results that make sense or are funny. Like:

obama - america + russia = putin
america - obesity = europe
israel - jewish = syria
knowledge - wisdom = information
netherlands - interesting = belgium
hitler - holocaust = donitz
nice :)
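For those curious, the "word math" above is standard word-embedding arithmetic (word2vec/GloVe-style analogies). Here's a minimal sketch using gensim's downloader, assuming gensim is installed; exact results depend on which embedding you load, and the download is a few hundred MB.

```python
# Word-embedding analogy arithmetic with pretrained GloVe vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # pretrained word vectors

# obama - america + russia ~= ?
print(vectors.most_similar(positive=["obama", "russia"],
                           negative=["america"], topn=3))

# knowledge - wisdom ~= ?
print(vectors.most_similar(positive=["knowledge"],
                           negative=["wisdom"], topn=3))
```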



As trivia how practical powerful AI can already be:
https://www.theguardian.com/society...drug-resistant-bacteria-discovered-through-ai

Powerful antibiotic discovered using machine learning for first time
Team at MIT says halicin kills some of the world’s most dangerous strains
A powerful antibiotic that kills some of the most dangerous drug-resistant bacteria in the world has been discovered using artificial intelligence.
The drug works in a different way to existing antibacterials and is the first of its kind to be found by setting AI loose on vast digital libraries of pharmaceutical compounds.
Tests showed that the drug wiped out a range of antibiotic-resistant strains of bacteria, including Acinetobacter baumannii and Enterobacteriaceae, two of the three high-priority pathogens that the World Health Organization ranks as “critical” for new antibiotics to target.
“In terms of antibiotic discovery, this is absolutely a first,” said Regina Barzilay, a senior researcher on the project and specialist in machine learning at Massachusetts Institute of Technology (MIT).
“I think this is one of the more powerful antibiotics that has been discovered to date,” added James Collins, a bioengineer on the team at MIT. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”
 
(>'_')> A.I. <('_'<)


And now, watch an AI with 200 "years" of experience crush the stuffing out of top StarCraft 2 players. :)


Cough, 1500 APM might be cheating.
 
Ok so the idea has flaws but can anyone explain how best to handle the potential issues that AI can cause?
It depends on what exactly you are worried about. If it is Clearview AI, then being careful about how your PII is handled online, particularly your photos, is your only protection. I do not see the law being much protection here, because unless they advertise what they are doing (as Clearview are), no-one would ever know.

When it comes to state use of AI, that could be regulated. I cannot imagine how any law will seriously impact what a private company does, especially in such a global business as the internet.
 
I cannot imagine how any law will seriously impact what a private company does, especially in such a global business as the internet.
We have models of successful regulation of sensitive businesses. I do not know why this is really seen as an impossible ask.
 
We have models of successful regulation of sensitive businesses. I do not know why this is really seen as an impossible ask.
Which models do you think are relevant to this situation? You are effectively trying to legislate what sort of maths you can do. The closest I can think of is the attempt to prevent strong encryption leaving the US, and I do not think that can be classed as successful.
 
Which models do you think are relevant to this situation?
Regulations on weapons research and manufacturing as well as biomedical research.
You are effectively trying to legislate what sort of maths you can do.
I do not think this is a fair or accurate depiction of what I am suggesting.
 
Example regulations:
  • R&D firms working in certain defined computer science fields have to register with the government
  • These firms have to report what research they are doing yearly, even if it's not funded by the government
  • Development of best practices for AI development, require firms to follow them
  • Prohibition of a narrow range of sensitive AI developments (and/or just heavier handed regulation/oversight)
  • Creation of an oversight committee staffed by experts from industry/academia


The biggest problem with all this is that it assumes you have a common set of definitions to work from when it comes to regulating 'AI' or 'sensitive AI developments'. The next biggest problem is that it will require active participation from firms and academia to work - it would be easy to undermine.
 
Regulations on weapons research and manufacturing as well as biomedical research.

I do not think this is a fair or accurate depiction of what I am suggesting.
I'm having the same confusion as Samson, then.

However, I can think of various regulatory ideas that could be pertinent in the near-term. For example:

- Clear definition of a category of classifiers based on their application. For example, "for any classifier that makes decisions, or aids in decision making, with respect to the granting of paroles, the following criteria must hold: ..."
- For the classifier(s) in question, you likely want things like false negative rates to be below some threshold. For example, make sure the classifier isn't violating Blackstone's principle (i.e., err on the side of incarcerating fewer people, err on the side of letting people get paroled early, etc). See the sketch at the end of this post.
- Possibly require that some sizable chunk of the decisions still be made by humans, totally independently of any classifiers. This one is tricky, however, if you think human decision makers are more "biased"/prejudiced (I put "biased" in quotes because "bias" has a technical meaning in statistics and AI).
- The classifier has to be retrained on fresh data every so often.
- In many cases, ban the use of socially/legally sensitive features for model training. Race would be an obvious example.
- If the government is using a high-risk classifier, the model and anonymized data should be publicly available for anyone to audit. The government shouldn't use proprietary or black box models for sensitive decisions. The definition of "black box" is both vague and technical, but I think it's something that could be legally defined fairly well.

Something I don't want to be legally mandated in very many cases, but is likely best practice: for high-stakes decisions, err on the side of simpler models and avoid black box models. That is to say, models where it's hard or impossible to understand why they made a particular decision.
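To make the false-negative-rate idea concrete, here is a minimal audit sketch. The column names, group codes, and the 20% threshold are all hypothetical, not taken from any real regulation.

```python
# Audit sketch: compute a classifier's false negative rate per group and
# flag it if a (hypothetical) threshold is exceeded.
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of actual positives the model missed (predicted 0 when truth is 1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    positives = y_true == 1
    if positives.sum() == 0:
        return 0.0
    return float(((y_pred == 0) & positives).sum() / positives.sum())

def audit(y_true, y_pred, groups, threshold=0.2):
    """Return (passed, per-group FNRs) against the hypothetical threshold."""
    rates = {g: false_negative_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    return all(r <= threshold for r in rates.values()), rates

# Toy data: 1 = "safe to parole", groups are anonymized region codes.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(audit(y_true, y_pred, groups))
```

A real audit would also look at false positive rates, calibration, and sample sizes per group, but the basic shape is the same: pick a metric, pick a threshold, and publish both.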
 
You effectively wrote the low-level version of what I was thinking of that I myself couldn't produce. I only understand these things at a very high level.
Example regulations:
  • R&D firms working in certain defined computer science fields have to register with the government
  • These firms have to report what research they are doing yearly, even if it's not funded by the government
  • Development of best practices for AI development, require firms to follow them
  • Prohibition of a narrow range of sensitive AI developments (and/or just heavier handed regulation/oversight)
  • Creation of an oversight committee staffed by experts from industry/academia


The biggest problem with all this is that it assumes you have a common set of definitions to work from when it comes to regulating 'AI' or 'sensitive AI developments'. The next biggest problem is that it will require active participation from firms and academia to work - it would be easy to undermine.

Really I think it comes down to a problem of definitions! We all know there are issues, but if we don't accurately and precisely describe the problems, we cannot work to prevent them. Laymen like me do not help much in that regard and can come across as Luddites. :sad:
 
Example regulations:
  • R&D firms working in certain defined computer science fields have to register with the government
  • These firms have to report what research they are doing yearly, even if it's not funded by the government
  • Development of best practices for AI development, require firms to follow them
  • Prohibition of a narrow range of sensitive AI developments (and/or just heavier handed regulation/oversight)
  • Creation of an oversight committee staffed by experts from industry/academia


The biggest problem with all this is that it assumes you have a common set of definitions to work from when it comes to regulating 'AI' or 'sensitive AI developments'. The next biggest problem is that it will require active participation from firms and academia to work - it would be easy to undermine.
Do you have any evidence that these have prevented the sort of "evil company" harm we are envisioning coming from AI? My suspicion would be that any company that wanted to do something that would get blocked by this process would easily get around such regulation, principally by doing it in another country.
 
Yeah, the same argument can be made about weapons development. Yet we still have processes and safeguards to try to head off the proliferation of the worst aspects of weapons development, like biological and nuclear weapons.

Most AI development is harmless and I'm not proposing a tightly regulated system. But I think it's a bit off to have no regulation whatsoever.

Cough, 1500 APM might be cheating
What is the average for a top human player?
 