> The AI is good enough to cause serious harm if weaponized.

What AI, and how does one weaponize it in a way that's relevant to civil regulations? You can use AI for autonomous weapons, but we don't let people buy explosives.
> What AI and how does one weaponize it in a way that's relevant to civil regulations? You can use AI for autonomous weapons, but we don't let people buy explosives.

This is more relevant to international treaties, maybe, rather than domestic regulations.
All new technologies will be weaponized; that's what humans do.

As an aside, specifically with machine learning, the fancy/new stuff isn't necessarily what would be weaponized.
Elon Musk says all AI development (including Tesla's) should be regulated.
Agree or disagree? Hot takes?
> You don't want to regulate anything which is in a testing phase, right?

This regulation doesn't necessarily have to be heavy-handed. I apologize that I keep relating this back to space - it's just the example I am familiar with - but in the US, space R&D itself is regulated, though only slightly. It's only when it comes to selling technology or moving that research overseas that things get problematic and more heavy-handed.
Ok, so the idea has flaws, but can anyone explain how best to handle the potential issues that AI can cause?
Uppi had an idea, but he fell back on some form of regulation after saying it couldn't be regulated. I don't really see how to square this circle. I see no difference between regulating the applications and 'regulating AI': regulating the application is regulating AI, even if only in a specific way.
nice

These are pretty fun. You can try a lot of word math and get results that make sense or are funny. Like:
obama - america + russia = putin
america - obesity = europe
israel - jewish = syria
knowledge - wisdom = information
netherlands - interesting = belgium
hitler - holocaust = donitz
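These analogies are word-vector arithmetic, word2vec-style: subtract and add embedding vectors, then look up the nearest remaining word by cosine similarity. A minimal sketch with invented toy vectors (nothing here comes from a real trained model, where vectors would have hundreds of dimensions learned from a large corpus):

```python
import numpy as np

# Toy 3-d embeddings, invented purely for illustration.
vocab = {
    "obama":   np.array([0.9, 0.1, 0.8]),
    "america": np.array([0.8, 0.0, 0.9]),
    "russia":  np.array([0.1, 0.9, 0.7]),
    "putin":   np.array([0.2, 0.95, 0.6]),
    "europe":  np.array([0.5, 0.5, 0.8]),
}

def analogy(a, minus, plus):
    """Return the vocab word closest (by cosine) to a - minus + plus."""
    target = vocab[a] - vocab[minus] + vocab[plus]
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the query words themselves, as word2vec tooling usually does.
    candidates = [w for w in vocab if w not in (a, minus, plus)]
    return max(candidates, key=lambda w: cos(vocab[w], target))

print(analogy("obama", "america", "russia"))  # -> putin
```

With a real model (e.g. gensim's `KeyedVectors.most_similar(positive=..., negative=...)`) the mechanics are the same, just over a vocabulary of hundreds of thousands of words.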
Powerful antibiotic discovered using machine learning for first time
Team at MIT says halicin kills some of the world’s most dangerous strains
A powerful antibiotic that kills some of the most dangerous drug-resistant bacteria in the world has been discovered using artificial intelligence.
The drug works in a different way to existing antibacterials and is the first of its kind to be found by setting AI loose on vast digital libraries of pharmaceutical compounds.
Tests showed that the drug wiped out a range of antibiotic-resistant strains of bacteria, including Acinetobacter baumannii and Enterobacteriaceae, two of the three high-priority pathogens that the World Health Organization ranks as “critical” for new antibiotics to target.
“In terms of antibiotic discovery, this is absolutely a first,” said Regina Barzilay, a senior researcher on the project and specialist in machine learning at Massachusetts Institute of Technology (MIT).
“I think this is one of the more powerful antibiotics that has been discovered to date,” added James Collins, a bioengineer on the team at MIT. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”
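As the article describes it, a model trained on molecules with known antibacterial activity was set loose to score vast digital compound libraries. A minimal sketch of that "virtual screening" idea, using invented binary fingerprint data and a simple Tanimoto-similarity ranking rather than the deep neural network the MIT team actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 128-bit molecular fingerprints. Real
# pipelines derive fingerprints (or graph features) from actual
# chemical structures.
known_actives = rng.integers(0, 2, size=(50, 128))   # known growth inhibitors
library = rng.integers(0, 2, size=(10_000, 128))     # unlabeled compound library

# Rank each library compound by Tanimoto similarity to its nearest
# known active: |A & B| / |A | B| for binary fingerprints.
inter = library @ known_actives.T                               # intersections
union = library.sum(1)[:, None] + known_actives.sum(1)[None, :] - inter
scores = (inter / union).max(axis=1)

# Surface the highest-scoring candidates for lab testing.
top = np.argsort(scores)[::-1][:10]
print("top candidate indices:", top)
```

The payoff of the approach is exactly what the screening loop shows: scoring ten thousand (or a hundred million) compounds is cheap, so only the handful of top candidates need expensive wet-lab validation.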
> Ok so the idea has flaws but can anyone explain how best to handle the potential issues that AI can cause?

It depends on what exactly you are worried about. If it is Clearview AI, then being careful about how your PII is handled online, particularly your photos, is your only protection. I do not see the law being much protection here, because unless they advertise what they are doing (as Clearview are) no one would ever know.
> I cannot imagine how any law will seriously impact what a private company does, especially in such a global business as the internet.

We have models of successful regulations on sensitive business. I do not know why this is really seen as an impossible ask.
> We have models of successful regulations on sensitive business. I do not know why this is really seen as an impossible ask.

Which models do you think are relevant to this situation? You are effectively trying to legislate what sort of maths you can do. The closest I can think of is the attempt to prevent strong encryption leaving the US, and I do not think that can be classed as successful.
> Which models do you think are relevant to this situation?

Regulations on weapons research and manufacturing, as well as biomedical research.
> You are effectively trying to legislate what sort of maths you can do.

I do not think this is a fair or accurate depiction of what I am suggesting.
> Regulations on weapons research and manufacturing, as well as biomedical research.

I'm having the same confusion as Samson, then.
I do not think this is a fair or accurate depiction of what I am suggesting.
Example regulations:
- R&D firms working in certain defined computer science fields have to register with the government
- These firms have to report what research they are doing yearly, even if it's not funded by the government
- Development of best practices for AI, with a requirement that firms follow them
- Prohibition of a narrow range of sensitive AI developments (and/or just heavier handed regulation/oversight)
- Creation of an oversight committee staffed by experts from industry/academia
The biggest problem with all this is that it assumes you have a common set of definitions to work from when it comes to regulating 'AI' or 'sensitive AI developments'. The next biggest problem is that it will require active participation from firms and academia to work - it would be easy to undermine.
> Example regulations: […]

Do you have any evidence that these have prevented the sort of "evil company" harm we are envisioning coming from AI? My suspicion would be that any company that wanted to do something that would get blocked by this process would easily get around such regulation, principally by doing it in another country.
> Cough, 1500 APM might be cheating

What is the average for a top human player?