The AI Thread

Truthy

A recent AI paper (and video) claims to be able to clone your voice after hearing just a few seconds of audio. Though it certainly does not work quite as well as claimed, it is an impressive (if scary) algorithm.

Which brings us to a hot topic we haven't discussed much in the OT: AI. It's been one of the biggest science/tech topics for about 10 years now, gaining enormous research attention, funding, publicity, corporate PR bs (regression =/= AI!!), and so on. The hype has largely focused on machine learning (ML) in particular, and even more particularly on deep learning (DL), a catchall term for artificial neural networks, the methods for training them, and the many cool results they've produced. However, it's worth pointing out that the AI field is much broader than deep learning. Moreover, many claim the current AI boom is a bubble and that an "AI winter" is imminent (a pattern that's played out several times in the past).
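For anyone who hasn't looked under the hood, "training an artificial neural network" is less magical than the PR suggests. Here's a minimal sketch, one neuron in pure Python, of the gradient-descent loop that deep learning scales up. The data (the logical AND function) and learning rate are arbitrary choices for the demo, not from any particular paper.

```python
# One artificial neuron learning the AND function by gradient descent.
# Toy illustration only: data and hyperparameters are made up for the demo.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Truth table for logical AND: ((input1, input2), target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0   # weights and bias, initialized to zero
lr = 1.0            # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = out - target   # gradient of cross-entropy loss w.r.t. pre-activation
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
```

Ironically, a single neuron like this is mathematically just logistic regression, the very thing the "regression =/= AI" complaint is about; deep learning stacks huge numbers of these units and trains them in essentially the same way.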

So here's a thread where we can talk about things like:
  • Ongoing AI work (deep learning or otherwise), including papers, news stories, and products
  • The overall state of the field
  • Setbacks, shortcomings
  • What's legit, what's bs
  • Risks to society
  • "Algorithm bias" (eg, models that discriminate against certain groups)
  • Computational neuroscience
  • ... whatever else
Some questions to perhaps get us started:
What advances are most interesting to you? What are the risks of AI? Government and corporate misuse? Mass unemployment? Destruction of the species? Sucking grant money away from more worthy topics? What are the main benefits?

And finally, because I like OPs with pictures:
Spoiler: A cool image manipulation algorithm
[image: the algorithm manipulates the left-most images according to the categories blonde hair, gender, etc.]

Spoiler: But the true state-of-the-art AI is...
[image]
How seriously do researchers take the threat of a strong AI emerging and immediately doing bad things, uncontrolled?
 
I was always of this view:

[embedded video: a Garry Kasparov interview]

namely that pure AI is not possible. Some AI-bio hybrid synergy is doable.

Interviewer is annoying or insane ("chess is the pinnacle of human intellectuality"...) but Kasparov is very good.
 
How the hell do we stop deepfakes from ending audio/video evidence as a concept? Soon anything will be capable of being convincingly faked, now that they can do voices too. If the fakes are good enough, I can imagine a scenario where a Trump-like person literally kills somebody on camera in broad daylight, claims it's a deepfake video, and gets away with it.
 
I saw a video from a few years ago made by some college students sharing their work on a lip-reading AI for a computer project. It showed some promise: it was alright at figuring out letters but bad at figuring out words. Kind of a random note, I know. I am truly astounded and terrified by how far AI has come. The modern notion of AI did not exist thirty years ago; since then AI has improved and become easier to program every year, and in the last five years practical deep neural nets have done many things that were considered beyond the scope of AI. In my opinion, the biggest ethical dilemma of the twenty-first century will be how far AI should go. Where should society draw the lines for advances in AI?
How seriously do researchers take the threat of a strong AI emerging and immediately doing bad things, uncontrolled?
With the current AIs it is quite low. However, I think that in the next few decades AI may become advanced enough to do something like that.
 
IMO a more serious problem than a hypothetical strong AI going rogue is a very possible new arms race using AI. Humans always weaponize new technologies.

How the hell do we stop deepfakes from ending audio/video evidence as a concept?
There's no way.

Soon anything will be capable of being convincingly faked, now that they can do voices too. If the fakes are good enough, I can imagine a scenario where a Trump-like person literally kills somebody on camera in broad daylight, claims it's a deepfake video, and gets away with it.
The corpse will be real; we cannot fake those yet.
 
IMO a more serious problem than a hypothetical strong AI going rogue is a very possible new arms race using AI. Humans always weaponize new technologies.
I agree that rogue AI is not the biggest concern. For me the scariest possibility is that artificial intelligence may be used for social engineering. While large-scale social engineering would be resisted in the US, I can definitely see China developing AI for the purpose of choosing who gets certain positions, which people are considered superior, and which people are not to be trusted. Social engineering by AI could be used to develop the kind of "technocratic meritocracy" China has tried to establish so many times in the past. Another thing I did not consider earlier is that voice-cloning AI could be used to oust problematic activists by making them appear to say terrible things. Aristotle said that ethos (the credibility of the speaker) is the most important aspect of persuasion, and deepfakes would destroy exactly that.
 
How seriously do researchers take the threat of a strong AI emerging and immediately doing bad things, uncontrolled?

I think we are quite far away from a strong AI. However, an AI does not need to be strong in any way to do bad things uncontrolled. Just look at the 737 MAX crashes. The software was not very sophisticated, but in unforeseen circumstances it could not be controlled in time, and it killed people.

If the fakes are good enough I imagine scenario where a Trump like person will just literally kill somebody on camera in broad daylight and claim it is a deepfake video and get away with it.

I feel like Trump could just kill somebody in broad daylight for real and get away with it. But that is another discussion.

The corpse will be real; we cannot fake those yet.

There will be only very few people who actually see the corpse, and even fewer who would be able to verify its identity. Everyone else will need to rely on some other kind of evidence to verify anything. And how are they going to do that?
 
I agree that rogue AI is not the biggest concern. For me the scariest possibility is that artificial intelligence may be used for social engineering. While large-scale social engineering would be resisted in the US, I can definitely see China developing AI for the purpose of choosing who gets certain positions, which people are considered superior, and which people are not to be trusted. Social engineering by AI could be used to develop the kind of "technocratic meritocracy" China has tried to establish so many times in the past. Another thing I did not consider earlier is that voice-cloning AI could be used to oust problematic activists by making them appear to say terrible things. Aristotle said that ethos (the credibility of the speaker) is the most important aspect of persuasion, and deepfakes would destroy exactly that.
It's a double-edged sword. Advanced technology can be used to build a fairer society (a real technocratic meritocracy may actually be better than what's called liberal democracy in modern capitalist countries), but it can also be used to create some dystopian version of it.

There will be only very few people who actually see the corpse, and even fewer who would be able to verify its identity. Everyone else will need to rely on some other kind of evidence to verify anything. And how are they going to do that?
It's the police's business to investigate crimes and work with evidence. Ordinary people usually don't need to verify anything personally.
The potential problem I see is not cases like a murder on camera or deepfakes of politicians and celebrities in public media (very soon nobody will be fooled by those), but things like dashcam video, which is now often used as crime evidence or to prove somebody not guilty. The reliability of such video evidence will be compromised.
 
Actually, AI doesn't exist yet; a cool self-learning algorithm isn't actually intelligent.

I'm not afraid of AI or technology; I'm afraid of the power it will give to already powerful people, especially to manipulate the minds of the masses. This is already happening to some degree, and soon there will be whole corporations beta-testing algorithm-created fake news.
 
How the hell do we stop deepfakes from ending audio/video evidence as a concept? Soon anything will be capable of being convincingly faked, now that they can do voices too. If the fakes are good enough, I can imagine a scenario where a Trump-like person literally kills somebody on camera in broad daylight, claims it's a deepfake video, and gets away with it.
In a courtroom there will be ways to examine evidence methodically, and the forensic techniques that Los Alamos National Laboratory is developing to detect fake imagery will no doubt play an important role.
So the excuse that "it was not me, that's a deepfake" will not stand up to scrutiny.
The same AI that can create deepfakes will be their downfall.
 
The fakers will always be ahead though. With photos, you already have very advanced techniques for hiding manipulations, and videos will get there too. It will be tough.

How seriously do researchers take the threat of a strong AI emerging and immediately doing bad things, uncontrolled?

Since I sometimes use AI in my work (although not the hip part), I can say that nobody takes this threat very seriously.

That is, unless you count something like a malfunctioning autonomous weapon system, which is somewhere between real and maybe becoming real very fast.
 
Strong AI is kind of a ghost threat, because there is no clear definition of it and no reliable criteria for finding out whether it's been created. We only know that computers don't yet possess strong intelligence and that people do. At least some of them. If at some point in the future we create a self-improving intelligence far exceeding our own, we won't be able to calculate all the consequences of running it.
 
I don't know if real artificial intelligence is there yet, but AlphaZero learning chess by playing against itself and becoming some kind of alien hypermaster able to beat even the strongest chess engines like Stockfish, all in a few hours, is kind of scary. The question is whether it could learn things other than board games, like science, art, etc.
 
I don't know if real artificial intelligence is there yet, but AlphaZero learning chess by playing against itself and becoming some kind of alien hypermaster able to beat even the strongest chess engines like Stockfish, all in a few hours, is kind of scary. The question is whether it could learn things other than board games, like science, art, etc.

To employ machine learning of any kind, you need to tell the algorithm which results are correct and which are not. With games this is easy, because the rules define win conditions.

With science or art this is very hard, because by definition the correct result is something new that cannot be easily predicted from previous results. So I expect these subjects to be among the last to be tackled by AI.
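The "win condition as training signal" point can be made concrete with a toy self-play experiment. The sketch below is of course nothing like AlphaZero; it is a minimal Monte-Carlo-style self-play learner for the game of Nim (players alternate taking 1-3 stones; whoever takes the last stone wins), where the only feedback the algorithm ever receives is the win/loss outcome defined by the rules. All names and hyperparameters here are made up for illustration.

```python
import random

def train(episodes=20000, alpha=0.5, eps=0.2, start=21):
    """Self-play learning for Nim. The sole training signal is the
    win/loss outcome that the game's rules define."""
    Q = {}                      # (stones_left, move) -> estimated value
    rng = random.Random(0)      # fixed seed for reproducibility
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < eps:                    # explore
                move = rng.choice(moves)
            else:                                     # exploit current estimates
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins (+1); the other loses (-1).
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward    # alternate player perspective on the way back
    return Q

def best_move(Q, stones):
    """Greedy move according to the learned value table."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))
```

After enough games the table converges toward the well-known optimal strategy (always leave your opponent a multiple of 4 stones), learned purely from the reward the win condition provides, which is exactly the kind of unambiguous signal that science and art don't offer.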
 
Strong AI is kind of a ghost threat, because there is no clear definition of it and no reliable criteria for finding out whether it's been created. We only know that computers don't yet possess strong intelligence and that people do. At least some of them. If at some point in the future we create a self-improving intelligence far exceeding our own, we won't be able to calculate all the consequences of running it.
I have recently come to the opinion that strong AI won't be a big deal when it hits. I put in the prediction thread that I expect people to begin mentally interacting with and inhabiting machines fairly soon, and that by the time a true strong AI is developed or emerges, it won't really be distinguishable from all the augmented people that will already exist.
 
I have recently come to the opinion that strong AI won't be a big deal when it hits. I put in the prediction thread that I expect people to begin mentally interacting with and inhabiting machines fairly soon, and that by the time a true strong AI is developed or emerges, it won't really be distinguishable from all the augmented people that will already exist.
AI doesn't have the limitations of human intelligence (skull size, energy consumption, communication bandwidth, etc.); we can give it unlimited space and megawatts of power if needed. So the problem is that a self-improving AI can theoretically become orders of magnitude smarter than humans. Maybe we can initially program it to have human-like morals, ethics, and emotions, but once it becomes much smarter than us, it will be hard to control. But we are still decades or hundreds of years away from that point.
 