The AI Thread

I was referring to the RNN encoder-decoder architecture, but yes, I agree that a CNN converts an input image into an internal representation too.
I figured :)

Also, I'm not trying to pontificate/soapbox or assume you don't know about this stuff... just clarifying things. And I thought it was important to what you guys were talking about on the previous page.
 
I am lost
Yeah, good idea to try to avoid the issues we talked about in this thread :lol:

What he's talking about is a family of algorithms that people use for translating between languages, for example Google Translate.

Sometimes they're called "encoder-decoder models," sometimes "sequence-to-sequence models." The idea is you take a sentence in, say, English, somehow compress it into an "internal" form, and then decompress it into, say, French.
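For anyone curious what that looks like in code, here's a minimal sketch in PyTorch. All the names and sizes are illustrative toy choices, not taken from any real translation system:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Reads the source (e.g. English) sentence and compresses it into one vector."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src_tokens):
        # src_tokens: (batch, src_len) integer word IDs
        _, hidden = self.rnn(self.embed(src_tokens))
        return hidden  # (1, batch, hidden_dim): the compressed "internal" form

class Decoder(nn.Module):
    """Unrolls the target (e.g. French) sentence from that vector."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt_tokens, hidden):
        # tgt_tokens: (batch, tgt_len); hidden is the encoder's output vector
        output, hidden = self.rnn(self.embed(tgt_tokens), hidden)
        return self.out(output), hidden  # per-step scores over the target vocab
```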
 
Is it almost like trying to distill meaning into an easily searchable machine form, then translating that machine form into another language? Instead of going from language-to-language directly?
 
Is it almost like trying to distill meaning into an easily searchable machine form, then translating that machine form into another language? Instead of going from language-to-language directly?
It takes your English sentence and turns it into a vector, usually a vector with something like 256 dimensions (the programmer sets the size). And then the part that translates into French takes in the vector and uses it like a seed to build the French sentence.

To be honest, it doesn't seem like it should work, but it often does. If you have enough pairs of English sentences with their French translations, you can get it to learn how to be fairly good at translations.
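Here's a hedged sketch of that "seed" step, reusing the toy Encoder/Decoder classes from the sketch above. The `bos_id`/`eos_id` arguments are assumed start- and end-of-sentence token IDs, and greedy decoding is just the simplest strategy:

```python
import torch

def translate(encoder, decoder, src_tokens, bos_id, eos_id, max_len=50):
    # 1. Compress the English sentence into a single ~256-dim vector.
    hidden = encoder(src_tokens)        # (1, 1, hidden_dim)

    # 2. Use that vector as the decoder's starting state ("seed")
    #    and grow the French sentence one word at a time.
    word = torch.tensor([[bos_id]])     # (1, 1): start-of-sentence token
    result = []
    for _ in range(max_len):
        scores, hidden = decoder(word, hidden)
        word = scores.argmax(dim=-1)    # greedily pick the likeliest next word
        if word.item() == eos_id:
            break
        result.append(word.item())
    return result                       # French word IDs; map back to words
```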
 
Is it almost like trying to distill meaning into an easily searchable machine form, then translating that machine form into another language? Instead of going from language-to-language directly?
Yes, it looks like the machine is trying to figure out the sense of a phrase, then translates it into another language it knows. The same method can work for text-to-speech (speech generation) and speech-to-text (automatic subtitles) problems. Another cool thing is that the algorithm is usually given word embedding vectors instead of the words directly. Words are converted into vectors that capture word semantics and are easier for the algorithm to process. A common example is arithmetic relations between words, such as

E('king') + E('woman') - E('man') ≈ E('queen'), where E is the word embedding.
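If you want to try this yourself, the gensim library exposes exactly this kind of arithmetic. The file path below is a placeholder; any pretrained word2vec-format vectors should work:

```python
from gensim.models import KeyedVectors

# Load pretrained word vectors ("vectors.bin" is a placeholder path; the classic
# choice is the Google News word2vec file).
kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# E('king') + E('woman') - E('man') ≈ E('queen'):
# positive words are added, negative words subtracted, and the nearest
# remaining vector by cosine similarity is returned.
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# Typically prints something like: [('queen', 0.71)]
```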
 
If the goal is for the machine to fool someone into thinking it has a sense of meaning, then it can work (à la the primitive early Turing test programs). But just having a sizable (even immense) set of algorithms won't by itself lead to a sense. Compare this to a human, where there are potentially immense possibilities, yet usually little is achieved relative to a hypothetical ideal, for many reasons (being content, losing hope, anything in between); even so, there is acting, trying, opening up new roads that potentially exist, and so on. The mere existence of algorithms does not in any logical way lead to the formation of sense, which is of course also why in machines the algorithms are reverse-manufactured (by an external agent: humans working on the code) rather than internally tapped into.
One can also see the effect as the lack of a different order of action: there is the running of code, but no running of a primary code that uses that code conditionally and itself sets the agent apart as the center of what runs the secondary code (as in humans, with the ego and the DNA-based pool of "codes"). The difference, as usual, is between a large number (human sense) and absolute zero (machine).
 
Yes, it looks like the machine is trying to figure out the sense of a phrase, then translates it into another language it knows. The same method can work for text-to-speech (speech generation) and speech-to-text (automatic subtitles) problems. Another cool thing is that the algorithm is usually given word embedding vectors instead of the words directly. Words are converted into vectors that capture word semantics and are easier for the algorithm to process. A common example is arithmetic relations between words, such as

E('king') + E('woman') - E('man') ≈ E('queen'), where E is the word embedding.
These are pretty fun. You can try a lot of word math and get results that make sense or are funny. Like:

obama - america + russia = putin
america - obesity = europe
israel - jewish = syria
knowledge - wisdom = information
netherlands - interesting = belgium
hitler - holocaust = donitz
 
These are pretty fun. You can try a lot of word math and get results that make sense or are funny. Like:

obama - america + russia = putin
america - obesity = europe
israel - jewish = syria
knowledge - wisdom = information
netherlands - interesting = belgium
hitler - holocaust = donitz
There are a bunch of freemium games for mobile phones that work on a similar premise. You have some objects and you have to combine them to make other things. Usually the games are called Alchemist or some variation of that. They are fun, but I gave up on the genre when every one I tried used extremely illogical combinations and ignored the straightforward ones.
 
These are pretty fun. You can try a lot of word math and get results that make sense or are funny. Like:

obama - america + russia = putin
america - obesity = europe
israel - jewish = syria
knowledge - wisdom = information
netherlands - interesting = belgium
hitler - holocaust = donitz

This post was amazing
 
Elon Musk says all AI development (including Tesla's) should be regulated.

Agree or disagree? Hot takes?
 
Elon Musk says all AI development (including Tesla's) should be regulated.

Agree or disagree? Hot takes?
I don't really agree, and I don't understand what he's trying to achieve.

One concern: combine the alarmism with overly broad regulations and we could wind up with AI/ML categorized as a sensitive technology. In turn, that would impose (I think) unnecessary and vague bureaucratic burdens, while making it harder for people to come to the US to do AI research: a higher risk of visa denials and delays, and a constriction of the supply of research by foreigners. All of that could be a blow to the field and hurt the US's AI lead; contrary to all the talk about Chinese AI, I think the US is still solidly in the lead. But that, in large part, hinges on US companies and universities being able to attract foreigners and bring them here.

I'm also just not convinced there's a whole lot that really needs regulation. However, an exception could be algorithms used for law enforcement.
 
I guess his fear is that unregulated AI research could lead to catastrophic accidents, and that it's better to put safeguards in first and risk slowing things down than to risk catastrophe.

You're right about the impact that overly broad regulations can have on fields like this; I struggle with this all the time personally. I just turned down a potential job on a stupid reality show precisely because an interviewer asked a red-flag question that potentially violated regulations. I'm quite positive she didn't mean to do anything wrong, but the regulations are so strict that it was instantly a nope.jpg moment.

At the same time, I can't help but wonder if some regulation, or at least oversight, is warranted. I know there are a lot of ad hoc and more formal controls on certain genetic and animal research. Would it not be appropriate to have that for AI as well? Even if companies are not, say, trying to develop strong AI (Skynet), they could be developing algorithms that would have an extremely problematic social cost, like enhancing racist policies. We need to get ahead of that kind of thing by at least having the government be aware of it and oversee it, if not formally regulate it.

And something should also be done to counter the spread of Chinese surveillance systems, which frequently depend on AI features to work; this requires government leadership, I believe.
 
I guess his fear is that unregulated AI research could lead to catastrophic accidents, and that it's better to put safeguards in first and risk slowing things down than to risk catastrophe.

You're right about the impact that overly broad regulations can have on fields like this; I struggle with this all the time personally. I just turned down a potential job on a stupid reality show precisely because an interviewer asked a red-flag question that potentially violated regulations. I'm quite positive she didn't mean to do anything wrong, but the regulations are so strict that it was instantly a nope.jpg moment.

At the same time, I can't help but wonder if some regulation, or at least oversight, is warranted. I know there are a lot of ad hoc and more formal controls on certain genetic and animal research. Would it not be appropriate to have that for AI as well? Even if companies are not, say, trying to develop strong AI (Skynet), they could be developing algorithms that would have an extremely problematic social cost, like enhancing racist policies. We need to get ahead of that kind of thing by at least having the government be aware of it and oversee it, if not formally regulate it.

And something should also be done to counter the spread of Chinese surveillance systems, which frequently depend on AI features to work; this requires government leadership, I believe.
The question that comes to my mind is: exactly what are you regulating? There are two sorts of complaints about AI: AI that is not good enough (self-driving cars that crash, algorithms that only match the bias of their training data and not the real world), and AI that is too good, so it takes over from us. For the first, I can at least imagine how you would try to regulate it, though I'm not really sure how the details could be worked out to actually ensure any particular outcome. The second seems pretty much impossible, unless you go full Butlerian Jihad: thou shalt not build a computer in the image of the human mind.
 
There should be a third sort of complaint: the AI is good enough (and getting better every day!) to cause serious harm to society if deployed incorrectly.

I think this complaint should dwarf the other two in both severity and likelihood.
 