The AI Thread

I am not sure if we can even conceive of an intelligence that is not, to some degree, based on being an animal/a lifeform/embodied.
A device or program which is able to do the tasks which we consider to require intelligence.
For example, high-quality machine translation or speech recognition requires understanding the meaning of text. And we have metrics to assess the quality of a translation.
Other examples may be problem solving, proving mathematical theorems, or developing a verifiable physical theory.
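
(As a rough, invented illustration of what such a translation metric can look like, here is a sketch in the spirit of BLEU's n-gram precision; the example sentences are made up, and this is not the exact metric any particular system uses.)

```python
# Sketch of a BLEU-style n-gram precision score: how much of a candidate
# translation's wording overlaps with a reference translation.
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Fraction of the candidate's n-grams that also appear in the reference (clipped)."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    overlap = sum(min(count, ref[gram]) for gram, count in cand_counts.items())
    return overlap / len(cand)

# Invented example sentences, purely for illustration.
reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()
print(ngram_precision(candidate, reference, 1))  # ~0.83 (unigram overlap)
print(ngram_precision(candidate, reference, 2))  # 0.6  (bigram overlap)
```

(Real metrics such as BLEU combine several n-gram orders and add a brevity penalty; note that none of this measures "meaning" directly, which is exactly the point debated below.)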

Machines have already achieved quite impressive results on some of these tasks. In the future they will be able to do a wider range of tasks, and with much better results. Eventually they may outperform humans in almost everything.

In short: a non-human (non-embodied, non-conscious, non-physical, etc.) intelligence is not really conceivable for a human mind, at least not currently, because our entire understanding of intelligence rests on the very specific human condition.
Maybe consciousness and self-awareness are also things a machine will need to possess (or mimic) in order to be considered intelligent. But that is probably doable for us.
 
A device or program which is able to do the tasks which we consider to require intelligence.

I am not so sure about that claim. What is your definition of intelligence? Is it "pattern recognition" or something more complex? I am always open about these things; I think there are nearly infinite ways of defining it.

For example, high-quality machine translation or speech recognition requires understanding the meaning of text. And we have metrics to assess the quality of a translation.

I do not think anyone besides a human being can currently understand "meaning", because "meaning" is not an inherent quality of anything; it is applied by a human observer. No text has meaning outside of its specific context (language, culture, et cetera).

Machines have already achieved quite impressive results on some of these tasks. In the future they will be able to do a wider range of tasks, and with much better results. Eventually they may outperform humans in almost everything.

Maybe consciousness and self-awareness are also things a machine will need to possess (or mimic) in order to be considered intelligent. But that is probably doable for us.

I am aware and completely agree with you. Though I am unsure if a machine "outperforming" a human is in any way relevant. I do not think my calculator, for example, is intelligent, but it routinely outperforms me in maths.

Maybe consciousness and self-awareness are also things a machine will need to possess (or mimic) in order to be considered intelligent. But that is probably doable for us.

So then, again, we are in the mode of recreating our human-specific brand of intelligence. Personally I am not sure if self-awareness or consciousness are inherent categories of intelligence; I just think we lack an alternate definition. It is entirely possible that those two are cornerstones of intelligence, or not.
 
If a person (a human) is able to perform tasks that are extraordinary, which as such strongly indicate intelligence... but... but is not able to explain to another human how he did it, with what set of logic and knowledge he did it...

do we consider that person intelligent?

or as somebody with a freak talent?


And
If that person would then say: "you do not understand my explanation because it would take you years to build up the foundation needed to understand it"...
or, more provocatively, "you have that foundation, but alas you are not intelligent enough to understand my [adequate] explanation"...

How do we react?

bury it deeply away?

Are we interested in the intelligence of an AI when we cannot "communicate" with that intelligence... whether by words or understanding?

arm's length?
 
If a person (a human) is able to perform tasks that are extraordinary, which as such strongly indicate intelligence... but... but is not able to explain to another human how he did it, with what set of logic and knowledge he did it...

I believe the big bang was extraordinary, but I do not think there is necessarily intelligence behind it. Extraordinariness is a fundamentally human category and has more to do with novelty than with intelligence.

However I do very much understand your point. If I get you right, you are saying:

"If an AI can achieve a monumental task, but not explain to us, in our terms, how and why, then to what degree is it useful or intelligent?"

This seems like a fair interjection, but I would be careful.

Ex: If an aboriginal scientist came up with Einsteinian physics 1000 years ago, but could not explain them to anyone, would we not still consider that person smart in retrospect? Of course the analogy is not perfect, because the difference between human languages is not the same difference as between the "languages" of human intelligence and AI. All human languages are, to some degree, founded in and affected by the human experience.

It is likely that if there was a self-learning AI, it could never communicate to us even the ways in which it operates, the a priori beliefs, its logical operators; hell, it might not even adhere to anything like logic or belief AT ALL, because those are human categories.
 
I think his example was more along the lines of idiot savants of the type that can play entire piano concertos from memory after hearing them once but are unable to verbally communicate or tie their shoes. That's an entirely distinct form of intelligence from the normally functioning but exceptionally gifted people like the aboriginal Einstein of your example. People like the former help illustrate the difficulty in assessing the meaning and nature of intelligence, while the latter is more a communication and education issue.
 
I believe the big bang was extraordinary, but I do not think there is necessarily intelligence behind it. Extraordinariness is a fundamentally human category and has more to do with novelty than with intelligence.
Novelty along the line of extrapolating existing thoughts is just effort.
Novelty that first needs to make room by destroying "old thoughts" that merely occupy that room without adding anything is, I think, really hard work.
You can strip the old thoughts to their bare essentials to make room... already hard work to gain something there... but when novelty overlaps with old thoughts, you need partial destruction of the old thoughts. That is highly stressful and uncomfortable.

"If an AI can achieve a monumental task, but not explain to us, in our terms, how and why, then to what degree is it useful or intelligent?"
yes

Ex: If an aboriginal scientist came up with Einsteinian physics 1000 years ago, but could not explain them to anyone, would we not still consider that person smart in retrospect? Of course the analogy is not perfect, because the difference between human languages is not the same difference as between the "languages" of human intelligence and AI. All human languages are, to some degree, founded in and affected by the human experience.
Aldous Huxley wrote a short story that also touches upon that: "Young Archimedes" (likely semi-autobiographical).
In the story, the young Archimedes lives in the 19th century as the protégé of a rich Italian lady who wants him to be the greatest piano player of all time.
Archimedes, gifted in maths and music alike, has no trouble playing the piano virtuosically... but he really wants to sit in that bathtub understanding the laws of nature: the Eureka moment of the Archimedes principle for displaced volumes of liquid.
Locked in the alien world of that rich lady, he commits suicide.
That lady (us, the adult world, the success-driven world) did not see or understand Archimedes.
Language (in the broader sense) is key yes.
Even with a similar human experience, the diversity in how our brains are wired is already so big that communication between humans can be very unproductive (as with the autism spectrum).

It is likely that if there was a self-learning AI, it could never communicate to us even the ways in which it operates, the a priori beliefs, its logical operators; hell, it might not even adhere to anything like logic or belief AT ALL, because those are human categories.
yes, very much
Which gives us the choice between developing AI that develops in communication with us, or developing AI that functions as a black box (able to perform certain tasks while we are unable to understand how).
For low-level or "contained" tasks, "black box AI" seems fine to me...

but black box AI does not bring our collective intelligence further.

This last point is, for me, really key
(whereby I see that collective intelligence as a kind of big public library, a toolset accessible to everyone)
 
I think his example was more along the lines of idiot savants of the type that can play entire piano concertos from memory after hearing them once but are unable to verbally communicate or tie their shoes. That's an entirely distinct form of intelligence from the normally functioning but exceptionally gifted people like the aboriginal Einstein of your example. People like the former help illustrate the difficulty in assessing the meaning and nature of intelligence, while the latter is more a communication and education issue.

Aldous Huxley makes one person out of those two ways to look at extraordinary people:
the idiot savant piano player is at the same time the genuine wholesome gifted scientist Archimedes.
 
I think the difference between the idiot savant on the one hand and the aboriginal Einstein on the other is usually seen as one of kind and not degree. But if you try and break it down and define exactly why they are different rather than just how, you probably won't get very far. I think this goes back to us not really understanding the nature of intelligence and the workings of the brain. I would not personally try and ascribe an underlying similarity between the two, as they may have superficial similarities but may just as well have completely different underlying conditions (well, inasmuch as you can describe the aboriginal Einstein of this discussion as having a 'condition').

I also think there is a move to portray people afflicted with various neurological conditions in a positive light which is awesome given how traditionally they have been portrayed as defective or in similarly unflattering ways. But I also think that we should acknowledge that this newfound positivity, while on the whole a good thing, can also have its own problems. Parents who institutionalize their mentally afflicted children after they turn to hormonal violence are still stigmatized despite a lack of safety net to give them other options, and there is a tendency to almost celebrate genuinely antisocial asshatery so long as it can be vaguely ascribed to a spectrum condition, a la Sheldon from Big Bang Theory.

On the whole, I think these positivity movements are very good things and worthy of celebration.
 
I think the difference between the idiot savant on the one hand and the aboriginal Einstein on the other is usually seen as one of kind and not degree. But if you try and break it down and define exactly why they are different rather than just how, you probably won't get very far. I think this goes back to us not really understanding the nature of intelligence and the workings of the brain. I would not personally try and ascribe an underlying similarity between the two, as they may have superficial similarities but may just as well have completely different underlying conditions (well, inasmuch as you can describe the aboriginal Einstein of this discussion as having a 'condition').

I also think there is a move to portray people afflicted with various neurological conditions in a positive light which is awesome given how traditionally they have been portrayed as defective or in similarly unflattering ways. But I also think that we should acknowledge that this newfound positivity, while on the whole a good thing, can also have its own problems. Parents who institutionalize their mentally afflicted children after they turn to hormonal violence are still stigmatized despite a lack of safety net to give them other options, and there is a tendency to almost celebrate genuinely antisocial asshatery so long as it can be vaguely ascribed to a spectrum condition, a la Sheldon from Big Bang Theory.

On the whole, I think these positivity movements are very good things and worthy of celebration.

Yes... I also believe we have improved and are improving.

I think that for us as social beings it is about having genuine, wholesome respect for diversity,
and, I think, about accepting that handling those bigger differences eats up lots of energy.
The outlier has the issue that contacts which eat away energy happen more often.
 
I think this tangent though does highlight some of the fear that we should have around the development of artificial intelligence. I mean we don't even understand intelligence in ourselves that well, yet we are pushing to develop machines that can potentially become far more intelligent than ourselves. And just as the difference between the idiot savant and the aboriginal Einstein is one of kind and not degree, even if we can't explain why that is, we may find ourselves with machines that possess intelligence so far removed from our own that it too is a difference of kind rather than degree. Hitchhiker's Guide to the Galaxy made light of the potential for aliens to be so far removed from our own intelligence/advancement that they simply bulldoze the Earth as we might bulldoze an anthill to meet their own ends. We might find that we could end up creating the bulldozing alien right here ourselves.

Then the other issue - and arguably the bigger one given the state of AI development at the moment - is that AI does not have to achieve sentience to be used as a tool for nefarious purposes. State actors or bad people can absolutely use 'dumb' AI to ruin people's lives as it is even without the AI itself having any conception that it exists. Just look at where we're heading with deepfakes and surveillance technology.

And eventually these trends in AI development may cross pollinate. If you develop a general AI that is sentient and give it the superhuman abilities of current 'dumb AI' then all of a sudden you have a strong AI that can direct its own havoc on the world for its own purposes, potentially well beyond our understanding. This can happen without anyone intentionally seeking to develop a true strong, runaway AI.
 
I am not so sure about that claim. What is your definition of intelligence? Is it "pattern recognition" or something more complex? I am always open about these things; I think there are nearly infinite ways of defining it.
I don't have a strict definition. It's advanced problem-solving ability.

I do not think anyone besides a human being can currently understand "meaning", because "meaning" is not an inherent quality of anything; it is applied by a human observer. No text has meaning outside of its specific context (language, culture, et cetera).
We can check whether a machine understands a specific text, though, similarly to how we do language tests for humans. Give it a text and ask questions about it. If the test is made correctly, then in order to pass it one would need to understand both the text and the questions about it.
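
(A minimal sketch of what such a check could look like in code; `ask_model` is a hypothetical stand-in for whatever system is being tested, and the text, questions, and scoring rule are invented for illustration.)

```python
# Score a "reading comprehension" test: give the system a text, ask questions,
# and count how many answers match the answer key after simple normalisation.
def normalize(answer: str) -> str:
    return " ".join(answer.lower().split())

def comprehension_score(text, qa_pairs, ask_model):
    """Share of questions about `text` answered as in the answer key."""
    correct = 0
    for question, expected in qa_pairs:
        predicted = ask_model(text, question)
        if normalize(predicted) == normalize(expected):
            correct += 1
    return correct / len(qa_pairs)

# Hypothetical usage: a trivial "model" that always answers "Ada".
text = "Ada finished the report on Tuesday and sent it to Bob."
qa = [("Who finished the report?", "Ada"),
      ("When was the report finished?", "on Tuesday")]
print(comprehension_score(text, qa, lambda t, q: "Ada"))  # 0.5
```

Whether a high score on such a test demonstrates understanding, or merely the ability to produce usable answers, is exactly the disagreement in the replies below.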

I am aware and completely agree with you. Though I am unsure if a machine "outperforming" a human is in any way relevant. I do not think my calculator, for example, is intelligent, but it routinely outperforms me in maths.
Because your calculator outperforms you only in calculations. But if it also outperforms you in solving math problems, translating texts, composing music, answering phone calls, and writing good posts on the CFC forum, then you may come to different conclusions.
 
Does a robot that refuses to follow commands have free will? There is no problem making a disobeying robot.
One would argue it's much easier than making one which strictly follows commands :)

If you program it to not follow your orders then it's not the same thing. The machine has to be programmed without the ability to refuse but refuse anyway, out of its own conviction. Only then is it free will.

If a cockroach refuses to follow my commands, does it mean it has free will and therefore, intelligence?

In my opinion, "sense of self" and "free will" are not necessary signs of intelligence at all, though humans (and some animals) possess them. Replicating human intelligence and making artificial intelligence are two different tasks.

Yes, obviously cockroaches have free will. It might be very limited, with most of their brains driven by instinct, but they nonetheless have it.

If something doesn't have a sense of free will or a sense of self, then it clearly cannot have meaningful context for the problem it is solving. Sure, it might solve complex problems, but does it really know what the problems mean or why it's solving them in the first place? Context is necessary for intelligence. Something that is not intelligent just does, without thinking about why it does, with completely predictable outcomes.
 
Many of my "free will" decisions were more random decisions when there was no clear advantage for one of my intuitive decisions (is that the determinated element ?)
Random from external input unrelated to what I was thinking about... random from "flip the coin" decisions on internal input.

If flipping the coin works in game theory, why would we not apply it unconsciously?
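
(A tiny sketch of that game-theory point, my own illustration using matching pennies as the textbook example: a 50/50 coin flip cannot be exploited by the opponent, while any fixed choice can.)

```python
# Matching pennies: the row player wins (+1) if both choices match, loses (-1) otherwise.
def expected_payoff(p_heads_me, p_heads_opponent):
    """Row player's expected payoff given both players' probabilities of playing 'heads'."""
    p_match = (p_heads_me * p_heads_opponent
               + (1 - p_heads_me) * (1 - p_heads_opponent))
    return p_match * 1 + (1 - p_match) * (-1)

print(expected_payoff(0.5, 0.0))  #  0.0 -> coin flip: no opponent strategy exploits it
print(expected_payoff(0.5, 1.0))  #  0.0
print(expected_payoff(1.0, 0.0))  # -1.0 -> always 'heads': fully exploitable
```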

And is our self-awareness, our consciousness, really more than a spectator of those processes?

A spectator evolving into a kind of software wizard that became more and more useful as the conceptual level of communicating with other humans increased... being able to see ourselves better, the way others see us.

I did several trainings where I later saw the video pointed at me... and got analysed (and grilled :sad:) by some communication expert.
Observing yourself on such videos as a spectator, from a distance, is terrible and amazing at the same time.
 
I don't have a strict definition. It's advanced problem-solving ability.
Not picking on your post for any particular reason (sorry!); I only remember to check this thread occasionally, but this is definitely a part I want to read and absorb. Anyhoo.

I agree with yung.carl.jung (if I'm reading them right) in that we're still relating "problem-solving ability" to our own human concepts of problems (and their solutions). Maybe this is an impossible barrier to truly overcome, I don't know; it requires a bit of lateral thinking to entertain (not being dismissive - actual lateral thinking).

The thing about quantum computing, for example, isn't that it solves things better and faster than we've solved things before. More boringly (hence why mainstream reporting kinda... skips it, really, these days) it approaches a specific subset of "problems" from a specific angle that gives huge performance rewards in that context. The same could be said of AI in general, given that most AI to date (and indeed in fiction as well) is rooted in "training" an AI to be more "real". We define what real is, relative to ourselves. We don't define "real" compared to the echolocation system (and communal makeup) of dolphins, for example. And they're commonly praised as being one of the smartest animals! Barring us, the humans, go us, we're the best, etc.

We're seen as the example to be held (up) to. We rarely examine the concept of intelligence as something that benefits a specific species (which, in my opinion, is what proper "AI" as the stories dream it would be - or maybe a multitude of diverse species, as we'd label them, I dunno). We rate everything generically, because of course humans are good at general-purpose "intelligence". We have mastered the tools that let us dominate the Earth (arguably badly. Give it a coupla decades and I'll report back :p). Even the Turing test, which I appreciate the merits of, means an AI has to fool a panel of humans in order to pass it - and that's seen as the gold standard for an interactive AI. That inherently puts a ceiling on what such a construct could learn.

This could get increasingly rambly - it ties into my greater criticisms of "intelligence" as a human concept, so I'll leave it there for now!
 
We define what real is, relative to ourselves. We don't define "real" compared to the echolocation system (and communal makeup) of dolphins, for example.
I'm not sure this very specific thought holds up though. We have spent a great deal of effort to give AI-equipped self-driving cars extrasensory perceptions akin to echolocation, and have long since given ourselves that exact ability in the guise of sonar and fish finders. While I recognize and accept the premise that there is a desire to have AI be like us, I do not think that the predominant thrust of research is to totally replicate ourselves or our abilities. Rather, I think that from the outset the working assumption is that AI will surpass us on all fronts and it is actually the development and widespread adoption of narrowly focused, task-oriented AI (which is quite unlike us) that has been a great cultural shock. I point back to deep fakes and surveillance applications for AI which were likely foreseen by the AI community but caught the broader culture off guard.
 
I'm not sure this very specific thought holds up though. We have spent a great deal of effort to give AI-equipped self-driving cars extrasensory perceptions akin to echolocation, and have long since given ourselves that exact ability in the guise of sonar and fish finders. While I recognize and accept the premise that there is a desire to have AI be like us, I do not think that the predominant thrust of research is to totally replicate ourselves or our abilities. Rather, I think that from the outset the working assumption is that AI will surpass us on all fronts and it is actually the development and widespread adoption of narrowly focused, task-oriented AI (which is quite unlike us) that has been a great cultural shock. I point back to deep fakes and surveillance applications for AI which were likely foreseen by the AI community but caught the broader culture off guard.
Don't get me started on self-driving cars. I may need a dedicated thread just for that (and / or AI-in-engineering in general) :D

Akin to echolocation, sure. The underlying principle is similar (but also not). But we don't consult marine biologists when building such technology (at least, not so far as I can Google). We don't understand what it's used for. We just see the numbers. The technical need for such a technology (also, to avoid lawsuits. Can't stress that reason enough, because ugh). Radar operators in the infancy of radar understood this better than any self-driving car technology driver / creator seems to. It's a bit unfair to be picking on self-driving cars (and again - probably best in-depth for another thread) when it's symptomatic of the greater (admittedly capitalistic) industry that drives ideas for profit.

I agree on the dissonance, though I think it goes deeper - there are a lot of intersectional critiques of the application of AI (more than the theory) based on exclusion for race, gender, class, and so on. Even something as simple ("simple") as an automated London Tube system gets caught in the perpetual loop of "the drivers work stupid hours and never see their families, we need to do something about this -> but what jobs would the drivers do -> how about UBI -> UBI is socialist -> automated trains just aren't going to work out". The barriers are more cultural (slash reinforced by class and modern conservative ideology). Applied AI could surpass us on all fronts, but I don't ever believe it'd do it all at once. SKYNET (listen okay I know it's fictional but bear with me) isn't one thing - it's many things, all at once. It's general theory, applied to specific problems, pulled together under one cohesive framework.

Also, and again kinda going back to engineering, but there's a lot that took us by surprise (and I agree it did, culturally as well as professionally) because so many companies do jack **** with the platform and responsibilities they have. Twitter can absolutely annihilate accounts for specific phrasings of queer jargon, or for the vaguest threat of violence towards a popular (often verified) account, but they turn around and go "well we can't ban the Neo-Nazis because it's a technical challenge". The closest we got was when somebody leaked (or spoke off the record, I can't remember) that they can't ban people for white supremacy on the platform because if they enforced the rule unilaterally they'd implicate a sitting US senator (which is a whole other bucket of yikes I don't want to taint this interesting discussion on AI with).

The application of AI is a very complicated mess, which doesn't help us here :D And I'm no way in touch with what the brightest minds of our time on AI are actually theorising. I'm definitely more of a software engineer than a computer scientist, haha. But I think we phrase the subject the same way we phrase (human) intelligence; comparing everything else to us. Yeah, there are some neat bits we can nick or otherwise appropriate from other species, but that's simply to enhance what we already possess ourselves. To bring it back to self-driving cars a bit, they're programmed by humans, and possess human bias as a consequence (nevermind the detection technology that can reportedly fail to recognise darker skin tones).

Typing that made me think. Does AI suffer from the same flaw that a lot of technology does, in that we (generally, not us in this thread) see it as inherently better, or more pure, than us? We have a lot of science fiction on AI seeing humanity's horror and turning against us, but the moral tends to overwhelmingly be "but human love overpowers in the end" (apart from the really bleak stuff, hah). How do we deconstruct that? Can we?
 
I don't have a strict definition. It's advanced problem-solving ability.

That is a pretty strict (and narrow) definition already I think (though not a bad one at all).

We can check whether a machine understands a specific text, though, similarly to how we do language tests for humans. Give it a text and ask questions about it. If the test is made correctly, then in order to pass it one would need to understand both the text and the questions about it.

A machine that can give plausible answers from analyzing a text is definitely a massive achievement, but I would not be so fast to say a machine "understands" anything. It produces usable results from input, but those two are not the same thing.

In school I often managed to just read a cursory summary of a text, and then pretend my way through an exam just by repeating what I'd read and making some assumptions. It was usually enough for a good grade. I didn't understand anything about the text, but I was able to produce coherent enough answers. I had some data input (a cursory summary) and produced some intelligible results, but whether I understood the text or not was not knowable by any outside observer (my teacher). He could not know for a fact whether I had a bad interpretation of the text or whether I was simply pretending.

I always hesitate to assign any strictly human capabilities to non-human entities, even extended to other animals.

Because your calculator outperforms you only in calculations. But if it also outperforms you in solving math problems, translating texts, composing music, answering phone calls, and writing good posts on the CFC forum, then you may come to different conclusions.

I think this is where we disagree. I don't think it's even possible for someone to outperform someone else in terms of, say, composing music, because music is never objectively good or bad. Similarly, some translations are better than others, but there are no objective metrics to help us decide which translation of the Iliad is the best one. People have been discussing it for 2000 years. Also, art is in the end a result of the human experience, and that is something an AI cannot have. There will always be a fundamental disconnect, a gap, a lack of relatability, because we are in fact only tangentially related.

We're seen as the example to be held (up) to. We rarely examine the concept of intelligence as something that benefits a specific species (which, in my opinion, is what proper "AI" as the stories dream it would be - or maybe a multitude of diverse species, as we'd label them, I dunno). We rate everything generically, because of course humans are good at general-purpose "intelligence". We have mastered the tools that let us dominate the Earth (arguably badly. Give it a coupla decades and I'll report back :p). Even the Turing test, which I appreciate the merits of, means an AI has to fool a panel of humans in order to pass it - and that's seen as the gold standard for an interactive AI. That inherently puts a ceiling on what such a construct could learn.

That is exactly what I am saying. Not only is the discourse around AI heavily anthropomorphized, but our concept of intelligence (well, the general concept of intelligence) is both anthropomorphic and capitalistic in nature. We define intelligence according to human intelligence, and we assess intelligence according to the results (profits) it delivers. It's not that there aren't broad or multilateral definitions of intelligence, it's more that they're ignored. Ain't no one talking about social, emotional, etc. intelligence in the context of AI (or even in most IQ "science"). Other, non-human (animal) and non-animal (for example bacterial, or swarm) intelligence we rarely even consider. I think there's a lot to learn from, for example, the "intelligence" of our gut microbiome.
 
That's exactly how I approached a few subjects in high school as well, haha. I cared about others enough to engage, but so much of it was "well according to what I've read this is the conclusion the paper wants". Heck, the teachers even advised this at times, depending on the exam.

In hindsight school doesn't sound great (for learning), huh.
 
My school experience was actually fantastic and I did learn a lot, you see, it's just I'm a lazy slob :D
 
I point back to deep fakes and surveillance applications for AI which were likely foreseen by the AI community but caught the broader culture off guard.

Just go back to using film photography. Back then it was hard to fake. Now as for surveillance, we've had cameras up our asses ever since the cold war, long before AI. Surveillance is more of a government problem anyway.
 