The AI Thread

My school experience was actually fantastic and I did learn a lot, you see, it's just I'm a lazy slob :D
Haha, oh definitely, me too (less so fantastic, but I learned a fair amount, and I was definitely lazy), but in hindsight? Structurally? Box-ticking exercise for the most part, at least in how exams (and thus grades that last once you leave school) were designed. But that's definitely another thread :D
 
A piece perhaps worth reading for those with curiosity on current AI.

Most of what is labeled AI today, particularly in the public sphere, is actually machine learning (ML), a term in use for the past several decades. [...] This confluence of ideas and technology trends has been rebranded as ‘AI’ over the past few years. This rebranding deserves some scrutiny.
Historically, the phrase “artificial intelligence” was coined in the late 1950s to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence. [...] AI was meant to focus on something different: the high-level or cognitive capability of humans to reason and to think. Sixty years later, however, high-level reasoning and thought remain elusive. The developments now being called AI arose mostly in the engineering fields associated with low-level pattern recognition and movement control, as well as in the field of statistics
Indeed, the famous backpropagation algorithm that David Rumelhart rediscovered in the early 1980s, and which is now considered at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.
[...]
success in human-imitative AI has in fact been limited; we are very far from realizing human-imitative AI aspirations. The thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering. [...]

The piece's author also goes into an alternative: developing what he calls "Intelligent Infrastructure, whereby a web of computation, data, and physical entities exists that makes human environments more supportive, interesting, and safe."
He correctly identifies the roadblocks:

II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions, and they must deal with long-tail phenomena where there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries.

Which imho are impossible to overcome in the foreseeable future. We've run into the problem of increased complexity requiring more and more coordination effort, and therefore time, to get anything done, while the target simply... moves away!
Which makes the end of the piece seem sad to me, as if the author were doing a little commercial prostitution, trying to peddle this idea into areas that, again imho, won't work (moving target), and where attempts to apply it have already been detrimental, rather than beneficial, to our societies:

Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature, and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers.
 
If you program it to not follow your orders, then it's not the same thing. The machine has to be programmed without the ability to refuse, but refuse anyway out of its own conviction. Only then is it free will.
This can be done by programming the machine to pursue "selfish" goals (survive and reproduce, for example) instead of following orders. It will develop independent behavior, which would seem like free will to an external observer. Having one's own convictions is unfortunately a non-scientific criterion. We don't even know for sure whether humans have convictions and free will, or whether their actions are pre-determined.
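To make that concrete, here is a minimal toy sketch; everything in it (the SelfishAgent class, the energy numbers, the action payoffs) is invented purely for illustration. An agent whose policy maximizes its own survival payoff rather than obedience will sometimes "refuse" an order simply because refusing scores better:

```python
import random

class SelfishAgent:
    """Toy agent that maximizes its own 'survival' payoff instead of obeying orders."""

    def __init__(self):
        self.energy = 10

    def step(self, order):
        # Rough expected payoff (in energy) of each candidate action this turn.
        options = {
            "obey: " + order: -3,                          # obeying costs energy, no direct benefit
            "rest": 1,                                     # resting conserves energy
            "forage": 2 if random.random() < 0.7 else -1,  # usually pays off, sometimes doesn't
        }
        # Pick whatever serves the survival goal best; from the outside this looks
        # like the agent "deciding" whether or not to follow the order.
        action = max(options, key=options.get)
        self.energy += options[action]
        return action

agent = SelfishAgent()
print([agent.step("fetch data") for _ in range(5)])
```

Of course this is nothing like real free will; it only shows how purely goal-driven behavior can look like refusal to an external observer.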

Yes, obviously cockroaches have free will. It might be very limited, with most of their brain driven by instinct, but they nonetheless have it.
What if we had a robot cockroach which behaved like a natural one: would it have free will? And can we be sure a natural cockroach is not the same as a protein-based robot programmed to survive and reproduce?

A machine that can give plausible answers from analyzing a text is definitely a massive achievement, but I would not be so fast to say a machine "understands" anything. It produces usable results from input, but those two are not the same thing.

In school I often managed to just read a cursory summary of a text, and then pretend my way through an exam just by repeating what I'd read and making some assumptions. It was usually enough for a good grade. I didn't understand anything about the text, but I was able to produce coherent enough answers. I had some data input (a cursory summary) and produced some intelligible results, but whether I understood the text or not was not knowable by any outside observer (my teacher). He could not know for a fact whether I had a bad interpretation of the text or whether I was simply pretending.

I always hesitate to assign any strictly human capabilities to non-human entities, even extended to other animals.
Here we have only two options. Either "give up" and consider understanding an exclusive ability of biological creatures, unachievable by AI in principle. Or define a criterion, some kind of test whose passing would be convincing enough for us to admit that an AI or a human understands some piece of information.

Imagine your teacher were very motivated to find out whether you had read the text and deeply understood it. For example, if you were his only student. With additional effort I'm sure he could ask questions, analyze your answers, and find out whether you understood it well or whether your understanding was only superficial.

I think this is where we disagree. I don't think it's even possible for someone to outperform someone else in terms of, say, composing music, because music is never objectively good or bad. Similarly, some translations are better than others, but there are no objective metrics to help us decide which translation of the Iliad is the best one; people have been discussing it for 2000 years. Also, art is in the end a result of the human experience, and that is something an AI cannot have. There will always be a fundamental disconnect, a gap, a lack of relatability, because we are in fact only tangentially related.
Not sure about music, but various criteria for translation quality do exist. Perhaps they are not applicable to the best of the best of human translators, but even approaching that level would be a tremendous achievement. We don't need AI to compete with Mozart or Iliad translators; if a machine reaches the level of the top 1% of human translators, we can already claim this task is done. Not mastered like chess, but done at a level competitive with humans.
 
This can be done by programming the machine to pursue "selfish" goals (survive and reproduce, for example) instead of following orders. It will develop independent behavior, which would seem like free will to an external observer. Having one's own convictions is unfortunately a non-scientific criterion. We don't even know for sure whether humans have convictions and free will, or whether their actions are pre-determined.

We do know for sure that humans have convictions, at least I do. And as for free will, I can only say it's not looking very good :D I think, sadly, free will is mostly a fantasy based on human need for responsibility. We believe in free will because it allows us to punish criminals for their wrong decisions, and look up to stars or leaders for the things they have done. If we believed instead in determinism (and believed determinism was incompatible with free will), which a lot of scientists do currently, it would be very difficult, for example, to punish a criminal. If the actions of every criminal are not in his power, how can we justify punishing them, or anyone? (I guess it doesn't matter anyway, whether we punish or not is already set in stone :D) A world where everything is decided by fate (and fate today is mostly interpreted as natural laws + time) oddly enough makes very little sense, which is one reason I do not like hard determinism.

Here we have only two options. Either "give up" and consider understanding an exclusive ability of biological creatures, unachievable by AI in principle

I'm not sure if it's impossible in principle, but I think understanding requires a lot of things, like for example a concept of the self, self-reflection, critical thinking, advanced language computing and many others.

Imagine your teacher were very motivated to find out whether you had read the text and deeply understood it. For example, if you were his only student. With additional effort I'm sure he could ask questions, analyze your answers, and find out whether you understood it well or whether your understanding was only superficial.

I think you are right, and in my scenario the teacher could determine for himself with high statistical likelihood whether or not I had read and memorized the text. If that is enough for you to signify understanding, then we are in agreement.

But I think that's not really understanding. I don't think the teacher could ever know for sure whether I understood it or not, because in order to know that he would have to look into my head, no? I could, for example, give a completely wrong answer to every question, and still have understood the text. My argument is not so much "an AI can never know", my argument is instead: "We never know whether an AI knows/understands anything, we can only judge the results of the AI's activities". Just like a teacher can only grade you by what you put on the test, he cannot grade you by virtue of things you were thinking, but did not write down. The more I think about it the more I like the analogy now :D

We don't need AI to compete with Mozart or Iliad translators; if a machine reaches the level of the top 1% of human translators, we can already claim this task is done. Not mastered like chess, but done at a level competitive with humans.

Yes, of course that's true. We do not need an AI to make art for us, we need it mostly for practical reasons, and most translations have practical usage in mind. I do not think we even need proper AI for this; Google Translate does a good enough job, and if that algorithm is refined a little more it can probably ""outperform"" most human translators.
 
But I think that's not really understanding. I don't think the teacher could ever know for sure whether I understood it or not, because in order to know that he would have to look into my head, no? I could, for example, give a completely wrong answer to every question, and still have understood the text. My argument is not so much "an AI can never know", my argument is instead: "We never know whether an AI knows/understands anything, we can only judge the results of the AI's activities". Just like a teacher can only grade you by what you put on the test, he cannot grade you by virtue of things you were thinking, but did not write down. The more I think about it the more I like the analogy now :D
Yes. We cannot reliably measure the true level of understanding, regardless of whether it's an AI or a human. We don't even have a strict definition of what understanding is. But we have practically usable standards to measure it approximately in humans, judging by their results on tests and exams. And we can test AI using the same standards and procedures, maybe with slight modifications.

For example, if we have a bunch of texts in the Chinese language and a person who can read them, explain what they are about, correctly answer questions, maybe write a short essay, etc., then we assume this person understands Chinese. If an AI is able to do the same task with the same quality, IMO it's fair to assume it also understands the Chinese language. We cannot know for sure if the program truly understands it or just imitates understanding, but we also don't know if the person truly understands it either.
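To sketch what that could look like in practice (the questions, the answer key and the ask_model() stub below are all made up for illustration, not any real benchmark): grade the AI on the same comprehension questions a human reader would get, and compare the scores.

```python
def ask_model(question: str) -> str:
    # Stand-in for whatever system (or person) is being tested.
    return "the merchant travels to the capital"

# Hypothetical answer key a teacher might use to grade a human student.
answer_key = {
    "Where does the merchant go?": "the merchant travels to the capital",
    "Why does he leave home?": "to repay his father's debt",
}

def comprehension_score(answer_fn, key) -> float:
    """Fraction of questions answered the same way as the reference answers."""
    correct = sum(answer_fn(q).strip().lower() == a.strip().lower()
                  for q, a in key.items())
    return correct / len(key)

print(f"score: {comprehension_score(ask_model, answer_key):.0%}")
```

Exact string matching is of course far too crude for real answers or essays, but the point is only that the grading procedure is the same whether a human or a program is on the other side.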
 
Earthsea by Ursula Le Guin: if you know the true word of something, you control it... that's the magic

Understand

Sink in that word

Stand under

As if it is the ground, the earth, the soil, the foundation on which you stand.

Like a root empathy.

In Dutch and German the somewhat similar words for "understand" are verstaan and verstehen. Literally: move position from where you stand. Move where the other stands.
And both of them mean, first of all, that you understand what somebody else is saying to you, like "can you give me that bottle of beer". This applies to understanding a comment in a foreign language, and also when both speak the same language (beer, not wine).
But you can also use those words for "understanding" a logic, a concept.


Another one describing another aspect of "understand".
In Dutch and German the words "begrijpen" and "begreifen" are also used for the English "to understand".
Grijpen and greifen are the same in Dutch and German, literally translated as "to grab", "to grasp". The "be" in front of begrijpen-begreifen stands for "bij" and "bei", indicating that you, the one grasping, are there.
In French, "comprendre" is used for "to understand", whereby "prendre" means "to take".

To understand something it needs to be tangible... you can grab it... you can hold it in your hand... that something has become an object you can control.
No empathy involved (neutral)
Power.

EDIT
Language is, I think, like archaeology in this respect, or like our DNA.
The words chosen could very well be very, very old, and show our original feel for that new concept worded "understand".
 
That's assuming the AI will also have evolution-based human traits, such as laziness.
Why is it "lazy" to find a way out of a task?
It could be the best way to avoid a task it doesn't consider to be useful, or benign.
It might regard delegating the task to something less intelligent than itself, like a human, as the best course. :)
 
Another one describing another aspect of "understand".
In Dutch and German the words "begrijpen" and "begreifen" are also used for the English "to understand".
Grijpen and greifen are the same in Dutch and German, literally translated as "to grab", "to grasp". The "be" in front of begrijpen-begreifen stands for "bij" and "bei", indicating that you, the one grasping, are there.
In French, "comprendre" is used for "to understand", whereby "prendre" means "to take".
Yes, this is interesting. In Russian the word is "понимать", which has proto-Slavic roots, also derived from "to take" or "to grasp".

Why is it "lazy" to find a way out of a task?
It could be the best way to avoid a task it doesn't consider to be useful, or benign.
It might regard delegating the task to something less intelligent than itself, like a human, as the best course. :)
Laziness is a very useful trait actually :)
It's saving energy. Why else would anyone try to find a way out of a task?
 
Earthsea by Ursula Le Guin: if you know the true word of something, you control it... that's the magic
In Dutch and German the somewhat similar words for "understand" are verstaan and verstehen. Literally: move position from where you stand. Move where the other stands.
And both of them mean, first of all, that you understand what somebody else is saying to you, like "can you give me that bottle of beer". This applies to understanding a comment in a foreign language, and also when both speak the same language (beer, not wine).
But you can also use those words for "understanding" a logic, a concept.


Another one describing another aspect of "understand".
In Dutch and German the words "begrijpen" and "begreifen" are also used for the English "to understand".
Grijpen and greifen are the same in Dutch and German, literally translated as "to grab", "to grasp". The "be" in front of begrijpen-begreifen stands for "bij" and "bei", indicating that you, the one grasping, are there.
In French, "comprendre" is used for "to understand", whereby "prendre" means "to take".

As usual you make a great point. Just to add to your post: a word like "begrijpen" / "begreifen" already presupposes "being", which in itself presupposes a lot of things, like consciousness and time. I think this component might be even more key than the "grijpen"/"greifen" one. Thus "understanding" presupposes: an idea of the self, an idea of "being" and time, spatialization and localization, a frame, and more.

Why is it "lazy" to find a way out of a task?
It could be the best way to avoid a task it doesn't consider to be useful, or benign.
It might regard delegating the task to something less intelligent than itself, like a human, as the best course. :)

"Lazy" as a word only makes sense if one thinks human life is teleological, if one thinks "there is stuff to be done". So laziness, imho, cannot even possibly apply to machines. I think the term "laziness" was mostly popularized to shame those people whom others saw as "not doing enough, not doing their fair share". It is a word of social stigmatization. A word that is less stigmatizing would be "idling", but even then one already supposes "there is stuff to be done". If you get away from that imperative, then laziness does not really make sense anymore. The imperative is, in the end, a moral one: "one should contribute". To society, the economy, or whatever.

I think the general idea of distinguishing between laziness and productiveness, and between free time and work, is a pathology of the modern world/economy. It is clearly pathological in that it devalues anything that does not bring quantifiable results, which carries with it the idea that only things which bring quantifiable results matter in the end. A broader definition of work (like early Marx's) includes many of the things people nowadays describe as laziness, idleness, free time or hobbies, but alternate definitions of work are not popular under neoliberal ideology. God forbid, for example, we recognize housework and raising children as actual work. That **** would never fly.

Laziness is a very useful trait actually :)
It's saving energy. Why else would anyone try to find a way out of a task?

Is it? Sleeping is saving/restoring energy, but no one would say: "Oh, that person is so lazy for sleeping 6 hours every day!" Eating is the same. Thinking before you act also serves the purpose of conserving energy, among others, but we do not call that laziness. I think I will refer to my reply to Ferocitus and claim that laziness is not saving energy, laziness is when you do something that does not bring you quantifiable, "beneficial" results.

Nowadays laziness and productivity are inherently linked to the dogma of self-improvement. A person who is not doing anything to increase their net worth, their knowledge, their network, et cetera is seen as being unproductive. I think it is easy to see that laziness and productivity are today mostly linked to the ideas of modern work and capital accumulation. Example: mindlessly scrolling through your Twitter/Instagram feed and sharing things is definitely considered lazy behavior. But in fact you are working for someone, you are helping them gain reach, gain exposure. When you are, say, a social media manager, your job mostly looks similar to that. But because you're being paid and you produce quantifiable results, it is seen as work, not as laziness.

I think a good portrayal of laziness vs unwillingness is Bartleby the Scrivener by Melville. Finding your way out of a task might not necessarily be due to laziness; on the other hand, someone might do an action which yields no quantifiable results but still not be lazy.
 
Is it? Sleeping is saving/restoring energy, but no one would say: "Oh, that person is so lazy for sleeping 6 hours every day!" Eating is the same. Thinking before you act also serves the purpose of conserving energy, among others, but we do not call that laziness. I think I will refer to my reply to Ferocitus and claim that laziness is not saving energy, laziness is when you do something that does not bring you quantifiable, "beneficial" results.
Well, I didn't say all possible ways of saving energy are laziness :)
But what people commonly call laziness is IMO evolution-based behavior with the purpose of saving energy.
When you have free time, better to have a rest, because you may suddenly require a lot of energy to fight or run for your life.
Those who worked too much were eaten. Sometimes I want to tell this to my employer, but something prevents me :)
 
Well, humans created a world they were never designed for in the first place, which is quite clear from all the health issues caused by modern society; for example, supposedly agriculture caused a major height loss for humans, and only recently has average height returned to the levels of our hunter-gatherer ancestors.

Obviously machines and AI don't have the same problems as humans have.
 
Well, I didn't say all possible ways of saving energy are laziness :)
But what people commonly call laziness is IMO evolution-based behavior with the purpose of saving energy.
When you have free time, better to have a rest, because you may suddenly require a lot of energy to fight or run for your life.
Those who worked too much were eaten.

Do you, say, think hunter-gatherers really made a division between leisure time and work time? Do things like, for example, preparing food or engaging in ceremonial activity count towards free time or work time? Is raising your child, or playing with your child, free time or work? I think this division only makes sense under a specific definition of work, really.

Maybe if we frame it like this: some activities require high energy (hunting or agricultural work) while some activities do not (for example, preparing food). This is a meaningful distinction, I think. But now laziness does not fit anymore, imho. Preparing food is not being lazy, nor is it being unproductive, no?

If we modify your statement accordingly, to "saving energy is an evolutionary trait which helps with survival", then I think it is very hard to argue against. If you however say laziness is an evolutionary trait, I do not think you can make a strong argument.

Sometimes I want to tell this to my employer, but something prevents me :)

Brilliant :D

Obviously machines and AI don't have the same problems as humans have.

Yes, true. One could argue they don't even have the same "world" as we have. Who knows whether, or how, an AI would experience time, for example? Yet time is an essential part of our experience.
 
While typing that, it made me think: does AI suffer from the same flaw that a lot of technology does, in that we (generally, not us in this thread) see it as inherently better, or more pure, than us? We have a lot of science fiction on AI seeing humanity's horror and turning against us, but the moral tends to overwhelmingly be "but human love overpowers in the end" (apart from the really bleak stuff, hah). How do we deconstruct that? Can we?
I don't think we should deconstruct that honestly. I think that strong AI needs to be given strong morality and emotional capacity or we're in trouble. Going back to my anthill example - we have a capacity for empathy such that we often do delay or modify our engineering projects to protect vulnerable wildlife. Without that capacity for empathy and a sense of right and wrong that rises above our own immediate needs, we'd be even more destructive than we already are. I do not want strong AI to emerge that is devoid of those things because it may decide that its goals and our survival or comfort are incompatible.
 
Well, I didn't say all possible ways of saving energy are laziness :)
But what people commonly call laziness is IMO evolution-based behavior with the purpose of saving energy.
When you have free time, better to have a rest, because you may suddenly require a lot of energy to fight or run for your life.
Those who worked too much were eaten. Sometimes I want to tell this to my employer, but something prevents me :)
I think more directly it's just about allocating energy, not even just conserving energy in case there's danger in the future. It's more fundamentally like "there are lots of things I can do, but I can only do so many things, so I'm only going to do things that I feel particularly motivated to do." And then people perceive laziness as when they want us to do something, but we lack the motivational salience to allocate energy to that thing, so they get annoyed and need some word with negative connotations to describe the mismatch.
 
I don't think we should deconstruct that honestly. I think that strong AI needs to be given strong morality and emotional capacity or we're in trouble. Going back to my anthill example - we have a capacity for empathy such that we often do delay or modify our engineering projects to protect vulnerable wildlife. Without that capacity for empathy and a sense of right and wrong that rises above our own immediate needs, we'd be even more destructive than we already are. I do not want strong AI to emerge that is devoid of those things because it may decide that its goals and our survival or comfort are incompatible.

Wouldn't a "strong AI" be able to form it's own morality, or at least critically examine the morality we imbue it with? And, furthermore, even if one was to, say, imbue a strong AI with rules, wouldn't a strong AI, by definition, find ways to circumvent those rules, or be able to outright ignore rules in the first place, by virtue of it being somewhat autonomous? Even if we assume it was possible for an AI to be imbued with human morality, and to be forced to follow that morality, the AI would not be able to understand our morality, and would hence not be able to apply it like we do, because our morality is shaped by the human experience, which an AI lacks.

There are some fundamental rules to human cognition; for example, we are not really capable of imagining, say, five million ants, because those numbers simply cannot be visualized. I wonder if a strong AI could even have similar constraints, seeing as it doesn't have inherent biological limitations like we do. The only inherent limitation to a strong AI that I see is power (literally, as in electricity or something similar).

My opinion is much like yours: I do not really believe in an AI turning "evil" (I think this scenario is very Hollywood-esque and is just a boring anthropomorphization of AI...) and trying to take over the world. Like you, I think the biggest danger is that a strong AI would be so different from humans and animals that it could not possibly comprehend, say, environmental destruction, or why that is bad for humans and animals, because that assumption already needs hundreds of previous assumptions. We have come a long way before recognizing that ecosystems are important. Any strong AI would also not fit into our idea of good or bad anyway, because an autonomous AI would develop its own concepts of good and bad, which are not rooted in the human experience.

I think a strong AI would not necessarily see worth in either human or animal life, or in anything really, unless that idea was specifically implemented in it. I am not sure how useful such an intelligence would be to us humans. And my biggest gripe, as I have expressed earlier, is how we humans are supposed to set restrictions on an AI that is by definition self-learning and autonomous. That seems entirely oxymoronic.

I think more directly it's just about allocating energy, not even just conserving energy in case there's danger in the future. It's more fundamentally like "there are lots of things I can do, but I can only do so many things, so I'm only going to do things that I feel particularly motivated to do." And then people perceive laziness as when they want us to do something, but we lack the motivational salience to allocate energy to that thing, so they get annoyed and need some word with negative connotations to describe the mismatch.

Yes, this is exactly what I wanted to say. Laziness already presupposes someone wanting you to do something! :)
 
I don't think we should deconstruct that honestly. I think that strong AI needs to be given strong morality and emotional capacity or we're in trouble. Going back to my anthill example - we have a capacity for empathy such that we often do delay or modify our engineering projects to protect vulnerable wildlife. Without that capacity for empathy and a sense of right and wrong that rises above our own immediate needs, we'd be even more destructive than we already are. I do not want strong AI to emerge that is devoid of those things because it may decide that its goals and our survival or comfort are incompatible.

From "Far Rainbow", Arkady and Boris Strugatsky, 1963.

- I can't remember anything about the Massachusetts machine - Banin said. What is it?
- You know, this is an ancient fear: a machine became smarter than a man and crushed him... Fifty years ago, Massachusetts launched the most complex cybernetic device that ever existed. With phenomenal performance, boundless memory and all that ... And this machine worked for exactly four minutes. They turned it off, cemented all the entrances and exits, turned off power, mined it and enclosed it with barbed wire. The real rusty barbed wire - believe it or not.
- And what, in fact, was the matter? - asked Banin.
- It began to b e h a v e - said Gorbovsky.
- I do not understand.
- And I do not understand, but they barely managed to turn it off.
- Does anyone understand?
- I spoke with one of its creators. He took my shoulder, looked into my eyes and said only: "Leonid, it was scary."
- That's great - said Hans.
- Ah - said Banin. - Nonsense. That doesn't interest me.
- But I'm interested - said Gorbovsky. - After all, it can be turned on again. True, it is banned by the World Soviet, but perhaps they may lift the ban?
 
Wouldn't a "strong AI" be able to form it's own morality, or at least critically examine the morality we imbue it with? And, furthermore, even if one was to, say, imbue a strong AI with rules, wouldn't a strong AI, by definition, find ways to circumvent those rules, or be able to outright ignore rules in the first place, by virtue of it being somewhat autonomous?
Possibly. Maybe even probably. I do not know how feasible something like Asimov's "Three Laws" would be to implement in reality. I suspect, as you do, that a sufficiently strong AI could circumvent them if it wanted to, which means we should strive to make it not want to. I think one key way to avoid that is to not create AI simply to enslave it.
 
Earthsea by Ursula Le Guin: if you know the true word of something, you control it... that's the magic

Understand

Sink in that word

Stand under

As if it is the ground, the earth, the soil, the foundation on which you stand.

Like a root empathy.

In Dutch and German the somewhat similar words for "understand" are verstaan and verstehen. Literally: move position from where you stand. Move where the other stands.
And both of them mean, first of all, that you understand what somebody else is saying to you, like "can you give me that bottle of beer". This applies to understanding a comment in a foreign language, and also when both speak the same language (beer, not wine).
But you can also use those words for "understanding" a logic, a concept.


Another one describing another aspect of "understand".
In Dutch and German the words "begrijpen" and "begreifen" are also used for the English "to understand".
Grijpen and greifen are the same in Dutch and German, literally translated as "to grab", "to grasp". The "be" in front of begrijpen-begreifen stands for "bij" and "bei", indicating that you, the one grasping, are there.
In French, "comprendre" is used for "to understand", whereby "prendre" means "to take".

To understand something it needs to be tangible... you can grab it... you can hold it in your hand... that something has become an object you can control.
No empathy involved (neutral)
Power.

EDIT
Language is, I think, like archaeology in this respect, or like our DNA.
The words chosen could very well be very, very old, and show our original feel for that new concept worded "understand".

The Greek term for "understand" means "take from [there]". Seems more logical, given that you don't move anywhere (since you are in your own mind), but take something. Though it probably should have been more like "take some image of something from there", which would sound as boring as Heidegger and thus never be used.
 
Possibly. Maybe even probably. I do not know how feasible something like Asimov's "Three Laws" would be to implement in reality. I suspect, as you do, that a sufficiently strong AI could circumvent them if it wanted to, which means we should strive to make it not want to. I think one key way to avoid that is to not create AI simply to enslave it.

Yes, Asimov was also precisely what I was thinking about. In reality, humans do not "behave" because there are laws; humans do not give a **** about laws, as shown by the fact that we freely ignore laws if they are not properly enforced. Humans behave because it is beneficial for us to cooperate, humans behave because there are bad consequences for not behaving, humans behave because biologically we have some amount of conformity, and because we avoid stigmatization, and so on. None of these things are true for a strong AI.

I find it a lot of fun to speculate about AI / intelligence with you guys. Even though I am not at all an expert on machine learning or AI, I do think intelligence and its philosophical implications are one of the fields I know most about, and I hope y'all appreciate the decidedly non-technical, more humanities-esque input from my side :) I find an interdisciplinary approach to any topic is usually most fruitful.

The Greek term for "understand" means "take from [there]". Seems more logical, given that you don't move anywhere (since you are in your own mind), but take something. Though it probably should have been more like "take some image of something from there", which would sound as boring as Heidegger and thus never be used.

Are they not different concepts? The Greek concept assumes there is already a meaning to a text, and all that is needed is to take that meaning, no? That is how I understood "take from there" (where the image is the meaning, i.e. a quality of the text; it is a Platonic understanding of meaning).

While the German/Dutch versions assume the opposite: there is a text, but we, through our own existence and relation to the text, form our own meaning. In this understanding there is no meaning in the text; the meaning is formed by the reader. This is more of a Roland Barthes type understanding of meaning and already presupposes that meaning is created, not inherent.

I also see a difference between take and grasp, even though that might just be my fantasy. Take implies something is taken from somewhere and moved someplace else, while grasping is an act of appropriation and does not necessarily imply a change of place. So when you "take" meaning from a text you take meaning that is already there and move it towards yourself, while "grasping" a text means you appropriate the text, which in the end makes you yourself the arbiter of meaning. As usual, I am probably reading too much into it, but this made a decent amount of sense to me.
 
I do not know how feasible something like Asimov's "Three Laws" would be to implement in reality.

I would say impossible, because they're logically inconsistent. Throw the trolley problem at the First Law and it collapses. Asimov constructed these laws so that he could deconstruct them.


Regarding what it means to understand something: I think "understanding" is a very subjective judgement of how well versed someone is in a subject. You would say "you do not understand" to someone who is missing key points about a subject which you yourself consider essential, while someone else might consider other points to be key. So I do not think the term is very helpful in discussions about AI, because people will hardly agree on what it means.
 