The AI Thread

I am not seeing how any computer (just AI, no DNA attached) has agency. If all that is going on is the act (e.g. a computation), there is no room for an agent, nor can the agent morph into a computation or vice versa.
It doesn't matter how complicated the action/computation/running code is. I have to suspect that where agency exists (in humans, but also in ants and anything else alive), it does so because there is a very clear split between the action of any given moment and the entity which is undergoing or undertaking that action, consciously or not.

Some kind of sense seems to be needed for that, although in humans sense arguably had to exist anyway: before some early stage of civilization, a human being would simply perish if their senses did not drive them to do what was needed.
 
The Greek term for "understand" means "take from [there]". That seems more logical, given that you don't move anywhere (you being within your own mind); rather, you take something. Though it probably should have been more like "take some image of something from there", which would sound as boring as Heidegger and thus never be used.

You indeed "get" something when you understand something from someone else.

But the neutral "take from [there]" does not show that, in order to take it, there are (or could be) empathy hurdles involved. That aspect is handled in understand/stand-under.
If "the other" converts his understanding into some universal language, abstract and well-defined enough, there are seemingly no hurdles.
But the point, I think, is that there is no "universal language", and if the other converts his understanding into a universal language, it will no longer be "his understanding".
What is transferred can come close... a concept for a crankshaft that converts rotation into up-and-down motion can certainly be transferred: the engine built from it will perform, planks will be sawn by windmills, clothes woven.
=> At the do-level, for the do-aspect, as object, it will function.
 
Some kind of sense seems to be needed for that, although in humans sense arguably had to exist anyway: before some early stage of civilization, a human being would simply perish if their senses did not drive them to do what was needed.

Ants do fine without a sense of purpose (we assume); they still get things done. I think human beings developed sense or meaning as a response to becoming self-conscious. Sense, meaning or purpose is based on that psychological need, and it only exists to fill that need.

I am not seeing how any computer (just AI, no DNA attached) has agency. If all that is going on is the act (e.g. a computation), there is no room for an agent, nor can the agent morph into a computation or vice versa.
It doesn't matter how complicated the action/computation/running code is. I have to suspect that where agency exists (in humans, but also in ants and anything else alive), it does so because there is a very clear split between the action of any given moment and the entity which is undergoing or undertaking that action, consciously or not.

Some kind of sense seems to be needed for that, although in humans sense arguably had to exist anyway: before some early stage of civilization, a human being would simply perish if their senses did not drive them to do what was needed.

That is the distinction Hobbs and I were talking about, wherein it is assumed that so-called "strong AI" has a form of agency or autonomy (where this would come from is not explained yet; it's a hypothetical, after all).

Also, I am unsure if we can assume that human beings or other animals have agency; certainly we have not been able to prove it empirically yet. But I think we can say this:

Let's assume natural laws. Let's assume time and space. A human being will always be "doing something", like sitting, reading, thinking. In our everyday life, we have something called a choice. Let's assume you go to the supermarket and decide what to buy. There is only one way you can decide. You can decide to buy an apple, or decide to buy a banana. You can decide to buy both a banana and an apple. You cannot decide to both buy an apple and not buy an apple. That is impossible; they are mutually exclusive. This thought experiment shows: it is irrelevant whether we live in a deterministic or non-deterministic universe, there is only one way we will de facto decide. You cannot decide both ways; you cannot "buy the apple and also not buy the apple". You will always end up doing something (even if that means not buying anything), and there will always be only one way you decide.

So now that we know this, we can frame this in different ways.

An incompatibilist would say that, assuming we know the state of the universe before you go to the grocery store, and knowing natural laws and time, we can know for a fact what you will end up doing; it is even calculable, and unchangeable. It is not a result of your free will, but a result of millions of physical events (brain chemistry et cetera). There is no meaningful choice here: no matter how hard you think about whether you want the apple or the banana, everything you have thought, are thinking, and will be thinking is predestined. Therefore there is no "choice", and no free will.

A compatibilist would maybe say that our brain is not a closed system, and that our consciousness is not merely a result of material determinism. He would argue that not every thought is predestined, because thoughts are not entirely physical, and that therefore you can make an actual decision based on your own thoughts.

My argument would be: both of those are completely irrelevant. There is only one de facto way any person can decide at any time. There are near-infinite possibilities to decide between, but in the end we can only execute one of those possibilities. The incompatibilist argues that all the things leading up to the decision are predestined; the compatibilist argues that they're a function of agency/autonomous thinking. However, they are both describing the exact same thing. Whether we decide freely or not seems completely arbitrary. Moreover, even if we assume free will, that does not mean our decisions are entirely free: we are still constrained by our beliefs, by our cognitive biases, by our material conditions, and so forth.

One thing is clear: in both cases, a string of events leads us to do specifically one thing. You can frame this as agency or as determinism. But it's the same thing, no? There are factors which influence our thoughts/cognition, which in turn influence our response/action. It seems only the framing is different.

When I decide whether to take the apple or the banana, I will think about which one tastes better, which one costs more, and so forth. Does it matter whether I freely decide that the apple tastes better than the banana, or whether the entirety of my past experiences forces me to prefer the apple to the banana? What is the actual difference between those two scenarios? Surely a statement such as "the apple tastes better because it's more sour" is completely subjective and arbitrary, and every decision boils down to axioms like this. When you try to pin down free will to what it really is, it seems super elusive. Who is the "I" deciding that one is tastier than the other? Is it my brain, my consciousness, my subconscious? Free will seems to be about two things: 1) freely cultivating and exercising a will, and 2) having freedom of choice in your actions. But in what way does the illusion of choice meaningfully differ from "actual" free choice when there is only one way we will de facto decide? No one has been able to answer this for me.
 
^By "agency" I don't mean anything fancy like free will or lack of it. I just mean that there is a separate entity from the actual program which is running. A thinker is not morphing into a thought, but remains a core which senses the thought running - let alone being affected by myriads of unconsciously running routines. A computer's effect is just the calculation, not a sense of it, and making another calculation be a feedback of the first calculation doesn't solve anything: if you have a ghost city the issue won't be solved by another forking or looping pathway.

You indeed "get" something when you understand something from someone else.

But the neutral "take from [there]" does not show that, in order to take it, there are (or could be) empathy hurdles involved. That aspect is handled in understand/stand-under.
If "the other" converts his understanding into some universal language, abstract and well-defined enough, there are seemingly no hurdles.
But the point, I think, is that there is no "universal language", and if the other converts his understanding into a universal language, it will no longer be "his understanding".
What is transferred can come close... a concept for a crankshaft that converts rotation into up-and-down motion can certainly be transferred: the engine built from it will perform, planks will be sawn by windmills, clothes woven.
=> At the do-level, for the do-aspect, as object, it will function.

It can be said that "understanding" something at any rate means that you are able to form your own notion of that something, so in effect you bring something from some not yet distinct other-region of your own mental world (which reflects the thing you try to understand) to the realm of consciousness and stable thinking. So in this sense you bring something from there to here, just that both there and here are essentially in your mind. :)

Btw, the Greek term for science, episteme, just means to stand over something (meaning that you understand it, or just that you became a stalker of a stable enough collection of acquired notions :p )
Lastly, a synonym of katalavaino (take from []) is katanoo, which means "think of from []".
 
^By "agency" I don't mean anything fancy like free will or lack of it. I just mean that there is a separate entity from the actual program which is running. A thinker is not morphing into a thought, but remains a core which senses the thought running - let alone being affected by myriads of unconsciously running routines.
What's the nature of this agency in your opinion - is it something more than electro-chemical processes in our brain?
Is there any particular reason to assume it cannot be the result of calculations - maybe very complex ones, but theoretically implementable in a machine?
 
What's the nature of this agency in your opinion - is it something more than electro-chemical processes in our brain?
Is there any particular reason to assume it cannot be the result of calculations - maybe very complex ones, but theoretically implementable in a machine?

Well, do you have a sense of any collections of (however many) thoughts that actually form an Ego? Besides, we already have one Ego, so it's not like there is actual room for a second one, though you can theoretically split yourself into many parts which arguably form cores to some extent (but more than likely not past some point; e.g. only one "you" will get info on your health state).
One can always try to trick by making something similar, but surely the point isn't just to trick, no? And I do not see how you get from literally zero sense (computer) to even some infinitesimal sense. Though with DNA you'll already have the bio matter take care of the phenomenon of sense.
 
^By "agency" I don't mean anything fancy like free will or lack of it. I just mean that there is a separate entity from the actual program which is running.

And in the case of us humans, or in the case of an animal, what is that separate entity?

A thinker is not morphing into a thought, but remains a core which senses the thought running.

The way I understand this is that you presuppose mind-body dualism, where our biology is the program, and the activity is thinking, while the thoughts themselves are separate from that/from us, and are not "our" thoughts, but something we can simply grasp, or find. Is that correct?

In that conception, where do thoughts "come from", for lack of a better word?

It can be said that "understanding" something at any rate means that you are able to form your own notion of that something, so in effect you bring something from some not yet distinct other-region of your own mental world (which reflects the thing you try to understand) to the realm of consciousness and stable thinking. So in this sense you bring something from there to here, just that both there and here are essentially in your mind. :)

Yes, exactly, that was also my take. In the end, I think understanding is very close to "appropriation". There is
1) an impetus from something outside of you, experienced and internalized, and
2) a momentum of original thought which captures that impetus and transforms it.

Maybe that is the most succinct definition I can come up with. As we already stated earlier, this presupposes some concept of self (inside vs. outside), being (experience), agency (original) and conscious thought (transform). None of these qualities are found in either "AI" or machine learning; about neural networks I am not sure.

Well, do you have a sense of any collections of (however many) thoughts that actually form an Ego? Besides, we already have one Ego, so it's not like there is actual room for a second one, though you can theoretically split yourself into many parts which arguably form cores to some extent (but more than likely not past some point; e.g. only one "you" will get info on your health state).
One can always try to trick by making something similar, but surely the point isn't just to trick, no? And I do not see how you get from literally zero sense (computer) to even some infinitesimal sense. Though with DNA you'll already have the bio matter take care of the phenomenon of sense.

I will freely admit I don't understand this :lol:
 
And in the case of us humans, or in the case of an animal, what is that separate entity?



The way I understand this is that you presuppose mind-body dualism, where our biology is the program, and the activity is thinking, while the thoughts themselves are separate from that/from us, and are not "our" thoughts, but something we can simply grasp, or find. Is that correct?

In that conception, where do thoughts "come from", for lack of a better word?

I think that thoughts are in a way a bit like a body of water in which you move, although at the same time you are also the water and anything around it. Basically a thought is something separate from the actual Ego, since the Ego has to be a core and more stable, while a thought is a progression and can alter more dramatically or even be stopped - or negated. Though I won't profess to be able to give an analytical view of how thoughts are distinct from the Ego. FWIW, I was very heavily involved in this very question in my first year of university. It led to a not-so-nice mental collapse, and not much of a breakthrough either :) But I was 18.5 then, and thought this was to be my work in life.
 
Well, do you have a sense of any collections of (however many) thoughts that actually form an Ego? Besides, we already have one Ego, so it's not like there is actual room for a second one, though you can theoretically split yourself into many parts which arguably form cores to some extent (but more than likely not past some point; e.g. only one "you" will get info on your health state).
One can always try to trick by making something similar, but surely the point isn't just to trick, no? And I do not see how you get from literally zero sense (computer) to even some infinitesimal sense. Though with DNA you'll already have the bio matter take care of the phenomenon of sense.
I'm not sure things like a separate Ego and sense even exist. Our thoughts are products of brain activity, and what we consider an Ego is also a part of it. We may sense our Ego as something like a phonograph on which our thoughts are being played, but this is probably an illusion, which we have only because Mrs Evolution considered that being self-aware is beneficial for our survival.
 
I don't think we should deconstruct that honestly. I think that strong AI needs to be given strong morality and emotional capacity or we're in trouble. Going back to my anthill example - we have a capacity for empathy such that we often do delay or modify our engineering projects to protect vulnerable wildlife. Without that capacity for empathy and a sense of right and wrong that rises above our own immediate needs, we'd be even more destructive than we already are. I do not want strong AI to emerge that is devoid of those things because it may decide that its goals and our survival or comfort are incompatible.
The problem is - how do you give that to an AI? Morality and emotional capacity can also trend towards destruction. We still have to imbue it with reasonably human qualities to this end. The problem is, our morality is rooted at its core in helping our fellow person, and our fellow person specifically. Not our fellow fox. Or vole. Or badger. Or coral reef. We can make efforts towards all of these species and ecosystems, but typically such efforts fall apart when they come up against human greed, or even basic human self-preservation. The latter is less of a situation these days, but the former seems to be on an upward swing.

How does an AI reconcile that with discriminating against at least some elements of humanity? Where does its self-learning take it from there?

Doomsday scenarios involving AIs are always a cliche (I can't remember who criticised that - I think it was you) - I agree. But then what does an AI do when presented with the current (or near-future) iteration of humanity? Does it justify bias against bad-faith actors, or outright greed? We can't even get a handle on it ourselves. I don't know; a lot of my early science fiction thoughts were influenced by (early) Asimov and the Hitchhiker's Guide to the Galaxy. As I look at the state of the world, and have serious discussions with my wife about the kind of Earth we're raising children in (something that would've given us pause half a decade or so ago, had we known)... what does an AI's morality do in this context? Does it benefit from unparalleled multitasking and fantastic accuracy in only discriminating (in whatever context the AI is performing a nominally superhuman task that affects a human, or humans, in some way) against people who deserve it?

We can't do that ourselves accurately. How can we build an AI with enough rigour that even with self-learning and a relative lack of starter bias it can overcome the inherent weaknesses in its creators?

I'm not trying to gotcha you here; this is something I trip my head against a lot. I'm not a psychologist, or versed well enough in philosophy at all. I just can't understand how, in practice, we can devise some kind of moral input that allows such a construct to help the world without in some way also going a tad GLaDOS on us.
 
I'm not sure things like a separate Ego and sense even exist. Our thoughts are products of brain activity, and what we consider an Ego is also a part of it. We may sense our Ego as something like a phonograph on which our thoughts are being played, but this is probably an illusion, which we have only because Mrs Evolution considered that being self-aware is beneficial for our survival.

It doesn't really matter if on some deeper level it is separate or not; I mean, maybe on some vastly deeper level every notion we have now is like a Duplo block. What matters is that, readily, you can tell the difference between sensing that you exist and having any particular thought.
 
Besides, we already have one Ego

Do we?
IDK

I feel no single, distinct Ego;
it feels much more like a team inside me... or, better worded, some kind of continuum...
I am not even sure that my being the spectator of all that is the same all the time.

The only thing that is perhaps safe to say is that "I" share the history of "myself".
 
Do we?
IDK

I feel no single, distinct Ego;
it feels much more like a team inside me... or, better worded, some kind of continuum...
I am not even sure that my being the spectator of all that is the same all the time.

The only thing that is perhaps safe to say is that "I" share the history of "myself".

Call it one "stable" then. After all, I doubt that what remains "the same" is more than a small core, and it could very well also be like a ship of Theseus.
 
TIL about the ship of Theseus :)
 
Yes, this is interesting. In Russian the word is "понимать" ("to understand"), which has Proto-Slavic roots and is also derived from "to take" or "to grasp".


Laziness is a very useful trait actually :)
It saves energy. Why else would anyone try to find a way out of a task?
To avoid being blamed for the consequences of a calculation or prediction. :)
 
I'm not sure things like a separate Ego and sense even exist. Our thoughts are products of brain activity, and what we consider an Ego is also a part of it. We may sense our Ego as something like a phonograph on which our thoughts are being played, but this is probably an illusion, which we have only because Mrs Evolution considered that being self-aware is beneficial for our survival.
I like the idea of the ego being the little person in your head that looks out on the world.
A bit like the man with his head resting on his hand in Hieronymus Bosch's famous painting.

[image: hbosch.png]
 
I like the idea of the ego being the little person in your head that looks out on the world.
A bit like the man with his head resting on his hand in Hieronymus Bosch's famous painting.


Maybe it's just that the mental world is a universe that comes with a built-in observer :)
 