Some kind of sense seems to be needed for that, although in humans a sense of purpose arguably had to exist anyway: before some early stage of civilization, a human being would simply have perished if that sense had not driven them to do what was needed.
Ants do fine without a sense of purpose (we assume); they still get stuff done. I think human beings developed a sense of meaning as a response to becoming self-conscious. Sense, meaning, or purpose is based on that psychological need, and it only exists to fill that need.
I am not seeing how any computer (just AI, no DNA attached) has agency. If all that is going on is the act (e.g. a computation), there is no room for an agent; the agent can neither morph into a computation nor vice versa.
It doesn't matter how complicated the action/computation/running code is. I have to suspect that where agency exists (such as in humans, but also in ants and anything else alive), it does so because there is a very clear split between any action at any moment and the entity which is undergoing or undertaking that action, consciously or not.
That is the distinction Hobbs and I were talking about, wherein it is assumed that so-called "strong AI" has a form of agency or autonomy (where this would come from is not explained yet; it's a hypothetical, after all).
Also, I am unsure whether we can assume that human beings or other animals have agency; we certainly have not been able to prove it empirically yet. But I think we can say this:
Let's assume natural laws. Let's assume time and space. A human being will always be "doing something", like sitting, reading, thinking. In our everyday life, we have something called a choice. Say you go to the supermarket and decide what to buy. You can decide to buy an apple, or decide to buy a banana. You can decide to buy both a banana and an apple. But you cannot decide to both buy an apple and not buy an apple. That is impossible; they are mutually exclusive. In the end, there is only one way you can decide. This thought experiment shows:
It is irrelevant whether we live in a deterministic or non-deterministic universe: there is only one way we will de facto decide. You cannot decide both ways; you cannot buy the apple and also not buy the apple. You will always end up doing something (even if that means not buying anything), and there will always be only one way you decide.
So now that we know this, we can frame it in different ways.
An incompatibilist would say that, knowing the state of the universe before you go to the grocery store, and knowing the natural laws and time, we can know for a fact what you will end up doing; it is even calculable. And it is unchangeable. It is not a result of your free will, but a result of millions of physical events (brain chemistry et cetera). There is no meaningful choice here: no matter how hard you think about whether you want the apple or the banana, everything you have thought, are thinking, and will be thinking is predestined. Therefore there is no "choice", and no free will.
A compatibilist might say that our brain is not a closed system, and that our consciousness is not merely a result of material determinism. He would argue that not every thought is predestined, because thoughts are not entirely physical, and that you can therefore make an actual decision, based on your own thoughts.
My argument would be: both of those are completely irrelevant. There is only one de facto way any person can decide at any time. There are nearly infinite possibilities to decide between, but in the end we can only execute one of them. The incompatibilist argues that all the things leading up to the decision are predestined; the compatibilist argues that they are a function of agency/autonomous thinking. But they are both describing the exact same thing. Whether we decide freely or not seems completely arbitrary. More than that, even if we assume free will, that does not mean our decisions are entirely free; we are still constrained by our beliefs, our cognitive biases, our material conditions, and so forth.
One thing is clear: in both cases, a string of events leads us to do exactly one thing. You can frame this as agency or as determinism. But it's the same thing, no? There are factors which influence our thoughts/cognition, which in turn influence our response/action. It seems only the framing is different.
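To make the "same mechanics, different framing" point concrete, here is a toy sketch in Python (entirely my own illustration; the decide function, its inputs, and the prices are invented, not anyone's actual argument). Read the input state either as "brain chemistry and prior causes" or as "my preferences and reasons"; either way, one and the same state maps to exactly one outcome.

```python
# Toy illustration only: a "decision" modelled as a function of a prior state.
# Whether you label that state "physical causes" or "my reasons and preferences",
# the same state still produces exactly one choice.

def decide(state: dict) -> str:
    """Return the single choice this state produces."""
    if state["prefers_sour"] and state["budget"] >= state["apple_price"]:
        return "apple"
    if state["budget"] >= state["banana_price"]:
        return "banana"
    return "nothing"

# Example state (all values invented for illustration).
state = {"prefers_sour": True, "budget": 1.00,
         "apple_price": 0.80, "banana_price": 0.50}

print(decide(state))  # the same state always yields the same single outcome: "apple"
```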
When I decide whether to take the apple or the banana, I will think about which one tastes better, which one costs more, and so forth. Does it matter whether I freely decide that the apple tastes better than the banana, or whether the entirety of my past experiences forces me to prefer the apple to the banana? What is the actual difference between those two scenarios? Surely a statement such as "the apple tastes better because it's more sour" is completely subjective and arbitrary, and every decision boils down to axioms like this. When you try to pin down free will to what it really is, it seems extremely elusive. Who is the "I" deciding one is tastier than the other? Is it my brain, my consciousness, my subconscious? Free will seems to be about two things: 1) freely cultivating and exercising a will, and 2) having freedom of choice over your actions. But in what way does the illusion of choice meaningfully differ from "actual" free choice when there is only one way we will de facto decide? No one has been able to answer this for me.