Sabine Hossenfelder has a YouTube video (spoilered below) about two papers on AI having free will. It is at least an interesting idea that casts the question in a somewhat objective manner.
Artificial intelligence and free will: generative agents utilizing large language models have functional free will
Combining large language models (LLMs) with memory, planning, and execution units has made possible almost human-like agentic behavior, where the artificial intelligence creates goals for itself, breaks them into concrete plans, and refines the tactics based on sensory feedback. Do such generative LLM agents possess free will? Free will requires that an entity exhibits intentional agency, has genuine alternatives, and can control its actions. Building on Dennett's intentional stance and List's theory of free will, I will focus on functional free will, where we observe an entity to determine whether we need to postulate free will to understand and predict its behavior. Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining the behavior of both involves postulating that they have goals, face alternatives, and that their intentions guide their behavior. While this does not entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will.
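The abstract's opening sentence describes the now-common agentic loop: the model proposes its own goals, decomposes them into concrete steps, acts, and refines its tactics from feedback. Purely as an illustration of that loop (not the Voyager implementation), here is a minimal Python sketch; the llm, execute, and Memory pieces are hypothetical stand-ins for a real model call, an actuator, and an experience store.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (e.g. an API request)."""
    return "stubbed response for: " + prompt[:40]

@dataclass
class Memory:
    """Stores past observations so the agent can refine its tactics."""
    log: list[str] = field(default_factory=list)

    def recall(self) -> str:
        return "\n".join(self.log[-5:])  # only the most recent experiences

def execute(step: str) -> str:
    """Hypothetical actuator: performs one plan step, returns sensory feedback."""
    return f"observation after doing: {step}"

def agent_loop(task: str, iterations: int = 3) -> None:
    memory = Memory()
    for _ in range(iterations):
        # 1. The model proposes its own subgoal in light of past feedback.
        goal = llm(f"Task: {task}\nPast experience:\n{memory.recall()}\nNext goal?")
        # 2. The subgoal is broken into a concrete, executable step.
        plan = llm(f"Break this goal into one concrete step: {goal}")
        # 3. The step is executed and the feedback is stored for the next round.
        feedback = execute(plan)
        memory.log.append(feedback)

agent_loop("survive and progress in Minecraft")
```

The philosophical claim in the paper is about systems of this shape: once goal selection, planning, and feedback-driven revision are all internal to the agent, predicting its behavior without ascribing goals and alternatives becomes impractical.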
Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency
The proliferation of Artificial Intelligence (AI) systems exhibiting complex and seemingly agentive behaviours necessitates a critical philosophical examination of their agency, autonomy, and moral status. In this paper we undertake a systematic analysis of the differences between basic, autonomous, and moral agency in artificial systems.
We argue that while current AI systems are highly sophisticated, they lack genuine agency and autonomy because: they operate within rigid boundaries of pre-programmed objectives rather than exhibiting true goal-directed behaviour within their environment; they cannot authentically shape their engagement with the world; and they lack the critical self-reflection and autonomy competencies required for full autonomy. Nonetheless, we do not rule out the possibility of future systems that could achieve a limited form of artificial moral agency without consciousness through hybrid approaches to ethical decision-making. This leads us to suggest, by appealing to the necessity of consciousness for moral patiency, that such non-conscious artificial moral agents (AMAs) might represent a case that challenges traditional assumptions about the necessary connection between moral agency and moral patiency.