Do we have free will? Is the world indeterministic?

That's a pretty good starting place, I agree! But what is it that we identify as "us"? At times we isolate parts of our own body from ourselves; if I hold my hand up to examine it, I'm not examining it as part of my body-subject, but as an object. At the same time, if I write with a pen, it is as effectively part of my body-subject as the hand holding it, just far more loosely integrated. So it quickly becomes apparent that the imprecision of the concept of "I" has some pretty major implications for any statement involving that "I", and while they certainly don't preclude making such statements accurately (we do it all the time!), they do cast some doubt on the meaning of more abstract claims such as "I have free will". Not that this implies they're false, as such, but it certainly raises the question of whether they represent effective ways of approaching questions of selfhood.
As long as I have agency and consciousness, whatever else is part of me doesn't make a whole lot of difference to free will.

@Leoreth
The reason that intuition is a suitable source of evidence here is that the subject is not nature but semantics. There's no experiment you could do that would tell you how to define free will.

Defining "free will" is not unlike defining sport to answer questions like "is golf a sport?" The answer in that case depends largely on if golf feels like a sport and if we can frame a definition of sport to be consistent with other known sports. Our intuition of the what the word means is used to make the definition of sport consistent with other times the word is used. This prevents the definition of sport from being arbitrary, and the question of whether golf is a sport a legitimate one.

With free will it's not golf that needs to be determined to fit the definition, but a rational mind. Can a rational, and therefore deterministic, mind have free will?
 
And this doesn't really address my argument, which is that being under mind control is the same situation whether free will is real or an illusion. Your autonomy is taken from you by force. The effect doesn't depend on whether there is a free will exercising this autonomy or not.
What distinction are you making between autonomy and free will and why is this distinction useful?
Because it is how we perceive reality, obviously.
I disagree that you are using the words "free will" in a way consistent with our perception of reality.

Yes, I agree. But as I said before, that doesn't change whether free will exists or not.
It does because if we can have less free will, that means we have free will.
 
As long as I have agency and consciousness, whatever else is part of me doesn't make a whole lot of difference to free will.
But that just takes us right back to the question of how we understand "I". If "I" can at a given moment include your clothes, and at another moment exclude parts of your own cognitive system, then what is it that we are actually saying is possessed of free will? And if we can't identify what it is that is exercising this "will", where this "I" begins or ends, then how can we meaningfully claim that a discrete phenomenon of "willing" occurs?
 
But that just takes us right back to the question of how we understand "I". If "I" can at a given moment include your clothes, and at another moment exclude parts of your own cognitive system, then what is it that we are actually saying is possessed of free will? And if we can't identify what it is that is exercising this "will", where this "I" begins or ends, then how can we meaningfully claim that a discrete phenomenon of "willing" occurs?
Your boundaries may be blurry, but you exist. Some have claimed that's the one thing we can be 100% positive about. You are in control of your body and can make it do stuff. And you want to do stuff. That seems like observable evidence enough that you can "will".
 
Your boundaries may be blurry, but you exist. Some have claimed that's the one thing we can be 100% positive about. You are in control of your body and can make it do stuff. And you want to do stuff. That seems like observable evidence enough that you can "will".

You can certainly make your body do stuff. But how many bodily processes are automated/done subconsciously? It doesn't look like we are in full possession of our body. So even when it comes to physical reality, free will is something conditional/limited.
 
Your boundaries may be blurry, but you exist. Some have claimed that's the one thing we can be 100% positive about. You are in control of your body and can make it do stuff. And you want to do stuff. That seems like observable evidence enough that you can "will".
If we don't understand the nature of the subject, how can we meaningfully attribute the capacity of "will" to it? We may as well say that a computer has "will", because it's clearly able to act of its own accord.
 
The past and present are not what you make choices about. You make choices about the future. According to MWI, you have many futures. You should care about all of them. That's all I'm saying.

No. That would get me nowhere. According to MWI there is an uncountably infinite number of futures for me. No one has the brain capacity to deal with infinitely many futures. I do not have a backup plan for the (infinitely many!) copies of me that spontaneously tunnel to Jupiter and die a cold death there. I might care about a few likely futures, but in retrospect I will only care for one timeline.

There is no (known) way to distinguish the Copenhagen interpretation from the Many-worlds interpretation with respect to their actual effects. So even if the MWI is theoretically deterministic, I will always experience it as random. Thus it is effectively indeterministic.


There have been many billions of generations of animals, whose behavioral repertoires eventually built up into what we have now. At any point in that history, an animal which behaves truly randomly would have a reproductive disadvantage against an animal which behaves pseudo-randomly when, say, exploring new territory, but absolutely deterministically (or at least, as close as a fundamentally quantum universe will allow) when it comes to not eating poison berries. Our brains are built on top of their basic neural plan, with minor variations. OK, maybe 1-per-million-generations poison-eating wouldn't be sufficiently selected against to die out by now - or maybe it would. At any rate the probability would have to be quite tiny.

I disagree: A small part of the population deviating from the norm by behaving randomly would result in the loss of that part of the population most of the time, because it is eating those berries. But in some cases it might pay off: The animal might have gained a beneficial mutation that allows it to eat those berries. Or the berries might be similar to those in another area, but are actually not poisonous. In that case the payoff would be huge: The animal would be the only one eating those berries and thus would have much better access to food. This would usually translate into much greater reproductive success.

Randomness is at the core of evolution: Most mutations are actually detrimental to reproductive success, as they can inhibit a vital function. And if the rate of mutations is too high, the organism suffers and is more likely to die. So irradiating someone with nuclear radiation usually does not result in superpowers, but in cancer. But without this randomness, evolution would not work at all. Without mutations, there is no development. So there has to be a balance between inhibiting mutations so that not too much of the population dies off and having enough mutation to be able to adapt (and adapt quicker than the other populations competing for the same space).

I suppose, it's possible in principle for an animal to have a deterministic part of the brain, and an indeterministic one. But I'm not seeing any results in neurology that suggest that the neurons or glial cells or networks differ from one another in this respect, i.e. their susceptibility to quantum noise. Wouldn't it be simpler - and thus easier for evolution to hit upon - to use deterministic processes everywhere, and then add pseudo-randomness where necessary (easily copied from the environment, which provides many pseudo-random events)?

No. Why go for two parts of the brain, which is quite complicated, when the same effect would be much easier to attain if the brain behaves randomly, but with a very small variance? Why should evolution totally eliminate quantum noise when it can profit from a structure that is resilient, but not immune, to quantum noise?
 
In the instant of creation God finished His work. Viewing all not from the perspective of passing time, but as a tapestry laid at his feet, God granted perfect free will to creatures whose lives ended no sooner nor later than the moment they began, from the divine perspective.

I am Alpha and Omega, the beginning and the end, the first and the last.
Blessed are they that do his commandments, that they may have right to the tree of life, and may enter in through the gates into the city. For without are dogs, and sorcerers, and whoremongers, and murderers, and idolaters, and whosoever loveth and maketh a lie.
I Jesus have sent mine angel to testify unto you these things in the churches. I am the root and the offspring of David, and the bright and morning star. And the Spirit and the bride say, Come. And let him that heareth say, Come. And let him that is athirst come. And whosoever will, let him take the water of life freely.
Coolio. Now if it turns out that the actions of our free will are already known and determined, and we discover that to be true, why do we suddenly decide then to become whoremongers and sorcerers instead of continuing to strive for good? Most people want a good and just and normal society. I don't want to ask this again.


That said if everyone turned to sorcery and whoremongering the conservatives' favorite economic models would actually work...
 
You can certainly make your body do stuff. But how many bodily processes are automated/done subconsciously? It doesn't look like we are in full possession of our body. So even when it comes to physical reality, free will is something conditional/limited.
I agree.
If we don't understand the nature of the subject, how can we meaningfully attribute the capacity of "will" to it? We may as well say that a computer has "will", because it's clearly able to act of its own accord.
When to attribute will to computers is a tricky question. But I can answer why primitive life cannot be said to have will -- it is not conscious. That means it does not have a mental model of itself. Note: I am not saying that consciousness is a sufficient condition for will, just a necessary one.
 
@Leoreth
The reason that intuition is a suitable source of evidence here is that the subject is not nature but semantics. There's no experiment you could do that would tell you how to define free will.

Defining "free will" is not unlike defining sport to answer questions like "is golf a sport?" The answer in that case depends largely on if golf feels like a sport and if we can frame a definition of sport to be consistent with other known sports. Our intuition of the what the word means is used to make the definition of sport consistent with other times the word is used. This prevents the definition of sport from being arbitrary, and the question of whether golf is a sport a legitimate one.

Exactly. The adequacy of definitions is ultimately tested against usage, but usage flows from the linguistic intuitions of speakers. So gathering intuitions on "does golf count as a sport?", or "does that act count as free?", is an obvious and appropriate strategy.

According to MWI there is an uncountably infinite number of futures for me. No one has the brain capacity to deal with infinitely many futures. I do not have a backup plan for the (infinitely many!) copies of me that spontaneously tunnel to Jupiter and die a cold death there. I might care about a few likely futures,

You have to distinguish between norms of rationality and prescriptions. Norms are principles; prescriptions tell us how to reasonably approximate the ideal, given that we have limited resources. The "likely" futures are the ones with essentially all the measure; the measure of the tunnels-to-Jupiter is vanishingly small. So pragmatically - as a prescriptive rule - it makes sense to ignore them. From a normative point of view, they still matter, but only a tiny tiny bit because of their small measure. The same exact point applies to Copenhagen, by the way, only the relevant measure becomes a probability.
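To put a rough formula on the norms-versus-prescriptions point (the notation here is just mine, nothing more is being claimed): if each future i gets measure \( \mu_i \) and value \( U_i \), then

\[
\mathbb{E}[U] \;=\; \sum_i \mu_i\, U_i , \qquad \sum_i \mu_i = 1 ,
\]

and a branch with \( \mu_i \ll 1 \) can shift the total by at most \( \mu_i\,|U_i| \). Normatively it is still in the sum; prescriptively it is lost in the rounding.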

but in retrospect I will only care for one timeline.

Emphasis added. What a branch-world-you will care about retrospectively is beside the point, when it comes to you-now making a decision about your future.

So even if the MWI is theoretically deterministic, I will always experience it as random. Thus it is effectively indeterministic.

Yeah, it looks the same. I already agreed to that. But the MWI fits the technical definition of determinism, and Copenhagen doesn't. That's why I say, on the determinism question: dunno and don't care!

The animal might have gained a beneficial mutation that allows it to eat those berries. Or the berries might be similar to those in another area, but are actually not poisonous. In that case the payoff would be huge: The animal would be the only one eating those berries and thus would have much better access to food.

In a new region with similar berries, the advantage will last about 5 minutes. Then other animals, with more deterministic behavior, will see, and copy. Mutations almost never go from zero to sixty in one generation; a mutated organism will be slightly more resistant to the poison than its fellows. Risky experimentation is beneficial on average, only when the risks are small, or the case is desperate. The point is that beneficial behavior almost always falls into pretty narrow, well defined domains. There is only a little room for random behavior.

No. Why go for two parts of the brain, which is quite complicated, when the same effect would be much easier to attain if the brain behaves randomly, but with a very small variance? Why should evolution totally eliminate quantum noise when it can profit from a structure that is resilient, but not immune, to quantum noise?

That's a good argument. But my point is that it has to be a very very small variance, if the organism is to survive in those cases, which are common and vital, where there is one clear Simon Darwin Says command to follow. If the brain areas used in creative activity share circuits with what's used in life-or-death decisions, the creativity had damned well better use mostly pseudo-random processes, with only a tiny bit, at most, of true randomness. The pseudo-randomness can be turned on or off, pardon the pun, at will.
 
I agree. When to attribute will to computers is a tricky question. But I can answer why primitive life cannot be said to have will -- it is not conscious. That means it does not have a mental model of itself. Note: I am not saying that consciousness is a sufficient condition for will, just a necessary one.
Why is consciousness a necessary condition for will? Are you suggesting that only acts which are performed consciously and explicitly can be described as acts of "free will"?
 
I agree. When to attribute will to computers is a tricky question. But I can answer why primitive life cannot be said to have will -- it is not conscious. That means it does not have a mental model of itself. Note: I am not saying that consciousness is a sufficient condition for will, just a necessary one.

There always seems to be a kind of will for the preservation of the individual life. That's something a computer can't have, since it doesn't have life.
This will is present even though the particular life may not have self-aware consciousness. So the presence of mental life doesn't seem to be a necessity for its function either.
 
Why is consciousness a necessary condition for will? Are you suggesting that only acts which are performed consciously and explicitly can be described as acts of "free will"?
Any act of will can be done on a subconscious level, but the presence of consciousness seems to be necessary for any kind of will; otherwise one merely executes the will of others (like computers do).
 
You have to distinguish between norms of rationality and prescriptions. Norms are principles; prescriptions tell us how to reasonably approximate the ideal, given that we have limited resources. The "likely" futures are the ones with essentially all the measure; the measure of the tunnels-to-Jupiter is vanishingly small. So pragmatically - as a prescriptive rule - it makes sense to ignore them. From a normative point of view, they still matter, but only a tiny tiny bit because of their small measure. The same exact point applies to Copenhagen, by the way, only the relevant measure becomes a probability.

The measure in both cases is given by the Born rule. In the case of the Copenhagen interpretation it is the probability that something happens, in the MWI it is the number of future mes ending up in one universe divided by the total number of future mes. The latter perfectly fits the frequentist definition of probability. So once I start ignoring improbable events and the probabilities I consider sum up to less than one, my decision making is based on probabilistic thinking. So once I exclude improbable events, the MWI goes from determinism to effective indeterminism.
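Just to make the "sum up to less than one" step explicit, in standard notation (a sketch of what I mean, not anything beyond the above): for a branching state

\[
|\psi\rangle = \sum_i c_i\,|i\rangle , \qquad \mu_i = |c_i|^2 , \qquad \sum_i \mu_i = 1 ,
\]

keeping only the branches with \( \mu_i \ge \varepsilon \) leaves \( \sum_{i\,\in\,\text{kept}} \mu_i < 1 \), and at that point I am treating the \( \mu_i \) as ordinary probabilities over outcomes rather than as a complete catalogue of futures that all occur.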

In a new region with similar berries, the advantage will last about 5 minutes. Then other animals, with more deterministic behavior, will see, and copy. Mutations almost never go from zero to sixty in one generation; a mutated organism will be slightly more resistant to the poison than its fellows. Risky experimentation is beneficial on average, only when the risks are small, or the case is desperate. The point is that beneficial behavior almost always falls into pretty narrow, well defined domains. There is only a little room for random behavior.

When a berry is not harmful although all animals think it is, there has to be one animal that starts eating those berries. After some time it will be copied by other animals, but without that animal taking the risk they would never start eating them, and in the long run they would be outcompeted by a population of animals that does start eating those berries. So even if the individual animal taking that risk might not be rewarded, the population with some individuals prone to random behavior will be rewarded. And as evolution works on the level of populations, it will prefer those populations, even if the individual reproductive success of an animal taking many risks might actually be worse.

I agree that the room for random behavior is very limited, but my point is that a population with no random behavior at all (no matter the source) actually has a disadvantage.

That's a good argument. But my point is that it has to be a very very small variance, if the organism is to survive in those cases, which are common and vital, where there is one clear Simon Darwin Says command to follow. If the brain areas used in creative activity share circuits with what's used in life-or-death decisions, the creativity had damned well better use mostly pseudo-random processes, with only a tiny bit, at most, of true randomness. The pseudo-randomness can be turned on or off, pardon the pun, at will.

Pseudo-randomness and true randomness have in principle no difference in the result, only in the source. I agree that the variance has to be small, but that is in no way linked to whether the source of the variance is pseudo-random or truly random. And once pseudo-randomness is inbuilt into the brain's structure, it cannot be turned off, unless you carefully control all environmental conditions.
 
Why is consciousness a necessary condition for will? Are you suggesting that only acts which are performed consciously and explicitly can be described as acts of "free will"?
Yes; all acts that can be ascribed to free will are necessarily able to be framed as a conscious choice.
 
What distinction are you making between autonomy and free will and why is this distinction useful?
Free will is an illusion, my decisions are made based on myriads of contingent factors inside my brain. But these influences still come from me, as long as I am acting autonomously. When I am mind controlled, these influences are "shut off" and "overridden" by the decisions of whoever is mind controlling me.

I disagree that you are using the words "free will" in a way consistent with our perception of reality.
I don't understand, can you be more specific?

It does because if we can have less free will, that means we have free will.
If we have less of an illusion of free will, that only means we have illusionary free will.
 
Free will is an illusion, my decisions are made based on myriads of contingent factors inside my brain. But these influences still come from me, as long as I am acting autonomously. When I am mind controlled, these influences are "shut off" and "overridden" by the decisions of whoever is mind controlling me.
So why call free will an illusion, and make it a useless and valueless term about predictability, when you could instead be using it to describe the distinction in autonomy you describe here?
I don't understand, can you be more specific?
For example, it is not apparent that a logical mind must inherently not be free, even though someone who is logical is 100% predictable (presuming you also know what they know).

If we have less of an illusion of free will, that only means we have illusionary free will.
It's not the illusion that's overridden by mind control. It's freedom. A person under mind control is less free, not less deluded.
 
The [MWI] perfectly fits the frequentist definition of probability. So once I start ignoring improbable events and the probabilities I consider sum up to less than one, my decision making is based on probabilistic thinking. So once I exclude improbable events, the MWI goes from determinism to effective indeterminism.

I don't follow this reasoning. For the sake of comparison, if I decide that I care about people everywhere but Christmas Island, does that mean I'm no longer living on Earth? I'm living on {Earth minus Christmas Island}? Presumably not, so why does the determinism/indeterminism case differ?

So even if the individual animal taking that risk might not be rewarded, the population with some individuals prone to random behavior will be rewarded. And as evolution works on the level of populations, it will prefer those populations, even if the individual reproductive success of an animal taking many risks might actually be worse.

Only a little bit of evolution works on the level of populations. Even by partly agreeing with you, I'm making the controversial assertion that group selection does happen. Some of the best evidence on the topic can be found here. For a case against, see here.

I agree that the room for random behavior is very limited, but my point is that a population with no random behavior at all (no matter the source) actually has a disadvantage.

I agree with that.

And once pseudo-randomness is inbuilt into the brain's structure, it cannot be turned off, unless you carefully control all environmental conditions.

But that's the difference: pseudo-randomness doesn't have to be "inbuilt" into the brain's structure, it can be borrowed from the environment. Thus it becomes a surface phenomenon of brain activity rather than a fundamental building block, as partially-damped QM noise would be.

True story: I was sitting with a friend in a boring high school class, and he asked me to pick one of four numbers. I did, and he revealed a prediction he had written, which was correct. He challenged me to more of this game, picking various objects - letters, words, whatever. After he'd gotten 3 or so right in a row, I was annoyed, but I knew I could win: I looked around and saw a picture of a president with a big nose. Of the letters I was to pick from, Z looked most like a nose. I won most rounds after adopting my pseudo-random strategy.
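For what it's worth, a toy sketch of what "borrowing" pseudo-randomness from the environment could look like (in Python; the function name and the observation string are purely illustrative, not a model of anything neural):

import hashlib
import random

def environment_seeded_choice(options, observation):
    # Hash an arbitrary environmental observation into a seed, then let an
    # ordinary pseudo-random generator pick an option. The pick is fully
    # deterministic given the observation, but unpredictable to anyone who
    # cannot see or anticipate that observation.
    seed = int.from_bytes(hashlib.sha256(observation.encode()).digest()[:8], "big")
    return random.Random(seed).choice(options)

# e.g. letting a glance around the classroom drive the "random" pick:
print(environment_seeded_choice(["W", "X", "Y", "Z"], "portrait of a big-nosed president"))

Fix or remove the observation and the behavior collapses back to being predictable, which is the sense in which this kind of randomness can be switched on and off.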

Yes; all acts that can be ascribed to free will are necessarily able to be framed as a conscious choice.

I think unconscious reasons for action that are ultimately guided by conscious choices and evaluations should be "grandfathered" in.
 
I don't follow this reasoning. For the sake of comparison, if I decide that I care about people everywhere but Christmas Island, does that mean I'm no longer living on Earth? I'm living on {Earth minus Christmas Island}? Presumably not, so why does the determinism/indeterminism case differ?

MWI works around the indeterminism of measurement results by saying that all of the possible results happen. Once you start considering a subset of the possible results, this does not work anymore: You have to introduce probabilities. You can still say all of them happen, but only in 99% of the cases. But determinism requires 100% certainty.

The interpretation itself is still deterministic, but as the human brain does not have infinite capacity it will never be able to grasp the deterministic global wavefunction and will always have to settle for an indeterministic subset of it.
 
The interpretation itself is still deterministic, but as the human brain does not have infinite capacity it will never be able to grasp the deterministic global wavefunction and will always have to settle for an indeterministic subset of it.

OK, I get it now: you're talking about human knowledge and thought that go into a decision. I was (and still am) only concerned with the reality that we are situated in. That is where incompatibilists try to make trouble for free will, or else try to sell us on weird metaphysics in order to save what they define as free will. The former tack, making trouble for free will, is often taken by incompatibilists who have a naturalistic world view. The latter, by those with a supernaturalist bent. Neither type disputes that we don't know what's coming. But they often say that a specific future is coming (the naturalistic determinists), or that, if one assumes naturalism is true (say the supernaturalists), then a specific future is implied.

More sophisticated versions present a dilemma: either determinism is true, or it's random. Randomness doesn't confer free will, they (rightly!) argue, so (and here comes the mistake): no free will.

What I want to do is attack the dilemma on its supposed strong point: the determinism horn. Suppose that a specific future is indeed coming. (As far as I know, that may be so.) So what? The future in question depends on us. It is caused (in part) by conscious, (moderately) rational and intelligent, willful beings: and that is precisely where free will comes in.
 