@Sill: you might find Nietzsche's "The Genealogy of Morals" to be of interest
Everybody with a modicum of interest in moral philosophy must read this book.
Gentlemen, you have sparked my curiosity.
Based upon a priori standards, which is how most of the population sees morality as being obtained.
If you are going to argue with hearsay, you really shouldn't argue at all.
I am not interested in a democratic determination of morality. I am not interested in what the majority thinks something is. I am interested in arguments. And the argument that "many see it so and so" is, I believe, established to be a rather weak one, especially when I have come up with a much better one.
Further, do I need to prove that feelings are the only source of value? Isn't it rather obvious? What else would there be, after all? An imagined god? Something besides fairy tales? I don't believe so. So there is your evidence. And proof.
What I said was that emotional actions generally are not the result of rational consideration and are therefore not utilitarian. That's not the same as saying one cannot rationally analyze emotions or emotional actions.
How is this relevant? Is it in principle possible to rationally assess what makes one feel good? If yes, that is all that is required for morality to be inherently utilitarian. How capable we then are of actually doing so is an entirely different question. That is a question which leads us from realizing what morality is to thinking about how morality can be realized. An interesting discussion to be sure*, but not what this thread is primarily about.
*And incidentally, I think that the practice of morality benefits from some constructed moral absolutes. In other words, I think we are ill-advised to refer directly back to the utilitarian nature of morality in every instance. For example, treating murder as something wrong in itself is probably a good idea from a moral POV, not because murder itself actually is immoral, but merely because that is a morally useful assumption.
What? No. That's not what rationality is. Rationality is the quality of having reason, of being coherent. There is nothing inherent in rationality that requires it to be about the pursuit of goals. I can say:
Max is a cat.
All cats have four legs.
∴ Max has four legs.
That's a perfectly rational argument, and it has nothing to do with goals.
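For what it's worth, the fact that this inference is purely formal can even be checked mechanically. Below is a minimal Lean 4 sketch of my own, purely illustrative: the names Animal, Cat, HasFourLegs, and Max are placeholders, and nothing about goals or desires appears anywhere in the derivation.

-- Minimal Lean 4 sketch (illustrative placeholder names only):
-- the conclusion follows from the two premises by pure logic.
variable (Animal : Type) (Cat HasFourLegs : Animal → Prop) (Max : Animal)

example (h1 : Cat Max) (h2 : ∀ a, Cat a → HasFourLegs a) : HasFourLegs Max :=
  h2 Max h1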
True, I talked some bull here. However, the point remains that it is possible to rationally pursue goals. You eat the broccoli because you believe you will feel better for it, even if not by the shortest and most direct route, i.e. even if not because it tastes so wonderful.
That's a big element of what is missing in your post. You seemingly look only at the consequences for the actor in determining whether or not an action is moral. You neglect to consider the consequences for other people.
Not at all. Making two others happy instead of only myself is naturally more moral than only making myself happy. Making myself and two others happy is even more moral, et cetera.
Those consequences can extend beyond the emotional realm. The victim of a confidence game may be quite happy to be defrauded of his money, and the con man may be perfectly happy to obtain it, but neither of those facts modifies the morality of the action. Neither the desire of the victim nor that of the con man changes the morality.
Why?
Why does a consequence matter if it moves beyond the emotional realm? Does it matter if a stone rolls down a hill without any consequence in the emotional realm? How? To whom?
While an objectively ideal moral system may exist, I don't think humans are capable of discovering it.
See, IMO a moral system is already the first step in aiming for something less than ideal. Systems are about generalized patterns, and generalization comes hand in hand with rough edges, unfairness, etc. An "ideal system" is already in itself a misnomer, IMO.
An ideal moral society would require some sort of almost magical mutual understanding and mutual interaction guided by the aim to get the most out of life for everyone. Even if we had the technology and data for such a thing, I doubt humans could sufficiently interact with the tech and the data. Basically, we would need to breed a new kind of human.
Can we get to 'perfect'? No, but that doesn't mean we should let perfect be the enemy of good.
Amen, brother.
I disagree with this conclusion completely. By what alternate standard do you judge morality, such that you can dismiss the nuances of our moral intuition as error? And why is that moral standard superior?
*Hint OP hint*