contractualist

contractualist OP t1_j9j91rn wrote

  1. People may agree to execute criminals and will very likely agree to involuntary taxation, given the coordination problem and benefits of collective action.

  2. The article’s goal is only to say what morality is and isn’t. To the extent that issue is in dispute, as it is in meta-ethics, having at the very least a defined term is useful for settling disagreement. I’ll get into more specifics in later pieces on what “reasonably rejectable” really means.

1

contractualist OP t1_j9i1cym wrote

It’s not about actual consent but what reasonable people would agree to. No way would anyone reasonably agree to be enslaved, sacrificed, or raped. Abortion and eating meat relate to the boundaries of our moral community (not necessarily the agreement itself, but who is a party to it), whereas the death penalty (given certain evidence) may be morally excused.

1

contractualist OP t1_j9hqt1g wrote

It wouldn’t give them the right to have gains preserved for them. That’s not a right I’ve heard of in any libertarian theory. So long as the Lockean proviso is met, there is no duty to benefit the future. And future people wouldn’t accept such a lottery.

1

contractualist OP t1_j9hh3n4 wrote

>+ objective and agent-relative:

>It is better for John not to become a lawyer and pursue a career as a clarinetist. John would probably not even pass the bar exams and the profession would invariably burn him out. He doesn't have the personality for it.

Woah, this definitely sneaks in valuing well-being. If we replaced it with "challenge-seeking" or "self-development", we'd have a different ruling. And how do you decide which value is truly objective, well-being or challenge? I actually discuss this issue in my last section here (although my thoughts need some more fleshing out).

1

contractualist OP t1_j9g9et8 wrote

Yes, this is meta-ethical constructivism. I will argue later on that what is objective is just shared subjectivity, so my argument fits into our normal notion of objectivity. People might actually disagree based on private reasons, but I ask what they would agree to based on public reasons. There will be agreement on moral claims the same way there is agreement on objective reality.

1

contractualist OP t1_j9db1ki wrote

Thanks for the review!

I'll probably write more about my thoughts on value, but I do come in with the assumption that value isn't intrinsic but a creation of free, conscious beings. And if value isn't intrinsic but subjective, it can't be publicly examined and judged. There's no point of reference to say that one value is right due to X property and the other is wrong due to Y property. These values might give our lives meaning, but they are not reason-based the way that morality is (which is a product of certain values).

If you think there is a better way to capture the dispute between realists and relativists, I'd appreciate any insight. My writing is only my perspective and the reason I share it is so I can get feedback like this.

I wouldn't say any controversy makes an issue personal, only ones where there are reasonable enough arguments on both sides that choosing either would be acceptable. I do think this is the case for the Trolley Problem, so it's not a duty to pull the lever.

1

contractualist OP t1_j9d05zg wrote

I get absolute vs. relative, but I treat the agent-relative/agent-independent distinction the same as the subjective/objective one. If you have time, could you explain what I'm missing or point me in the right direction? If there is a distinction, I'll have to rework my writing.

1

contractualist OP t1_j9c97mb wrote

I wouldn't say morality is divorced from ethics either. To have normative reasons, you need values that create those reasons, which I argue are freedom and reason. However, there are objective reasons to act given those values, which belong in the reason core (along with logic and mathematics). Values themselves, since they are agent-relative, would be in the freedom residual.

In this case, I am distinguishing concepts that are agent-independent from those that are agent-relative, since we might be getting lost on objective/subjective.

1

contractualist OP t1_j9c4ion wrote

My reading of Scanlon's account of value would still be that values are agent-relative and have reason-creating power. Friendship wouldn't be inherently valuable independent of our judging it to be so. Yet given our judgment that it is valuable, it provides reasons for certain actions.

1

contractualist OP t1_j9bezym wrote

Hello all, I appreciated the feedback I got on my previous piece. This is a follow-up, and I'd be happy to respond to any additional feedback.

Summary: Since morality consists of those principles that cannot be reasonably rejected based on public reasons, it would exclude principles that are motivated by private reasons. This includes one's conception of the good, sense of meaning, and personal values. While these values are what make life worth living, they couldn't be reasonably accepted by others and therefore lack moral authority. They aren't objective properties that can be analyzed and judged, but subjective properties that we impose on the world. They would be in the "freedom residual" of our lives, whereas morality is in the "reason core." Meta-ethics is about finding out which claims belong where. Additionally, given the "acceptance" condition of morality, the Repugnant Conclusion, utilitarianism, and libertarianism would also be excluded as ethical determinations.

I get that this is controversial, but this article only seeks to defend the definition of morality given here, which hopefully can be used more often in moral discourse.

−2

contractualist OP t1_j96iv85 wrote

Being a party to the moral community doesn’t rely on reasoning ability, but the laws of the moral community would be reason-based. They would have to be justifiable to others. Membership in the community relies on consciousness and free will.

If you read the article I sent, I argue that assent to the social contract would be based on agreement to principles that are in accordance with higher-order values. Morality asks what principles of conduct free, reasonable people would accept. It doesn’t say morality is reserved for the reasonable.

I’m not sure what freedom you’re talking about but if you have a specific question I’m happy to address it.

1

contractualist OP t1_j92r8pr wrote

Thanks for the review. The article doesn't require that any values be shared; it only states what values lead to morality. What percentage of people share these values (freedom and reason) isn't within the scope of my writing. And values outside of these two aren't relevant for meta-ethics.

As to the scenario you laid out, the issue relates to ethics rather than the meta-ethics the article is about, but I'll still address it. The values of freedom and security would have to be justifiable to someone else. We wouldn't let someone's irrational paranoia guide national security policy, and any reasons provided when making policy (and in the social contract) would need to be public and comprehensible to all who are affected.

And any national security policy would have to be guided by the reason-based moral principles of the social contract. If it goes outside of those principles and acts arbitrarily, then it loses its morality and hence its political authority (imagine subjecting all redheaded people to a special reporting requirement). Only reason has the authority to decide the rights vs. security question, and there will be a range of acceptable policies that respect the boundaries of the social contract. And political communities can give different priorities to the social contract's moral principles based on national facts and circumstances (a community must still value those principles, but it can apply and prioritize them differently based on reason). See here for a discussion on how the social contract can specify rights.

And the error in the last section was treating X's freedom and Y's freedom separately. Freedom is an objective property that cannot reasonably be differentiated. It's not agent-relative; it is agency. There is no X's freedom or Y's freedom; there is only freedom that both X and Y happen to possess.

0

contractualist OP t1_j92h4wd wrote

Kant certainly wasn't providing a descriptive account, whereas Rawls didn't make his views very clear. Evolution is useful for explaining our desires, but it doesn't justify why these desires should be respected or what we should do given these desires.

There are no "should" statements when examining morality through a purely evolutionary lens, and the foundations of morality (the derivatives of the values of freedom and reason) would be the same even if we had evolved differently and developed different desires. Given a different evolutionary trajectory, our specific moral rules might differ, but meta-ethics remains the same.

That being said, science is useful for discovering the moral principles of the social contract, but it doesn't play a role in the first principles discussion that I'm focusing on.

1

contractualist OP t1_j92d657 wrote

Yes, meta-ethical constructivism. I've read some of her work. For me, it has been hit or miss, and her focus on identity steers too much into subjectivism.

Here, I try not to make any assumptions, not even about rationality. The only assumption is that by valuing freedom and reason we can get moral principles. That's what's good about valuing freedom: you don't have to care about what people want, only recognize that they have wants.

1

contractualist OP t1_j92ck17 wrote

Thanks for the review. The biological adaptation relates to descriptive morality, whereas I focus on normative morality.

If the problem is with discovering which reasons cannot be reasonably rejected, then it's only a problem of administrability. This is fine, and not a problem with the philosophy in principle. However, what else would morality be, as the code of conduct governing our treatment of others, if it could not be reasonably accepted by others? I'll discuss this in a later piece.

0