Submitted by contractualist t3_115h6a0 in philosophy
KingJeff314 t1_j92mylj wrote
Your article hinges on the idea that humans share values and therefore can come to a normative consensus. It is much more complex than that. Humans have many different values, often conflicting with each other, and each person weighs values and who the values apply to differently.
Some people value security more than freedom, for instance. Should a government conduct more invasive searches under the threat of a terrorist attack? Either it does nothing and potentially allows a terrorist attack, or it acts to stop one and violates citizens' freedoms in the process. This is a Trolley Problem. Your article suggests "No answer would be justifiable to all involved parties since they would all have a reasonable claim to not being [killed/invasively searched]". Your Trolley Problem article also states, "Like so many other life dilemmas, pure reason cannot provide a definite answer to the trolley problem. Only the free self can make a choice whenever there are sufficient reasons for either side of a decision." Basically, when we get to moral problems with any degree of complexity, your model of pure reason is insufficient.
Additionally, your reasoning that "valuing freedom necessarily implies valuing the freedom of others" is insufficient. To show the gap in logic, let me restate the claim in propositional logic:
Definitions: Freedom(X,Y) means that X values Y's freedom, Free(X) means that X is a free agent, and H is the set of humans ("∀" means "for all", "∧" means "and"). We can assume (∀X in H, Free(X) ∧ Freedom(X,X)).
So then your claim is that (∀X,Y in H, Freedom(X,X) ⇒ Freedom(X,Y)). Your justification in the linked article is "If others are regarded as having similar freedom to his own—by having the capacity to freely make decisions, including the decision whether or not to be moral—then he cannot deny the value of their own freedom". Propositionally, this is (∀X,Y in H, Free(X) ∧ Free(Y) ∧ Freedom(X,X) ⇒ Freedom(X,Y)). This does not follow. It assumes a symmetry that does not necessarily exist.
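To make the gap concrete, here is a minimal countermodel sketched in Python (the two agent names and the particular Freedom relation are invented for illustration): a world where every agent values their own freedom, yet the universal conclusion still fails.

```python
# Countermodel: Freedom(X,X) for all X does NOT entail Freedom(X,Y) for all X,Y.
# Agents and the Freedom relation below are purely illustrative.
H = ["A", "B"]  # two free agents

# Who values whose freedom. Everyone values their own freedom,
# but A does not value B's freedom.
Freedom = {("A", "A"), ("B", "B"), ("B", "A")}

# Premise: ∀X in H, Freedom(X,X)
premise = all((x, x) in Freedom for x in H)

# Claimed conclusion: ∀X,Y in H, Freedom(X,Y)
conclusion = all((x, y) in Freedom for x in H for y in H)

print(premise, conclusion)  # True False — premise holds, conclusion fails
```

Since the premise is satisfied while the conclusion is not, the implication is not a logical consequence; some extra assumption (e.g. the symmetry of the Freedom relation) would be needed.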
Overall, I caution you against playing loosely with assumptions about values. Can we even be sure that any two humans share the exact same set of values?
contractualist OP t1_j92r8pr wrote
Thanks for the review. The article doesn't require that any values be shared, it only states what values lead to morality. What percentage of people share these values (freedom and reason) isn't within the scope of my writing. And values outside of these two aren't relevant for meta-ethics.
As to the scenario you laid out, the issue relates to ethics rather than the meta-ethics the article is about, but I'll still address it. The values of freedom and security would have to be justifiable to someone else. We wouldn't let someone's irrational paranoia guide national security policy, and any reasons provided when making policy (and in the social contract) would need to be public and comprehensible to all who are affected.
And any national security policy would have to be guided by the reason-based moral principles of the social contract. If it goes outside of those principles and acts arbitrarily, then it loses its morality and hence its political authority (imagine a requirement that all redheaded people be subject to a special reporting requirement). Only reason has the authority to decide the rights vs. security question, and there will be a range of acceptable policies that respect the boundaries of the social contract. And political communities can give different priorities to the social contract's moral principles based on the national facts and circumstances (each must still value those principles, but it can apply and prioritize them differently based on reason). See here for a discussion on how the social contract can specify rights.
And the error in the last section was treating X's freedom and Y's freedom separately. Freedom is an objective property that cannot reasonably be differentiated. It's not agent-relative, it is agency. There is no X's freedom or Y's freedom, there is only freedom that both X and Y happen to possess.
KingJeff314 t1_j92wme4 wrote
> And the error in the last section was treating X's freedom and Y's freedom separately. Freedom is an objective property that cannot reasonably be differentiated. Its not agent-relative, it is agency. There is no X's freedom or Y's freedom, there is only freedom that both X and Y happen to possess.
To make a statement like "you should not kidnap a person", you have to appeal to a value like "you value that person's freedom", not "you value freedom", which is nebulous and non-specific. Supposing that I were a psychopath and only cared about my own freedom (i.e., Freedom(Me, Me)), what rational grounds do you have to make me care about anyone else?