How I don’t think about morality

Ethical philosophy–and introductory ethics courses–brim with quandaries. There’s a trolley barreling down the track toward three people. They can’t get out of the way and will surely die if the trolley hits them. But wait! The track splits, and on the other fork stands just one man. And there’s a switch right in front of you that’ll divert the trolley, killing him instead of the three people it’ll kill if you don’t act. What do you do?

These quandaries exist, of course, to provoke moral thinking. They complicate freshmen’s assumptions, they illuminate intuitions, and they serve to distinguish ethical theories at a fine level. What’s more, a moral theory that fails to answer one of these quandaries fails as a moral theory. Because it’s the hard questions that matter, right?

Not really. I’ve grown tired of quandary ethics, and that’s part of why I find virtue ethics so compelling. The kind of ethical thinking quandaries represent, where factors and rules are weighed and examined to produce an algorithm for morality, seems as divorced from the way we actually think about morality as the computer code underlying Adobe Photoshop is from the paintings an artist creates within it.

This divide has been particularly clear to me due to what amounts to a timing accident. Concurrent with my exploration of virtue ethics, I’m reading, for the first time, Derek Parfit’s monumental Reasons and Persons. And, while the former resonates, the latter often leaves me cold.

Pushing ethics towards quandaries and improbable scenarios moves it away from the problems all of us really encounter–the problems we need moral philosophy to address. I’ve never been in a trolley situation and likely never will be.

Much modern moral thinking holds fast to the idea that we should imagine increasingly bizarre situations, apply our theories to those, and mark them as failures if they can’t come up with the right–or even an–answer. The method informs a great deal of Parfit’s book, particularly its early portions about self-defeating theories and whatnot.

Two things stand out about this contemporary style of ethical thinking. First, its need for odd hypotheticals to expose a theory’s failures leaves open the possibility that many of these discarded theories would do quite well in every moral circumstance most of us will ever be in. Discarding a theory because it fails in weird setups with vanishingly small likelihoods is like discarding the West Coast offense because it’ll do you no good if the other team fields nine-foot-tall defensive backs who can run the 40 in half a second.

Second, the need to answer quandaries strikes me as something of a false dilemma. Isn’t it more likely that there exist moral situations with no good answer? That no matter what we do in the trolley scenario, we’ll do wrong? That, even if we (somehow) pick the “better” answer, we’ll still have something to atone for?

This is why ethics as a set of rules to follow gets it at least partly wrong. Presumably, following the rules precisely to a conclusion would mean arriving at the “right” answer. But it seems obvious that many moral situations have no right answer. The same applies to the consequentialist approach: measuring utility will always point to a “right” answer–except in probably nonexistent situations where the utility gains (or losses) of the two options match exactly.

Thus hard-and-fast deontology and consequentialism don’t, I believe, get it right. They insist on “correct” answers where none exist and, more importantly, fail to match the way we actually think about ethics. Both have much to contribute, of course, but as theories meant to explain the whole of morality, I find both decidedly lacking.