
That's... kinda extremely horrifying. And exactly what C. S. Lewis in The Magician's Nephew and, over 70 years later, Susanna Clarke in Piranesi warn us about: "You mean that little boys ought to keep their promises. Very true: most right and proper, I'm sure, and I'm very glad you have been taught to do it. But of course you must understand that rules of that sort, however excellent they may be for little boys—and servants—and women—and even people in general, can't possibly be expected to apply to profound students and great thinkers and sages. No, Digory. Men like me, who possess hidden wisdom, are freed from common rules just as we are cut off from common pleasures. Ours, my boy, is a high and lonely destiny."

To which we, like Digory, should retort angrily: "That just means you think you can do whatever you want."


You say he has *appeared* to change his mind about it being morally obligatory to give to the point of marginal utility. But I don't see that. It just seems that he has stopped emphasizing the obligatory and has instead emphasized the points of greatest leverage.

A utilitarian doesn’t think that doing what is obligatory matters in any distinctive way beyond any other equivalent improvement.

Non-utilitarians might think that the obligatory is all that matters, but utilitarians think that every improvement matters in proportion to how big it is, with the last improvement being no more important than others.


I think this post is kinda conspiratorial and overly suspicious.

I have three issues with this mindset:

(bad grammar here, have a cold, so a bit groggy)

1: Failure to imagine people having different views of moral obligations (i.e., how one must act if they see something as moral). Morality for me here would be more like "have more of a good thing" rather than "if I don't do X, I'm a monster." If you think that view of moral obligations is wrong, well, that's that, I suppose; but it's not being hypocritical in the classic sense.

2: Viewing someone who endorses giving all surplus money to charity as optimal (A), but encourages people to give 10% in public (B), as being dishonest or consciously concealing. My view is that unless I'm lying and saying I don't think A is moral but B is, then that isn't dishonest at all. It's just normal human social savviness.

I'm a pescatarian (health issues mean I need fish; otherwise I'm 90% vegetarian), but I know most people won't give up 100% of their meat consumption, even though I think that would be the most moral thing. So instead I encourage people to try meatless Mondays.

Is that dishonest?

PS: In the "People I Mostly Admire" episode with Peter Singer, I'm pretty certain Peter Singer says that giving 10% is less than the most good people could do, but it's still very good to give and it's actionable. If people decide to give more than that, even better!

3: It feels like Bryan thinks that if a philosopher thinks up a hypothetical scenario and works out what would be optimal in it, then that philosopher would apply the conclusion to anything vaguely resembling that scenario, and is thus immoral and untrustworthy, even if they say the scenario is super unlikely and not useful to act on most of the time.

I think that if the surgeon dilemma happened, and you KNEW with 100% certainty that it had the consequences it is posited to have, it would be morally right and good to kill the one person to save five. But practically speaking, those conditions never hold, and it's much better to focus on growth or improvement. And since the human mind has a strong tendency toward bias, you either need absurd certainty for killing the one to be correct, or much, much higher numbers of people saved with high certainty.

I suppose you can argue that utilitarians with these caveats are still gambling with the devil and being overconfident, or that in practice they help genocides or political catastrophes happen.

A last note: Bryan seems to have high confidence that common-sense morality/intuitionism is the best way to deduce what is moral, so if an intuition says a given utilitarian conclusion is repugnant, that is super strong evidence that utilitarianism is wrong. I think intuitionism is interesting and useful as a tool for reasoning about morality, but I think Bryan and Michael Huemer strongly overestimate its validity, partly, I suspect, because they have different personality traits than a lot of people.

Perhaps this makes me a monster, but honestly my intuition is super weak, so intuitionism is sort of useless to me, while utilitarianism serves as a good guide to moral behaviour as long as you add uncertainty to it. "It feels wrong" doesn't compel me much at all, while "X amount of utility" reads much more convincingly to me. I then just remodel people's "it's wrong" into different levels of anti-utility for that individual. Maybe it's antisocial, but it helps my autistic brain be better and help people.


Very interesting. I get why Singer thinks he shouldn't lay out all the implications of his extreme utilitarianism. But how does that absolve HIM of the obligation to act in accordance with his own strongly held philosophical views? Isn't THAT hypocritical?


As further evidence, see Singer's conversation with Tyler Cowen from 2009 where he is fairly direct on this issue, foreshadowing his eventual adoption of a Noble Lie:

"TC: You think a Utilitarian has to be a kind of Straussian and embrace certain kinds of public lies to incentivise people?

PS: I think that's a really interesting issue. Yeah, I would say he has to be a Sidgwickian. I prefer being a Sidgwickian to a Straussian, just because Straussians have a rather bad flavor to it after they were used in the Bush administration. You could say that the Iraq War conspiracy was kind of Straussian. But, of course, Henry Sidgwick talked about that, he said that for a Utilitarian it is sometimes going to be the case that you should do good, but you need to do it secretly because if you talk publicly about what you're doing this would set an example that would be misleading to others and would lead to bad consequences. I think that's true, and I think for a Utilitarian it's inevitable that there will sometimes be circumstances in which that's the case."

https://www.lesswrong.com/posts/WHJkPQ8jeCW3FaQGx/peter-singer-and-tyler-cowen-transcript


Isn't Singer's publicly presenting this argument itself a violation of the argument?


I say that consequentialism and deontology are to be mutually sustained and developed so as to be brought into correspondence with one another.

I elaborate in a video here (watch at 1.75x speed):

https://www.youtube.com/watch?v=5S7NoL_5eVY

The presentation is about 15 minutes at 1.75x speed.

Bifurcating or fundamentally demarcating consequentialism and deontology is a huge and tragic error.

I think Singer & Lazari-Radek get consequentialism wrong when they say:

"We agree that the consequentialist must accept that, in these circumstances, the right thing for the surgeon to do would be to kill the one to save the four..."


The Noble Lie is in itself a good utilitarian remedy if you think your reason for feigning moderation is the expectation of good results from that lie.


Although I'm proud to be a member of the "foolish masses," I did graduate with Honors from Princeton, and like many of my fellow alumni, I was quite unhappy that, after searching the world for a professor to round out a new position on "moral behavior" in the philosophy department, the powers that be chose Peter Singer. Right from the start, Singer came on strong, spouting some crazy and eugenic ideas which shocked almost everyone.

Since those early days of his tenure, Singer may be feigning a degree of moderation to calm down his detractors. He may believe that a professor can only go so far to stimulate discussion and debate. And, most likely, he may realize that his superiors could end his stay at the college if he foments too much controversy.

Shortly after his arrival at Princeton, he was promulgating the concept that a healthy newborn pig was more valuable than a newborn human with disabilities; that the unhealthy human should be euthanized rather than the pig; that newborn babies could reasonably be aborted/killed for a brief period post-delivery. These positions were not presented as questions for debate ("Do you think it is moral to kill a dysfunctional newborn baby?"). Instead they were stated and supported as perfectly moral positions by Professor Singer, and they were met with considerable horror by many.

But there was little that could be done. Princeton had already begun to distance itself from its religious roots, which dated back to John Witherspoon, the pastor from Scotland who was drafted to come over and accept the presidency of the College in the 18th century. I suspect many alumni scaled back their financial contributions to the college, but that did little to slow the meteoric growth of its endowment, fed by many big donors and many givers of more moderate amounts.

As at most colleges in the country, ninety percent of the employees, from the President on down to the instructors, are on the left side of the political spectrum and teach accordingly. Singer may be apolitical, but he is certainly a radical on "ideas." While it can be argued that all ideas, of every nature, should be presented for debate, the real hypocrisy at Princeton, and at most every other college, is that opposing, or conservative, ideas are rarely available. And when a rare conservative does visit, the speech is usually disrupted by protesters who obviously have no interest in debating ideas that differ from their own notions.

The utilitarian arguments that Caplan and Singer engaged in are more ethically moderate than Singer's ideas on life and death, but they are still overly abstract, as most philosophers like to have them. I suggest that there is no "perfect" dollar amount that can govern charitable contributions. The real questions are to whom the gifts are made, what those recipients do with the money, what the real-world impact of the gift is, and how much the donor and his family should suffer from the loss. Obviously, some gifts to some organizations do more harm than good, and there is no reason to believe that a person on minimum wage should give away the same percentage as a wealthy individual. And finally, is there any reason to give anything to those colleges with vast endowments that cannot even teach why some nations succeed more than others or why the Industrial Revolution "happened" in Europe?


The noble lie also makes politicians much better than they first appear.

They almost all seem to lie all the time, but they have to hold power in our system, and they all probably think someone worse will come along if they become unwilling to lie.

Of course this means that, from the outside, it's impossible or very difficult to tell the noble-lie politicians from the lying-for-personal-gain-and-lolz politicians.


I think the theory of the noble lie explains a surprisingly high percentage of elite discourse.

I think at least two other popular intellectuals engage in the noble lie:

1. Daniel Dennett on free will (his argument seems to be, wink wink nudge nudge, that we have to pretend it exists as a useful social construct).

2. Jordan Peterson also seems not to believe in free will, but he never spells it out.

He says things like "predominantly leftist" Silicon Valley billionaires are not responsible for their IQ and hard-work ethic, which seems to suggest no free will, yet I think he knows his audience would like him less if he said that.

I also wonder about his Christianity, both because an overnight conversion from atheism is pretty rare and because it's extremely rare for a Christian not to believe in free will.

I'm personally undecided about free will.


If the Lie is really Noble, it is wrong for you to expose it as a lie.


Caplan, I'm surprised you would "expose" Singer like this. Here is the set of beliefs that led me to be surprised. Can you clarify which of these beliefs is wrong, and/or why you wrote this post?

1. Caplan is also a utilitarian (though he does not share exactly the same beliefs as Singer).

2. Caplan understands, and is sympathetic to, Singer's "secrecy argument."

3. Caplan judged that it would be wrong from a utilitarian perspective (doing more harm than good) to signal-boost Singer's secrecy argument to the subscribers of his Substack.


It will always be easier to use other people instrumentally than to do the work of coming up with solutions that don't require their human sacrifice. When we restrict that sacrifice, we have to look elsewhere for answers. Isn't that how we developed brain surgery in the first place?


As I recall Plato's Gorgias, Socrates argues that the worst of all evils is to intentionally put error into another person's mind.

A danger in interpreting someone's work esoterically is (as I have seen among some U of C Straussians) to take the things one agrees with as straight and the things one disagrees with as esoteric, and then to find arguments to make it so. I do not see that error here, however. But it seems undeniable that this is the right way to read many authors; see Arthur Melzer, Philosophy Between the Lines.


Reading the comment section, I feel like part of the divide here is about how much you can trust other people, with some people feeling that if a utilitarian says something weird, it's a strong sign they are untrustworthy/immoral. And if humans are generally untrustworthy, then being simple and consistent in an easily understandable way is highly important, while complicated or algorithm-based decision-making is untrustworthy.

I wonder if belief in utilitarianism is correlated with your belief in general human trustworthiness: I happen to be kinda ridiculously trusting by default, and I'm pretty utilitarian. But that's a sample of one, though.

My impression of Bryan's view of humanity is that he thinks humans are awesome and that you can generally trust people, but that some are power-hungry and evil. On a "how trustworthy are humans" scale from 1 to 10, I think I would give him a 6, while I think I have a 9 (which is too high in practical matters; 7 or 8 is probably best, but my brain is not good at reprogramming itself).
