From PhilPapers forum Normative Ethics:

2013-02-24
Sweatshop or death--are my preferences irrational?

Hi John,

I'm afraid I've been slow in gathering my thoughts, so this reply is somewhat delayed.

Thanks for your very full answer to my question. Can I start off my response by sketching out what I think utilitarianism is? I hope this will help here, and in any future discussion.

First, an important distinction: between utilitarianism and something else that can be mistaken for it. The latter is an eclectic ethical outlook which includes, when appropriate, deliberations about consequences. We all know that we need to pay attention to how things are likely to turn out, and this was known and thought about long before Bentham and Mill. But it is not in any sense utilitarianism.

Utilitarianism (at least in its hedonistic guise) starts with the belief that the only ethical good is happiness, and there is no other. Kindness, for instance, is of ethical interest only insofar as it conduces to the maximization of happiness. I think that you and Derek must be right in thinking that something like a belief in human worth underpins a utilitarian commitment, but this can be no more than an intuition. If it were erected into some kind of principle alongside the happiness principle, then there is the potential for conflict and a dilution of the all-important central principle.

Therefore, for the utilitarian, all (ethical) thought and intention is directed to increasing the sum of happiness, however that is defined (and as you make clear, one would want that concept to include much more than instant gratification). Of course this does not guarantee that in an emergency (which child to pull from the raging torrent?) one would be motivated purely by utilitarian principles. All the same, as a utilitarian I would want to hone my habitual responses so that they became consistently utilitarian in the majority of situations.

This has an important consequence: the good utilitarian needn't always think like a utilitarian. Sidgwick, and I think Hare, reckoned that one could have motivations that weren't utilitarian as long as one's actions produced an increase in happiness. For in the end motivations and intentions don't matter, only the increase of happiness, however that is to be achieved.

If you can accept this account as plausible and (I hope) convincing, then the questions I put to you previously assume some importance. There is, first, the general problem of how any normal human being, with very imperfect knowledge, can hope to decide accurately which of a number of potential actions will bring about the maximal general happiness. But of much more importance, I think, is the problem of integrity: the achievement of increased general happiness must take precedence over every finer feeling, such as care of our loved ones. All other considerations must be set aside in the light of the utilitarian imperative. Of course, in the light of the previous paragraph, such finer feelings might conduce to the general happiness; but then again, they may not, and if not, they should be suppressed. Thus utilitarianism, in its suppression of integrity and conscience, comes to lack true humanity.

In the light of all this, I can see little to recommend this kind of ethical approach. Happiness will of course go on being regarded as important, and so it should. But we should see it as only one of a number of things we should be pursuing as ethical agents, rather than the only thing.

I thought your comments suggested that you would want to make room for much more in ethics than utilitarianism allows. If that is so, then you aren't really a utilitarian, or rather you aren't really representing a utilitarian position. Do you think I've got this right, or have I simply misunderstood you?

Romney