People who punish others the least earn the biggest rewards in repeated interactions, according to a new study published in the journal Nature and authored by Martin Nowak, director of the evolutionary dynamics lab at Harvard University.

At the same time, we are happiest when we’re spending money on others instead of on ourselves, says another team of researchers out of the University of British Columbia and Harvard Business School.

Has the “nice guys finish last” theory finally been put to rest?

J. Glenn

"Has the "nice guys finish last" theory finally been put to rest?"

Not when love is concerned.


Your whimsical question goes to a deep issue.

Given the way economics is taught, or written about, it is often hard to tell the difference between the concepts of self interest, greed, desire, need, and so on. Supply and demand theory, for example, doesn't address the nature of demand, only the fact of it.

We assume that economic systems are effective at distributing resources to people who want them because the systems aggregate "self interest" in some way.

But if the nature of "self interest" is complex instead of simple, as we tend to assume, then our economic theories may be less than shadows of the truth.

Consider the conundrum that soda pop, which is cheap to produce, sells at a per-gallon price which vastly exceeds the per-gallon price of gasoline, which is expensive to produce. Why should this occur, when it is in no one's self interest to pay high prices for cheap goods?

The answer to your whimsical question might be that "nice guys" don't drink soda pop. Price points, for example, might be based on complex, instead of simple, mechanisms.

And if that is the case, much of classical economic theory will have to be revisited.


Jamie C

Aside from increased participation, this sounds identical to the Flood-Dresher experiment (RAND), conducted in the early 1950s. It was criticized for several reasons: 1) Is it a long-run game or truly a combination of independent short-run games? 2) Testing human rationality with small change didn't seem to effectively simulate the stakes that exist in reality.
This study seems to at least improve on #1, and maybe even improves the absolute incentives or disincentives of #2; however, the pesky problem of human (ir)rationality still leaves a lot of room for error. I want to know what Robert Axelrod and the other tit-for-tat guys think of this. William Poundstone's "Prisoner's Dilemma" has a great historical overview.
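For readers unfamiliar with the Axelrod tournaments mentioned above, here is a minimal sketch of an iterated prisoner's dilemma in Python. The payoff values (T=5, R=3, P=1, S=0) are the textbook-standard ones, not the ones used in the Nature study, and the strategies are illustrative:

```python
# Standard one-shot payoffs: (my move, their move) -> my payoff.
# 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): defection wins head-to-head, but barely
```

Note that head-to-head the defector edges out tit-for-tat; tit-for-tat's strength in Axelrod's tournaments came from racking up high mutual-cooperation scores across a whole population of opponents.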


Excellent point in #2... about folks with little money giving the higher PERCENTAGES of their income to charity. For this reason, it irritates me to no end when people like Bill Gates etc. are touted as "philanthropists". I mean, I know WHY they get all the publicity (esp. because their lauders hope the money will continue to flow to them), but still... what about the little-guy philanthropist? Why does he never make the front-page news?


An extension for the latter study: Do we feel happier when we give or when we receive? What do you think?


Aaron@#2: People who act so kindly are likely to be the kind of folks that wind up in heaven!

Alas, Aaron, 'tisn't necessarily so.

One can be a kind, generous Muslim, or atheist, or deist, or Buddhist, or pantheist, or . . .

Sadly, none of us'll be singing with the Heavenly Choir anytime soon.



People who act so kindly are likely to be the kind of folks that wind up in heaven!

Further, I find it glorious that it is the poorer folks who give the greater percentage of their (already meager) incomes to charitable giving. Kind of like the story of the widow's two mites--less than everyone else...yet in God's eyes, more than everyone else.


Wow, I'm surprised people are so skeptical about the idea that being nice has real-world advantages. I've been in the corporate world for more than 20 years and seen my fair share of nasty in-fighting and corporate politics. I've also seen a fair number of cases where, in the end, those who were the worst eventually lost out and either were sidelined or moved out of the business completely (though it may have taken a while). I've always believed in the old standby phrase "What goes around, comes around."


Are we in fact happier because we are spending money on others? It's quite probable that when we have money to spend and are happy, we tend to be in a giving mood.
Cause and effect versus correlation strikes again.


"Has the 'nice guys finish last' theory finally been put to rest?"

Only in controlled situations where everyone is completely equal.
I'd guess that this type of environment almost never exists (or exists for long) outside of these lab experiments.



Well, if the game is structured such that "nice" players win, then of course people will be nicer. But if the scoring system is altered such that "evil" acts are worth more points, then people will act more evil. It's all based on the reward system.
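The point above can be made concrete with a toy calculation: which action dominates depends entirely on the payoff matrix. The two matrices below are made up for illustration; they are not the payoffs used in the study.

```python
def dominant_action(payoffs):
    """Return the row action that is a best response to every column
    action, or None if neither action dominates."""
    actions = ['nice', 'evil']
    for a in actions:
        other = [x for x in actions if x != a][0]
        if all(payoffs[(a, b)] >= payoffs[(other, b)] for b in actions):
            return a
    return None

# Game 1: cooperation is rewarded -- 'nice' dominates.
reward_nice = {('nice', 'nice'): 3, ('nice', 'evil'): 1,
               ('evil', 'nice'): 2, ('evil', 'evil'): 0}

# Game 2: same structure, but 'evil' acts are worth more -- 'evil' dominates.
reward_evil = {('nice', 'nice'): 1, ('nice', 'evil'): 0,
               ('evil', 'nice'): 3, ('evil', 'evil'): 2}

print(dominant_action(reward_nice))  # nice
print(dominant_action(reward_evil))  # evil
```

Identical players, identical reasoning; only the reward system changed.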


Nice guys finish last. Where at?


Eliot Spitzer spent $80,000 on others, according to some reports. He might have been happier if he had not.


Perhaps people enjoy spending money on others, not themselves, because it makes them feel good about themselves. Spending money on oneself is a recipe for guilt, but who doesn't feel good when being nice? It's probably because they like thinking of themselves as "nice." True enough, on that account all acts could be considered "selfish," but that may well be the truth.

I do think that has something to do with the data, though.


Those extremely simplistic games are all fancy and dandy indeed, but they are not relevant to the most omnipresent kind of human interaction among strangers: trade. When we trade, we "spend money on ourselves" and "spend money on others" simultaneously. That's why it's one of the few human activities among strangers where the two parties say "thank you" simultaneously. And there's no need to design fancy and dandy experiments to show that it works and improves the lot of society.


Seems at first look like either a poorly constructed study or a poorly written article. They've arbitrarily assigned a net -.50 to the two parties collectively for each punishment, and a net 0 for each act of non-punishment. Moreover, each act of punishment has an immediate cost of .10 to the punisher. Right away, punishment is obviously a disadvantage. The only way this study becomes meaningful is to show that punishment DOES NOT HAVE THE POWER to overcome this immediate and obvious disadvantage by affecting the future behavior of the rival. The article doesn't begin to tell you whether this is true. Are we to conclude that with infinite trials, punishment never has the desired effect? How much effect did it have? How much more likely are people to cooperate in future trials after receiving punishment? What if the punishment is more severe (e.g., .50) and the cost of administering it less (e.g., .05)? Then is it beneficial to punish?

What's missing here is the ability to control the punishment. If everyone knows ahead of time exactly how much they could be punished for defecting, then they account for this in their decision to defect. A much, much better study would be to simply say the defectee could penalize the defector as much as he wants (x), with a cost to himself of x/4. So if he wants to punish him the standard .40, then he pays .10. But if he wants to punish him 1.00, then he pays .25. The key here is that the THREAT of punishment is initially unknown when the decision is made to defect, and becomes dependent on the strategy used by the punisher. In this case, ESCALATING PUNISHMENT can possibly become a successful strategy. I'll punish you .40 the first time you screw me, but .60 the next time, and .80 the next time, and so on. Sooner or later, can this strategy gain 100% cooperation? Or will it become a never-ending game of chicken? What if I make my punishment strategy unpredictable, based on game theory? What if there are escalators and stable-punishers in the same game?

There are just too many questions left unanswered to conclude punishment doesn't pay.
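The escalating-punishment proposal above is easy to sketch. In this toy model (all numbers are illustrative, not from the study), the punisher chooses an amount x at a cost of x/4 and escalates it with each repeat offense; the defector is a simple threshold agent who keeps defecting only while the expected punishment is below the hypothetical one-shot gain from defecting:

```python
DEFECT_GAIN = 0.50   # hypothetical one-shot payoff for defecting

def escalating_punishment(offense_count, base=0.40, step=0.20):
    """Punishment grows linearly with each repeat offense."""
    return base + step * offense_count

def simulate(rounds=10):
    """Return a per-round log of (move, punishment dealt, cost to punisher)."""
    offenses = 0
    log = []
    for _ in range(rounds):
        threat = escalating_punishment(offenses)
        if threat <= DEFECT_GAIN:
            # Defecting still pays: defect and absorb the punishment.
            offenses += 1
            log.append(('defect', threat, threat / 4))
        else:
            # The threat now exceeds the gain: cooperate.
            log.append(('cooperate', 0.0, 0.0))
    return log

for move, x, cost in simulate():
    print(move, x, cost)
```

Against this (admittedly very compliant) defector, one round of .40 punishment is enough to push the threat past the gain, and cooperation holds thereafter. A defector with a different decision rule, or a game of mutual escalation, could of course produce the "never-ending game of chicken" the comment worries about.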


Pascal Warnimont

If I'm not wrong, game theory tells you what the best strategy is in the long run, provided you know the cost matrix.

What I don't understand from the article is whether punishment should yield equal results whatever the matrix or the punishment incentives, which would be counterintuitive. I can only assume that the rules of the experiment's game were designed that way.

The more general question to me is how to extend the results of such an experiment to real life. If the results contradict theory, how can you be sure that in real-life experience the knowledge of the right behavior will not spread after a time? If they do not, how can one generalize the results of a situation where all the odds are known to a real life that is more often than not unmeasurable?

In my 15 years of working life, I have been both rewarded and punished for both cooperating and defecting. My only obvious conclusion is that I still cooperate today with the people I cooperated with best.

I would be more interested in studies where the reward is given when players find a solution to a problem for which the information is scattered among the players, and cooperation is defined as sharing the information.



This is pretty basic game theory - look up the studies/tournaments Axelrod ran on the prisoner's dilemma for more detail, but the winning strategy was always one of the simplest: tit-for-tat, which rewards "nice" play/cooperation.

As far as I know, no one has come up with a better strategy for the prisoner's dilemma, although I would be interested to hear if it has been bested.
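A tiny round-robin tournament in the spirit of Axelrod's (payoffs are the standard T=5, R=3, P=1, S=0; the field of strategies is made up for illustration) shows the flavor of the result: the retaliating-but-nice strategies come out ahead of the unconditional defector.

```python
PAYOFFS = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else 'C'

def grim_trigger(mine, theirs):
    """Cooperate until the opponent ever defects, then defect forever."""
    return 'D' if 'D' in theirs else 'C'

def always_defect(mine, theirs):
    return 'D'

def match(strat_a, strat_b, rounds=100):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        sa += PAYOFFS[(a, b)]
        sb += PAYOFFS[(b, a)]
        ha.append(a)
        hb.append(b)
    return sa, sb

def tournament(strategies):
    """Every strategy plays every other once; total score decides."""
    totals = {s.__name__: 0 for s in strategies}
    for i, a in enumerate(strategies):
        for b in strategies[i + 1:]:
            sa, sb = match(a, b)
            totals[a.__name__] += sa
            totals[b.__name__] += sb
    return totals

print(tournament([tit_for_tat, grim_trigger, always_defect]))
# {'tit_for_tat': 399, 'grim_trigger': 399, 'always_defect': 208}
```

The two "nice" strategies earn 300 points from each other while the defector collects only meager mutual-defection payoffs, so niceness wins at the population level even though the defector wins each individual pairing.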


Without having read the paper (did the game have a certain fixed or an uncertain end point?), I think one Nash equilibrium would be simply to play cooperate until a deviation occurred, then punish the other player until he punishes himself (by playing NC as well), then cooperate again (Nash reversion). It might not even be a Nash equilibrium to use the heavy punishment card in a trembling-hand equilibrium.
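The strictest form of the Nash-reversion idea above is the "grim trigger": cooperate until the opponent ever deviates, then defect forever (the comment's version is more forgiving, relenting once the deviator has punished himself). A minimal sketch with an illustrative one-time deviator:

```python
def grim_trigger(my_history, their_history):
    """Cooperate until the opponent ever defects, then defect forever."""
    return 'D' if 'D' in their_history else 'C'

def one_shot_deviator(my_history, their_history):
    """Hypothetical opponent: cooperates except for a single defection
    in round 3."""
    return 'D' if len(my_history) == 2 else 'C'

def run(strategy_a, strategy_b, rounds=6):
    """Return both players' move sequences as strings of C/D."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return ''.join(hist_a), ''.join(hist_b)

print(run(grim_trigger, one_shot_deviator))  # ('CCCDDD', 'CCDCCC')
```

A single deviation in round 3 triggers permanent defection from round 4 on, which is exactly the threat that supports cooperation as an equilibrium in an infinitely repeated (or uncertain-endpoint) game; with a known fixed endpoint, backward induction unravels it.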