Reciprocal altruism

Reciprocal altruism is a social interaction phenomenon where an individual makes sacrifices for another individual in expectation of similar treatment in the future.

Originally introduced as a concept by biologist Robert Trivers, reciprocal altruism explains how altruistic behavior and morality can arise from evolutionary causes, as evolution selects for strategies that yield the best long-run game-theoretic payoffs.

If the benefit is higher than the initial cost, then repeated reciprocal interactions can actually outcompete more "greedy" forms of relationships, thus providing an evolutionary incentive for altruistic behavior.

At the same time (and in contrast to unlimited altruism), reciprocity ensures that cheaters are harmed when they choose to cheat, and are gradually made less fit as a result of their own behavior.

Modern ethnology seems to support at least part of this hypothesis: pure bartering economies (as in neoliberal dreamland) are rarely observed in the wild, but many societies on all continents have developed highly complex forms of gift economy, where gifts are given with no immediately obvious material return but with the implicit societal expectation of "repayment" in gift form at some later point in time. Amazingly, those societies work. The custom of giving gifts for birthdays in the West may be seen as a remnant of this.

Game theory

            B: Strong       B: Weak
A: Strong   A: -3, B: -3    A: +4, B: -2
A: Weak     A: -2, B: +4    A: +3, B: +3

Reciprocal morality and its benefits are often encountered in practice, and can be modeled in game theory. The usual example is the effectiveness of the simple tit-for-tat algorithm in an iterated prisoner's dilemma, that is, a scenario that is repeated many times with the gains and losses tallied.

Consider the following situation: two agents are each asked to take either a "Strong" or a "Weak" position (see the payoff sketch after the list below).

  • If one agent takes a Strong position and the other a Weak position, the Strong one gains a benefit and the Weak one loses. (The Strong one screws the Weak one over.)
  • If both agents take a Strong position (both try to screw each other over), they both incur a penalty.
  • If both take the Weak position, they both gain a benefit.
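
To make the payoffs concrete, here is a minimal Python sketch of the matrix above (the names PAYOFFS and payoff are purely illustrative; the numbers are taken directly from the table in this section):

    # Payoffs from the table above, keyed by (A's move, B's move).
    # Each entry is (A's points, B's points).
    PAYOFFS = {
        ("Strong", "Strong"): (-3, -3),  # both try to screw each other over
        ("Strong", "Weak"):   (+4, -2),  # A screws B over
        ("Weak",   "Strong"): (-2, +4),  # B screws A over
        ("Weak",   "Weak"):   (+3, +3),  # both cooperate
    }

    def payoff(move_a, move_b):
        """Return (A's points, B's points) for one round of the game."""
        return PAYOFFS[(move_a, move_b)]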

This simple "game" happens to model quite a few situations in life, examples of which are given further down. Quite a few people of a simplistic mindset would "solve" this game by attempting to screw the other one over, going for the maximum one-off reward, often justifying it with a thought like this: "If I don't, the other one will. If he doesn't, I win. If he does, at least I won't be played for a sucker."

The tit-for-tat algorithm instead prescribes the following form of play (a short code sketch follows the list):

  • First, be nice :) (Altruism: the tit-for-tat algorithm is never the first to take the Strong position.)
  • Be provocable: return defection for defection, cooperation for cooperation. (Reciprocity)
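
As a minimal sketch (reusing the payoff function above, and assuming "Weak" stands for cooperation and "Strong" for defection), a tit-for-tat player fits in a few lines of Python:

    def tit_for_tat(opponent_history):
        """Cooperate ("Weak") on the first move, then copy the opponent's last move."""
        if not opponent_history:        # be nice: never the first to play Strong
            return "Weak"
        return opponent_history[-1]     # be provocable: mirror their previous move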

Results:

It turns out that tit-for-tat is the optimal play, because most people fail to take into account the following two conditions:

  • That someone might be involved in multiple such interactions.
  • That, in the wider scheme of things, one is competing not only against one's partner but also against every other such A:B pair in the world, each of which might be opting for a different kind of play.

In large-scale simulations where multiple such algorithms were placed in competition with each other, tit-for-tat consistently ended up among the top performers. That's because, although "evil" algorithms might occasionally extract a one-off profit from tit-for-tat, reciprocity ensured that they were subsequently damaged for it. Furthermore, when two "evil" algorithms competed with each other, they pretty much destroyed each other.

On the other hand, every time two tit-for-tats met each other (i.e., both opened with a Weak position and continued to do so), they quickly racked up the points. They did that every single time, no matter the complexity of the competing algorithms (e.g., algorithms that first cooperate and then suddenly try to cheat).
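
A toy iterated match, continuing the sketches above (always_strong is a hypothetical stand-in for the "evil" algorithms mentioned here), shows the same pattern on a small scale:

    def always_strong(opponent_history):
        """A "greedy" player that always defects."""
        return "Strong"

    def play_match(strategy_a, strategy_b, rounds=100):
        """Play an iterated game; return (A's total points, B's total points)."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)  # each player sees only the opponent's past moves
            move_b = strategy_b(history_a)
            points_a, points_b = payoff(move_a, move_b)
            score_a += points_a
            score_b += points_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    print(play_match(tit_for_tat, tit_for_tat))    # (300, 300): both cooperate every round
    print(play_match(always_strong, tit_for_tat))  # (-293, -299): one +4 sucker punch, then mutual punishment

The cheater's one-off +4 is swamped by the retaliation it provokes, which is the reciprocity effect described above.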

Further notes:

It is interesting to note that in the payoffs above, although the maximum individual reward is +4 (in a Strong-Weak situation), the maximum reward a pair can extract from the system is +6 (Weak-Weak: +3 + 3), while the Strong-Weak pair nets only +4 - 2 = +2 between them. If "points" are read as "money", this means that by the end of the transaction the "world" of the Weak-Weak pair would overall be richer than that of the Strong-Weak pair.

Examples

Pretend two companies, Google and Apple, decide to jointly develop a new gadget (say, a car). They both insist on a Strong position: Google wants the car marketed under its name, and Apple wants the car marketed under its name. If either managed to convince the other to give way, it would gain a significant advantage over its partner. However, because they both take Strong positions, they are deadlocked and nothing gets done.

At the same time, two other companies, Microsoft and Yahoo, decide to jointly develop a car, but don't fight over whose brand it'll be marketed under (that is, they both take a Weak position). As a result, they quickly move forward, reach the market first, and make lots of money. Because this worked out so well, they continue their partnership and go on to make even more money together. The lesson is that, ultimately, it's not about making "more" money than your partner, but about making money at all. Taken together, these two scenarios also illustrate the frequent suboptimality of the strategy combinations termed "Nash equilibria" (outcomes in which no player can gain by unilaterally changing his or her own play, i.e., where each acts only in what appears to be his or her immediate interest), which have been the subject of much study in game theory; though it should be noted that in this example, a case of the more general prisoner's dilemma given above, the Nash equilibrium happens to offer the optimal strategy.

Consider a cleaner fish that cleans a larger predatory fish. Sure enough, the larger fish could just snap its jaws and kill the little fish (i.e., take a Strong position). However, in time it would become dirty and possibly ill. By not eating the cleaner fish, both fish benefit: the larger one by getting cleaned, and the cleaner fish by getting fed. This continuous, mutual benefit is more important than a one-off unfair transaction.

Consider the USA and the Soviet Union (the classic example). During the Cold War, both could take a "Strong" position (firing nukes) or a "Weak" one (not doing so). Sure enough, if at some point the US or the USSR had taken the "Strong" position (a pre-emptive strike) and the other side had not retaliated, it would have won with maximum profit. On the other hand, if both had taken the Strong position (both fired nukes), they would have incurred a huge penalty. As it turned out, the optimal strategy was for both to take the "Weak" position (not firing nukes), and as a result no one got killed.

Economics

It is interesting to note that the only algorithm that has been able to outcompete tit-for-tat in the previously mentioned simulations was a "collective" algorithm, in which drone algorithms in master/slave relationships sacrificed themselves and gave their points to a designated master.[1]

However, it is worth noting that when two such hive structures meet, they can still collide and damage each other, whereas tit-for-tat algorithms always work well with each other.

For a real-world example, consider merchants versus aristocratic/religious authorities. In classical or even medieval times, it was not unusual for traders to operate without regard for borders or status and amass considerable wealth; however, they could hardly "outcompete" kings and aristocratic families who derived their wealth from the exploitation of populations. Yet while traders simply had to sail into a peaceful port to immediately begin transactions with people of their kind, when two aristocratic structures tried to exploit each other, the result was usually war and extensive damage to one or both.

Interpersonal relations

It's not uncommon for someone to engage in this behavior with the object of their affection, i.e. being nice to them with the expectation of a sexual relationship. Since a lot of these situations tend to involve lonely, single straight men, the common term for this is "Nice Guy"[2] — in other words, the suitor's claim "but I'm a nice guy..." translates to "I went through all the motions and she still won't sleep with me." As a general rule, this is not an effective strategy, and often even drifts into stalking behavior. Women who engage in the same behavior do not get as much attention, but are still known (naturally) as Nice Girls. Either way, such people are seldom actually nice, and frequently come off as manipulative and bitter without realizing it.

The fallacy lies in equating a sexual relationship with being nice: if their expectation of tit for tat were actually symmetrical, i.e. niceness in return for niceness and honesty in return for honesty (which they fail at, coming into the relationship with entirely different expectations than they communicate), they wouldn't face such a problem. In general, healthy relationships, sexual and otherwise, absolutely do work on a tit-for-tat basis. You just shouldn't expect a much larger tit (pun not intended) for your tiny, barely noticeable "common decency" tat. Duh.

A large portion of the men's rights movement is made up of bitter Nice Guys who consistently fail to reject this transactional model of relationships.[3]



References

  1. Wendy M. Grossman, "New Tack Wins Prisoner's Dilemma", Wired, 13 October 2004.
  2. Sometimes spelled with a sarcastic (tm) at the end.
  3. e.g. love-shy.com
  4. A play on the quote "Nice guys finish last", which the famed Dodgers manager Leo Durocher said while talking about managing baseball.