Roleplaying! As if the world weren't full of enough history without inventing more …
Posted on 27/06/2005
Phil Trice wrote last April:
“The ‘Fooled By Randomness’ book is about the relationship of signal and noise and how we mistake noise for signal, or mistake what signals ‘mean’. These Games seem to be organized around Equilibria — dynamic arrangements around equilibria.”
On a site – I don’t remember where – I found this wonderful metaphor: two criminals, Peter and John, are arrested for having committed a crime together, but the police lack sufficient proof to have them convicted.
From notes I made: The two prisoners are isolated and each is offered a deal: the one who gives evidence against the other will go free. Three outcomes are possible:
- John and Peter both confess and are both punished, though each less severely than a lone holdout would be; the police get all the proof they need. John gets 0, Peter gets 0, joint gain = 0.
- One of the two accepts the deal and the other does not: the betrayer goes free and gains, while the other receives the full punishment; the police have sufficient proof. One criminal gets 10, the other gets -10, joint gain = 0.
- Both refuse to talk: professional criminals cooperating against the police each receive only a small punishment for lack of proof. Peter and John both gain: each gets 5, joint gain = 10.
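The three outcomes above can be written down as a small payoff table. A sketch in Python, using the post’s numbers (the dictionary layout and names are my own):

```python
# Payoff table for the scenario above, using the post's numbers.
# "defect" = accept the police's deal and confess; "cooperate" = stay silent.
# Keys are (John's move, Peter's move); values are (John's payoff, Peter's payoff).
PAYOFFS = {
    ("defect", "defect"):       (0, 0),     # both confess
    ("defect", "cooperate"):    (10, -10),  # one-sided betrayal
    ("cooperate", "defect"):    (-10, 10),
    ("cooperate", "cooperate"): (5, 5),     # both stay silent
}

def joint_gain(moves):
    """Total gain of the pair for a given pair of moves."""
    john, peter = PAYOFFS[moves]
    return john + peter

for moves in PAYOFFS:
    print(moves, "->", PAYOFFS[moves], "joint gain:", joint_gain(moves))
```

Only the mutual-cooperation cell produces a joint gain (10); the other three cells sum to zero, exactly as in the bullets above.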
The dilemma seems to reside in the fact that each prisoner has only two options and cannot make a good decision without knowing what the other will choose. Such a distribution of possible losses and gains seems common to me: the would-be cooperative thief whose cooperation is not returned loses resources to the dishonorable %^%$@#$, and neither of them collects the additional gain that comes from the “synergy” of their cooperation.
The expected gain when cooperating against the police must be smaller than the offered gain for one-sided defection in order for a “temptation to defect” to exist.
In most prisoner’s dilemma situations we can assume the collaborative effect to be smaller than the gains made by betrayal – accepting value without providing value in return. Imagine two thieves able to do a heist twice as large as the largest one either of them could have done alone. “Even if an altruistic criminal would gain some loot and give it to another thief, and the other thief would do nothing in return, the selfish thief would still have less money than if he had helped his companion to do the heist.”
The problem with the dilemma is that if both prisoners were purely rational, they would never cooperate. Rational determination means choosing whatever is best for yourself, no matter what the other thief chooses:
- Suppose my colleague accepts the offer: then it is better for me to accept too. That means no personal gain, but whatever happens, at least I am not stuck with the -10 loss.
- Suppose my colleague does not accept the offer: then I gain either way, and I gain more by doing the dishonorable thing. So if I were to determine by rationality, my choice would be to confess.
- If both determine by thinking, both will decide to defect, and neither will gain anything. However, if both were to “irrationally” decide to cooperate, both would gain 5 points.
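The dominance reasoning in these bullets can be checked mechanically. A sketch, again using the post’s numbers (the helper name is mine):

```python
# Payoffs from the post: keys are (my move, other's move),
# values are (my payoff, other's payoff).
PAYOFFS = {
    ("defect", "defect"):       (0, 0),
    ("defect", "cooperate"):    (10, -10),
    ("cooperate", "defect"):    (-10, 10),
    ("cooperate", "cooperate"): (5, 5),
}

def best_reply(other_move):
    """My payoff-maximising move, given the other's fixed move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

# Defection is the best reply to either choice the colleague makes ...
assert best_reply("cooperate") == "defect"  # 10 beats 5
assert best_reply("defect") == "defect"     # 0 beats -10
# ... yet mutual defection (0 each) is worse than mutual cooperation (5 each).
```

This is the paradox in miniature: the move that is individually best against every possible choice of the other leads both players to the worse joint outcome.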
The prisoner’s dilemma seems to be meant to study short term determinations where actors do not have specific expectations about future interactions or collaborations. Our other layers and other values are not included. This is the normal situation during blind-variation-and-selective-retention evolution. Long term cooperations can only evolve after short term ones have been selected: evolution is cumulative, adding small improvements upon small improvements, without blindly making major jumps in the middle of nowhere. Perhaps cooperation can happen if we take into account that working together (“synergy”) usually only gets its full power after a long term process of mutual cooperation – a big heist is quite a time-consuming and complicated business to plan.
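The long-term point can be illustrated by repeating the game. The “tit for tat” strategy below – start by cooperating, then copy the partner’s previous move – is a standard example from the iterated-dilemma literature, not something from the post; the payoffs are again the post’s numbers:

```python
# Iterated version of the dilemma; payoffs from the post.
PAYOFFS = {
    ("defect", "defect"):       (0, 0),
    ("defect", "cooperate"):    (10, -10),
    ("cooperate", "defect"):    (-10, 10),
    ("cooperate", "cooperate"): (5, 5),
}

def play(strategy_a, strategy_b, rounds=20):
    """Play repeated rounds; each strategy sees the other's move history."""
    total_a = total_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

def tit_for_tat(other_history):
    # Cooperate first, then mirror the partner's last move.
    return other_history[-1] if other_history else "cooperate"

def always_defect(other_history):
    return "defect"

print(play(tit_for_tat, tit_for_tat))      # (100, 100): sustained cooperation
print(play(always_defect, always_defect))  # (0, 0): mutual betrayal, no gain
```

Note that in a single pairing the defector still comes out ahead of the reciprocator (it wins the first round and then faces retaliation), which matches the short-term temptation; the long-term “synergy” only shows up in the totals of a pair that keeps cooperating.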
And of course, not getting caught by the police – so that one never enters the dilemma in the first place – is the best way of aiming for, and succeeding at, gaining big spoils and riches! As for risk reduction, choosing more fully humane professionals to work with is also a smart move.