The evolution of inclusion/exclusion mechanisms in social dilemma situations [1]
Eiji
TAKAGI
(Saitama University, Japan)
This paper
examines how a selective inclusion mechanism emerges and solves the
'relationship type' social dilemmas.
I present a simple computer simulation model supposing that every agent
in turn calls for cooperators and tries to form a cooperative relationship. The simulation analyses demonstrated
the following. (1) The selective
inclusion strategy, which includes only cooperative agents into the
relationship, emerged through strategic evolution and established highly
cooperative relationships in the agents' society. (2) Within limits, agents' trust tended to increase rather
than decrease as the temptation to defect increased. High trust can be considered an adaptive response in an
environment with high defection temptation. (3) In order for the selective inclusion strategy to be successful,
it must direct the carrier agent not only to offer opportunities to
be a cooperator selectively, but also to accept offers selectively. (4) An agent should be provided with
high incentive to become an organizer, in order to establish full-fledged
cooperation. Finally, this paper
argues that 'societal type' social dilemmas can also be solved by
inclusion/exclusion mechanisms under specific conditions.
1. Introduction: Social dilemma in a
cooperative relationship
Solving
social dilemmas has been a central theme in the social dilemmas literature, and
much effort has been devoted to finding solutions that attain
cooperation and avoid the deficient equilibrium where most actors defect (e.g.,
Foddy, Smithson, Hogg, & Schneider, 1999). Among the possible solutions, this paper focuses attention
on a promising one, that is, the inclusion of cooperators and exclusion of
defectors.
It
seems obvious that a social dilemma will be solved if a society or a
relationship can select members on the basis of their cooperativeness,
including only cooperative actors and excluding non-cooperative actors. Such a selective inclusion/exclusion is
important, at least in the following two ways. First, some individuals will consistently act cooperatively
or uncooperatively depending on their motivational orientation (Kramer, McClintock,
& Messick, 1986; Liebrand & van Run, 1985). If we expect that many prospective members will consistently cooperate
or defect, it is evidently rational to accept only those who will
cooperate as friends, and to reject those who will defect as enemies. Secondly, the working of such
mechanisms will convert most of the possible defectors into cooperators,
assuming that joining a cooperative relationship will bring them higher payoffs
in the long run. The significance
of inclusion/exclusion mechanisms rests on their ability to make most actors
cooperative, rather than the ability to expel defectors.
Though
we can safely suppose that such a selective inclusion/exclusion mechanism will
work to enhance cooperation, whether such a mechanism can
come to exist, and how it comes to exist, remain to be settled. From the 'evolutionary perspective'
adopted here, the emergence of inclusion/exclusion mechanisms should be
explained in a bottom-up fashion, that is, as an equilibrium point where
actors choose the strategies which make selective inclusion/exclusion possible.
In
considering inclusion/exclusion, it seems meaningful to distinguish two types
of social dilemmas: the societal type and the relationship type. The effective mechanisms
to control the dilemma will vary depending on the type.
By
a 'societal type' dilemma, I mean a situation where all the members of a
society are more or less involved in a social dilemma. The members are already included in the
society, and externalities exist among them so that any member's action
influences other members. This is
a typical situation of the kind social dilemma researchers have been studying. The difficulty in solving this type of
dilemma lies in the fact that defectors as well as cooperators are already in
the society, and for some reason the defectors cannot simply be expelled. In order for cooperators to rid the society of
defectors, they must contrive some mechanism to detect and exclude them. Such a job is hard to accomplish, which is one reason why
solving this type of social dilemma is difficult.
The
'relationship type' of social dilemma is the situation where some members of a
society form a cooperative relationship containing an element of social
dilemma. Consider the instance
where some actors form a social exchange relationship. Such a cooperative relationship
constitutes a social dilemma, because each actor can potentially cheat. Yet, a cooperator may be able to select
the actors who are unlikely to be defectors as coworkers. Assuming that an actor can tell who
would be defectors, realizing a defector-free situation is relatively easy in
comparison with the societal type dilemma.
In this paper, I restrict my arguments to
solving relationship type social dilemmas. The purpose of this paper is to examine, through computer
simulation experiments, how a selective inclusion mechanism emerges in a society and solves the
'relationship type' social dilemmas there. I will demonstrate that selective
inclusion strategies, which discriminate actors on the basis of their
cooperativeness, can be predominant, and that these strategies establish a high
level of cooperation as well as high trust under specific conditions. In the final section, I will make an
argument about the societal type dilemmas and the possibility of the
society-level ostracism.
2. Selective inclusion and trust
We
can intuitively infer that an actor who tries to form a cooperative relationship
will follow a strategy that dictates including only trustworthy others in
the relationship. Then one might
well argue that, if most actors adopt such a strategy, only cooperative actors
would be included in the relationships and a high level of cooperation would be
easily established.
A
little more elaborate thinking reveals that the story is not so
simple. At the initial stage of
interaction, an actor will find a lot of unknown actors. Cautious actors will then be reluctant
to form cooperative relationships with others. If that is the case, attempts to form cooperative
relationships might be discouraged at the early stage of interaction, and
strategies seeking cooperation might not be selected in the process of
strategic evolution, because such strategies might disappear at an early
stage of the interaction process.
The
above argument leads us to the idea that 'trust' will play an important role in
the emergence of cooperation. In
accordance with Yamagishi (1998, 1999), I define trust as a default value of
subjective probability of cooperation, which the actor attributes to any other
unknown actor. An actor with high
trust is one who believes that any other actor will cooperate with high
probability. With this definition
of trust, it will be reasonably inferred that actors must be highly trusting in
order for cooperation to start in the society, and that the combination of high
trust and the selective inclusion strategy will constitute the social
mechanism to assure the emergence of cooperation.
However,
this inference does not necessarily mean that the selective inclusion strategy
coupled with high trust can evolve and can be dominant among the actors. From the evolutionary perspective, a
strategy will be selected by many actors only if this strategy brings its
carriers high profits. Highly
trusting actors are vulnerable to exploitation by defectors, so high trust
might go extinct in the process of strategic evolution.
For the purpose of examining whether the
selective inclusion mechanism evolves from the bottom up and produces
cooperation as argued above, I conducted simulation analyses. The simulation model is based on the 'Organizing
Cooperation Game.' Using this
simulation model, I will demonstrate below that the selective inclusion
strategy as well as high trust can evolve and establish cooperation in the
simulated society.
3. Simulation model
Organizing Cooperation Game
The
simulation model I use is based on the 'Organizing Cooperation Game,' which
presumes as follows.
A society is composed of one hundred
actors (agents). Each of the
agents runs its own business, e.g., managing its own farm. An agent can do its job alone, but if
it gets cooperation from others, the outcome will be larger. It is assumed that each agent in turn
has a chance to be an 'organizer,' who can make offers to others asking
them to be cooperative 'partners.'[2] To whom the organizer makes offers
depends on the organizer's 'inclusion strategy.' The agent who receives an offer from the organizer decides
to accept or reject it, depending on the agent's 'acceptance strategy.'
The cooperative relationship an organizer
can try to form is of N-person type.
The profit for a partner increases as the total number of actors
involved increases, while there is an optimal relationship size N*
for the organizer, because the revenue function is marginally decreasing and a
fixed cost per additional partner is incurred by the organizer. If the cooperative relationship is successfully
organized, its size will be close to the optimal size.
After joining a cooperative relationship,
agents, including the organizer, can choose to cooperate or defect. The probability of defecting is also
specified in the agent's strategy.
Since a 'defecting cooperator' gets additional profit at the expense of
the other participating agents, the cooperative relationships constitute
social dilemma situations.
An agent acts as its strategy dictates,
and the agents' strategies change according to a genetic-algorithm procedure
(crossover and mutation). Basically, agents' strategies shift toward those proven to
produce larger profits.
Payoff structure
Let Nt be a total
number of agents involved in a given cooperative relationship, including the
organizer. Then the profit for an
organizer Uo, and the profit for a partner Up
are expressed as follows.
Uo = a·Nt^(1/2) - b·Nt. [1]
Up = a·Nt^(1/2)/10. [2]
As
described, the basic revenue from a cooperative relationship (a·Nt^(1/2))
is an increasing function of Nt, but marginally
decreasing. Since the cost
(b·Nt) is incurred by the organizer, too large a relationship is
undesirable for the organizer. I
assign 3 to a, and 1/2 to b. The optimal relationship size for the organizer is then 9, assuming
that no one defects. Therefore, a 'rational'
organizer may try to get 8 cooperators.
If
an agent defects in a relationship, each of the other agents involved, of course
including the organizer, suffers a damage of a·Nt^(1/2)/5. If an agent has faced Nd
defectors, this agent's loss amounts to Nd·a·Nt^(1/2)/5. Since a·Nt^(1/2)/5 > Up = a·Nt^(1/2)/10,
an agent will lose its assets if even one agent involved defects. A defecting agent gets a profit of
(Nt - 1)·w·a·Nt^(1/2)/10.[3]
The coefficient w is the defection-incentive coefficient: the larger the
coefficient, the larger the incentive to defect for every agent.
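To make the payoff structure concrete, here is a minimal Python sketch of equations [1] and [2] and the defection payoffs, with a = 3 and b = 1/2 as in the text. The function names are mine, not part of the paper's implementation.

```python
import math

A, B = 3.0, 0.5  # the paper's coefficients a = 3, b = 1/2

def organizer_profit(nt):
    """Uo = a*Nt^(1/2) - b*Nt, equation [1]."""
    return A * math.sqrt(nt) - B * nt

def partner_profit(nt):
    """Up = a*Nt^(1/2)/10, equation [2]."""
    return A * math.sqrt(nt) / 10.0

def damage_per_defector(nt):
    """Loss each other agent suffers per defector: a*Nt^(1/2)/5."""
    return A * math.sqrt(nt) / 5.0

def defector_gain(nt, w):
    """Gross profit of a defecting agent: (Nt-1)*w*a*Nt^(1/2)/10."""
    return (nt - 1) * w * A * math.sqrt(nt) / 10.0

# With these coefficients the organizer's profit peaks at Nt = 9,
# matching the optimal relationship size stated in the text.
assert max(range(1, 16), key=organizer_profit) == 9
# One defector's damage always exceeds a cooperator's revenue, so a
# single defection is enough to push a cooperator into the red.
assert damage_per_defector(9) > partner_profit(9)
```

The two asserts reproduce the paper's claims that the optimal size is 9 and that one defector suffices to make a cooperating partner lose assets.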
As
described so far, the payoff structure embedded in such a cooperative
relationship conforms to that of a social dilemma. A characteristic feature of this game is the asymmetry of
payoffs, that is, Uo > Up, assuming that Nt
does not exceed the optimal relationship size by too much. It should be noted, however, that an
agent may earn more as a cooperative partner than as an
organizer. During a trial-period
of this game, an agent becomes an organizer only once, while it can usually be
a partner many times, maximally 99 times.[4] Of course, such asymmetry of the payoff
structure might influence the simulation results. Therefore, as described below, I manipulate the
degree of asymmetry to examine the effects of the asymmetric nature of the payoff
structure.
The
payoff structure becomes more complicated if 'conspiracy' is introduced. With the current model, a simulation
run can be conducted in the conspiracy condition. In this condition, an agent can be either with or without
conspiracy, depending on its strategy.
It is assumed that 20 randomly selected agents are exposed to the
information that a given agent will be the next organizer. An informed agent with conspiracy makes
a conspiracy proposal to the prospective organizer if this agent decides to
defect with the defection probability defined in its strategy. If the prospective organizer is without
conspiracy, it simply ignores this proposal. If the prospective organizer is also with conspiracy, it
decides to conspire with the proposing agent, and the conspiracy results. In case of conspiracy, the organizer
includes the proposing agents in the cooperative relationship, knowing that they
will definitely defect. In
exchange for such inclusion, the proposing agent must pay 1.5·w·a·Nt^(1/2)/5
to the organizer.
Conspiracy is a mechanism to include an agent who could not be
included if inclusion is selective.
Introducing conspiracy is expected to increase the tendency to defect among
the agents.
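The conspiracy side-payment can be compared directly with the defector's gross gain given earlier. A small sketch, with a = 3 as in the payoff section; the function names are mine.

```python
import math

A = 3.0  # revenue coefficient a, as in the payoff section

def conspiracy_payment(nt, w):
    """Side-payment from a conspiring defector to the organizer:
    1.5*w*a*Nt^(1/2)/5."""
    return 1.5 * w * A * math.sqrt(nt) / 5.0

def gross_defection_gain(nt, w):
    """A defector's gross gain, (Nt-1)*w*a*Nt^(1/2)/10."""
    return (nt - 1) * w * A * math.sqrt(nt) / 10.0

# The gain exceeds the side-payment exactly when (Nt-1)/10 > 1.5/5,
# i.e. when Nt > 4, so (ignoring other payoffs) conspiracy can pay
# off for the defector in any relationship larger than four agents.
assert gross_defection_gain(5, 1.0) > conspiracy_payment(5, 1.0)
assert gross_defection_gain(4, 1.0) <= conspiracy_payment(4, 1.0)
```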
An
agent's strategy is represented by an array of nineteen bits. A strategy is composed of an 'inclusion
strategy,' an 'acceptance strategy,' a 'defect strategy,' a 'conspiracy strategy,' and
'trust.'
Inclusion
strategy (7 bits) applies when the carrier agent is an organizer. It determines how cooperative
partners are selected, as follows. First,
it decides how many agents should be partner candidates, designating any integer from 0
to 15. Secondly, it decides
whether the agent chooses candidates randomly, or selectively on the
basis of their cooperation-defection history. Third, it decides the criterion for the defection rate. When an agent chooses candidates
selectively, it invites the agents whose past defection rates are under
this criterion. If the agent finds
more such agents than it can make offers to, it selects candidates among them randomly.
Acceptance
strategy (5 bits) applies when the agent receives an offer from an
organizer. First, it specifies
whether the agent decides to accept randomly or selectively. Second, it specifies the probability of
accepting an offer in the case of random acceptance. Third, it chooses the criterion of 'relationship risk
tolerance.' It is assumed that an
agent knows the organizer and the partner candidates, and that the agent
calculates the probability that at least one agent among them defects, based on
their past defection rates. If the
calculated probability is less than the chosen criterion value, the agent
accepts the offer from the organizer.
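The relationship-risk calculation just described amounts to computing the probability that at least one known participant defects, assuming independent defections at the observed past rates. A short sketch (function names are mine):

```python
def relationship_risk(defection_rates):
    """Probability that at least one participant defects, assuming
    independent defections at the observed past rates."""
    p_all_cooperate = 1.0
    for d in defection_rates:
        p_all_cooperate *= 1.0 - d
    return 1.0 - p_all_cooperate

def accepts_offer(defection_rates, risk_tolerance):
    # Accept only if the estimated risk is below the strategy's
    # 'relationship risk tolerance' criterion.
    return relationship_risk(defection_rates) < risk_tolerance
```

For example, with two other participants whose past defection rates are 0.1 and 0.2, the risk is 1 - 0.9*0.8 = 0.28, so the offer is accepted only by agents whose tolerance criterion exceeds 0.28.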
Defect
strategy (3 bits) is a simple probability that the carrier agent defects in
the proposed cooperative relationship.
Conspiracy
strategy (2 bits) determines whether the agent uses the conspiracy
opportunity. First, it specifies
whether the agent makes a conspiracy offer to the organizer when it is not an
organizer and has just been exposed to the information of who the
prospective organizer will be. Secondly,
it determines whether or not the agent accepts a conspiracy offer when the
agent itself is the organizer.
Trust
(2 bits) is a default value of the subjective probability of cooperation, which the
agent attributes to any other agent whose past defection rate is not
available. When an agent's trust
is high, the agent assumes that a stranger will cooperate with a high
probability. An agent uses trust
not only when it selects cooperators as an organizer, but also when
it estimates the relationship risk.
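The nineteen-bit strategy array above (7 inclusion + 5 acceptance + 3 defect + 2 conspiracy + 2 trust bits) could be decoded as in the following illustrative sketch. The paper does not spell out the bit layout within each component, so the field widths below are my own assumptions.

```python
def bits_to_int(bits):
    value = 0
    for b in bits:
        value = 2 * value + b
    return value

def decode_strategy(bits):
    assert len(bits) == 19
    return {
        # Inclusion (7 bits): 4-bit candidate count (0-15), a
        # selective/random flag, and an assumed 2-bit defection-rate
        # criterion mapped onto [0, 1].
        "n_candidates": bits_to_int(bits[0:4]),
        "selective_offer": bool(bits[4]),
        "offer_criterion": bits_to_int(bits[5:7]) / 3,
        # Acceptance (5 bits): selective/random flag, assumed 2-bit
        # acceptance probability, assumed 2-bit risk tolerance.
        "selective_accept": bool(bits[7]),
        "p_accept": bits_to_int(bits[8:10]) / 3,
        "risk_tolerance": bits_to_int(bits[10:12]) / 3,
        # Defect (3 bits): defection probability on eight levels in [0, 1].
        "p_defect": bits_to_int(bits[12:15]) / 7,
        # Conspiracy (2 bits): propose flag and accept flag.
        "proposes_conspiracy": bool(bits[15]),
        "accepts_conspiracy": bool(bits[16]),
        # Trust (2 bits): default cooperation probability for strangers.
        "trust": bits_to_int(bits[17:19]) / 3,
    }
```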
Agent's memory
How
an agent estimates the probability that other agents cooperate plays an
important role in the simulation model described here. The model assumes that an agent (A) uses
the observed cooperation rate of another agent (B) as A's subjective
probability that B cooperates. It
is further assumed that an agent observes only the relationships it has
actually participated in, retains the actual instances of cooperation/defection it
has observed, and calculates the other agents' cooperation rates on the basis of
its memory. The model does not
assume the formation and working of reputation.[5]
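The memory assumption above can be sketched as follows: agent A estimates B's cooperation probability only from instances A has directly observed, falling back on A's trust value when it has never observed B. There is no reputation, so observations stay private. Class and method names are mine.

```python
class AgentMemory:
    def __init__(self, trust):
        self.trust = trust       # default for agents never observed
        self.observed = {}       # agent id -> (cooperations, total)

    def record(self, other_id, cooperated):
        # Retain an observed instance of cooperation/defection.
        coop, total = self.observed.get(other_id, (0, 0))
        self.observed[other_id] = (coop + int(cooperated), total + 1)

    def p_cooperate(self, other_id):
        # Trust supplies the default when no observation exists.
        if other_id not in self.observed:
            return self.trust
        coop, total = self.observed[other_id]
        return coop / total
```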
A
simulation run is composed of a series of generations, and at the end of
each generation strategic change (crossover[6]
and mutation[7]) is
introduced. As generations proceed,
poor strategies providing their carriers with low profits tend to be
replaced by superior ones. I
repeated 200 generations for each run.
In the first generation, agents' strategies are completely
randomized.
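In the spirit of the crossover-and-mutation procedure just described, one generational update might look like the following sketch. The profit-proportional selection scheme and the mutation rate are my assumptions; the paper specifies only that crossover and mutation are applied and that more profitable strategies spread.

```python
import random

def next_generation(strategies, profits, mutation_rate=0.01):
    """Strategies with higher generation profits are more likely to be
    copied; one-point crossover and bit-flip mutation then apply."""
    n = len(strategies)
    floor = min(profits)
    weights = [p - floor + 1e-9 for p in profits]  # shift to non-negative
    children = []
    for _ in range(n):
        mother, father = random.choices(strategies, weights=weights, k=2)
        cut = random.randrange(1, len(mother))     # one-point crossover
        child = mother[:cut] + father[cut:]
        child = [b ^ 1 if random.random() < mutation_rate else b
                 for b in child]                   # bit-flip mutation
        children.append(child)
    return children
```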
A generation consists of 200 trial-periods. In each trial-period, each agent in turn has one opportunity
to be an organizer. The order of
becoming an organizer was randomized in each trial-period. Thus, maximally 20,000 cooperative relationships can be
formed during a generation. An
agent's 'trial profit' is the sum of the profits it gained during a trial-period.
The 'generation profit' of an agent is defined as the sum of its trial
profits throughout the generation.
The
simulation was conducted under a 2 x 4 factorial design. The first factor (2 levels) is whether
or not an agent has a chance to use the conspiracy strategy. In the No-conspiracy condition, no
agent has a chance of conspiracy.
The second factor (4 levels) is the defection incentive size. This factor is manipulated by the
defection-incentive coefficient w.
The defection incentive can be small (w=0.1), middle (w=2/3),
large (w=1.0), or very large (w=5/3). I repeated 10 simulation runs for each of the eight
conditions. Since the cooperation
levels in the simulated societies stabilized at about the 50th
generation, each simulation run was terminated when the 200th
generation was over.
4. Simulation Results
The
simulation results demonstrated that the evolution of selective inclusion
strategies, cooperation, and trust took place in all the conditions. Figure 1 and Figure 2
show an example of the process of a simulation run (the first 100 generations
of the first run in the No conspiracy - Large defection incentive condition). Agents tended to adopt the selective
offer - selective acceptance strategy, rather than the selective offer - random
acceptance, random offer - selective acceptance, or random offer - random
acceptance strategy (Figure 1).
Together with this strategic change, we can see the increase in agents'
cooperation rate and trust, followed by an increase in the mean cooperation
size (the size of cooperative relationships, Figure 2). Both the
cooperation rate and mean trust approached the upper limit (1.0), and the
mean cooperation size also tended to approach the optimal size (Nt=9). It should be noted that Figures 1 and
2 show an example from a condition likely to produce cooperation. In some other conditions, these indices
are a little lower.
Cooperation
rate, mean trust, and mean cooperation size were analyzed by a 2 (conspiracy) x 4
(defection incentive) design. The
dependent measures were obtained by averaging cooperation rate, mean trust, and
mean cooperative relationship size over the last generation block (the
last 40 generations).
Regarding
the cooperation rate, only the main effect of the defection incentive factor was
significant (F(3,72)=12.03,
p< .001). As seen
in Figure 3, the cooperation rate becomes lower as the incentive becomes
larger. However, a significant
difference was found only between the Very Large incentive condition and the
other three conditions (SNK test, p<.05).
The
analysis of mean cooperation size revealed two main effects. First, it was influenced by the
incentive factor (F(3,72)=5.88, p< .001, Figure 4). In accordance with the cooperation rate result,
cooperation size was significantly smaller in the Very Large defection
incentive condition than in the other conditions. In the Small, Middle, and Large conditions, mean cooperation
size was more than 8, very close to the optimal size of 9. Secondly, mean cooperation size was
larger in the No conspiracy condition than in the Conspiracy condition (F(1,72)=4.89,
p< .05). Therefore, it can
be said that the two factors of temptation to defect, defection incentive and
conspiracy, tended to lower the size of cooperative relationships.
The
analysis of mean trust revealed the main effect of the incentive (F(3,72)=25.50,
p< .001), the main effect of conspiracy (F(1,72)=68.23, p< .001),
and the interaction effect of the two factors (F(3,72)=10.22, p< .001). As Figure 3 shows, the effect of
incentive size is curvilinear.
Interestingly, in the Small, Middle, and Large incentive conditions,
which do not differ from each other in cooperation rate or cooperation size,
trust increases as the defection incentive increases. Similarly, mean trust is higher when
conspiracy is available. The
significant interaction effect means that the difference between the Conspiracy
condition and the No conspiracy condition disappears as the defection incentive
increases. These results imply
that, except in the Very Large incentive condition, where establishing
cooperation was very difficult, a higher temptation to defect increased rather
than decreased trust.
The
results on mean trust are in accordance with those on the distribution of
strategies. The proportion of the
selective inclusion strategy (selective offer - selective acceptance) showed the
same pattern as mean trust. Also
found were the main effect of defection incentive (F(3,72)=13.26, p< .001),
the main effect of conspiracy (F(1,72)=21.18, p< .001), and the
interaction effect (F(3,72)=6.25, p< .001). Again, the dependent measure increases as the incentive
increases except in the Very Large condition, is higher in the Conspiracy
condition, and the difference between the Conspiracy and No conspiracy
conditions disappears in the Very Large condition. It can thus be inferred that trust went hand in hand with
the selective inclusion strategy.
5. Discussion
The
simulation results showed the following two points.
First,
under the assumptions posited here, the selective inclusion mechanism can
emerge from the agents' strategic interaction processes. That is, the agents will choose the
selective inclusion strategy, which works as a selective inclusion mechanism
in a society, and tend to hold high trust. Such a strategy and high trust will establish a high level
of cooperation as long as the agents can select strategies freely. I infer that such strategic evolution
took place in past human societies, so that human beings can form
cooperative relationships even though these relationships contain an
element of social dilemma.
Secondly,
it can also be inferred that high trust is an adaptive response to the
possibility of defection. Rather
paradoxically, the simulation results showed that the temptation to defect
increased rather than decreased trust, as long as the defection incentive was
not very high. In the Large incentive condition, for example, agents trusted
others on average more than in the Small incentive condition, though the
actual cooperation rates were almost the same. The most plausible reason is that agents in a non-cooperative
environment responded by adopting the selective inclusion mechanism coupled
with high trust. Therefore, it can
be predicted that in a society where there is almost no temptation to defect,
because authoritative agents or other social devices maintain
peace and order, people's trust will be lowered (Yamagishi, 1998, 1999).
In
the simulation described above, since the selective inclusion strategy, which
has the two components 'selective offer' and 'selective acceptance,' became dominant, this
strategy must have a survival value higher than the other three strategies. Nevertheless, some might argue that the
other strategies would do the same job, though less effectively. For example, suppose that agents have
the strategy of 'selective offer' and 'random acceptance.' In such an instance, organizers will
discriminate against less cooperative agents, and the agents with
uncooperative strategies might die out.
One might then insist that under the 'selective offer' and 'random
acceptance' environment, a high degree of cooperation would result. In order to clarify this, I conducted
the next simulation.
This
simulation was conducted under a 2 (offer factor) x 2 (acceptance factor)
design. In the Selective Offer
condition, an agent's strategy can specify selective offering as well as random
offering, as in the simulation described above. In the Random Offer condition, a strategy can only specify random
offering. In the Selective Acceptance
condition, an agent's strategy can specify selective or random
acceptance. In the Random Acceptance
condition, a strategy always specifies random acceptance. I assumed that the defection incentive was Large and that there was no
conspiracy. In each condition, I
repeated 10 runs.
The
results clearly demonstrated that both the 'selective offer' component and the 'selective
acceptance' component were needed in order for cooperation to evolve. It was only in the Selective Offer -
Selective Acceptance condition that high levels of cooperation rate,
cooperation size, and trust were observed (Figure 5).
So
far, I have described simulations whose payoff structures were asymmetric: the profit for an organizer was higher
than that for a cooperative partner.
One might argue that this asymmetric nature was responsible for some of the
simulation results. As I noted
before, an agent will earn more as a partner than as an organizer, because it
will have more opportunities to be a partner than to be an
organizer. Nevertheless, this asymmetry
might cause some effects, so I conducted the next simulation varying the
profit size of an organizer, in order to examine the effects of asymmetry.
I
defined an organizer's profit as (Uo/10)·w, where Uo
is given in [1].[8] An integer value in [1, 10] was
assigned to w. The
simulation was run in the No conspiracy - Large defection incentive
condition. I conducted 10
simulation runs for each of the 10 conditions.
The
analysis of this simulation revealed that the cooperation indices increased and
reached an asymptotic value as the value of w increased. Mean cooperation size increased as w
increased, but remained the same once w exceeded 6 (SNK,
p<.05, Figure 6).
When w is 1, the mean size of cooperative relationships is less than
optimal, but it can be said that a certain level of cooperation is still
maintained. The cooperation rate
reached its asymptotic value when w was 4. Mean trust and the frequency of the selective inclusion strategy
became unchanged when w was 3.
These
results imply the following.
First, the asymmetric nature of the payoff structure is not necessary for a
certain level of cooperation to emerge.
Secondly, however, providing an organizer with a sufficient incentive is
a necessary condition for full-fledged cooperation to be established in a
society.
The
implications of the simulation results described above can be summarized as
follows.
(1) In a situation where people want cooperative relationships but the temptation to defect exists, the selective inclusion strategy will emerge to resolve the social dilemma embedded in such relationships.
(2) High trust coupled with the selective inclusion strategy will also evolve in a society, in response to the temptation to defect.
(3) In order for the selective inclusion strategy to be successful, it must have both the 'selective offer' component and the 'selective acceptance' component.
(4) A high incentive for an organizer of cooperation must be provided in order for a cooperative relationship to be fully established.
6. The evolution of societal level ostracism
The most important implication of the simulation results above is that the selective inclusion mechanism can emerge in response to the social dilemmas embedded in cooperative relationships. The effectiveness of this mechanism comes from the simple fact that, in forming a cooperative relationship, actors can select partners on the basis of their cooperativeness. Such a mechanism therefore cannot be applied to the societal type of social dilemma, since defecting actors are already in the society. Other inclusion/exclusion mechanisms must be installed in order to resolve societal social dilemmas.
My idea on this issue is that generalized exchange will emerge in a society and the resultant 'generalized exchange club' will work as an effective inclusion/exclusion mechanism to resolve a societal type dilemma. Let us look at what generalized exchange is, and how it works as a social control mechanism.
I once proposed the generalized exchange perspective (Takagi, 1996, 1999). From this perspective, altruism, which can be observed in a human society, can be seen as generalized exchange. Generalized exchange is defined as a social situation where any party gives his/her own resource to other parties without expecting direct return. An actor who is involved in generalized exchange does favors for others, and receives help from someone later. There is no definite connection between giving and receiving.
I conducted computer simulation analyses to see if generalized exchange as altruism can evolve among artificial agents, most of whom hold an egoist strategy (Takagi, 1996). The results showed that an altruistic strategy can evolve in an egoist-dominant environment. Moreover, it was found that the strategy which evolved and robustly established altruism was highly exclusive in nature. This strategy directs its carrier to give its resource only to those who are nice only to altruists. It views not only non-altruists but also altruists who do favors for non-altruists as 'enemies,' and discriminates against them. Since this strategy is highly in-group oriented, I call it the 'in-group altruist strategy.' It can be said that this strategy makes generalized exchange a club good. The rule of this generalized exchange club prescribes that every member must be nice to other members, and that violators, or those who support a violator, will lose membership. More tolerant strategies, including the 'conditional altruist strategy,' which stipulates giving resources only to altruists, were found not strong enough to beat egoism.
Though the generalized exchange perspective explains altruism as generalized exchange, its scope is not restricted to the emergence of altruism. Since 'communal societies' are characterized by altruism within them, and since stable altruism or generalized exchange will be supported by the highly in-group oriented strategy, this perspective predicts that a communal society as a generalized exchange club will come to have characteristic emergent properties. One of these emergent properties is the ability to solve social dilemmas (Takagi, 1999).
I predicted that a communal society would come to be able to solve social dilemmas in the following way. Consider a small-scale society where members interact more or less directly with each other, and assume that this society is confronted with a social dilemma, e.g., a public good problem. Members will be better off if the public good is amply provided. Then it is no surprise that some members would come to think that cooperation in this situation should be treated as a requirement for being a member of this communal society and getting the fruits of generalized exchange. These members will advocate a strategy linking cooperation with generalized exchange. As the number of members holding such a 'linkage strategy' increases, members in general will cease to be defectors as regards the social dilemma, since they must forgo the temptation to defect in order to have the fruits of generalized exchange. In this way, generalized exchange will 'pull up' the cooperation level in a dilemma situation.
A computer simulation analysis revealed that this line of reasoning is logically consistent. In the simulation model, each agent plays two games, a Generalized Exchange Game and a Public Goods Game. An agent's payoff is the sum of the payoffs in both games. In the condition where an agent can select a strategy linking the two games, a linkage strategy tended to be selected, and both generalized exchange and the public good were amply provided. What was selected was the strategy which directs the carrier agent to support only the supporters of the 'public good supporting altruists.'
These simulation results have much relevance to the current topic. In a small-scale society, generalized exchange will spontaneously emerge from strategic interaction, and a generalized exchange club will be formed. Such a club will provide its members with benefits but will reject outsiders. Then, as long as generalized exchange provides ample benefits, societal members will tend to be cooperators in the dilemma game, in order to be included in the club or to avoid exclusion from it. It can be said that generalized exchange works as a system of social ostracism that maintains the cooperation level in the society on behalf of the society itself.
It should be noted that generalized exchange will be dominant only in a small-scale society. In larger societies, other social mechanisms, such as a legal system, will have to do the job of selective inclusion or exclusion. How such a system emerges is a topic outside the scope of this paper.
References
Foddy, M., Smithson, M., Hogg, M., & Schneider, S. (Eds.) (1999). Resolving Social Dilemmas. NY: Psychology Press.
Kramer, R. M., McClintock, C. G., & Messick, D. M. (1986). Social values and cooperative response to a simulated resource conservation crisis. Journal of Personality, 54, 576-592.
Liebrand, W. B. G., & van Run, G. J. (1985). The effects of social motives on behavior in social dilemmas. Journal of Experimental Social Psychology, 21, 86-102.
Takagi, E. (1996). The generalized exchange perspective on the evolution of altruism. In W. B. G. Liebrand & D. M. Messick (Eds.), Frontiers in Social Dilemmas Research (pp. 311-336). Berlin: Springer-Verlag.
Takagi, E. (1999). Solving social dilemmas is easy in a communal society. In M. Foddy, M. Smithson, M. Hogg, & S. Schneider (Eds.), Resolving Social Dilemmas (pp. 33-54). NY: Psychology Press.
Yamagishi, T. (1998). Structure of Trust (in Japanese). Tokyo: University of Tokyo Press.
Yamagishi, T. (1999). From the Security Society to the Trust Society (in Japanese). Tokyo: Cyuou-koronsya.
[1] This paper is to be presented at the Social Dilemma session in the XV ISA World Congress of Sociology, held in Brisbane, Australia, July 7-13, 2002. Correspondence should be sent to Eiji TAKAGI, Faculty of Liberal Arts, Saitama University, Shimo-okubo 255, Saitama City, Saitama Prefecture 338-8570, Japan. E-mail: NCC00521@nifty.ne.jp.
[2] This game postulates that only one agent (the organizer), rather than all the concerned agents, takes the initiative in forming a cooperative relationship. I adopt this postulate because it nicely simplifies the relationship-formation process. If it were assumed instead that many agents have an equal say regarding relationship formation and who should be included, we would have to postulate a complicated multilateral negotiation process, whose validity would be easily challenged.
[3] I assume that, in the case where the only agent involved in the relationship is the organizer itself, the organizer simply 'cooperates.'
[4] If we assume that no agent defects and that every cooperative relationship is optimal-sized (9), then, given the parameter values described, an agent obtains a profit of 4.5 as an organizer. On the other hand, an agent can be a partner, on average, 8 times, gaining a profit of 0.9 on each occasion. Therefore, an agent will obtain a total profit of 7.2, more than 4.5, as a partner of the other organizers.
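The comparison in note [4] can be checked directly; the figures below simply restate the footnote's numbers under its stated assumptions (optimal relationship size 9, no defection):

```python
# Restating note [4]'s arithmetic: under the footnote's assumptions,
# an organizer earns 4.5 per relationship, while an agent serves as a
# partner 8 times on average, earning 0.9 on each occasion.
organizer_profit = 4.5
partner_occasions = 8
profit_per_occasion = 0.9

partner_total = partner_occasions * profit_per_occasion  # 7.2
print(partner_total > organizer_profit)  # True: partnering out-earns organizing
```

This is the asymmetry behind result (4) in the abstract: without an extra incentive to organize, agents prefer the partner role.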
[5] The establishment of reputation is itself a complex process, and the introduction of reputation would require additional assumptions.
[6] At the end of a generation, agents are reordered by the size of their 'generation profit.' The superior 10 agents and the inferior 10 agents are then identified. The strategy of each inferior agent is replaced by a new strategy produced by a crossover procedure: the new strategy is an offspring whose parents are the strategies of two agents belonging to the superior group. A superior agent is selected as a parent with probability proportional to its 'generation profit.'
[7] At the end of a generation, after the crossover processing is complete, each strategy dimension value of each agent is flipped to the other value (e.g., from 1 to 0) with probability .015.
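The evolutionary step described in notes [6] and [7] can be sketched as follows. The group sizes and the mutation rate follow the notes; the encoding of a strategy as a list of 0/1 dimension values, the one-point crossover, and the assumption of non-negative generation profits are illustrative choices of this sketch, not details from the paper:

```python
import random

def evolve(population, profits, n_group=10, mutation_rate=0.015):
    """One generation step in the spirit of notes [6] and [7]: the 10
    worst strategies are replaced by crossover offspring of the 10 best
    (parents drawn in proportion to generation profit, assumed
    non-negative here), then every strategy dimension mutates
    independently with probability .015."""
    order = sorted(range(len(population)), key=lambda i: profits[i])
    inferior, superior = order[:n_group], order[-n_group:]
    weights = [profits[i] for i in superior]

    for i in inferior:
        # two superior parents, selected fitness-proportionally
        p1, p2 = random.choices(superior, weights=weights, k=2)
        # one-point crossover between the two parent strategies
        cut = random.randrange(1, len(population[p1]))
        population[i] = population[p1][:cut] + population[p2][cut:]

    # mutation: flip each strategy dimension with small probability
    for strategy in population:
        for d in range(len(strategy)):
            if random.random() < mutation_rate:
                strategy[d] = 1 - strategy[d]
    return population
```

A usage example: with 30 agents each carrying an 8-dimensional binary strategy, one call to `evolve` returns a population of the same shape in which the worst 10 strategies have been overwritten by offspring of the best 10.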
[8] I defined an organizer's profit in this way because this definition keeps the optimal cooperation size unchanged regardless of the value of w.