
The challenge of understanding decisions in experimental studies of common pool resource governance

John M. Anderies a,b,c,*, Marco A. Janssen a,b, François Bousquet d, Juan-Camilo Cardenas e, Daniel Castillo f, Maria-Claudia Lopez f, Robert Tobias g, Björn Vollan h, Amber Wutich a,b

a School of Human Evolution and Social Change, Arizona State University, Tempe, USA
b Center for the Study of Institutional Diversity, Arizona State University, Tempe, USA
c School of Sustainability, Arizona State University, Tempe, USA
d CIRAD-GREEN, Montpellier, France
e Facultad de Economia, Universidad de los Andes, Bogota, Colombia
f Facultad de Estudios Ambientales y Rurales, Universidad Javeriana, Bogota, Colombia
g System Analysis, Integrated Assessment and Modelling Department, Swiss Federal Institute of Aquatic Science and Technology, Dübendorf, Switzerland
h Department of Economics, University of Mannheim, Mannheim, Germany

Article history: Received 30 December 2009; received in revised form 13 January 2011; accepted 13 January 2011; available online 23 February 2011.

Keywords: Common pool resources; Collective action; Experimental economics; Methodology; Context

Abstract

Common pool resource experiments in the laboratory and the field have provided insights that contrast with those derived from conventional non-cooperative game theory. Contrary to predictions from non-cooperative game theory, participants are sometimes willing to voluntarily restrain from overextracting resources and to use costly punishment to sanction other participants. Something as simple as face-to-face communication has been shown to increase average earnings significantly. In the next generation of experiments, both in the laboratory and in the field, we need to extract more information that provides insight concerning why people make the decisions they make.
More information is needed concerning attributes of individuals, as well as the social and social–ecological context in which they interact, that may give rise to such deviations from theoretical predictions. In the process of extracting more information from participants and the contexts in which they interact, we face several methodological and ethical challenges, which we address in this paper.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Collective action problems facing groups who jointly harvest from a common-pool resource, such as a fishing ground, pasture, forest, or water system, are difficult to solve. In such common-pool resource dilemmas, the incentives are such that each individual would be better off if everyone else cooperated while they "free ride" and obtain benefits from the resource without any sacrifice. The outcome predicted by non-cooperative game theory for such a situation is a Nash equilibrium in which no one cooperates. Hardin's (1968) influential article on the tragedy of the commons was widely accepted due both to its consistency with game-theoretic predictions and to well-known incidents of overharvesting of fishery and forest resources.[1]

Perhaps because of the high profile of whaling and the collapse of some large fisheries in the 1970s (e.g., the Peruvian anchovy fishery in 1972), the problem of commons dilemmas has been most extensively studied in the context of fisheries. Much of the early work was theoretical and relied on the simple Gordon–Schaefer fishery model (Gordon, 1954). This work has since been extended in many directions but has often focused, in theory, on the nature of institutional responses to the problem in terms of some sort of tax or definition of property rights (e.g., Smith, 1969; Clark, 1973, 1990; Clark et al., 1979).
Although it generated many important general insights, it is not surprising that management efforts based on this work were often unsuccessful, as the underlying models were stick-figures of real situations (Clark, 2006).

During the last 30 years, extensive field studies have uncovered many counterexamples of long-lasting social–ecological systems in which resource users have developed institutional arrangements without the external imposition of private or state ownership or the use of taxes suggested by these simple models (e.g., NRC, 1986; Ostrom et al., 2002; Dietz et al., 2003). Many variables in the field potentially affect when and how resource users themselves overcome strong incentives to act in their short-term material interests and ignore the long-term benefits that they and others would obtain from cooperation. A robust finding across many studies, for example, is the importance of users monitoring one another (Gibson et al., 2005; Hayes, 2006; Ostrom and Nagendra, 2006; Coleman, 2009). Further, the mounting intensity with which humans are impacting many common-pool resources highlights the need to put some flesh on the stick-figure models that have long dominated resource management policy and to reconcile the conflict between the simple theory and the complexity of empirical examples.

[1] Hardin's work has been influential in recent decades, but various scholars addressed the commons dilemma before him, such as Malthus (1798), and the study of the commons relates to the broader problem of externalities (Pigou, 1920; Coase, 1960).

Ecological Economics 70 (2011) 1571–1579. doi:10.1016/j.ecolecon.2011.01.011
* Corresponding author at: School of Human Evolution and Social Change, Arizona State University, Tempe, USA. Tel.: +1 480 965 6518. E-mail address: (J.M. Anderies).
This has led to the development and use of new methods to study the commons, including a broad range of experimental techniques (Poteete et al., 2010).

One specific area of experimental enquiry concerns understanding what individuals actually do in common-pool resource management situations: how they process information, how they make decisions, and how sensitive these activities are to subtle changes in context or in incentive structures. Experimental study of the commons started in social psychology (e.g., Stern, 1976; Dawes et al., 1977) and in recent years has been a topic of investigation in behavioral economics (e.g., Ostrom et al., 1994). However, in order to meet rigorous requirements for experimental validity, these experiments typically have been extremely simple, designed to test very simple models of human behavior based on rational choice theory. After hundreds of economics experiments, some are beginning to call into question the value of such narrowly defined experiments that rely heavily on the presumption that participants think like economists in such experimental contexts (Smith, 2010). There is mixed evidence for the external validity of experimental studies: behavior in the experiments sometimes does not match observed behavior outside the experiment (Gurven and Winking, 2008), and sometimes it does (Rustagi et al., 2010).

Much of the work presented in this issue has to do with the importance of micro-situational variables and the broader context, based on the theoretical framework proposed by Poteete et al. (2010). Due to space limitations, here we provide only a brief description of the framework; for a more comprehensive discussion we refer the reader to chapter 9 in Poteete et al. (2010). At the broadest level, the framework includes learning and norm-adopting (in contrast to selfishly rational) individuals. The decisions made by these individuals are affected by micro-situational variables and the broader context (Fig. 1).
Examples of micro-situational variables include group size, communication, heterogeneity among participants, reputation, and time horizons. For example, we know that an increase in group size typically makes collective action more difficult, while the possibility of communication increases the potential for successful collective action. Knowledge is lacking regarding how these variables interact in different contexts, which is why experiments may provide the proper tool to test the impact of these micro-situational variables. Examples of broader context are policies at higher levels of organization, resource dynamics, and geography. Because of differences in broader contextual variables, field experiments are needed to test whether people in different contexts differ in the decisions they make.

To further the development of a broader framework for collective action and the commons, we need to collect diverse sets of information in addition to the decisions made and basic demographic information. This article pulls together what we have learned through a series of experiments and the new challenges they have highlighted as we move into the future. Addressing the challenges of how to derive additional information, such as social structure, mental models, beliefs, and trust relationships, is the focus of the following sections.

In what follows, we first discuss the different types of experiments that have been conducted as part of the work presented in this special issue and relate them to past experimental work. This is followed by a discussion of how to extract information concerning the individual behavior of participants and the decision-making context. Given the increasing use of these tools in the field, we also address the ethical issues involved. We conclude by synthesizing the insights derived from past work and the challenges that lie ahead.
2. Types of Experiments

In this special issue, various articles present results of studies that use a variety of experimental methods, from experiments with undergraduate students in a laboratory to experiments with rural villagers in Namibia, South Africa, Thailand, and Colombia. Each type of experiment has its own challenges, strengths, and weaknesses. Harrison and List (2004) present a taxonomy of experiments by distinguishing six factors: the nature of the participant pool, the nature of the information that the participants bring to the task, the nature of the commodity, the nature of the task or trading rules applied, the nature of the stakes, and the nature of the environment in which the participant operates (Harrison and List, 2004, p. 1012). They define a conventional laboratory experiment as one that uses students as the participant pool and employs an abstract framing and an imposed set of rules. The artefactual field experiment differs from the conventional laboratory experiment in that it has a nonstandard participant pool, but it still uses an abstract framing and an imposed set of rules. The framed field experiment differs from the artefactual field experiment by changing the nature of the commodity, say from monetary incentives to actual goods, or by changing the information participants can use, for example, by doing trading experiments with experienced traders. In the natural field experiment, participants do not know that they are in an experiment and naturally undertake the tasks of the experiment.

The laboratory experiment is a very controlled setting in which the outcomes of decisions can be measured precisely. But it is also an artificial context created by the experimenter, which may affect the way participants make decisions. As Vernon Smith (2010) suggests, it is likely that participants do not make decisions in economic experiments the way economists do, and therefore researchers need to be careful in interpreting the results of such experiments.
This also calls into question what we expect to learn from such experiments. Is it an attempt to verify certain axioms about human behavior? If so, Smith suggests the prospects are slim. If we are happy to identify relationships between patterns of human behavior and potential biological, social, technological, and other contextual determinants of that behavior, then experiments provide fertile ground. When we relax experimental constraints and allow for different participant populations and more natural tasks, a better understanding of the background of the participants vis-à-vis the context of the experimental setting becomes very important for the interpretation of the results. By comparing the outcomes of experiments across such characteristics of the participants, we can begin to get at the complex interactions between participants and contextual variables in commons dilemmas. As such, experiments are increasingly combined with other methods to generate better information regarding the micro-situational variables that affect decision making and the broader context in which the experimental task occurs.

Fig. 1. Based on behavioral theory, cooperation in commons dilemmas depends on individual learning and norms, as well as on micro-situational variables and the broader context. Based on Poteete et al. (2010).

Recently, we have seen an increasing use of artefactual and framed field experiments in combination with conventional laboratory experiments. We are not aware of natural field experiments on common pool resource dilemmas. Before we discuss the challenges of combining different types of experiments, we present some lessons learned from the early years of commons experiments.

3. Common Pool Resource Experiments

In a typical experiment, the experimenter creates an environment in which a number of participants make decisions in a controlled setting.
The rules (institutional arrangements) of the experiment define the payoff structure, the information participants have, and who belongs to which group. Participants voluntarily consent to take part in an experiment prior to its initiation. They receive instructions on the possible actions about which they can make decisions and the possible outcomes that depend on the decisions of all participants in the experiment. Decisions are made in private by writing information on a paper form or entering it on a computer. Salient incentives are provided in terms of monetary returns, or other relevant rewards, depending on the decisions made.[2]

Influential CPR experiments were performed by Ostrom et al. (1994),[3] who started with a static, baseline situation that is as simple as could be specified without losing crucial aspects of the problems that real resource harvesters face. A quadratic production function was used for the resource itself: the payoff that one participant could obtain was similar to the theoretical function specified by Gordon (1954) for bionomic equilibrium. Much earlier experiments on cooperation and the voluntary provision of public goods had been conducted over the previous decades in the lab (Ledyard, 1995), but these were based on a pure public good problem, whereas the CPR experiments mentioned here incorporate the non-linearity and the rivalry or subtractability issues that are crucial to commons problems.

The experiments were formulated in the following way. The initial resource endowment of each participant consisted of a given set of tokens that the participant allocates between two markets: Market 1, which had a fixed return; and Market 2, which functioned as a common-pool resource that was non-excludable and rival, and had a return determined in part by the actions of the other participants in the experiment. Each participant could choose to invest a portion of their endowment in the common-pool resource Market 2 (e.g., invest time in fishing), and the remaining portion was then invested in Market 1. The participants received aggregated information on the decisions of others.

Participants from student participant pools in baseline experiments substantially overinvested in Market 2, as predicted by theory. In a repeated game, at the aggregate level, the groups approach the Nash equilibrium or even apply harvest efforts greater than the Nash equilibrium. When participants are allowed to talk about the experiment face-to-face in a non-binding setting of open talk (cheap talk), the harvesting effort declines toward the cooperative equilibrium (see Ostrom and Walker, 1991; Sally, 1995; Balliet, 2010; Ahn et al., this volume). These findings hold even if there is heterogeneity among the participants in their initial endowments (Hackett et al., 1994).

In experiments where participants were allowed to reduce the earnings of others at a cost to themselves (costly punishment), Ostrom et al. (1992) found that participants use costly punishment but, as would be expected, this leads to lower net average returns. When groups could choose whether to use costly punishment or not, earnings increased when costly punishment was chosen, but the number of actual punishment events was low. Cardenas (2000a) used a hand-run variation of the original design of the CPR experiments, changed the choice variable in the instructions to the number of months a year spent extracting from the common-pool resource (firewood), and ran the experiments in rural villages in Colombia with actual users of local forests. Although the basic results of Ostrom et al. (1994) were replicated, the results were more variable. For example, Cardenas found that "social distance and group inequality based on the economic wealth of the people in the group seemed to constrain the effectiveness of communication for this same sample of groups" (Cardenas, 2003). Cardenas et al. (2000) found that in these same experiments, where the optimal rule was imposed and modestly enforced, performance was lower than in experiments where participants were allowed to have face-to-face communication. The phenomenon that people cooperate less under imposed regulation than when the same regulation is chosen by the group is called the crowding-out effect of voluntary behavior. This design was later tested by increasing the probability of being caught by the regulator and adding the possibility of voting on the enforcement of monitoring and sanctioning by an external regulator. This led to the finding that stricter enforcement could produce results similar to those under self-governance through face-to-face communication (Rodríguez-Sickert et al., 2008). Vollan (2008) conducted a framed field experiment in Namibia and South Africa and found that the crowding-out effect depended on three factors: how controlling versus supportive the external intervention was, the level of trust within a social group, and the level of self-determination within the group.[4] Other cases of crowding-out are reported by Barr (2001), Velez et al. (2010), and Lopez et al. (forthcoming). In all these papers there is a complementarity between different types of community enforcement systems, which can reinforce one another. However, Castillo and Saysel (2005) report experimental results with fishermen on a Colombian Caribbean island where external regulation triggered better cooperation levels compared to the baseline case without external regulation.

A more recent development is the focus on ecological dynamics. Traditional experiments use abstract resource dilemmas without dynamics and space; in fact, in each round participants experience the same commons dilemma. Based on insights from dynamic decision making (Brehmer, 1992), new experiments have been developed that explicitly include the dynamics of ecological systems (Janssen et al., 2010).
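The static two-market baseline described earlier in this section can be made concrete with a short numerical sketch. The parameter values below (group size, endowment, Market 1 return, and the coefficients of the quadratic production function) are illustrative assumptions, not the actual values used by Ostrom et al. (1994); the point is only to show why self-interested play predicts substantial overinvestment in the common pool relative to the group optimum.

```python
# Sketch of the two-market CPR design. All parameter values are
# illustrative assumptions, not those of the original experiments.

N = 8   # group size (assumed)
E = 20  # token endowment per participant (assumed)
W = 5   # fixed per-token return in Market 1 (assumed)

def market2_output(total_x):
    """Quadratic, Gordon-style production function for the common pool."""
    return 23 * total_x - 0.25 * total_x ** 2

def payoff(my_x, others_x):
    """Market 1 return on tokens kept out of the pool, plus a share of
    Market 2 output proportional to one's own investment (rival,
    non-excludable)."""
    total = my_x + others_x
    share = my_x / total * market2_output(total) if total else 0.0
    return W * (E - my_x) + share

def best_reply(others_x):
    """Whole-token investment that maximizes own payoff given the others'."""
    return max(range(E + 1), key=lambda x: payoff(x, others_x))

# Symmetric Nash candidates: investments that are a best reply to themselves.
nash = [x for x in range(E + 1) if best_reply((N - 1) * x) == x]

# Symmetric group optimum: maximize total group earnings instead.
social = max(range(E + 1), key=lambda x: N * payoff(x, (N - 1) * x))

print("Symmetric Nash investment per person:", nash)
print("Group-optimal investment per person:", social)
```

With these assumed numbers, the unique symmetric equilibrium is 8 tokens per person, roughly double the group-optimal 4 to 5 tokens, mirroring the overinvestment in Market 2 observed in the baseline experiments.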
The experiments of Janssen et al. (2010) show that, once temporal and spatial dynamics are included, costly punishment has no positive effect unless it is combined with communication, in contrast to earlier common pool resource and public good experiments (Ostrom et al., 1994; Fehr and Gächter, 2000). Field experiments with more relevant ecological dynamics provide mixed results (Cardenas et al., forthcoming). On the one hand, we find that participants make decisions that reflect their experience with the actual resource (Castillo et al., this volume). On the other hand, ecological dynamics can provide incentive structures that lead to similar results with students and rural villagers (Janssen et al., this volume).

Overall, these experiments on CPRs have shown that many predictions of the conventional theory of collective action do not hold: more cooperation occurs than predicted, "cheap talk" increases cooperation, and participants engage in sanctioning free riders (e.g., Ostrom et al., 1992). Experiments have also established that there is motivational heterogeneity in investment and contribution decisions as well as in sanctioning decisions.

[2] The experiments discussed in this paper typically involve a sample of participants, use no deception, and provide monetary incentives. Commons experiments are also an area of investigation in social psychology, but those studies use deception: typically, participants are told they are in a group experiment while they are actually experiencing a prescribed scenario, and they often do not derive rewards based on their actions. To avoid confusion, we restrict our discussion to actual group experiments.

[3] For resource experiments in psychology, see Dawes (1980).

[4] Vollan (2008) tested the influence of penalties vs. rewards, where the reward was framed as a supportive drought relief scheme, but only the penalty led to the crowding-out effect. Participants could vote for their preferred rule (penalty, reward, and communication).
The crowding-out effect for the penalty occurred only when 2 out of 5 people voted for the penalty and when the game was played in the region where trust was about 50% higher (Namibia vs. Namaqualand). In Prediger et al. (this issue), the same cross-country setup is further explored to highlight how underlying norms and ecological characteristics influence experimental results.

4. Measuring Micro-Situational Variables and the Broader Context Using Experimental Methods

4.1. Surveys and Individual Level Attributes

The actions of participants in experimental situations do not directly reveal personal characteristics or motivations. To avoid interpretations based on subjective assumptions, instruments are needed that can uncover the determinants and processes that lead to observed actions. Econometric analysis of observed behavior to measure social influence has analytical problems (Durlauf, 2002). The most common method of gathering data on an individual's attributes and beliefs is simply to ask them. Survey data are commonly used in social science research, the methodology is vigorously discussed (e.g., Hutchinson, 2004), and there is now a large literature regarding how to design and test questionnaires (e.g., Saris and Gallhofer, 2007; Sudman et al., 2004). Within common-pool resource experiments, one can (1) ask the participants about factors that might affect their decisions, such as socio-demographic information, experience with similar tasks, trust in others, perceived social pressure, a feeling of 'group identity', and social status; (2) ask the participants to articulate their goals or the rules they follow or expect the others to follow in the experiment; (3) ask participants about their expectations concerning how other participants will act, how the resource will develop, or what the consequences of their decisions might look like; or (4) let the participants explain and evaluate their decisions.
Such information helps researchers to better understand why experimental participants decide or act the way they are observed to act.

Questionnaire items are constructed starting with the answers, i.e., the information that is to be gathered. Different types of answers are common. (1) Open-ended questions allow any answer and minimize reactivity. However, answering open-ended questions is time-consuming, and the information provided is often unsatisfying due to a lack of guidance; thus, open-ended questions are typically avoided and used only for explorative purposes. (2) While, in open-ended questions, the categories of the answers are determined after the data are gathered, in multiple-choice questions these categories are defined beforehand. Thus, answering the questions and analyzing the results are both easier, but designing such questionnaires is more difficult, since all relevant response categories must be covered and the response options must give good guidance without biasing the data. (3) Numeric responses and psychometric scales are the most common form for items in psychological research. Scales allow the participants to mark a value between two extremes (e.g., 'do not trust at all' and 'trust completely'). Thus, for example, it can be assessed how much a person trusts others instead of only whether or not he or she trusts others. (4) More complex response formats allow the gathering of comprehensive data structures, such as the strategies used to tackle a problem. However, such items are laborious to design, answer, and analyze, and they require extensive introductions. For methodological reasons (tests of reliability, reduction of noise and biases, etc.), the same information should be gathered with a number of items.

In experimental research, asking participants questions might also be criticized on three counts: (1) the quality of the data so gathered, (2) biasing effects of surveys on the experiment and vice versa, and (3) practical issues.
The first issue has been investigated quite thoroughly (see, e.g., Krosnick, 1999). Many studies have shown possible design flaws (e.g., suggestive questions or answering options) or certain response tendencies (e.g., a preference for positive answers). The biggest challenge is to get the respondent to answer the same question the researcher has in mind without inducing a particular answer to that question. People often do not answer questions literally but, rather, try to fill the inferred knowledge gap of the inquirer, i.e., try to guess what the questioner really wants to know. This inference is made based on the question, the answering options, other questionnaire items, and any other information available in the situation. If the respondent cannot determine what information is being asked for, or does not have an opinion on or knowledge of the subject in question, the quality of the data gathered will obviously be poor. Also, when sensitive subjects are touched upon, responses might be biased towards socially desirable answers. Although such problems limit what information can be gathered using questionnaires, a well-designed questionnaire allows the gathering of fairly accurate data, and countless studies have generated valuable results from surveys.

The second issue, biasing effects of surveys on experiments and vice versa, has not, to our knowledge, been investigated explicitly. Research on priming and framing (e.g., Higgins, 1996) suggests that effects on actions should appear only if the questions are asked shortly (seconds or minutes) before the decisions are made. For example, questions on economic considerations and beliefs might lead to more 'rational' decisions. Causality may flow in the other direction as well: actions will have strong effects on answers if the consequences of the decisions are known. In this case, answers will be post-hoc interpretations that might be quite different from the actual states and processes that determined the decisions.
These considerations suggest that the best time to ask questions is right after decisions have been made but before their consequences are known to the participants. Both of the above-mentioned effects can be controlled for by carefully designing the questionnaires and the experiment.

Finally, we turn to practical issues: designing questionnaires and integrating them into experimental designs requires considerable effort in order to avoid deleterious effects of the above-mentioned problems on data quality. Many additional tests are necessary to ensure that the experimental procedures and questionnaire instruments work well together (e.g., Presser et al., 2004). Beyond the efforts of scientists, questionnaires require additional effort on the part of the participants: experiments get much longer, which can be taxing, and answering questionnaires is sometimes perceived as boring or difficult. On the other hand, well-designed questionnaires can support the participants and spice up an experiment by introducing a form of communication. To conclude, gathering data on individual-level attributes via questionnaires allows for a better understanding of the determinants that lead to observed decisions and actions. Even though designing questionnaires and integrating them into experimental designs requires significant care and resources, a better understanding of the motivational and cognitive processes behind observed behaviors is well worth the effort.

4.2. Measuring Social Context in Experimental Settings

The social setting is often neglected or considered irrelevant in laboratory experiments because they are generally designed to control for and reproduce social settings exactly across repetitions of experimental treatments. It is also likely that the variation in social context across undergraduate students at most universities is sufficiently low as to not impede statistical exploration of experimental data.
Field experiments are an exception to this rule, as they are often designed to understand behaviors within or across social contexts (Henrich et al., 2006; Marlowe et al., 2008; Benedikt et al., 2008). Field experiments designed with built-in cross-site comparisons appear to provide the best guidance on how to study and measure social context. However, studies that situate the field experiment within a specific social context also provide important guidance regarding how to study these issues.

Culture, an extremely complex multi-dimensional concept, is one of the most commonly studied aspects of broader context in economic experiments. The most prevalent class of such studies compares across cultural, national, or ethnic groups. Whitt and Wilson's (2007) comparison of Bosniaks, Croats, and Serbs in Bosnia-Herzegovina and Takahashi et al.'s (2008) study in China, Japan, and Taiwan are examples of such work. A drawback of this approach is that we often cannot determine what aspect of culture drives different experimental outcomes. More sophisticated approaches actually measure cultural traits within and across groups. An example of this type of research is Oosterbeek et al.'s (2004) meta-analysis of Hofstede's (1991) and Inglehart's (2000) cultural classification systems across 75 cases, which found that, in ultimatum games (UG), proposers with greater respect for authority make lower offers. An approach called "cultural framing" (Cronk, 2007) compares the results of classic economic experiments with those explicitly framed as a salient cultural institution. In her work with Ju/'hoan Bushmen, Wiessner (2009) compared the results of dictator and ultimatum games with real-life sharing behavior. In a study of Kenyan herders, Lesorogol (2007) compared the results of a classic dictator game with those of a dictator game modeled on a local meat-sharing institution.
Both Wiessner and Lesorogol found that behavior in the classic dictator game differed substantially from behavior in real life. However, Lesorogol's findings also demonstrate that local norms and individual demographics were associated with game behavior when the dictator game was contextualized in a local institution.

Another way of examining social context is via language and discourse. Following the finding that conversation increases cooperation (Balliet, 2010), researchers have developed increasingly complex and innovative ways of studying communication in economic experiments. Beyond the manipulation of communication under experimental conditions, some scholars examine free-flowing text provided by study participants using qualitative methods (Pavitt et al., 2005; Janssen, 2010). In a different vein of research, scholars have become interested in understanding how linguistic knowledge provides group members access to different sets of indigenous knowledge and norms, which then shape their behavior in economic experiments. For example, Henrich et al. (2004) collected data on indigenous language use in their study of fifteen small-scale societies. In a variation on this approach, Gurven (2004a,b) examined competence in a national language (Spanish) via interviewer-administered tests of written and spoken competence in experiments with the multilingual Tsimane of Bolivia.

A third set of methods for studying social context focuses on social interactions and relations. One simple approach categorizes societies according to their kinship organization (Gneezy et al., 2009), residence patterns (e.g., Henrich et al., 2004), or the density of kinship ties (Barr, 2004). At the individual level, researchers have also surveyed experimental participants regarding their household structure or the local presence of kin (e.g., Macfarlan and Quinlan, 2008).
Additionally, experimentalists have used observational methods such as time blocks or focal follows (e.g., Hill and Gurven, 2004) and social network studies based on recall interviews (e.g., D'Exelle, 2008; Attanasio et al., 2009) to collect data on social interactions at the individual and community level. This work on social interactions and relations has provided a rich set of insights, including Croson and Gneezy's (2009) review of gender differences in experiments, which shows that women are reliably more risk averse, less competitive, and more sensitive to social cues than men. Because collecting interactional data is generally very costly (for observations) or unreliable (for interviews), researchers have sought other ways to capture data on social connectedness. Another set of techniques examines social capital using structured protocols to survey experiment participants. Different protocols focus on aspects of social capital such as trust (Danielson and Holm, 2007), trustworthiness (Wilson et al., 2009), fairness (Karlan, 2005), and past trusting behavior (Glaeser et al., 2000). Studying social capital in this way has shown that trust attitudes are related to some aspects of game behavior (Glaeser et al., 2000). Bouma et al. (2008), for example, measure individual and village trust levels and correlate these with caste heterogeneity at the village scale. They find that participants who do not depend on agriculture trust other participants less.

A fourth and final approach to characterizing social context involves political and economic data. At the group level, experimentalists have characterized population size (Marlowe et al., 2008) and settlement size (Henrich et al., 2004) based on published or self-collected census data. An alternative, used by Henrich et al. (2004), is a priori classification of groups' political complexity, market integration, or payoff to cooperation.
At lower levels of analysis, scholars have collected survey data on individuals' village affiliation (Gurven et al., 2008) and pile sort data on political coalitions (Patton, 2004). To date, few scholars have sought to use experimental economics with migratory, refugee, or resettled populations (but see Fong and Luttmer, 2009 for a study with refugees from Hurricane Katrina). In a handful of cases, scholars have relied on simple techniques, such as categorizing communities as sedentary/nomadic (e.g., McElreath, 2004) or resettled/non-resettled (Barr, 2004), to study population-level mobility. Future studies may utilize more sophisticated survey and qualitative techniques to study these mobile populations. Another important set of measures collects data characterizing local economies and individuals' participation in them. Henrich et al. (2004) demonstrated the importance of studying payoffs to cooperation and market integration for explaining prosociality in small-scale societies. At the community level, this may involve ranking societies based on their reliance on cooperative production (Alvard, 2004) or measuring the distance between villages and their nearest marketplace (Gurven, 2004a). At the individual level, a number of scholars have also studied market integration based on self-reports of wage labor participation (Ensminger, 2004), income earned from cash-cropping (Tracer, 2004), or number of visits to markets (Gurven, 2004a,b).
For the particular case of commons users, Cardenas (2000b) found a strong correlation between participants' actual experience with resource extraction and their capacity to solve the commons dilemma in the experiment.

The work discussed in this section illustrates a number of efforts to measure social context so that we can investigate its effect on experimental outcomes. Most of this research has used ultimatum and dictator games to explore basic questions about human prosociality and was not designed to understand commons dilemmas. Nevertheless, these studies demonstrate the value of studying the relationship between social context and cooperative behavior. We believe these approaches have the potential to help address the challenge posed in the introduction: to reconcile the conflict between simple theory and the complexity of empirical examples.

4.3. Tools for Measuring Broader Social–Ecological Context

Participants in field experiments make decisions that provide insight into particular aspects of human behavior that are difficult to obtain from classic ethnographic tools and surveys. In essence, participants are placed in a relatively simple, simulated action situation (Ostrom, 2005) in which there is a higher degree of control over several variables than in the system in which they live. For an assessment of broader social–ecological context, one can combine field experiments with other methodologies. Classical sources of information very commonly used alongside experiments are socio-economic surveys. The information gathered with surveys helps researchers understand how individual characteristics affect behavior during the experiment. We consider here two specific methods: participatory rural appraisal (PRA) and role-playing games (RPGs).

Participatory rural appraisal (PRA) tools were originally designed to assist local development processes and have been modified to also address research questions (Chambers, 1994). Cárdenas et al.
(2003) and Lopez (2010) have used them in combination with field experiments in order to understand broader context and micro-situational variables (Ostrom, 2007; Poteete et al., 2010). The authors explained that after running all the experimental sessions in locations