[29] There is a scenario in which a private actor might develop AI in secret from the government, but this becomes less likely as government surveillance capabilities improve. An example of the game of Stag Hunt can be illustrated by neighbours with a large hedge that forms the boundary between their properties: the hedge is shared, so both parties are responsible for maintaining it. In each of these models, the payoffs can be most simply described as the anticipated benefit from developing AI minus the anticipated harm from developing AI. Together, these elements in the arms control literature suggest that there may be potential for states as untrusting, rational actors existing in a state of international anarchy to coordinate on AI development in order to reduce future potential global harms. It would be much better for each hunter, acting individually, to give up the total autonomy and minimal risk of hunting alone, which brings only the small reward of the hare. In times of stress, individual unicellular protists will aggregate to form one large body. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. I refer to this as the AI Coordination Problem. These differences create four distinct models of scenarios we can expect to occur: Prisoner's Dilemma, Deadlock, Chicken, and Stag Hunt. This book offers an introduction to realism, liberalism, and economic structuralism as major traditions in the field, their historical evolution, and some of the theories they have given birth to.
But what is even more interesting (even despairing) is that when the situation is more localized, among a smaller network of acquainted people, most players still choose to hunt the hare rather than work together to hunt the stag. In a case with a random group of people, most would choose not to trust strangers with their success. One example is the coordination of slime molds. This democratic peace proposition not only challenges the validity of other political systems (i.e., fascism, communism, authoritarianism, totalitarianism), but also the prevailing realist account of international relations, which emphasises balance-of-power calculations and common strategic interests in order to explain the peace and stability that characterises relations between liberal democracies. Formally, a stag hunt is a game with two pure-strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. However, a hare is seen by all hunters moving along the path. We see this in the media, as prominent news sources highlight with greater frequency new developments and social impacts of AI, with some experts heralding it as "the new electricity."[10] In the business realm, investments in AI companies are soaring. On the other hand, real-life examples of poorly designed compensation structures that create organizational inefficiencies and hinder success are not uncommon. Using game theory as a way of modeling strategically motivated decisions has direct implications for understanding basic international relations issues. But for the argument to be effective against a fool, he must believe that the others with whom he interacts are not "Always Defect" fools. Nations are able to communicate with each other freely, something that is forbidden in the traditional PD game.
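The distinction between the payoff-dominant and risk-dominant equilibria can be made concrete with a small numerical sketch. The payoff numbers below are illustrative assumptions, not values taken from the text:

```python
# Sketch of risk dominance vs payoff dominance in a Stag Hunt.
# Illustrative payoffs: hunting stag together pays 4, hunting stag
# alone pays 0, and hunting hare always pays 3 regardless of the other.

def expected_payoff(my_action, p_stag, payoff):
    """Expected payoff of my_action if the opponent hunts stag with prob p_stag."""
    return p_stag * payoff[(my_action, 'stag')] + (1 - p_stag) * payoff[(my_action, 'hare')]

payoff = {('stag', 'stag'): 4, ('stag', 'hare'): 0,
          ('hare', 'stag'): 3, ('hare', 'hare'): 3}

# Payoff dominance: (stag, stag) gives both players their highest payoff.
# Risk dominance: under maximal uncertainty about the opponent (a 50/50
# belief), hare is the safer best response.
print(expected_payoff('stag', 0.5, payoff))  # 2.0
print(expected_payoff('hare', 0.5, payoff))  # 3.0
```

With these assumed numbers, mutual stag hunting is payoff dominant, while hare hunting is risk dominant: it yields the higher expected payoff when each player is unsure what the other will do.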
The coincident timing of high-profile talks with a leaked report that President Trump seeks to reduce troop levels by half has already triggered a political frenzy in Kabul. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria:[2] one where both players cooperate, and one where both players defect. In the current Afghan context, the role of the U.S. military is not that of third-party peacekeeper, required to guarantee the peace in disinterested terms; it has the arguably less burdensome job of sticking around as one of several self-interested hunters, all of whom must stay in the game or risk its collapse. Continuous coordination through negotiation in a Prisoner's Dilemma is somewhat promising, although a cooperating actor runs the risk of a rival defecting if there is not an effective way to ensure and enforce cooperation in an AI Cooperation Regime. For example, if the players could flip a coin before choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing ballet in the event of heads and prize fight in the event of tails. Therefore, if it is likely that both actors perceive themselves to be in a state of Prisoner's Dilemma when deciding whether to agree on AI, strategic resources should be especially allocated to addressing this vulnerability. This distribution variable is expressed in the model as d, where differing effects of distribution are expressed for Actors A and B as dA and dB respectively.[54] If all the hunters work together, they can kill the stag and all eat. On the face of it, it seems that the players can then "agree" to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c.
The area of international relations theory that is most characterized by overt metaphorical imagery is that of game theory. Although the imagery of game theory would suggest that the games were outgrowths of metaphorical thinking, the origins of game theory are actually to be found in mathematics. [13] Tesla Inc., Autopilot, https://www.tesla.com/autopilot. Published by the Lawfare Institute. The Stag Hunt represents an example of compensation structure in theory. Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of both the likelihood that the actor themselves will develop a harmful AI times that harm, as well as the expected harm of their opponent developing a harmful AI. Like the hunters in the woods, Afghanistan's political elites have a great deal, at least theoretically, to gain from sticking together. For Rousseau, in his famous parable of the stag hunt, war is inevitable because of the security dilemma and the lack of trust between states. The closest approximation of this in international relations is universal treaties, like the Kyoto Protocol environmental treaty. A major terrorist attack launched from Afghanistan would represent a kind of equal-opportunity disaster and should make a commitment to establishing and preserving a capable state of ultimate value to all involved. In international relations, examples of Chicken have included the Cuban Missile Crisis and the concept of Mutually Assured Destruction in nuclear arms development. Throughout history, armed force has been a ubiquitous characteristic of the relations between independent polities, be they tribes, cities, nation-states or empires.
might complicate coordination efforts. Actor A's preference order: DC > CC > CD > DD. Actor B's preference order: CD > CC > DC > DD. She argues that states are no longer A common example of the Prisoner's Dilemma in IR is trade agreements. Every country operates selfishly in the international order. Here, both actors demonstrate a high degree of optimism in both their own and their opponent's ability to develop a beneficial AI, while this likelihood would be only slightly greater under a cooperation regime. [26] Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, "Transcendence looks at the implications of artificial intelligence but are we taking AI seriously enough?" The Independent, May 1, 2014, https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html. Within these levels of analysis, there are different theories that could be considered. Although the development of AI at present has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations in one's own security, perhaps through bolstering one's own AI development program. Payoff matrix for simulated Stag Hunt. Most events in IR are not mutually beneficial, like in the Battle of the Sexes. [53] A full list of the variables outlined in this theory can be found in Appendix A. [6] Aumann proposed: "Let us now change the scenario by permitting pre-play communication." It comes with colossal opportunities, but also threats that are difficult to predict.
We have recently seen an increase in media acknowledgement of the benefits of artificial intelligence (AI), as well as of the negative social implications that can arise from its development. For instance, if the expected punishment is 2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction. [8] If truly present, a racing dynamic[9] between these two actors is a cause for alarm and should inspire strategies to develop an AI Coordination Regime between them. They are the only body responsible for their own protection. The remainder of this section looks at these payoffs and the variables that determine them in more detail.[53] While each actor's greatest preference is to defect while their opponent cooperates, the prospect of both actors defecting is less desirable than both actors cooperating. If all the hunters work together, they can kill the stag and all eat. As will hold for the following tables, the most preferred outcome is indicated with a 4, and the least preferred outcome is indicated with a 1. Actor A's preference order: DC > CC > DD > CD. Actor B's preference order: CD > CC > DD > DC. Both games are games of cooperation, but in the Stag Hunt there is hope you can get to the "good" outcome. Advanced AI technologies have the potential to provide transformative social and economic benefits, like preventing deaths in auto collisions,[17] drastically improving healthcare,[18] reducing poverty through economic bounty,[19] and potentially even finding solutions to some of our most menacing problems, like climate change.[20] This allows for coordination, and enables players to move from the strategy with the lowest combined payoff (both cheat) to the strategy with the highest combined payoff (both cooperate). Each player must choose an action without knowing the choice of the other. Posted June 3, 2008 by Presh Talwalkar.
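The claim that an expected punishment of 2 can turn a Prisoner's Dilemma into a Stag Hunt can be checked with a short sketch. The payoff numbers, and the rule that the penalty falls on a player who defects against a cooperating partner, are illustrative assumptions:

```python
# Sketch: an expected punishment for defecting on a cooperator can turn
# a Prisoner's Dilemma into a Stag Hunt. Payoffs are illustrative.

COOPERATE, DEFECT = 0, 1

# payoffs[(i, j)] = (row player's payoff, column player's payoff)
prisoners_dilemma = {(0, 0): (2, 2), (0, 1): (0, 3),
                     (1, 0): (3, 0), (1, 1): (1, 1)}

def punish(payoffs, penalty):
    """Subtract `penalty` from any player who defects while the other cooperates."""
    out = {}
    for (i, j), (u, v) in payoffs.items():
        if i == DEFECT and j == COOPERATE:
            u -= penalty
        if j == DEFECT and i == COOPERATE:
            v -= penalty
        out[(i, j)] = (u, v)
    return out

stag_hunt = punish(prisoners_dilemma, 2)
print(stag_hunt)
# {(0, 0): (2, 2), (0, 1): (0, 1), (1, 0): (1, 0), (1, 1): (1, 1)}
# Mutual cooperation (2, 2) and mutual defection (1, 1) are now both
# stable: neither player gains by unilaterally switching, which is the
# signature of a Stag Hunt.
```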
In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect. David Hume provides a series of examples that are stag hunts. Based on the values that each actor assigns to their payoff variables, we can expect different coordination models (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) to arise. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. In the event that both actors are in a Stag Hunt, all efforts should be made to pursue negotiations and persuade rivals of peaceful intent before the window of opportunity closes. [38] Michael D. Intriligator & Dagobert L. Brito, "Formal Models of Arms Races," Journal of Peace Science 2, 1 (1976): 77-88. [47] George W. Downs, David M. Rocke, & Randolph M. Siverson, "Arms Races and Cooperation," World Politics 38, 1 (1985): 118-146. [43] Edward Moore Geist, "It's already too late to stop the AI arms race - We must manage it instead," Bulletin of the Atomic Scientists 72, 5 (2016): 318-321. Here, values are measured in utility. As a result, there is no conflict between self-interest and mutual benefit, and the dominant strategy of both actors would be to defect. Why do trade agreements even exist? Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea.
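The contrast between the equilibria of the two games can be verified mechanically. A minimal sketch that enumerates the pure-strategy Nash equilibria of a 2x2 game, using assumed illustrative payoffs:

```python
# Enumerate the pure-strategy Nash equilibria of a 2x2 game.
# Payoff numbers are illustrative assumptions, not taken from the text.

def pure_nash(payoffs):
    """payoffs[(i, j)] = (row payoff, column payoff) for actions i, j in {0, 1}."""
    equilibria = []
    for i in (0, 1):
        for j in (0, 1):
            # A profile is an equilibrium if neither player gains by deviating.
            row_ok = payoffs[(i, j)][0] >= payoffs[(1 - i, j)][0]
            col_ok = payoffs[(i, j)][1] >= payoffs[(i, 1 - j)][1]
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# Actions: 0 = cooperate (hunt stag), 1 = defect (hunt hare)
stag_hunt = {(0, 0): (4, 4), (0, 1): (0, 3),
             (1, 0): (3, 0), (1, 1): (3, 3)}
prisoners_dilemma = {(0, 0): (3, 3), (0, 1): (0, 4),
                     (1, 0): (4, 0), (1, 1): (1, 1)}

print(pure_nash(stag_hunt))          # two equilibria: (0, 0) and (1, 1)
print(pure_nash(prisoners_dilemma))  # one equilibrium: (1, 1)
```

The Stag Hunt yields both mutual cooperation and mutual defection as equilibria, while the Prisoner's Dilemma yields only mutual defection, matching the distinction drawn in the text.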
and other examples to illustrate how game theory might be applied to understand the Taiwan Strait issue. In this example, each player has a dominant strategy. Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see Skyrms 2004). In biology, many circumstances that have been described as a prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated. The familiar Prisoner's Dilemma is a model that involves two actors who must decide whether to cooperate in an agreement or not. Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players that the result of the game depends on Aumann's assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that How a given player will behave in a given game, thus, depends on the culture within which the game takes place."[8] This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies. [27] An academic survey showed that AI experts and researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years. A sudden drop in current troop levels will likely trigger a series of responses that undermine the very peace and stability the United States hopes to achieve. These are a few basic examples of modeling IR problems with game theory.
However, if one doesn't, the other wastes his effort. [7] Aumann concluded that in this game "agreement has no effect, one way or the other." He found various theories being proposed, suggesting a levels-of-analysis problem. Some observers argue that a precipitous American retreat will leave the country, and even the capital, Kabul, vulnerable to an emboldened, undeterred Taliban, given the limited capabilities of Afghanistan's national security forces. You note that the temptation to cheat creates tension between the two trading nations, but you could phrase this much more strongly: theoretically, both players SHOULD cheat. The Stag Hunt is a story that became a game. Some have accused rivals of being Taliban sympathizers while others have condemned their counterparts for being against peace. Moreover, each actor is more confident in their own capability to develop a beneficial AI than in their opponent's. [11] In our everyday lives, we store AI technology as voice assistants in our pockets[12] and as vehicle controllers in our garages. In this game, "each player always prefers the other to play c, no matter what he himself plays." For example, suppose we have a prisoner's dilemma as pictured in Figure 3. [36] Colin S. Gray, "The Arms Race Phenomenon," World Politics 24, 1 (1971): 39-79, at 41. This section defines suggested payoff variables that impact the theory and simulates the theory for each representative model based on a series of hypothetical scenarios. Civilians and civilian objects are protected under the laws of armed conflict by the principle of distinction. For example, if the two international actors cooperate with one another, we can expect some reduction in individual payoffs if both sides agree to distribute benefits amongst each other. As the infighting continues, the impulse to forego the elusive stag in favor of the rabbits on offer will grow stronger by the day.
arguing that territorial conflicts in international relations follow a strategic logic, but one defined by cost-benefit calculations. If both choose to row, they can successfully move the boat. Finally, I discuss the relevant policy and strategic implications this theory has on achieving international AI coordination, and assess the strengths and limitations of the theory in practice. The game is a prototype of the social contract. If participation is not universal, they cannot surround the stag and it escapes, leaving everyone who hunted the stag hungry. Interestingly enough, the Stag Hunt can be used to describe social contracts within society, with the contract being the agreement to hunt the stag, that is, to pursue mutual benefit. The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. In this section, I briefly argue that state governments are likely to eventually control the development of AI (either through direct development or intense monitoring and regulation of state-friendly companies),[29] and that the current landscape suggests two states in particular, China and the United States, are most likely to reach development of an advanced AI system first. Absolute gains looks at the total effect of the decision, while relative gains looks only at individual gains relative to others. This iterated structure creates an incentive to cooperate; cheating in the first round significantly reduces the likelihood that the other player will trust one enough to attempt to cooperate in the future.
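The iterated incentive can be illustrated with a small simulation against a hypothetical Tit-for-Tat partner. The strategy choice and the payoff numbers are assumptions for illustration, not taken from the text:

```python
# Minimal sketch of the iterated incentive: against a Tit-for-Tat
# partner, cheating in round one poisons future cooperation and lowers
# the cheater's total payoff. Payoff numbers are illustrative.

# My payoff given (my move, their move); 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def play(my_moves, rounds=10):
    """Play `rounds` rounds vs Tit-for-Tat; my_moves maps round -> move (default cooperate)."""
    total, their_move = 0, 0  # Tit-for-Tat opens by cooperating
    for r in range(rounds):
        mine = my_moves.get(r, 0)
        total += PAYOFF[(mine, their_move)]
        their_move = mine  # Tit-for-Tat copies my last move
    return total

print(play({}))                          # always cooperate: 30
print(play({0: 1}))                      # one early defection: 29
print(play({r: 1 for r in range(10)}))   # always defect: 14
```

Even a single first-round defection leaves the cheater worse off over ten rounds than steady cooperation, which is the intuition behind the sentence above.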
Uneven distribution of AI's benefits could exacerbate inequality, resulting in higher concentrations of wealth within and among nations. [58] Downs et al., "Arms Races and Cooperation," 143-144. [25] For more on the existential risks of superintelligence, see Bostrom (2014) at Chapters 6 and 8. They suggest that new weapons (or systems) that derive from radical technological breakthroughs can render a first strike more attractive, whereas basic arms buildups provide deterrence against a first strike. The stag is the reason the United States and its NATO allies grew concerned with Afghanistan's internal political affairs in the first place, and they remain invested in preventing networks, such as al-Qaeda and the Islamic State, from employing Afghan territory as a base. What are some good examples of coordination games? The Nash equilibrium for each nation is to cheat, so it would be irrational to do otherwise. Collision is disastrous for everyone, but swerving is a loss too. Here, I also examine the main agenda of this paper: to better understand and begin outlining strategies to maximize coordination in AI development, despite relevant actors' varying and uncertain preferences for coordination. In their paper, the authors suggest that "both the game that underlies an arms race and the conditions under which it is conducted can dramatically affect the success of any strategy designed to end it."[58] [52] Stefan Persson, "Deadlocks in International Negotiation," Cooperation and Conflict 29, 3 (1994): 211-244. Payoff variables for simulated Stag Hunt, Table 14. I thank my advisor, Professor Allan Dafoe, for his time, support, and introduction to this paper's subject matter in his Global Politics of AI seminar.
The stag may not pass every day, but the hunters are reasonably certain that it will come. Depending on the payoff structures, we can anticipate different likelihoods of, and preferences for, cooperation or defection on the part of the actors. Prisoner's Dilemma, Stag Hunt, Battle of the Sexes, and Chicken are discussed in our text. As stated, which model (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) you think accurately depicts the AI Coordination Problem (and which resulting policies should be pursued) depends on the structure of payoffs to cooperating or defecting. The intuition behind this is laid out in Armstrong et al.'s "Racing to the precipice: a model of artificial intelligence development."[55] The authors suggest each actor would be incentivized to skimp on safety precautions in order to attain the transformative and powerful benefits of AI before an opponent. [32] Notably, discussions among U.S. policymakers to block Chinese investment in U.S. AI companies also began at this time.[33] Another example is the hunting practices of orcas (known as carousel feeding). This is visually represented in Table 3, with each actor's preference order explicitly outlined.
[51] An analogous scenario in the context of the AI Coordination Problem could arise if both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain. [15] Sam Byford, "AlphaGo beats Lee Se-dol again to take Google DeepMind Challenge series," The Verge, March 12, 2016, https://www.theverge.com/2016/3/12/11210650/alphago-deepmind-go-match-3-result. As a result, security-seeking actions such as increasing technical capacity (even if this is not explicitly offensive, which is particularly relevant to the wide-encompassing capacity of AI) can be perceived as threatening and met with exacerbated race dynamics.